Welcome to the meteoLCD blog

September 28, 2008

blog-2018

This blog started on the 28th September 2008 as a quick communication tool for exchanging information, comments, thoughts, admonitions, compliments etc. related to the questions of climate change, global warming, energy etc.

RAF in Diekirch, LU, March 2022

March 13, 2022

Many times in the past I have posted on the relationship between the thickness of the ozone column (the TOC, measured in Dobson Units, DU) and the effective UVB irradiance at ground level. We (Mike Zimmer and myself) measure the former with our Microtops instruments and the latter with our UVB Biometer (in minimal erythemal dose power, unit MED/h, or the derived UV index UVI, where 1 MED/h corresponds to 25/9 UVI): all weather and solar conditions being the same, a higher TOC means a lower UVB (or UVI) intensity. See references [1] to [5] or just type RAF into the SEARCH box on the upper right of the screen to find all previous publications in this blog.
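For readers who want to switch between the two units, here is a trivial helper (the 25/9 factor is the one quoted above; the function names are mine):

```python
def med_per_h_to_uvi(med_per_h: float) -> float:
    """Convert erythemal UVB power in MED/h to the UV index (1 MED/h = 25/9 UVI)."""
    return med_per_h * 25.0 / 9.0


def uvi_to_med_per_h(uvi: float) -> float:
    """Inverse conversion: UV index back to MED/h."""
    return uvi * 9.0 / 25.0


print(round(med_per_h_to_uvi(0.6), 2))   # 0.6 MED/h ~ 1.67 UVI
```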

From the 4th to the 10th March 2022 we had very fine weather in Diekirch, with cloudless blue skies:

This plot clearly shows that during the 4 days from the 5th to the 8th March the UVI (or the UVBeff given in the green boxes) is increasing and the TOC given in the red boxes is decreasing; the opposite happens during the next 3 days. The measurements have all been done at nearly the same hour (around 12:00 UTC), the solar position given by the solar zenith angle (SZA) is practically the same (around 55°), and the solar irradiance measured by our pyranometers also does not vary much (maximum between 570 and 621 W/m2 on a horizontal surface).

In the next plots I have added the data from the 11th March (same blue sky). First a simple plot of UVI versus TOC:

If we take the first and sixth points as a “worst case”, we see that a decline of about 60 DU increases the UVI from 1.70 to 2.50. A simple rule of three to keep in mind would be “100 DU less means 1.3 more UVI”.
This is effectively nothing to be afraid of!
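A quick back-of-the-envelope check of this rule of three, using the two worst-case points quoted above (a small Python sketch, nothing more):

```python
# Worst-case pair from the March 2022 series: a ~60 DU decline of the TOC
# raises the UVI from 1.70 to 2.50 (values quoted in the text above).
toc_decline_du = 60.0
uvi_before, uvi_after = 1.70, 2.50

uvi_per_100_du = (uvi_after - uvi_before) / toc_decline_du * 100.0
print(f"about {uvi_per_100_du:.1f} UVI more per 100 DU less")   # ~1.3
```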

The RAF

The radiation amplification factor is not defined by a linear relationship, but by a logarithmic one: if we plot the negative natural logarithm of UVBeff (= -ln(UVBeff)) against the natural logarithm of the TOC (= ln(TOC)), the slope of the trend-line represents the RAF:

From the equation of the trend-line we see that the slope is 1.178, so the RAF computed from these 8 days is about 1.18.
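For those who prefer to redo this at home, here is a minimal sketch of the slope computation (the (TOC, UVBeff) pairs below are placeholders, not our measured March values; numpy is assumed to be available):

```python
import numpy as np

# Placeholder daily pairs (TOC in DU, UVBeff in MED/h); replace with the 8 measured days.
toc = np.array([330.0, 310.0, 295.0, 280.0, 290.0, 305.0, 320.0, 270.0])
uvb = np.array([0.60, 0.66, 0.72, 0.78, 0.74, 0.68, 0.63, 0.82])

# The RAF is the slope of -ln(UVBeff) plotted against ln(TOC).
slope, intercept = np.polyfit(np.log(toc), -np.log(uvb), 1)
print(f"RAF ~ {slope:.2f}")
```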

The following table gives all the meteoLCD calculations, and as a comparison the values given by Jay Herman in his paper from 2010 [6]:

It’s really surprising how close our March values are.

Please read also the interesting paper of Schwarz et al., who use anomaly values for the study of the relationship between TOC and UVI [ref. 7]:

There is some discussion whether the TOC readings should be replaced by TOCslant = TOC/cos(SZA), as the RAF varies with the SZA (see the Herman paper); for these March 2022 readings, this would change the RAF to 1.43.

We will retain the non-slanted TOC in our comparisons (past and future).

References:

[1] MASSEN, Francis: Radiation Amplification Factor in April 2021 (link)

[2] MASSEN, Francis, 2018: UVI and Total Ozone (link)

[3] MASSEN, Francis, 2016: First Radiation Amplification Factor for 2016 (link)

[4] MASSEN, Francis, 2014 : RAF revisited (link)

[5] MASSEN, Francis, 2013: Computing the Radiation Amplification Factor RAF using a sudden
dip in Total Ozone Column measured at Diekirch, Luxembourg (link)

[6] HERMAN, Jay, 2010: Use of an improved radiation amplification factor to estimate
the effect of total ozone changes on action spectrum weighted irradiances and an instrument response function. Journal of Geophysical Research, vol.115, 2010 (link)

[7] Schwarz et al.: Influence of low ozone episodes on erythemal UVB-radiation in Austria. Theor. Appl. Climatol. DOI 10.1007/s007004-017-2170-1 (link)

——————————————-

A small EXCEL file with all March 2022 parameters can be found here.

The Montreal Protocol, a success or not?

September 27, 2021
15 Sep 2021 ozone hole (Nasa ozone watch)

An important addendum was added on the 5th October 2021.
Please read it at the end of the main text!
(missing fig. 19 added 05 Nov 2021)

———————————————-

It’s springtime in the Antarctic, and the well known ozone hole has again reached a spectacular dimension, extending well over the border of the continent; the picture shows the 23 million km2 hole on the 15th September 2021, covering more than the whole Antarctic continent (see https://ozonewatch.gsfc.nasa.gov/statistics/annual_data.html).

The Montreal Protocol, signed in 1987 (and now counting 196 signatory nations), was meant to avoid this anthropogenic thinning of the ozone layer by phasing out the CFCs (chlorofluorocarbons), the gases found to be the big ozone destroyers.

In this blog, I will see if this aim has been reached. I will start with some comments on ozone, its measurement and the scientists who received the Nobel prize in 1995 for their research in atmospheric chemistry, and especially ozone creation and destruction.

1. Ozone, a delicate gas

Ozone (O3) is a very fragile gas, easily destroyed (it splits into O2 + O, where this single oxygen atom then reacts readily with other substances); for instance, blowing a stream of ozone gas over a rough wall destroys a large percentage of the O3 molecules. This means that if ozone is sucked into a measuring instrument, the tubing must be very smooth, so that normally Teflon is used.

Ozone is created in nature essentially by two means: by lightning (electrical discharge) and by UV radiation. As it is very reactive, it is considered a pollutant at ground level and at higher concentrations, harming biological tissues (like lungs and bronchi) and also plants and crops. Its sterilizing properties are exploited by UV lamps installed, for instance, in surgical environments and in water treatment plants. It has a very distinctive smell, and often can be smelled after a strong thunderstorm.

2. Ozone in the atmosphere

Ozone is ubiquitous, found throughout the whole atmosphere, from ground level up to its top (TOA = Top Of the Atmosphere); the concentrations vary strongly with altitude. The ground level concentrations, for instance, are about 40 times lower than the peak concentration between 30 and 40 km:

Fig. 1. Smoothed concentration curve of ozone; the peak at approx. 32 km altitude corresponds to a concentration of 8 ppm (8 O3 molecules per one million air molecules). Picture: NASA Ozone Watch.

The total amount of atmospheric O3 in a vertical column is normally given in Dobson Units (DU), named after Gordon Dobson (University of Oxford), who in the 1920s built the first instrument to measure the ozone column. Imagine a cylinder with a base of 1 m2 going from ground level to the TOA. Now imagine that all gases except the O3 molecules are removed from this cylinder. Finally, compress the O3 molecules with a piston to the bottom (kept at 0°C) until a pressure of 1013 hPa is reached. The height of the compressed gas would normally be about 3 mm, or 300 hundredths of a mm. One DU = 1/100 mm, so the column would be given as 300 DU, which in fact corresponds to 300*2.687*10^20 = about 8*10^22 ozone molecules. The DU number represents the Total Ozone Column (TOC), and all ground based measuring stations use this unit.
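A small numerical check of this bookkeeping (the factor 2.687*10^20 molecules per m2 and per DU is the one used above; the function name is mine):

```python
DU_TO_MOLECULES_PER_M2 = 2.687e20   # O3 molecules per m2 of column area for 1 DU


def ozone_column_molecules(toc_du: float, area_m2: float = 1.0) -> float:
    """Number of O3 molecules in a vertical column of the given base area for a TOC in DU."""
    return toc_du * DU_TO_MOLECULES_PER_M2 * area_m2


print(f"{ozone_column_molecules(300):.2e}")   # ~8.06e+22, i.e. about 8*10^22 as stated
```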

UV radiation not only creates O3, it is also absorbed by it; the usual TOC instruments measure the absorption at several wavelengths in the UVB region (typically around 300 nm), and possibly some others, to measure for instance the atmospheric water content. The Microtops II instruments in use at meteoLCD for measuring the TOC use the 305.5, 312.5 and 320 nm wavelengths [ref. 1].

The very expensive Brewer Mark I and II instruments of the RMI (Royal Meteorological Institute at Uccle, Belgium) use optical gratings to cover a much larger number of relevant wavelengths, and as such achieve a better resolution.

The TOC varies enormously during very short periods, but over many years the daily averages follow a sine pattern, having in our region a maximum in spring and a minimum in autumn. The sine curve in the following figure represents the daily average at Uccle from 1971 to 2020 [ref. 2]:

Fig. 2. Daily TOC at Uccle (Belgium), 23 Sep 2021. The orange sine-curve is the daily average from 1971 to 2021 [ref.2]

Read also this short paper on the cycle found at meteoLCD [ref.12]

Ozone soundings with balloons or lasers through the atmosphere are made from time to time; the following figure from RMI shows that the real distribution of ozone through the atmosphere is much more variable than suggested by the smooth curve in figure 1:

Fig.3. Ozone sounding at Uccle, 16 April 2012. Note the differences with fig. 1 (see ref.2)

3. The ozone hole

Regular Antarctic TOC measurements started in 1956, in preparation for the International Geophysical Year (1957/58). The next figure (which is a subset of a longer series) shows the minimum Antarctic TOC measurements from 1956 to 1979:

Fig.4. Minimum of Total Ozone Column over Antarctica 1956-1979 [https://ozonewatch.gsfc.nasa.gov/facts/images/halley_toms_ozone.png]

During the 1980s it was found that in September-October (which corresponds to local spring) the TOC reached very low values over the South Pole, extending even over much of the Antarctic continent. The British scientists Farman, Gardiner and Shanklin reported in 1985 exceptionally low TOC values over the Halley and Faraday Antarctic research stations, a drop of about 40%. Farman is usually credited with introducing the name “ozone hole”, but it is possible that the Washington Post first used it in an article. A record minimum of 73 DU was measured in September 1994; for comparison, the lowest value measured by meteoLCD in Diekirch since 1997 was 208.5 DU (18 November 2020). Fig. 5 shows the yearly TOC minima of the Antarctic ozone hole and at Diekirch (Luxembourg):

Fig. 5. Yearly TOC minima

Why should one worry about this ozone hole? The ozone layer in the stratosphere completely absorbs the shortest UV wavelengths, such as UVC (< 280 nm), and a large part of the UVB spectrum (280-320 nm). UVB is important for some biological processes (like the production of vitamin D in humans), but too high an intensity creates havoc with cellular DNA and may lead to skin cancer, for instance. As the Antarctic is virtually population-free and vegetation-free, this is not a problem on that continent; but if the CFC-caused destruction works over the whole planet, there certainly will be an important health risk when the ozone protection screen becomes too thin.

Our measurements at meteoLCD show clearly that a lower TOC increases the biologically effective UVB radiation at ground-level, as documented in the following figure [ref. 11]:

Fig. 6. When TOC diminishes from 381.8 to 265.8 DU, UVBeff increases by ~2 UVI

4. The 1995 Nobel prize in chemistry

The 1995 Nobel prize in chemistry was given to 3 researchers in atmospheric chemistry: Paul Crutzen, Mario Molina and F. Sherwood Rowland. Crutzen's work in the 1970s was on the O3 destruction by naturally occurring nitrogen oxides, but Molina and Rowland found in 1974 that the chlorofluorocarbon-type chemical substances (CFCs, with CFC-11 often called Freon after its trade name from DuPont de Nemours) release very reactive chlorine atoms in the cold upper atmosphere, which act as catalytic destroyers of ozone. One Cl atom may destroy 100000 ozone molecules! The system of chemical reactions involved is rather complicated, and I suggest reading the lecture given by Molina in 1995 during the Nobel prize ceremony [ref. 4].

Fig.7. From the Molina Nobel prize lecture, red border showing the O3 destruction added

This complex chemistry is not only fuelled by human-emitted substances. One very often finds in the literature that CFCs are a fingerprint of anthropogenic emissions, but it seems that the situation is more complicated. According to cfc-geologist [ref. 5], volcanoes also emit these compounds; a second important natural source of ozone depleting substances (ODS) is methyl bromide (CH3Br, also called bromomethane), emitted by algae (but also by humans for soil fumigation, for instance). A list of the many ODS can be found at the EPA website [ref. 6]. This list also contains halons, like those used in fire protection systems.
For a very complete and more recent (2017) discussion of all the relevant equations and their kinetics see the paper by Wohltmann et al. [ref. 7].

5. The Montreal Protocol

This protocol, signed by 46 nations in September 1987, came into effect on the 1st January 1989 and is ratified today by 197 nations. Its aim was to reduce the production of long-lived CFCs by 50% by the year 2000, and finally to phase them out completely. It is universally acclaimed as being the first protocol in global environmental protection. In 2016 an amendment was signed in Kigali to also phase out the HFCs (hydrofluorocarbons), which were the substances used to replace for instance Freon (CFC-11).

The production of CFCs indeed plunged to a very low level, at least according to the data submitted by the nations:

Fig.8. Consumption of ODS until 2014

So considering this planned phase-out, the protocol seems to be a success.

6. Evolution of the Antarctic ozone hole

The next figure pours some cold water on the efficiency of the protocol. It shows the above graph with the yearly maximum ozone hole area superimposed:

Fig. 9. World CFC production and maximum ozone hole area

Even though CFC production has become very low, the area of the ozone hole does not seem to change very much, except for some wiggling around the 1994/96 levels. The next two figures show the maximum ozone hole area from 1990 to 2020 (data from NASA Ozone Watch):

Fig. 10. Maximum annual ozone hole area, all data, with linear regression.

This plot shows that during the 31 years since 1990 the maximum ozone hole area shrinks by less than 10%. The linear regression suggests a shrinking by 0.7 millions km2 per decade. But note the two strange outlier years 2002 and 2019, which really seem suspect. If we omit these two outliers, we have the following picture:

Fig.11. Maximum ozone hole area 1990-2020, without the 2002 and 2019 outliers

Without these two outliers, the maximum ozone hole has shrunk over these 31 years by a very meagre 0.73 million km2, i.e. by about 3.3%. The linear trend corresponds to a nearly insignificant -0.2 million km2 per decade. This very small number shows that the observational result of the Montreal Protocol has been extremely limited until now.

7. Causes of the non-shrinking hole

Let us first show the plot given in figure 4 in its entirety:

Fig.12. Maximum Ozone Hole Measurements since 1956 (TOMS, OMI, OMPS = satellite measurements)

The figure shows that there is a distinctive ongoing thinning from the mid-1970s on; but the picture does not become clearer after 1992, nor after 2000: the measurement data show an important spread from year to year, and more intriguingly, from instrument to instrument.

There could be some very banal causes, which might be suggested by the stratospheric concentration data and by an increase of CFC-11 emissions after 2013 (possibly caused by illegal manufacture in China) [ref 8]:

Fig. 13. CFC-11 yearly emissions (dotted curve) with uptick in 2013 (zoom window) and different models

Another or further reason might be that reversing the thinning takes many decades, possibly more than half a century, as predicted by some scientists [ref. 9] [ref.13]:

Fig. 14. Minimum TOC predicted by TOMCAT model R2000 with different assumptions [ref.13]

But there are also some reasons for hope. Direct measurements of reactive chlorine in the ozone hole (which were nearly impossible in the past) by a new satellite microwave sounder (MLS) have given examples where a smaller yearly DU loss correlates with a somewhat lower chlorine concentration [ref. 10].

Fig.15. Comparison of chlorine concentration and TOC in 2006 and 2011

8. Conclusion

So the least we can conclude is that the picture remains muddled, and that reality is not what is found in the often glowing media reports, which present the Montreal Protocol as a definitive success story. The protocol is probably efficient in the very long run, but it certainly is not a quick-acting fix. It is a very good example that tinkering with atmospheric gases is a complex matter, with the science far from settled, as shown for instance in the 2018 paper by Rolf Müller et al. [ref. 14].

This lesson should also be heard by the CO2 alarmists who expect that rushing into a net-zero carbon policy would have immediate results.


ADDENDUM (05Oct2021):

There is a very important chapter 2 by Douglass et al., published in the 2010 version of “The Scientific Assessment of Ozone Depletion” (link to all assessment reports); the title of this chapter 2 is “Stratospheric Ozone and Surface Ultraviolet Radiation” (link); here is a link to a version with highlights added by me.

Let us start with their fig. 5, which shows the variations of the TOC at two stations not too far away from Diekirch: Hohenpeissenberg and Haute Provence:

Fig. 16. TOC anomalies at Hohenpeissenberg (DE) and Observatoire de Haute Provence (FR)

The measured TOC anomalies w.r. to the 1979-2010 mean (black curve) do not show any visible trend from about 1995 on. The next figure shows the erythemal UVB irradiances during the summer months May to August at Bilthoven (NL) and Uccle (BE): not surprisingly there is no trend at Bilthoven starting 1995, and a possible small positive trend at Uccle:

Fig.16. Daily mean UVB erythemal dose at Bilthoven and Uccle (red curve = observations)

At meteoLCD the corresponding daily dose is decreasing from 2000 to 2010, as shown in figure 17:

Fig.17. The daily erythemal UVB dose is slightly decreasing at meteoLCD, Diekirch (LU)

Douglass also shows a figure with the minimum yearly TOC over the Antarctic core zone: look at the right part of the bottom graph which shows DU values that are practically constant.

Fig. 18. Antarctic hole TOC, raw and adjusted/corrected data

The report notes many times that the ozone changes are not only caused by chemical destruction, but also to a large part by atmospheric dynamics, and that it is not easy to disentangle these two effects.

The last figure 19 on the chemical depletion also shows that from about 1992 to 2005 the few direct observations (black triangles) show a practically constant ozone depletion by chemical effects (and a huge variability among the many models).

References:

Ref 1:
https://meteo.lcd.lu/papers/Comparison_Microtops_Brewer16_2012.pdf

Ref 2:
https://ozone.meteo.be/research-themes/ozone/instrumental-research#graph-ozone-container

Ref 3:
https://meteo.lcd.lu/papers/annual_cycle.pdf

Ref 4:
nobelprize.org/uploads/2018/06/Molina-lecture.pdf

Ref 5:
cfc.geologist-1011.net

Ref 6:
https://www.epa.gov/ozone-layer-protection/ozone-depleting-substances

Ref 7:
https://acp.copernicus.org/articles/17/1/10535/2017/acp-17-10535-2017.pdf

Ref 8:
https://www.pnas.org/content/118/12/e2021528118

Ref 9:
https://www.nature.com/articles/s41467-019-13717-x

Ref 10:
https://earthobservatory.nasa.gov/images/91694/measurements-show-reduction-in-ozone-eating-chemical

Ref 11:
https://meteo.lcd.lu/papers/MASSEN/RAF_from_sudden_TOC_dip.pdf

Ref 12:
https://meteo.lcd.lu/papers/annual_cycle.pdf

Ref 13:
https://www.nature.com/articles/s41467-019-13717-x.pdf

Ref 14:
https://acp.copernicus.org/articles/18/2985/2018/

Ref 15:

https://www.researchgate.net/publication/236964864_Stratospheric_Ozone_and_Surface_Ultraviolet_Radiation

Sun variability and NH temperature

September 3, 2021

There is a new paper published in April 2021 in Research in Astronomy and Astrophysics titled “How much has the Sun influenced Northern Hemisphere temperature trends? An ongoing debate” (link). The authors are R. Connolly, W. Soon and M. Connolly, together with 19 coauthors. All these people are from well-known universities or research facilities, and as such have impressive scientific backgrounds. The paper is quite long, more than 70 pages including a huge 536-item reference list. I recommend a careful reading of this paper, which is the best overview of scientific knowledge regarding the sun-climate question that I know of. What makes this paper unique is that it presents many facets of the problems of TSI variability and of the NH surface temperature series. It honestly states that not all coauthors share the same conclusions, so it clearly is not a cherry-picking paper pushing an activist agenda. As this is such a large and diverse paper, I will just touch on a few aspects and try to give a short summary.

Its main conclusion is that the IPCC's stand on the influence of the sun on global warming is at least open to discussion, and ignores a huge amount of scientific findings that conflict with its anthropocentric view on human-caused climate change.

The problems with knowing TSI

Everybody knows that the sun is the engine that drives Earth's climate, and that the energy output of this big thermonuclear reactor is not constant. Best known are the 11-year Schwabe cycle of total solar irradiance and the 22-year Hale cycle of its magnetic activity. The TSI (irradiance in W/m2 on a surface perpendicular to the solar rays, measured at the TOA, the top of the atmosphere) has really been measured directly and continuously only since the satellite era, starting in 1978 with the NIMBUS 7 satellite and its ERB (Earth Radiation Budget) mission. Previous data are more patchy, coming from soundings with balloons and rockets, or from indirect proxies like sunspots, changes of the solar magnetism measured at ground level or even planetary (astronomical) causes.

43 years of satellite measurements covering about 4 Schwabe cycles should be enough to yield a definitive answer on TSI variability, but this is alas not the case. The satellite instruments degrade with time, and successive satellites have different biases and measurement problems:

The different TSI measurements series from 1978 on (link)

The figure shows that the series differ by about 10 W/m2, so simply stitching these series together is impossible (just to put this number in perspective: the increased radiative forcing caused by the higher atmospheric CO2 concentration from 1750 to 2011 is about 1.82 W/m2, according to the IPCC AR5). The two best-known efforts to get a continuous “homogenized” series are those from the ACRIM (USA) and PMOD (Davos, CH) teams. Both come to different conclusions: according to ACRIM there is a general increase in TSI, whereas PMOD thinks that TSI remains more or less constant. Needless to say, the IPCC adopts the PMOD view, which conforms better to its policy of anthropogenically caused climate warming, and ignores ACRIM.

If one includes the proxy series, as this paper does, there are 16 different TSI reconstructions that may be valid. So the least that can be said is that an honest scientific broker should examine, and not ignore, them all.

High and low variable TSI series

The 11-year cycle is not the only one influencing TSI; there are also many multidecadal/multicentennial/multimillennial cycles which can be found by spectral analysis or traced to astronomical causes, like the Milankovitch cycles. If these longer-cycle variations are considered small w.r. to the Schwabe cycle, the reconstruction is considered a “low variability” one, as opposed to a “high variability” one. The authors compare both types of reconstruction with the changes in the NH surface temperature, and they find that the latter (high variability) series correspond better with the NH temperature changes since 1850.

What NH temperature series to use?

Clearly the vast majority of weather stations are located in the NH. A big problem is that the ongoing urbanization introduces an urban heat island warming bias, which is still visible in many of the homogenized series like those of NASA-GISS. So the authors propose to use only the stations that were and still are rural since about 1850. The difference can be startling, as shown in the next figure, which takes a NH subset of 4 regions (Arctic, Ireland, USA, China):

The warming trend is 0.41 °C/century for the rural-only stations, whereas it is more than double, 0.94 °C/century, for the combined rural+urban stations. Notice also the much greater variability (i.e. lower R2) of the rural-only series!

This makes it clear that the choice of including all stations (with the risk of including an urban warming bias) or only the rural ones (with the handicap of having much fewer stations) will command the outcome of every sun-temperature research.

An example of the solar influence

The next figure is a subset of figure 16 of the paper; it shows how the trend of a linear temperature fit (blue box = fit of temperature w.r. to time) can be compared to that of the solar influence (yellow box = fit of temperature w.r. to TSI) and the anthropogenic greenhouse gas forcings (grey box = fit of temperature w.r. to GHG forcings); the latter fit is done on the residuals left over from the (temperature, TSI) linear fit.

Using rural only stations, and a high variability TSI reconstruction shows that the solar changes could explain 98% of the secular temperature trend (of the NH surface temperatures); using both urban and rural stations, the solar influence is still 57%, i.e. more than the half of the warming can be explained by a solar cause.
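The mechanics of this two-step fit (first temperature against TSI, then the residuals against the GHG forcings) can be sketched as follows; the series below are purely synthetic stand-ins, invented only to show the procedure, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1850, 2019)

# Synthetic stand-ins for a NH temperature anomaly, a TSI reconstruction and a GHG forcing.
tsi = 1360.5 + 0.3 * np.sin(2 * np.pi * (years - 1850) / 11) + 0.002 * (years - 1850)
ghg = 0.01 * (years - 1850)
temp = 0.8 * (tsi - tsi.mean()) + 0.3 * ghg + rng.normal(0.0, 0.1, years.size)

# Step 1: linear fit of temperature against TSI (the "solar" contribution).
a, b = np.polyfit(tsi, temp, 1)
solar_part = a * tsi + b

# Step 2: fit the residuals of step 1 against the GHG forcing (the "anthropogenic" contribution).
residuals = temp - solar_part
c, d = np.polyfit(ghg, residuals, 1)

print(f"solar slope: {a:.2f} K per W/m2, residual GHG slope: {c:.2f} K per unit forcing")
```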

Conclusion

In this short comment I could only touch on some points of the paper. It has many more very interesting chapters, for instance on the temperature reconstructions from sea surface temperatures, glacier lengths, tree rings etc.

What remains is an overall picture of complexity, which is ignored by the IPCC, in the AR5 as well as in the new AR6. The science on the influence of solar variability, be it in the visible or UV spectrum, is far from settled. The IPCC ignores datasets that conflict with its predefined political view. The recent warming is only unusual if calculated from the rural+urban data series, but mainly unexceptional if temperature data are restricted to the rural stations.

Radioactivity and precipitation

July 27, 2021

In the past I have written many times on the observational fact that, due to radon washout, the ambient gamma radiation sometimes shows impressive peaks (see here, here, here, here, here).

In this blog I will show that the graphs of cumulative rainfall and gamma radiation might give a wrong picture, and that using the original time series yields a more correct insight.

Here is what the graphs of cumulative rainfall and gamma radiation of atmospheric air show for the week covering the 24th and 25th July 2021:

These graphs are not faulty, but give a wrong picture: the two rainfall peaks cause two radiation peaks, with the second higher than the first, even if its “cause” (= the precipitation in mm per half-hour) is much smaller. This could be a sign of radon washout during the first peak, and of radioactivity levels which had not yet recovered to their usual background. The third precipitation peak during the 25th July causes only a mild surge of the gamma radiation intensity.

Now let’s zoom on the half-hourly levels of precipitation and gamma radiation:

The picture becomes somewhat clearer: there are 2 precipitation peaks during the 24th July 2021, and the intensity of the second is close to double that of the first (the X scale represents multiples of half-hours, starting at 00:30). The second radiation peak is practically the same as the first: the gamma levels have not sufficiently recovered from the first washout during the approx. 7 hours in between to yield a proportionally higher peak.

The third event during the 25th July is more “smeared out”: the total rain volume falls during ca. 3.5 hours (7 half-hours) and is not concentrated in a single half-hour event. This does not cause a strong radiation increase, even though about 20 hours had passed since the last rainfall peak, a time span probably long enough to compensate for the previous washout. I suggested in one of the previous blogs a recovery period of approx. 1 day.

I always marvel that our “greens” have not yet discovered this natural phenomenon of radiation increase and jumped on a pattern which should give a good scare. The second peak here is about 97 - 85 = 12 nSv/h above the usual background, i.e. about 14% higher. What would Greenpeace say if radioactivity from the Cattenom nuclear facility had increased by this amount?

The declining value of wind and solar electricity

July 23, 2021


link

Many scientists have predicted that the value of wind and PV electricity will decline above a certain level of penetration, where penetration means the percentage of installed wind and solar capacity w.r. to the total installed electricity production capacity (which includes for instance fossil fuel and nuclear systems).

A new outstanding paper by Dev Millstein et al., published in JOULE (a Cell Press open access publication), puts these predictions on solid data foundations. The title is “Solar and wind grid system value in the United States: The effect of transmission congestion, generation profiles and curtailment” (link). The authors analyzed data from 2100 US intermittent renewable electricity producers, and separated the influence of the production profile (e.g. the sun does not shine at night), transmission congestion (e.g. difficulties in transporting excessive solar and wind electricity during favorable periods) and curtailment (i.e. the cost of shutting down solar and wind producers to avoid grid infrastructure problems).

This is a long 28-page paper, well worth reading several times to become familiar with the different technical concepts.

In this blog, I try to condense the essentials in a few lines.

1. By how much do the prices for wind and solar electricity fall?

The short answer is that above 20% of wind/solar penetration, the produced electricity value falls by 30 to 40%.

This is an enormous amount, which may put a barrier to higher wind & solar penetration. This barrier is basically rooted in economic realities, not physics or engineering problems!

2. What is the parameter which has the most influence on value creep?

The authors find that the production profile, i.e. the timing of production over a day or a longer period, is the principal cause of the fall in electricity value:

Look at the highlighted CAISO numbers, which correspond to the situation in California. The solar penetration is large (19%), as is the value fall of 37% (this is the percentage of value decline w.r. to the electricity prices which would have been customary if there were no intermittent wind & solar producers).

For most sites the decline of electricity value follows a logistic curve (= exponential decline at the beginning which stabilizes at a horizontal asymptote). This is not the case for CAISO, where the decline is practically linear (see the yellow double line, highlights by me):

The decline from 0 to 20% penetration is nearly 50%, from about 1.3 to 0.6, which is close to breathtaking!
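To fix ideas, here is a purely illustrative sketch of the two decline shapes discussed above; the coefficients are invented, and only the qualitative shapes (a logistic levelling-off for most sites versus the roughly linear CAISO decline) follow the paper's description:

```python
import numpy as np

def value_factor_logistic(p, v_max=1.3, v_min=0.8, p_mid=0.08, steepness=40.0):
    """Illustrative logistic decline of the electricity value factor with penetration p (0..1)."""
    return v_min + (v_max - v_min) / (1.0 + np.exp(steepness * (p - p_mid)))

def value_factor_linear(p, v_start=1.3, v_at_20pct=0.6):
    """Illustrative linear decline, roughly the CAISO behaviour described above."""
    return v_start + (v_at_20pct - v_start) * p / 0.20

p = np.linspace(0.0, 0.20, 5)
print(np.round(value_factor_logistic(p), 2))   # levels off towards an asymptote
print(np.round(value_factor_linear(p), 2))     # falls steadily from ~1.3 to ~0.6
```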

3. What do we have to expect?

Up to now, most of this decline was cancelled or obscured by the falling prices of wind and solar installations. But many factors suggest that the easy part of lowering the prices of PV panels and wind turbines is behind us. There surely will be some further fall in prices, but not at the level previously seen. The scarcity of raw materials and rare earths, the low number of producing countries and regions, and the increased world-wide demand all point to an end of the spectacular price falls seen during the last years.

So in the absence of a breakthrough in storage technology which could change the production profile (remember: this is the main factor in the fall of electricity value!), some countries will rapidly hit the wall. Sure, politics and overt or hidden subsidies for wind & solar may obscure this value creep, but these will inflate electricity prices above levels that even the most green-inclined citizens are willing to pay. Knowing that their sacrifices will have no measurable influence on the supposedly planet-destroying CO2 levels will certainly be a barrier to the ever-increasing sacrifices in lifestyle that are asked for, and to accepting this value decline of the wind & solar electricity that is supposed to save the planet.

4. Some conclusions of the authors

  • Some models indicate …. that value decline might soon get worse; our empirical values provide little solace on that front
  • Forward-looking models, which have been roughly correct to date, suggest that we will soon enter a regime of accelerating value decline

All this should somehow dampen the naïve and “politically correct” enthusiasm for an exclusively wind and solar driven world!

Radiation Amplification Factor RAF in April 2021

April 29, 2021

We had a period of several cloudless, blue-sky days at the end of April 2021. So it is time to redo a calculation of the Radiation Amplification Factor RAF. In short, we want to see how the variation of the Total Ozone Column (TOC) influences the effective UVB radiation at ground level. I wrote several times on this, and usually we found an RAF of approx. 1.05 to 1.10.

First, here is a graph showing the variation of total solar irradiance (blue curve, unit W/m2) and the effective UVB (red curve, unit mMED/h):

First note that the peak solar irradiance was practically constant; the 24th April was a bit hazy, so it will be left out of the computations. The numbers in the turquoise boxes are the maximum values of the TOC, measured in DU (Dobson Units) with our Microtops II instrument (serial 5375). Let us first plot UVBeff versus the TOC:

Fig. 1. UVBeff versus maximum daily TOC (5 days: 23 and 25 to 28 April 2021)

Clearly the UVBeff values decrease with increasing TOC, as the thicker ozone column filters out more UVB radiation. The empirical relationship is practically linear, and suggests that a dip of 100 DU (a quite substantial thinning of the ozone layer) would cause an increase of effective UVB of about 0.6 MED/h or 1.7 UVI (as 1 MED/h = 25/9 UVI).

The numerically correct definition of the RAF is: UVB = C * TOC**(-RAF), where ** means “to the power of”. Taking the natural logarithm gives ln(UVB) = ln(C) - RAF*ln(TOC), or RAF = [ln(C) - ln(UVB)]/ln(TOC).

If we have many measurement couples of UVB and TOC, it can be shown (see here) that

RAF = [-ln(UVBi/UVB0)]/[ln(TOCi/TOC0)]

where the index i corresponds to the ith measurement couple, and 0 to the couple taken as a reference (usually i=0). This is equivalent to saying that the RAF is the slope of the linear regression line through the scatterplot of -1*ln(UVBi/UVB0) versus ln(TOCi/TOC0).

Here is that plot:

RAF computed from TOC

The slope is 1.0461, so the (erythemal) RAF computed from the 5 blue-sky days is RAF = 1.0461 ~ 1.05.
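A minimal sketch of this regression through the reference-normalized log ratios (the (TOC, UVB) couples below are placeholders, with the first couple taken as the reference; numpy is assumed):

```python
import numpy as np

# Placeholder measurement couples (TOC in DU, UVBeff in MED/h); the first couple is the reference.
toc = np.array([345.0, 330.0, 360.0, 375.0, 390.0])
uvb = np.array([0.72, 0.76, 0.68, 0.64, 0.61])

x = np.log(toc / toc[0])        # ln(TOCi/TOC0)
y = -np.log(uvb / uvb[0])       # -ln(UVBi/UVB0)

raf, _ = np.polyfit(x, y, 1)    # RAF = slope of the regression line through the scatterplot
print(f"RAF ~ {raf:.2f}")
```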

This has to be compared to the value RAF = 1.08 in the referenced paper [ref. 1]. Note the excellent R2 = 0.96 of this linear fit.

There is some discussion whether the TOC should be replaced by TOCslant = TOC/cos(SZA), where SZA is the solar zenith angle. If we do this, the RAF ~ 1.10, close to the previous value; the R2 is somewhat lower with R2 = 0.91. The SZA is practically constant for the 5 days, with SZA ~ 38°.

RAF computed from TOC slant = TOC/cos(SZA)
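The slant correction itself is just a division by the cosine of the solar zenith angle; here is a small sketch (SZA in degrees):

```python
import math

def toc_slant(toc_du: float, sza_deg: float) -> float:
    """Slant ozone column TOC/cos(SZA), with the solar zenith angle SZA given in degrees."""
    return toc_du / math.cos(math.radians(sza_deg))

print(round(toc_slant(350.0, 38.0), 1))   # ~444.2 DU for the ~38 degree SZA of these days
```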

The RAF = 1.10 value is close to what Jay Herman published in JGR, figure 8 [ref. 2] (red lines added):

RAF from Erythemal UVB as a function of SZA

Conclusion

These 5 days of cloudless sky give practically the same results for the RAF as those found during previous investigations. As a very rough rule of thumb one could keep in mind that a dip of 100 DU yields an increase of at most 2 UVI. The following table summarizes the findings of this post and of references 1 to 5:

Table of erythemal RAFs

______________________________________

References:

[1] MASSEN, Francis, 2013: Computing the Radiation Amplification Factor RAF using a sudden
dip in Total Ozone Column measured at Diekirch, Luxembourg (link)

[2] HERMAN, Jay, 2010: Use of an improved radiation amplification factor to estimate
the effect of total ozone changes on action spectrum weighted irradiances and an instrument response function.
Journal of Geophysical Research, vol.115, 2010 (link)

[3] MASSEN, Francis, 2014 : RAF revisited (link)

[4] MASSEN, Francis, 2016: First Radiation Amplification Factor for 2016 (link)

[5] MASSEN, Francis, 2018: UVI and Total Ozone (link)

Greens for Nuclear Energy

April 8, 2021

We are so used to the absolute rejection of everything related to nuclear energy by the Greens we are familiar with, that this new UK movement comes as a bit of a surprise.

Sure, it is their estimation that climate change is an existential threat that underlies their new appreciation of what nuclear, as a carbon-free energy source, can do. I can live with that, even if in my opinion there is no climate emergency (read the Clintel declaration).

The Greens for Nuclear Energy home page has a short video that pushes the need for nuclear energy quite far: not only developing new technologies, but also keeping existing facilities running; this is something that would give the German Greens a heart attack!

With Michael Shellenberger, Bill Gates and other well-known Greens or former Greens (like Patrick Moore) saying clearly that nuclear energy is a must in a realistic energy mix, will the wind turn? And how will our EU Greens adapt? Will they change their opinion, or stick with their image of a movement that only knows how to present a future “to save the planet” made of restrictions in every aspect of life, be it housing, moving, eating or traveling…

You might read this very sober article by Gail H. Marcus in physicsworld (April 2017), “How green is nuclear energy?”, which concludes that “nuclear energy is indeed green, and it offers several other advantages as well. It should, therefore, be considered in this light in decision-making on future energy-supply options”.

_____________________________

added 10-Apr-2021:

Read this comment on the upcoming (and partially leaked) JRC report for the EU Commission, which also says that nuclear energy is sustainable.

Link to the full paper “An Assessment of the Sustainability of Nuclear Power for the EU Taxonomy Consultation 2019”.

Global temperatures from historic documents (1/2)

August 20, 2020

1. Introduction

When we speak of global warming, the following picture is practically omnipresent:

It presents the global temperature anomaly (i.e. the difference between the actual yearly temperature and the 1961-1990 average) as given by the 3 best-known temperature reconstructions of GISS (= NASA), HADCRUT4 (England) and BERKELEY (Berkeley BEST project, USA). These series more or less agree for the last 50 years, but nevertheless show visible differences for the preceding 50 to 70 years. The data used are those from known weather stations, but also from proxies like tree rings, ice cores etc. What is rarely mentioned is that during the late 19th and the early 20th century there were many famous scientists who worked on the same problem: find global mean yearly temperatures according to latitude (the so-called zonal temperatures) and/or find the global yearly isotherms, which were known not to coincide with the latitude circles. Many of these early researchers, like von Hann and von Bezold, were from Germany and published in German. This may explain the poor interest shown in these papers by “modern” researchers.

This situation has some similarities with the reconstructions of global CO2 levels. Here also mostly ice cores or other proxies are used, and the papers from the 19th century scientists who made real CO2 measurements with chemical methods are often belittled. The late Ernst-Georg BECK (a German chemistry and biology teacher) made an outstanding effort to find and evaluate these old measurements, and found that these values were much more variable than told by “consensus” climatology. I wrote with Beck a paper published in 2009 by Springer on how to try to validate these old measurements, of which there were not many, their focus being typically local (link).

2. The KRAMM et al. paper

Gerhard Kramm from Engineering Meteorological Consulting in Fairbanks and his co-authors (Martina Berger and Ralph Dlugi from the German Arbeitsgruppe Atmosphärische Prozesse, Munich, and Nicole Mölders, University of Alaska Fairbanks) have published in Natural Science, 2020 (link) a very important paper on how researchers of the old times calculated zonal, hemispheric and global annual temperatures. The very long title is “Meridional Distributions of Historical Zonal Averages and Their Use to Quantify the Global and Spheroidal Mean Near-Surface Temperature of the Terrestrial Atmosphere”, and this 45-page paper is a blockbuster. It contains its fair share of mathematics, and I had to read it several times to understand the finer points. I first stumbled on this paper through a discussion at the NoTricksZone blog (link), and you might well first read the comment by Kenneth Richard.

The 4 authors all seem to be German speakers, which explains why many citations are given in their original language. They tell us that very famous scientists of the second half of the 19th and the start of the 20th century worked to find global average temperatures. One must remember that in 1887, for instance, 459 land-based meteorological stations (outside the USA and the polar regions) and about 600 vessels gathered meteorological data; the first Meteorological Congress, held in 1873 in Vienna, had standardized the equipment (for instance dry and moist thermometers). The best known authors of big climate treatises written in the 1852-1913 time span are von Hann (Julius Ferdinand von Hann, 1839-1921) and von Bezold (Wilhelm von Bezold, 1837-1907), who referred to numerous other authors.

The Kramm paper tries to validate the results given by these authors, using papers from other authors and mathematical calculations.

Just to show how good the results of these authors were, look at the following extract of a graph from von Hann (1887) showing the zonal isotherms over the whole globe. I have added the text boxes:

The yellow dot shows the approximate location of Diekirch, slightly south of the 10°C isotherm. The yellow box shows that the mean temperature measured by meteoLCD was 10.6°C over the period 1998-2019, very close to the von Hann isotherm of 1887.

The authors write that “obviously the results of well-known climate researchers …. are notably higher than those derived from Hadcrut4, Berkeley and Nasa GISS”. So the question is: have these institutions (willingly or not) lowered the temperatures of the past, and so amplified the global warming?

(to be continued)

Colle Gnifetti ice core… a new European temperature reconstruction

August 5, 2020

CG_drilling

(picture from the PhD thesis of Licciulli, 2018)

When we want to know the temperatures of, say, the last 1000 years, we must use proxies like changes in the O18 isotope, changes in leaf stomata or tree rings (for instance in the famous bristlecone pines) etc. The best known proxies (besides tree rings) are ice cores, most coming from drillings in Antarctic or Greenland glaciers. Ice cores from European glaciers are few, so the paper by Bohleber et al. on ice cores from the Monte Rosa region is remarkable. The title is “Temperature and mineral dust variability recorded in two low-accumulation Alpine ice cores over the last millennium” (link), and it was published

graphic_cp_cover_homepage

in the “Climate of the Past” journal of the European Geosciences Union (EGU) in January 2018. I became aware of this paper through an excellent comment by Willis Eschenbach at WUWT (24-Jul-2020); I will come back to this later.

What makes the paper of Bohleber so special is that the location of the 2 ice cores is the Colle Gnifetti saddle (4450 m asl) in the Monte Rosa region (on the border between Italy and Switzerland), so really in our neighborhood when compared to Antarctica and Greenland. This glacier is not very thick (only about 140 m), as the prevailing winds remove a good part of the yearly snowfall. But the ca. 65 m deep drillings allow going back more than 1000 years. The researchers studied the dust layers found in the ice cores, especially the abundance of Ca2+ ions. These dust layers are very thin, so quite sophisticated laser technologies were used to investigate them. They found a good agreement between the observed temperature trends and those of the Ca2+ dust layers (mostly dust from the Sahara: warmer temperatures increase the advection of dust-rich air masses).

The IPCC’s view of the last 1000 years of temperatures

In its first assessment report (FAR) of 1990, the IPCC gave a graph from Hubert Lamb showing (without any clear temperature scale) the existence of a warmer period (MWP) around year 1000 and the later distinctive cooling of the Little Ice Age (LIA):

Medieval_Warm_FAR

With the infamous Hockey-Stick paper by Mann and coauthors (1999), shown in the 3rd assessment report (TAR), the MWP disappeared, or was ignored (link to original paper):

hockeystick_1999

For political or activist reasons, this faulty graph from a junior PhD became a poster child in the global warming debate, and remained so for long years, despite the fact that it was shown to be wrong due to an incorrect application of statistical methods (PCA, principal component analysis) and an inadequate choice of tree rings.

Today there are many reconstructions of the NH temperatures, and the figure below (blue arrow and highlights added by me) shows how different they are, and that at least one (Christiansen and Ljungqvist, 2012) gives hugely changing temperatures, with a very pronounced MWP nearly as warm as today (link):

many_reconstructions_NH

Now, here follows the reconstruction by Bohleber et al., based, as seen above, on the study of dust layers, a factor that was not considered in the hockey-stick paper.

CG_temp_reconstruction

I have added the text boxes and the arrows to the original graph. First one should note that the temperatures are anomalies (= deviations) from the average temperature at CG during 1860-2000. The horizontal time axis is reversed, i.e. the most recent period is at the left, and the “calibration” period is the interval 1860 to 2000. The red curve shows an independent reconstruction by Luterbacher of mean European summer temperature anomalies. The black curve gives (if I understand this correctly) these same anomalies as measured by meteorological instruments over Europe (Western Europe?).

Willis Eschenbach made a linear regression with the BEST NH temperature reconstructions, and adjusted the Ca2+ curve using this function (y = 1.6*x – 0.2). The visual correlation for the last 250 years is excellent (except a divergence for the last ~25 years):

Eschenbach_BEST_NH

Applying the same regression to the whole CG data and smoothing with a 15-year filter makes the important details still more visible:

Eschenbach_CG_linearadj
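For readers who want to try this kind of adjustment on their own proxy series, here is a minimal sketch (the Ca2+ values are made up; only the linear rescaling y = 1.6*x - 0.2 and the 15-year smoothing follow Eschenbach's description):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(800, 2001)
ca_proxy = rng.normal(0.3, 0.1, years.size)    # made-up stand-in for the annual Ca2+ record

adjusted = 1.6 * ca_proxy - 0.2                # Eschenbach's linear adjustment y = 1.6*x - 0.2
window = np.ones(15) / 15.0
smoothed = np.convolve(adjusted, window, mode="same")   # simple 15-year smoothing filter
```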

We clearly see two warm periods: one around 850 AD and the other corresponding to the MWP, today called the MCA (Medieval Climate Anomaly), because it seems inconvenient to the “consensus climatology” that the low-CO2 medieval times were nearly as warm as today. So Bohleber et al. write in their conclusion: “the reconstruction reproduces the overall features of the LIA … and reveal an exceptional medieval period around AD 1100-1200”.

What also clearly can be seen in all these graphs is that the climate never was stable for very long times: the normal situation is a changing climate!

 

 

 

Lindzen’s new paper: An oversimplified picture

June 23, 2020

MIT Prof. Richard Lindzen (retired) has published (19 May 2020) a very interesting new paper in The European Physical Journal Plus (Springer) titled “An oversimplified picture of the climate behavior based on a single process can lead to distorted conclusions“. The full article is paywalled (a shockingly high 45€ for 10 pages!), but it is easy to find an accessible version by googling.

The article is written in very easy terms, at least concerning the first 3 chapters and the conclusion in chapter 5. I read it carefully several times and will try to summarize as best I can.

  1. Introduction

In the introduction Lindzen recalls that greenhouse warming is a recent element in the climate literature, and even if known and mentioned, played a minor role in climate science before 1980. He also repeats a mostly ignored argument, i.e. that even if there is some global warming now (from whatever cause), the claim that this must be catastrophic should be seen with suspicion.

2. Chapter 2

Chapter 2 is titled “The climate system” and in these less than 1.5 pages Lindzen excels in clarity. He writes nothing that could be controversial, but many of these facts are constantly ignored in the media: the uneven solar heating between the equator and the poles drives the motions of heat in the air and the oceans; in the latter there are changes on timescales ranging from years (e.g. El Niño, PDO and AMO) to millennia, and these changes are present even if the composition of the atmosphere were unchanging.

The surface of the oceans is never in equilibrium with space, and the complicated air flow over geographic landscapes causes regional variations in climate (not well described by climate models). Not CO2, but water vapor and clouds are the two most important greenhouse substances; doubling the atmospheric CO2 content would increase the energy budget by less than 2%.

He writes that the political/scientific consensus is that changes in global radiative forcing are the unique cause of changes of global temperatures, and these changes are predominantly caused by increasing CO2 emissions. This simplified picture of one global cause (global radiative forcing) and one global effect (global temperature) to describe the climate is mistaken.

It is water vapor that essentially blocks outgoing IR radiation, which causes the surface and the adjacent air to warm and so triggers convection. Convection and radiative processes result in temperature decreasing with height, up to a level where there is so little water vapor left that radiation escapes unhindered to space. It is at this altitude that the radiative equilibrium between incoming solar energy and outgoing IR energy happens, and the temperature there is 255 K. As the temperature has decreased with height, level zero (i.e. the surface) must be warmer. Adding other greenhouse gases (like CO2) increases the equilibrium height, and as a consequence the temperature of the surface. The radiative budget is constantly changed by other factors, such as varying cloud cover and height, snow, ocean circulations etc. These changes have an effect that is comparable to that of doubling the CO2 content of the atmosphere. And most important: even if the solar forcing (i.e. the engine driving the climate) were constant, the climate would still vary, as the system has autonomous variability!

The problem of the “consensus” climatology (IPCC and politics) is that they ignore the many variables at work and simplify the perturbation of energy budget of a complex system to the perturbing effect of a single variable (CO2).

3. History

In this short chapter Lindzen enumerates the many scientists who, up into the eighties, disagreed with the consensus view. But between 1988 and 1994, climate funding in the USA for example increased by a factor of 15! And all the “new” climate scientists understood very well that the reason for this extraordinary increase in funding was the global warming alarm, which became a self-fulfilling cause.

Let me here repeat as an aside what the German physicist Dr. Gerd Weber wrote in 1992 in his book “Der Treibhauseffekt”:

 

4. Chapter 4

This is the longest chapter in Lindzen's paper, and also one that demands a few readings to understand it correctly. Lindzen wants to show that the thermal difference between the equatorial and polar regions has an influence on global temperature, and that this difference is independent of the CO2 content of the atmosphere. He recalls the Milankovitch cycles and the important message that variations in arctic (summer) insolation cause the fluctuations in ice cover. The arctic inversion (i.e. temperature increasing with height) makes the equator-to-pole temperature difference greater at the surface than it is at the polar tropopause (~6 km). So one does not have to introduce a mysterious “polar amplification” (as does the IPCC) for this temperature differential.

Lindzen establishes a very simple formula which gives the change in global temperature as the sum of the changes of the tropical temperature (mostly caused by greenhouse radiative forcing) and that of the changes of the equator-to-pole temperature difference (which is independent of the greenhouse effect). This means that even in the absence of greenhouse gas forcings (what is the aim of the actual climate policies) there will be changes in global temperature.
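In symbols, the decomposition described above can be sketched as ΔT_global ≈ ΔT_tropics + c · Δ(T_equator − T_pole), where c is a weighting coefficient (this is my shorthand, not Lindzen's exact notation): the first term is essentially driven by the greenhouse radiative forcing, while the second, the change of the equator-to-pole temperature difference, is independent of it.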

 

5. Conclusion

The conclusion is that the basic premise of the conventional (consensus or not) climate picture that all changes in global (mean) temperature are due to radiative forcing is mistaken.

 

My personal remarks:

Will this paper by one of the most important atmospheric scientists be read by the people deciding on extremely costly and radical climate policies? Will it be mentioned in the media?

I doubt it. The climate train, like the “Snowpiercer” in the Netflix series, is launched full steam ahead, and political decisions become more and more the result of quasi-religious emotions rather than of scientific reasoning. But reality and physics are stubborn… and so, just as the Snowpiercer is vulnerable to avalanches and rockfall, the actual simplistic climate view could well change during the next decades, long before the predicted climate catastrophe of 2100 occurs.