Welcome to the meteoLCD blog

September 28, 2008


This blog started on 28 September 2008 as a quick communication tool for exchanging information, comments, thoughts, admonitions, compliments etc. related to http://meteo.lcd.lu, the Global Warming Sceptic pages and environmental policy subjects.

Rain and radioactivity: rain-out or wash-out?

August 5, 2016

In this post I will continue to comment on some of the problems in explaining what really happens (“radiationally” speaking) during short-lived precipitation events. Some very good papers will be briefly discussed, and I thank Marcel Severijnen for his really important comments.

1. A phenomenon known for a long time

That atmospheric radiation peaks during precipitation events has been known for many years. As an example, Thomas Thomson from the Swedish Meteorological and Hydrological Institute, Stockholm, published in 1962 a paper titled “Some observations of variations of the natural background radiation” (link). He reports a 5% to 20% increase in background radiation correlated with precipitation. The following figure 2 from this paper (with my additions) documents this (note that 1 micro-roentgen per hour corresponds to roughly 10 nSv/h):

Thomas_fig2

From the duration of the gamma-ray increase he concludes that the cause must be short-lived decay products of radon (with half-lives T1/2 of less than 1 hour), and he suggests a specific activity in the rain water between 10^-12 and 10^-10 curie/g, which corresponds to 37 to 3700 Bq/liter (Bq/kg). He also notes that this activity decreases with precipitation rate, i.e. the higher the precipitation rate (in mm/h for instance), the lower the specific activity in the rain water. This is a sensible conclusion: when rain falls through a slab of air loaded with radioactive elements, a fast fall-through will scavenge fewer particles (per rain drop, for instance) than a slower one.
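For readers who want to check the unit conversion, here is a small sketch using only the standard curie-to-becquerel constant (nothing taken from Thomson's paper):

```python
# Quick unit check of Thomson's specific activities.
# 1 curie = 3.7e10 becquerel; 1 litre of rain water is about 1 kg.
CURIE_TO_BQ = 3.7e10

for ci_per_gram in (1e-12, 1e-10):
    bq_per_kg = ci_per_gram * CURIE_TO_BQ * 1000.0   # gram -> kg
    print(f"{ci_per_gram:.0e} Ci/g -> {bq_per_kg:.0f} Bq/kg (approx. Bq/litre)")
# output: 37 Bq/kg and 3700 Bq/kg
```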

Let us conclude with figure 7, where Thomson estimates by linear regression the radiation dose delivered by precipitation events: using SI units (yellow boxes) he finds, depending on the month, a number between 50 and 100 nSv per mm of rainfall. Here in Luxembourg we receive about 800 mm per year, which would correspond to an additional dose of 40 to 80 µSv per year if this dose were completely absorbed by a body (for comparison, the usual yearly natural dose is about 3 mSv).

Thomas_fig7
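A one-line check of this back-of-the-envelope estimate, using only the numbers quoted above (not the original figure):

```python
# Annual dose from rain-borne radon progeny, using the values quoted above:
# about 800 mm of rain per year and 50 to 100 nSv per mm of rainfall.
annual_rain_mm = 800.0
for nsv_per_mm in (50.0, 100.0):
    dose_microsv = nsv_per_mm * annual_rain_mm / 1000.0   # nSv -> microSv
    print(f"{nsv_per_mm:.0f} nSv/mm -> {dose_microsv:.0f} microSv per year")
# output: 40 and 80 microSv per year
```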

2. The Livesay paper

Marcel mentioned in his comment to the previous post the paper by R.J. Livesay et al. from the Oak Ridge National Laboratory, published in 2014 and titled “Rain-Induced Increase in Background Radiation Detected by Radiation Portal Monitors” (link). This is a really interesting and very readable paper (with some not too difficult maths) that studies the increase in gamma counts registered by radiation portal monitors installed at many places (for instance in Luxembourg at the entry gates for trucks or train wagons delivering scrap metal to our steel foundries). Using gamma-ray spectroscopy the authors clearly show that the elements causing the radiation peaks during rainfall are mainly the two radon daughters Pb214 and Bi214, with half-lives of 27 and 20 minutes (the Rn222 half-life is 3.8 days) and gamma energies of approx. 352 and 609 keV, and that these are deposited on the ground:

Livesay_fig5

The paper has a very nice record of a short rain pulse and the subsequent decay of the gamma counts (fig. 8 a&b, with my additions):

Livesay_fig8ab
Clearly the time back to normal is about 3 hours, which corresponds to approx. 6 half-lives of the short-lived radon progeny (after 6 half-lives only about 1.5% of the initial excess remains).

The authors give an easy-to-understand mathematical model, which reproduces the observations nearly perfectly:

Livesay_fig8d
Concerning the specific activity in the rainwater, the authors do not give any number, but write that this parameter varies heavily with the rain event and the location, and needs further study.

3. The Fujinami paper: rain-out or wash-out?

I will conclude the review of the scientific literature with a paper published by Naoto Fujinami in the Japanese Journal of Health Physics in 2009, titled “Study of Radon Progeny Distribution and Radiation Dose-Rate in the Atmosphere” (link). This paper addresses the question posed in the title of this post: does the increase in gamma activity come from rain-out or from wash-out? Rain-out means that the radon progeny attaches to the rain drops inside the rain-delivering cloud; wash-out means that the main scavenging happens during the fall of the rain drops through the air volume below the cloud. The paper is a bit confusing, as a first (too) rapid reading gives the wrong impression that the author concludes that gamma activity decreases with rainfall. In fact this is not the case: the author writes in chapter III (“Scavenging of radon progeny by precipitation in the atmosphere”) that “The radon progeny in precipitation produce an increase in absorbed dose rate in air at ground level”, a remark that in my opinion should have been made much earlier in the paper. Fujinami reports an inverse relationship between the concentration of radon progeny in rain water and the precipitation rate, an observation already reported by T. Thomson in his 1966 paper. What makes the Fujinami paper interesting is that he tries to demonstrate that rain-out, and not wash-out, is the main cause of the additional gamma activity.
In a first part of his study, the author shows a plot of the radon progeny concentration in surface air (inferred from the measured gamma activity) together with precipitation, and he concludes that periods of precipitation lower the radon progeny concentration in surface air:

 

fujinami_fig5
Now I have some problems with this plot (I added the colored lines and text box): the time resolution seems very coarse, about 8 hours, as shown by the blue lines. So this figure says nothing about the immediate changes in ambient radioactivity in surface air after a rain fall, but possibly more about the coarse general evolution. And even here I have problems of understanding: the first two events (and possibly the last) clearly show a drop in radioactivity prior to the rain fall. So this point seriously dampens my initial enthusiasm for the Fujinami paper.

Let me show you a similar plot made today (05 Aug 2016) at Diekirch and covering the last 7 days:

rain_radioactivity_Diekirch_Aug2016
First one should note the visible daily rhythm of the gamma activity, with higher radioactivity in the morning hours when the usual inversion blocks the mixing of ground-level air with higher levels of the atmosphere; the same situation shows up for instance in our CO2 data. The blue arrows point to the main rain events: the first two arrows show that the radiation peak follows the rain pulse; the next three rain events do not have any visible influence, as the ambient radioactivity has come down to the normal background, and the clouds and the air above ground seem depleted of additional radon progeny available for scavenging. So in my opinion, the lower levels following a rain event look more like a return to normal than like an effect caused by the precipitation.

All other hints supporting the Fujinami conclusion that the observed phenomenon is a rain-out happening in the clouds, and not a wash-out during the free fall of the rain drops through the air below the clouds, rely on the same Japanese data series from Maizuru (1977-1985). I am absolutely not convinced by the author's argumentation, and would appreciate stronger logic and more fine-grained data before accepting his conclusion.

4. Do the Diekirch data show the typical time evolution of Pb214 and Bi214 shown in the Livesay paper?

As the half-lives of the radon progeny Pb214 and Bi214 are 27 and 20 minutes, inspecting the decreasing radioactivity after a very short rain pulse should show a return to normal levels after 2 to 3 hours, as reported in the Livesay paper. A problem with our Diekirch data is the rather coarse time step of 30 minutes: the measurements stored in the datalogger at times xx:00 and xx:30 are the average over the preceding 30 minutes (radioactivity) or the total precipitation during that interval. Our Davis Vantage Pro Plus backup weather station also registers the “rain rate”, which is the maximum precipitation that would have been collected during the 30-minute measuring interval following the last bucket tip. So this number gives some information on the abruptness of the rain pulse, compared to the 30-minute total, but it does not help much more. I will use the rain event of 23 July, following the storm of the previous day, because this rain pulse is not followed by a second one that would muddle the picture. Here are the numbers:

23Jul2016
There clearly is a time lag of about 2 hours between the start of the first rain event and the excess radiation peak. If we set the origin of time at the moment where the activity peaks and apply a simple exponential decay model, we obtain, with a rather good R2 = 0.97, a half-life of about 43 minutes. Note that the return-to-normal interval is about 3 hours.

gammaDK_23Jul2016
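For illustration, here is a minimal sketch of such a decay fit in Python; the data points are placeholders, not the actual Diekirch series:

```python
import numpy as np
from scipy.optimize import curve_fit

# Simple decay model A(t) = A0 * exp(-lambda * t) fitted to the excess
# dose-rate after the activity peak.  The values below are placeholders.
t_min  = np.array([0, 30, 60, 90, 120, 150, 180])          # minutes after peak
excess = np.array([20.0, 12.4, 7.7, 4.8, 3.0, 1.9, 1.2])   # nSv/h above background

def decay(t, a0, lam):
    return a0 * np.exp(-lam * t)

popt, _ = curve_fit(decay, t_min, excess, p0=(20.0, 0.02))
a0, lam = popt
t_half = np.log(2) / lam                                    # half-life in minutes
resid = excess - decay(t_min, *popt)
r2 = 1.0 - np.sum(resid**2) / np.sum((excess - excess.mean())**2)
print(f"fitted half-life ~ {t_half:.0f} min, R^2 = {r2:.2f}")
```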

Let’s verify the corresponding evolution given in the Livesay paper at figure 4:

Livesay_fig4period

I put the time origin at the second count peak and wrote down the corresponding counts obtained by inspection. The decay model gives this result for the decrease of the excess counts:

Livesay_fig4_model

Here the half-life is approx. 19 minutes, about half of what we found at Diekirch. These 19 minutes are close to the half-life of Bi214, the second of the two relevant radon daughters. Now one should remember that we observe the combined effect of two radioactive decays, where the first element, Pb214, creates the second, Bi214. Our simplistic curve-fitting exercise has no solid physical basis and should be taken as a visualization tool only.
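To make this point concrete, here is a minimal numerical sketch (not the Livesay model itself) of the combined activity of the two-member chain Pb214 → Bi214 after a deposition pulse; the initial amounts are invented, and the detector is assumed to see both isotopes equally well:

```python
import numpy as np

# Bateman solution for the chain Pb-214 (T1/2 ~ 26.8 min) -> Bi-214 (T1/2 ~ 19.7 min).
lam_pb = np.log(2) / 26.8
lam_bi = np.log(2) / 19.7

def activities(t, n_pb0=1.0, n_bi0=0.3):
    """Activities (arbitrary units) of ground-deposited Pb-214 and Bi-214."""
    n_pb = n_pb0 * np.exp(-lam_pb * t)
    n_bi = (n_bi0 * np.exp(-lam_bi * t)
            + n_pb0 * lam_pb / (lam_bi - lam_pb)
              * (np.exp(-lam_pb * t) - np.exp(-lam_bi * t)))
    return lam_pb * n_pb, lam_bi * n_bi

t = np.arange(0.0, 181.0, 5.0)          # minutes after the deposition pulse
a_pb, a_bi = activities(t)
total = a_pb + a_bi                     # roughly what the Geiger counter sees

# Apparent half-life of the summed signal, estimated between 30 and 90 minutes:
t1, t2 = 30.0, 90.0
ratio = total[t == t1][0] / total[t == t2][0]
print(f"apparent half-life ~ {np.log(2) * (t2 - t1) / np.log(ratio):.0f} min")
```

With these invented starting amounts the apparent half-life of the summed signal comes out markedly longer than either individual half-life, because Bi214 is still being produced from the freshly deposited Pb214; this is one more reason why the single-exponential fits above are visualization tools rather than physics.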

What remains is that in our data the general evolution of the excess gamma activity caused by a rain event follows a similar, albeit slower, evolution than that shown in the Livesay paper. The return to normal background levels takes about 3 hours, in accordance with that paper.

22-July-2016 storm-flood and radon washout

July 28, 2016

About 15 km away from Diekirch, the small village of Larochette and some neighbouring villages suffered on 22 July 2016 a disastrous downpour of more than 50 mm in less than one hour: the result was very disruptive flooding which caused much destruction: bridges crumbled, roads were torn open and the lower levels of many houses were flooded. The phenomenon was very localized, and in Diekirch the very short-lived rain pulse amounted to not more than about 12.9 mm (in 30 minutes).

As seen in a couple of former blog posts (see here, here and here), these rain pulses caused very visible radioactivity peaks measured by our Geiger counter. We know that these peaks are essentially caused by a radon washout (the peaks are the fingerprint of the gamma-emitting radon daughters).

We all like nice and clean cause-effect relations, preferentially linear ones. When we look at what happened during the week from the 21 to the 26 July, we see that things are a bit more complicated:

22JUL2016
The upper plot shows in blue the intensities of the precipitation peaks (in mm per 30 minutes, also given by the corresponding labels) and in red the cumulative precipitation. The lower plot gives the dose rates in nSv/h, with the yellow boxes showing the approximate numbers for clarity. If we assume a usual background of about 83 nSv/h, subtract 83 from these numbers to obtain the excess peaks caused by the washout.

In the preceding posts I suggested that after a first washout the atmosphere requires a minimum time to “recover”: a rain pulse, even a much more intense one, arriving before that minimum time (3 days were suggested) yields a lower radioactive response. What happened on 22 July 2016 shows both this behavior and its opposite: the storm rain pulse of 12.9 mm follows, by less than 24 hours, an ordinary heavy downpour of 4.2 mm. This first precipitation event triggers a radiation peak of 16 nSv/h; the storm event, with an intensity about three times higher, produces practically the same radiation rise. But even more intriguing is the next rain event some 5 hours later: a meagre 1.8 mm of rain produces a radiation peak of 13 nSv/h, not much lower than the preceding one.

So we have in this week two conflicting situations:

a. the first confirms the hypothesis of an atmospheric recovery time: you cannot wash out what isn't there yet; the very heavy storm event happens too early to produce a bigger radiation peak.

b. the small precipitation event following the 19:00 UTC “monster” causes an important radiation peak, in violation of both the effect-proportional-to-cause and the time-lag hypothesis. I have no explanation for this at the moment. A first check is to look for a precipitation measurement error; I did this with our Davis VantagePro II backup station, mounted at a distance of about 4 m. The next table shows the details of the measurements:

22Jul2016_data._xls

The last column corresponds to the increase of the dose-rate, taking 83 nSv/h as the reference.

Clearly, there is a delay of about 1 hour between the precipitation and the radiation peaks. What remains puzzling is the strong radiation pulse after a very modest rain peak following the “big one”.

Let us finish this short analysis by plotting the radiation increase versus the rain pulse that causes it. I will add all the data of this week to the values measured in 2013 and 2014. First the table with these data:

radon_washout
and now the graph:

rad_versus_rain
We see that the big storm of 22 July 19:00 is an outlier, and a linear model might not be the best choice. The Pearson correlation between rain pulse and radiation peak is 0.23, not statistically significant at the 95% confidence level. Omitting the outlier improves the correlation to 0.32, but it remains non-significant. This discussion of significance is somewhat moot, as the observations clearly show that radon washout does exist. What is not so puzzling is the situation in cases 11 & 12 (the last two lines of the preceding table): a first small rain pulse may cause a distinct radiation peak, and to produce the same peak a second rain pulse following shortly afterwards must be much higher.
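For completeness, this is how such a significance test can be run; the two arrays are placeholder values standing in for the table above, not the actual measurements:

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder rain pulses (mm per 30 min) and excess dose-rate peaks (nSv/h).
rain_mm  = np.array([1.5, 3.5, 1.8, 4.2, 12.9, 1.8, 2.6, 5.0, 3.1, 0.8, 2.2, 6.0])
peak_nsv = np.array([9.0, 14., 13., 16., 17.0, 13., 11., 15., 12., 6.0, 12., 14.])

r, p = pearsonr(rain_mm, peak_nsv)
print(f"Pearson r = {r:.2f}, two-sided p-value = {p:.3f}")
# The correlation is significant at the 95 % confidence level only if p < 0.05;
# to drop a suspected outlier, simply repeat the call without that data pair.
```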

But the situation in cases 8 & 9 remains disturbing to me: why does the small rain pulse following the big one create a similarly intense radiation peak?

Thank you in advance for an answer, if you have a clue about what might be happening here…

_____________________________

01-Aug-2016: added as supplementary information to Marcel's comment.
The link to the full paper of Livesay “Rain induced background radiation….” is here.

Modulation of Ice Ages (part 2)

July 5, 2016

In the first part of this post I recalled some fundamentals of Milankovic's climate-relevant cycles: precession, obliquity and eccentricity. In this second part I try to summarize as simply as possible the main points of the new paper by Ellis & Palmer.

1. Five major insights

The following 5 points are known and accepted by most scientists:

a. Each major deglaciation coincides with a maximum of NH (Northern Hemisphere) solar insolation.

b. Not all insolation maxima (“Great Summers”) trigger deglaciations.

c. Eccentricity governs the strength of the Great Summer.

d. During an ice age, atmospheric CO2 levels plunge (colder oceans absorb more CO2), while ice-sheet extent and albedo (the Earth's reflectivity) increase.

e. When CO2 levels are at a minimum and albedo at a maximum, a rapid warming will begin and start an interglacial period. Conversely, when CO2 levels are at a maximum and albedo at a minimum (during an interglacial), a new cooling (= ice age) will begin.

2. The Ellis & Palmer paper

The new theory postulated by Ellis & Palmer can be summarized as follows: a CO2 minimum (e.g. 150-200 ppm) starves plant life and creates a die-back of forests and savannas, which increases soil erosion and produces more dust storms. The dust deposited on the ice sheets lowers their albedo, increasing the absorption of solar energy. This increase of about 180 W/m2 at higher NH latitudes starts a global warming, i.e. an interglacial.

So ice ages are forced by orbital cycles and changes in NH insolation, but regulated by ice-albedo and dust-albedo feedbacks. The precession cycle is the main forcing agent, and albedo is the primary feedback for interglacial modulation.

As a consequence, CO2 (through its greenhouse gas properties) cannot be the primary feedback, because high CO2 levels during or at the end of an interglacial are followed by cooling, and low CO2 levels at a glacial maximum precede the warming.

Antarctic_T_and_dust
The grey bands in this figure correspond to maximal dust deposits  (>0.35 ppm): the Antarctic temperatures (from the Epica 3 bore-hole) start rising after most of these dust peaks.

3. Main conclusions of the paper.

Regarding IPCC’s AR5 published in 2013 the  authors write: “The IPCC has identified dust as a net weak cooling mechanism, when it is probably a very strong warming agent.”

And they conclude with these words: “The world's dust-ice Achilles heel needs to be primed and ready to fire before an interglacial can be fully successful… in which case, interglacial warming is eccentricity and polar ice regrowth regulated, Great Summer forced and dust-ice albedo amplified. And the greenhouse attributes of CO2 play little or no part in this feedback system.”

You should definitely read the full paper!

Modulation of Ice Ages (part 1)

July 4, 2016

Ralph Ellis and Michael Palmer have published an extremely interesting paper in the Elsevier journal “Geoscience Frontiers” titled “Modulation of ice ages via precession and dust-albedo feedbacks” (link to the open-access version, May 2016). This long paper (19 pages!) is very readable, but nevertheless needs more than one reading to fully grasp the important details. So I will try in this post to summarize the most important findings of that outstanding paper.

  1. The Milankovic cycles

The climate of the Earth is a system response to the insolation from the sun. As this insolation (or irradiance) is not constant, it is no big surprise that Earth's climate is not constant either, and never was. There are short variations, like the seasons, El Niño events, the 11/22-year and 60-year changes from solar and oceanic oscillations etc. The much longer periodic changes, like the ice ages, have been known since Milutin Milankovic's seminal papers to be caused by variations of (at least) 3 astronomical parameters of the Earth's motion in our solar system, variations that cause important changes in the insolation of planet Earth.

The most important parameter is the precession of the Earth's axis: this gyroscopic effect (first studied by the great mathematician Euler) means that the axis (which is inclined with respect to the ecliptic plane, i.e. the plane of the Earth's orbit around the sun) makes a slow rotation around the perpendicular to the ecliptic plane. The axis oscillates between two extreme positions, where it points to Polaris (the North Star) or to Vega. When the axis is tilted towards Polaris (which is close to the present situation), the Northern Hemisphere (NH) winters occur where the globe is closest to the sun, and the NH summers where it is farthest. This precessional cycle (including a complication caused by the rotation of the elliptical orbit itself, the apsidal precession) has a length of about 22200 years, a period often called a Seasonal Great Year (SGY). A Great Season takes 1/4 of this period, about 5550 years; one speaks of a Great Summer, a Great Winter and so on. This precession of the axis has by far the biggest influence on solar insolation (details will follow).

A second important astronomical parameter is the obliquity or axial tilt. The angle between the axis and the perpendicular to the ecliptic plane varies between about 22.1° and 24.5°; the current value is about 23.4°. This angle essentially determines the severity of the seasons. At present the NH winters are moderate, as the solar rays arrive closer to the perpendicular of the globe's surface, and the summers are moderate too, as the solar rays are more inclined, which diminishes their heating potential. The length of one obliquity cycle is 41000 years. Precession and obliquity together cause a complicated wobbling movement of the Earth's axis.

Finally, the last important factor is the eccentricity of the Earth's orbit. The orbit is an ellipse, close to but not quite a circle. The eccentricity describes the deviation from a perfect circle, and in the case of our planet this parameter is not constant but varies slowly with time under the influence of the other planets. The cycle length is approx. 100000 years. The changes in eccentricity are small, between about 0.0034 and 0.058 (the current value is 0.0167, which means that the present orbit is nearly circular). The main influence of the changing eccentricity is a (small) shift in the timing of the seasons during the year.

For climate-related questions, the most important quantity is the change in solar irradiance (or insolation) at high latitudes of the NH. Usually one looks at the changes observed (or calculated) at northern latitude 65° (NH 65). Here are the extreme changes caused by the variations of the three astronomical parameters described above:

Precession: 110 W/m2
Obliquity:      25 W/m2
Eccentricity: 0.4 W/m2

These changes can be lumped together in the so-called Milankovic cycle (figure from the Ellis/Palmer paper; the time axis is in ky (kilo-years) before present):

Milankovitch_Cycle_with_zero_line

The upper plot shows the changes in solar irradiance, the lower one the Antarctic temperature deviations from a mean value. I added a zero line to make clearer where the interglacial periods happen (the peaks above the zero line) and where the ice ages are (the periods in-between); note that the ice ages are the “normal” state of the Earth's climate, and that the interglacials are, geologically speaking, exceptions to that state. The orange/red bands represent the (Seasonal) Great Summers. Clearly, not all Great Summers cause an interglacial warming, as there are about 4 to 5 Great Summers from one interglacial to the next. The Ellis/Palmer paper tries to explain this with a novel theory; I will discuss it in part 2 of this post.

NOx emissions (part 2)

June 13, 2016

In the first part of my comment I showed that, concerning the mean annual NO2 concentration, Luxembourg is among the better performers of the EU28 countries, whereas Germany is the worst.

This second part will be about emissions from petrol (gasoline) and diesel engines, and the efficiency of the various Euro norms. I rely for a good part on an excellent report by King's College London, the University of Leeds and AEA, published in 2011 for DEFRA (Department for Environment, Food and Rural Affairs).

1. Emissions of NOx from petrol and diesel engines.

Diesel engines are the workhorses of heavy machinery, as they have an efficiency of about 35% compared to about 25% for conventional petrol engines; this higher efficiency was one of the main reasons to introduce diesel engines in ordinary vehicles, as the fuel consumption for a given power output is lower (and diesel fuel is less taxed in many countries). In a diesel engine, the vaporized injected fuel burns lean, with an excess of oxygen and at high pressure, and some spots in the cylinder can reach temperatures over 1500 °C. The excess of oxygen favors the formation of NOx. Conventional petrol engines burn a nearly stoichiometric air/fuel mixture (which is created outside the combustion chamber, in the carburetor or intake manifold), without any oxygen excess; the result is a combustion with lower levels of NOx, which can easily be removed by a 3-way catalyst. Gasoline direct-injection (DI) engines have better fuel efficiency and torque at low rpm, but suffer from higher NOx emissions, similar to diesel engines. Gasoline DI engines are becoming more and more popular, and some of their NOx problems are solved by special catalytic converters and EGR (exhaust gas recirculation); you may read this report from DELPHI which shows testing of an engine releasing not more than 0.2 g NOx/kWh (the US federal limit for heavy trucks is 0.26 g NOx/kWh). Nevertheless one should bear in mind that the switch from conventional to DI gasoline engines will increase the NOx problems of gasoline-driven vehicles.

2. Main findings of the report “Trends in NOx and NO2 emissions in the UK and ambient measurements”

This report is interesting because it relies heavily on remote sensing detectors (RSD) to measure NOx/NO2 levels under urban traffic conditions (mostly low speeds around 36 km/h). The report finds big differences between the published emission factors and the measurements for light vehicles, and shows that certain catalytic techniques used in heavy goods vehicles (trucks) are inefficient under urban driving conditions.

The following figure shows how NOx emissions (here expressed as the ratio NOx/CO2 × 1000) changed over the years for 4 types of vehicles: passenger cars, HGVs (heavy goods vehicles = trucks), LGVs (light goods vehicles = vans) and buses:

NOx_CO2_yearmanufacture_Defra

The CAR panel clearly shows a rather dramatic decrease of NOx emissions for gasoline cars (blue curve), but a more or less steady state since 2000 for diesel cars (red curve); the same situation occurs for the LGVs. For buses the situation is even worse, as emissions have tended to increase since about 1998! So no wonder that roadside NOx levels in many cities remain high, even where private car traffic is restricted and public buses are promoted as the main transportation mode.

The different Euro class norms set the upper limits of allowed NOx emissions (in g/km); here are the numbers for diesel engines:

NOx_EUROlimits_approvals_Defra

E2 = 0.7,  E3 = 0.5,  E4 = 0.25, E5 = 0.18 and the latest E6 (not on this figure) =  0.08 g/km.

If we look at the test measurements in the report for Diesel and gasoline cars, the results are mind-boggling:

NOx_CO2_Diesel_cars_EUROnorm_Defra

NOx_CO2_petrol_cars_EUROnorm_Defra

These two figures are box-and-whisker plots: the black line corresponds to the median of the sample (50% of the sample lies below, 50% above), the blue rectangular boxes show the 25-75% percentiles (i.e. 50% of the sample lies inside the box), and the full extent of the whiskers (the black lines) covers 99% of the sample.
For petrol cars, the effect of the increasingly stringent norms is clearly visible, even if there seems to be a standstill from E3 on. Diesel engines do not show this: on the contrary, the latest Euro norms even bring a worsening! This is a clear sign that the over-optimistic Euro norms are nearly impossible to fulfill for diesel cars that must at the same time be fuel-efficient and powerful. No wonder that many manufacturers of diesel cars (like Volkswagen) installed clever software to fool the compliance procedures.

Nobody should be astonished that real-world measurements give different results than the official numbers based on laboratory measurements and/or computer programs. The problem with measurements under real driving conditions is that these conditions are impossible to standardize (the state of the road, the weather etc. are changing parameters), whereas measurements in the lab can be made under well-defined conditions. The next figure shows the difference between the higher road-side measurements (RSD) and the official factors:

RSD_factors_petrol_diesel_EURO1to4_Defra

This figure once more tells the sad story that for Diesel cars the different Euro norms did not have a big effect!

 

3. The roadside or country-wide NOx levels

The next figure gives the  European ambient NO2 concentrations according to different environments:

NO2_concentrations_EU_2008_Defra

The vertical line at 40 ug/m3 corresponds to the European limit for the annual average concentration; of the 5 different environments, the roadside remains the only problematic location. Even urban or suburban backgrounds lie well below the 40 ug/m3 limit!

Let us look how this roadside situation changed during the years for different countries:

NO2_concentrations_EUcountries_1995to2009_Defra

Except for Greece and Italy, all countries show a more or less horizontal trend over the full period 1995 to 2009: this means that the successive Euro norms did not have a big effect at roadside locations. One reason, as shown above, is that successive generations of diesel engines were unable to lower their real-world NOx emissions drastically; a second reason may well be the massive increase in diesel cars, an increase pushed by political decisions to lower fuel consumption (and the supposedly climate-hurting CO2 output) which made diesel fuel less expensive than gasoline.

 

4. Conclusion

NO2 (or NOx) mitigation is a wicked problem, and only naive persons believe that impossibly stringent norms will miraculously achieve results that are nearly impossible to obtain for physical and/or engineering reasons. Maybe we would be better off if the scare-mongering about the dangers of NO2 ceased, and solutions for lowering NO2 emissions were allowed more time for research, experimentation and development. Pierre Lutgen (who holds a PhD in chemistry) does not believe in many of the shrill warnings about very low NO2 levels (read his French article on nitrites). But NO2 as a gas is an irritant for the lung linings and may form very small particles when reacting with other substances; it also has a detrimental effect on plant life (some put the allowed limit as low as 30 ug/m3). So high levels of NO2 clearly should be avoided. But as so often in environmental policy, setting limits at impossibly low values will not hasten compliance, but will favor clever schemes to circumvent these limits.

NO2 emissions… is Luxembourg the bad guy? (part 1)

June 12, 2016
  1. The recent EEA report on national emission compliance. 

A recent report from the European Environment Agency (EEA) made some splash in the media, as it showed that many countries (among them Luxembourg) are missing their NOx emission limits. Here is the relevant table:

national_emission_exceedences_2010to2014

The red crosses represent exceedances, the ticks conformity. What exactly an exceedance means is not very clear: probably it represents an overshoot, somewhere in the country, of the 8-hour limit of NOx concentration; the place where this happens will almost certainly be an urban road with heavy traffic, and not a general annual mean concentration above the 40 ug/m3 limit (for NO2).

If we take the most important industrial countries (Belgium, France, Germany, Italy and the UK), all except Italy and the UK fail to conform to the targets. It is an irony that “über-grün” Germany exceeds the limits for all relevant pollutants such as NOx and NMVOC (non-methane volatile organic compounds, like terpenes); that Italy and the UK are in conformity might be real (I have some doubts, thinking of Rome or Naples traffic conditions), or simply a sign of particularly clever reporting. Emissions of NH3 (ammonia) are clearly related to agriculture (especially cattle and pig raising), which explains why Denmark and the Netherlands are the big “sinners” here.

Local NOx/NO2 exceedances may give a distorted picture, so let us look at the yearly mean concentrations, as given in several EEA publications and databases.

 

2. The yearly average NO2 concentration in Luxembourg and other EU states.

The following picture shows the average annual concentration in the 6 validated Luxembourg measurement stations in 2011:

LUXBG_NO2_2011NO2_legend

Vianden, Beidweiler and Beckerich are rural stations, Esch-Alzette and Luxembourg are urban ones. Only at the two Luxembourg(-City) stations is the EU limit of 40 ug/m3 exceeded. Not surprising, as these measurement stations lie at roads with very heavy traffic; the Esch-Alzette station sits on a small hill (Galgenberg) with plenty of green vegetation around.

The following two pictures show the daily mean NO2 concentrations at Vianden and Luxembourg-Bonnevoie for 2015 (note the different vertical scales!) (link):

VIANDEN_NO2_2015LUXBG_Bonnevoie_NO2_2015

Rural Vianden concentrations are very low, and even during the heating months they do not exceed 30 ug/m3; the situation in urban Luxembourg-Bonnevoie is quite different. The (relative) difference between the heating months and the summer months is much smaller, and days exceeding the 40 ug/m3 limit are quite frequent. The lower summer concentrations at both sites are in my opinion mostly caused by increased atmospheric mixing due to convective air transport.

Let's close this chapter with a picture showing the situation in 2012 for all EU member states:

NO2_attainment_2012_EU

The dots represent the median, the boxes delimit the 25 to 75 percentiles, and the whiskers ( the thin vertical lines) show the region containing 99% of the values.

Clearly Luxembourg fares very well: if we take the median concentration, 19 countries surpass Luxembourg, which thus has the 9th lowest median of the 28 EU countries. If we look at the upper whisker end, only 3 or 4 countries have lower or similar upper bounds.

Conclusion:  Luxembourg is NOT the bad guy!

 

3. Hourly NO2 concentrations

I will close this first part with a look at the hourly NO2 concentrations during the last 7 days; we will compare the measurements made at Vianden, Luxembourg-Bonnevoie and Diekirch (meteoLCD):

VIANDEN_NO2_1h_5to12Jun2016

LUXBG_Bonnevoie_NO2_6to12Jun2016

DIEKIRCH_NO2_6to12Jun2016

At Vianden we see a daily maximum which mostly does not exceed 3 times the daily minimum (except on the last day); the urban Luxembourg-Bonnevoie data show two daily spikes, one in the morning and one in the afternoon: clearly a sign of increased traffic during the rush hours when commuters enter or leave the town. The range extends from 10 to 80 ug/m3, a factor of 8.

The NO2 sensor in Diekirch has a positive bias of about 10 ug/m3, so read the left blue scale. Here we have a very pronounced peak in the morning (commuter traffic, and normally a time of morning inversion); the afternoon peak is muted or absent. The range extends from 10 to 100 ug/m3, a factor of 10, similar to the Luxembourg-Bonnevoie situation. The red curve shows the NO readings, which are always lower than the NO2 ones. The NOx concentration corresponds to the sum of the blue and red curves.

Comparing the last two plots, we observe a flattening during the last two days (11 and 12 June): you will have guessed that these are the weekend days without a commuter rush hour!

 

In the 2nd part of this blog (coming asap), I will analyze emissions by different types of cars, using data from a truly excellent DEFRA report from 2011.

Energy Return on Energy Investment

May 29, 2016

kelly

Prof. M.J. Kelly from Cambridge University (Electrical Engineering Division, Department of Engineering) has just published a very interesting paper, “Lessons from technology development for energy and sustainability”, in which he is very critical of the currently fashionable decarbonization policies. He strongly warns that trying to massively deploy not-yet-mature technologies can be counter-productive.

In this comment I just want to stress two problems related to energy production which he mentions in his paper: the first is the EROI (Energy Return on Investment), which we will read as Energy Return on Energy Investment (EROEI); the second is the energy density and the land requirements of various power technologies.

  1. The EROI 

This is a very easy-to-understand parameter which puts a number on the following question: how much energy will a given technology produce during its lifetime, compared to the energy needed to build it and keep it working during that period? This problem is practically always fudged by green-energy advocates, who say for instance that a wind turbine pays back its energy budget during the first year (link), ignoring all the associated problems of backup power, grid investments etc. Prof. Kelly does not agree, and gives the following graph:

EROI
The left scale represents the ratio (energy produced)/(energy invested); the blue bars show this ratio without any regard to energy storage, the yellow bars show the result if one includes all the energy needed to implement the large-scale storage technologies (such as pumped hydro, batteries…) required by intermittent producers like wind and solar. He says that the economic threshold is about 8; of the 4 renewable producers, only thermal solar plants in desert regions barely exceed this minimum, whereas nuclear power reigns supreme with a factor of 75.

A serious problem with such analyses is the life-cycle assessment (LCA), often difficult to carry out in a scientifically non-partisan manner. Kelly cites a book by Prieto and Hall (Springer, 2013) which studied the EROI of the Spanish solar “revolution”, where clear and unambiguous data are available: these authors find an EROEI of 2.45 for the Spanish solar programme.
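As a toy illustration of how a storage requirement drags the ratio down, here is a small sketch with invented round numbers (they are not Kelly's figures):

```python
# Energy returned over energy invested, with an optional storage penalty.
def eroei(lifetime_output_mwh, embodied_mwh, storage_embodied_mwh=0.0):
    return lifetime_output_mwh / (embodied_mwh + storage_embodied_mwh)

# Invented numbers for a hypothetical wind farm: 60 GWh produced over its life,
# 4 GWh embodied in the turbines, 12 GWh embodied in a full storage/buffer system.
print(eroei(60_000, 4_000))          # ~15  : looks comfortable
print(eroei(60_000, 4_000, 12_000))  # ~3.8 : falls below Kelly's threshold of ~8
```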

2. Energy density and land usage

A second problem with wind and solar is that these are extremely low-density power sources. The following table shows the numbers in MJ/kg:

energy_density

I do not quite agree concerning modern, non-lead batteries: the energy densities are much higher, but still minuscule compared to nuclear:

energy_density_batteries_www_epectec_com

This graph (from http://www.epectec.com) shows that the most recent batteries may come close to 0.76 MJ/kg, similar to hydro dams. Energy density is an important factor when land usage matters, as it does for most populated regions of the world and especially for the mega-cities of the future, which are expected to hold 50% of the world population in 2050.

This Breakthrough paper gives the following numbers for land use in m2 per GWh delivered in one year:

land_use_per_GWh

and these are the numbers for material use:

material_use_per_GWh

I have added capacity factors that are close to those observed in Germany/Luxembourg (onshore wind practically never reaches 30%, and 10% for solar PV is still optimistic); with these more realistic capacity factors, onshore wind would have a land use closer to 2200. What comes as a bit of a surprise (even if we accept the very optimistic original numbers) is that solar PV has about the same material footprint as nuclear (which we instinctively associate with enormous volumes of concrete and steel).

Let us take tiny Luxembourg's energy consumption as a rough indicator of what part of the ~2500 km2 area of the country would be needed if a single energy source had to deliver it all. According to this report the total energy consumption was about 50000 GWh in 2013. Here are the areas in km2 and in % of the total country area if all this energy had to be produced by the given source (a small calculation sketch follows further below):

Nuclear:                  60 km2    = 2.4 %   (assumes cooling water comes from new lakes)

Solar PV:                320 km2   = 12.8%  (land use taken as 6400)

Wind on-shore:    83 km2     =  3.3%  (land use taken as 1650)

Biomass:         23000 km2     =  more than 9 times the total area of Luxembourg !

The wind and solar numbers are more or less meaningless unless full storage solutions exist (which will not be the case in the foreseeable future).

I do not accept the numbers for nuclear. The nearby Cattenom nuclear plant produces about 35000 GWh per year and occupies an area of at most 4 km2 (checked with Google Earth). Using this as a more realistic example, we would get a total land use for the nuclear option of about 6 km2 or 0.24%, i.e. 10 times less.
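Here is a small sketch of the arithmetic behind these area estimates; the land-use factors are the ones used above (the "nuclear (table)" factor is back-calculated from the 60 km2 entry, and the revised Cattenom-based figure is added for comparison):

```python
# Area needed if one source had to deliver Luxembourg's ~50 000 GWh per year.
LUX_AREA_KM2 = 2500.0        # rounded country area used in the text
ANNUAL_GWH   = 50_000.0

land_use_m2_per_gwh_yr = {   # m2 per GWh delivered in one year
    "nuclear (table)":    1_200,   # back-calculated from the 60 km2 figure above
    "nuclear (Cattenom)":   115,   # ~4 km2 for ~35 000 GWh/yr, as estimated above
    "solar PV":           6_400,
    "wind onshore":       1_650,
}

for source, factor in land_use_m2_per_gwh_yr.items():
    area_km2 = factor * ANNUAL_GWH / 1e6        # m2 -> km2
    share = 100.0 * area_km2 / LUX_AREA_KM2
    print(f"{source:20s} {area_km2:7.1f} km2 = {share:5.2f} % of the country")
```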

3. Conclusion

Both EROI and land use show that the nuclear option is unbeatable as a “carbon-free” energy producer. This is also the conclusion of Prof. Kelly's paper, and that of the late Prof. MacKay in his last interview.

Mathiness and models: the new astrology?

May 18, 2016

climate_models
There is an outstanding article in Aeon on the use (and abuse) of mathematics and mathematical models in economics. It makes for fascinating reading, as many things said could directly apply to model-driven climatology. As a physicist, I love mathematics and find it invaluable for giving a precise meaning to what often are fuzzy statements. But this article includes some gems that make one reconsider any naive and exaggerated belief in mathematical models.

The economist Paul Romer is cited: “Mathematics, he acknowledges, can help economists to clarify their thinking and reasoning. But the ubiquity of mathematical theory in economics also has serious downsides: it creates a high barrier to entry for those who want to participate in the professional dialogue, and makes checking someone’s work excessively laborious. Worst of all, it imbues economic theory with unearned empirical authority.” Replace the word “economics” with “climatology” and you begin to understand.

There are many citations of the great physicist Freeman Dyson on climate issues, like this one: “…climate models projecting dire consequences in the coming centuries are unreliable”, or: “[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere” (link).

Ari Laor from the Technion (Haifa, Israel) writes in a comment at the American Scientist blog: “Megasimulations are extremely powerful for advancing scientific understanding, but should be used only at a level where clear predictions can be made. Incorporating finer details in a simulation with a large set of free parameters may be a waste of time, both for the researcher and for the readers of the resulting papers. Moreover, such simulations may create the wrong impression that some problems are essentially fully solved, when in fact they are not. The inevitable subgrid physics makes the use of free parameters unavoidable…”

The Bulletin of the Atomic Scientists also has a very interesting article, “The uncertainty in climate modeling”. Here are some gems: “Model agreements (or spreads) are therefore not equivalent to probability statements… does this mean that the average of all the model projections into the future is in fact the best projection? And does the variability in the model projections truly measure the uncertainty? These are unanswerable questions.”

How true…

 

PS: The Bulletin has a series of 8 short contributions on this subject, and I suggest taking the time to read them all.

 

 

First Radiation Amplification factor for 2016

April 21, 2016

In several previous posts (here and here) I commented on the RAF (Radiation Amplification Factor), which tells us how much a change in the total ozone column changes the UVB irradiance at the ground. The question is usually asked to quantify the danger that a thinning ozone layer will cause an increase in UVB radiation, which might increase the risk of skin cancer. The often extreme scare about the danger of UVB radiation has faded somewhat in recent years, as cases of rickets caused by too little UVB exposure have shown up again (read this paper). But at many beaches you can see overprotective mothers putting their children into UV-filtering jump-suits, which might be an overreaction triggered by the scary media stories that usually start in Western countries on the first warm and sunny days of the year.

The RAF is defined as: RAF = – ln(UVB1/UVB2) / ln(DU1/DU2)

where the indices 1 and 2 correspond to two different situations. The following graph shows the situation today, a second (nearly) blue-sky day following a first one. As the dates and times of the measurements are so close, we may assume a constant path length of the sun rays through the atmosphere and constant attenuation. The AOT (atmospheric optical thickness), which measures the turbidity of the atmosphere, was 0.055 on the first day and 0.068 on the second; these are very close values. For comparison, the AOT was 2.197 on 19 April, a day with heavy cloud cover. The solar zenith angles were 38.4 resp. 38.0 degrees.

RAF_20_21April2016

With the readings shown on the graph, we find an RAF = 1.10

In a much more extensive paper I wrote in April 2013, the corresponding value is 1.08 (computed over 5 consecutive days).

Expressed as simple percentages, one can roughly say that between the two days of 20 and 21 April 2016 a decrease of 5% of the total ozone column caused a rise of the UVI of 9% (percentages with respect to the first day). Beware of extrapolating this conclusion linearly, as the RAF is defined through non-linear logarithms!
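A small computational sketch of this definition (the DU and UVI values below are placeholders chosen to give an RAF near the one quoted above, not the readings from the graph):

```python
import math

du1, uvi1 = 360.0, 4.40    # day 1: total ozone column (DU) and UV index
du2, uvi2 = 342.0, 4.66    # day 2: 5 % less ozone, higher UV index

raf = -math.log(uvi1 / uvi2) / math.log(du1 / du2)
print(f"RAF = {raf:.2f}")

# The underlying relation is a power law, UVI2/UVI1 = (DU2/DU1)**(-RAF),
# so percentage changes must not be extrapolated linearly.
new_uvi_ratio = 0.90 ** (-raf)      # e.g. a 10 % ozone loss
print(f"a 10 % ozone loss would raise the UVI by {100*(new_uvi_ratio - 1):.1f} %")
```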

 

European Summer Temperatures since Roman Times

April 9, 2016

J. Luterbacher from the University of Giessen has published in Environmental Research Letters an interesting paper on the evolution of European summer temperatures. The paper is only 12 pages long, but the list of coauthors counts 44 names, reflecting the inflationary tendency to cite everyone the lead author wants to be agreeable to (and the desperate struggle of scientists to be coauthors on a maximum of papers). Nevertheless, the paper is interesting to read.

The authors used two statistical methods to evaluate temperature proxies (here tree rings): Bayesian hierarchical modelling (BHM) and a Composite-Plus-Scaling (CPS) method. Both results are compared (where feasible) with instrumental records (here CRUTEM4). The agreement between these 2 methods and the instrumental record is rather good, as shown in this figure, which gives correlations r of 0.81 and 0.83.

Luterbach_Instruments_B_C

Are the 20th century summer temperatures unusual?

A comparison from Roman times to today is known to include 3 warm periods: the Roman, the Medieval and the Modern (notice the well-known ~1000-year periodicity!). The next figure shows the results given by the two statistical methods and the IPCC consensus reconstruction:

Luterbach_B_C

I have added the red horizontal line giving the highest (reconstructed) level of the Roman Warm Period: clearly the situation during the 20th century was not unusual compared to that period.

Another figure starts at the Medieval Warm Period and gives the same impression:

Luterbach_MCA_LIA_Present
Compared to Medieval times, the last 100 years, with their noticeably higher atmospheric CO2 concentrations (mixing ratios), do not show a dramatic warming!

A last figure is also very telling: it gives the temperature differences between Present (1950 to 2003) and Medieval Warm Period:

Luterbach_Present-MCA

Luterbach_Present-MCA_scale
Some locations close to the Mediterranean are warmer, most are only slightly warmer or about the same, and two are even cooler.

Conclusion:

The authors write that “both CPS and BHM  indicate that the mean 20th century European summer temperature was not significantly different from some earlier centuries, including the 1st, 2nd, 8th and 10th centuries CE”.

This could be the last word, but we all know that a scientist today must pay at least lip service to the global warming meme. Accordingly the authors tell us that “However, summer temperatures during the last 30 yr (1986–2015) have been anomalously high”. Remember that we had a “monster” El Niño in 1998 and a very big one in 2015: these two events alone pushed up the average temperatures a lot, so this last remark is rather irrelevant.

But as they also write in another part of their paper that “…as well as a potentially greater role for solar forcing in driving European summer temperatures than is currently present in the CMIP5/PMIP3 simulations. This might be evidence for an enhanced sensitivity to solar forcing in this particular region”, thereby acknowledging the solar forcing denied by the IPCC, I will pardon the mandatory, career-friendly and politically correct sentence about the last years.

