This blog started 28th September 2008 as a quick communication tool for exchanging information, comments, thoughts, admonitions, compliments etc… related to http://meteo.lcd.lu , the Global Warming Sceptic pages and environmental policy subjects.
There have been some discussions in the media about the success of the Montreal Protocol in eliminating the use of ozone-depleting substances, and its effect on the Antarctic ozone hole.
A much more sober evaluation can be found in the EEA report “Production and consumption of ozone depleting substances“. I made an overlay of the two important graphs: the area of the Antarctic ozone hole and the world consumption of ozone-depleting substances (ODS), like CFCs:
This plot shows that the ozone hole has stayed practically constant since 1992, whereas the consumption of ODS has fallen sharply to near-zero levels. This begs for a serious explanation: are the man-made ODS the sole cause of ozone depletion, or are other phenomena at work here? For 20 years now, ODS consumption has been less than 20% of its value 30 years ago; should we really assume that a delay of 20 to 30 years is needed before we see the effect of the ODS ban?
Just for us in the Northern Hemisphere: here is the graph of the total ozone column (TOC) as measured in Uccle since 1972; the grey region corresponds to the [-2 sigma, +2 sigma] interval, containing about 95% of all values.
It is remarkable how fast the local TOC changes, yet the year-long average remains nicely sinusoidal; the average TOC values do not show any decrease to be afraid of (there was a first period of decrease followed by one of increase; we are still in the latter): no ozone hole here!
Remark 5th Aug 2016: some minor revision and replacement of some figures by the originals after exchanges with Dr. Frank
Dr. Pat Frank works at the SLAC National Accelerator Laboratory. For quite a few years he has studied the problem of uncertainty and reliability in climate models and measurements, and his conclusions are damning. In this comment I will try to condense the essentials of his presentation “No Certain Doom: On the Accuracy of Projected Global Average Surface Air Temperatures” (link), given in July 2016 at the 34th Meeting of the Doctors for Disaster Preparedness. This is a very clear presentation, and I urge you to spend 42 minutes watching the video; for all those who do not have the time, I condense the essentials below.
1. The ubiquitous climate models
Dr. Frank starts with a beautifully worded observation: “Climate models are the beating heart of the modern frenzy about CO2“. He recalls that the IPCC said “it is extremely likely that human influence has been the dominant cause of the observed warming since the mid-twentieth century”, a “consensus” conclusion that has been adopted by many scientific bodies and universities. Nevertheless, when it comes to predicting future warming, the picture is not pretty:
This figure from AR5 (AR5, TS15) shows that under the RCP 8.5 scenario, the uncertainty of the 2300 warming (with respect to pre-industrial times) is probably more than 12°C; this enormous spread in the results given by a very large number of models is a telltale sign of their poor reliability. Moreover, this large spread does not even include the errors of the individual models and their propagation when future temperatures are calculated step by step. What is really amazing is that no published study shows this propagation of errors through the GCM projections, which as a consequence should be considered unknown.
2. A model of models
Dr. Frank recalls that all models essentially assume a linear dependency between warming and the change of radiative forcing during one calculation step. He introduces a simple “model of models”, and compares its outcome with that of the super-computer models:
Later a supplementary term of 4 will be added before the closing right bracket. Comparing the outcome of this “Frank-model” with the GCMs, one must be surprised at how well it compares to the big guys:
The lines represent the outcome of the “Frank-model”, the dots the results of different GCM predictions (bottom is HadCRUT3). The extraordinarily good agreement allows Dr. Frank to say that “if you want to duplicate the projections running on super-computers, you can do it with a handheld calculator”.
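Such a linear emulator is simple enough to sketch in a few lines. The coefficient values below (baseline forcing, CO2 fraction, the 33 K greenhouse effect split) are illustrative assumptions on my part, not Dr. Frank's published numbers:

```python
# Minimal "model of models": projected warming is taken to be linear in the
# cumulative change of greenhouse-gas forcing.
# All constants are illustrative assumptions, not values from the talk.
F0 = 34.0            # baseline total greenhouse forcing, W/m^2 (assumed)
FCO2 = 0.42          # fraction of greenhouse warming attributed to CO2 (assumed)
T_GREENHOUSE = 33.0  # total greenhouse temperature effect, K

def emulated_anomaly(delta_forcings):
    """Warming anomaly (K) after a list of annual forcing increments (W/m^2)."""
    total = sum(delta_forcings)
    return FCO2 * T_GREENHOUSE * ((F0 + total) / F0 - 1.0)

# e.g. 100 years of a constant 0.04 W/m^2 annual forcing increase:
print(round(emulated_anomaly([0.04] * 100), 2))   # -> 1.63
```

With plausible forcing scenarios this one-liner indeed tracks the smooth GCM projection curves, which is precisely the point of the "handheld calculator" remark.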
3. Clouds, the elephant in the room
The influence of clouds on global temperature is enormous: globally their effect is assumed to be a 25 W/m2 cooling, with an annual uncertainty in the GCMs of +/- 4 W/m2. This number must be compared to the consensus value of human-produced GHG (greenhouse gas) forcing of 2.8 W/m2 since 1900. So the cloud uncertainty simply swallows the alleged anthropogenic warming. Usually one assumes that the errors in the models are random, so that with some luck they could at least partially cancel each other. Dr. Frank shows that nothing could be farther from the truth: the errors of the models are heavily correlated (R > 0.95 for 12, and R > 0.5 for 46 models).
How does the cloud uncertainty propagate through the calculation scheme of the GCM’s?
The annual 4 W/m2 uncertainty propagates from step to step, and the resulting uncertainty is the square root of the sum of the squares of the individual errors:
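This quadrature rule is easy to sketch numerically; the factor converting the ±4 W/m2 per-step cloud error into a temperature error is an illustrative assumption on my part, not a value from the talk:

```python
import math

# Root-sum-square propagation of a per-step uncertainty (a sketch; the
# sensitivity factor converting W/m^2 into K is an illustrative assumption).
U_CLOUD = 4.0                        # +/- W/m^2 annual cloud-forcing uncertainty
SENSITIVITY = 0.42 * 33.0 / 34.0     # K per W/m^2 (assumed linear factor)

def temperature_uncertainty(n_years):
    """Propagated +/- K envelope after n annual steps."""
    u_step = SENSITIVITY * U_CLOUD   # per-step uncertainty in K
    # quadrature sum over identical steps reduces to u_step * sqrt(n)
    return math.sqrt(sum(u_step**2 for _ in range(n_years)))

print(round(temperature_uncertainty(100), 1))   # -> 16.3
```

Because identical per-step errors add in quadrature, the envelope grows as the square root of the number of steps, reaching many kelvin after a century of annual steps.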
Clearly this warming range is “outside of any physical possibility”; the temperature projections of even the most advanced models become meaningless, if the cloud uncertainty is properly applied.
What can the climate models deliver as prediction tools? The following slide is crystal clear:
Do our politicians, who trip over each other to sign the Paris treaty, have even the slightest understanding of this uncertainty problem?
In this comment I will continue discussing some of the problems in explaining what really happens (radiation-wise) during short-lived precipitation events. Some very good papers will be briefly discussed, and I thank Marcel Severijnen for his really important comments.
1. A long-known phenomenon
That atmospheric radiation peaks during precipitation events has been known for many, many years. As an example, Thomas Thomson from the Swedish Meteorological and Hydrological Institute, Stockholm, published in 1962 a paper titled “Some observations of variations of the natural background radiation” (link). He speaks of a 5% to 20% increase in background radiation correlated with precipitation. The following figure 2 from this paper (with my additions) documents this (note that 1 micro-roentgen per hour equals 10 nSv/h):
From the duration of the gamma-ray increase he concludes that the cause must be short-lived decay products of radon (with half-lives T1/2 of less than 1 hour), and he suggests a specific activity in the rain water between 10^-12 and 10^-10 curie/g, which corresponds to 37 to 3700 Bq/litre (Bq/kg). He also notes that this activity decreases with precipitation rate, i.e. the higher the precipitation rate (in mm/h for instance), the lower the specific activity in the rain water. This is a sensible conclusion: when rain falls through a slab of air loaded with radioactive elements, a fast fall-through will scavenge fewer particles (per rain drop, for instance) than a slower one.
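Thomson's unit conversion is easy to check (1 curie = 3.7×10^10 Bq, and 1 litre of rain water is about 1 kg):

```python
# Convert a specific activity from curie per gram to becquerel per litre.
CI_TO_BQ = 3.7e10   # becquerel per curie

def ci_per_g_to_bq_per_litre(activity_ci_per_g):
    """Specific activity: Ci/g -> Bq/litre (1 litre of water ~ 1000 g)."""
    return activity_ci_per_g * CI_TO_BQ * 1000.0

print(ci_per_g_to_bq_per_litre(1e-12))   # lower bound, ~37 Bq/litre
print(ci_per_g_to_bq_per_litre(1e-10))   # upper bound, ~3700 Bq/litre
```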
Let us conclude with figure 7, where Thomson calculates by linear regression the radioactive doses given by precipitation events: using SI units (yellow boxes) he finds, depending on the month, a number between 50 and 100 nSv per mm of rainfall. Here in Luxembourg we are close to about 800 mm per year, which would correspond to an incredibly high natural background dose of 40 to 80 mSv if this dose were completely absorbed by a body (the usual assumption for yearly natural doses is about 3 mSv).
2. The Livesay paper
Marcel mentioned in his comment to the previous blog the paper by R.J. Livesay et al. from the Oak Ridge National Laboratory, published in 2014 and titled “Rain-Induced Increase in Background Radiation Detected by Radiation Portal Monitors” (link). This is a really interesting and very readable paper (with some not too difficult maths) that studies the increase in gamma counts registered by radiation portal monitors installed at many places (for instance in Luxembourg at the entrances for trucks or train wagons delivering scrap metal to our steel foundries). Using gamma-ray spectroscopy they clearly show that the elements causing the radiation peaks during rainfall, deposited on the ground, are mainly the two radon daughters Pb214 and Bi214, with half-lives of 27 and 20 minutes (the Rn222 half-life is 3.8 days) and gamma energies of approx. 352 and 609 keV:
The paper has a very nice record of a short rain pulse and the following decay of the gamma counts (fig.8 a&b, with my additions):
The authors give an easy-to-understand mathematical model, which reproduces the observations almost perfectly:
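The essence of such a model can be sketched as a surface activity that builds up at a constant deposition rate while the rain falls and then decays exponentially; this single-nuclide simplification (and the deposition rate of 1 unit/min) is my own assumption for illustration, as the paper treats Pb214 and Bi214 separately:

```python
import math

# Simplified single-nuclide rain deposition/decay model (illustrative;
# Livesay et al. follow Pb214 and Bi214 separately).
HALF_LIFE_MIN = 27.0                  # Pb214 half-life, minutes
LAM = math.log(2) / HALF_LIFE_MIN     # decay constant, 1/min

def excess_activity(t, rain_end=30.0, deposition=1.0):
    """Excess surface activity at time t (minutes since rain onset)."""
    if t <= rain_end:
        # build-up: deposition competes with radioactive decay
        return (deposition / LAM) * (1.0 - math.exp(-LAM * t))
    peak = (deposition / LAM) * (1.0 - math.exp(-LAM * rain_end))
    # pure exponential decay once the rain pulse has ended
    return peak * math.exp(-LAM * (t - rain_end))

# Activity peaks at the end of the rain pulse and halves every ~27 minutes:
peak = excess_activity(30.0)
print(round(excess_activity(30.0 + HALF_LIFE_MIN) / peak, 2))   # -> 0.5
```

Run for a short rain pulse, this reproduces the asymmetric shape seen in the paper's figures: a fast rise during the rain and a roughly half-hour-half-life tail afterwards.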
3. The Fujinami paper: rain-out or wash-out?
I will conclude the scientific literature with a paper published by Naoto Fujinami in the Japanese Journal of Health Physics in 2009, titled “Study of Radon Progeny Distribution and Radiation Dose Rate in the Atmosphere” (link). This paper addresses the question posed in the title of this blog: does the increase in gamma activity come from rain-out or from wash-out? Rain-out means that the radon progeny attach to the rain drops inside the rain-delivering cloud; wash-out means that the main process happens during the fall of the rain drops through the air volume below the clouds. The paper is a bit confusing, as a first (too) rapid reading gives the wrong impression that the author concludes that gamma activity decreases with rainfall. In fact this is not the case, as the author writes in chapter III (“Scavenging of radon progeny by precipitation in the atmosphere”) that “The radon progeny in precipitation produce an increase in absorbed dose rate in air at ground level“; a remark that in my opinion should have been made much earlier in the paper. Fujinami reports that there exists an inverse relationship between the concentration of radon progeny (in rain water) and the precipitation rate, an observation already reported by Thomson in his 1962 paper. What makes the Fujinami paper interesting is that he tries to demonstrate that rain-out, and not wash-out, is the important cause of the additional gamma activity.
In a first point of his study, the author shows a plot of the radon progeny concentration in surface air (implied by the measured gamma-activity) and precipitation, and he concludes that periods of precipitation lower the radon progeny concentration in surface air:
Now I have some problems with this plot (I added the colored lines and the text box): the time resolution seems very coarse, about 8 hours as shown by the blue lines. So this figure says nothing about the immediate changes in ambient radioactivity in surface air after a rainfall, but possibly more about the coarse general evolution. And even here I have trouble understanding it: the first two events (and possibly the last) clearly show a drop in radioactivity prior to the rainfall. So this point seriously damps my first enthusiasm for the Fujinami paper.
Let me show you a similar plot made today (05 Aug 2016) at Diekirch and covering the last 7 days:
First one should note the visible daily rhythm of the gamma activity, with higher radioactivity in the morning hours, when the usual inversion blocks the mixing of ground air with higher levels of the atmosphere; the same situation happens, for instance, with our CO2 data. The blue arrows point to the main rain events: the first two arrows show that the radiation peak follows the rain pulse; the next three rain events do not have any visible influence, as the ambient radioactivity has come down to the normal background, and the clouds and the air above ground seem depleted of additional radon progeny to be scavenged. So in my opinion, the lower levels following a rain event are more a return to normal than an effect caused by precipitation.
All other hints to the Fujinami conclusion that the phenomenon observed is a rain-out happening in the clouds and not a wash-out during the free fall of the rain drops to the air below the clouds rely on the same Japanese data series from Maizuru (1977-1985). I am absolutely not convinced by the author’s argumentation, and would appreciate stronger logic and more fine-grained data before accepting his conclusion.
4. Do the Diekirch data show the typical time evolution of Pb214 and Bi214 shown in the Livesay paper?
As the half-lives of the radon progeny Pb214 and Bi214 are 27 and 20 minutes, inspecting the decreasing radioactivity after a very short rain pulse should show a return to normal levels after 2 to 3 hours, as given in the Livesay paper. A problem with our Diekirch data is the rather coarse time step of 30 minutes: the measurements stored in the datalogger at times xx:00 and xx:30 are the average over the preceding 30 minutes (radioactivity) or the total precipitation during that interval. Our Davis Vantage Pro Plus backup weather station also registers the “rain rate”, which is the maximum precipitation that would have been collected during the measuring interval (30 minutes) following the last bucket tip. So this number gives a bit of information on the brusqueness of the rain pulse, compared to the integral over 30 minutes, but it does not help much more. I will use the rain event of 23 July, following the storm of the previous day, because this rain pulse is not followed by a second one that would muddle the picture. Here are the numbers:
There clearly is a time lag of about 2 hours between the start of the first rain event and the excess radiation peak; if we set the origin of time at the moment where the activity peaks, we obtain, with a rather good R2 = 0.97, a half-life of about 43 minutes applying a simple decay model. Note that the return-to-normal interval is about 3 hours.
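Such a half-life can be extracted with a simple log-linear least-squares fit. The count values below are synthetic illustration data with a built-in 43-minute half-life, not our logger readings:

```python
import math

# Log-linear least-squares fit of an exponential decay:
# counts ~ N0 * exp(-lam * t), so log(counts) is linear in t.
def fitted_half_life(times, counts):
    """Return the half-life ln(2)/lam from a least-squares fit."""
    ys = [math.log(c) for c in counts]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    slope = sum((t - mx) * (y - my) for t, y in zip(times, ys)) / \
            sum((t - mx) ** 2 for t in times)
    return math.log(2) / -slope

# Synthetic decay with a 43-minute half-life, sampled every 30 minutes:
ts = [0, 30, 60, 90, 120, 150]
ns = [100.0 * 0.5 ** (t / 43.0) for t in ts]
print(round(fitted_half_life(ts, ns), 1))   # -> 43.0
```

With real, noisy 30-minute counts the fit also yields an R2, which is how the 0.97 quoted above is obtained.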
Let’s verify the corresponding evolution given in the Livesay paper at figure 4:
I put the time origin at the second count peak and wrote down the corresponding counts obtained by inspection. The decay model gives this result for the decrease of the excess counts:
Here the half-life is approx. 19 minutes, about half of what we found at Diekirch. These 19 minutes are close to the 20-minute half-life of Bi214, the second of the two relevant radon daughters. Now one should remember that we observe the combined effect of two radioactive decays, where the first element, Pb214, creates the second, Bi214. Our simplistic curve-fitting exercise has no solid physical basis and should be taken as a visualization tool.
What remains is that in our data the general evolution of the excess gamma activity caused by a rain event follows a similar, albeit slower, evolution to that shown in the Livesay paper. The return to normal background levels takes about 3 hours, in accordance with that paper.
About 15 km away from Diekirch, the small village of Larochette and some neighbouring villages suffered on 22 July 2016 a disastrous downpour of more than 50 mm in less than one hour: the result was very disruptive flooding which caused much destruction: bridges crumbled, roads were torn open and many houses had their lower levels flooded. The phenomenon was very localized, and in Diekirch the very short-lived rain pulse was not more than about 12.9 mm (in 30 minutes).
As seen in a couple of former blog posts (see here, here and here), these rain pulses cause very visible radioactive peaks measured by our Geiger counter. We know that these peaks are essentially caused by a radon washout (the peaks are the fingerprint of the gamma-emitting radon daughters).
We all like nice and clean cause-effect relations, preferably linear ones. When we look at what happened during the week from 21 to 26 July, we see that things are a bit more complicated:
The upper plot shows in blue the intensities of the precipitation peaks (in mm per 30 minutes, also given by the corresponding labels) and in red the cumulative precipitation. The lower plot gives the dose rates in nSv/h, with the yellow boxes showing the approximate numbers for clarity. If we assume a usual background of about 83 nSv/h, you should subtract 83 from these numbers to get the peaks caused by the washout.
In the preceding blogs I suggested that after a first washout the atmosphere requires a minimum time to “recover”: a rain pulse, even a much more intense one, arriving before that minimum (3 days were suggested) yields a lower radioactive response. What happened on 22 July 2016 shows both the same and an opposite behavior: the storm rain pulse of 12.9 mm follows less than 24 hours after a normal heavy downpour of 4.2 mm. This first precipitation event produces a radiation peak of 16 nSv/h; the storm event, with a 4 times higher intensity, produces practically the same radiation rise. But even more intriguing is the next rain event some 5 hours later: a meagre 1.8 mm of rain produces a radiation peak of 13 nSv/h, not much lower than the preceding one.
So we have in this week two conflicting situations:
a. the first confirms the hypothesis of an atmospheric recovery time: you cannot wash out what isn’t there yet; the very heavy storm event happens too early to produce a bigger radiation peak.
b. the small precipitation event following the 19:00 UTC “monster” causes an important radiation peak, in violation of both the effect-proportional-to-cause and the time-lag hypotheses. I have no explanation for this at the moment. A first enquiry was to look for a precipitation measurement error; I checked this with our Davis VantagePro II backup station, mounted at a distance of about 4 m. The next table shows the details of the measurements:
The last column corresponds to the increase of the dose-rate, taking 83 nSv/h as the reference.
Clearly, there is a delay of about 1 hour between the precipitation and the radiation peaks. What remains puzzling is the strong radiation pulse after a very modest rain peak following the “big one”.
Let us finish this short analysis by plotting the radiation increase versus the rain pulse which is its cause. I will add all the data of this week to the values measured in 2013 and 2014. First the table with these data:
We see that the big storm of 22-Jul 19:00 is an outlier, and a linear model might not be the best. The Pearson correlation between rain pulse and radiation peak is 0.23, statistically not significant at the 95% confidence level. Omitting the outlier improves the correlation to 0.32, but it remains non-significant. Now this discussion on significance is somewhat moot, as the observations clearly show that radon washout does exist. The situation in cases 11 & 12 (the last two lines of the preceding table) is not what puzzles me: a first small rain pulse may cause a distinct radiation peak, and to get the same peak a second rain pulse following shortly after must be much higher.
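The significance check can be done by hand with the usual t-statistic for a Pearson correlation; the sample size n = 13 below is an assumption for illustration, since the exact count sits in the table:

```python
import math

# Pearson correlation coefficient and its t-statistic (pure-Python sketch).
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t = r*sqrt(n-2)/sqrt(1-r^2); significant if |t| exceeds ~2.2 for n=13."""
    return r * math.sqrt(n - 2) / math.sqrt(1.0 - r * r)

# r = 0.23 with an assumed n = 13 rain events:
print(round(t_statistic(0.23, 13), 2))   # -> 0.78, well below ~2.20
```

A t of roughly 0.8 against a two-sided 5% critical value of about 2.2 confirms the non-significance stated above.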
But the situation in cases 8 & 9 remains disturbing to me: why does the small rain pulse following the big one create a similarly intense radiation peak?
Thank you for an answer, if you have a clue what might happen here….
01-Aug-2016: added as supplementary information to Marcel’s comment.
The link to the full paper of Livesay “Rain induced background radiation….” is here.
In the first part of this blog I recalled some fundamentals of Milankovic’s climate-relevant cycles: precession, obliquity and eccentricity. In this second part I try to summarize, as simply as possible, the main points of the new paper by Ellis & Palmer.
1. Five major insights
The following 5 points are known and accepted by most scientists:
a. Each major deglaciation coincides with a maximum of NH (Northern Hemisphere) solar insolation.
b. Not all insolation maxima (“Great Summers”) trigger deglaciations.
c. Eccentricity governs the strength of the Great Summer.
d. During an ice age, atmospheric CO2 levels plunge (colder oceans absorb more CO2), while ice-sheet extent and albedo (the Earth’s reflectivity) increase.
e. When CO2 levels are at a minimum and albedo is at a maximum, a rapid warming will begin and start an interglacial period. Conversely, when CO2 levels are at a maximum and albedo is at a minimum (during an interglacial), a new cooling (= ice age) will begin.
2. The Ellis & Palmer paper
The new theory postulated by Ellis & Palmer can be summarized as follows: a minimum of CO2 (e.g. 150-200 ppm) starves plant life and creates a die-back of forests and savannas, which increases soil erosion and produces more dust storms. The dust deposited on the ice sheets diminishes their albedo, increasing the absorption of solar energy. This increase of about 180 W/m2 at higher NH latitudes starts a global warming, i.e. an interglacial.
So ice ages are forced by orbital cycles and changes in NH insolation, but regulated by ice-albedo and dust-albedo feedbacks. The precession cycle is the main forcing agent through the induced albedo changes. The primary forcing and feedback for interglacial modulation is albedo.
As a consequence, CO2 (through its greenhouse-gas properties) cannot be the primary feedback, because high CO2 levels during or at the end of an interglacial result in cooling, and low CO2 levels during a glaciation maximum precede the warming.
The grey bands in this figure correspond to maximal dust deposits (>0.35 ppm): the Antarctic temperatures (from the Epica 3 bore-hole) start rising after most of these dust peaks.
3. Main conclusions of the paper.
Regarding IPCC’s AR5 published in 2013 the authors write: “The IPCC has identified dust as a net weak cooling mechanism, when it is probably a very strong warming agent.”
And they conclude with these words: “The world’s dust-ice Achilles heel needs to be primed and ready to fire before an interglacial can be fully successful…in which case, interglacial warming is eccentricity and polar ice regrowth regulated, Great Summer forced and dust-ice albedo amplified. And the greenhouse attributes of CO2 play little or no part in this feedback system.“
You should definitely read the full paper!
Ralph Ellis and Michael Palmer have an extremely interesting paper in the Elsevier publication “Geoscience Frontiers” titled “Modulation of ice ages via precession and dust-albedo feedbacks” (link to the open-access version, May 2016). This long paper (19 pages!) is very readable, but nevertheless needs more than one reading to fully grasp the important details. So I will try in this blog to summarize the most important findings of that outstanding paper.
- The Milankovic cycles
The climate of the Earth is a system response to the insolation from the sun. As this insolation (or irradiance) is not constant, it is not a big surprise that Earth’s climate is not constant either, and never was. There are short variations, like the seasons, El Niños, and the 11/22-year and 60-year changes from solar and oceanic oscillations, etc. The much longer periodic changes like the ice ages have been known since Milutin Milankovic’s seminal papers to be caused by variations of (at least) 3 astronomical parameters related to the Earth revolving in our solar system, variations that cause important changes in the insolation of planet Earth.
The most important parameter is the precession of the Earth’s axis: this gyroscopic effect (first studied by the great mathematician Euler) means that the axis (which is inclined with respect to the ecliptic plane, i.e. the plane of the Earth’s orbit around the sun) makes a slow rotation around the perpendicular to the ecliptic plane. The axis oscillates between two extreme positions, where it points to Polaris (the Northern Star) or to Vega. When the axis is tilted towards Polaris (which is close to the present situation), the Northern Hemisphere (NH) winters correspond to the position where the globe is closest to the sun, and the NH summers to where it is farthest. This precessional cycle (including a complication caused by the rotation of the elliptical orbit, the apsidal precession) has a length of about 22200 years, a period often called a Seasonal Great Year (SGY). A Great Season takes 1/4 of this period, about 5700 years; one speaks of a Great Summer, a Great Winter and so on. This precession of the axis has by far the biggest influence on solar insolation (details follow).
A second important astronomical parameter is the obliquity or axial tilt. The angle between the axis and the perpendicular to the ecliptic plane varies between 21.5° and 24.5°; the current value is 23.5°. This angle essentially determines the severity of the seasons. At present the NH winters are moderate, as the solar rays are closer to the perpendicular of the globe’s surface, and the summers are moderate too, as the solar rays are more inclined, which diminishes their heating potential. The length of one obliquity cycle is 41000 years. Precession and obliquity together cause a complicated wobbling movement of the Earth’s axis.
Finally, the last important factor is the eccentricity of the Earth’s orbit. The orbit is an ellipse, close but not quite equal to a circle. The eccentricity describes the deviation from a perfect circle, and in the case of our planet this parameter is not constant but varies slowly with time under the influence of the other planets. The cycle length is approx. 100000 years. The changes in eccentricity are small, between 0.0034 and 0.058 (the current value is 0.0167, which means that the orbit is at present nearly circular). The main influence of the changing eccentricity is a (small) time shift of the seasons during the year.
For climate-related questions, the most important parameter is the change in solar irradiance (or insolation) at high latitudes of the NH. Usually one looks at the changes observed (or calculated) at northern latitude 65° (NH 65). Here are the extreme changes caused by the variations of the three astronomical parameters described above:
Precession: 110 W/m2
Obliquity: 25 W/m2
Eccentricity: 0.4 W/m2
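To see why the precession term dominates: top-of-atmosphere irradiance scales with the inverse square of the sun-earth distance, so the perihelion/aphelion contrast is ((1+e)/(1-e))^2; precession decides which season falls at perihelion, while eccentricity sets the size of that contrast. A quick check with the eccentricity values mentioned above (the solar constant of 1361 W/m2 is the standard value, not a figure from the paper):

```python
# Top-of-atmosphere irradiance scales as 1/r^2, so the perihelion/aphelion
# contrast is ((1+e)/(1-e))^2 for orbital eccentricity e.
S0 = 1361.0   # solar constant, W/m^2 (standard value)

def perihelion_aphelion_ratio(e):
    """Irradiance ratio between closest and farthest orbital points."""
    return ((1.0 + e) / (1.0 - e)) ** 2

for e in (0.0167, 0.058):   # current value and the high end of the cycle
    extra = (perihelion_aphelion_ratio(e) - 1.0) * S0
    print(round(extra))     # extra W/m^2 received at perihelion vs aphelion
```

Even today's near-circular orbit gives a perihelion-aphelion contrast of roughly 90 W/m2 in total irradiance, which precession sweeps through the seasons over one Great Year; at high eccentricity the contrast is several times larger.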
These changes can be lumped together in the so called Milankovic Cycle (figure from the Ellis/Palmer paper, the time axis is KY (kilo-years) before present):
The upper plot shows the changes in solar irradiance, the lower one the temperature deviations of the Antarctic from a mean value. I added a zero line to make clearer where the interglacial periods occur (the peaks above the zero line) and where the ice ages are (the periods in between; note that the ice ages are the “normal” state of the Earth’s climate, and that the interglacials are, geologically speaking, exceptions to that state). The orange/red bands represent the (Seasonal) Great Summers. Clearly, not all Great Summers cause an interglacial warming, as there are about 4 to 5 Great Summers from one interglacial to the next. The Ellis/Palmer paper tries to explain this with a novel theory; I will discuss it in part 2 of this blog.
In the first part of my comment I showed that, concerning mean annual NO2 concentration, Luxembourg is among the best of the EU28 countries, whereas Germany is the worst.
This second part will be about emissions from petrol (gasoline) and diesel engines, and the efficiency of the various Euro norms. I rely for a good part on an excellent report by King’s College London, the University of Leeds and AEA Technology, published in 2011 for DEFRA (Department for Environment, Food and Rural Affairs).
1. Emissions of NOx from petrol and diesel engines.
Diesel engines are the workhorses in heavy machinery, as they have an efficiency of about 35% compared to 25% for conventional petrol engines; this higher efficiency was one of the main reasons to introduce Diesel engines into ordinary vehicles, as the fuel consumption for a given power output is lower (and Diesel fuel is less taxed in many countries). In Diesel engines, the vaporized injected fuel burns lean, with an excess of oxygen and at high pressure, and some spots in the cylinder can reach temperatures over 1500 °C. The excess of oxygen favors the formation of NOx. Conventional petrol engines burn a stoichiometric air/fuel mixture (which is created in the carburetor outside the combustion chamber), without any oxygen excess; the result is a combustion with lower levels of NOx, which can easily be removed by a 3-way catalyst. Gasoline direct injection (DI) engines have better fuel efficiency and torque at low rpm, but suffer from higher NOx emissions, similar to Diesel engines. Gasoline DI engines are becoming more and more fashionable, and some of their NOx problems are solved by special catalytic converters and EGR (exhaust-gas recirculation); you may read this report from DELPHI which shows testing of an engine releasing not more than 0.2 g NOx/kWh (the US federal limit for heavy trucks is 0.26 g NOx/kWh). Nevertheless one should bear in mind that the switch from conventional to DI gasoline engines will increase the NOx problems of gasoline-driven vehicles.
2. Main findings of the report “Trends in NOx and NO2 emissions in the UK and ambient measurements”
This report is interesting because it relies heavily on remote sensing detectors (RSD) to measure NOx/NO2 levels under urban traffic conditions (mostly low speeds of about 36 km/h). The report finds big differences between the published emission factors and the measurements for light vehicles, and that certain catalytic techniques used in heavy goods vehicles (trucks) are inefficient under urban conditions.
The following figure shows how NOx emissions (here expressed as the ratio NOx/CO2 × 1000) changed over the years for 4 types of vehicles: passenger cars, HGVs (heavy goods vehicles = trucks), LGVs (light goods vehicles = small transporters) and buses:
The CAR panel clearly shows a rather dramatic decrease of NOx emissions for gasoline cars (blue curve), but a more or less steady state since 2000 for diesel cars (red curve); the same situation occurs for the LGVs. For buses the situation is even worse, as emissions have tended to increase since ~1998! So no wonder that roadside NOx levels in many cities are high, even where individual traffic is limited and public buses are promoted as the main transportation mode.
The different Euroclass norms set the upper limits of allowed NOx emissions (in g/km); here the numbers for Diesel engines:
E2 = 0.7, E3 = 0.5, E4 = 0.25, E5 = 0.18 and the latest E6 (not on this figure) = 0.08 g/km.
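These limits are easy to put side by side with a measured on-road value; the 0.6 g/km figure below is hypothetical, chosen only to illustrate the order of magnitude of the RSD findings discussed in this post:

```python
# Euro diesel NOx limits from the text (g/km), plus a simple compliance check.
EURO_DIESEL_NOX_LIMIT = {
    "E2": 0.70, "E3": 0.50, "E4": 0.25, "E5": 0.18, "E6": 0.08,
}

def complies(norm, measured_g_per_km):
    """True if a measured on-road emission meets the norm's limit."""
    return measured_g_per_km <= EURO_DIESEL_NOX_LIMIT[norm]

# A hypothetical E5 diesel measured at 0.6 g/km on the road would miss
# its own norm by a factor of about 3:
print(complies("E5", 0.6), round(0.6 / EURO_DIESEL_NOX_LIMIT["E5"], 1))
```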
If we look at the test measurements in the report for Diesel and gasoline cars, the results are mind-boggling:
These 2 figures are box-and-whisker plots: the black line corresponds to the median of the sample (50% of the sample lies below, 50% above), the blue rectangular boxes to the 25-75% percentiles (i.e. 50% of the samples lie inside the box), and the full extent of the whiskers (the black lines) covers 99% of the sample.
For petrol cars, the efficiency of the increasingly stringent norms is clearly visible, even if there seems to be a standstill from E3 on. Diesel engines do not show this: on the contrary, the latest E norms actually bring a worsening! This is a clear sign that the over-optimistic E norms are nearly impossible to fulfill for Diesel cars that must be both fuel-efficient and powerful. No wonder that many manufacturers of Diesel cars (like Volkswagen) installed clever software to fool the compliance procedures.
Nobody should be astonished that real measurements give results other than the official numbers based on laboratory measurements and/or computer programs. The problem with measurements under real driving conditions is that these conditions are impossible to standardize (the state of the road, the weather etc. are changing parameters), whereas measurements in the lab can be made under well-defined conditions. The next figure shows the difference between the higher roadside measurements (RSD) and the official factors:
This figure once more tells the sad story that for Diesel cars the different Euro norms did not have a big effect!
3. The roadside or country-wide NOx levels
The next figure gives the European ambient NO2 concentrations according to different environments:
The vertical line at 40 ug/m3 corresponds to the European limit for annual average concentrations; of the 5 different environments, the roadside remains the only problematic location. Even urban or sub-urban backgrounds lie well below the 40 ug/m3 limit!
Let us look how this roadside situation changed during the years for different countries:
Except for Greece and Italy, all countries show a more or less horizontal trend over the full period 1995 to 2009: this means that the successive Euro norms did not have a big effect at roadside locations. One reason, as shown above, is that successively more modern Diesel engines were unable to drastically lower their NOx emissions; a second reason may well be the massive increase in Diesel cars, an increase pushed by political decisions to lower fuel consumption (and the supposedly climate-hurting CO2 output), which made Diesel fuel less expensive than gasoline.
NO2 (or NOx) mitigation is a wicked problem, and only naive persons believe that impossibly stringent norms will miraculously achieve results that are nearly impossible to obtain for physical and/or engineering reasons. Maybe we would be better off if the scare-mongering about the dangers of NO2 ceased, and solutions for lowering NO2 emissions were allowed more time for research, experimentation and development. Pierre Lutgen (who holds a PhD in chemistry) does not believe many of the shrill warnings about the dangers of very low NO2 levels (read his French article on nitrites). But NO2 as a gas is an irritant for the lung linings, and may form very small particles when reacting with other substances; it also has a detrimental effect on plant life (some put the allowed limit as low as 30 ug/m3). So high levels of NO2 clearly should be avoided. But as so often in environmental policies, setting limits at impossibly low values will not hasten compliance, but rather favor clever schemes to circumvent these limits.
1. The recent EEA report on national emission compliance.
A recent report from the European Environment Agency (EEA) made some splash in the media, as it showed that many countries (among them Luxembourg) are missing their NOx emission limits. Here the relevant table:
The red crosses represent exceedance, the ticks conformance. What exceedance means is not very clear: probably it represents an overshoot, somewhere in the country, of the 8-hour limit of NOx concentration; the place where this happens will almost certainly be a heavy-traffic urban road, not a general mean annual concentration above the 40 ug/m3 limit (for NO2).
If we take the most important industrial countries (Belgium, France, Germany, Italy and the UK), all except Italy and the UK miss the targets. It is an irony that "über-grün" Germany overshoots on all the relevant pollutants such as NOx and NMVOC (non-methane volatile organics, like terpenes); that Italy and the UK are in conformance might be real (I have some doubts, thinking of Rome or Naples traffic conditions), or simply a sign of particularly clever reporting. Emissions of NH3 (ammonia) are clearly related to agriculture (especially cattle and swine raising), which explains why Denmark and the Netherlands are the big "sinners" here.
Local NOx/NO2 exceedances may give a wrong picture, so let us look at the yearly mean concentrations, as given by several EEA publications and databases.
2. The yearly average NO2 concentration in Luxembourg and other EU states.
The following picture shows the average annual concentration in the 6 validated Luxembourg measurement stations in 2011:
Vianden, Beidweiler and Beckerich are rural stations, Esch-Alzette and Luxembourg urban ones. It is only at the two Luxembourg(-City) stations that the EU target of 40 ug/m3 is exceeded. Not surprisingly, as these measurement stations lie at roads with very heavy traffic; the Esch-Alzette station is on a small hill (Galgenberg) with plenty of green vegetation around.
The following two pictures show the daily mean NO2 concentrations of Vianden and Luxembourg-Bonnevoie for 2015 (note the different vertical scales!) (link):
Rural Vianden concentrations are very low, and even during the heating months they do not exceed 30 ug/m3; the situation in urban Luxembourg-Bonnevoie is quite different. The (relative) difference between the heating months and the summer months is much lower, and days exceeding the 40 ug/m3 limit are quite frequent. The lower summer concentrations at both sites are in my opinion mostly caused by increased atmospheric mixing due to convective air transport.
Let's close this chapter with a picture showing the situation in 2012 for all EU member states:
The dots represent the median, the boxes delimit the 25 to 75 percentiles, and the whiskers (the thin vertical lines) show the region containing 99% of the values.
Clearly Luxembourg fares very well: if we take the median concentration, 19 countries surpass Luxembourg, which thus has the 9th "best" attainment of the 28 EU countries. If we look at the upper whisker end, only 3 or 4 countries have lower or similar upper bounds.
Conclusion: Luxembourg is NOT the bad guy!
3. Hourly NO2 concentrations
I will close this first part with a look at the hourly NO2 concentrations during the last 7 days; we will compare the measurements made at Vianden, Luxembourg-Bonnevoie and Diekirch (meteoLCD):
At Vianden we see a daily maximum which mostly does not exceed 3 times the daily minimum (except on the last day); the urban Luxembourg-Bonnevoie data show two daily spikes, one in the morning and one in the afternoon: clearly a sign of increased traffic during the rush hours when commuters enter or leave the town. The range extends from 10 to 80 ug/m3, a factor of 8.
The NO2 sensor in Diekirch has a positive bias of about 10 ug/m3, so take the left blue scale for reading. Here we have a very pronounced peak in the morning (commuter traffic and normally a time of morning inversion); the afternoon peak is muted or absent. The range extends from 10 to 100 ug/m3, a factor of 10, similar to the Luxbg-Bonnevoie situation. The red curve shows the NO readings, which are always lower than the NO2; the NOx concentration corresponds to the sum of the blue and red curves.
Comparing the last two curves, we observe a flattening during the last two days (11 and 12 June): you will guess that these are the weekend days with no commuter rush hour!
In the 2nd part of this blog (coming asap), I will analyze emissions by different types of cars, using data from a truly excellent DEFRA report from 2011.
Prof. M.J. Kelly from Cambridge University (Electrical Engineering Division, Department of Engineering) just published a very interesting paper "Lessons from technology development for energy and sustainability", in which he is very critical of the currently fashionable decarbonization politics. He strongly warns that trying to massively deploy as-yet-unfit technologies can be counter-productive.
Here in this comment I just want to stress two problems related to energy production, which he mentions in his paper. The first is the EROI (Energy Return on Investment) which we will read as Energy Return on Energy Investment (EROEI), the second the energy density and surface needs of various power technologies.
1. The EROI
This is a very easy to understand parameter which answers the following question: how much energy will a given technology produce during its lifetime, compared to the energy needed to build it and keep it working during this period? This question is practically always fudged by green energy advocates, who say for instance that a wind turbine will pay back its energy budget during the first year (link), ignoring all the associated problems of backup power, grid investments etc. Prof. Kelly does not agree, and gives the following graph:
The left scale represents the ratio of (energy produced)/(energy invested); the blue histogram gives this without any regard to energy storage, the yellow columns the result if one includes all the energy needed to implement the large-scale storage technologies (such as pumped hydro, batteries…) needed by intermittent producers like wind and solar. He says that the economical threshold is about 8; of the 4 renewable producers, only thermal solar plants in desert regions barely exceed this minimum, whereas nuclear power reigns supreme with a factor of 75.
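The back-of-the-envelope arithmetic is simple enough to sketch. The plant figures below are illustrative assumptions, not Prof. Kelly's data; only the economic threshold of 8 comes from his graph:

```python
# EROEI: lifetime energy delivered divided by the energy needed to build and
# operate the plant, optionally including the storage overhead that an
# intermittent source needs. All plant numbers below are hypothetical.

def eroei(lifetime_output_gwh, embodied_energy_gwh, storage_energy_gwh=0.0):
    """Energy returned per unit of energy invested."""
    return lifetime_output_gwh / (embodied_energy_gwh + storage_energy_gwh)

# Hypothetical wind farm: 2500 GWh delivered over its life, 250 GWh embodied.
no_storage = eroei(2500, 250)          # -> 10.0, above the economic threshold of 8
with_storage = eroei(2500, 250, 400)   # storage overhead drags it below 8

print(no_storage, with_storage)
```

This makes the point of the yellow columns visible: the same plant can sit above or below the viability threshold depending on whether the storage energy budget is counted.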
A serious problem with such analyses is the life cycle assessment (LCA), often difficult to carry out in a scientific, non-partisan manner. Kelly cites a book by Prieto and Hall (Springer, 2013) which studied the EROI of the Spanish solar "revolution", for which clear and unambiguous data are available: these authors give an EROEI of 2.45 for the Spanish solar program.
2. Energy density and land usage
A second problem with wind and solar is that they are extremely low-density power sources. The following table shows the numbers in MJ/kg:
I do not quite agree concerning modern, non-lead batteries: the energy densities are much higher, but still minuscule compared to nuclear:
This graph (from http://www.epectec.com) shows that the most recent batteries may come close to 0.76 MJ/kg, similar to hydro dams. Energy density is an important factor when land usage matters, as it does for most populated regions of the world and especially for the mega-cities of the future, which are expected to hold 50% of the world population in 2050.
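A quick unit check helps here, since battery datasheets usually quote Wh/kg while the table above uses MJ/kg (the 210 Wh/kg figure below is a typical modern Li-ion value, assumed for illustration):

```python
# 1 Wh = 3600 J, so converting a datasheet energy density to the table's units:
def wh_per_kg_to_mj_per_kg(wh_per_kg):
    return wh_per_kg * 3600 / 1e6

# A modern Li-ion cell at ~210 Wh/kg:
print(round(wh_per_kg_to_mj_per_kg(210), 2))  # -> 0.76 MJ/kg, as in the graph
```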
This Breakthrough paper gives the following numbers for land use in m2 per GWh delivered in one year:
and these are the numbers for material use:
I have added capacity factors that are close to those in Germany/Luxembourg (on-shore wind practically never reaches 30%) and for solar PV (here 10% is still optimistic); with these more realistic capacity factors, onshore wind would have a land use closer to 2200. What comes as a bit of a surprise (even if we accept the very optimistic original numbers) is that solar PV has about the same material footprint as nuclear (which instinctively we associate with enormous volumes of concrete and steel).
Let us take tiny Luxembourg's consumption as a rough indicator of what part of the ~2500 km2 area of the country would be needed if a single energy source had to produce all the energy consumed. According to this report, total energy consumption was about 50000 GWh in 2013. Here the area in km2, and in % of the total country area, if all this energy had to be produced by the given source:
Nuclear: 60 km2 = 2.4 % (assumes cooling water comes from new lakes)
Solar PV: 320 km2 = 12.8% (land use taken as 6400)
Wind on-shore: 83 km2 = 3.3% (land use taken as 1650)
Biomass: 23000 km2 = more than 9 times the total area of Luxembourg !
The wind and solar numbers are more or less meaningless unless full storage solutions existed (which will not be the case in the foreseeable future).
I do not accept the numbers for nuclear. The nearby Cattenom nuclear plant produces about 35000 GWh per year and occupies an area of at most 4 km2 (checked with Google Earth). Using this as a more realistic example, the total land use for the nuclear choice would be about 6 km2 or 0.24%, i.e. 10 times less.
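The arithmetic behind these areas is easily reproduced. The solar and wind intensities are the (adjusted) values quoted above; the first nuclear intensity is backed out of the 60 km2 figure, and the Cattenom entry uses the ~4 km2 / 35000 GWh estimate from the preceding paragraph:

```python
# area_km2 = consumption_GWh * intensity_m2_per_GWh / 1e6

CONSUMPTION_GWH = 50_000      # Luxembourg total consumption, 2013
COUNTRY_KM2 = 2_500           # approximate area of the country

intensity = {                 # m2 of land per GWh delivered in one year
    "solar PV": 6_400,
    "wind on-shore": 1_650,
    "nuclear (report figure)": 1_200,
    "nuclear (Cattenom-based)": 4e6 / 35_000,   # ~114 m2/GWh
}

for source, m2_per_gwh in intensity.items():
    area_km2 = CONSUMPTION_GWH * m2_per_gwh / 1e6
    share = 100 * area_km2 / COUNTRY_KM2
    print(f"{source:26s} {area_km2:7.1f} km2 = {share:5.2f} % of the country")
```

Running this reproduces the 320 km2 (12.8%) for solar PV, ~83 km2 (3.3%) for onshore wind, 60 km2 (2.4%) for the report's nuclear figure, and shows that the Cattenom-based estimate indeed lands near 6 km2 (~0.23%).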
Both EROI and land use show that the nuclear choice for energy is unbeatable as a “carbon-free” energy producer. This is also the conclusion of Prof. Kelly’s paper and that of the late Prof. McKay in his last interview.
There is an outstanding article in aeon on the use (and abuse) of mathematics and mathematical models in economics. It makes for fascinating reading, as many of the things said could apply directly to model-driven climatology. As a physicist, I love mathematics and find it invaluable in giving a precise meaning to what often are fuzzy statements. But this article includes some gems that make one reconsider any naive and exaggerated belief in mathematical models.
The economist Paul Romer is cited: “Mathematics, he acknowledges, can help economists to clarify their thinking and reasoning. But the ubiquity of mathematical theory in economics also has serious downsides: it creates a high barrier to entry for those who want to participate in the professional dialogue, and makes checking someone’s work excessively laborious. Worst of all, it imbues economic theory with unearned empirical authority.” Replace the word “economics” with “climatology” and you begin to understand.
You can find many quotes by the great physicist Freeman Dyson on climate issues, like this one: "…climate models projecting dire consequences in the coming centuries are unreliable" or "[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere" (link).
Ari Laor from the Technion (Haifa, Israel) writes in a comment at the American Scientist blog: “Megasimulations are extremely powerful for advancing scientific understanding, but should be used only at a level where clear predictions can be made. Incorporating finer details in a simulation with a large set of free parameters may be a waste of time, both for the researcher and for the readers of the resulting papers. Moreover, such simulations may create the wrong impression that some problems are essentially fully solved, when in fact they are not. The inevitable subgrid physics makes the use of free parameters unavoidable…”
The Bulletin of Atomic Scientists also has a very interesting article “The uncertainty in climate modeling“. Here some gems: “Model agreements (or spreads) are therefore not equivalent to probability statements…does this mean that the average of all the model projections into the future is in fact the best projection? And does the variability in the model projections truly measure the uncertainty? These are unanswerable questions.”
PS: The Bulletin has a series of 8 short contributions on this subject, and I suggest taking the time to read them all.