Welcome to the meteoLCD blog

September 28, 2008

Badge_Luxwort_2016

This blog was started on 28 September 2008 as a quick communication tool for exchanging information, comments, thoughts, admonitions, compliments etc. related to http://meteo.lcd.lu , the Global Warming Sceptic pages and environmental policy subjects.

Modulation of Ice Ages (part 2)

July 5, 2016

In the first part of this blog I recalled some fundamentals of Milankovic’s climate-relevant cycles: precession, obliquity and eccentricity. In this second part I try to summarize as simply as possible the main points of the new paper by Ellis & Palmer.

1. Five major insights

The following 5 points are known and accepted by most scientists:

a. Each major deglaciation coincides with maximum NH (Northern Hemisphere) solar insolation.

b. Not all insolation maxima (“Great Summers”) trigger deglaciations.

c. Eccentricity governs the strength of the Great Summer.

d. During an ice age atmospheric CO2 levels plunge (colder oceans absorb more), while ice sheet extent and albedo (the Earth’s reflectivity) increase.

e. When CO2 levels are at a minimum and albedo is at a maximum, a rapid warming will begin and start an interglacial period. Conversely, when CO2 levels are at a maximum and albedo is at a minimum (during an interglacial), a new cooling (= ice age) will begin.

2. The Ellis & Palmer paper

The new theory postulated by Ellis & Palmer can be summarized as follows: a CO2 minimum (e.g. 150-200 ppm) starves plant life and creates a die-back of forests and savannas, which increases soil erosion and produces more dust storms. The dust deposited on the ice sheets diminishes their albedo, increasing the absorption of solar energy. This increase of about 180 W/m2 at higher NH latitudes starts a global warming, i.e. an interglacial.

So ice ages are forced by orbital cycles and changes in NH insolation, but regulated by ice-albedo and dust-albedo feedbacks. The precession cycle is the main forcing agent through the induced albedo changes. The primary forcing and feedback for interglacial modulation is albedo.

As a consequence, CO2 (through its greenhouse gas properties) cannot be the primary feedback, because high CO2 levels during or at the end of an interglacial result in cooling, and low CO2 levels during a glaciation maximum precede the warming.

Antarctic_T_and_dust
The grey bands in this figure correspond to maximal dust deposits (>0.35 ppm): the Antarctic temperatures (from the Epica 3 bore-hole) start rising after most of these dust peaks.

3. Main conclusions of the paper.

Regarding the IPCC’s AR5 (published in 2013), the authors write: “The IPCC has identified dust as a net weak cooling mechanism, when it is probably a very strong warming agent.”

And they conclude with these words: “The world’s dust-ice Achilles heel needs to be primed and ready to fire before an intra-glacial can be fully successful…in which case, intra-glacial warming is eccentricity and polar ice regrowth regulated, Great Summer forced and dust-ice albedo amplified. And the greenhouse attributes of CO2 play little or no part in this feedback system.”

You should definitely read the full paper!

Modulation of Ice Ages (part 1)

July 4, 2016

Ralph Ellis and Michael Palmer have an extremely interesting paper in the Elsevier publication “Geoscience Frontiers” titled “Modulation of ice ages via precession and dust-albedo feedbacks” (link to open access version, May 2016). This long paper (19 pages!) is very readable, but nevertheless needs more than one reading to fully understand the important details. So in this blog I will try to summarize the most important findings of that outstanding paper.

  1. The Milankovic cycles

The climate of the Earth is a system response to the insolation from the Sun. As this insolation (or irradiance) is not constant, it is not a big surprise that Earth’s climate is not constant, and never was. There are short variations, like the seasons, El Niños, the 11/22-year and 60-year changes from solar and oceanic oscillations, etc. The much longer periodic changes like the ice ages have been known since Milutin Milankovic’s seminal papers to be caused by variations of (at least) 3 astronomical parameters related to the Earth revolving in our solar system, variations that cause important changes in the insolation of planet Earth.

The most important parameter is the precession of the Earth’s axis: this gyroscopic effect (first studied by the great mathematician Euler) means that the axis (which is inclined with respect to the ecliptic plane, i.e. the plane of the Earth’s orbit around the Sun) makes a slow rotation around the perpendicular to the ecliptic plane. The axis oscillates between two extreme positions, where it points to Polaris (the Northern Star) or to Vega. When the axis is tilted towards Polaris (which is close to the present situation), the Northern Hemisphere (NH) winters correspond to a position where the globe is closest to the Sun, and the NH summers to one where it is farthest. This precessional cycle (including a complication caused by the rotation of the elliptical orbit, the apsidal precession) has a length of about 22200 years, a period often called a Seasonal Great Year (SGY). A Great Season takes 1/4 of this period, about 5700 years; one speaks of a Great Summer, a Great Winter and so on. This precession of the axis has by far the biggest influence on solar insolation (details will follow).

A second important astronomical parameter is the obliquity or axial tilt. The angle between the axis and the perpendicular to the ecliptic plane varies between 21.5° and 24.5°; the current value is 23.5°. This angle essentially determines the severity of the seasons. At present the NH winters are moderate, as the solar rays are closer to the perpendicular of the globe’s surface, and the summers are moderate too, as the solar rays are more inclined, which diminishes their heating potential. The length of one obliquity cycle is 41000 years. Precession and obliquity together cause a complicated wobbling movement of the Earth’s axis.

Finally, the last important factor is the eccentricity of the Earth’s orbit. The orbit is an ellipse, close to but not quite a circle. The eccentricity describes the deviation from a perfect circle, and in the case of our planet this parameter is not constant but varies slowly with time under the influence of the other planets. The cycle length is approx. 100000 years. The changes in eccentricity are small, between about 0.0034 and 0.058 (the current value is 0.0167, which means that the present orbit is nearly circular). The main influence of the changing eccentricity is a (small) time shift of the seasons during the year.
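As a small numerical aside (my own illustration, not a calculation from the Ellis & Palmer paper): the direct effect of eccentricity is to set how much stronger the top-of-atmosphere irradiance is at perihelion than at aphelion, and that is exactly the difference which precession brings into or out of phase with the NH summer. The sketch below assumes a simple inverse-square law and a solar constant of ~1361 W/m2.

```python
# Minimal sketch: perihelion/aphelion spread of top-of-atmosphere irradiance
# for a given orbital eccentricity e. Assumes an inverse-square law around the
# mean Sun-Earth distance and a solar "constant" of ~1361 W/m2 (assumed values,
# not figures taken from the blog or the paper).

S0 = 1361.0  # W/m2 at the mean Sun-Earth distance (assumption)

def perihelion_aphelion_irradiance(e):
    """Return (S_perihelion, S_aphelion) in W/m2 for eccentricity e."""
    s_peri = S0 / (1.0 - e) ** 2   # Earth closest to the Sun
    s_aph = S0 / (1.0 + e) ** 2    # Earth farthest from the Sun
    return s_peri, s_aph

for e in (0.0167, 0.058):          # today's value vs. a high value of the cycle
    s_p, s_a = perihelion_aphelion_irradiance(e)
    print(f"e = {e:.4f}: perihelion {s_p:6.1f} W/m2, aphelion {s_a:6.1f} W/m2, "
          f"spread {s_p - s_a:5.1f} W/m2")
# spread: roughly 90 W/m2 today, more than 300 W/m2 at high eccentricity
```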

For climate-related questions, the most important quantity is the change in solar irradiance (or insolation) at high latitudes of the NH. Usually one looks at the changes observed (or calculated) at northern latitude 65° (NH 65). Here are the extreme changes caused by the variations of the three astronomical parameters described above:

Precession: 110 W/m2
Obliquity:      25 W/m2
Eccentricity: 0.4 W/m2

These changes can be lumped together in the so-called Milankovic Cycle (figure from the Ellis/Palmer paper; the time axis is in ky (kilo-years) before present):

Milankovitch_Cycle_with_zero_line

The upper plot shows the changes in solar irradiance, the lower one the temperature deviations from an Antarctic mean value. I added a zero line to make clearer where the interglacial periods happen (the peaks above the zero line) and where the ice ages are (the periods in between); note that the ice ages are the “normal” state of the Earth’s climate, and that the interglacials are, geologically speaking, exceptions to that state. The orange/red bands represent the (Seasonal) Great Summers. Clearly, not all Great Summers cause an interglacial warming, as there are about 4 to 5 Great Summers from one interglacial to the next. The Ellis/Palmer paper tries to explain this with a novel theory; I will discuss it in part 2 of this blog.

NOx emissions (part 2)

June 13, 2016

In the first part of my comment I showed that, concerning the mean annual NO2 concentration, Luxembourg is among the better performers of the EU28 countries, whereas Germany is the worst.

This second part will be about emissions from petrol (gasoline) and diesel engines, and the efficiency of the various Euro norms. I rely for a good part on an excellent report by King’s College London, the University of Leeds and AEA (AEA Technology), published in 2011 for DEFRA (the Department for Environment, Food and Rural Affairs).

1. Emissions of NOx from petrol and diesel engines.

Diesel engines are the workhorses of heavy machinery, as they have an efficiency of about 35% compared to 25% for conventional petrol engines; this higher efficiency was one of the main reasons for introducing Diesel engines in ordinary vehicles, as the fuel consumption for a given power output is lower (and Diesel fuel is less heavily taxed in many countries). In Diesel engines, the vaporized injected fuel burns lean, with an excess of oxygen and at high pressure, and some spots in the cylinder can reach temperatures over 1500 °C. The excess of oxygen favors the formation of NOx. Conventional petrol engines burn a well-balanced (near-stoichiometric) air/fuel mixture (created in the carburetor outside the combustion chamber), without any oxygen excess; the result is a combustion with lower levels of NOx, which can easily be removed by a 3-way catalyst. Gasoline direct injection (DI) engines have a better fuel efficiency and torque at low rpm, but suffer from higher NOx emissions, similar to Diesel engines. Gasoline DI engines are becoming more and more common, and some of their NOx problems are solved by special catalytic converters and EGR (exhaust gas re-circulation); you may read this report from DELPHI which shows testing of an engine releasing no more than 0.2 g NOx/kWh (the US Federal limit for heavy trucks is 0.26 g NOx/kWh). Nevertheless, one should bear in mind that the switch from conventional to DI gasoline engines will increase the NOx problems of gasoline-driven vehicles.

2. Main findings of the report “Trends in NOx and NO2 emissions in the UK and ambient measurements”

This report is interesting because it relies heavily on remote sensing detectors (RSD) to measure NOx/NO2 levels under urban traffic conditions (mostly low speeds around 36 km/h). The report finds big differences between published emission factors and the measurements for light vehicles, and that certain catalytic techniques used in heavy goods vehicles (trucks) are inefficient under urban conditions.

The following figure shows how NOx emissions (here expressed as the ratio NOx/CO2 × 1000) changed over the years for 4 types of vehicles: passenger cars, HGVs (heavy goods vehicles = trucks), LGVs (light goods vehicles = small transporters) and buses:

NOx_CO2_yearmanufacture_Defra

The CAR panel clearly shows a rather dramatic decrease of NOx emissions for gasoline cars (blue curve), but a more or less steady state since 2000 for diesel cars (red curve); the same situation occurs for the LGVs. For buses the situation is even worse, as emissions have tended to increase since ~1998! So no wonder that roadside NOx levels in many cities are high, even when individual traffic is limited and public buses become the mandatory main transportation mode.

The different Euro class norms set the upper limits of allowed NOx emissions (in g/km); here are the numbers for Diesel engines:

NOx_EUROlimits_approvals_Defra

E2 = 0.7,  E3 = 0.5,  E4 = 0.25, E5 = 0.18 and the latest E6 (not on this figure) =  0.08 g/km.
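As a trivial programmatic illustration (the limits are the ones just listed; the “measured” on-road value is invented, not a DEFRA figure), a compliance check is just a table lookup:

```python
# Euro diesel NOx type-approval limits quoted above, in g NOx per km.
EURO_DIESEL_NOX_LIMIT = {
    "Euro 2": 0.70,
    "Euro 3": 0.50,
    "Euro 4": 0.25,
    "Euro 5": 0.18,
    "Euro 6": 0.08,
}

def complies(euro_class: str, measured_g_per_km: float) -> bool:
    """True if a measured emission stays at or below the class limit."""
    return measured_g_per_km <= EURO_DIESEL_NOX_LIMIT[euro_class]

# hypothetical on-road value for a Euro 5 diesel car:
print(complies("Euro 5", 0.60))   # False -> far above the 0.18 g/km limit
```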

If we look at the test measurements in the report for Diesel and gasoline cars, the results are mind-boggling:

NOx_CO2_Diesel_cars_EUROnorm_Defra

NOx_CO2_petrol_cars_EUROnorm_Defra

These 2 figures are box-and-whisker graphs: the black line corresponds to the median of the sample (50% of the values are lower, 50% higher), the blue rectangular boxes show the 25-75% percentiles (i.e. 50% of the samples lie inside the box), and the full extent of the whiskers (the black lines) represents 99% of the sample.
For petrol cars, the efficiency of the increasingly stringent norms is clearly visible, even if there seems to be a standstill from E3 on. Diesel engines do not show this: on the contrary, the latest Euro norms even bring a worsening! This is a clear sign that the over-optimistic Euro norms are nearly impossible to fulfill for Diesel cars that must be both fuel-efficient and powerful. No wonder that many manufacturers of Diesel cars (like Volkswagen) installed clever software to fool the compliance procedures.
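To make the box-and-whisker convention concrete, here is a small matplotlib sketch built on purely synthetic numbers (they are not the DEFRA measurements); the whiskers are placed at the 0.5 and 99.5 percentiles so that they span roughly 99% of the sample, as in the report’s figures:

```python
# Synthetic box-and-whisker illustration; the NOx/CO2*1000 "ratios" below are
# random numbers, NOT data from the DEFRA report.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
samples = {
    "Euro 3": rng.lognormal(mean=2.3, sigma=0.4, size=500),
    "Euro 4": rng.lognormal(mean=2.3, sigma=0.4, size=500),
    "Euro 5": rng.lognormal(mean=2.4, sigma=0.4, size=500),
}

fig, ax = plt.subplots()
ax.boxplot(list(samples.values()), whis=(0.5, 99.5))   # whiskers span ~99% of the sample
ax.set_xticklabels(list(samples.keys()))
ax.set_ylabel("NOx/CO2 * 1000 (synthetic)")
ax.set_title("Box = 25-75%, line = median, whiskers ~99% of sample")
plt.show()
```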

Nobody should be astounded that real measurements give different results from the official numbers based on laboratory measurements and/or computer programs. The problem with measurements under real driving conditions is that these conditions are impossible to standardize (the state of the road, the weather etc. are changing parameters), whereas measurements in the lab can be made under well-defined conditions. The next figure shows the difference between the higher road-side measurements (RSD) and the official factors:

RSD_factors_petrol_diesel_EURO1to4_Defra

This figure once more tells the sad story that for Diesel cars the different Euro norms did not have a big effect!

 

3. The roadside or country-wide NOx levels

The next figure gives the  European ambient NO2 concentrations according to different environments:

NO2_concentrations_EU_2008_Defra

The vertical line at 40 ug/m3 corresponds to the European limit for annual average concentrations; of the 5 different environments, the roadside remains the only problematic location. Even urban or sub-urban backgrounds lie well below the 40 ug/m3 limit!

Let us look how this roadside situation changed during the years for different countries:

NO2_concentrations_EUcountries_1995to2009_Defra

Except for Greece and Italy, all countries show a more or less horizontal trend for the full period 1995 to 2009: this means that the different Euro norms did not have a big effect at roadside locations. One reason, as shown above, is that successive, more modern Diesel engines were unable to drastically lower their NOx emissions; a second reason may well be the massive increase in Diesel cars, an increase pushed by political decisions to lower fuel consumption (and the supposedly climate-hurting CO2 output), which made Diesel fuel less expensive than gasoline.

 

4. Conclusion

NO2 (or NOx) mitigation is a wicked problem, and only naive persons believe that impossibly stringent norms will miraculously achieve results that are nearly impossible to obtain for physical and/or engineering reasons. Maybe we would be better off if the scare-mongering about the dangers of NO2 ceased, and solutions for lowering NO2 emissions were allowed more time for research, experimentation and development. Pierre Lutgen (who holds a PhD in chemistry) does not believe in many of the shrill dangers attributed to very low NO2 levels (read his French article on nitrites). But NO2 as a gas is an irritant for the lung linings, and may form very small particles when reacting with other substances; it also has a detrimental effect on plant life (some put the allowed limit as low as 30 ug/m3). So high levels of NO2 clearly should be avoided. But as so often in environmental policies, setting limits at impossibly low values will not hasten compliance, but will favor clever procedures to circumvent these limits.

NO2 emissions… is Luxembourg the bad guy? (part1)

June 12, 2016
  1. The recent EEA report on national emission compliance. 

A recent report from the European Environment Agency (EEA) made some splash in the media, as it showed that many countries (among them Luxembourg) are missing their NOx emission limits. Here is the relevant table:

national_emission_exceedences_2010to2014

The red crosses represent exceedance, the ticks conformance. What exceedance means is not made very clear: probably it represents an overshoot, somewhere in the country, of the 8-hour limit of NOx concentration; the place where this happens will almost certainly be a heavily trafficked urban road, and not a general mean annual concentration above the 40 ug/m3 limit (for NO2).

If we take the most important industrial countries (Belgium, France, Germany, Italy and the UK), all except Italy and the UK fail to conform to the targets. It is an irony that “über-grün” Germany overshoots for all relevant pollutants such as NOx and NMVOC (non-methane volatile organic compounds, like terpenes); that Italy and the UK are in conformance might be real (I have some doubts, thinking of the traffic conditions in Rome or Naples), or simply a sign of particularly clever reporting. Emissions of NH3 (ammonia) are clearly related to agriculture (especially cattle and swine raising), which explains why Denmark and the Netherlands are the big “sinners” here.

Local NOx/NO2 exceedances may give a wrong picture, so let us look at the yearly mean concentrations, as given in several EEA publications and databases.

 

2. The yearly average NO2 concentration in Luxembourg and other EU states.

The following picture shows the average annual concentration at the 6 validated Luxembourg measurement stations in 2011:

LUXBG_NO2_2011NO2_legend

Vianden, Beidweiler and Beckerich are rural stations, Esch-Alzette and Luxembourg urban ones. It is only at the two Luxembourg(-City) stations that the EU target of 40 ug/m3 is exceeded. This is not surprising, as these measurement stations lie at roads with very heavy traffic; the Esch-Alzette station is on a small hill (Galgenberg) with plenty of green vegetation around.

The following two pictures show the daily mean NO2 concentrations of Vianden and Luxembourg-Bonnevoie for 2015 (note the different vertical scales!) (link)

VIANDEN_NO2_2015LUXBG_Bonnevoie_NO2_2015

Rural Vianden concentrations are very low, and even during the heating months they do not exceed 30 ug/m3; the situation in urban Luxembourg-Bonnevoie is quite different. The (relative) difference between the heating months and the summer months is much lower, and days exceeding the 40 ug/m3 limit are quite frequent. The lower summer concentrations at both sites are in my opinion mostly caused by increased atmospheric mixing due to convective air transport.

Let’s close this chapter with a picture showing the situation in 2012 for all EU member states:

NO2_attainment_2012_EU

The dots represent the median, the boxes delimit the 25 to 75 percentiles, and the whiskers ( the thin vertical lines) show the region containing 99% of the values.

Clearly Luxembourg fares very well: if we take the median concentration, we see that 19 countries surpass Luxembourg, which thus has the 9th “best” attainment of the 28 EU countries. If we look at the upper whisker end, only 3 or 4 countries have lower or similar upper bounds.

Conclusion:  Luxembourg is NOT the bad guy!

 

3. Hourly NO2 concentrations

I will close this first part with a look at the hourly NO2 concentrations during the last 7 days; we will compare the measurements made at Vianden, Luxembourg-Bonnevoie and Diekirch (meteoLCD):

VIANDEN_NO2_1h_5to12Jun2016

LUXBG_Bonnevoie_NO2_6to12Jun2016

DIEKIRCH_NO2_6to12Jun2016

At Vianden we see a daily maximum which mostly does not exceed 3 times the daily minimum (except on the last day); the urban Luxembourg-Bonnevoie data show two daily spikes, one in the morning and one in the afternoon: clearly a sign of increased traffic during the rush hours when commuters come into or leave the town. The range extends from 10 to 80 ug/m3, a factor of 8.

The NO2 sensor in Diekirch has a positive bias of about 10 ug/m3, so read the values from the left blue scale. Here we have a very pronounced peak in the morning (commuter traffic, and normally a time of morning inversion); the afternoon peak is muted or absent. The range extends from 10 to 100 ug/m3, a factor of 10, similar to the Luxembourg-Bonnevoie situation. The red curve shows the NO readings, which are always lower than the NO2. The NOx concentration corresponds to the sum of the blue and red curves.
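Just to make the bias handling explicit, here is a tiny sketch; the readings and the assumed constant +10 ug/m3 offset are placeholders, not actual meteoLCD data:

```python
# Hypothetical half-hourly readings in ug/m3 (invented values).
no2_raw = [35.0, 60.0, 95.0, 40.0]   # NO2 with an assumed ~+10 ug/m3 positive bias
no = [5.0, 20.0, 45.0, 10.0]         # NO readings

BIAS = 10.0                          # assumed constant offset, ug/m3

no2 = [max(v - BIAS, 0.0) for v in no2_raw]   # bias-corrected NO2
nox = [a + b for a, b in zip(no2, no)]        # NOx taken as NO2 + NO, as in the text

print(no2)   # [25.0, 50.0, 85.0, 30.0]
print(nox)   # [30.0, 70.0, 130.0, 40.0]
```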

Comparing the last two curves, we observe a flattening during the last two days (11 and 12 June): you will guess that these are the weekend days with no commuter rush hour!

 

In the 2nd part of this blog (coming as soon as possible), I will analyze the emissions of different types of cars, using data from a truly excellent DEFRA report from 2011.

Energy Return on Energy Investment

May 29, 2016

kelly

Prof. M.J. Kelly from Cambridge University (Electrical Engineering Division, Department of Engineering) has just published a very interesting paper, “Lessons from technology development for energy and sustainability”, in which he is very critical of the currently fashionable decarbonization politics. He strongly warns that trying to massively deploy yet-unfit technologies can be counter-productive.

In this comment I just want to stress two problems related to energy production which he mentions in his paper. The first is the EROI (Energy Return on Investment), which we will read as Energy Return on Energy Investment (EROEI); the second is the energy density and surface needs of various power technologies.

  1. The EROI 

This is a very easy-to-understand parameter which puts a number on the following question: how much energy will a given technology produce during its lifetime, compared to the energy needed to build it and keep it working during this period? This problem is practically always fudged by green energy advocates, who say for instance that a wind turbine will pay back its energy budget during the first year (link), ignoring all the associated problems of backup power, grid investments etc. Prof. Kelly does not agree, and gives the following graph:

EROI
The left scale represents the ratio (energy produced)/(energy invested); the blue histogram considers this without any regard to energy storage, the yellow columns show the result if one includes all the energy needed to implement the large-scale storage technologies (such as pumped hydro, batteries …) needed by intermittent producers like wind and solar. He says that the economic threshold is about 8; of the 4 renewable producers, only thermal solar plants in desert regions barely exceed this minimum, whereas nuclear power reigns supreme with a factor of 75.

A serious problem with such analyses is the life-cycle assessment (LCA), often difficult to carry out in a scientific, non-partisan manner. Kelly cites a book by Prieto and Hall (Springer, 2013) which studied the EROI of the Spanish solar “revolution”, where clear and unambiguous data are available: these authors give an EROEI of 2.45 for the Spanish solar policy.
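The EROEI arithmetic itself is elementary; the toy sketch below (with invented lifetime figures, not Prof. Kelly’s numbers) only shows how strongly a storage overhead can pull an intermittent source below the ~8 threshold mentioned above:

```python
# Toy EROEI sketch with invented numbers (not from Kelly's paper).
def eroei(e_out_gwh, e_in_gwh, storage_overhead_gwh=0.0):
    """Lifetime energy delivered per unit of energy invested."""
    return e_out_gwh / (e_in_gwh + storage_overhead_gwh)

# hypothetical wind farm: 1500 GWh delivered over its lifetime for 100 GWh of
# embodied energy, plus 400 GWh more if large-scale storage and extra grid
# infrastructure have to be built as well
print(eroei(1500, 100))        # 15.0 -> looks fine without storage
print(eroei(1500, 100, 400))   # 3.0  -> well below the ~8 threshold with storage
```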

2. Energy density and land usage

A second problem with wind and solar is that these are extremely low-density power sources. The following table shows the numbers in MJ/kg:

energy_density

I do not quite agree concerning modern, non-lead batteries: their energy densities are much higher, but still minuscule compared to nuclear:

energy_density_batteries_www_epectec_com

This graph (from http://www.epectec.com) shows that the most recent batteries may come close to 0.76 MJ/kg, similar to hydro dams. Energy density is an important factor wherever land usage matters, as it does for most populated regions of the world and especially for the mega-cities of the future, which are assumed to hold 50% of the world population in 2050.
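A quick unit check (mine, not from the graph): battery data sheets usually quote Wh/kg, and converting shows what the 0.76 MJ/kg mentioned above corresponds to.

```python
# Convert gravimetric energy density from Wh/kg to MJ/kg (1 Wh = 3600 J).
def wh_per_kg_to_mj_per_kg(wh_per_kg):
    return wh_per_kg * 3600.0 / 1.0e6

print(wh_per_kg_to_mj_per_kg(210))   # ~0.76 MJ/kg, the "most recent batteries" value above
print(wh_per_kg_to_mj_per_kg(35))    # ~0.13 MJ/kg, a typical lead-acid figure (my assumption)
```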

This Breakthrough paper gives the following numbers for land use in m2 per GWh delivered in one year:

land_use_per_GWh

and these are the numbers for material use:

material_use_per_GWh

I have added capacity factors that are close to those observed in Germany/Luxembourg (on-shore wind practically never reaches 30%) and for solar PV (here 10% is still optimistic); with these more realistic capacity factors, onshore wind would have a land use closer to 2200. What comes as a bit of a surprise (even if we accept the very optimistic original numbers) is that solar PV has about the same material footprint as nuclear (which we instinctively associate with enormous volumes of concrete and steel).

Let us take tiny Luxembourg’s electricity consumption as a rough indicator of what part of the ~2500 km2 area of the country would be needed if a single energy source were to produce all the energy needed. According to this report the total energy consumption was about 50000 GWh in 2013. Here are the area in km2 and the percentage of the total country area if all this energy had to be produced by the given source:

Nuclear:                  60 km2    = 2.4 %   (assumes cooling water comes from new lakes)

Solar PV:                320 km2   = 12.8%  (land use taken as 6400)

Wind on-shore:    83 km2     =  3.3%  (land use taken as 1650)

Biomass:         23000 km2     =  more than 9 times the total area of Luxembourg !

The wind and solar numbers are more or less meaningless unless full storage solutions exist (which will not be the case in the foreseeable future).

I do not accept the numbers for nuclear. The nearby Cattenom nuclear plant produces about 35000 GWh per year and occupies an area of at most 4 km2 (checked with Google Earth). Using this as a more realistic example, we would get a total land use for the nuclear choice of about 6 km2 or 0.24%, i.e. 10 times less.
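The percentages above are simple arithmetic; the sketch below reproduces them using the consumption, country area and land-use figures quoted in this section (the per-GWh values for nuclear and biomass are back-computed from the areas given above):

```python
# Back-of-the-envelope reproduction of the area estimates above.
CONSUMPTION_GWH = 50_000   # Luxembourg, 2013, as quoted above
COUNTRY_KM2 = 2500         # approximate area used in the text

land_use_m2_per_gwh_yr = {
    "Nuclear (list above)":        1_200,        # implied by the 60 km2 figure
    "Solar PV (10% cap. factor)":  6_400,
    "Wind on-shore (adjusted)":    1_650,
    "Biomass":                   460_000,        # implied by the 23000 km2 figure
    "Nuclear (Cattenom-based)":  4e6 / 35_000,   # ~4 km2 for ~35000 GWh/yr
}

for source, m2_per_gwh in land_use_m2_per_gwh_yr.items():
    km2 = m2_per_gwh * CONSUMPTION_GWH / 1e6
    print(f"{source:28s}: {km2:8.1f} km2 = {100 * km2 / COUNTRY_KM2:6.2f} % of Luxembourg")
```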

3. Conclusion

Both EROI and land use show that the nuclear choice is unbeatable as a “carbon-free” energy producer. This is also the conclusion of Prof. Kelly’s paper and that of the late Prof. MacKay in his last interview.

Mathiness and models: the new astrology?

May 18, 2016

climate_models
There is an outstanding article in Aeon on the use (and abuse) of mathematics and mathematical models in economics. It makes for fascinating reading, as many of the things said could apply directly to model-driven climatology. As a physicist, I love mathematics and find it invaluable for giving a precise meaning to what often are fuzzy statements. But this article includes some gems that make one reconsider any naive and exaggerated belief in mathematical models.

The economist Paul Romer is cited: “Mathematics, he acknowledges, can help economists to clarify their thinking and reasoning. But the ubiquity of mathematical theory in economics also has serious downsides: it creates a high barrier to entry for those who want to participate in the professional dialogue, and makes checking someone’s work excessively laborious. Worst of all, it imbues economic theory with unearned empirical authority.” Replace the word “economics” with “climatology” and you begin to understand.

You find many citations by the great physicist Freeman Dyson on climate issues, like this one: “…climate models projecting dire consequences in the coming centuries are unreliable” or “[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere” (link).

Ari Laor from the Technion (Haifa, Israel) writes in a comment at the American Scientist blog: “Megasimulations are extremely powerful for advancing scientific understanding, but should be used only at a level where clear predictions can be made. Incorporating finer details in a simulation with a large set of free parameters may be a waste of time, both for the researcher and for the readers of the resulting papers. Moreover, such simulations may create the wrong impression that some problems are essentially fully solved, when in fact they are not. The inevitable subgrid physics makes the use of free parameters unavoidable…”

The Bulletin of the Atomic Scientists also has a very interesting article, “The uncertainty in climate modeling”. Here are some gems: “Model agreements (or spreads) are therefore not equivalent to probability statements…does this mean that the average of all the model projections into the future is in fact the best projection? And does the variability in the model projections truly measure the uncertainty? These are unanswerable questions.”

How true…

 

PS: The Bulletin has a series of 8 short contributions on this subject, and I suggest taking the time to read them all.

 

 

First Radiation Amplification factor for 2016

April 21, 2016

In several previous posts (here and here) I commented on the RAF (Radiation Amplification Factor), which tells us how much a change in the total ozone column will change the UVB irradiation. The question is usually asked to quantify the danger that a thinning ozone layer will cause an increase in UVB radiation, which might cause an increase in skin cancer risk. The often extremist scare about the danger of UVB radiation has faded somewhat in recent years, as cases of rickets caused by too little UVB exposure have shown up again (read this paper). But at many beaches you can see overprotective mothers putting their children in UV-filtering jump-suits, which might be an overreaction triggered by the scary media stories that usually start, in Western countries, on the first warm and sunny days of the year.

The RAF is defined as: RAF = -[ln(UVB2/UVB1)] / [ln(DU2/DU1)]

where the indices 1 and 2 correspond to two different situations (DU is the total ozone column in Dobson units, UVB the corresponding UVB irradiance). The following graph shows the situation today, the second (nearly) blue-sky day in a row. As these dates and the times of measurement are so close, we may assume a constant path length for the sun's rays through the atmosphere, and constant attenuations. The AOT (atmospheric optical thickness), which measures the turbidity of the atmosphere, was 0.055 on the first day and 0.068 on the second; these are very close values. For comparison, the AOT was 2.197 on 19 April, which had a heavy cloud cover. The solar zenith angles were 38.4 and 38.0 degrees, respectively.

RAF_20_21April2016

With the readings shown on the graph, we find an RAF = 1.10

In a much more extensive paper I wrote in April 2013, the corresponding value is 1.08 (computed over 5 consecutive days).

Expressed as simple percentages, one can roughly say that during the two days of 20 and 21 April 2016 a decrease of 5% in the total ozone column caused a rise of the UVI of 9% (percentages with respect to the first day). Beware of extrapolating this linearly, as the RAF is defined through (non-linear) logarithms!
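For readers who want to play with the numbers, here is a minimal sketch of the computation; the ozone and UVI values are invented placeholders (not the readings behind the graph above). It shows the two steps: derive the RAF from a pair of comparable days, then use the power law UVB ~ (ozone)^(-RAF) rather than a linear rule of three.

```python
import math

def raf(du1, uvb1, du2, uvb2):
    """Radiation Amplification Factor, from UVB ~ (ozone)**(-RAF)."""
    return -math.log(uvb2 / uvb1) / math.log(du2 / du1)

# hypothetical clear-sky pair: total ozone 5% lower on day 2, UVI somewhat higher
du1, uvi1 = 360.0, 5.00
du2, uvi2 = 342.0, 5.29

r = raf(du1, uvi1, du2, uvi2)
print(round(r, 2))                        # ~1.1 for these invented readings

# With the RAF known, the power law gives the UV change for any ozone change,
# e.g. a 10% ozone loss:
print(round((0.90 ** -r - 1) * 100, 1))   # ~12% UVI increase, slightly more than a linear guess
```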

 

European Summer Temperatures since Roman Times

April 9, 2016

J. Luterbacher from the University of Giessen has published an interesting paper in Environmental Research Letters on the evolution of European summer temperatures. The paper is only 12 pages long, but the list of coauthors counts 44 names, reflecting the inflationary tendency to cite everyone the author wants to be agreeable to (and the desperate struggle of scientists to be coauthors on a maximum of papers). Nevertheless, the paper is interesting to read.

The authors used two statistical methods to evaluate temperature proxies (here tree rings): Bayesian hierarchical modelling (BHM) and a Composite Plus Scaling (CPS) method. Both results are compared (where feasible) with instrumental records (here CRUTEM4). The concordance of these 2 methods and the instrumental record is rather good, as shown in this figure, which gives correlations r of 0.81 and 0.83.

Luterbach_Instruments_B_C

Are the 20th century summer temperatures unusual?

The period from Roman times to today is known to include 3 warm periods: the Roman, the Medieval and the Modern (notice the well-known ~1000-year period!). The next figure shows the results given by the two statistical methods and the IPCC consensus reconstruction:

Luterbach_B_C

I have added the red horizontal line giving the highest (reconstructed) level of the Roman Warm Period: clearly the situation during the 20th century was not unusual compared to this period.

Another figure starts at the Medieval Warm Period and gives the same impression:

Luterbach_MCA_LIA_Present
Compared to Medieval times, the last 100 years, with noticeably higher atmospheric CO2 concentrations (mixing ratios), do not show a dramatic warming!

A last figure is also very telling: it gives the temperature differences between the Present (1950 to 2003) and the Medieval Warm Period:

Luterbach_Present-MCA

Luterbach_Present-MCA_scale
Some locations close to the Mediterranean are warmer, most are only slightly warmer or about the same, and two are even cooler.

Conclusion:

The authors write that “both CPS and BHM  indicate that the mean 20th century European summer temperature was not significantly different from some earlier centuries, including the 1st, 2nd, 8th and 10th centuries CE”.

This could be the last word, but we all know that a scientist today must pay at least lip service to the global warming meme. Accordingly the authors tell us that “However, summer temperatures during the last 30 yr (1986–2015) have been anomalously high”. Remember that we had a “monster” El Niño in 1998, and a very big one in 2015: these two events alone pushed the average temperatures up a lot, so this last remark is rather irrelevant.

But as they also write in another part of their paper that there is “… as well as a potentially greater role for solar forcing in driving European summer temperatures than is currently present in the CMIP5/PMIP3 simulations. This might be evidence for an enhanced sensitivity to solar forcing in this particular region”, thereby acknowledging the solar forcing that the IPCC denies, I will pardon the mandatory, career-friendly and politically correct sentence about the last years.

Climate trends at Diekirch, Luxembourg: part 2c (atmospheric CO2)

March 9, 2016

In this third and last part I will discuss the global trend in the CO2 mixing ratio, the measurements of some EU measuring stations, and our own data at meteoLCD.

  1. The global situation 

CO2 and other atmospheric greenhouse gas concentrations can be found at many web sites, but I recommend two:

  • the NOAA Earth System Research Laboratory website, which has an excellent FTP data finder. Its GLOBALVIEW site has a very interesting movie showing how the seasonal amplitude of atmospheric CO2 swings when going from the South Pole to the North Pole (2001 to 2013): nearly constant yearly values at the South Pole, and huge swings in the Northern hemisphere!
    The following picture shows the 3 non-maritime reference stations that are closest to Diekirch:
    NOAA_reference_stations_EU
  • the WDCGG (World Data Centre for Greenhouse Gases), a Japanese website with links to many stations and miscellaneous visualization tools. From this site, let us first look at the global atmospheric CO2 trend and the variations of its yearly growth rate (a small sketch of this differencing follows after the list).
    co2_monthly_molfr_20151109
    At first glance the increase seems practically constant during the last 30 years, and close to (400-346)/30 ≈ 1.8 ppm/yr. The next picture shows in more detail that this growth rate is not an absolute constant, but follows a periodic pattern of ~2 years during the last 10 years and shows a big variability from 1985 to 2000 (the huge peak in 1998 is the fingerprint of the big El Niño of that year, when the warmer oceans did not absorb as much CO2 as colder waters do).
    co2_grr_20151109_annotated
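For reference, the growth-rate curve in the second figure is simply the year-to-year difference of annual mean mixing ratios; a minimal sketch (with invented values) looks like this:

```python
import numpy as np

years = np.arange(2010, 2016)
co2_ppm = np.array([389.9, 391.6, 393.8, 396.5, 398.6, 400.8])  # invented annual means

growth_rate = np.diff(co2_ppm)                                   # ppm/yr, one value per pair of years
mean_rate = (co2_ppm[-1] - co2_ppm[0]) / (years[-1] - years[0])  # average over the whole span

print(growth_rate)          # [1.7 2.2 2.7 2.1 2.2]
print(round(mean_rate, 2))  # 2.18 ppm/yr
```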

2. What do European stations show?

We will select (on WDCGG) all European stations located in a rather small grid of latitudes [45, 50] and longitudes [0, 15 east] (the yellow rectangle pointed at by the blue arrow):

EU_station_selected

CO2_monthly_EU_selected

This plot shows that the readings vary considerably, and that the global picture combined from these stations is far from a smooth curve! We get a similar result if we select only the stations located in a 10° x 10° grid that covers Uccle (Belgium) and Diekirch:

CO2_10x10grid

If we look at the yearly average results from the three NOAA reference stations mentioned above (HPB, OXK and HUN), the measurements lie rather close together:

CO2_Yearly_OXK_HUN_HPB

The largest difference is about 6 ppmV, in 2011. From the first picture in this blog you may remember that OXK and HPB are mountain stations, located at 1185 and 985 m asl, well above the daily inversion layer. That is not the case for the Hungarian station (248 m asl), but the values you see on the plot are only a subset of the measurements. In a relatively well-hidden remark, the scientists from Hegyhatsal write that they ignore all readings except those made between 11:30 and 15:00 UTC; during this period convective mixing is most important and the CO2 concentration is closer to a maritime or global background. I did not find their original data; the closeness of the HUN curve to the OXK and HPB plots clearly is a result of that pre-filtering.


3. The measurements in Diekirch

In Diekirch we publish all our measurements, but the next picture shows the effect of omitting all readings except those taken from 11:30 to 15:00 UTC.

CO2_Diekirch_yearly_avg_all_and_restricted

Clearly the restriction to the afternoon readings lowers the overall mixing ratio by up to 20 ppmV, which is considerable. The plot also shows one serious problem with our measurements: the dramatic plunge from 2012 to 2013 is probably an artifact due to a systematic, unidentified calibration or instrument problem. Both years 2011 and 2012 have an exceptionally high occurrence of CO2 readings in excess of 500 ppmV: usually there are about 200 such occurrences (i.e. half-hour periods) in a year, but during these 2 years we measured more than 500. That the lower plot also peaks at nearly the same value as the blue upper curve points to a problem.
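For the curious, a minimal pandas sketch of that afternoon-only filtering could look as follows; the column name, file layout and half-hourly index are assumptions for illustration, not the actual meteoLCD format:

```python
import pandas as pd

def yearly_means(df: pd.DataFrame, afternoon_only: bool = False) -> pd.Series:
    """Annual mean CO2; optionally keep only the 11:30-15:00 UTC readings."""
    if afternoon_only:
        df = df.between_time("11:30", "15:00")   # needs a DatetimeIndex in UTC
    return df["co2"].groupby(df.index.year).mean()

# usage sketch (file and column names are assumptions):
# df = pd.read_csv("co2_halfhourly.csv", parse_dates=["time"], index_col="time")
# print(yearly_means(df))                        # all readings
# print(yearly_means(df, afternoon_only=True))   # afternoon subset, lower by up to ~20 ppmV
```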

A “consolation” is that the periods 2002 to 2012 and 2013 to 2015 show trends that are close to the global increase  rate of ~2 ppmV/year:

CO2_yeraly_Diekirch_closetrends

 

 

4. Conclusion

This 3-part series shows that CO2 measurements are not easy, are far from uniform, and that trends are difficult to establish without extremely costly calibration procedures. Even the satellites circling the Earth have problems measuring CO2 mixing ratios with an accuracy of 0.1 ppm. If you look at the various CO2 series of ground stations, you may be horrified by the numerous malfunctions and missing-data periods. At Diekirch, we are a little player in the CO2 game, doing a difficult job on a shoe-string budget. Our atmospheric gas measurements are by far the most troublesome we make; that our long-time trends in CO2 increase are similar to those measured by the big guys is comforting, but does not allow us to turn a blind eye to the numerous remaining problems.

A recurrent question is: “what is the cause of the CO2 increase?”. The usual answer is that our fossil fuel consumption is the main culprit, as both curves track each other well. Ferdinand Engelbeen has a good discussion of this problem (link). The next picture from his website shows that the CO2 concentration rises with (cumulative) emissions, but also that the global temperature does not do so during all periods:

Engelbeen_CO2_emissions_temperature

Another argument for the human origin of the CO2 increase is the variation of the delta-C13 isotope ratio: fossil fuel has a lower share of C13 than CO2 from active biological sources. The continuing decrease of delta-C13 is (or could be) a fingerprint of the anthropogenic impact. Not everybody accepts this explanation. For example Richard Courtney does not accept that the sink capability of the ocean is overloaded and that the “excess” human-emitted CO2 therefore accumulates in the atmosphere (see discussion here). Prof. Murray Salby also argues that the CO2 increase is “natural” (see video). My late co-author Ernst-Georg Beck was also strongly convinced that warming regions of the North Atlantic are the main reason behind the atmospheric CO2 increase. Salby shows this figure in his presentation:

Salby_fossilfuelincrease_co2increase.jpg

A threefold increase in fossil fuel emissions after 2002 did not change the increase rate of atmospheric CO2: can fossil fuel emissions then really be the (main) driver of the observed atmospheric CO2 increase?

The debate is not over, but the overall atmospheric CO2 measurement results are accepted by almost everyone.

Climate trends at Diekirch, Luxembourg: part 2b (atmospheric CO2)

February 28, 2016

In this second part on CO2 measurements I will talk about short-term and seasonal CO2 variations. The media usually seem to suggest that CO2 levels are something of what the French call “un long fleuve tranquille”, i.e. a more or less uniform mixing ratio that increases steadily with time. Nothing could be more wrong! While stations close to the sea have relatively small diurnal variations, the situation is very different at continental and rural locations.

1. The diurnal CO2 variations.

Let me start with a very recent example: the CO2 levels at Diekirch during the last 7 days (22 to 28 Feb 2016):

CO2_7days_22to28Feb2016

Obviously the daily CO2 variation is far from uniform: during the 22nd Feb. the levels were nearly flat at about 375 ppmV, then became really variable during the next 4 days, and returned to flat during the last 2 days. The highest peak of 465 ppmV, reached during the morning hours of the 25th Feb., represents an increase of 85 ppmV (about 22%) above the previous low level of 380 ppmV. Why this formidable peak? Let us look at the other meteorological parameters: there was no precipitation, air pressure did not change much, but air temperature and wind speed varied remarkably.

Wind_7days_22to28Feb2016

This plot shows the wind-chill in red, the air temperature in blue and the wind speed in violet. Let us ignore the red curve, which represents a parameter calculated from air temperature and wind speed. The 25th Feb. was the coldest day of this period, and the wind speed was close to zero. The 22nd Feb. was a very windy day, and it was also warmer. Now, we are in February, and the plant photosynthesis which lowers CO2 is practically dormant. The next plot of solar irradiance (the blue curve) shows that the sun was shining at its best during the 25th, and was nearly absent on the 22nd.

Solar_7days_22to28Feb2016

So we can make an educated guess about the parameter which had the greatest influence on CO2 levels during this period: it cannot be temperature, as warmer temperatures increase microbial soil activity and plant rotting, which both are CO2 sources. It cannot be photosynthesis driven by solar irradiance, as the highest CO2 readings happen when this potential activity is highest. There remains one parameter: wind speed! The night of the 25th was very cold, and as a consequence a strong inversion layer with minimal air movement lay as a blanket over the ground. The absence of air movement made it impossible for ground-level air to mix with air at higher levels, so all gases accumulated in this inversion layer. A further proof of the correctness of our detective work is given by the plot of the NO/NO2 concentrations, which we restarted measuring after a year-long pause:

NOx_7days_22to28Feb2016

The NO2 (blue) and NO (red) readings also peak on the 25th February (a working day, Thursday), at practically the same time: the CO2 peak is at 07:30 UTC, the NO2/NO peak at 08:00. The first, and certainly the latter, show the fingerprint of morning traffic through the valley in which the town of Diekirch lies.

The conclusion is: wind speed (i.e. air movement) is the main driver of high CO2 levels: low wind speed means poor air mixing and high CO2 levels; high wind speed means the opposite. For many years I have tried to push this explanation, which is practically ignored in the usual “consensus research”. The sole consolation was a first prize as “best publication” for the paper I wrote in 2009 with the late Ernst-Georg Beck as coauthor, and which was published by Springer.
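A simple way to test this explanation on the data is the correlation between half-hourly wind speed and CO2; the sketch below uses invented values only to show the expected (negative) sign:

```python
import numpy as np

wind = np.array([0.2, 0.3, 0.5, 1.5, 3.0, 4.5, 5.0, 2.0])   # hypothetical wind speed, m/s
co2 = np.array([460, 450, 430, 400, 385, 380, 378, 395])    # hypothetical CO2, ppmV

r = np.corrcoef(wind, co2)[0, 1]
print(round(r, 2))   # ~ -0.9 here: calm air goes together with high CO2
```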

More information on diurnal CO2 patterns can be found in this paper I wrote with my friends Tun Kies and Nico Harpes in 2007. It contains the following graph, which shows how CO2 levels changed during the passage of storm “Franz” on the 11th and 12th of January 2007.

11_to120107_franz2a

 

2. Seasonal CO2 pattern

Seasonal CO2 patterns reflect the influence of vegetation (and microbial soil activity): during the warmer, sunny summer months, CO2 levels are as a general rule lower than during the colder, sun-poor periods. Plant photosynthesis is a potent CO2 sink, overwhelming the opposing source of microbial outgassing. The following figure shows that this photosynthesis influence is nearly nil at the South Pole and becomes stronger at more northern latitudes (the last station is Point Barrow in Alaska):

co2_sta_records_seasonal

 

At Point Barrow (latitude 71°N) the seasonal swing is about 18 ppmV; at Mauna Loa (latitude ~19.5°N) it is only about 10 ppmV. Here is the 2015 situation at Diekirch (latitude close to 50°N):

co2_monthly_2015_withsinusmodel

The plot shows the monthly mean CO2 levels, together with a best-fit sine curve with an amplitude of 11.2 ppmV, i.e. a total swing of about 22 ppmV. These values are comparable to those measured at the German Hohenpeissenberg (HPB) and Ochsenkopf (OXK) stations. Be aware: not every year shows such a nice picture (look for instance here). A paper by Bender et al. from 2005 shows a seasonal swing at Amsterdam of about 16 ppmV, a further argument that our Diekirch measurements are not too bad!
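The seasonal fit itself is straightforward; here is a sketch using scipy with invented monthly means (not the actual 2015 Diekirch values):

```python
import numpy as np
from scipy.optimize import curve_fit

def seasonal(month, mean, amplitude, phase):
    """Simple 12-month sine model for monthly mean CO2."""
    return mean + amplitude * np.sin(2 * np.pi * month / 12.0 + phase)

months = np.arange(1, 13)
co2 = np.array([415, 418, 420, 417, 410, 402, 396, 394, 398, 405, 411, 414], float)  # invented

popt, _ = curve_fit(seasonal, months, co2, p0=[co2.mean(), 10.0, 0.0])
mean, amplitude, phase = popt
print(round(abs(amplitude), 1))   # fitted amplitude in ppmV; the total swing is twice this
```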

Bender_CO2_seasonal

 

A new paper from Boston University claims that urban backyards contribute nearly as much CO2 to the atmosphere as traffic emissions. There is a great discussion of this paper at the WUWT blog, as the paper does not seem to make a yearly balance of sources and sinks. Nevertheless, one commenter (Ferdinand Engelbeen) recalls that the yearly balance between photosynthesis (a sink) and microbial activity (a source) is slightly negative, i.e. photosynthesis removes more CO2 from the atmosphere than soil microbes and decomposing vegetation produce. Soils globally inject ca. 60 GtC (gigatons of carbon) into the atmosphere, to be compared with annual anthropogenic emissions of about 10 GtC.

 

____________________________

The upcoming last part of this 3-part series on CO2 will discuss long-term trends.

