AQI: air quality confusion (4)

May 27, 2018

This is part 4.

Click on the links below for the other parts.

Part 1, 2, 3, 5.



7. The EU AQI’s

As everything in Europe is complicated, so are the AQI's. The oldest index is the CAQI, which has different quality-band names, a different color scheme and different break-points than the newer EAQI.

7.1. The CAQI

The CAQI index was introduced by the CiteAir (Common Information to European Air) project in 2006. It was revised in 2012 and is meant as an index for urban air. It uses the concentrations (measured in ug/m3) of 3 core pollutants (O3, NO2, PM10), with PM2.5, SO2 and CO as optional. The different sub-indices run from 0 to 100 as defined in the following table (link):

The table shows that the index break-points are not proportional to the concentrations for all pollutants: for NO2, for instance, the "Very low" index range [0…50] covers 0…50 ug/m3, whereas the "High" range [75…100] covers 200…400 ug/m3.
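The piecewise-linear mapping behind such a break-point table is easy to sketch in code. In the minimal Python sketch below, the (0,0)-(50,50) and (200,75)-(400,100) NO2 segments are taken from the text; how the index runs between 50 and 200 ug/m3 is my assumption, not the official CAQI value:

```python
def sub_index(conc, breakpoints):
    """Piecewise-linear interpolation through (concentration, index) points."""
    for (c_lo, i_lo), (c_hi, i_hi) in zip(breakpoints, breakpoints[1:]):
        if c_lo <= conc <= c_hi:
            return i_lo + (i_hi - i_lo) * (conc - c_lo) / (c_hi - c_lo)
    return float(breakpoints[-1][1])  # saturate above the last break-point

# NO2 1h break-points (ug/m3 -> index). (0,0)-(50,50) and (200,75)-(400,100)
# come from the text; the segment joining 50 to 200 ug/m3 is an assumption.
NO2_BP = [(0, 0), (50, 50), (200, 75), (400, 100)]

print(sub_index(25, NO2_BP))   # -> 25.0 (middle of "Very low")
print(sub_index(300, NO2_BP))  # -> 87.5 (middle of "High")
```

Note how the same 25-point index step covers 50 ug/m3 at the bottom of the scale but 200 ug/m3 at the top.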

The names of the different categories relate to the magnitude of the index, not to the quality of the air. As customary, the highest sub-index defines the CAQI. Note that the sampling period is usually 1 hour. The CAQI can be viewed on-line at

Clicking a city gives more detailed information, as shown for Paris:

Clearly the CAQI leans heavily "French", as the vast majority of the cities shown are in France.

This is probably the reason that the DG Environment commissioned a study from Ricardo Energy & Environment, which delivered in 2013 a paper defining a harmonized EAQI. The proposal was very similar to the EPA AQI, with an identical number of pollutants and an index running from 0 to 500; the names of the different categories were different. This project seems to have been a dead-end (due to its US similarities?) and was never officially applied.

7.2. The EAQI

In November 2017 the EU introduced the EAQI (European Air Quality Index) with a website showing the live data. It is very difficult to find precise literature on this index, which is somewhat similar to the CAQI, albeit with a different color scheme, renamed categories and, most importantly, different break-points. So most of the following information has been extracted from the excellent

First, the naming of the 5 categories is now relevant to the air quality and not the magnitude of the index; the colors go from turquoise to brown, and the indices from 0 to 100 relate to concentrations in ug/m3 as shown:

The numerical ranges of the indices are here (often the upper bound of a category is given as equal to the lower bound of the following one, which does not make it easy to decide on a category if the data happen to fall exactly on a boundary):
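In software, this boundary ambiguity is usually resolved by making the intervals half-open. A short sketch (the numeric bounds are hypothetical placeholders, not the official per-pollutant EAQI break-points; only the five category names follow the EAQI wording):

```python
import bisect

# Hypothetical upper edges of the first 4 categories; the real EAQI
# break-points are per-pollutant concentrations, not these round numbers.
BOUNDS = [25, 50, 75, 100]
NAMES = ["Good", "Fair", "Moderate", "Poor", "Very poor"]

def category(index):
    """Classify with half-open intervals [lo, hi): a value landing exactly on
    a boundary goes unambiguously into the next (worse) band."""
    if index > BOUNDS[-1]:
        return NAMES[-1]
    return NAMES[bisect.bisect_right(BOUNDS, index)]

print(category(50))  # boundary value -> "Moderate", no ambiguity
```

With `bisect_right` every boundary value belongs to exactly one band, which is what a published table with coinciding bounds leaves undefined.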


I wrongly presumed that the break-points for each category are defined as they are for the CAQI.

The graph for the EAQI in Epinal (France) shows that the category is POOR, as the PM10 concentration exceeds 71 ug/m3. As this is a 1h concentration, the EAQI break-point for PM10-POOR seems to be lower than in the CAQI table.

Finally, after quite some detective work, I found legends at the provisional website
which confirm those of the table at the start of chapter 7.2. Note that here the upper and lower bound values coincide!

Note how different these break-points are  from those of the CAQI, which adds one more level to the overall confusion!



Hopefully the EAQI will be the definitive step in harmonizing an AQI. But I doubt that these break-points will remain stable in the coming years: as the trend goes to an ever-tightening of the tolerated levels, the chances for future stability seem poor.

The EAQI website is an enormous improvement over that of the CAQI, as it includes measuring data from all over Europe and shows the time series for the different pollutants.


The last part 5, to follow asap, concludes this series with a discussion of the new Luxembourg AQI, introduced a couple of weeks ago with a smartphone app called "Meng Loft", and which shows that adding confusion is not a privilege of big countries!

(go to part 5)


AQI: air quality confusion (3)

May 23, 2018

This is part 3.
Click on the links below
for the other parts.

Part 1, 2, 4, 5.

6. The UK revised DAQI

In the UK the Department for Environment, Food and Rural Affairs (DEFRA) has published the revised Daily Air Quality Index (DAQI) since 2013; the first DAQI was introduced in 2012. The DAQI has 4 quality levels (from best to worst: LOW, MODERATE, HIGH, VERY HIGH) and the index runs from 1 to 10. It is based on concentrations of O3, NO2, SO2 and PMs (PM10 and PM2.5) measured in ug/m3 (CO has been removed in the revised DAQI).

The following table gives the break-points (link):

The first comment should be that the "band" qualifiers correspond to the numerical index, so "Low" means a low index, i.e. good air quality conditions. The break-points are not proportional to the concentration: note that for O3 index 2 spans 33 ug/m3, whereas index 4 extends over only 20 ug/m3. The pollutant with the highest index defines the published DAQI.
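As a sketch, the published DAQI band can be derived from the highest sub-index. The 1-3 / 4-6 / 7-9 / 10 grouping below is the commonly cited DEFRA banding (verify it against the linked table), and the pollutant readings are invented:

```python
def daqi_band(index):
    """Map a DAQI value (1..10) to its band; the 1-3 / 4-6 / 7-9 / 10
    grouping is the commonly cited DEFRA banding (check the linked table)."""
    if not 1 <= index <= 10:
        raise ValueError("DAQI runs from 1 to 10")
    if index <= 3:
        return "LOW"
    if index <= 6:
        return "MODERATE"
    if index <= 9:
        return "HIGH"
    return "VERY HIGH"

# The published DAQI is driven by the highest pollutant sub-index
# (the readings below are invented):
readings = {"O3": 4, "NO2": 2, "SO2": 1, "PM10": 3, "PM2.5": 3}
print(daqi_band(max(readings.values())))  # -> MODERATE
```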

Ozone concentrations are taken only as an 8h running mean. The color levels used for the different bands and subdivisions differ from those of the EPA and China, and the attached health messages distinguish between individuals at risk and the general population:

There are at least two methods to view live DAQI data:

1. use a Google Earth KMZ file (link)

2. go to the interactive map (link):

Clicking on a station gives very detailed information, as shown below (all lines are active links):


  • Care should be taken to not confuse a “LOW” index with “low air quality”.
  • Not further subdividing the “Very high” category is a good decision.
  • 10 different color shades are in my opinion way too many: it is difficult to distinguish neighboring colors when the map is shown at a lower resolution.


7. The French ATMO index

The “Fédération des Associations de la Surveillance de la Qualité de l’Air” (link) defines two indices:
– the ATMO is based on 4 pollutants (O3, NO2, SO2, PM10) and applicable for cities of more than 100000 inhabitants
– the IQA (Indice de Qualité de l’Air simplifié) is based on a subset of these 4 pollutants and used for cities with less than 100000 inhabitants

In the following I will consider only the ATMO, which extends from 1 to 10 and uses 6 quality levels but only 3 different colors (GREEN, ORANGE, RED), as defined in the relevant "arrêté" (link). The break-points are based on 1h concentrations measured in ug/m3 (except PM10, given as the 24h mean); if several stations cover a geographic zone, the average is used. As for all previous indices, the highest sub-index defines the ATMO (link):
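The two distinctive ATMO features, averaging the stations of a zone and collapsing 10 index values onto 3 colors, can be sketched as follows. The exact index-to-color grouping is my assumption (to be checked against the arrêté), and the PM10 readings are invented:

```python
def atmo_color(index):
    """Map an ATMO index (1..10) to its color; the 1-4 / 5-7 / 8-10 grouping
    is an assumption for illustration, to be checked against the arrete."""
    if index <= 4:
        return "GREEN"
    if index <= 7:
        return "ORANGE"
    return "RED"

# When several stations cover a zone, the ATMO averages their concentrations
# before computing the sub-index (hypothetical 24h PM10 means, in ug/m3):
stations_pm10 = [28.0, 35.0, 31.0]
zone_conc = sum(stations_pm10) / len(stations_pm10)
print(round(zone_conc, 1), atmo_color(5))  # zone mean, and the color of index 5
```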

I did not find an interactive map covering France, but you may start here with the map of the regions (link) and click on a region to get more details, as shown for the Eastern Region of France (link):

The details in the regional maps vary from region to region; in the sub-map above clicking on a station gives further specific indices for the individual pollutants.


  • using only 3 colors makes an overall view easier
  • the format for the individual regions is not exactly the same, which is slightly annoying. Besides the ATMO, France often uses an AQI called CiteAir, which is based on an EU convention. The EU AQI's will be discussed in the upcoming part 4.

(go to part 4)


AQI: air quality confusion (2)

May 21, 2018

This is part 2.
Click on the links below
for the other parts.

Part 1, 3, 4, 5.



In the first part of this blog on AQI I finished with the definition of the US EPA AQI, which has 6 quality levels and AQI numbers from 0 to 500. Before going to Europe, let us begin this part with the Chinese AQI.


5. The PRC Chinese AQI

To define air quality, China uses the same 6 pollutants as the US EPA (see here): O3, NO2, PM10, PM2.5, CO and SO2. It also uses 6 quality levels, albeit with different wording: EXCELLENT to SEVERELY POLLUTED. The Chinese sub-AQI's (called IAQI = individual AQI) run from 0 to 500, as do the EPA sub-AQI's, and the highest IAQI defines the published AQI. Most of the time PM2.5 is the primary pollutant with the highest IAQI, but in summer O3 may have the highest IAQI.

The break-points for PM’s differ from those of EPA (link, attention: this blog has a completely wrong table comparing US and China other AQI’s):

The qualifiers ("Description") in this table are the Chinese ones, and obviously China is much more tolerant for PM2.5 than the EPA (notice that the first Chinese break-points are considerably higher). The same remark holds for some of the other pollutants, as shown in this comparison table which uses 1996 EPA break-points (link):

First, one should note that China uses ug/m3 as the concentration unit. Second, it calculates NO2 pollution using only a 24h average, whereas the EPA uses 1h values, which makes comparison impossible. Third, where comparison can be made, the break-point numbers are very close for O3, CO and PM10 but differ for SO2 (24h), NO2 (24h) and PM2.5 (24h).

Chinese AQI’s are often calculated as the average from readings of multiple stations around a city, whereas US EPA AQI’s allways come from a single station.

The US Embassies in China have their own measuring stations which use the EPA standard: see here.

There are a couple of smartphone apps to visualize real-time Chinese AQI’s, but I did not find an official Chinese live map on the web based on Chinese AQI standards. The web site shows Chinese air quality using EPA standards; the same holds for the website

The next picture shows the situation today 21 May 2018 at 14:10 UTC:

Clicking on a label gives more information, as for instance for the City of Yulin:

We see that the PM2.5 and PM10 situation is particularly bad, whereas the O3 and NO2 levels are GOOD.

Comparing AQI’s over China with those of other parts of the world clearly shows that bad air quality (mostly PM’s) is a serious issue in many parts of mainland China.


Be careful when reading AQI’s for China, as often it is not clear on what standard they are based. The following paragraph (link) summarizes this well:

(go to part 3)





AQI: air quality confusion (1)

May 20, 2018

This is part 1.
Click on the links below for the other parts.

Part 2, 3, 4, 5.




Luxembourg's Environmental Agency has joined the AQI train and published a smartphone app called "Meng Loft" (= My Air) which gives an air quality index (AQI) for Luxembourg and/or sub-regions of the country. There are many different AQI's around the world, and in this multi-part comment I try to clarify a bit the overwhelming number of country/administration-specific definitions.


1. What’s an index?

An index is a single number which should simplify a more complex situation. A good example is the UV Index (UVI), which represents by a single number (in practice between 0 and about 12) the danger of solar UVB irradiance. The biological effect of UVB is expressed as a dose intensity in the biologically effective unit MED/h (minimum erythemal dose per hour): a person of a certain skin complexion (phototype II) is assumed to show a reddening of the skin (an erythema) after having received this dose. Physically speaking the dose represents an energy per area (a number of Joule per m2), and the instruments measuring UVB measure a dose per time unit: our Solarlight UVB biometer at meteoLCD measures in MED/h. One UVI unit is defined as a biologically effective UVB irradiance of 25 mW/m2, where UVB extends over wavelengths between 298 and 320 nm. So here we have an index that is simply a scale factor for the physical unit: the UVI is strictly proportional to the physical quantity of the relevant phenomenon.
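Because the UVI is a pure scale factor, the conversion is a one-liner (a sketch using the 25 mW/m2 definition quoted above):

```python
def uv_index(erythemal_uvb_mW_m2):
    """UVI = erythemally weighted UVB irradiance / 25 mW/m2 (pure scale factor)."""
    return erythemal_uvb_mW_m2 / 25.0

print(uv_index(250.0))  # -> 10.0, a very high UV index
```

Contrast this with the AQIs below, which are piecewise-linear and therefore not proportional to the measured concentration.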

2. What’s air quality?

The air we inhale is a mixture of many components, some of which are considered potentially harmful above a certain level (or concentration). The three core "pollutants" are ground ozone (O3), nitrogen dioxide (NO2) and fine particles (PM10, PM2.5). Please note that all these substances exist in every type of air and have both natural and anthropogenic origins. So the qualifier "pollutant" is slightly misleading: these substances should be seen as "pollutants" only if their concentration exceeds certain levels. Besides these 3 core substances many others are often considered, mainly CO, SO2 and NO.

One real problem is the measurement of these substances and the units used. O3, for instance, is measured by big, expensive sensors through its absorption of certain UV wavelengths (e.g. our previous O341M sensor by Environnement SA); other sensors use chemical reactions that change an electrical current (amperometric sensors such as our CAIRSENS), and so on. The concentration of O3 and NO2 can be expressed as a sub-volume in a reference volume (usually in ppb = parts per billion by volume) or as a mass in a reference volume (microgram per m3 = ug/m3). The big advantage of the ppb unit is that it is independent of ambient pressure and temperature; the big disadvantage of the ug/m3 unit is that it depends on a standard pressure (1013.25 hPa) and a standard temperature (usually 25°C, but also 20°C). Many countries like the USA use ppb (or ppm), but the EU has decided to use the ug/m3 unit. As the standard temperature for gases in the EU is often taken as 20°C (but frequently also as 25°C), there is much room for confusion. The conversion factors from ppb to ug/m3 are the following (see here):

O3:    to change ppb to ug/m3 multiply by 1.996 if standard temp. is 20°C and by 1.962 if standard temp. is 25°C

NO2: to change ppb to ug/m3 multiply by 1.913 if standard temp. is 20°C and by 1.880 if standard temp. is 25°C
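These factors can be wrapped in a small helper that makes the temperature-convention pitfall explicit (a sketch using exactly the factors above):

```python
# ppb -> ug/m3 factors from the text (at 1013.25 hPa standard pressure),
# keyed by (gas, standard temperature in deg C):
FACTORS = {("O3", 20): 1.996, ("O3", 25): 1.962,
           ("NO2", 20): 1.913, ("NO2", 25): 1.880}

def ppb_to_ugm3(ppb, gas, std_temp_c=20):
    """Convert a ppb reading to ug/m3; the result depends on which standard
    temperature convention (20 or 25 deg C) the reporting network uses."""
    return ppb * FACTORS[(gas, std_temp_c)]

print(ppb_to_ugm3(50, "O3"))      # 50 ppb O3, 20 C convention: ~99.8 ug/m3
print(ppb_to_ugm3(50, "O3", 25))  # same reading, 25 C convention: ~98.1 ug/m3
```

The same physical reading thus yields two slightly different ug/m3 values, depending on the convention, which is exactly the room for confusion mentioned above.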

The concentration of fine particles (PM10, PM2.5) is universally given as mass per volume (ug/m3). PM10 are particles up to a size of 10 micrometers, PM2.5 up to 2.5 um. On-line measurement principles are either the attenuation of beta radiation or the scattering of (laser) light; all these techniques must be applied in known gas conditions and dry air (measurements with new low-cost sensors such as the Nova SDS011 give absolutely wrong results outside a very narrow humidity range). PM2.5 is much more difficult to measure, so many stations keep to PM10.

3. Not one but many AQI’s

Every pollutant has its own AQI, usually defined by a "break-point" table relating concentration to this AQI. The resulting graph is a polygon, so over its full range there is no proportionality between concentration and AQI. The numerical range of the AQI varies enormously from country to country: for instance 0-500 in the US and 0-100 in many EU countries. This numerical range is divided into quality levels (e.g. Excellent to Hazardous), and the number of these levels and their precise wording are also not standardized.

But how do you define a single AQI out of an ensemble of, for instance, 3 core pollutants? Here the methodology is the same everywhere in the world: the defining AQI is the highest sub-AQI. An example: if O3-AQI = 80, NO2-AQI = 60 and PM10-AQI = 50, then the published AQI is 80. This definition often leads to results that are difficult to understand. If O3 levels are normally high at a pure-air mountain station, but NO2 and PM levels low, that station may be rated poorer than a city with slightly lower O3 but much higher NO2 and PM concentrations.
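The "highest sub-index wins" rule from the example above, sketched in Python:

```python
def overall_aqi(sub_indices):
    """The published AQI is the highest sub-AQI; return it with its pollutant."""
    pollutant = max(sub_indices, key=sub_indices.get)
    return pollutant, sub_indices[pollutant]

# The example from the text:
print(overall_aqi({"O3": 80, "NO2": 60, "PM10": 50}))  # -> ('O3', 80)
```

Because only the maximum survives, all information about the other pollutants is discarded, which explains the counter-intuitive mountain-station example.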

4. The EPA AQI

Let us start with the US EPA (Environmental Protection Agency) AQI, as defined in the last revision. Here is the "break-point" table which defines the specific "sub-AQI's" for the different pollutants considered by the EPA (the 3 core pollutants are in yellow):

To approximately convert O3 or NO2 from ppb to ug/m3, use a multiplier of 2.

A first problem is that lower 1-hour O3 levels do not have an EPA AQI, which is unfortunate because usually all measurement stations give at least one measurement per hour. To simplify, let us take break-points 0…62 and 63…124 for the O3 1h series. Clearly the steps corresponding to the concentration are not of the same magnitude: an AQI of [0…50] corresponds to concentrations of [0…54 ppb], whereas the same AQI step of 50 at [101…150] corresponds to [125…164 ppb], a much smaller concentration increase. The graphical representation is a polygon: there is proportionality between concentration and AQI from one break-point to the next, but no proportionality from 0 to maximum concentration, as shown in the next graph (the red line is a calculated trend-line with its equation, to show the deviation from the polygon):
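Within one segment the EPA computes the sub-AQI by linear interpolation between the bounding break-points, I = (I_hi - I_lo)/(C_hi - C_lo) * (C - C_lo) + I_lo, rounded to an integer. A sketch using the two O3 segments quoted above:

```python
def epa_sub_aqi(conc, c_lo, c_hi, i_lo, i_hi):
    """EPA linear interpolation inside one break-point segment:
    I = (I_hi - I_lo) / (C_hi - C_lo) * (C - C_lo) + I_lo, rounded."""
    return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)

# The two O3 segments quoted in the text (concentrations in ppb):
print(epa_sub_aqi(27, 0, 54, 0, 50))         # halfway through "Good" -> 25
print(epa_sub_aqi(145, 125, 164, 101, 150))  # inside the 101..150 band
```

Each segment is linear, but the slopes differ from segment to segment, which is exactly why the full curve is a polygon and not a straight line.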

The EPA AQI has 6 quality levels, from GOOD to HAZARDOUS, and runs from 0 to 500. Many on-line sites give a real-time view of the EPA AQI for the entire world, the local measurements being converted according to the EPA standard:

The following figure shows the situation as of today:

Zooming into the figure allows one to inspect individual stations and the time series of the last two days.

The EPA quality levels are relatively "generous": an 8h O3 concentration of 100 ug/m3 still qualifies as GOOD, whereas in Luxembourg it runs under "MEDIUM".

(to be continued)

Click here for part 2

EU: no CO2 improvement in cars in 2017

April 23, 2018

There is a new report from the EEA (European Environment Agency) which is breathtaking regarding elementary logic. After Brussels (and many enviro-groups) launched an unprecedented Diesel bashing, sales of Diesel cars are down in the EU (a decrease of up to 19% in Greece and 17% in Luxembourg). Everybody should know that the fuel efficiency of a Diesel engine is much better than that of a gasoline engine of the same power (a Diesel car makes about 3.4 km/l more than the equivalent petrol car, both fulfilling the Euro 6 norm, see here); the report says that on average the CO2 emissions of Diesel cars are 117.9 gCO2/km, and those of petrol cars 121.6 gCO2/km.

So no wonder that EU-wide car CO2 emissions are not "improving": actually they rose by a rather minuscule 0.4 gCO2/km. No wonder also that average CO2 emissions are, as a general rule, lower in flat countries like Denmark (107.1) and the Netherlands (108.3) than in hilly/mountainous Austria (120.7) and Germany (127.1). Using a unique qualifier independent of geography/topography seems particularly silly (see next figure from the report, table added by me):

Clearly all the least developed Eastern countries have the highest emissions, probably due to topography and older car fleets. The extremely low value for Greece could well be a statistical fluke, not uncommon in many statistics from that country (even if Greeks seem to favor lighter cars).

Brussels has mandated a target of 95 gCO2/km for 2021 (i.e. in 3 years). A healthy dose of skepticism seems adequate; but be sure that EVs (electric vehicles) will be counted as zero-emitters (which clearly they are not) to beautify the statistics!


Arctic warming seen in perspective

April 2, 2018

During the first months of 2018 the Arctic temperatures were "unusually" warm, which made most media jump into quasi-hysterical writings; an example is The Guardian, never shy of pushing the alarm:

What most media forgot to tell is that after peaking in February, there was a formidable plunge to cooler temperatures in March, as seen in this graph of the Danish Meteorological Institute from today:

The blue line corresponds to the freezing point of 0°C; so even at their highest, the average temperatures of the Arctic region above latitude 80° were still "comfortably" in the freezing range; they are now practically "back" to the mean of the 1958-2002 period. You will not be surprised that this was ignored by the Guardian!

A look at the revised PAGES2k project will put things into perspective. The PAGES2k consortium was a research project making a reanalysis of the land temperatures of the NH over the last 2000 years. Heavy mistakes were made in the first publication, which were corrected in a corrigendum published in 2015 (see a more complete discussion here). The relevant data for the Arctic are available at the NOAA website here; using the published Excel file, I made the following graph where every data point is the mean of 30 years of temperature (given as an anomaly w.r.t. the period 1961-1990):

Clearly the Arctic has warmed during the past 100 years, but it has not exceeded the maximum around year 400 and is actually now below the temperature of the Medieval warming around year 1000, when atmospheric CO2 levels are assumed to have been approx. 280 ppmV. What this graph shows is the well-known approx. 1000-year oscillation of the climate system (see for instance here):

The Central Arctic sea ice area has shrunk during the last year, but using a realistic y-axis scale, this does not seem to spell disaster (link):

Looking at the winter snow cover of the Northern Hemisphere also brings us back to normality:

No visible plunge into "snow-free" winters is observed, contrary to what some "professors" prophesied ten years ago (see here)!


Before shouting "disaster!", please look at the past changes!



Increasing cosmic rays problematic for human space flight

March 12, 2018

In this blog I have written several times on the proposed influence of galactic cosmic rays on global climate, the subject of H. Svensmark's ongoing research. Here I will comment on two papers by N.A. Schwadron et al. which show that the ongoing solar minimum may cause a dangerous increase in cosmic-ray-induced radiation for human space travelers, and may force a strong shortening of permissible extra-terrestrial flight time.

  1. Radiation from space.

Simplifying the problem, one can state that 2 categories of ionizing radiation are important for an extra-terrestrial astronaut: solar energetic particles (SEP, mostly protons and high-energy electrons) and galactic cosmic rays (GCR, mostly protons, electrons and nuclei of heavier elements, discovered by Victor Hess in 1912). The latter decompose in the atmosphere into a shower of secondary electrons, muons, gamma rays etc., as illustrated in the following picture:


The dose rate from this radiation is about 60 nSv/h at sea level. At meteoLCD we measure a background of ca. 80 nSv/h. As the atmosphere is a shield against cosmic radiation, the dose rate at higher altitudes is higher, with an exponential increase as shown by this graph:


Most trans-continental flights happen at about 40000 feet, so passengers and crew members are exposed to 50 times higher dose rates, enough to classify pilots and stewards as radiation-exposed workers. These "radioactive numbers" should always be taken as indicative, not precise. The next figure from (Radiation in flight) shows for different air trips the mean radiation dose rate in microSv/h and also the total dose for a flight:


So a flight from Paris to San Francisco would cause an average dose rate of 6400 nSv/h (to be compared to the previously mentioned 80 nSv/h at Diekirch), or a total dose of 0.14 mSv (tiny when compared to the approx. 5 mSv per year one gets through background radiation and usual medical examinations).
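The total-dose arithmetic is just rate times exposure time; the 22 hours used below is my assumption (roughly a Paris-San Francisco round trip) chosen to reproduce the 0.14 mSv figure:

```python
def flight_dose_mSv(dose_rate_nSv_h, hours):
    """Total dose = mean dose rate x exposure time, converted nSv -> mSv."""
    return dose_rate_nSv_h * hours / 1e6

# 6400 nSv/h over ~22 h in the air (an assumed Paris-San Francisco round
# trip) reproduces the 0.14 mSv quoted above:
print(round(flight_dose_mSv(6400, 22), 2))  # -> 0.14
```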

To quantify the biological risk, one often takes a dose of 250 mGy (approx. 250 mSv) as the upper acceptable limit (blood-forming organ dose limit). The next table from a slide show by N.A. Schwadron (University of New Hampshire) gives the radiation dose limits corresponding to a 3% risk of dying from exposure (cSv = centi-Sievert; I added the mSv boxes for clarity):

(Table: Schwadron, dose limits for a 3% risk of exposure-induced death)

For us elders it is a consolation to see that we can tolerate much higher levels than the young ones!

2. Cosmic radiation and solar activity

The sun is a very active nuclear fusion reactor, emitting with varying intensity huge quantities of charged particles (the so-called solar wind). When the sun is very active (visible as many sunspots and a measurably high magnetic field), the solar wind is strong and deflects a big part of the galactic cosmic rays from reaching the earth. When the sun is in a lull (few sunspots, low magnetic field), more of these GCRs reach our atmosphere.


This plot shows the cosmic-ray intensity (in red) and the number of sunspots for solar cycles 19 to 23: a low sunspot count (= an inactive sun) correlates with higher cosmic radiation.

One theory is that this increased radiation creates more nuclei for condensing water vapour, increasing the lower cloud cover. This in turn diminishes the solar energy absorbed by the globe and will (or could) produce a colder climate. This is the theory of the Danish researcher Henrik Svensmark, who has verified in his lab the creation of condensation nuclei by cosmic rays. The IPCC ignores this theory, and stubbornly sees the human emission of greenhouse gases as the main or even sole cause of climate change.

Now Schwadron and coauthors have published an add-on to an earlier paper from 2014, in which they show that we are heading into a period of very high cosmic radiation (see also this article in the spaceweather archive). We are now in the midst of solar cycle 24, which is exceptionally inactive: fewer sunspots and lower magnetic activity. At least three earlier periods of low solar activity are known: the Maunder minimum around 1700, the Dalton minimum (~1815) and the Gleissberg minimum (~1910).


The next graph shows at the bottom the sunspot number for cycles 23 (1996-2008) and the ongoing cycle 24 (start 2009):


The upper red and green curves are the yearly doses received at the surface of the moon: the maximum increases from 110 mSv in 1996 to 130 mSv in 2009, and possibly to ~140 mSv in 2020, an increase of nearly 20%! If, as many solar researchers predict, solar cycle 25 has a still lower sunspot count, the radiation dose could be much higher. This does not bode well for manned space flight. It is quasi-impossible to increase radiation shielding (for obvious reasons of weight), so the flight time spent in space might well have to be shortened (arrow added to the original figure):


The upper red line shows that the limit for a male with the usual aluminium shielding diminishes from about 1100 to 750 days w.r.t. the optimal situation in the '90s.

3. Conclusion

The authors conclude the update paper with:

"We conclude that we are likely in an era of decreasing solar activity. The activity is weaker than observed in the descending phases of previous cycles within the space age, and even weaker than the predictions by Schwadron et al. [2014a]. We continue to observe large but isolated SEP events, the latest one occurring in September of 2017, caused largely by particle acceleration from successive magnetically well-connected CMEs. The radiation environment remains a critical factor with significant hazards associated both with historically large galactic cosmic ray fluxes and large but isolated SEP events."

Thus the natural changes in solar activity might not only lead to a possible new cooler period (comparable to the Dalton minimum) but also present new challenging obstacles for human space flight.



13 Mar 2018:  update with some added links and minor correction of spelling errors.




New scare: decline of lower stratospheric ozone

February 9, 2018

There is a new paper by William T. Ball (ETH Zürich) et al. (21 co-authors!!!) titled "Evidence for a continuous decline in lower stratospheric ozone offsetting ozone layer recovery", published in Atmospheric Chemistry and Physics (6 Feb 2018). This paper has induced many comments by journalists who did not carefully read it and produced the usual scary text about "we will all die from increased UVB radiation". Actually the paper does not give this conclusion, but uses often well-hidden statements to obscure its main findings (after heavy data torturing by what I think are very obscure statistics):

  • the Total Ozone Column (TOC) has remained more or less stable since 1998 in the latitudinal band -60° to +60°
  • the O3 concentration in the lower stratosphere seems to have declined by about 2 DU since 1998 (remember that the mean of this strongly varying TOC is about 300 DU!)
  • the O3 concentration in the upper stratosphere is increasing, which the authors see as a fingerprint of the efficiency of the Montreal protocol
  • the O3 in the lower troposphere is also increasing, which the authors see as a fingerprint of human activity

The conclusion of the paper: if the lower-stratosphere O3 had not been decreasing, we would notice the efficiency of the Montreal protocol in phasing out O3-destroying gases… but alas, we do not observe any efficiency for the moment.

1. The most important figures from the paper

This is figure 1; it shows the global O3 trends according to the latitude (so every point at a certain latitude is the mean trend for that latitudinal band); red colors show an increase in TOC, blue a decrease.

Figure 4 of the Ball paper shows the tropospheric O3 column (i.e. the ground ozone) is increasing:

Don’t be fooled by the slope of the linear regression line: in 12 years the total increase is just a meager 1.5 DU !

We will compare this to the measurements done at Diekirch and at Uccle (both locations at approx. 50° latitude North, i.e. at the extreme right border of the graphs).

Here is what we measure in Diekirch:

The TOC at Diekirch seems to be slightly decreasing since 2002, even if the general trend since 1998 is positive.

But the ground ozone levels are slightly increasing since 2002 (by 0.2 ug/m3 per year; please compare to the left-side scale!)

Uccle finds this for the TOC (link):

So here we see two periods: a decline from about 1980 to 1996, and then an increase!

Uccle also has a good figure with the trends of their balloon soundings (I added the comments):

Here the lower stratosphere corresponds to the yellow marked region: just below that region, we see that over the years the O3 concentration is increasing, and that the changes in the yellow region are minimal.

Conclusion: the regional behaviour at our latitudes (50° North) does not quite correspond to the global latitudinal findings of the Ball paper.


2. The UVB radiation measured at ground level.

Here is what we measured in Diekirch during the last 15 years:

UVB intensity remains practically constant over the whole period 2002 to 2017.

I wrote several comments and papers on the relationship between TOC and UVB levels at ground level; here is the main figure from my paper of 2013:

This figure clearly shows that when the TOC declines, UVB radiation increases (compare the two highlighted days). But alas, things do not always go so smoothly over longer periods. The next figure shows the results of measurements done by Klara Czikova et al. in the Czech Republic over 50 years ("Reconstruction and analysis of erythemal UV radiation time series from Hradec Králové (Czech Republic) over the past 50 years").


Just look at the years between the two blue lines: TOC is more or less constant, cloud cover increases and, quite inexplicably, the yearly UVB also increases (the left scale shows the daily mean dose). This means that short-time phenomena can show a different behaviour than yearly averages or totals. Note also the decreasing UVB dose from about 2008 on.


3. Conclusions

The findings of the Ball et al. paper may be interesting from a scientific standpoint, but they are not a cause for any panic. The important factor for health reasons is the UVB dose, and that dose either remains constant or declines in our region. Does the Ball et al. paper vindicate the Montreal protocol? Yes and no: if in the upper stratosphere ozone-depleting substances are really decreasing and O3 concentrations increasing, then this should point to an efficiency. But the elephant in the room is the declining solar (and UV) activity during the last years, as shown by this graph of the 10.7 cm radio flux (a proxy for solar UV activity):

Clearly solar activity is on a decline since 2000, so less ozone will be created at the lower layers of the stratosphere (even if the O3 destroying substances had remained constant…). The authors ignore this, and it might well be that the O3 depletion in the lower stratosphere is mostly a consequence of declining solar activity!





Sea-level budget along US Atlantic North-West coast

February 4, 2018

An important new paper has been published by Thomas Frederikse et al. (Delft University) on the sea-level changes along the Northwest Atlantic coast of North America (between latitudes 35° and 45°, from Sewells Point to Halifax). The authors check whether a budget involving steric changes and vertical ground movements agrees with the tide-gauge observations over the last 50 years. I confess that I have a positive bias for sea-level research done by Dutch scientists, as opposed to the scary stories told by people like Stefan Rahmstorf from the PIK. The Dutch have centuries of experience with measuring, battling and mitigating a harsh sea that constantly tries to invade the many regions below sea level (an area which amounts to about a third of the country). So Dutch research on this topic usually is much more sober and not agenda-driven. As a start you may read this paper by Rovere et al. (2016) as a very good and clear introduction to the subject of sea-level change.

  1. Sea-level changes at different regions are vastly different

The following figure shows how different the sea levels measured by tide gauges can be; remember that these gauges are installed on the ground, and strictly speaking measure the relative sea level. At Stockholm the ground is rising due to post-glacial rebound (GIA, Glacial Isostatic Adjustment), whereas in Galveston (Texas, USA) there is considerable ground subsidence (sinking), mostly due to excessive ground-water pumping (see here), so that local tide gauges report an alarming (relative) sea-level rise.

For Dutch people the recordings at Maassluis (a lock on the outlet of the Maas river into the North Sea) are reassuring: over 165 years the relative sea-level rise is only 1.5 mm/year, and shows no sign of acceleration. As the globe has been emerging from the Little Ice Age since about 1850, such a rise has an essentially natural cause, and is not (much) caused by human activity or greenhouse gas emissions! What is surprising is that despite the big differences in the amplitude and sign of the changes, the trends are practically linear, i.e. persistent!
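The persistence of such a linear trend is easy to check numerically. The sketch below fits a straight line to a synthetic monthly tide-gauge series (hypothetical numbers chosen to mimic the Maassluis case, not the actual record):

```python
import numpy as np

# Synthetic monthly relative sea-level series (hypothetical, NOT the real
# Maassluis data): a 1.5 mm/yr linear rise plus random noise.
rng = np.random.default_rng(42)
years = np.arange(1850, 2015, 1 / 12)      # monthly time axis, 165 years
true_trend = 1.5                           # assumed trend, mm/yr
level = true_trend * (years - years[0]) + rng.normal(0, 20, years.size)

# Least-squares straight-line fit: the slope is the trend in mm/yr.
slope, intercept = np.polyfit(years, level, 1)
print(f"fitted trend: {slope:.2f} mm/yr")  # close to the assumed 1.5 mm/yr
```

With 165 years of monthly data, even noisy observations pin the linear trend down very tightly, which is why such long gauge records are so valuable.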

The figure also tells us that a global sea-level may be an interesting scientific curiosity, but this modeled “virtual” level has no significance at all for local mitigation policies.

2. What are the main contributors to sea-level change?

Steric changes are related to changes in sea-water density; the density can change for instance through warming (such changes are often also called eustatic when given relative to a fixed reference such as the centre of the globe) and/or an inflow of fresher (less salty) water. Lower density means more volume for a given mass, i.e. a rising sea level if the geological tub holding the oceans remains unchanged (which is not the case!). The following picture shows that the density changes are far from uniform over the globe. As a consequence, local steric sea-level changes differ considerably, from -2 to +2 mm/year.
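As a back-of-the-envelope sketch of the steric effect (my own illustrative numbers, not values from the paper): warming a water column of height h by ΔT expands it by roughly Δh = α·ΔT·h, where α is the thermal expansion coefficient of sea water:

```python
# Back-of-the-envelope steric sea-level rise (illustrative values only).
alpha = 1.5e-4   # thermal expansion coefficient of sea water, 1/K (approx.)
h = 700.0        # depth of the warming layer, m (assumed)
dT = 0.02        # assumed warming rate of that layer, K/yr

dh_per_year = alpha * dT * h * 1000.0   # steric rise, mm/yr
print(f"steric rise: {dh_per_year:.2f} mm/yr")   # 2.10 mm/yr
```

With these assumed inputs the result, about 2 mm/yr, lies at the upper end of the -2 to +2 mm/year range quoted above; a cooling or freshening layer would give the negative sign.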

Isostatic changes are related to local vertical ground movements, caused for instance by excessive pumping of ground water or the increased load of new buildings and heavy infrastructure, but most importantly by glacial isostatic adjustment (GIA): GIA is the rebound of the Earth's crust (both positive and negative) caused by the disappearance of the ice mass that accumulated during the last great glacial period (which ended about 10000 years ago). This is a very slow process, with big regional differences. Stockholm on the Baltic coast, for instance, is rising by more than 4 mm/year, and parts of Greenland by 12 mm/year (see here); this paper shows that the New Zealand coast has both uplifting and subsiding parts, with changes from -1 to +1 mm/year.

The next picture from the paper shows that most of the 14 US stations used show negative vertical land movement, i.e. subsidence (look at the grey bars: only 4 stations show uplift, mostly negligible except at station #3).

3. Lessons learnt

The major aim of the Frederikse paper was to establish a model for local sea level, i.e. to make a budget of the different contributions and compare this budget to the tide-gauge observations. The results are quite good:

As this figure shows, the observations of the tide gauges (grey bars) are very (or at least reasonably) close to the results of the budget (orange bars). Especially interesting is the comparison of the contributions of ice melt (glaciers, Arctic and Antarctic) with the GIA: I have highlighted these on the next table:

The sum of ice-melt is 0.57 mm/yr, that of the GIA (here subsidence) is 1.75 mm/yr, about three times higher! So if we believe that all ice melt is due to the human emissions of greenhouse gases, this anthropogenic “sin” pales in comparison to the natural geological influence.

Assuming the ice-melt acceleration remains constant, the corresponding sea-level rise over 80 years would be 0.5*(0.009+0.003+0.015)*80**2 = 86.4 mm, i.e. less than 10 cm! This must be compared to the linear geologically-caused increase of 1.75*80 = 140 mm.
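These numbers are easy to verify; the snippet below simply redoes the arithmetic, using the accelerations and the GIA trend quoted in the text above:

```python
# Projected sea-level contributions over 80 years, using the numbers
# quoted in the text (accelerations in mm/yr^2, GIA trend in mm/yr).
accel = 0.009 + 0.003 + 0.015          # sum of ice-melt accelerations, mm/yr^2
years = 80

ice_melt_rise = 0.5 * accel * years**2  # s = 1/2 * a * t^2 (constant acceleration)
gia_rise = 1.75 * years                 # linear geological subsidence

print(f"ice melt: {ice_melt_rise:.1f} mm")   # 86.4 mm
print(f"GIA:      {gia_rise:.1f} mm")        # 140.0 mm
```

The kinematic formula s = ½·a·t² applies because a constant acceleration of the rise rate is assumed; the GIA contribution, being a constant rate, grows only linearly.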

4. Conclusion

The Dutch study does not point to any human-caused rise in sea level that would present a big problem around 2100. Changes in local (relative) sea level at the US Northwest Atlantic coast are real, but come predominantly from natural factors. This does not mean that no protection work will have to be done in the distant future, but it puts the contribution of human GHG emissions into perspective.


PS1: the first two figures are from a slide-show by Frederikse. I lost the link.

PS2: A paper by Paul Sterlini (Koninklijk Nederlands Meteorologisch Instituut) et al., published in GRL in July 2017, comes to similar conclusions. The title is “Understanding the spatial variation of sea level rise in the North Sea using satellite altimetry” (paywalled, free access through Researchgate). This paper finds that meteorological effects account for most of the observed regional variation in local SLR. The contribution of ice melt (glaciers + Greenland) around the Dutch coast is shown here to be less than 0.25 mm/yr for the period 1993 to 2014:

Fatal bio-energy decisions

January 22, 2018

Several times in the past I wrote on this blog about the problems of burning wood and of presenting this type of energy use as environmentally friendly and moral. The very EU friendly website Euractiv has a damning article in its Jan.8 2018 edition titled “The EU’s bioenergy policy isn’t just damaging the climate and forests, it’s killing people“.

Here is a picture showing the fine-particle emissions from different types of wood burning (link):

Compared to oil and gas burning, wood is a dirty, often even extremely dirty, energy source. An important part of the EU's bioenergy is wood, and the political Zeitgeist was to shift fossil-fuel-burning power stations to wood, like the infamous UK Drax power station, which burned 6 million tons of wood pellets in 2015. Estimates suggest that by burning wood this station emits 12% more CO2 than if it burned coal (link to picture):

The Euractiv article cites a study by Sigsgaard estimating 40000 premature deaths per year in the EU from biomass pollution. Well, I am not a fan of these statistics, but it is clear that fine-particle emissions from open wood fires are huge compared to those of the lamented Diesel cars. Curiously, a group of scientists published an article on 15 January 2018 in Euractiv pushing the need to increase biomass burning. Titled “Bioenergy at the centre of EU renewable energy policy”, the 63 authors (yes: 63!) candidly write that “With regards to air quality, it is very difficult to identify the impacts of bioenergy combustion in isolation”. This is nonsense, as fine-particle emissions from wood burning can be measured and compared to other sources, as has been done in many research papers.

The irony of the whole biomass problem is that bioenergy has been promoted by the “Über-greens” as one of the climate-saving policies; ill-considered and hastily promoted, bioenergy now raises its ugly head and makes the ordinary citizen wonder whether “expert advice” (like that of the 63 authors) really should be relied upon…