EU: no CO2 improvement in cars in 2017

April 23, 2018

There is a new report from the EEA (European Environment Agency) which is breathtaking as far as elementary logic is concerned. After Brussels (and many enviro-groups) launched an unprecedented Diesel-bashing campaign, sales of Diesel cars are down in the EU (a decrease of up to 19% in Greece and 17% in Luxembourg). Everybody should know that the fuel efficiency of a Diesel engine is much better than that of a gasoline engine of the same power (a Diesel car drives about 3.4 km/l more than the equivalent petrol car, both fulfilling the Euro 6 norm, see here); the report says that on average the CO2 emissions of Diesel cars are 117.9 gCO2/km, and those of petrol cars 121.6 gCO2/km.

So no wonder that EU-wide car CO2 emissions are not “improving”: they actually rose by a rather minuscule 0.4 gCO2/km. No wonder also that average CO2 emissions are, as a general rule, lower in flat countries like Denmark (107.1) and the Netherlands (108.3) than in hilly/mountainous Austria (120.7) and Germany (127.1). Using a single limit value independent of geography/topography seems particularly silly (see next figure from the report, table added by me):
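A simple sales-mix calculation already explains most of this: shifting buyers from the slightly cleaner Diesels to petrol cars must push the fleet average up. The little Python sketch below uses the two EEA per-car figures quoted above; the market shares are invented round numbers, purely for illustration.

```python
# Minimal sketch: sales-weighted fleet CO2 for a diesel/petrol-only new-car fleet.
# The per-car values are the EEA figures quoted above; the market shares are
# invented round numbers, purely for illustration.
diesel_g_km, petrol_g_km = 117.9, 121.6

def fleet_average(diesel_share: float) -> float:
    """Sales-weighted average CO2 emission in gCO2/km."""
    return diesel_share * diesel_g_km + (1.0 - diesel_share) * petrol_g_km

before, after = fleet_average(0.50), fleet_average(0.40)   # assumed 10-point drop in diesel share
print(f"fleet average: {before:.2f} -> {after:.2f} gCO2/km (+{after - before:.2f})")
# -> a 10-point shift alone adds ~0.4 gCO2/km, the order of the reported increase
```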

Clearly all the least developed Eastern countries have the highest emissions, probably due to topography and older car fleets. The extremely low value for Greece could well be a statistical fluke, not uncommon in many statistics from that country (even if Greeks seem to favor lighter cars).

Brussels has mandated a target of 95 gCO2/km for 2021 (i.e. in 3 years). A healthy dose of skepticism seems adequate; but be sure that EVs (electric vehicles) will be counted as zero-emitters (which they clearly are not) to beautify the statistics!

 


Arctic warming seen in perspective

April 2, 2018

During the first months of 2018 the Arctic temperatures were “unusually” warm, which made most media jump to quasi-hysterical writing; an example is The Guardian, never shy of pushing the alarm:

What most media forget to tell is that after peaking in February, there was a formidable plunge to cooler temperatures in March, as seen on this graph of the Danish Meteorological Institute from today:

The blue line corresponds to the freezing point of 0°C; so even at their highest, the average temperatures of the Arctic region above latitude 80°N were still “comfortably” in the freezing range; they are now practically “back” to the mean of the 1958-2002 period. You will not be surprised that this was ignored by the Guardian!

A look at the revised PAGES2k project puts things into perspective. The PAGES2k consortium was a research project to produce a reanalysis of the Northern Hemisphere land temperatures of the last 2000 years. Serious mistakes were made in the first publication, which were fixed in a corrigendum published in 2015 (see a more complete discussion here). The relevant data for the Arctic are available at the NOAA website here; using the published Excel file, I made the following graph, where every data point is the mean of 30 years of temperature (given as an anomaly with respect to the 1961-1990 period):

Clearly the Arctic has warmed during the past 100 years, but it has not exceeded the maximum around year 400 and is actually now below the temperature of the Medieval warming around year 1000, when atmospheric CO2 levels are assumed to have been approx. 280 ppmV. What this graph shows is the well-known approx. 1000-year oscillation of the climate system (see for instance here):

The Central Arctic sea ice area has shrunk during the last year, but using a realistic y-axis scale this does not seem to spell disaster (link: www.climate4you.com):

Looking at the winter snow cover of the Northern Hemisphere also brings us back to normality:

No visible plunge into “snow-free” winters is observed, contrary to what some “professors” prophesied ten years ago (see here)!

Conclusion:

Before shouting “disaster!”, please look at past changes!

 

 

Increasing cosmic rays problematic for human space flight

March 12, 2018

In this blog I have written several times on the proposed influence of galactic cosmic rays on global climate, the subject of H. Svensmark’s ongoing research. In this post I will comment on two papers by N.A. Schwadron et al. which show that the ongoing solar minimum may cause a dangerous increase in cosmic-ray-induced radiation for human space travelers, and may strongly shorten the permissible extra-terrestrial flight time.

  1. Radiation from space.

Simplifying the problem, one can state that 2 categories of ionizing radiation are important for an extra-terrestrial astronaut: solar energetic particles (SEP, mostly protons and high-energy electrons) and galactic cosmic rays (GCR, mostly protons, electrons and nuclei of heavier elements, discovered by Victor Hess in 1912). The latter produce in the atmosphere a shower of secondary electrons, muons, gamma rays etc., as illustrated in the following picture:

[Figure: cosmic ray shower in the atmosphere]

The dose rate from this radiation is about 60 nSv/h at sea level. At meteoLCD we measure a background of ca. 80 nSv/h. As the atmosphere is a shield against cosmic radiation, the dose rate at higher altitudes is obviously higher, with an exponential increase as shown by this graph:

[Figure: dose rate versus flying altitude]

Most trans-continental flights happen at about 40000 feet, so passengers and crew members are exposed to roughly 50 times higher dose rates, enough to classify pilots and flight attendants as radiation-exposed workers. These “radioactive numbers” should always be taken as indicative, not precise. The next figure from www.radioactivity.eu.com (Radiation in flight) shows for different air trips the mean radiation dose rate in microSv/h and also the total dose for a flight:

[Figure: radiation dose rates and total doses for different flights]

So a flight from Paris to San Francisco corresponds to an average dose rate of 6400 nSv/h (to be compared to the previously mentioned 80 nSv/h at Diekirch) or a total dose of 0.14 mSv (tiny when compared to the approx. 5 mSv per year one gets from background radiation and the usual medical examinations).
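To put these numbers side by side, here is a trivial Python sketch; the 6400 nSv/h and 0.14 mSv values are taken from the figure cited above, the 80 nSv/h and ~5 mSv/yr are the Diekirch and yearly figures already mentioned.

```python
# Putting the quoted flight doses into perspective (all values as cited above).
rate_flight_nSv_h = 6400.0      # mean dose rate Paris -> San Francisco (radioactivity.eu.com)
rate_ground_nSv_h = 80.0        # background measured at Diekirch (meteoLCD)
total_flight_dose_mSv = 0.14    # total dose quoted for the trip
yearly_dose_mSv = 5.0           # approx. yearly background + usual medical examinations

print(f"in-flight / ground dose-rate ratio : {rate_flight_nSv_h / rate_ground_nSv_h:.0f}x")
print(f"flight dose as share of yearly dose: {100 * total_flight_dose_mSv / yearly_dose_mSv:.1f} %")
# -> 80x higher dose rate, but only ~2.8 % of a typical yearly dose
```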

To quantify the biological risk, one often takes a dose of 250 mGy (approx. 250 mSv) as the upper acceptable limit (blood-forming-organ dose limit). The next table from a slide show by N.A. Schwadron (University of New Hampshire) gives the radiation dose limits corresponding to a 3% risk of dying from the exposure (cSv = centi-Sievert; I added the mSv boxes for clarity):

[Table: dose limits for a 3% risk of exposure-induced death (Schwadron)]

For us elders it is a consolation to see that we can tolerate much higher doses than the young ones!

2. Cosmic radiation and solar activity

The sun is a very active nuclear fusion reactor, emitting with varying intensity huge quantities of charged particles (the so-called solar wind). When the sun is very active (visible as many sunspots and a measurably high magnetic field), the solar wind is strong and deflects a big part of the galactic cosmic rays away from the Earth. When the sun is in a lull (few sunspots, low magnetic field), more of these GCRs reach our atmosphere.

[Figure: cosmic ray intensity and monthly sunspot numbers since 1958]

This plot shows the cosmic ray intensity (in red) and the number of sunspots for solar cycles 19 to 23: a low sunspot count (= an inactive sun) correlates with higher cosmic radiation.

One theory is that this increased radiation creates more nuclei for condensing water vapour and thus increases the lower cloud cover. This in turn diminishes the solar energy absorbed by the globe, and will (or could) produce a colder climate. This is the theory of the Danish researcher Henrik Svensmark, who has verified in his lab the creation of condensation nuclei by cosmic rays. The IPCC ignores this theory, and stubbornly sees the human emission of greenhouse gases as the main or even sole cause of climate change.

Now, Schwadron and coauthors have published an add-on to an earlier paper from 2014, in which they show that we are heading into a period of very high cosmic radiation (see also this article in the spaceweather archive). We are now in the midst of solar cycle 24, which is exceptionally inactive: fewer sunspots and lower magnetic activity. At least three periods of low solar activity are known to have existed: the Maunder minimum around 1700, the Dalton minimum (~1815) and the Gleissberg minimum (~1910).

[Figure: Maunder, Dalton and Gleissberg minima (Schwadron et al.)]

The next graph shows at the bottom the sunspot number for cycles 23 (1996-2008) and the ongoing cycle 24 (start 2009):

[Figure: predicted dose at the lunar surface (Schwadron et al.)]

The upper red and green curves are the yearly doses received at the surface of the moon: the maximum increases from 110 mSv in 1996 to 130 mSv in 2009 (an increase of nearly 20%) and possibly to ~140 mSv in 2020. If, as many solar researchers predict, solar cycle 25 will have a still lower sunspot count, the radiation dose could be much higher still. This does not bode well for manned space flight. It is quasi impossible to increase radiation shielding (for obvious reasons of weight), so the flight time spent in space might well have to be shortened (arrow added to the original figure):

[Figure: permissible days in space under different shielding (Schwadron et al. 2018), arrow added]

The upper red line shows that the limit for a male astronaut with the usual aluminium shielding diminishes from about 1100 days to 750 days with respect to the optimal situation in the 1990s.
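A few lines of Python make the relative changes explicit; only the mSv and day values quoted above are used.

```python
# Quick percentage checks on the values read off the two Schwadron figures
# (only the mSv and day figures quoted above are used).
dose_1996_mSv, dose_2009_mSv, dose_2020_mSv = 110, 130, 140
days_1990s, days_now = 1100, 750

print(f"lunar surface dose 1996 -> 2009: +{100 * (dose_2009_mSv - dose_1996_mSv) / dose_1996_mSv:.0f} %")
print(f"lunar surface dose 1996 -> 2020: +{100 * (dose_2020_mSv - dose_1996_mSv) / dose_1996_mSv:.0f} %")
print(f"permissible days in space      : -{100 * (days_1990s - days_now) / days_1990s:.0f} %")
# -> about +18 %, +27 % and roughly a one-third cut in allowed mission length
```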

3. Conclusion

The authors conclude the update paper with:
“We conclude that we are likely in an era of decreasing solar activity. The activity is weaker than observed in the descending phases of previous cycles within the space age, and even weaker than the predictions by Schwadron et al. [2014a]. We continue to observe large but isolated SEP events, the latest one occurring in September of 2017 caused largely by particle acceleration from successive magnetically well-connected CMEs. The radiation environment remains a critical factor with significant hazards associated both with historically large galactic cosmic ray fluxes and large but isolated SEP events.”

Thus the natural changes in solar activity might not only lead to a possible new cooler period (comparable to the Dalton minimum) but also present new challenging obstacles for human space flight.

_____________________________________________

History:

13 Mar 2018:  update with some added links and minor correction of spelling errors.

 

 

 

New scare: decline of lower stratospheric ozone

February 9, 2018

There is a new paper by William T. Ball (ETH Zürich) et al. (21 co-authors!) titled “Evidence for a continuous decline in lower stratospheric ozone offsetting ozone layer recovery”, published in Atmospheric Chemistry and Physics (6 Feb 2018). This paper has prompted many comments by journalists who did not carefully read it and produced the usual scary text about “we will all die from increased UVB radiation”. Actually the paper does not support this conclusion, but it uses often well-hidden statements to obscure its main findings (after heavy data torturing by what I think are very obscure statistics):

  • the Total Ozone Column (TOC) has remained more or less stable since 1998 in the latitudinal band -60° to +60°
  • the O3 concentration in the lower stratosphere seems to have declined by about 2 DU since 1998 (remember that the mean of this strongly varying TOC is about 300 DU!)
  • the O3 concentration in the upper stratosphere is increasing, which the authors see as a fingerprint of the efficiency of the Montreal protocol
  • the O3 in the lower troposphere is also increasing, which the authors see as a fingerprint of human activity

The conclusion of the paper: if the lower-stratospheric O3 had not been decreasing, we would by now see the efficiency of the Montreal protocol in phasing out O3-destroying gases… but alas, for the moment no such effect can be observed.

1. The most important figures from the paper

This is figure 1; it shows the global O3 trends according to the latitude (so every point at a certain latitude is the mean trend for that latitudinal band); red colors show an increase in TOC, blue a decrease.

Figure 4 of the Ball paper shows the tropospheric O3 column (i.e. the ground ozone) is increasing:

Don’t be fooled by the slope of the linear regression line: in 12 years the total increase is just a meager 1.5 DU!

We will compare this to the measurements done at Diekirch and at Uccle (both locations at approx. 50° latitude North, i.e. at the extreme right border of the graphs).

Here is what we measure in Diekirch:

The TOC at Diekirch seems to be slightly decreasing since 2002, even if the general trend since 1998 is positive.

But the ground ozone levels are slightly increasing since 2002 (by 0.2 ug/m3 per year; please compare to the left-side scale!).

Uccle finds this for the TOC (link):

So here we see two periods: a decline from about 1980 to 1996, and then an increase!

Uccle also has a good figure with the trends of their balloon soundings (I added the comments):

Here the lower stratosphere corresponds to the yellow marked region: just below that region, we see that over the years the O3 concentration is increasing, and that the changes in the yellow region are minimal.

Conclusion: the regional behaviour at our latitudes (50° North) does not quite correspond to the global latitudinal findings of the Ball paper.

 

2. The UVB radiation measured at ground level.

Here is what we measured in Diekirch during the last 15 years:

UVB intensity remains practically constant over the whole period 2002 to 2017.

I wrote several comments and papers on the relationship between TOC and UVB levels at ground level; here is the main figure from my 2013 paper:

This figure clearly shows that when the TOC declines, UVB radiation increases (compare the two highlighted days). But alas, things do not always go so smoothly over longer periods. The next figure shows the results of measurements made by Klara Czikova et al. in the Czech Republic over 50 years (“Reconstruction and analysis of erythemal UV radiation time series from Hradec Králové (Czech Republic) over the past 50 years”):

 

Just look at the years between the two blue lines: TOC is more or less constant, cloud cover increases and, quite inexplicably, the yearly UVB also increases (the left scale shows the daily mean dose). This means that short-term phenomena can show a different behaviour than yearly averages or totals. Note also the decreasing UVB dose from about 2008 on.
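For readers who want a feel for the orders of magnitude: the usual first-order link between ozone column and erythemal UV at the ground is a power law, UV ∝ TOC^(-RAF), with a radiation amplification factor RAF of roughly 1.1-1.2 under clear skies. This is a textbook approximation and my own assumption here, not a result from the Ball paper or from the meteoLCD data; the sketch below applies it to the ~2 DU decline on a ~300 DU column quoted earlier.

```python
# Minimal sketch of the usual power-law link between total ozone column (TOC) and
# erythemal UV at the ground: UV ~ TOC**(-RAF). RAF ~ 1.1-1.2 is a commonly quoted
# clear-sky value and an assumption here, not a result from the Ball paper or from
# the meteoLCD data.
def uv_change_percent(toc_old_DU: float, toc_new_DU: float, raf: float = 1.2) -> float:
    """Relative change of erythemal UVB (in %) for a change of the ozone column."""
    return 100.0 * ((toc_old_DU / toc_new_DU) ** raf - 1.0)

# A 2 DU decline on a ~300 DU column (the lower-stratospheric decline quoted earlier):
print(f"{uv_change_percent(300.0, 298.0):.2f} % more UVB")   # ~ +0.8 %, hardly measurable
```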

 

3. Conclusions

The findings of the Ball et al. paper may be interesting from a scientific standpoint, but they are not a cause for any panic. The important factor for health reasons is the UVB dose, and that dose either remains constant or declines in our region. Does the Ball et al. paper vindicate the Montreal protocol? Yes and no: if in the upper stratosphere the ozone-depleting substances really are decreasing and the O3 concentrations increasing, then this would point to an efficiency. But the elephant in the room is the decreasing solar (and UV) activity during the last years, as shown by this graph of the 10.7 cm radio flux (a proxy for solar UV activity):

Clearly solar activity has been on the decline since 2000, so less ozone will be created in the lower layers of the stratosphere (even if the O3-destroying substances had remained constant…). The authors ignore this, and it might well be that the O3 depletion in the lower stratosphere is mostly a consequence of declining solar activity!

 

 

 

 

Sea-level budget along US Atlantic North-West coast

February 4, 2018

An important new paper has been published by Thomas Frederikse et al. (Delft University) on the sea-level changes along the north-western Atlantic coast of North America (between latitudes 35°N and 45°N, from Sewells Point to Halifax). The authors try to check whether a budget built from the individual contributions (density changes, ice melt, vertical land movement) agrees with the tide-gauge observations of the last 50 years. I confess that I have a positive bias for sea-level research done by Dutch scientists, as opposed to the scary stories told by people like Stefan Rahmstorf from the PIK. The Dutch have centuries-long experience with measuring, battling and mitigating a harsh sea that constantly tries to invade the many regions below sea level (an area which amounts to about a third of the country). So Dutch research on this topic is usually much more sober and not agenda-driven. As a start you may read this paper by Rovere et al. (2016) as a very good and clear introduction to the subject of sea-level change.

  1. Sea-level changes at different regions are vastly different

The following figure shows how different the sea levels measured by tide gauges can be; remember that these gauges are installed on the ground, and strictly speaking measure the relative sea level. At Stockholm the ground is rising due to the post-glacial rebound (GIA, Glacial Isostatic Adjustment), whereas in Galveston (Texas, USA) there is a big ground subsidence (ground sinking), mostly due to excessive ground-water pumping (see here), so that the local tide gauges report an alarming (relative) sea-level rise.

For Dutch people the recordings at Maassluis (a lock at the outlet of the Maas river into the North Sea) are reassuring: in 165 years the relative sea-level rise is only 1.5 mm/year, and shows no sign of acceleration. As the globe has been leaving the Little Ice Age since about 1850, such a rise has an essentially natural cause, and is not (much) caused by human activity or greenhouse gas emissions! What is surprising is that despite the big differences in the amplitude and sign of the changes, the trends are practically linear, i.e. persistent!

The figure also tells us that a global mean sea level may be an interesting scientific curiosity, but this modeled “virtual” level has no significance at all for local mitigation policies.

2. What are the main contributors to sea-level change?

Steric changes are related to density changes: sea-water density can change for instance through warming (thermosteric changes) and/or through the inflow of less salty water (halosteric changes). Lower density means more volume for a given mass, i.e. a rising sea level if the geological basin holding the oceans remains unchanged (which is not the case!). The following picture shows that the density changes are far from uniform over the globe. As a consequence, local steric sea-level changes are quite different, from -2 to +2 mm/year.
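As a feel for the magnitudes involved, here is a minimal sketch of a purely thermosteric contribution, assuming a single ocean layer warming uniformly. The expansion coefficient, layer depth and warming rate are illustrative assumptions of mine, not numbers from the Frederikse paper.

```python
# Minimal sketch of a thermosteric (density-driven) sea-level change for a single
# ocean layer warming uniformly. Expansion coefficient, layer depth and warming
# rate are illustrative assumptions, not values from the Frederikse paper.
alpha_per_K = 2.0e-4     # thermal expansion coefficient of sea water (varies with T, S, p)
layer_depth_m = 700.0    # thickness of the warming layer
warming_K_per_yr = 0.01  # assumed warming rate of that layer

rise_mm_per_yr = alpha_per_K * warming_K_per_yr * layer_depth_m * 1000.0
print(f"thermosteric rise: {rise_mm_per_yr:.1f} mm/yr")   # ~1.4 mm/yr for these numbers
```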

Isostatic changes are related to local vertical ground movements, caused for instance by excessive pumping of ground water or the increased pressure of new buildings and heavy infrastructure, but most importantly by glacial isostatic adjustment (GIA): GIA is the rebound of the Earth's crust (both positive and negative) caused by the disappearance of the ice mass that accumulated during the last great glacial period (which ended about 10000 years ago). This is a very slow process, with big regional differences. The ground at Stockholm on the Baltic coast, for instance, is rising by more than 4 mm/year, and by 12 mm/year around Greenland (see here); this paper shows that the New Zealand coast has both uplifting and subsiding parts, with changes from -1 to +1 mm/year.

The next picture from the paper shows that practically all of the 14 stations used show negative vertical land movement, i.e. subsidence (look at the grey bars: only 4 stations show uplift, mostly negligible except at station #3).

3. Lessons learnt

The major aim of the Frederikse paper was to establish a model for the local sea level, i.e. to make a budget of the different contributions and to compare this budget to the tide-gauge observations. The results are quite good:

As this figure shows, the observations of the tide gauges (grey bars) are very (or at least reasonably) close to the results of the budget (orange bars). Especially interesting is the comparison of the contributions of ice melt (glaciers, Arctic and Antarctic) with the GIA: I have highlighted these on the next table:

The sum of ice-melt is 0.57 mm/yr, that of the GIA (here subsidence) is 1.75 mm/yr, about three times higher! So if we believe that all ice melt is due to the human emissions of greenhouse gases, this anthropogenic “sin” pales in comparison to the natural geological influence.

Assuming a constant acceleration, the ice-melt contribution would cause over 80 years a sea-level rise of 0.5*(0.009+0.003+0.015)*80² = 86.4 mm, i.e. less than 10 cm! This must be compared to the linear, geologically caused increase of 1.75*80 = 140 mm.
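The same back-of-envelope arithmetic in a few lines of Python (only the acceleration terms and the 1.75 mm/yr subsidence rate already quoted above are used):

```python
# The same back-of-envelope comparison in code: a constantly accelerated ice-melt
# contribution versus the linear GIA subsidence (all numbers as quoted above).
ice_melt_accel_mm_yr2 = [0.009, 0.003, 0.015]   # the three ice-melt acceleration terms
gia_rate_mm_yr = 1.75                           # vertical land motion (subsidence)
years = 80

ice_melt_rise_mm = 0.5 * sum(ice_melt_accel_mm_yr2) * years**2   # s = 1/2 * a * t^2
gia_rise_mm = gia_rate_mm_yr * years
print(f"accelerated ice-melt contribution: {ice_melt_rise_mm:.1f} mm")   # 86.4 mm
print(f"linear GIA (subsidence)          : {gia_rise_mm:.1f} mm")        # 140.0 mm
```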

4. Conclusion

The Dutch study does not point to any human-caused rise in sea level that would present a big problem around 2100. Changes in the local (relative) sea level on the western Atlantic coast of North America are real, but come predominantly from natural factors. This does not mean that no protection works will have to be built in the far future, but it puts the contribution of human GHG emissions into perspective.

 

PS1: the first two figures are from a slide-show by Frederikse. I lost the link.

PS2: A paper by Paul Sterlini (Koninklijk Nederlands Meteorologisch Instituut) et al., published in GRL in July 2017, comes to similar conclusions. The title is “Understanding the spatial variation of sea level rise in the North Sea using satellite altimetry” (paywalled; free access through ResearchGate). This paper finds that meteorological effects account for most of the observed regional variation in local SLR. The contribution of ice melt (glaciers + Greenland) around the Dutch coast is shown here to be less than 0.25 mm/yr for the period 1993 to 2014:

Fatal bio-energy decisions

January 22, 2018

Several times in the past I have written on this blog about the problems of burning wood and of presenting this type of energy use as environmentally friendly and moral. The very EU-friendly website Euractiv has a damning article in its Jan. 8, 2018 edition titled “The EU’s bioenergy policy isn’t just damaging the climate and forests, it’s killing people”.

Here is a picture showing the fine-particle emissions from different types of wood burning (link):

Compared to oil and gas burning, wood is a dirty, often even extremely dirty, energy. An important part of the EU's bioenergy is wood, and the political Zeitgeist was to shift fossil-fuel-burning power stations to wood, like the infamous UK Drax power station, which burned 6 million tons of wood pellets in 2015. Estimates say that by burning wood this station emits 12% more CO2 than if it had burned coal (link to picture):
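Why can wood come out worse than coal at the stack? A rough sketch with typical IPCC-type default emission factors shows the effect; the net electric efficiencies below are my own illustrative assumptions, not Drax-specific data, so the result does not exactly match the 12% quoted above.

```python
# Rough sketch of why wood can emit more CO2 at the stack than coal per unit of
# electricity. The emission factors are typical IPCC-style defaults (kg CO2 per GJ
# of fuel heat); the net electric efficiencies are my own illustrative assumptions,
# not Drax-specific data, so the result differs from the 12 % quoted above.
emission_factor_kgCO2_per_GJ = {"wood pellets": 112.0, "coal": 94.6}
net_efficiency = {"wood pellets": 0.36, "coal": 0.39}

for fuel, ef in emission_factor_kgCO2_per_GJ.items():
    stack_kgCO2_per_GJ_electric = ef / net_efficiency[fuel]
    print(f"{fuel:12s}: {stack_kgCO2_per_GJ_electric:.0f} kg CO2 per GJ of electricity")
# -> with these assumptions wood comes out roughly 25-30 % higher than coal
```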

The Euractiv article cites a study by Sigsgaard which estimates that biomass pollution causes 40000 premature deaths every year in the EU. Well, I am not a fan of these statistics, but it is clear that fine-particle emissions from open wood fires are huge compared to those of the lamented Diesel cars. Curiously, a group of scientists published an article on 15 January 2018 in Euractiv pushing the need to increase biomass burning. Titled “Bioenergy at the centre of EU renewable energy policy”, the 63 authors (yes: 63!) candidly write that “With regards to air quality, it is very difficult to identify the impacts of bioenergy combustion in isolation”. This is absolute nonsense, as fine-particle emissions from wood burning can be measured and compared to other sources, which has been done in many research papers.

The irony of the whole biomass problem is that bio-energy has been promoted by the “Über-greens” as one of the climate-saving policies; ill-considered and hastily promoted, this bioenergy now raises its ugly head and makes the ordinary citizen wonder if “expert advice” (like that of the 63 authors) really should be relied upon…

 

 

Extreme weather more frequent in first than in second half of last century

December 26, 2017


There is an interesting and very easy-to-read paper by M.J. Kelly (University of Cambridge, Department of Engineering) titled “Trends in Extreme Weather Events since 1900”, published in Feb. 2016 in the Journal of Geography & Natural Disasters.

The essence of this study is the following: “A survey of official weather sites and the scientific literature provides strong evidence that the first half of the 20th century had more extreme weather than the second half, when anthropogenic global warming is claimed to have been mainly responsible for observed climate change”. This is a big problem for all those who tirelessly write about increasing extreme weather events and promote immediate decarbonization of our societies to avoid the impending climate disasters. It is also a stumbling block for the poor structural engineer who plans dams, dikes, bridges and other big constructions: should he devise stronger (and much more expensive) structures, as suggested by the climate-alarmist community, or should he rely on the lessons of past history?

Kelly politely writes about “the disconnect between real-world historical data on the 100 years’ time scale and the current predictions”, when in effect the disconnect shows that the numerous mentions of “more extreme weather” caused by global warming correspond perhaps to some wishful thinking, but are not grounded in reality.

Very telling is the next figure (taken from an I. Goklany paper), which gives the deaths and the death rate (in millions per year) from extreme weather events:

Clearly, if one were to plan a trip back into the past, the period 1950 to 2000 would be much less dangerous! Kelly adds more graphs (all taken from official data sources) which confirm this situation.

He then reflects on a problem that is continuously debated in climate-realist circles: we should not naively trust the official data series, as many have been “doctored” and “re-doctored” over time (the scientifically correct expression is “homogenized”), practically always in a way that inflates the current warming. A very good illustration is a graph from a paper by Hansen (1984) compared with a modern global temperature series (NASA-GISS, Hansen's employer):

Where in the first paper 1980 was colder than 1940, in the modern series the situation has flipped to the opposite!

Conclusion:

The conclusion is best given by the words of the author: “The lack of clarity about future extreme weather, after 20 years of intensive analysis of future climates is deeply worrying. There is nothing that emerges from references [1,2] that would require a significant refinement of the margins that have applied over the last half-century, and hyperbole is no substitute for hard facts in the engineering of the physical infrastructure.”

Air pollution: numbers and doubt

December 20, 2017


Much has been written on air pollution and its health impacts, especially following the Volkswagen Diesel scandal and the new trendy enviro-fashion of Diesel bashing. In this post I will make a few comments on the contribution of household wood burning, on the shifting reference values and on the extraordinary statistics of air-pollution-caused mortality. As a long-time asthma sufferer I appreciate clean air above all; this does not give me license to board the hysteria train and throw away all principles of scientific thinking.

  1. The climate-friendly wood burning

In a previous comment I showed that residential wood burning is a big contributor to PMx fine dust particles (PM10 and PM2.5) pollution. The 2016 air quality report of the EEA confirms this with the following picture:

[Figure: household wood burning share of fine-particle emissions (EEA air quality report 2016)]

Clearly the contribution of road transport is minuscule compared to that from wood burning; curiously it is the former that you find mentioned as a culprit in nearly all media articles, and the latter that is most often conveniently ignored. The percentage of 56% may well be too low as the EEA report mentions a new study that finds that “the residential wood combustion emissions (are) higher than those in previous inventories by a factor of two to three…”

Residential wood burning has been pushed by the “climate-friendly” agenda without any pause for clear thinking. As so often when feelings dominate over intellect, the unintended consequences are spectacular!

2. The crux of the reference values

When you qualify a gas or particulate matter as a pollutant, it is important to know what the natural background values are. These values have been christened “counterfactual concentrations C0” in the new EEA report. Curiously, this natural background has often been ignored, so that health-related effects start at an impossible concentration of 0. This was the case for PM2.5, until the 2017 (preliminary) air quality report conceded that nothing (or not much) is known about the danger of levels lower than 2.5 ug/m3; this new threshold alone diminished the attributed mortality by a whopping 18%.

The same problem exists for many other pollutants: natural ozone concentrations may locally be much higher, and introducing a single EU-wide threshold automatically pushes the sunny southern countries into the club of the sinners. The report candidly acknowledges this: “O3 concentrations show a clear increase as one moves from the northern parts to the southern parts of Europe, with the highest concentrations in some Mediterranean countries and in summer time”.

3. The extravagant mortality numbers

400000 people killed by air pollution in Europe, 40000 deaths in the UK… These are numbers repeated ad nauseam by the copy/paste media without any clear reflection on their validity.

By digging deep, one can nevertheless find some clearer thinking. Let us start with a “factcheck” article by Greenpeace (yes!) titled “Are there really 40000 air pollution deaths a year?”. For instance, the article recalls that in the cities where studies have been made on the danger of PMs, values were never lower than 7 ug/m3, and the data do not show any danger from values below that. Antony Frew, professor of respiratory medicine, says that “the basic data does not say that 40,000 people have died”.

Another comment comes from the Winton Centre for Risk and Evidence Communication: “Does air pollution kill 40000 people each year in the UK?”. Here we find the mind-blowing statement that “COMEAP point out, for example, that 60% of the estimated harm in the UK is estimated to occur at levels less than 7 ug/m3 PM, and yet the US data provide no direct evidence of harm at this level” (COMEAP = Committee on the Medical Effects of Air Pollutants). The report shows this table from COMEAP:

[Table: COMEAP measures of effect and plausible intervals]

Note the huge plausible interval for the life-years lost: the interval is abyssal, so that an estimate of 340000 borders on the nonsensical. The Winton comment concludes wisely: “There are huge uncertainties surrounding all the measures of impacts of air pollution, with inadequate knowledge replaced by substantial doses of expert judgement. These uncertainties should be better reflected in the public debates. In addition, the situation in the UK is not what we would usually think of as a ‘crisis’.”

4. What a change in a year!

As a last reflection I suggest the very good phys.org article “NO2 – not as bad as we thought?”. The article discusses a new technical report concerning the planned Clean Air Zones in the UK. This report finds that the damage caused by NO2 to public health is 80% lower than the estimate in a previous report. The previous report assumed that for every 10 ug/m3 of NO2 the mortality risk would increase by 2.5%; now this risk factor is down to 0.92%. When a new report changes the danger level given in a previous one by such an enormous percentage, our politicians would be well advised not to rush into hasty actions, and to wait wisely for things to settle down.
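To see why such a coefficient change matters so much, here is a minimal sketch of the usual attributable-mortality arithmetic. The population size and mean NO2 exposure are invented round numbers, purely for illustration; only the 2.5% and 0.92% per 10 ug/m3 coefficients come from the reports quoted above.

```python
# Minimal sketch of how such a risk coefficient feeds an "attributable deaths"
# estimate. Baseline deaths and mean NO2 exposure are invented round numbers,
# purely for illustration; only the 2.5 % and 0.92 % per 10 ug/m3 coefficients
# come from the reports discussed above.
baseline_deaths = 500_000     # assumed yearly deaths in some population
mean_no2_ug_m3 = 20.0         # assumed population-weighted mean exposure

def attributable_deaths(risk_per_10ug: float) -> float:
    rr = 1.0 + risk_per_10ug * (mean_no2_ug_m3 / 10.0)   # linear relative-risk model
    return baseline_deaths * (rr - 1.0) / rr             # population attributable fraction

for coeff in (0.025, 0.0092):
    print(f"coefficient {coeff:.4f} -> {attributable_deaths(coeff):,.0f} attributable deaths")
# -> the estimate drops by roughly the same factor as the coefficient itself
```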

The dubious 2°C limit

December 16, 2017

No comment on the Paris (and the newest Bonn) climate meetings is written without raising the warning finger that everything should be done to limit global warming to +2°C “above pre-industrial levels”. Sebastian Lüning and Fritz Vahrenholt have published a new paper in “Frontiers in Earth Science” that tries to find the origin of this number, and to assess how scientifically sound the baseline defining this maximum permitted warming is.

This paper is very readable; the authors are not the first to point to the astounding vagueness of the foundations of this internationally agreed limit, but their paper has the big merit of being short and easy to understand. I will discuss just some major findings, and suggest a careful reading of the full paper.

  1. The sea surface temperatures (SST) of the last 10000 years.

The last 10k years correspond to the ongoing interglacial period. The (annotated) figure shows how the SST changed during that period:

If we take as a baseline the average SST of the last 10k years (up to 1850, the year that is commonly taken as the start of industrialization), we see that the current global temperature is just +0.4°C higher. The blue box shows that the baseline period of the Paris aim is about 0.35°C lower than this average (which corresponds to a period with only natural climate changes).

Taking the Holocene average SST as a baseline, +2°C would allow a supplementary warming (compared to today) of about 1.6°C!
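The arithmetic is trivial, but worth writing down explicitly (all numbers as quoted above):

```python
# Trivial arithmetic check of the baseline argument (numbers as quoted above).
current_vs_holocene_mean_degC = 0.4   # today's temperature relative to the Holocene mean SST
paris_limit_degC = 2.0                # agreed warming limit above the chosen baseline

remaining_degC = paris_limit_degC - current_vs_holocene_mean_degC
print(f"remaining warming allowed vs today: {remaining_degC:.1f} degC")   # 1.6 degC
```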

2. The land + sea global temperatures of the last 2000 years.

The next figure uses both land and sea temperature reconstructions:

It is obvious that taking 1850 as a baseline is extremely dishonest, even silly: the end of the Little Ice Age corresponds to the coldest period of the last 10000 years. How could any intelligent person choose a baseline that corresponds to an extreme value and that, scientifically speaking, is at least 0.6°C too low?

3. Conclusions

1. The Paris convention suggests that the pre-industrial climate was more or less constant, and that it is mostly human activity which caused the observed warming since 1850. This is blatantly false: during the Holocene the climate was highly variable, and it reached or even exceeded today's global temperatures.
The following picture shows how the Greenland temperature changed during that period:

[Figure: Holocene Greenland temperature reconstruction]

2. A big chunk of the observed +0.8°C warming (with respect to the 1850-1900 period) is of natural origin.
3. Using a 2°C target could be valid, but choosing and imposing the coldest period of the last 10000 years as a baseline is silly, dishonest and unscientific.

 

_________________________

History:

  • added Greenland graph (16 Dec 2017, 15:26 UTC)

Decarbonized France

November 10, 2017

The German Nuclear-Angst regularly ignores the real decarbonization of electrical power that France has managed to achieve. Look at this figure from RTE, giving the shares of the different sources in the French electricity generation of 2016:

So 91% of French electricity was carbon-free in 2016.

Now look at Germany, the poster child of climate scare and the “Energiewende”:

Carbon-free German electricity was just 42.6% in 2016, after 27 years of “Energiewende” (and a cost of ~150 billion Euro from 2000 to 2016, with a total estimated cost of 520 billion Euro up to 2025):

So if you assume that the world really has to be de-carbonized as fast as possible, what technology would you choose for your electricity production?