Wood burning causes more pollution than diesel trucks

April 13, 2017


In 2015 I wrote here a comment, "Wood and pellets: a 'burning' fine particulate problem", in which the small-particle emissions of wood burning were compared to those of traditional fossil energy sources and found to be extremely high. Now that Diesel-bashing has become the newest green fad, I was agreeably surprised by an article by Michael Le Page in the New Scientist (4 Feb 2017) titled "Where there's smoke" (access is paywalled!). He describes how log burning in London and other cities has become a major pollution problem, often emitting much more PM2.5 than Diesel trucks.

A Danish "eco-friendly" wood burner was found to emit 500,000 small particles per cm3 through the chimney, to be compared with the 1,000 particles/cm3 found at the tail-pipe of a modern Diesel truck: so one eco-family thinking to save the planet caused as much pollution as 500 Diesel trucks!
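The back-of-the-envelope arithmetic behind this claim is straightforward (concentrations as quoted above; note this compares concentrations at the point of emission, not total exhaust volumes, so it is only an order-of-magnitude sketch):

```python
# Reported particle concentrations at the point of emission, in particles per cm3
wood_burner_ppcm3 = 500_000   # Danish "eco-friendly" wood burner, measured in the chimney
diesel_truck_ppcm3 = 1_000    # modern Diesel truck, measured at the tail-pipe

# Concentration ratio: how many trucks one wood burner is "worth"
equivalent_trucks = wood_burner_ppcm3 / diesel_truck_ppcm3
print(equivalent_trucks)  # -> 500.0
```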

Look at this graph from the Danish Ecological Council showing the particulate count inside a Copenhagen wood burning house:

This “correctly installed stove” caused an inside pollution 4 times higher than that of the most polluted street!

As I wrote in my former article, log burning is certainly the big contributor, well exceeding a pellet stove. But as for the romantics: it is difficult to deny the much higher emotional appeal of a burning log compared to some smoldering pellets.

What I really find so scandalous is that people have been coaxed by instilled "climate guilt" into switching to wood burning, and now the nasty side of this eco-fad shows up. As hip as Diesel bashing may be, ignoring the pollution caused by wood burning is a real scandal. A scandal which feels like a perfect "déjà-vu": a repeat of the politically motivated push towards the lower-CO2-emitting Diesel engines in the 90's.

Must eco-politics always swing from naively pushing a "climate" solution to recognizing (after a long latency) that the nuisances of the solution might well be greater than the putative dangers it was meant to avoid?


PS1: here is a plot from the Australian Air Quality Group showing the contributions to PM2.5 pollution from different sources (June/July/August is the Australian winter):

The peak during July shows that woodsmoke contributes about 10 ug/m3, whereas transport and industry together contribute only about 1 ug/m3, i.e. ten times less! (see here)


How the IPCC buries its inconvenient findings

March 30, 2017


There has been an interesting hearing before the US Senate on Climate Change, Models and the Scientific Method, with testimonies from J. Curry, J. Christy, M. Mann and R. Pielke Jr. In a later blog I will comment on this hearing (video link). For the moment I will just write about an astonishing fact presented by Prof. Christy from the UAH (University of Alabama in Huntsville). Christy (and Roy Spencer) analyze and maintain the database of global temperature measurements made by satellites (the other team doing this is RSS).


  1. The absence of a human caused warming finger-print

All climate models agree that human-caused global warming should show up as a hot-spot in the upper tropical troposphere. Look at the next figure, which corresponds to the outcome of one model:


The problem is that observations (by balloons, radiosondes etc.) do not find this hot-spot, which is a serious blow to the validity of the CMIP-5 model ensemble. Christy showed in his testimony that the difference between models and observations can be found even in the IPCC's own latest report (AR5): but you probably have to be an avid and patient reader to find it, as it is buried away in the Supplementary Material for chapter 10 (figure 10.SM.1); the graphics are also confusing and obscure, and some detective work is needed to clear the fog.

Here is the original figure 10.SM.1:

The second plot from the left corresponds to the tropics, and it is this sub-plot that we will look into.


2. IPCC’s hidden truth

I zoomed into the relevant sub-plot and added some annotations and boxes:


The red band gives the answer of the CMIP-5 ensemble to the question "what are the warming trends in the tropical atmosphere (up to about 15 km), in °C/decade?" when the models include human-generated greenhouse gases (essentially CO2); the blue band gives the answer when the models do not include (i.e. ignore) human GHG emissions. Finally, the thin grey line shows the observations of one radiosonde database (RAOBCORE = Radiosonde Observation Correction using Reanalysis). It can readily be seen that the models including GHGs terribly overstate the real warming: the red band (= region of uncertainty) lies completely above the observations. What is nearly hilarious is that when the models do not include human GHGs (the blue band), the result is absolutely acceptable, as the blue band covers most of the observation line.

Christy made this graph still clearer by including all observations (the region limited by the grey lines):

The conclusion is the same.

So the million-dollar question is: how can the IPCC claim with great confidence that its models tell us that the observed warming carries a human fingerprint, and is caused with very high certainty by anthropogenic emissions, when its own assessment report shows the failure of these models?


The tuning of climate models

March 26, 2017

(link to picture)

Many (or rather practically all) "climate policies" are based on the outcomes of climate models; as these models nearly unanimously predict a future warming due to the expected rise of atmospheric CO2 concentration, their reliability is of primordial importance. Naively, many politicians and environmental lobbies see these models as objective products of hard science, comparable for instance to the structural physics of skyscrapers, whose correctness has been proven over many years.

Alas, this is not the case: as the climate system is devilishly complex and chaotic, building a General Circulation Model (GCM) starting from basic physical laws is a daunting task; during its development, each model must make choices for certain parameters (their values, their possible ranges), a process which is part of what is usually called "tuning". The choices in tuning are not cast in stone but change with the model creators, with time, and with cultural/ideological preferences.

Frédéric Hourdin from the "Laboratoire de Météorologie Dynamique" in Paris, together with 15 co-authors, published in 2016 an extremely interesting and sincere article on this problem in the "Bulletin of the American Meteorological Society", titled "The art and science of climate model tuning" (link). I will discuss some of the main arguments given in this excellent paper.

  1. Observations and models

Hourdin gives in Fig. 3 a very telling example of how the ensemble of the CMIP5 models (used by the IPCC in AR5) differ in their evaluation of global temperature change from 1850 to 2010 (the temperature anomalies are given with respect to the 1850-1899 average):

I have added the arrows and text box: the spread among the different models (shown by the gray arrows) is enormous, larger than the observed warming (given by the very warming-friendly HadCRUT4 series); even the "adjusted" = tuned variant (the red curve) gives a warming in 2010 that is 0.5 °C higher than the observations. We are far, far away from a scientific consensus, and decisions that ignore this are at best called "naive".

2. Where are the most difficult/uncertain parts in climate models?

Climate models are huge constructs which are built up by different teams over the years; they contain numerous “sub-parts” (or sub-models) with uncertain parameters. One of the most uncertain ones is cloud cover. Just to show the importance, look at these numbers:

  • the forcing (cooling) of clouds is estimated at -20 W/m2
  • the uncertainty on this forcing is at least 5 W/m2
  • the forcing thought to be responsible for the post-1850 warming of about 1 °C is estimated at 1.7 W/m2

Conclusion: the uncertainty of the cloud cover effect alone is about 3 times larger than the forcing held responsible for the observed warming!
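The mismatch between these numbers can be stated as a simple ratio (values as quoted above):

```python
cloud_forcing_wm2 = -20.0      # estimated cloud forcing (cooling), W/m2
cloud_uncertainty_wm2 = 5.0    # uncertainty on that forcing (at least), W/m2
warming_forcing_wm2 = 1.7      # forcing credited with the post-1850 ~1 °C warming, W/m2

# The uncertainty alone is about 3 times the forcing invoked to explain the warming
ratio = cloud_uncertainty_wm2 / warming_forcing_wm2
print(round(ratio, 1))  # -> 2.9
```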

Hourdin asked many modelers what they think to be the most important causes of model bias; they correctly include cloud physics and atmospheric convection, as shown in fig. S6 of the supplement to the paper (highlights and red border added):

3.  Are the differences among the models only due to scientific choices?

The answer is no! Many factors guide the choices in tuning; Hourdin writes that "there is some diversity and subjectivity in the tuning process" and that "different models may be optimized to perform better on a particular metric, related to specific goals, expertise or cultural identity of a given modeling center". So, as in many other academic domains, group-think and group-pressure certainly play a strong role, producing a consensus that might well be due more to job security or tenure than to objective facts.

4. Conclusion

This Hourdin et al. paper is important, as it is one of the first where a major group of "main-stream" researchers puts the finger on a situation that would be unacceptable in other scientific domains: models should not be black boxes whose outcomes demand a quasi-religious acceptance. Laying open the algorithms and unavoidable tuning parameters ("because of the approximate nature of the models") should be a mandatory premise. It would then be possible to check whether some "models have been inadvertently or intentionally tuned to the 20th century warming" and possibly correct/modify/adapt/abolish some hastily taken political decisions based on them.

The coming cooling predicted by Stozhkov et al.

March 18, 2017




A new paper by Y.I. Stozhkov et al. was published in February 2017 in the Bulletin of the Russian Academy of Sciences. Here is a link to the abstract (at SpringerLink); the complete version is regrettably paywalled, but I was able to access it through the Luxembourg Bibliothèque Nationale.

The paper is very short (3 pages only), has no complicated maths or statistics, and is a pleasure to read. The authors predict, as many others have done before, a coming cooling period; their prediction is based on two independent methods of assessment: a spectral analysis of past global temperature anomalies, and the observed relationship between global temperatures and the intensity of the flux of charged particles in the lower atmosphere.

  1. Spectral analysis of the 1880-2016 global temperature anomalies.

The paper uses the global temperature anomaly series from NOAA and CRU, computed as deviations from the 1901-2000 global average near-surface temperature. Their spectral analysis suggests that only 4 sine waves are important:

The general form is: wave = amplitude*sin[(2*pi/period)*time + phase], with the period and time in years and the phase in radians; the authors give the phase in years, so one has to multiply by 2*pi/period to obtain the phase in radians.

  • series #1: amplitude=0.406  period=204.57 years  phase=125.81*2pi/period (radians)
  • series #2: amplitude=0.218  period= 69.30 years  phase= 31.02*2pi/period
  • series #3: amplitude=0.079  period= 34.58 years  phase= 17.14*2pi/period
  • series #4: amplitude=0.088  period= 22.61 years  phase= 10.48*2pi/period

I computed the sum of these 4 series and merged the graph with the global land-ocean temperature anomalies from GISS; the problem is that GISTEMP calculates its anomalies from the mean of the 1951-1980 period, so the concordance will suffer from an offset.
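For readers who want to redo this, here is a minimal sketch of the four-wave sum (parameters as listed above; any constant offset needed to match a particular baseline, such as GISS's 1951-1980 mean, must be added separately):

```python
import math

# (amplitude in °C, period in years, phase as given by the authors in years)
COMPONENTS = [
    (0.406, 204.57, 125.81),
    (0.218,  69.30,  31.02),
    (0.079,  34.58,  17.14),
    (0.088,  22.61,  10.48),
]

def anomaly(year):
    """Sum of the four sine waves at a given calendar year, in °C."""
    total = 0.0
    for amplitude, period, phase_years in COMPONENTS:
        phase = phase_years * 2.0 * math.pi / period  # convert phase from years to radians
        total += amplitude * math.sin(2.0 * math.pi / period * year + phase)
    return total

# The sum is necessarily bounded by the total amplitude, about 0.79 °C
print(anomaly(2016.0))
```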

The authors write that spectral periods of less than 20 years do not play an important role: this means that El Niños (roughly a 4-year period) are ignored, as well as important non-periodic forcing phenomena like volcanic eruptions. The following graph shows my calculation of the sum of the 4 spectral components (in light blue) together with the official GISTEMP series in red:

The fit is not too bad but, as was to be expected, misses the very high warming caused by the 2016 El Niño.

The authors are not the first to do a spectral analysis on the temperature series. N. Scafetta, in his 2012 paper "Testing an astronomically based decadal-scale empirical harmonic climate model vs. the IPCC (2007) general circulation climate models" (link), gives the following figure of a good fit using 4 short-period sine waves (so he does not seem to agree with Stozhkov et al. on the unimportance of short periods):

Note that both models predict a cooling for the 2000-2050 period.

2. Cosmic rays and global temperature

We are now in solar cycle 24, one of the weakest cycles in about 200 years, as shown by the next figure (link):

A situation similar to the Dalton minimum, the cold period during the first decades of the 19th century, seems to be unfolding and, all things being equal, would suggest a return to colder-than-"normal" temperatures. But, as Henrik Svensmark first suggested, the sun's activity acts as a modulator of the flux of charged cosmic particles, which create in the lower atmosphere the nucleation particles for condensing water, i.e. cooling low-atmosphere clouds. In this paper the authors compare the flux N (in particles per minute) measured in the lower atmosphere (0.3-2.2 km) at middle northern latitudes with the global temperature anomalies dT: the measurements clearly show that an increase in N correlates with a decrease in dT. This is an observational justification of Svensmark's hypothesis:

As this and the next solar cycle are predicted to be of very low activity, this observation is a second, independent prognosis of a coming cooling (you may want to look at my older presentation on this problem here).

3. Conclusion

I like this paper because it is so short and does not try to impress the reader with an avalanche of complicated and futile mathematics and/or statistics. The reasoning is crystal clear: both the spectral analysis and the expected rise in the flux of charged particles suggest a global cooling over the next ~30 years!


Addendum 03 April 2017:

You might watch this presentation by Prof. Weiss given in 2015 on cycles in long-term European temperature series.

Energiewende: a lesson in numbers (Part2)

March 11, 2017

 (picture link:http://www.nature.com/nature/journal/v445/n7125/full/445254a.html)

In the first part of this comment on the Energiewende I showed that its primary goal to restrict the CO2 emissions has not been attained.

In this second and last part I will concentrate on the costs of the Energiewende.

2. The costs of the Energiewende

Let us remember that the financial aspect of the Energiewende is a system of subsidies going in many directions: those who install solar PV or wind turbines (for instance) receive a subsidy for the installation costs; they are granted priority in feeding electricity into the grid; and they are paid for this feed-in a tariff largely in excess of the market price. The McKinsey report writes (translated): "The currently available figures show that the successes of the Energiewende so far have mostly been bought with expensive subsidies".

The costs for an individual household rise continuously, as shown by the next graph:

The increase with respect to the 2010 situation is a mind-blowing factor of 3.35; as the kWh price will probably reach or exceed 0.30 Euro in 2017, most experts agree that the yearly supplementary cost per 4-person household will be higher than 1400 Euro (which has to be compared to the 1 € price of one ice cone per month/person that minister Jürgen Trittin announced in 2004!).

The subsidization has transformed a free market into a planned economy, with many unintended nefarious consequences: at certain times the combined solar+wind production is excessive and leads to negative prices (the big electricity companies must pay their (foreign) clients to accept the surplus electricity):

The "redispatch" interventions to stabilize the grid and avoid its collapse rose by a factor of 10 from 2010 to 2015 (link); the costs more than doubled during 2013-2015, a growth rate worthy of "Moore's law" (link):

Actually, if one includes not only the costs of not-needed electricity, but also those of the redispatch (changing the provenance of the electrical energy) and of the mandatory reserve capacity, we are close to a doubling over the years 2011-2015, as shown by the "Insgesamt" (total) column in million Euro (link):

The McKinsey report sees grid management costs quadrupling in the coming years, rising to over 4 billion Euro (4*10^9) per year. A recent article in The Economist, titled "Wind and solar power are disrupting electricity systems", cites three main problems: the subsidies, the intermittency of wind and solar, and finally their very low marginal production costs, which make traditional power stations (urgently needed for base-load, backup and grid stabilization) uneconomic: without state subsidies nobody will build these power stations, so the circle of state planning (as we know it from Soviet times) is closed.

3. The job problem

Renewables have always been hyped for their job potential, but the reality in Germany is quite different: 2016 was the fourth year of falling job numbers in the renewable industry, and if this trend continues the aim of 322000 "green" jobs will not be attained by 2020. Equally disquieting is that 2016 was the first year showing a decline in jobs in the electricity-hungry industries. An older (2011) AEI report concludes that green jobs only displace traditional ones, and that in Spain each green megawatt installed destroyed 5.28 jobs. It seems that the whole Energiewende rests on a foundation of big subsidies (direct or indirect) and state planning and steering. In a free market, the rise of "renewable" electricity would not be nil, but it would be much slower. The subsidies have spoiled huge parts of the industry, which now see these subsidies, paid by all citizens, as their due.

Fritz Vahrenholt has published a paper at the GWPF titled "Germany's Energiewende: a disaster in the making". He could well be right.

Energiewende: a lesson in numbers (Part 1)

March 11, 2017

A new report from McKinsey on Germany's Energiewende (= energy transition policy) has been published in the series "Energiewende-Index". This very transparent and non-emotional report makes for good reading: the main lesson is that the costs of the Energiewende (which has driven German household electricity prices 47.3% above the EU average) will continue to rise, and that the political deciders seem to ignore the future financial burden.

In this blog, I will comment using only numbers from well-known institutions (such as the Dutch PBL report "Trends in global CO2 emissions 2016", Fraunhofer ISE, Agora Energiewende etc.), and let these numbers speak. Let me just give my personal position on renewable energies: in my opinion, every country should diversify its energy sources as much as possible, and that means that wind and solar should not be brushed aside. But the importance of having reliable and affordable continuous electricity cannot be ignored: intermittent sources such as solar and wind should not be presented as the sole environmentally acceptable providers, as the last dozen years have clearly shown that this intermittency and the absence of realistic electricity storage are at the root of many tough problems. The German green Zeitgeist (which seems to drive many EU regulations) is clearly blind in both eyes concerning these problems; condemning nuclear energy in all its actual and upcoming forms as unacceptable dramatically increases them.

  1. The avoidance of CO2 emissions

The Energiewende was first positioned as a measure to avoid and diminish the CO2 emissions caused by producing electricity from fossil fuels, transportation and industrial manufacture. After the Fukushima tsunami (March 2011), the "Atomausstieg" (nuclear exit) was added to this political foundation. Heavy subsidies have been poured on solar PV and wind energy facilities, pushing the installed capacities of these 2 providers up to 91 GW out of a total installed generation capacity of 196 GW (numbers rounded commercially), as shown in this edited plot from Fraunhofer ISE:

Intermittent sources thus represented 91/196*100 = 46% of the installed capacity in 2016; in January they delivered 23%, and in August 25%, of the total installed generating capacity. So we can conclude that, summed over a month, these subsidized sources with feed-in priority contribute at about half of their installed capacity. The problem lies in the word "summing": under the aspect of emissions the sum might be a useful metric, but in real life it is the instantaneous available power that counts. The two following graphs from the Agora Energiewende report 2016 show the situation during the first and third quarters; I highlight the days with minimum and maximum (solar+wind) contribution with yellow rectangles.
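The gap between a summed (average) contribution and the instantaneous one can be illustrated with a toy hourly profile; the output numbers below are made up for illustration only, not actual German grid data:

```python
installed_gw = 91.0  # installed wind+solar capacity, as above

# Made-up hourly combined wind+solar output for one day, in GW (illustration only)
hourly_output_gw = [2, 1, 1, 2, 3, 5, 12, 25, 38, 45, 50, 52,
                    51, 48, 40, 30, 20, 12, 8, 5, 4, 3, 2, 2]

average_share = sum(hourly_output_gw) / len(hourly_output_gw) / installed_gw
minimum_share = min(hourly_output_gw) / installed_gw

# A day can average a respectable fraction of installed capacity...
print(f"average share: {average_share:.0%}")
# ...while the instantaneous minimum -- what dispatchable plants must cover -- is near zero
print(f"minimum share: {minimum_share:.1%}")
```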



Without the base load of CO2 emitters like biomass and coal, the lights would have been out many times!

Let us now look at the CO2 (or better, the equivalent CO2 (CO2eq)) balance for the last years, compare several countries with Germany, and see whether the Energiewende has been a successful CO2-lowering policy.

Our next graph shows how the CO2 emissions varied from 1990 to 2015 (I added zoomed insets):


The most interesting conclusion from this graph is that Germany's total CO2 output did not diminish much between 2005 and 2015 (the Energiewende started in 2001), contrary to the USA, which had no comparable policy. The same picture shows up in the "per capita" emissions:


Compared to the non-"Energiewende" countries France and the USA, Germany again fares very poorly. The next graph highlights more precisely the trends between 2002 and 2015:


I computed the trend-lines for Germany (magenta) and France (black): the equations show that France is twice as successful as Germany in lowering its CO2 emissions, without any comparable, extremely costly Energiewende policy. Agora concedes this in its report, writing that "… Germany's total greenhouse emissions have risen once again"!
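The trend comparison boils down to an ordinary least-squares slope per country. A minimal sketch of that fit; the per-capita series below are rough illustrative values (t CO2eq per capita), not the exact PBL figures:

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

years = list(range(2002, 2016))
# Rough illustrative per-capita emissions (t CO2eq) -- not the exact PBL numbers
germany = [10.4, 10.3, 10.3, 10.0, 10.1, 9.9, 9.9, 9.2, 9.6, 9.4, 9.5, 9.7, 9.2, 9.3]
france  = [ 6.5,  6.5,  6.5,  6.4,  6.3, 6.2, 6.1, 5.8, 5.9, 5.5, 5.5, 5.5, 5.1, 5.2]

# Both slopes are negative; the steeper (more negative) one is the faster reduction
print(f"Germany: {ols_slope(years, germany):+.3f} t/cap per year")
print(f"France:  {ols_slope(years, france):+.3f} t/cap per year")
```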

And the following graph shows that the share of fossil fuels has remained constant since 2000:

Conclusion: the Energiewende has not achieved its primary goal of greatly lowering CO2 emissions!


(to be followed by part 2)

Has the climate alarm peak been crossed?

February 17, 2017


There is a very good comment by Donald Kasper at the Wattsupwiththat climate blog (15th Feb 2017). He writes that all social issues have a peak of popularity, but that the time of the rise need not equal the time of the decline. Climate and global warming alarm has now been with us for at least 30 years, and the continuous rise in attention and funding that this problem receives differs quite a bit between regions of the world. In the USA the climate problem is clearly not the most burning one for the general population, but in Europe the climate-angst train does not yet seem to slow down.

I remember at least 3 big environmental scares that were very popular in the past and initially seemed destined to become eternal. The pilfering and exhaustion of the Earth's resources and over-population (the Club of Rome, the "Population Bomb" book published by the Ehrlich couple in 1968) had, in hindsight, an attention-grabbing duration of possibly 10-15 years. Look here for a good New York Times article and video on "The Unrealized Horrors of Population Explosion". As neither these prophecies, nor those of material exhaustion by the Club of Rome, nor the rapid famines predicted by the two Ehrlichs came true, the time was ripe for another scare.

During the second half of the '80s, the danger of ambient radon, the ubiquitous natural radioactive gas, was pushed to new heights. Many profited from this new angst, mostly research labs and companies that were quick to sell radon mitigation appliances (usually a simple fan plus some sealing of the cellar floor) to disturbed house owners. A gas that in some rare instances could be a problem was pushed by the media and by politicians (as always wanting to show that they care about their voters) into a permanent and extreme danger, allegedly causing a high percentage of lung cancers (a conclusion extrapolated from extremely high radon situations down to very low ones, according to the probably wrong Linear No-Threshold (LNT) theory still fashionable among many anti-nuclear activists today). New legal maximum concentrations were defined (for instance 300 Bq/m3 in Luxembourg); in the USA a radon certificate had to be added when a house was sold; and then the problem vanished from the media and from overall attention.

Why did the radon angst disappear? Because the new danger of global warming, caused by another "pernicious gas", CO2, was ramped up. The avoidance (mitigation) of high radon levels was not too difficult a task; but avoiding CO2, a natural constituent of the atmosphere and an inevitable by-product of fossil energy use, is quite a different beast. No wonder that climate change (which replaced "global warming" when it became clear that there had not been much warming for the last 20 years) rapidly became everyone's poster child: as with radon, the new danger assured heavy funding for university research; the possibility to produce electricity by non-carbon-emitting procedures pushed many parts of industry into renewable wind and solar devices; and on top of that, the very influential environmental movements had a topic that would predictably have a much longer life-span than the previous scares. As an additional pusher we can see the disappearance of the Cold War worries and the slow-down of traditional religious feelings, which were, at least in many parts of the Western world, replaced by the new "quasi-religion" of environmentalism.

All these scares have some solid foundations: a future world population of 11 billion would be unmanageable if technology and science stood still, so the 1968 angst (like the much earlier prophecies of Malthus) seemed quite reasonable in an unchanging world. But this has not happened: the green revolution (which owes so much to Norman Borlaug) increased agricultural yields tremendously without destroying the soils and "nature"; in spite of many ongoing (civil) wars, political unrest, deep corruption etc., poverty has decreased and access to education has made quite a jump. When the "Population Bomb" was written (1968) the world population was about 3.6 billion; today, close to 50 years later, it has doubled to 7.2 billion.

The big environmental scares all ignore humanity's tremendous potential for innovation. Despite the horrors of wars, environmental damage and political unrest in many parts of the world, the overall picture of the past 50 years commands an optimistic point of view, not one of fear and depression. Will climate angst follow the past pattern? What makes climate change different is that, depending on your view, it is essentially a consequence of human evolution and progress, both of which were and are heavily tied to energy availability and usage. All the previous scares found at least a partial solution through human progress (remember that, as a general rule, the most industrialized countries are also the most eco-conscious ones!), but this one demands a big change in thinking. If we want to avoid pumping more and more CO2 into the atmosphere (my personal opinion remains that the dangerous consequences will be small), and we have installed solar panels and wind turbines everywhere without seriously solving the intermittency problem of these renewable energies, why do we not see the elephant in the room? Nuclear energy has all the potential needed for abundant and cheap carbon-free energy, and many ways exist, different from those used in the past, to use nuclear (or fusion) energy in a low-danger manner, without a legacy of extremely long-lived radioactive waste.

The Case for Simplicity

January 25, 2017


In my comments on the TIR Jeremy Rifkin report I repeated many times that, in my opinion, this report suggests a devilishly complex future, with millions of digital gadgets interconnected and working to control and manage nearly every aspect of our lives, among them the electrical grid. One of the most obvious problems is the vulnerability of the coming "smart" electrical grid and its feeders to malicious attacks. In the last years we have seen big attacks deployed rather successfully: in December 2015 the Ukrainian power grid was brought down (read this report), possibly by Russian hackers; USA Today reported (link) that the US grid is under nearly continuous attack. The website of "Transmission & Distribution World" wrote in April 2016 of a dramatic rise in successful cyber attacks (link). 86% of the security experts at an RSA conference said that cyber attacks could cause physical damage to the infrastructure.

A group of 4 senators introduced a bill in January 2016 suggesting to go “retro” in selected components of the electrical grid to isolate it from malicious attacks (link).

The Center for Strategic and International Studies (CSIS) published in October an extremely interesting article by Michael Assante et al. titled "The Case for Simplicity in Energy Infrastructure". The text clashes head-on with the naive all-digital optimism of the Rifkin paper. Let me just cite a few sentences:

  • “Mix in a whorl of oversight organizations, legislation, regulatory frameworks, standards, and continually changing standards, and we’ve baked ourselves a layer cake of complexity and abstraction that no one in their right mind would want”
  • “Complexity is not a desirable attribute”
  • “There is a point of diminishing returns where more energy is required to sustain the complexity than the complex system provides in benefits”

They suggest not un-digitizing everything (which is clearly not feasible), but introducing "attack surface interruption zones" which use non-digital, analog technologies to block a cyber attack. So instead of infiltrating every component of the grid with digitalization, well-chosen islands using "retro" technologies (such as analog relays) and human operators would prevent a breakdown of the whole attacked grid.

The best strategy in the search for resilience and stability of the electrical grid against attacks might be in this sentence: “Don’t over digitize!”

The Ewringmann report on pump tourism (part 4, last part)

December 23, 2016



1. Part 1 (Introduction; Shutting down…)
2. Part 2 ( costs)
3. Part 3 (errors and the 3.5 billion cost)
4. Part 4 (Diesel bashing and conclusion)

This last part holds some musings on Ewringmann's Diesel comments, and makes a general conclusion on the report.

8. Diesel bashing

The green movements have taken delight in a new fad: Diesel bashing. All evils, all bad pollution problems are the fault of the Diesel engines that foolish politicians favor (at least in the EU) with lower taxes. These people do not remember or consider three important facts:

  • Diesel engines have a much higher efficiency than gasoline engines (about 35% versus 25%), so for a given amount of work they consume less fuel and emit less CO2
  • it was green policy to push the Diesel engine as a solution for lower CO2 emissions
  • high-temperature/high-pressure thermal engines emit more NOx and more nano-particles than lower-pressure ones; as the newest gasoline direct injection (GDI) engines also operate at higher pressures, they have the same problems with these emissions as the Diesels (plus some 20 times higher CO emissions), but they also have higher power and better mileage than the traditional atmospheric models. See this figure from Delphi:


There are no big differences in emissions between such a direct-injection gasoline car and an EU-6 Diesel car; GDI cars may even exceed the EU-6 norm (link of picture). If efficiency is what counts, the Diesel engine has remained the king since its invention by Rudolf Diesel. For a more sober comment, read “Do diesels have a future?“.
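The efficiency argument above can be sketched numerically. The following is a back-of-the-envelope check, not a calculation from the report: the engine efficiencies are those quoted in the first bullet, and the fuel emission factors (grams of CO2 per MJ of fuel energy) are typical literature values that I assume here.

```python
# Back-of-the-envelope check of the CO2 advantage of Diesel engines.
# Assumed values (not from the report): typical fuel emission factors
# in g CO2 per MJ of fuel energy; efficiencies as quoted in the text.
EF_GASOLINE = 69.3    # g CO2 / MJ fuel (assumption)
EF_DIESEL = 74.1      # g CO2 / MJ fuel (assumption)
ETA_GASOLINE = 0.25   # efficiency quoted above
ETA_DIESEL = 0.35     # efficiency quoted above

def co2_per_mj_work(emission_factor, efficiency):
    """CO2 emitted (g) per MJ of useful mechanical work."""
    return emission_factor / efficiency

gas = co2_per_mj_work(EF_GASOLINE, ETA_GASOLINE)   # ~277 g per MJ of work
dsl = co2_per_mj_work(EF_DIESEL, ETA_DIESEL)       # ~212 g per MJ of work
print(f"Gasoline: {gas:.0f} g CO2 per MJ of work")
print(f"Diesel:   {dsl:.0f} g CO2 per MJ of work")
print(f"Diesel saving: {100 * (1 - dsl / gas):.0f}%")  # ~24%
```

Even though Diesel fuel emits slightly more CO2 per unit of fuel energy, the better engine efficiency leaves it well ahead per unit of work.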

Now Ewringmann writes that about 85% of the external costs from the fuel sold come from Diesel cars or trucks. As Diesel fuel accounts for 83% (inland) and 84% (export) of the quantities of fuel sold, this is not a rocket-science conclusion but self-evident. If all Diesels were forcibly changed to gasoline, 100% of the cost would come from gasoline! When the author asks for an “Überdenken der Dieselpolitik” (a rethinking of Diesel policy), Diesel bashing won’t help. He should say clearly what he thinks: reduce (strangle?) private and commercial road transport by political regulation!

9. A short conclusion

I ended part 1 of this discussion with many citations where, in my opinion, Ewringmann was right. All these citations finally share one common argument: even if Luxembourg makes its fuels much more expensive so that the exported quantities drop drastically, the environmental impact outside Luxembourg will remain practically the same. Luxembourg is not the culprit for the external costs arising abroad!

Ewringmann seems quite sincere in evaluating the benefits, but he really oversteps the line when calculating the costs. Applying Luxembourgish cost factors to fuel burnt outside the country is wrong, and this single “error” alone inflates the costs. Attaching fantasy prices to CO2 emissions is equally wrong, even if some German institutes and environmental organizations favour extremely inflated numbers. CO2 has had a clear price in Europe for many years; that price is low and does not increase as predicted. So this price of 10 €/ton should be used, and not an amount 10 times higher!

The report is easy to read, but it shows signs of piecewise writing (and possibly multiple authors), as Ewringmann acknowledges in his foreword. Childish errors like the rounding problems creeping up in some tables should be taboo in a publication from a serious institute.

Does the report help in making political decisions? I remain dubious, as the main lesson I take away after several careful readings is that the best option is to let things evolve without interference. As the last 3 years show a “natural” downward trend in fuel exports, let the economy decide. A “natural” weaning will probably make it possible to compensate the tax losses, avoid killing numerous jobs, and spare the commercially successful border regions from being economically devastated by ill-conceived, ideology-driven politics.

(end of part 4, the last part)

The Ewringmann report on pump tourism (part 3)

December 23, 2016



1. Part 1 (Introduction; Shutting down…)
2. Part 2 (costs)
3. Part 3 (errors and the 3.5 billion cost)
4. Part 4 (Diesel bashing and conclusion)

In this 3rd part I will comment on the rounding errors and recalculate the external costs that the author puts at 3.5 billion Euros.

6. Rounding errors

A really annoying problem is that the author struggles with rounding and with sums of rounded numbers. In many places the sums given in a table differ from the correct value by 1. The problem probably comes from summing non-rounded numbers in an Excel sheet and then printing the rounded numbers in the tables, without checking whether the sum of these rounded numbers equals the rounded sum from the Excel sheet. This is a very basic error that should not appear in a (probably expensive!) report written by a well-known institute.
As an example, let us look at Table 4 on page 26: all the sums marked with a blue strike-out are wrong; the sum “440” in the line corresponding to 2000 is wrong twice over: adding the numbers gives 341 (instead of 440), and the number 96 for the “Benzin” (gasoline) inland consumption probably should be 196 (which makes the correct sum 441):
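The mechanism behind such inconsistent table sums is easy to reproduce. A minimal sketch with invented numbers (not the values from Table 4):

```python
# Minimal sketch of the rounding pitfall described above: the rounded
# sum of exact values need not equal the sum of the rounded values.
# The numbers below are invented for illustration, not taken from
# Ewringmann's Table 4.
values = [12.4, 15.4, 13.4]                      # hypothetical exact kt values
rounded_sum = round(sum(values))                 # what the Excel total shows
sum_of_rounded = sum(round(v) for v in values)   # what the printed table adds to
print(rounded_sum, sum_of_rounded)               # 41 40 -> off by one
```

Printing rounded numbers while totaling the exact ones produces exactly the off-by-one sums seen in the report.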


7. Re-checking the scary 3.5 billion cost number

On page 49 we find the sentence repeated by all the news articles I have read: “ist mit externen Umwelt- und Gesundheitskosten von insgesamt 3.5 Mrd. Euro pro Jahr verbunden”. In English: the fuel sold in Luxembourg in 2012 (the year used for the calculations) causes external costs for the environment and health of 3.5 billion Euro per year. The positive impact on the GNP is 1.8 + 0.26 = 2.06 billion €.

The costs that can be attributed to the fuel sold are essentially those related to the emissions of pollutants and those from CO2 emissions. Road accidents will still happen when all vehicles run on electricity, so they should not appear in this calculation!

The numbers for the emission costs relate to 2008 (there is real confusion in the report about which years the numbers correspond to: they vary between 2008, 2010 and 2012, with 2012 finally taken in the status quo discussion). The inland fuel usages (in kt) in 2008 and 2012 were 573 and 574, and the exported quantities 1610 and 1586. As these quantities do not differ by more than 5%, we will use the emission costs given in part 2 to calculate the total costs for 2012. The price of 1 ton of CO2 is taken as 10 €, a number that many experts estimate will remain the EU emission price until 2020 (see for instance here). The up to 50 times higher number given by Ewringmann on page 32 should be considered green fantasy.
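As a sanity check of the CO2 component alone, one can combine the 2012 quantities with the 10 €/ton price. The emission factor of roughly 3.15 tons of CO2 per ton of fuel is my assumption (a typical value for Diesel and gasoline), not a number from the report:

```python
# Rough estimate of the CO2 cost of the 2012 fuel quantities at the
# 10 EUR/ton price defended in the text. The emission factor is an
# assumed typical value (~3.15 t CO2 per t of fuel), not taken from
# the Ewringmann report.
inland_kt, export_kt = 574, 1586   # 2012 quantities from the report
EMISSION_FACTOR = 3.15             # t CO2 per t fuel (assumption)
CO2_PRICE = 10                     # EUR per t CO2

total_fuel_t = (inland_kt + export_kt) * 1000
co2_cost_meur = total_fuel_t * EMISSION_FACTOR * CO2_PRICE / 1e6
print(f"CO2 cost at 10 EUR/ton: about {co2_cost_meur:.0f} million EUR")
```

Under these assumptions the CO2 component comes to roughly 68 million €; multiplying the price by 50 would turn the very same quantities into about 3.4 billion €, which may hint at where the headline figure comes from.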

The following picture shows the costs for inland, exports and the grand total:


So even if we accept that all costs (inland and export) should be summed (I repeat: I do not agree!), the range goes from 264 to 661 million Euro: compared to the scary 3.5 billion amount, that is a staggering difference of more than one order of magnitude at the low end and a factor of about 5 at the high end.
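The quoted ratios are easy to verify:

```python
# Checking the recalculated cost range against the report's headline
# number (all values in million EUR, taken from the text above).
scary = 3500
low, high = 264, 661
print(f"low-end ratio:  {scary / low:.1f}")   # 13.3 -> more than an order of magnitude
print(f"high-end ratio: {scary / high:.1f}")  # 5.3  -> roughly a factor of 5
```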

The 3.5 billion Euro figure is pure and extreme guesswork, a fantasy number rooted in unrealistic, extreme CO2 costs and in a faulty calculation of the emission costs.

(end of part 3)
(to be continued with last part 4 )