Welcome to the meteoLCD blog

September 28, 2008

This blog started on 28 September 2008 as a quick communication tool for exchanging information, comments, thoughts, admonitions, compliments etc. related to the questions of climate change, global warming, energy and the like.

An interesting day for Ozone and CO2

February 5, 2020

Today 05 Feb 2020 is an interesting day to observe how ground ozone O3, CO2, NO2 and wind may play together. The next figure shows how air temperature, CO2, ground O3 (measured by the Cairsens O3&NO2 sensor) and wind velocity (in m/s) varied during the morning at meteoLCD:

We see that CO2 makes a big jump starting at 06:00, with a peak at 07:30 and a fall back to “normal” at 09:00. Ground ozone varies in the opposite manner, as do the wind velocity and air temperature. What causes these variations? Probably the cooling of the boundary layer, together with lower wind, enhances the morning inversion. CO2 emissions from traffic and heating are trapped and CO2 levels rise; when the wind increases again, the boundary layer starts being mixed, which dilutes the CO2. So far, so good. But why the sudden slump in O3?
Our period starts at 06:00 UTC, which is 07:00 local time. At that hour the nearby traffic from people driving to their workplace peaks, something we have often noticed in our NO/NO2 measurements. We know that O3 is destroyed by NO (and produced by NO2, among others); look here for a very short article on this that I wrote in Dec. 1998 (!) with several of my students. So the best explanation is that an NO peak from this traffic (with the NO being trapped in the inversion layer) destroys the ground ozone.
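For reference, the chemistry invoked here is the standard NO titration of ozone, with its daytime reversal by NO2 photolysis (in LaTeX notation):

    \mathrm{NO} + \mathrm{O_3} \longrightarrow \mathrm{NO_2} + \mathrm{O_2} \qquad \text{(ozone destruction by fresh NO)}
    \mathrm{NO_2} + h\nu \longrightarrow \mathrm{NO} + \mathrm{O}, \qquad \mathrm{O} + \mathrm{O_2} + \mathrm{M} \longrightarrow \mathrm{O_3} + \mathrm{M} \qquad \text{(daytime regeneration)}

In a stable early-morning inversion layer with still weak sunlight, the first reaction dominates, which is exactly the behaviour seen in the plot.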

We find the same situation at the official station of Beckerich, where the traffic situation is similar to that at meteoLCD (Diekirch). Curiously, Beckerich is the only station showing NO2 readings (the other four probably having a problem with their sensors). NO2 is mainly the result of an oxidation of the NO coming out of the tailpipes, so we may safely assume that its hourly variation is similar in shape.
Here is the Beckerich situation:

NO2 peaks at Beidweiler
(05 Feb morning = yellow)
… while O3 takes a plunge!

Two other stations (Luxembourg-Bonnevoie and Esch) show the same O3 dip, and two (Beidweiler and Vianden) do not.

Conclusion:

Once again we find an illustration of the importance of wind velocity (i.e. air movements) on the CO2 mixing ratio: lower wind (especially during an inversion) allows CO2 to accumulate, and more air movement makes for higher dilution and lowers the concentration.

Ozone levels in the relatively low-sun morning hours (despite the sunny, blue-sky morning, at 08:00 UTC solar irradiance was a meager 50 W/m2) are mostly affected by the ozone-destroying NO gas, whose concentration normally peaks during the high-traffic morning hours. This would explain why the Vianden O3 levels do not plunge, as there is practically no traffic around that measuring station:

No O3 dip at traffic-free Vianden!

One year of fine particle measurements by Airvisual Pro at meteoLCD

January 23, 2020
Airvisual Pro installed in the Stevenson hut on 26 Dec. 2018

1. Introduction

During the year 2018 I decided to introduce fine particle measurements at meteoLCD. Over many months I built several PM measurement systems based on the Chinese SDS011 sensor made by inovafit, with data logging done by a Raspberry Pi. All these new sensors are of the LLS type (LLS = Laser Light Scattering), where ambient air is sucked into a chamber by a little fan and exposed in that chamber to the light emitted by a solid-state laser. The scattered light is analysed by a photodiode, and the result is a count of particles in suspension, classified into two categories (< 2.5 um for PM2.5 and < 10 um for PM10). The counts are converted into a mass (ug/m3) by the built-in controller, assuming a certain combination of substances of known density. Clearly this easy-to-build system has some weak points: the most important is humidity, and it has been shown that above a certain level of relative humidity (about 75%) the condensation of water vapor on the particles inflates the count and thus the mass reading. A second weak point is the changing airflow conditions due to varying atmospheric pressure and/or wind. Professional, expensive sensors like the Horiba APDA-371 avoid these problems by drying the incoming air, maintaining precise conditions of airflow and air pressure, and using the much more expensive and complicated principle of beta radiation attenuation (the BAM principle). I wrote two preliminary articles comparing low-cost LLS sensors with a Horiba at the Beidweiler station, which I suggest reading here and here.
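For the technically interested, here is a minimal sketch (my own simplified code, not the actual logger; the serial port name is an assumption and the checksum verification is omitted) of how a Raspberry Pi can read such an SDS011 data frame over its USB/serial interface in Python:

    import serial  # pyserial

    ser = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2)  # port name depends on the setup

    while True:
        frame = ser.read(10)                             # the SDS011 sends 10-byte data frames
        if len(frame) == 10 and frame[0] == 0xAA and frame[1] == 0xC0:
            pm25 = (frame[2] + frame[3] * 256) / 10.0    # PM2.5 in ug/m3
            pm10 = (frame[4] + frame[5] * 256) / 10.0    # PM10 in ug/m3
            print(pm25, pm10)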

2. The Airvisual Pro

The Airvisual Pro is a stylish LLS-type sensor made by the Swiss company iQAir (current price ca. 460 €). It measures temperature, rel. humidity, CO2 concentration, PM2.5 and PM10. The instrument can be integrated into a cloud managed by iQAir, so that hourly data are always available on the internet (see our data here). Communication with the outside is exclusively by WiFi (there is no RJ45 connector), which was somewhat of a problem at meteoLCD. We first used a Devolo Powerline system with the AP located in the hub (visible in the above picture below the translucent base plate). This system was unstable, even when we switched to an AVM powerline system which uses all three line wires (neutral, line and ground). So finally I laid an RJ45 cable up into the Stevenson hut and installed a WiFi access point directly beneath the Airvisual Pro. This solved the problem of intermittent connection failures.

The correction of the humidity influence consists of dividing the raw readings by a growth factor GW = a + (b*RH^2)/(1-RH), where RH is the relative humidity (a number between 0 and 1), a = 1 and b = 0.25. This formula has been suggested by several authors (see my prior papers for the references), and first tests have shown that this compensation for humidity is an absolute must.
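As an illustration, here is a minimal Python sketch of this compensation (the function name and the example values are mine, not part of the Airvisual firmware):

    def humidity_corrected_pm(pm_raw, rh_percent, a=1.0, b=0.25):
        """Divide a raw LLS reading by the growth factor GW = a + b*RH^2/(1-RH), RH as a fraction 0..1."""
        rh = min(rh_percent / 100.0, 0.99)     # cap RH to avoid division by zero near 100 %
        gw = a + (b * rh ** 2) / (1.0 - rh)
        return pm_raw / gw

    # example: a raw PM2.5 reading of 30 ug/m3 at 85 % relative humidity
    print(round(humidity_corrected_pm(30.0, 85.0), 1))    # about 13.6 ug/m3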

3. One full year of data

We now have a full year of hourly data to compare the Airvisual Pro readings with those of the official Beidweiler station LU0105A, located less than 20 km from Diekirch. The Beidweiler data have been downloaded from the “discomap” site of the EEA, and I used the E1a series (the validated E2a series are not yet available). The E1a data are somewhat irregular in time (the lines do not always follow increasing time order), and there are many missing data (often a couple of hours, but also some much longer periods). The time-stamp probably is local time, as is the time-stamp of the Airvisual Pro file. Missing values have been replaced by repeating the last correct reading before the interruption. As all hourly data are aggregated into daily averages, the impact of this action is tolerable.
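For illustration, the re-ordering, gap-filling and daily averaging described above can be done with a few lines of pandas (the file and column names below are placeholders, not the real E1a export format):

    import pandas as pd

    df = pd.read_csv("beidweiler_pm25_hourly.csv", parse_dates=["datetime"])
    df = df.sort_values("datetime").set_index("datetime")   # restore increasing time order

    hourly = df["pm25"].resample("1H").mean()    # one value per hour, gaps become NaN
    hourly = hourly.ffill()                      # repeat the last valid reading over a gap
    daily = hourly.resample("1D").mean()         # aggregate the hourly data into daily averages
    daily.to_csv("beidweiler_pm25_daily.csv")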

So here is a plot showing the daily PM2.5 readings for the full year 2019:

First, look at the peak values, which are absolutely synchronous: practically all the peaks and lows coincide in time. The yearly averages, minima and maxima are also very close for both series.

The next plot shows the Airvisual Pro readings versus the Beidweiler ones:

The goodness of the fit (forced through the origin) is R2 = 0.88, quite acceptable! A calibration factor to apply to the Airvisual Pro PM2.5 readings would be a multiplier of 1/0.88 = 1.14 (rounded to two decimals). Not forcing the trend-line through the origin does not change these results.
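For readers who want to redo this comparison, a zero-intercept least-squares fit takes only a few lines of Python (the arrays below are dummy values, not the 2019 data). Note that the slope of the fit and the R2 value are two different numbers; the calibration multiplier follows from the slope.

    import numpy as np

    x = np.array([5.0, 8.0, 12.0, 20.0, 35.0])    # daily Beidweiler PM2.5 (dummy values)
    y = np.array([4.5, 7.0, 10.5, 17.0, 31.0])    # daily Airvisual Pro PM2.5 (dummy values)

    slope = np.sum(x * y) / np.sum(x * x)         # least-squares slope with the intercept forced to 0
    r2 = 1.0 - np.sum((y - slope * x) ** 2) / np.sum((y - y.mean()) ** 2)
    calibration = 1.0 / slope                     # multiplier bringing the Airvisual readings onto the Horiba scale

    print(round(slope, 2), round(r2, 2), round(calibration, 2))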

4. The PM 10 readings

The Airvisual Pro seems to have a problem with the PM10 category, as these readings are very similar to the PM2.5 ones and considerably too low, as shown by the next plot:

I have no explanation for this at the moment.

5. Conclusion

This first year-long series of PM measurements with the Airvisual Pro shows that it is exceptionally accurate in its PM2.5 measurements, compared to the hugely more expensive Horiba APDA-371.

The Airvisual Pro, despite being exposed in the open in a well ventilated Stevenson hut (natural ventilation) and to high humidity levels, worked without a single breakdown for the full year.

The year-long communication with the Airvisual cloud worked flawlessly; this cloud makes it easy to store and consult the uploaded data. For understandable security reasons, access to these data is reserved to the subscriber and is not open to the general public.

An Excel file holding all relevant data and plots can be found here. Please give proper credit when citing these data.

Future and extinction fear

October 10, 2019

During the last weeks I reread the excellent book “La Peur Exponentielle” by Benoît Rittaud, in which he recalls the many fears tied to a geometric or exponential increase in some environmental factor, be it population growth, pollution, climate change etc. At the end of his book Rittaud recalls the classic “The Doomsday Syndrome” published by John Maddox in 1972. Maddox was a theoretical physicist and is especially known for being the editor of the journal Nature (from 1966 – 1973 and 1980 – 1995). I had never read the book, so I ordered a used copy as it is out of print.

The ideas and comments written by John Maddox 47 years ago are breathtaking in their modernity and actuality. He wrote this book at a time when the environmental movement was thriving in the Western world, and prophets of doom like Paul Ehrlich (“The Population Bomb”) and Rachel Carson (“Silent Spring”) were very influential. The great fears were population growth, pollution, pesticides, climate change (yes!) and a general overcrowding and damaging of “Spaceship Earth”.

All of these prophecies were wrong in their exaggerations of real existing problems. The great famines predicted by Ehrlich for the 1970’s did not happen; the overuse of DDT certainly was a problem, but the general policies forbidding its use (after it had saved millions of lives from malaria) are responsible for possibly hundreds of thousands of deaths.

Let me here just cite a couple of sentences from Maddox:

On the doomsday prophecies: “Their most common error is to suppose that the worst will happen”.

On the way the environmentalists see the non-alarmists: “One of the distressing features of the present debate about the environment is the way it is supposed to be an argument between far-sighted people with the interests of humanity at heart and others who care no tuppence for the future.”

On the scientists: “They have too often made more of the facts than the conventions of their craft permit. Too often, they have expressed moderate or unsure conclusions in language designed to scare, sometimes with the open declaration that exaggeration is necessary to ‘get things done’.”

On ecology: “The word ecology has become a slogan, not the name of a branch of science.”

The doomsday cause: “…would be more telling if it were more securely grounded in facts, better informed by a sense of history and awareness of economics and less cataclysmic in temper.”

On DDT and Rachel Carson: “The most seriously misleading part of her narrative is the use of horror stories of the misuse of DDT to create an impression that there are no safe uses worth consideration.”

On alarm: “…alarm does not provide the best atmosphere for finding rational solutions”.

On extreme environmentalists: “…the extreme wing of the environmental movement may inhibit communities of all kind from making the fullest use of the technical means which exist for improvement of the human condition.”

Conclusion:

How about telling Extinction Rebellion (or the elders of Fridays for Future) to start reading this prescient book, written well before they were born?

A peek at ground-ozone

October 5, 2019

Attention: this comment was updated on 06 Oct 2019; the previous comparison with the 6-month values was erroneous. Sorry!

________________________

It is always a useful exercise to compare our meteoLCD ground ozone measurements (here) with those of the official administration for air quality (here).

Since 3 Jan. 2017 we have used the miniature CAIRSENS O3&NO2 sensor for our ozone measurements (technical specs here). The NO2 readings are normally well below this instrument's lower detection limit of 20 ppb, so that its readings can be taken as the ozone concentration (in ppb; multiplying by the conversion factor 2 gives a result in ug/m3 at standard conditions). The Administration of the Environment uses expensive Horiba sensors in its stations, and their readings are given in ug/m3. On 3 April 2018 the first Cairsens O3&NO2 sensor was replaced by a successor.
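The conversion factor of about 2 simply follows from the molar mass of ozone (48 g/mol) and the molar volume of air; a small helper function (my own, purely illustrative) makes the arithmetic explicit. The exact factor depends on the reference temperature: at 0 °C it would be about 2.14.

    def ppb_to_ugm3(ppb, molar_mass_g=48.0, molar_volume_l=24.45):
        """Convert a mixing ratio in ppb to ug/m3; 24.45 L/mol corresponds to 25 degC and 1013 hPa."""
        return ppb * molar_mass_g / molar_volume_l

    print(round(ppb_to_ugm3(50), 1))   # 50 ppb O3 -> about 98.2 ug/m3, i.e. a factor close to 2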

  1. The official stations of Vianden, Beckerich and Beidweiler.

We will compare our measurements made during the week of the 29th September 2019 with those of the three stations of Beckerich, Beidweiler and Vianden:

As Luxembourg is a very tiny country, the distances between Diekirch and the other 3 stations are small: Diekirch-Vianden = 8km, Diekirch-Beidweiler = 19km, Diekirch-Beckerich = 25km (rounded to the km).

Regarding traffic we have a very clear situation: Vianden has very low traffic and is surrounded by a large forest area; the measuring station is situated between the two basins of the SEO pumped-storage facility. Beidweiler has some morning and evening traffic peaks from people driving to work in Luxembourg, and also a certain amount going in the opposite direction toward the Moselle region or Trier in Germany. Beckerich surely has the most morning and evening work-related traffic, and being on a main road (but not a highway) to Arlon in Belgium, also more traffic during the day. This situation is similar to that in Diekirch, where there are traffic peaks in the morning and evening hours and a continuous background over the day.

Here is a picture of the O3 measurements of these 3 stations:

I added a horizontal line marking the lowest nightly measurements. Clearly, Vianden shows the typical situation of a rural station, where there is no nightly traffic whose NO emissions would rapidly bring down the O3 levels. Its peak values also are the highest, as this is a location with very pristine and clear air (so not much UVB attenuation here), and with a large natural supply of ozone precursor gases such as isoprene and terpenes emitted by the surrounding trees.

Beidweiler is an intermediate case: the night lows are close to 20 ug/m3 and the peak values are lower than those of Vianden. Finally, Beckerich has the lowest night-time O3 levels, going down to about 5 ug/m3; its peak readings also are distinctly lower than those of Vianden and Beidweiler (please note that the vertical scales of the graphs are not the same!).

Now I cannot refrain from making a comment that I have been making for at least 20 years. From its location, it is clear that Vianden must have the highest natural O3 levels during warm periods (all stations not mentioned here are city stations). Why is it then that the Administration of the Environment uses the Vianden values every year to declare an ozone emergency and limit traffic speed on the highways, with the argument that Vianden is representative of Luxembourg? Nothing could be farther from the truth: most people in Luxembourg live in or near Luxembourg-City and to the south, in regions that are a far cry from the pristine Vianden situation. I guess that an environmental agency must justify its existence by launching scary messages from time to time; I am more than willing to change my opinion if I get one single good argument justifying this choice, which I consider a political, not a scientific, one.

3. Comparing Diekirch to the three other stations

The next figure shows for every station (brown curve) an overplot with our Cairsens measurements (in red):

Simple inspection suggests that Beckerich and Diekirch are the most similar: the same night-time lows due to ongoing traffic, the same highs and lows, with a possible overshoot of the highest readings. Beidweiler ranks second, and Vianden makes for some head-scratching: the night-time readings are quite different, but all the high readings are very close.

In my opinion, our Cairsens (this is the second sensor, as each one has a life-span of one year) has a span that might be slightly too high, even if its calibration certificate was nearly perfect. During high ozone events, we had readings exceeding 200 ug/m3 several times, when the other stations were distinctly below.

Let us have a look at the last month (09 Sep to the morning of 06 Oct), overlaying the Diekirch and Vianden plots:

This does not look too bad, but clearly the ozone sensor in Vianden had some problems, shown by the straight brown lines which point to an interpolation of missing data. Also do not ignore that this graphical overplot is really rough, so one should not put too much emphasis on slightly misaligned peaks.

Now let's look at the past 6 months. The official stations correspond rather well to one another:

We find approximately the same situation as in the short week-long series above.

Here now is a comparison between Diekirch and Beckerich, and in the next picture between Diekirch and Vianden:

Visually this is not too bad! We have the same peaks and lows, with a possible overshoot in the Cairsens readings during the highest O3 periods.

4. Conclusion

This short comparison has shown quite good agreement between our Cairsens measurements and the Horibas over the last week, month and even 6-month periods. Do not forget that the price difference between the Cairsens and the Horiba is enormous. The Cairsens is one of a new generation of sensors that are made to be affordable, calibrated for a full year of operation, and meant to be replaced after that year.

The O3&NO2 Cairsens sensor has a special patented filter that must be changed every 4 months or so. This change will be made asap.

Radon washout: two consecutive precipitation peaks

September 25, 2019

I have written many times on this blog about radon washout: after a short downpour, we (nearly) always see a visible peak in our gamma radiation, caused by a washout of the daughters of the noble gas radon, which is a natural constituent of our atmosphere; to find these comments enter “radon” into the search window on this site or click here, here and here.

A washout means a diminution of the local atmospheric aerosol concentration, and all measurements show that there is a delay of a few days before recovery. The last few days give us a very good example of two situations: a high precipitation peak followed by a lower one, and vice versa, a small rain peak followed by a higher one.

Situation A shows a high rainfall followed after about 6 hours by a smaller one: the gamma-radiation peaks have the same pattern, high, then lower. Situation B is like a mirror image for the rain: first a small downpour, then a much higher one. Here the small precipitation peak causes the highest gamma peak: it is a sign of radon washout from a “pristine” atmosphere. Six hours later we observe a three times higher downpour, but the gamma peak is very small: the washout operates on an atmosphere “cleaned” of radioactive aerosol particles (the radon daughters), so there is not much radioactive debris left.

This example shows that it would be folly to try to find a proportionality between the rain peak and the resulting gamma rise. The recovery time is a parameter not to be ignored. I tried to find a relationship between rain and gamma peaks for one-time events, sufficiently far (> 3 days) from a preceding one. There are not many such events in a year, and the correlation is poor. Maybe more on this when time permits.
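Here is a sketch of how such isolated events could be extracted automatically from hourly rain and gamma series (purely illustrative; the file name, column names and thresholds are my assumptions):

    import pandas as pd

    df = pd.read_csv("meteolcd_hourly.csv", parse_dates=["datetime"], index_col="datetime")

    rain_hours = df.index[df["rain"] > 2.0]                 # hours with a notable downpour (threshold assumed)
    isolated = [t for i, t in enumerate(rain_hours)
                if i == 0 or (t - rain_hours[i - 1]) > pd.Timedelta(days=3)]

    pairs = []
    for t in isolated:
        window = df.loc[t : t + pd.Timedelta(hours=3)]      # the gamma peak follows within a few hours
        gamma_rise = window["gamma"].max() - df["gamma"].median()
        pairs.append((df.at[t, "rain"], gamma_rise))

    print(pd.DataFrame(pairs, columns=["rain_mm", "gamma_rise"]).corr())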

The Kauppinen papers (2/2)

August 11, 2019

3. The four Kauppinen papers.

In the first part of these comments I finished by writing that Dr. Jyrki Kauppinen (et al.) has published several papers during the last decade on the problem of finding the climate sensitivity. Here is a list of these papers:

  • 2011 : Major portions in climate change: physical approach. (International Review of Physics) link
  • 2014: Influence of relative humidity and clouds on the global mean surface temperature (Energy & Environment). Link to abstract.
    Link to jstor read-only version (download is paywalled).
  • 2018: Major feedback factors and effects of the cloud cover and the relative humidity on the climate. Link.
  • 2019: No experimental evidence for the significant anthropogenic climate change. Link.

The last two papers are on arXiv and are not peer reviewed, which in my opinion is not an argument to refute them.

4. Trying to render the essentials without mathematics.

All these papers are, at least in large parts, heavy on mathematics, even if parts thereof are not too difficult to grasp. Let me try to summarize in layman's words (if possible):

The authors recall that the IPCC models trying to deliver an estimate for ECS or TCR usually take the relative humidity of the atmosphere as constant, and practically restrict themselves to allowing one major cause of a global temperature change: the change of the radiative forcing Q. Many factors can change Q, but overall the IPCC estimates that the human-caused emission of greenhouse gases and land-use changes (like deforestation) are the principal causes of a changing Q. If the climate sensitivity is called R, the IPCC assumes that ΔT = R*ΔQ (Δ being the Greek capital delta). This assumption leads to a positive water vapour feedback factor and so to the high values of R.
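Written out, and with a back-of-the-envelope illustration using only numbers quoted in these two posts (a doubling forcing of about 3.6 W/m2, an IPCC “best” estimate of about 3.5 °C, and Kauppinen's 0.24 °C), the assumption reads (in LaTeX notation):

    \Delta T = R\,\Delta Q, \qquad \Delta Q_{2\times\mathrm{CO_2}} \approx 3.6\ \mathrm{W\,m^{-2}}

    R \approx \frac{3.5\ \mathrm{K}}{3.6\ \mathrm{W\,m^{-2}}} \approx 0.97\ \mathrm{K\,W^{-1}\,m^2} \quad \text{(IPCC-like)}, \qquad
    R \approx \frac{0.24\ \mathrm{K}}{3.6\ \mathrm{W\,m^{-2}}} \approx 0.07\ \mathrm{K\,W^{-1}\,m^2} \quad \text{(Kauppinen-like)}

This is only meant to show the orders of magnitude at stake; the authors' own R in their papers may be defined or normalized differently.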

Kauppinen et al. disagree: they write that one has to include in the expression of ΔT the changes of the atmospheric water mass (which may show up as changes of the relative humidity and/or the low cloud cover). Putting this into an equation leads to the conclusion that the water vapour feedback is negative and, as a consequence, that the climate sensitivity is much lower.

Let us insist that the authors do not write that increasing CO2 concentrations have no influence on global temperature. They do, but that influence is many times smaller than the influence of the hydrological cycle.

Here is what Kauppinen et al. find if they take real observational values (no fudge parameters!) and compare their calculated result to one of the official global temperature series:

The visual correlation is quite good: the changes in low cloud cover explain almost completely the warming of the last 40 years!

In their 2017 paper they arrive at a CO2 sensitivity of 0.24 °C (about ten times lower than the IPCC consensus value). In the last, 2019 paper they refine their estimate, again find R = 0.24 and give the following figure:

Clearly the results are quite satisfactory, and they also show clearly that this simple model cannot render the spikes caused by volcanic or El Niño activity, as these natural disturbances are not included in the balance.

The authors conclude that the IPCC models cannot give a “correct” value for the climate sensitivity, as they practically ignore (at least until AR5) the influence of low cloud cover. Their finding is politically explosive in the sense that there is no need for a precipitous decarbonization (even if, in the longer run, a reduction of the carbon intensity of many activities might be advisable).

5. My opinion

As written in part 1, Kauppinen et al. are not the first to conclude that the climate sensitivity is much lower than the IPCC and its derived policies assume. Many papers, even if based on different assumptions and methods, come to a similar conclusion, i.e. that the IPCC models give values which are (much) too high. Kauppinen et al. also show that the hydrological cycle cannot be ignored, and that the influence of low cloud cover (possibly modulated by solar activity) should not be neglected.
What makes their papers so interesting is that they rely on practically only two observational factors and are not forced to introduce various fudge parameters.

The whole problem is a complicated one, and rushing into ill-considered and painful policies should be avoided before we have a much clearer picture.

The author Alberto Zaragoza Comendador has a very interesting web-site with an interactive climate-sensitivity calculator:
azcomendador.shinyapps.io/Clisense

I really recommend spending some time trying his calculations and especially reading his very interesting article “It shouldn’t take 100 years to estimate the climate sensitivity”.

_______________________________________________________

Addendum (added 12Aug2019) :

Dr. Roy Spencer showed a very telling slide in his Heartland 2019 presentation:

This image shows the troposphere (not surface) warming as predicted by the CMIP5 models (which form the basis of all the “consensus” political action) versus the observations made by satellites (the RSS and UAH teams) and 4 different reanalyses which include everything (satellites, floats, balloons …). The spread between the different models is so great as to forbid any action based on any single one of them (which one would you choose as the “truth”?). Curiously, the only model close to the observations is the Russian INM-CM5 model (read a more complete discussion of that model here).

The Kauppinen papers (1/2)

August 11, 2019
  1. Climate sensitivity.

The most important question regarding anthropogenic climate change is that of the climate sensitivity, in short: “what supplementary warming will be caused by a doubling of the atmospheric CO2 concentration?”. This question lies at the heart of “climate protection policies”: if this sensitivity is high, rapid decarbonisation might be seen as inevitable; if it is low, a better policy might be to wait for upcoming technologies allowing a more painless switch to a non- or low-carbon future.

The IPCC has not been able to narrow down its uncertainty range for more than 20 years: it stubbornly lies in the interval 1.5 to 4.5 °C, with a “best” estimate of approx. 3.5 °C. These numbers are (mostly) the outcome of climate models, and all assume that feedback factors like increasing water vapour are positive, i.e. that they considerably augment the warming caused by the roughly 3.6 W/m2 of radiative forcing from a CO2 doubling.

Many scientists agree with the IPCC, but a smaller group does not. This group (Lindzen, Lewis and Curry etc…) tries to find the climate sensitivity from observations of the past climate, and most get an answer which lies below (often well below) the IPCC's lowest boundary.

If they are right (and the IPCC consensus people wrong), most of the expensive policies following the Paris outcome (“limit warming to 1.5°C w.r.t. pre-industrial times”) could be scrapped.

The notion of “climate sensitivity” is complex: usually two different sensitivities are used. The Equilibrium Climate Sensitivity ECS considers the final temperature change caused by a CO2 doubling once “everything has settled down”, which means all feedback factors have played out and all momentary thermal imbalances on the planet have been resolved. This may take a horribly long time, on the order of centuries, and thus is too long to represent a realistic political goal. So a second definition, the Transient Climate Sensitivity TCS, is often used (also called the transient climate response TCR); here one assumes a yearly 1% increase in the atmospheric CO2 concentration, which leads to a doubling in about 70 years, a time span more acceptable for a political agenda.
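The roughly 70-year doubling time follows directly from the assumed 1% yearly growth (in LaTeX notation):

    (1.01)^{n} = 2 \;\Longrightarrow\; n = \frac{\ln 2}{\ln 1.01} \approx 69.7 \ \text{years} \approx 70 \ \text{years}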

If we look at the history of scientific papers treating this subject, there is a clear tendency toward lower sensitivities since the first calculation by Charney in 1979:

Decline of published TCR and ECS values since 2000 (link).

So this extremely important subject is far from “settled science”, as most media, environmental groups and politicians continue to shout and want us to believe.

2. Dr Jyrki Kauppinen

Dr. Kauppinen is a professor of physics at Turku University in Finland. He has published quite a lot of papers on spectroscopy, Fourier analysis etc. Four of his (and co-authors’) papers (published 2011, 2017, 2018 and 2019) look at the climate sensitivity problem using only observations, and find that the most important feedbacks, caused by water vapor (condensing into low clouds or not), are negative and not positive as assumed by the IPCC.

They find that the influence of human activity on climate change is insignificant (read here a general comment in the Helsinki Times from 14 July 2019).

In the following parts of this blog I will look into these papers, which are not always easy to read and understand. They are quite heavy on mathematics, and even if I am able to follow most of it, there are some occurrences where I have to assume that their calculations are correct.

____________________________________________

to be continued….

Energy, as always!

July 24, 2019

Ignoring the importance of reliable, sufficient, base-load capable and affordable energy is ignoring everything past human history has told us. There simply is not a single example where a nation flourished and developed by cutting back its energy usage. The form of the energy used may change (e.g. more gas, less coal…), but the unavoidable truth is that progress correlates with energy use. We now have a new generation of “child-climatologists” or “child-environmentalists” who absolutely ignore this, and whose quasi-religious war against energy usage reminds me of the wrongdoings of Chinese juveniles during the Cultural Revolution in the late 60’s (see here).

Making us all believe that the planet is on the brink of climate destruction and that deep decarbonisation must be achieved at all costs during the next months (yes, months!) should be regarded by all sensible people not only as Utopian (a Utopia often strikes us as sympathetic, if outlandish) but as completely foolish.

So in this very short blog I will only show 3 pictures which might put us back into reality (more on Paul Homewood's excellent website “notalotofpeopleknowthat”).

First here a pie-chart of global energy usage in 2018:

Note how small the percentage of renewables is, despite the billions of subsidies poured into what should now be considered mature technologies (solar PV, wind and hydro). The biggest problem of PV and wind remains their intermittency, and an affordable and sustainable storage technology of the needed magnitude is nowhere to be seen.

The next picture shows that from 2000 to 2018 energy usage was continuously rising; nothing surprising here, as many under- or low-developed nations try to better the standard of living of their people.

So the mantra that increasing energy efficiency will rapidly lead to a steady state must be seen as a wishful but unrealistic desire. The developed world has made big progress in energy efficiency, and the low-hanging fruits have all been harvested. As so often, there will be an asymptotic progression where increasing efficiency becomes more and more difficult and costly.

Finally let us look at the CO2 emissions from primary energy usage according to the various world regions:

Old Europe becomes more and more of a blip in the total: what matters is what happens in Asia, and all the self-hurting policies dreamed up in the EU will finally be seen by history as “much pain without gain”.

Please reflect on this, and make your own conclusions.

The new IFO report on E- and Diesel cars

April 19, 2019


The München-based IFO (Information und FOrschung) institute is one of the largest research think tanks in Germany. Its former president Prof. Hans-Werner Sinn is an outspoken critic of Germany’s Energiewende and is constantly attacked by those who follow the politically correct official energy mantra. Three authors, Prof. Christoph Buchal (physics, Uni Köln), Hans-Dieter Kaul (research fellow, IFO) and Hans-Werner Sinn, have published a report titled “Kohlemotoren, Windmotoren und Dieselmotoren: Was zeigt die CO2-Bilanz?” (link), which may be translated as “Coal, Wind and Diesel Engines: what does the CO2 balance show?”. This paper is remarkable not for what it shows (there are no fundamentally new insights here), but because it is written in extremely accessible language, without any superfluous statistical gizmos or technical jargon. It also uses only freely available data, without any activist cherry-picking. The paper compares the CO2 emissions (in gCO2 per km) of a new TESLA 3 electric car and a MERCEDES 220d Diesel car. Here are some comments on this paper.

  1. The big lie

The EU authorities and the Umweltbundesamt all classify battery-driven electric cars as “CO2-free”. This is unacceptable for at least 2 reasons:
– the electricity taken from the grid to charge the batteries is not CO2-free in Germany (the actual energy mix accounts for 550 gCO2/kWh);
– the batteries used are a consumable, with a life-span generally well below that of the car, and they require large amounts of energy for production and eventual recycling.

So a correct LCA (life cycle analysis) must include the battery-related CO2 emissions and naturally also those caused by transforming subterranean oil into a liter of Diesel fuel at the pump. These numbers are available from many research papers and amount to the following:
– the battery LCA adds 73-98 gCO2/km for the electric car (here the Tesla 3);
– for the Diesel fuel one should add 21% to the CO2 quantity emitted by burning the gasoil in the engine.

This amounts to the following results:

  • the Tesla 3 emits 156 to 181 gCO2/km
  • the Mercedes 220d emits 141 gCO2/km

If the Mercedes had a methane (natural gas) driven engine, it would emit only 99 gCO2/km. The Tesla emissions are based on the actual German energy mix of 550 gCO2/kWh (to be compared to France's roughly 100 gCO2/kWh!).
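Here is a small sketch of the arithmetic behind these figures; the consumption of about 15 kWh/100 km is my own assumption (chosen so that the result reproduces the range quoted from the report), the other numbers are those cited above:

    def ev_gco2_per_km(grid_gco2_per_kwh, consumption_kwh_per_100km, battery_lca_gco2_per_km):
        """Grid-mix driving emissions plus the amortised battery life-cycle emissions."""
        driving = grid_gco2_per_kwh * consumption_kwh_per_100km / 100.0
        return driving + battery_lca_gco2_per_km

    # Tesla 3 on the German mix (550 gCO2/kWh), battery LCA adding 73..98 gCO2/km
    print(ev_gco2_per_km(550, 15.0, 73), ev_gco2_per_km(550, 15.0, 98))   # 155.5 and 180.5 gCO2/km

    # Diesel: the quoted 141 gCO2/km includes the 21 % upstream surcharge, so the pure
    # tailpipe figure would be about 141 / 1.21 = ~117 gCO2/km (my back-calculation)
    print(round(141 / 1.21))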

The conclusion is damning: the E-car emits more CO2 per km than the “infamous” Diesel car, so replacing Diesel cars with electric ones will do nothing for climate protection (if one accepts the dogma that CO2 emissions from fossil fuels are the cause of the observed climate changes).

2. What to do with ever more solar and wind electricity?

Many papers point to the fact that when intermittent electricity producers such as wind turbines and solar PV amount to more than 30% of the installed capacity, an increasing amount of renewable electricity must be dumped during peak “green” generation periods (much wind and sunny sky) by shutting off the wind turbines and solar panels; and these intermittent producers must nevertheless be backed up by an increasingly uneconomic array of base-load capable producers like coal, gas or nuclear power stations. Together with many other researchers, the authors of the paper consider that battery storage for handling, for instance, seasonal imbalances will not be possible, due to the huge quantity of rare materials needed and the exorbitant price tag. They suggest a two-step political decision:

  1. begin to switch all fossil-fuel engines to methane (natural gas), as this will surely have an enormous impact on the traffic-related CO2 emissions (which remain stubbornly constant in Germany); the Netherlands have shown how to do this, and normal gas engines are easy to adapt to the methane fuel.
  2. begin to develop the electrolysis of hydrogen using the excess renewable electricity, and use this hydrogen to make “green” methane (a well-known process, see for instance here).

Both of these steps could use the existing network of refueling stations and the existing underground gas pipeline infrastructure. Electrolysis and methanization do not come cheap, as the efficiency w.r.t. burning raw hydrogen drops to 34%, a number comparable to the efficiency of a modern Diesel engine. Nevertheless, the authors see the hydrogen route (be it as a fuel for fuel cells driving e-cars or as the basis for methanization driving thermal engines) as the only possibility to further increase renewable electricity production and usage.

3. A Post Scriptum

The authors conclude with some thoughts on the German Energiewende: forced on the country by politics and NGOs in order to “save the planet”, it has spectacularly failed in reducing Germany’s CO2 emissions. A recent STATISTA article (link) gives this diagram:

Europe’s 10 biggest polluters

7 among the 10 biggest “polluters” are from Germany!

The authors fear that politics built on a lie may well lead to a general mistrust among the public and could in the near future make desirable and necessary political decisions impossible to enforce.

PM contribution of road traffic

December 17, 2018

Reading the media one would think that fine particle pollution is mostly caused by Diesel cars (the new villain on the block) and that restricting Diesel cars would solve the problem. The simple truth is quite different, as shown by this recent graph from the EEA:

The violet bands are the contributions to PM emissions by road transport (including tailpipe, brake and tire abrasion emissions): they are truly small w.r.t. the totals, about 7.6% for the PM10 and 10.5% for the PM2.5 fine particles! So the vast majority of fine particle emissions come from sources not related to road transport. The elephant in the room is the emissions from wood burning (included in the brown segments labelled “Commercial, institutional and households”).

(link to EEA article)