Welcome to the meteoLCD blog

September 28, 2008


This blog started on 28 September 2008 as a quick communication tool for exchanging information, comments, thoughts, admonitions, compliments etc. related to http://meteo.lcd.lu, the Global Warming Sceptic pages and environmental policy subjects.

Do sea levels rise faster at EU coasts?

June 2, 2017

One of the great scares of climate alarmists is the “we are all going to drown” meme; rising global sea levels (caused by our CO2 emissions) are predicted to displace millions and millions of people, and the media joyfully bring ever more graphic pictures of this impending catastrophe. As always, one should look at the data, and that is what Phil J. Watson does in his latest paper “Acceleration in European Mean Sea Level? A New Insight Using Improved Tools” (Journal of Coastal Research, 33/1, Jan. 2017, link to full version).

  1. Relative and geocentric sea-level

The best and longest data series we have on changing sea levels come from coastal tide gauges (and not from satellites). The Permanent Service for Mean Sea Level holds many long records (most of them European), some going back to the beginning of the 19th century and even further. These tide gauges have usually been well maintained, as they were important for the safety of ships entering or leaving a harbor. One of the best known series comes from the town of Brest (France), starting in 1807. A tide gauge measures a relative sea level, i.e. the sea level relative to the position of the instrument. When we speak of global sea levels, these levels are geocentric, i.e. relative to the center of the geoid (the Earth with its non-spherical shape). A global sea level has about as much practical value as a mean global temperature, that is mostly none: it is an intellectual construct which may be interesting from a scientific point of view, but useless as a tool for concrete political action, such as deciding to build dams or coastal protections. So what Watson does is check whether there is any acceleration in the relative sea level measured at 81 locations on the European coasts. The following picture shows his selection of tide gauges:

Clearly most stations are on the Atlantic and Baltic coasts. The Netherlands have long been accustomed to protecting their low-lying land against rising seas and storms, so I will focus on the situation at Amsterdam, and on Brest as representative of much of the more southern Atlantic coast. When talking about relative sea level, one should remember that (at least) 3 factors have an influence:
1. land movement at the site of the gauge: this can be an uplift caused by glacial rebound (the ground moves up because the pressure of the heavy ice masses that covered it during the last ice age has vanished; this is the case around the Baltic, for the stations in rectangle B). The ground may also move down due to ground-water extraction or simply the weight of neighboring buildings. These land movements can be quite different at close locations: at Vlissingen (NL) there is a mean rise of +0.28 mm/y, while at Ostende (BE), 59 km away, the ground sinks at -35 mm/y.
2. atmospheric influences: short-term ones such as atmospheric pressure and wind, and long-term ones such as those caused by the NAO (North Atlantic Oscillation). Usually the gauge readings are corrected for atmospheric pressure variations and are low-pass filtered to remove, for instance, the influence of the wind.
3. climate change influences, essentially melting (land-based) glaciers adding water to the oceans and thermal expansion of warming waters.

Unambiguously extracting these individual factors from the gauge data is difficult, if not impossible. But it is not a practical necessity, as decisions to begin work on new dams or other protective measures rely on the relative sea level (which includes all these factors), and not on a single parameter. Watson has used a new tool, a module called msltrend for the well-known open-source R statistical package, to obtain trend, velocity and acceleration estimates of exceptional quality.
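Watson’s msltrend analysis is far more sophisticated, but the basic question, whether a tide-gauge record shows any acceleration, can be illustrated with a minimal sketch: fit a quadratic to an annual mean sea-level series and read the linear coefficient as velocity and twice the quadratic coefficient as a (constant) acceleration. The file name and column names below are hypothetical placeholders, not part of Watson’s toolchain.

```python
# Minimal sketch (not the msltrend method): estimate velocity and a constant
# acceleration from an annual tide-gauge record by fitting a quadratic.
# Assumes a hypothetical CSV with columns "year" and "msl_mm" (annual mean sea level, mm).
import numpy as np
import pandas as pd

def velocity_and_acceleration(csv_path):
    df = pd.read_csv(csv_path)
    t = df["year"].to_numpy(dtype=float)
    y = df["msl_mm"].to_numpy(dtype=float)
    t = t - t.mean()                      # centre time to reduce collinearity
    # y ~ c0 + c1*t + c2*t**2  ->  velocity = c1 [mm/y], acceleration = 2*c2 [mm/y**2]
    c2, c1, c0 = np.polyfit(t, y, deg=2)
    return {"velocity_mm_per_y": c1, "acceleration_mm_per_y2": 2.0 * c2}

# Example (hypothetical file): print(velocity_and_acceleration("brest_annual.csv"))
```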

2. The situation at Amsterdam

The following picture shows the relative sea level at Amsterdam, from 1766 to 2010:

Two conclusions are obvious:
1. the relative sea level begins to rise around 1820, close to the end of the Little Ice Age which is usually taken as 1850.
2. since that date the increase has been uniform, with no visible acceleration since the start of the industrial age (~1850); over the whole period the average increase is less than 1 mm/y, or about 1.3 mm/y since 1820.

Clearly the period of highest CO2 emissions, between 1970 and 2010, does not leave any visible imprint on this sea-level record.

3. Velocity of change and acceleration at Brest and Stockholm

The  next picture shows the time series of relative sea level, the velocity of change ( = the time gradient) and the acceleration at Brest and Stockholm:

Stockholm’s relative sea level is continuously falling, at a near-constant velocity and with practically zero acceleration (which follows, as acceleration is the derivative of velocity). The relative fall of the sea level is substantial, more than 4 mm per year, and is essentially the fingerprint of glacial rebound around the Baltic Sea (at the island of Visby, ~150 km from Stockholm, the ground rises at 3.31 mm/y, and at Furuogrund, close to the northern coast of the Baltic, the rise is a spectacular 10.43 mm/y).

At Brest the picture is quite different: there is a small increase of less than 0.7 mm/y over the whole period, with a near-constant velocity since 1990 (and no statistically significant acceleration). As at Amsterdam, a “global warming” causing accelerated sea-level rise is not visible in these series. Watson writes in his paper that such a warming might show up with a delay of 15 to 20 years: well, the two periods of 1910-1945 and 1976-1990, usually accepted as the two global warming episodes, are long past that delay, and leave no imprint in the relative sea-level series.

4. The geocentric sea-level rise and the IPCC scenarios

Geocentric sea-levels can be calculated from the relative levels (with some caveats), using tectonic models or GPS measurements for the newest data. It is interesting to compare the European situation to the “global” geocentric sea-level rise (in mm/y) as used by the IPCC and in many models:

The global geocentric sea-level average change corresponds to the black line at about 3.4 mm/y (the grey band represents the 95% confidence interval, i.e. the region outside of which data are unlikely to fall). All EU geocentric sea levels are below the global average, and most are even clearly below the lower confidence boundary!

5. Conclusions

Watson’s paper concludes that relative sea levels at European coasts do not show a statistically significant acceleration (that is, their rate of change per year remains constant); remember that these sea levels are given by very long time series and reliable, usually carefully maintained instruments. As such, these data do not show a fingerprint of an ongoing climate warming. He makes a very interesting point by plotting what the changes in velocity and acceleration would have to be for observations to conform to the IPCC’s models:

Taking Cuxhaven as an example, the acceleration that has been zero since 1840 would have to increase to about 0.20 mm/y**2, and the velocity, which was a constant ~2 mm/y (i.e. a flat slope) during the last 180 years, would have to rise to a mind-blowing 17 mm/y during the coming 90 years: an extremely improbable scenario!
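A quick back-of-the-envelope check (my own arithmetic, not taken from the paper) shows that these two numbers are consistent with each other: raising the velocity from about 2 mm/y to about 17 mm/y within 90 years requires a mean acceleration of roughly 0.17 mm/y**2, close to the ~0.20 mm/y**2 quoted above.

```python
# Back-of-the-envelope check of the Cuxhaven numbers quoted above (my arithmetic, not the paper's):
v_now, v_required, years = 2.0, 17.0, 90.0            # mm/y, mm/y, years
mean_acceleration = (v_required - v_now) / years       # mm/y^2
print(f"required mean acceleration = {mean_acceleration:.2f} mm/y^2")   # ~0.17 mm/y^2
```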

The conclusion for our political rulers is first to read this paper, then to try to understand the numbers, and finally to avoid jumping to hysterical “mitigation” measures that may well be extremely costly, but essentially superfluous for changing coastal sea levels whose century-long changes do not point to any potential catastrophe.

Rooftop wind turbines: haha or oho ?

May 28, 2017

A couple of weeks ago our national media wrote about plans of the University of Luxembourg to “go green” by installing PV panels (yawn…) and rooftop wind turbines on its buildings at the Belval campus. As the following picture shows, there is only one high-rise building on the campus, the 83 m high (18 floors) “Maison du Savoir”, standing on rather level ground at approx. 300 m asl.

I understand that a university must jump on the subsidy train of the day to bolster its income, and plan every fashionable “planet saving” measure that keeps public money flowing in. Nevertheless, the (very old) idea of rooftop turbines has never taken off, as simple physics and economics show it to be more child’s play than a serious technology for harvesting wind energy at most locations.

1. The numbers behind rooftop wind turbines

There is a very good report on this type of diminutive wind turbine at the Solacity website (link), titled “The truth about small wind turbines”, which concludes that one should avoid small wind turbines and install PV panels instead (what else to expect from a PV seller!). The report gives a lot of very clear data, which I will use in this blog.

Here is the formula to compute the yearly energy (in kWh) to be expected from a turbine with a rotor of a certain diameter and constant windspeed:

E [kWh/year] = 2.09* (diameter**2)*(windspeed**3)   with diameter in [m] and windspeed in [m/s]. The ** means “to the power”.
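For convenience, here is the same rule-of-thumb formula as a small function; it reproduces the example given just below (the function name is mine, only the formula itself comes from the Solacity report).

```python
def yearly_energy_kwh(diameter_m, windspeed_ms):
    """Rule-of-thumb yearly yield (kWh) of a small wind turbine, using the formula
    quoted above: E = 2.09 * diameter**2 * windspeed**3, assuming constant wind speed."""
    return 2.09 * diameter_m**2 * windspeed_ms**3

print(yearly_energy_kwh(4.0, 3.5))   # ~1434 kWh/year, the "meager" figure discussed below
```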

With a diameter of 4 m and a constant wind speed of 3.5 m/s such a turbine would yield a meager 1434 kWh (worth approx. 260 Euro). These data are for horizontal-axis turbines, which do not look good on a roof and, in the usual turbulent air flow, may cause more severe structural problems than a vertical-axis turbine (VAWT, usually a variant of the Darrieus type). The following picture shows two VAWTs on a building in Queensland (link):

The problem is that the efficiency of this VAWT type has been found to be much lower than that of a corresponding horizontal-axis turbine, and material cost and fatigue were also higher (see here).

2. The problem with wind speed

All wind turbines have a minimum cut-in wind speed, below which they cannot produce any energy. This usually lies around 3 to 5 m/s (10 to 18 km/h). There is also a maximum cut-off speed (around 25 m/s) at which they must be stopped to avoid damage, but such speeds are rare in Luxembourg and may be ignored when speaking of collectable energy.

Now, depending on your location, speeds above 3 m/s might not be exceptional: if your building is located in the high Alps or at the seashore, this should not be a big problem. The real situation in continental Luxembourg is quite different. Let me give you the data measured at meteoLCD (Diekirch, about 218 m asl) for the last 5 years:

The “avg” column gives the yearly average of the 17520 half-hour measurements (each half-hour measurement is the average of 30 measurements made every minute). In neither year does this average wind speed (in m/s) exceed the 3 m/s cut-in limit. The GT3 column gives the number of hours (out of the total of 8760) where the wind speed was greater than 3 m/s, and so on for the next columns.

Clearly, wind energy could have been produced during only about 1/5 of the time. The GT4, GT5 etc. columns show that the number of “fertile” hours drops rapidly (approx. by half for each step) and becomes virtually zero for wind speeds greater than 10 m/s. One should not forget that very often the rated power is given for a wind speed of 11 to 15 m/s.
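The GTx columns are easy to reproduce from any half-hourly wind record; here is a minimal sketch. The file and column names are hypothetical placeholders, not the actual meteoLCD file format.

```python
# Sketch: yearly average wind speed and "GTx" hour counts from a half-hourly record,
# as in the meteoLCD table above. The column name "wind_ms" is a hypothetical placeholder.
import pandas as pd

def wind_summary(csv_path, thresholds=(3, 4, 5, 6, 7, 8, 9, 10)):
    v = pd.read_csv(csv_path)["wind_ms"]            # 17520 half-hour mean speeds per year
    summary = {"avg_ms": v.mean()}
    for thr in thresholds:
        # every half-hour sample above the threshold counts as 0.5 hours
        summary[f"GT{thr}_hours"] = 0.5 * (v > thr).sum()
    return summary

# Example (hypothetical file): print(wind_summary("diekirch_halfhourly_2016.csv"))
```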

The European Wind Atlas gives much more optimistic numbers, which in my opinion are the result of modelling and not of observations:

According to the “Sheltered terrain” column, the Belval wind resource is about 60% lower than the situation at Diekirch (for a measuring height of 50 m above ground level). Keeping in mind that the meteoLCD anemometer is about 20 m above ground level, and that a possible rooftop wind turbine installed on the “Maison du Savoir” would be at 83 m, my best guess is that the Belval data will not be vastly different from those at Diekirch, or at best not more than double. Using one of the available wind profiles for cities (link), one could speculate that at 83 m height the wind speed would be about 60% higher than at 20 m, which confirms my intuitive guess.
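One simple way to make such a height extrapolation is a power-law wind profile; the exponent of about 0.33 for rough urban terrain is my own assumption (it is not taken from the linked profile), chosen only to show that the ~60% figure is plausible.

```python
# Sketch of a power-law height extrapolation: v(z) = v_ref * (z / z_ref)**alpha.
# alpha ~ 0.33 for rough/urban terrain is an assumption, not a value from the linked wind profile.
def wind_at_height(v_ref, z_ref_m, z_m, alpha=0.33):
    return v_ref * (z_m / z_ref_m) ** alpha

print(wind_at_height(1.0, 20.0, 83.0))   # ~1.6, i.e. roughly 60% higher at 83 m than at 20 m
```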

In this article the (US) author suggests that a typical payback period for a 1 kW rooftop HAWT would be about 120 years, not counting repair and maintenance.

3. Conclusion

My reaction is rather “haha”. Do not expect any non-trivial energy output from rooftop turbines at the University of Luxembourg. The university rooftop wind turbine will be nothing more than an expensive gimmick, fooling people into believing that they have found a miracle solution for providing “clean” energy.

The funding authorities should at least insist on good observational wind speed data before paying for what seems to me more a publicity stunt than a scientific endeavor.

Global land warming: an airport fingerprint?

May 12, 2017


[Picture courtesy Wattsupwiththat: the Svalbard (Spitzbergen) runway with its weather station]

Prof. Fred Singer has an interesting article in the “American Thinker” where he suggests that the recent global warming given in the IPCC reports and many other data sources may be a fake, compared to the “real” global warming seen during the early period 1910 – 1942.

Let me just give a very short comment concerning the global land temperatures, as given by GISS Nasa; the numbers I use are from the CSV file (link), smoothed data column.

First look at this graph, which shows how dramatically the number of weather stations has fallen, with rural stations suffering the biggest losses:


Before the plunge, the percentage of weather stations located at an airport was ~40%, increasing to ~75% in 2000.

Now we all know that an airport has large surfaces of dark tarmac and that the exhaust from aircraft turbines is very hot air; so even if the meteorological sensors are mounted inside a Stevenson screen, at the regular height over grass-covered soil, one should not be surprised that such an airport location will show warmer temperatures than a plain rural one.

Look here at the Findel airport in Luxembourg (courtesy Google Maps): notice how the exhaust of the parked blue-tailed plane blows in the direction of the weather station under westerly wind conditions (which are the most frequent in Luxembourg).


Let us also remember that at most airports the traffic increased dramatically between 1970 and 2000. All these factors could produce a fake warming, on top of a possibly existing “real” warming.

In the next figure I have superimposed the graph of global land temperature anomalies from GISS on the previous plot:


I calculate the 5-year warming for two periods where the percentage of airport-based stations is more or less constant: from 1970 to 1975 (less than 40% airport stations) the warming is +0.04 °C, and from 1995 to 2000 (about 75% airport stations) it is +0.20 °C. The corresponding increases in atmospheric CO2 were +5.43 ppmV and +8.73 ppmV (Mauna Loa data).

The following table summarizes this situation:

Period       Airport stations   5-year warming   CO2 increase (Mauna Loa)
1970-1975    < 40%              +0.04 °C         +5.43 ppmV
1995-2000    ~ 75%              +0.20 °C         +8.73 ppmV
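A quick check of the ratios behind the argument (simple arithmetic on the numbers quoted above):

```python
# Ratios behind the argument, using the GISS and Mauna Loa numbers quoted above.
warming_1970_1975, warming_1995_2000 = 0.04, 0.20      # degC over 5 years
co2_1970_1975, co2_1995_2000 = 5.43, 8.73              # ppmV over 5 years

print(warming_1995_2000 / warming_1970_1975)   # 5.0  -> warming five times larger
print(co2_1995_2000 / co2_1970_1975)           # ~1.6 -> CO2 increase less than double
```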

So we have a land warming that is 5 times higher when the percentage of airport-located stations has nearly doubled, while the CO2 increase is less than double. Do you really think that the location has no influence at all on the observed land warming, and that the CO2 increase is the sole cause?

______________________________________________

History:

14 May 2017: added pictures of weatherstations at Svalbard and Findel airports.

Wood burning causes more pollution than diesel trucks

April 13, 2017


In 2015 I wrote here a comment “Wood and pellets: a “burning” fine particulate problem“, where the small-particle emissions were compared to those of traditional fossil energy sources, and found to be extremely high. As we are now in a time where Diesel-bashing has become the newest green fad, I was pleasantly surprised by an article from Michael Le Page in the New Scientist (4 Feb 2017), titled “Where there’s smoke” (access is paywalled!). He describes how log burning in London and other cities has become a major pollution problem, often emitting much more PM2.5 than Diesel trucks.

A Danish “eco-friendly” wood burner was found to emit 500000 small particles per cm3 through the chimney, to be compared with the 1000 particles/cm3 found at the tail-pipe of a modern Diesel truck: so one eco-family thinking it is saving the planet causes as much pollution as 500 Diesel trucks!

Look at this graph from the Danish Ecological Council showing the particulate count inside a Copenhagen wood burning house:

This “correctly installed stove” caused an inside pollution 4 times higher than that of the most polluted street!

As I wrote in my earlier article, log burning certainly is the big contributor, far exceeding a pellet stove. But as far as romance goes, it is difficult to deny the much higher emotional appeal of a burning log compared to some smoldering pellets.

What I really find so scandalous is that people have been coaxed by instilled “climate guilt” into switching to wood burning, and now the nasty side of this eco-fad shows up. But as hip as Diesel-bashing seems to be, ignoring the pollution caused by wood burning is a real scandal. A scandal which feels like a perfect déjà-vu, a repeat of the politically motivated push towards the less CO2-emitting Diesel engines in the 90’s.

Must eco-politics always swing from naively pushing a “climate” solution to recognizing (after a long latency) that the nuisances of the solution may well be greater than the putative dangers it was meant to avoid?

_______________________

PS1: here is a plot from the Australian Air Quality Group showing the contributions to PM2.5 pollution from different sources (Jun/Jul/Aug is the Australian winter):

The peak during July shows that woodsmoke contributes about 10 ug/m3, whereas transport and industry together contribute only about 1 ug/m3, i.e. ten times less! (see here)

How the IPCC buries its inconvenient findings

March 30, 2017

 

There has been an interesting hearing before the US Congress on Climate Change, Models and the Scientific Method, with testimonies from J. Curry, J. Christy, M. Mann and R. Pielke Jr. In a later blog I will comment on this hearing (video link). For the moment I will just write about an astonishing fact given by Prof. Christy from the UAH (University of Alabama in Huntsville). Christy (and Roy Spencer) analyze and maintain the database of global temperature measurements made by satellites (the other team being RSS).

 

  1. The absence of a human caused warming finger-print

All climate models agree that human-caused global warming should show up as a warm “hot-spot” in the upper troposphere over the tropics. Look at the next figure, which corresponds to the outcome of one model:

 

The problem is that observations (by balloons, radiosondes etc.) do not find this hot-spot, which is a serious blow to the validity of the CMIP-5 model ensemble. Christy has shown in his testimony that the difference between models and observations can be found even in the IPCC’s own latest report (AR5): but you probably have to be an avid and patient reader to find it, as it is buried away in the Supplementary Material for chapter 10, figure 10.SM.1; also the graphics are confusing and obscure, and some detective work is needed to clear the fog.

Here is this original figure 10SM.1:

The second plot from the left corresponds to the tropics, and it is this sub-plot that we will look into.

 

2. IPCC’s hidden truth

I made a zoom from the relevant plot, and added some annotations and boxes:

 

The red band gives the answer of the CMIP-5 ensemble to the question “what are the warming trends in the tropical atmosphere (up to about 15 km), in °C/decade?” when the models include human-generated greenhouse gases (essentially CO2); the blue band gives the answer when the models do not include (i.e. ignore) human GHG emissions. And finally the thin grey line shows the observations of one radiosonde database (RAOBCORE = Radiosonde Observation Correction using Reanalysis): it can readily be seen that the models including GHGs terribly overstate the real warming: the red band (= region of uncertainty) lies completely above the observations. Now what is nearly hilarious is that when the models do not include human GHGs (the blue band), the result is absolutely acceptable, as the blue band covers most of the observation line.

Christy made this graph still clearer by including all observations (the region limited by the grey lines):

The conclusion is the same.

So the million-dollar question is: how can the IPCC claim with great confidence that its models tell us that the observed warming carries a human fingerprint, and is caused with very high certainty by anthropogenic emissions, when its own assessment report shows the failure of these models?

 

The tuning of climate models

March 26, 2017


Many (or rather practically all) “climate politics” are based on the outcomes of climate models; as these models nearly unanimously predict a future warming due to the expected rise of the atmospheric CO2 concentration, their reliability is of primordial importance. Naively, many politicians and environmental lobbies see these models as objective products of hard science, comparable for instance to the structural physics of skyscrapers, whose correctness has been proven over many years.

Alas, this is not the case: as the climate system is devilishly complex and chaotic, building a General Circulation Model (GCM) starting from basic physical laws is a daunting task; during its development, each model’s builders must make choices for certain parameters (their values, their possible ranges), a process which is part of what is usually called “tuning”. The choices made in tuning are not cast in stone but change with the model’s creators, with time and with cultural/ideological preferences.

Frédéric Hourdin from the “Laboratoire de Météorologie Dynamique” in Paris has published in 2016 together with 15 co-authors an extremely interesting and sincere article in the “Bulletin of the American Meteorological Society” titled “The art and science of climate model tuning” (link) on this problem. I will discuss some of the main arguments given in this excellent paper.

  1. Observations and models

Hourdin gives in Fig. 3 a very telling example of how the ensemble of CMIP5 models (used by the IPCC in AR5) differs in its evaluation of global temperature change from 1850 to 2010 (the temperature anomalies are given with respect to the 1850-1899 average):

I have added the arrows and text box: the spread among the different models (shown by the gray arrows) is enormous, larger than the observed warming itself (given in the very warming-friendly Hadcrut4 series); even the “adjusted” = tuned variant (the red curve) gives a warming in 2010 that is 0.5°C higher than the observations. We are far, far away from a scientific consensus, and decisions that ignore this are at best called “naive”.

2. Where are the most difficult/uncertain parts in climate models?

Climate models are huge constructs which are built up by different teams over the years; they contain numerous “sub-parts” (or sub-models) with uncertain parameters. One of the most uncertain ones is cloud cover. Just to show the importance, look at these numbers:

  • the forcing (cooling) of clouds is estimated at -20 W/m2
  • the uncertainty about this parameter is at least 5 W/m2
  • the forcing thought to be responsible for the post-1850 warming of about 1°C is estimated at 1.7 W/m2.

Conclusion: the uncertainty of the cloud cover effect is 3 times higher than the cause of the observed warming!

Hourdin asked many modelers what they think are the most important causes of model bias, and they correctly include cloud physics and atmospheric convection, as shown in fig. S6 of the supplement to the paper (highlights and red border added):

3.  Are the differences among the models only due to scientific choices?

The answer is no! Many factors guide the choices in tuning; Hourdin writes that “there is some diversity and subjectivity in the tuning process” and that “different models may be optimized to perform better on a particular metric, related to specific goals, expertise or cultural identity of a given modeling center”. So, as in many other academic domains, group-think and group pressure certainly play a strong role, producing a consensus that may well owe more to job security or tenure than to objective facts.

4. Conclusion

This Hourdin et al. paper is important, as it is one of the first in which a major group of “mainstream” researchers puts the finger on a situation that would be unacceptable in other scientific domains: models should not be black boxes whose outcomes demand a quasi-religious acceptance. Laying open the algorithms and the unavoidable tuning parameters (“because of the approximate nature of the models”) should be a mandatory premise. It would then be possible to check whether some “models have been inadvertently or intentionally tuned to the 20th century warming” and possibly correct/modify/adapt/abolish some hastily taken political decisions based on them.

The coming cooling predicted by Stozhkov et al.

March 18, 2017

 


 

A new paper from February 2017 has been published by Y.I. Stozhkov et al. in the Bulletin of the Russian Academy of Sciences. Here is a link to the abstract (at SpringerLink); the complete version is regrettably paywalled, but I was able to access it through the Luxembourg Bibliothèque Nationale.

The paper is very short (3 pages only), has no complicated maths or statistics and is a pleasure to read. The authors predict, as many others have done before, a coming cooling period; their prediction is based on two independent methods of assessment: a spectral analysis of past global temperature anomalies, and the observed relationship between global temperatures and the intensity of the flux of charged particles in the lower atmosphere.

  1. Spectral analysis of the 1880-2016 global temperature anomalies.

The paper uses the global temperature anomaly series from NOAA and CRU, computed as the difference from the average global near-surface temperature between 1901 and 2000. Their spectral analysis suggests that only 4 sine waves are important:

The general form is: wave = amplitude*sin[(2pi/period)*time+phase] with the period and time in years and the phase in radians; the authors give the phase in years, so you have to multiply by 2pi/period to obtain the phase in radians.

  • series #1: amplitude=0.406  period=204.57 years  phase=125.81*2pi/period (radians)
  • series #2: amplitude=0.218  period=  69.30 years  phase=  31.02*2pi/period
  • series #3: amplitude=0.079 period=   34.58 years  phase=  17.14*2pi/period
  • series #4: amplitude=0.088 period=   22.61 years  phase=  10.48*2pi/period

I computed the sum of these 4 series and merged the graph with the global land-ocean temperature anomalies from GISS; the problem is that GISStemp calculates the anomalies from the mean of the 1951-1980 period, so the concordance will suffer from an offset.
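Here is a minimal sketch of that summation (my own reconstruction, not the authors’ code). The phases are converted from years to radians as described above; taking the calendar year as the time variable is an assumption on my part, and a different time origin would simply shift the phases.

```python
# Sketch (my reconstruction, not the authors' code): sum of the four spectral
# components listed above. Time t is taken to be the calendar year (an assumption).
import numpy as np

# (amplitude [degC], period [years], phase [years]) from Stozhkov et al.
components = [(0.406, 204.57, 125.81),
              (0.218,  69.30,  31.02),
              (0.079,  34.58,  17.14),
              (0.088,  22.61,  10.48)]

def model_anomaly(t_years):
    t = np.asarray(t_years, dtype=float)
    total = np.zeros_like(t)
    for amp, period, phase_years in components:
        phase_rad = 2 * np.pi * phase_years / period      # phase: years -> radians
        total += amp * np.sin(2 * np.pi / period * t + phase_rad)
    return total

years = np.arange(1880, 2051)
anomalies = model_anomaly(years)   # compare with the GISS series (a constant offset is expected)
```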

The authors write that spectral periods of less than 20 years do not play an important role: this means that El Niños (with roughly a 4-year period) are ignored, as are important non-periodic forcing phenomena like volcanic eruptions. The following graph shows my calculation of the sum of the 4 spectral components (in light blue) together with the official GISStemp series in red:

The fit is not too bad but, as was to be expected, misses the very strong warming caused by the 2016 El Niño.

The authors are not the first to do a spectral analysis on the temperature series. N. Scafetta in his 2012 paper “Testing an astronomically based decadal-scale empirical harmonic climate model vs. the IPCC (2007) general circulation climate models” (link) gives the following figure of a good fit using 4 short-period sine waves (so he does not seem to agree with Stozhkov et al. on the unimportance of short periods):

Note that both models predict a cooling for the 2000-2050 period.

2. Cosmic rays and global temperature

We are now in solar cycle 24, one of the weakest cycles in ~200 years, as shown by the next figure (link):

A situation similar to the Dalton minimum, the cold period during the first decades of the 19th century, seems to be unfolding and, all other things being equal, would suggest a return to colder-than-“normal” temperatures. But, as Henrik Svensmark first suggested, the sun’s activity acts as a modulator of the flux of charged cosmic particles, which create in the lower atmosphere the nucleation seeds for condensing water, i.e. cooling low-altitude clouds. In this paper the authors compare the flux N (in particles per minute) measured in the lower atmosphere (0.3-2.2 km) at middle northern latitudes with the global temperature anomalies dT: the measurements clearly show that an increase in N correlates with a decrease in dT. This is an observational justification of Svensmark’s hypothesis:

As this and the next solar cycle are predicted to be of very low activity, this observation provides a second, independent prognosis of a coming cooling (you may want to look at my older presentation on this problem here).

3. Conclusion

I like this paper because it is so short and does not try to impress the reader with an avalanche of complicated and futile mathematics and/or statistics. The reasoning is crystal clear: both the spectral analysis and the expected rise in the flux of charged particles suggest a global cooling over the next ~30 years!

___________________________________________________________________________________________

Addendum 03 April 2017:

You might watch this presentation by Prof. Weiss given in 2015 on cycles in long-term European temperature series.

Energiewende: a lesson in numbers (Part2)

March 11, 2017

 (picture link:http://www.nature.com/nature/journal/v445/n7125/full/445254a.html)

In the first part of this comment on the Energiewende I showed that its primary goal of restricting CO2 emissions has not been attained.

In this second and last part I will concentrate on the costs of the Energiewende.

2. The costs of the Energiewende

Let us remember that the financial aspect of the Energiewende is a system of subsidies going in many directions: those who install solar PV or wind turbines (for instance) receive a subsidy for the installation costs; they are granted priority in feeding electricity into the grid, and they are paid a feed-in tariff largely in excess of the market price. The McKinsey report writes: “Die aktuell vorliegenden Zahlen belegen, dass die bisherigen Erfolge der Energiewende überwiegend durch teure Subventionen erkauft worden sind” (the currently available figures show that the successes of the Energiewende so far have mostly been bought with expensive subsidies).

The costs for the individual household rise continuously, as shown by the next graph:

The increase with respect to the 2010 situation is a mind-blowing factor of 3.35; as the kWh price will probably reach or exceed 0.30 Euro in 2017, most experts agree that the yearly supplementary cost per 4-person household will be higher than 1400 Euro (which has to be compared to the price of one 1 € ice cream cone per person per month that minister Jürgen Trittin announced in 2004!).

The subsidization has transformed a free market into a planned economy, with many unintended and harmful consequences:
At certain times the combined solar + wind production is excessive and leads to negative prices (the big electricity companies must pay their (foreign) clients to accept the surplus electricity):

The “redispatch” interventions needed to stabilize the grid and avoid its collapse rose by a factor of 10 from 2010 to 2015 (link); the costs practically more than doubled during 2013-2015, outpacing even a “Moore’s law” doubling (link):

Actually, if one includes not only the costs of unneeded electricity, but also those of the redispatch (changing the provenance of the electrical energy) and of the mandatory reserve capacity, we are close to a doubling over the years 2011, 2013, 2014 and 2015, as shown by the “Insgesamt” (= overall) total in million Euro (link):

The McKinsey report sees grid management costs quadrupling during the coming years and rising to over 4 billion Euro (4*10^9) per year. A recent article in The Economist is titled “Wind and solar power are disrupting electricity systems”. It cites three main problems: the subsidies, the intermittency of wind and solar, and finally their very low production costs, which make traditional power stations (urgently needed for base load, backup and grid stabilization) uneconomic: without state subsidies nobody will build these power stations, so the circle of state planning (as we know it from Soviet times) is closed.

3. The job problem

Renewables have always been hyped for their job potential, but the reality in Germany is quite different: 2016 was the fourth year with falling job numbers in the renewable industry, and if this trend continues the target of 322000 “green” jobs will not be attainable in 2020. Equally disquieting is that 2016 is the first year showing a decline in jobs in the electricity-hungry industries. An older (2011) AEI report concludes that green jobs only displace traditional ones, and that in Spain each green megawatt installed destroyed 5.28 jobs. It seems that the whole Energiewende rests on a foundation of big subsidies (either direct or indirect) and of state planning and steering. In a free market, the growth of “renewable” electricity would not be nil, but it would be much slower. The subsidies have spoiled huge parts of the industry, which now sees these subsidies, paid by all citizens, as its due.

Fritz Vahrenholt has published a paper at the GWPF titled “Germany’s Energiewende: a disaster in the making”. He could well be right.

Energiewende: a lesson in numbers (Part 1)

March 11, 2017

A new report from McKinsey on Germany’s Energiewende (= energy transition policy) has been published in the series “Energiewende-Index”. This very transparent and non-emotional report makes for good reading: the main lesson is that the costs of the Energiewende (which has driven German household electricity prices 47.3% higher than the EU average) will continue to rise, and that the political deciders seem to ignore the future financial burden.

In this blog, I will comment using only numbers from well-known institutions (such as the Dutch PBL report “Trends in global CO2 emissions 2016“, Fraunhofer ISE, Agora Energiewende etc.), and let these numbers speak. Let me just give my personal position on renewable energies: in my opinion, every country should diversify its energy sources as much as possible, and that means that wind and solar should not be brushed aside. But the importance of having reliable, affordable and continuous electricity available cannot be ignored: intermittent sources such as solar and wind should not be presented as the sole environmentally acceptable providers, as the last dozen years have clearly shown that this intermittency and the absence of realistic electricity storage are at the root of many tough problems. The German green Zeitgeist (which seems to drive many EU regulations) is clearly blind in both eyes concerning these problems; condemning nuclear energy in all its present and upcoming forms as unacceptable dramatically aggravates those problems.

  1. The avoidance of CO2 emissions

The Energiewende was first positioned as a measure to avoid and diminish the CO2 emissions caused by producing electricity from fossil fuels, by transportation and by industrial manufacturing. After the Fukushima tsunami (March 2011), the “Atomausstieg” (nuclear exit) was added to this political foundation. Heavy subsidies have been poured onto solar PV and wind energy facilities, pushing the installed capacity of these 2 providers up to 91 GW out of a total installed generation capacity of 196 GW (numbers rounded commercially), as shown in this edited plot from Fraunhofer ISE:

Intermittent sources thus represent 91/196*100 ≈ 46% of the installed capacity in 2016; in January they delivered 23%, and in August 25%, of the total installed generating capacity. So we can conclude that, summed together, these subsidized sources, which have feed-in priority, contribute about half of their installed capacity. The problem lies in the word “summing”: from the point of view of emissions, the sum might be a useful metric, but in real life it is the instantaneously available power that counts. The two following graphs from the Agora Energiewende report 2016 show the situation during the first and third quarters; I highlight the days with minimum and maximum (solar + wind) contribution with yellow rectangles.



Without the base load of CO2 emitters like biomass and coal, the lights would have been out many times!

Let us now look at the CO2 (or better, the equivalent CO2 (CO2eq)) balance for the last years, compare several countries with Germany, and see whether the Energiewende has been a successful CO2-lowering policy.

Our next graph shows how the CO2 emissions varied from 1990 to 2015 (I added zoomed-in insets):


The most interesting conclusion from this graph is that Germany’s total CO2 output did not diminish much between 2005 and 2015 (the Energiewende started in 2001), contrary to the USA, which had no comparable policy. The same picture shows up in the per-capita emissions:


Compared to the non-“Energiewende” countries France and the USA, Germany again fares very poorly. The next graph highlights more precisely the trends between 2002 and 2015:


I computed the trend lines for Germany (magenta) and France (black): the equations show that France is twice as successful as Germany in lowering its CO2 emissions, without any comparable and extremely costly Energiewende policy. Agora concedes this in its report, writing that “… Germany’s total greenhouse emissions have risen once again“!

And the following graph shows that the share of fossil fuels has remained constant since 2000:

Conclusion: The Energiewende has not achieved its primary goal of greatly lowering CO2 emissions!

_____________________________

(to be followed by part 2)

Has climate alarm peak been crossed?

February 17, 2017


There is a very good comment by Donald Kasper at the Wattsupwiththat climate blog (15th Feb 2017). He writes that all social issues have a peak of popularity, but that the duration of the rise might not equal that of the decline. Climate and global warming alarm has now been with us for at least 30 years, and the continuing rise in attention and funding that this problem receives differs quite a bit between regions of the world. In the USA the climate problem is clearly not the most burning one for the general population, but in Europe the climate-angst train does not yet seem to be slowing down.

I remember at least 3 big environmental scares that were very popular in the past and initially seemed to become eternal. The plundering and exhaustion of the Earth’s resources and over-population (the Club of Rome, the “Population Bomb” book published by the Ehrlich couple in 1968) had, in hindsight, an attention-grabbing duration of possibly 10-15 years. Look here for a good New York Times article and video on “The Unrealized Horrors of Population Explosion”. As neither those prophecies, nor those of material exhaustion by the Club of Rome, nor the rapid famines predicted by the two Ehrlichs came true, the time was ripe for another scare.

During the second half of the 80’s, the danger of ambient radon, the ubiquitous natural radioactive gas, was pushed to new heights. Many profited from this new angst, mostly research labs and companies that were quick to sell radon mitigation appliances to worried house owners (usually a simple fan plus some sealing of the cellar floor). A gas that in some rare instances could be a problem was pushed by the media and politicians (as always wanting to show that they care about their voters) into a permanent and extreme danger, allegedly causing a high percentage of lung cancers (a conclusion extrapolated from extremely high radon exposures to very low ones, according to the probably wrong Linear No-Threshold (LNT) theory still fashionable among many anti-nuclear activists today). New legal maximum concentrations were defined (for instance 300 Bq/m3 in Luxembourg); in the USA a radon certificate had to be added when a house was sold; and then the problem vanished from the media and from public attention.

Why did the radon angst disappear? Because the new danger of global warming, caused by another “pernicious gas”, CO2, was ramped up. The avoidance (mitigation) of high radon levels was not too difficult a task; but avoiding CO2, a natural constituent of the atmosphere and an inevitable by-product of fossil energy use, is quite a different beast. No wonder that climate change (which replaced global warming when it became clear that there had not been much warming for the last 20 years) rapidly became everyone’s poster child: as with radon, the new danger assured heavy funding for university research, the possibility of producing electricity by non-carbon-emitting means pushed many parts of industry into renewable wind and solar devices, and on top of that the very influential environmental movements had a topic that would predictably have a much longer life-span than the previous scares. As an additional push we can see the disappearance of Cold War worries and the slow-down of traditional religious feeling, which was, at least in many parts of the Western world, replaced by the new “quasi-religion” of environmentalism.

All these scares have some solid foundations: a future world population of 11 billion would be unmanageable if technology and science stood still, so the 1968 angst (like the much earlier prophecies of Malthus) seems quite reasonable for an unchanging world. But this has not happened: the green revolution (which owes so much to Norman Borlaug) increased agricultural yields tremendously without destroying the soils and “nature”; in spite of many ongoing (civil) wars, political unrest, deep corruption etc., poverty has decreased and access to education has made quite a jump. When the “Population Bomb” was written (1968) the world population was about 3.6 billion; today, close to 50 years later, it has doubled to 7.2 billion.

The big environmental scares all ignore humanity’s tremendous potential for innovation. Despite the horrors of wars, environmental damage and political unrest in many parts of the world, the overall picture of the past 50 years commands an optimistic point of view, not one of fear and depression. Will climate angst follow the past pattern? What makes climate change different is that, depending on your view, it is essentially bound up with human evolution and progress, both of which were and are heavily tied to energy availability and usage. All the previous scares have found at least a partial solution through human progress (remember that, as a general rule, the most industrialized countries are also the most eco-conscious ones!), but this one demands a big change in thinking. If we want to avoid pumping more and more CO2 into the atmosphere (my personal opinion remains that the dangerous consequences will be small), and we have installed solar panels and wind turbines everywhere without seriously solving the intermittency problem of these renewable energies, why do we not see the elephant in the room? Nuclear energy has all the potential needed for abundant and cheap carbon-free energy, and there are many ways, different from those used in the past, to use nuclear (or fusion) energy in a low-danger manner, without a legacy of extremely long-lived radioactive waste.