Archive for March, 2017

How the IPCC buries its inconvenient findings

March 30, 2017

 

There has been an interesting hearing before the US Senate on Climate Change, Models and the Scientific Method, with testimonies from J. Curry, J. Christy, M. Mann and R. Pielke Jr. In a later blog I will comment on this hearing (video link). For the moment I will just write about an astonishing fact presented by Prof. Christy from the UAH (University of Alabama in Huntsville). Christy (and Roy Spencer) analyze and maintain the database of global temperature measurements made by satellites (the other team doing this is RSS).

 

1. The absence of a human-caused warming fingerprint

All climate models agree that human-caused global warming should show up as a warm "hot spot" in the upper atmosphere over the tropics. Look at the next figure, which shows the outcome of one model:

 

The problem is that observations (by balloons, radiosondes etc.) do not find this hot spot, which is a serious blow to the validity of the CMIP-5 model ensemble. Christy showed in his testimony that the difference between models and observations can be found even in the IPCC's own latest report (AR5): but you probably have to be an avid and patient reader to find it, as it is buried away in the Supplementary Material for chapter 10, figure 10.SM.1; the graphics are also confusing and obscure, and some detective work is needed to clear the fog.

Here is the original figure 10.SM.1:

The second plot from the left corresponds to the tropics, and it is this sub-plot we will look into.

 

2. IPCC’s hidden truth

I zoomed in on the relevant plot and added some annotations and boxes:

 

The red band gives the answer of the CMIP-5 ensemble to the question "what are the warming trends in the tropical atmosphere (up to about 15 km), in °C/decade?" when the models include human-generated greenhouse gases (essentially CO2); the blue band gives the answer when the models do not include (i.e. ignore) human GHG emissions. And finally the thin grey line shows the observations of one radiosonde database (RAOBCORE = Radiosonde Observation Correction using Reanalysis): it can readily be seen that the models including GHGs badly overstate the real warming: the red band (= region of uncertainty) lies completely above the observations. What is nearly hilarious is that when the models do not include human GHGs (the blue band), the result is absolutely acceptable, as the blue band covers most of the observation line.
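As a side note: the trends in such plots are linear least-squares slopes expressed per decade. Here is a minimal sketch of how such a trend is computed from a yearly anomaly series; the anomaly values below are made-up placeholders, not actual radiosonde data:

```python
import numpy as np

# Placeholder tropical temperature anomalies (°C), one value per year;
# NOT actual radiosonde data -- for illustration of the method only.
years = np.arange(1979, 2016)
rng = np.random.default_rng(0)
anomalies = 0.01 * (years - 1979) + rng.normal(0.0, 0.1, years.size)

# A linear least-squares fit gives the slope in °C/year;
# multiplying by 10 converts it to the usual °C/decade.
slope_per_year = np.polyfit(years, anomalies, 1)[0]
print(f"trend = {slope_per_year * 10:.3f} °C/decade")
```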

Christy made this graph clearer still by including all observations (the region bounded by the grey lines):

The conclusion is the same.

So the million-dollar question is: how can the IPCC claim with great confidence that its models tell us that the observed warming carries a human fingerprint, and is caused with very high certainty by anthropogenic emissions, when its own assessment report shows the failure of these models?

 

The tuning of climate models

March 26, 2017


Many (or rather practically all) "climate policies" are based on the outcomes of climate models; as these models nearly unanimously predict a future warming due to the expected rise of atmospheric CO2 concentration, their reliability is of primordial importance. Naively, many politicians and environmental lobbies see these models as objective products of hard science, comparable for instance to the structural physics of skyscrapers, whose correctness has been proven over many years.

Alas, this is not the case: as the climate system is devilishly complex and chaotic, building a General Circulation Model (GCM) starting from basic physical laws is a daunting task; during its development, each model requires choices for certain parameters (their values, their possible ranges), a process which is part of what is usually called "tuning". The choices made in tuning are not cast in stone but change with the model creators, with time, and with cultural/ideological preferences.

Frédéric Hourdin from the "Laboratoire de Météorologie Dynamique" in Paris published in 2016, together with 15 co-authors, an extremely interesting and candid article on this problem in the "Bulletin of the American Meteorological Society", titled "The art and science of climate model tuning" (link). I will discuss some of the main arguments given in this excellent paper.

  1. Observations and models

Hourdin gives in Fig. 3 a very telling example of how the ensemble of CMIP5 models (used by the IPCC in AR5) differ in the evaluation of global temperature change from 1850 to 2010 (the temperature anomalies are given with respect to the 1850-1899 average):

I have added the arrows and text box: the spread among the different models (shown by the gray arrows) is enormous, larger than the observed warming itself (given by the very warming-friendly HadCRUT4 series); even the "adjusted" (i.e. tuned) variant (the red curve) gives a warming in 2010 that is 0.5 °C higher than the observations. We are far, far away from a scientific consensus, and decisions that ignore this are at best called "naive".

2. Where are the most difficult/uncertain parts in climate models?

Climate models are huge constructs built up by different teams over the years; they contain numerous "sub-parts" (or sub-models) with uncertain parameters. One of the most uncertain is cloud cover. Just to show its importance, look at these numbers:

  • the forcing (cooling) of clouds is estimated at -20 W/m2
  • the uncertainty of this forcing is at least 5 W/m2
  • the forcing thought to be responsible for the post-1850 warming of about 1 °C is estimated at 1.7 W/m2

Conclusion: the uncertainty of the cloud-cover effect is about 3 times larger than the forcing supposed to cause the observed warming (see the arithmetic below)!
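To make the arithmetic explicit: 5 W/m2 divided by 1.7 W/m2 gives approximately 2.9, i.e. the uncertainty in the cloud forcing alone is roughly 3 times the entire forcing invoked to explain the post-1850 warming.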

Hourdin asked many modelers what they think are the most important causes of model bias, and their answers correctly include cloud physics and atmospheric convection, as shown in Fig. S6 of the supplement to the paper (highlights and red border added):

3. Are the differences among the models only due to scientific choices?

The answer is no! Many factors guide the choices in tuning; Hourdin writes that "there is some diversity and subjectivity in the tuning process" and that "different models may be optimized to perform better on a particular metric, related to specific goals, expertise or cultural identity of a given modeling center". So, as in many other academic domains, group-think and group pressure certainly play a strong role, producing a consensus that might well owe more to job security or tenure than to objective facts.

4. Conclusion

This Hourdin et al. paper is important, as it is one of the first in which a major group of "main-stream" researchers puts the finger on a situation that would be unacceptable in other scientific domains: models should not be black boxes whose outcomes demand a quasi-religious acceptance. Laying open the algorithms and the unavoidable tuning parameters ("because of the approximate nature of the models") should be a mandatory premise. It would then be possible to check whether some "models have been inadvertently or intentionally tuned to the 20th century warming" and possibly correct/modify/adapt/abolish some hastily taken political decisions based on them.

The coming cooling predicted by Stozhkov et al.

March 18, 2017

 


 

A new paper from February 2017 has been published by Y.I. Stozhkov et al. in the Bulletin of the Russian Academy of Sciences. Here is a link to the abstract (at SpringerLink); the complete version is regrettably paywalled, but I was able to access it through the Luxembourg Bibliothèque Nationale.

The paper is very short (3 pages only), contains no complicated maths or statistics, and is a pleasure to read. The authors predict, as many others have done before, a coming cooling period; their prediction is based on two independent methods of assessment: a spectral analysis of past global temperature anomalies, and the observed relationship between global temperatures and the intensity of the flux of charged particles in the lower atmosphere.

  1. Spectral analysis of the 1880-2016 global temperature anomalies.

The paper uses the global temperature anomaly series from NOAA and CRU, computed as the difference from the average global near-surface temperature between 1901 and 2000. Their spectral analysis suggests that only 4 sine waves are important:

The general form is: wave = amplitude * sin[(2π/period) * time + phase], with the period and time in years and the phase in radians; the authors give the phase in years, so you have to multiply by 2π/period to obtain the phase in radians.

  • series #1: amplitude = 0.406, period = 204.57 years, phase = 125.81 * 2π/period (radians)
  • series #2: amplitude = 0.218, period = 69.30 years, phase = 31.02 * 2π/period
  • series #3: amplitude = 0.079, period = 34.58 years, phase = 17.14 * 2π/period
  • series #4: amplitude = 0.088, period = 22.61 years, phase = 10.48 * 2π/period

I computed the sum of these 4 series and merged the graph with the global land-ocean temperature anomalies from GISS; the problem is that GISTEMP calculates its anomalies from the mean of the 1951-1980 period, so the concordance will suffer from an offset.
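For the curious, here is a minimal sketch of this summation, assuming the series parameters listed above (the baseline offset with respect to GISTEMP is not corrected here):

```python
import numpy as np

# The four spectral components of Stozhkov et al. (2017):
# (amplitude in °C, period in years, phase in years)
components = [
    (0.406, 204.57, 125.81),
    (0.218,  69.30,  31.02),
    (0.079,  34.58,  17.14),
    (0.088,  22.61,  10.48),
]

def spectral_sum(t):
    """Sum of the 4 sine waves; phases are converted from years to radians."""
    total = np.zeros_like(t, dtype=float)
    for amplitude, period, phase_years in components:
        phase_rad = phase_years * 2.0 * np.pi / period
        total += amplitude * np.sin(2.0 * np.pi / period * t + phase_rad)
    return total

years = np.arange(1880, 2017)
anomaly = spectral_sum(years)   # modeled anomaly in °C
print(anomaly[:5])              # values for 1880-1884
```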

The authors write that spectral periods of less than 20 years do not play an important role: this means that El Niños (with a period of roughly 4 years) are ignored, as are important non-periodic forcing phenomena like volcanic eruptions. The following graph shows my calculation of the sum of the 4 spectral components (in light blue) together with the official GISTEMP series in red:

The fit is not too bad but, as was to be expected, misses the very strong El Niño-driven warming of 2016.

The authors are not the first to do a spectral analysis on the temperature series. N. Scafetta, in his 2012 paper "Testing an astronomically based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models" (link), gives the following figure of a good fit using 4 short-period sine waves (so he does not seem to agree with Stozhkov et al. on the unimportance of short periods):

Note that both models predict a cooling for the 2000-2050 period.

2. Cosmic rays and global temperature

We are now in solar cycle 24, one of the weakest cycles in about 200 years, as shown by the next figure (link):

A situation similar to the Dalton minimum, the cold period during the first decade of the 19th century, seems to be unfolding and, all things being equal, would suggest a return to colder-than-"normal" temperatures. But as Henrik Svensmark first suggested, the sun's activity acts as a modulator of the flux of charged cosmic particles, which create in the lower atmosphere the nucleation sites for condensing water, i.e. cooling low-altitude clouds. In this paper the authors compare the flux N (in particles per minute) measured in the lower atmosphere (0.3-2.2 km) at middle northern latitudes with the global temperature anomaly dT: the measurements clearly show that an increase in N correlates with a decrease in dT. This is an observational justification of Svensmark's hypothesis:

As this and the next solar cycle are predicted to be of very low activity, this observation is a second and independent prognosis of a coming cooling (you may want to look at my older presentation on this problem here).
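Here is a minimal sketch of how such an anti-correlation between N and dT can be quantified; the arrays hold made-up placeholder values, not the paper's actual measurements:

```python
import numpy as np

# Placeholder values for illustration only (NOT the data of Stozhkov et al.):
# N  = charged-particle flux in the lower atmosphere (particles/minute)
# dT = global temperature anomaly (°C)
N  = np.array([220.0, 235.0, 250.0, 262.0, 275.0, 290.0])
dT = np.array([0.45, 0.38, 0.31, 0.26, 0.18, 0.10])

# Svensmark's hypothesis implies a negative correlation: more particles,
# more low clouds, lower temperatures.
r = np.corrcoef(N, dT)[0, 1]
print(f"Pearson correlation between N and dT: r = {r:.2f}")
```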

3. Conclusion

I like this paper because it is so short and does not try to impress the reader with an avalanche of complicated and futile mathematics and/or statistics. The reasoning is crystal clear: both the spectral analysis and the expected rise in the flux of charged particles suggest a global cooling over the next ~30 years!

___________________________________________________________________________________________

Addendum 03 April 2017:

You might watch this presentation by Prof. Weiss given in 2015 on cycles in long-term European temperature series.

Energiewende: a lesson in numbers (Part 2)

March 11, 2017

(Picture link: http://www.nature.com/nature/journal/v445/n7125/full/445254a.html)

In the first part of this comment on the Energiewende I showed that its primary goal of restricting CO2 emissions has not been attained.

In this second and last part I will concentrate on the costs of the Energiewende.

2. The costs of the Energiewende

Let us remember that the financial side of the Energiewende is a system of subsidies going in many directions: those who install solar PV or wind turbines (for instance) receive a subsidy for the installation costs; they are granted priority in feeding electricity into the grid, and they are paid for this feed-in a tariff largely in excess of the market price. The McKinsey report writes: "The figures currently available prove that the successes of the Energiewende so far have been bought mostly through expensive subsidies" (translated from the German).

The costs for the individual household rise continuously, as shown by the next graph:

The increase with respect to the 2010 situation is a mind-blowing factor of 3.35; as the price per kWh will probably reach or exceed 0.30 Euro in 2017, most experts agree that the yearly supplementary cost per 4-person household will be higher than 1400 Euro (which has to be compared to the price of one 1 € ice cone per person per month that minister Jürgen Trittin announced in 2004!).

The subsidization has transformed a free market into a planned economy, with many unintended nefarious consequences.
At certain times the combined solar + wind production is excessive and leads to negative prices: the big electricity companies must pay their (foreign) clients to accept the surplus electricity:

The "redispatch" interventions to stabilize the grid and avoid its collapse rose by a factor of 10 from 2010 to 2015 (link); the associated costs rose even faster than the doubling-every-two-years of "Moore's law" during 2013-2015 (link):

Actually, if one includes not only the costs of unneeded electricity but also those of the redispatch (changing the provenance of the electrical energy) and of the mandatory reserve capacity, we are close to a doubling over the years 2011 to 2015, as shown by the "Insgesamt" (total) row in million Euro (link):

The McKinsey report sees grid-management costs quadrupling during the coming years, rising to over 4 billion Euros (4*10^9) per year. A recent article in The Economist is titled "Wind and solar power are disrupting electricity systems". It cites three main problems: the subsidies, the intermittency of wind and solar, and finally their very low production costs, which make traditional power stations (urgently needed for base load, backup and grid stabilization) uneconomic: without state subsidies nobody will build these power stations, so the circle of state planning (as we know it from Soviet times) is closed.

3. The job problem

Renewables have always been hyped for their job potential, but the reality in Germany is quite different: 2016 was the fourth year with falling job numbers in the renewables industry, and if this trend continues the aim of 322,000 "green" jobs will not be attainable by 2020. Equally disquieting is that 2016 was the first year showing a decline in jobs in the electricity-hungry industries. An older (2011) AEI report concludes that green jobs only displace traditional ones, and that in Spain each green megawatt installed destroyed 5.28 jobs. The whole Energiewende seems to rest on a foundation of big subsidies (direct or indirect) and state planning and steering. In a free market the growth of "renewable" electricity would not be nil, but it would be much slower. The subsidies have spoiled huge parts of the industry, which now sees these subsidies, paid by all citizens, as its due.

Fritz Vahrenholt has published a paper at the GWPF titled "Germany's Energiewende: a disaster in the making". He could well be right.

Energiewende: a lesson in numbers (Part 1)

March 11, 2017

A new report from McKinsey on Germany's Energiewende (= energy transition policy) has been published in the series "Energiewende-Index". This very transparent and unemotional report makes for good reading: the main lesson is that the costs of the Energiewende (which has driven German household electricity prices 47.3% above the EU average) will continue to rise, and that the political decision-makers seem to ignore the future financial burden.

In this blog I will comment using only numbers from well-known institutions (such as the Dutch PBL report "Trends in global CO2 emissions 2016", Fraunhofer ISE, Agora Energiewende etc.), and let these numbers speak. Let me just give my personal position on renewable energies: in my opinion, every country should diversify its energy sources as much as possible, and that means that wind and solar should not be brushed aside. But the importance of having reliable and affordable continuous electricity cannot be ignored: intermittent sources such as solar and wind should not be presented as the sole environmentally acceptable providers, as the last dozen years have clearly shown that this intermittency and the absence of realistic electricity storage are at the root of many tough problems. The German green Zeitgeist (which seems to drive many EU regulations) is clearly blind in both eyes concerning these problems; condemning nuclear energy in all its present and upcoming forms as unacceptable dramatically increases the problems.

  1. The avoidance of CO2 emissions

The Energiewende was first positioned as a measure to avoid and diminish the CO2 emissions caused by producing electricity from fossil fuels, by transportation and by industrial manufacturing. After the Fukushima tsunami (March 2011), the "Atomausstieg" (nuclear exit) was added to this political foundation. Heavy subsidies have been poured on solar PV and wind-energy facilities, pushing the installed capacities of these 2 providers up to 91 GW out of a total installed generation capacity of 196 GW (numbers rounded commercially), as shown in this edited plot from Fraunhofer ISE:

Intermittent sources thus represent 91/196*100 ≈ 46% of the installed capacity in 2016; in January they delivered 23%, and in August 25%, of the total installed generating capacity. So, summed over a month, these subsidized sources with feed-in priority contribute at about half of their installed capacity (23/46 ≈ 0.5). The problem lies in the word "summing": under the aspect of emissions the sum might be a useful metric, but in real life it is the instantaneously available power that counts. The two following graphs from the Agora Energiewende report 2016 show the situation during the first and third quarters; I highlight the days with minimum and maximum (solar + wind) contribution with yellow rectangles.

[Figure: Agora Energiewende, 1st quarter 2016]

[Figure: Agora Energiewende, 3rd quarter 2016]

Without the base load of CO2 emitters like biomass and coal, the lights would have been out many times!

Let us now look at the CO2 (or better the CO2-equivalent, CO2eq) balance for the last years, compare several countries with Germany, and see whether the Energiewende has been a successful CO2-lowering policy.

Our next graph shows how the CO2 emissions varied from 1990 to 2015 (I added the zoomed insets):

[Figure: CO2 emissions, Germany and USA, 1990-2015]

The most interesting conclusion from this graph is that Germany's total CO2 output did not diminish much between 2005 and 2015 (the Energiewende started in 2001), contrary to the USA, which had no comparable policy. The same picture shows up in the per-capita emissions:

[Figure: per-capita CO2 emissions, USA, Germany and France, 1990-2015]

Compared to the non-"Energiewende" countries France and the USA, Germany again fares very poorly. The next graph highlights the trends between 2002 and 2015 more precisely:

[Figure: comparison of the emission trends]

I computed the trend-lines for Germany (magenta) and France (black): the equations show that France has been twice as successful as Germany in lowering its CO2 emissions, without any comparable and extremely costly Energiewende policy. Agora concedes this in its report, writing that "… Germany's total greenhouse emissions have risen once again"!
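For readers who want to redo this kind of trend comparison, here is a minimal sketch using a least-squares fit; the per-capita values are made-up placeholders, not the actual PBL numbers:

```python
import numpy as np

# Placeholder per-capita CO2eq emissions (t/person/year) -- illustration only,
# NOT the actual values from the PBL "Trends in global CO2 emissions" report.
years   = np.arange(2002, 2016)
germany = np.array([10.4, 10.3, 10.2, 10.0, 10.0, 9.9, 9.9,
                     9.3,  9.7,  9.5,  9.6,  9.8, 9.3, 9.4])
france  = np.array([ 6.9,  6.9,  6.9,  6.8,  6.7, 6.6, 6.5,
                     6.2,  6.3,  5.9,  5.9,  5.9, 5.4, 5.5])

# The slope of a linear least-squares fit is the trend in t/person per year.
slope_de = np.polyfit(years, germany, 1)[0]
slope_fr = np.polyfit(years, france, 1)[0]
print(f"Germany: {slope_de:+.3f} t/person/yr")
print(f"France : {slope_fr:+.3f} t/person/yr")
```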

And the following graph shows that the share of fossil fuels has remained constant since 2000:

[Figure: Agora, 1990-2016, fossil-fuel share constant]

Conclusion: the Energiewende has not achieved its primary goal of greatly lowering CO2 emissions!

_____________________________

(to be followed by part 2)