This blog started 28th September 2008 as a quick communication tool for exchanging information, comments, thoughts, admonitions, compliments etc… related to http://meteo.lcd.lu , the Global Warming Sceptic pages and environmental policy subjects.
Prof. Murray SALBY gave his lecture “Relationship between Greenhouse Gases and Global Temperature” on 18 April 2013 at the University of Hamburg (see Youtube version here and MP4 version here). His presentation was similar, but not identical, to the one I discussed in a previous post. It was quite technical in several parts (the video shows a very silent audience, but this could simply mean that German academics are well-mannered), yet not overwhelming for anyone familiar with the usual tools of signal or time-series processing. Nevertheless, it is a good idea to go several times through this great presentation (the Quicktime player is handy for making precise stops at a certain slide), and to muse on several of its aspects.
1. The lag between observed CO2 and temperature changes.
In this first comment, I will compare some findings concerning the time lag between the observed CO2 measurements and some of the global temperature series. I made several calculations myself, using the exceptional DADiSP software (which has remained my favorite tool for many, many years).
Here is what Prof. Salby shows concerning the cross-correlation between CO2 and global temperature (colored elements added by me): CO2 levels lag temperature by about 8.5 months (temperature rises first, CO2 follows).
I made the same calculations using various monthly CO2 and temperature data for the 1979 to 2012 period: the seasonally detrended Mauna Loa CO2 data, the NCDC series of monthly global temperature anomalies (ocean, land plus ocean, land) and the RSS satellite data of lower-troposphere temperature anomalies.
The next figure shows the cross-correlation between CO2 and the NCDC ocean temperature (SST anomaly):
The lag between SST and CO2 is 13 months: temperature rises first, and 13 months after its (statistical) maximum, CO2 reaches its next peak value.
Prof. Ole Humlum (from the climate4you blog) and co-authors published in “Global and Planetary Change 100 (2013) 51–69″ a paper “The phase relation between atmospheric carbon dioxide and global temperature” (pay-walled, see abstract here). Using a different calculation method, they too find CO2 levels lagging temperature.
The following table summarizes the different findings:
The RSS cross-correlation in the last row has a first minuscule peak at a 12-month lag, and a next one at 15 months.
Normalizing the correlations (using the XCORR function of DADiSP) gives the following cross-correlation maxima for the NCDC and RSS series: NCDC land: 0.64, NCDC ocean: 0.77, NCDC land + ocean: 0.77, RSS lower troposphere: 0.59.
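For readers without DADiSP, the same kind of lag estimate can be sketched in a few lines of Python. The series below are an invented stand-in (a sinusoid plus noise, with a built-in 13-month delay), not the real CO2 and temperature data:

```python
import numpy as np

def xcorr_lag(temp, co2):
    """Lag (in months) at which the normalized cross-correlation of the
    two series peaks; a positive lag means CO2 follows temperature."""
    t = (temp - temp.mean()) / temp.std()
    c = (co2 - co2.mean()) / co2.std()
    corr = np.correlate(c, t, mode="full") / len(t)
    lags = np.arange(-len(t) + 1, len(t))
    return lags[np.argmax(corr)], corr.max()

# Synthetic stand-in for the 1979-2012 monthly series: the CO2 variations
# are the temperature signal delayed by 13 months, plus measurement noise.
rng = np.random.default_rng(0)
n, true_lag = 408, 13
base = np.sin(2 * np.pi * np.arange(n + true_lag) / 60)
temp = base[true_lag:] + 0.05 * rng.standard_normal(n)
co2 = base[:n] + 0.05 * rng.standard_normal(n)

lag, peak = xcorr_lag(temp, co2)
print(lag)   # recovers a lag close to the built-in 13 months
```

With real data one would of course first remove the trend and the seasonal cycle, as done for the Mauna Loa series above.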
These numbers (to be compared to Salby’s 0.5 maximum) show that one should use either SST alone or the global land plus ocean series. The satellite-derived lower-troposphere anomalies seem less suitable for documenting the CO2 changes.
All these lags have the same sign, i.e. all point to an observed temperature rise preceding the CO2 rise. This would invalidate the essential IPCC “consensus” that atmospheric CO2 levels are the primary driver of global temperature change. The lags found above are a hint that temperature change is the cause (or at least one cause among others), and CO2 change the effect, and not the other way around.
2. First conclusion
The Salby, Humlum and my own calculations all show that global temperature change is not driven by the atmospheric CO2 mixing ratio, but that, statistically speaking, it is the inverse: if temperature rises, CO2 follows. This lag has been found, for instance, in the Vostok ice core series, albeit with much longer delays (about 800 years). Our short-term observations simply document the well-known physical effect that a warmer ocean absorbs less CO2 than a colder one. Hardly surprising!
What cannot be deduced from these correlations is that the CO2 increase in the atmosphere has a predominantly natural origin. More on this in a later comment.
PS: The Humlum paper has not been well received by several researchers. M. Richardson has a comment in print (pay-walled, see abstract here) that seems to show that Humlum’s method violates conservation of mass. A second critique is that it cannot be shown that the natural contribution to atmospheric CO2 levels is distinguishable from zero. More on this in a later comment.
By a bit of luck I found a short discussion of a really “inconvenient” paper by associate professor Q-B Lu of the University of Waterloo (see personal website) on Lubos Motl’s blog “The Reference Frame”. Quite a lot of versions of this paper are floating around the internet, some pay-walled, some free. The title is: Cosmic-ray driven reaction and greenhouse effect of halogenated molecules: culprits for atmospheric ozone depletion and global climate change.
It is well worth spending an hour or two reading this 24-page paper. It is well written and relatively easy to understand. Lu’s main conclusion is that the global warming observed from 1970 to 2002 and the following slight cooling can be explained by the action of cosmic rays on (anthropogenic) atmospheric halogens; the absorption bands of the usual GHGs are more or less saturated, so that increasing CO2, for instance, has not played a visible role from 1970 to the present, and will not in the future.
Variations of total ozone column
The chlorine needed for ozone destruction is created by cosmic rays through a process called DET (dissociative electron transfer), which runs on the tiny ice crystals in Polar Stratospheric Clouds (PSCs). As the intensity of cosmic rays is modulated by the solar wind (and as such by solar activity), one should observe an 11-year cycle in the Antarctic ozone hole. This is indeed the case:
Lu gives a very simple equation showing that the relative change in total ozone column is proportional to the concentration of halogens (or more precisely the “equivalent effective chlorine” Ci, delayed by about 2 years for Antarctica and 10 years for mid-latitudes) and the cosmic-ray intensities in the preceding and current year:
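As I read it, the structure of this relation can be sketched as follows. This is a toy illustration only: the proportionality constant k and all numerical inputs are invented placeholders, not Lu's fitted values.

```python
def rel_ozone_change(k, c_eq, cr_prev, cr_curr):
    """Relative change in total ozone column, sketched as proportional to
    the (suitably delayed) equivalent effective chlorine c_eq times the
    mean cosmic-ray intensity of the preceding and current year.
    k is a fitted proportionality constant (placeholder value here)."""
    return -k * c_eq * 0.5 * (cr_prev + cr_curr)

# Illustrative numbers only: more chlorine and more cosmic rays
# both deepen the relative ozone loss.
print(rel_ozone_change(k=1e-4, c_eq=3.0, cr_prev=1.05, cr_curr=0.95))
```

The sign convention (ozone loss for positive chlorine and cosmic-ray intensity) is the only physical content of the sketch; the real equation and its coefficients are in Lu's paper.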
This figure shows the equivalent effective chlorine (due to halogen dissociation) peaking around 1995, and the excellent parabolic fit of the relative total ozone column change versus time (note the spelling error in the legend of the red curve!). As anthropogenic halogen emissions are supposed to fall during the coming years (in accordance with the Montreal protocol), a total recovery back to the 1980 level is expected for 2050-2060.
Lu accepts without discussion that all CFCs in the atmosphere have a human origin. This might not be absolutely true, as some researchers show that CFCs may also have a natural origin, like volcanoes (see here). So the conjecture of continuously lower concentrations in the future might not be rock-solid. Nevertheless, if Lu is correct up to this point, reducing CFC emissions through the Montreal protocol was the correct thing to do, and the photo-chemical reactions discovered by Molina and Crutzen are at least broadly correct.
Link to global warming
This second part of the paper has attracted the most vigorous criticism (see here and here), often with the argument that a good correlation is not necessarily a sign of strong causation. As, according to Lu, all absorption bands of the GHGs are saturated (an argument also given by numerous greenhouse “sceptics”), one need only compute the radiative forcing from the halogens, and use this forcing to compute a global temperature change. Doing this he finds a really high correlation between global warming and total halogen concentration, as shown in this figure:
Note that the CFC concentration nicely follows the plateau in the global temperature anomaly (see C and D). Extending his conclusions into the near future would suggest a gradual cooling extending at least until 2050, as summarized in this figure:
The wiggles in the green temperature curve represent the influence of the 11 year solar cycle.
According to Lu, the only variations in solar activity that have a climatic effect are those that change the intensity of cosmic rays, i.e. the solar wind. Neither changes in total solar nor UV irradiance would do much to the global temperature, nor would the (rising) concentrations of greenhouse gases. With these two assumptions, Lu steps on the toes of the alarmists as well as of those who think that climate change is an entirely natural phenomenon. Hypothesizing that cosmic-ray-mediated changes in halogens are the climate driver in fact unites both those who see human activity as the evil, and those who think that the sun is the BIG driver of climate change. No wonder that the overall reactions are mostly unsympathetic!
Nevertheless, this is a quite interesting paper and, as said above, very readable. It does not try to bury its arguments in obscure mathematics or baroque statistics. Hopefully another team of researchers will try its hand at this, so that the scientific debate over the climatic role of halogens becomes more intense.
The Age of Global Warming, a History
ISBN 978 0 7043 7299 3
This is the book every politician and historian should read. Here you will find no endless tables of climate data and no avalanche of graphs (there are just two in the 442-page book). No equations, no mathematics, no formulas, and nevertheless this is one of the best books I have read on the subject. Rupert Darwall is an economist and historian, and his writing style is fluid and easy to read. His knowledge of history, politics and economics is broad, and every citation is clearly referenced. The book is a journey through the Western mind embracing the concepts or ideologies of environmentalism, sustainable development and anthropogenic climate change.
The author constantly goes back to the first principles of Science, as laid out by Francis Bacon and Karl Popper, when it comes to deciding whether a theory has scientific merit.
If you are allergic to numerical overkill but like clear logic, good writing and a treasure of carefully researched information, this should be your book. I was so enthralled that I read it twice in a row, and it was time well spent.
2. Environmentalism and the meme of global warming
GW (here GW always means AGW, anthropogenic global warming) is a child of Western environmentalism. It is the best-liked of its children, bringing into the limelight the virtues of ecological behavior and ending in a state where it is more important for governments to be seen doing something to “save the planet” than to assess the likely consequences of these policies. The “Global Warming Policy Paradox” is that the therapy causes the malady it was designed to avert. Global therapeutics like biofuels create problems (like food shortages) at a scale that a possible warming would not be able to produce.
The environmental and sustainability angst of the 20th century has two mothers: US citizen Rachel Carson with her book “Silent Spring” (1962), and the British Barbara Ward with her “Spaceship Earth” paper (1966). Both writers touched a sensitive Western soul: living in relative prosperity calls forth feelings of guilt, and the search to alleviate that guilt by sacrifices: sin, punishment, redemption! (see Serge Galam: Global Warming, the sacrificial temptation).
That people and politicians care about the environment and nature is not new: President Teddy Roosevelt was the creator of the US National Parks, and even the über-capitalist Ronald Reagan forbade the construction of a Trans-Sierra highway and created Redwood National Park.
Environmentalism is environmental care mutating into a religion. It works best when societies are affluent, which explains its relatively low standing in Third World countries. The environmental wave started around 1962, reached its crest in 1972 and crashed the following year, as the Yom Kippur war led to the first big oil crisis. Then, during the “roaring eighties”, it resumed, this time riding not the “limited resources” meme but that of man-made catastrophic global warming. As R. Darwall writes, “with global warming environmentalism has found its killer app”.
3. Not a science
Throughout the book there are numerous examples showing that “modern” climatology has abandoned the fundamentals of the scientific method. To be scientific, a theory must be falsifiable. Having only examples that agree with the theory is not enough; if by design it cannot be falsified, it is not a scientific theory (the author does not write about the “post-modern sciences”, which negate the insights, from Francis Bacon to Karl Popper, that explain why science has been such a success story since the 16th century).
The global warming theory, in the sense that the concentration of atmospheric greenhouse gases (notably CO2) will cause a warming proportional to their concentration, is not falsifiable. Sure, there are observations where rising CO2 levels coincide with warming (1980 to 1998) and others where this is not the case (the last 15 years). Neither of these observations makes the theory scientific, as the “official” climatologists will always find an explanation why even opposite behavior is consistent with their theory.
R. Darwall writes that the politicization of climate science inflated it in such a manner that it became “too big to fail”. This is a situation akin to the speculation bubbles which burst in the recent past. Will the global warming bubble also burst in the near future, swallowing scientific reputations, political power, media influence and the many NGOs which sold their soul to the impending climate catastrophe, a catastrophe which might well have only a virtual life in the climate models but none in the real world?
Let me conclude with a couple of citations from the book. There are so many that the book will be an invaluable aid in preparing an exposé.
p. 6: “it would be more accurate to describe global warming as a speculation or a conjecture”
p.62: “a persistent failure of the environmentalist position is to ignore economic history and fail to ask how or why industrial societies had escaped the Malthusian trap in the past”
p.81: “the energy crisis was created by governments, not geology”
p. 107: “the more a theory forbids, the better it is”
p. 246: “just as with the British government’s assessment of weapons of mass destruction, Nordhaus suggested the Stern Review should be read primarily as a document that is political in nature and has advocacy as its purpose”
p. 350: “in believing scientists and politicians can solve the problems of a far distant future, the tangible needs of the present are neglected”
An exceptional book, not to be missed!
We measured at meteoLCD a spectacular decrease of the thickness of the ozone layer between the 22nd and 23rd April 2013, from 382 DU down to 266 DU in one day! As expected, the UVB irradiance (and the UVI) jumped by quite a lot. I wrote a small paper on this phenomenon, and used the occasion to compute the important Radiation Amplification Factor (RAF), probably the first such calculation done here in Luxembourg. The result is: RAF ~1.1, i.e. close to 1.
So a quick answer to the question “by how much will the UVI increase if the total ozone column drops from highTOC to lowTOC?” is to apply a multiplier of highTOC/lowTOC.
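This quick rule follows from the usual power-law relation UVI ∝ TOC^(−RAF); with RAF close to 1 the exponent hardly matters. A minimal numerical check, using the 382 and 266 DU values measured above:

```python
def uvi_multiplier(toc_high, toc_low, raf=1.1):
    """Factor by which the UV index increases when the total ozone
    column drops from toc_high to toc_low (in DU), given the
    Radiation Amplification Factor RAF: UVI ~ TOC**(-RAF)."""
    return (toc_high / toc_low) ** raf

m = uvi_multiplier(382, 266)
print(round(m, 2))   # ~1.49, versus 382/266 ~ 1.44 for the quick rule
```

So the exact power law predicts a UVI increase of about 49%, while the simple highTOC/lowTOC multiplier gives about 44%: close enough for a quick estimate.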
Here a picture of the measurements at Uccle showing the TOC plunge:
The discussion on climate cycles is an ongoing and interesting research topic, particularly so when many scientists estimate that the consensus IPCC ARs neglect to take into account very prominent cycles, like the ~60-year cycle apparent in oceanic oscillations (see for instance here). A quick inspection of the available satellite data from RSS shows a recurrent dip about every 8 years:
Let us make a quick spectral analysis of these monthly data (410 data points). The following table shows the 10 dominant peaks with their period in months:
The next figure gives a graphical representation:
The visual clue is confirmed by the spectrum: there is a periodicity of 102.5 months, i.e. about 8.5 years, in the time series; note the large width of the peak, showing that this period is rather blurred (or variable). Most apparent is the very sharp peak corresponding to 45.6 months, i.e. about 3.8 years. This seems to be the fingerprint of ENSO. A paper published in Nature by Watanabe et al. (Nature 471, 209–211, 10 March 2011, doi:10.1038/nature09777) shows the following periodogram (green line and text box added), which suggests a period of 4.15 years for about the same time span.
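The same kind of periodogram can be sketched in a few lines of Python. The series below is a synthetic stand-in (two sinusoids with the ~45.6- and ~102.5-month periods found above, plus invented noise), not the actual RSS data:

```python
import numpy as np

# Synthetic stand-in for the 410-point monthly RSS anomaly series:
# two cycles near the ~45.6- and ~102.5-month peaks discussed above,
# plus noise (the amplitudes are invented, for illustration only).
rng = np.random.default_rng(1)
n = 410
t = np.arange(n)
series = (0.15 * np.sin(2 * np.pi * t / 45.6)
          + 0.10 * np.sin(2 * np.pi * t / 102.5)
          + 0.05 * rng.standard_normal(n))

spectrum = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=1.0)        # cycles per month
periods = 1.0 / freqs[1:]                # months (skip the DC bin)
top = periods[np.argsort(spectrum[1:])[::-1][:2]]
print(np.round(top, 1))   # recovers the two dominant periods, in months
```

With 410 monthly points the frequency resolution is 1/410 cycles per month, so the two built-in periods fall almost exactly on FFT bins 9 and 4 and come out cleanly.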
Nicola Scafetta has published a very interesting paper in the Journal of Atmospheric and Solar-Terrestrial Physics (DOI: 10.1016/j.jastp.2011.12.005, reprint here), “TESTING AN ASTRONOMICALLY BASED DECADAL-SCALE EMPIRICAL HARMONIC CLIMATE MODEL VS. THE IPCC (2007) GENERAL CIRCULATION CLIMATE MODELS”, where he documents a 9.1-year period caused by solar/lunar decadal oscillations:
His assumption is that the climate resonates with several dominant cycles imposed by astronomical patterns.
A quick-and-dirty spectral analysis of the RSS global temperature anomaly displays two very apparent cycles with periods of about 4 and 8-9 years: the first shows the fingerprint of ENSO, and the second could have an extraterrestrial cause. Neither of these two phenomena is believed to be related to atmospheric greenhouse gas concentrations.
The BNA (Bundesnetzagentur) has just updated two Excel spreadsheets with the lists of all German power stations (>10 MW) and with the planned additional capacities to go on and off grid between 2013 and 2015.
This is the balance between the capacities to be removed and those to be added: Germany will add at least 7.5 GW fossil power generation (I left out some multiple purpose plants) running on gas and coal; the increase in coal (Steinkohle) burning power stations is ~6.4 GW!
In my blog post from September 2012 the balance of planned additional fossil power was 6.4 GW, so the new plans show an increase of more than 1 GW.
If we admit a very conservative capacity factor of 0.8, and emissions of 450 g CO2/kWh for gas and 800 g/kWh for coal, the increase in yearly CO2 emissions will be 38.1 million tons of CO2 (metric tons) or 10.4 million tons of carbon ! Not bad for a country that started its “Energiewende” to save the planet from CO2 driven global warming.
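The arithmetic behind these numbers can be written down explicitly. Note that the 6.4 GW coal / 1.1 GW gas split is my reading of the 7.5 GW total quoted above; with these round numbers one lands slightly above the 38.1 Mt figure, but the carbon conversion (multiply by 12/44) is exact:

```python
# Back-of-envelope check of the CO2 balance above.
HOURS_PER_YEAR = 8760
CAPACITY_FACTOR = 0.8        # the conservative value assumed in the post

def yearly_co2_mt(capacity_gw, g_per_kwh):
    """Yearly CO2 emissions in million metric tons (Mt)."""
    kwh = capacity_gw * 1e6 * CAPACITY_FACTOR * HOURS_PER_YEAR
    return kwh * g_per_kwh / 1e12          # grams -> Mt

coal = yearly_co2_mt(6.4, 800)   # ~35.9 Mt from the new coal plants
gas = yearly_co2_mt(1.1, 450)    # ~3.5 Mt from the new gas plants
total = coal + gas               # ~39 Mt, same ballpark as the 38.1 Mt above
carbon = total * 12 / 44         # Mt of carbon (C is 12/44 of CO2 by mass)
print(round(total, 1), round(carbon, 1))
```

The exact total depends on the precise coal/gas split read from the BNA spreadsheets, which is why my round numbers land about 1 Mt above the figure quoted in the text.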
To download the xls files, look here.
The whole discussion on the effects of human-added greenhouse gases in the atmosphere centers on the concept of climate sensitivity: a sensitive climate will respond with a strong warming to increasing GHG mixing ratios, whereas an insensitive climate will react with only minor temperature variations. The concepts of radiative forcing and feedbacks are essential in that discussion, which is very far from a consensus: the IPCC-close fraction of climatologists (who also represent the politically “correct” ideology of the moment) says that the sensitivity is high, and their opponents try to prove the contrary. Whereas the first group mostly (but not exclusively) uses GCM climate models to find the answer, the second group prefers the interpretation of observational data.
I follow these interesting research papers with great interest, and try to understand the physics and underlying assumptions. This is not always easy, as mundane problems like inconsistent algebraic sign conventions or identical terminology with different meanings unnecessarily complicate matters. Here one really sees that the concepts have not reached the definitive standardization of a mature scientific domain.
In the following chapters, I shall try to summarize three important papers by authors belonging to the second group, and show that their estimates of a low climate sensitivity are close to each other.
1. Why do greenhouse gases produce a warming of the Earth?
2. The 2009 GRL paper by Lindzen and Choi
3. ENSO and changes in OLR
4. The 2010 JGR paper by Spencer and Braswell
5. The paper by Pehr Björnbom (2013) in Earth System Dynamics
1. Why do greenhouse gases produce a warming of the Earth?
The easiest explanation uses the concept of characteristic emission height. An atmosphere without longwave-absorbing greenhouse gases would give a global surface temperature of T0 = 255 K. The simple energy balance equation (incoming solar irradiance = black-body IR emission of the globe) gives this number:
pi*R**2*S*(1-a) = 4*pi*R**2*sigma*T0**4, where S = solar constant = 1368 W/m2, a = albedo = fraction of the sunlight reflected back to space, and sigma = Stefan-Boltzmann constant = 5.67*10**-8 W/(m2*K**4). The effective absorption is by the equatorial disk of area pi*R**2, and the LW emission by the spherical surface 4*pi*R**2. Solving this equation with an albedo a = 0.3 gives T0 = 254.9 ~ 255 K. Now using the curve of the variation of the temperature in the troposphere with altitude gives an emission height of about 5.5 km.
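The balance equation is easily solved numerically (the pi*R**2 factors cancel, leaving T0 = (S*(1-a)/(4*sigma))**0.25):

```python
# Solving pi*R^2*S*(1-a) = 4*pi*R^2*sigma*T0^4 for T0.
S = 1368.0        # solar constant, W/m2
a = 0.30          # planetary albedo
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/(m2*K^4)

T0 = (S * (1 - a) / (4 * sigma)) ** 0.25
print(round(T0, 1))   # ~254.9 K, i.e. the usual ~255 K
```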
fig.1. Rising characteristic emission height with increasing CO2 mixing ratio (from Don Bogart, 2012).
Fig. 1 shows that the emission level presently lies at about 5.5 km; if it is shifted upwards (let’s say to 5.6 km), the emission will take place at a cooler location, and will be lower. As the current tropospheric lapse rate is -6.5 K/km, this implies a shift in emission-height temperature from 252.25 K to 251.6 K, and a decline in outgoing longwave radiation (OLR) from 5.67*10**-8*252.25**4 = 229.6 W/m2 to 5.67*10**-8*251.6**4 = 227.2 W/m2. The difference of 2.4 W/m2 is called the radiative forcing F due to the increased CO2 concentration. The actual trend is about 0.04 W/m2 per year, and the total additional forcing since pre-industrial times, when the CO2 concentration was 280 ppm, is 5.35*ln(392/280) = 1.8 W/m2 (using a formula first given by Gunnar Myhre in 1998). The numerical parameter 5.35 is more or less a “consensus” value, agreed upon by practically all climatologists, but it is not cast in stone!
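Both forcing estimates in this paragraph are easy to verify numerically (the 0.1 km shift and the 392 ppm value are those used above):

```python
import math

sigma = 5.67e-8   # Stefan-Boltzmann constant, W/(m2*K^4)
lapse = 6.5       # tropospheric lapse rate, K/km

def olr(t):
    """Blackbody OLR at the emission-level temperature t (K)."""
    return sigma * t ** 4

# Emission height rising by 0.1 km -> emission temperature drops 0.65 K
dF_height = olr(252.25) - olr(252.25 - 0.1 * lapse)   # ~2.4 W/m2

# Myhre et al. (1998) formula for the CO2 forcing since pre-industrial times
dF_co2 = 5.35 * math.log(392 / 280)                   # ~1.8 W/m2
print(round(dF_height, 1), round(dF_co2, 1))
```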
The notion of “radiative forcing” is a very important concept to simplify energy balance equations. In fact, if a doubling of CO2 from 290 to 580 ppm gives a change in radiative forcing of 3.71 W/m2, an increase of the effective solar “constant” S*(1-a) by the same amount would result in the same warming of the globe.
2. The 2009 GRL paper by Lindzen and Choi
In 2009, Lindzen & Choi from M.I.T. published a paper, “On the determination of climate feedbacks from ERBE data”, in Geophysical Research Letters (GRL). They took ERBE satellite data measuring the outgoing radiation emitted by the earth, and tried to correlate it with the measured change in sea surface temperature. Now the changes in the net outgoing radiation are caused by a multitude of factors: first by a radiative imbalance (which may have multiple causes, such as natural climate phenomena like ENSO or changes in cloud cover, and anthropogenic ones like increased GHG emissions), but also by feedbacks of the climate system adjusting to that imbalance. These feedbacks (unknown in detail) can be cast into one single parameter, the equilibrium climate sensitivity parameter lambda (Lindzen & Choi use another notation), expressed in W/(m**2*K).
The relationship between temperature change dT0 and forcing change dF is dF = lambda*dT0, so the climate sensitivity is dT0 = dF/lambda.
The problem clearly is to find lambda. The IPCC reports suggest small lambdas corresponding to climate sensitivities of 2 to 4.5 K (or °C), derived from climate models. A doubling of the CO2 concentration, i.e. a change in radiative forcing of 3.71 W/m2, would imply a climate sensitivity parameter lambda in the range [3.71/4.5 = 0.82; 3.71/2 = 1.86], i.e. a small lambda.
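The conversion between the sensitivity dT0 and the parameter lambda is just dF/dT0; here is the arithmetic for the values quoted in this and the following paragraph:

```python
DF_2XCO2 = 3.71   # radiative forcing for a CO2 doubling, W/m2

def lam_from_sensitivity(dT):
    """Climate sensitivity parameter lambda in W/(m2*K) for an
    equilibrium warming dT (K) per CO2 doubling: dT = dF / lambda."""
    return DF_2XCO2 / dT

print(round(lam_from_sensitivity(4.5), 2),   # ~0.82, high IPCC sensitivity
      round(lam_from_sensitivity(2.0), 2),   # ~1.86, low IPCC sensitivity
      round(lam_from_sensitivity(0.6), 1))   # ~6.2, Lindzen & Choi
```

A small lambda means a sensitive climate, a large lambda an insensitive one; this inverse relation is the source of much confusion in the literature.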
Lindzen & Choi find that the ERBE and SST data point to a climate sensitivity of ~0.6 K, which corresponds to a large lambda of 3.71/0.6 = 6.2, very much greater than the IPCC one. The conclusion is evident: if the climate sensitivity parameter is large, the climate is insensitive to changing greenhouse gas concentrations, which will cause only a moderate or small warming. So policies of drastically limiting the use of fossil fuels to avoid a dangerous climate change would be nonsensical.
3. ENSO and changes in OLR
The El Niño Southern Oscillation is a natural climate phenomenon causing a large heating and cooling of parts of the Pacific Ocean, and as a consequence it has a very visible impact on global temperature (the extraordinary 1998 warming peak visible in all temperature records was caused by a very strong El Niño):
fig.2. El Niño’s and impact on global temperature (adapted from http://www.climate4you.com)
The two very visible temperature spikes correspond to the “monster” El Niño of 1998 and to the more modest one of 2010. The thick line is a running mean of the temperature.
NOAA has an interesting plot of the Outgoing Longwave Radiation (OLR), i.e. the IR radiation emitted by the earth, and a couple of interesting comments. I superposed their plot with that of the ENSO index:
fig.3. Overlay of ENSO index (red and blue column graph) and OLR (black curve with red dots)
Clearly the warm El Niño periods correspond to lower OLR, and the colder La Niñas to higher OLR. The explanation is that higher SSTs during an El Niño cause more convection and clouds at higher atmospheric altitudes; the tops of these clouds are colder and emit less IR, so the satellite measures a decline in OLR; the opposite happens during a La Niña. This increase in cloud cover during an El Niño is a feedback associated with a change in SST (see NOAA site here).
4. The 2010 JGR paper by Spencer and Braswell
Roy Spencer (the collaborator of John Christy at UAH) published with William Braswell a paper, “On the diagnosis of radiative feedback in the presence of unknown radiative forcing”, in the Journal of Geophysical Research. In simple words, they wanted to calculate the climate sensitivity parameter lambda (which represents the sum of all feedbacks) by analyzing how an internal, naturally occurring climate event would modify the SST (sea surface temperature) or tropospheric temperatures. They made a couple of phase diagrams by plotting the anomaly of the net radiation measured by the ERBE satellite against the mid-troposphere temperature anomaly as a string of data points, i.e. they plotted the sequence of measured (temperature anomaly, net radiation anomaly) pairs:
fig.4: The phase diagram of Spencer-Braswell (fig. 2a of the paper) with an enhanced red line, whose slope is the average of all linear segments between two consecutive points representing month-to-month observations.
The important observation was the number of line segments that were parallel (which would be the case if the mid-troposphere temperature anomaly and the net radiation anomaly were proportional); the average of all slopes was found to be 6 (unit: W/(m2*K)). This number represents the climate sensitivity parameter lambda, as will be shown in the next chapter, and is similar in magnitude to what Lindzen & Choi found in their paper.
5. The paper by Pehr Björnbom (2013) in Earth System Dynamics.
Pehr Björnbom presents a paper, “Estimation of the climate feedback parameter by using radiative fluxes from CERES EBAF”, which is for the moment available as a discussion paper. It is a very clearly written paper, which uses the method suggested by Spencer/Braswell: take a time period where external (example: solar) and GHG forcing is more or less constant, and where the important changes come from internal variations of the climate system, i.e. from ENSO changes.
The net radiation measured by the satellite is net = F – H, where F is the forcing and H is the feedback associated with the temperature change, which can be written as H = lambda*dT. So net = F – lambda*dT (Björnbom uses the letter alpha for the climate sensitivity parameter; in this discussion I will continue with lambda for clarity). To avoid all complications due to seasonal changes, all calculations are done on anomalies computed with respect to the September 2000 – May 2011 period.
As “net” is an outgoing radiation, it should be a negative number, and the relationship will more clearly be written as
-net = -F + lambda*dT
fig.5. Plot of the energy balance equation.
The slope of the line gives the unknown climate sensitivity parameter lambda (in W/(m2*K)), and the intercept with the x-axis gives the climate sensitivity temperature dT0, i.e. the temperature change that would be caused by the forcing F, regardless of the origin of that forcing.
The next figure 6 ( = fig.1 of the paper) shows how the -net radiation and the global surface temperature (all anomalies) changed during the period of investigation:
Clearly the red curve (-net) lags the temperatures by about 7 months, and the variations are greatest during the 2006-2011 period, which is one of high ENSO indices. Drawing the phase plot with the -net points shifted by 7 months to the right gives the following result (fig. 2b of the paper):
The red line is simply a least-squares regression line and has a slope of 5.3 +/- 0.6 W/(m2*K); clearly large portions of the phase plot are more or less parallel to the red line. If we assume that the only (major) forcings during that period are those from the naturally occurring El Niños and La Niñas, the slope of 5.3 represents the climate sensitivity parameter lambda. As the equilibrium climate sensitivity is usually defined with respect to a doubling of the CO2 concentration from 280 to 560 ppm, with a corresponding forcing of 3.71 W/m2, the warming to be expected from such a doubling would be in the interval [3.71/5.9; 3.71/4.7] = [0.63; 0.79 °C], much less than the “consensus” IPCC interval of [2; 4.5 °C].
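The regression step can be illustrated with synthetic data. The forcing F, the noise levels and the number of months below are invented for the sketch; only the slope of 5.3 W/(m2*K) is taken from the paper:

```python
import numpy as np

# Synthetic (-net, dT) pairs obeying -net = -F + lambda*dT plus scatter.
rng = np.random.default_rng(2)
lam_true, F = 5.3, 0.5               # lambda from the paper; F is invented
dT = 0.3 * rng.standard_normal(129)  # monthly temperature anomalies, K
neg_net = -F + lam_true * dT + 0.2 * rng.standard_normal(129)

slope, intercept = np.polyfit(dT, neg_net, 1)   # least-squares line
dT_2x = 3.71 / slope                             # warming per CO2 doubling
print(round(slope, 1), round(dT_2x, 2))          # slope ~5.3, warming ~0.7 K
```

The regression recovers the built-in slope, and dividing the CO2-doubling forcing of 3.71 W/m2 by it gives a sensitivity of about 0.7 °C, the center of the [0.63; 0.79 °C] interval quoted above.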
Pehr Björnbom must be highly acclaimed for the addendum to his paper, where he gives all the necessary instructions on how to obtain the data and make the relevant calculations; this is an exemplary openness and an admirable scientific conduct which all true scientists and researchers should follow. I found a very small glitch in his Scilab script, which was easily corrected and did not have an influence on the calculation.
These 3 papers follow similar but not identical lines of reasoning; all suggest a climate sensitivity that is very small and distinctly lower than 1 °C. The warming standstill observed during the last decade (or even the last 16-18 years) seems to favor that interpretation, as CO2 mixing ratios from 1998 to 2012 increased from approx. 367 to 394 ppm, corresponding to a change in forcing of 0.4 W/m2 (about 4 times higher than the change in solar TSI from minimum to maximum during a solar cycle), which seems to have had no warming effect.
Probably Björnbom’s paper will be heavily criticized, as was the case with the 2 other papers; this is normal scientific procedure if the criticism is valid and made in a civilized manner. In my opinion, the final and definitive answer to the climate sensitivity problem is not here yet.
This is a new report (in German) from the University of Vienna (Austria), which has been discussed by Pierre Gosselin on his blog NoTricksZone. The 60-page report was written with the support of 5 institutions:
You should notice the presence of a catholic theological private university, which is a clear sign that the climate discussion mixes religious feelings with science and politics.
I read the 60-page report carefully and have very mixed feelings. It certainly does not deserve a Nobel Prize, but it has the merit of clearly stressing and defining the diversity and the common beliefs of the climate-realist community. Over large parts, these descriptions are given in neutral and not (too visibly) polemic terms, but later on the report rapidly falls back into the most primitive suggestive wording and one-sided views.
A very annoying, really childish obsession is the feminization of professions: you have the “AkteurInnen”, the “WissenschaftlerInnen”, the “NobelpreisträgerInnen”, the “PolitikerInnenreden” etc. How pitiful and laughable is this German politically correct prose, written by a supposedly adult university professor (a Privatdozent of the FU Berlin)! It should be noted that this report is the result of funding by the Climate and Energy Fund of the Federal State (“managed by Kommunalkredit Public Consulting GmbH, duration: 01.03.2012 – 31.12.2013”), so the insistence in the report that climate skeptics receive various funding could backfire!
On page 16 the author writes: “Allerdings gibt es auch und diese Gruppe dürfte in der deutlichen Mehrheit sein die Klimaskeptiker, die den menschenverursachten Klimawandel durchaus als wissenschaftlich erwiesenen Tatbestand anerkennen, aber den daraus resultierenden Katastrophismus oder die Klimahysterie ablehnen” (translation: “However, there are also climate skeptics, and this group is probably in the clear majority, who do accept man-made climate change as a scientifically proven fact, but reject the resulting catastrophism or climate hysteria”). This is an example of what I said about neutrality, and this kind of wording should be the norm in an academic-level report.
Now compare this with page 33: “Dies sind Anknüpfungspunkte für Klimaskeptiker, um Unsicherheiten zu verstärken, Desinformationen zu verbreiten…“ (translation: “These are points of leverage for climate skeptics to amplify uncertainties and spread disinformation…”). Here the author clearly considers climate skeptics as people who publish disinformation: good climatologists publish information, climate realists disinformation! Say farewell to neutrality; the report slides down into diatribe and cheap defamation!
On the same page, Brunnengräber quotes hockey-stick author Michael Mann on being attacked by the climate skeptics… but says not a single word on how the hockey-stick team did its best to block all publications they thought “non-conforming”.
The author also falls into the trap of suggesting that climate skeptics do not publish in peer-reviewed journals, voluntarily ignoring the vast amount of scientific publications… is this due to inexcusable laziness or an “a priori” agenda? It should be noted that this report received funding over a 10-month period, from 01 March to 31 December 2012.
Referring to the work of Prof. Mangini from the University of Heidelberg (an eminent specialist in the use of stalagmites as temperature proxies), Brunnengräber writes: ”Oder es wird auf nationale Klimaforschungsinstitute Bezug genommen, die wenig bekannt sind, aber mit eigenen Forschungen mit Eisbohrkernen aus der Antarktis, mit Stalagmiten-Analysen oder langjährigen Wetteraufzeichnungen aufwarten” (pp. 26/27, bold emphasis by me; translation: “Or reference is made to national climate research institutes which are little known, but which come up with their own research on Antarctic ice cores, stalagmite analyses or long-term weather records”). I do not think that Prof. Mangini’s “Institute of Environmental Physics” is little known, nor that the condescending term “aufwarten” is the correct wording to describe the exceptional work done there in stalagmite dating.
The German EIKE (Europäisches Institut für Klima und Energie) comes out as the most dangerous villain among German organizations. The author writes on its finances: “finanziert sich (angeblich) nur durch private Spender..” (translation: “finances itself (allegedly) only through private donors”). Why the doubt-seeding term “angeblich” = allegedly? If the author is not sure about the private funding, he should have done his research properly, but he didn’t!
The next important enemies clearly are Fritz Vahrenholt and Sebastian Lüning, the authors of “Die kalte Sonne“, a book that had the impact of a small atomic bomb in Germany! And in the camp of the “bad” journalists, no surprise, you find Dirk Maxeiner and Michael Miersch, who are even labeled industry lobbyists! Did Brunnengräber even read a single one of the truly excellent books by this team?
The report was written by a professor of the (leftist) “Forschungszentrum für Umweltpolitik” of the FU-Berlin, and a quick glance at the publications of that institute shows its agenda and ideology, and what is probably expected from its members. It is a pity that a report which could have been a valid scientific contribution to the climate debate has been badly disfigured by the agenda and, let us say it clearly, the one-sided and insufficient research work of its author. I nevertheless suggest you take the time to read it through and form your own opinion.
And you climate realists, please take consolation in this old (and slightly modified) German dictum: “Wer andern einen Brunnengräb(er)t, fällt selbst hinein“ (a pun on the author’s name: “He who digs a pit for others falls into it himself”).
1. Average wind speed is on decline at meteoLCD, and probably also at large parts of neighboring countries
2. As a consequence German and Luxembourg wind-energy production is hampered by declining capacity factors
3. Nevertheless, the huge installed wind-power capacity, together with an inadequate grid and nearly nonexistent storage facilities, means that the cost of curtailing wind parks is rising exponentially in Germany.
1. Declining wind speed
In a previous blog post I commented on the declining capacity factor of the Irish Eirgrid wind turbines, taken as a whole. The same holds for Germany and Luxembourg, at least from 2007/08 on. The German wind power association recently acknowledged that despite more than 1000 additional wind turbines being installed in 2012, the total energy output was lower than that of the previous year (45.9 versus 46.5 TWh). The culprit is declining wind speed.
I checked this wind speed trend in the meteoLCD data, starting in 2002, because since that date all equipment has been the same and has not moved.
fig. 1 Trends in mean yearly air speed at meteoLCD, Diekirch
The data points represent the yearly mean air speed measured by a cup anemometer. There is clearly a first period of positive trend, with a decline starting in 2007. I fitted two regression lines forced to meet at the 2007 point; the decline after 2007 averages 0.06 m/s per year. In 2007, a “virtual” wind turbine installed at the same place as the anemometer would have delivered an energy proportional to v**3, i.e. E0 = k*2**3 = k*8. The next year that energy would be E1 = k*1.94**3 = k*7.30. So the first-year decline would have been (0.70/8)*100 ≈ 8.7%, or close to 39% during the 5 years following 2007!
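The cube-law arithmetic above can be sketched in a few lines of Python; the 2 m/s starting speed and the 0.06 m/s per year decline are the figures from the meteoLCD trend, and the constant k cancels in the ratios:

```python
# Energy yield of a wind turbine scales with the cube of wind speed:
# E = k * v**3, so only speed ratios matter here.
v0 = 2.0          # assumed mean wind speed at the anemometer site in 2007 (m/s)
decline = 0.06    # observed average decline (m/s per year)

def relative_energy(v, v_ref=v0):
    """Energy of the 'virtual' turbine relative to the 2007 reference."""
    return (v / v_ref) ** 3

# first-year loss
v1 = v0 - decline
loss_1yr = (1 - relative_energy(v1)) * 100
print(f"Loss after 1 year: {loss_1yr:.2f} %")   # ~8.7 %

# loss after 5 years of steady decline
v5 = v0 - 5 * decline
loss_5yr = (1 - relative_energy(v5)) * 100
print(f"Loss after 5 years: {loss_5yr:.1f} %")  # ~39 %
```

This also shows why even a modest wind-speed decline hurts so much: a 3% drop in speed costs almost 9% in energy.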
The airspeed decline is not correlated with the North Atlantic Oscillation; the correlation coefficient between the annual mean NAO index (station-based) and airspeed is R = 0.24, which is not statistically significant.
Comment added 19th Feb 2013:
There is a paper by Zhao et al. (Advances in Climate Change Research, 2(4), 2011) which gives a declining trend for Europe of -0.09 m/s per decade for the 30-year period 1979 to 2008. The meteoLCD negative trend (0.06 m/s per year, i.e. 0.6 m/s per decade) is nearly 7 times higher.
2. The changing capacity factors CF of national wind parks
To analyze the real situation, I dug into some databases (for instance the www.ieawind.org, www.ewea.org and fee.asso.fr websites, using Wikipedia only as a last resort; the numbers are close, but not quite the same). I calculated the yearly mean CF from the yearly total production and the installed wind power at the end of the year. There is a small problem with this, as the installed wind power usually increases during the year, so dividing by the end-of-year installed power lowers the real CF a bit. In general, the difference is small, so we will live with it.
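The capacity-factor bookkeeping just described, including the end-of-year bias, can be sketched as follows; the numeric inputs are rounded illustrations for Germany 2012 (roughly 46 TWh from ~29 growing to ~31 GW installed), not the exact national statistics:

```python
# CF = annual energy / (installed power * hours in the year).
# Dividing by end-of-year capacity biases CF slightly low when
# capacity grows during the year; averaging start- and end-of-year
# capacity reduces that bias.
HOURS_PER_YEAR = 8760

def cf_end_of_year(annual_twh, gw_end):
    """CF using end-of-year installed capacity (slightly pessimistic)."""
    return annual_twh * 1000 / (gw_end * HOURS_PER_YEAR)

def cf_mid_year(annual_twh, gw_start, gw_end):
    """CF using the mean of start- and end-of-year capacity."""
    return annual_twh * 1000 / (0.5 * (gw_start + gw_end) * HOURS_PER_YEAR)

print(f"{cf_end_of_year(45.9, 31.3):.3f}")     # end-of-year basis
print(f"{cf_mid_year(45.9, 29.1, 31.3):.3f}")  # mid-year basis, a bit higher
```

The difference between the two conventions is below one percentage point here, which justifies "living with" the simpler end-of-year division.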
The next figure shows the situation for 5 countries: Germany (DE), Ireland (EI), Denmark (DK), France (FR), Luxembourg (LU)
fig.2: Annual mean capacity factors of national wind parks
Clearly the German CFs (blue circles) and the Luxembourg CFs (pink triangles) have been declining since 2007. The next figure shows the regression lines starting in 2007.
fig.3. Linear trends of annual mean capacity factors
Both Germany and Luxembourg show clearly declining trends: Germany’s CF declines by 0.0051 per year, i.e. by 2.5% with respect to 2007, or a total (derived from the trend) of -12.6%. Luxembourg’s decline is even more spectacular (the 2012 data are not yet available): its CF declines by 0.0124 per year, i.e. by approx. 6% per year (w.r.t. 2008), or -24% for the whole period.
Neither of these countries had a big offshore contribution by the end of 2011 (percentages of offshore installed power: DE = 0.3, DK = 0.1, EI = 1.5, all others 0%). Nevertheless it is clear that relatively flat countries close to the sea, such as Denmark and Ireland, fare much better than the “more continental” ones.
3. The German “Abregelung”
Despite the declining capacity factor and energy production of wind turbines, the yearly total of wind energy that has to be destroyed in Germany rises in a spectacular manner. The German word for this destruction is the harmless-sounding “Abregelung”, translated as “down-regulation”. In fact, this word corresponds to the power that must not be produced in order to avoid a breakdown of the electrical grid. Grid operators such as E.ON or TenneT must pay the wind park owners for this, and these sums end up inflating the final consumer price. Here are the numbers as found in the Ecofys report “Abschätzung der Bedeutung des Einspeisemanagements” (“Assessment of the significance of feed-in management”).
Taking the solid blue triangle data points from the Bundesnetzagentur (BNetzA), one can see that the costs rose from about 6 million to 40 million Euro in the 3 years from 2009 to 2011; this is an exponential increase that will continue as long as the deficient grid cannot cope with high wind production when local demand is low:
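If the 6 to 40 million Euro rise over two yearly steps is treated as exponential, the implied constant yearly growth factor follows directly; a minimal sketch using the two BNetzA endpoint figures quoted above:

```python
# Implied yearly growth factor of curtailment ("Abregelung") costs,
# assuming constant exponential growth between the two endpoints.
cost_2009 = 6.0    # million Euro (BNetzA)
cost_2011 = 40.0   # million Euro (BNetzA)
years = 2          # number of yearly steps from 2009 to 2011

growth = (cost_2011 / cost_2009) ** (1 / years)
print(f"Implied yearly growth factor: {growth:.2f}")  # ~2.6x per year
```

In other words, curtailment costs were multiplying by roughly 2.6 every year over this period.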
fig.4. Annual costs in million Euro for downing wind production in Germany
The real costs for the customer are still higher, as a not negligible part of the energy produced has to be sold at zero or negative prices. For instance, during 2012 electricity prices became negative 15 times at the Leipzig EEX exchange, reaching an eye-watering -473 Eur/MWh on Christmas morning 2012. The total cost for the 2 successive days 25/26 Dec 2012 with a negative price balance is approx. 75 million Euro. So it does not come as a surprise that German electricity is among the most expensive in the EU for the normal household paying the full tariff (about 29 cent/kWh in 2013, compared to the French tariff of 15.6 cent/kWh).
Without being a tenacious opponent of wind power, one has to ask how a nation of excellent engineers and scientists became so enthralled by “green” energy that, in their rush for new wind parks, they forgot two essentials: grid adequacy and storage facilities.
The big ECA&D site of the European Climate Assessment & Dataset holds two versions of its data files: a non-blended and a blended one. The non-blended files are said to simply contain the raw data as submitted by the meteorological station, with -9999 indicating missing or unacceptably bad values (there is a column with a quality code, which is 0 if the data are ok and, for instance, 9 if the data are missing). The blended series fill in missing data by including measurements from real-time SYNOP messages broadcast continuously; in this in-filling, only data from stations not further than 12.5 km away should be used.
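Reading such a series while honoring the -9999 sentinel and the quality flag can be sketched as follows. The ECA&D daily files are comma-separated (station id, source id, date, value, quality flag) with temperatures in units of 0.1 °C; the exact header layout varies, so the column positions used here are an assumption to be checked against the actual file:

```python
import csv

def read_ecad_series(path):
    """Return {date: temperature in degC}, keeping only quality-flag-0 days."""
    series = {}
    with open(path) as f:
        for row in csv.reader(f):
            # skip header and descriptive lines (first field not numeric)
            if len(row) != 5 or not row[0].strip().isdigit():
                continue
            date = row[2].strip()
            value = int(row[3])
            flag = int(row[4])
            if flag == 0 and value != -9999:
                series[date] = value / 10.0  # 0.1 degC -> degC
    return series
```

Filtering on the quality flag rather than only on -9999 is the safer choice, since a flagged-bad value may still carry a numeric reading.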
Now, what does this mean in practice? I did some detective work using the TN and TX (= minimum and maximum daily temperature) files of daily measurements made at the Findel airport meteorological station. As said in a previous blog post, the FINDEL series has many missing days in 2011 (the first 5 months are missing entirely) and in 2012 (September is missing). Mr. Jacques Zimmer from MeteoLUX (the organisation running the FINDEL meteorological station) kindly told me that the problem was not non-existent measurements, but long-lasting, intermittent software problems with the transmission of the data to the official European collector.
1. Differences between the ECAD TN and TX non-blended and blended long-time series
The files with the TN and TX data are best found from the ECA&D website by following the links “Daily data” … “Custom Search” and entering Luxembourg and Luxembourg Airport. Both types of file start on 01 January 1947 and end on 31 December 2012. A quick glance directly shows the huge gaps in the non-blended versions during 2011 and 2012, but it is not clear how many differences exist during the previous years from 1947 to 2010. To make an easy comparison of the text files, I used the excellent freeware ExamDiff from PrestoSoft.
Well, the result comes as a surprise: from 1947 to 2010 inclusive, the non-blended and blended files are exactly the same. The versions differ only during the last 2 years (2011 and 2012), where most (but not all) of the missing data have been filled in. Nevertheless, there remain 7 days flagged -9999 (01 to 03 January, 06 March, 03 April and 18 May). I do not understand why these few holes have not been filled in!
Now a serious question: can one reasonably assume that during the whole period 1947 to 2010 there were no missing raw data in the FINDEL series? I think not! The conclusion is that after some delay (let's say two years), the original raw data have disappeared, and even series flagged as “raw” (as by the KNMI Climate Explorer, which leads to the same files) are in fact modified series. So any hope of going back to the “originals” for data re-evaluation is doomed.
2. Influence of blending on Findel mean yearly DTR
If we use the year 2011, we find the following:
- average DTR using the Jan. to May data sent by Mr. Jacques Zimmer and the non-blended series for the remainder of the year: DTRavg = 8.12 (8.122740)
- average DTR using the blended ECAD series (which still contains 7 missing days): DTRavg = 7.92 (7.922409)
The absolute difference is 0.20 °C, the blended value being about 2.5% lower, a not negligible difference.
Using only the blended files for the 2002 to 2012 DTR anomalies (w.r.t. the 2002-2011 mean), one finds a negative trend of -0.34 °C per decade. Using the non-blended data (with the exception of in-filling the missing September 2012 days with blended data), the trend is -0.27 °C per decade.
The following picture summarizes the comparison, including the BEST DTR anomalies (up to 2011); it should be noted that BEST gives only mean monthly DTR, not daily values as ECAD does; so the BEST points represent yearly averages computed from monthly means, whereas the meteoLCD (green) and FINDEL (red and pink) points represent yearly averages computed from daily values.
The picture shown in the previous blog post showed a slightly positive trend for FINDEL; the reason is that I had replaced the missing FINDEL data with those of meteoLCD, multiplied by a calibration factor. As a general rule, one should not expect DTR patterns to be the same even for relatively close stations. The measured DTR represents the local climate, which can diverge strongly from a pattern computed using data from neighboring stations, as BEST does.
Marcel Severijnen, a regular correspondent, also made in his blog klimaatblog.wordpress.com a comparison of the DTRs of some big and well-known Dutch weather stations such as De Bilt, Maastricht, Vlissingen etc. He shows the following graph:
All these stations are less than 300 km apart, yet they show really different behavior: the inland station De Bilt could reflect the influence of a ~120-year double AMO-related oscillation, whereas the damped oscillation of coastal Vlissingen seems close to the AMO period of ~60 years. Please read the full text (in Dutch) of Marcel's comment here.
3.1. It seems that the ECA&D non-blended files are in reality blended ones, except for the most recent years. Thus the raw data are lost in this big European dataset. This conclusion is preliminary, as it is based on a single station. A more stringent analysis is badly needed.
3.2. DTR patterns of a station reflect the local micro-climate. They can be hugely different from those of other neighboring stations, and evidently from averages computed over extended regions.