Copernicus on Earth: the ECEM Demonstrator

July 27, 2017



There is an EU research project called "Copernicus, Europe's Eyes on Earth", which studies climate- and (renewable) energy-related problems.

At the last ICEM conference, organized by the WEMC (World Energy and Meteorology Council), the University of Reading presented a very interesting web site called the ECEM Demonstrator. This interactive web site allows one to obtain graphs of many climate- and renewable-energy-related variables very quickly. One simply has to fill out a few input windows to get the result in graphical form.

The following figure shows the input window used to get global solar irradiance data for Luxembourg:

Clicking new graph gives the result:

This graph can be zoomed into, cropped etc…

Let us just compare the ECEM yearly data from 2002 to 2016 with those measured at meteoLCD (and given in our Trends page):

Our meteoLCD measurements are very close to the ECEM data, and both show a similarly strong decline since 2002, caused by the exceptional heat-wave year 2003.

If we start the series in 2004, there still is a general negative trend, but it is much smaller: -1.5 for ECEM and -1.8 for the meteoLCD observations.

The ECEM Demonstrator is a very handy tool for an easy and quick check, and I strongly suggest that you try it out.


Warming by greenhouse gases: wrong for 150 years?

June 23, 2017

From time to time a paper comes along which has the potential to throw decade- or even century-old beliefs into the dust bin. Since Svante Arrhenius (1859-1927), climate science has assumed that the global temperature of the Earth (or of similar celestial bodies with an atmosphere) is governed by the concentration of greenhouse gases, which produce a warming by trapping upwelling long-wave radiation (the misnamed "greenhouse effect"). This trapping can be observed in the laboratory, for instance using a closed volume of CO2 gas. No experimental proof exists that this is also the case in an open atmosphere with strong convective air movements.

In February 2017, Ned Nikolov and Karl Zeller published in "Environment Pollution and Climate Change" a really revolutionary paper titled "New Insights on the Physical Nature of the Atmospheric Greenhouse Effect Deduced from an Empirical Planetary Temperature Model". It is a long paper of 22 pages, which demands a couple of readings to grasp every aspect; there is no overly complicated mathematics involved, and the conclusions are stated quite clearly.

Let me summarize in this blog the essentials of a paper which loudly rings unfamiliar bells!

  1. The aim of the paper

The paper describes empirical research to find a set of physical parameters that predicts the GMAT (= Global Mean Annual near-surface Temperature) Ts of different celestial bodies: Venus, Earth, the Moon, Mars, Titan and Triton. Some of these have an atmosphere; others, like the Moon or Triton, have none (or almost none). The study uses dimensional analysis (DA), which we all know from our elementary physics courses (an example: an irradiance given in W/m2 has the dimensions [M T^-3]).

The following table shows the 6 variables used and their physical dimensions:


Let me explain some of these variables and others introduced later:

Tna = surface temperature in the absence of an atmosphere
S = solar TOA (top of the atmosphere) irradiance
It can be shown, using data from the Moon, that Tna = 32.44*S^0.25   (SI units: Tna in Kelvin, S in W/m^2)

Ts/Tna = RATE = Relative Atmospheric Thermal Enhancement = near surface warming effect of the atmosphere

An example: for the Earth, Tna = 197 K and RATE = 1.459, so Ts = 197 * 1.459 = 287.4 K.
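As a quick numerical check, here is a minimal Python sketch of these two relations (the TOA irradiance of ~1361 W/m2 for the Earth is my assumption; the paper's table may use a slightly different value):

```python
# Minimal sketch of the two relations quoted above (Nikolov & Zeller).
# Assumption: S for the Earth is taken as ~1361 W/m^2; the paper's table may differ slightly.

def t_no_atmosphere(S):
    """Airless-body mean surface temperature Tna [K] from TOA irradiance S [W/m^2]."""
    return 32.44 * S ** 0.25

S_earth = 1361.0                 # W/m^2, assumed TOA solar irradiance
Tna = t_no_atmosphere(S_earth)   # ~197 K
RATE = 1.459                     # relative atmospheric thermal enhancement for the Earth (from the paper)
Ts = Tna * RATE                  # ~287 K

print(f"Tna = {Tna:.1f} K, Ts = {Ts:.1f} K")
```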

The next table shows the numerical values of these different parameters for the celestial bodies included in the study:

3. The best model

Using these variables, the authors tried to build a model that is dimensionally correct, physically sound (using only standard thermodynamics) and gives the best fit to the observational parameters. It comes as a surprise that two ratios, Ts/Tna and P/Pr, are enough to obtain the GMAT for all bodies (P is the average atmospheric pressure at the surface, and Pr is a reference pressure, taken as the pressure on Mars). The next plot shows how well their best model fits the observational data (R^2 = 0.9999 !):

The conclusion is breathtaking: the surface temperature Ts (actually the mean surface temperature over at least 30 years) depends only on the solar irradiance (through Tna) and the atmospheric pressure at the surface: greenhouse gases are not needed to obtain this average global temperature!

4. The main conclusions.

I will partially use the authors' text to give the conclusions:

a. Since the late 1800s, all climate models have been built on the premise that the atmosphere warms the Earth by limiting the radiant heat loss of the surface through the action of IR-absorbing gases (= the greenhouse theory).

b. The new model shows that what is called “the greenhouse effect” is in fact a pressure induced thermal enhancement.

c. Down-welling long-wave (LW) radiation is not a global driver of surface warming.

d. Albedo is a byproduct of the climate system (and not an independent driver).

e. GMAT (do not forget this is a long-term average!) will remain stable (within +/- 1 K) as long as TOA solar irradiance and atmospheric mass are stationary; there are no climate tipping points. The RATE thermal enhancement can be understood as the collective effect of a myriad of simultaneous adiabatic processes in the atmosphere.

f. The positive feedback of water vapor (always used in warming scenarios) is a climate model artifact.


All this brings the authors to write that there is a need for a paradigm shift in the understanding of key macro-scale atmospheric properties.


I recommend reading this paper carefully, and asking yourself whether we might be betting on the wrong horse with our CO2-centered climate policies.


PS1: You may want to look at this short interview of Dr. Nikolov on Youtube.
PS2: … and read this rather harsh critique in the Washington Post (September 2016)
PS3: … and this page on a “Unified Theory of Climate” of the authors



History: 24 June 2017: added PS1 … PS3; slight change of wording in point e. of last paragraph and other minor editing.

Do sea levels rise faster at EU coasts?

June 2, 2017

One of the great scares of climate alarmists is the "we are all going to drown" meme; rising global sea levels (caused by our CO2 emissions) are predicted to displace millions and millions of people, and the media joyfully bring more and more graphic pictures of this impending catastrophe. As always, one should look at the data, and that's what Phil J. Watson does in his latest paper "Acceleration in European Mean Sea Level? A New Insight Using Improved Tools" (Journal of Coastal Research, 33/1, Jan. 2017, link to full version).

  1. Relative and geocentric sea-level

The best and longest data series we have on changing sea levels come from coastal tide gauges (and not from satellites). The Permanent Service for Mean Sea Level has many long-term records (most of them European), some going back to the beginning of the 19th century and even further. These tide gauges have usually been well maintained, as they were important for the safety of ships entering or leaving a harbor. One of the best-known series comes from the town of Brest (France), starting in 1807. A tide gauge measures a relative sea level, i.e. the sea level relative to the position of the instrument. When we speak of global sea levels, these levels are geocentric, i.e. relative to the center of the geoid (the Earth with its non-spherical form). A global sea level has about as much practical value as a mean global temperature, that is mostly none. It is an intellectual construct which may be interesting from a scientific point of view, but useless as a tool for concrete political action, such as deciding to build dams or coastal protections. So what Watson does is check whether there is any acceleration in the relative sea levels measured at 81 locations on the European coasts. The following picture shows his selection of tide gauges:

Clearly most stations are on the Atlantic and Baltic coasts. The Netherlands have long been accustomed to protecting their low-lying land against a rising sea and storms, so I will focus on the situation at Amsterdam, and on Brest as representative of much of the more southern Atlantic coast. When talking about relative sea level, one should remember that (at least) 3 factors have an influence:
1. land movement at the site of the gauge: this can be an uplift caused by glacial rebound (the ground moves up as the pressure of the heavy ice masses that covered it during the last ice age has vanished: this is the case around the Baltic, for the stations in rectangle B). The ground may also move down due to groundwater extraction or simply the weight of neighboring buildings. These land movements can be quite different at close locations: at Vlissingen (NL) there is a mean rise of +0.28 mm/y, while at Ostende (BE), 59 km away, the ground sinks at -35 mm/y.
2. atmospheric influences: short-term ones such as atmospheric pressure and wind, and long-term ones such as those caused by the NAO (North Atlantic Oscillation). Usually the gauge readings are corrected for atmospheric pressure variations and are low-pass filtered to remove, for instance, the influence of the wind.
3. climate change influences, essentially melting (land-based) glaciers adding water to the oceans and the thermal expansion of warmer waters.

Unambiguously extracting these individual factors from the gauge data is difficult, if not (yet) impossible. But it is not a practical necessity, as decisions to begin work on new dams or other protective measures rely on the relative sea level (which includes all these factors), and not on a single parameter. Watson has used a new tool, a module called msltrend for the well-known open-source R statistical package, to obtain data series of exceptional quality.

2. The situation at Amsterdam

The following picture shows the relative sea level at Amsterdam, from 1766 to 2010:

Two conclusions are obvious:
1. the relative sea level begins to rise around 1820, close to the end of the Little Ice Age, which is usually taken to be 1850.
2. since that date the increase has been uniform, without any visible acceleration since the start of the industrial age (~1850); over the whole period the average increase is less than 1 mm/y, or about 1.3 mm/y since 1820.

Clearly the period of highest CO2 emissions, between 1970 and 2010, does not leave any visible imprint on this sea-level change.

3. Velocity of change and acceleration at Brest and Stockholm

The next picture shows the time series of relative sea level, the velocity of change (= the time gradient) and the acceleration at Brest and Stockholm:

Stockholm’s relative sea-level is continuously falling, at a near constant velocity and a practically zero acceleration (which is obvious, as acceleration is the derivative of velocity). The relative fall of the sea-level is important: more than 4 mm per year, and is practically the fingerprint of glacial rebound around the Baltic sea (at the island of Visby, ~150 km from Stockholm, the ground rises at 3.31 mm/y, and at Furuogrund, close to the northern coast of the Baltic, the rise is a spectacular 10.43 mm/y ).

At Brest the picture is quite different: there is a small increase of less than 0.7 mm/y over the whole period, with a near-constant velocity since 1990 (and no statistically significant acceleration). As at Amsterdam, a "global warming" causing accelerated sea-level rise is not visible in these series. Watson writes in his paper that such a warming might show up with a delay of 15 to 20 years: well, the two periods of 1910-1945 and 1976-1990, usually accepted as the two global warming events, are long past that delay and leave no imprint in the relative sea-level series.
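To make the notions of velocity and acceleration used above concrete, here is a minimal Python sketch that fits a quadratic to a synthetic annual mean sea-level series; the numbers are invented for illustration only, and Watson's msltrend tool of course uses a far more sophisticated time-series decomposition:

```python
# Illustration of "velocity" (trend) and "acceleration" of a relative sea-level series.
# The synthetic data below are for illustration only: a purely linear rise of 1.3 mm/y plus noise,
# so the fitted acceleration should come out close to zero.
import numpy as np

years = np.arange(1850, 2011)
rng = np.random.default_rng(0)
msl = 1.3 * (years - years[0]) + rng.normal(0.0, 15.0, years.size)   # relative sea level [mm]

# Fit h(t) = c2*t^2 + c1*t + c0; the velocity is c1 (at mid-period), the acceleration 2*c2
t = years - years.mean()
c2, c1, c0 = np.polyfit(t, msl, deg=2)

print(f"velocity ~ {c1:.2f} mm/y, acceleration ~ {2 * c2:.4f} mm/y^2")
```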

4. The geocentric sea-level rise and the IPCC scenarios

Geocentric sea-levels can be calculated from the relative levels (with some caveats), using tectonic models or GPS measurements for the newest data. It is interesting to compare the European situation to the “global” geocentric sea-level rise (in mm/y) as used by the IPCC and in many models:

The global geocentric sea-level average change corresponds to the black line at about 3.4 mm/y (the grey band represents the 95% confidence interval, i.e. the region outside of which there is a low probability of finding data). All EU geocentric sea-level changes are below the global average, and most are even clearly below the lower confidence boundary!

5. Conclusions

Watson’s paper concludes that relative sea-levels at European coasts do not show a statistically significant acceleration (that is their change per year remains constant); remember that these sea levels are given by very long time series and reliable instruments, usually carefully maintained. As such, these data do not show a finger print of an ongoing climate warming. He makes a very interesting point in plotting what the changes in velocity and acceleration would be if past observations should conform to the IPCC’s models:

Taking Cuxhaven as an example, the acceleration, which has been zero since 1840, would have to increase to about 0.20 mm/y**2, and the velocity, which was a constant ~2 mm/y (i.e. a flat slope) during the last 180 years, would have to rise to a mind-blowing 17 mm/y during the coming 90 years: an extremely improbable scenario!
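A quick consistency check, assuming a constant acceleration over those 90 years: final velocity ≈ initial velocity + acceleration*time = 2 + 0.2*90 ≈ 20 mm/y, the same order as the quoted ~17 mm/y (which itself would correspond to an acceleration of about 0.17 mm/y**2).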

The conclusion for our political rulers is first to read this paper, then to try to understand the numbers, and finally to avoid jumping to hysterical "mitigation" measures that may well be extremely costly but essentially superfluous for changing coastal sea levels, whose century-long changes do not point to any potential catastrophe.

Rooftop wind turbines: haha or oho?

May 28, 2017

A couple of weeks ago our national media wrote about plans of the University of Luxembourg to "go green" by installing PV panels (yawn…) and rooftop wind turbines on its buildings at the Belval campus. As the following picture shows, there is only one high-rise building on the campus, the 83 m high (18 floors) "Maison du Savoir", standing on rather level building ground at approx. 300 m asl.

I understand that a university must jump on the subsidy train of the day to bolster its income, and plan every fashionable "planet-saving" measure that keeps public money flowing in. Nevertheless, the (very old) idea of rooftop turbines has never taken off, as simple physics and economics show it to be more child's play than a serious technology for harvesting wind energy at most locations.

1. The numbers behind rooftop wind turbines

There is a very good report on this type of diminutive wind turbine at the Solacity website (link), titled "The truth about small wind turbines", which concludes that one should avoid wind turbines and install PV panels instead (which was to be expected from a PV manufacturer!). The report gives a lot of very clear data, which I will use in this blog.

Here is the formula to compute the yearly energy (in kWh) to be expected from a turbine with a rotor of a given diameter at a constant wind speed:

E [kWh/year] = 2.09 * (diameter**2) * (windspeed**3), with the diameter in [m] and the windspeed in [m/s]. The ** means "to the power of".
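A minimal Python sketch of this formula, using the example values discussed just below (the electricity price of 0.18 Euro/kWh used to value the output is my assumption):

```python
# Yearly energy yield of a small wind turbine from the rule-of-thumb formula above.
# Assumption: an electricity price of ~0.18 Euro/kWh is used to value the output.

def yearly_energy_kwh(diameter_m, windspeed_ms):
    """E [kWh/year] = 2.09 * d^2 * v^3, with d in m and a constant wind speed v in m/s."""
    return 2.09 * diameter_m ** 2 * windspeed_ms ** 3

e = yearly_energy_kwh(4.0, 3.5)   # rotor diameter 4 m, constant wind speed 3.5 m/s
print(f"{e:.0f} kWh/year, worth roughly {e * 0.18:.0f} Euro/year")
```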

With a diameter of 4 m and a constant wind speed of 3.5 m/s, such a turbine would yield a meager 1434 kWh per year (worth approx. 260 Euro). These numbers are for horizontal-axis turbines, which do not look good on a roof and may suffer more severe structural problems from the usually turbulent air flow than a vertical-axis turbine (VAWT, usually a variant of the Darrieus type). The following picture shows two VAWTs on a building in Queensland (link):

The problem is that the efficiency of this VAWT type has been found to be much lower than that of a corresponding horizontal-axis turbine, and material cost and fatigue were also found to be higher (see here).

2. The problem with wind speed

All wind turbines have a minimum cut-in wind speed, below which they cannot produce any energy. This usually lies around 3 to 5 m/s (10 to 18 km/h). There is also a maximum cut-out speed (around 25 m/s) at which they must be stopped to avoid damage, but such winds are rare in Luxembourg and may be ignored when speaking of collectable energy.

Now, depending on your location, speeds above 3 m/s might not be exceptional: if your building is located in the high Alps or at the seashore, this should not be a big problem. The real situation in continental Luxembourg is quite different. Let me give you the data measured at meteoLCD (Diekirch, about 218 m asl) for the last 5 years:

The "avg" column gives the yearly average of the 17520 half-hour measurements (each half-hour measurement is the average of 30 measurements made every minute). In no year does this average wind speed (in m/s) exceed the 3 m/s cut-in limit. The GT3 column gives the number of hours (out of the total of 8760) during which the wind speed was greater than 3 m/s, and so on for the next columns.

Clearly, wind energy could have been produced during only about 1/5 of the time. The GT4, GT5 etc. columns show that the number of "fertile" hours drops rapidly (approx. by half for each step) and becomes virtually zero for wind speeds greater than 10 m/s. One should not forget that the rated power is very often given for a wind speed of 11 to 15 m/s.
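For the curious, here is a minimal Python sketch of how such a table can be derived from the half-hourly wind-speed records; the file name, its one-column layout and the list of thresholds are assumptions, not meteoLCD's actual data format:

```python
# Derive the yearly average and the "GT" columns (hours with wind speed above a threshold)
# from the 17520 half-hourly wind-speed values of one year.
# Assumption: the file name and its single-column layout are hypothetical.
import numpy as np

speeds = np.loadtxt("windspeed_halfhourly_2016.txt")   # 17520 values in m/s (hypothetical file)

print("avg [m/s]:", round(speeds.mean(), 2))
for threshold in (3, 4, 5, 6, 8, 10):                  # assumed thresholds for the GT columns
    hours_above = 0.5 * np.count_nonzero(speeds > threshold)   # each sample covers half an hour
    print(f"GT{threshold}: {hours_above:.0f} h out of 8760")
```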

The European Wind Atlas gives much more optimistic numbers, which in my opinion result from modelling rather than from observations:

According to the "Sheltered terrain" column, the Belval wind resource is about 60% lower than the situation at Diekirch (for a measuring height of 50 m above ground level). Keeping in mind that the meteoLCD anemometer is about 20 m above ground level, and that a possible rooftop wind turbine installed on the "Maison du Savoir" would be at 83 m, my best guess is that the Belval data will not be vastly different from those at Diekirch, or at best not more than double. Using one of the available wind profiles for cities (link), one could speculate that at 83 m height the wind speed would be about 60% higher than at 20 m, which confirms my intuitive guess.
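The height scaling behind this guess can be sketched with a simple power-law wind profile; the exponent of ~0.3 for rough, built-up terrain and the 2.5 m/s starting value are assumed, purely illustrative numbers:

```python
# Power-law wind profile: v2 = v1 * (h2/h1)**alpha.
# Assumptions: alpha ~ 0.3 (a typical value quoted for rough/built-up terrain) and
# an illustrative wind speed of 2.5 m/s at 20 m above ground.
v20 = 2.5
alpha = 0.3
v83 = v20 * (83.0 / 20.0) ** alpha

print(f"{v83:.2f} m/s at 83 m, i.e. about {100 * (v83 / v20 - 1):.0f}% higher than at 20 m")
```

With these assumptions the increase comes out at roughly 50-60%, in the same ballpark as the estimate above.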

In this article the (US) author suggests that a typical payback period for a 1 kW rooftop HAWT would be about 120 years, not counting repair and maintenance.

3. Conclusion

My reaction is rather "haha". Do not expect any non-trivial energy output from rooftop turbines at the University of Luxembourg. A university rooftop wind turbine will be no more than an expensive gimmick, fooling people into believing that they have found a miracle solution for providing "clean" energy.

The funding authorities should at least insist on good observational wind speed data before paying for what seems to me more a publicity stunt than a scientific endeavor.

Global land warming: an airport fingerprint?

May 12, 2017


[Picture courtesy Wattsupwiththat, showing a Svalbard (Spitzbergen) runway with weather station]

Prof. Fred Singer has an interesting article in the "American Thinker" where he suggests that the recent global warming given in the IPCC reports and many other data sources may be fake, compared to the "real" global warming seen during the early period 1910-1942.

Let me just give a very short comment concerning the global land temperatures, as given by GISS/NASA; the numbers I use are from the CSV file (link), smoothed data column.

First, look at this graph, which shows how dramatically the number of weather stations has fallen, with rural stations taking the biggest hit:


Before the plunge, the percentage of weather stations located at an airport was ~40%, increasing to ~75% in 2000.

Now, we all know that an airport has large surfaces of dark tarmac and that the exhaust from aircraft turbines is very hot air; so even if the meteorological sensors are mounted inside a Stevenson screen, at the regular height over grass-covered soil, one should not be surprised that such an airport location will show warmer temperatures than a plain rural one.

Look here at Findel airport in Luxembourg (courtesy Google Maps): notice how the exhaust of the parked blue-tailed plane blows in the direction of the weather station under westerly wind conditions (which are the most frequent in Luxembourg).


Let us also remember that at most airports the traffic increased dramatically between 1970 and 2000. All these factors could produce a spurious warming, on top of any existing "real" warming.

In the next figure I have superimposed the graph of global land temperature anomalies from GISS onto the previous plot:


I calculate the 5-year warming for two periods during which the percentage of airport-based stations is more or less constant: from 1970 to 1975 (less than 40% airport stations) the warming is +0.04 °C; from 1995 to 2000 (about 75% airport stations) it is +0.20 °C. The increases in atmospheric CO2 were +5.43 ppmV and +8.73 ppmV respectively (Mauna Loa data).
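Here is a minimal Python sketch of this simple calculation; the file name and the column names are assumptions and must be adapted to the actual layout of the GISS CSV file:

```python
# 5-year warming over two periods, read from the GISS land temperature anomalies (smoothed column).
# Assumptions: the file name and the column names "Year" / "Smoothed" are hypothetical.
import pandas as pd

df = pd.read_csv("giss_land_anomalies.csv").set_index("Year")
smoothed = df["Smoothed"]          # smoothed anomaly in degrees C

for start, end in ((1970, 1975), (1995, 2000)):
    print(f"{start}-{end}: {smoothed[end] - smoothed[start]:+.2f} C")
```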

The following table summarizes this situation:


So we have a land warming that is 5 times higher when the percentage of airport-located stations has nearly doubled and the CO2 increase is less than double. Do you really think that the location has no influence at all on the observed land warming, and that the CO2 increase is the sole cause?



14 May 2017: added pictures of weather stations at Svalbard and Findel airports.

Wood burning causes more pollution than diesel trucks

April 13, 2017


In 2015 I wrote here a comment, "Wood and pellets: a "burning" fine particulate problem", where the fine-particle emissions were compared to those of traditional fossil energy sources and found to be extremely high. As we are now in a time where Diesel-bashing has become the newest green fad, I was pleasantly surprised by an article by Michael Le Page in the New Scientist (4 Feb 2017), titled "Where there's smoke" (access is paywalled!). He writes how log burning in London and other cities has become a major pollution problem, often emitting much more PM2.5 than Diesel trucks.

A Danish "eco-friendly" wood burner was found to emit 500000 small particles per cm3 through the chimney, to be compared with the 1000 particles/cm3 found at the tail-pipe of a modern Diesel truck: so one eco-family thinking it is saving the planet causes as much pollution as 500 Diesel trucks!

Look at this graph from the Danish Ecological Council showing the particulate count inside a Copenhagen wood burning house:

This "correctly installed stove" caused an indoor pollution level 4 times higher than that of the most polluted street!

As I wrote in my former article, log burning certainly is the big contributor, well exceeding a pellet stove. But as far as romance goes, it is difficult to deny the much higher emotional appeal of a burning log compared to some smoldering pellets.

What I really find so scandalous is that people have been coaxed by an instilled "climate guilt" into switching to wood burning, and now the nasty side of this eco-fad shows up. But as hip as Diesel-bashing seems to be, ignoring the pollution caused by wood burning is a real scandal. A scandal which feels like a perfect "déjà-vu", a repeat of the politically motivated push towards less CO2-emitting Diesel engines in the 90s.

Must eco-politics always swing from naively pushing a "climate" solution to recognizing (after a long latency) that the nuisances of the solution might well be greater than the putative dangers it was meant to avoid?


PS1: here is a plot from the Australian Air Quality Group showing the contributions to PM2.5 pollution from different sources (June/July/August is the Australian winter):

The peak during July shows that wood smoke contributes about 10 ug/m3, whereas transport and industry together contribute only about 1 ug/m3, i.e. ten times less! (see here)

How the IPCC buries its inconvenient findings

March 30, 2017


There has been an interesting hearing before the US Senate on Climate Change, Models and the Scientific Method, with testimonies from J. Curry, J. Christy, M. Mann and R. Pielke Jr. In a later blog I will comment on this hearing (video link). For the moment I will just write about an astonishing fact given by Prof. Christy from UAH (the University of Alabama in Huntsville). Christy (and Roy Spencer) analyze and maintain the database of global temperature measurements made by satellites (the other team is RSS).


  1. The absence of a human-caused warming fingerprint

All climate models agree that human-caused global warming should show up as a warm "hot-spot" in the upper troposphere over the tropics. Look at the next figure, which corresponds to the outcome of one model:


The problem is that observations (by balloons, radiosondes, etc.) do not find this hot-spot, which is a serious blow to the validity of the CMIP-5 model ensemble. Christy has shown in his testimony that the difference between models and observations can be found even in the IPCC's own latest report (AR5): but you probably have to be an avid and patient reader to find it, as it is buried away in the Supplementary Material for chapter 10, figure 10.SM.1; also, the graphics are confusing and obscure, and some detective work is needed to clear the fog.

Here is the original figure 10.SM.1:

The second plot from the left corresponds to the tropics, and it is this sub-plot that we will look into.


2. IPCC’s hidden truth

I zoomed into the relevant plot and added some annotations and boxes:


The red band gives the answer of the CMIP-5 ensemble to the question "what are the warming trends in the tropical atmosphere (up to about 15 km), in °C/decade?" when the models include human-generated greenhouse gases (essentially CO2); the blue band gives the answer when the models do not include (i.e. ignore) human GHG emissions. And finally the thin grey line shows the observations of one radiosonde database (RAOBCORE = Radiosonde Observation Correction using Reanalysis): it can readily be seen that the models including GHGs terribly overstate the real warming: the red band (= region of uncertainty) lies completely above the observations. Now, what is nearly hilarious is that when the models do not include human GHGs (the blue band), the result is absolutely acceptable, as the blue band covers most of the observation line.

Christy made this graph even clearer by including all observations (the region bounded by the grey lines):

The conclusion is the same.

So the million-dollar question is: how can the IPCC claim with great confidence that its models tell us that the observed warming carries a human fingerprint and is caused with very high certainty by anthropogenic emissions, when its own assessment report shows the failure of these models?


The tuning of climate models

March 26, 2017

(link to picture)

Many (or rather practically all) "climate policies" are based on the outcomes of climate models; as these models predict nearly unanimously a future warming due to the expected rise in atmospheric CO2 concentration, their reliability is of primordial importance. Naively, many politicians and environmental lobbies see these models as objective products of hard science, comparable for instance to the structural physics of skyscrapers, whose correctness has been proven over many years.

Alas, this is not the case: as the climate system is devilishly complex and chaotic, building a General Circulation Model (GCM) starting from basic physical laws is a daunting task; during its development, each model must make choices for certain parameters (their values, their possible range), a process which is part of what is usually called "tuning". The choices made in tuning are not cast in stone but change with the model's creators, with time and with cultural/ideological preferences.

In 2016, Frédéric Hourdin from the "Laboratoire de Météorologie Dynamique" in Paris published, together with 15 co-authors, an extremely interesting and candid article on this problem in the "Bulletin of the American Meteorological Society", titled "The art and science of climate model tuning" (link). I will discuss some of the main arguments given in this excellent paper.

  1. Observations and models

Hourdin gives in Fig. 3 a very telling example of how the ensemble of CMIP5 models (used by the IPCC in AR5) differs in the evaluation of global temperature change from 1850 to 2010 (the temperature anomalies are given with respect to the 1850-1899 average):

I have added the arrows and the text box: the spread among the different models (shown by the gray arrows) is staggering, larger than the observed warming (given by the very warming-friendly HadCRUT4 series); even the "adjusted", i.e. tuned, variant (the red curve) gives a warming in 2010 that is 0.5 °C higher than the observations. We are far, far away from a scientific consensus, and decisions that ignore this can at best be called "naive".

2. Where are the most difficult/uncertain parts in climate models?

Climate models are huge constructs which are built up by different teams over the years; they contain numerous “sub-parts” (or sub-models) with uncertain parameters. One of the most uncertain ones is cloud cover. Just to show the importance, look at these numbers:

  • the forcing (cooling) of clouds is estimated at -20 W/m2
  • the uncertainty of this parameter is at least 5 W/m2
  • the forcing thought to be responsible for the post-1850 warming of about 1 °C is estimated at 1.7 W/m2

Conclusion: the uncertainty of the cloud cover effect is about 3 times larger than the forcing held responsible for the observed warming!

Hourdin asked many modelers what they think are the most important causes of model bias, and their answers prominently include cloud physics and atmospheric convection, as shown in fig. S6 of the supplement to the paper (highlights and red border added):

3. Are the differences among the models only due to scientific choices?

The answer is no! Many factors guide the choices made in tuning; Hourdin writes that "there is some diversity and subjectivity in the tuning process" and that "different models may be optimized to perform better on a particular metric, related to specific goals, expertise or cultural identity of a given modeling center". So, as in many other academic domains, group-think and group pressure certainly play a strong role, producing a consensus that might well be due more to job security or tenure than to objective facts.

4. Conclusion

This Hourdin et al. paper is important, as it is one of the first in which a major group of "mainstream" researchers puts its finger on a situation that would be unacceptable in other scientific domains: models should not be black boxes whose outcomes demand a quasi-religious acceptance. Laying open the algorithms and the unavoidable tuning parameters ("because of the approximate nature of the models") should be a mandatory premise. It would then be possible to check whether some "models have been inadvertently or intentionally tuned to the 20th century warming" and possibly to correct/modify/adapt/abolish some hastily taken political decisions based on them.

The coming cooling predicted by Stozhkov et al.

March 18, 2017




A new paper from February 2017 has been published by Y.I. Stozhkov et al. in the Bulletin of the Russian Academy of Sciences. Here is a link to the abstract (at SpringerLink); the complete version is regrettably paywalled, but I was able to access it through the Luxembourg Bibliothèque Nationale.

The paper is very short (3 pages only), has no complicated maths or statistics and is a pleasure to read. The authors predict, as many others have done before, a coming cooling period; their prediction is based on two independent methods of assessment: a spectral analysis of past global temperature anomalies, and the observed relationship between global temperatures and the intensity of the flux of charged particles in the lower atmosphere.

  1. Spectral analysis of the 1880-2016 global temperature anomalies.

The paper uses the global temperature anomaly series from NOAA and CRU, computed as the difference from the global average near-surface temperature of the 1901-2000 period. Their spectral analysis suggests that only 4 sine waves are important:

The general form is: wave = amplitude*sin[(2*pi/period)*time + phase], with the period and time in years and the phase in radians; the authors give the phase in years, so one has to multiply it by 2*pi/period to obtain the phase in radians.

  • series #1: amplitude=0.406  period=204.57 years  phase=125.81*2pi/period (radians)
  • series #2: amplitude=0.218  period=  69.30 years  phase=  31.02*2pi/period
  • series #3: amplitude=0.079 period=   34.58 years  phase=  17.14*2pi/period
  • series #4: amplitude=0.088 period=   22.61 years  phase=  10.48*2pi/period

I computed the sum of these 4 series and merged the graph with the global land-ocean temperature anomalies from GISS; the problem is that GISStemp calculates its anomalies relative to the mean of the 1951-1980 period, so the comparison will suffer from an offset.
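Here is a minimal Python sketch of that sum; as the paper's time origin is not restated here, taking the calendar year directly as the time variable is my assumption (a different origin would shift the curves along the time axis):

```python
# Sum of the four sinusoidal components given above:
# wave = amplitude * sin((2*pi/period)*t + phase), with phase = phase_years * 2*pi/period.
# Assumption: the time variable t is taken as the calendar year.
import numpy as np

components = [            # (amplitude [K], period [years], phase [years])
    (0.406, 204.57, 125.81),
    (0.218,  69.30,  31.02),
    (0.079,  34.58,  17.14),
    (0.088,  22.61,  10.48),
]

years = np.arange(1880, 2017)
total = sum(a * np.sin(2 * np.pi / T * years + 2 * np.pi * ph / T)
            for a, T, ph in components)
# 'total' can now be plotted against the GISStemp anomalies (mind the baseline offset).
```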

The authors write that spectral periods of less than 20 years do not play an important role: this means that El Niños (with a period of roughly 4 years) are ignored, as are important non-periodic forcing phenomena like volcanic eruptions. The following graph shows my calculation of the sum of the 4 spectral components (in light blue) together with the official GISStemp series (in red):

The fit is not too bad but, as was to be expected, misses the very strong warming caused by the 2016 El Niño.

The authors are not the first to do a spectral analysis of the temperature series. N. Scafetta, in his 2012 paper "Testing an astronomically based decadal-scale empirical harmonic climate model vs. the IPCC (2007) general circulation climate models" (link), gives the following figure of a good fit using 4 short-period sine waves (so he does not seem to agree with Stozhkov et al. on the unimportance of short periods):

Note that both models predict a cooling for the 2000-2050 period.

2. Cosmic rays and global temperature

We are now in solar cycle 24, one of the weakest cycles in ~200 years, as shown by the next figure (link):

A situation similar to the Dalton minimum, the cold period of the first decades of the 19th century, seems to be unfolding and, all other things being equal, would suggest a return to colder-than-"normal" temperatures. But, as Henrik Svensmark first suggested, the Sun's activity acts as a modulator of the flux of charged cosmic particles, which create in the lower atmosphere the nucleation seeds for condensing water, i.e. cooling low-atmosphere clouds. In this paper the authors compare the flux N (in particles per minute) measured in the lower atmosphere (0.3-2.2 km) at middle northern latitudes with the global temperature anomalies dT: the measurements clearly show that an increase in N correlates with a decrease in dT. This is observational support for Svensmark's hypothesis:

As this and the next solar cycle are predicted to be of very low activity, this observation is a second, independent prognosis of a coming cooling (you may want to look at my older presentation on this problem here).

3. Conclusion

I like this paper because it is so short and does not try to impress the reader with an avalanche of complicated and futile mathematics and/or statistics. The reasoning is crystal clear: both the spectral analysis and the expected rise in the flux of charged particles suggest a global cooling over the next ~30 years!


Addendum 03 April 2017:

You might watch this presentation by Prof. Weiss given in 2015 on cycles in long-term European temperature series.

Energiewende: a lesson in numbers (Part 2)

March 11, 2017

(picture link)

In the first part of this comment on the Energiewende I showed that its primary goal, restricting CO2 emissions, has not been attained.

In this second and last part I will concentrate on the costs of the Energiewende.

2. The costs of the Energiewende

Let us remember that the financial side of the Energiewende is a system of subsidies going in many directions: those who install solar PV or wind turbines (for instance) receive a subsidy for the installation costs; they are granted priority in feeding electricity into the grid, and they are paid for this feed-in at a tariff largely in excess of the market price. The McKinsey report writes: "Die aktuell vorliegenden Zahlen belegen, dass die bisherigen Erfolge der Energiewende überwiegend durch teure Subventionen erkauft worden sind" (the currently available figures show that the successes of the Energiewende so far have predominantly been bought with expensive subsidies).

The cost for an individual household rises continuously, as shown by the next graph:

The increase with respect to the 2010 situation is a mind-blowing factor of 3.35; as the kWh price will probably reach or exceed 0.30 Euro in 2017, most experts agree that the yearly supplementary cost for a 4-person household will be higher than 1400 Euro (to be compared with the price of one 1-Euro ice-cream cone per person per month that minister Jürgen Trittin announced in 2004!).

The subsidization has transformed a free market into a planned economy, with many unintended nefarious consequences:
At certain times the combined solar + wind production is excessive and leads to negative prices (the big electricity companies must pay their (foreign) clients to accept the surplus electricity):

The "redispatch" interventions to stabilize the grid and avoid its collapse rose by a factor of 10 from 2010 to 2015 (link); during 2013-2015 the associated costs rose at practically more than the doubling rate of "Moore's law" (link):

Actually, if one includes not only the costs of not-needed electricity, but also those of redispatch (changing the provenance of the electrical energy) and of the mandatory reserve capacity, we are close to a doubling over the years 2011, 2013, 2014 and 2015, as shown by the "Insgesamt" (total) column in million Euro (link):

The McKinsey report sees grid management costs quadrupling during the coming years and rising to over 4 billion Euro (4*10^9) per year. A recent article in The Economist is titled "Wind and solar power are disrupting electricity systems". Three main problems are cited there: the subsidies, the intermittency of wind and solar, and finally their very low production costs, which make traditional power stations (urgently needed for base load, backup and grid stabilization) uneconomic: without state subsidies nobody will build these power stations, so the circle of state planning (as we know it from Soviet times) is closed.

3. The job problem

Renewables have always been hyped for their job potential, but the reality in Germany is quite different: 2016 was the fourth year of falling job numbers in the renewables industry, and if this trend continues, the target of 322000 "green" jobs will not be attainable by 2020. Equally disquieting is that 2016 is the first year showing a decline in jobs in the electricity-hungry industries. An older (2011) AEI report concludes that green jobs only displace traditional ones, and that in Spain each green megawatt installed destroyed 5.28 jobs. It seems that the whole Energiewende rests on a foundation of big subsidies (either direct or indirect) and on state planning and steering. In a free market the rise of "renewable" electricity would not be nil, but it would be much slower. The subsidies have spoiled huge parts of the industry, which now sees these subsidies, paid by all citizens, as its due.

Fritz Vahrenholt has published a paper at the GWPF titled "Germany's Energiewende: a disaster in the making". He could well be right.