Recent methane rise mostly due to biogenic sources

September 16, 2017

There is an interesting new paper by Nisbet et al. in the AGU publication Global Biogeochemical Cycles titled “Rising atmospheric methane: 2004-2014 growth and isotopic shift”. The fossil fuel industry (oil extraction, fracking…) is often blamed for rising methane emissions, but this argument somehow went into limbo as the atmospheric mixing ratio, after a period of clear rise, remained stable for many years:

This picture documents the rise from 1984 to about 1999, the following plateau, and finally a renewed, slower rise from 2005 to 2015 (the lower plot shows the derivative, i.e. the change in mixing ratio per year).

One fingerprint for detecting the origin of the methane (from fossil fuels or from biogenic sources) is the isotopic composition: biogenic methane is more depleted in the 13C (carbon-13) isotope than methane from fossil sources, i.e. it contains relatively less 13C. Usually the isotopic fingerprint is given as delta_13C/12C in per mil (‰): the next figure (from Wikipedia) shows the exact definition.


More negative values point to dominant biogenic sources, less negative values to fossil methane. For instance, this paper gives a delta_13C/12C of about -60 ‰ for methane from ruminants (cattle) and for marsh gas (wetlands). The next table (right column) gives an overview of the different sources:

Clearly, methane from landfills or from natural gas leaks and vents has a less negative delta_13C/12C.
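For readers who like numbers, the delta-notation can be sketched in a few lines of Python; the VPDB standard ratio used below (≈ 0.011237) is a commonly quoted value and my own addition, not a number taken from the Nisbet paper:

```python
# delta_13C/12C in per mil, relative to the VPDB standard.
R_VPDB = 0.011237   # 13C/12C ratio of the VPDB standard (assumed common value)

def delta13C(r_sample: float) -> float:
    """Return delta_13C/12C in per mil (more negative = more 13C-depleted)."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

# A sample whose 13C/12C ratio is 6% below the standard gives the
# ~ -60 per mil typical of ruminant or wetland methane:
print(f"{delta13C(R_VPDB * 0.94):.1f}")   # -60.0
```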

The following picture from the Nisbet paper shows how this delta_13C/12C has evolved during the last 18 years:


CDIAC gives the series of 4 measurement stations (from North (Barrow) to South) which is consistent with the previous plots:


Clearly (in the 3 regions shown) there was a general plateau until 2005, followed by a marked decrease. Nisbet et al. conclude that the dominant cause of this decrease was biogenic: greater rainfall in the tropics enlarged the wetlands, and also favoured an increase of agricultural surfaces and livestock. But the contribution of the latter is estimated to be more gradual and smaller, so the main cause seems to be a meteorologically driven increase of the tropical wetlands.




So what happened to the science?

August 11, 2017

There is an excellent guest comment today at the WUWT blog by John Ridgway. There are no graphs, but essentially very sound reflections on the social pressures acting on a science that becomes politicized, as climate science has.

Let me just cite a few of  what I think are the best remarks in this very readable article:

  1. I suspect the problem is that climatologists are making predictions that cannot be readily falsified through appropriate experimentation
  2. The lack of falsifiability sets the scene for the achievement of consensus by other means, resulting in a certitude that cannot be taken at face value
  3. it is a logical non sequitur to suggest that a model that performs well for the purposes of hindsight will necessarily be reliable for the purposes of making predictions.
  4. With the Hockey Stick to hand, the IPCC no longer needed to bury the uncertainty in the body of its reports, since the graph proved that the uncertainty simply didn’t exist…it is difficult to avoid the conclusion that the data had been mercilessly tortured for a confession
  5. the consensus, rather than being a result of minds being changed during debate and inquiry, instead emerges following a form of sociological natural selection
  6. in climatology the situation is worsened by a politically motivated denial of uncertainties and a lack of commitment towards openness and the reproducibility of results

Please read this comment with an open mind!

PS: here, for your information, is how Mann’s hockey-stick reconstruction differs from that of Loehle (source):


Wind and Solar: Intermittency, backup and storage (part 2)

August 9, 2017

In the first part of this comment I wrote that F. Wagner’s study of the 100% renewables aim of Germany’s Energiewende showed that this would require a massive expansion of the electrical power infrastructure (about 4 times more capacity than the needed load would have to be installed) and an unavoidable production of surplus electricity, which might become more of a liability than an asset.
In this second part I will discuss some points of the second study, by Linnemann & Vallana: “Windenergie in Deutschland und Europa, Teil 1”, published in the June 2017 issue of the VGB Powertech Journal (link1 to paper, link2 to additional slides). This interesting study looks only at wind power, and insists on what is needed to guarantee reliability and on what has been achieved during the tremendous increase of German wind power from 2010 to 2016.

  1. Big increase in installed capacity, none in minimum.

The following slide shows the big increase in installed capacity between 2010 and 2016: from approx. 27 GW to 50 GW:

Two facts are sobering:

a. the maximum power produced (during a short annual time period) decreased from 81% to 68% of installed capacity, in spite of the near doubling of that capacity and of the more modern wind turbines (see blue curves).

b. the minimum power delivered remains constant at less than 1% of the installed capacity. In other words, the power guaranteed to be available at every moment of the year is less than 1% of the installed capacity.

If one makes a statistical analysis, the distribution of the delivered power turns out to be very asymmetric and far from normal:

This histogram gives the relative frequency of each level of delivered power during 2016; the mean u is about 8.7 GW. The sum of all relative frequencies to the left of that mean (i.e. the blue area left of the vertical line at u) is high and corresponds to a 60% probability that the delivered power lies below this already low mean.
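Why a skewed distribution puts well over half of the hours below the mean can be illustrated with a synthetic example; the Weibull shape and scale below are assumed, illustrative values, not a fit to the German 2016 data:

```python
import random

random.seed(1)
# Right-skewed synthetic "delivered power" sample (GW). Shape k = 1.2
# and scale 10 GW are illustrative assumptions, not fitted values.
sample = [random.weibullvariate(10.0, 1.2) for _ in range(100_000)]

mean_gw = sum(sample) / len(sample)
frac_below = sum(p < mean_gw for p in sample) / len(sample)
print(f"mean = {mean_gw:.1f} GW, P(power < mean) = {frac_below:.0%}")
# For this right-skewed distribution about 60% of all values lie
# below the mean, just as in the VGB histogram.
```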

2. The capacity factor of the installed wind turbines.

Despite the doubling in installed capacity, the change to more modern and powerful wind turbines, and the increase in offshore wind parks (which in 2016 delivered 12 TWh out of a total of 77 TWh), the overall capacity factor remains depressingly low, also when compared to other European countries:

The large variations are essentially due to changing weather patterns: 2010 was a very low-wind year for most of Europe. The long-term mean CF for 2010-2016 is no more than 1 percentage point above the 1990-2016 mean.

Compared to other European countries, Germany’s wind turbines do a disappointing job (note that CF% = [Ausnutzung/8760]*100); the German CF always lies close to the lower end of the range.
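As a quick plausibility check (my own arithmetic, using the round numbers quoted above: about 77 TWh produced by about 50 GW installed in 2016):

```python
# Capacity factor = actual annual energy / (installed capacity * 8760 h).
energy_twh = 77.0      # German wind production 2016 (from the text)
capacity_gw = 50.0     # installed wind capacity 2016 (from the text)

cf = energy_twh * 1000.0 / (capacity_gw * 8760.0)   # TWh -> GWh
print(f"CF = {cf:.1%}")   # about 17.6%
```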

3. How much backup power?

The VGB report is very clear: you need 100% of the installed wind capacity as backup. The reason is that the minimum guaranteed power (“Gesicherte Leistung”) is close to zero; the next table shows that this is the case for practically all countries, even geographically privileged ones like Ireland or Denmark:

As the windiest places are mostly already used, an improvement could theoretically be reached by modernization (“re-powering”) and by smoothing, i.e. by including all European producers in one big grid. The real data suggest that neither of these solutions is very effective: low-wind situations often extend over a large part of Europe (wind is often strongly correlated over much of the continent).

4. Conclusion

The minimum delivered power during the 2010-2016 period is about 0.15 GW; this is the amount of conventional (fossil/nuclear) generating capacity that the newly installed wind turbines can displace. In other words, the doubling to 50 GW of installed wind capacity has made only a ridiculously low 0.15 GW of conventional electricity generators superfluous!

Wind and Solar: Intermittency, storage and backup (part 1).

August 7, 2017

A couple of recent papers/studies make a thorough analysis of the German Energiewende and of the new problems caused by relying more and more on intermittent producers. The first paper is by Friedrich Wagner from the Max-Planck-Institut für Plasmaphysik. Titled “Surplus from and storage of electricity generated by intermittent sources”, it was published in December 2016 in the European Physical Journal Plus (link). The second is part 1 of a two-part study published by VGB Powertech in June 2017. The authors are Thomas Linnemann and Guido S. Vallana, and the title is “Windenergie in Deutschland. Teil 1: Entwicklungen in Deutschland seit dem Jahr 2010” (link). The link points to the original version in German, but an English translation will be available soon.

I wrote a comment on this last paper titled “Vom Winde verweht“, adding some reflections concerning the situation in Luxembourg; this has been published in the Saturday 5th August 2017 edition of the Luxemburger Wort, Luxembourg’s largest newspaper.

  1. A switch from demand driven to supply driven.

One of the most important aspects of the rush to decarbonize the electricity sector is the fundamental change planned to accommodate intermittent suppliers like wind and solar. The traditional electricity market is demand driven: the client has a certain maximum current hardwired into his meter or fuse box (say 32 A or 64 A per phase, and usually there are 3 phases); he can rely on this maximum being available at all times (though possibly at a different price, according to predefined peak and off-peak periods of the day); the electricity supplier must do his best to deliver the power asked for at the correct voltage and frequency.
The newly planned situation will see a swapping of the roles: the supplier decides what electrical energy he will deliver and at what price, and the consumer must accept. Smart meters allow very fine-grained changes of the tariff, which can be modified for very short time segments; the maximum power can be throttled down if needed. All this is called DSM (demand side management), and it practically robs the consumer of his freedom of lifestyle and consumption pattern. All this because the extremely variable iRES (intermittent Renewable Energy Sources) cannot guarantee a previously defined standard base load. This supply-driven “market” may remind the older among us of life in the former east-European communist states (like the DDR), where complicated 5-year state plans ruled the economy; it seems an irony that such a lifestyle will be hyped as progressive by the iRES lobbyists.

2.  Unavoidable surplus and huge backup

F. Wagner’s paper gives some very interesting numbers concerning an electricity market based on wind and solar alone. He assumes that fuels from biomass will be used for aviation, heavy machinery and heavy lorry transport, so biomass is not included in the future zero-carbon electricity mix. Neither is hydroelectricity, which has practically reached its limit in Germany. He assumes a future electricity consumption of 500 TWh per year, a rather conservative (low) number if one thinks of the political push for electric cars. A first conclusion is that in the wind/solar mix, solar PV electricity must not exceed about 20-25%, so there will be no equal sharing between wind and solar. The next figure (fig. 2 of the report, with added text boxes) shows how this scenario, had it been applied, would have worked out in 2012:

To guarantee supply, a huge backup infrastructure delivering 131 TWh per year (corresponding to 73 GW of installed capacity) must be built, and on top of that a yearly surplus of 131 TWh will be unavoidable. The calculations show that no surplus is generated as long as iRES sources contribute less than about 25%. In the 100% scenario, the unavoidable and strongly variable surplus will quickly become a liability, as there will be no consumer available to pay for it (note that periods of negative prices are becoming more and more frequent at the EEX exchange); this means that onshore wind turbines must be shut down every year for a total period of about one month!

Speaking of surplus, the first answer of iRES advocates is electricity storage (in batteries, hydro or through chemical transformations). Wagner’s analysis covers short-term day storage and long-term seasonal storage, both of which will be needed. In winter, surplus is strong during both day- and nighttime, so a one-day storage will not be enough. In spring, a daily storage solution would show a capacity factor (= efficiency) of 3%, which is ridiculously low. A seasonal storage solution that would avoid any backup infrastructure would demand an enormous capacity of at least 100 GW. Nothing of the kind exists, and no miracle storage technology is in the pipeline.

The conclusions of F. Wagner’s report:

  • a 100% iRES electricity production must install a capacity 4 times greater than the demand
  • if the storage capacity is to be provided by pumped water storage, existing capacity must be increased at least 70-fold
  • the integral capacity factor of the iRES system will be 18%, and a backup system of about 89% of peak load must be installed
  • to nominally replace 20 GW nuclear power, 100 GW iRES power will be needed
  • an extremely oversized power supply system must be created
  • the Energiewende cannot reach its goal of CO2 avoidance (“there is a clear conflict between political goal and applied method…overproduction by iRES may not be the right way to go”)
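The first and third bullets can be cross-checked with simple arithmetic; the ~80 GW German peak load used below is my own assumption, not a number from Wagner’s paper:

```python
# Cross-check of the "capacity 4 times greater than demand" conclusion.
demand_twh = 500.0                          # assumed yearly consumption (from the text)
avg_load_gw = demand_twh * 1000.0 / 8760.0  # average load, ~57 GW
cf = 0.18                                   # integral iRES capacity factor (from the text)
required_gw = avg_load_gw / cf              # ~317 GW of iRES capacity
peak_load_gw = 80.0                         # assumed German peak load (my estimate)
print(f"required {required_gw:.0f} GW, i.e. {required_gw / peak_load_gw:.1f} x peak load")
```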


to be followed by part 2, which discusses the VGB paper

Copernicus on Earth: Ecem Demonstrator

July 27, 2017



There is an EU research programme called “Copernicus: Europe’s Eyes on Earth” for studying climate- and (renewable) energy-related problems.

At the last ICEM conference, organized by the WEMC (World Energy and Meteorology Council), the University of Reading presented a very interesting web site called the ECEM Demonstrator. This interactive web site allows one to obtain graphs of many climate- and renewable-energy-related variables really fast. One simply has to fill out some information windows to get the result in graphical form.

The following figure shows the input window to get global solar irradiance data for Luxembourg:

Clicking new graph gives the result:

This graph can be zoomed into, cropped etc…

Let us just compare the ECEM yearly data from 2002 to 2016 with those measured at meteoLCD (and given in our Trends page):

Our meteoLCD measurements are very close to the ECEM data, and both show a similar strong decline since 2002, driven by the exceptional heat-wave year 2003 near the start of the series.

If  we start the series at 2004, there still is a general negative trend, but it is much smaller: -1.5 for ECEM and -1.8 for the meteoLCD observations.

The ECEM Demonstrator is a very handy tool for an easy and quick check, and I strongly suggest that you try it out.

Warming by greenhouse gases: wrong for 150 years?

June 23, 2017

From time to time a paper comes along with the potential to throw decade- or even century-old beliefs into the dust bin. Since Svante Arrhenius (1859-1927), climate science has assumed that the global temperature of the Earth (or of similar celestial bodies with an atmosphere) is commanded by the concentration of greenhouse gases, which produce a heating by trapping upwelling long-wave radiation (the mis-named “greenhouse effect”). This trapping can be observed in the laboratory, using for instance a closed volume of CO2 gas. No experimental proof exists that this is also the case in an open atmosphere with strong convective air movements.

Ned Nikolov and Karl Zeller published in “Environment Pollution and Climate Change” in February 2017 a really revolutionary paper titled “New Insights on the Physical Nature of the Atmospheric Greenhouse Effect Deduced from an Empirical Planetary Temperature Model“. This is a long paper of 22 pages, which demands a couple of readings to grasp every aspect; there is no overly complicated mathematics involved, and the conclusions are stated quite clearly.

Let me summarize in this blog the essentials of a paper which loudly rings unfamiliar bells!

  1. The aim of the paper

The paper describes an empirical search for a set of physical parameters that would predict the GMAT (= Global Mean Annual near-surface Temperature) Ts of different celestial bodies: Venus, Earth, Moon, Mars, Titan, Triton. Some of these have an atmosphere; others, like the Moon or Triton, have none (or almost none). The study uses dimensional analysis (DA), which we all know from our elementary physics courses (an example: an irradiance given in W/m2 has the dimension [M T^-3]).

The following table shows the 6 variables and their physical dimensions used:


Let me explain some of these variables and others introduced later:

Tna = surface temperature in the absence of an atmosphere
S = solar TOA (top of the atmosphere) irradiance
It can be shown, using data from the Moon, that Tna = 32.44*S^0.25   (SI units: Tna in Kelvin, S in W/m^2)

Ts/Tna = RATE = Relative Atmospheric Thermal Enhancement = near surface warming effect of the atmosphere

An example: for the Earth, Tna = 197 K, RATE = 1.459 so Ts =197 * 1.459 = 287.4 K
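This example can be reproduced in a few lines; the TOA irradiance S = 1361 W/m^2 is the commonly used modern value and my own input, not a number copied from the paper’s table:

```python
# Nikolov & Zeller's Moon-derived relation Tna = 32.44 * S**0.25,
# followed by the RATE multiplication for the Earth.
S = 1361.0                 # TOA solar irradiance, W/m^2 (assumed value)
Tna = 32.44 * S ** 0.25    # temperature of an airless Earth, K
RATE = 1.459               # relative atmospheric thermal enhancement (from the text)
Ts = Tna * RATE
print(f"Tna = {Tna:.0f} K, Ts = {Ts:.1f} K")   # ~197 K and ~287.5 K
```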

The next table shows the numerical values of these different parameters for the celestial bodies included in the study:

3. The best model

Using these variables, the authors tried to build a model that is dimensionally correct, physically sound (using only standard thermodynamics) and gives the best fit to the observational data. It comes as a surprise that two ratios, Ts/Tna and P/Pr, are enough to obtain the GMAT for all bodies (P is the average pressure of the atmosphere at the surface, and Pr is a reference pressure, taken as the pressure on Mars). The next plot shows how well their best model fits the observational data (R^2 = 0.9999!):

The conclusion is breathtaking: the surface temperature Ts (actually the mean surface temperature over at least 30 years) depends only on the solar irradiance (through Tna) and the atmospheric pressure at the surface: greenhouse gases are not needed to obtain this average global temperature!

4. The main conclusions.

I will partially use the authors text to give the conclusions:

a. Since the late 1800s all climate models are built on the premise that the atmosphere warms the Earth by limiting radiant heat losses of the surface through the action of IR absorbing gases ( = the greenhouse theory).

b. The new model shows that what is called “the greenhouse effect” is in fact a pressure induced thermal enhancement.

c. Down-welling long-wave (LW) radiation is not a global driver of surface warming.

d. Albedo is a byproduct of the climate system (and not an independent driver).

e. GMAT (do not forget this is a long-time average!) will remain stable (within +/- 1K) as long as TOA solar irradiance and atmospheric mass are stationary; there are no climate tipping points. The RATE thermal enhancement can be understood as the collective effect of a myriad of simultaneous adiabatic processes in the atmosphere.

f. The positive feedback of water vapor (always used in warming scenarios) is a climate model artifact.


All this brings the authors to write that there is a need for a paradigm shift in the understanding of key macro-scale atmospheric properties.


I recommend reading this paper carefully, and asking yourself whether we might be betting on the wrong horse with our CO2-centered climate policies.


PS1: You may want to look at this short interview of Dr. Nikolov on Youtube.
PS2: … and read this rather harsh critique in the Washington Post (September 2016)
PS3: … and this page on a “Unified Theory of Climate” of the authors



History: 24 June 2017: added PS1 … PS3; slight change of wording in point e. of last paragraph and other minor editing.

Do sea levels rise faster at EU coasts?

June 2, 2017

One of the great scares of climate alarmists is the “we are all going to drown” meme; rising global sea levels (caused by our CO2 emissions) are predicted to displace millions and millions of people, and the media joyfully bring more and more graphic pictures of this impending catastrophe. As always, one should look at the data, and that’s what Phil J. Watson does in his latest paper “Acceleration in European Mean Sea Level? A New Insight Using Improved Tools” (Journal of Coastal Research, 33/1, Jan. 2017, link to full version).

  1. Relative and geocentric sea-level

The best and longest data series we have on changing sea levels come from coastal tide gauges (not from satellites). The Permanent Service for Mean Sea Level holds many long-term records (most of them European), some going back to the beginning of the 19th century and even further. These tide gauges have usually been well maintained, as they were important for the safety of ships entering or leaving a harbor. One of the best known series comes from the town of Brest (France), starting in 1807. A tide gauge measures a relative sea level, i.e. the sea level relative to the position of the instrument. When we speak of global sea levels, these levels are geocentric, i.e. relative to the center of the geoid (the center of the Earth with its non-spherical form). A global sea level has about as much practical value as a mean global temperature, that is mostly none. It is an intellectual construct which may be interesting from a scientific point of view, but useless as a tool for concrete political action, such as deciding to build dams or coastal protections. So what Watson does is check whether there is any acceleration in the relative sea level measured at 81 locations on the European coasts. The following picture shows his selection of tide gauges:

Clearly most stations are on the Atlantic and Baltic coasts. The Netherlands have long been accustomed to protecting their low-lying land against rising sea and storms, so I will concentrate on the situation at Amsterdam, and on Brest as representative of much of the more southern Atlantic coast. When talking about relative sea level, one should remember that (at least) 3 factors have an influence:
1. land movement at the site of the gauge: this can be an uplift caused by glacial rebound (the ground moves up as the pressure of the heavy ice masses that covered it during the last ice age has vanished; this is the case around the Baltic, the stations in rectangle B). The ground may also move down due to ground-water extraction or simply the weight of neighboring buildings. These land movements can be quite different at close locations: at Vlissingen (NL) there is a mean rise of +0.28 mm/y, while at Ostende (BE), 59 km away, the ground sinks at -35 mm/y.
2. atmospheric influences: short-term ones such as atmospheric pressure and wind, long-term ones such as those caused by the NAO (North Atlantic Oscillation). Usually the gauge readings are corrected for atmospheric pressure variations and low-pass filtered to remove, for instance, the influence of the wind.
3. climate-change influences, essentially melting (land-based) glaciers adding water to the oceans and the thermal expansion of warming waters.

Extracting these single factors unambiguously from the gauge data is difficult, if not outright impossible. But it is not a practical necessity, as decisions to begin work on new dams or other protective measures rely on the relative sea level (which includes all these factors), and not on a single parameter. Watson has used a new tool, the msltrend module of the well-known open-source R statistical package, to obtain data series of exceptional quality.

2. The situation at Amsterdam

The following picture shows the relative sea level at Amsterdam, from 1766 to 2010:

Two conclusions are obvious:
1. the relative sea level begins to rise around 1820, close to the end of the Little Ice Age, which is usually put at 1850.
2. since that date the increase is uniform, without any visible acceleration since the start of the industrial age (~1850); over the whole period the average increase is less than 1 mm/y, or about 1.3 mm/y since 1820.

Clearly the period of highest CO2 emissions between 1970 and 2010 does not leave any visible impression in this sea level change.

3. Velocity of change and acceleration at Brest and Stockholm

The  next picture shows the time series of relative sea level, the velocity of change ( = the time gradient) and the acceleration at Brest and Stockholm:

Stockholm’s relative sea level is continuously falling, at a near-constant velocity and with practically zero acceleration (which is obvious, as acceleration is the derivative of velocity). The relative fall of the sea level is important: more than 4 mm per year, which is practically the fingerprint of glacial rebound around the Baltic Sea (at the island of Visby, ~150 km from Stockholm, the ground rises at 3.31 mm/y, and at Furuogrund, close to the northern coast of the Baltic, the rise is a spectacular 10.43 mm/y).

At Brest the picture is quite different: there is a small increase of less than 0.7 mm/y over the whole period, with a near-constant velocity since 1990 (and no statistically significant acceleration). As at Amsterdam, a “global warming” causing accelerated sea-level rise is not visible in these series. Watson writes in his paper that such a warming might show up with a delay of 15 to 20 years: well, the two periods 1910-1945 and 1976-1990, usually accepted as the two global warming events, are long past that delay and leave no impression in the relative sea-level series.

4. The geocentric sea-level rise and the IPCC scenarios

Geocentric sea-levels can be calculated from the relative levels (with some caveats), using tectonic models or GPS measurements for the newest data. It is interesting to compare the European situation to the “global” geocentric sea-level rise (in mm/y) as used by the IPCC and in many models:

The global geocentric sea-level average change corresponds to the black line at about 3.4 mm/y (the grey band represents the 95% confidence interval, i.e. the region outside which there is a low probability of finding data). All EU geocentric sea-level trends lie below the global average, and most even clearly below the lower confidence boundary!

5. Conclusions

Watson’s paper concludes that relative sea levels at European coasts do not show a statistically significant acceleration (that is, their change per year remains constant); remember that these sea levels are given by very long time series and reliable, usually carefully maintained instruments. As such, these data do not show a fingerprint of an ongoing climate warming. He makes a very interesting point by plotting what the changes in velocity and acceleration would have to be if past observations were to conform to the IPCC’s models:

Taking Cuxhaven as an example, the acceleration, which has been zero since 1840, would have to increase to about 0.20 mm/y**2, and the velocity, which was a constant ~2 mm/y (i.e. a flat slope) during the last 180 years, would have to rise to a mind-blowing 17 mm/y during the coming 90 years: an extremely improbable scenario!
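The kinematics behind this statement can be sketched with the constant-acceleration formulas; this is my own simplification of Watson’s plot, not his computation:

```python
# Constant-acceleration sketch: which acceleration takes the velocity
# from the observed ~2 mm/y to ~17 mm/y within 90 years, and what
# total rise would that imply?
v0, v1, years = 2.0, 17.0, 90.0             # mm/y, mm/y, y
a = (v1 - v0) / years                        # ~0.17 mm/y^2, close to the ~0.20 of the plot
rise_mm = v0 * years + 0.5 * a * years**2    # s = v0*t + a*t^2/2
print(f"a = {a:.2f} mm/y^2, total rise = {rise_mm:.0f} mm")   # ~855 mm, i.e. ~0.9 m
```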

The conclusion for our political rulers: first read this paper, then try to understand the numbers, and finally avoid jumping to hysterical “mitigation” measures that may well be extremely costly, but essentially superfluous for coastal sea levels whose century-long changes do not point to any potential catastrophe.

Rooftop wind turbines: haha or oho ?

May 28, 2017

A couple of weeks ago our national media wrote about plans of the University of Luxembourg to “go green” by installing PV panels (yawn…) and rooftop wind turbines on its buildings at the Belval campus. As the following picture shows, there is only one high-rise building on the campus, the 83 m high (18 floors) “Maison du Savoir”, standing on rather level building ground at approx. 300 m asl.

I understand that a university must jump on the subsidy train of the day to bolster its income, and plan every fashionable “planet-saving” measure that keeps public money flowing in. Nevertheless, the (very old) idea of rooftop turbines has never taken off, as simple physics and economics show it to be more of a children’s toy than a serious technology for harvesting wind energy at most locations.

1. The numbers behind rooftop wind turbines

There is a very good report on this type of diminutive wind turbine at the Solacity website (link), titled “The truth about small wind turbines”, which concludes that one should avoid wind turbines and install PV panels (what else was to be expected from a PV manufacturer!). The report gives a lot of very clear data, which I will use in this blog.

Here is the formula to compute the yearly energy (in kWh) to be expected from a turbine with a rotor of a certain diameter at a constant wind speed:

E [kWh/year] = 2.09* (diameter**2)*(windspeed**3)   with diameter in [m] and windspeed in [m/s]. The ** means “to the power”.
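Put into code, the rule of thumb shows the brutal cubic dependence on wind speed (the list of speeds below is my own illustrative choice):

```python
# Solacity rule of thumb: E [kWh/y] = 2.09 * d^2 * v^3
def annual_yield_kwh(diameter_m: float, windspeed_ms: float) -> float:
    return 2.09 * diameter_m ** 2 * windspeed_ms ** 3

for v in (3.5, 5.0, 7.0):   # constant wind speeds, m/s
    print(f"{v} m/s -> {annual_yield_kwh(4.0, v):.0f} kWh/year")
# Doubling the wind speed multiplies the yield by 8 (= 2^3).
```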

With a diameter of 4 m and a constant wind speed of 3.5 m/s such a turbine would yield a meager 1434 kWh per year (value: approx. 260 Euro). These numbers are for horizontal-axis turbines, which do not look good on a roof and which, due to the usually turbulent air flow, may cause more severe structural problems than a vertical-axis turbine (VAWT, usually a variant of the Darrieus type). The following picture shows two VAWTs on a building in Queensland (link):

The problem is that the efficiency of this VAWT type has been found to be much lower than that of a corresponding horizontal-axis turbine, and material cost and fatigue are also higher (see here).

2. The problem with wind speed

All wind turbines have a minimum cut-in wind speed, below which they cannot produce any energy; it usually lies around 3 to 5 m/s (10 to 18 km/h). There is also a maximum cut-off speed (around 25 m/s) at which they must be stopped to avoid damage, but this situation is so rare in Luxembourg that we may ignore it when speaking of collectable energy.

Now, depending on your location, speeds above 3 m/s might not be exceptional: if your building stands in the high Alps or at the sea shore, the cut-in limit should not be a big problem. The real situation in continental Luxembourg is quite different. Let me give you the data measured at meteoLCD (Diekirch, about 218 m asl) for the last 5 years:

The “avg” column gives the yearly average of the 17520 half-hour measurements (each half-hour value is itself the average of 30 measurements made every minute). In no year does this average wind speed (in m/s) exceed the 3 m/s cut-in limit. The GT3 column gives the number of hours (out of the total of 8760) during which the wind speed was greater than 3 m/s, and so on for the following columns.

Clearly, wind energy could have been produced during only about 1/5 of the time. The GT4, GT5 etc. columns show that the number of “fertile” hours drops rapidly (approx. by half at each step) and becomes virtually zero for wind speeds greater than 10 m/s. One should not forget that the rated power is very often given for a wind speed of 11 to 15 m/s.

The European Wind Atlas gives much more optimistic numbers, which in my opinion are the result of modelling rather than of observations:

According to the “Sheltered terrain” column the Belval wind resource is about 60% lower than the situation at Diekirch (for a measuring height of 50 m above ground level). Keeping in mind that the meteoLCD anemometer is about 20 m above ground level, and that a possible rooftop wind turbine installed on the “Maison du Savoir” would sit at 83 m, my best guess is that the Belval numbers will not be vastly different from those at Diekirch, or at best not more than double. Using one of the available wind profiles for cities (link), one could speculate that at 83 m height the wind speed would be about 60% higher than at 20 m, which confirms my intuitive guess.
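That speculation can be made explicit with the standard power-law (“Hellmann”) wind profile; the exponent alpha = 0.34 for rough urban terrain is a textbook assumption of mine, not a measured Belval value:

```python
# Power-law wind profile: v(h2) / v(h1) = (h2 / h1) ** alpha
def speed_ratio(h2_m: float, h1_m: float, alpha: float = 0.34) -> float:
    return (h2_m / h1_m) ** alpha

r = speed_ratio(83.0, 20.0)   # turbine height vs. anemometer height
print(f"v(83 m) / v(20 m) = {r:.2f}")   # ~1.6, i.e. about 60% higher
# Since the energy yield goes with v^3, even this factor only
# multiplies the yield by about 1.6**3, i.e. roughly 4.
```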

In this article the (US) author suggests that a typical payback period for a 1 kW rooftop HAWT would be about 120 years, not counting repair and maintenance.

3. Conclusion

My reaction is rather “haha”. Do not expect any non-trivial energy output from rooftop turbines at the University of Luxembourg. The university rooftop wind turbine will be nothing more than an expensive gimmick, fooling people into believing they have found a miracle solution for providing “clean” energy.

The funding authorities should at least insist on good observational wind-speed data before paying for what seems to me more a publicity stunt than a scientific endeavor.

Global land warming: an airport fingerprint?

May 12, 2017


[Picture courtesy Wattsupwiththat: the Svalbard (Spitzbergen) runway with its weather station]

Prof. Fred Singer has an interesting article in the “American Thinker” where he suggests that the recent global warming reported by the IPCC and many other data sources may be a fake, compared to the “real” global warming seen during the earlier period 1910–1942.

Let me just give a very short comment concerning the global land temperatures, as given by NASA GISS; the numbers I use are from the smoothed-data column of the CSV file (link).

First look at this graph, which shows how dramatically the number of weather stations has fallen, with rural stations taking the biggest hit:


Before the plunge, the percentage of weather stations located at an airport was ~40%, increasing to ~75% in 2000.

Now we all know that an airport has large surfaces of dark tarmac and that the exhaust from aircraft turbines is very hot air; so even if the meteorological sensors are mounted inside a Stevenson screen, at the regular height over grass-covered soil, one should not be surprised that such an airport location shows warmer temperatures than a plain rural one.

Look here at the Findel airport in Luxembourg (courtesy Google Maps): notice how the exhaust of the parked blue-tailed plane blows in the direction of the weather station under westerly wind conditions (which are the most frequent in Luxembourg).


Let us also remember that at most airports the traffic increased dramatically between 1970 and 2000. All these factors could produce a fake warming on top of any existing “real” warming.

In the next figure I superimposed the graph of global land temperature anomalies from GISS on the previous plot:


I calculated the 5-year warming for two periods where the percentage of airport-based stations is more or less constant: from 1970 to 1975 (less than 40% airport stations) the warming is +0.04 °C; from 1995 to 2000 (about 75% airport stations) it is +0.20 °C. The corresponding increases in atmospheric CO2 were +5.43 ppmV and +8.73 ppmV (Mauna Loa data).

The table summarizes this situation:


So we have a land warming that is 5 times higher, while the percentage of airport-located stations has nearly doubled and the CO2 increase is less than double. Do you really think that the location has no influence at all on the observed land warming, and that the CO2 increase is the sole cause?
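The ratio argument can be written out explicitly, using only the numbers quoted above:

```python
# Ratio comparison between the two 5-year periods, with the
# values quoted in the text (GISS land anomalies, Mauna Loa CO2).
warming_1970_75 = 0.04   # degC over 5 years, <40% airport stations
warming_1995_00 = 0.20   # degC over 5 years, ~75% airport stations
co2_1970_75 = 5.43       # ppmV increase
co2_1995_00 = 8.73       # ppmV increase

warming_ratio = warming_1995_00 / warming_1970_75
co2_ratio = co2_1995_00 / co2_1970_75
print(f"warming ratio: {warming_ratio:.1f}, CO2 ratio: {co2_ratio:.2f}")
```

The warming ratio (5.0) is about three times the CO2 ratio (about 1.6), which is the disproportion the argument rests on.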



14 May 2017: added pictures of the weather stations at the Svalbard and Findel airports.

Wood burning causes more pollution than diesel trucks

April 13, 2017


In 2015 I wrote here a comment, “Wood and pellets: a “burning” fine particulate problem”, where the small-particle emissions of wood burning were compared to those of traditional fossil energy sources and found to be extremely high. As we now live in a time where Diesel-bashing has become the newest green fad, I was pleasantly surprised by an article from Michael Le Page in the New Scientist (4 Feb 2017) titled “Where there’s smoke” (access is paywalled!). He describes how log burning in London and other cities has become a major pollution problem, often emitting much more PM2.5 than Diesel trucks.

A Danish “eco-friendly” wood burner was found to emit 500,000 small particles per cm³ through the chimney, to be compared to the 1,000 particles/cm³ found at the tail-pipe of a modern Diesel truck: so one eco-family thinking it is saving the planet causes as much pollution as 500 Diesel trucks!

Look at this graph from the Danish Ecological Council showing the particulate count inside a Copenhagen wood burning house:

This “correctly installed stove” caused indoor pollution 4 times higher than that of the most polluted street!

As I wrote in my former article, log burning certainly is the big contributor, well exceeding a pellet stove. But where romance is concerned, it is difficult to deny the much higher emotional appeal of a burning log compared to some smoldering pellets.

What I find really scandalous is that people have been coaxed, by instilled “climate guilt”, into switching to wood burning, and now the nasty side of this eco-fad shows up. But as hip as Diesel-bashing may be, ignoring the pollution caused by wood burning is a real scandal. A scandal which feels like a perfect “déjà-vu”: a repeat of the politically motivated push towards the lower-CO2-emitting Diesel engines in the 90’s.

Must eco-politics always swing from naively pushing a “climate” solution to recognizing (after a long latency) that the nuisances of that solution may well be greater than the putative dangers it was meant to avoid?


PS1: here is a plot from the Australian Air Quality Group showing the contributions of different sources to PM2.5 pollution (June/July/August is the Australian winter):

The peak during July shows that woodsmoke contributes about 10 µg/m³, whereas transport and industry together contribute only about 1 µg/m³, i.e. ten times less! (see here)