Welcome to the meteoLCD blog

September 28, 2008

This blog started 28th September 2008 as a quick communication tool for exchanging information, comments, thoughts, admonitions, compliments etc. related to http://meteo.lcd.lu, the Global Warming Sceptic pages and environmental policy subjects.

Wood and pellets: a “burning” fine particulate problem.

September 26, 2015

The heating season is about to start here in Luxembourg. I heat my home with an oil-fired central heating; one of my neighbors burns only wood (in cords). The quantities of wood he uses are breathtaking, but he probably chose wood burning as being “climate friendly”. Indeed, the carbon dioxide released was gobbled up from the atmosphere by the tree during its 30 to 60 year life, so returning it to the air will, at least at first glance, be “carbon neutral”. On second thought, the problem is more complicated: his CO2 release is a spike that would not have occurred if the wood had been left to rot (that release would have been spread over many years), so at least over a short period there is not much gain in switching, say, from gas to wood.

There has been much talk in recent years about the dangers of fine particles (the smaller-than-2.5-micron PM2.5), be they released by Diesel engines or other energy sources. In Europe all new Diesels have particulate filters, which should solve this problem. Burning wood is a very big PM2.5 emitter, and I will discuss this in the next sections.

1. CO2 emissions

This figure shows that the CO2 emissions per kWh of energy from burning wood are about double those from natural gas. So if a state were to install a CO2 emission measuring system (using perhaps a satellite like OCO-2), wood burners would be in a delicate position.

If we look at the composition of the exhaust, we have this:


There is a 1% by weight emission of NOx, which is about the same as for gas or oil; there are non-negligible emissions of VOCs (volatile organic compounds) and particles. Clearly the exhaust from a wood stove is very different from clean air!

2. The PM problem

Fine particles are the crux of burning wood:

This figure from the www.treehugger.com website shows the tremendous difference between an uncertified wood-stove and a usual gas furnace. Things become much better if you use a pellet system, but emissions nevertheless remain 162 times higher than with gas (the uncertified wood-stove emits 1464 times more than gas!).

The discussion on particulate emissions very often also puts the blame on agriculture. But the picture is quite different, as agriculture does not emit the same percentage of very small particles (the PM2.5), which are thought to be the most dangerous, being able to penetrate into the lungs, the heart and even the brain. The next figure compares the two emission sources:


3. Hourly emissions

The next figure shows the emissions of particles in g/h from burning oak (the 3rd most frequent wood in Luxembourg): note that the emissions of the larger PM10 are about the same as those of the dangerous PM2.5; fire logs are possibly wax-wood mixtures, and so have about 4 times lower emissions.


These numbers apply to a burning rate of about 3 kg of dry wood per hour (see here).

4. Conclusion

If you burn wood (cords or pellets), your environmental impact may not be what you intend: your immediate CO2 emissions are comparable to those of other fossil fuels, and your particulate pollution is much, much worse! That is the reason why, for instance, the city of Paris forbids burning wood in open fires. Switching to gas (or nuclear powered electrical heating!) would be more environmentally friendly.

Electricity generation: very different capacity factors!

September 21, 2015

The US Energy Information Administration (EIA) has an interesting post on the huge differences in electricity generation efficiency, or more precisely in capacity factors, between countries and generation types.

1. Definition of the capacity factor and the “Volllaststunden”.

Let me recall that the capacity factor is simply the yearly energy produced divided by the hypothetical maximum which would have been produced if the generator had run for 8760 hours at its nameplate capacity. An example: suppose a wind turbine has a nameplate capacity of 2.5 MW; if it delivered this power during the whole year (which is clearly impossible!), it would produce an energy of 2.5*8760 = 21900 MWh. Now suppose its real production has been only 4380 MWh. The capacity factor is then:

CF = 4380/21900 = 0.20, which is often given as a percentage by multiplying by 100, i.e. here CF% = 20%.

In Germany one mostly uses the term “Volllaststunden” (yes, it is written with three consecutive letters l). In our example the VS would be equal to VS = (CF%)*8760/100, i.e. 1752 hours.
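As a minimal numerical sketch of these two definitions (using the hypothetical 2.5 MW turbine from the example above):

```python
# Capacity factor and "Volllaststunden" for the hypothetical 2.5 MW turbine above.
HOURS_PER_YEAR = 8760

nameplate_mw = 2.5          # nameplate capacity in MW (example value)
produced_mwh = 4380.0       # actual yearly production in MWh (example value)

max_mwh = nameplate_mw * HOURS_PER_YEAR        # 21900 MWh if run flat out all year
cf = produced_mwh / max_mwh                    # capacity factor = 0.20
volllaststunden = cf * HOURS_PER_YEAR          # equivalent full-load hours = 1752 h

print(f"CF = {cf:.2f} ({cf*100:.0f}%), Volllaststunden = {volllaststunden:.0f} h")
```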

2. Capacity factors vary with type of electricity generation and country

The EIA report has several interesting statistics giving the capacity factors of different countries and regions for the period 2008 to 2012. I modified the first table by discarding 4 countries or regions: Russia, because of its quasi-nonexistent wind/solar production; Japan, because of the shutdown of all its nuclear reactors after the Fukushima accident; the Middle East, because it has only negligible nuclear electricity production; and Australia/New Zealand for the same reason. That leaves 12 countries or regions with the following statistics:


The vertical red lines give the average capacity factors of the different types of production: nuclear is the absolute champion with 79.8%, fossil and hydro are close at 45.9% and 41.9%, and solar/wind come out very low at 21.9%. If we call the last two categories “renewables”, hydro is clearly the only one delivering acceptable capacity factors. Now let's separate the last category into solar and wind. This time one can keep 13 countries or regions, omitting only Brazil, Central/South America and Russia for not having (or not having communicated) any solar production.


Solar comes out at an abysmally low CF of 11.9%, whereas wind practically doubles that with CF = 23.6%.

Our first conclusion comes as no surprise: nuclear really shines when it comes to availability and stability; both solar and wind cannot deliver (at least for the moment) a reliable electricity production!

3. Why the big differences?

In most countries, solar and wind have absolute priority to deliver their electricity into the grid, penalizing the non-renewables, which must be turned down to adapt production to demand. If that political decision did not exist and free market rules applied, solar and wind would have still lower capacity factors. Regarding hydro, one clearly sees that countries like Canada and Brazil have a clear advantage in having enormous hydro potential at their disposal, which may however have reached its peak for many reasons. OECD hydro electricity production is practically at its maximum, so the CF of about 40% will be impossible to increase in the future.

Fossil producers suffer the most from the prioritizing of solar and wind: nuclear facilities are difficult to ramp down or up rapidly, but gas turbines (and even some of the latest coal power stations) can do this quite easily, and so are often used to deliver peak load. Often a certain type of generation is put on hold for commercial reasons, so the capacity factors must be taken with a grain of salt: they reflect not only technical deficiencies or, for instance, a lower wind resource, but also ramp down/up decisions taken at the big electricity exchanges (such as the EEX in Leipzig) for monetary reasons.

Now the 100 billion dollar question: if you want carbon free electricity, which type of generation would you choose?

Cosmic Theories, Greenhouse Gases, Global Warming

August 27, 2015

Antero Ollila from Aalto University (Finland) has published in the Journal of Earth Sciences and Geotechnical Engineering a very interesting paper titled “Cosmic Theories and Greenhouse Gases as Explanations of Global Warming” (link to PDF). His study concludes that “the greenhouse gases cannot explain the up’s and down’s of the Earth’s temperature trend since 1750 and the temperature pause since 1998”. I will comment briefly on this rather easy-to-read paper, which, alas, would have benefited from more thorough proof-reading, as there are quite a few spelling errors and/or typos.

1. IPCC and competing theories.

The IPCC concludes in its ARs that practically all observed warming since the start of the industrial age comes from human emissions of greenhouse gases; the cause of GW (global warming) clearly lies inside the Earth/atmosphere system. Competing theories see (possibly exclusively) outside causes at work: solar irradiance, galactic cosmic rays (GCR), space dust, planetary positions… As the temperatures calculated by the IPCC climate models (or rather, the mean of numerous GCMs) now deviate markedly from observations, Ollila writes that the “dependence of the surface temperature solely on the GH gas concentration is not any more justified”.

In this figure (fig. 1 of the paper) the blue dots represent the temperature anomaly calculated using the IPCC climate sensitivity parameter, and the blue line the CO2-induced warming postulated by the Myhre et al. paper. The red wiggly curve shows the observed temperatures (temperature anomalies): the huge difference with the IPCC dot in 2010 is eye-watering!

2. The outside, cosmic models.

Ollila studies 4 cosmic models (which he blends into 3 combinations): variations of TSI and the solar magnetic field, GCR, space dust, and astronomical harmonics as proposed by Nicola Scafetta. What many of these causes have in common is that they could influence cloud coverage: the variation of cloud cover is the elephant in the room! A one percent variation in cloud cover is assumed to cause a 0.1°C temperature change. Satellites show that cloud coverage has varied by up to 6% since 1983, which would explain a 0.6°C warming.

Combining space dust, solar variations and greenhouse gases together, he finds the following figure, extending to 2050 (fig.8 of the paper):

Here the red dot shows the average warming in 2010 given by the mean of 102 IPCC climate models; the black curve represents Ollila’s calculation. This figure shows, as many other authors predict, a (slight) cooling up to 2020, and then a 30-year period of practically no warming.

In another run, Ollila left out the putative influence of the increasing GH gas concentration. His justification is a set of well-known papers by Dr. Ferenc Miskolczi, a former NASA physicist, in which this author proposes the theory that the impact of an increase in anthropogenic greenhouse gases will be cancelled out by a drying of the atmosphere (i.e. a decrease of the absolute water vapour content). Miskolczi is able to reconstruct past temperature variations beautifully, so this “outlandish” theory about a saturated greenhouse effect should not simply be discarded or ignored (read comments here and here).

This gives the following figure (fig.9 in the paper), with the black curve corresponding to the output of the calculations including only the SDI (star dust index) and TSI (total solar irradiance).


Now look at this: Ollila’s prediction of a coming longer-lasting cooling period is nearly identical to the predictions based on the current (and next) very weak solar cycles!!!

3. The crucial role of water vapour

This whole paper stresses again and again the importance of getting the water vapour content of a future climate right: the IPCC still assumes a constant relative humidity, i.e. an increasing water content with rising temperatures, and as a consequence a positive feedback on the CO2-induced warming. Observations show that this has not been the case: the total water content of the atmosphere has not increased, as shown in this graph from http://www.climate4you.com (upper blue curve):


4. Conclusion

This is a paper I urge you to read. It clearly shows that climate science is far from settled, and that the naive, drastic and painful climate policies proposed by EOL (end-of-life) presidents or advocacy groups may well be targeting a parameter (CO2) which has only a minor influence: this means much pain for very little or no gain!

Your smartphone is radioactive!

August 17, 2015

Nuclear energy and all things related to radioactivity have a bad press in Europe nowadays; few people remember their high-school physics with experiments on radioactive decay, and hopefully some information on the ubiquitous radioactive radiation that has been a part of nature since the beginning of our planet. Decades of scare stories, semi-truths and abysmal lies have fostered a generation in Germany that thinks nuclear, emission-free energy is outdated, and that radioactivity, wherever it exists, must be avoided like hell (or forbidden by the government :-))

It may come as a surprise that your humble smartphone that you use so frequently is a radioactive gadget. I learned this after reading an excellent article by David Jones at the website Brave New Climate.

1. The ITO touchscreen

All smartphones and tablets use touchscreens, which are one of the main reasons they are so easy to use.

This picture (adapted from www.ti.com/lit/ah/slyt513/slyt513.pdf) shows that the principal elements are the two ITO (indium tin oxide) sheets: these are transparent foils covered with a very thin layer of indium and tin oxide. Indium is element number 49, and the isotope relevant here is the most abundant one, In115. This isotope is a beta- emitter (it emits electrons from its nucleus and converts to Sn115, which is tin).

The energy of these electrons is rather small (495 keV), and the half-life of the indium is huge: 4.41*10^14 years! Indium is the 65th most frequent metal in the Earth's crust, where it is found at a very small concentration of about 160 ppb (parts per billion). The minable world reserve is estimated at 6000 tonnes. With steadily increasing use in electronic devices and wind turbines, it may become a bottleneck for further development.
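As a back-of-envelope illustration of what such a half-life means (the physical constants below are my own textbook values, not figures from the cited article), the decay law A = N·ln(2)/T½ gives the specific activity of natural indium:

```python
import math

# Back-of-envelope: decay constant and specific activity of natural indium.
# (Illustrative only; it says nothing about the tiny indium mass in an actual screen.)
HALF_LIFE_Y = 4.41e14            # half-life of In-115 in years
SEC_PER_YEAR = 3.156e7
ABUNDANCE_IN115 = 0.957          # isotopic abundance of In-115 in natural indium
MOLAR_MASS_IN = 114.8            # g/mol
AVOGADRO = 6.022e23

lam = math.log(2) / (HALF_LIFE_Y * SEC_PER_YEAR)      # decay constant in 1/s
atoms_per_gram = ABUNDANCE_IN115 * AVOGADRO / MOLAR_MASS_IN
activity_bq_per_g = lam * atoms_per_gram              # roughly 0.25 Bq per gram of indium

print(f"lambda = {lam:.2e} 1/s, specific activity = {activity_bq_per_g:.2f} Bq/g")
```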

2. Measuring the radioactivity of an Iphone 4

Can the radioactivity of the Iphone touchscreen be detected? To answer this question, I set up a quick experiment using a semi-professional Geiger counter, the INSPECTOR from S.E. International. This instrument has a very large pancake Geiger tube of about 48 mm diameter, which makes it very sensitive even to small radioactivity levels. The picture shows the back side of the INSPECTOR, with the wire mesh protecting the counter tube.


The experiment was done in two steps, each taking 10 minutes: first I ran the counter positioned to the left of the Iphone (which was switched on during the whole experiment), and noted the minimum and maximum of the readings (there is about one reading every 2 seconds). Here is a picture, the counter showing a reading of 0.161 uSv/h, close to the maximum.


Secondly, I put the counter on top of the Iphone, so that the pancake Geiger tube covered the screen. The next picture again shows a reading close to the maximum of that part of the experiment.


Here the results (all in uSv/h):

Background:        minimum = 0.065   maximum = 0.167

On top of screen:  minimum = 0.161   maximum = 0.275

We note that the second range begins practically where the first one stops: the minimum radiation on the screen equals the maximum of the background, and the maximum on the screen exceeds the background maximum by 65%. If we use the mid-points between minima and maxima (0.116 and 0.218 uSv/h) as relevant indicators, we see that the touchscreen increases the ambient radioactivity level by 88%!
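The percentages above follow directly from the four readings; here is a minimal check:

```python
# Readings in uSv/h from the two 10-minute runs described above.
bg_min, bg_max = 0.065, 0.167          # background, counter to the left of the phone
scr_min, scr_max = 0.161, 0.275        # counter on top of the touchscreen

bg_mid = (bg_min + bg_max) / 2         # 0.116 uSv/h
scr_mid = (scr_min + scr_max) / 2      # 0.218 uSv/h

excess_max = (scr_max / bg_max - 1) * 100    # ~65 % above the background maximum
excess_mid = (scr_mid / bg_mid - 1) * 100    # ~88 % above the background mid-point

print(f"mid-points: {bg_mid:.3f} vs {scr_mid:.3f} uSv/h")
print(f"max excess: {excess_max:.0f} %, mid-point excess: {excess_mid:.0f} %")
```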

This means that the tip of your finger is exposed to about twice the normal background radiation in Luxembourg.

Should you be afraid? Yes, if you have been brainwashed to believe that all radioactivity is dangerous! No, if you remember your physics teacher and have kept a modicum of common sense!

PS1: When the Geiger counter is put on the backside of the Iphone, readings are similar to the background: the beta radiation does not cross the phone’s case. You may want to put your phone in the shirt pocket with the screen facing out :-))

PS2: There are alternatives to indium under development in the laboratories: graphene, carbon nanotubes etc. are potential candidates. They will most certainly be used in the future, when demand makes indium (now at ~800 US$/kg) too expensive. So, when will Apple launch, with great fanfare, the non-radioactive Iphone model?

New observational hint at max. 1.3 °C climate sensitivity

July 21, 2015

In a previous blog “CO2 and temperatures: da stelle mer uns janz dumm” I presented a “zero-dollar” model using past global temperature and CO2 data to estimate the climate sensitivity from the 1850-1945 and 1945-2013 periods, and found the effective climate sensitivity (which should be close to the equilibrium climate sensitivity ECS) to be about 1.34°C. This means that a doubling of the atmospheric CO2 mixing ratio with respect to the pre-industrial level would cause a global warming of at most 1.34°C. This number is much lower than the “consensus” values of the IPCC, which suggest a most probable warming of 3°C (1.5 to 4.5°C range). Many authors disagree with the IPCC estimation. Lewis and Curry, for instance, find in their recent paper “The implications for climate sensitivity of AR5 forcing and heat uptake estimates” values of 1.33°C for the transient climate response TCR and 1.25 to 2.45°C (i.e. a central value of 1.85°C) for the equilibrium climate sensitivity ECS. Let me just recall that the lower TCR gives the warming due to a CO2 doubling at the moment where this doubling occurs (assuming the doubling takes 70 years), whereas the ECS gives the definitive warming, lying far in the future, once all feedbacks and readjustments are finished.

Dr. Roy Spencer from UAH presented in his blog yesterday a new calculation using the 15 years of data from the CERES instruments carried by successive NASA satellites. CERES measures outgoing and incoming radiation fluxes (in W/m2), and is the best (and practically the only) source for these extremely important data. Dr. Spencer found in several previous papers that when the globe changes its surface temperature, the atmosphere reacts with a delay of about 4 months in its changes of radiative flux. So he took the available 15 years of CERES data, computed yearly means and plotted them versus the 4-month time-shifted global temperatures of the HadCRUT4 series, with a linear regression of flux(t) = a*T(t-4) + b (t = time in months). This gives the following figure:

This linear regression tells us that dF = 2.85*dT: a change of global temperature of 1°C (or 1 K) corresponds to a forcing of 2.85 W/m2 (and vice-versa). The parameter 2.85 represents the climate feedback parameter lambda. Now the effective climate sensitivity ECS is defined as ECS = F2xCO2/lambda, where F2xCO2 is the radiative forcing caused by a doubling of atmospheric CO2 and lambda is the feedback parameter. Let us accept that F2xCO2 = 5.35*ln(2) = 3.71 W/m2; the number 5.35 is the “consensus” value, which remains subject to discussion, but is more or less accepted by both climate alarmists and realists. So we have:

ECS = 3.71/2.85 = 1.3°C

When the CO2 mixing ratio reaches 560 ppmV (i.e. double the concentration of pre-industrial times, about 1850), we should have a total warming of at most 1.3°C. As the globe has warmed by about 0.8°C since that time, there would be at most 0.5°C of warming still in the pipeline.

Conclusion: All these ECS values derived from observations (and not from climate models!) are rather low. Dr. Spencer says that the 1.3°C should be taken as a maximum, and that the real ECS could possibly be much lower (Prof. Lindzen suggested 0.7 to 1°C). Should we worry? No! And should we desperately try to avoid any CO2 emissions? Neither!
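PS: the arithmetic of this post can be condensed into a few lines (a sketch of the numbers quoted above, not of Dr. Spencer's actual regression):

```python
import math

# Effective climate sensitivity from the CERES-derived feedback parameter.
lambda_feedback = 2.85                 # W/m2 per K, slope of the flux-vs-temperature regression
f_2xco2 = 5.35 * math.log(2)           # radiative forcing of a CO2 doubling, ~3.71 W/m2

ecs = f_2xco2 / lambda_feedback        # ~1.3 K per CO2 doubling
warming_so_far = 0.8                   # observed warming since ~1850, in K
remaining = ecs - warming_so_far       # ~0.5 K still "in the pipeline" at 560 ppmV

print(f"F_2xCO2 = {f_2xco2:.2f} W/m2, ECS = {ecs:.2f} K, remaining = {remaining:.2f} K")
```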

Big geothermal heat flux may aid West Antarctic Ice base melting

July 15, 2015

There is a very interesting new paper by A. Fisher et al. from UC Santa Cruz on the problem of the geothermal heat flux at the base of the West Antarctic ice sheet. Fisher and his collaborators drilled a 25 cm diameter hole through the ice sheet down to the underlying Lake Whillans. The red square shows the location of the drilling.


They lowered a heavy 200 kg probe with thermistor sensors through the borehole and the lake water into the mud at the bottom of the lake. The probe partially entered the mud, so that one thermistor (TS1) was located about 0.8 m deep in the mud, and the other (TBW) just at the interface between the mud and the lake water. The next table shows the data for two measurements (GT-1 and GT-2):

The important quantity is the heat flux q, which is about 280 mW/m2. One part of this heat flux goes up through the ice sheet, and another part (180 mW/m2) essentially causes the ice at the base to melt. This number of 180 seems low, but it corresponds to a melt of approx. 10% of the ice created by snowfall!
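As a rough back-of-envelope check of what such a flux means (ice density and latent heat are assumed textbook values, not numbers taken from the paper):

```python
# Rough estimate: how much basal ice can a heat flux of 180 mW/m2 melt per year?
SEC_PER_YEAR = 3.156e7
q = 0.180                 # heat flux going into melting, W/m2 (i.e. 180 mW/m2)
rho_ice = 917.0           # kg/m3 (assumed)
latent_heat = 3.34e5      # J/kg, latent heat of fusion of ice (assumed)

melt_m_per_s = q / (rho_ice * latent_heat)
melt_mm_per_year = melt_m_per_s * SEC_PER_YEAR * 1000.0

print(f"basal melt = {melt_mm_per_year:.0f} mm of ice per year")   # ~19 mm/yr
```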

The large measured heat flow comes as a surprise, for the usually accepted values for Antarctica (which were derived from various models) are closer to 50 mW/m2.


So we have here again a nice argument not to neglect measurements, and not to rely exclusively on theoretical modelling. This new paper shows a natural phenomenon contributing to the WAIS (West Antarctic Ice Sheet) melt, and not the usual suspected culprit of (anthropogenic) global warming. It remains to be seen whether measurements at other locations deliver results pointing in the same direction.

This paper follows one by Amanda Lough showing that large volcanoes exist below the WAIS, and that this volcanic activity may also be a contributor to increasing ice melt.

To conclude, here is a figure from Wikipedia showing the heat fluxes over the full globe: note that this flux is highest at the ocean ridges, as should be expected!


The total heat power streaming from the interior to the surface of the earth is estimated to be about 46 TW; this has to be compared to 17 TW power released by human activity (see the meteoLCD energy widgets).

Heat stress: when warm is too hot!

June 30, 2015

As a heat wave is spreading across Western Europe (and Luxembourg), I would like to give some comments on the heat stress that has been measured at meteoLCD since May 2001 (see the paper by Charles Bonert and Francis Massen here). We are still the one and only station showing this important parameter live on the web.

1. Some technical details on the apparatus.

The ISO standard defines how the heat stress is measured:

(figure from here)

The globe temperature is the highest under usual sunshine conditions, the dry air thermometer gives an intermediate temperature, and the wet bulb thermometer shows distinctly lower values. The next figure gives the situation today (30 June 2015, 14:00 UTC) at Diekirch:


If we take the last (rightmost) values, we have GlobeT = 40°C, DryT = 30°C and WetT = 22°C. Using the formula given in the figure we get WBGT = 0.7*22 + 0.2*40 + 0.1*30 = 26.4°C, shown by the last red point.
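The outdoor WBGT formula used above is easy to script; a minimal sketch with today's readings:

```python
def wbgt_outdoor(t_wet: float, t_globe: float, t_dry: float) -> float:
    """Outdoor wet-bulb globe temperature: 0.7*Twb + 0.2*Tglobe + 0.1*Tdry (all in deg C)."""
    return 0.7 * t_wet + 0.2 * t_globe + 0.1 * t_dry

# Readings at Diekirch, 30 June 2015, 14:00 UTC
print(wbgt_outdoor(t_wet=22.0, t_globe=40.0, t_dry=30.0))   # 26.4 °C
```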

The differences between the three thermometer readings are enormous, and the very low wet bulb temperature shows how efficient evaporation is for cooling. The next picture shows the wet bulb device as used at meteoLCD: it is essentially a Pt100 sensor covered by a cotton wick whose other end plunges into a pot of distilled water. Twice a day this water reservoir is refilled by a peristaltic pump. We use distilled water to avoid hardening of the wick by the dissolved lime which is abundant in tap water. A wire mesh (not shown here) is needed to keep thirsty or curious birds from picking at or stealing the wick (yes, this has happened several times).


2. When is warm too hot?

Normally the body temperature (measured in the rectum) should not exceed 37°C, with 38°C set as the upper limit. Too hot is a situation where the WBGT pushes the internal body temperature above this limit. Depending on physical activity (and clothing), this limit is reached sooner or later. A heavy worker, or a soldier making strong physical efforts (in heavy clothing), will reach this dangerous situation much earlier than a tourist resting on the sea-shore. The metabolic rate gives, in watts, the heating power produced by the working body. For a body at rest it is about 65 W; for hard work it can exceed 300 W. So compared to the man at rest, a worker will reach the WBGT limit much sooner, as shown by the following figure (same ref. as above):


A distinction is also made between a person acclimatized to the warm situation and one who is not.

3. An example

Now let's take a person riding a bicycle at about 38 km/h. The corresponding MET (metabolic equivalent) is, according to here, about 5, which corresponds to 5*65 = 325 W. Using the above diagram for the unacclimatized person (remember: this is the first day of a heat wave!) we see that the heat stress limit is about 24°C: so a cyclist starting at 14:00 UTC (16:00 local time) exceeds the limit (as the WBGT is 26.4°C) and puts himself at risk of heat stroke.
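A small sketch of this reasoning (the MET value comes from the linked table, and the 24°C threshold is simply read off the figure above):

```python
# Sketch of the cyclist example: metabolic heat vs. the WBGT limit read off the figure above.
MET_CYCLING = 5            # metabolic equivalent for cycling at ~38 km/h (from the linked table)
BASE_RATE_W = 65           # metabolic rate of a body at rest, in W

metabolic_power = MET_CYCLING * BASE_RATE_W     # ~325 W
wbgt_now = 26.4                                 # measured WBGT at 14:00 UTC, in deg C
wbgt_limit_unacclimatized = 24.0                # approximate limit for this workload, read off the chart

if wbgt_now > wbgt_limit_unacclimatized:
    print(f"{metabolic_power} W workload: WBGT {wbgt_now} > {wbgt_limit_unacclimatized} -> heat-stroke risk")
```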

May I suggest that he do his cycling much earlier, for instance starting at 06:00 UTC (08:00 local time) and finishing by 10:00 local time.

The fact is that more people die from cold than from heat, but heat can nevertheless be an insidious danger. Usually, normal common sense is all that is needed to remain safe, and maybe this blog comment will be of some help!

The solar influence on climate

June 28, 2015

1. The “Consensus Science” ignores or belittles the solar influence.

A furious debate has been going on for many, many years about the influence of solar variations on global and regional climate. We all know that solar irradiance varies periodically over 11 years, its magnetic field over 22 years, and that many other periodic variations can be found (for instance the Suess cycle of 211 years, the millennium cycle explaining the Minoan, Roman and Medieval warm periods, and the very long Milankovitch cycles which set the rhythm of the great ice ages).
The total irradiance (the power sent by the sun to the earth) varies little over one cycle, about 1 W/m2 (at the top of the atmosphere, for a surface perpendicular to the rays) from maximum to minimum. This must be compared to the mean irradiance of ~1366 W/m2. The “consensus” climatologists and the IPCC insist that this is too little to explain, for instance, the 0.8°C warming observed during the last 100 years, and is completely swamped by the radiative forcing of our greenhouse gas emissions (estimated at ~2 W/m2 for CO2 alone when the year 1750 is taken as zero). What this consensus science ignores is that the UV irradiance varies much more during a solar cycle, and has many indirect and possibly amplified consequences through ozone heating and the influence on the great oceanic oscillations (see here).
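For comparison, here is a back-of-envelope estimate of the globally averaged forcing corresponding to that 1 W/m2 TSI variation (the geometry factor and albedo are standard textbook values, not taken from a specific source):

```python
# Back-of-envelope: globally averaged forcing of a 1 W/m2 change in total solar irradiance.
delta_tsi = 1.0        # W/m2, peak-to-trough TSI variation over a solar cycle (top of atmosphere)
albedo = 0.30          # planetary albedo (standard textbook value)

# The sphere intercepts pi*R^2 of sunlight but has a surface of 4*pi*R^2, hence the factor 4.
delta_forcing = delta_tsi / 4.0 * (1.0 - albedo)    # ~0.175 W/m2

print(f"global-mean solar forcing = {delta_forcing:.3f} W/m2 (vs ~2 W/m2 for CO2 since 1750)")
```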

Also ignored is what we know from history: periods of low solar activity (such as the Maunder minimum during the second half of the 17th century, or the Dalton minimum around 1820) were much colder (by 0.2 to 0.4°C) and are described by historians as periods of famine and social unrest due to poor agricultural productivity.

2. The new Ineson et al. paper

Sarah Ineson et al. published on 23 June 2015 an interesting paper titled “Regional climate impacts of a possible future grand solar minimum” (Nature Communications, open access). This paper is interesting not for its usage of climate models (we all know how unreliable these can be), but for the steps made in acknowledging what climate realists have been saying for many years:

a. the current decline in solar activity is faster than any other such decline in the past 9300 years

b. this decline may lead to Maunder Minimum-like conditions with a probability of up to 20%

c. recent satellite data show that the variability of UV irradiance could be considerably larger than previous estimates suggested

Their modeling exercise (EXPT-B) suggests a regional winter-time cooling for Northern Europe and the eastern USA of about 0.4°C.


This figure of the annual mean temperatures shows the solar-induced cooling that touches nearly every part of the globe (under the hypothesis that the ongoing quiet solar situation will have the same effects as it had during the Maunder minimum).

This cooling could begin around 2030 and continue up to 2100! If you think of the coming Paris COP in December 2015, where all countries will be coaxed into binding warming-lowering policies, this new paper should make ripples in the naive enthusiasm of the anti-warming advocates.

Could it be that our rising CO2 emissions, besides their positive influence on plant productivity and planetary greening, will be our best insurance against a fall back into a Little Ice Age?


Read also these contributions in Climate Dialogue about what will happen during a new Maunder Minimum.


June 23, 2015

A couple of weeks ago I sent the Luxemburger Wort a letter with some comments on the introduction of smart meters (intelligent electricity counters), which will shortly begin here in Luxembourg. The client has no choice but to accept!
This letter was published in abridged form last Saturday (20 June 2015) in the Luxemburger Wort on page 24. Here is the full text I wrote on this subject (originally in German, translated below):


Smartmeter: naïve Begeisterung? (Smart meters: naive enthusiasm?)

Francis Massen

The forced introduction of the intelligent electricity meter (smart meter) is being praised in the media in the highest tones and without any criticism. Of course it is useful and convenient if the electricity supplier can read the meter remotely, and if a digital display gives a more precise indication of the instantaneous consumption than the rotating disc of the old “Ferraris” meter can. However, several serious consequences of this new meter are almost systematically passed over in silence:

  1. The smart meter (also known in France as the “compteur mouchard”, the snitch meter) delivers a gapless picture of our living habits, since polling the meter almost every minute makes it possible to establish an extremely detailed temporal profile of electricity consumption.
  2. The traditional, simple and easily understandable tariff scheme will most probably very soon be replaced by strongly fluctuating price periods (possibly at 10-minute intervals), which are meant to force the customer into a consumption pattern that corresponds not to his wishes but to those of the supplier.
  3. The smart meter is the indispensable gateway to the “intelligent grid” with its DSM (Demand Side Management, i.e. consumption control). Gone are the times when the maximum power fixed at the meter cabinet was available at any moment without ifs or buts: now household appliances are to be switchable over the network whenever the provider wishes (the pill being sweetened with a cheaper tariff), and/or the available power will be temporarily throttled.

The extravagant energy savings which are often the main motive for smart meters have turned out to be a fallacy in the countries where these devices have been in use for some time or introduced as a trial: a 2011 study by the Fraunhofer Institute on German and Austrian households shows that the electricity saving amounts on average to only 3.7%.

These “guilt meters”, as Prof. Woudhuyzen called them in an article in the online magazine WIRED, can of course be hacked (as demonstrated by the Chaos Computer Club) and allow an intrusion into the private sphere which a report by Ryerson University (Canada) formulates as follows: “Smart appliances offer utilities the opportunity to control areas of life that Courts have considered to be private and intimate”.

Since the big consulting firms now have open access to all our levels of decision-making, I want to close with a study by Ernst & Young. Their 2013 report “Kosten-Nutzen-Analyse für einen flächendeckenden Einsatz intelligenter Zähler” (cost-benefit analysis of a nationwide rollout of intelligent meters) states quite clearly that “the rollout quota of 80% by 2022 targeted by the EU via a general installation obligation … is economically unreasonable for the majority of customer groups”.

PS: A more detailed discussion (with all references) was published by the author in APESS Récré No. 28 (2014) under the title “Compteurs et réseaux intelligents, clients impuissants” (smart meters and grids, powerless customers).


If you have a comment on this subject, please feel free to give it here! If you wish a copy of the APESS text, please ask for it.

Is there an upper limit for wind and solar grid penetration?

June 7, 2015


1. Germany’s negative electricity prices.

We often read in the press that German wind and solar electricity exceeds demand and must be exported at very low or even negative prices. Agora Energiewende wrote in June 2014 that negative prices are becoming more frequent, as the conventional fossil and nuclear producers cannot reduce their minimum production (the “spinning reserve”, often also called the “must-run” minimum production) below 20 to 25 GW. If all things remain as they are, Agora predicts more than 1000 hours per year of negative electricity prices for 2022! But they also say that up until now, renewables have never been able to produce more than 65% of demand, even during peak production periods. The excess production leading to negative prices is ultimately caused by the inability of intermittent renewables to guarantee the needed electrical power at every moment of the year. Conclusion: wind and solar (the main renewables) are unsteady (and often unpredictable) producers which absolutely need a spinning reserve to ensure year-round electricity availability.

2. Is there an upper limit for renewable producers in a stable electrical grid?

This question has been researched by Jesse Jenkins (MIT) and Alex Trembath (from the Breakthrough Institute). They arrived at a very easy-to-memorize rule of thumb: “When wind and solar work without subsidies (as do other producers), the maximum amount of their part in the total power production of the grid is equal to their capacity factor”.

They give the following graph from another publication, showing the decline in the value factor of wind and solar electricity with increasing market share in Germany (boxes added by me):

The value factor = (market price obtained by wind & solar generation)/(average market price). The negative trend for solar electricity is especially sobering.

Conclusion: Economics, and not so much technology, imposes an upper limit on the integration of wind & solar electricity.

Let me give two examples:

A. Rheinland-Pfalz (the German Land bordering on Luxembourg), 2013 (see link):

Production from wind and solar:  3041916 MWh and 1369808 MWh

Capacity factors: 0.151 for wind and 0.092 for solar

Fraction of the total power produced: 0.203 for wind and 0.091 for solar

Conclusion: Wind production exceeds the limit given by the rule of thumb, and solar power has reached its maximum.

B. Luxembourg, 2013 (see link1 and link2):

Production from wind and solar: 83027 MWh and 73738 MWh

Capacity factors: 0.162 for wind and 0.089 for solar

Fraction of total power produced: 0.029 for wind and 0.026 for solar

Conclusion: Wind and solar productions are both well below the rule of thumb limit.
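A tiny script condensing the rule-of-thumb check for both examples (using only the shares and capacity factors quoted above):

```python
# Rule-of-thumb check: compare each source's share of total generation with its capacity factor.
examples = {
    # name: (share of total generation, capacity factor), 2013 figures quoted above
    "Rheinland-Pfalz wind":  (0.203, 0.151),
    "Rheinland-Pfalz solar": (0.091, 0.092),
    "Luxembourg wind":       (0.029, 0.162),
    "Luxembourg solar":      (0.026, 0.089),
}

for name, (share, cf) in examples.items():
    verdict = "exceeds" if share > cf else "is still below"
    print(f"{name}: share {share:.1%} {verdict} the capacity-factor limit of {cf:.1%}")
```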

3. Two other papers/comments on this problem

Proteos writes in his French blog “Les subventions à l’éolienne et au solaire sont parties pour durer” (subsidies for wind and solar are here to stay):

He writes that wind and solar production has a marginal cost of zero (as the wind and the sun are free), so on average it obtains a price which is lower than the average market price, and with increased renewable penetration this price keeps falling (the picture shows the German situation):

Without subsidies, these renewable producers will go out of business, as will the conventional producers which deliver the base load and the spinning reserve. In the end, all power producers must be subsidized in order to keep a working electrical infrastructure and power supply.

JM. Korhonen writes in his blog that about 20% of wind and 74% of solar production are worthless, and that under a free market the renewables revolution will stop dead in its tracks once peak production reaches demand. He is also sceptical of the much-touted “demand side management” (to be introduced with the smart meters), which will not prevent PV from hitting a wall.

4. A limit imposed by material requirements.

Korhonen cites a 2013 paper by Vidal et al. published in Nature Geoscience with estimates of the extraordinarily huge amounts of concrete, aluminium, copper and glass needed by wind and solar if a world production of 25000 TWh were to be reached in 2050. From 2035 on, these requirements would outstrip the total world production of the year 2010. The next figure is self-explanatory:


Note that despite all the hullabaloo about uranium mining, nuclear power comes out best!

5. Overall conclusion

Let me give this in the words of Korhonen:

“In conclusion, we may very well have too much of a good thing. And this is something that bears remembering the next time someone tells you that renewable overproduction is not a problem, or that renewables are reducing electricity prices and making existing plants uncompetitive. Or applauds, when 50% (or some other figure) of daily electricity production is met from renewable sources.”

As so often, reality bites harder than the teeth of naive greenies!

