Welcome to the meteoLCD blog

September 28, 2008

Badge_Luxwort_2016

This blog was started on 28 September 2008 as a quick communication tool for exchanging information, comments, thoughts, admonitions, compliments etc. related to http://meteo.lcd.lu, the Global Warming Sceptic pages and environmental policy subjects.

Mathiness and models: the new astrology?

May 18, 2016

climate_models
There is an outstanding article in aeon on the use (and abuse) of mathematics and mathematical models in economics. It makes for fascinating reading, as many things said could directly apply to model-driven climatology. As a physicist, I love mathematics and find it invaluable in giving a precise meaning to what often are fuzzy statements. But this article includes some gems that make one reconsider any naive and exaggerated belief in mathematical models.

The economist Paul Romer is cited: “Mathematics, he acknowledges, can help economists to clarify their thinking and reasoning. But the ubiquity of mathematical theory in economics also has serious downsides: it creates a high barrier to entry for those who want to participate in the professional dialogue, and makes checking someone’s work excessively laborious. Worst of all, it imbues economic theory with unearned empirical authority.” Replace the word “economics” with “climatology” and you begin to understand.

You can find many citations by the great physicist Freeman Dyson on climate issues, like this one: “…climate models projecting dire consequences in the coming centuries are unreliable” or “[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere” (link).

Ari Laor from the Technion (Haifa, Israel) writes in a comment at the American Scientist blog: “Megasimulations are extremely powerful for advancing scientific understanding, but should be used only at a level where clear predictions can be made. Incorporating finer details in a simulation with a large set of free parameters may be a waste of time, both for the researcher and for the readers of the resulting papers. Moreover, such simulations may create the wrong impression that some problems are essentially fully solved, when in fact they are not. The inevitable subgrid physics makes the use of free parameters unavoidable…”

The Bulletin of the Atomic Scientists also has a very interesting article, “The uncertainty in climate modeling“. Here are some gems: “Model agreements (or spreads) are therefore not equivalent to probability statements…does this mean that the average of all the model projections into the future is in fact the best projection? And does the variability in the model projections truly measure the uncertainty? These are unanswerable questions.”

How true…

 

PS: The Bulletin has a series of 8 short contributions on this subject, and I suggest taking the time to read them all.

 

 

First Radiation Amplification factor for 2016

April 21, 2016

In several previous posts (here and here) I commented on the RAF (Radiation Amplification Factor), which tells us how much a change in the total ozone column changes the UVB irradiation at ground level. The question is usually asked to quantify the danger that a thinning ozone layer will increase UVB radiation and with it the skin cancer risk. The often extremist scare about the danger of UVB radiation has faded somewhat in recent years, as cases of rickets caused by too little UVB exposure have shown up again (read this paper). But at many beaches you can see overprotective mothers putting their children in UV-filtering jump-suits, an overreaction probably triggered by the scary media stories that usually start in Western countries on the first warm and sunny days of the year.

The RAF is defined as: RAF = − ln(UVB2/UVB1) / ln(DU2/DU1)

where the indices 1 and 2 correspond to two different situations (here two consecutive days). The following graph shows the situation today, a second (nearly) blue-sky day following the first. As the dates and times of measurement are so close, we may assume a constant path length for the sun rays through the atmosphere, and constant attenuations. The AOT (atmospheric optical thickness), which measures the turbidity of the atmosphere, was 0.055 the first day and 0.068 the second day; these are very close values. For comparison, the AOT was 2.197 on 19 April, a day with heavy cloud cover. The solar zenith angles were 38.4 and 38.0 degrees, respectively.

RAF_20_21April2016

With the readings shown on the graph, we find an RAF = 1.10

In a much more extensive paper I wrote in April 2013, the corresponding value is 1.08 (computed over 5 consecutive days).

Expressed as simple percentages, one can roughly say that during the two days of 20 and 21 April 2016 a decrease of 5% in the total ozone column caused a rise of the UVI of 9% (percentages with respect to the first day). Beware not to extrapolate this conclusion linearly, as the RAF is defined through non-linear logarithms!
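To make the arithmetic concrete, here is a minimal Python sketch using hypothetical ozone (DU) and UVB readings (not the actual meteoLCD values); it computes the RAF from two days and shows why the relation between an ozone change and a UVB change is a power law rather than a simple proportionality:

```python
# Minimal sketch (hypothetical readings): compute the Radiation Amplification
# Factor from two clear-sky days and use it to relate ozone and UVB changes.
import math

# Hypothetical day-1 and day-2 values (NOT the actual meteoLCD readings):
du1, uvb1 = 350.0, 1.00   # total ozone [DU] and UVB irradiance (arbitrary units), day 1
du2, uvb2 = 332.5, 1.06   # day 2: ozone about 5% lower, UVB somewhat higher

raf = -math.log(uvb2 / uvb1) / math.log(du2 / du1)
print(f"RAF = {raf:.2f}")

# The relation is a power law, not a straight line: UVB2/UVB1 = (DU2/DU1)**(-RAF).
# A given percentage drop in ozone can therefore not simply be multiplied by the
# RAF to get the percentage rise in UVB, except approximately for small changes.
ozone_drop = 0.05         # 5 % less ozone
uvb_rise = (1 - ozone_drop) ** (-raf) - 1
print(f"A {ozone_drop:.0%} ozone decrease gives a {uvb_rise:.1%} UVB increase")
```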

 

European Summer Temperatures since Roman Times

April 9, 2016

J. Luterbacher from the University of Giessen has published in Environmental Research Letters an interesting paper on the evolution of European summer temperatures. The paper is only 12 pages long, but the author list counts 44 coauthors, reflecting the inflationary tendency to cite everyone the author wants to be agreeable to (and the desperate struggle of scientists to be a coauthor on a maximum of papers). Nevertheless, the paper is interesting to read.

The authors used two statistical methods to evaluate temperature proxies (here tree rings): a Bayesian hierarchical modelling (BHM) and a Composite Plus Scaling (CPS) method. Both results are compared (where feasible) with instrumental records (here CRUTEM4). The concordance of these 2 methods and the instrumental record is rather good, as shown in this figure, which gives correlations r of 0.81 and 0.83.

Luterbach_Instruments_B_C

Are the 20th century summer temperatures unusual?

The period from Roman times to today is known to include 3 warm periods: the Roman, the Medieval and the Modern (notice the well-known ~1000 year periodicity!). The next figure shows the results given by the two statistical methods and the IPCC consensus reconstruction:

Luterbach_B_C

I have added the red horizontal line giving the highest (reconstructed) level of the Roman Warm Period: clearly the situation during the 20th century was not unusual compared to this period.

Another figure starts at the Medieval Warm Period and gives the same impression:

Luterbach_MCA_LIA_Present
Compared to the Medieval times, the last 100 years with noticeably higher atmospheric CO2 concentrations (mixing ratios) do not show a dramatic warming!

A last figure is also very telling: it gives the temperature differences between the Present (1950 to 2003) and the Medieval Warm Period:

Luterbach_Present-MCA

Luterbach_Present-MCA_scale
Some locations close to the Mediterranean are warmer, most are only slightly warmer or about the same, and two are even cooler.

Conclusion:

The authors write that “both CPS and BHM  indicate that the mean 20th century European summer temperature was not significantly different from some earlier centuries, including the 1st, 2nd, 8th and 10th centuries CE”.

This should be the last word, but we all know that a scientist today must pay at least lip service to the global warming meme. Accordingly the authors tell us that “However, summer temperatures during the last 30 yr (1986–2015) have been anomalously high”. Remember that we had a “monster” El Niño in 1998, and a very big one in 2015: these two events alone pushed up the average temperatures a lot, so this last remark is rather irrelevant.

But as they also write in another part of their paper that “…as well as a potentially greater role for solar forcing in driving European summer temperatures than is currently present in the CMIP5/PMIP3 simulations. This might be evidence for an enhanced sensitivity to solar forcing in this particular region”, thereby acknowledging the solar forcing that the IPCC denies, I will pardon the mandatory, career-friendly and politically correct sentence on the last years.

Climate trends at Diekirch, Luxembourg: part 2c (atmospheric CO2)

March 9, 2016

In this third and last part I will discuss the global trend in CO2 mixing ratio, the measurements of some EU measuring stations and our data at meteoLCD.

  1. The global situation 

CO2 and other atmospheric greenhouse gas concentrations can be found at many web sites, but I recommend two:

  • NOAA’s Earth System Research Laboratory website, which has an excellent FTP data finder. Its GLOBALVIEW site has a very interesting movie showing how the seasonal amplitude of atmospheric CO2 swings when going from the South Pole to the North Pole (2001 to 2013): nearly constant yearly values at the South Pole, and huge swings in the Northern hemisphere!
    The following picture shows the 3 non-maritime reference stations that are closest to Diekirch:
    NOAA_reference_stations_EU
  • the WDCGG (World Data Centre for Greenhouse Gases), a Japanese website with links to many stations and miscellaneous visualization tools. From this site, let us first look at the global atmospheric CO2 trend and at the variations of the yearly growth rate (a small calculation sketch follows this list).
    co2_monthly_molfr_20151109
    At first glance, the increase seems practically constant during the last 30 years, and close to (400-354)/30 ≈ 1.5 ppm/yr. The next picture shows in more detail that this growth rate is not an absolute constant: it follows a periodic pattern of ~2 years during the last 10 years and shows big variability from 1985 to 2000 (the huge peak in 1998 is the fingerprint of the big El Niño of that year, when warmer oceans did not absorb as much CO2 as colder waters do).
    co2_grr_20151109_annotated
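As a quick illustration of the numbers above, here is a minimal Python sketch (the endpoint values are the rounded ones quoted above; the short list of annual means is made up purely for illustration):

```python
# Minimal sketch: average CO2 growth rate from two endpoint values, plus
# year-to-year growth rates from a (hypothetical) list of annual means.
co2_start, co2_end = 354.0, 400.0      # ppmV, roughly the values read off the graph
years = 30
print(f"mean growth rate = {(co2_end - co2_start) / years:.2f} ppmV/yr")

# Hypothetical annual means (ppmV) to illustrate the year-to-year variability:
annual = [395.2, 397.1, 399.4, 401.0, 403.3]
growth = [b - a for a, b in zip(annual, annual[1:])]
print("annual growth rates:", [round(g, 1) for g in growth])
```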

2. What do European stations show?

We will select (on the WDCGG) all European stations located in a rather small grid of latitudes [45°, 50°] and longitudes [0°, 15° east] (the yellow rectangle pointed at by the blue arrow):

EU_station_selected

CO2_monthly_EU_selected

This plot shows that the readings vary in an important manner, and that the combined picture from these stations is far from a smooth curve! We get a similar result if we select only the stations located in a 10° x 10° grid that covers Uccle (Belgium) and Diekirch:

CO2_10x10grid

If we look at the yearly average results from the three NOAA reference stations mentioned above (HPB, OXK and HUN), the measurements lie rather close together:

CO2_Yearly_OXK_HUN_HPB

The largest difference is about 6 ppmV in 2011. From the first picture in this blog post you may remember that OXK and HPB are mountain stations, located at 1185 and 985 m asl, well above the daily inversion layer. That is not the case for the Hungarian station (248 m asl), but the values you see on the plot are only a subset of the measurements. In a relatively well hidden remark, the scientists from Hegyhatsal write that they ignore all readings except those made between 11:30 and 15:00 UTC; during this period convective mixing is strongest and the CO2 concentration is closer to a maritime or global background. I did not find their original data; the closeness of the HUN curve to the OXK and HPB plots clearly is a result of that pre-filtering.
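A minimal sketch of such an afternoon-only filtering, assuming a hypothetical half-hourly data file and column names (this is not the Hegyhatsal processing code, just the idea):

```python
# Minimal sketch (hypothetical file and column names): keep only the half-hour
# CO2 readings between 11:30 and 15:00 UTC and compare the yearly means of the
# full and the restricted data sets.
import pandas as pd

df = pd.read_csv("co2_halfhourly.csv", parse_dates=["timestamp"])  # hypothetical file
df = df.set_index("timestamp")

full_yearly = df["co2_ppmv"].groupby(df.index.year).mean()
afternoon = df.between_time("11:30", "15:00")
afternoon_yearly = afternoon["co2_ppmv"].groupby(afternoon.index.year).mean()

print(pd.DataFrame({"all data": full_yearly, "11:30-15:00 UTC": afternoon_yearly}))
```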


3. The measurements in Diekirch

In Diekirch we publish all our measurements, but the next picture shows the effect of omitting all readings except those made from 11:30 to 15:00 UTC.

CO2_Diekirch_yearly_avg_all_and_restricted

Clearly the restriction to the afternoon readings lowers the overall mixing ratio by up to 20 ppmV, which is considerable. The plot also shows one serious problem with our measurements. The dramatic plunge from 2012 to 2013 probably is an artifact due to a systematic and unknown calibration or functional problem. Both years 2011 and 2012 have an exceptionally high occurrence of CO2 readings in excess of 500 ppmV: usually there are about 200 such occurrences (i.e. half-hour periods) in a year, but during these 2 years we measured more than 500. That the lower plot also peaks at nearly the same value as the blue upper curve points to a problem.

A “consolation” is that the periods 2002 to 2012 and 2013 to 2015 show trends that are close to the global increase  rate of ~2 ppmV/year:

CO2_yeraly_Diekirch_closetrends

 

 

4. Conclusion

This 3-part series shows that CO2 measurements are not easy, are far from uniform, and that trends are difficult to establish without extremely costly calibration procedures. Even the satellites circling the Earth have problems measuring CO2 mixing ratios with an accuracy of 0.1 ppm. If you look at the various CO2 series of ground stations, you may be horrified by the numerous malfunctions and missing-data periods. At Diekirch, we are a little player in the CO2 game, doing a difficult job on a shoe-string budget. Our atmospheric gas measurements are by far the most troublesome we do; that our long-term trends in CO2 increase are similar to those measured by the big guys is comforting, but does not allow us to turn a blind eye to the numerous problems remaining.

A recurrent question is “what is the cause of the CO2 increase?”. The usual answer is that our fossil fuel consumption is the main culprit, as both curves track each other well. Ferdinand Engelbeen has a good discussion of this problem (link). The next picture from his website shows that the CO2 concentration rises with (cumulative) emissions, but also that global temperature does not do so during all periods:

Engelbeen_CO2_emissions_temperature

Another argument for the human origin of the CO2 increase is the variation of the delta-13C isotope ratio: fossil fuel has a lower proportion of 13C than CO2 from active biological sources. The continuing decrease of delta-13C is (or could be) a fingerprint of the anthropogenic impact. Not everybody accepts this explanation. For example Richard Courtney does not accept that the sink capability of the oceans is overloaded and that “excess” human-emitted CO2 therefore accumulates in the atmosphere (see discussion here). Prof. Murray Salby also argues that the CO2 increase is “natural” (see video). My late co-author Ernst-Georg Beck was also strongly convinced that warming regions of the North Atlantic are the main reason behind the atmospheric CO2 increase. Salby shows in his presentation this figure:

Salby_fossilfuelincrease_co2increase.jpg

A threefold increase in fossil fuel emissions after 2002 did not change the increase rate of atmospheric CO2: can fossil fuel emissions then really be the (main) driver of the observed atmospheric CO2 increase?

The debate is not over, but the overall atmospheric CO2 measurement results are accepted by almost everyone.

Climate trends at Diekirch, Luxembourg: part 2b (atmospheric CO2)

February 28, 2016

In this second part on CO2 measurements I will talk about short-term and seasonal CO2 variations. The media usually seem to suggest that CO2 levels are something of what the French call “un long fleuve tranquille” (a long quiet river), i.e. a more or less uniform mixing ratio that increases steadily with time. Nothing could be more wrong! While stations near maritime borders have relatively small diurnal variations, the situation is very different in continental and rural locations.

1. The diurnal CO2 variations.

Let me start with a very recent example: the CO2 levels at Diekirch during the last 7 days (22 to 28 Feb 2016):

CO2_7days_22to28Feb2016

Obviously the daily CO2 variation is far from uniform: during 22 Feb. the levels were nearly flat at about 375 ppmV, then became really variable during the next 4 days, and returned to flat during the last 2 days. The highest peak of 465 ppmV, reached during the morning hours of 25 Feb., represents an increase of 85 ppmV (22%) above the previous low level of 380 ppmV. Why this formidable peak? Let us look at the other meteorological parameters: there was no precipitation, air pressure did not change much, but air temperature and wind speed varied remarkably.

Wind_7days_22to28Feb2016

This plot shows wind-chill in red, air temperature in blue and wind speed in violet. Let us ignore the red curve, which represents a parameter calculated from air temperature and wind speed. The 25th Feb. was the coldest day during this period, and wind speed was close to zero. The 22nd Feb. was a very windy day, and it also was warmer. Now we are in February, and CO2-lowering plant photosynthesis is practically in a dormant state. The next plot of solar irradiance (the blue curve) shows that the sun was shining at its best during the 25th, and was nearly absent on the 22nd.

Solar_7days_22to28Feb2016

So we can make an educated guess about the parameter with the greatest influence on CO2 levels during our period: it can not be temperature, as warmer temperatures increase microbial soil activity and plant rotting, which both are CO2 sources (and here the highest CO2 occurred on the coldest day). It can not be photosynthesis driven by solar irradiance, as the highest CO2 readings happen when this potential activity is highest. There remains one parameter: wind speed! The night of the 25th was very cold, and as a consequence a strong inversion layer with minimal air movement lay as a blanket over the ground. The absence of air movement made mixing of ground air with air at higher levels impossible, so all gases accumulated in this inversion layer. A further proof of the correctness of our detective work is given by the plot of the NO/NO2 concentrations, which we restarted measuring after a year-long pause:

NOx_7days_22to28Feb2016

NO2 (blue) and NO (red) readings also peak on the 25th February (a working day, a Thursday), at practically the same time: the CO2 peak is at 07:30 UTC, the NO2/NO peak at 08:00. The first and certainly the latter show the fingerprint of morning traffic through the valley in which the town of Diekirch lies.

The conclusion is: wind speed (i.e. air movement) is the main cause of high CO2 levels: low wind speed means poor air mixing and high CO2 levels; high wind speed means the opposite. For many years I have tried to push this explanation, which is practically ignored in the usual “consensus research”. The sole consolation was a first prize as “best publication” for the paper I wrote in 2009 with the late Ernst-Georg Beck as coauthor, and which was published by Springer.
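A simple way to check this explanation on one's own data is to correlate the half-hourly CO2 and wind-speed series; a minimal sketch, with hypothetical file and column names:

```python
# Minimal sketch (hypothetical file/column names): check whether low wind speed
# goes together with high CO2 by correlating the two half-hourly series.
import pandas as pd

df = pd.read_csv("meteolcd_week.csv", parse_dates=["timestamp"])  # hypothetical
r = df["co2_ppmv"].corr(df["wind_speed_ms"])
print(f"Pearson correlation CO2 vs. wind speed: r = {r:.2f}")
# A clearly negative r would support the inversion-layer explanation:
# calm air -> poor mixing -> CO2 accumulates near the ground.
```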

More information on diurnal CO2 patterns can be found in this paper I wrote with my friends Tun Kies and Nico Harpes in 2007. It contains the following graph, which shows how CO2 levels changed during the passage of storm “Franz” on the 11th and 12th January 2007.

11_to120107_franz2a

 

2. Seasonal CO2 pattern

Seasonal CO2 patterns reflect the influence of vegetation (and microbial soil activity): during the warmer, sunny summer months, CO2 levels are as a general rule lower than during the colder, sun-poor periods. Plant photosynthesis is a potent CO2 sink, overwhelming the opposing source of microbial outgassing. The following figure shows that this photosynthesis influence is nearly nil at the South Pole and becomes stronger at more northern latitudes (the last station is Point Barrow in Alaska):

co2_sta_records_seasonal

 

At Point Barrow (latitude 71°N) the seasonal swing is about 18 ppmV; at Mauna Loa (latitude ~19.5°N) it is only about 10 ppmV. Here is the 2015 situation at Diekirch (latitude close to 50°N):

co2_monthly_2015_withsinusmodel

The plot shows the monthly mean CO2 levels, together with a best-fit sine curve with an amplitude of 11.2 ppmV, i.e. a total swing of about 22 ppmV. These values are comparable to those measured at the German Hohenpeissenberg (HPB) and Ochsenkopf (OXK) stations. Be aware: not every year shows such a nice picture (look for instance here). A paper by Bender et al. from 2005 shows a seasonal swing at Amsterdam of about 16 ppmV, a further argument that our Diekirch measurements are not too bad!

Bender_CO2_seasonal
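For readers who want to reproduce such a sine fit on their own monthly means, here is a minimal sketch using scipy; the 12 monthly values are invented for illustration and are not the 2015 Diekirch data:

```python
# Minimal sketch: fit a sine curve with a one-year period to 12 monthly CO2
# means (hypothetical values) and report the seasonal amplitude and swing.
import numpy as np
from scipy.optimize import curve_fit

months = np.arange(1, 13)
co2 = np.array([412, 414, 415, 416, 413, 408,
                402, 399, 400, 404, 408, 411], dtype=float)  # hypothetical ppmV

def seasonal(m, mean, amp, phase):
    return mean + amp * np.sin(2 * np.pi * m / 12.0 + phase)

popt, _ = curve_fit(seasonal, months, co2, p0=[co2.mean(), 8.0, 0.0])
mean, amp, phase = popt
print(f"amplitude = {abs(amp):.1f} ppmV, total swing = {2 * abs(amp):.1f} ppmV")
```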

 

A new paper from Boston University shows that urban backyards contribute nearly as much CO2 to the atmosphere as traffic emissions. There is a great discussion of this paper at the WUWT blog, as the paper does not seem to make a yearly balance of sources and sinks. Nevertheless, one commenter (Ferdinand Engelbeen) recalls that the yearly balance between photosynthesis (a sink) and microbial activity (a source) is slightly negative, i.e. photosynthesis removes more CO2 from the atmosphere than soil microbes and decomposing vegetation produce. Soils globally inject ca. 60 GtC (gigatons of carbon) into the atmosphere, to be compared with annual anthropogenic emissions of about 10 GtC.

 

____________________________

The upcoming last part of this 3-part series on CO2 will discuss long-term trends.

Climate trends at Diekirch, Luxembourg: part 2a (atmospheric CO2)

February 24, 2016

CO2, a gas that is essential to life, has become the villain par excellence during the last 20 years. Not because of its undeniable positive effect on plants and crops, but because the “consensus” politicized climatology needed an easy-to-grasp culprit and enemy. CO2, which after water vapour is the second most important greenhouse gas, has been chosen as the “climate killer” (what a horrible word!), even if the effect of its increasing atmospheric abundance still can not be quantified with precision and confidence. More than 20 years of lavishly financed climatology still has not delivered a rock-solid answer to the most often asked question: what is the warming potential (i.e. the climate efficiency) of increasing CO2 mixing ratios?

  1. Measuring atmospheric CO2

CO2 is a very rare gas, with a relative abundance of about 0.04% in the atmosphere (or 400 ppmV, the unit most often used). Compare this to oxygen (about 20%) and nitrogen (nearly 80%), and you can imagine that measuring such a small concentration (the correct expression would be “mixing ratio”) precisely is not very easy. Before Keeling started his CO2 measurement series at Mauna Loa in Hawaii, chemical methods were used, often with relatively great precision. My coauthor, the late Ernst-Georg Beck, was a specialist of these historical measurements (see here), which often show wildly varying concentrations, mostly because these measurements were made in locations where local effects were important (for instance plant photosynthesis and human/industrial emissions). Keeling’s admirable insight was to choose a location exposed to nearly constantly blowing ocean winds, far from vegetation cover and human activity. Mauna Loa was an ideal location, except for the fact that it is on an active volcano with heavy CO2 out-gassing from time to time; these periods have to be carefully monitored, and the CO2 measurements stopped during strong out-gassing events.

Keeling also was lucky to have a new class of gas sensors available: the NDIR (= non-dispersive infrared) sensors. Many molecules absorb infrared radiation, and that absorption increases with the concentration of the absorbing gas. CO2 for instance absorbs IR radiation in the two wavelength regions around 4 and 15 µm (the boxes added by me show the absorption at these wavelengths):

CO2_IR_spectrum_annotated
Keeling used an SIO infrared CO2 analyzer built by the Applied Physics Company (see here). Later instruments were built by Siemens and many other companies. These IR analyzers killed the chemical methods, as the measurements were much easier to make, more precise, and could be integrated without problems into automatic measurement systems. But as with all measurements, problems remain.
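The principle behind these NDIR analyzers can be sketched with the Beer-Lambert law: the fraction of IR light absorbed along the optical path grows with the CO2 mixing ratio. The numbers below (cross-section, path length, air density) are purely illustrative and are not the specifications of any of the instruments mentioned:

```python
# Minimal sketch of the NDIR principle via the Beer-Lambert law: the fraction
# of IR light absorbed in the measuring cell grows with the CO2 mixing ratio.
import math

sigma = 5.0e-22      # effective absorption cross-section per molecule [cm^2], illustrative
path_cm = 1000.0     # effective optical path after multiple mirror passes [cm], illustrative
n_air = 2.5e19       # air molecules per cm^3 at roughly ambient conditions

for ppmv in (350, 400, 450):
    n_co2 = n_air * ppmv * 1e-6                  # CO2 number density [cm^-3]
    transmitted = math.exp(-sigma * n_co2 * path_cm)
    print(f"{ppmv} ppmV -> absorbed fraction {1 - transmitted:.3%}")
```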

2. Problems when measuring CO2

A first and obvious difficulty when measuring atmospheric CO2 is to be sure that the air sample to be measured does not contain other gases absorbing IR radiation at the same wavelengths as CO2 does. As water vapour is the most important gas with overlapping absorption regions, the air sample must be dry. Expressing the measurement as a “mixing ratio” in ppmV makes adjustments to standard temperature and pressure conditions superfluous; this would not be the case if the CO2 concentration were given as a mass per unit volume (e.g. in µg/m3). All measurement systems have problems with stability. One solution to avoid drifts is to make correlation measurements: every sample measurement is followed by a measurement of dry clean air (to give a zero level) and by a measurement of an air sample containing a precisely known amount of CO2. These zero and reference samples may be held in capsules mounted on a rotating disk which lies in the path of the IR radiation. As the CO2 concentration in ambient air is so low, the IR rays should make a very long path through the sample. This means that the IR rays make many passes between two mirrors mounted at the ends of the sample chamber. The quality and cleanliness of these mirrors (often gold-coated) is essential for proper operation.

On top of these continuous zero/span measurements, zero and span checks using real gas samples filling the measuring chamber must be made at certain intervals. At meteoLCD we (i.e. Raoul Tholl and myself) do these calibration checks about every three weeks.

Making zero air is not too difficult: we use either a Sonimix zero-air generator from the Swiss company LNI or a chemical drying/absorber column. Sample air with a given CO2 concentration is difficult to obtain when the concentration has to be known with great precision. Keeling’s son (Ralph Keeling) long had a near monopoly for delivering precise CO2 calibration gas. At meteoLCD the best we can afford is a “primary standard” gas from PRAXAIR where the concentration is known to 1%. As we use bottles with about 600 ppmV CO2, this means that the real concentration lies somewhere between 594 and 606 ppmV, a not negligible 12 ppmV range! A bottle of such a gas costs close to 1000 Euro, including the rental of the metallic container itself. It does not last much more than a year.
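The arithmetic of such a zero/span check is simple; here is a minimal sketch of a generic two-point correction (this is not the internal algorithm of our instruments, and the readings are hypothetical):

```python
# Minimal sketch of a two-point (zero/span) calibration check: the instrument
# readings for zero air and for a ~600 ppmV span gas define a linear correction
# that can be applied to subsequent sample readings.
def calibrate(reading, zero_reading, span_reading, span_true=600.0):
    """Map a raw instrument reading onto the zero/span reference scale."""
    slope = span_true / (span_reading - zero_reading)
    return (reading - zero_reading) * slope

# Hypothetical instrument readings (ppmV as reported before correction):
zero_r, span_r = 3.0, 597.0
for raw in (395.0, 412.0, 468.0):
    print(f"raw {raw:.1f} ppmV -> corrected {calibrate(raw, zero_r, span_r):.1f} ppmV")
```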

3. The instruments used at meteoLCD

Over the years, we used three different CO2 sensors, with only the last two being good enough for precise measurements and trend detection. From 1998 to 2001 we used a Gascard sensor made by Edinburgh Instruments (UK); from 2002 to 2007 an expensive MIR9000 from the French company Environnement SA was in action. Finally, since 2008 an E600 instrument from the US company Api-Teledyne has been measuring atmospheric CO2. The MIR9000 was replaced because its mirrors started degrading. In an ideal world, one everlasting, non-degrading sensor would have made us happy. Alas, reality bites hard when it comes to long-term precise measurements. These are never easy to do, and one can not but warn of the often naive and extreme confidence that the public (and even many scientists!) places in many climate-related measurements. As the Germans say: “wer misst, misst Mist!” (he who measures, measures rubbish!).

The last figure shows the yearly mean CO2 mixing ratios measured with these three instruments:

meteoLCD_co2_trend_1998_2010
The Gascard sensor clearly can not be relied upon for a correct investigation into increasing atmospheric CO2 concentrations.

 

(to be continued)

Climate trends at Diekirch, Luxembourg (part 1: air temperature)

February 1, 2016

A couple of days ago I finished the annual computation of climate trends calculated from the measurements at meteoLCD, Diekirch (Luxembourg). As usual, the numbers show a much less spectacular evolution than the emotional media reports suggest.

  1. Let’s start with the ambient air temperature:

airtemp_trend_1998_2015

The thermometers have not been displaced since 2002: the calculated blue regression line for 2002 to 2015 shows no warming, but a very small cooling!

A very similar picture is given by the temperature data of our national meteorological station at the Findel airport. The next graph was made using the homogenized data of NASA’s Gistemp:

Findel_airtemp_Gistemp_1998_2015

Here the cooling rate for the 2002 to 2015 period is -0.0058 °C/year, quite negligible!
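The trends quoted here and below are simple linear regressions on the annual means; a minimal sketch with hypothetical temperature values:

```python
# Minimal sketch: linear trend of annual mean temperatures (hypothetical values),
# expressed in °C per year and °C per decade.
import numpy as np

years = np.arange(2002, 2016)
temps = np.array([10.3, 10.1, 9.9, 10.4, 10.6, 10.5, 10.0,
                  10.2, 9.7, 10.5, 10.1, 9.9, 10.8, 10.6])  # hypothetical °C

slope, intercept = np.polyfit(years, temps, 1)
print(f"trend: {slope:+.4f} °C/year = {10 * slope:+.2f} °C/decade")
```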

 

Did you hear the refrain “there are no more winters”? Actually this ongoing 2016 winter really does seem absent, but the overall picture for 1998-2015 is a remarkable cooling:

Winter_1998_2015
This plot of the December-January-February winter periods shows a visible cooling of 0.6°C per decade at the Findel airport. If we restrict our analysis to the 2002-2015 period, there still is no serious warming to be seen at the Findel: just a meager +0.08 °C warming per decade, very close to zero.

Winter_2002_2015

Our Diekirch data for the 2002-2015 winters also show only a modest winter warming of 0.5°C/decade and a good correlation with the NAO (North Atlantic Oscillation), a natural phenomenon which has a big influence on European climate: note how the trend lines of Diekirch, the Findel, Germany (DE) and the NAO index are very similar.

DJF_winter_airtemp_trends_2002_2015
You might compare this with the January trend of the German weather stations given at the NoTricksZone blog!

 

Finally, let us finish this first part with a look at the DTR = daily temperature range = daily Tmax − daily Tmin. The global warming advocates always point to this measure as a sign of an ongoing warming caused by human activity: anthropogenic warming should decrease the DTR, because it would warm the nights more than the afternoons. Here are our Diekirch data:

dtr_trend_1998_2015

All trends are practically zero: +0.2 °C/decade from 1998 to 2015 and -0.1 °C/decade for 2002-2015.
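A minimal sketch of the DTR computation and its trend, assuming a hypothetical file of daily Tmax/Tmin values:

```python
# Minimal sketch (hypothetical file/column names): compute the daily temperature
# range DTR = Tmax - Tmin from daily extremes and its trend over the years.
import pandas as pd
import numpy as np

df = pd.read_csv("daily_temps.csv", parse_dates=["date"])  # hypothetical file
df["dtr"] = df["tmax"] - df["tmin"]

annual_dtr = df.groupby(df["date"].dt.year)["dtr"].mean()
slope = np.polyfit(annual_dtr.index, annual_dtr.values, 1)[0]
print(f"DTR trend: {10 * slope:+.2f} °C/decade")
```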

So let’s finish this first part with a first conclusion: no big warming has been seen here in Luxembourg for at least 14 years!

 

______________________

to be continued…..

Higher CO2 boosts coccolithophores

December 19, 2015

The current “consensus” theory on the impacts of higher atmospheric CO2 is that the present basic ocean pH levels (about 7.8 to 8.1, with ample variations) will be lowered by the dissolved CO2, and that the oceans will “acidify” (a misleading appellation, as the ocean waters will still remain basic). The feared result of this “acidification” would be problems for shell-making creatures, as more “acid” water would dissolve their calcium carbonate shells.

As so often in climate debates, these simplistic and popular alarmist theories do not survive serious scientific research. The latest issue of SCIENCE Magazine (18 Dec. 2015, volume 350, issue 6267) has a research report by Sara Rivero-Calle from Johns Hopkins University and coauthors titled “Multidecadal increase in North Atlantic coccolithophores and the potential role of rising CO2”. Coccolithophores are the main calcifying phytoplankton, unicellular algae surrounded by calcium carbonate plates (link to picture).

coccolithophores_2

According to the authors, coccolithophores are a major source of the ocean’s inorganic carbon and help to sink aggregates, thus increasing the storage of atmospheric carbon. The study uses 45 years of data, over which the method of collecting the phytoplankton with silk sieves has not changed. Their results show that the coccolithophores profit from rising atmospheric CO2 levels. The next figure shows that the percentage of samples containing coccolithophores increases dramatically from about 2 to 20%:

coccolithophores_0

Their statistical analysis suggests that the causes of this increase are atmospheric CO2 levels and the AMO (Atlantic Multidecadal Oscillation):

coccolithophores_1

The upper plot gives the above-mentioned probability of finding coccolithophores in a sample, the middle one the global atmospheric CO2 mixing ratio in ppmV, and the lower plot the AMO index. The authors conclude that their North Atlantic results may well represent a global trend, and that “contrary to the generalized assumption of negative effects of ocean acidification on calcifiers, coccolithophores may be capable of adapting to a high CO2 world”.

Is there a German Energiewende ?

November 27, 2015

 

energiewende_sinkt

Three professors from the Physikalisches Institut of the Universität Heidelberg wrote a very short article in February 2015 titled “Findet eine Energiewende statt?” (“Is an energy transition actually taking place?”). Contrary to what one usually reads (i.e. the Energiewende being discussed only in terms of electricity production), they look at how the share of fossil energies in total German energy consumption changed between 2000 and 2013. During that period, a solar photovoltaic capacity of nearly 39 GW and a wind-turbine capacity of approx. 34 GW were installed, quite impressive numbers (a large nuclear reactor has a capacity of about 1.5 GW)!

The costs of the PV installations installed from 2000 to 2012 alone are estimated at 108 billion Euro (including the amounts still to be paid for future feed-in). The total costs of the EEG (Erneuerbare-Energien-Gesetz) are staggering, and mostly unknown. One estimate by Hermann (2011), actually seen as much too low, gives a total of 350 billion Euro up to 2030; 500 billion will probably be exceeded.

What is the effect of this huge effort? The figure below shows the total energy consumption in Germany from 2000 to 2013. The left scale shows the percentages, with the situation in 2000 taken as 100%. I added the boxes and arrows.

Energieverbrauch_Deutschland_2000_2013

We see that the share of fossil fuels was about 84% of total consumption in 2000 and still at least 80% in 2013 relative to the 2000 reference; expressed as a share of the slightly lower 2013 total consumption it is actually about 90%. So in 14 years of extraordinary expansion of renewables there is at best a minuscule diminution of the total amount of fossil fuels used for heating, driving, industrial processes and electricity production (and even an increase in the 2013 percentage).

The professors rightly conclude: “Der bisherige Ausbau der Wind- und Solarenergie ist augenfällig, das bisher Erreichte fällt aber sehr bescheiden aus, gemessen am Gesamtziel einer weitgehend von fossilen Energieträgern unabhängigen Energieversorgung unseres Landes” (the expansion of wind and solar energy so far is striking, but what has been achieved remains very modest when measured against the overall goal of an energy supply for our country that is largely independent of fossil fuels).

Subsidies for renewable and classic electricity

November 22, 2015

In the debate about wind and solar electricity, the amount of subsidies granted by political decision is a hot topic. Usually the pushers of this type of electricity (correctly) insist that non-renewable electricity production is also subsidized, and that objecting to subsidies for renewables is therefore a moot point.

In this blog post I will use concrete data available from the EIA, as well as a few illustrations from the “At the Crossroads: Climate and Energy” meeting of the Texas Public Policy Foundation. All data refer to the USA, 2013.

Let us start with the EIA table which gives the different subsidies for the fiscal year 2013, in millions of (2013) US$.

subsidies_USA2013_EIA

The “classical” electricity production by coal, natural gas, petroleum and nuclear receives 5081 million $, the renewables 15043 million $.

The next slide of a presentation by James M. Taylor from the Heartland Institute gives a good textual overview:

subsidies_EIA
The last point is especially instructive: coal, NG, petroleum and nuclear receive 5081 million $ in subsidies but produce 86.6% of the total electricity; wind and solar receive 11264 million $ in subsidies and produce 4.4%! This means that wind & solar combined receive 43.6 times more subsidy per unit of electricity produced than the traditional producers!
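The factor 43.6 follows directly from the EIA figures quoted above; a minimal sketch of the calculation:

```python
# Minimal sketch: subsidy per unit of electricity for wind & solar versus the
# traditional producers, using the EIA 2013 figures quoted above.
wind_solar_subsidy = 11264.0   # million $
wind_solar_share = 4.4         # % of total electricity produced
traditional_subsidy = 5081.0   # million $
traditional_share = 86.6       # %

ratio = (wind_solar_subsidy / wind_solar_share) / (traditional_subsidy / traditional_share)
print(f"wind & solar receive {ratio:.1f} times more subsidy per unit produced")
```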

If you plot the solar subsidies per 1000 MWh produced together with those of the traditional producers, you get this telling pie diagram (by Taylor):

subsidies_per_MW
Per unit of electrical energy, the subsidies to traditional producers become nearly invisible compared to those for solar electricity!

The official EIA numbers in the table should close the debate: yes, renewables do get really high subsidies for their very low overall contribution! The situation in Europe is probably similar to that in the USA, with renewable subsidies possibly even higher.

