Covid-19 Luxembourg: narrowing of the uncertainty range (3/x)

March 27, 2020

In the previous blog (link) I wrote about the fact that with few data points, the asymptotic value of the Gompertz fit (= the maximum number of infected people to be expected) has an extremely large uncertainty. The interval may extend to impossible negative and impossibly high positive values.

As an example I will show how the situation changes if we extend the full data series (starting on 29 February) from 22 March first to 24 March and finally to 26 March.

If we stop at the 22nd, the lower bound is about -22430 and the upper about 43532, and the parameter a of the Gompertz fit (i.e. the asymptote) is not statistically significant.

When we extend the period to March 24, and then again up to March 26, everything changes: the computed asymptotes become statistically significant, and the uncertainty interval narrows spectacularly:

The blue circles represent the asymptotic values, and the red boxes the range of the uncertainty interval. I will continue this investigation when data up to the 28th March are available.
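The width of such an uncertainty interval can be computed from the covariance matrix of a non-linear least-squares fit. Here is a minimal sketch with scipy, using synthetic cumulative counts (not the actual Luxembourg numbers):

```python
# Sketch: Gompertz fit and an approximate 95% confidence interval for the
# asymptote a. The series below is synthetic, not the real Luxembourg data.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))

t = np.arange(1, 27)                                  # days since the start
y = gompertz(t, 2000.0, 8.0, 0.12)                    # idealised cumulative counts
y += np.random.default_rng(1).normal(0.0, 10.0, t.size)

popt, pcov = curve_fit(gompertz, t, y, p0=[1000.0, 5.0, 0.1], maxfev=10000)
se_a = np.sqrt(pcov[0, 0])                            # std. error of the asymptote
lo, hi = popt[0] - 1.96 * se_a, popt[0] + 1.96 * se_a
print(f"a = {popt[0]:.0f}, 95% CI = [{lo:.0f}, {hi:.0f}]")
```

Refitting with the series truncated to fewer days makes the interval explode, exactly as described above.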


PS: here is the situation with the 28-March data included:



(to be continued)

COVID-19: does the number of deaths increase exponentially? (2/x)

March 19, 2020

In this 2nd and very short comment on the COVID-19 situation in Italy (data for 22 days now available) I will show that the usual remark about the exponential development of the death cases is not correct. The next figure shows the observed deaths (black dots) up to 18-Mar-20, together with the values given by 3 different fits to the Gompertz function made at days 14, 19 and 22 (the points labelled Death_GPZxy):


Firstly, the numbers from the 3 GPZ fits are nearly identical to the observations (the 4 markers mostly coincide). The blue curve is an exponential fit over the complete series of 22 days: clearly from day 19 on the real numbers lie below the exponential curve. This difference will probably increase in the future.


In the ongoing COVID-19 epidemic in Italy, the number of deaths does not increase exponentially, but follows very closely the Gompertz function y = a*exp(-b*exp(-c*x)), and so will gradually fall below what an exponential increase would suggest.
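This behaviour can be illustrated numerically: fit an exponential to the early part of a Gompertz-shaped series and extrapolate. A sketch with synthetic values (not the real Italian data):

```python
# Sketch: a Gompertz-shaped series inevitably falls below an exponential
# extrapolated from its early days. Synthetic values, not the Italian series.
import numpy as np

def gompertz(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))

t = np.arange(1, 23)                     # 22 days, as in the post
y = gompertz(t, 30000.0, 10.0, 0.08)     # idealised death counts

# Log-linear (i.e. exponential) fit on the first 14 days, then extrapolate.
q, logp = np.polyfit(t[:14], np.log(y[:14]), 1)
pred = np.exp(logp + q * t)

print(pred[-1] > y[-1])   # True: day 22 lies below the exponential curve
```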


(to be continued)


COVID-19: calculations and curves (1 of x)

March 18, 2020


1. Introduction

With the spread of the new and nasty corona virus, modelling and calculations about pandemics and the spread of infection spring up just about everywhere. Now the interest in, and the mathematics of, handling such a situation are not new; they are a standard subject for all students of differential equations. As an illustration, let me give an example from a publicity brochure for the Hitachi 200x hybrid analog computer from the 1960s:


(~1967, click here to download the full brochure).

Here is the example problem:


A system of 3 non-linear differential equations is enough to model the situation, and the plotter driven by the analog computer outputs this graph:


A very long time ago, Benjamin Gompertz (1779 – 1865) published a sigmoid-type function that often fits well the first part, or even the whole course, of an infection (above, for instance, curve Z). The Gompertz function has only 3 parameters and is:

y(t) = a*exp(-b*exp(-c*t)).

When t tends to infinity, exp(-c*t) -> 0 and exp(-b*0) = exp(0) = 1; so as t -> infinity, y tends to the asymptote a (in curve Z, all persons become immune in the long run).
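A quick numerical check of this limit (the parameter values below are chosen arbitrarily):

```python
# The Gompertz function tends to its asymptote a as t grows.
import math

a, b, c = 100.0, 5.0, 0.1

def y(t):
    return a * math.exp(-b * math.exp(-c * t))

for t in (10, 50, 200):
    print(t, round(y(t), 3))
# at t = 200 the value is already indistinguishable from a = 100
```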

Willis Eschenbach has an excellent article on the WattsUpWithThat blog titled “The Math of Epidemics”, where he applies the Gompertz function to the Covid-19 cases in South Korea, and the fit is excellent:


His article put me on the rails to do the same for the situation in Italy (the epidemic started 25 Feb 2020); all relevant data are published live here.

2. Gompertz function and Covid-19 in Italy

This is the Gompertz function applied to the death cases, as published today 18-March-2020:


The fit is excellent, and all 3 parameters are statistically significant; the interval between the lower and upper confidence levels is [6741, 45889], and the ratio (std. error of a)/a is now 35%. The relative errors and the width of the confidence interval will narrow as more data points become available. When only a few points are all you have, do not make any prediction! The next figure shows the plot from above, extended up to 100 days, together with the same exercise when only 14 and 15 data points were available:


When only 15 or 14 data points are available, the confidence interval becomes ridiculous; it is only for the latest plot with 21 points that all three parameters become significant at the 95% level. So beware of reading any serious meaning into the asymptote if you have too few data!


3. The Gompertz curve and Covid-19 in Luxembourg.

Tiny Luxembourg (~675000 inhabitants) saw the start of its epidemic only on 29 February. Until today we have only 6 data points for the total infected (203 today) and 2 deaths. Here is what the Gompertz fit looks like for the total infected:


But look at the enormity of the confidence interval! Even if the Gompertz calculation gives nearly identical results to the observations, none of the parameters is significant, and one should never take the asymptote in the next graph as an intelligent predictor for the maximum number of infections!



(to be continued)

An interesting day for Ozone and CO2

February 5, 2020

Today 05 Feb 2020 is an interesting day to observe how ground ozone O3, CO2, NO2 and wind may play together. The next figure shows how air temperature, CO2, ground O3 (measured by the Cairsens O3&NO2 sensor) and wind velocity (in m/s) varied during the morning at meteoLCD:

We see that CO2 makes a big jump starting at 06:00, with a peak at 07:30 and a fall back to “normal” at 09:00. Ground ozone varies in the opposite manner, as do the wind velocity and air temperature. What causes these variations? Probably the cooling of the boundary layer, together with lower wind, enhances the morning inversion. CO2 emissions from traffic and heating are trapped and CO2 levels rise; when the wind increases again, the boundary layer starts being mixed, which dilutes CO2. So far, so good. But why the sudden slump in O3?
Our period starts at 06:00 UTC, which is 07:00 local time. At that hour the nearby traffic of people driving to their workplace peaks, something we have often noticed in our NO/NO2 measurements. We know that O3 is destroyed by NO (and produced by NO2, among others); look here for a very small article on this topic that I wrote in Dec. 1998 (!) with several of my students. So the best explanation is that an NO peak from this traffic (NO being trapped in the inversion layer) destroys the ground ozone.

We find the same situation at the official station of Beckerich, where the traffic is similar to that at meteoLCD (Diekirch). Beckerich curiously is the sole station showing NO2 readings (the other 4 probably having a problem with their sensors). NO2 is mainly the result of oxidation of the NO coming out of the tailpipes, so we may safely assume that the hourly variation of NO is similar in shape.
Here the Beckerich situation:

NO2 peaks at Beckerich
(05 Feb morning = yellow)
… while O3 takes a plunge!

Two other stations (Luxembourg-Bonnevoie and Esch) show the same O3 dip, and two (Beidweiler and Vianden) do not.


Once again we find an illustration of the importance of wind velocity (i.e. air movements) on the CO2 mixing ratio: lower wind (especially during an inversion) allows CO2 to accumulate, and more air movement makes for higher dilution and lowers the concentration.

Ozone levels in the low-sun morning hours (despite the sunny, blue-sky morning, solar irradiance at 08:00 UTC was a meager 50 W/m2) are mostly affected by the destroying NO gas, whose concentration normally peaks during the high-traffic morning hours. This would explain why the Vianden O3 levels do not plunge, as there is practically no traffic around that measuring station:

No O3 dip at traffic-free Vianden!

One year of fine particle measurements by Airvisual Pro at meteoLCD

January 23, 2020
Airvisual Pro installed in the Stevenson hut on 26 Dec. 2018

1. Introduction

During the year 2018 I decided to introduce fine particle measurements at meteoLCD. Over many months I built several PM measurement systems based on the Chinese SDS011 sensor made by Inovafit, with data logging done by a Raspberry Pi. All these new sensors are of the LLS type (LLS = Laser Light Scattering): ambient air is sucked into a chamber by a little fan and exposed there to the light emitted by a solid-state laser. The scattered light is analysed by a photodiode, and the result is a count of particles in suspension, classified in two categories (< 2.5 um for PM 2.5 and < 10 um for PM 10). The counts are converted into a mass (ug/m3) by the built-in controller, assuming a certain combination of substances of known density.

Clearly this easy-to-build system has some weak points. The most important is humidity: it has been shown that above a certain level of relative humidity (about 75%) the condensation of water vapor on the particles inflates the count and so the mass reading. A second weak point is the changing air flow due to varying atmospheric pressure and/or wind. Professional, expensive sensors like the Horiba APDA-371 avoid these problems by drying the incoming air, maintaining precise conditions of airflow and air pressure, and using a much more expensive and complicated principle of beta-radiation attenuation (BAM principle). I wrote two preliminary articles comparing low-cost LLS sensors with a Horiba at the Beidweiler station, which I suggest reading here and here.

2. The Airvisual Pro

The Airvisual Pro is a stylish LLS-type sensor made by the Swiss company iQAir (current price ca. 460 €). It measures temperature, rel. humidity, CO2 concentration, PM2.5 and PM10. The instrument can be integrated into a cloud managed by iQAir, so that hourly data are always available on the internet (see our data here). Communication with the outside world is exclusively by WiFi (there is no RJ45 connector), which was sort of a problem at meteoLCD. We first used a Devolo powerline system with the AP located in the hub (visible in the above picture below the translucent base plate). This system was unstable, even when we switched to an AVM powerline system which uses all 3 wires (neutral, line and ground). So finally I laid an RJ45 cable up into the Stevenson hut and installed a WiFi access point directly beneath the Airvisual Pro. This solved the problem of intermittent connection failures.

The correction for the humidity influence consists in dividing the raw readings by a growth factor GW = a + (b*RH^2)/(1-RH), where RH is the relative humidity (a number between 0 and 1), a = 1 and b = 0.25. This formula has been suggested by several authors (see my prior papers for the references), and first tests have shown that this compensation for humidity is an absolute must.
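The correction is trivial to code; a minimal sketch (the example values are illustrative only):

```python
# Sketch of the humidity correction described above: the raw LLS reading is
# divided by the growth factor GW = a + b*RH^2/(1-RH), with a = 1, b = 0.25
# and RH expressed as a fraction between 0 and 1.

def growth_factor(rh, a=1.0, b=0.25):
    return a + b * rh * rh / (1.0 - rh)

def correct_pm(raw_ugm3, rh):
    return raw_ugm3 / growth_factor(rh)

# At 90% relative humidity the raw mass reading is inflated roughly threefold:
print(round(growth_factor(0.90), 3))
print(round(correct_pm(30.0, 0.90), 1))
```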

3. One full year of data

We now have a full year of hourly data to compare the Airvisual Pro readings with those of the official Beidweiler station LU0105A, located less than 20 km from Diekirch. The Beidweiler data have been downloaded from the “discomap” site of the EEA; I used the E1a series (the validated E2a series are not yet available). The E1a data are somewhat irregular in their time ordering (lines do not always follow increasing time), and there are many missing data (often a couple of hours, but some much longer periods). The time-stamp probably is local time, as is the time-stamp of the Airvisual Pro file. Missing values have been replaced by repeating the last correct reading before the interruption. As all hourly data are aggregated into daily averages, the impact of this action is tolerable.
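The cleaning steps just described can be sketched with pandas on synthetic hourly data (the values and the 4-day span are illustrative only):

```python
# Sketch: sort out-of-order lines, repeat the last correct reading over gaps,
# and aggregate the hourly values into daily averages. Synthetic data.
import numpy as np
import pandas as pd

idx = pd.date_range("2019-01-01", periods=96, freq="h")   # 4 days of hourly data
pm25 = pd.Series(10.0 + 5.0 * np.sin(np.arange(96) / 10.0), index=idx)
pm25.iloc[20:26] = np.nan        # a 6-hour gap, as occurs in the E1a files

pm25 = pm25.sort_index()         # lines do not always follow increasing time
pm25 = pm25.ffill()              # repeat the last correct reading
daily = pm25.resample("D").mean()
print(daily.round(2))
```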

So here is a plot showing the daily PM2.5 readings for the full year 2019:

First, look at the peak values, which are absolutely synchronous: practically all the peaks and lows coincide in time. The yearly averages, minima and maxima are also very close for both series.

The next plot shows the Airvisual Pro readings versus the Beidweiler ones:

The goodness of the fit (trend line forced through the origin) is R2 = 0.88, quite acceptable! A calibration factor to apply to the Airvisual Pro PM 2.5 readings would be a multiplier of 1/0.88 = 1.14 (rounded to two decimals). Not forcing the trend line through the origin does not change these results.
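For reference, a zero-intercept least-squares slope and the derived multiplier can be computed like this (synthetic pairs standing in for the actual 2019 data):

```python
# Sketch: least-squares fit forced through the origin, y = k*x, and the
# calibration multiplier 1/k. Synthetic data, not the real daily averages.
import numpy as np

rng = np.random.default_rng(0)
beidweiler = rng.uniform(2.0, 40.0, 365)                   # reference, ug/m3
airvisual = 0.88 * beidweiler + rng.normal(0.0, 1.0, 365)  # low-cost sensor

# Closed form for the zero-intercept slope: k = sum(x*y) / sum(x*x)
k = np.sum(beidweiler * airvisual) / np.sum(beidweiler**2)
print(round(k, 3), round(1.0 / k, 2))
```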

4. The PM 10 readings

The Airvisual Pro seems to have a problem with the PM10 category: these readings are very similar to the PM 2.5 ones, and considerably too low, as shown by the next plot:

I have no explanation for this at the moment.

5. Conclusion

This first year-long series of PM measurements with the Airvisual Pro shows that it is exceptionally accurate in its PM 2.5 measurements, compared to the hugely more expensive Horiba APDA-371.

The Airvisual Pro, despite being exposed in the open in a well-ventilated Stevenson hut (natural ventilation) and to high humidity levels, worked without a single breakdown for the full year.

The year-long communication with the Airvisual cloud worked flawlessly; this cloud makes it easy to store and consult the uploaded data. For understandable security reasons, access to these data is reserved to the subscriber, and not open to the general public.

An Excel file holding all relevant data and plots can be found here. Please give proper credit when citing these data.

Future and extinction fear

October 10, 2019

During the last weeks I reread the excellent book “La Peur Exponentielle” by Benoît Rittaud, in which he recalls the many fears tied to a geometric or exponential increase of some environmental factor, be it population growth, pollution, climate change etc. At the end of his book Rittaud recalls the classic “The Doomsday Syndrome” published by John Maddox in 1972. Maddox was a theoretical physicist and is especially known as the editor of the Nature journal (from 1966 – 1973 and 1980 – 1995). I had never read the book, so I ordered a used copy as it is out of print.

The ideas and comments John Maddox wrote 47 years ago are breathtaking in their modernity and actuality. He wrote this book at a time when the environmental movement was thriving in the Western world, and prophets of doom like Paul Ehrlich (“The Population Bomb”) and Rachel Carson (“Silent Spring”) were very influential. The great fears were population growth, pollution, pesticides, climate change (yes!) and a general overcrowding and damaging of “Spaceship Earth”.

All of these prophecies were wrong in their exaggeration of real existing problems. The great famines predicted by Ehrlich for the 1970s did not happen; overuse of DDT certainly was a problem, but the general policies forbidding its use (after it had saved millions of lives from malaria) are responsible for possibly hundreds of thousands of deaths.

Let me here just cite a couple of sentences from Maddox:

On the doomsday prophecies: “Their most common error is to suppose that the worst will happen”.

On the way the environmentalists see the non-alarmists: “One of the distressing features of the present debate about the environment is the way it is supposed to be an argument between far-sighted people with the interests of humanity at heart and others who care no tuppence for the future.”

On the scientists: “They have too often made more of the facts than the conventions of their craft permit. Too often, they have expressed moderate or unsure conclusions in language designed to scare, sometimes with the open declaration that exaggeration is necessary to 'get things done'.”

On ecology: “The word ecology has become a slogan, not the name of a branch of science.”

The doomsday cause: “…would be more telling if it were more securely grounded in facts, better informed by a sense of history and awareness of economics and less cataclysmic in temper.”

On DDT and Rachel Carson: “The most seriously misleading part of her narrative is the use of horror stories of the misuse of DDT to create an impression that there are no safe uses worth consideration.”

On alarm: “…alarm does not provide the best atmosphere for finding rational solutions”.

On extreme environmentalists: “…the extreme wing of the environmental movement may inhibit communities of all kind from making the fullest use of the technical means which exist for improvement of the human condition.”


How about telling Extinction Rebellion (or the elders of Fridays for Future) to start reading this prescient book, written well before they were born?

A peek at ground-ozone

October 5, 2019

Attention: this comment was updated on 06 Oct 2019; the previous comparison with the 6-month values was erroneous. Sorry!


It is always a useful exercise to compare our meteoLCD ground ozone measurements (here) with those of the official administration for air quality (here).

Since 3 Jan. 2017 we have used the miniature Cairsens O3&NO2 sensor for our ozone measurements (technical specs here). The NO2 readings are normally well below this instrument's lower limit of detection of 20 ppb, so its readings can be taken as the ozone concentration (in ppb; multiplying by the conversion factor 2 gives a result in ug/m3 at standard conditions). The Administration of Environment uses expensive Horiba sensors in its stations, and the readings are given in ug/m3. On 3 April 2018 the first Cairsens O3&NO2 was replaced by a successor.
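The “conversion factor 2” follows from the ideal gas law; a quick check, assuming standard conditions of 20 °C and 1013 hPa:

```python
# Check of the ppb -> ug/m3 conversion for ozone (molar mass M = 48 g/mol):
# 1 ppb corresponds to M / (molar volume) micrograms per cubic metre.
R = 8.314        # J/(mol*K)
T = 293.15       # K, i.e. 20 degC
p = 101300.0     # Pa, i.e. 1013 hPa

molar_volume_l = 1000.0 * R * T / p      # ~24.06 litres per mole
factor = 48.0 / molar_volume_l           # ug/m3 per ppb, ~2
print(round(molar_volume_l, 2), round(factor, 2))
```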

  1. The official stations of Vianden, Beckerich and Beidweiler.

We will compare our measurements made during the week of the 29th September 2019 with those of the three stations of Beckerich, Beidweiler and Vianden:

As Luxembourg is a very tiny country, the distances between Diekirch and the other 3 stations are small: Diekirch-Vianden = 8km, Diekirch-Beidweiler = 19km, Diekirch-Beckerich = 25km (rounded to the km).

Regarding traffic we have a very clear situation: Vianden has very low traffic and is surrounded by a large forest area; the measuring station is situated between the two basins of the SEO pumped-storage facility. Beidweiler has some morning and evening traffic peaks from people driving to work in Luxembourg, and also a certain amount going in the opposite direction into the Moselle region or to Trier in Germany. Beckerich surely has the most morning and evening work-related traffic, and being on a main road (but not a highway) to Arlon in Belgium, also more traffic during the day. This situation is similar to that in Diekirch, where there are traffic peaks in the morning and evening hours and a continuous background over the day.

Here a picture of the O3 measurements of these 3 stations:

I added a horizontal line marking the lowest nightly measurements. Clearly Vianden shows the typical situation of a rural station, where there is no nightly traffic whose NO emissions rapidly bring down the O3 levels. Its peak values also are the highest, as this is a location with very pristine and clear air (so not much UVB attenuation here), and with a large natural supply of ozone precursor gases such as isoprene and the terpenes emitted by the surrounding trees.

Beidweiler is intermediate: the night lows are close to 20 ug/m3 and the peak values are lower than those of Vianden. Finally, Beckerich has the lowest night O3 levels, going down to about 5 ug/m3; its peak readings also are distinctly lower than those of Vianden and Beidweiler (please note that the vertical scales of the graphs are not the same!).

Now I cannot refrain from making a comment I have been making for at least 20 years. From its location, it is clear that Vianden must have the highest natural O3 levels during warm periods (all stations not mentioned here are city stations). Why is it then that the Administration of Environment every year uses the Vianden values to declare an ozone emergency and limit traffic speed on the highways, with the argument that Vianden is representative for Luxembourg? Nothing could be farther from the truth: most people in Luxembourg live in or near Luxembourg-City and to the south, in regions that are a far cry from the pristine Vianden situation. I guess that an environmental agency must justify its existence by launching scary messages from time to time; I am more than willing to change my opinion if I get one single good argument justifying a choice that I consider a political, not a scientific, one.

3. Comparing Diekirch to the three other stations

The next figure shows for every station (brown curve) an overplot with our Cairsens measurements (in red):

Simple inspection would suggest that Beckerich and Diekirch are the most similar: same night-time low due to ongoing traffic, same highs and lows, with a possible overshoot of the highest readings. Beidweiler ranks second, and Vianden makes for some head-scratching: the night-time readings are quite different, but all the high readings are very close.

In my opinion, our Cairsens (this is the second sensor, as each one has a life-span of a year) has a span that might be slightly too high, even if its calibration certificate was nearly perfect. During high-ozone events we several times had readings exceeding 200 ug/m3 when the other stations were distinctly below.

Let us have a look at the last month (09 Sep to the morning of the 06 Oct) overlaying the Diekirch and Vianden plots:

This does not look too bad, but clearly the ozone sensor in Vianden had some problems, shown by the straight brown lines which point to an interpolation of missing data. Also do not ignore that this graphical overplot is really rough, so one should not put too much emphasis on slightly misaligned peaks.

Now let's look at the past 6 months. The official stations correspond rather well to one another:

We find approximately the same situation as in the short, week-long series from above.

Here now a comparison between Diekirch and Beckerich, and in the next picture Diekirch and Vianden:

Visually this is not too bad! We have the same peaks and lows, with a possible overshoot in the Cairsens readings during the highest O3 periods.

4. Conclusion

This short comparison has shown quite good agreement between our Cairsens measurements and the Horibas over the last week, the last month and even the last 6 months. Do not forget that the price difference between the Cairsens and the Horiba is enormous. The Cairsens is one of a new generation of sensors that are made to be affordable, calibrated for a full year of operation, and meant to be replaced after that year.

The O3&NO2 Cairsens sensor has a special patented filter that must be changed every 4 months or so. This change will be made asap.

Radon washout: two consecutive precipitation peaks

September 25, 2019

Many times I have written on this blog about radon washout: after a short downpour, we (nearly) always see a visible peak in our gamma radiation, caused by a washout of the daughters of the noble gas radon, which is a natural constituent of our atmosphere; to find these comments enter “radon” into the search window on this site or click here, here and here.

A washout means a diminution of the local atmospheric aerosol concentration, and all measurements show that there is a delay of a few days before recovery. The last few days give us a very good example of two situations: a high precipitation peak followed by a lower one, and vice versa, a small rain peak followed by a higher one.

Situation A shows a high rainfall followed after about 6 hours by a smaller one; the gamma-radiation peaks have the same pattern: high, then lower. Situation B is like a mirror image for rain: first a small downpour, then a much higher one. Here the small precipitation peak causes the highest gamma peak: it is the sign of radon washout from a “pristine” atmosphere. 6 hours later we observe a 3-times higher downpour, but the gamma peak is very small: this washout operates on an atmosphere already “cleaned” of radioactive aerosol particles (the radon daughters), so there is not much radioactive debris left.

This example shows that it would be folly to try to find a proportionality between the rain peak and the resulting gamma rise. The recovery time is a parameter not to be ignored. I tried to find a relationship between rain and gamma peaks for situations of a one-time event, sufficiently far (> 3 days) from a preceding one. There are not many such happenings in a year, and the correlation is poor. Maybe more on this when time permits.

The Kauppinen papers (2/2)

August 11, 2019

3. The four Kauppinen papers.

In the first part of these comments I finished by writing that Dr. Jyrki Kauppinen (et al.) has published several papers during the last decade on the problem of finding the climate sensitivity. Here is a list of these papers:

  • 2011 : Major portions in climate change: physical approach. (International Review of Physics) link
  • 2014: Influence of relative humidity and clouds on the global mean surface temperature (Energy & Environment). Link to abstract.
    Link to jstor read-only version (download is paywalled).
  • 2018: Major feedback factors and effects of the cloud cover and the relative humidity on the climate. Link.
  • 2019: No experimental evidence for the significant anthropogenic climate change. Link.

The last two papers are on arXiv and are not peer reviewed, which in my opinion is not an argument to refute them.

4. Trying to render the essentials without mathematics.

All these papers are, at least in large part, heavy on mathematics, even if portions are not too difficult to grasp. Let me try to summarize in layman's words (if possible):

The authors recall that the IPCC models trying to deliver an estimate for ECS or TCR usually take the relative humidity of the atmosphere as constant, and practically restrict themselves to one major cause of a global temperature change: the change of the radiative forcing Q. Many factors can change Q, but overall the IPCC estimates that the human-caused emission of greenhouse gases and land-use changes (like deforestation) are the principal causes of a changing Q. If the climate sensitivity is called R, the IPCC assumes that DT = R*DQ (here “D” stands for the Greek capital “delta”). This assumption leads to a positive water-vapour feedback factor and so to the high values of R.

Kauppinen et al. disagree: they write that one has to include in the expression of DT the changes of the atmospheric water mass (which may show up as changes of the relative humidity and/or the low cloud cover). Putting this into an equation leads to the conclusion that the water-vapour feedback is negative, and as a consequence that the climate sensitivity is much lower.
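Schematically (my notation, not the authors' exact equations), the contrast between the two approaches can be written as:

```latex
% IPCC-style assumption: the temperature change is driven by the
% radiative forcing alone, with constant relative humidity
\Delta T = R \, \Delta Q

% Kauppinen et al. (schematic): an additional term for the changing
% atmospheric water mass (relative humidity and/or low cloud cover)
% enters the balance and acts as a negative feedback, lowering the
% effective sensitivity
\Delta T = R \left( \Delta Q + \Delta Q_{\text{water}} \right)
```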

Let us insist that the authors do not write that increasing CO2 concentrations have no influence on global temperature. They do, but the influence is many times smaller than that of the hydrological cycle.

Here is what Kauppinen et al. find if they take real observational values (no fudge parameters!) and compare their calculated result to one of the official global temperature series:

The visual correlation is quite good: the changes in low cloud cover explain almost completely the warming of the last 40 years!

In their 2017 paper they conclude to a CO2 sensitivity of 0.24 °C (about ten times lower than the IPCC consensus value). In the latest 2019 paper they refine their estimate, find again R = 0.24 and give the following figure:

Clearly the results are quite satisfactory; they also show clearly that this simple model cannot render the spikes caused by volcanic or El Niño activity, as these natural disturbances are not included in the balance.

The authors conclude that the IPCC models cannot give a “correct” value for the climate sensitivity, as they practically ignore (at least until AR5) the influence of the low cloud cover. Their finding is politically explosive in the sense that there is no need for a precipitous decarbonization (even if in the longer run a reduction of the carbon intensity of many activities might be advisable).

5. My opinion

As written in part 1, Kauppinen et al. are not the first to conclude to a much lower climate sensitivity than the IPCC and its derived policies do. Many papers, even if based on different assumptions and methods, come to a similar conclusion, i.e. that the IPCC models give values that are (much) too high. Kauppinen et al. also show that the hydrological cycle cannot be ignored, and that the influence of the low cloud cover (possibly modulated by solar activity) should not be neglected.
What makes their papers so interesting is that they rely on practically only 2 observational factors and are not forced to introduce various fudge parameters.

The whole problem is a complicated one, and rushing into ill-reflected and painful policies should be avoided before we have a much clearer picture.

The author Alberto Zaragoza Comendador has a very interesting web-site with an interactive climate-sensitivity calculator:

I really recommend spending some time trying his calculations and especially reading his very interesting article “It shouldn’t take 100 years to estimate the climate sensitivity”.


Addendum (added 12Aug2019) :

Dr. Roy Spencer showed a very telling slide in his Heartland 2019 presentation:

This image shows the troposphere (not surface) warming as predicted by the CMIP5 models (which form the basis of all the “consensus” political action) versus the observations made by satellites (the RSS and UAH teams) and 4 different reanalyses which include everything (satellites, floats, balloons …). The spread between the different models is so great as to forbid any action based on any one of them (which one would you choose as the “truth”?). Curiously, the only model close to the observations is the Russian INM-CM5 model (read a more complete discussion of that model here).

The Kauppinen papers (1/2)

August 11, 2019
  1. Climate sensitivity.

The most important question regarding anthropogenic climate change is that of the climate sensitivity, in short: “what supplementary warming will be caused by a doubling of the atmospheric CO2 concentration?” This question lies at the heart of “climate protection policies”: if this sensitivity is great, rapid decarbonisation might be seen as inevitable; if it is low, a better policy might be to wait for upcoming technologies allowing a more painless switch to a non- or low-carbon future.

The IPCC has not been able to narrow its uncertainty range for more than 20 years: it stubbornly lies in the interval 1.5 to 4.5 °C, with a “best” estimate of approx. 3.5 °C. These numbers are (mostly) the outcome of climate models, and all assume that feedback factors like increasing water vapour are positive, i.e. they considerably augment the warming (about 3.6 W/m2 of radiative forcing is caused by a CO2 doubling).

Many scientists agree with the IPCC, but a smaller group does not. This group (Lindzen, Lewis and Curry, etc.) tries to find the climate sensitivity from observations of the past climate, and most get an answer which lies below (often well below) the IPCC's lowest boundary.

If they are right (and the IPCC consensus people wrong), most of the expensive policies following the Paris agreement (“limit warming to 1.5 °C with respect to pre-industrial times”) could be scrapped.

The notion of “climate sensitivity” is complex: usually 2 different sensitivities are used. The Equilibrium Climate Sensitivity (ECS) considers the final temperature change caused by a CO2 doubling once “everything has settled down”, meaning all feedback factors have played out and all momentary thermal imbalances on the planet have been resolved. This may take a horribly long time, on the order of centuries, and thus is too long to represent a realistic political goal. So often a second definition, the Transient Climate Sensitivity TCS (often also called the transient climate response TCR), is used: here we assume a yearly 1% increase in atmospheric CO2 concentration, which will lead to a doubling in about 70 years, a time span more acceptable for a political agenda.
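The 70-year figure is easy to verify: with a 1% yearly increase the concentration doubles after ln(2)/ln(1.01) years.

```python
# Doubling time for a 1% per year increase in CO2 concentration.
import math

years = math.log(2.0) / math.log(1.01)
print(round(years, 1))   # just under 70 years
```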

If we look at the history of scientific papers treating this subject, there is a clear tendency toward lower sensitivities since the first calculation of Charney in 1979:

Decline of published TCR and ECS values since 2000 (link).

So this extremely important subject is far from “settled science”, as most media, environmental groups and politicians continue to shout and want to make us believe.

2. Dr Jyrki Kauppinen

Dr. Kauppinen is a professor of physics at Turku University in Finland. He has published quite a lot of papers on spectroscopy, Fourier analysis etc. Four of his papers (with co-authors, published 2011, 2017, 2018 and 2019) look at the climate sensitivity problem using only observations, finding that the most important feedbacks caused by water vapor (whether condensing into low clouds or not) are negative, and not positive as assumed by the IPCC.

They find that human activity has an insignificant influence on climate change (read here a general comment in the Helsinki Times from 14 July 2019).

In the following parts of this blog I will look into these papers, which are not always easy to read and understand. They are quite heavy on mathematics, and even if I am able to follow most of it, there are some occurrences where I have to assume that their calculations are correct.


to be continued….