Welcome to the meteoLCD blog

September 28, 2008


This blog was started on 28 September 2008 as a quick communication tool for exchanging information, comments, thoughts, admonitions, compliments etc. related to questions of climate change, global warming, energy and so on.

Future and extinction fear

October 10, 2019

During the last weeks I reread the excellent book “La Peur Exponentielle” by Benoît Rittaud, in which he recalls the many fears tied to a geometric or exponential increase in some environmental factor, be it population growth, pollution, climate change etc. At the end of his book Rittaud recalls the classic “The Doomsday Syndrome”, published by John Maddox in 1972. Maddox was a theoretical physicist and is especially known for having been the editor of the journal Nature (1966–1973 and 1980–1995). I had never read the book, so I ordered a used copy, as it is out of print.

The ideas and comments John Maddox wrote 47 years ago are breathtaking in their modernity and actuality. He wrote this book at a time when the environmental movement was thriving in the Western world, and prophets of doom like Paul Ehrlich (“The Population Bomb”) and Rachel Carson (“Silent Spring”) were very influential. The great anxieties were population growth, pollution, pesticides, climate change (yes!) and a general overcrowding and damaging of “Spaceship Earth”.

All of these prophecies were wrong in their exaggerations of real existing problems. The great famines predicted by Ehrlich for the 1970’s did not happen; overuse of DDT certainly was a problem, but the blanket policies forbidding its use (after it had saved millions of lives from malaria) are possibly responsible for hundreds of thousands of deaths.

Let me here just cite a couple of sentences from Maddox:

On the doomsday prophecies: “Their most common error is to suppose that the worst will happen”.

On the way the environmentalist see the non-alarmists: “One of the distressing features of the present debate about the environment is the way it is supposed to be an argument between far-sighted people with the interests of humanity at heart and others who care no tuppence for the future.”

On the scientists: “They have too often made more of the facts than the conventions of their craft permit. Too often, they have expressed moderate or unsure conclusions in language designed to scare, sometimes with the open declaration that exaggeration is necessary to 'get things done'.”

On ecology: “The word ecology has become a slogan, not the name of a branch of science.”

The doomsday cause: “…would be more telling if it were more securely grounded in facts, better informed by a sense of history and awareness of economics and less cataclysmic in temper.”

On DDT and Rachel Carson: “The most seriously misleading part of her narrative is the use of horror stories of the misuse of DDT to create an impression that there are no safe uses worth consideration.”

On alarm: “…alarm does not provide the best atmosphere for finding rational solutions”.

On extreme environmentalists: “…the extreme wing of the environmental movement may inhibit communities of all kind from making the fullest use of the technical means which exist for improvement of the human condition.”

Conclusion:

How about telling Extinction Rebellion (or the elders of Fridays for Future) to start reading this prescient book, written well before they were born?

A peek at ground-ozone

October 5, 2019

Attention: this comment was updated on 06 Oct 2019; the previous comparison with the 6-month values was erroneous. Sorry!

________________________

It is always a useful exercise to compare our meteoLCD ground ozone measurements (here) with those of the official administration for air quality (here).

Since 3 January 2017 we have used the miniature CAIRSENS O3&NO2 sensor for our ozone measurements (technical specs here). The NO2 readings are normally well below this instrument’s lower limit of detection of 20 ppb, so that its readings can be taken as the ozone concentration (in ppb; multiplying by the conversion factor 2 gives a result in ug/m3 at standard conditions). The Administration of the Environment uses expensive Horiba sensors in its stations, and their readings are given in ug/m3. On 3 April 2018 the first Cairsens O3&NO2 was replaced by a successor.
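
For readers who want to redo this conversion exactly, here is a minimal sketch (my own, not part of the Cairsens documentation) showing where the rounded factor of 2 comes from, assuming standard conditions of 20 °C and 1013 hPa:

```python
# Minimal sketch: the ppb -> ug/m3 conversion for ozone at 20 degC and 1013 hPa.
M_O3 = 48.0                 # molar mass of ozone, g/mol
R = 8.314                   # universal gas constant, J/(mol*K)
T = 293.15                  # 20 degC in kelvin
P = 101325.0                # 1013 hPa in pascal

molar_volume_litres = R * T / P * 1000.0     # ~24.05 L/mol at these conditions

def ppb_to_ugm3(ppb):
    """1 ppb of O3 corresponds to M_O3 / molar_volume micrograms per cubic metre."""
    return ppb * M_O3 / molar_volume_litres

print(round(ppb_to_ugm3(1.0), 2))    # ~2.0 -> the rounded factor used in the text
print(round(ppb_to_ugm3(45.0), 1))   # e.g. a 45 ppb reading -> ~89.8 ug/m3
```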

  1. The official stations of Vianden, Beckerich and Beidweiler.

We will compare our measurements made during the week of the 29th September 2019 with those of the three stations of Beckerich, Beidweiler and Vianden:

As Luxembourg is a very tiny country, the distances between Diekirch and the other 3 stations are small: Diekirch-Vianden = 8km, Diekirch-Beidweiler = 19km, Diekirch-Beckerich = 25km (rounded to the km).

Regarding traffic we have a very clear situation: Vianden has very low traffic and is surrounded by a large forest area; the measuring station is situated between the two basins of the SEO pumped-storage facility. Beidweiler has some morning and evening traffic peaks from people driving to work in Luxembourg, and also a certain amount going in the opposite direction into the Moselle region or to Trier in Germany. Beckerich surely has the most morning and evening work-related traffic and, being on a main road (but not a highway) to Arlon in Belgium, also more traffic during the day. This situation is similar to that in Diekirch, where there are traffic peaks in the morning and evening hours and a continuous background over the day.

Here is a picture of the O3 measurements of these 3 stations:

I added a horizontal line marking the lowest nightly measurements. Clearly Vianden shows the typical situation of a rural station, where there is no nightly traffic whose NO emissions would rapidly bring down the O3 levels. Its peak values are also the highest, as this is a location with very pristine and clear air (so not much UVB attenuation here) and with a large natural supply of ozone precursor gases such as isoprene and terpenes emitted by the surrounding trees.

Beidweiler is intermediate: the night lows are close to 20 ug/m3 and the peak values are lower than those of Vianden. Finally, Beckerich has the lowest night O3 levels, going down to about 5 ug/m3; its peak readings are also distinctly lower than those of Vianden and Beidweiler (please note that the vertical scales of the graphs are not the same!).

Now I cannot refrain from making a comment I have been making for at least 20 years. From its location, it is clear that Vianden must have the highest natural O3 levels during warm periods (all stations not mentioned here are city stations). Why is it then that the Administration of the Environment uses the Vianden values every year to declare an ozone emergency and limit traffic speed on the highways, with the argument that Vianden is representative of Luxembourg? Nothing could be farther from the truth: most people in Luxembourg live in or near Luxembourg-City and to the South, in regions that are a far cry from the pristine Vianden situation. I guess that an environmental agency must justify its existence by launching scary messages from time to time; I am more than willing to change my opinion if I get one single good argument justifying a choice that I consider a political, and not a scientific, one.

3. Comparing Diekirch to the three other stations

The next figure shows for every station (brown curve) an overplot with our Cairsens measurements (in red):

Simple inspection suggests that Beckerich and Diekirch are the most similar: the same night-time lows due to ongoing traffic, the same highs and lows, with a possible overshoot of the highest readings. Beidweiler ranks second, and Vianden makes for some head-scratching: the night-time readings are quite different, but all the high readings are very close.

In my opinion, our Cairsens (this is the second sensor, as each one has a life-span of a year) has a span that might be slightly too high, even if its calibration certificate was nearly perfect. During high ozone events we several times had readings exceeding 200 ug/m3 when the other stations were distinctly below.

Let us have a look at the last month (09 Sep to the morning of 06 Oct), overlaying the Diekirch and Vianden plots:

This does not look too bad, but clearly the ozone sensor in Vianden had some problems, shown by the straight brown lines which point to an interpolation of missing data. Also do not ignore that this graphical overlay is really rough, so one should not put too much emphasis on slightly misaligned peaks.

Now let us look at the past 6 months. The official stations correspond rather well to one another:

We find approximately the same situation as in the short week-long series above.

Here now is a comparison between Diekirch and Beckerich, and in the next picture Diekirch and Vianden:

Visually this is not too bad! We have the same peaks and lows, with a possible overshoot in the Cairsens readings during the highest O3 periods.

4. Conclusion

This short comparison has shown quite good agreement between our Cairsens measurements and the Horibas over the last week, month and even 6-month periods. Do not forget that the price difference between the Cairsens and the Horiba is enormous. The Cairsens is one of a new generation of sensors that are made to be affordable, calibrated for a full year of operation, and meant to be replaced after that year.

The O3&NO2 Cairsens sensor has a special patented filter that must be changed every 4 months or so. This change will be made asap.

Radon washout: two consecutive precipitation peaks

September 25, 2019

I have written many times on this blog about radon washout: after a short downpour, we (nearly) always see a visible peak in our gamma radiation, caused by a washout of the daughters of the noble gas radon, which is a natural constituent of our atmosphere; to find these comments enter “radon” into the search window on this site or click here, here and here.

A washout means a diminution of the local atmospheric aerosol concentration, and all measurements show that there is a delay of a few days before recovery. The last few days give us a very good example of two situations: a high precipitation peak followed by a lower one, and vice versa, a small rain peak followed by a higher one.

Situation A shows a high rainfall followed after about 6 hours by a smaller one: the gamma-radiation peaks show the same pattern: high, then lower. Situation B is like a mirror image for the rain: first a small downpour, then a much higher one. Here the small precipitation peak causes the highest gamma peak: it is a sign of radon washout from a “pristine” atmosphere. Six hours later we observe a downpour three times higher, but the gamma peak is very small: the washout now operates on an atmosphere already “cleaned” of radioactive aerosol particles (the radon daughters), so there is not much radioactive debris left.

This example shows that it would be folly to try to find a proportionality between the rain peak and the resulting gamma rise. The recovery time is a parameter not to be ignored. I tried to find a relationship between rain and gamma peaks for one-time events sufficiently far (> 3 days) from a preceding one. There are not many such events in a year, and the correlation is poor. Maybe more on this when time permits.
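
For the curious, here is a minimal sketch of the kind of event selection I have in mind, assuming half-hourly meteoLCD data in a CSV file; the file name and the column names ("rain_mm", "gamma_nSv_h") are hypothetical, and the thresholds are only illustrative:

```python
# Sketch: keep only rain events with no significant rain in the preceding 3 days,
# then compare the rain peak with the gamma rise over the following 6 hours.
import pandas as pd

df = pd.read_csv("meteolcd_2019.csv", parse_dates=["timestamp"]).set_index("timestamp")

# candidate events: half-hours with significant rain
rain_times = df[df["rain_mm"] > 2.0].index

# keep only "isolated" events: no significant rain during the preceding 3 days
isolated = [t for t in rain_times
            if (df.loc[t - pd.Timedelta(days=3): t - pd.Timedelta(minutes=30),
                       "rain_mm"] <= 2.0).all()]

rows = []
for t in isolated:
    gamma_before = df.loc[:t - pd.Timedelta(minutes=30), "gamma_nSv_h"].iloc[-1]
    gamma_peak = df.loc[t: t + pd.Timedelta(hours=6), "gamma_nSv_h"].max()
    rows.append({"rain_peak": df.loc[t, "rain_mm"],
                 "gamma_rise": gamma_peak - gamma_before})

print(pd.DataFrame(rows).corr())   # a poor correlation is to be expected, as noted above
```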

The Kauppinen papers (2/2)

August 11, 2019

3. The four Kauppinen papers.

In the first part of these comments I finished by writing that Dr. Jyrki Kauppinen (et al.) has published several papers during the last decade on the problem of finding the climate sensitivity. Here is a list of these papers:

  • 2011: Major portions in climate change: physical approach. (International Review of Physics) link
  • 2014: Influence of relative humidity and clouds on the global mean surface temperature (Energy & Environment). Link to abstract.
    Link to jstor read-only version (download is paywalled).
  • 2018: Major feedback factors and effects of the cloud cover and the relative humidity on the climate. Link.
  • 2019: No experimental evidence for the significant anthropogenic climate change. Link.

The last two papers are on arXiv and are not peer-reviewed, which in my opinion is not an argument to dismiss them.

4. Trying to render the essentials without mathematics.

All these papers are, at least in large parts, heavy on mathematics, even if parts thereof are not too difficult to grasp. Let me try to summarize in layman’s words (if possible):

The authors recall that the IPCC models trying to deliver an estimate for ECS or TCR usually take the relative humidity of the atmosphere as constant, and practically restrict themselves to one major cause of a global temperature change: the change of the radiative forcing Q. Many factors can change Q, but overall the IPCC estimates that the human-caused emission of greenhouse gases and land-use changes (like deforestation) are the principal causes of a changing Q. If the climate sensitivity is called R, the IPCC assumes that ΔT = R·ΔQ. This assumption leads to a positive water vapour feedback factor and so to the high values of R.
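
To make the algebra concrete, here is a minimal numerical sketch (mine, not the authors') of this simple relation, using the roughly 3.6 W/m2 forcing per CO2 doubling and the ~3.5 °C “best estimate” quoted in part 1 of this series, contrasted with the much lower Kauppinen value discussed below; the numbers are purely illustrative:

```python
# Minimal sketch of the zero-dimensional relation dT = R * dQ (illustrative only).
# R is expressed in K per (W/m2) and is derived here from the warming per CO2
# doubling quoted in the text, with a doubling forcing of about 3.6 W/m2.
F_2X = 3.6                     # radiative forcing of a CO2 doubling, W/m2

R_ipcc = 3.5 / F_2X            # ~0.97 K/(W/m2): IPCC-like 3.5 degC per doubling
R_kauppinen = 0.24 / F_2X      # ~0.07 K/(W/m2): Kauppinen et al.'s 0.24 degC per doubling

def delta_T(R, dQ):
    """Temperature change for a forcing change dQ (W/m2)."""
    return R * dQ

for dQ in (1.0, F_2X):
    print(f"dQ = {dQ:.1f} W/m2 -> dT = {delta_T(R_ipcc, dQ):.2f} degC (high R), "
          f"{delta_T(R_kauppinen, dQ):.2f} degC (low R)")
```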

Kauppinen et al. disagree: they write that one has to include in the expression for ΔT the changes of the atmospheric water mass (which may show up as changes in the relative humidity and/or the low cloud cover). Putting this into an equation leads to the conclusion that the water vapour feedback is negative and, as a consequence, that the climate sensitivity is much lower.

Let us insist that the authors do not write that increasing CO2 concentrations have no influence on global temperature. They do, but this influence is many times smaller than that of the hydrological cycle.

Here is what Kauppinen et al. find when they take real observational values (no fudge parameters!) and compare their calculated result to one of the official global temperature series:

The visual correlation is quite good: the changes in low cloud cover explain almost completely the warming of the last 40 years!

In their 2017 paper they arrive at a CO2 sensitivity of 0.24°C (about ten times lower than the IPCC consensus value). In the last, 2019, paper they refine their estimate, again find R = 0.24 and give the following figure:

Clearly the results are quite satisfactory; they also show clearly that this simple model cannot render the spikes caused by volcanic or El Niño activity, as these natural disturbances are not included in the balance.

The authors conclude that the IPCC models cannot give a “correct” value for the climate sensitivity, as they practically ignore (at least up to AR5) the influence of low cloud cover. Their finding is politically explosive in the sense that there is no need for a precipitous decarbonization (even if, in the longer run, a reduction in the carbon intensity of many activities might be recommendable).

5. My opinion

As written in part 1, Kauppinen et al. are not the first to arrive at a much lower climate sensitivity than the IPCC and its derived policies assume. Many papers, even if based on different assumptions and methods, come to a similar conclusion, i.e. that the IPCC models give values that are (much) too high. Kauppinen et al. also show that the hydrological cycle cannot be ignored, and that the influence of low cloud cover (possibly modulated by solar activity) should not be neglected.
What makes their papers so interesting is that they rely on practically only two observational factors and are not forced to introduce various fudge parameters.

The whole problem is a complicated one, and rushing into ill-considered and painful policies should be avoided before we have a much clearer picture.

The author Alberto Zaragoza Comendador has a very interesting web-site with an interactive climate-sensitivity calculator:
azcomendador.shinyapps.io/Clisense

I really recommend spending some time trying his calculations and especially reading his very interesting article “It shouldn’t take 100 years to estimate the climate sensitivity”.

_______________________________________________________

Addendum (added 12Aug2019) :

Dr. Roy Spencer showed a very telling slide in his Heartland 2019 presentation:

This image shows the tropospheric (not surface) warming as predicted by the CMIP5 models (which form the basis of all the “consensus” political action) versus the observations made by the satellites (the RSS and UAH teams) and 4 different reanalyses which include everything (satellites, floats, balloons …). The spread between the different models is so great as to forbid any action based on any of them (which one would you choose as the “truth”?). Curiously, the only model close to the observations is the Russian INM-CM5 model (read a more complete discussion of that model here).

The Kauppinen papers (1/2)

August 11, 2019
  1. Climate sensitivity.

The most important question regarding anthropogenic climate change is that of the climate sensitivity, in short: “what supplementary warming will be caused by a doubling of the atmospheric CO2 concentration?”. This question lies at the heart of “climate protection policies”: if this sensitivity is great, rapid decarbonisation might be seen as inevitable; if it is low, a better policy might be to wait for upcoming technologies allowing a more painless switch to a non- or low-carbon future.

The IPCC has not managed to narrow its uncertainty range for more than 20 years: it stubbornly lies in the interval 1.5 to 4.5 °C, with a “best” estimate of approx. 3.5 °C. These numbers are (mostly) the outcomes of climate models, and all assume that feedback factors like increasing water vapour are positive, i.e. that they considerably augment the warming (about 3.6 W/m2 of radiative forcing caused by a CO2 doubling).

Many scientists agree with the IPCC, but a smaller group does not. This group (Lindzen, Lewis and Curry etc.) tries to find the climate sensitivity from observations of the past climate, and most get an answer which lies below (often well below) the IPCC’s lowest boundary.

If they are right (and the IPCC consensus people wrong), most of the expensive policies following the Paris outcome (“limit warming to 1.5°C w.r.t. pre-industrial times”) could be scrapped.

The notion of “climate sensitivity” is complex: usually 2 different sensitivities are used. The Equilibrium Climate Sensitivity ECS considers the final temperature change caused by a CO2 doubling once “everything has settled down”, which means all feedback factors have played out and all momentary thermal imbalances on the planet have been resolved. This may take a horribly long time, on the order of centuries, and thus is too long to represent a realistic political goal. So often a second definition, the Transient Climate Sensitivity TCS, is used (often also called the transient climate response TCR); here one assumes a yearly 1% increase in the atmospheric CO2 concentration, which leads to a doubling in about 70 years, a time span more acceptable for a political agenda.
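
As a quick sanity check of the “doubling in about 70 years” figure (my own back-of-the-envelope calculation, nothing more):

```python
# A 1 % per year increase compounds to a doubling after ln(2)/ln(1.01) years.
import math

years_to_double = math.log(2) / math.log(1.01)
print(round(years_to_double, 1))    # ~69.7 years

print(round(1.01 ** 70, 3))         # ~2.007, i.e. doubled after 70 years
```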

If we look at the history of scientific papers treating this subject, there is a clear tendency towards lower sensitivities since the first estimate by Charney in 1979:

Decline of published TCR and ECS values since 2000 (link).

So this extremely important subject is far from “settled science”, as most media, environmental groups and politicians continue to shout and want us to believe.

2. Dr Jyrki Kauppinen

Dr. Kauppinen is a professor of physics at the University of Turku in Finland. He has published quite a lot of papers on spectroscopy, Fourier analysis etc. Four of his papers (with co-authors), published in 2011, 2017, 2018 and 2019, look at the climate sensitivity problem using only observations, and find that the most important feedbacks caused by water vapour (condensing into low clouds or not) are negative, and not positive as assumed by the IPCC.

They find that the influence of human activity on climate change is insignificant (read here a general comment in the Helsinki Times of 14 July 2019).

In the following parts of this blog I will look into these papers, which are not always easy to read and understand. They are quite heavy on mathematics, and even if I am able to follow most of it, there are some places where I have to assume that their calculations are correct.

____________________________________________

to be continued….

Energy, as always!

July 24, 2019

Ignoring the importance of reliable, sufficient, base-load capable and affordable energy is ignoring everything past human history has told us. There simply is not a single example where a nation flourished and developed by cutting back its energy usage. The form of the energy used may change (e.g. more gas, less coal…) but the unavoidable truth is that progress correlates with energy use. We now have a new generation of “child-climatologists” or “child-environmentalists” who absolutely ignore this, and whose quasi-religious war against energy usage reminds me of the wrongdoings of the Chinese juveniles during the Cultural Revolution in the late 60’s (see here).

Making us all believe that the planet is on the brink of climate destruction and that deep decarbonisation must be achieved at all costs during the next months (yes, months!) should be regarded by all sensible people not only as Utopian (a Utopia often strikes us as sympathetic, if outlandish) but as completely foolish.

So in this very short blog I will only show 3 pictures which might bring us back to reality (more on Paul Homewood’s excellent website “notalotofpeopleknowthat”).

First, here is a pie-chart of global energy usage in 2018:

Note how small the percentage of renewables is, despite billions of subsidies poured into what should now be considered as mature technologies (solar PV, wind and hydro). The biggest problem of PV and wind remains their intermittency, and an affordable and sustainable storage technology of the needed magnitude is nowhere to be seen.

The next picture shows that from 2000 to 2018 energy usage was continually rising; nothing surprising here as many under- or low-developed nations try to better the standard of living of their people.

So the mantra that increasing energy efficiency will rapidly lead to a steady state must be seen as a wishful but unrealistic desire. The developed world has made big progress in energy efficiency, and the low-hanging fruit has all been harvested. As so often, there will be an asymptotic progression where increasing efficiency becomes more and more difficult and costly.

Finally let us look at the CO2 emissions from primary energy usage according to the various world regions:

Old Europe is becoming more and more of a blip in the total: what matters is what happens in Asia, and all the self-hurting policies dreamed up in the EU will finally be judged by history as “much pain without gain”.

Please reflect on this, and draw your own conclusions.

The new IFO report on E- and Diesel cars

April 19, 2019


The München-based IFO (Information und FOrschung) is one of the largest research think tanks in Germany. Its former president Prof. Hans-Werner Sinn is an outspoken critic of Germany’s Energiewende and is constantly attacked by those who follow the politically correct official energy mantra. Three authors, Prof. Christoph Buchal (physics, Uni Köln), Hans-Dieter Kaul (research fellow, IFO) and Hans-Werner Sinn, have published a report titled “Kohlemotoren, Windmotoren und Dieselmotoren: Was zeigt die CO2-Bilanz?” (link), which may be translated as “Coal engines, wind engines and Diesel engines: what does the CO2 balance show?”. This paper is remarkable not for what it shows (there are no fundamental new insights here), but because it is written in an extremely accessible language, without any superfluous statistical gizmos and technical jargon. It also uses only freely available data, without any activist cherry-picking. The paper compares the CO2 emissions (in gCO2 per km) of a new TESLA 3 electric car and a MERCEDES 220d Diesel car. Here are some comments on this paper.

  1. The big lie

The EU authorities and the German Umweltbundesamt all classify battery-driven electric cars as “CO2 free”. This is unacceptable for at least 2 reasons:
– the electricity taken from the grid to charge the batteries is not CO2 free in Germany (the current energy mix accounts for 550 gCO2/kWh)
– the batteries used are a consumable, with a life-span generally well below that of the car, and require large amounts of energy for production and eventual recycling.

So a correct LCA (life cycle analysis) must include the battery-related CO2 emissions and naturally also those caused by transforming crude oil into a litre of Diesel fuel at the pump. These numbers are available from many research papers and amount to the following:
– the battery LCA adds 73-98 gCO2/km for the electric car (here the Tesla 3)
– for the Diesel fuel one should add 21% to the CO2 quantity emitted by burning the gasoil in the engine.

This amounts to the following results:

  • the Tesla 3 emits 156 to 181 gCO2/km
  • the Mercedes 220d emits 141 gCO2/km

If the Mercedes had a methane (natural gas) driven engine, it would emit only 99 gCO2/km. The Tesla emissions are based on the current German energy mix of 550 gCO2/kWh (to be compared to France’s roughly 100 gCO2/kWh!).
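
As a plausibility check, here is a minimal sketch of the arithmetic behind these numbers; the grid intensity, the battery share and the 21% upstream adder are taken from the report as quoted above, while the Tesla’s electricity consumption and the Mercedes’ tailpipe figure are my own assumed round values, used only to reproduce the order of magnitude:

```python
# Rough reproduction of the report's per-km CO2 figures (my sketch, not the
# authors' own calculation). Values marked ASSUMPTION are my round figures.
GRID_G_PER_KWH = 550.0            # German electricity mix, gCO2/kWh (from the report)
BATTERY_G_PER_KM = (73.0, 98.0)   # battery production/recycling share, gCO2/km (from the report)
UPSTREAM_DIESEL = 0.21            # +21 % for producing the Diesel fuel (from the report)

TESLA_KWH_PER_100KM = 15.0            # ASSUMPTION: typical Model 3 consumption
MERCEDES_TAILPIPE_G_PER_KM = 117.0    # ASSUMPTION: tailpipe CO2 of the 220d

# Electric car: grid emissions per km plus the battery share
tesla_grid = GRID_G_PER_KWH * TESLA_KWH_PER_100KM / 100.0        # ~82 gCO2/km
tesla_low, tesla_high = [tesla_grid + b for b in BATTERY_G_PER_KM]

# Diesel car: tailpipe emissions plus the 21 % upstream share
mercedes_total = MERCEDES_TAILPIPE_G_PER_KM * (1.0 + UPSTREAM_DIESEL)

print(f"Tesla 3:       {tesla_low:.0f} - {tesla_high:.0f} gCO2/km")  # compare with the report's 156 - 181
print(f"Mercedes 220d: {mercedes_total:.0f} gCO2/km")                # compare with the report's 141
```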

The conclusion is damning: the E-car emits more CO2 per km than the “infamous” Diesel car, so replacing Diesel cars with electric ones will do nothing for climate protection (if one accepts the dogma that CO2 emissions from fossil fuels are the cause of the observed climate changes).

2. What to do with ever more solar and wind electricity?

Many papers point to the fact that when intermittent electricity producers such as wind turbines and solar PV amount to more than 30% of the installed capacity, an increasing amount of renewable electricity must be dumped during peak “green” generation periods (much wind and a sunny sky) by shutting off the wind turbines and solar panels; nevertheless they must still be backed up by an increasingly uneconomic array of base-load capable producers such as coal, gas or nuclear power stations. Together with many other researchers, the authors of the paper consider that battery storage for handling, for instance, seasonal imbalance will not be possible, due to the huge quantity of rare materials needed and the exorbitant price tag. They suggest a two-step political programme:

  1. begin to switch all fossil-fuel engines to methane (natural gas), as this will surely have an enormous impact on the traffic-related CO2 emissions (which remain stubbornly constant in Germany); the Netherlands have shown how to do this, and normal gasoline engines are easy to adapt to methane fuel.
  2. begin to develop electrolysis of hydrogen using the excess renewable electricity, and use this hydrogen to make “green” methane (a process well known, see for instance here).

Both of these steps could use the existing network of refuelling stations and the existing underground gas-pipe infrastructure. Electrolysis and methanization do not come cheap: compared with burning the raw hydrogen, the efficiency drops to 34%, a number comparable to the efficiency of a modern Diesel engine. Nevertheless the authors see the hydrogen route (be it as a fuel for fuel cells driving e-cars or as the basis for methanization feeding thermal engines) as the only possibility to further increase renewable electricity production and usage.

3. A Post Scriptum

The authors conclude with some thoughts on the German Energiewende: forced upon the country by politics and NGOs to “save the planet”, it has spectacularly failed to reduce Germany’s CO2 emissions. A recent STATISTA article (link) gives this diagram:

Europe's 10 biggest polluters

7 of the 10 biggest “polluters” are German!

The authors fear that policies built on a lie may well lead to a general mistrust among the public and could, in the near future, make desirable and necessary political decisions impossible to enforce.

PM contribution of road traffic

December 17, 2018

Reading the media, one would think that fine-particle pollution is mostly caused by Diesel cars (the new villain on the block) and that restricting Diesel cars would solve this problem. The simple truth is quite different, as shown by this recent graph from the EEA:

The violet bands are the contributions to PM emissions by road transport (including tailpipe, brake and tyre abrasion emissions): they are truly small with respect to the totals: about 7.6% for the PM10 and 10.5% for the PM2.5 fine particles! So the vast majority of fine-particle emissions come from sources not related to road transport. The elephant in the room is the emissions from wood burning (included in the brown segments labelled “Commercial, institutional and households”).

(link to EEA article)

A test of inexpensive LLS fine particle sensors

November 9, 2018

1. Intro

Fine particle measurements are hip, and unbelievable articles about the death toll they cause abound. Usually traffic (and especially Diesel cars, the new villain on the block) is given as the main culprit. This is complete nonsense, as only about 25% of fine particles come from car engines; a big chunk has a natural origin, and a really big part comes from wood burning. Nevertheless, measuring these very small particles is important, but not easy. Here we speak about sizes less than 2.5 um or less than 10 um (1 um = 1 micrometer = one thousandth of a millimeter). “Official” measurement devices are costly, typically in the 20000 Euro range, and measure directly the mass per cubic meter of dry air (in ug/m3). They are based either on the attenuation of radioactive beta radiation (BAM) or on a direct mass measurement: either by weighing a filter exposed to the dust (for integral measurements over longer periods) or by detecting the changes in the oscillation frequency of an oscillator on which the particles attach themselves. A real problem is humidity: many types of particles (e.g. salt dust) absorb water vapor, increase in size and mass, and so give faulty results. So professional sensors first dry the incoming air flow, which must be kept rigorously constant and must not be influenced by changes in atmospheric pressure or wind conditions.

Laser light scattering (LLS) sensors are completely different. A laser beam from a solid-state laser is more or less scattered by the particles in a dark chamber, and the scattered light is measured by a photosensor and analyzed by a micro-controller. What is actually measured is a count of particles per volume, and from this count, using many assumptions and proprietary algorithms, a mass per volume is calculated. In inexpensive sensors like the Nova SDS011 or the Plantower series the air is pushed into the measurement chamber by a small fan; so there is no drying, and the airflow is under the influence of changes in atmospheric pressure and wind. Clearly, this type of sensor cannot rival the professional ones, but they are far from useless. Many grass-roots “citizen science” movements use these sensors, which often are in a price range between 15 and 50 Euros; nearly all are made in China or Japan.

2. The meteoLCD test setup

I started working on these sensors in June by buying several SDS011 and an Airmaster AM7, which combines PM2.5 and PM10 with T (temperature), RH (relative humidity) and CO2 measurements. None of these sensors store their measurements; they either give a binary stream (SDS011) or a stream of ASCII lines. So I added a Raspberry Pi nano-computer running a Python script to make a standalone device which logged its data on the SD card. A third type of sensor acquired was the Airvisual Pro from the Swiss company IQAir. This is a really stylish device costing 259 Euro which has its own storage and WiFi communication facility. The picture below shows the Airmaster AM7 with the Raspberry Pi mounted in a Stevenson hut on the meteoLCD terrace in a previous test:

The next test (which gave the data for the paper) had an SDS011 and an Airvisual Pro in the hut:


The black box on the SDS011 is the fan, and the shining case with the letter A is the measuring chamber. The serial output is read by the Raspberry Pi through a serial-to-USB converter.
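
For readers who want to build something similar, here is a minimal sketch of the kind of Python logging script used on the Raspberry Pi (not the exact meteoLCD script); it assumes the serial-to-USB converter shows up as /dev/ttyUSB0 and uses the 10-byte frame format documented for the SDS011:

```python
# Sketch: read SDS011 frames over serial and append the values to a CSV file.
# Frame layout: AA C0 pm25_lo pm25_hi pm10_lo pm10_hi id1 id2 checksum AB,
# with the PM values given in tenths of a microgram per cubic metre.
import serial, time

ser = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2)

def read_frame():
    """Sync on the 0xAA header, then return (pm25, pm10) in ug/m3 or None."""
    while True:
        b = ser.read(1)
        if not b:
            return None                       # timeout, no data
        if b[0] == 0xAA:
            break
    rest = ser.read(9)
    if len(rest) != 9 or rest[0] != 0xC0 or rest[8] != 0xAB:
        return None                           # not a measurement frame
    if sum(rest[1:7]) % 256 != rest[7]:
        return None                           # checksum error
    pm25 = (rest[2] * 256 + rest[1]) / 10.0
    pm10 = (rest[4] * 256 + rest[3]) / 10.0
    return pm25, pm10

with open("sds011_log.csv", "a") as log:
    while True:
        values = read_frame()
        if values:
            log.write("%s,%.1f,%.1f\n" % (time.strftime("%Y-%m-%d %H:%M:%S"), *values))
            log.flush()
```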

3. Test results

The test ran from 25 Sep. to 22 Oct. 2018, and the full days 26/09 to 21/10 were used to compare the results to those of the nearest official station (Beidweiler). The full text of the report is here.

Let me just give the figure which shows that the inexpensive sensors were able to reproduce the variations of the average daily PM10 concentrations, and also the peak event on the 18th October:

BEIDWLR = Beidweiler station, AV = Airvisual Pro, AM = Airmaster. Left scale in ug/m3.

The test period was rather dry, with RH not exceeding 75%, a number given by some authors as the threshold above which measurements become strongly impacted by humidity. So the test will be repeated during wet and foggy days in the near future.

4. Conclusion

The test shows that these inexpensive LLS sensors are more than a useless gimmick. Sure, they lack the bells and whistles of the professional sensors costing 400 times more, and certainly should not be used as a basis for legal action. But they are able to give a nice picture of the ambient fine particle concentration and its variation. If you want to impress, strike a match (or light a candle) and blow out the flame in front of a sensor. You will be surprised by the spectacular peak values!

Plume Labs Flow: a round-trip record

October 14, 2018

In this blog post I report on a first round trip made with the Flow sensor attached to my belt and my iPhone recording the Flow’s readings.

The picture shows the FLOW in its charging cradle and its packaging. The leather buckle is used to attach the device to a backpack, a belt or similar. The dimensions of the silent FLOW are: height = 90 mm (or 140 mm with the leather buckle), base 40 mm x 25 mm, weight = 70 g. There seems to be a miniature, inaudible fan inside. More info here and here.

  1. Introductory comments

The trip was made on the afternoon of 13 October 2018 in fine weather: blue sky, a moderate temperature of 27°C, low wind, low humidity. I started at the parking lot near the entrance of the Echternach lake and returned to the same point. Here is a first screenshot, made at home later in the afternoon, of the iPhone displaying the Flow app:

The trip started at approx. 2:45 pm local time (12:45 UTC); the green line to the left of the red start point corresponds to the car trip from my home to the lake. The AQI of 32 represents the average over the whole recording (so probably including the green part at the left). Let me recall that the AQI of a set of pollutants is the specific AQI of the pollutant having the highest AQI (it is NOT an average or weighted sum of all the sub-AQIs!). If you are confused, please re-read my 5-part series on AQI.

Delving deep into Plume Labs’ poor explanations of their AQI (which they sometimes label PAQI), I presume that they use more or less the AQI breakpoints defined by the EPA, i.e. the EAQI. So if you want to know the concentration of a relevant pollutant, you must use the following EPA table in reverse, going from AQI to concentration:

The Plume Labs plot uses the same 7 categories and the same colors as EPA does.
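
For readers who want to do this inversion themselves, here is a minimal sketch of the piecewise-linear inverse using the EPA 24-hour PM10 breakpoints; this is my own illustration of the method, not Plume Labs’ code, and other pollutants need their own breakpoint tables:

```python
# Sketch: invert the EPA AQI interpolation for PM10, i.e. go from an AQI value
# back to a 24-hour concentration in ug/m3, using the published EPA breakpoints.

# (AQI_low, AQI_high, conc_low, conc_high)
PM10_BREAKPOINTS = [
    (0,   50,    0,  54),
    (51,  100,  55, 154),
    (101, 150, 155, 254),
    (151, 200, 255, 354),
    (201, 300, 355, 424),
    (301, 400, 425, 504),
    (401, 500, 505, 604),
]

def pm10_from_aqi(aqi):
    """Piecewise-linear inverse of the EPA AQI formula."""
    for i_lo, i_hi, c_lo, c_hi in PM10_BREAKPOINTS:
        if i_lo <= aqi <= i_hi:
            return c_lo + (aqi - i_lo) * (c_hi - c_lo) / (i_hi - i_lo)
    raise ValueError("AQI outside the 0-500 range")

print(round(pm10_from_aqi(100)))   # 154 ug/m3, the top of the "Moderate" band
print(round(pm10_from_aqi(237)))   # ~380 ug/m3, "Very Unhealthy" (cf. the peak in section 2)
```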

2. The extraordinary PM10 peaks at the start.

The next picture shows the 2 very high PM10 peaks recorded at the start of the trip, i.e. not far away from the parking lot:

First notice that the NO2 and VOC levels are zero or close to zero (which is to be expected!). The AQI peak at the cursor corresponds to a PM10 fine particle concentration of about 380 ug/m3 when using the EAQI table to get the concentration from the AQI, and thus to a “Very Unhealthy” situation. This is a very high and unexpected PM concentration for a semi-rural location without any industry. There is a relatively busy road about 250 m away (the main road from Luxembourg-City to Echternach), but I do not expect it to be such an important source of PM10 (and the low NO2 seems to confirm that traffic does not play a role here). The surroundings of the lake are mostly meadow and forest on the West side (left), and meadows on the East side (right); the wind was more or less blowing from the West. There is a small road (label 378) on the East side of the lake with practically zero traffic. So we have here something close to a mystery, as comparisons made at home with 3 other sensors do not show an exaggeration of PM10 by the Flow. Are the PM10 particles of natural origin, such as dust blown over by wind gusts from the dry meadows?

3. An unexpected NO2 peak

The next picture shows a curious NO2 peak in the last third of the trip:

The cursor corresponds to a position close to a small parking lot and a picnic area with 3 public barbecues, one of them being active (and smelly!). My best guess is that the plumes from this barbecue caused the higher NO2 concentration of about 56 ug/m3. The small peak to the right of the cursor at 3:15 pm (15:15) is caused by higher PM2.5 (AQI 118, about 38 ug/m3), not NO2, and is possibly also related to the barbecuing.

So it seems plausible that the Flow sensor correctly recorded a footprint of the nuisances caused by an open barbecue burning charcoal.

4. Conclusions

The Flow app is a nice feature, but I do have some serious complaints:

1. Why does the app not show, in addition to the AQIs, the concentrations of the pollutants? There are so many different AQIs (the EPA’s EAQI, the Chinese AQI, the European CAQI etc.) that simply stating that the AQI is a qualifier of the pollution is not enough.

2. Why does the app not allow downloading the data, which must be stored somewhere in the smartphone’s memory?

3. Why does Plume Labs not give technical details on the sensors used in the device? How are its readings influenced by humidity and temperature?

There is an avalanche of low-cost sensors on the market, and one should not expect too much regarding accuracy. I made some short comparisons with three other PM sensors in my office (two SDS011 and a SNDWAY SW-825). Using the EPA table for the Flow AQIs, a typical situation for the PM2.5 concentration in ug/m3 is: FLOW = 0.6, SW-825 = 3, SDS011 = 3.6 and 2.6.

The FLOW PM2.5 AQI was 2.5, which would correspond to 0.6 ug/m3.

These big relative differences are not shocking, as the absolute differences are small, and one should not expect identical measurements even from a batch of same-model sensors. But as a general rule I suggest taking the readings of all these sensors with some healthy skepticism. That being said, the FLOW is a very sexy device and easy to use.