The Air Quality Egg is a community-developed air quality sensor device. At its low cost it clearly cannot compete with far more expensive "institutional" air quality sensor networks on their own terms; instead it offers a new, collaborative means of monitoring air quality through a global community of air quality sensor enthusiasts.

The data quality offered by the Air Quality Egg sensor device will necessarily be a trade-off between a number of considerations, and it is important to understand the implications. One unique aspect of the Air Quality Egg is our ambition to avoid having to calibrate every sensor we ship, since that would require substantial effort and access to expensive specialist equipment. Instead we will explore ways of using our potentially large network of sensors to compensate for the large range of readings from individual sensors.
However, performing simple sensor calibrations at the factory may still be the lowest-cost way to calibrate, and would lend reasonable credibility to the readings. See the thread on what has been coined "Blind Calibration".

This page summarises some key approaches with which we could control and improve data quality at varying levels of effort and cost. We will not pursue all of these, but we will aim to investigate them well enough to make some informed decisions (bearing in mind that we usually won't have the time or resources for extensive studies). We essentially want to determine: how good do our measurements need to be? What is the best way of getting there?

To identify which of the options below we want to focus on we could later capture for each of them:
  • Requirements (resources, expertise)
  • Our contacts with relevant experts
  • Links to mailing list discussions with further details
  • Links to any further documentation

1. Hardware design


Component Choice


The most effective way of improving data quality is through careful choice of sensor components. We document the sensor components we are evaluating at Hardware/Sensors, which also has links to data sheets.

There are a number of considerations in sensor component choice:
  • Sensor types (see Measured Phenomena)
  • Sensor accuracy/precision, resolution (as specified in the data sheet)
  • Maximum sensing frequency
  • Environmental specifications (e.g. which min/max temperatures can it sustain?)
  • Chip interface and footprint
  • Power usage
  • Cost
  • ...

George Yu posts details of various typical sensor characteristics in the "sensor fundamentals" thread:
  • Cross sensitivity: "most of these metal oxide sensor readings are not specific to any single gas. So if you take a CO sensor, and breath alcohol or water vapor on it, the sensor will read a high CO concentration."
  • Sensor stability: "as the weather conditions such as temperature or humidity changes, a CO or NOx metal oxide sensor will drift significantly. The only way to minimize that is to build a temperature and humidity offset calibration curve." (A minimal sketch of such a compensation curve follows this list.)
  • Sensor uniformity: "for metal oxide sensors, the manufacturing process makes it very difficult to make two sensor to have exactly the same response. The gas concentration reading could be +/-30% in variations."
  • Power consumption: "metal oxide sensors uses an internal heater to heat the sensing element so it require large amount of power. It is difficult to use only batteries to supply power for the sensors."
  • Sensor type: Electrochemical sensors may address some of the accuracy problems of metal oxide sensors: "electrochemical sensor can easily measure CO from 0 to 400ppm at 0.1ppm resolution. we had done a lot of those gas flow experiments in the lab to compare with experimental sensors." They use less power, and they require much less frequent calibration.
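
As a rough illustration of the temperature/humidity offset calibration curve George Yu describes, the sketch below fits a simple linear clean-air baseline model and uses it to compensate subsequent readings. All data and names here are illustrative assumptions, not measured AQE values; real metal oxide responses would likely need a non-linear model per sensor type.

```python
import numpy as np

# Illustrative (made-up) data: raw metal oxide sensor resistance (kOhm)
# logged in clean air, alongside temperature (C) and relative humidity (%).
temp = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
humidity = np.array([30.0, 60.0, 40.0, 70.0, 35.0, 65.0])
resistance = np.array([123.0, 104.0, 104.0, 85.0, 91.0, 72.0])

# Fit a simple linear offset model: R_clean ~ a*T + b*RH + c.
A = np.column_stack([temp, humidity, np.ones_like(temp)])
coeffs, *_ = np.linalg.lstsq(A, resistance, rcond=None)

def expected_clean_air_resistance(t, rh):
    """Baseline resistance predicted for the current temperature/humidity."""
    return coeffs[0] * t + coeffs[1] * rh + coeffs[2]

# A raw reading is then interpreted relative to the compensated baseline
# rather than against a single fixed clean-air resistance.
ratio = 85.0 / expected_clean_air_resistance(22.0, 45.0)
print(f"Rs/R0 (compensated): {ratio:.2f}")
```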

The Hardware/Sensors page links to data sheets and other documents, including a "Pollutant Table" report by Sonoma Technologies, "Desired Characteristics and Information about Major Air Pollutants for Device Manufacturers to Use in Creating Instruments for Non-regulatory Monitoring (e.g., Citizens, Schools, NGOs)", which recommends the following tolerances:
  • CO to 9ppm with an accuracy of 1ppm. Normal atmospheric ranges are up to 20ppm
  • NO2 to 100ppb with an accuracy of 1ppb. Normal atmospheric ranges are up to 400ppb
  • Etc

A "NO2 Sensors Report" linked on the Hardware/Sensors page summarises calibration experiments for No2 sensors, "report from a project we did for the US EPA to characterize the quality of low cost sensor technologies".

Understanding Measurement Tolerances


As part of our prototyping we should document the actual sensor accuracy/precision of our sensing device. (These numbers will differ from the data sheet specifications of the individual sensor chips.) A good understanding of these tolerances and uncertainties is a necessary starting point for any further data quality improvements, including any data modelling of the results; a minimal sketch of such an assessment follows the list below.

For each observed air quality property:
  • What is our measurement accuracy/precision, resolution (e.g. as measured against reference devices)?
  • How does this drift over time, under varying environmental factors (temperature/humidity, ...)?
  • What are the variances of multiple uncalibrated sensor chips of the same type?
  • What are the dispersion characteristics of the observed phenomenon, and what are the implications for the density of our sensor network setup?
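
Assuming we have an Egg co-located with a reference instrument, a first summary of its tolerances could be as simple as the sketch below. The function, field names, and example numbers are our own illustration, not an established AQE procedure.

```python
import numpy as np

def tolerance_summary(egg_ppm, reference_ppm):
    """Summarise an AQE feed against a co-located reference instrument.

    Both arguments are time-aligned 1-D arrays of concentration
    readings taken over the same period.
    """
    egg = np.asarray(egg_ppm, dtype=float)
    ref = np.asarray(reference_ppm, dtype=float)
    error = egg - ref
    return {
        "bias": float(np.mean(error)),      # systematic offset (accuracy)
        "precision": float(np.std(error)),  # spread of the error
        "rmse": float(np.sqrt(np.mean(error ** 2))),
        # Drift: slope of the error over the sample index; a clearly
        # non-zero slope suggests the sensor wanders relative to the
        # reference over the measurement period.
        "drift_per_sample": float(np.polyfit(np.arange(len(error)), error, 1)[0]),
    }

# Example: CO readings (ppm) from an Egg vs. a reference monitor.
print(tolerance_summary([2.1, 2.4, 3.0, 2.8], [2.0, 2.2, 2.7, 2.4]))
```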

César in "Draft AQE Sensor Specification": "I was told yesterday that official sensors in measurement stations accuracy should be 15% (or below), except for PM 25%."

César, in "Regarding Madrid air quality session": "CO dispersion is quite local so it varies lots. Jorge Paz suggested to use O3, NO2 and NOx to auto calibrate sensors, comparing each gas against the others in a regular day schedule to detect broken sensors."

César in "IoT Madrid Meetup group presentation": "Checking AQE data, we discovered that sun [i.e., heat] distorts heavily NO2 sensor measurement. It is necessary to put AQE sensor units under cover or in the shadows, to make measurement conditions stable. Otherwise we will get false positives that render data unusable."

chduke, in his blog post "Pattern Recognition for the Air Quality Egg – Part two", finds that "humidity effects significantly the quality of air AQ sensor readings; high humidity indicates low AQ readings".

David H offers help & equipment to assess and document AQE sensor measurement tolerances in a controlled environment: "I am *more* than happy to run alpha or beta designs through a quick and dirty response-check protocol. I've got access to a fume hood, calibration gases, etc., and I can cover the cost since it's related to my thesis work and a grant I'm funded on. There should be some variation from sensor to sensor, and from module to module given tolerances of resistors, etc. It'd be very interesting to quantify that and, I think, valuable for maximizing the collective information from locally deployed sets of Eggs. If people can put a prior on the likely distribution of the responses to a given stimulus then that's very useful."

Charlene of the Sensaris ECOpod project is interested in exchanging test data of sensor readings: "We are making tests to check our sensors sensitivity and we are looking for a calibration partner. We have some clue but we won't implement it into the software before [completing] the calibration process. So having the data from your side would [be] a great thing to compare it with the results [we] will get from here. If everything is coherent, it will help a lot into the improving process of our sensors." Cf. the "sensor is not enough" thread.

David H points out in "EPA offering Air Sensor Eval this Summer" that the EPA offers to test community/DIY air quality sensors (deadline was 30 June 2012), see also citizenair.net (PDF): "Air pollution sensor developers are invited to participate in EPA’s Air Sensor Evaluation and Collaboration event, beginning in July 2012. Through this program, developers attend a kick-off meeting and get EPA to evaluate your sensors in a controlled laboratory setting at EPA."

Understanding the Impact of Calibration


(This includes comments that people made on the effects of calibration for particular sensor types, and references people who offered to help with this.)

César is currently having one of our prototype sensors calibrated by Jorge P.; cf. "Regarding Madrid air quality session". Such calibrated devices should undergo the same evaluation procedures as outlined in "Understanding Measurement Tolerances" above.

Neilh comments in the "Calibration Gas (NO2) sensors" thread: "Looking at the MiCS-2710 base resistance and sensitivity its obvious to me that the Gas sensors need calibrating at manufacturing for the subsequent sensor readings to have any validity." In the same thread David H. offers help and equipment for NO2 sensor calibration experiments, but also points out that very accurate NO2 readings may not be strictly necessary: "Pure NOx doesn't have extremely toxic effects at ambient levels. But, it's a decent marker for the stew of aerosols that does cause health effects, and as such, it's very useful for developing inference about health effects and about exposures to traffic-related pollution in general."

David H: "I think it's crucial to approach calibration as a community-by-community thing. There isn't one calibration; there's a different one (sometimes more than one) depending on the community. Hence the need to look at our opportunities for colocation / controlled testing / rough experiments as distinct opportunities ... "

2. Software design


Traceability


We can capture a fair amount of contextual data for each sensor feed in order to support data analyses of the sensor data. This roughly falls into two groups:
  • Hardware info like device ID/type/version etc. This data is provided by the "manufacturer": the AQE project group.
  • Sensing context descriptions like location, indoors/outdoors placement, mounted height, etc. This data is provided by the "user": the person that sets up and maintains the device.

We still need to determine a data model and format for these; this may involve multiple formats, subject to device limitations and data integration needs.

Candidate formats and metadata conventions have been discussed on the mailing list, e.g. in "AQE machine tags: what metadata conventions already exist?". Extensive additional documentation is provided by the W3C Semantic Sensor Network Incubator Group (on their wiki, in their final report from June 2011, etc.)
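
To make this concrete, a feed's contextual record might look something like the sketch below. Every field name here is a hypothetical placeholder for discussion, not an agreed AQE schema.

```python
import json

# Hypothetical feed metadata; all field names are placeholders for
# discussion, not a settled AQE convention.
feed_metadata = {
    "hardware": {  # provided by the "manufacturer" (the AQE project group)
        "device_id": "aqe-000123",
        "device_type": "Air Quality Egg",
        "hardware_version": "1.0",
        "sensors": ["NO2", "CO", "temperature", "humidity"],
    },
    "context": {  # provided by the "user" who sets up the device
        "lat": 52.5200,
        "lon": 13.4050,
        "placement": "outdoors",
        "mounted_height_m": 2.5,
        "sheltered": True,  # cf. César's note above on shading NO2 sensors
    },
}

print(json.dumps(feed_metadata, indent=2))
```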

Calibration at Home


E.g. as a local community project, where we train a few individuals who then run workshops for larger groups; or as a series of training videos.

Requirements:
  • Circuit design needs to accommodate this (e.g. offer potentiometers instead of fixed resistors)
  • Requires ability to set up controlled environments (may only be possible in some very limited cases)
  • Requires visual feedback of the results (e.g. using Cosm graphs, which will introduce latency)

In the "Calibration Gas (NO2) sensors" thread David H outlines potential procedures for PM sensor calibration based on the "zero check procedure" of the UCB Particle Monitor SOP: "Page 5: the zero check is just to leave the particle monitor in a Ziplock bag, undisturbed, for 40 minutes". This is then followed by a blind calibration procedure to determine sensor gain. (See the "Network Calibration" section below for details.)
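
Turning the bag readings into a correction could then be as simple as the following sketch (our illustration; the SOP only specifies the procedure itself):

```python
import numpy as np

def zero_offset(bag_readings):
    """Estimate a PM sensor's zero offset from the Ziplock bag check.

    `bag_readings` are raw readings taken while the monitor sits
    undisturbed in the sealed bag (nominally particle-free air),
    so their average is an estimate of the sensor's zero offset.
    """
    return float(np.mean(bag_readings))

offset = zero_offset([3.1, 2.9, 3.3, 3.0, 2.8])
corrected = 41.0 - offset  # apply the offset to subsequent raw readings
print(f"offset={offset:.2f}, corrected reading={corrected:.2f}")
```

The sensor gain would then come from the blind calibration step referenced above.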

Augmentation with "Human Sensors"


As a basic control mechanism, but also potentially as an additional data source, we could ask users to augment sensor readings with qualitative ("social") data, e.g. a subjective rating of the current air quality on a graduated scale.

3. Sensor setup

User Documentation: "How to set up my AQE"


As a basic but very essential step in optimising our sensor data quality we should provide thorough and clear guidance on how to set up an Air Quality Egg, and how to document the setup when publishing sensor data.

See also: Documentation

Considerations:
  • We should make it very clear to AQE users what the implications are of setting up the AQE in different ways (e.g. indoors vs. outdoors.)
  • We should make it easy for them to find out how to monitor their AQE feed.
  • And we should encourage people to participate in the AQE community beyond merely setting up the egg.
  • We should provide guides in different languages, potentially use an online translation platform (e.g. http://translatewiki.net/wiki/Main_Page)

The documentation should:
  • describe the intended uses of the AQE (indoors, outdoors, ...)
  • describe how to mount an AQE, how to connect it to the Internet, how to annotate the AQE data feed with contextual data (this last step is crucial)
  • describe where to find data and graphs of the sensor on Cosm, and where to find other users' published data
  • link to a list of applications using AQE data (e.g. mapping applications), and where to find more

Randomised Controls


Calibrate a few devices before sending them out. Distribute them randomly, but keep track of their location (e.g. based on a device identifier.)

Redundant Sensors


Ship some devices with 3-5 uncalibrated sensors instead of just one. Distribute them randomly, but keep track of their location (e.g. based on a device identifier.)

An option to always use redundant sensors for the AQE was discussed early on in the "One or two sensors?" thread, but a decision was made to keep a single-sensor design.

A simple alternative that doesn't require hardware changes: set up multiple AQE devices in select locations and then observe their discrepancies.
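
A minimal sketch of such a discrepancy check over co-located feeds, assuming time-aligned readings (device IDs and numbers are made up):

```python
import itertools
import numpy as np

def pairwise_discrepancy(feeds):
    """Mean absolute difference between each pair of co-located feeds.

    `feeds` maps a device ID to a time-aligned array of readings; a
    consistently large value flags a device that disagrees with its
    co-located peers.
    """
    result = {}
    for (id_a, a), (id_b, b) in itertools.combinations(feeds.items(), 2):
        diff = np.abs(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))
        result[(id_a, id_b)] = float(np.mean(diff))
    return result

feeds = {
    "egg-01": [51, 55, 60, 58],
    "egg-02": [49, 54, 61, 57],
    "egg-03": [70, 75, 82, 79],  # disagrees with the other two
}
print(pairwise_discrepancy(feeds))
```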

Co-locating Sensor Hardware


When prototyping hardware designs we can evaluate them by co-locating them with high quality sensors, and then observe behaviour over time.

A number of organisations offered to help with this:

Network Density


Encourage overlapping sensor placement, e.g. by fostering local communities. Is this practical? How close is close enough? Cf. Measured Phenomena.

4. Monitoring, Evaluation, and Analysis


Analytical models that make use of the resulting sensor network data. These are post-processing stages that rely purely on published sensor data, and are mostly independent of individual sensing devices.

Detection of Measurement Errors


To correct or ignore any measurement errors that were identified during evaluation. It is unclear to what extent this can be done automatically.

Adjustment of Measurement Bias and Drift


To analytically correct for e.g. temperature fluctuations of particular sensors.
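
For instance, if co-location experiments yield a per-sensor temperature coefficient, a feed could be normalised in post-processing roughly as follows (a sketch assuming a simple linear temperature dependence; names and numbers are ours):

```python
import numpy as np

def temperature_correct(readings, temps, temp_coeff, ref_temp=20.0):
    """Remove a known linear temperature dependence from a feed.

    `temp_coeff` is this sensor's estimated response per degree C
    (e.g. from co-location data); readings are normalised to what
    they would have been at `ref_temp`.
    """
    readings = np.asarray(readings, dtype=float)
    temps = np.asarray(temps, dtype=float)
    return readings - temp_coeff * (temps - ref_temp)

# Example: a sensor that reads ~0.5 units higher per degree C.
print(temperature_correct([52.0, 55.0, 58.0], [20.0, 25.0, 30.0], temp_coeff=0.5))
```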

Sensor Network Cross-Validation


Compare results of nearby sensors within our own sensor network. (Not sure if "cross-validation" is the best term for this.)
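
Whatever the term, a first robust version of such a check could compare each sensor against the consensus of its neighbours, e.g. as in this illustrative sketch (the threshold k and the assumption that the sensors see roughly the same air are ours):

```python
import numpy as np

def flag_outlier_sensors(readings_by_id, k=3.0):
    """Flag sensors deviating from nearby peers by more than k robust sigmas.

    `readings_by_id` maps sensor IDs to their current readings; the
    sensors are assumed close enough to observe roughly the same air.
    """
    ids = list(readings_by_id)
    values = np.array([readings_by_id[i] for i in ids], dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median)) or 1e-9  # guard against zero MAD
    robust_sigma = 1.4826 * mad  # MAD -> stddev equivalent for normal data
    return [i for i, v in zip(ids, values) if abs(v - median) > k * robust_sigma]

readings = {"egg-01": 42.0, "egg-02": 45.0, "egg-03": 44.0, "egg-04": 120.0}
print(flag_outlier_sensors(readings))  # -> ['egg-04']
```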

Collaborative Monitoring


Human oversight may be useful when monitoring the performance of our growing sensor network, and identifying outliers that may indicate malfunctioning sensors.

Such monitoring could be done by sensor owners (we encourage them to frequently observe their own graphs and flag any malfunction) or as a collaborative effort open to a larger audience (e.g. using platforms like Experimental Tribe.)

Nafis writes: "You should definitely check out what is already done in the weather arena. CWOP (network of citizen weather stations) has several data quality checks (http://www.wxqa.com/aprswxnetqc.html). Philip Gladstone has done some great work comparing local stations (eg. for my old 1-wire station: http://weather.gladstonefamily.net/site/C3725)"

Network Calibration


nicox mentions the research domain of "online sensor calibration" (Google Scholar search): techniques to auto-calibrate deployed sensors without having to take them to a lab or other controlled environment.

Martin suggests the term "network calibration": combining our connected network of sensors with a central data processing application that aggregates all sensor data, then applies corrective transformations to it based on a thorough understanding of individual sensor characteristics (e.g. the impact of temperature and humidity on sensor readings) and a spatial proximity model of all sensor readings.

In the "Calibration Gas (NO2) sensors" thread David H links to an excellent blind calibration paper by Balzano and Nowak, "Blind Calibration of Networks of Sensors: Theory and Algorithms" (PDF), which defines blind calibration as "automatic methods for jointly calibrating sensor networks in the field, without dependence on controlled stimuli or high-fidelity groundtruth data". (See also Balzano's more comprehensive MSc thesis, "Addressing Fault and Calibration in Wireless Sensor Networks".) Neilh started a discussion thread prompted by this paper, "Blind Calibration and measurement theory", in which David B offers his assistance and César digs up more research: "System-level Calibration for Data Fusion in Wireless Sensor Networks", "On the Fly Calibration of Low Cost Gas Sensors", and "Energy-Aware Gas Sensors Using Wireless Sensor Networks" ("for cool optimization techniques").

From Tan et al, "System-level Calibration for Data Fusion in Wireless Sensor Networks": "Balzano et al. [3] theoretically prove that the sensors can be partially calibrated using blind measurements. However, the blind approaches often require that the deployment is dense enough [3, 4]. As a compromise, the semi-blind calibration approaches [15, 20, 21] require partial ground truth information. In [15, 21], the sensor locations are calibrated using accurate or coarse position information of a subset of nodes. In [20], an uncalibrated sensor calibrates itself when rendezvousing a calibrated sensor. The calibration approach presented in this paper falls into the semi-blind category, in which sensors can calibrate their energy measurements as long as the physical position (instead of the accurate energy profile) of the target is known"

César links to "CaliBree: A Self-calibration System for Mobile Sensor Networks" (PDF). A quote: "several factors make the calibration challenging. Firstly, the calibration can only occur when the ground truth node and the uncalibrated node are experiencing identical sensing environment. This is necessary because the comparison between calibrated and uncalibrated data is only meaningful when the same input to the sensors is applied. Secondly, the calibration rendezvous is complicated by the existence of the sensing factor. The sensing factor is identified by the tendency of a physical phenomenon to be localized to a small region around the entity taking the measurement."

To summarise:
  • There is established research on the methods and efficacy of blind calibration, but we don't yet have a clear understanding of how it applies to the specific sensors we're using. This would need evaluation and further experiments by a specialist.
  • A fundamental prerequisite for this blind calibration approach is the use of redundant sensors (in the same device, or by co-locating multiple devices); a toy sketch of the offset-only case follows this list.
  • It is as yet unclear to us *how* close those redundant sensors need to be; this will e.g. depend on the observed phenomenon and its dispersion characteristics.
  • Mixed approaches (using ground truth, mobile sensors, user calibration meet-ups, etc.) could compensate for a lack of spatial sensor density, but these need further review of the current state of research (starting with the papers listed above.)
  • Neilh observes an additional challenge relating to the analysis and aggregation of such sensor readings: blind calibration techniques generally appear to assume linear sensor responses, but "most sensors are not linear across their range of interest, but maybe there are linear relationships that can be determined from knowledge of the materials (!!) - however drift can be linear, as it applies to all parts of the sensor (?)."
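
As a toy illustration of the core idea, the sketch below works through the offset-only case in the spirit of Balzano and Nowak: if the true signals across the network lie in a known low-dimensional subspace, the per-sensor offsets can be recovered (up to a component inside that subspace) without any ground truth. This is our simplification for intuition, not their full algorithm, and it sidesteps the non-linearity issue Neilh raises.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deployment: n sensors, T time steps. Key blind-calibration
# assumption: the true field at the sensor locations lies in a known
# low-dimensional subspace (here: a smooth spatial gradient).
n, T, r = 8, 200, 2
positions = np.linspace(0.0, 1.0, n)
basis = np.column_stack([np.ones(n), positions])  # n x r subspace basis
P, _ = np.linalg.qr(basis)                        # orthonormalise

true_signals = P @ rng.normal(size=(r, T))        # signals inside the subspace
offsets = rng.normal(scale=0.5, size=n)           # unknown per-sensor offsets
measurements = true_signals + offsets[:, None] + 0.01 * rng.normal(size=(n, T))

# Project the measurements onto the orthogonal complement of the signal
# subspace: the true signal vanishes there, so what remains is (mostly)
# the offset vector; averaging over time suppresses the noise.
proj_perp = np.eye(n) - P @ P.T
offset_estimate = (proj_perp @ measurements).mean(axis=1)

# Offsets are only identifiable up to a component inside the subspace,
# so we compare their components outside it.
print(np.allclose(offset_estimate, proj_perp @ offsets, atol=0.05))  # True
```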

Spatial Sensor Network Data Models


Build geo-statistical models or interpolation surfaces for analyses and visualisations. These are also a good starting point for comparative studies against other data sets, cf. below.
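
As a starting point before full geo-statistical models (e.g. kriging), even simple inverse distance weighting gives a first interpolation surface. A minimal sketch with made-up coordinates and values:

```python
import numpy as np

def idw_surface(sensor_xy, sensor_values, grid_xy, power=2.0):
    """Inverse distance weighted interpolation of scattered sensor readings.

    A simple first model; geo-statistical methods such as kriging would
    additionally model spatial correlation and give uncertainty estimates.
    """
    sensor_xy = np.asarray(sensor_xy, dtype=float)   # (n, 2)
    values = np.asarray(sensor_values, dtype=float)  # (n,)
    grid_xy = np.asarray(grid_xy, dtype=float)       # (m, 2)
    # Distance from every grid point to every sensor: shape (m, n).
    d = np.linalg.norm(grid_xy[:, None, :] - sensor_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power           # guard against d == 0
    return (w @ values) / w.sum(axis=1)

sensors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # sensor locations (x, y)
no2_ppb = [40.0, 60.0, 50.0]                    # their current readings
grid = [(0.5, 0.5), (0.1, 0.1)]                 # points to interpolate
print(idw_surface(sensors, no2_ppb, grid))
```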

Examples of existing applications/visualisations:

Comparative Studies


Correlation with data from other sensor networks. E.g.: