
Data Calibration and Data Uncertainty

Recalibrating and estimating uncertainties of our climate data records


Why and how do we recalibrate and estimate uncertainty?

Published on 29 June 2020

Last updated 12 October 2020

Why do we recalibrate data?

Obtaining information about long-term climate variability and change most often requires combining time series of observations made by different satellite sensors. Most satellite sensors are calibrated pre-launch, where calibration means establishing the basic model for translating a measured signal (e.g. in counts) into the required measurand (e.g. radiance). Although such models often allow for in-orbit corrections, for example gain changes based on on-board measurements, there are many potential problems with relying on these pre-launch models. Long-serving historic sensors often behave differently in orbit than during pre-launch calibration. Some effects only manifest while orbiting the Earth, such as the effect of warming or cooling of the sensor on the measurements. By analysing the measurement time series as a whole, one can develop an in-depth understanding of these effects. In addition, post-launch recalibration has the advantage that the measurements to be recalibrated can be compared against other, sometimes superior, satellite measurements. In summary, the recalibration of individual instrument data is beneficial for the following reasons:

  • it ensures the use of the same calibration model over the entire period of the data record and for each satellite sensor considered, which is a prerequisite for the long-term consistency of the data record;
  • it helps to identify issues and increases the confidence in the operational calibration of individual instruments;
  • it provides the basis for a harmonised recalibration of a sensor time series, which is a prerequisite for the derivation of geophysical parameters from different satellites.
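To make the role of the calibration model concrete, the sketch below shows a simple linear counts-to-radiance model of the kind described above; recalibration adjusts such coefficients after launch. The function name and all numerical values are hypothetical, chosen purely for illustration.

```python
# A minimal sketch of a linear calibration model, translating a measured
# signal in counts into radiance. Recalibration would adjust the gain (and,
# for some sensors, an offset or non-linearity term) after launch.
# All names and values here are hypothetical, for illustration only.

def counts_to_radiance(counts, gain, space_counts):
    """Linear calibration: radiance = gain * (counts - space_counts)."""
    return gain * (counts - space_counts)

# Hypothetical Earth view of 600 counts, deep-space view of 40 counts,
# and a gain of 0.5 radiance units per count:
radiance = counts_to_radiance(counts=600, gain=0.5, space_counts=40)
print(radiance)  # 280.0
```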

Fig A. Geostationary and polar orbiting satellites operated since the 1970s that are used by EUMETSAT to build multiple-satellite climate data records.

What are the principles of recalibration methods?

Recalibration is defined as the process of adjusting calibration coefficients and/or obtaining a new calibration model to update or replace the operational calibration used for the original measurements (see https://research.reading.ac.uk/fiduceo/glossary/ (accessed on 28 July 2020)). The main principle of many recalibration methods is to compare measurements of one satellite sensor to similar, but more accurate, measurements of another satellite sensor or of a ground-based target. When comparing two measurements, one must take into account that these measurements are not identical by definition. This is partly due to uncertainties in the spatial and temporal collocation (the collocated pairs are also referred to as match-ups) that is part of any satellite-to-satellite or satellite-to-ground comparison, and partly due to differences in the spectral characteristics of the sensors compared: each sensor has a unique spectral behaviour, even when the sensors nominally observe the same ‘spectral band’. If the recalibration aims to reconcile the calibration of different sensors while conserving their unique spectral characteristics, the recalibrated sensor series is referred to as a ‘harmonised’ data record (see section “What type of recalibrated climate data records exist?”). The harmonisation process involves refitting the calibration parameters (recalibration) using match-ups, taking into account all errors and correlations of both the instrument and the match-up process. If the recalibration instead aims to ‘correct for’ differences in spectral characteristics, by translating the observations of the test sensor into observations of a reference sensor, the recalibrated sensor series is a ‘homogenised’ data record (see section “What type of recalibrated climate data records exist?”).
When neither superior space-based nor reference ground-based observations exist, simulating the sensor signals using reanalysis data and comparing the simulations to the real measurements also helps to analyse the quality of the sensor data.
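As an illustration of the refitting step, the sketch below performs a weighted least-squares fit of linear calibration coefficients against reference radiances from match-ups. All numbers are invented for illustration, and a real harmonisation additionally models error correlations between match-ups.

```python
# A minimal sketch of refitting calibration coefficients from match-ups:
# collocated pairs of test-sensor counts and more accurate reference
# radiances. A weighted least-squares fit of a linear calibration model
# yields recalibrated gain and offset. All values are hypothetical, and a
# real harmonisation would also account for correlated match-up errors.

import numpy as np

# Hypothetical match-ups: test-sensor counts, reference radiances, and the
# uncertainty of each reference radiance.
counts = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
reference = np.array([31.0, 79.0, 131.0, 179.0, 231.0])
sigma = np.array([2.0, 2.0, 2.0, 2.0, 2.0])

# Weighted linear fit of: reference ≈ gain * counts + offset
design = np.column_stack([counts, np.ones_like(counts)])
weights = 1.0 / sigma
coeffs, *_ = np.linalg.lstsq(design * weights[:, None],
                             reference * weights, rcond=None)
gain, offset = coeffs
print(gain, offset)  # ≈ 0.5, ≈ -19.8
```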

What type of recalibrated climate data records exist?

Two groups of users of recalibrated climate data records can be distinguished. On the one hand, there are users who assimilate these records into a dynamical model for climate reanalysis. On the other hand, there are users who use these records as input for the retrieval of geophysical parameters. Although both groups pursue the same scientific goal, i.e. providing data records that enable the analysis of long-term variability and trends in climate variables, they often differ in their requirements.

If used for assimilation into reanalysis, the best possible calibrated data, including SI-quantified uncertainties, are required, in which the characteristics of the individual sensors are maintained. Harmonised data records (see https://research.reading.ac.uk/fiduceo/glossary/ (accessed on 28 July 2020)) best address the requirements of this application. Users of harmonised data records need to realise that, when they plot time series of harmonised data, they will see artificial jumps (see Fig B). These jumps originate from the known and characterised differences between the sensors, e.g. differences in their spectral response. Differences may remain that are either more difficult to characterise, such as differences due to sensor non-linearity, or that are unknown. Since the assimilation procedure relies on forward radiative transfer calculations, it is possible to take the sensors' spectral responses into account and correctly model the step changes in the time series associated with them.

Fig B. Graphical illustration of the difference between time series of an originally calibrated and a harmonised data record.

For the retrieval of geophysical parameters, well-calibrated data including SI-quantified uncertainties are also needed. However, this group often prefers data records in which different sensors look the same when viewing the same location at the same time. This is especially true for statistical retrievals that do not involve radiative transfer calculations. It is also true for cases, such as users of visible-channel data, where the retrieval algorithms would rely on computationally very intensive radiative transfer calculations, which realistically cannot be performed as part of the retrieval process. Homogenised data records (see https://research.reading.ac.uk/fiduceo/glossary/ (accessed on 28 July 2020)) best address such needs. If the sensor-to-sensor biases are mainly explained by sensor-to-sensor differences in spectral response, the time series of homogenised data tend to become temporally stable over invariant targets. However, forcing all sensors to have the same spectral response has drawbacks, as it introduces additional uncertainties, which grow with increasing differences between the sensors' spectral responses (see Fig C). As long as these uncertainties are not too large, homogenised data records can be used for the retrieval of geophysical parameters.

Fig C. Graphical illustration of the difference between harmonised and homogenised calibration. The shaded areas represent the uncertainty associated with the adjustment to the reference sensor (spectral band adjustment). Typically, these uncertainties are largest when the spectral response functions of the actual instrument and the reference instrument differ considerably.
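The spectral band adjustment underlying homogenisation, and the extra uncertainty it introduces, can be sketched as follows. The linear adjustment coefficients and uncertainty values are hypothetical.

```python
# A minimal sketch of a spectral band adjustment used in homogenisation:
# translating a test sensor's observation into what the reference sensor
# would have measured, while propagating an adjustment uncertainty that is
# larger when the two spectral responses differ more. Values hypothetical.

import math

def band_adjust(radiance_test, slope, intercept,
                adjustment_uncertainty, measurement_uncertainty):
    """Apply a linear band adjustment and combine the uncertainties in
    quadrature (assuming the two components are independent)."""
    radiance_ref = slope * radiance_test + intercept
    total_uncertainty = math.hypot(slope * measurement_uncertainty,
                                   adjustment_uncertainty)
    return radiance_ref, total_uncertainty

value, unc = band_adjust(100.0, slope=0.98, intercept=1.5,
                         adjustment_uncertainty=0.8,
                         measurement_uncertainty=0.6)
print(value, unc)
```

Note that the homogenised value carries a larger uncertainty than the original measurement alone, which is what the shaded areas in Fig C represent.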

Why do we estimate uncertainties?

The provision of measurements with rigorous uncertainty estimates is increasingly important for climate monitoring and climate reanalysis. This is true for Fundamental Climate Data Records (FCDRs) of long-term calibrated and quality-controlled sensor data. It is also true for Climate Data Records (CDRs) of long-term quality-controlled geophysical parameters, such as the Essential Climate Variables defined by the WMO Global Climate Observing System (GCOS-138, 2010). Among others, the European Centre for Medium-Range Weather Forecasts (ECMWF) needs uncertainty estimates as part of the assimilation procedures of its climate reanalyses.

Wielicki et al. (2013) discuss the importance of quantitative uncertainty estimates for detecting climate trends. They argue that detecting a long-term climate trend is hampered by the natural variability of the quantity considered. The level of confidence in a detected trend increases with increasing length of the data record. However, even with a perfect observing system, it can take many years before a climate trend can be detected with 95% confidence. To exemplify this, Fig D provides a schematic representation of the time needed to detect a climate trend as a function of the strength of this trend for perfect observations (green), accurate observations (brown), and inaccurate observations (red). Path A shows that observing a weak trend (e.g. 0.1 K per decade) requires a long data record of perfect observations (e.g. 40 years). Path B shows that a long data record of inaccurate observations (e.g. 30 years) may still suffice to detect a strong trend (e.g. 0.5 K per decade), but is not suited to detect a weak climate trend.

Fig D. Schematic representation of the data record length needed to detect a climate trend as a function of the strength of the trend for observations with different accuracies (from Wielicki et al. 2013), discriminating between perfect, accurate, and inaccurate observations.
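The qualitative message of Fig D can be reproduced with a small Monte Carlo experiment: fit a linear trend to synthetic noisy records of increasing length, and find the shortest record for which the trend is significant in 95% of trials. The trend and noise values below are illustrative only and are not the values behind Fig D.

```python
# A minimal Monte Carlo sketch of the idea behind Fig D: the noisier the
# observations, the longer the record needed before a trend can be
# detected with 95% confidence. Trend and noise levels are illustrative
# and are not the values used by Wielicki et al. (2013).

import random

def fit_slope(xs, ys):
    """Ordinary least-squares slope and its standard error."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    intercept = ybar - slope * xbar
    resid = sum((y - slope * x - intercept) ** 2 for x, y in zip(xs, ys))
    return slope, (resid / (n - 2) / sxx) ** 0.5

def years_to_detect(trend_per_year, noise_sd, max_years=100, trials=200):
    """Shortest record length (in years) at which the fitted trend exceeds
    two standard errors in at least 95% of simulated records."""
    for n in range(5, max_years + 1):
        hits = 0
        for _ in range(trials):
            xs = list(range(n))
            ys = [trend_per_year * x + random.gauss(0.0, noise_sd)
                  for x in xs]
            slope, se = fit_slope(xs, ys)
            if abs(slope) > 2.0 * se:
                hits += 1
        if hits / trials >= 0.95:
            return n
    return None

random.seed(42)
accurate = years_to_detect(0.05, 0.02)  # strong trend, accurate obs
noisy = years_to_detect(0.05, 0.20)     # same trend, inaccurate obs
print(accurate, noisy)  # the noisy record needs more years
```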


An innovative approach to creating climate data records with rigorous treatment of uncertainty and stability is to apply the key principles of metrology. The metrology community is responsible for maintaining the International System of Units (SI) in a way that ensures the units are stable over time, uniform worldwide, insensitive to the conditions of measurement, independent of the method used to realise the unit, and able to be improved as technology advances. Therefore, the metrology community has significant experience in achieving long-term stability of measurements, which is what is needed to provide a benchmark for long-term climate records. This is accomplished through three key concepts: traceability, uncertainty analysis, and comparison. A metrological analysis typically involves the quantification of the systematic and random uncertainty components associated with each step in a processing chain. In this way, it is possible to identify the steps in the chain with the largest uncertainties, to understand how these uncertainties propagate through the processing chain for each processing transformation, and to quantify how these uncertainties affect the final product.
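As a concrete illustration of such an uncertainty budget, the sketch below combines per-step random and systematic components of a processing chain. The step names and values are invented; a real FCDR budget distinguishes further effects and correlation structures.

```python
# A minimal sketch of a metrological uncertainty budget for a processing
# chain: independent (random) components combine in quadrature, while fully
# correlated (systematic) components add linearly as a conservative choice.
# Step names and values are hypothetical, for illustration only.

import math

steps = [
    {"name": "calibration", "random": 0.30, "systematic": 0.10},
    {"name": "geolocation", "random": 0.05, "systematic": 0.02},
    {"name": "retrieval",   "random": 0.20, "systematic": 0.15},
]

# Uncorrelated components between steps: root-sum-square.
u_random = math.sqrt(sum(s["random"] ** 2 for s in steps))
# Fully correlated components: linear sum (conservative).
u_systematic = sum(s["systematic"] for s in steps)
# Combined standard uncertainty of the final product.
u_total = math.hypot(u_random, u_systematic)

# The largest single contributor identifies where effort is best spent.
dominant = max(steps, key=lambda s: s["random"] ** 2 + s["systematic"] ** 2)
print(dominant["name"], round(u_total, 3))
```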

The European Union funded project Fidelity and Uncertainty in Climate data records from Earth Observations (FIDUCEO) was the first project to develop metrologically defensible approaches for the production of climate data records from Earth observations with traceable uncertainty estimates (https://cordis.europa.eu/project/id/638822). The approaches developed were used to produce several state-of-the-art FCDRs and CDRs, e.g. EUMETSAT's Meteosat FCDR, whose quality was assessed in terms of stability and harmonisation to demonstrate proof of principle.

References

Wielicki, B. A., et al., 2013: Achieving Climate Change Absolute Accuracy in Orbit. Bull. Amer. Meteor. Soc., 94, 1519–1539. doi: http://dx.doi.org/10.1175/BAMS-D-12-00149.1

GCOS-138, 2010: Implementation plan for the Global Observing System for Climate in support of the UNFCCC. https://library.wmo.int/doc_num.php?explnum_id=3851 (accessed on 12 December 2018)