This manuscript corresponds to errors version 0.4.2.1 and was typeset on 2024-11-11. For citations, please use the version published in the R Journal (see citation("errors")).
The International Vocabulary of Metrology (VIM) defines a quantity as “a property of a phenomenon, body, or substance, where the property has a magnitude that can be expressed as a number and a reference”, where most typically the number is a quantity value, attributed to a measurand and experimentally obtained via some measurement procedure, and the reference is a measurement unit (BIPM et al. 2012).
Additionally, any quantity value must accommodate some indication about the quality of the measurement, a quantifiable attribute known as uncertainty. The Guide to the Expression of Uncertainty in Measurement (GUM) defines uncertainty as “a parameter, associated with the result of a measurement, that characterises the dispersion of the values that could reasonably be attributed to the measurand” (BIPM et al. 2008). Uncertainty can be mainly classified into standard uncertainty, which is the result of a direct measurement (e.g., electrical voltage measured with a voltmeter, or current measured with an ammeter), and combined standard uncertainty, which is the result of an indirect measurement (i.e., the standard uncertainty when the result is derived from a number of other quantities by means of some mathematical relationship; e.g., electrical power as a product of voltage and current). Therefore, given a set of quantities with known uncertainties, the process of obtaining the uncertainty of a derived measurement is called propagation of uncertainty.
Traditionally, computational systems have treated these three components (quantity values, measurement units and uncertainty) separately. Data consisted of bare numbers, and mathematical operations applied solely to them. Units were just metadata, and uncertainty propagation was an unpleasant task requiring additional effort and complex operations. Nowadays, though, many software libraries have formalised quantity calculus as a method of including units within the scope of mathematical operations, thus preserving dimensional correctness and protecting us from computing nonsensical combinations of quantities. However, these libraries rarely integrate uncertainty handling and propagation (Flater 2018).
Within the R environment, the units package (Pebesma and Mailund 2017; Pebesma, Mailund, and Hiebert 2016) defines a class for associating unit metadata to numeric vectors, which enables transparent quantity derivation, simplification and conversion. This approach is a very comfortable way of managing units, with the added advantage of eliminating an entire class of potential programming mistakes. Unfortunately, neither units nor any other package addresses the integration of uncertainties into quantity calculus.
This article presents errors (Ucar 2018), a package that defines a framework for associating uncertainty metadata to R vectors, matrices and arrays, thus providing transparent, lightweight and automated propagation of uncertainty. This implementation also enables ongoing developments for integrating units and uncertainty handling into a complete solution.
There are two main methods for propagation of uncertainty: the Taylor series method (TSM) and the Monte Carlo method (MCM). The TSM, also called the delta method, is based on a Taylor expansion of the mathematical expression that produces the output variables. As for the MCM, it can deal with generalised input distributions and propagates the uncertainty by Monte Carlo simulation.
The TSM is a flexible and simple method of propagation of uncertainty that offers a good degree of approximation in most cases. In the following, we will provide a brief description. A full derivation, discussion and examples can be found in Arras (1998).
Mathematically, an indirect measurement is obtained as a function of $n$ direct or indirect measurements, $Y = f(X_1, \dots, X_n)$, where the distribution of $X_n$ is unknown a priori. Usually, the sources of random variability are many, independent and probably unknown as well. Thus, the central limit theorem establishes that the sum of a sufficiently large number of random variables tends to a normal distribution. As a result, the first assumption states that $X_n$ are normally distributed.
The second assumption presumes linearity, i.e., that $f$ can be approximated by a first-order Taylor series expansion around $\mu_{X_n}$ (see Figure 1). Then, given a set of $n$ input variables $X$ and a set of $m$ output variables $Y$, the first-order uncertainty propagation law establishes that

$$\Sigma_Y = J_X\, \Sigma_X\, J_X^\top \qquad (1)$$

where $\Sigma$ is the covariance matrix and $J$ is the Jacobian operator.
In practice, as recommended in the GUM (BIPM et al. 2008), this first-order approximation is good even if $f$ is non-linear, provided that the non-linearity is negligible compared to the magnitude of the uncertainty, i.e., $\mathbb{E}[f(X)]\approx f(\mathbb{E}[X])$. Also, this weaker condition is distribution-free: no assumptions are needed on the probability density functions (PDF) of $X_n$, although they must be reasonably symmetric.
If we consider Equation (1) for pairwise computations, i.e., $Y = f(X_1, X_2)$, we can write the propagation of the uncertainty $\sigma_Y$ as follows:

$$\sigma_Y^2 = \left(\frac{\partial f}{\partial X_1}\right)^2 \sigma_{X_1}^2 + \left(\frac{\partial f}{\partial X_2}\right)^2 \sigma_{X_2}^2 + 2 \frac{\partial f}{\partial X_1} \frac{\partial f}{\partial X_2} \sigma_{X_1 X_2} \qquad (2)$$
The cross-covariances for the output $Y$ and any other variable $Z$ can be simplified as follows:

$$\sigma_{YZ} = \frac{\partial f}{\partial X_1} \sigma_{X_1 Z} + \frac{\partial f}{\partial X_2} \sigma_{X_2 Z} \qquad (3)$$

where, notably, if $Z = X_i$, one of the covariances above results in $\sigma_{X_i}^2$. Finally, and for the sake of completeness, the correlation coefficient can be obtained as $r_{YZ} = \sigma_{YZ}/(\sigma_Y \sigma_Z)$.
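As a quick numerical check of Equation (2), the following base R sketch propagates the uncertainty of a product, the electrical power example P = V·I from the introduction; all values here are made up for illustration.

# Equation (2) applied to f(V, I) = V * I (illustrative values).
V <- 230; u_V <- 1      # voltage and its standard uncertainty
I <- 5;   u_I <- 0.1    # current and its standard uncertainty
s_VI <- 0               # covariance term; zero for uncorrelated inputs
dV <- I                 # partial derivative df/dV
dI <- V                 # partial derivative df/dI
sqrt(dV^2 * u_V^2 + dI^2 * u_I^2 + 2 * dV * dI * s_VI)
## [1] 23.53721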
The MCM is based on the same principles underlying the TSM, but it propagates the PDFs of the input variables $X_n$ by performing random sampling and evaluating them under the model considered. Thus, this method is not constrained by the TSM assumptions, and it explicitly determines a PDF for the output quantity $Y$, which makes it a more general approach that applies to a broader set of problems. For further details on this method, as well as a comparison with the TSM and some discussion on the applicability of both methods, the reader may refer to Supplement 1 of the GUM (BIPM et al. 2008).
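The same power example can be propagated with a minimal Monte Carlo simulation in base R; the normal input PDFs and the sample size below are arbitrary illustrative choices, not prescriptions from the GUM.

set.seed(42)
n <- 1e6                            # number of Monte Carlo draws
V <- rnorm(n, mean = 230, sd = 1)   # assumed PDF for the voltage
I <- rnorm(n, mean = 5, sd = 0.1)   # assumed PDF for the current
P <- V * I                          # evaluate the model for each draw
c(mean(P), sd(P))                   # close to 1150 and 23.5, as with the TSM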
The GUM (BIPM et al. 2008) defines four ways of reporting standard uncertainty and combined standard uncertainty. For instance, if the reported quantity is assumed to be a mass mS of nominal value 100 g:
- mS = 100.02147 g with (a combined standard uncertainty) uc = 0.35 mg.
- mS = 100.02147(35) g, where the number in parentheses is the numerical value of (the combined standard uncertainty) uc referred to the corresponding last digits of the quoted result.
- mS = 100.02147(0.00035) g, where the number in parentheses is the numerical value of (the combined standard uncertainty) uc expressed in the unit of the quoted result.
- mS = (100.02147 ± 0.00035) g, where the number following the symbol ± is the numerical value of (the combined standard uncertainty) uc and not a confidence interval.
Schemes (2, 3) and (4) will be referred to as parenthesis notation and plus-minus notation respectively throughout this document. Although (4) is a widespread notation, the GUM explicitly discourages its use to prevent confusion with confidence intervals.
The errors package aims to provide easy and lightweight handling of measurements with errors, including uncertainty propagation using the first-order TSM presented in the previous section, and a formally sound representation.
Standard uncertainties can be assigned to numeric vectors, matrices and arrays, and then all the mathematical and arithmetic operations are transparently applied to both the values and the associated uncertainties:
library(errors)
x <- 1:5 + rnorm(5, sd = 0.01)
y <- 1:5 + rnorm(5, sd = 0.02)
errors(x) <- 0.01
errors(y) <- 0.02
x; y
## Errors: 0.01 0.01 0.01 0.01 0.01
## [1] 1.009314 2.002629 3.007546 3.998674 5.003387
## Errors: 0.02 0.02 0.02 0.02 0.02
## [1] 0.9964423 2.0056511 3.0024418 3.9955924 5.0339563
(z <- x / y)
## Errors: 0.022672724 0.011135401 0.007457628 0.005599789 0.004420403
## [1] 1.0129181 0.9984931 1.0017000 1.0007712 0.9939274
The errors() method assigns or retrieves a vector of uncertainties, which is stored as an attribute of the class errors, along with a unique object identifier:

str(x)
## Errors: num [1:5] 1.01(1) 2.00(1) 3.01(1) 4.00(1) 5.00(1)
Correlations (and thus covariances) between pairs of variables can be set and retrieved using the correl() and covar() methods. These correlations are stored in an internal hash table indexed by the unique object identifier assigned to each errors object. If an object is removed, its associated correlations are cleaned up automatically.
correl(x, x)  # one, cannot be changed
## [1] 1 1 1 1 1
correl(x, y)  # NULL, not yet defined
## NULL
correl(x, y) <- runif(length(x), -1, 1)
correl(x, y)
## [1] -0.23887367 0.70379942 -0.05583015 -0.35912856 -0.33108658
covar(x, y)
## [1] -4.777473e-05 1.407599e-04 -1.116603e-05 -7.182571e-05 -6.621732e-05
Internally, errors provides S3 methods for the generics belonging to the groups Math and Ops, which propagate the uncertainty and the covariance using Equations (2) and (3) respectively.
z  # previous computation, without correlations
## Errors: 0.022672724 0.011135401 0.007457628 0.005599789 0.004420403
## [1] 1.0129181 0.9984931 1.0017000 1.0007712 0.9939274
(z_correl <- x / y)
## Errors: 0.024729103 0.007356542 0.007622187 0.006353158 0.004973368
## [1] 1.0129181 0.9984931 1.0017000 1.0007712 0.9939274
Many other S3 methods are also provided, such as generics belonging to the Summary group, subsetting operators ([, [<-, [[, [[<-), concatenation (c()), differentiation (diff()), row and column binding (rbind(), cbind()), coercion to list, data frame and matrix, and more. Such methods mutate the errors object, and thus return a new one with no correlations associated. There are also setters defined as an alternative to the assignment methods (set_*() instead of errors<-, correl<- and covar<-), primarily intended for use in conjunction with the pipe operator (%>%) from the magrittr (Bache and Wickham 2014) package.
Additionally, other useful summaries are provided, namely, the mean, the weighted mean and the median. The uncertainty of any measurement of central tendency cannot be smaller than the uncertainty of the individual measurements. Therefore, the uncertainty assigned to the mean is computed as the maximum between the standard deviation of the mean and the mean of the individual uncertainties (weighted, in the case of the weighted mean). As for the median, its uncertainty is computed as $\sqrt{\pi/2}\approx1.253$ times the standard deviation of the mean, where this constant comes from the asymptotic variance formula (Hampel et al. 2011).
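As a sanity check of this rule, we can compare the two candidate uncertainties for the mean of the vector x defined earlier (the actual numbers depend on the random draw above, so none are shown):

v <- as.numeric(x)                     # bare values, uncertainty attribute dropped
sd_of_mean <- sd(v) / sqrt(length(v))  # standard deviation of the mean
mean_of_u <- mean(errors(x))           # mean of the individual uncertainties
max(sd_of_mean, mean_of_u)             # the uncertainty reported by mean(x)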
It is worth noting that both values and uncertainties are stored with all the digits. However, when a single measurement or a column of measurements in a data frame is printed, S3 methods for format() and print() are defined to properly format the output and display a single significant digit for the uncertainty. This default representation can be overridden using the digits option, and it is globally controlled with the option errors.digits.
print(set_errors(1.6021766208e-19, 0.0000000098e-19), digits = 2)
## 1.6021766208(98)e-19
The parenthesis notation, in which the number in parentheses is the uncertainty referred to the corresponding last digits of the quantity value (scheme 2 from the GUM, widely used in physics due to its compactness), is used by default, but this can be overridden through the appropriate option in order to use the plus-minus notation instead.
print(set_errors(1.6021766208e-19, 0.0000000098e-19), digits = 2, notation = "plus-minus")
## (1.6021766208 ± 0.0000000098)e-19
Finally, errors also facilitates plotting of error bars. In the following, we first assign a 2% uncertainty to all the numeric variables in the iris data set, and then we plot it using base graphics and ggplot2 (Wickham and Chang 2016; Wickham 2009). The result is shown in Figure 2.

iris.e <- iris
iris.e[1:4] <- lapply(iris.e[1:4], function(x) set_errors(x, x * 0.02))
head(iris.e)
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 1 5.1(1) 3.50(7) 1.40(3) 0.200(4) setosa
## 2 4.9(1) 3.00(6) 1.40(3) 0.200(4) setosa
## 3 4.70(9) 3.20(6) 1.30(3) 0.200(4) setosa
## 4 4.60(9) 3.10(6) 1.50(3) 0.200(4) setosa
## 5 5.0(1) 3.60(7) 1.40(3) 0.200(4) setosa
## 6 5.4(1) 3.90(8) 1.70(3) 0.400(8) setosa
plot(Sepal.Width ~ Sepal.Length, iris.e, col = Species)
legend(6.2, 4.4, levels(iris.e[["Species"]]),
       col = 1:nlevels(iris.e[["Species"]]), pch = 1)
library(ggplot2)
ggplot(iris.e) + aes(Sepal.Length, Sepal.Width, color=Species) +
geom_point() + geom_errors() + theme_bw() + theme(legend.position=c(0.6, 0.8))
In base graphics, the error bars are automatically plotted when an object of class errors is passed. Additionally, we provide the convenience functions errors_min(x) and errors_max(x) for obtaining the boundaries of the interval in ggplot2 and other plotting packages, instead of writing x - errors(x) and x + errors(x) respectively.
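For instance, the ggplot2 figure above can be reproduced without geom_errors() by mapping these helpers explicitly; this is an equivalent sketch of the same plot, not an additional feature:

ggplot(iris.e) + aes(Sepal.Length, Sepal.Width, color = Species) +
  geom_point() +
  geom_errorbar(aes(ymin = errors_min(Sepal.Width), ymax = errors_max(Sepal.Width))) +
  geom_errorbarh(aes(xmin = errors_min(Sepal.Length), xmax = errors_max(Sepal.Length)))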
From Annex H.2 of the GUM (BIPM et al. 2008):
The resistance R and the reactance X of a circuit element are determined by measuring the amplitude V of a sinusoidally-alternating potential difference across its terminals, the amplitude I of the alternating current passing through it, and the phase-shift angle ϕ of the alternating potential difference relative to the alternating current.
The measurands (resistance R, reactance X and impedance Z) are related to the input quantities (V, I, ϕ) by Ohm’s law:

$$R = \frac{V}{I}\cos\phi; \qquad X = \frac{V}{I}\sin\phi; \qquad Z = \frac{V}{I}$$
Five simultaneous observations for each input variable are provided (Table H.2 of the GUM), which are included in errors as dataset GUM.H.2. First, we need to obtain the mean input values and set the correlations from the measurements. Then, we compute the measurands and examine the correlations between them. The results agree with those reported in the GUM:
V <- mean(set_errors(GUM.H.2$V))
I <- mean(set_errors(GUM.H.2$I))
phi <- mean(set_errors(GUM.H.2$phi))
correl(V, I) <- with(GUM.H.2, cor(V, I))
correl(V, phi) <- with(GUM.H.2, cor(V, phi))
correl(I, phi) <- with(GUM.H.2, cor(I, phi))
print(R <- (V / I) * cos(phi), digits = 2, notation = "plus-minus")
## 127.732 ± 0.071
print(X <- (V / I) * sin(phi), digits = 3, notation = "plus-minus")
## 219.847 ± 0.296
print(Z <- (V / I), digits = 3, notation = "plus-minus")
## 254.260 ± 0.236
correl(R, X)
## [1] -0.5884298
correl(R, Z)
## [1] -0.4852592
correl(X, Z)
## [1] 0.9925116
From Annex H.3 of the GUM (BIPM et al. 2008):
A thermometer is calibrated by comparing n = 11 temperature readings $t_k$ of the thermometer […] with corresponding known reference temperatures $t_{R,k}$ in the temperature range 21 °C to 27 °C to obtain the corrections $b_k = t_{R,k} - t_k$ to the readings.
Measured temperatures and corrections (Table H.6 of the GUM), which are included in errors as dataset GUM.H.3, are related by a linear calibration curve:

$$b(t) = y_1 + y_2\,(t - t_0)$$
In the following, we first fit the linear model for a reference temperature $t_0 = 20$ °C. Then, we extract the coefficients and assign the uncertainty and correlation between them. Finally, we compute the predicted correction $b(30)$, which agrees with the value reported in the GUM:
fit <- lm(bk ~ I(tk - 20), data = GUM.H.3)
y1 <- set_errors(coef(fit)[1], sqrt(vcov(fit)[1, 1]))
y2 <- set_errors(coef(fit)[2], sqrt(vcov(fit)[2, 2]))
covar(y1, y2) <- vcov(fit)[1, 2]
print(b.30 <- y1 + y2 * set_errors(30 - 20), digits = 2, notation = "plus-minus")
## -0.1494 ± 0.0041
The errors package provides the means for defining numeric vectors, matrices and arrays with errors in R, as well as for operating with them in a transparent way. Propagation of uncertainty implements the commonly used first-order TSM formula from Equation (1). This method has been pre-computed and expanded for each operation in the S3 groups Math and Ops, instead of differentiating symbolic expressions on demand or using functions from other packages for this task. The advantages of this approach are twofold: it is faster, as it involves neither simulation nor symbolic computation, and it is very lightweight in terms of package dependencies.
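To give the flavour of such a pre-computed rule, consider a hypothetical helper (a sketch mimicking, but not reproducing, the package internals) that hard-codes the first-order propagation for exp(): since dy/dx = exp(x), the propagated uncertainty is simply |exp(x)| · σx.

# Hypothetical sketch of a pre-computed propagation rule for exp();
# to first order, this gives the same result as the package's method.
exp_with_errors <- function(x) {
  y <- exp(as.numeric(x))            # bare values
  set_errors(y, abs(y) * errors(x))  # u_y = |dy/dx| * u_x, with dy/dx = exp(x)
}
exp_with_errors(set_errors(1, 0.1))  # same result as exp(set_errors(1, 0.1))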
Another advantage of errors is the built-in consistent and formally sound representation of measurements with errors, rounding the uncertainty to one significant digit by default and supporting two widely used notations: parenthesis (e.g., 5.00(1)) and plus-minus (e.g., 5.00 ± 0.01). These notations are applied for single numbers and data frames, as well as tbl_df data frames from the tibble (Wickham, Francois, and Müller 2017) package.
Full support is provided for both data.frame and tbl_df, as well as matrices and arrays. However, some operations on those data structures may drop uncertainties (i.e., object class and attributes). More specifically, there are six common data wrangling operations: row subsetting, row ordering, column transformation, row aggregation, column joining and (un)pivoting. Table 1 shows the correspondence between these operations and base R functions, as well as their compatibility with errors.
Table 1: Compatibility of errors with base R data wrangling functions.

| Operation | R base function(s) | Compatibility |
|---|---|---|
| Row subsetting | [, [[, subset | Full |
| Row ordering | order + [ | Full |
| Column transformation | transform, within | Full |
| Row aggregation | tapply, by, aggregate | With simplify=FALSE |
| Column joining | merge | Full |
| (Un)Pivoting | reshape | Full |
Overall, errors is fully compatible with the data wrangling operations embedded in base R, because those functions are mainly based on the subsetting generics, for which errors provides the corresponding S3 methods. Nonetheless, special attention must be paid to aggregations, which store partial results in lists that are finally simplified. Such simplification is made with unlist, which drops all the input attributes, including custom classes. However, all these aggregation functions provide the argument simplify (sometimes SIMPLIFY) which, if set to FALSE, prevents this destructive simplification, and lists are returned instead. Such lists can be simplified non-destructively by calling do.call(c, ...).
# Non-destructive replacement for unlist() in this context: lists are
# combined with c(), which dispatches to the method provided by errors.
unlist <- function(x) if (is.list(x)) do.call(c, x) else x
iris.e.agg <- aggregate(. ~ Species, data = iris.e, mean, simplify = FALSE)
as.data.frame(lapply(iris.e.agg, unlist), col.names = colnames(iris.e.agg))
## Species Sepal.Length Sepal.Width Petal.Length Petal.Width
## 1 setosa 5.0(1) 3.43(7) 1.46(3) 0.25(1)
## 2 versicolor 5.9(1) 2.77(6) 4.26(9) 1.33(3)
## 3 virginica 6.6(1) 2.97(6) 5.6(1) 2.03(4)
We have introduced errors, a lightweight R package for managing numeric data with associated standard uncertainties. The new class errors provides numeric operations with automated propagation of uncertainty through a first-order TSM, and a formally sound representation of measurements with errors. Using this package makes the process of computing indirect measurements easier and less error-prone.
Future work includes importing and exporting data with uncertainties and providing the user with an interface for plugging uncertainty propagation methods from other packages. Finally, errors enables ongoing developments for integrating units and uncertainty handling into a complete solution for quantity calculus. Having a unified workflow for managing measurements with units and errors would be an interesting addition to the R ecosystem with very few precedents in other programming languages.