EAS 327: Errors

1. Propagation of Errors/Uncertainties in Calculations

Suppose we measure variables A and B and determine that their numerical values are, respectively, a and b, to within uncertainties that we quantify (or estimate) to be da and db. We can represent this statement, i.e. our state of partial knowledge, by writing:

A = a ± da

B = b ± db

We say da and db are the "absolute errors" or "absolute uncertainties" in A,B, while da/a, db/b are the "relative errors."

Now suppose we take sums, products or quotients of A and B. The rules for the propagation of uncertainty are:

If C = A + B or C = A - B, then dc = da + db.

If C = A B or C = A / B, then dc/c = da/a + db/b.

In words: absolute errors are summed to find the absolute error in a sum or difference, while relative errors are summed to find the relative error in a product or quotient.

From the "A - B" rule one can see why it is always problematic when an unknown quantity (A - B) must be estimated as the (small) difference between two large numbers A and B: the absolute error may be much larger than the true, small residual difference.
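These rules can be sketched in a few lines of Python. The function names and the numbers chosen are illustrative only (not from the notes); the example also demonstrates the "A - B" problem, where the propagated absolute error is nearly as large as the small difference itself.

```python
def add_sub_error(da, db):
    """Absolute error of a sum or difference: sum the absolute errors."""
    return da + db

def mul_div_rel_error(a, da, b, db):
    """Relative error of a product or quotient: sum the relative errors."""
    return da / abs(a) + db / abs(b)

# Illustrative values: A = 10.0 +/- 0.2, B = 9.5 +/- 0.2
a, da = 10.0, 0.2
b, db = 9.5, 0.2

# Difference A - B = 0.5, but its absolute error is 0.4 --
# almost as large as the difference itself (the "A - B" problem).
diff = a - b
d_diff = add_sub_error(da, db)

# Product A * B: relative errors add.
prod = a * b
rel_prod = mul_div_rel_error(a, da, b, db)

print(diff, "+/-", d_diff)
print(prod, "+/-", prod * rel_prod)
```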

2. Instrument Uncertainties

Suppose a certain sensor is designed to measure the property X. The accuracy of the sensor (which we might symbolize as, e.g., AX) is a "figure of merit" for the closeness of measurements made using this sensor to the truth. We might read, e.g., that a given temperature (T) sensor is accurate to within ±1 °C over the range -100 °C <= T <= 100 °C.

In contrast, the resolution (or "discrimination," or "precision") of the sensor (say, dX) is a measure of the smallest change in X that the sensor can "see" (i.e. discriminate) or, equivalently, a measure of the closeness-together of a large series of measurements made with the sensor while the value of X is held fixed. (The second definition makes it apparent that the resolution of an instrument may in some cases be essentially equivalent to its "noise level.")

3. Random Errors

Suppose our instrument measures X with a random error characterised by a magnitude dx. Then, if this measurement is repeated many times (say, N times) we obtain a series of estimates:

X = x1 , x2 , x3, ... xi ... xN

and we can define the mean value over the N measurements as

< X > = (1/N) Σ xi ,   the sum taken over i = 1, 2, ... N

It is the defining property of a random error that the uncertainty or error in this mean < X > over many measurements is much smaller than the characteristic error (dx) in a single measurement: in fact, the uncertainty in < X > is dx/√N.

Summarising, if X is measured with an instrument or system that is subject only to random error, then given that for a single measurement we have an uncertainty range defined by

X = x ± dx

then for the average of N measurements we have a much better estimate of X, namely, x ± dx/√N
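A quick simulation makes the 1/√N improvement concrete. This sketch assumes Gaussian random error; the "true" value, the error magnitude dx, and N are all invented for illustration.

```python
import random
import statistics

random.seed(1)

true_x = 20.0   # assumed "true" value of X (illustrative)
dx = 0.5        # characteristic random error of one measurement
N = 400

# N independent measurements, each contaminated by random error dx
sample = [random.gauss(true_x, dx) for _ in range(N)]
mean_x = statistics.mean(sample)

# The error in the mean is of order dx / sqrt(N) = 0.025,
# far smaller than the single-measurement error dx = 0.5.
print(abs(mean_x - true_x))   # typically a few hundredths
print(dx / N ** 0.5)          # 0.025
```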

It is useful to note that the mean value < X > is a "sample mean." It is one of many statistics that we could determine from our random sample of values of X. We can speak of an underlying "population" whence our individual measurements are drawn (at random). The characteristics of that population are termed the "parameters" of the population (whereas the characteristics of a sample are called statistics). Parameters are usually denoted using Greek letters. Thus the population mean is written μX, and from our sample we can make a best estimate of its value, namely,

μX = < X > ± dx/√N

4. Systematic Errors

Suppose, on the other hand, that our instrument measures X with a systematic error of value, say, Sx (each measured value of X is too large by the amount Sx). Then, if this measurement is repeated many times (say, N times), we again obtain a series of estimates... but the average calculated from this series will itself be in error by exactly the same (systematic) error as any individual measurement, i.e. by the amount Sx.

The term "bias" is sometimes used for a systematic error.
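The contrast with the random case can be shown by a small simulation. The numbers here are invented for illustration: each reading carries both a random error dx and a fixed bias sx, and averaging removes only the former.

```python
import random
import statistics

random.seed(2)

true_x = 20.0   # assumed "true" value (illustrative)
dx = 0.5        # random error of a single measurement
sx = 1.0        # systematic error: every reading is too large by sx

N = 1000
# Each measurement carries BOTH the random error and the fixed bias
sample = [random.gauss(true_x + sx, dx) for _ in range(N)]
mean_x = statistics.mean(sample)

# Averaging beats down the random part (to about dx/sqrt(N) = 0.016),
# but the mean remains offset from the truth by about sx = 1.0.
print(mean_x - true_x)   # close to sx, not to zero
```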

5. Example of Random and Systematic Errors

Suppose we have a well-made metre-stick that is accurate to within 1 mm, and whose scale is sufficiently fine that repeated measurements of the same length by the same, constantly-attentive observer differ typically by less than about dx. Let us say this metre-stick is "true."

Now suppose 30 members of a class one by one take this ruler and measure the width of a particular desk. Suppose they are not all equally careful in laying the edge of the ruler against the edge of the desk, and that the resulting random error has a magnitude of about D. If we average their 30 estimates, the resulting length < X > will be "true" to within an uncertainty that is the larger of 1 mm (the stated accuracy of the ruler) and (dx + D)/√30.

But suppose the ruler was not "true," e.g. its scale had been scribed on such that all scale marks were 10% short (if the ruler reads 100 cm, the true length is only 90 cm). Then the average of all 30 student measurements of a desk that was truly 90 cm wide would be close to 100 cm. A systematic error like this is not reduced by averaging together many repeated measurements.






Last Modified: 30 Mar. 2004