Introduction To Error Analysis
References: Bevington, Data Reduction & Error Analysis for the Physical Sciences; LLM, Appendix B.

Warning: the introductory literature on the statistics of measurement is remarkably uneven, and nomenclature is not consistent.

Is error analysis important? Yes! See next page.
Parent Distributions
Measurement of any physical quantity is always affected by uncontrollable random (stochastic) processes. These produce a statistical scatter in the values measured. The parent distribution for a given measurement gives the probability of obtaining a particular result from a single measure. It is fully defined and represents the idealized outcome of an infinite number of measures, where the random effects on the measuring process are assumed to be always the same (stationary).
Distinction between precision and accuracy:
- A measurement with a large ratio of value to statistical uncertainty is said to be precise.
- An accurate measurement is one which is close to the true value of the parameter being measured.
- Because of systematic errors, precise measures may not be accurate.
- A famous example: the primary mirror for the Hubble Space Telescope was figured with high precision (i.e. it had very small ripples), but it was inaccurate in that its overall shape was wrong.

The statistical infrastructure we are discussing here does not permit an assessment of systematic errors. Those must be addressed by other means.
The parent distribution is characterized by its moments:

Parent probability distribution: $p(x)$

- Mean (first moment): $\mu = \int x\, p(x)\, dx$
- Variance (second moment): $\mathrm{Var}(x) = \int (x - \mu)^2\, p(x)\, dx$
- Sigma: $\sigma = \sqrt{\mathrm{Var}(x)}$
Aliases: $\sigma$ is the standard deviation, but is also known as the dispersion or rms dispersion. $\mu$ measures the center and $\sigma$ measures the width of the parent distribution.
NB: the mean can be very different from the median (50th percentile) or the mode (most frequent value) of the parent distribution. These represent alternative measures of the distribution's center, but the mean is the more widely used parameter.
The Poisson Distribution

Applies to any continuous counting process where events are independent of one another and have a uniform probability of occurring in any time bin.
The Poisson distribution is derived as a limit of the binomial distribution based on the fact that time can be divided up into small intervals such that the probability of an event in any given interval is arbitrarily small.
Probability distribution:

$$p_P(n) = \frac{\mu^n}{n!}\, e^{-\mu}$$

Properties:
- Normalization: $\sum_{n=0}^{\infty} p_P(n) = 1$
- Asymmetric about $\mu$; the mode does not in general coincide with the mean
- Mean value per bin: $\mu$ ($\mu$ need not be an integer)
- Variance: $\mu$
- Standard deviation: $\sigma = \sqrt{\mu}$
- Implies mean/width $= \mu/\sigma = \sqrt{\mu}$: "square root of n" statistics
NB: the Poisson distribution is the proper description of a uniform counting process for small numbers of counts. For larger numbers ($n \gtrsim 30$), the Gaussian distribution is a good description and is easier to compute.
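As a quick numerical check of these properties (a minimal Python sketch assuming numpy and scipy are available; the value of mu is illustrative):

    import numpy as np
    from scipy import stats

    mu = 5.0                            # illustrative mean counts per bin
    n = np.arange(0, 60)                # enough terms for the sums to converge
    pmf = stats.poisson.pmf(n, mu)

    print(pmf.sum())                    # ~1: normalization
    print((n * pmf).sum())              # ~mu: mean of the distribution
    print(((n - mu)**2 * pmf).sum())    # ~mu: variance equals the mean
    print(np.sqrt(mu))                  # sigma = sqrt(mu): root-n statistics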
The Gaussian Distribution

The Gaussian, or normal, distribution is the limiting form of the Poisson distribution for large $\mu$ ($\gtrsim 30$).

Probability distribution:

$$p_G(x) = \frac{1}{\sigma\sqrt{2\pi}}\, \exp\left[-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right]$$

Normalization:

$$\int_{-\infty}^{+\infty} p_G(x)\, dx = 1$$
Properties:
- Bell-shaped curve; symmetric about the mode at $\mu$
- Mean value: $\mu$ (= median and mode)
- Variance: $\sigma^2$
- Full Width at Half Maximum $= 2.355\,\sigma$
- If $x$ refers to a counting process ($x = n$ counts in a bin), then $\sigma = \sqrt{\mu}$

Importance:
The central limit theorem demonstrates that a Gaussian distribution applies to any situation where a large number of independent random processes contribute to the result. This means it is a valid statistical description of an enormous range of real-life situations. Much of the statistical analysis of measured data is based on the assumption of Gaussian distributions.
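The theorem is easy to demonstrate by simulation. The following sketch (assuming numpy; all choices are illustrative) sums many independent draws from a decidedly non-Gaussian parent, the uniform distribution, and recovers a Gaussian:

    import numpy as np

    rng = np.random.default_rng(0)
    # 10^5 trials, each the sum of 100 independent uniform deviates
    sums = rng.uniform(0, 1, size=(100_000, 100)).sum(axis=1)

    print(sums.mean(), sums.std())      # ~50 and ~sqrt(100/12) = 2.89
    # Fraction of trials within 1 sigma of the mean: ~0.683, as for a Gaussian
    print(np.mean(np.abs(sums - sums.mean()) < sums.std()))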
The Chi-Square ($\chi^2$) Distribution

Probability distribution:

$$p(\chi^2, \nu) = \frac{(\chi^2)^{\nu/2 - 1}\, e^{-\chi^2/2}}{2^{\nu/2}\, \Gamma(\nu/2)}$$

...where the Gamma function is defined as follows: $\Gamma(n+1) = n!$ if $n$ is an integer; $\Gamma(1/2) = \sqrt{\pi}$; and $\Gamma(n+1) = n\,\Gamma(n)$ if $n$ is a half-integer.
Properties:
- Only one parameter, $\nu$, the number of degrees of freedom = the number of independent quantities in the sum of squares
- Mean: $\nu$; mode: $\nu - 2$ (for $\nu \geq 2$)
- Variance: $2\nu$
- Asymmetric distribution
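A short check of these moments (a sketch assuming scipy is available; nu = 10 is an arbitrary choice):

    import numpy as np
    from scipy import stats

    nu = 10
    mean, var = stats.chi2.stats(nu, moments='mv')
    print(mean, var)                    # nu and 2*nu, as stated above

    # Asymmetry: the mode (nu - 2) lies below the mean (nu)
    x = np.linspace(0.01, 40, 4000)
    print(x[np.argmax(stats.chi2.pdf(x, nu))])   # ~8 = nu - 2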
Sampling Theory
In practice, we usually do not know the parameters of the parent distribution because this requires a very large number of measures. Instead, we try to make inferences about the parent distribution from finite (& often small) samples. Sampling theory describes how to estimate the moments of $p(x)$. The results here are based on applying the method of maximum likelihood to variables whose parent distribution is assumed to be stationary and normally distributed. Suppose we obtain a sample consisting of $M$ measurements of a given variable characterized by a normal distribution (with mean $\mu$ and standard deviation $\sigma$). Define the following two estimators:
Sample mean:

$$\bar{x} = \frac{1}{M} \sum_{i=1}^{M} x_i$$

Sample variance:

$$s^2 = \frac{1}{M-1} \sum_{i=1}^{M} (x_i - \bar{x})^2$$
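In practice these estimators are one-liners; note the $1/(M-1)$ normalization in the variance (ddof=1 below). A sketch with made-up measurements, assuming numpy:

    import numpy as np

    x = np.array([10.3, 9.8, 10.1, 10.6, 9.9])   # M = 5 made-up measures

    xbar = x.mean()          # sample mean: (1/M) * sum of x_i
    s2 = x.var(ddof=1)       # sample variance: 1/(M-1) * sum of (x_i - xbar)^2
    print(xbar, s2)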
SAMPLING THEORY (cont)

How well determined is $\bar{x}$? The uncertainty in $\bar{x}$ is characterized by its variance. But this is not the same as the variance in $x$: $\bar{x}$ is itself a random variable, and its variance can be computed as follows:
$$\mathrm{Var}(\bar{x}) = \frac{1}{M^2} \sum_{i=1}^{M} \mathrm{Var}(x_i) = \frac{1}{M}\,\mathrm{Var}(x)$$

The corresponding sample estimator is:

$$s_{\bar{x}}^2 = \frac{s^2}{M} = \frac{1}{M(M-1)} \sum_{i=1}^{M} (x_i - \bar{x})^2$$
$s_{\bar{x}}$ is known as the standard error of the mean.

Important! $s_{\bar{x}} \ll \sigma$ if $M$ is large. The distinction between $\sigma$ and $s_{\bar{x}}$ is often overlooked by students and can lead to flagrant overestimation of errors in mean values. The mean of a random variable can be determined very precisely regardless of its variance. This demonstrates the importance of repeated measurements...if feasible.
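The $\sqrt{M}$ improvement is easy to verify by simulation (a sketch assuming numpy; the parent parameters are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    M = 100
    # 10^4 independent samples of M measures each, parent mu = 0, sigma = 1
    samples = rng.normal(0.0, 1.0, size=(10_000, M))
    means = samples.mean(axis=1)

    print(samples.std())     # ~1.0: the parent sigma
    print(means.std())       # ~0.1 = sigma/sqrt(M): the standard error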
SAMPLING THEORY (cont)

Probability distribution of $\bar{x}$: By the central limit theorem, if we repeat a set of $M$ measures from a given parent distribution a large number of times, the resulting distribution of $\bar{x}$ will be a normal distribution regardless of the form of the parent distribution $p(x)$. It will have a standard deviation of $\sigma/\sqrt{M}$.
Inhomogeneous samples: A sample is inhomogeneous if $\sigma$ of the parent distribution is different for different measurements. This could happen with a long series of photometric determinations of a source's brightness, for instance. Here, the values entering the estimates of the sample mean and variance must be weighted in inverse proportion to their uncertainties. The following expressions assume that the variance of each measurement can be estimated in some independent way:
Sample mean:

$$\bar{x} = \sum_{i=1}^{M} w_i x_i \Big/ \sum_{i=1}^{M} w_i, \qquad \text{where } w_i = \frac{1}{\sigma_i^2}$$

Variance of the mean:

$$s_{\bar{x}}^2 = 1 \Big/ \sum_{i=1}^{M} w_i$$
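A minimal sketch of the weighted estimators (numpy assumed; the measurements and their sigmas are invented for illustration):

    import numpy as np

    x     = np.array([10.2, 9.7, 10.4])    # measurements
    sigma = np.array([0.1, 0.3, 0.2])      # independently estimated errors

    w = 1.0 / sigma**2                     # w_i = 1/sigma_i^2
    xbar = np.sum(w * x) / np.sum(w)       # weighted sample mean
    s_xbar = np.sqrt(1.0 / np.sum(w))      # standard error of the weighted mean
    print(xbar, s_xbar)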
Signal-to-Noise Ratio

$$\mathrm{SNR} \equiv \bar{x}/s_{\bar{x}}$$
$s_{\bar{x}}$ here must include all effects which contribute to random error in the quantity $x$. This is a basic figure of merit that should be considered both in planning observations (based on the expected performance of the equipment) and in evaluating them after they are made.
Propagation of Errors

If $u = f(x, y)$ is a function of two random variables, $x$ and $y$, then we can propagate the uncertainty in $x$ and $y$ to $u$ as follows:

$$\sigma_u^2 = \sigma_x^2 \left(\frac{\partial u}{\partial x}\right)^2 + \sigma_y^2 \left(\frac{\partial u}{\partial y}\right)^2 + 2\,\sigma_{xy}^2 \left(\frac{\partial u}{\partial x}\right)\left(\frac{\partial u}{\partial y}\right)$$

...where the covariance is

$$\sigma_{xy}^2 = \frac{1}{M} \sum_{i=1}^{M} \left[(x_i - \bar{x})(y_i - \bar{y})\right]$$
For independent random variables, $\sigma_{xy} = 0$. So, we obtain for the following simple functions:

- $\mathrm{Var}(kx) = k^2\,\mathrm{Var}(x)$ if $k$ is a constant
- $\mathrm{Var}(x + y) = \mathrm{Var}(x) + \mathrm{Var}(y) + 2\sigma_{xy}^2$
- $\mathrm{Var}(xy) \approx y^2\,\mathrm{Var}(x) + x^2\,\mathrm{Var}(y) + 2xy\,\sigma_{xy}^2$
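The product rule can be checked by Monte Carlo (a sketch assuming numpy; the means and sigmas are arbitrary, and $x$ and $y$ are drawn independently so $\sigma_{xy} = 0$):

    import numpy as np

    rng = np.random.default_rng(2)
    mx, sx = 10.0, 0.2                  # mean and sigma of x
    my, sy = 5.0, 0.1                   # mean and sigma of y

    # Analytic propagation for u = x*y with independent x, y
    sigma_u = np.sqrt(my**2 * sx**2 + mx**2 * sy**2)
    print(sigma_u)

    # Monte Carlo: scatter of x*y over many random draws
    x = rng.normal(mx, sx, 1_000_000)
    y = rng.normal(my, sy, 1_000_000)
    print((x * y).std())                # agrees with the propagated sigma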
Confidence Intervals
A confidence interval is a range of values which can be expected to contain a given parameter (e.g. the mean) of the parent distribution with a specified probability. The smaller the confidence interval, the higher the precision of the measure.

(A) In the ideal case of a single measurement drawn from a normally-distributed parent distribution of known mean and variance, confidence intervals for the mean in units of $\sigma$ are easy to compute in the following form:
$$P(k) = \int_{\mu - k\sigma}^{\mu + k\sigma} p_G(x; \mu, \sigma)\, dx$$

where $p_G$ is the Gaussian distribution. Results from this calculation are as follows:

    k       0.675   1.0     2.0     3.0
    P(k)    0.500   0.683   0.954   0.997
Interpretation: A single measure drawn from this distribution will fall within $1.0\,\sigma$ of the mean value in 68% of the samples. Only 0.3% of the samples would fall more than $3.0\,\sigma$ from the mean.
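These integrals have a closed form in terms of the error function, $P(k) = \mathrm{erf}(k/\sqrt{2})$, so the table is easy to reproduce (a sketch using only the Python standard library):

    from math import erf, sqrt

    for k in (0.675, 1.0, 2.0, 3.0):
        print(k, erf(k / sqrt(2)))      # 0.500, 0.683, 0.954, 0.997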
(B) In the real world, we have only estimates of the properties of the parent distribution based on a finite sample. The larger the sample, the better the estimates, and the smaller the confidence interval. To place confidence intervals on the estimate of the parent mean ($\mu$) based on a finite sample of $M$ measures, we use the probability distribution of the Student $t$ variable:

$$t = (\bar{x} - \mu)\,\sqrt{M}\,/\,s$$

where $s^2$ is the sample variance. The probability distribution of $t$ depends on the number of degrees of freedom, which in this case is $M - 1$. The probability that the true mean of the parent distribution lies within $\pm t\,s_{\bar{x}}$ of the sample mean is estimated by integrating the Student $t$-distribution from $-t$ to $+t$.

$P(\mu \in \bar{x} \pm t\,s_{\bar{x}})$:

    t          0.5     0.6745  1.0     2.0     3.0
    M = 2      0.295   0.377   0.500   0.705   0.795
    M = 10     0.371   0.483   0.657   0.923   0.985
    M = inf    0.383   0.500   0.683   0.954   0.997
CONFIDENCE INTERVALS (cont)

Interpretation & comments on the $t$-distribution results:

- Entries for $M = \infty$ correspond to those for the Gaussian parent distribution quoted earlier, as expected.
- Values for small $M$ can be very different from those for $M = \infty$. The number of observations is an important determinant of the quality of the measurement.
- The entry for $t = 0.6745$ is included because the formal definition of the "probable error" is $0.6745\,s_{\bar{x}}$. For a large number of measures, the probable error defines a 50% confidence interval; but for small samples, it is a very weak constraint.
- A better measure of uncertainty is the standard error of the mean, $s_{\bar{x}}$, which provides at least a 50% confidence interval for all $M$.
- Careful authors often quote $\pm 3\,s_{\bar{x}}$ confidence intervals. This corresponds to $t = 3$ and provides 80% confidence for two measures and 99.7% for many measures. It is a strong constraint on the results of a measurement.
NB: the integrals in the preceding table were derived from an IDL built-in routine. The table contains output from the IDL statement: P = 2*T_PDF(t, M-1) - 1.
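An equivalent computation in Python (a sketch assuming scipy; the quoted IDL statement uses T_PDF as a cumulative probability, for which stats.t.cdf is the analogue):

    from scipy import stats

    # P = 2*T_PDF(t, M-1) - 1  ->  2*stats.t.cdf(t, M-1) - 1
    for M in (2, 10):
        row = [round(2 * stats.t.cdf(t, M - 1) - 1, 3)
               for t in (0.5, 0.6745, 1.0, 2.0, 3.0)]
        print(M, row)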
GOODNESS OF FIT ($\chi^2$ TEST)

A widely used standard for comparing an observed distribution with a hypothetical functional relationship for two or more related random variables. It determines the likelihood that the observed deviations between the observations and the expected relationship would occur by chance. It assumes that the measuring process is governed by Gaussian statistics.

Take two random variables, $x$ and $y$. Let $y$ be a function of $x$ and a number $k$ of additional parameters, $\alpha_j$: $y = f(x; \alpha_1 \ldots \alpha_k)$.

1. Make $M$ observations of $x$ and $y$.
2. For each observation, estimate the total variance $\sigma_i^2$ in the $y_i$ value.
3. We require $f(x; \alpha_1 \ldots \alpha_k)$. Either this must be known a priori, or it must be estimated from the data (e.g. by least squares fitting).
4. Then define
$$\chi_0^2 = \sum_{i=1}^{M} \left[\frac{y_i - f(x_i)}{\sigma_i}\right]^2$$
5. The probability distribution for $\chi^2$ was given earlier. It depends on the number of degrees of freedom, $\nu$. If the $k$ parameters were estimated from the data, then $\nu = M - k$.
6. The predicted mean value of $\chi^2$ is $\nu$.
7. The integral

$$P_0 = \int_{\chi_0^2}^{\infty} p(\chi^2, \nu)\, d\chi^2$$

gives the probability that this or a higher value of $\chi_0^2$ would occur by chance.

8. The larger $P_0$ is, the more likely it is that $f$ is correct. Values over 50% are regarded as consistent with the hypothesis that $y = f$.

9. Sample values of the reduced chi-square, $\chi_0^2/\nu$, yielding a given $P_0$:

    P_0         0.05    0.10    0.50    0.90    0.95
    nu = 1      3.841   2.706   0.455   0.016   0.004
    nu = 10     1.831   1.599   0.934   0.487   0.394
    nu = 200    1.170   1.130   0.997   0.874   0.841
10. Generally, one uses $1 - P_0$ as a criterion for rejection of the validity of $f$: e.g. if $P_0 = 5\%$, then with 95% confidence one can reject the hypothesis that $f$ is the correct description of $y$.

11. Important caveat: the $\chi^2$ test is ambiguous because it makes two assumptions: that $f$ is the correct description of $y(x)$, and that a Gaussian process with the adopted $\sigma_i$'s properly describes the measurements. It will reject the hypothesis if either condition fails.
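A worked example of the full procedure (a sketch assuming numpy and scipy; the data, the model $f(x) = 2x$, and the adopted sigmas are all invented for illustration):

    import numpy as np
    from scipy import stats

    x     = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y     = np.array([2.1, 3.8, 6.3, 7.9, 10.2, 11.8])
    sigma = np.full(6, 0.2)             # adopted measurement errors

    chi2_0 = np.sum(((y - 2.0 * x) / sigma) ** 2)
    nu = len(x) - 1                     # nu = M - k, with k = 1 fitted parameter
    P0 = stats.chi2.sf(chi2_0, nu)      # probability of chi2 >= chi2_0 by chance
    print(chi2_0 / nu, P0)              # reduced chi-square and P_0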