Algorithms For Calculating Variance
Algorithms for calculating variance play a major role in computational statistics. A key difficulty in the design of good
algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical
instability as well as to arithmetic overflow when dealing with large values.
Naïve algorithm
A formula for calculating the variance of an entire population of size N is:

\sigma^2 = \frac{\sum_{i=1}^{N} x_i^2 - \left(\sum_{i=1}^{N} x_i\right)^2 / N}{N}
Using Bessel's correction to calculate an unbiased estimate of the population variance from a finite sample of n observations, the formula is:

s^2 = \frac{\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2 / n}{n - 1}
Therefore, a naïve algorithm to calculate the estimated variance is given by the following:
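A direct transcription of this formula into Python might look like the following sketch (illustrative only; the running totals Sum and SumSq are named to match the discussion below):

def naive_variance(data):
    n = 0
    Sum = 0.0
    SumSq = 0.0
    for x in data:
        n += 1
        Sum += x
        SumSq += x * x
    # Sample variance; see the note below about dividing by n instead
    return (SumSq - Sum**2 / n) / (n - 1)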
This algorithm can easily be adapted to compute the variance of a finite population: simply divide by n instead of n − 1 on
the last line.
Because SumSq and (Sum×Sum)/n can be very similar numbers, cancellation can cause the precision of the result to be much less than the inherent precision of the floating-point arithmetic used to perform the computation. This is particularly bad if the standard deviation is small relative to the mean. Thus this algorithm should not be used in practice,[1][2] and several alternate, numerically stable algorithms have been proposed.[3]
The variance is invariant with respect to changes in a location parameter, a property which can be used to avoid the catastrophic cancellation in this formula: for any constant K,

\operatorname{Var}(X) = \frac{\sum_{i=1}^{n} (x_i - K)^2 - \left(\sum_{i=1}^{n} (x_i - K)\right)^2 / n}{n - 1}

If just the first sample is taken as K, the algorithm can be written in the Python programming language as
def shifted_data_variance(data):
    if len(data) < 2:
        return 0.0
    K = data[0]
    n = Ex = Ex2 = 0.0
    for x in data:
        n += 1
        Ex += x - K
        Ex2 += (x - K) ** 2
    variance = (Ex2 - Ex**2 / n) / (n - 1)
    # use n instead of (n - 1) if you want to compute the exact variance of the given data
    # use (n - 1) if the data are samples of a larger population
    return variance
This formula also facilitates the incremental computation that can be expressed as
K = Ex = Ex2 = 0.0
n = 0

def add_variable(x):
    global K, n, Ex, Ex2
    if n == 0:
        K = x
    n += 1
    Ex += x - K
    Ex2 += (x - K) ** 2

def remove_variable(x):
    global K, n, Ex, Ex2
    n -= 1
    Ex -= x - K
    Ex2 -= (x - K) ** 2

def get_mean():
    global K, n, Ex
    return K + Ex / n

def get_variance():
    global n, Ex, Ex2
    return (Ex2 - Ex**2 / n) / (n - 1)
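A brief, illustrative use of these helpers (the chosen sample happens to give exactly representable results):

for x in (4.0, 7.0, 13.0, 16.0):
    add_variable(x)
print(get_mean(), get_variance())  # 10.0 30.0
remove_variable(4.0)               # discard an earlier observation
print(get_mean(), get_variance())  # 12.0 21.0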
Two-pass algorithm
An alternative approach, using a different formula for the variance, first computes the sample mean

\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i

and then computes the sum of the squares of the differences from the mean,

s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}

as in the following code:
def two_pass_variance(data):
    n = len(data)
    mean = sum(data) / n
    variance = sum((x - mean) ** 2 for x in data) / (n - 1)
    return variance
This algorithm is numerically stable if n is small.[1][4] However, the results of both of these simple algorithms ("naïve" and
"two-pass") can depend inordinately on the ordering of the data and can give poor results for very large data sets due to
repeated roundoff error in the accumulation of the sums. Techniques such as compensated summation can be used to
combat this error to a degree.
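For illustration, a minimal sketch of compensated (Kahan) summation, which carries a correction term for the low-order bits lost in each addition; applying it to the accumulations above is one way to limit the growth of roundoff error (this sketch is not taken from the cited references):

def kahan_sum(values):
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y  # recovers what was lost when adding y
        total = t
    return total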
Welford's online algorithm

The following formulas can be used to update the mean and (estimated) variance of the sequence, for an additional element x_n. Here, \bar{x}_n denotes the sample mean of the first n samples (x_1, \dots, x_n), \sigma^2_n their biased sample variance, and s^2_n their unbiased sample variance:

\bar{x}_n = \bar{x}_{n-1} + \frac{x_n - \bar{x}_{n-1}}{n}

\sigma^2_n = \frac{(n - 1)\,\sigma^2_{n-1} + (x_n - \bar{x}_{n-1})(x_n - \bar{x}_n)}{n}

s^2_n = \frac{(n - 2)\,s^2_{n-1} + (x_n - \bar{x}_{n-1})(x_n - \bar{x}_n)}{n - 1}, \qquad n > 1
These formulas suffer from numerical instability, as they repeatedly subtract a small number from a big number which scales with n. A better quantity for updating is the sum of squares of differences from the current mean, \sum_{i=1}^{n} (x_i - \bar{x}_n)^2, here denoted M_{2,n}:

M_{2,n} = M_{2,n-1} + (x_n - \bar{x}_{n-1})(x_n - \bar{x}_n)

\sigma^2_n = \frac{M_{2,n}}{n}, \qquad s^2_n = \frac{M_{2,n}}{n - 1}

This algorithm was found by Welford,[5][6] and it has been thoroughly analyzed.[2][7] It is also common to denote M_k = \bar{x}_k and S_k = M_{2,k}.[8]
# For a new value newValue, compute the new count, new mean, the new M2.
# mean accumulates the mean of the entire dataset
# M2 aggregates the squared distance from the mean
# count aggregates the number of samples seen so far
def update(existingAggregate, newValue):
    (count, mean, M2) = existingAggregate
    count += 1
    delta = newValue - mean
    mean += delta / count
    delta2 = newValue - mean
    M2 += delta * delta2
    return (count, mean, M2)
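To read the statistics back out of the aggregate, a helper along the following lines can be used (an illustrative sketch; the tuple layout matches the update function above):

def finalize(existingAggregate):
    # Retrieve the mean, biased variance and unbiased sample variance
    (count, mean, M2) = existingAggregate
    if count < 2:
        return float("nan")
    return (mean, M2 / count, M2 / (count - 1))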
This algorithm is much less prone to loss of precision due to catastrophic cancellation, but might not be as efficient
because of the division operation inside the loop. For a particularly robust two-pass algorithm for computing the variance,
one can first compute and subtract an estimate of the mean, and then use this algorithm on the residuals.
Weighted incremental algorithm

The online algorithm can be extended to handle unequal sample weights, replacing the simple counter n with the sum of weights seen so far; an incremental algorithm of this kind, suggested by West (1979),[9] is given below. (The parallel algorithm in the next section illustrates how to merge multiple sets of statistics calculated online.)
def weighted_incremental_variance(data_weight_pairs):
    w_sum = w_sum2 = mean = S = 0
    for x, w in data_weight_pairs:
        w_sum = w_sum + w
        w_sum2 = w_sum2 + w**2
        mean_old = mean
        mean = mean_old + (w / w_sum) * (x - mean_old)
        S = S + w * (x - mean_old) * (x - mean)

    population_variance = S / w_sum
    # Bessel's correction for weighted samples
    # Frequency weights
    sample_frequency_variance = S / (w_sum - 1)
    # Reliability weights
    sample_reliability_variance = S / (w_sum - w_sum2 / w_sum)
Parallel algorithm
Chan et al.[10] note that Welford's online algorithm detailed above is a special case of an algorithm that works for combining arbitrary sets A and B:

n_{AB} = n_A + n_B

\delta = \bar{x}_B - \bar{x}_A

\bar{x}_{AB} = \bar{x}_A + \delta \cdot \frac{n_B}{n_{AB}}

M_{2,AB} = M_{2,A} + M_{2,B} + \delta^2 \cdot \frac{n_A n_B}{n_{AB}}
This may be useful when, for example, multiple processing units may be assigned to discrete parts of the input.
Chan's method for estimating the mean is numerically unstable when n_A \approx n_B and both are large, because the numerical error in \delta = \bar{x}_B - \bar{x}_A is not scaled down in the way that it is in the n_B = 1 case. In such cases, prefer \bar{x}_{AB} = \frac{n_A \bar{x}_A + n_B \bar{x}_B}{n_{AB}}.
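A sketch of this merge step in Python, assuming each part of the input is summarized by its count, mean and M2 (the names are illustrative):

def parallel_variance(n_a, avg_a, M2_a, n_b, avg_b, M2_b):
    # Combine the summaries of two disjoint data sets
    n = n_a + n_b
    delta = avg_b - avg_a
    mean = avg_a + delta * n_b / n
    M2 = M2_a + M2_b + delta**2 * n_a * n_b / n
    return n, mean, M2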
This can be generalized to allow parallelization with AVX, with GPUs, and computer clusters, and to covariance.[3]
Example
Assume that all floating point operations use standard IEEE 754 double-precision arithmetic. Consider the sample (4, 7,
13, 16) from an infinite population. Based on this sample, the estimated population mean is 10, and the unbiased estimate
of population variance is 30. Both the naïve algorithm and two-pass algorithm compute these values correctly.
Next consider the sample (10^8 + 4, 10^8 + 7, 10^8 + 13, 10^8 + 16), which gives rise to the same estimated variance as the first sample. The two-pass algorithm computes this variance estimate correctly, but the naïve algorithm returns 29.333333333333332 instead of 30.
While this loss of precision may be tolerable and viewed as a minor flaw of the naïve algorithm, further increasing the
offset makes the error catastrophic. Consider the sample (10^9 + 4, 10^9 + 7, 10^9 + 13, 10^9 + 16). Again the estimated population variance of 30 is computed correctly by the two-pass algorithm, but the naïve algorithm now computes it as −170.66666666666666. This is a serious problem with the naïve algorithm and is due to catastrophic cancellation in the subtraction of two similar numbers at the final stage of the algorithm.
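The effect can be reproduced with the two_pass_variance function above and the naive_variance sketch from the naïve-algorithm section (the exact erroneous value produced by the naïve formula depends on the order of accumulation):

shifted_sample = [1e9 + x for x in (4, 7, 13, 16)]
print(two_pass_variance(shifted_sample))  # 30.0
print(naive_variance(shifted_sample))     # badly wrong, possibly negative, due to cancellation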
Higher-order statistics
Terriberry[11] extends Chan's formulae to calculating the third and fourth central moments, needed for example when estimating skewness and kurtosis:

M_{3,n} = M_{3,n-1} + \delta^3 \frac{(n - 1)(n - 2)}{n^2} - \frac{3\,\delta\, M_{2,n-1}}{n}

M_{4,n} = M_{4,n-1} + \delta^4 \frac{(n - 1)(n^2 - 3n + 3)}{n^3} + \frac{6\,\delta^2 M_{2,n-1}}{n^2} - \frac{4\,\delta\, M_{3,n-1}}{n}

where \delta = x_n - \bar{x}_{n-1}.
Here the M_k are again the sums of powers of differences from the mean, M_k = \sum_{i=1}^{n} (x_i - \bar{x})^k, giving

\text{skewness: } g_1 = \frac{\sqrt{n}\, M_3}{M_2^{3/2}}, \qquad \text{kurtosis: } g_2 = \frac{n\, M_4}{M_2^2} - 3

By preserving the value \delta / n, only one division operation is needed and the higher-order statistics can thus be calculated for little incremental cost.
def online_kurtosis(data):
    n = mean = M2 = M3 = M4 = 0

    for x in data:
        n1 = n
        n = n + 1
        delta = x - mean
        delta_n = delta / n
        delta_n2 = delta_n**2
        term1 = delta * delta_n * n1
        mean = mean + delta_n
        M4 = M4 + term1 * delta_n2 * (n**2 - 3*n + 3) + 6 * delta_n2 * M2 - 4 * delta_n * M3
        M3 = M3 + term1 * delta_n * (n - 2) - 3 * delta_n * M2
        M2 = M2 + term1

    # Note, you may also calculate variance using M2, and skewness using M3
    # Caution: If all the inputs are the same, M2 will be 0, resulting in a division by 0.
    kurtosis = (n * M4) / (M2**2) - 3
    return kurtosis
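As the comment notes, the same accumulators also yield the variance and skewness; a hypothetical helper (not part of the cited algorithm) might look like:

import math

def moments_to_stats(n, M2, M3, M4):
    variance = M2 / (n - 1)                 # unbiased sample variance
    skewness = math.sqrt(n) * M3 / M2**1.5  # sample skewness g1
    kurtosis = n * M4 / M2**2 - 3           # excess kurtosis g2
    return variance, skewness, kurtosis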
Pébaÿ[12] further extends these results to arbitrary-order central moments, for the incremental and the pairwise cases, and
subsequently Pébaÿ et al.[13] for weighted and compound moments. Similar formulas for covariance can also be found there.
Choi and Sweetman[14] offer two alternative methods to compute the skewness and kurtosis, each of which can save
substantial computer memory requirements and CPU time in certain applications. The first approach is to compute the
statistical moments by separating the data into bins and then computing the moments from the geometry of the resulting
histogram, which effectively becomes a one-pass algorithm for higher moments. One benefit is that the statistical moment
calculations can be carried out to arbitrary accuracy such that the computations can be tuned to the precision of, e.g., the
data storage format or the original measurement hardware. A relative histogram of a random variable can be constructed in the conventional way: the range of potential values is divided into bins and the number of occurrences within each bin is counted and plotted such that the area of each rectangle equals the portion of the sample values within that bin:
where and represent the frequency and the relative frequency at bin and is the
total area of the histogram. After this normalization, the raw moments and central moments of can be calculated
from the relative histogram:
where the superscript indicates the moments are calculated from the histogram. For constant bin width
these two expressions can be simplified using :
The second approach from Choi and Sweetman[14] is an analytical methodology to combine statistical moments from
individual segments of a time-history such that the resulting overall moments are those of the complete time-history. This
methodology could be used for parallel computation of statistical moments with subsequent combination of those
moments, or for combination of statistical moments computed at sequential times.
If sets of statistical moments are known: for , then each can be
expressed in terms of the equivalent raw moments:
where is generally taken to be the duration of the time-history, or the number of points if is constant.
The benefit of expressing the statistical moments in terms of is that the sets can be combined by addition, and there is
no upper limit on the value of .
where the subscript represents the concatenated time-history or combined . These combined values of can then be
inversely transformed into raw moments representing the complete concatenated time-history
Known relationships between the raw moments ( ) and the central moments ( ) are then used to
compute the central moments of the concatenated time-history. Finally, the statistical moments of the concatenated history
are computed from the central moments:
Covariance
Very similar algorithms can be used to compute the covariance.
Naïve algorithm
The naïve algorithm for the covariance uses the corresponding sum-of-products formula,

\operatorname{Cov}(X, Y) = \frac{\sum_{i=1}^{n} x_i y_i - \left(\sum_{i=1}^{n} x_i\right)\left(\sum_{i=1}^{n} y_i\right)/n}{n}

For this algorithm, one could use the following Python code:
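One such implementation, given here as an illustrative sketch rather than a reference version, is:

def naive_covariance(data1, data2):
    n = len(data1)
    sum1 = sum(data1)
    sum2 = sum(data2)
    sum12 = sum(x1 * x2 for x1, x2 in zip(data1, data2))
    # Subject to the same catastrophic cancellation as the naive variance formula
    covariance = (sum12 - sum1 * sum2 / n) / n
    return covariance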
As for the variance, the covariance of two random variables is also shift-invariant, so given any two constant values k_x and k_y it can be written

\operatorname{Cov}(X, Y) = \operatorname{Cov}(X - k_x, Y - k_y) = \frac{\sum_{i=1}^{n} (x_i - k_x)(y_i - k_y) - \left(\sum_{i=1}^{n} (x_i - k_x)\right)\left(\sum_{i=1}^{n} (y_i - k_y)\right)/n}{n}

and again choosing a value inside the range of values will stabilize the formula against catastrophic cancellation as well as make it more robust against big sums. Taking the first value of each data set, the algorithm can be written as:
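An illustrative sketch of this shifted-data computation, using the first element of each data set as the shift:

def shifted_data_covariance(data_x, data_y):
    n = len(data_x)
    if n < 2:
        return 0.0
    kx = data_x[0]
    ky = data_y[0]
    Ex = Ey = Exy = 0.0
    for ix, iy in zip(data_x, data_y):
        Ex += ix - kx
        Ey += iy - ky
        Exy += (ix - kx) * (iy - ky)
    # Population covariance; divide by (n - 1) instead for the sample covariance
    return (Exy - Ex * Ey / n) / n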
Two-pass
The two-pass algorithm first computes the sample means

\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i, \qquad \bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i

and then the covariance

\operatorname{Cov}(X, Y) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n}
def two_pass_covariance(data1, data2):
    n = len(data1)
    mean1 = sum(data1) / n
    mean2 = sum(data2) / n

    covariance = 0
    for i1, i2 in zip(data1, data2):
        a = i1 - mean1
        b = i2 - mean2
        covariance += a * b / n
    return covariance
A slightly more accurate compensated version performs the full naive algorithm on the residuals. The final sums \sum_i (x_i - \bar{x}) and \sum_i (y_i - \bar{y}) should be zero, but the second pass compensates for any small error.
Online
A stable one-pass algorithm exists, similar to the online algorithm for computing the variance, that computes the co-moment C_n = \sum_{i=1}^{n} (x_i - \bar{x}_n)(y_i - \bar{y}_n):

\bar{x}_n = \bar{x}_{n-1} + \frac{x_n - \bar{x}_{n-1}}{n}

\bar{y}_n = \bar{y}_{n-1} + \frac{y_n - \bar{y}_{n-1}}{n}

C_n = C_{n-1} + (x_n - \bar{x}_n)(y_n - \bar{y}_{n-1}) = C_{n-1} + (x_n - \bar{x}_{n-1})(y_n - \bar{y}_n)

The apparent asymmetry in that last equation is due to the fact that (x_n - \bar{x}_n) = \frac{n-1}{n}(x_n - \bar{x}_{n-1}), so both update terms are equal to \frac{n-1}{n}(x_n - \bar{x}_{n-1})(y_n - \bar{y}_{n-1}). Even greater accuracy can be achieved by first computing the means,
then using the stable one-pass algorithm on the residuals.
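A sketch of this one-pass co-moment accumulation (assuming two equal-length sequences; the names are illustrative):

def online_covariance(data1, data2):
    meanx = meany = C = n = 0
    for x, y in zip(data1, data2):
        n += 1
        dx = x - meanx
        meanx += dx / n
        meany += (y - meany) / n
        C += dx * (y - meany)
    return n, C

The co-moment C is then normalized to obtain the covariance: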
population_covar = C / n
# Bessel's correction for sample covariance
sample_covar = C / (n - 1)
A small modification also yields the weighted covariance, with wsum and wsum2 accumulating the sum of the weights and of their squares as in the weighted incremental variance algorithm above:

population_covar = C / wsum
# Bessel's correction for sample covariance
# Frequency weights
sample_frequency_covar = C / (wsum - 1)
# Reliability weights
sample_reliability_covar = C / (wsum - wsum2 / wsum)
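A sketch of the weighted one-pass accumulation assumed by these normalizations, following the pattern of the weighted incremental variance algorithm above (names are illustrative):

def online_weighted_covariance(data1, data2, weights):
    meanx = meany = C = 0.0
    wsum = wsum2 = 0.0
    for x, y, w in zip(data1, data2, weights):
        wsum += w
        wsum2 += w * w
        dx = x - meanx
        meanx += (w / wsum) * dx
        meany += (w / wsum) * (y - meany)
        C += w * dx * (y - meany)
    return wsum, wsum2, C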
Likewise, there is a formula for combining the covariances of two sets that can be used to parallelize the computation:[3]

C_X = C_A + C_B + (\bar{x}_A - \bar{x}_B)(\bar{y}_A - \bar{y}_B) \cdot \frac{n_A n_B}{n_X}
A version of the weighted online algorithm that does batched updates also exists: let w_1, \dots, w_N denote the weights, and write
The covariance can then be computed as
See also
Kahan summation algorithm
Squared deviations from the mean
Yamartino method
References
1. Einarsson, Bo (2005). Accuracy and Reliability in Scientific Computing (https://books.google.com/books?
id=8hrDV5EbrEsC). SIAM. p. 47. ISBN 978-0-89871-584-2.
2. Chan, Tony F.; Golub, Gene H.; LeVeque, Randall J. (1983). "Algorithms for computing the sample
variance: Analysis and recommendations" (http://cpsc.yale.edu/sites/default/files/files/tr222.pdf) (PDF).
The American Statistician. 37 (3): 242–247. doi:10.1080/00031305.1983.10483115 (https://doi.org/10.108
0%2F00031305.1983.10483115). JSTOR 2683386 (https://www.jstor.org/stable/2683386). Archived (http
s://ghostarchive.org/archive/20221009/http://cpsc.yale.edu/sites/default/files/files/tr222.pdf) (PDF) from
the original on 9 October 2022.
3. Schubert, Erich; Gertz, Michael (9 July 2018). Numerically stable parallel computation of (co-)variance (ht
tp://dl.acm.org/citation.cfm?id=3221269.3223036). ACM. p. 10. doi:10.1145/3221269.3223036 (https://doi.
org/10.1145%2F3221269.3223036). ISBN 9781450365055. S2CID 49665540 (https://api.semanticschol
ar.org/CorpusID:49665540).
4. Higham, Nicholas (2002). Accuracy and Stability of Numerical Algorithms (2nd ed.), Problem 1.10. SIAM.
5. Welford, B. P. (1962). "Note on a method for calculating corrected sums of squares and products".
Technometrics. 4 (3): 419–420. doi:10.2307/1266577 (https://doi.org/10.2307%2F1266577).
JSTOR 1266577 (https://www.jstor.org/stable/1266577).
6. Donald E. Knuth (1998). The Art of Computer Programming, volume 2: Seminumerical Algorithms, 3rd
edn., p. 232. Boston: Addison-Wesley.
7. Ling, Robert F. (1974). "Comparison of Several Algorithms for Computing Sample Means and Variances".
Journal of the American Statistical Association. 69 (348): 859–866. doi:10.2307/2286154 (https://doi.org/1
0.2307%2F2286154). JSTOR 2286154 (https://www.jstor.org/stable/2286154).
8. "Accurately computing sample variance online" (http://www.johndcook.com/standard_deviation.html).
9. West, D. H. D. (1979). "Updating Mean and Variance Estimates: An Improved Method" (https://doi.org/10.
1145%2F359146.359153). Communications of the ACM. 22 (9): 532–535. doi:10.1145/359146.359153 (h
ttps://doi.org/10.1145%2F359146.359153). S2CID 30671293 (https://api.semanticscholar.org/CorpusID:3
0671293).
10. Chan, Tony F.; Golub, Gene H.; LeVeque, Randall J. (1979), "Updating Formulae and a Pairwise
Algorithm for Computing Sample Variances." (http://i.stanford.edu/pub/cstr/reports/cs/tr/79/773/CS-TR-79-
773.pdf) (PDF), Technical Report STAN-CS-79-773, Department of Computer Science, Stanford
University.
11. Terriberry, Timothy B. (2007), Computing Higher-Order Moments Online (https://web.archive.org/web/201
40423031833/http://people.xiph.org/~tterribe/notes/homs.html), archived from the original (http://people.xi
ph.org/~tterribe/notes/homs.html) on 23 April 2014, retrieved 5 May 2008
12. Pébaÿ, Philippe (2008), "Formulas for Robust, One-Pass Parallel Computation of Covariances and
Arbitrary-Order Statistical Moments" (http://infoserve.sandia.gov/sand_doc/2008/086212.pdf) (PDF),
Technical Report SAND2008-6212, Sandia National Laboratories, archived (https://ghostarchive.org/arch
ive/20221009/http://infoserve.sandia.gov/sand_doc/2008/086212.pdf) (PDF) from the original on 9
October 2022
13. Pébaÿ, Philippe; Terriberry, Timothy; Kolla, Hemanth; Bennett, Janine (2016), "Numerically Stable,
Scalable Formulas for Parallel and Online Computation of Higher-Order Multivariate Central Moments
with Arbitrary Weights" (https://zenodo.org/record/1232635), Computational Statistics, Springer, 31 (4):
1305–1325, doi:10.1007/s00180-015-0637-z (https://doi.org/10.1007%2Fs00180-015-0637-z),
S2CID 124570169 (https://api.semanticscholar.org/CorpusID:124570169)
14. Choi, Myoungkeun; Sweetman, Bert (2010), "Efficient Calculation of Statistical Moments for Structural
Health Monitoring", Journal of Structural Health Monitoring, 9 (1): 13–24,
doi:10.1177/1475921709341014 (https://doi.org/10.1177%2F1475921709341014), S2CID 17534100 (htt
ps://api.semanticscholar.org/CorpusID:17534100)
External links
Weisstein, Eric W. "Sample Variance Computation" (https://mathworld.wolfram.com/SampleVarianceCom
putation.html). MathWorld.