Algorithms for calculating variance


Algorithms for calculating variance play a major role in statistical computing. A key problem in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow when dealing with large values.


Naïve algorithm

The formula for calculating the variance of an entire population of size n is:

\sigma^2 = \displaystyle\frac{\sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2/n}{n}.

The formula for calculating an unbiased estimate of the population variance from a finite sample of n observations is:

s^2 = \displaystyle\frac{\sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2/n}{n-1}.
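As a quick sanity check (a minimal sketch; the dataset and variable names are our own, and the comparison uses Python's standard statistics module), both formulas can be evaluated directly:

import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)
sum_x = sum(data)
sum_x2 = sum(x*x for x in data)

pop_var = (sum_x2 - sum_x*sum_x/n)/n          # divide by n
samp_var = (sum_x2 - sum_x*sum_x/n)/(n - 1)   # divide by n - 1

print(pop_var, statistics.pvariance(data))    # 4.0 4.0
print(samp_var, statistics.variance(data))    # 4.571428... 4.571428...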

Therefore, a naïve algorithm to calculate the estimated variance is given by the following:

def naive_variance(data):
    n = 0
    Sum = 0.0       # running sum of the values
    Sum_sqr = 0.0   # running sum of the squared values
 
    for x in data:
        n = n + 1
        Sum = Sum + x
        Sum_sqr = Sum_sqr + x*x
 
    mean = Sum/n
    # Sum_sqr - Sum*mean is the sum of squared deviations from the mean
    variance = (Sum_sqr - Sum*mean)/(n - 1)
    return variance

This algorithm is easily adapted to compute the variance of a finite population: simply divide by n instead of n − 1 in the final step, as in the sketch below.
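A minimal sketch (the name naive_population_variance is hypothetical):

def naive_population_variance(data):
    # Identical to naive_variance except for the divisor.
    n = 0
    Sum = 0.0
    Sum_sqr = 0.0
    for x in data:
        n = n + 1
        Sum = Sum + x
        Sum_sqr = Sum_sqr + x*x
    mean = Sum/n
    return (Sum_sqr - Sum*mean)/n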

Because Sum_sqr and Sum*mean can be very nearly equal, their difference cancels most of the significant digits, so the precision of the result can be much less than the inherent precision of the floating-point arithmetic used to perform the computation. This is particularly bad if the standard deviation is small relative to the mean.
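The loss of precision is easy to demonstrate (a sketch; the offset of 10^9 and the toy values are arbitrary choices). Shifting a small-variance data set far from zero leaves its variance unchanged but makes Sum_sqr and Sum*mean nearly equal:

data = [4.0, 7.0, 13.0, 16.0]        # variance is exactly 30.0
shifted = [x + 1e9 for x in data]    # same variance, huge mean

print(naive_variance(data))      # 30.0
print(naive_variance(shifted))   # large error, possibly even negative:
                                 # catastrophic cancellation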

Two-pass algorithm

An alternate approach, using a different formula for the variance, first computes the sample mean:

\bar x = \displaystyle \frac{1}{n} \sum_{j=1}^n x_j

and then computes the sample variance as the sum of squares of the differences from the mean, divided by n − 1:

s^2 = \displaystyle\frac {\sum_{i=1}^n (x_i - \bar x)^2}{n-1}.

This algorithm is given by the following pseudocode:

def two_pass_variance(data):
    n    = 0
    sum1 = 0.0
 
    # First pass: compute the mean.
    for x in data:
        n    = n + 1
        sum1 = sum1 + x
 
    mean = sum1/n
 
    # Second pass: accumulate squared deviations from the mean.
    sum2 = 0.0
    for x in data:
        sum2 = sum2 + (x - mean)*(x - mean)
 
    variance = sum2/(n - 1)
    return variance

This algorithm is often more numerically reliable than the naïve algorithm for large data sets, although it can do worse when much of the data lies very close to, but not exactly at, the mean while a few values lie far from it.
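Continuing the earlier sketch, the two-pass algorithm handles the shifted data without difficulty, because each x - mean subtraction involves two nearby numbers and is computed with little or no rounding error:

shifted = [x + 1e9 for x in [4.0, 7.0, 13.0, 16.0]]
print(two_pass_variance(shifted))   # 30.0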

The results of both of these simple algorithms (the naïve and the two-pass) can depend inordinately on the ordering of the data and can give poor results for very large data sets, due to repeated round-off error in the accumulation of the sums. Techniques such as compensated summation can be used to combat this error to a degree.
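For reference, a minimal sketch of compensated (Kahan) summation; the helper name kahan_sum is ours. A running compensation term recaptures the low-order bits that are lost each time a small value is added to a large running total:

def kahan_sum(data):
    total = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for x in data:
        y = x - c            # apply the correction carried over so far
        t = total + y        # big + small: low-order bits of y are lost
        c = (t - total) - y  # algebraically zero; in floating point it
                             # recovers the lost bits, sign-flipped
        total = t
    return total

The same accumulate-with-correction pattern can be applied to each of the running sums in the algorithms above.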

Compensated variant

The compensated-summation version of the algorithm above reads:

def compensated_variance(data):
    n = 0
    sum1 = 0.0
    # First pass: compute the mean.
    for x in data:
        n = n + 1
        sum1 = sum1 + x
    mean = sum1/n
 
    sum2 = 0.0   # sum of squared deviations from the mean
    sumc = 0.0   # sum of the deviations; exactly 0 in exact arithmetic,
                 # so it measures the accumulated rounding error
    for x in data:
        sum2 = sum2 + (x - mean)**2
        sumc = sumc + (x - mean)
    # The sumc**2/n term corrects for the error captured by sumc.
    variance = (sum2 - sumc**2/n)/(n - 1)
    return variance
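On the shifted data from the earlier sketch this version also returns 30.0. Its advantage over the plain two-pass algorithm appears when the first-pass mean is itself rounded: the deviations x - mean then no longer sum to zero, sumc captures exactly that residue, and the sumc**2/n term removes the resulting bias.

shifted = [x + 1e9 for x in [4.0, 7.0, 13.0, 16.0]]
print(compensated_variance(shifted))   # 30.0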

Online algorithm

It is often useful to be able to compute the variance in a single pass, inspecting each value xi only once; for example, when the data are being collected without enough storage to keep all the values, or when costs of memory access dominate those of computation. For such an online algorithm, a recurrence relation is required between quantities from which the required statistics can be calculated in a numerically stable fashion.

The following formulas can be used to update the mean and (estimated) variance of the sequence when an additional element x_n arrives. Here, \bar x_n denotes the sample mean of the first n samples (x_1, ..., x_n), s^2_n their sample variance, and \sigma^2_n their population variance.

\bar x_n = \frac{(n-1) \, \bar x_{n-1} + x_n}{n} = \bar x_{n-1} + \frac{x_n - \bar x_{n-1}}{n}
s^2_n = \frac{(n-2) \, s^2_{n-1} + (x_n - \bar x_n) (x_n - \bar x_{n-1})}{n-1}, \quad n>1
\sigma^2_n = \frac{(n-1) \, \sigma^2_{n-1} + (x_n - \bar x_n) \, (x_n - \bar x_{n-1})}{n}.

It turns out that a more suitable quantity for updating is the sum of squares of differences from the (current) mean, \textstyle\sum_{i=1}^n (x_i - \bar x_n)^2, here denoted M_{2,n}. It satisfies the recurrence

M_{2,n} = M_{2,n-1} + (x_n - \bar x_{n-1})(x_n - \bar x_n)

from which the variances above are recovered as s^2_n = M_{2,n}/(n-1) and \sigma^2_n = M_{2,n}/n.
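A one-pass implementation based on this recurrence (a sketch of what is commonly known as Welford's method; the function name online_variance is our choice):

def online_variance(data):
    n = 0
    mean = 0.0
    M2 = 0.0   # sum of squared differences from the current mean
 
    for x in data:
        n = n + 1
        delta = x - mean               # deviation from the old mean
        mean = mean + delta/n          # update the running mean
        M2 = M2 + delta*(x - mean)     # uses old and new mean, as above
 
    variance = M2/(n - 1)   # sample variance; use M2/n for the population
    return variance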

