Linear prediction is a mathematical operation where future values of a discrete-time signal are estimated as a linear function of previous samples.
In digital signal processing, linear prediction is often called linear predictive coding (LPC) and can thus be viewed as a subset of filter theory. In system analysis (a subfield of mathematics), linear prediction can be viewed as a part of mathematical modelling or optimization.
The prediction model
The most common representation is

    x̂(n) = Σ_{i=1}^{p} a_{i} x(n − i),

where x̂(n) is the predicted signal value, x(n − i) the previous observed values, p the model order, and a_{i} the predictor coefficients. The error generated by this estimate is

    e(n) = x(n) − x̂(n),

where x(n) is the true signal value.
These equations are valid for all types of (one-dimensional) linear prediction. The differences are found in the way the parameters a_{i} are chosen.
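As an illustrative sketch (not part of the original article), the prediction and error equations translate directly into Python with NumPy; the coefficients a below are arbitrary example values, not estimated ones:

```python
import numpy as np

def predict(x, a, n):
    """Predict x[n] as a linear combination of the p previous samples:
    x_hat(n) = sum_{i=1}^{p} a_i * x(n - i).

    x : 1-D array of observed samples
    a : predictor coefficients a_1 ... a_p (illustrative values here)
    """
    p = len(a)
    return sum(a[i - 1] * x[n - i] for i in range(1, p + 1))

# Example: a linear ramp satisfies x(n) = 2 x(n-1) - x(n-2) exactly,
# so these (hand-picked) coefficients predict it with zero error.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
a = [2.0, -1.0]               # hypothetical predictor coefficients
x_hat = predict(x, a, 4)      # predict x[4] from x[3] and x[2]
e = x[4] - x_hat              # prediction error e(n) = x(n) - x_hat(n)
```

For this particular signal the predictor is exact, so the error e is zero; for real signals the coefficients must be estimated, as described below.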
For multidimensional signals the error metric is often defined as

    e(n) = ‖x(n) − x̂(n)‖,

where ‖·‖ is a suitably chosen vector norm.
Estimating the parameters
The most common choice in optimizing the parameters a_{i} is the root mean square criterion, also called the autocorrelation criterion. In this method we minimize the expected value of the squared error E[e^{2}(n)], which yields the equation

    Σ_{i=1}^{p} a_{i} R(j − i) = R(j)

for 1 ≤ j ≤ p, where R is the autocorrelation of the signal x(n), defined as

    R(i) = E{x(n) x(n − i)},

and E is the expected value. In the multidimensional case this corresponds to minimizing the L_{2} norm.
The above equations are called the normal equations or Yule–Walker equations. In matrix form they can be equivalently written as

    R a = r,

where the autocorrelation matrix R is a symmetric p × p Toeplitz matrix with elements r_{i,j} = R(i − j), the vector r is the autocorrelation vector r_{j} = R(j), and the vector a is the parameter vector.
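A minimal sketch of this estimation in Python, assuming NumPy and substituting a biased sample autocorrelation for the expectation E; the AR(2) test signal and its coefficients are illustrative choices, not from the article:

```python
import numpy as np

def lpc(x, p):
    """Estimate predictor coefficients a_1 ... a_p from the normal
    equations, using the (biased) sample autocorrelation as R."""
    n = len(x)
    # R(k) estimated as (1/n) * sum over available products x(m) x(m + k)
    R = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
    # Symmetric Toeplitz autocorrelation matrix with elements R(i - j)
    Rmat = np.array([[R[abs(i - j)] for j in range(p)] for i in range(p)])
    r = R[1:p + 1]                   # autocorrelation vector R(1) ... R(p)
    return np.linalg.solve(Rmat, r)  # solve R a = r for the parameters a

# Example: data generated by x(n) = 0.6 x(n-1) - 0.2 x(n-2) + w(n),
# so the estimated coefficients should be close to [0.6, -0.2].
rng = np.random.default_rng(0)
w = rng.standard_normal(100_000)
x = np.zeros_like(w)
for n in range(2, len(x)):
    x[n] = 0.6 * x[n - 1] - 0.2 * x[n - 2] + w[n]

a = lpc(x, 2)
```

Directly solving the Toeplitz system costs O(p³); in practice the Levinson–Durbin recursion exploits the Toeplitz structure to solve it in O(p²).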