

The method of least squares is a standard approach to the approximate solution of overdetermined systems, i.e. sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in solving every single equation.
The most important application is in data fitting. The best fit in the least-squares sense minimizes the sum of squared residuals, a residual being the difference between an observed value and the fitted value provided by a model.
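As a minimal sketch of least-squares data fitting (the data values below are invented for illustration), the following fits a line y = a + b·x to five observations. `numpy.linalg.lstsq` minimizes the sum of squared residuals directly:

```python
import numpy as np

# Hypothetical observations of y at known x values (chosen for illustration).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

# Design matrix for the model y = a + b*x: a column of ones and a column of x.
A = np.column_stack([np.ones_like(x), x])

# lstsq finds coeffs minimizing ||A @ coeffs - y||^2 (the sum of squared residuals).
coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
a, b = coeffs

# Residuals: observed value minus fitted value, one per observation.
residuals = y - A @ coeffs
print(a, b, np.sum(residuals**2))
```

With five equations (one per observation) and only two unknowns, the system is overdetermined; no line passes through all points, so the fit minimizes the total squared error instead.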
Least squares problems fall into two categories: linear or ordinary least squares and nonlinear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. The nonlinear problem has no closed-form solution and is usually solved by iterative refinement; at each iteration the system is approximated by a linear one, so the core calculation is similar in both cases.
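The iterative refinement described above can be sketched with a Gauss–Newton iteration (one common such scheme; the model, data, and starting guess here are invented for illustration). Each step linearizes the residuals at the current iterate and solves the resulting linear least-squares subproblem:

```python
import numpy as np

# Hypothetical data generated from y = 2 * exp(0.5 * x), noise-free for clarity.
x = np.linspace(0.0, 2.0, 8)
y = 2.0 * np.exp(0.5 * x)

def model(theta, x):
    # Nonlinear model y = a * exp(b * x); residuals are nonlinear in b.
    a, b = theta
    return a * np.exp(b * x)

def jacobian(theta, x):
    # Partial derivatives of the model with respect to a and b.
    a, b = theta
    e = np.exp(b * x)
    return np.column_stack([e, a * x * e])

theta = np.array([1.5, 0.4])  # rough starting guess
for _ in range(50):
    r = model(theta, x) - y      # residuals at the current iterate
    J = jacobian(theta, x)       # linearization at the current iterate
    # The core linear least-squares subproblem: J @ delta ~= -r.
    delta, _, _, _ = np.linalg.lstsq(J, -r, rcond=None)
    theta = theta + delta
    if np.linalg.norm(delta) < 1e-12:
        break

print(theta)  # approaches the true parameters (2.0, 0.5)
```

The inner `lstsq` call is exactly the linear problem from the previous example, which is what the article means by the core calculation being similar in both cases.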
The least-squares method was first published by Adrien-Marie Legendre in 1805, though Carl Friedrich Gauss claimed to have used it earlier, around 1795.^{[1]} Least squares corresponds to the maximum likelihood criterion if the experimental errors have a normal distribution and can also be derived as a method of moments estimator.
The following discussion is mostly presented in terms of linear functions, but the use of least squares is valid and practical for more general families of functions. For example, the Fourier series approximation of degree n is optimal in the least-squares sense, amongst all approximations in terms of trigonometric polynomials of degree n. Also, by iteratively applying local quadratic approximation to the likelihood (through the Fisher information), the least-squares method may be used to fit a generalized linear model.
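The Fourier claim can be checked numerically (a sketch; the target function and grid size are chosen for illustration): fitting a trigonometric polynomial by least squares over a dense periodic grid recovers the truncated Fourier series. For a square wave, the series is (4/π)(sin t + sin 3t/3 + ...), so the least-squares coefficients of the cosine terms should vanish and the sin t coefficient should be close to 4/π:

```python
import numpy as np

# Dense equispaced grid over one period; f is a square wave.
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
f = np.sign(np.sin(t))

n = 3  # degree of the trigonometric polynomial
# Basis functions: 1, cos(kt), sin(kt) for k = 1..n.
cols = [np.ones_like(t)]
for k in range(1, n + 1):
    cols.append(np.cos(k * t))
    cols.append(np.sin(k * t))
B = np.column_stack(cols)

# Least-squares coefficients of the best trig-polynomial approximation.
c, _, _, _ = np.linalg.lstsq(B, f, rcond=None)

# Column order: [1, cos t, sin t, cos 2t, sin 2t, cos 3t, sin 3t],
# so c[2] is the sin t coefficient and c[6] the sin 3t coefficient.
print(c[2], c[6])  # near 4/pi and 4/(3*pi)
```

Because the basis columns are (discretely) orthogonal on an equispaced periodic grid, the least-squares fit and the truncated Fourier expansion coincide, which is what the optimality statement asserts.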


