In mathematics, particularly linear algebra and numerical analysis, the Gram–Schmidt process is a method for orthonormalising a set of vectors in an inner product space, most commonly the Euclidean space R^{n}. The Gram–Schmidt process takes a finite, linearly independent set S = {v_{1}, …, v_{k}} for k ≤ n and generates an orthogonal set S′ = {u_{1}, …, u_{k}} that spans the same k-dimensional subspace of R^{n} as S.
The method is named for Jørgen Pedersen Gram and Erhard Schmidt, but it appeared earlier in the work of Laplace and Cauchy. In the theory of Lie group decompositions it is generalized by the Iwasawa decomposition.
The application of the Gram–Schmidt process to the column vectors of a full column rank matrix yields the QR decomposition: the matrix is decomposed into the product of an orthogonal matrix and an upper triangular matrix.
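This connection can be sketched in a few lines of NumPy. The function below (a hypothetical helper, not from the article) orthonormalises the columns of A in order; the coefficients recorded along the way form the upper triangular factor R, so that A = QR:

```python
import numpy as np

def qr_via_gram_schmidt(A):
    # Classical Gram–Schmidt applied to the columns of A
    # (A is assumed to have full column rank).
    n, k = A.shape
    Q = np.zeros((n, k))
    R = np.zeros((k, k))
    for j in range(k):
        v = A[:, j].copy()
        for i in range(j):
            # Coefficient of column j along the already-built direction i
            R[i, j] = Q[:, i] @ A[:, j]
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])
Q, R = qr_via_gram_schmidt(A)
# Q has orthonormal columns, R is upper triangular, and Q @ R reconstructs A
```

In practice, library routines such as `numpy.linalg.qr` use Householder reflections instead, which are numerically more stable than classical Gram–Schmidt; the sketch above is for illustrating the decomposition, not for production use.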
The Gram–Schmidt process
We define the projection operator by

proj_{u}(v) = (〈v, u〉 / 〈u, u〉) u,

where 〈u, v〉 denotes the inner product of the vectors u and v. This operator projects the vector v orthogonally onto the vector u.
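As a quick numerical check of the projection operator, here is a minimal sketch using the standard dot product on R^{2} (the function name `proj` is our own choice):

```python
import numpy as np

def proj(u, v):
    # Orthogonal projection of v onto u: (<v, u> / <u, u>) * u
    return (np.dot(v, u) / np.dot(u, u)) * u

u = np.array([1.0, 0.0])
v = np.array([3.0, 4.0])
p = proj(u, v)  # the component of v along u
# The residual v - p is orthogonal to u, as expected
```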
The Gram–Schmidt process then works as follows:

u_{1} = v_{1},  e_{1} = u_{1} / ‖u_{1}‖
u_{2} = v_{2} − proj_{u_{1}}(v_{2}),  e_{2} = u_{2} / ‖u_{2}‖
u_{3} = v_{3} − proj_{u_{1}}(v_{3}) − proj_{u_{2}}(v_{3}),  e_{3} = u_{3} / ‖u_{3}‖
⋮
u_{k} = v_{k} − Σ_{j=1}^{k−1} proj_{u_{j}}(v_{k}),  e_{k} = u_{k} / ‖u_{k}‖
The sequence u_{1}, ..., u_{k} is the required system of orthogonal vectors, and the normalized vectors e_{1}, ..., e_{k} form an orthonormal set. The calculation of the sequence u_{1}, ..., u_{k} is known as Gram–Schmidt orthogonalization, while the calculation of the sequence e_{1}, ..., e_{k} is known as Gram–Schmidt orthonormalization, as the vectors are normalized.
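The process above translates directly into code. The following sketch (our own, using NumPy) returns both the orthogonal sequence u_{1}, ..., u_{k} and the orthonormal sequence e_{1}, ..., e_{k}:

```python
import numpy as np

def gram_schmidt(vectors):
    # Classical Gram–Schmidt on a linearly independent list of vectors.
    # Returns (us, es): the orthogonal u's and the orthonormal e's.
    us, es = [], []
    for v in vectors:
        # Subtract the projection onto every previously built u
        u = v - sum(np.dot(v, w) / np.dot(w, w) * w for w in us)
        us.append(u)
        es.append(u / np.linalg.norm(u))
    return us, es

us, es = gram_schmidt([np.array([3.0, 1.0]), np.array([2.0, 2.0])])
# us[0] and us[1] are orthogonal; each e in es has unit length
```

Note that this classical variant can lose orthogonality in floating-point arithmetic when the input vectors are nearly dependent; the modified Gram–Schmidt variant, which subtracts each projection immediately, behaves better numerically.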
