Talk:Gram–Schmidt process

From Wikipedia, the free encyclopedia
WikiProject Mathematics (Rated C-class, Mid-importance; Field: Algebra)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.


The Matlab implementation for the Gram-Schmidt process is for a specific norm and inner product definition (here the standard Euclidean inner product and, by its extension, the 2-norm). It should be updated to reflect that. — Preceding unsigned comment added by BlackMetalStats (talkcontribs) 00:19, 3 April 2017 (UTC)
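To illustrate the point, a Gram-Schmidt routine can accept the inner product as a parameter instead of hard-coding the Euclidean one. A minimal Python sketch (the function names and the example weighted form are mine, not from the article or the Matlab code):

```python
import numpy as np

def gram_schmidt(vectors, inner=np.dot):
    """Orthonormalize `vectors` with respect to an arbitrary inner product.

    `inner` defaults to the standard Euclidean inner product, which induces
    the 2-norm; any symmetric positive-definite form works the same way.
    """
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:
            w = w - (inner(w, u) / inner(u, u)) * u   # subtract proj_u(w)
        basis.append(w / np.sqrt(inner(w, w)))        # normalize in the induced norm
    return basis

# A weighted inner product <x, y> = x^T W y with W positive definite:
W = np.diag([1.0, 2.0, 3.0])
weighted = lambda x, y: x @ W @ y

B = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]], inner=weighted)
```

The resulting vectors are orthonormal with respect to `weighted`, not with respect to the ordinary dot product, which is exactly the distinction being raised above.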

I agree with the user below. How can it have been wrong here for SO long? Someone needs to change this. Check ANY reference if it's not obvious.

—Preceding unsigned comment added by (talk) 22:11, 15 December 2009 (UTC)

The projection proj_u(v) = (<u, v>/<u, u>) u is not self-adjoint. It should be (<v, u>/<u, u>) u, or else it should be pointed out that <·, ·> is understood as linear in the second argument. Weeping604 (talk) 12:14, 10 July 2008 (UTC)

In the definition of the projection operator I changed <u, v> to denote the inner product instead of the dot product. The dot product is just a special case.


Does anyone know when the Gram-Schmidt process was discovered? And who Mr. Gram and Mr. Schmidt are?

Jørgen Pedersen Gram (1850-1916), Danish mathematician.


See EXTERNAL LINKS in the article. It tells you. JFB80 (talk) 14:35, 18 April 2012 (UTC)

I was going to rewrite the process in terms of projections to clean it up a bit, but I'll probably stuff it up somewhere and be shouted at. So, if you feel up to doing so, I think it'd clear up the process greatly. Dysprosia 01:41, 26 Aug 2004 (UTC)

I changed the calculation of u_2 etc. to use projection onto u_1 instead of onto e_1. This allows the process to work in fields where you cannot take square roots, such as the rationals. I have a program that produces a series of orthogonal vectors spanning the right space.
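For illustration, the square-root-free variant can be run exactly over the rationals with Python's fractions module. A sketch of the idea (my own code, not the program mentioned above):

```python
from fractions import Fraction

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def orthogonalize(vs):
    """Produce orthogonal (not orthonormal) vectors over the rationals:
    projecting onto the unnormalized u_i means no square root is ever taken."""
    us = []
    for v in vs:
        u = [Fraction(c) for c in v]
        for w in us:
            coeff = dot(u, w) / dot(w, w)          # exact rational arithmetic
            u = [a - coeff * b for a, b in zip(u, w)]
        us.append(u)
    return us

us = orthogonalize([[1, 1, 1], [1, 0, 2], [2, 1, 0]])
```

Every entry stays a `Fraction`, and the pairwise dot products come out exactly zero, which would be impossible if the intermediate vectors were normalized.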

Ewwwwww, I really don't like the notation proj_u v. If anything it should be written proj_v u, to emphasize that the projection is just a scaling of u.

Maybe we need an illustration

Maybe we need an illustration of the projection just like most text books.

I just created this…
Gram–Schmidt process.svg

…would it fulfill its purpose? –Gustavb 07:16, 18 February 2006 (UTC)
Yes, I guess that is basically it. However, I think that is too long. We need to have
so the direction of is correct, but it should only go up to the dashed line. Does that make sense to you? If not, I'll change the picture myself, but I don't know the TeX package you are using (it looks quite easy to use though). -- Jitse Niesen (talk) 14:52, 18 February 2006 (UTC)
Yes, of course…guess I was tired when I did it. I have now created a new one but unfortunately I can't upload it due to a restriction in wikimedia commons which only allows users registered for more than four days to replace files (I registered yesterday.) –Gustavb 15:24, 18 February 2006 (UTC)
That seems rather silly to me; what's the purpose of prohibiting somebody from overwriting their own files? If you want, you can email it to me at (making the obvious substitution) and I'll upload it on your behalf. You can also just wait a couple of days. -- Jitse Niesen (talk) 15:49, 18 February 2006 (UTC)
I don't think it checks who uploaded the original file – it just block all uploads for the purpose of reducing vandalism (I assume). So yes, it's a bad solution. Thanks for uploading the new version. –Gustavb 16:27, 18 February 2006 (UTC)


Does the set of vectors v1, …, vk not need to form a basis over the vector space? This is not mentioned anywhere on the page and is very important to the definition I have learnt. Stb20 12:48, 22 January 2007 (UTC)

I'm rather sure that they don't need to form a basis. Do you have a text book which says that they do need to form a basis?
They do need to be linearly independent though. This is mentioned in the article. -- Jitse Niesen (talk) 11:35, 23 January 2007 (UTC)
My book does mention it yes, although it is referring to this process in forming an orthogonal basis, rather than just an orthogonal set of vectors, so I assume this is the only time it applies. Is it still worth a mention? Stb20 11:47, 24 January 2007 (UTC)


For the projection, shouldn't it be proj(v) = [(v*u)/(u*u)] * u, where * is the dot product?

The author wrote it as (<v,u>/<u,u>) u, and I don't understand what they mean.

Is <v,u> a matrix or something else?

If it is a matrix then there shouldn't be division. If the expression is still correct, please explain what the symbols mean.

Xfliang 05:40, 16 February 2007 (UTC)

<v,u> is another notation for the dot product of v and u. I added a bit to the article to explain this. -- Jitse Niesen (talk) 07:17, 16 February 2007 (UTC)
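Concretely, with the dot product both <v,u> and <u,u> are scalars, so the quotient is an ordinary number and the projection is just a scalar multiple of u. A minimal sketch:

```python
def proj(v, u):
    """proj_u(v) = (<v, u> / <u, u>) u, with <., .> the dot product."""
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    c = dot(v, u) / dot(u, u)   # <v,u> and <u,u> are scalars, so division is fine
    return [c * a for a in u]

p = proj([3, 4], [1, 0])   # projecting (3, 4) onto the x-axis → [3.0, 0.0]
```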

Factual accuracy on the method presented here

Maybe I'm reading the information on the page wrong, but as far as I can tell, if I try to use the information here to create the 4th Legendre polynomial starting from 1, x, x^2, x^3, etc., I get an incorrect answer; however, if I use the information on MathWorld I get the right answer. Eraserhead1 23:31, 18 February 2007 (UTC)

I'm sorry, but I can't see where the algorithm on this page is different from the algorithm in MathWorld. Could you perhaps explain where the algorithms differ? Is P4 the first case where you get different results? -- Jitse Niesen (talk) 13:01, 10 March 2007 (UTC)
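In case it helps future readers check this themselves: orthogonalizing 1, x, x², x³ under <p, q> = ∫_{-1}^{1} p(x)q(x) dx with exact rational arithmetic does reproduce the Legendre polynomials up to scaling. A sketch (all names are mine; polynomials are coefficient lists, lowest degree first):

```python
from fractions import Fraction

def inner(p, q):
    """<p, q> = integral over [-1, 1] of p(x) q(x) dx.
    The integral of x^n over [-1, 1] is 2/(n+1) for even n and 0 for odd n."""
    total = Fraction(0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if (i + j) % 2 == 0:
                total += Fraction(2, i + j + 1) * a * b
    return total

def gram_schmidt_polys(degree):
    basis = []
    for n in range(degree + 1):
        u = [Fraction(0)] * n + [Fraction(1)]          # the monomial x^n
        for w in basis:
            c = inner(u, w) / inner(w, w)
            u = [a - c * (w[k] if k < len(w) else 0) for k, a in enumerate(u)]
        basis.append(u)
    return basis

polys = gram_schmidt_polys(3)
# polys[3] comes out as x^3 - (3/5)x, proportional to P3(x) = (5x^3 - 3x)/2
```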

numerically stable algorithm

I think there's a mistake in the superscripts in that section. In the last equation it should be either u_k^(k-1), OR all superscripts on the right-hand side of the equation must be increased by one. Also the notation for the projection operator is a bit confusing: isn't proj_{u_{k-1}} u_k^(k-2) actually meant instead of proj_{u^(k-1)} u_k^(k-2)? There seems to be a k missing... 12:31, 14 June 2007 (UTC)

Perhaps it's easier to write the algorithm as
The vectors are then orthogonal. If you now use to denote the vector , you get the algorithm as mentioned in the article. Perhaps the notation isn't quite consistent, but I preferred to save some superscripts. Do you think the version with is less confusing?
Do you understand the difference between the original and the stabilized algorithm on an implementation level? There is pseudocode for the stabilized version in the article. The original version looks like
input: v1, …, vk; output: u1, …, uk
for j from 1 to k do
    uj ← vj
    for i from 1 to j − 1 do
        uj ← uj − proj_{ui}(vj)
    end for
end for
It's only a subtle difference in the code: the stabilized version projects the partially reduced uj, where this one projects the original vj. -- Jitse Niesen (talk) 13:26, 14 June 2007 (UTC)
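For readers following along, the two variants can be put side by side in Python (my own sketch, not the article's pseudocode), and the difference shows up on nearly dependent vectors:

```python
import numpy as np

def classical_gs(V):
    """Classical: every projection is taken of the ORIGINAL v_j."""
    U = []
    for v in V:
        u = v.copy()
        for ui in U:
            u = u - (ui @ v) / (ui @ ui) * ui
        U.append(u)
    return U

def modified_gs(V):
    """Modified/stabilized: each projection is taken of the partially
    orthogonalized vector, so earlier rounding errors are projected out too."""
    U = []
    for v in V:
        u = v.copy()
        for ui in U:
            u = u - (ui @ u) / (ui @ ui) * ui
        U.append(u)
    return U

eps = 1e-8  # small enough that 1 + eps**2 rounds to 1 in float64
V = [np.array([1.0, eps, 0.0, 0.0]),
     np.array([1.0, 0.0, eps, 0.0]),
     np.array([1.0, 0.0, 0.0, eps])]

def cos23(U):
    """Cosine of the angle between the computed u_2 and u_3 (should be 0)."""
    return abs(U[1] @ U[2]) / (np.linalg.norm(U[1]) * np.linalg.norm(U[2]))
```

On this example the classical version leaves u_2 and u_3 at roughly 60 degrees (cosine about 0.5), while the modified version keeps them orthogonal to working precision.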

Yes, I got the difference:-) Thanks for the answer. Though I'm still thinking about the why... Raneko 10:20, 15 June 2007 (UTC)

Shouldn't the line in the pseudocode:

for i from 1 to j − 1 do

actually be:

for i from j + 1 to k do

With that change, my Python implementation produces sensible results (unlike the original):

import numpy as np

def stabGramSchmidt(a):
        # a: k-by-n array; each of the k rows is one input vector
        k, n = a.shape
        for j in range(k):
                for i in range(j + 1, k):
                        # remove from a[j] the projection of a[j] onto a[i]
                        a[j] -= a[i] * np.dot(a[i], a[j]) / np.dot(a[i], a[i])
                a[j] /= np.linalg.norm(a[j])    # normalize
        return a

modified Gram-Schmidt

It'd be nice to go over the distinction between classical and modified gram-schmidt. If anyone's interested in writing it up, the following PDF is a decent example to follow: Lavaka (talk) 01:04, 3 January 2008 (UTC)

Unknown alternative method

The last paragraph is intriguing, but there's no actual substance to it:

"In Quantum Mechanics there are several orthogonalization schemes with characteristics better suited for applications than the Gram-Schmidt one. The most important among them are the Symmetric and the Canonical orthonormalization (see Solivérez & Gagliano)."

I added a clarification tag. (talk) 23:18, 29 October 2012 (UTC)

modified Gram-Schmidt is possibly flawed

The normalisation procedure may introduce errors in finite precision calculations. Thus, in the modified Gram-Schmidt process, the higher-order vectors have to be projected onto the *normalised* vectors previously obtained, don't they?? Drgst (talk) 21:10, 29 July 2013 (UTC)
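One half of this is easy to settle: because the projection divides by <u, u>, projecting onto an unnormalized u gives the same result as projecting onto u/‖u‖, so normalization is not needed for mathematical correctness; the question above is purely about finite-precision behaviour. A quick check (my own sketch):

```python
import numpy as np

def proj(v, u):
    """proj_u(v) = (<u, v> / <u, u>) u; the division makes it scale-invariant in u."""
    return (u @ v) / (u @ u) * u

v = np.array([2.0, -1.0, 3.0])
u = np.array([1.0, 2.0, 2.0])

# Projecting onto u and onto the normalized u/||u|| agree to rounding error:
diff = proj(v, u) - proj(v, u / np.linalg.norm(u))
```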

Extension to polynomials?

When working with polynomials and an arbitrary weight function, there is a recursive Gram-Schmidt orthonormalization technique. More details are provided below: (talk) 05:51, 30 August 2013 (UTC)


Twice I removed ([1], [2]) the joke added by (talk · contribs) aka (talk · contribs · WHOIS). It was also, for obvious reasons, removed by Cluebot ([3]). I think we shouldn't insult our humourless readers with that kind of lame jokes. - DVdm (talk) 22:17, 30 October 2013 (UTC)

Not sure

Not sure what Weeping604 was thinking, but he's wrong. Both of those expressions are vectors, and so it's impossible for either to be self-adjoint. However, if you consider the matrix P = u u^H / (u^H u), then P is the projection matrix you multiply any vector v against to project it onto u, and it IS self-adjoint.

I don't even see how to convert Weeping604's second expression into a projection matrix times a vector.

There's a very good reason why the standard is to define Hermitian forms (the complex version of bilinear forms which encompass inner product and dot product) as conjugate linear in the first term and linear in the second term; it simplifies the notation. Because of how matrix multiplication is defined, you're able to denote the complex inner product as x^H y. If you define your Hermitian form as conjugate linear in the second term, then you have to represent your inner product as x^T conj(y). --Yoda of Borg (talk) 08:39, 5 November 2013 (UTC)
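For what it's worth, NumPy follows the same convention: `np.vdot` conjugates its first argument, matching <x, y> = x^H y, conjugate-linear in the first slot and linear in the second. A quick illustration (my own):

```python
import numpy as np

x = np.array([1 + 2j, 0 + 1j])
y = np.array([3 + 0j, 1 - 1j])

# <x, y> = x^H y: np.vdot conjugates the first argument
lhs = np.vdot(x, y)
rhs = np.conj(x) @ y          # explicit x^H y for comparison

# Conjugate linearity in the first term: <a*x, y> = conj(a) <x, y>
scaled = np.vdot(1j * x, y)   # equals -1j * <x, y>
```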

I agree that (<u, v>/<u, u>) u is self-adjoint (with <·, ·> linear in the second argument); the problem is that it reads (<v, u>/<u, u>) u in the article!

I could change it in the text, but the gif next to it also uses the *wrong* version; I don't know how to update it. JoghurtGF (talk) 09:50, 5 January 2017 (UTC)

Determinant formula

I have two complaints about this section!

First of all, the section defines some vectors ei and then never mentions them again. Frankly, I have no idea what's going on there.

Secondly, the section contains the text: "Note that the expression for uk is a "formal" determinant, i.e. the matrix contains both scalars and vectors; the meaning of this expression is defined to be the result of a cofactor expansion along the row of vectors."

I highly doubt that this explanation will be the least bit enlightening to anyone who didn't already know what was going on, and I also doubt that the link would be very enlightening without a bit of additional context. (For instance, the text should indicate which row the Laplace expansion is being taken over - the last one - and then indicate the formula as a linear combination of the v's with coefficients coming from the corresponding minor determinants.) 2602:30A:C04C:5F30:805:108B:A61A:2175 (talk) 23:37, 3 August 2014 (UTC)
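The expansion described above can be spelled out in code. A sketch (my own, under the reading that the last row of the formal determinant holds the vectors v_1, …, v_k and the rows above it hold the inner products <v_i, v_j> for i = 1, …, k−1):

```python
import numpy as np

def det_formula_u(V, k):
    """u_k via cofactor expansion of the formal determinant along the
    row of vectors: u_k = sum_j (-1)^{k+j} det(minor_j) v_j, where
    minor_j deletes column j from the (k-1)-by-k matrix of inner products."""
    G = np.array([[V[i] @ V[j] for j in range(k)] for i in range(k - 1)])
    u = np.zeros_like(V[0], dtype=float)
    for j in range(k):
        minor = np.delete(G, j, axis=1)     # drop column j
        sign = (-1) ** ((k - 1) + j)        # 0-based: last row k-1, column j
        u = u + sign * np.linalg.det(minor) * V[j]
    return u

V = [np.array([1.0, 2.0, 0.0]),
     np.array([0.0, 1.0, 1.0]),
     np.array([1.0, 0.0, 1.0])]

u3 = det_formula_u(V, 3)   # proportional to the Gram-Schmidt u_3
```

The result is orthogonal to v_1, …, v_{k−1} because <v_i, u_k> reproduces a determinant with a repeated row, which vanishes; that is the reasoning the article's "formal determinant" remark leaves implicit.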

Francesco Caravelli: I have been trying very hard to find the determinant formula in the literature. There is no reference in the wiki page. Very frustrating! Update: I have found a similar formula in Gantmacher: Theory of matrices (1959) Volume 1, Pages 256-258.

— Preceding unsigned comment added by (talk) 17:28, 16 March 2017 (UTC)