Talk:Rayleigh quotient iteration

From Wikipedia, the free encyclopedia
WikiProject Mathematics (Rated Start-class, Low-priority)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Field: Applied mathematics

This entry is utterly useless without any equations/proofs. I second this.

There is also an error in that the provided matrix example (as of 10/19/2009, 4:22 PM PST) is not actually Hermitian (real symmetric, in this case) —Preceding unsigned comment added by (talk) 23:24, 19 October 2009 (UTC)

The algorithm is wrong. When mu comes near an eigenvalue, the matrix A - mu*I becomes nearly singular, that is, badly conditioned. Hence, computing the inverse of A - mu*I can lead to an inaccurate computation. Moreover, the matrix A - mu*I is inverted twice instead of once. This algorithm should be deeply modified before being published... Finally, the example is wrong, since the matrix is not symmetric! —Preceding unsigned comment added by (talk) 08:36, 2 March 2010 (UTC)

The method is sound. Once you get too close to the eigenvalue, any method you're using to invert the linear system will throw a fit, but in practical implementations with matrices of size thousands by thousands this generally doesn't occur until you're within 4 or 5 significant figures. If you manage to pick a mu which gives you a singular linear system then you've already got your answer. The method is such that if your guess is off by a significant amount then it will hopefully give you a better approximation. How close the algorithm can get to the true eigenvalue is determined by the precision of your mathematical code and the CPU's number handling. Furthermore, the method doesn't have to be for symmetric matrices; it just happens that people have proven convergence rates for the special types of matrices often encountered in large eigenvalue problems, such as Hermitian ones. There's no need to be able to order the Rayleigh quotients, only to measure the convergence of the series, and that is done using standard absolute values on . AlphaNumeric (talk) 12:01, 29 July 2012 (UTC)
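To make the discussion above concrete, here is a minimal sketch of Rayleigh quotient iteration in Python/NumPy (our own toy example, not the article's Octave code; the matrix, starting shift, and names are our choices). It shows the method converging from a crude initial shift on a small symmetric matrix, with the near-singular solve handled directly.

```python
import numpy as np

def rayleigh_quotient_iteration(A, mu, x, steps=10):
    """Minimal RQI sketch: solve the shifted system, normalize, update mu."""
    x = x / np.linalg.norm(x)
    for _ in range(steps):
        try:
            y = np.linalg.solve(A - mu * np.eye(len(A)), x)
        except np.linalg.LinAlgError:
            break  # mu hit an eigenvalue exactly: x is already the answer
        x = y / np.linalg.norm(y)
        mu = x @ A @ x  # Rayleigh quotient of the new iterate
    return mu, x

# Symmetric test matrix and a deliberately crude initial guess.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
mu, x = rayleigh_quotient_iteration(A, mu=3.5, x=np.ones(3))
print(mu)                                # a converged eigenvalue of A
print(np.linalg.norm(A @ x - mu * x))    # residual near machine precision
```

Note the try/except: if the shift lands exactly on an eigenvalue, the solver reports a singular system, which (as the comment above says) means the answer is already in hand.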

The Rayleigh iteration converges quadratically for general diagonalizable matrices and cubically for normal matrices. So the example should still show quadratic convergence. As far as I know, it is not really a problem if mu is close to an eigenvalue, since then only error components close to the eigenspace will blow up, and this is acceptable. But I guess that proving that a numerical algorithm solves the system A-mu I reliably would be an advanced topic. —Preceding unsigned comment added by (talk) 21:57, 13 May 2010 (UTC)
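The point that a shift very close to an eigenvalue is not really a problem can be checked numerically. The sketch below (our own example, using an assumed 3x3 symmetric matrix) places the shift within about 1e-13 of an eigenvalue, so A - mu*I is terribly conditioned, yet the normalized solution of the shifted system is still an excellent eigenvector, because the error is amplified almost entirely along the desired eigenspace.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
lam = np.linalg.eigvalsh(A)[0]           # smallest eigenvalue (to rounding)
mu = lam + 1e-13                         # shift makes A - mu*I nearly singular
y = np.linalg.solve(A - mu * np.eye(3), np.ones(3))
v = y / np.linalg.norm(y)                # normalized solution of shifted system
print(np.linalg.norm(A @ v - lam * v))   # tiny eigenpair residual regardless
```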

Determining which eigenvalue the method finds

I think the article needs some comment about how reliable the algorithm is at finding a specific eigenvalue. The example in the article gives the impression that you can pick a ridiculously bad guess for the eigenvalue you're after and the Rayleigh method will still converge very quickly. In reality, i.e. in realistic problems where eigenvalues are wanted, this isn't certain to occur. For example, suppose we have a Hermitian matrix with positive eigenvalues. The article gives the impression that if you pick a mu which is less than 0 you'll end up converging to the eigenspace corresponding to the first eigenvalue (for simplicity assume no degeneracy), since once you invert, that eigenvalue's eigenspace will be dominant. But the Rayleigh update can make mu jump to being larger than the first eigenvalue, and if you have a very close spectrum this might mean it jumps to the other side of (say) the second eigenvalue. Once that occurs all bets are off as to whether you'll get the first eigenvalue out, since it is no longer associated to the dominant eigenspace. If you don't update mu but just do an inverse power method then the procedure will converge to the eigenvalue you were after, but obviously much more slowly. That seems to be the price you pay: either you get quick convergence to some eigenvalue, or you get slower convergence to the one you were initially closest to. AlphaNumeric (talk) 11:54, 29 July 2012 (UTC)
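The behaviour described above is easy to reproduce. The sketch below is our own toy example (a diagonal matrix with close eigenvalues 1 and 1.05 and a distant one at 10, and an arbitrary starting vector): starting both methods from the shift mu = 0.9, fixed-shift inverse iteration converges to 1.0, the eigenvalue nearest the shift, while the Rayleigh updates jump past 1.0 and settle on 1.05.

```python
import numpy as np

def rqi(A, mu, x, steps=20):
    """Rayleigh quotient iteration: the shift is updated every step."""
    x = x / np.linalg.norm(x)
    for _ in range(steps):
        try:
            y = np.linalg.solve(A - mu * np.eye(len(A)), x)
        except np.linalg.LinAlgError:
            break  # shift landed exactly on an eigenvalue
        x = y / np.linalg.norm(y)
        mu = x @ A @ x
    return mu

def fixed_shift(A, mu, x, steps=60):
    """Inverse power iteration: the initial shift is kept fixed."""
    x = x / np.linalg.norm(x)
    for _ in range(steps):
        y = np.linalg.solve(A - mu * np.eye(len(A)), x)
        x = y / np.linalg.norm(y)
    return x @ A @ x  # Rayleigh quotient of the converged vector

A = np.diag([1.0, 1.05, 10.0])   # two close eigenvalues, one far away
x0 = np.array([1.0, 2.0, 1.0])   # generic starting vector
print(fixed_shift(A, 0.9, x0))   # 1.0  -- the eigenvalue nearest the shift
print(rqi(A, 0.9, x0))           # 1.05 -- the Rayleigh updates jump past 1.0
```

The fixed shift pays for its reliability with linear convergence; the Rayleigh updates converge far faster but to whichever eigenvalue the shifts end up closest to.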


I'm confused as to how the Octave code works without multiplying by A on every iteration (how can you compute the Rayleigh quotient otherwise?). I have looked at a variety of sources and could not find any using this method. Can anyone explain?

Thanks. InverseHypercube (talk) 10:28, 15 December 2014 (UTC)

lambda = y'*x is the Rayleigh quotient of x for the matrix (A - mu*I)^(-1). If x is a sufficiently good approximation of an eigenvector for an eigenvalue lambda_i of A, then (A - mu*I)^(-1) has the eigenvalue 1/(lambda_i - mu), so the lambda in the Octave code approximates 1/(lambda_i - mu). Taking the reciprocal and adding mu yields a new approximation of the eigenvalue. Maybe I should change the code to compute the real Rayleigh quotient, even though it involves an additional matrix-vector product? Sboerm (talk) 14:33, 15 December 2014 (UTC)
Thanks for the clarification, I assumed something like this was happening. I think it would be best to compute the real Rayleigh quotient for pedagogical purposes, or at least add a note. Do we know that using this approximation yields the same convergence properties as using the Rayleigh quotient? InverseHypercube (talk) 20:19, 15 December 2014 (UTC)
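A quick numerical check of the identity discussed above (our own small example; `cheap` and `rayleigh` are illustrative names, not the article's code): with x a unit vector and y = (A - mu*I)^(-1) x, the inner product x'*y is the Rayleigh quotient of x for (A - mu*I)^(-1), so mu + 1/(x'*y) estimates the eigenvalue of A without a further product with A.

```python
import numpy as np

# Compare the "cheap" eigenvalue estimate recovered from the shifted solve
# with the genuine Rayleigh quotient x'Ax (which costs an extra matvec).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])              # eigenvalues (5 +/- sqrt(5))/2
mu = 3.6                                # guess near (5 + sqrt(5))/2 ~ 3.618
x = np.array([1.0, 1.5])
x /= np.linalg.norm(x)                  # rough eigenvector guess, unit norm
y = np.linalg.solve(A - mu * np.eye(2), x)
cheap = mu + 1.0 / (x @ y)              # reuses the solve, no extra A product
rayleigh = x @ A @ x                    # the real Rayleigh quotient
print(cheap, rayleigh)                  # both approximate 3.618...
```

Both estimates agree to a few decimal places here, which supports the view that the approximation does not change the practical behaviour of the iteration.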

the example is probably wrong

As far as I can see, the example's 3 eigenvectors (v1, v2, v3) are wrong, since only the 2nd component changes, while they have to be linearly independent (in order to span R^3). Rapus95 (talk) 19:29, 14 June 2016 (UTC)

Hopefully it's corrected now. PeR (talk) 07:22, 19 November 2016 (UTC)