Reading about the Lanczos algorithm, I was at first fairly surprised that the αs and βs form the tridiagonal decomposition of a matrix, A, such that T = V*AV (equivalently A = VTV*, with V the matrix of Lanczos vectors). I was surprised particularly because the algorithm is so simple yet produces something that in form is very close to an eigendecomposition.
Now, I have a good geometric/conceptual sense of what a diagonal matrix means, especially when all the entries are positive: it means that the matrix scales things along the coordinate axes. Similarly, if a matrix is diagonalizable, it means we can pick a coordinate system in which it is diagonal. What's the equivalent for a tridiagonal matrix? Obviously it means that each output degree of freedom is linear in the matching input degree of freedom and the two "neighboring" degrees of freedom, with the first and last degree of freedom only having one neighbor.
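For concreteness, that neighbor-coupling reading can be checked numerically; here is a small sketch (the particular αs, βs, and test vector are made-up example values, not from the article):

```python
import numpy as np

# Symmetric tridiagonal T in the Lanczos notation: alphas on the
# diagonal, betas on the two off-diagonals.
alphas = np.array([2.0, 3.0, 4.0, 5.0])
betas = np.array([1.0, 1.0, 1.0])  # made-up example values
T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = T @ x

# Each output component mixes only the matching input component and
# its two neighbors (one neighbor at the ends):
assert y[1] == betas[0] * x[0] + alphas[1] * x[1] + betas[1] * x[2]
assert y[0] == alphas[0] * x[0] + betas[0] * x[1]
```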
Does this imply that, given a tridiagonal decomposition, we could do a full eigendecomposition by starting at the top or bottom corner, doing an eigendecomposition of the 2×2 block of T to find a 2×2 rotation matrix, U, that diagonalizes that block, updating V accordingly, and then going down the diagonal doing the same? —Ben FrantzDale (talk) 11:13, 6 June 2011 (UTC)
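Not presuming the answer, the proposed sweep can be tried directly; here is a sketch (NumPy; the 5×5 test matrix and the Jacobi-style rotation angle are my own illustration). Each step exactly diagonalizes the current 2×2 block, and the printed off-diagonal norm shows how close a single pass gets:

```python
import numpy as np

n = 5
# Made-up symmetric tridiagonal test matrix.
T = np.diag(np.full(n, 2.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

A = T.copy()
V = np.eye(n)
for i in range(n - 1):
    # Rotation angle that diagonalizes the 2x2 block [[a, b], [b, d]]
    # (the classic Jacobi-rotation formula).
    a, b, d = A[i, i], A[i, i + 1], A[i + 1, i + 1]
    theta = 0.5 * np.arctan2(2 * b, a - d)
    c, s = np.cos(theta), np.sin(theta)

    # Embed the 2x2 rotation U in an n-by-n Givens-style matrix G
    # and apply it as a similarity transform, accumulating V.
    G = np.eye(n)
    G[i, i], G[i, i + 1] = c, -s
    G[i + 1, i], G[i + 1, i + 1] = s, c
    A = G.T @ A @ G
    V = V @ G

# Measure what one pass leaves off the diagonal.
offdiag_norm = np.linalg.norm(A - np.diag(np.diag(A)))
print(offdiag_norm)
```

Each similarity step does zero out its own 2×2 block's off-diagonal entry, and the accumulated V stays orthogonal, so the eigenvalues are preserved throughout; whether one pass suffices is exactly what the printed norm measures.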