Talk:Tridiagonal matrix


Geometric meaning

Reading about the Lanczos algorithm, it was at first fairly surprising that the αs and βs form the tridiagonal decomposition of a matrix, A, such that A = VTV*, with T tridiagonal and V having orthonormal columns. I was surprised particularly because the algorithm is so simple yet produces something that in form is very close to an eigendecomposition.
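For reference, here is a minimal sketch of that iteration, assuming a real symmetric A (NumPy; the function name lanczos is mine, full reorthogonalization is added purely for numerical safety, and breakdown β = 0 is not handled):

import numpy as np

def lanczos(A, v0, m):
    # Build the alphas/betas and the orthonormal columns of V so that V.T @ A @ V = T.
    n = A.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    for j in range(m):
        V[:, j] = v
        w = A @ v
        alpha[j] = v @ w
        w -= alpha[j] * v                          # three-term recurrence ...
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # full reorthogonalization (numerical safety)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)            # breakdown (beta == 0) not handled here
            v = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return T, V

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)); A = A + A.T       # random symmetric test matrix
T, V = lanczos(A, rng.standard_normal(8), 8)
print(np.max(np.abs(V.T @ A @ V - T)))             # should be tiny: T = V^T A V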

Now, I have a good geometric/conceptual sense of what a diagonal matrix means, especially when all the entries are positive: it means that the matrix scales things along the coordinate axes. Similarly, if a matrix is diagonalizable, it means we can pick a coordinate system in which it is diagonal. What's the equivalent for a tridiagonal matrix? Obviously it means that each output degree of freedom is a linear combination of the matching input degree of freedom and the two "neighboring" degrees of freedom, with the first and last degrees of freedom having only one neighbor.
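Concretely (notation mine): writing the diagonal entries of T as d_i, the subdiagonal entries as a_i, and the superdiagonal entries as c_i, the action on a vector x is

(Tx)_i = a_i x_{i-1} + d_i x_i + c_i x_{i+1},

with the x_{i-1} term dropped for i = 1 and the x_{i+1} term dropped for i = n.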

Does this imply that, given a tridiagonal decomposition, we could do a full eigendecomposition by starting at the top or bottom corner, doing an eigendecomposition of the leading 2×2 block of T to find a 2×2 rotation matrix, U, that diagonalizes that block, then updating V accordingly, and then going down the diagonal doing the same? —Ben FrantzDale (talk) 11:13, 6 June 2011 (UTC)
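A minimal sketch of that sweep, assuming a real symmetric T (NumPy; the helper name sweep_2x2 and the use of np.linalg.eigh on each 2×2 block are my choices, not anything from the article):

import numpy as np

def sweep_2x2(T):
    # One top-to-bottom pass: diagonalize each adjacent 2x2 block with a rotation,
    # apply it as a similarity transform, and accumulate the rotations in Q.
    T = T.copy()
    n = T.shape[0]
    Q = np.eye(n)
    for i in range(n - 1):
        _, u = np.linalg.eigh(T[i:i + 2, i:i + 2])  # 2x2 orthogonal eigenvector matrix
        G = np.eye(n)
        G[i:i + 2, i:i + 2] = u
        T = G.T @ T @ G
        Q = Q @ G
    return T, Q

rng = np.random.default_rng(0)
d, e = rng.standard_normal(6), rng.standard_normal(5)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)      # random symmetric tridiagonal
T1, Q = sweep_2x2(T)
print(np.allclose(Q.T @ T @ Q, T1))                  # Q accumulates the similarity transform
print(np.linalg.norm(T1 - np.diag(np.diag(T1))))     # off-diagonal coupling left after one sweep

(A later rotation mixes columns already processed, so earlier off-diagonal entries can reappear; the printed norm shows whatever coupling remains after one pass.)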

Introduction with general form

The introduction should reference a general tridiagonal matrix instead of a specific example. One could just copy the often-referenced matrix from the following sections and delete it there. HerrHartmuth (talk) 08:08, 13 November 2019 (UTC)

Direct sum of 1×1 and 2×2 matrices

A tridiagonal matrix is usually not a direct sum of 1×1 and 2×2 matrices. The article is wrong. Svennik (talk) 13:32, 3 April 2024 (UTC)
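For a concrete counterexample (my example, reading "direct sum" literally as block-diagonal structure): the tridiagonal matrix

T = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix}

has no zero entry on its sub- or superdiagonal, so there is no way to split it into diagonal blocks of size 1×1 and 2×2.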