# Tridiagonal matrix

In linear algebra, a tridiagonal matrix is a matrix that has nonzero elements only on the main diagonal, the first diagonal below it (the subdiagonal), and the first diagonal above it (the superdiagonal).

For example, the following matrix is tridiagonal:

$\begin{pmatrix} 1 & 4 & 0 & 0 \\ 3 & 4 & 1 & 0 \\ 0 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \\ \end{pmatrix}.$

The determinant of a tridiagonal matrix is given by the continuant of its elements.[1]

An orthogonal transformation of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the Lanczos algorithm.
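As a concrete illustration, the sketch below runs the Lanczos iteration on a small dense symmetric matrix, producing the diagonal and off-diagonal of the tridiagonal form. The function name is ours, full reorthogonalization is added because the plain iteration loses orthogonality in floating point, and a production code would also handle breakdown when an off-diagonal entry becomes zero:

```python
import numpy as np

def lanczos_tridiagonalize(A, seed=0):
    """Reduce a real symmetric matrix A to tridiagonal form via the
    Lanczos iteration. Returns (alpha, beta, Q): the diagonal, the
    off-diagonal, and an orthonormal Q with Q.T @ A @ Q tridiagonal."""
    n = A.shape[0]
    Q = np.zeros((n, n))
    alpha = np.zeros(n)
    beta = np.zeros(n - 1)
    q = np.random.default_rng(seed).standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(n):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # full reorthogonalization against all previous Lanczos vectors
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < n - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    return alpha, beta, Q
```

Since the result is similar to the input, the eigenvalues of the tridiagonal matrix built from `alpha` and `beta` agree with those of the original matrix.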

## Properties

A tridiagonal matrix is a matrix that is both upper and lower Hessenberg.[2] In particular, a tridiagonal matrix is a direct sum of p 1-by-1 and q 2-by-2 matrices such that p + 2q = n, the dimension of the tridiagonal matrix. Although a general tridiagonal matrix is not necessarily symmetric or Hermitian, many of those that arise when solving linear algebra problems have one of these properties. Furthermore, if a real tridiagonal matrix A satisfies $a_{k,k+1} a_{k+1,k} > 0$ for all k, so that the signs of its entries are symmetric, then it is similar to a Hermitian matrix by a diagonal change-of-basis matrix; hence its eigenvalues are real. If the strict inequality is replaced by $a_{k,k+1} a_{k+1,k} \geq 0$, then by continuity the eigenvalues are still guaranteed to be real, but the matrix need no longer be similar to a Hermitian matrix.[3]
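The diagonal similarity can be made explicit: taking $d_1 = 1$ and $d_{k+1} = d_k \sqrt{a_{k,k+1}/a_{k+1,k}}$, the matrix $D T D^{-1}$ with $D = \operatorname{diag}(d_1, \ldots, d_n)$ is symmetric. A minimal sketch under that construction (the function name is ours):

```python
import numpy as np

def symmetrize_tridiagonal(T):
    """For a real tridiagonal T with T[k, k+1] * T[k+1, k] > 0 for all k,
    return D @ T @ D^{-1} for the diagonal D that makes it symmetric,
    demonstrating that the eigenvalues of T are real."""
    n = T.shape[0]
    d = np.ones(n)
    for k in range(n - 1):
        # ratio chosen so the two off-diagonal entries become equal
        d[k + 1] = d[k] * np.sqrt(T[k, k + 1] / T[k + 1, k])
    return np.diag(d) @ T @ np.diag(1.0 / d)
```

Applied to the 4 × 4 example above (whose off-diagonal products 4·3, 1·2, and 4·1 are all positive), the result is symmetric with the same eigenvalues.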

The set of all n × n tridiagonal matrices forms a $(3n - 2)$-dimensional vector space.

Many linear algebra algorithms require significantly less computational effort when applied to diagonal matrices, and this improvement often carries over to tridiagonal matrices as well.

### Determinant

The determinant of a tridiagonal matrix A of order n can be computed from a three-term recurrence relation.[4] Write $f_1 = |a_1| = a_1$ and

$f_n = \begin{vmatrix} a_1 & b_1 \\ c_1 & a_2 & b_2 \\ & c_2 & \ddots & \ddots \\ & & \ddots & \ddots & b_{n-1} \\ & & & c_{n-1} & a_n \end{vmatrix}.$

The sequence $(f_i)$ is called the continuant and satisfies the recurrence relation

$f_n = a_n f_{n-1} - c_{n-1}b_{n-1}f_{n-2}$

with initial values $f_0 = 1$ and $f_{-1} = 0$. The cost of computing the determinant of a tridiagonal matrix using this formula is linear in n, while the cost is cubic for a general matrix.
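The recurrence translates directly into an O(n) procedure. A minimal sketch (the function name and array convention are ours: `a` holds the diagonal, `b` the superdiagonal, `c` the subdiagonal):

```python
def tridiag_det(a, b, c):
    """Determinant of the tridiagonal matrix with diagonal a (length n),
    superdiagonal b and subdiagonal c (length n - 1), via the continuant
    recurrence f_k = a_k f_{k-1} - c_{k-1} b_{k-1} f_{k-2} in O(n) time."""
    f_prev, f = 0.0, 1.0  # f_{-1} = 0, f_0 = 1
    for k in range(len(a)):
        f_prev, f = f, a[k] * f - (c[k - 1] * b[k - 1] * f_prev if k > 0 else 0.0)
    return f
```

For the 4 × 4 example matrix shown earlier (`a = [1, 4, 3, 3]`, `b = [4, 1, 4]`, `c = [3, 2, 1]`) the recurrence gives 1, −8, −26, −46, so the determinant is −46.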

### Inversion

The inverse of a non-singular tridiagonal matrix T

$T = \begin{pmatrix} a_1 & b_1 \\ c_1 & a_2 & b_2 \\ & c_2 & \ddots & \ddots \\ & & \ddots & \ddots & b_{n-1} \\ & & & c_{n-1} & a_n \end{pmatrix}$

is given by

$(T^{-1})_{ij} = \begin{cases} (-1)^{i+j}b_i \cdots b_{j-1} \theta_{i-1} \phi_{j+1}/\theta_n & \text{ if } i \leq j\\ (-1)^{i+j}c_j \cdots c_{i-1} \theta_{j-1} \phi_{i+1}/\theta_n & \text{ if } i > j\\ \end{cases}$

where the $\theta_i$ satisfy the recurrence relation

$\theta_i = a_i \theta_{i-1} - b_{i-1}c_{i-1}\theta_{i-2} \quad \text{ for } i=2,3,\ldots,n$

with initial conditions $\theta_0 = 1$, $\theta_1 = a_1$, and the $\phi_i$ satisfy

$\phi_i = a_i \phi_{i+1} - b_i c_i \phi_{i+2} \quad \text{ for } i=n-1,\ldots,1$

with initial conditions $\phi_{n+1} = 1$ and $\phi_n = a_n$.[5][6]
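The two recurrences and the entrywise formula can be coded directly; note that $\theta_n$ is the determinant of T, so it must be nonzero. A sketch (the function name is ours, and building the full inverse this way costs $O(n^2)$, which is unavoidable since the inverse of a tridiagonal matrix is generally dense):

```python
import numpy as np

def tridiag_inverse(a, b, c):
    """Inverse of the nonsingular tridiagonal matrix with diagonal a
    (length n), superdiagonal b and subdiagonal c (length n - 1),
    via the theta/phi recurrences."""
    n = len(a)
    theta = np.zeros(n + 1)
    theta[0], theta[1] = 1.0, a[0]
    for i in range(2, n + 1):
        theta[i] = a[i - 1] * theta[i - 1] - b[i - 2] * c[i - 2] * theta[i - 2]
    phi = np.zeros(n + 2)
    phi[n + 1], phi[n] = 1.0, a[n - 1]
    for i in range(n - 1, 0, -1):
        phi[i] = a[i - 1] * phi[i + 1] - b[i - 1] * c[i - 1] * phi[i + 2]
    Tinv = np.empty((n, n))
    for i in range(1, n + 1):        # 1-based indices, as in the formula
        for j in range(1, n + 1):
            if i <= j:
                Tinv[i - 1, j - 1] = ((-1) ** (i + j) * np.prod(b[i - 1:j - 1])
                                      * theta[i - 1] * phi[j + 1] / theta[n])
            else:
                Tinv[i - 1, j - 1] = ((-1) ** (i + j) * np.prod(c[j - 1:i - 1])
                                      * theta[j - 1] * phi[i + 1] / theta[n])
    return Tinv
```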

Closed form solutions can be computed for special cases such as symmetric matrices with all off-diagonal elements equal[7] or Toeplitz matrices[8] and for the general case as well.[9][10]

### Solution of linear system

A system of equations $Ax = b$ for $b \in \mathbb{R}^n$ can be solved by an efficient form of Gaussian elimination when A is tridiagonal, called the tridiagonal matrix algorithm (Thomas algorithm), requiring only $O(n)$ operations.[11]
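A sketch of the algorithm, using the same band convention as above (the function name is ours; no pivoting is performed, so this is reliable for diagonally dominant or similarly well-behaved systems but not for arbitrary tridiagonal matrices):

```python
def thomas_solve(a, b, c, d):
    """Solve T x = d for a tridiagonal T (diagonal a, superdiagonal b,
    subdiagonal c) in O(n) via forward elimination and back
    substitution, without pivoting. Assumes n >= 2."""
    n = len(a)
    cp = [0.0] * (n - 1)          # modified superdiagonal
    dp = [0.0] * n                # modified right-hand side
    cp[0] = b[0] / a[0]
    dp[0] = d[0] / a[0]
    for i in range(1, n):
        m = a[i] - c[i - 1] * cp[i - 1]   # pivot after elimination
        if i < n - 1:
            cp[i] = b[i] / m
        dp[i] = (d[i] - c[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```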

### Eigenvalues

When a tridiagonal matrix is also Toeplitz, with diagonal entries a, superdiagonal entries b, and subdiagonal entries c, there is a simple closed-form expression for its eigenvalues, namely $a + 2 \sqrt{bc} \, \cos(k \pi / (n+1))$ for $k = 1, \ldots, n$.[12][13]
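This closed form can be checked numerically against a general eigenvalue solver; the values of a, b, c, and n below are arbitrary illustration choices with $bc > 0$, so the spectrum is real:

```python
import numpy as np

# Closed-form eigenvalues of a tridiagonal Toeplitz matrix: diagonal a,
# superdiagonal b, subdiagonal c (arbitrary example values, b*c > 0).
n, a, b, c = 6, 2.0, 1.0, 3.0
T = (np.diag(np.full(n, a))
     + np.diag(np.full(n - 1, b), 1)
     + np.diag(np.full(n - 1, c), -1))
closed_form = np.sort(a + 2 * np.sqrt(b * c)
                      * np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))
numeric = np.sort(np.linalg.eigvals(T).real)
```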

## Computer programming

A transformation that reduces a general matrix to Hessenberg form will reduce a Hermitian matrix to tridiagonal form. So, many eigenvalue algorithms, when applied to a Hermitian matrix, reduce the input Hermitian matrix to tridiagonal form as a first step.

A tridiagonal matrix can also be stored more efficiently than a general matrix by using a special storage scheme. For instance, the LAPACK Fortran package stores an unsymmetric tridiagonal matrix of order n in three one-dimensional arrays, one of length n containing the diagonal elements, and two of length n − 1 containing the subdiagonal and superdiagonal elements.
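The storage scheme can be illustrated with a matrix-vector product computed directly from the three arrays; this is a sketch of the layout described above, not LAPACK's actual interface, and the function name is ours:

```python
import numpy as np

# Compact storage for an unsymmetric tridiagonal matrix of order n:
# three one-dimensional arrays -- d (length n) for the diagonal, dl and
# du (length n - 1) for the sub- and superdiagonals -- use O(n) memory
# in place of the O(n^2) dense layout.
def tridiag_matvec(dl, d, du, x):
    """Compute T @ x directly from the three-array representation."""
    y = d * x                 # diagonal contribution
    y[:-1] += du * x[1:]      # superdiagonal contribution
    y[1:] += dl * x[:-1]      # subdiagonal contribution
    return y
```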

## Notes

1. ^ Thomas Muir (1960). A treatise on the theory of determinants. Dover Publications. pp. 516–525.
2. ^ Horn, Roger A.; Johnson, Charles R. (1985). Matrix Analysis. Cambridge University Press. p. 28. ISBN 0521386322.
3. ^ Horn & Johnson, page 174
4. ^ El-Mikkawy, M. E. A. (2004). "On the inverse of a general tridiagonal matrix". Applied Mathematics and Computation 150 (3): 669–679. doi:10.1016/S0096-3003(03)00298-4.
5. ^ Da Fonseca, C. M. (2007). "On the eigenvalues of some tridiagonal matrices". Journal of Computational and Applied Mathematics 200: 283–286. doi:10.1016/j.cam.2005.08.047.
6. ^ Usmani, R. A. (1994). "Inversion of a tridiagonal Jacobi matrix". Linear Algebra and its Applications 212–213: 413–414. doi:10.1016/0024-3795(94)90414-6.
7. ^ Hu, G. Y.; O'Connell, R. F. (1996). "Analytical inversion of symmetric tridiagonal matrices". Journal of Physics A: Mathematical and General 29 (7): 1511. doi:10.1088/0305-4470/29/7/020.
8. ^ Huang, Y.; McColl, W. F. (1997). "Analytical inversion of general tridiagonal matrices". Journal of Physics A: Mathematical and General 30 (22): 7919. doi:10.1088/0305-4470/30/22/026.
9. ^ Mallik, R. K. (2001). "The inverse of a tridiagonal matrix". Linear Algebra and its Applications 325: 109–139. doi:10.1016/S0024-3795(00)00262-7.
10. ^ Kılıç, E. (2008). "Explicit formula for the inverse of a tridiagonal matrix by backward continued fractions". Applied Mathematics and Computation 197: 345–357. doi:10.1016/j.amc.2007.07.046.
11. ^ Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). The Johns Hopkins University Press. ISBN 0-8018-5414-8.
12. ^ Noschese, S.; Pasquini, L.; Reichel, L. (2013). "Tridiagonal Toeplitz matrices: Properties and novel applications". Numerical Linear Algebra with Applications 20 (2): 302. doi:10.1002/nla.1811.
13. ^ This can also be written as $a - 2 \sqrt{bc} \, \cos(k \pi / {(n+1)})$ because $\cos(x) = -\cos(\pi-x)$, as is done in: Kulkarni, D.; Schmidt, D.; Tsui, S. K. (1999). "Eigenvalues of tridiagonal pseudo-Toeplitz matrices". Linear Algebra and its Applications 297: 63. doi:10.1016/S0024-3795(99)00114-7.