# Tridiagonal matrix algorithm

In numerical linear algebra, the tridiagonal matrix algorithm, also known as the Thomas algorithm (named after Llewellyn Thomas), is a simplified form of Gaussian elimination that can be used to solve tridiagonal systems of equations. A tridiagonal system for n unknowns may be written as

${\displaystyle a_{i}x_{i-1}+b_{i}x_{i}+c_{i}x_{i+1}=d_{i},\,\!}$

where ${\displaystyle a_{1}=0\,}$ and ${\displaystyle c_{n}=0\,}$.

${\displaystyle {\begin{bmatrix}{b_{1}}&{c_{1}}&{}&{}&{0}\\{a_{2}}&{b_{2}}&{c_{2}}&{}&{}\\{}&{a_{3}}&{b_{3}}&\ddots &{}\\{}&{}&\ddots &\ddots &{c_{n-1}}\\{0}&{}&{}&{a_{n}}&{b_{n}}\\\end{bmatrix}}{\begin{bmatrix}{x_{1}}\\{x_{2}}\\{x_{3}}\\\vdots \\{x_{n}}\\\end{bmatrix}}={\begin{bmatrix}{d_{1}}\\{d_{2}}\\{d_{3}}\\\vdots \\{d_{n}}\\\end{bmatrix}}.}$

For such systems, the solution can be obtained in ${\displaystyle O(n)}$ operations instead of the ${\displaystyle O(n^{3})}$ required by Gaussian elimination. A first sweep eliminates the ${\displaystyle a_{i}}$'s, and an (abbreviated) backward substitution then produces the solution. Such matrices commonly arise from the discretization of the 1D Poisson equation and from natural cubic spline interpolation.

Thomas' algorithm is not stable in general, but is so in several special cases, such as when the matrix is diagonally dominant (either by rows or columns) or symmetric positive definite;[1][2] for a more precise characterization of stability of Thomas' algorithm, see Higham Theorem 9.12.[3] If stability is required in the general case, Gaussian elimination with partial pivoting (GEPP) is recommended instead.[2]

## Method

The forward sweep consists of modifying the coefficients as follows, denoting the new coefficients with primes:

${\displaystyle c'_{i}={\begin{cases}{\begin{array}{lcl}{\cfrac {c_{i}}{b_{i}}}&;&i=1\\{\cfrac {c_{i}}{b_{i}-a_{i}c'_{i-1}}}&;&i=2,3,\dots ,n-1\\\end{array}}\end{cases}}\,}$

and

${\displaystyle d'_{i}={\begin{cases}{\begin{array}{lcl}{\cfrac {d_{i}}{b_{i}}}&;&i=1\\{\cfrac {d_{i}-a_{i}d'_{i-1}}{b_{i}-a_{i}c'_{i-1}}}&;&i=2,3,\dots ,n.\\\end{array}}\end{cases}}\,}$

The solution is then obtained by back substitution:

${\displaystyle x_{n}=d'_{n}\,}$
${\displaystyle x_{i}=d'_{i}-c'_{i}x_{i+1}\qquad ;\ i=n-1,n-2,\ldots ,1.}$
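The two sweeps above translate directly into code. A minimal Python sketch (the function name `thomas` and the array conventions are my own; `a[0]` and `c[-1]` stand in for the zero entries ${\displaystyle a_{1}}$ and ${\displaystyle c_{n}}$ and are never read):

```python
def thomas(a, b, c, d):
    """Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i] for x.

    All four sequences have length n; a[0] and c[-1] correspond to the
    zero coefficients a_1 and c_n and are never read.
    """
    n = len(b)
    cp = [0.0] * n  # c' from the forward sweep
    dp = [0.0] * n  # d' from the forward sweep
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    # Back substitution, from the last row upward.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: diagonal (2, 3, 2), off-diagonals of ones, right-hand side
# (3, 5, 3); the exact solution is (1, 1, 1).
x = thomas([0, 1, 1], [2, 3, 2], [1, 1, 0], [3, 5, 3])
```

In production code one would typically reuse the input arrays for `cp` and `dp` to save memory, or call a library routine such as SciPy's `solve_banded` instead of hand-rolling the sweep.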

## Derivation

The derivation of the tridiagonal matrix algorithm is a special case of Gaussian elimination.

Suppose that the unknowns are ${\displaystyle x_{1},\ldots ,x_{n}}$, and that the equations to be solved are:

${\displaystyle {\begin{aligned}b_{1}x_{1}+c_{1}x_{2}&=d_{1};&i&=1\\a_{i}x_{i-1}+b_{i}x_{i}+c_{i}x_{i+1}&=d_{i};&i&=2,\ldots ,n-1\\a_{n}x_{n-1}+b_{n}x_{n}&=d_{n};&i&=n.\end{aligned}}}$

Consider modifying the second (${\displaystyle i=2}$) equation with the first equation as follows:

${\displaystyle ({\mbox{equation 2}})\cdot b_{1}-({\mbox{equation 1}})\cdot a_{2}}$

which would give:

${\displaystyle (a_{2}x_{1}+b_{2}x_{2}+c_{2}x_{3})b_{1}-(b_{1}x_{1}+c_{1}x_{2})a_{2}=d_{2}b_{1}-d_{1}a_{2}\,}$
${\displaystyle (b_{2}b_{1}-c_{1}a_{2})x_{2}+c_{2}b_{1}x_{3}=d_{2}b_{1}-d_{1}a_{2}\,}$

where the second equation immediately above is a simplified version of the equation immediately preceding it. The effect is that ${\displaystyle x_{1}}$ has been eliminated from the second equation. Using a similar tactic with the modified second equation on the third equation yields:

${\displaystyle (a_{3}x_{2}+b_{3}x_{3}+c_{3}x_{4})(b_{2}b_{1}-c_{1}a_{2})-((b_{2}b_{1}-c_{1}a_{2})x_{2}+c_{2}b_{1}x_{3})a_{3}=d_{3}(b_{2}b_{1}-c_{1}a_{2})-(d_{2}b_{1}-d_{1}a_{2})a_{3}\,}$
${\displaystyle (b_{3}(b_{2}b_{1}-c_{1}a_{2})-c_{2}b_{1}a_{3})x_{3}+c_{3}(b_{2}b_{1}-c_{1}a_{2})x_{4}=d_{3}(b_{2}b_{1}-c_{1}a_{2})-(d_{2}b_{1}-d_{1}a_{2})a_{3}.\,}$

This time ${\displaystyle x_{2}}$ was eliminated. If this procedure is repeated up to the ${\displaystyle n^{th}}$ row, the (modified) ${\displaystyle n^{th}}$ equation will involve only one unknown, ${\displaystyle x_{n}}$. This may be solved for and then used to solve the ${\displaystyle (n-1)^{th}}$ equation, and so on, until all of the unknowns are found.

Clearly, the coefficients on the modified equations get more and more complicated if stated explicitly. By examining the procedure, the modified coefficients (notated with tildes) may instead be defined recursively:

${\displaystyle {\tilde {a}}_{i}=0\,}$
${\displaystyle {\tilde {b}}_{1}=b_{1}\,}$
${\displaystyle {\tilde {b}}_{i}=b_{i}{\tilde {b}}_{i-1}-{\tilde {c}}_{i-1}a_{i}\,}$
${\displaystyle {\tilde {c}}_{1}=c_{1}\,}$
${\displaystyle {\tilde {c}}_{i}=c_{i}{\tilde {b}}_{i-1}\,}$
${\displaystyle {\tilde {d}}_{1}=d_{1}\,}$
${\displaystyle {\tilde {d}}_{i}=d_{i}{\tilde {b}}_{i-1}-{\tilde {d}}_{i-1}a_{i}.\,}$

To further hasten the solution process, ${\displaystyle {\tilde {b}}_{i}}$ may be divided out (assuming there is no risk of division by zero); the newer modified coefficients, each notated with a prime, are:

${\displaystyle a'_{i}=0\,}$
${\displaystyle b'_{i}=1\,}$
${\displaystyle c'_{1}={\frac {c_{1}}{b_{1}}}\,}$
${\displaystyle c'_{i}={\frac {c_{i}}{b_{i}-c'_{i-1}a_{i}}}\,}$
${\displaystyle d'_{1}={\frac {d_{1}}{b_{1}}}\,}$
${\displaystyle d'_{i}={\frac {d_{i}-d'_{i-1}a_{i}}{b_{i}-c'_{i-1}a_{i}}}.\,}$
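The two parameterizations agree: dividing the tilde coefficients by ${\displaystyle {\tilde {b}}_{i}}$ recovers the primed ones, i.e. ${\displaystyle c'_{i}={\tilde {c}}_{i}/{\tilde {b}}_{i}}$ and ${\displaystyle d'_{i}={\tilde {d}}_{i}/{\tilde {b}}_{i}}$. A small numerical sanity check of this identity (the helper names and the example coefficients are illustrative, not from the source):

```python
from math import isclose

def tilde_coeffs(a, b, c, d):
    """Unnormalized (tilde) recursion from the derivation."""
    n = len(b)
    bt, ct, dt = [b[0]], [c[0]], [d[0]]
    for i in range(1, n):
        bt.append(b[i] * bt[i - 1] - ct[i - 1] * a[i])
        ct.append(c[i] * bt[i - 1])
        dt.append(d[i] * bt[i - 1] - dt[i - 1] * a[i])
    return bt, ct, dt

def prime_coeffs(a, b, c, d):
    """Normalized (primed) recursion."""
    n = len(b)
    cp, dp = [c[0] / b[0]], [d[0] / b[0]]
    for i in range(1, n):
        m = b[i] - cp[i - 1] * a[i]
        cp.append(c[i] / m)
        dp.append((d[i] - dp[i - 1] * a[i]) / m)
    return cp, dp

# A diagonally dominant example: primed values equal tilde values / bt[i].
a, b, c, d = [0, 1, 2, 1], [4, 5, 6, 5], [1, 2, 1, 0], [7, 8, 9, 6]
bt, ct, dt = tilde_coeffs(a, b, c, d)
cp, dp = prime_coeffs(a, b, c, d)
for i in range(4):
    assert isclose(cp[i], ct[i] / bt[i]) and isclose(dp[i], dt[i] / bt[i])
```

Note that the tilde quantities grow as products of the ${\displaystyle b_{i}}$'s and can overflow for large ${\displaystyle n}$, which is one practical reason the normalized (primed) form is preferred.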

This gives the following system with the same unknowns and coefficients defined in terms of the original ones above:

${\displaystyle {\begin{array}{lcl}x_{i}+c'_{i}x_{i+1}=d'_{i}\qquad &;&\ i=1,\ldots ,n-1\\x_{n}=d'_{n}\qquad &;&\ i=n.\\\end{array}}\,}$

The last equation involves only one unknown. Solving it in turn reduces the next last equation to one unknown, so that this backward substitution can be used to find all of the unknowns:

${\displaystyle x_{n}=d'_{n}\,}$
${\displaystyle x_{i}=d'_{i}-c'_{i}x_{i+1}\qquad ;\ i=n-1,n-2,\ldots ,1.}$

## Variants

In some situations, particularly those involving periodic boundary conditions, a slightly perturbed form of the tridiagonal system may need to be solved:

${\displaystyle {\begin{aligned}a_{1}x_{n}+b_{1}x_{1}+c_{1}x_{2}&=d_{1},\\a_{i}x_{i-1}+b_{i}x_{i}+c_{i}x_{i+1}&=d_{i},\quad \quad i=2,\ldots ,n-1\\a_{n}x_{n-1}+b_{n}x_{n}+c_{n}x_{1}&=d_{n}.\end{aligned}}}$

In this case, we can make use of the Sherman-Morrison formula to avoid the additional operations of Gaussian elimination and still use the Thomas algorithm. The method requires solving a modified non-cyclic version of the system for both the input and a sparse corrective vector, and then combining the solutions. This can be done efficiently if both solutions are computed at once, as the forward portion of the pure tridiagonal matrix algorithm can be shared.
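One common realization of this idea (after the approach popularized by Numerical Recipes; the splitting below with ${\displaystyle \gamma =-b_{1}}$, and all function names, are illustrative choices, not from the source) writes the cyclic matrix as ${\displaystyle A=T+uv^{T}}$ with ${\displaystyle u=(\gamma ,0,\ldots ,0,c_{n})^{T}}$ and ${\displaystyle v=(1,0,\ldots ,0,a_{1}/\gamma )^{T}}$, solves ${\displaystyle Ty=d}$ and ${\displaystyle Tz=u}$ with the plain Thomas algorithm, and combines them via Sherman–Morrison:

```python
def solve_tridiag(a, b, c, d):
    """Plain Thomas algorithm; a[0] and c[-1] are never read."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_cyclic(a, b, c, d):
    """Cyclic tridiagonal solve via the Sherman-Morrison formula.

    a[0] is the top-right corner entry (coefficient of x_n in row 1);
    c[-1] is the bottom-left corner (coefficient of x_1 in row n).
    """
    n = len(b)
    gamma = -b[0]  # any nonzero value works; -b[0] avoids cancellation
    # Perturb the diagonal so that A = T + u v^T with
    # u = (gamma, 0, ..., 0, c[-1]), v = (1, 0, ..., 0, a[0]/gamma).
    bb = b[:]
    bb[0] = b[0] - gamma
    bb[-1] = b[-1] - c[-1] * a[0] / gamma
    u = [0.0] * n
    u[0], u[-1] = gamma, c[-1]
    y = solve_tridiag(a, bb, c, d)  # T y = d
    z = solve_tridiag(a, bb, c, u)  # T z = u
    # Sherman-Morrison: x = y - (v.y / (1 + v.z)) z
    fact = (y[0] + a[0] * y[-1] / gamma) / (1.0 + z[0] + a[0] * z[-1] / gamma)
    return [y[i] - fact * z[i] for i in range(n)]
```

The two calls to `solve_tridiag` share the same matrix `T`, which is why the forward sweep can be computed once and reused when both right-hand sides are processed together.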

In other situations, the system of equations may be block tridiagonal (see block matrix), with smaller submatrices arranged as the individual elements in the above matrix system (e.g., the 2D Poisson problem). Simplified forms of Gaussian elimination have been developed for these situations.[4]

The textbook Numerical Mathematics by Quarteroni, Sacco, and Saleri lists a modified version of the algorithm that avoids some of the divisions (using multiplications instead), which is beneficial on some computer architectures.

## References

1. ^ Pradip Niyogi (2006). Introduction to Computational Fluid Dynamics. Pearson Education India. p. 76. ISBN 978-81-7758-764-7.
2. ^ a b Biswa Nath Datta (2010). Numerical Linear Algebra and Applications, Second Edition. SIAM. p. 162. ISBN 978-0-89871-765-5.
3. ^ Nicholas J. Higham (2002). Accuracy and Stability of Numerical Algorithms: Second Edition. SIAM. p. 175. ISBN 978-0-89871-802-7.
4. ^ Quarteroni, Alfio; Sacco, Riccardo; Saleri, Fausto (2007). "Section 3.8". Numerical Mathematics. Springer, New York. ISBN 978-3-540-34658-6.
• Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 2.4". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.