# Clenshaw algorithm

In numerical analysis, the Clenshaw algorithm[1] is a recursive method to evaluate a linear combination of Chebyshev polynomials. It is a generalization of Horner's method for evaluating a linear combination of monomials.

It generalizes to more than just Chebyshev polynomials; it applies to any class of functions that can be defined by a three-term recurrence relation.[2]

## Clenshaw algorithm

Suppose that $\phi_k,\; k=0, 1, \ldots$ is a sequence of functions that satisfy the linear recurrence relation

$\phi_{k+1}(x) = \alpha_k(x)\,\phi_k(x) + \beta_k(x)\,\phi_{k-1}(x),$

where the coefficients $\alpha_k(x)$ and $\beta_k(x)$ are known in advance. In the most common applications, $\alpha_k(x)$ does not depend on $k$, and $\beta_k$ is a constant that depends on neither $x$ nor $k$.

Our goal is to evaluate a weighted sum of these functions

$S(x) = \sum_{k=0}^n a_k \phi_k(x).$

Given the coefficients $a_0, \ldots, a_n$, compute the values $b_k(x)$ by the "reverse" recurrence formula:

$b_{n+1}(x) = b_{n+2}(x) = 0,$
$b_{k}(x) = a_k + \alpha_k(x)\,b_{k+1}(x) + \beta_{k+1}(x)\,b_{k+2}(x).$

The linear combination of the $\phi_k$ satisfies:

$S(x) = \phi_0(x)a_0 + \phi_1(x)b_1(x) + \beta_1(x)\phi_0(x)b_2(x).$
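As a concrete illustration, the reverse recurrence and final combination can be sketched in Python (the function name and the signature of the `alpha`/`beta` callables are ours, not standard):

```python
def clenshaw(x, a, alpha, beta, phi0, phi1):
    """Evaluate S(x) = sum_k a[k] * phi_k(x) by the reverse recurrence.

    alpha(k, x) and beta(k, x) supply the recurrence coefficients;
    phi0 and phi1 are the values phi_0(x) and phi_1(x).
    """
    n = len(a) - 1
    b_k1 = b_k2 = 0.0              # b_{n+1} = b_{n+2} = 0
    for k in range(n, 0, -1):      # k = n, n-1, ..., 1
        b_k = a[k] + alpha(k, x) * b_k1 + beta(k + 1, x) * b_k2
        b_k1, b_k2 = b_k, b_k1
    # S(x) = phi_0(x) a_0 + phi_1(x) b_1(x) + beta_1(x) phi_0(x) b_2(x)
    return phi0 * a[0] + phi1 * b_k1 + beta(1, x) * phi0 * b_k2
```

For example, with $\alpha_k(x) = 2x$, $\beta_k = -1$, $\phi_0 = 1$, $\phi_1 = x$ this evaluates a Chebyshev series.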

### Horner as a special case of Clenshaw

A particularly simple case occurs when evaluating a polynomial of the form

$S(x) = \sum_{k=0}^n a_k x^k$.

The functions are simply

$\phi_0(x) = 1,$
$\phi_k(x) = x^k = x\phi_{k-1}(x)$

and are produced by the recurrence coefficients $\alpha(x) = x$ and $\beta = 0$.

In this case, the recurrence formula to compute the sum is

$b_k(x) = a_k + x b_{k+1}(x)$

and, in this case, the sum is simply

$S(x) = a_0 + x b_1(x) = b_0(x)$,

which is exactly the usual Horner's method.
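In code, the degenerate $\beta = 0$ case needs only one running value rather than two; a minimal sketch:

```python
def horner(x, a):
    # With alpha(x) = x and beta = 0, the Clenshaw recurrence
    # b_k = a_k + x * b_{k+1} collapses to Horner's rule, and S(x) = b_0.
    b = 0.0
    for a_k in reversed(a):   # k = n, n-1, ..., 0
        b = a_k + x * b
    return b
```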

### Special case for Chebyshev series

Consider a truncated Chebyshev series

$p_n(x) = a_0 + a_1T_1(x) + a_2T_2(x) + \cdots + a_nT_n(x).$

The coefficients in the recursion relation for the Chebyshev polynomials are

$\alpha(x) = 2x, \quad \beta = -1,$

with the initial conditions

$T_0(x) = 1, \quad T_1(x) = x.$

Thus, the recurrence is

$b_k(x) = a_k + 2xb_{k+1}(x) - b_{k+2}(x)$

and the final sum is

$p_n(x) = a_0 + xb_1(x) - b_2(x).$

One way to evaluate this is to continue the recurrence one more step, and compute

$b_0(x) = 2a_0 + 2xb_1(x) - b_2(x),$

(note the doubled $a_0$ coefficient) followed by

$p_n(x) = b_0(x)-x b_1(x)-a_0=\frac{1}{2}\left[b_0(x) - b_2(x)\right].$
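The Chebyshev special case can be sketched in Python as follows (the function name is ours), using the form $p_n(x) = a_0 + x b_1(x) - b_2(x)$ directly:

```python
def cheb_clenshaw(x, a):
    """Evaluate a[0] + a[1]*T_1(x) + ... + a[n]*T_n(x) by Clenshaw's recurrence."""
    b_k1 = b_k2 = 0.0                         # b_{n+1} = b_{n+2} = 0
    for a_k in reversed(a[1:]):               # k = n, n-1, ..., 1
        b_k1, b_k2 = a_k + 2.0 * x * b_k1 - b_k2, b_k1
    return a[0] + x * b_k1 - b_k2             # p_n(x) = a_0 + x b_1 - b_2
```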

### Geodetic applications

Clenshaw's algorithm is extensively used in geodetic applications where it is usually referred to as Clenshaw summation.[4] A simple application is summing the trigonometric series to compute the meridian arc. These have the form

$m(\theta) = C_0\,\theta + C_1\sin \theta + C_2\sin 2\theta + \cdots + C_n\sin n\theta.$

Leaving off the initial $C_0\,\theta$ term, the remainder is a summation of the appropriate form. There is no leading term because $\phi_0(\theta) = \sin 0\theta = \sin 0 = 0$. The recurrence relation for $\sin k\theta$ is

$\sin (k+1)\theta = 2 \cos\theta \sin k\theta - \sin (k-1)\theta,$

making the coefficients in the recursion relation

$\alpha_k(\theta) = 2\cos\theta, \quad \beta_k = -1,$

and the evaluation of the series is given by

$b_{n+1}(\theta) = b_{n+2}(\theta) = 0,$
$b_k(\theta) = C_k + 2\cos \theta \,b_{k+1}(\theta) - b_{k+2}(\theta),\quad\mathrm{for\ } n\ge k \ge 1.$

The final step is made particularly simple: because $\phi_0(\theta) = \sin 0 = 0$ and $\phi_1(\theta) = \sin\theta$, the sum reduces to $b_1(\theta)\sin\theta$; the $C_0\,\theta$ term is added separately:

$m(\theta) = C_0\,\theta + b_1(\theta)\sin \theta.$

Note that the algorithm requires only the evaluation of two trigonometric quantities $\cos \theta$ and $\sin \theta$.
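A Clenshaw summation of such a series might look like this in Python (the function name is ours); note the single evaluations of $\cos\theta$ and $\sin\theta$:

```python
import math

def meridian_arc(theta, C):
    """Sum C[0]*theta + C[1]*sin(theta) + ... + C[n]*sin(n*theta) by Clenshaw summation."""
    cos_t, sin_t = math.cos(theta), math.sin(theta)  # the only trig evaluations needed
    b_k1 = b_k2 = 0.0                                # b_{n+1} = b_{n+2} = 0
    for C_k in reversed(C[1:]):                      # k = n, n-1, ..., 1
        b_k1, b_k2 = C_k + 2.0 * cos_t * b_k1 - b_k2, b_k1
    return C[0] * theta + b_k1 * sin_t               # m = C0*theta + b_1(theta) sin(theta)
```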

Sometimes it is necessary to compute the difference of two meridian arcs in a way that maintains high relative accuracy. This is accomplished by using trigonometric identities to write

$m(\theta_1)-m(\theta_2) = \sum_{k=1}^n 2 C_k \sin\bigl({\textstyle\frac12}k(\theta_1-\theta_2)\bigr) \cos\bigl({\textstyle\frac12}k(\theta_1+\theta_2)\bigr).$

Clenshaw summation can be applied in this case[5] provided we simultaneously compute $m(\theta_1)+m(\theta_2)$ and perform a matrix summation,

$\mathsf M(\theta_1,\theta_2) = \begin{bmatrix} (m(\theta_1) + m(\theta_2)) / 2\\ (m(\theta_1) - m(\theta_2)) / (\theta_1 - \theta_2) \end{bmatrix} = C_0 \begin{bmatrix}\mu\\1\end{bmatrix} + \sum_{k=1}^n C_k \mathsf F_k(\theta_1,\theta_2),$

where

$\mathsf F_k(\theta_1,\theta_2)= \begin{bmatrix} \cos k \delta \sin k \mu\\ \displaystyle\frac{\sin k \delta}\delta \cos k \mu \end{bmatrix},$

$\delta = \textstyle\frac12(\theta_1-\theta_2)$, and $\mu = \textstyle\frac12(\theta_1+\theta_2)$. The first element of $\mathsf M(\theta_1,\theta_2)$ is the average value of $m$ and the second element is the average slope. $\mathsf F_k(\theta_1,\theta_2)$ satisfies the recurrence relation

$\mathsf F_{k+1}(\theta_1,\theta_2) = \mathsf A(\theta_1,\theta_2) \mathsf F_k(\theta_1,\theta_2) - \mathsf F_{k-1}(\theta_1,\theta_2),$

where

$\mathsf A(\theta_1,\theta_2) = 2\begin{bmatrix} \cos \delta \cos \mu & -\delta\sin \delta \sin \mu \\ - \displaystyle\frac{\sin \delta}\delta \sin \mu & \cos \delta \cos \mu \end{bmatrix}.$

The standard Clenshaw algorithm can now be applied to yield

\begin{align} \mathsf B_{n+1} &= \mathsf B_{n+2} = \mathsf 0, \\ \mathsf B_k &= C_k \mathsf I + \mathsf A \mathsf B_{k+1} - \mathsf B_{k+2}, \qquad\mathrm{for\ } n\ge k \ge 1,\\ \mathsf M(\theta_1,\theta_2) &= C_0 \begin{bmatrix}\mu\\1\end{bmatrix} + \mathsf B_1 \mathsf F_1(\theta_1,\theta_2), \end{align}

where $\mathsf B_k$ are $2\times2$ matrices. Finally we have

$\frac{m(\theta_1) - m(\theta_2)}{\theta_1 - \theta_2} = \mathsf M_2(\theta_1, \theta_2).$

In the limit $\theta_2 \rightarrow \theta_1$, this relation correctly evaluates the derivative $dm(\theta_1)/d\theta_1$, provided that, in evaluating $\mathsf F_k$ and $\mathsf A$, we take $\lim_{\delta\rightarrow0}(\sin k\delta)/\delta = k$.
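The matrix recurrence can be sketched in pure Python with $2\times2$ matrices as nested tuples (the function name is ours; the limit $\sin\delta/\delta \to 1$ is handled explicitly):

```python
import math

def meridian_pair(theta1, theta2, C):
    """Matrix Clenshaw summation for m(t) = C[0]*t + sum_k C[k]*sin(k*t).

    Returns [(m(theta1)+m(theta2))/2, (m(theta1)-m(theta2))/(theta1-theta2)],
    the average value and average slope, without cancellation in the difference.
    """
    d = 0.5 * (theta1 - theta2)                  # delta
    mu = 0.5 * (theta1 + theta2)
    sdd = math.sin(d) / d if d != 0.0 else 1.0   # sin(delta)/delta -> 1 as delta -> 0
    cd, sm, cm = math.cos(d), math.sin(mu), math.cos(mu)
    A = ((2 * cd * cm, -2 * d * math.sin(d) * sm),
         (-2 * sdd * sm, 2 * cd * cm))
    F1 = (cd * sm, sdd * cm)                     # F_1(theta1, theta2)
    B1 = B2 = ((0.0, 0.0), (0.0, 0.0))           # B_{n+1} = B_{n+2} = 0
    for C_k in reversed(C[1:]):                  # B_k = C_k I + A B_{k+1} - B_{k+2}
        Bk = ((C_k + A[0][0] * B1[0][0] + A[0][1] * B1[1][0] - B2[0][0],
                     A[0][0] * B1[0][1] + A[0][1] * B1[1][1] - B2[0][1]),
              (      A[1][0] * B1[0][0] + A[1][1] * B1[1][0] - B2[1][0],
               C_k + A[1][0] * B1[0][1] + A[1][1] * B1[1][1] - B2[1][1]))
        B1, B2 = Bk, B1
    # M = C_0 [mu, 1]^T + B_1 F_1
    return [C[0] * mu + B1[0][0] * F1[0] + B1[0][1] * F1[1],
            C[0]      + B1[1][0] * F1[0] + B1[1][1] * F1[1]]
```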

1. ^ Clenshaw, C. W. (July 1955). "A note on the summation of Chebyshev series". Mathematical Tables and Other Aids to Computation. 9 (51): 118–120. doi:10.1090/S0025-5718-1955-0071856-0. ISSN 0025-5718. Note that this paper is written in terms of the shifted Chebyshev polynomials of the first kind $T^*_n(x) = T_n(2x-1)$.