# Talk:Chebyshev polynomials

WikiProject Mathematics (Rated B-class, Mid-importance; Field: Analysis)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
WikiProject Russia / Science & education (Rated B-class, Mid-importance)
To participate: feel free to edit the article attached to this page, join up at the project page, or contribute to the project discussion.

Michael, it's not actually your browser. You can set that in your wikipedia preferences - by default it will render simple equations in normal text. And you apparently just complicated the equation by adding some white space ;-) snoyes 23:37 Mar 1, 2003 (UTC)

You're right; it works. Thanks. Why is the default the option that renders things as normal text, given that that is never what is intended when things are set with TeX? Michael Hardy 23:45 Mar 1, 2003 (UTC)

I don't know why that is the default option - best to ask one of the developers, maybe on the village pump. --snoyes 23:53 Mar 1, 2003 (UTC)

And BTW, there don't seem to be any other articles about special polynomial sequences on Wikipedia. Michael Hardy 23:45 Mar 1, 2003 (UTC)

// Here's a C program that generates the polynomial sequence on the page. Κσυπ Cyp   19:47, 25 Jan 2004 (UTC)

#include <stdio.h>

int asdf(int, int);
int choo(int, int);

int main() {
    int n, k;
    for (n = 0; n <= 16; ++n) {
        printf("T<sub>%d</sub>(x)=", n);
        for (k = 0; k <= n/2; ++k) {
            /* coefficient of x^(n-2k) in T_n(x) */
            printf("%s%d", k && asdf(n, k) >= 0 ? "+" : "", asdf(n, k));
            if (n - 2*k) printf("x");
            if (n - 2*k > 1) printf("<sup>%d</sup>", n - 2*k);
            if (k == n/2) printf("<br>\n");
        }
    }
    return 0;
}

int tmp[1000];

/* xi(n,k): the coefficient of x^(n-2k) in T_n(x) */
int asdf(int n, int k) {
    int r, a;
    if (!n) return 1;
    if (!k) return 1 << (n - 1);   /* leading coefficient 2^(n-1) */
    for (a = r = 0; a < k; ++a)
        r -= (asdf(n, a) * choo(n - 2*a, k - a)) >> (2*(k - a));
    return r;
}

/* binomial coefficient C(n,k) via Pascal's triangle */
int choo(int n, int k) {
    int x, y;
    for (tmp[0] = y = 1; y <= n; ++y)
        for (tmp[x = y] = 0; x; --x)
            tmp[x] += tmp[x - 1];
    return tmp[k];
}


If we recursively define a function (the letter ξ an arbitrary choice)
$\xi(n,0)=\lceil 2^{n-1}\rceil$
$\xi(n,k)=-\sum^{k-1}_{a=0}\frac{\xi(n,a)}{4^{k-a}}{n-2a\choose k-a}$
then
$T_n(x)=\sum^{\lfloor n/2\rfloor}_{k=0}\xi(n,k)x^{n-2k}$
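As a quick sanity check of these formulas (worked out here, not in the original comment), take $n=4$:

$\xi(4,0)=2^{3}=8,\qquad \xi(4,1)=-\frac{\xi(4,0)}{4}\binom{4}{1}=-8,\qquad \xi(4,2)=-\left[\frac{\xi(4,0)}{4^{2}}\binom{4}{2}+\frac{\xi(4,1)}{4}\binom{2}{1}\right]=-(3-4)=1,$

so $T_4(x)=8x^4-8x^2+1$, agreeing with the program's output.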

## Maxima and minima

It looks to me like the local maxima of the Chebyshev polynomials of the first kind all have value 1, and the local minima all have value -1. If so (I don't know enough to say whether it is actually true), then this is an interesting property that should be mentioned somewhere on the page. I'm guessing it probably pops right out of one of the definitions, but that's not transparent to a muggle like me. -dmmaus 04:01, 15 August 2005 (UTC)

I think that follows from the definition:

$T_n(x) = \begin{cases} \cos(n\arccos x), & x \in [-1,1] \\ \cosh(n \, \mathrm{arcosh}\, x), & x \ge 1 \\ (-1)^n \cosh(n \, \mathrm{arcosh}(-x)), & x \le -1 \end{cases}$

Obviously, for x in [-1, 1] the behaviour is given by the cosine, so the local maxima and minima are 1 and -1. If one can prove that there are no maxima/minima outside this interval, then your assertion would follow. Oleg Alexandrov 04:07, 15 August 2005 (UTC)

Ah, good point. I'm afraid my eyes glaze over when I see hyperbolic trig functions in the vicinity. :-) Still, I think it'd be nice to mention this property explicitly in the article, since it's the standout physical feature of the graphs when you look at them. -dmmaus 08:54, 15 August 2005 (UTC)

I don't know that much about these polynomials to attempt to write in the article. Let us hope somebody else will mention this fact. Oleg Alexandrov 17:04, 16 August 2005 (UTC)
I've mentioned it.
There's also an interesting parallel here, which is a bit OT. Notice that for a degree n polynomial (with fixed leading coeff) to have a minimal maximum absolute value on an interval, it has to reach this maximal absolute value the highest number of times possible: n+1 times. (Clearly a degree n polynomial can have at most n-1 places where its derivative is 0, so it's impossible that it has more than n+1 extremal points in an interval, counting the endpoints.) This parallels nicely with a general theorem in numerical analysis: for the polynomial p of degree n (with no restriction on the leading coeff) that approximates a continuous function f on an interval so that max|p-f| is minimal, |p-f| has to attain this maximum n+2 times, with alternating signs. I'm not sure how this general theorem is stated exactly, and I also don't know if it has any connection to this. – b_jonas 21:45, 29 December 2005 (UTC)
You have cited the famous approximation theorem of Chebyshev (not later than 1857). And yes, the extremal properties of polynomials of the first kind (minimal C-norm) follow from this theorem immediately. Mir76 (talk) 16:18, 16 August 2009 (UTC)
Let me say thanks to User:172.128.197.224 for his correction of the statement in the second paragraph of the article. – b_jonas 18:30, 18 February 2006 (UTC)

## Optimal approximations to arbitrary functions

The theorem says that, if an nth degree polynomial P approximates a function F on a closed interval in such a way that the error function P(x)-F(x) has n+2 extrema, and those extrema have alternating signs and the same absolute value (that is, P-F attains the values +ε, -ε, +ε, ...), then that polynomial is locally optimal. That is, no other polynomial close to P has maximum error less than ε. This locality situation is the same as in differential calculus: elementary calculus problems involve finding the maximum value of a function by setting the derivative to zero. That gives a local maximum (or minimum), but there might be other solutions far away.

I will outline the proof below. I think this theorem, and its proof, are interesting, but don't seem to be part of the standard maths curriculum. (Orthogonal polynomials in general seem to be a mathematical orphan child.) I would suggest that this proof ought to be on a separate "proof" page. WP is trying to minimize the number of such pages, because WP isn't a textbook, but this would probably be a good candidate.

The proof really requires a graph to motivate it. I'm new to WP; it should be made by someone who is comfortable making and uploading graphs, such as the graphs on the Chebyshev page.

Here's the proof: Suppose P(x)-F(x) has n+2 extrema, alternating in sign, all with absolute value ε. Suppose Q is another nth degree polynomial very close to P that is strictly better. Superimpose the graph of Q-F on that of P-F. Since Q is close to P, they will be about the same, but at the n+2 points where P-F is extremal, Q-F must lie strictly inside P-F. This is because the maximal excursions of Q-F are some δ strictly less than ε. So (Q-F)-(P-F) is strictly nonzero at those n+2 points, with alternating signs. But (Q-F)-(P-F) = Q-P, an nth degree polynomial. Since it takes n+2 nonzero values with alternating signs, it must have n+1 roots. But a nonzero nth degree polynomial can't do that.

Now here's an outline of why Chebyshev interpolation gets very close to this optimum polynomial: If the Taylor series for a function converges very rapidly (the way, for example, sin, cos, and exponential do), the error from chopping off the series after some finite number of terms will be close to the first term after the cutoff. That term will dominate all the later terms, and that term plus all the later ones are of course the error function. The same is true if we use a Chebyshev series instead of a Taylor series.

Now if we expand F(x) in its Chebyshev series and chop it off after the $T_n$ term, we have an nth degree polynomial. The error is dominated by the $T_{n+1}$ term, so it looks like $T_{n+1}$. $T_{n+1}$ is level: it attains its extreme values ±1 at n+2 points, with alternating signs. This means the error function is approximately level, and hence, by the theorem above, this polynomial is approximately optimal.

If one wants a truly optimal polynomial, one "tweaks" the polynomial P so that its error excursions are truly level. The Remez algorithm does this.

Should there be a separate WP page for this stuff, linked from Chebyshev? What should it be called? William Ackerman 18:46, 21 February 2006 (UTC)

Thanks for stating the theorem. – b_jonas 21:13, 21 February 2006 (UTC)

## The relationship

I removed the following:

Starting with T0( cos u ) = 1 and T1( cos u ) = cos u, one quickly calculates that cos 2u = 2 cos^2 u - 1, cos 3u = 4 cos^3 u - 3 cos u, cos 4u = 8 cos^4 u - 8 cos^2 u + 1, etc. By setting u = 0, whence cos u = 1, one can immediately check that both sides of each equation evaluate to unity, which is a good clue that the relationship may be correct—which, of course, it is.

What does "the relationship may be correct" mean? Which relationship? Not $T_n(\cos\theta)=\cos(n\theta)$, because that's a definition. It may show that the trigonometric identities are true, but that's not the point of the article. PAR 13:27, 11 August 2006 (UTC)

## reasoning not correct

I removed the following part because it is nonsense:

From reasoning similar to that above, one can develop a closed-form generating formula for Chebyshev polynomials of the first kind:

$\cos(n \theta)=\frac{e^{i n \theta}+e^{-i n \theta}}{2}=\frac{(e^{i \theta})^n+(e^{i \theta})^{-n}}{2} \,\!$

which, combined with de Moivre's formula: THIS IS FALSE:

$\! e^{i \theta}=\cos\theta+i \sin\theta=\cos\theta+i \sqrt{1-\cos^2\theta}=\cos\theta+\sqrt{\cos^2\theta-1} \,\!$

yields:

$\cos(n \theta)=\frac{\left(\cos\theta+ \sqrt{\cos^2\theta-1}\right)^n+\left(\cos\theta+ \sqrt{\cos^2\theta-1}\,\right)^{-n}}{2} \,\!$

which, of course, is far more expedient for determining the cosine of N times a given angle than is cranking through almost N rounds of the recursive generator calculation. Finally, if we replace $\cos(\theta)$ with x, we can alternatively write:

$T_n(x)=\frac{\left(x+ \sqrt{x^2-1}\right)^n+\left(x+ \sqrt{x^2-1}\right)^{-n}}{2}.$

## Linear operators

Should someone write a section about these as the eigenfunctions of a second order linear differential operator (which is symmetric wrt the inner product $(f,g)= \int_{-1}^{1} f(x)g(x)\frac{1}{\sqrt{1-x^2}}\,dx$)? I would, but I'm not really comfortable with the theory yet... Marbini (talk) 17:54, 2 April 2008 (UTC)

## Factoring

Can anything of interest be said about factorization of Chebyshev polynomials? Michael Hardy (talk) 00:04, 9 December 2008 (UTC)

Apparently. Googling for "factoring chebyshev" turned up this and this which seem to at least contain some pointers. Fredrik Johansson 00:15, 9 December 2008 (UTC)

So there's really not very much.....maybe. There is a sense in which the spread polynomials Sn, satisfying

$S_n(\sin^2\theta) = \sin^2(n\theta)\,$

are trivially equivalent to the Chebyshev polynomials Tn, satisfying

$T_n(\cos\theta) = \cos(n\theta).\,$

But the article titled spread polynomials has some material on factorization, and maybe there the story is different. Or maybe not? Michael Hardy (talk) 21:36, 10 March 2009 (UTC)
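For reference, the equivalence can be made explicit (my own two-line derivation, worth double-checking): writing $x=\sin^2\theta$, so that $1-2x=\cos 2\theta$, the identity $T_n(\cos 2\theta)=\cos 2n\theta = 1-2\sin^2(n\theta)$ gives

$S_n(x) = \sin^2(n\theta) = \frac{1 - T_n(1-2x)}{2}.$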

Some interest in factoring the Chebyshev T-polynomials stems from reduction of cosines to rational numbers as PDF here. - R. J. Mathar (talk) 19:35, 12 March 2014 (UTC)

## Explicit formulas

Can someone provide sources for the given explicit formulas, or at least some hints on how to obtain them?

I found some other formulas in Mason, J. C., Chebyshev Polynomials, CRC Press, 2003, pp. 35-36, and in Snyder, M. A., Chebyshev Methods in Numerical Approximation, Englewood Cliffs, NJ, USA, 1966, p. 14.

In the first book, there are some other references to Rivlin (1974) and Clenshaw (1962). Mstempin (talk) 21:23, 3 August 2009 (UTC)

## History

I believe Chebyshev developed these polynomials while attempting to solve the problem of the inversor, a mechanical linkage that converts circular motion into linear motion. DonPMitchell (talk) 03:42, 13 March 2010 (UTC)

## Dickson polynomial

The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
The result of this discussion was no consensus. NukeofEarl (talk) 16:24, 30 October 2014 (UTC)

I would like to propose merging Dickson polynomial into Chebyshev polynomials. All the properties, except the analysis over finite fields, are already here. — Arthur Rubin (talk) 00:27, 21 September 2011 (UTC)

Dickson polynomials are mainly studied over finite fields, where they are not equivalent to Chebyshev polynomials.
It appears that in their applications these two polynomial families really do differ, and I wouldn't support a merge... Gryllida (talk) 22:16, 2 January 2014 (UTC)

The above discussion is closed. Please do not modify it. Subsequent comments should be made in a new section.

## Proposed section to merge: Spread polynomials and some other Chebyshev-linked Polynomials

Spread polynomials are in a sense equivalent to the Chebyshev polynomials of the first kind, but enable one to avoid square roots and conventional trigonometric functions in certain contexts, notably in rational trigonometry.

Dickson polynomials[1][2][3][4], Dn(x), are equivalent to Chebyshev polynomials Tn(x), up to a slight and trivial change of variable:
$D_n(2x) = 2 T_n(x) \,$

Dickson polynomials are given by:

$D_n(x)=\sum_{p=0}^{\lfloor n/2\rfloor}\frac{n}{n-p} \binom{n-p}{p} (-1)^p x^{n-2p}.$

The first few Dickson polynomials are

$D_0(x) = 2 \,$
$D_1(x) = x \,$
$D_2(x) = x^2 - 2 \,$
$D_3(x) = x^3 - 3x \,$
$D_4(x) = x^4 - 4x^2 + 2 \,$

Boubaker polynomials are also linked to Chebyshev polynomials. They can be defined by the recurrence relation[5][6][7][8]:

$B_n(x)= \begin{cases} 1, & \mbox{if } n = 0, \\ x, & \mbox{if } n = 1, \\ x^2+2, & \mbox{if } n = 2, \\ x B_{n - 1}(x) - B_{n - 2}(x), & \mbox{if } n \geq 3. \end{cases}$

Lucas polynomials Ln(x)[9] are a polynomial sequence defined by the following Chebyshev-like recurrence relation:

$L_m(x) = \begin{cases} 2, & \mbox{if } m = 0 \\ x, & \mbox{if } m = 1 \\ x L_{m - 1}(x) + L_{m - 2}(x), & \mbox{if } m \geq 2. \end{cases}$

## References

1. ^ [1]
2. ^ [2]
3. ^ [3]
4. ^ [4]
5. ^ O. D. Oyodum, O. B. Awojoyogbe, M. K. Dada, J. N. Magnuson, Comment on "Enhancement of pyrolysis spray disposal performance using thermal time-response to precursor uniform deposition" by K. Boubaker, A. Chaouachi, M. Amlouk and H. Bouzouita: On the earliest definition of the Boubaker polynomials, Eur. Phys. J. Appl. Phys., Volume 46, pp. 2120-21202.
6. ^ Paul Barry and Aoife Hennessy, Meixner-Type Results for Riordan Arrays and Associated Integer Sequences, Chapter 6: The Boubaker polynomials, Journal of Integer Sequences (JIS) [5]
7. ^ A. S. Kumar, An analytical solution to applied mathematics-related Love's equation using the Boubaker polynomials expansion scheme, International Journal of the Franklin Institute (Elsevier) [6]
8. ^ Encyclopedia of Physics Research, Chapter 21: Cryogenics Vessels Thermal Profiling Using the Boubaker Polynomials, Editors: Nancy B. Devins and Jillian P. Ramos [7]
9. ^ [8]

## Possible corrections

### Section on Orthogonality and Example 1 has errors

The section on Orthogonality has a somewhat major issue, namely that the discrete orthogonality condition as written is wrong. It neglects to mention that the formula given is only valid for $0\leq i,j \leq N-1$. For other values of $i,j$ the aliasing condition gives alternating-sign "tiled" versions of what's currently written.

Also, in Basis Set, Example 1, the closed-form Chebyshev coefficients $a_n$ for Log(1+x) is given correctly. However, immediately afterward it is written that these same coefficients $a_n$ can alternately be computed via the discrete Gauss-Lobatto sum. This is false, as is readily verified numerically. The $a_n$ in the Gauss-Lobatto discretization are the aliasing-folded versions of the $a_n$ computed in the continuous integral case, and are not the same. To a newcomer who does not know this already, this is completely non-obvious, especially considering that the same symbol $a_n$ is used for both. I suggest that different symbols should be used for the two cases to emphasize the fact that the two quantities are different. — Preceding unsigned comment added by MathDoobler (talkcontribs) 04:13, 3 January 2014 (UTC)

## Third and Fourth Kind

Is there any reason why the Chebyshev polynomials V_n(x) and W_n(x) of the third and fourth kind are not mentioned (at least: defined) in the article? R. J. Mathar (talk) 19:29, 12 March 2014 (UTC)

I hadn't heard of them. Got a reference which uses them? — Arthur Rubin (talk) 16:27, 18 March 2014 (UTC)
For example
1. J. C. Mason (2010). "1.2.3". Chebyshev Polynomials. CRC Press.
2. J. C. Mason. "Chebyshev polynomials of the second, third and fourth kinds in approximation...". doi:10.1016/0377-0427(93)90148-5.
3. J. C. Mason, G. H. Elliott (1995). "Constrained near-minimax approximation...". Num. Alg. (9). pp. 39–54. doi:10.1007/BF02143926.
4. S. E. Notaris. "Interpolatory quadrature..". doi:10.1016/S0377-0427(97)00018-6.
5. "A survey on third and fourth kind...". Appl. Math. Comput. 2008. doi:10.1016/j.amc.2007.09.018.
6. G. Heinig (2001). "Chebyshev–Hankel matrices...". Lin. Alg. Applic. pp. 181–196. doi:10.1016/S0024-3795(00)00300-1.

and many, many others. R. J. Mathar (talk) 17:25, 1 May 2014 (UTC)