
Talk:Chebyshev polynomials



WikiProject Mathematics (B-class, Mid-priority)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as B-class on Wikipedia's content assessment scale and as Mid-priority on the project's priority scale.

Michael, it's not actually your browser. You can set that in your wikipedia preferences - by default it will render simple equations in normal text. And you apparently just complicated the equation by adding some white space ;-) snoyes 23:37 Mar 1, 2003 (UTC)

You're right; it works. Thanks. Why is the default the option that renders things as normal text, given that that is never what is intended when things are set with TeX? Michael Hardy 23:45 Mar 1, 2003 (UTC)

I don't know why that is the default option - best to ask one of the developers, maybe on the village pump. --snoyes 23:53 Mar 1, 2003 (UTC)

And BTW, there don't seem to be any other articles about special polynomial sequences on Wikipedia. Michael Hardy 23:45 Mar 1, 2003 (UTC)


//Here's the polynomial sequence on the page. Κσυπ Cyp   19:47, 25 Jan 2004 (UTC)

#include <stdio.h>

int asdf(int, int);   /* coefficient of x^(n-2k) in T_n(x) */
int choo(int, int);   /* binomial coefficient C(n, k) */

int main() {
  int n, k;
  /* Print T_0(x) ... T_16(x), with HTML markup for subscripts and exponents. */
  for(n=0;n<=16;++n) {
    printf("T<sub>%d</sub>(x)=", n);
    for(k=0;k<=n/2;++k) {
      printf("%s%d", k&&asdf(n, k)>=0?"+":"", asdf(n, k));
      if(n-2*k) printf("x");
      if(n-2*k>1) printf("<sup>%d</sup>", n-2*k);
      if(k==n/2) printf("<br>\n");
    }
  }
  return(0);
}

int tmp[1000];

/* Coefficient of x^(n-2k) in T_n(x).  The leading coefficient (k=0) is
   2^(n-1); each lower one is fixed by requiring that, after substituting
   x = cos u and expanding the powers of cos u into multiple angles, every
   term except cos(nu) cancels.  The shift by 2*(k-a) is an exact division
   by 4^(k-a) (it relies on arithmetic right shift for negative values). */
int asdf(int n, int k) {
  int r, a;
  if(!n) return(1);
  if(!k) return(1<<(n-1));
  for(a=r=0;a<k;++a) r-=(asdf(n, a)*choo(n-2*a, k-a))>>(2*(k-a));
  return(r);
}

/* Binomial coefficient C(n, k), built row by row (Pascal's triangle) in tmp[]. */
int choo(int n, int k) {
  int x, y;
  for(tmp[0]=y=1;y<=n;++y) for(tmp[x=y]=0;x;--x) tmp[x]+=tmp[x-1];
  return(tmp[k]);
}

If we recursively define a function (the letter ξ an arbitrary choice)
then

Maxima and minima

It looks to me like the local maxima of the Chebyshev polynomials of the first kind all have value 1, and the local minima all have value -1. If so (I don't know enough to say whether it is actually true), then this is an interesting property that should be mentioned somewhere on the page. I'm guessing it probably pops right out of one of the definitions, but that's not transparent to a muggle like me. -dmmaus 04:01, 15 August 2005 (UTC)[reply]

I think that follows from the definition:

:<math>T_n(x) = \begin{cases} \cos(n\arccos x), & -1 \le x \le 1,\\ \cosh(n\,\operatorname{arccosh} x), & x \ge 1,\\ (-1)^n \cosh(n\,\operatorname{arccosh}(-x)), & x \le -1. \end{cases}</math>

Obviously, for x in [-1, 1], since the behaviour is given by the cosine, the local maxima and minima are 1 and -1. If one can prove that there are no maxima/minima outside this interval, then your assertion would follow. Oleg Alexandrov 04:07, 15 August 2005 (UTC)[reply]
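
For the record, spelling out the cosine part (using nothing beyond the definition quoted above, and writing the extreme points as x_k): on [-1, 1] the extreme values are attained at the n+1 points
:<math>x_k = \cos\frac{k\pi}{n}, \qquad T_n(x_k) = \cos k\pi = (-1)^k, \qquad k = 0, 1, \dots, n,</math>
and since |cos| ≤ 1 they are never exceeded there; on each side outside [-1, 1] the hyperbolic form is strictly monotone, so no further local extrema occur.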

Ah, good point. I'm afraid my eyes glaze over when I see hyperbolic trig functions in the vicinity. :-) Still, I think it'd be nice to mention this property explicitly in the article, since it's the standout physical feature of the graphs when you look at them. -dmmaus 08:54, 15 August 2005 (UTC)[reply]

I don't know enough about these polynomials to attempt writing that in the article. Let us hope somebody else will mention this fact. Oleg Alexandrov 17:04, 16 August 2005 (UTC)[reply]
I've mentioned it.
There's also an interesting parallel here, which is a bit OT. Notice that for a degree n polynomial (with fixed leading coefficient) to have a minimal maximum absolute value on an interval, it has to reach this maximal absolute value the highest number of times possible: n+1 times. (Clearly a degree n polynomial can have at most n-1 places where its derivative is 0, so it's impossible that it has more than n+1 extremal points in an interval.) This parallels up nicely with a general theorem in numerical analysis: the polynomial p of degree n (with no restriction on the leading coefficient) that approximates a continuous function f in the sense that max|p-f| on an interval is minimal must be such that |p-f| takes this maximum n+1 times. I'm not sure how this general theorem is stated exactly, and I also don't know if it has any connection to this. – b_jonas 21:45, 29 December 2005 (UTC)[reply]
You have cited the famous approximation theorem by Chebyshev (not later than 1857). And yes, the extremal properties of the polynomials of the first kind (minimal C-norm) follow from this theorem immediately. Mir76 (talk) 16:18, 16 August 2009 (UTC)[reply]
Let me say thanks to User:172.128.197.224 for his correction of the statement in the second paragraph of the article. – b_jonas 18:30, 18 February 2006 (UTC)[reply]

Optimal approximations to arbitrary functions

The theorem says that, if an nth degree polynomial P approximates a function F on a closed interval in such a way that the error function P(x)-F(x) has n+2 local extrema, and those extrema alternate in sign and have the same absolute value (that is, P-F takes the extreme values +ε, -ε, +ε, ...), then that polynomial is locally optimal. That is, there is no other nth degree polynomial close to P that has maximum error less than ε. This locality situation is the same as the situation in differential calculus: elementary calculus problems involve finding the maximum value of a function. One sets the derivative to zero. That gives a local maximum (or minimum), but there might be other solutions that are far away.
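
In symbols (my paraphrase of the alternation condition, writing the alternation points as x_0 < ... < x_{n+1}): the hypothesis is that there are points
:<math>a \le x_0 < x_1 < \cdots < x_{n+1} \le b \quad\text{with}\quad P(x_i) - F(x_i) = \pm(-1)^i\,\varepsilon, \qquad \varepsilon = \max_{[a,b]}|P - F|,</math>
and the conclusion is that no polynomial of degree at most n close to P has maximum error below ε.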

I will outline the proof below. I think this theorem, and its proof, are interesting, but don't seem to be part of the standard maths curriculum. (Orthogonal polynomials in general seem to be a mathematical orphan child.) I would suggest that this proof ought to be on a separate "proof" page. WP is trying to minimize the number of such pages, because WP isn't a textbook, but this would probably be a good candidate.

The proof really requires a graph to motivate it. I'm new to WP; it should be made by someone who is comfortable making and uploading graphs, such as the graphs on the Chebyshev page.

Here's the proof: Suppose P(x)-F(x) has n+2 extrema, alternating in sign, all with absolute value ε. Suppose Q is another nth degree polynomial, very close to P, that is strictly better. Superimpose the graph of Q-F on that of P-F. Since Q is close to P, the two graphs will be about the same, but at the n+2 points where P-F is extremal, Q-F must lie strictly inside P-F. This is because the maximal excursion of Q-F is some δ strictly less than ε. So (Q-F)-(P-F) is strictly nonzero at those n+2 points, with alternating signs. But (Q-F)-(P-F) = Q-P, a polynomial of degree at most n. Since it has n+2 nonzero values with alternating signs, it must have at least n+1 roots. But a nonzero polynomial of degree at most n can't do that.
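
To make the final counting step fully explicit (same argument, just in symbols): at the alternation points,
:<math>(Q - P)(x_i) = \bigl(Q(x_i) - F(x_i)\bigr) - \bigl(P(x_i) - F(x_i)\bigr),</math>
and since |Q(x_i) - F(x_i)| ≤ δ < ε = |P(x_i) - F(x_i)|, the sign of (Q-P)(x_i) is opposite to that of (P-F)(x_i). So Q-P alternates in sign at the n+2 points, hence by the intermediate value theorem has at least n+1 roots, which a nonzero polynomial of degree at most n cannot have.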

Now here's an outline of why Chebyshev interpolation gets very close to this optimum polynomial: If the Taylor series for a function converges very rapidly (the way, for example, sin, cos, and exponential do), the error from chopping off the series after some finite number of terms will be close to the first term after the cutoff. That term will dominate all the later terms, and that term plus all the later ones are of course the error function. The same is true if we use a Chebyshev series instead of a Taylor series.

Now if we expand F(x) in its Chebyshev series and chop it off after the Tn term, we have an nth degree polynomial. The error is dominated by the Tn+1 term, so it looks like Tn+1. Tn+1 is level: it reaches ±1 at n+2 points with alternating signs. This means the error function is approximately level, and hence, by the theorem above, this polynomial is approximately optimal.
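
(A small self-contained C sketch of this claim, in the spirit of the program higher up on this page. The test function exp(x), the degree N = 8, the node count Q = 200, and the helper names coeff and series are all arbitrary choices of mine, purely for illustration: the program computes the truncated Chebyshev series and checks that its maximum error is roughly the size of the first dropped coefficient.)

/* Expand f(x) = exp(x) in a Chebyshev series, keep terms up to T_N, and
   check that the resulting error is about the size of the next coefficient. */
#include <stdio.h>
#include <math.h>

#define N 8      /* degree of the truncated Chebyshev series */
#define Q 200    /* Chebyshev-Gauss nodes for the coefficient integrals */

static double f(double x) { return exp(x); }

/* c_k = (2/pi) Int_{-1}^{1} f(x) T_k(x) / sqrt(1-x^2) dx, computed with the
   Chebyshev-Gauss rule: equal weights pi/Q at x_j = cos((j+1/2)pi/Q). */
static double coeff(int k) {
  double pi = acos(-1.0), s = 0.0, t;
  int j;
  for (j = 0; j < Q; ++j) {
    t = pi * (j + 0.5) / Q;
    s += f(cos(t)) * cos(k * t);
  }
  return 2.0 * s / Q;
}

/* Truncated series c_0/2 + sum_{k=1}^{N} c_k T_k(x),
   using T_k(x) = cos(k arccos x) on [-1, 1]. */
static double series(const double *c, double x) {
  double s = 0.5 * c[0];
  int k;
  for (k = 1; k <= N; ++k) s += c[k] * cos(k * acos(x));
  return s;
}

int main(void) {
  double c[N + 1], x, e, maxerr = 0.0;
  int k, i;
  for (k = 0; k <= N; ++k) c[k] = coeff(k);
  for (i = 0; i <= 1000; ++i) {           /* sample the error on a fine grid */
    x = -1.0 + 2.0 * i / 1000.0;
    e = series(c, x) - f(x);
    if (fabs(e) > maxerr) maxerr = fabs(e);
  }
  printf("degree %d Chebyshev truncation of exp(x): max |error| = %.3e\n",
         N, maxerr);
  printf("next coefficient c_%d = %.3e (dominates the error, as argued)\n",
         N + 1, coeff(N + 1));
  return 0;
}

To see the near-equioscillation itself rather than just its size, print series(c, x) - f(x) over the grid instead of only the maximum.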

If one wants a truly optimal polynomial, one "tweaks" the polynomial P so that its error excursions are truly level. The Remez algorithm (also spelled Remes) does this.

Should there be a separate WP page for this stuff, linked from Chebyshev? What should it be called? William Ackerman 18:46, 21 February 2006 (UTC)[reply]

Thanks for stating the theorem. – b_jonas 21:13, 21 February 2006 (UTC)[reply]

The relationship

I removed the following:

Starting with T0( cos u ) = 1 and T1( cos u ) = cos u, one quickly calculates that cos 2u = 2 cos^2 u - 1, cos 3u = 4 cos^3 u - 3 cos u, cos 4u = 8 cos^4 u - 8 cos^2 u + 1, etc. By setting u = 0, whence cos u = 1, one can immediately detect that both sides of each equation evaluate to unity, which is a good clue that the relationship may be correct - which, of course, it is.

What does "the relationship may be correct" mean? which relationship? Not because thats a definition. It may show that the trignometric identities are true, but thats not the point of the article. PAR 13:27, 11 August 2006 (UTC)[reply]

reasoning not correct

I removed the following part because it is nonsense:

From reasoning similar to that above, one can develop a closed-form generating formula for Chebyshev polynomials of the first kind:

which, combined with De Moivre's formula: THIS IS FALSE:

yields:

which, of course, is far more expedient for determining the cosine of N times a given angle than is cranking through almost N rounds of the recursive generator calculation. Finally, if we replace with x, we can alternatively write:

Linear operators

Should someone write a section about these as the eigenfunctions of a second-order linear differential operator (which is symmetric with respect to the inner product <math>\langle f, g\rangle = \int_{-1}^{1} \frac{f(x)\,g(x)}{\sqrt{1-x^2}}\,dx</math>)? I would, but I'm not really comfortable with the theory yet... Marbini (talk) 17:54, 2 April 2008 (UTC)[reply]
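
In case it helps whoever takes this up: presumably the operator meant is the one from the Chebyshev differential equation, which in Sturm-Liouville (self-adjoint) form reads
:<math>\frac{d}{dx}\!\left(\sqrt{1-x^2}\,\frac{dy}{dx}\right) + \frac{n^2}{\sqrt{1-x^2}}\,y = 0,</math>
so that T_n is an eigenfunction with eigenvalue n^2, and the symmetry is exactly with respect to the weight (1 - x^2)^(-1/2) appearing in the inner product above.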

Factoring

Can anything of interest be said about factorization of Chebyshev polynomials? Michael Hardy (talk) 00:04, 9 December 2008 (UTC)[reply]

Apparently. Googling for "factoring chebyshev" turned up this and this which seem to at least contain some pointers. Fredrik Johansson 00:15, 9 December 2008 (UTC)[reply]

So there's really not very much... maybe. There is a sense in which the spread polynomials Sn, satisfying

:<math>S_n(\sin^2\theta) = \sin^2(n\theta),</math>

are trivially equivalent to the Chebyshev polynomials Tn, satisfying

:<math>T_n(\cos\theta) = \cos(n\theta).</math>

But the article titled spread polynomials has some material on factorization, and maybe there the story is different. Or maybe not? Michael Hardy (talk) 21:36, 10 March 2009 (UTC)[reply]
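
For what it's worth, the equivalence can be written in one line: putting s = sin^2 θ, so that cos 2θ = 1 - 2s,
:<math>S_n(s) = \sin^2 n\theta = \frac{1 - \cos 2n\theta}{2} = \frac{1 - T_n(\cos 2\theta)}{2} = \frac{1 - T_n(1 - 2s)}{2},</math>
which is why any factorization statement about one family translates directly into one about the other.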

Explicit formulas

Can someone provide sources for the given explicit formulas, or at least some hints on how to obtain them?

I found some other formulas in Mason, J. C., "Chebyshev Polynomials", CRC Press, 2003, pp. 35-36, and in Snyder, M. A., "Chebyshev Methods in Numerical Approximation", Englewood Cliffs, NJ, USA, 1966, p. 14.

In the first book, there are some other references to Rivlin (1974) and Clenshaw (1962). Mstempin (talk) 21:23, 3 August 2009 (UTC)[reply]
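
One possible hint, in case it is useful (this may or may not be the route those books take): expanding De Moivre's formula cos nθ = Re[(cos θ + i sin θ)^n] and using sin^2 θ = 1 - cos^2 θ gives
:<math>T_n(x) = \sum_{k=0}^{\lfloor n/2\rfloor}\binom{n}{2k}\,(x^2 - 1)^k\,x^{n-2k},</math>
and the other explicit forms follow by expanding (x^2 - 1)^k and collecting powers of x.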

History

I believe Chebyshev developed these polynomials while attempting to solve the problem of the inversor, a mechanical linkage that converts circular motion into linear motion. DonPMitchell (talk) 03:42, 13 March 2010 (UTC)[reply]

Dickson polynomial

I would like to propose merging Dickson polynomial into Chebyshev polynomials. All the properties, except the analysis over finite fields, are already here. — Arthur Rubin (talk) 00:27, 21 September 2011 (UTC)[reply]

Proposed section to merge: Spread polynomials and some other Chebyshev-linked Polynomials

Spread polynomials are in a sense equivalent to the Chebyshev polynomials of the first kind, but enable one to avoid square roots and conventional trigonometric functions in certain contexts, notably in rational trigonometry.

Dickson polynomials[1][2][3][4], Dn(x), are equivalent to Chebyshev polynomials Tn(x), up to a slight and trivial change of variable:

:<math>D_n(2x) = 2\,T_n(x).</math>

Dickson polynomials are given by:

:<math>D_n(x) = \sum_{k=0}^{\lfloor n/2\rfloor} \frac{n}{n-k}\binom{n-k}{k}(-1)^k\,x^{n-2k} \quad (n \ge 1), \qquad D_0(x) = 2.</math>

The first few Dickson polynomials are

:<math>D_0(x) = 2,\quad D_1(x) = x,\quad D_2(x) = x^2 - 2,\quad D_3(x) = x^3 - 3x,\quad D_4(x) = x^4 - 4x^2 + 2.</math>

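As a quick check of the change of variable against the list above:
:<math>D_2(2x) = 4x^2 - 2 = 2\,T_2(x), \qquad D_3(2x) = 8x^3 - 6x = 2\,T_3(x).</math>
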
Boubaker polynomials are also linked to Chebyshev polynomials. They can be defined by the recurrence relation[5][6][7][8]:

:<math>B_0(x) = 1,\quad B_1(x) = x,\quad B_2(x) = x^2 + 2,\qquad B_m(x) = x\,B_{m-1}(x) - B_{m-2}(x)\quad (m > 2).</math>
Lucas polynomials Ln(x)[9] are a polynomial sequence defined by the following Chebyshev-like recurrence relation:

:<math>L_0(x) = 2,\quad L_1(x) = x,\qquad L_n(x) = x\,L_{n-1}(x) + L_{n-2}(x)\quad (n \ge 2).</math>
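One way to make the "Chebyshev-like" link explicit (easy to verify against the recurrence above, though I have not seen it sourced here):
:<math>L_n(x) = 2\,i^{-n}\,T_n\!\left(\frac{ix}{2}\right),</math>
since the substitution turns T_n(y) = 2y T_{n-1}(y) - T_{n-2}(y) with y = ix/2 into L_n(x) = x L_{n-1}(x) + L_{n-2}(x), with L_0 = 2 and L_1 = x matching.
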
References

  1. ^ [1]
  2. ^ [2]
  3. ^ [3]
  4. ^ [4]
  5. ^ O. D. Oyodum, O. B. Awojoyogbe, M. K. Dada, J. N. Magnuson, "Comment on 'Enhancement of pyrolysis spray disposal performance using thermal time-response to precursor uniform deposition' by K. Boubaker, A. Chaouachi, M. Amlouk and H. Bouzouita. On the earliest definition of the Boubaker polynomials", Eur. Phys. J. Appl. Phys., Volume 46, pp. 2120-21202.
  6. ^ Paul Barry and Aoife Hennessy, "Meixner-Type Results for Riordan Arrays and Associated Integer Sequences", Journal of Integer Sequences (JIS), Chapter 6: The Boubaker polynomials [5]
  7. ^ A. S. Kumar, "An analytical solution to applied mathematics-related Love's equation using the Boubaker polynomials expansion scheme", International Journal of the Franklin Institute (Elsevier) [6]
  8. ^ Encyclopedia of Physics Research, Chapter 21: "Cryogenics Vessels Thermal Profiling Using the Boubaker Polynomials", Editors: Nancy B. Devins and Jillian P. Ramos, [7]
  9. ^ [8]