Talk:Taylor series

From Wikipedia, the free encyclopedia
Taylor series has been listed as one of the Mathematics good articles under the good article criteria. If you can improve it further, please do so. If it no longer meets these criteria, you can reassess it.
May 12, 2011: Good article nominee (Listed)
Wikipedia Version 1.0 Editorial Team / v0.7 (Rated GA-class)
This article has been reviewed by the Version 1.0 Editorial Team.
GA: This article has been rated as GA-Class on the quality scale.
???: This article has not yet received a rating on the importance scale.
This article is included in the subsequent release version of Mathematics.
This article has been selected for Version 0.7 and subsequent release versions of Wikipedia.
WikiProject Mathematics (Rated GA-class, Top-importance)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Mathematics rating: GA-Class, Top-importance; Field: Analysis.
One of the 500 most frequently viewed mathematics articles.

Multivariate Taylor Series[edit]

Why was the section on `multivariate Taylor series' removed? (Compare the version of 17:53, 2006-09-20 with that of 17:55, 2006-09-20.) I am going to add it again unless someone provides a good reason not to. -- Pouya Tafti 14:32, 5 October 2006 (UTC)

I agree with Pouya as well! There's no separate article on multivariate Taylor series on Wikipedia, so it should be mentioned here. Lavaka 22:22, 17 January 2007 (UTC)
I have recovered the section titled `Taylor series for several variables' from the edition of 2006-09-20, 17:53. Please check for possible inaccuracies. —Pouya D. Tafti 10:37, 14 March 2007 (UTC)

The notation used in the multivariate series, e.g. f_{xy}, is not defined. Ma-Ma-Max Headroom (talk) 08:46, 9 February 2008 (UTC)

Can someone please check that the formula given for the multivariate Taylor series is correct? It doesn't agree with the one given on the Wolfram MathWorld article. Specifically, should in the denominator of the right-hand side of the first equation not be ? As an example, consider the Taylor series for centered around . As it is, the formula would imply that the Taylor series would be instead of . Note that the two-variable example given in this same section produces the second (correct, I believe) series, contradicting the general formula at the start of the section. Ben E. Whitney 19:14, 23 July 2015 (UTC)

It's correct in both. Using your function and the conventions of the article, we have
as required. Sławomir Biały (talk) 21:08, 23 July 2015 (UTC)
Oh, I see! I think I'd mentally added a factor for the different ways the mixed derivatives could be ordered without realizing it. Should have written it out. Thank you! Ben E. Whitney 15:56, 24 July 2015 (UTC)
No worries. This seems to be a perennial point of misunderstanding. It might be worthwhile trying to clarify this in the article. Sławomir Biały (talk) 16:02, 24 July 2015 (UTC)
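(Editorial note: since this comes up repeatedly, here is a minimal LaTeX sketch of the two-variable case, assuming the article's convention of separate factorials in the denominator and equality of the mixed partials; it is meant only to show why no extra factor for the orderings of the mixed derivatives appears.)

    T(x,y) = \sum_{n_1=0}^{\infty} \sum_{n_2=0}^{\infty}
        \frac{\partial^{\,n_1+n_2} f}{\partial x^{n_1}\,\partial y^{n_2}}(a,b)\,
        \frac{(x-a)^{n_1}(y-b)^{n_2}}{n_1!\,n_2!}

Collecting the terms with n_1 + n_2 = 2 gives

    \tfrac{1}{2} f_{xx}(a,b)\,(x-a)^2 + f_{xy}(a,b)\,(x-a)(y-b) + \tfrac{1}{2} f_{yy}(a,b)\,(y-b)^2,

which is exactly the grouped form \tfrac{1}{2!}\bigl[(x-a)\partial_x + (y-b)\partial_y\bigr]^2 f(a,b). The two orderings of the mixed derivative are already accounted for because the index pair (n_1, n_2) = (1, 1) occurs only once in the double sum, so no multinomial factor is added on top of 1/(n_1!\,n_2!).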

Madhava of Sangamagrama[edit]

Actually, I think Archimedes should be credited with the first use of the Taylor series, since he used the same method as Madhava: using an infinite summation to achieve a finite trigonometric result. Liu Hui independently employed a similar method 400 years later, but still about 800 years prior to Madhava's work, although the Wikipedia article on Liu Hui does not reflect this.

In fact, it would have been quite easy for them to perform the same task as Madhava. It isn't difficult to square an arc (albeit in an infinite number of steps) using simple Euclidean geometry. I believe that Archimedes and later Liu Hui were aware of this. The last time I heard about it was at a History and Philosophy of Mathematics conference in 1998 at the Center for Philosophy of Science, University of Pittsburgh. Anyone care to dredge up a reference?

Taylor series with Lagrange and Peano remainders[edit]

Why is there nothing about those two remainders in the article?
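(Editorial note: for reference, the two remainder forms being asked about are standard and are treated in the Taylor's theorem article; a brief LaTeX sketch, writing the expansion of f about a:)

    f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}\,(x-a)^k + R_n(x)

Lagrange form (assuming f is n+1 times differentiable on the relevant interval):

    R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\,(x-a)^{n+1} \quad \text{for some } \xi \text{ between } a \text{ and } x.

Peano form (assuming f is n times differentiable at a):

    R_n(x) = o\bigl((x-a)^n\bigr) \quad \text{as } x \to a.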

Confused addition[edit]

An editor recently added an explanation " the next term would be smaller still and a negative number, hence the term x^9/9! can be used to approximate the value of the terms left out of the approximation. " for the error estimate in the series of sin(x). This explanation is just nonsense: the next term might be positive or negative (depending on the sign of x), and the sign of that term together with the magnitude of the next term (which might or might not be smaller, depending on the magnitude of x) is simply not enough information to make the desired conclusion, even in the real case. More importantly, it is simply not necessary to justify this claim here, and it distracts from the larger point being made in this section. --JBL (talk) 18:41, 13 July 2015 (UTC)

As I expected your explanation is very poor, and you do need to provide one if you are going to revert a good edit. Yes, I see how the signs alternate in this particular example, and I also see that each term, in this particular example, is increasingly small. So, what precisely is your objection? I took the original explanation and expanded it just a little, saying that further terms are small and the next term in particular is negative, hence the term x^9/9! is a good approximation of the error introduced by the truncation. You need to do a number of things. First, you need to learn to read; second, you need to learn to respect others' edits. If an edit is completely off the mark, it should be deleted. If the edit is pretty close, then you should consider editing it to improve things just that much more. But if you are of the opinion that a single thing being wrong with an edit means deleting it is the answer, we could extrapolate that attitude to the whole of Wikipedia and in the end we would have nothing left, as Wikipedia is shot full of errors. My edit was not completely off the mark, hence it should be left and possibly improved. Please read the original material and then read my edits. Finally, if you are squatting on this article in the mistaken belief that you should be the arbiter of the "truth", you need to move to one side. I did not start a reversion war, you did. Thank you. Zedshort (talk) 19:37, 13 July 2015 (UTC)
Before I chime in on this, both of you have to stop edit warring over this (and both of you know that). @Zedshort: in particular I think your comment above is unnecessarily confrontational.
Now, as for the content: Joel's objections over the sign of the error term are valid. The next term is −x^11/11!, which is negative if x is positive but positive if x is negative. Hence it is not prudent to refer to the error as being "positive" or "negative". In short, I agree with Joel on this, although I will say that Zedshort is correct that the next error terms are not bigger in magnitude, because of Taylor's theorem.--Jasper Deng (talk) 19:47, 13 July 2015 (UTC)
Yes, I see that I assumed the value of x to be positive. But the result is the same when dealing with negative values of x: the next term is opposite in sign to the x^9/9! term, further terms are diminishingly small, and hence the x^9/9! term provides an upper bound on the error introduced by the truncation. I will not apologize for being direct and to the point with someone, regardless of who they are. Zedshort (talk) 20:04, 13 July 2015 (UTC)
You would then want to say that the next term is opposite in sign, or something along those lines. But I don't think it's necessary. Whatever its sign, the validity of the truncation is guaranteed by Taylor's theorem. All the terms of the exponential function's Taylor series are positive for positive x, but that doesn't change anything. In other words, I'd not want to imply to the reader that the sign of the terms has anything to do with it.--Jasper Deng (talk) 20:15, 13 July 2015 (UTC)
Jasper Deng is right that the correct explanation is by Taylor's theorem. Zedshort's attempted version is not salvageable: in addition to the error about the sign, it is simply not true that the contributions from subsequent terms of the Taylor series get smaller and smaller in absolute value. At x = 12, the term x^9/9! is about 14000 and the term x^11/11! is about 18000. The error of the 7th-order polynomial at x = 12 is about 5000, but the fact that 5000 < 14000 does not follow from anything written by Zedshort.
Even if the argument weren't wrong in all respects, it is unnecessary where placed and distracts from the point of the section. --JBL (talk) 20:53, 13 July 2015 (UTC)
That, however, is incorrect in general. Please see the article on Taylor's theorem. For the series to converge, subsequent terms must tend to zero. Therefore, for a given x, I can always find a point at which the error introduced by subsequent terms is less than any given positive error. It may not be the 7th order; it could be a higher order. But at some point, it is true that the subsequent terms' contributions tend to zero.--Jasper Deng (talk) 21:01, 13 July 2015 (UTC)
Yes, of course for fixed x they eventually go to zero (and for the sine it is even true that they eventually go monotonically to zero in absolute value, which need not be true in general), but there is no way to use that to rescue the edits in question. --JBL (talk) 21:15, 13 July 2015 (UTC)
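(Editorial note: the figures quoted above are easy to reproduce; here is a small Python sketch using only the standard library, not part of the original discussion.)

    import math

    x = 12.0

    # Magnitudes of the omitted terms x^9/9! and x^11/11! of the sine series.
    for n in (9, 11):
        print(f"x^{n}/{n}! = {x**n / math.factorial(n):.0f}")

    # Degree-7 Maclaurin polynomial of sin, evaluated at x, and its error.
    p7 = x - x**3 / math.factorial(3) + x**5 / math.factorial(5) - x**7 / math.factorial(7)
    print(f"error of the degree-7 polynomial at x = 12: {abs(math.sin(x) - p7):.0f}")

This prints roughly 14219, 18614, and 5311, consistent with the rounded figures of 14000, 18000, and 5000 above.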

I agree with the revert. This edit gives three false impressions: (1) that more terms in the Taylor series always lead to a better approximation, (2) that the error in the Taylor approximation is never greater than the next term of the Taylor series, and (3) that the sign of the next term in the Taylor series is relevant to reckoning the error. (Regarding the second item, in case it is not already clear, Taylor's theorem is what gives the actual form of the error, as well as estimates of it. The fact that x^9/9! is the next term of the Taylor series for sin(x) is only of peripheral relevance.) Reinforcing these misconceptions works against what the section tries to achieve, which is to emphasize the problems that can arise when applying the Taylor approximation outside the interval of convergence. Sławomir Biały (talk) 22:23, 13 July 2015 (UTC)
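(Editorial note: to spell out the Taylor's theorem estimate referred to in item (2), here is a standard LaTeX sketch for the degree-7 Maclaurin polynomial of sin, not part of the original discussion. Because the degree-8 coefficient of the sine series is zero, the degree-7 and degree-8 polynomials coincide, so the Lagrange form of the remainder gives, for some \xi between 0 and x,)

    \sin x - \left(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!}\right)
        = \frac{\cos\xi}{9!}\,x^9,
    \qquad \text{hence} \qquad
    |R_7(x)| \le \frac{|x|^9}{9!}.

So the magnitude of the next term does bound the error here, but via Taylor's theorem and the vanishing even-degree terms, not via the signs of the omitted terms; at x = 12 the bound is about 14219 against an actual error of about 5311.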

I've reverted the edit as it seems that consensus is pretty much against the edit in question.--Jasper Deng (talk) 23:12, 13 July 2015 (UTC)

Complete Sets[edit]

The Taylor series predates the ideas of complete basis sets, the loose ends of which were not fixed until 1905 with the square integrability condition. The TS is merely a statement of the completeness of the polynomial, where each term of the sum is regarded as an element of a complete (but not necessarily orthonormal) set. If a function f(x) is written as the sum of an orthonormal polynomial set, the nth derivative of f appearing in the TS simply extracts the nth coefficient of the orthonormal sum. (talk) 09:39, 27 June 2016 (UTC)

This is not true. The closest statement to what you are articulating is the Weierstrass approximation theorem, which states that continuous functions on compact sets can be uniformly approximated by polynomials, but the approximating polynomials need not be truncations of the Taylor series. Indeed, it is easy to construct examples of functions whose Taylor series does not converge to the function, although these functions will be approximated by other sequences of polynomials. The question of approximating in L^2 is qualitatively very different. There are families of orthogonal polynomials that give series expansions of functions, but in general there is no relationship between the series expansions that one gets in this way and the coefficients of the Taylor series. Sławomir Biały 00:36, 28 June 2016 (UTC)
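(Editorial note: a standard concrete example of a smooth function whose Taylor series does not converge to the function, added here for reference.)

    f(x) = \begin{cases} e^{-1/x^2}, & x \neq 0, \\ 0, & x = 0. \end{cases}

Every derivative of f at 0 equals 0, so its Maclaurin series is identically zero; it converges everywhere but agrees with f only at x = 0. By the Weierstrass approximation theorem, f can nonetheless be uniformly approximated on any compact interval by other sequences of polynomials, which is the distinction drawn above.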