Talk:Taylor series

From Wikipedia, the free encyclopedia
Taylor series has been listed as one of the Mathematics good articles under the good article criteria. If you can improve it further, please do so. If it no longer meets these criteria, you can reassess it.
May 12, 2011 Good article nominee Listed
Wikipedia Version 1.0 Editorial Team / v0.7 (Rated GA-class)
This article has been reviewed by the Version 1.0 Editorial Team.
 GA  This article has been rated as GA-Class on the quality scale.
 ???  This article has not yet received a rating on the importance scale.
This article has been selected for Version 0.7 and subsequent release versions of Wikipedia.
WikiProject Mathematics (Rated GA-class, Top-importance)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Mathematics rating: GA-Class, Top-importance. Field: Analysis.
One of the 500 most frequently viewed mathematics articles.

Multivariate Taylor Series

Why was the section on `multivariate Taylor series' removed? (Compare the version of 17:53, 2006-09-20 with that of 17:55, 2006-09-20.) I am going to add it again, unless someone provides a good reason not to. -- Pouya Tafti 14:32, 5 October 2006 (UTC)

I agree with Pouya as well! There's no separate article on multivariate Taylor series on Wikipedia, so it should be mentioned here. Lavaka 22:22, 17 January 2007 (UTC)
I have recovered the section titled `Taylor series for several variables' from the edition of 2006-09-20, 17:53. Please check for possible inaccuracies. —Pouya D. Tafti 10:37, 14 March 2007 (UTC)

The notation used in the multivariate series, e.g. fxy is not defined. Ma-Ma-Max Headroom (talk) 08:46, 9 February 2008 (UTC)

Can someone please check that the formula given for the multivariate Taylor series is correct? It doesn't agree with the one given in the Wolfram MathWorld article. Specifically, should n_1!\cdots n_d! in the denominator of the right-hand side of the first equation not be (n_1+\cdots+n_d)!? As an example, consider the Taylor series for f(x,y)=xy centered at (0,0). As it stands, the formula would imply that the Taylor series is 2xy/(1!1!)=2xy instead of 2xy/2!=xy. Note that the two-variable example given in this same section produces the second (correct, I believe) series, contradicting the general formula at the start of the section. Ben E. Whitney 19:14, 23 July 2015 (UTC)

It's correct in both. Using your function f(x_1,x_2)=x_1x_2 and the conventions of the article, we have
\begin{align}f(x_1,x_2)&=\sum_{n_1=0}^\infty\sum_{n_2=0}^\infty \frac{1}{n_1!\,n_2!}\frac{\partial^{n_1+n_2}f(0,0)}{\partial x_1^{n_1}\partial x_2^{n_2}}x_1^{n_1}x_2^{n_2}\\
&=\frac{1}{1!\,1!}\frac{\partial^2 f(0,0)}{\partial x_1\partial x_2}x_1x_2=x_1x_2,\end{align}
and with the symmetric-sum convention
\begin{align}x_1x_2=f(x_1,x_2) &= f(0,0) + \frac{1}{2!}\sum_{i=1}^2\sum_{j=1}^2 \frac{\partial^2f(0,0)}{\partial x_i\partial x_j}x_ix_j\\
&=f(0,0) + \frac{1}{2!}\left(\frac{\partial^2f(0,0)}{\partial x_1\partial x_1}x_1^2+\frac{\partial^2f(0,0)}{\partial x_2\partial x_1}x_2x_1 + \frac{\partial^2f(0,0)}{\partial x_1\partial x_2}x_1x_2 + \frac{\partial^2f(0,0)}{\partial x_2\partial x_2}x_2^2\right)\\
&=0 + \frac{1}{2}(0 + x_2x_1 + x_1x_2 + 0)\\
&= x_1x_2\end{align}
as required. Sławomir Biały (talk) 21:08, 23 July 2015 (UTC)
Oh, I see! I think I'd mentally added a factor for the different ways the mixed derivatives could be ordered without realizing it. Should have written it out. Thank you! Ben E. Whitney 15:56, 24 July 2015 (UTC)
No worries. This seems to be a perennial point of misunderstanding. It might be worthwhile trying to clarify this in the article. Sławomir Biały (talk) 16:02, 24 July 2015 (UTC)
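The resolution of this thread can be checked numerically. Below is a small sketch (ours, for illustration; the function names are not from the discussion) comparing the two conventions for f(x1, x2) = x1·x2 about the origin: the multi-index form with denominator n1!·n2!, and the symmetric double sum with denominator 2!, which counts both orderings of the mixed partial.

```python
import math

# f(x1, x2) = x1 * x2 about (0, 0).
# Only the mixed first partials are nonzero there: d^2 f / dx1 dx2 = d^2 f / dx2 dx1 = 1.

def multi_index_term(n1, n2, x1, x2):
    """One term of the multi-index form: (1/(n1! n2!)) d^{n1+n2}f(0,0) x1^n1 x2^n2."""
    deriv = 1 if (n1, n2) == (1, 1) else 0  # the only surviving derivative for f = x1*x2
    return deriv * x1**n1 * x2**n2 / (math.factorial(n1) * math.factorial(n2))

def symmetric_sum_order2(x):
    """Second-order term of the symmetric form: (1/2!) sum_{i,j} d^2f/dxi dxj * xi * xj."""
    hess = [[0, 1], [1, 0]]  # Hessian of f at the origin
    return sum(hess[i][j] * x[i] * x[j] for i in range(2) for j in range(2)) / math.factorial(2)

x1, x2 = 0.7, -1.3
multi = sum(multi_index_term(n1, n2, x1, x2) for n1 in range(3) for n2 in range(3))
sym = symmetric_sum_order2((x1, x2))
print(multi, sym, x1 * x2)  # all three agree: the 2! cancels the two orderings of the mixed partial
```

Both conventions reproduce x1·x2 exactly, which is the point Sławomir makes above: the 1/2! in the symmetric form is offset by the mixed partial appearing twice in the double sum.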

Madhava of Sangamagrama

Actually, I think Archimedes should be accredited with the first use of the Taylor series, since he used the same method as Madhava: using an infinite summation to achieve a finite trigonometric result. Liu Hui independently employed a similar method 400 years later, but still about 800 years prior to Madhava's work, although the Wikipedia article on Liu Hui does not reflect this.

In fact, it would have been quite easy for them to perform the same task as Madhava. It isn't difficult to square an arc (albeit in an infinite number of steps) using simple Euclidean geometry. I believe that Archimedes and later Liu Hui were aware of this. Last time I heard about it was at a History and Philosophy of Mathematics conference in 1998 at the Center for Philosophy of Science, University of Pittsburgh. Anyone care to dredge up a reference?

Taylor series with Lagrange and Peano remainders

Why is there nothing about these two remainders in the article?

Suggested summation with 0^0 undefined

f(a)+\frac {f'(a)}{1!} (x-a)+ \frac{f''(a)}{2!} (x-a)^2+\frac{f^{(3)}(a)}{3!}(x-a)^3+ \cdots.

 = \sum_{n=0} ^ {\infty} \frac {f^{(n)}(a)}{n!} \, (x-a)^{n}

 = f(a)+ \sum_{n=1} ^ {\infty} \frac {f^{(n)}(a)}{n!} \, (x-a)^{n}

The latter representation has the advantage of not requiring 0^0 to be defined. Some indication of the controversy over the definition of 0^0 can be found at Exponentiation under the heading "History of differing points of view."

--Danchristensen (talk) 04:29, 12 May 2015 (UTC)

In this setting x^0=1 even if x is zero. See Exponentiation#Zero to the power zero for explanation. On this point, it is important that the article should agree with most sources on the subject. The article does explain what 0! and x^0 are. By inventing our own way of writing power series, we would have the unintended effect of making the article more confusing vis-à-vis most sources, instead of less confusing. So I disagree strongly with the proposed change, unless it can be shown to be a common convention in mathematics sources. Sławomir Biały (talk) 11:17, 12 May 2015 (UTC)
0^0 is usually left undefined on the reals. --Danchristensen (talk) 14:01, 12 May 2015 (UTC)
That's a common misconception. For power functions associated with integer exponents, exponentiation is defined inductively, by multiplying n times. For x^0, this is an empty product, so equal to unity. It's true that if we are looking at the real exponential, then x^r is defined as  exp (r\log x). But that actually refers to a different function. For more details, please see the link I provided. It is completely standard in this setting to take 0^0=1. See the references included in the article. Sławomir Biały (talk) 14:37, 12 May 2015 (UTC)
"A common misconception?" Really? --Danchristensen (talk) 15:50, 12 May 2015 (UTC)
This discussion page is not a forum for general debate. If you have sources you want us to consider, please present them. Otherwise, I regard this issue as settled, per the sources cited in the article. Sławomir Biały (talk) 16:15, 12 May 2015 (UTC)
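The inductive definition mentioned above (exponentiation by repeated multiplication, with x^0 the empty product) can be sketched in a few lines; this is an illustrative sketch of ours, not part of the discussion, and the function name is hypothetical.

```python
# Integer exponentiation defined by induction: start from the empty product (1)
# and multiply by x n times. Under this definition x^0 = 1 for every x, 0 included.

def int_pow(x, n):
    """x**n for integer n >= 0, built up as a product of n copies of x."""
    result = 1  # the empty product
    for _ in range(n):
        result *= x
    return result

print(int_pow(0, 0))  # 1: the loop body never runs, so the empty product survives
print(int_pow(0, 3))  # 0
print(0 ** 0)         # 1: Python's built-in operator follows the same convention
```

Note that Python's own `**` operator agrees with this convention for integer exponents, which is consistent with the power-series usage defended above.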

After this edit, the proposed text now includes a passage "The latter representation having the advantage of not having to define 0^0." According to whom is this an advantage? What secondary sources make this assertion? Sławomir Biały (talk) 11:57, 13 May 2015 (UTC)

As pointed out in the original article, the summation there depends on defining 0^0=1, a controversial point for many. I present a version that does not depend on any particular value for 0^0. --Danchristensen (talk) 15:13, 13 May 2015 (UTC)
You have still not presented any textual evidence that x^0=1 is remotely controversial in the setting of power series and polynomials. And the consensus among editors and sources alike appears to contradict this viewpoint. Sławomir Biały (talk) 16:08, 13 May 2015 (UTC)
Some indication of the controversy can be found at Exponentiation under the heading "History of differing points of view." --Danchristensen (talk) 17:08, 13 May 2015 (UTC)
Taylor series and polynomials do not appear to be listed there. Sławomir Biały (talk) 17:20, 13 May 2015 (UTC)
Key points: "The debate over the definition of 0^0 has been going on at least since the early 19th century... Some argue that the best value for 0^0 depends on context, and hence that defining it once and for all is problematic. According to Benson (1999), 'The choice whether to define 0^0 is based on convenience, not on correctness.'... [T]here are textbooks that refrain from defining 0^0." --Danchristensen (talk) 17:40, 13 May 2015 (UTC)
Yes, and abundant reliable sources make the decision in the context of Taylor series and polynomials to define 0^0 = 1, and this article is rightly written to reflect these sources. Whether you happen to think this consensus (of both reliable sources and editors of this page) is morally right or not is totally irrelevant. --JBL (talk) 17:51, 13 May 2015 (UTC)
Morally right??? Come now. As we see at Exponentiation, there is some controversy -- opposing camps, if you will -- on the matter of 0^0. This article would be more complete with at least a nod in the direction of, if not an endorsement of, the "other camp" in this case. The summation I suggested is not anything radical. It follows directly from the original summation. It simply does not depend on any particular value of 0^0. --Danchristensen (talk) 18:33, 13 May 2015 (UTC)
See: "Technically undefined..." --Danchristensen (talk) 18:56, 13 May 2015 (UTC)
This is obviously not a reliable source, and also does not support your view that we should write Taylor series in a nonstandard way. If you do not have a reliable source, there is no point in continuing this conversation. (Actually there is no point whether or not you have a reliable source because it is totally clear that there is not going to be consensus to make the change that you want, but it is double-extra pointless without even a single reliable source to reference.) --JBL (talk) 19:53, 13 May 2015 (UTC)
P.S. If you want someone to explain "morally right" or why the history section of a different article is irrelevant, please ask on someone's user talk page instead of continuing to extend and multiply these repetitive discussions on article talk pages. --JBL (talk) 19:53, 13 May 2015 (UTC)
It shows how one author worked around 0^0 being undefined for a power series -- using a similar idea to that I proposed for Taylor series. See link to textbook at bottom of page. The relevant passage is an excerpt. --Danchristensen (talk) 20:05, 13 May 2015 (UTC)
Here are some sources that do not make this special distinction for Taylor series: G. H. Hardy, "A course of pure mathematics", Walter Rudin "Principles of mathematical analysis", Robert G. Bartle "Elements of real analysis", Lars Ahlfors "Complex analysis", Antoni Zygmund "Measure and integral", George Polya and Gabor Szego "Problems and theorems in analysis", Erwin Kreyszig "Advanced engineering mathematics", Richard Courant and Fritz John, "Differential and integral calculus", Jerrold Marsden and Alan Weinstein, "Calculus", Serge Lang, "A first course in calculus", Michael Spivak "Calculus", George B. Thomas "Calculus", Kenneth A. Ross "Elementary Analysis: The Theory of Calculus", Elias Stein "Complex analysis". I've only included sources by mathematicians notable enough to have their own Wikipedia page. I assume we should go with the preponderance of sources on this issue, per WP:WEIGHT. Sławomir Biały (talk) 00:34, 14 May 2015 (UTC)
It would be interesting to hear how they justified their positions, if they did. Was it correctness, or, as Benson (1999) put it, simply convenience. Or doesn't it matter? --Danchristensen (talk) 02:49, 14 May 2015 (UTC)
It doesn't matter. --JBL (talk) 04:03, 14 May 2015 (UTC)
Agree, it doesn't matter. We just go by reliable sources, not our own feelings about their correctness. Sławomir Biały (talk) 11:18, 14 May 2015 (UTC)
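A small numerical sketch (ours, for illustration; the function names are not from the discussion) of the equivalence at issue: the sum starting at n = 0 and the form with f(a) pulled out give the same partial sums, shown here for f(x) = e^x about a = 0, where every derivative equals 1.

```python
import math

def taylor_from_zero(x, terms):
    # n = 0 term relies on the convention x**0 == 1 (which Python also uses: 0**0 == 1)
    return sum(x**n / math.factorial(n) for n in range(terms))

def taylor_split_first(x, terms):
    # f(a) = 1 pulled out; the sum starts at n = 1, so no 0**0 ever arises
    return 1 + sum(x**n / math.factorial(n) for n in range(1, terms))

for x in (0.0, 1.0, -2.5):
    assert math.isclose(taylor_from_zero(x, 20), taylor_split_first(x, 20),
                        rel_tol=1e-12, abs_tol=1e-12)
print(taylor_from_zero(1.0, 20))  # approximately e = 2.71828...
```

The two forms are algebraically identical; the only difference is whether the constant term is written inside the sum (using x^0 = 1) or outside it.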

Nicolas Bourbaki, in "Algèbre", Tome III, writes: "L'unique monôme de degré 0 est l'élément unité de A[(X_i)_{i\in I}]; on l'identifie souvent à l'élément unité 1 de A" ("The unique monomial of degree 0 is the unit element of A[(X_i)_{i\in I}]; it is often identified with the unit element 1 of A"). Sławomir Biały (talk) 11:35, 14 May 2015 (UTC)

Also from Bourbaki Algebra p. 23 (which omits the "often", and deals with exponential notation on monoids very clearly):
Let E be a monoid written multiplicatively. For n ∈ Z the notation n ⊤ x is replaced by x^n. We have the relations
x^(m+n) = x^m · x^n
x^0 = 1
x^1 = x
(x^m)^n = x^(mn)
and also (xy)^n = x^n y^n if x and y commute.
Quondum 14:06, 14 May 2015 (UTC)
(Shouldn't that be x^0 = e, where e is the multiplicative identity of E?) Have they not simply defined x^0 = e? Note that the natural numbers have two identity elements: 0 for addition, 1 for multiplication. --Danchristensen (talk) 17:58, 14 May 2015 (UTC)
I'm simply quoting exactly from Bourbaki (English translation). They are dealing with a "monoid written multiplicatively", where they seem to prefer denoting the identity as 1. Just before this, they give the additively written version with 0. And before that they use the agnostic notation using the operator ⊤, and there they use the notation e. —Quondum 18:41, 14 May 2015 (UTC)
The problem with normal exponentiation on N is that you have two inter-related monoids on the same set (a semi-ring?). Powers of the multiplicative identity 1 are not the problem. The problem is with powers of the additive identity 0. It's a completely different structure. --Danchristensen (talk) 21:09, 14 May 2015 (UTC)
How so? With an operation thought of as addition, we use an additive notation, and change the terminology as well as the symbols. We could call it "integer scalar multiplication" or whatever instead of "exponentiation"; I'd have to see what Bourbaki calls it (ref. not with me at the moment). Instead of x^n, we write n.x, meaning x+...+x (n copies of x). The entire theory of exponentiation still applies. —Quondum 22:16, 14 May 2015 (UTC)
Please: this is not a forum! --JBL (talk) 22:42, 14 May 2015 (UTC)

Confused addition

An editor recently added an explanation " the next term would be smaller still and a negative number, hence the term x^9/9! can be used to approximate the value of the terms left out of the approximation. " for the error estimate in the series of sin(x). This explanation is just nonsense: the next term might be positive or negative (depending on the sign of x), and the sign of that term together with the magnitude of the next term (which might or might not be smaller, depending on the magnitude of x) is simply not enough information to make the desired conclusion, even in the real case. More importantly, it is simply not necessary to justify this claim here, and it distracts from the larger point being made in this section. --JBL (talk) 18:41, 13 July 2015 (UTC)

As I expected, your explanation is very poor, and you do need to provide one if you are going to revert a good edit. Yes, I see how the signs alternate in this particular example, and I also see that each term, in this particular example, is increasingly small. So, what precisely is your objection? I took the original explanation and expanded it just a little, saying that further terms are small and the next term in particular is negative, hence the term x^9/9! is a good approximation of the error introduced by the truncation. You need to do a number of things. First you need to learn to read; second you need to learn to respect others' edits. If an edit is completely off the mark, it should be deleted. If the edit is pretty close, then you should consider editing the edit to improve things just that much more. But if you are of the opinion that whenever one single thing is wrong with an edit, deleting it is the answer, we could extrapolate that attitude to the whole of Wikipedia, and in the end we would have nothing left, as Wikipedia is shot full of errors. My edit was not completely off the mark, hence it should be left and possibly improved. Please read the original material and then read my edits. Finally, if you are squatting on this article in the mistaken belief that you should be the arbiter of the "truth", you need to move to one side. I did not start a reversion war, you did. Thank you Zedshort (talk) 19:37, 13 July 2015 (UTC)
Before I chime in on this, both of you have to stop edit warring over this (and both of you know that). @Zedshort: in particular I think your comment above is unnecessarily confrontational.
Now, as for the content: it is true that Joel's objections over the sign of the error term are valid. The next term is -\frac{x^{11}}{11!}, which is negative if x is positive but positive if x is negative. Hence it is not prudent to refer to the error being "positive" or negative. In short I agree with Joel on this, although I will say that Zedshort is correct that the next error terms are not bigger in magnitude, because of Taylor's theorem.--Jasper Deng (talk) 19:47, 13 July 2015 (UTC)
Yes, I see that I assumed the value of x to be positive. But the result is the same when dealing with negative values of x, as the next term is opposite in sign to x^9/9!; further terms are diminishingly small, and hence the x^9/9! term provides an upper bound on the error introduced by the truncation. I will not apologize for being direct and to the point with someone, regardless of who they are. Zedshort (talk) 20:04, 13 July 2015 (UTC)
You would want to say then that the next term is opposite in sign, or something along those lines. But I don't think it's necessary. Whatever its sign, the validity of the truncation is guaranteed by Taylor's theorem. All the terms of the exponential function's Taylor series are positive for positive x, but that doesn't change anything. In other words, I'd not want to imply to the reader that the signs of the terms have anything to do with it.--Jasper Deng (talk) 20:15, 13 July 2015 (UTC)
Jasper Deng is right that the correct explanation is by Taylor's theorem. Zedshort's attempted version is not salvageable: in addition to the error about the sign, it is simply not true that the contributions from subsequent terms of the Taylor series get smaller and smaller in absolute value. At x = 12, the term x^9/9! is about 14000 and the term x^11/11! is about 18000. The error of the 7th-order polynomial at x = 12 is about 5000, but the fact that 5000 < 14000 does not follow from anything written by Zedshort.
Even if the argument weren't wrong in all respects, it is unnecessary where placed and distracts from the point of the section. --JBL (talk) 20:53, 13 July 2015 (UTC)
That however is incorrect in general. Please see the article on Taylor's theorem. For the series to converge subsequent terms must tend to zero. Therefore I can always find a point at which the error introduced by subsequent terms is less than any positive given error, for a given x. It may not be the 7th-order. It could be higher-order. But at some point, it is true that subsequent terms' contributions tend to zero.--Jasper Deng (talk) 21:01, 13 July 2015 (UTC)
Yes, of course for fixed x they eventually go to zero (and for the sine it is even true that they eventually go monotonically to zero in absolute value, which need not be true in general), but there is no way to use that to rescue the edits in question. --JBL (talk) 21:15, 13 July 2015 (UTC)

I agree with the revert. This edit gives three false impressions: (1) that more terms in the Taylor series always leads to a better approximation, (2) that the error in the Taylor approximation is never greater than the next term of the Taylor series, and (3) that the sign of the next term in the Taylor series is relevant to reckoning the error. (Regarding the second item, in case it is not already clear, Taylor's theorem is what gives the actual form of the error, as well as estimates of it. The fact that x^9/9! is the next term of the Taylor series for sin(x) is only of peripheral relevance.) Reinforcing these misconceptions works against what the section tries to achieve, which is to emphasize the problems that can arise when applying the Taylor approximation outside the interval of convergence. Sławomir Biały (talk) 22:23, 13 July 2015 (UTC)
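The figures JBL quotes are easy to verify numerically. The following sketch (ours, for illustration) computes the degree-7 Taylor polynomial of sin at x = 12 and the two subsequent series terms, confirming that the "next term" is far larger than the actual error and that the term after it is larger still, so the terms are not shrinking at this x.

```python
import math

# Degree-7 Taylor polynomial of sin about 0, evaluated at x = 12,
# plus the magnitudes of the next two terms of the series.
x = 12.0
p7 = x - x**3 / math.factorial(3) + x**5 / math.factorial(5) - x**7 / math.factorial(7)
error = abs(math.sin(x) - p7)        # actual truncation error
t9 = x**9 / math.factorial(9)        # magnitude of the "next term"
t11 = x**11 / math.factorial(11)     # magnitude of the term after that

print(round(error), round(t9), round(t11))  # roughly 5311, 14219, 18614
```

So at x = 12 the error (about 5300) is much smaller than the next term (about 14000), and the term after it is bigger yet (about 18600): outside a neighborhood of the expansion point, "bound the error by the next term" fails, and Taylor's theorem is the correct tool, as stated above.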

I've reverted the edit as it seems that consensus is pretty much against the edit in question.--Jasper Deng (talk) 23:12, 13 July 2015 (UTC)