# Talk:Taylor series

Taylor series has been listed as one of the Mathematics good articles under the good article criteria. If you can improve it further, please do so. If it no longer meets these criteria, you can reassess it.
May 12, 2011: Good article nominee (listed)
Wikipedia Version 1.0 Editorial Team / v0.7
This article has been selected for Version 0.7 and subsequent release versions of Wikipedia.
WikiProject Mathematics (Rated GA-class, Top-importance)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Mathematics rating: GA-Class, Top-importance. Field: Analysis.
One of the 500 most frequently viewed mathematics articles.

## Multivariate Taylor Series

Why was the section on multivariate Taylor series removed by 203.200.95.130? (Compare the version of 17:53, 2006-09-20 vs. that of 17:55, 2006-09-20.) I am going to add it again, unless someone provides a good reason not to. -- Pouya Tafti 14:32, 5 October 2006 (UTC)

I agree with Pouya as well! There's no separate article on multivariate Taylor series on Wikipedia, so it should be mentioned here. Lavaka 22:22, 17 January 2007 (UTC)
I have recovered the section titled 'Taylor series for several variables' from the version of 2006-09-20, 17:53. Please check for possible inaccuracies. —Pouya D. Tafti 10:37, 14 March 2007 (UTC)

The notation used in the multivariate series, e.g. $f_{xy}$, is not defined. Ma-Ma-Max Headroom (talk) 08:46, 9 February 2008 (UTC)

Can someone please check that the formula given for the multivariate Taylor series is correct? It doesn't agree with the one given on the Wolfram MathWorld article. Specifically, should $n_{1}!\cdots n_{d}!$ in the denominator of the right-hand side of the first equation not be $(n_{1}+\cdots+n_{d})!$? As an example, consider the Taylor series for $f(x,y)=xy$ centered around $(0,0)$. As it is, the formula would imply that the Taylor series would be $2xy/(1!1!)=2xy$ instead of $2xy/2!=xy$. Note that the two-variable example given in this same section produces the second (correct, I believe) series, contradicting the general formula at the start of the section. Ben E. Whitney 19:14, 23 July 2015 (UTC)

It's correct in both. Using your function $f(x_1,x_2)=x_1x_2$ and the conventions of the article, we have
\begin{align}f(x_1,x_2)&=\sum_{n_1=0}^\infty\sum_{n_2=0}^\infty \frac{1}{n_1!\,n_2!}\frac{\partial^{n_1+n_2}f(0,0)}{\partial x_1^{n_1}\partial x_2^{n_2}}x_1^{n_1}x_2^{n_2}\\ &=\frac{1}{1!\,1!}\frac{\partial^2f(0,0)}{\partial x_1\partial x_2}x_1x_2\\ &=x_1x_2 \end{align}
Also,
\begin{align} x_1x_2=f(x_1,x_2) &= f(0,0) + \frac{1}{2!}\sum_{i=1}^2\sum_{j=1}^2 \frac{\partial^2f(0,0)}{\partial x_i\partial x_j}x_ix_j\\ &=f(0,0) + \frac{1}{2!}\left(\frac{\partial^2f(0,0)}{\partial x_1\partial x_1}x_1^2+\frac{\partial^2f(0,0)}{\partial x_2\partial x_1}x_2x_1 + \frac{\partial^2f(0,0)}{\partial x_1\partial x_2}x_1x_2 + \frac{\partial^2f(0,0)}{\partial x_2\partial x_2}x_2^2\right)\\ &=0 + \frac{1}{2}(0 + x_2x_1 + x_1x_2 + 0)\\ &= x_1x_2 \end{align}
as required. Sławomir Biały (talk) 21:08, 23 July 2015 (UTC)
Oh, I see! I think I'd mentally added a factor for the different ways the mixed derivatives could be ordered without realizing it. Should have written it out. Thank you! Ben E. Whitney 15:56, 24 July 2015 (UTC)
No worries. This seems to be a perennial point of misunderstanding. It might be worthwhile trying to clarify this in the article. Sławomir Biały (talk) 16:02, 24 July 2015 (UTC)
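Since this keeps coming up, the convention can also be checked mechanically. A quick sympy sketch (the choice of $f(x_1,x_2)=x_1x_2$ and the truncation range are just illustrative):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1 * x2

# Two-variable Maclaurin series using the article's denominator n1! * n2!
# (not (n1 + n2)!): sum of d^(n1+n2)f/dx1^n1 dx2^n2 at (0,0),
# divided by n1! * n2!, times x1^n1 * x2^n2.
series = sum(
    sp.diff(f, x1, n1, x2, n2).subs({x1: 0, x2: 0})
    / (sp.factorial(n1) * sp.factorial(n2)) * x1**n1 * x2**n2
    for n1 in range(3) for n2 in range(3)
)
print(sp.simplify(series - f))  # 0, so the 1/(n1! n2!) convention reproduces f
```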

In the example section following "The Maclaurin series for (1 − x)^-1", it is stated "so the Taylor series for x^-1 at a = 1 is". I'm not sure it is obvious why the Taylor series for x^-1 logically follows from the Maclaurin series given. Also, right after that example, how does the integral of 1 equal -x? --Knoxjeff (talk) 19:04, 26 March 2012 (UTC)

The Taylor series expansion for arccos is notably missing. I tried simplifying it myself. Maybe someone else can figure it out and add it? --Jlenthe 01:08, 9 October 2006 (UTC)

How did Maclaurin publish his special case of the Taylor theorem in the 17th century (i.e. 1600's) if he was born in 1698? I suspect this is a mistake.

--For the "List of Taylor series" I would like to have the first few terms of each series written out for quick reference. I could do it myself, but I don't want to mess anything up.

Here's a start: I added the first few terms of tan x. 24.118.99.41 06:58, 24 April 2006 (UTC)

"Note that there are examples of infinitely often differentiable functions f(x) whose Taylor series converge but are not equal to f(x). For instance, all the derivatives of f(x) = exp(-1/x²) are zero at x = 0, so the Taylor series of f(x) is zero, and its radius of convergence is infinite, even though the function most definitely is not zero."

f(x) has no Taylor series at a = 0, since f(0) is not defined. You have to state explicitly that you've defined f(x) = exp(-1/x²) for x not equal to 0 and f(0) = 0. This is merely lim[x->0] f(x), but stating it is a requirement for rigor.
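As an aside, the flatness of this function at 0 is easy to verify symbolically. A sympy sketch, with the extension f(0) = 0 understood as the limit (checking only the first few derivative orders, which is an arbitrary cutoff):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1/x**2)

# Every derivative of exp(-1/x^2) extends continuously to 0 at x = 0,
# so every Maclaurin coefficient of the extended function is 0:
print([sp.limit(sp.diff(f, x, n), x, 0) for n in range(4)])  # [0, 0, 0, 0]

# ...yet the function itself is nonzero away from 0:
print(f.subs(x, 1))  # exp(-1)
```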

Don't complain, fix! Wikipedia:Be bold in editing pages. -- Tim Starling 02:03 16 Jun 2003 (UTC)

By the way, would people call $\sum_{n=0}^{\infin} \sum_{p=0}^{\infin} \frac{\partial^n}{\partial x^n} \frac{\partial^p}{\partial y^p} \frac{f(a,b)}{n!p!} (x-a)^n(y-b)^p$ a Taylor series? Or does it have a name at all? If someone said something about a Taylor series of a 2D (or n D) function, I'd guess they meant something like that... Also, can the term analytic function refer to a 2D function? Κσυπ Cyp 19:01, 17 Oct 2003 (UTC)

1st question: sure, see e.g. http://www.csit.fsu.edu/~erlebach/course/numPDE1_f2001/norms.pdf - Patrick 19:51, 17 Oct 2003 (UTC)
I just shoved it quickly into the article, at the bottom. Κσυπ Cyp 21:49, 17 Oct 2003 (UTC)
Does the double sum in the 2D form (in that PDF file) mean that I first have to go through the whole range of "r" and then increase "s" by one and yet again go through the "r"s? I'm slightly confused (probably my fault). 83.8.149.147 18:42, 17 January 2007 (UTC)

Shouldn't the article include something about the "Taylor" for whom the series are named? If I knew, I'd do it myself Dukeofomnium 16:41, 5 Mar 2004 (UTC)

Good idea. Often a good way to start investigating such things is to click the "What links here" link on the article page. In this case, that reveals that the Brook Taylor page links to the article. -- Dominus 19:00, 5 Mar 2004 (UTC)

What is a "formulat"? it's on the last line. A typo or a word I'm unfamiliar with? Goodralph 16:28, 2 Apr 2004 (UTC)

Edited the geometric series to include cases where n might not start from zero. Stealth 17:22, Feb 19, 2005 (UTC)

In the Taylor series formula, what happens if x = a? When n = 0 we get 0 raised to the 0th power, which is undefined. The formula is correct if we define 0^0 = 1.
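This convention is also what makes a naive numerical evaluation of the series behave at x = a. A small Python sketch (the helper `taylor_partial_sum` and the choice of exp are my own, for illustration):

```python
import math

def taylor_partial_sum(derivs_at_a, a, x):
    """Evaluate sum_n derivs_at_a[n]/n! * (x - a)**n.
    Relies on Python's power-series convention 0**0 == 1 for the n = 0 term."""
    return sum(d / math.factorial(n) * (x - a)**n
               for n, d in enumerate(derivs_at_a))

# exp has every derivative equal to exp(a); at x = a the series
# collapses to its n = 0 term, f(a):
a = 1.0
derivs = [math.exp(a)] * 5
print(taylor_partial_sum(derivs, a, a))  # exp(1) ≈ 2.718...
```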

Taylor's series is also alternatively defined as follows (I'm using LaTeX notation here): f(x + h) = f(x) + h f^{\prime}(x) + (h^2/2!) f^{\prime \prime}(x + \theta h) for some 0 < \theta < 1. I'm new to this field, so I'm reading up on this a bit before I can add this to the article with suitable comments, but I didn't find this form mentioned on MathWorld or most pages in the top 10 Google hits for Taylor's series.

How can you use a Taylor series for integration? Also, could someone actually put the Maclaurin series in it so I can see how much it differs from the Taylor series?
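On the integration question, the standard trick is to integrate the series term by term. A hedged Python sketch (the example integrand sin(x)/x and the helper name are my own illustrative choices):

```python
import math

# Integrate sin(x)/x on [0, upper] by integrating its Maclaurin series
# term by term: sin(x)/x = sum_k (-1)^k x^(2k) / (2k+1)!, so the integral
# is sum_k (-1)^k upper^(2k+1) / ((2k+1) * (2k+1)!).
def si_series(upper, terms=10):
    return sum((-1)**k * upper**(2*k + 1)
               / ((2*k + 1) * math.factorial(2*k + 1))
               for k in range(terms))

print(si_series(1.0))  # ≈ 0.946083070367183, the sine integral Si(1)
```

This is useful precisely for integrands like sin(x)/x whose antiderivative has no elementary closed form.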

Madhava didn't invent the Taylor series, but he may have discovered the equivalent series expansion for a few limited cases, which is very different (but still impressive):

Madhava discovered the series equivalent to the Maclaurin expansions of sin x, cos x, and arctan x around 1400, which is over two hundred years before they were rediscovered in Europe. Details appear in a number of works written by his followers such as Mahajyanayana prakara which means Method of computing the great sines. In fact this work had been claimed by some historians such as Sarma (see for example [2]) to be by Madhava himself but this seems highly unlikely and it is now accepted by most historians to be a 16th century work by a follower of Madhava. This is discussed in detail in [4].

That quote is taken from [1], which is also the first external link listed on the Madhava article. I left in the credit to Madhava despite the fact that the above source calls into question whether he or one of his followers (two centuries later) discovered the aforementioned examples, because I'm in no position to weigh the validity of such claims. I think it's still significant enough to merit mention, since these examples from Indian mathematicians do seem to be the earliest known examples.

Actually, on re-reading I think the above quote is only calling into question the authorship of Mahajyanayana prakara and not the discoverer of the series expansions that work contains. In any event, it's still clearly a limited result and needs to be described as such. --Wclark 21:25, 30 September 2005 (UTC)

Sounds good to me. --Pranathi 22:32, 30 September 2005 (UTC)

## Error Estimates

I think something on the error estimates for a truncated series would be useful. That's exactly what I'm looking for right now. User:NeilenMarais

I'm looking for the exact same thing. 69.140.90.164 01:15, 10 January 2006 (UTC)

I'm studying for a test, and looking for the same thing too. For now I'll stop being lazy and read my textbook. Maybe I can add something on the subject later when I have time. Eumedemito 03:26, 21 October 2007 (UTC)

Oh, it's at Taylor's theorem (see "15 Where are the error bounds?"). Maybe there could be a mention to that section, though. —Preceding unsigned comment added by Eumedemito (talkcontribs) 03:38, 21 October 2007 (UTC)

Actually, I think Archimedes should be accredited with the first use of the Taylor series, since he used the same method as Madhava: using an infinite summation to achieve a finite trigonometric result. Liu Hui independently employed a similar method 400 years later, but still about 800 years prior to Madhava's work, although the Wikipedia article on Liu Hui does not reflect this.

In fact, it would have been quite easy for them to perform the same task as Madhava. It isn't difficult to square an arc (albeit in an infinite number of steps) using simple Euclidean geometry. I believe that Archimedes and later Liu Hui were aware of this. Last time I heard about it was at a History and Philosophy of Mathematics conference in 1998 at the Center for Philosophy of Science, University of Pittsburgh. Anyone care to dredge up a reference? 151.204.6.171

## Possible error in proof of multivariable form of Taylor's Theorem (Feb. 8, 2006)

Hi, I'm not positive I've found an error, so I'm just referring it to you for checking. In the proof of the multivariable form of Taylor's theorem, I believe that a parametrizing variable 't' has been assigned a false value. In the article it is assigned a value of zero, while I think that it should have the value 'one'. I'm only a lowly undergrad, so I'm most likely wrong here, but all the same, I'd appreciate it if you would check it out and let me know if it does indeed need correcting.

I'm also a little leery of the explanation for the coefficient of 'a' in the same proof: i!*C(i,alpha) does not equal 1/alpha! by my intuition, but instead equals 1/[alpha!*(i-alpha)!]. Perhaps my confusion stems from a sloppy transition from n=1 to n=N. This seems probable, but then it would need considerable re-writing.

http://en.wikipedia.org/wiki/Taylor%27s_theorem P.S. I'd really appreciate feedback, thanks! --student4life 04:06, 9 February 2006 (UTC) Also, I'm going to edit a few errors in the explicit Taylor series expansions of both ln(1+x) and e^x/sin x. student4life 22:03, 9 February 2006 (UTC)

Hey, I think you are right about this. I am also an undergraduate, and don't have that much experience in this area, but to my knowledge there is always only one solution to $|\alpha|=0$; but if one wanted the sum over multi-indices that have an absolute value of 1, then there would be many solutions, and it would actually need to be put in summation notation. --RETROFUTURE

## Taylor series of f

Is it just me, or should the Taylor series of f be written "T(f)" rather than "T(x)"? Fresheneesz 02:26, 6 March 2006 (UTC)

As a formal object, the Taylor series depends on the function f and the center a, so the notation T(f) would be better, or T(f,a) would be better still. However, on a more concrete level, the Taylor series should be viewed as a function, which I suppose the notation T(x) is meant to indicate. Notation is of course arbitrary. I am actually not aware of any standard notation for Taylor series, so I don't know whether there is a good precedent for using this slightly inaccurate notation. -lethe talk + 17:47, 31 March 2006 (UTC)
How about T(f,a;x) or T(f,a)(x)?130.234.198.85 23:20, 25 January 2007 (UTC)
I think the answer to that is no. But! I have another question. Is a Taylor series a special case of power series? Because if so that should be noted in the definition, not as a passing comment. Fresheneesz 11:02, 29 March 2006 (UTC)

By 'special case' of a power series, are you asking if more than one unique power series converges to the function just like the Taylor series? Because if I remember right, which I don't always do, the Taylor series is not the only type of power series that can converge to an arbitrary function. 19:34, 17 July 2006 (UTC)

The Taylor series can be based on the derivatives of the function at any value of x, not just 0 as in the Maclaurin series. Except for the trivial case of a constant function, all the series will be different and still represent the same function. -- Petri Krohn 22:48, 31 October 2006 (UTC)
For the record (this discussion is very old now), a Taylor series does not have to have nonzero radius of convergence, so it does not necessarily define any function outside its center; therefore writing T(x) in the general case is just nonsense. T(f) would be better, and T(f,a), to make the center explicit, better still. T(f,a)(x) on the other hand again mistakenly suggests a function of x (while it is just a formal power series in x). In fact everything about the Taylor series representing a function (including the opening sentence of this article) is wrong in the general (non-analytic smooth function) case. Marc van Leeuwen (talk) 07:32, 2 December 2013 (UTC)

## Infinitely differentiable

I have a feeling that a Taylor series doesn't have to come from an infinitely differentiable function. Such a series would simply end long before infinity, which isn't a crime as far as I'm concerned. I'll strike it from the definition if I get a confirmation.

A Taylor series doesn't have to converge, but if it does, then it is smooth. -lethe talk +

Second question: what does a do? When it says that the function is "around the point x = (something)", what does that mean? How does one choose a? These variables need to be better defined; I'll start a bit. Fresheneesz 10:46, 29 March 2006 (UTC)

The Taylor series of a function depends only on the function's value and derivatives at a single point (this point is a), whereas a better approximation to a function might look at the values of the function at many points. For some smooth functions, knowing its derivatives to all orders can tell you very little about the function. For these functions, the Taylor series may be a lousy approximation. For other functions (known as analytic), a Taylor series tells you everything there is to know about the function. As for how you choose a, well, it's up to you. For an analytic function, it doesn't matter, choose it anywhere you want (close to the range where you want to approximate the function is best). For a meromorphic function, the Taylor series only tells you about the function up to the nearest pole, so choose a between the poles where you want to know about the function. -lethe talk + 13:53, 29 March 2006 (UTC)

Fresheneesz, I disagree with your replacing real functions in Taylor's series with complex functions. If you want to be fully general, the function can take values in any Banach space, but that is beside the point. Let us stick to the most widespread case, that being functions of a real variable. I told you about this many times before: please do not try to be most concise, most general, etc. It harms the understanding of the article by people who don't know this stuff. Oleg Alexandrov (talk) 17:27, 29 March 2006 (UTC)

There is every reason to discuss Taylor series in the context of analytic functions of a complex variable, after first mentioning the case of real variables. This is a math article. Not only are analytic functions in mathematics far more often thought of in the complex domain (contrary to what Oleg Alexandrov says), but one's understanding of Taylor series is greatly enhanced by this way of thinking. Example: The Taylor series of f(x) = 1/(x^2 + 1) is f(x) = 1 - x^2 + x^4 - x^6 + . . ., which mysteriously converges only for |x| < 1 (and some boundary points). But if one considers the complex version f(z) = 1/(z^2 + 1) it is clear that this function has (pole) singularities precisely at z = ±i. Since each Taylor series' region of convergence is inside a circle of radius R in the complex plane (and possibly part of its boundary) for some 0 <= R <= oo, it is exactly these poles that explain why the radius of convergence R is equal to 1: because | ±i - 0 | = 1.
It would be appropriate to limit the discussion to real variables *only* if the article were meant to be understood at the lowest possible level and no higher. But that is not how Wikipedia math articles are designed. How about beginning with real variables, then stating that the natural milieu of Taylor series is the complex plane, and using complex variables from then on?Daqu 16:25, 13 April 2006 (UTC)
Hmmm... I guess it's true that analytic functions are usually thought of in the complex plane. After all, they can be uniquely extended to the entire complex plane. On the other hand, meromorphic functions (like your example) naturally live in different Riemann surfaces (not C). So instead of assuming that the variables are all complex, let's assume that they're valued in some Riemann surface. But actually, the article deals with functions of more than one variable. So we should really develop the theory of complex manifolds in the intro, and then throughout the article make all our variables understood in those terms. And we'll just close our eyes altogether to functions that are not even meromorphic. -lethe talk + 17:04, 13 April 2006 (UTC)
Certainly a worthwhile question. (But no, an analytic function need not have any extension to the entire plane; as long as it has a definition (locally by power series) in a connected open subset U of C then it is analytic in U.) No, there is no necessity to get into analytic continuation here, though it might be referenced. With no reference to analytic continuation, it is simply true and begging to be mentioned that every Taylor series about c in C converges in the interior of a circle about c of some radius R (with 0 <= R <= oo) in the complex plane, and for no z with |z - c| > R. I'm not sure about Taylor series in > 1 variable, but for one variable there is even an explicit formula for R in terms of the coefficients b_n of the series: R = 1/(lim sup_{n → oo} |b_n|^(1/n)). Daqu 05:43, 16 April 2006 (UTC)
Let me add an example. Without complex variables, it is difficult to understand the simplest phenomena with Taylor series:

Let f(x) = 1/(x^2 + 1). The Taylor series about x = 0 converges only in a disk of radius 1 about the center 0. But why? f(x) is perfectly well-behaved on all of the real numbers. The answer is that f(z) = 1/(z^2 + 1) becomes infinite as z -> i (or -i) in the complex numbers. Since Taylor series always converge in a circular disk in the complex plane, that disk's radius cannot exceed 1 (or it would include ±i, where the function is undefined!).Daqu 22:57, 13 July 2006 (UTC)
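This behavior is easy to see numerically. A small Python illustration (the helper names are mine, and 50 terms is an arbitrary cutoff):

```python
def partial_sum(x, n_terms):
    # Maclaurin series of 1/(1 + x^2): sum_k (-1)^k x^(2k)
    return sum((-1)**k * x**(2*k) for k in range(n_terms))

f = lambda x: 1.0 / (x*x + 1.0)

# Inside the radius of convergence (|x| < 1) partial sums approach f:
print(abs(partial_sum(0.5, 50) - f(0.5)))  # essentially zero

# Outside (|x| > 1) the terms grow and the partial sums blow up,
# even though f itself is perfectly well-behaved at x = 1.5:
print(abs(partial_sum(1.5, 50)))  # astronomically large
```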

## Mistake in the final example?

In the final example given exp(x)/sin(x), the Maclaurin series for each of the functions are used and we are told to compare powers of x to evaluate the unknown coefficients, however the coefficient of x^0 on the RHS is 1, while the coefficient of x^0 on the LHS is 0. This expansion is not a Taylor series. —Preceding unsigned comment added by 137.219.45.123 (talkcontribs)

You're right about that. e^x/sin x doesn't even have a Taylor series about zero, there's a pole there! No wonder the example was left unfinished. I've changed the sin to cos, so the Taylor series should be defined. Removed a few steps in the calculation (it was painfully explicit). Should be OK now, I hope? Thanks for pointing out this embarrassing error. -lethe talk + 13:52, 31 March 2006 (UTC)
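For the record, the difference between the two cases is easy to see with sympy (an illustrative sketch only):

```python
import sympy as sp

x = sp.symbols('x')

# exp(x)/sin(x) has a pole at 0, so it has no Maclaurin series,
# only a Laurent series with a 1/x term:
laurent = sp.series(sp.exp(x)/sp.sin(x), x, 0, 1)
print(laurent)

# exp(x)/cos(x) is analytic at 0 and has a genuine Maclaurin series:
maclaurin = sp.series(sp.exp(x)/sp.cos(x), x, 0, 4)
print(maclaurin)  # 1 + x + x**2 + 2*x**3/3 + O(x**4)
```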

## Wrong redirect

Series expansion redirects to this article. My opinion is that there are several kinds of series expansions, with the Taylor series being one of them. Therefore wouldn't it be better to have a separate article about series expansion? --Abdull 13:46, 5 June 2006 (UTC)

You mean like the Fourier series expansion? Right now, I'm not inclined to make a separate disambiguation page, because I don't think there are enough different uses, but I might feel differently if you stated a case. For now, I will add a link to the top of the article. If you think more is needed, please say so. -lethe talk + 19:37, 21 June 2006 (UTC)

## A Casual Proof needs revision

I am just starting here at Wikipedia, so I still need experience in my word choice, flow, voice, et cetera. I created the 'A Casual Proof' part. I thought of the proof myself, but it isn't very formal, so someone could make it at least a little more formal. If you think this is unnecessary, then I suppose we could remove it. Otherwise, polish it if you will. And getting that darn derivative to not intersect the f would be a nice help too, if someone knows how to do this.

Can someone tell me why this was taken out? Was it unnecessary because there was a proof on the Taylor's theorem page? RETROFUTURE 01:46, 18 July 2006 (UTC)

Proofs are not that important in an encyclopedia, and they can be distracting. The Taylor series article is already big enough. I guess we are better off without it. Oleg Alexandrov (talk) 03:31, 18 July 2006 (UTC)

Okay then. RETROFUTURE 16:10, 18 July 2006 (UTC)

Some of the longer articles in WP have proofs that are 'hidden' by default.
For example Stress (physics)
I disagree that proofs are "not that important" in an encyclopædia.
—DIV (128.250.80.15 (talk) 09:33, 25 June 2008 (UTC))

A proof is essential to the article. Please put it back. BriEnBest (talk) 21:52, 19 October 2010 (UTC)

## The name of the series

The most common name for the series is Taylor series, although it's often called a Maclaurin series when centered at 0. The article states all this correctly, but my question is: why did this way of naming arise? Sure, the Maclaurin series is a "special case" of the "more general" Taylor series, but it is so only in a very superficial sense. If you want the Taylor series expansion for e.g. sine at a, you can get it with the Maclaurin expansion by simple translation - that is, get the Maclaurin series expansion of sin(x-a). Given that, as the article says, Maclaurin's result was published earlier than Taylor's, why is the most common name the Taylor series? For the uninitiated it would seem as if Taylor unwittingly has taken the credit for Maclaurin's discovery (if you believe that the first person to discover something is in some way special) simply by stating the theorem in a more popular way. 82.103.195.147 11:09, 12 August 2006 (UTC)

It seems to me that it's the same kind of thing as Rolle's Theorem and Mean Value Theorem. It's pretty much the same thing, but Rolle's theorem is more specific.RageGarden 03:55, 21 April 2007 (UTC)
I agree that "Maclaurin series" would be the proper name for historical reasons (I have even heard that he considered also the general case, but I am not sure), but "Taylor series" is the term in use. I think that most mathematicians say "Taylor series" also in the case around 0, but most introductory books call this case "Maclaurin series". I suggest we leave the article as it is. Jesper Carlstrom 08:40, 21 April 2007 (UTC)

## First example

The first example ends by saying Expanding by using multinomial coefficients gives the requisite Taylor series. Actually, using multinomial coefficients is not enough: to really get the coefficients we need to add infinitely many terms, a thing which should at least be mentioned. I think it would be better to substitute the cosine with the sine in the example, in order to get finitely many terms to add. 62.94.48.91 09:56, 28 August 2006 (UTC)

## Rewrite of introduction on 31 October 2006

Moved from User talk:Petri Krohn

I did a partial revert of your changes to the Taylor series article. Your changes introduced some mistakes. A Taylor series is not a sum of derivatives; it is a sum of terms, with each term being a derivative times a power over a factorial. Also, not all trigonometric functions are globally analytic, like the tangent function. Also, you introduced a subtle mistake by implying that partial sums are always a good approximation to an infinitely differentiable function. That is true only for analytic functions, and only then just in a range. You can reply here if you have any comments. Thanks. Oleg Alexandrov (talk) 04:24, 1 November 2006 (UTC)

I think the old intro sucks. It gives the impression that the series is only an approximation of the function, and a tool for the "easy" calculation of values. I especially dislike this sentence: Functions that involve rational operations such as addition, subtraction, multiplication and division are relatively easy to evaluate. Many other functions aren't so easy to evaluate, like those that involve... This may be true, but I think it is weasel text with no place in the intro.
The intro should point out three things:
1. The series is constructed from the derivatives of the function. Knowing the values of the derivatives at one value of x allows one to calculate the value of the function everywhere.
2. The series is not an approximation of the function, but exactly the same function, and can be substituted for it in mathematical proofs.
3. The two things above apply only to a limited set of well-behaved functions. The intro must be able to name this set of functions and direct to the relevant article.
-- Petri Krohn 04:56, 1 November 2006 (UTC)
P.S. Mathematicians like to see the formulas at the beginning of the article. This makes the article inaccessible to most readers. I believe 90% of readers will not read past the first formula. If anything important can be expressed verbally, it should be placed in the beginning, before the first formula. -- Petri Krohn 05:05, 1 November 2006 (UTC)
The statement Taylor series can be used to produce all the values of an analytic function, if the value of the function, and of all of its derivatives, is known at a single point.
is accurate only locally. The new intro has other subtle mistakes. Oleg Alexandrov (talk) 05:07, 1 November 2006 (UTC)
Another mistake: For trigonometric functions, the derivatives at 0 are usually trivial to produce.
That is not true for tangent.
Peter, please fix those. I will look again at the intro tomorrow. Oleg Alexandrov (talk) 05:09, 1 November 2006 (UTC)
I tried some fine tuning. -- Petri Krohn 05:24, 1 November 2006 (UTC)

## Where are the error bounds?

Most text books give error bounds -- either in terms of an integral or the value of one of the derivatives at a point in the interval between a and x. Why do we not have them here? JRSpriggs 12:55, 10 December 2006 (UTC)

It is at Taylor's theorem. JRSpriggs 06:02, 2 January 2007 (UTC)

## Derivations of Some Series

Hello, I'm a student learning about Taylor Series and I was wondering if there can be some additional pages that derive the Taylor Series for say cos(x). It would be interesting to see it done. —Preceding unsigned comment added by 69.255.197.49 (talkcontribs)

Not sure what you want. All you need to do is compute the derivatives and evaluate at x=0 as the article explains. MathHisSci (talk) 15:18, 30 August 2010 (UTC)
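To make that recipe concrete, here is a sympy sketch of the derivation for cos(x) (the truncation at degree 6 is an arbitrary choice of mine):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.cos(x)

# Maclaurin coefficients: f^(n)(0) / n!, exactly as the article describes --
# compute the n-th derivative, evaluate at 0, divide by n factorial.
coeffs = [sp.diff(f, x, n).subs(x, 0) / sp.factorial(n) for n in range(7)]
print(coeffs)  # [1, 0, -1/2, 0, 1/24, 0, -1/720]
```

Reassembling these as coefficients of x^n gives the familiar cos(x) = 1 - x²/2! + x⁴/4! - x⁶/6! + ….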

## Taylor series with Lagrange and Peano remainders

Why is there nothing about those two remainders in the article?

## Difference between Taylor series and Taylor Polynomials

I think it is necessary to include information regarding the difference between a Taylor series and a Taylor polynomial. They are not the same.

A Taylor series is an INFINITE series of terms whose partial sums, as n approaches infinity, converge to the stated function, whether it be sin x, cos x, e^x... etc.

A Taylor polynomial has a defined number of terms, as specified by the notation Pn(x), where n is the given degree. Because n is defined as a finite number, Pn(x) will be EQUAL to the expanded series only up to that degree, and therefore will not be equal to the Taylor series. It will be an approximation of it.

Please verify this information. EDIT...I messed up what was in bold...fixed now—The preceding unsigned comment was added by 24.229.193.72 (talk) 16:10, 25 February 2007 (UTC).
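The distinction is easy to see numerically. A Python sketch (the helper name and the choice of sin are my own illustrative choices): each Taylor polynomial Pn is only an approximation, but the error shrinks as n grows.

```python
import math

def taylor_poly_sin(x, n):
    """Taylor polynomial of sin about 0, with terms up to degree n."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n // 2 + 1) if 2*k + 1 <= n)

# |P_n(1) - sin(1)| for increasing degree n -- each P_n is a finite
# approximation, and the errors shrink toward 0 as n grows:
for n in (1, 3, 5, 9):
    print(n, abs(taylor_poly_sin(1.0, n) - math.sin(1.0)))
```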

Do we really need to transpose the gradient vector?

$T(\mathbf{x}) = f(\mathbf{a}) + \nabla f(\mathbf{a})^T (\mathbf{x} - \mathbf{a}) + \frac{1}{2} (\mathbf{x} - \mathbf{a})^T \nabla^2 f(\mathbf{a}) (\mathbf{x} - \mathbf{a}) + \cdots$

rather than

$T(\mathbf{x}) = f(\mathbf{a}) + \nabla f(\mathbf{a}) (\mathbf{x} - \mathbf{a}) + \frac{1}{2} (\mathbf{x} - \mathbf{a})^T \nabla^2 f(\mathbf{a}) (\mathbf{x} - \mathbf{a}) + \cdots$

Which one is the convention? Jackzhp 21:36, 11 April 2007 (UTC)

The point is that both $\nabla f(\mathbf{a})$ and $(\mathbf{x} - \mathbf{a})$ are column vectors. So to form the inner product one must convert the first one into a row vector before matrix multiplication. JRSpriggs 07:47, 12 April 2007 (UTC)
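A small numpy sketch of this second-order form (the example function, expansion point, and offset are my own illustrative choices; `@` performs the row-times-column products that the transposes denote):

```python
import numpy as np

# Second-order Taylor approximation of f(x) = exp(x0) * sin(x1) about a:
# T(x) = f(a) + grad f(a)^T (x - a) + (1/2) (x - a)^T H(a) (x - a)
def f(v):
    return np.exp(v[0]) * np.sin(v[1])

a = np.array([0.0, 0.0])
grad = np.array([np.exp(a[0]) * np.sin(a[1]),      # df/dx0
                 np.exp(a[0]) * np.cos(a[1])])      # df/dx1
hess = np.array([[np.exp(a[0]) * np.sin(a[1]), np.exp(a[0]) * np.cos(a[1])],
                 [np.exp(a[0]) * np.cos(a[1]), -np.exp(a[0]) * np.sin(a[1])]])

x = np.array([0.1, 0.2])
t2 = f(a) + grad @ (x - a) + 0.5 * (x - a) @ hess @ (x - a)
print(abs(t2 - f(x)))  # small: the leftover error is third order in |x - a|
```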

## Possible Uses

As it is right now, it states that partial sums of a Taylor series can be used to approximate the function, but wouldn't it also be useful to say that as an infinite sum it can be used to show convergence, and that as an infinite sum the Taylor series exactly equals the function? RageGarden 18:35, 19 April 2007 (UTC)

In the paragraph above, it does say "Functions that are equal to their Taylor series around any point a in their domain are called analytic functions." I'm not sure what you mean by "it can be used to show convergence". We can sometimes interpret a constant series as a Taylor series at a point (i.e., a Taylor series with x replaced by some constant) and use knowledge of the convergence of the Taylor series to conclude convergence of the constant series. Is that what you mean? That might be worth mentioning. Doctormatt 21:01, 19 April 2007 (UTC)
Yeah, sorry about the vagueness. I wasn't exactly sure how to word it but you got the general idea of what I was going for.RageGarden 04:08, 20 April 2007 (UTC)
I did some editing. Is the result better? Jesper Carlstrom 09:01, 20 April 2007 (UTC)

## Explaining my revert

I reverted some edits (link). Here is why:

• The partial sums of a Taylor series are called Taylor polynomials. I don't see why this should not be mentioned.
• You do need sufficiently many terms for a good approximation. For example: approximating $e^x$ by 1+x is good only in some cases; you must take care to include sufficiently many terms for the problem considered. I don't see why that was removed.
• Finally, it is indeed necessary that the series converges. For instance, approximating (the real function) arctan by the Maclaurin series works only between -1 and 1; it does not help that it is analytic. Of course it helps if the function is analytic for all complex numbers (entire), simply because then the series converges! But this is on the other hand way too strict: arctan is a good example; it is not entire, but the Taylor series is useful anyway.

Jesper Carlstrom 07:19, 14 May 2007 (UTC)
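The point above about needing sufficiently many terms is easy to check numerically; a minimal Python sketch (the function $e^x$, the evaluation point, and the degrees are illustrative choices, not taken from the discussion):

```python
import math

def taylor_poly_exp(x, degree):
    """Partial sum (Taylor polynomial) of the Maclaurin series of e^x."""
    return sum(x**n / math.factorial(n) for n in range(degree + 1))

x = 2.0
err_deg1 = abs(math.exp(x) - taylor_poly_exp(x, 1))  # 1 + x only
err_deg5 = abs(math.exp(x) - taylor_poly_exp(x, 5))
# Far from the expansion point, the degree-1 approximation is poor
# (error about 4.4), while six terms already bring the error near 0.12.
```
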

It is quite possible for the series to converge to the WRONG value. So convergence is NOT ENOUGH. JRSpriggs 07:35, 14 May 2007 (UTC)
You are right. On the other hand, analytic is not enough either (arctan). Entire seems a bit too much to assume. What conditions should we use? Jesper Carlstrom 08:02, 14 May 2007 (UTC)
I now have a new proposal. By the way, it seems to me that the "right" criterion for the Taylor series to converge to the function (provided it converges at all) is: f is differentiable in an open complex neighborhood of a path from a to x. This is a bit too advanced, so maybe the best thing is to state the property for entire functions only. Jesper Carlstrom 08:30, 14 May 2007 (UTC)
If the function has a complex derivative at every point in a disk centered on a, then the Taylor series converges uniformly to the function in any smaller disk centered on a. JRSpriggs 09:02, 14 May 2007 (UTC)
Do you think that your suggestion would be better than the stuff I put there? The information you suggest putting there is essentially already to be found below in the article. I have the feeling that stating these things early would require too much from the readers. Jesper Carlstrom 15:08, 14 May 2007 (UTC)
Thanks for your edit. By the way, notice that neighborhood redirects to neighbourhood. Jesper Carlstrom 09:30, 15 May 2007 (UTC)

My spelling checker (the one built into Firefox), does not recognize British spellings. JRSpriggs 07:25, 16 May 2007 (UTC)

## History

I'm trying to understand the history section (from today). What on earth does "the second-order Taylor series approximations of the sine and cosine functions" mean? The second-order term is 0 - is this the discovery? Could that be stated in the language of the time? Moreover, what does this mean: "the power series of the radius, diameter, circumference, angle θ, π and π/4, along with rational approximations of π, and infinite continued fractions." What is a power series of the radius? What is the power series of θ? What do infinite continued fractions have to do with this? I seriously begin to wonder if someone is making fun of us. Jesper Carlstrom 11:57, 21 May 2007 (UTC)

## Taylor series formula

I noticed the following comment associated with the Taylor series formula: "As stated below, the Taylor series need not equal the function. So please don't write f(x)=... here." The current formula is

$f(a)+\frac{f'(a)}{1!}(x-a)^1+\frac{f''(a)}{2!}(x-a)^2+\frac{f^{(3)}(a)}{3!}(x-a)^3+\cdots$

However, perhaps the amendment below, which satisfies the statement above, should be made:

$p_n(x) = f(a)+\frac{f'(a)}{1!}(x-a)^1+\frac{f''(a)}{2!}(x-a)^2+\cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n$

--Zven 22:45, 13 July 2007 (UTC)

You are referring to Taylor polynomials for which there is a link in the intro paragraph, so I don't think this needs inclusion in this article. However, I don't see this explicit form at Taylor polynomial either: perhaps you could find a good way to incorporate it there? Cheers, Doctormatt 23:44, 13 July 2007 (UTC)
Yeah I think you are right, will have a look at the other article and see if it can be included --Zven 00:12, 14 July 2007 (UTC)
This is still being discussed at Talk:Taylor's theorem#Taylor's theorem approximation. As I said there, I think this article is the best place to mention the Taylor polynomials. -- Jitse Niesen (talk) 12:27, 24 July 2007 (UTC)

## figure text

"As the degree of the Taylor series rises" is not nice, because a power series has no degree.

The editors mainly consider real analysis rather than complex analysis, but they are not explicit about it.

In complex analysis a convergent Taylor series always converges to the function value f(x).

In real analysis a convergent Taylor series may converge to a value different from the function value f(x).

Bo Jacoby 23:23, 22 July 2007 (UTC).

## Log Base What?

This page uses a logarithm function, but does not give the base of the log. —Preceding unsigned comment added by 72.196.234.57 (talk) 22:37, 18 September 2007 (UTC)

You're right, that should be mentioned. Thanks, now fixed. -- Jitse Niesen (talk) 01:01, 19 September 2007 (UTC)
Also, I have rewritten the natural logarithm as the usual "ln" (as opposed to the former "log", which implies base 10). Gulliveig (talk) 04:09, 16 September 2008 (UTC)
Not true. That's only by some conventions. Whenever I write 'log', it means the natural log, not the log base 10. You will also find the same convention adopted in most mathematical analysis textbooks. siℓℓy rabbit (talk) 10:58, 16 September 2008 (UTC)
I agree with Gulliveig. Briancady413 (talk) 18:33, 7 August 2014 (UTC)

## Complex Taylor series

In the introduction to the Taylor expansion it is stated that the formulation is also valid for functions of complex variables. Does this mean, in practice, that one does not have to separate the complex variable z into its real and imaginary parts for the Taylor expansion? One can just write the expansion in z itself and in the end everything works out, right? In the example:

$f(a,b) = \frac{a+c}{b+d}$

where a and b are complex variables and c and d are complex constants, one could therefore write the first-order Taylor expansion about $(a,b)=(0,0)$ as

$f(a,b) \approx \frac{c}{d} + \frac{1}{d} \bigg[ a - \frac{c}{d} b \bigg]$

It might be helpful to include some remarks on the use of complex functions and maybe possible restrictions on their application? —Preceding unsigned comment added by Ddeklerk (talkcontribs) 07:34, 29 October 2007 (UTC)

That one does not need to separate the real and imaginary parts is exactly what it means. Michael Hardy (talk) 16:57, 20 September 2008 (UTC)

## Why convergent?

Can anyone support this claim:

"The Taylor series need not in general be a convergent series, but often it is." Randomblue 20:57, 15 November 2007 (UTC)

In the article, several examples are given of Taylor series that converge for every x. An example of a Taylor series that does not converge for any $x\neq 0$ is given in section Properties. Jesper Carlstrom 10:10, 16 November 2007 (UTC)
It also points out that the series converges everywhere for all analytic functions, which takes care of the "often it is" part. -- Dominus 15:21, 16 November 2007 (UTC)
Sorry, I was mistaken. The example in section Properties is one that converges everywhere, but not to the value of the function. There is no example of a Taylor series that diverges everywhere. But the warning that Taylor series need not converge can be read as saying that they need not converge everywhere. That is supported in the article. -- Jesper Carlstrom (talk) 21:32, 16 November 2007 (UTC)
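A standard example (not from the article) of a power series that diverges for every x ≠ 0 is $\sum_n n!\,x^n$; by Borel's theorem it is the Taylor series of some smooth function. A quick Python sketch showing that its terms blow up even at x = 0.1, so the partial sums cannot converge:

```python
import math

def term(n, x):
    """n-th term of the series sum_n n! * x**n."""
    return math.factorial(n) * x**n

x = 0.1
terms = [term(n, x) for n in range(31)]
# The terms eventually grow without bound, so the series diverges:
# term(10) is about 3.6e-4, term(20) about 0.024, term(30) about 265.
```
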

## Clarification requested

In the Convergence section, I think these sentences are unclear:

"If f(x) is equal to its Taylor series in a neighborhood of a, it is said to be analytic in this neighborhood. If f(x) is equal to its Taylor series everywhere it is called entire. The exponential function ex and the trigonometric functions sine and cosine are examples of such functions."

Examples of which functions? Functions that are analytic? Entire? Both? I think it's unclear as currently written. --Kweeket Talk 00:30, 30 November 2007 (UTC)

Agreed. I fixed that. Jesper Carlstrom (talk) 11:39, 30 November 2007 (UTC)

## Integral of e^(x^2)

There was a huge ruckus at my school when I asked my maths teacher what this would be; he claimed it cannot be evaluated. I looked up some sites, and it's widely stated that the Gaussian integral (the integral of e^(-x^2)) is evaluated using the Taylor series expansion of e^(-x^2). My simple question is: if Taylor expansions accept complex arguments, would it be possible to substitute x by xi (i being the square root of -1) and reduce the expansion for e^(-x^2) to one for e^(x^2), thereby evaluating the above integral term by term? Leif edling (talk) 18:00, 23 April 2008 (UTC)

Well, $\int e^{-x^2}\,dx$ can't be expressed in elementary functions either, although you can obviously write down a convergent power series for this. In fact, you can do this for either integral, it isn't hard. Probably your professor meant that there is no closed-form expression in elementary functions. I often tell students that this integral can't be evaluated without more advanced techniques. silly rabbit (talk) 21:24, 23 April 2008 (UTC)
If you are allowed the use the (non-elementary) error function, then you can get a closed-form expression for the antiderivative of e^(x^2), which can be obtained from the (also non-elementary) antiderivative of e^(-x^2) by substituting xi for x. In general, if the function whose antiderivative is being sought is continuous and can be numerically evaluated (meaning it is possible to compute the numerical value assumed by the function for any numerically specified value of its argument), then basically any method for numerical integration will allow you to also numerically evaluate its antiderivative. This can also be used here, but in this case (just as for the antiderivative of e^(-x^2)) using the Taylor series expansion $\sum_{n=0}^\infin\frac{x^{2n+1}}{n! (2n+1)}$ is faster and more accurate.  --Lambiam 22:09, 26 April 2008 (UTC)

Rightly pointed out there, Lambiam. But unfortunately the error function is way beyond our syllabus at high school level (in fact it was probably beyond our math teacher's scope, because he obviously knew nothing about it :P). Using the convergent power series expansion seems logical enough, as does using the error function. But is the error function valid for an indefinite integral? Leif edling (talk) 00:53, 29 April 2008 (UTC)

Sure, all you need to do is add in a constant of integration:
$\int e^{x^2}\,dx=-i\,\frac{\sqrt{\pi}}{2}\,\operatorname{erf}(ix)+C.$
--Lambiam 14:45, 30 April 2008 (UTC)

A new problem has arisen; a few mathematics textbooks here in India say that this integral "cannot be evaluated", along with a few other standard forms, e.g. x tan x (which can also, apparently, be evaluated as an infinite power series using the Taylor series expansion). Isn't the statement "cannot be evaluated" wrong on the part of the authors? Leif edling (talk) 07:42, 15 May 2008 (UTC)

At the very least it is an unfortunate and misleading statement. The usual meaning of "to evaluate" in mathematics is: "to ascertain the numerical value of". In that sense the integral can be evaluated just as well as the integral of ex. For example,
$\int_0^1 e^{x^2}\,dx=$  1.46265 17459 07181 60880 40485 86856 98815 51208 70096 21673 91856 60114 58021 87633 14290 97917 ...
--Lambiam 11:38, 19 May 2008 (UTC)
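Lambiam's series and numerical value above are easy to verify; a short Python sketch summing $\sum_{n=0}^\infty \frac{x^{2n+1}}{n!\,(2n+1)}$ at x = 1 (the truncation length is an illustrative choice):

```python
import math

def int_exp_x2(x, n_terms=25):
    """Antiderivative of e^(t^2) vanishing at 0, via its Taylor series."""
    return sum(x**(2*n + 1) / (math.factorial(n) * (2*n + 1))
               for n in range(n_terms))

value = int_exp_x2(1.0)
# Matches the decimal expansion quoted above: 1.46265 17459 ...
```
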

## Vector notation for multivariable Taylor series

Perhaps the following is worthy of addition into the article?

An alternative, more compact notation for the multivariable Taylor series.

Let $f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ be a function of $n$ real variables. Define the vectors $\mathbf{x_{0}}=(x_{1_{0}},x_{2_{0}},\ldots,x_{n_{0}})$ and $\mathbf{\Delta x}=(\Delta x_{1},\Delta x_{2},\ldots,\Delta x_{n})$. If $f$ is infinitely differentiable at the point $\mathbf{x_{0}}$, then the Taylor series expansion for $f(x_{1},x_{2},\ldots,x_{n})$ about the point $\mathbf{x_{0}}$ is:

$f(\mathbf{x_{0}}+\mathbf{\Delta x})=\sum_{k=0}^{\infty} \frac{\left.(\mathbf{\Delta x}\cdot\nabla)^{k} f\right|_{\mathbf{x_{0}}}}{k!}$

where $\displaystyle \nabla \overset{\mathrm{def}}{=} (\frac{\partial}{\partial x_{1}},\frac{\partial}{\partial x_{2}},\ldots,\frac{\partial}{\partial x_{n}})$ is the gradient operator.

Note therefore that $(\mathbf{\Delta x}\cdot\nabla)\equiv\left(\Delta x_{1} \frac{\partial}{\partial x_{1}}+\Delta x_{2} \frac{\partial}{\partial x_{2}}+\ldots+\Delta x_{n} \frac{\partial}{\partial x_{n}}\right)$ is a differential operator. Saran T. (talk) 11:46, 8 May 2008 (UTC)
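A concrete sanity check of this operator notation, with f(x, y) = xy about (1, 2) (values chosen here purely for illustration): since $(\mathbf{\Delta x}\cdot\nabla)f = \Delta x_1\,y + \Delta x_2\,x$ and $(\mathbf{\Delta x}\cdot\nabla)^2 f = 2\,\Delta x_1 \Delta x_2$, the series terminates at k = 2 and reproduces f exactly.

```python
# Expansion of f(x, y) = x*y about (x0, y0): the series terminates at k = 2.
x0, y0 = 1.0, 2.0
dx, dy = 0.3, -0.5   # illustrative displacements

term0 = x0 * y0                  # k = 0: f(x0, y0)
term1 = dx * y0 + dy * x0        # k = 1: (dx d/dx + dy d/dy) f at (x0, y0)
term2 = (2 * dx * dy) / 2        # k = 2: (dx d/dx + dy d/dy)^2 f, over 2!

exact = (x0 + dx) * (y0 + dy)
approx = term0 + term1 + term2   # equals exact, since f is a polynomial
```
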

## ERROR in the first 2 equations!!!

Could someone please modify the first two equations? The first one should say f(x) = f(a) + f'(a)(x-a)/1! + ... instead of simply f(a) + ...; I understand that you can get the "f(x) =" idea from the sentence, but the equation should be written correctly and completely. The second equation, the one with the sigma, should also have "f(x) = ..." in front of it. For more info, see the link below. Marius 82.208.174.72 (talk) 23:44, 18 July 2008 (UTC)

It is not an error. Functions need not be equal to their Taylor series. Consider, for instance, the Taylor series of the function $e^{-1/x^2}$ around the point x=0. Every term of the series is zero, but the function is not itself zero. siℓℓy rabbit (talk) 00:49, 19 July 2008 (UTC)
The error is in the mathworld link you gave, just so you know. siℓℓy rabbit (talk) 00:51, 19 July 2008 (UTC)

You *CANNOT* do a T.S.E. around $e^{-1/x^2}$ @ x=0, for said function is not differentiable at point of interest. —Preceding unsigned comment added by 71.146.134.150 (talk) 07:36, 23 August 2008 (UTC)

That function is differentiable at the origin and its derivative is zero (use the definition). More precisely, the function is $f:\R\to\R$ with $f(x) = e^{-1/x^2}$ for $x \ne 0$ and $f(0)=0$. -- Jitse Niesen (talk) 13:06, 23 August 2008 (UTC)

That's exactly the point. f(0) is *NOT DEFINED*. Of course if one wanted to say (f(x) = ..., x != 0, and f(x) = 0, x = 0) to plug up your little discontinuity (at which point continuity and differentiability all work out per definition), then that's fine, but that type of cavalier penmanship has no place in any sort of a mathematics forum. I'm not sure if this type of pathological function has a place in this article; just go pick rectangle(x) if you wanted to come up with something where the T.S.E. doesn't exactly make sense or is less than meaningful. —Preceding unsigned comment added by 71.146.134.150 (talk) 21:41, 26 August 2008 (UTC)

Well, it was of course clear from the context that the function had to be continuous at the origin. Since you are a bit slow on the uptake, here it is using less "cavalier penmanship":
$f(x) = \begin{cases}e^{-1/x^2}&\mathrm{if}\ x\not= 0\\ 0 &\mathrm{if}\ x=0 \end{cases}.$
Also, the example of this function is not in the article, as you seem to believe. But it is an important one (and may even deserve to be in the article). Regardless, there are a great many functions which are not equal to their Taylor series. Any nonzero smooth function of compact support (on a noncompact manifold), for example, cannot be equal to its Taylor series everywhere. The existence of such functions is important since it implies that there are smooth partitions of unity (in fancy terms, the sheaf of smooth functions on a smooth manifold is fine). Particular consequences of this include the existence of distributions which arise significantly in the field of Fourier analysis. Jitse could probably recite a similar litany of issues with the Taylor approximation near such bad points in a way that applies directly to numerical analysis. siℓℓy rabbit (talk) 22:02, 26 August 2008 (UTC)
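The flatness of this function at the origin is easy to see numerically; a Python sketch (the sample point 0.1 and the exponent 20 are illustrative):

```python
import math

def f(x):
    """Smooth function that is flat at 0: every Maclaurin coefficient is 0."""
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# Near 0, f(x) is smaller than any power of x, which is what forces all
# derivatives at 0 (hence all Taylor coefficients there) to vanish:
ratio = f(0.1) / 0.1**20   # astronomically small
```
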

## List of Taylor series

I suggest to split the list into a new article.--79.111.200.210 (talk) 17:14, 28 July 2008 (UTC)

That makes sense to me. Move the list, and leave a heavily pruned section here with the Taylor series for say exp, ln, square root, sin and cos. -- ~~

The Parker-Sochacki method is a recent advance in finding Taylor series which are solutions to differential equations. This algorithm is an extension of the Picard iteration.

## Probabilistic interpretation of Taylor series

(This part has been deleted in April 2009)81.247.77.249 (talk) 10:01, 5 June 2009 (UTC)

## Mistake in examples section?

At the beginning of the examples section, it says "The Maclaurin series for any polynomial is the polynomial itself." That doesn't seem right. The Maclaurin series for x is 2x. For x^2 + 2x, it's 5x^2 + 4x. It would seem that the powers of the terms are the same, but the coefficients are different. Syndrome (talk) 21:26, 4 January 2009 (UTC)

There appears to be something wrong with your implementation of the Maclaurin series. I would check the details of your calculation. siℓℓy rabbit (talk) 22:39, 4 January 2009 (UTC)
Whoops, my bad. Syndrome (talk) 23:17, 5 January 2009 (UTC)
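For the record, the claim in the article can be checked mechanically: for a polynomial with coefficients c_k, the k-th derivative at 0 is k!·c_k, so each Maclaurin coefficient p^(k)(0)/k! is just c_k again. A Python sketch using the polynomial from this thread:

```python
import math

def derivative_at_zero(coeffs, k):
    """k-th derivative at 0 of the polynomial sum_j coeffs[j] * x**j."""
    if k >= len(coeffs):
        return 0
    return math.factorial(k) * coeffs[k]

p = [0, 2, 1]  # x**2 + 2*x, as discussed above (coeffs in ascending order)
maclaurin = [derivative_at_zero(p, k) / math.factorial(k) for k in range(len(p))]
# maclaurin recovers [0, 2, 1]: the Maclaurin series is the polynomial itself
```
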

At Calculation of Taylor series, first example: the arithmetic isn't clear enough. Or, should I say, the last step is wrong. Francisco —Preceding undated comment added 08:50, 29 June 2010 (UTC).

## Formal exponential

When $A$ is an operator, everyone knows what $\exp(A)$ means. The problem with the recent edits is that if applied to $A = x \partial$, the result is not what it should be, because multiplication by $x$ and $\partial$ do not commute, so the powers of $A$ are not correct for Taylor's formula. Also, I don't think this kind of non-classical stuff should appear (if ever) as early as the Definition section. --Bdmy (talk) 13:04, 17 April 2009 (UTC)

I agree that the recent edits are problematic, and would even be so bold as to disagree that we can assume that "everyone knows what $\exp(A)$ means". We should be careful to make sure we are providing enough context. In addition, I object to the flippant use of differential operators without proper definition of the notation. Plastikspork (talk) 22:54, 17 April 2009 (UTC)
You are right, but at the same time also wrong (there is no contradiction): things are even more complicated; the apparent points can be resolved, but at the expense of potentially still more embarrassment. The explanation is as follows: the operator $\,A$ is not $x\partial\,,$ but only $\partial\,,$ so $\,A^2={\partial}^2$ etc. Every contribution to the sum is weighted by a real number, e.g. $\,(x-a)^n/n!$, etc.; so the sum of all these terms, acting on a function f, can be written, per natural definition, as $\sum_{n=0}^\infty\, \frac{(x-a)^n}{n!}\,\hat\partial^nf(a)=\{\sum_{n=0}^\infty\, \frac{(x-a)^n}{n!}\,\hat\partial^n\}f(a)\stackrel{\rm{def}}{=}\,\left\{\exp((x-a)\hat\partial )\right\}f(a)\,.$ To avoid any misunderstanding, one must thus be very careful (note the hat-symbol; this is stressing that only $\partial$ is acting as an operator on f, whereas the $\frac{(x-a)^n}{n!}$ only play the role of weighting prefactors, arbitrary integer, rational, real or even complex numbers. So actually no multiplication operator is involved!). In any case, it is better - I agree - not to overemphasize things. - By the way, $\exp A\,\,(=1+A+A^2/2!+...)$ is generally a nonlinear expression, and should be distinguished from the operation $\exp\{\, \hat\partial \}\,\,(={1+\hat\partial}+\frac{{\hat\partial}^2}{2!}+...)$ acting linearly on a function f. All this would need an extra article, e.g. "linear operators in Hilbert space". - With regards, 87.160.47.134 (talk) 15:20, 18 April 2009 (UTC)
Thanks for fixing your formula. But, the notation is (in my opinion) imprecise since the differential operator is being applied to a constant (people write this all the time, but it's not always clear for less experienced students in which order to apply the operations). Better is $f(x) = \left. \sum_{n=0}^\infty\, \frac{(x - a)^n}{n!} \partial^n f(x) \right|_{x = a}$. Thanks! Plastikspork (talk) 17:28, 18 April 2009 (UTC)

By the way, I believe the original problem was with the multi-dimensional version, which can be written using only an operator exponential, without redefining the exponential: $f(\mathbf{x} + \mathbf{a}) = \left. \exp\left[ \sum_{k=1}^n a_k \frac{\partial}{\partial y_k} \right] f(\mathbf{y}) \right|_{\mathbf{y} = \mathbf{x}}$ (see MathWorld: Taylor Series). Note that my problem isn't so much with the exponential, but the flippant introduction of a bunch of new notation without proper context. Perhaps a section on "connection with the operator exponential" would be useful? Thanks! Plastikspork (talk) 17:28, 18 April 2009 (UTC)

Again, by your remark, you have discovered the essential point: I would have originally written $\left (\exp \{a_1\hat \partial_1\} \cdot\exp \{a_2\hat\partial_2\} \cdot\dots \right ) f(\mathbf y)|_{\mathbf y=\mathbf x}\,,$ but since the operators commute, the product of exponentials is identical with a single exponential for the sum, $\exp (\hat A_1+\hat A_2+\dots )\,.$ And physically, one obtains the final "pseudo-one-dimensional" result, by just replacing the one-dimensional derivative by the directional-derivative from $\mathbf x$ to $\mathbf x+\mathbf a\,.$ - Regards, 87.160.110.194 (talk) 13:39, 19 April 2009 (UTC)
Sure, for $\mathbb R^n$, the operators commute. Again, my problem isn't with the exponential, it's with the flippant inclusion of a ton of stuff with no context. For example, the reader might wonder why you are putting a hat on your partial derivatives, and what $\hat \partial_1$ means. Plastikspork (talk) 22:55, 19 April 2009 (UTC)
The hat-symbol is only distinguishing an operator from a simple real number used e.g. as a weighting-factor called $\alpha^n$ or $(x-a)^n$. In contrast, the multiplication operator would be $\widehat{(x-a)}\,.$ It is common practice to use the hat-symbol for this distinction, surely a subtle point, and one which one should mention and explain in an article like the one we are discussing, although I did not see the necessity for it before. In any case, this is one of the reasons why I stated above that "one should not overemphasize", which is a useful attitude, by the way. In any case, as mentioned, in recent years and decades I did not at all see the necessity to be so subtle as to distinguish (x-a) and $\widehat{(x-a)}$ (the distinction is that in the first case one is weighting the derivatives of the function f (and the essential point of the Taylor expansion is that exactly with the weights $(x-a)^n/n!$ one frequently gets the result that the function T exists and is identical with f), whereas in the "multiplication-operator case" the operator performs a map from the function f to a new function g, i.e. $g(x)=(x-a)\cdot f(x)\,;$ mathematicians would write $f\to g:=\widehat{(x-a)}\,\, f\,,$ or something like that, which one would hardly understand, and they would also carefully describe the definition ranges.)
In any case, by your remark (precisely: a wrong remark, sorry to express it so impolitely; I admit that it took me rather long to understand the point myself) that in combination with $\partial$, i.e. in $x\partial$, the quantity x would be an operator, I learned that there is really a necessity to be so subtle as to distinguish multiplication-weightings and multiplication-operators. So principally this is didactics, but at a rather high and complicated level, difficult to understand. And in fact, here the mathematicians, not only at school, should admit some sins. I repeat: too advanced a topic to include in an encyclopedic article; at least this is what I think presently. For if even some non-simple-minded persons such as us have these and other "difficulties", what then about a common reader? - Besides, I agree with you concerning the necessity to be as careful as you are with the above-mentioned precision concerning the distinction of functions, function arguments, function values, derivatives and their arguments resp. values etc. But I don't understand what you mean by "stuff without context". Could you give a good example, or just improve a certain sentence of the present text according to your opinion? Perhaps your formulations would also fit my taste. - Best regards! 87.160.99.83 (talk) 20:46, 20 April 2009 (UTC)
Sorry, but what am I wrong about? The fact that introducing differential operator notation without definition is confusing to an inexperienced student? By "stuff without context", I mean Wikipedia_talk:CONTEXT#Provide_context_for_the_reader. This entire thread is an example of what happens when "proper context" (i.e. unambiguous notation) is not included. Believe me, I know what an operator is, I just don't think it's necessary to introduce it in an article about Taylor series. The formula that I would use would be the one that appears in a cited secondary source: $f(\mathbf{x} + \mathbf{a}) = \sum_{j=0}^\infty \left[ \frac{1}{j!} \left( \mathbf{a} \cdot \nabla_{\mathbf{y}} \right)^j f(\mathbf{y}) \right]_{\mathbf{y} = \mathbf{x}}$. After all, we aren't supposed to be doing any original research here. By the way, you should sign up for an account; that way we would still know who you are even when your IP keeps changing. Thanks! Plastikspork (talk) 02:52, 21 April 2009 (UTC)
The "wrong" statement, which at first embarrassed me, was that we were dealing with the operator $x\partial\,.$ (I apologize for realizing only now that the statement originated not from you, but from user "Bdmy".) It then took me some time to understand that in the present context x does not play the role of a multiplication operator, but of a simple real number, although a certain one. In a thorough formulation one should thus at first replace (x-a) by a general real number, say δ, and $(x-a)^n$ by $\delta^n$, and only to the final equations should one append a "$|\delta = (x-a)$". (Actually some people do so, although originally I thought this was "overemphasized".) Now I think differently, considering this fact as one more subtlety which seems really necessary. - Best regards; you are right, and I learned a lot from your remarks; the usefulness of the "exponential writing" is only because of generalizations. There are a lot, important and nontrivial ones, and interesting too. But to mention them in the present context would again legitimately be called "overemphasized". Thus for now I would like to finish herewith. 87.160.88.235 (talk) 10:32, 21 April 2009 (UTC)

## Generalizations

What about the following addition to the text, " ... " ? (In agreement with the suggestion of another user, I would like to add it to the article if you consider it an improvement; otherwise it should be kept in the discussion section.)

" Relation to the exponential function and a generalization

There is a relation of the Taylor series to the exponential function (see above). Namely, by analogy with the series $\exp\{\alpha \beta\}=\sum_{n=0}^\infty \frac {\alpha^n\beta^n}{n!}\,,$ with real numbers or complex ones, one can formally write $T(x)=\sum_{n=0}^\infty \frac{ (x-a)^n f^{(n)}(a)}{n!}=\exp\{ (x-a)\hat\partial\}\,\,\,f(y)_{\,|y\to a}\,,$ where $\hat \partial$ represents the derivation operator, i.e. ${\hat\partial}^n\,\,f=f^{(n)}\,.$ —Preceding unsigned comment added by 87.160.75.55 (talk) 08:56, 22 April 2009 (UTC)

As is, the text does not explain why we should care, or why this is a generalization. I do think that something like this should be added because it leads to the exponential map, but it needs to be explained better. I don't see the reason for the hat on $\hat \partial$; simply $\partial$ (or perhaps more properly D in the single-variable case) already denotes an operator. A reference would also be very useful; the best I could find is Olver, Applications of Lie Groups to Differential Equations, 2e, p. 31 (via Google Books), but it does not quite support the formula you give. -- Jitse Niesen (talk) 10:42, 22 April 2009 (UTC)

## Generalizations II

Since the text was not yet ready when it was already being commented on, I repeat:

What about the following addition to the text, " ... " ? (In agreement with the suggestion of another user, I would like to add it to the article if you consider it an improvement; otherwise it should be kept in the discussion section.)

" Relation to the exponential function

There is a relation of the Taylor series to the exponential map (see above). Namely, by analogy with the series $\exp\{\alpha \beta\}=\sum_{n=0}^\infty \frac {\alpha^n\beta^n}{n!}\,\,(=e^{\alpha\cdot\beta})\,,$ with real numbers α and β or complex ones, and with Euler's number e (=$\exp (1)\approx 2.718\,)\,\,,$ one can formally write $T(x)=\sum_{n=0}^\infty \frac{ (x-a)^n f^{(n)}(a)}{n!}=\exp\{ (x-a)\hat\partial\}\,\,\,f(a)\,,$ where $\hat \partial$ represents the derivative-operator, i.e. ${\hat\partial}^n\,\,f=f^{(n)}\,.$

Generalization

In fact, this is not only formal, but leads to important generalizations. E.g., the exponential, without the function f to which it is applied, may be interpreted as an abstract representation of the one-dimensional translation group $\mathcal T(1)$, since the argument of any function f and its derivatives $f^{(n)}$ is shifted from a to x, corresponding to a translation of one-dimensional objects by a-x. This group is a so-called Lie group, i.e. it has differentiability properties analogous to those of the function f. More general Lie groups, e.g. SO(m) with m=2,3,... , the group of rotations of the m-dimensional real space $\mathbb R^m\,,$ or the corresponding group SU(m) for the space of complex numbers $\mathbb C^m\,,$ can be described by similar expressions, e.g. $SU(m)=\left\{\exp (i\sum_{\alpha =1}^{m^2-1} A_\alpha\,\hat G_\alpha )\right\}\,,$ where the $\hat G_\alpha$ are the generating operators of the group; they correspond to $\frac{\hat\partial}{i}\,$ (the quantity i is the imaginary unit) and are represented e.g. by matrices. In contrast, the real resp. complex numbers $A_\alpha \,,$ site-dependent "gauge fields" in physical theories, describe the local strength of the action of the group and correspond to the variable (x-a). Of course, at the same time the functions f are replaced by vector functions. "

Remark: All this is no "theory invention", but well known, e.g. from Wikipedia articles on Lie groups and Gauge fields, although not everyone sees all interrelations. Rather, everyone should get used to the idea, that even if he or she at present does not understand an item, he or she can learn, and should be informed, if necessary. This is the Wikipedia idea. Usually the understanding comes with time, and through interaction. —Preceding unsigned comment added by 87.160.75.55 (talk) 10:56, 22 April 2009 (UTC) .

- In the hurry, I also forgot to sign; sorry, and best regards, 87.160.75.55 (talk) 11:38, 22 April 2009 (UTC)

Interesting. Other issues aside, what source would be cited? Thanks! Plastikspork (talk) 17:36, 22 April 2009 (UTC)
Two references for the 'generalization' section (and thus indirectly also for the preceding one): (i) one could cite, e.g., Hall, Brian C., Lie Groups, Lie Algebras and Representations: An Elementary Introduction, New York and elsewhere, Springer 2003 (e.g. as a special reference after first mentioning the term 'Lie groups', and perhaps also after the word 'representation', somewhat earlier). (ii) After the term 'gauge field theories' I would then give the second reference, namely to Carlo Becchi (1997), Introduction to Gauge Theories, which is directly available under http://arxiv.org/pdf/hep-ph/9705211 . - With regards, 87.160.46.175 (talk) 08:32, 23 April 2009 (UTC)

I have never seen the above-mentioned formalism. But I do know, for example, that $e^D$ makes sense as a so-called differential operator of infinite order, or pseudo-differential operator. Maybe there is no relation. But I share the same problem as Bdmy: I don't think the formalism obtained by the power series representation of the exponential function belongs in this article. The article should discuss the Taylor series in the usual sense, that is, the one in real analysis. -- Taku (talk) 13:11, 4 May 2009 (UTC)
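For polynomials, at least, the formal identity $\exp(\delta\,\partial)f(a) = f(a+\delta)$ behind the translation-operator reading discussed above is exact, since the exponential series terminates; a self-contained Python sketch (the polynomial and the shift are illustrative choices):

```python
import math

def poly_eval(coeffs, x):
    """Evaluate sum_j coeffs[j] * x**j (coefficients in ascending order)."""
    return sum(c * x**j for j, c in enumerate(coeffs))

def poly_derivative(coeffs):
    """Coefficient list of the derivative polynomial."""
    return [j * c for j, c in enumerate(coeffs)][1:]

def shift_by_exp_d(coeffs, a, delta):
    """Compute sum_n delta**n / n! * f^(n)(a), i.e. exp(delta*d/dx) f at a."""
    total, d = 0.0, list(coeffs)
    for n in range(len(coeffs)):  # series terminates for a polynomial
        total += delta**n / math.factorial(n) * poly_eval(d, a)
        d = poly_derivative(d)
    return total

p = [1.0, -3.0, 0.0, 2.0]  # 2x**3 - 3x + 1, an illustrative choice
lhs = shift_by_exp_d(p, a=1.0, delta=0.7)  # exp(0.7 * d/dx) p at 1
rhs = poly_eval(p, 1.7)                    # p(1 + 0.7): the two agree
```
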

## Clarification needed

THE FOLLOWING SHOULD BE EXPLAINED: "Another reason why the Taylor series is the natural power series for studying a function f is given by the probabilistic interpretation of Taylor series. Given the value of f and its derivatives at a point a, the Taylor series is in some sense the most likely function that fits the given data." 212.123.27.210 (talk) 11:50, 16 July 2009 (UTC)

I suppose this is an argument similar to ones used for expanding a distribution in moments, but I agree that it needs to be clarified. Plastikspork (talk) 17:26, 16 July 2009 (UTC)

## ! operator

What does the ! operator mean? SharkD (talk) 12:45, 31 August 2009 (UTC)

'!' represents the factorial operation, which multiplies a positive integer by every smaller positive integer down to 1. For example, if you were to evaluate 5!, the result would be: 5 × 4 × 3 × 2 × 1 = 120. 82.178.109.148 (talk) 13:38, 9 April 2010 (UTC)
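In Python this operation is provided by `math.factorial`; a one-line check of the example above:

```python
import math

value = math.factorial(5)  # 5 * 4 * 3 * 2 * 1
```
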

## log(1+x)

The diagram and statement that the Taylor series of log(1+x) only converges in a small region need to be clarified. This is actually only the Maclaurin series; yet log(1+x) is analytic at all values of x except x=-1. Also, the definition of analytic needs to be clarified a bit. It should be explained more precisely that analytic means that for any point x_0 the Taylor series based at x_0 converges to the function in a neighbourhood of x_0. —Preceding unsigned comment added by 137.205.56.18 (talk) 13:52, 11 November 2009 (UTC)
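The limited region of convergence being discussed is easy to see numerically. A sketch (helper name ours): the Maclaurin series of log(1+x) is x - x²/2 + x³/3 - ..., which converges only for -1 < x ≤ 1, even though log(1+x) is analytic everywhere except x = -1:

```python
import math

def log1p_series(x, terms=200):
    """Partial sum of the Maclaurin series log(1+x) = x - x^2/2 + x^3/3 - ..."""
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, terms + 1))

# Inside the interval of convergence the partial sums match log(1+x):
print(abs(log1p_series(0.5) - math.log(1.5)) < 1e-12)   # True

# Outside it (e.g. x = 2) the partial sums blow up, even though
# log(1 + 2) itself is perfectly well defined:
print(abs(log1p_series(2.0)) > 1e10)                    # True: the series diverges
```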

## Textbookish example

The example in the Taylor series in several variables section is very textbookish, and anyway is not a very good example since it only goes out to second order and essentially is only a routine calculus exercise. I suggest that the example be removed, or at the very least changed to something more suitable. Compare with the examples in Computing Taylor series which are actually needed to show the kinds of techniques one uses in practice. Sławomir Biały (talk) 14:05, 7 July 2010 (UTC)

## Hille's theorem

1. If Hille's theorem were a generalization of Taylor's theorem (as is claimed), then it is incomprehensible that the Hille limit converges under less strict conditions than those of the Taylor series.
2. Hille's expression is $\lim_{h\to 0^+}\sum_{n=0}^\infty \frac{t^n}{n!}\frac{\Delta_h^nf(a)}{h^n}$ while the Taylor's series can be written $\sum_{n=0}^\infty \lim_{h\to 0^+}\frac{t^n}{n!}\frac{\Delta_h^nf(a)}{h^n}$.
3. Hille's theorem may need an article of its own. Bo Jacoby (talk) 09:46, 23 July 2010 (UTC).
I'm sorry that you have difficulty comprehending the theorem. I think it's fairly straightforward. The difference is that the h limit is on the outside of the summation rather than the inside. The two limits can only be interchanged in some cases, e.g., when the summation converges uniformly in h. For an entire function bounded on the positive real axis, the terms of the summation satisfy a Cauchy-type estimate and the limit can be brought inside the summation. So for such functions, one does have
$\lim_{h\to 0^+}\sum_{n=0}^\infty \frac{t^n}{n!}\frac{\Delta_h^nf(a)}{h^n} = \sum_{n=0}^\infty \lim_{h\to 0^+}\frac{t^n}{n!}\frac{\Delta_h^nf(a)}{h^n}.$
Best, Sławomir Biały (talk) 12:14, 23 July 2010 (UTC)
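For readers wanting to see the displayed identity in action, here is a numerical sketch for f = exp with a = 0, t = 1 (function name ours). To sidestep round-off in the forward differences we use the closed form Δ_h^n exp(a) = exp(a)(e^h - 1)^n, so the inner sum equals exp(a)·exp(t(e^h - 1)/h), which tends to exp(a + t) = f(a + t) as h → 0+:

```python
import math

def hille_sum_exp(a, t, h, terms=80):
    """Inner sum of the Hille expression for f = exp, using the closed form
    Delta_h^n exp(a) = exp(a) * (e^h - 1)^n to avoid catastrophic cancellation."""
    r = math.expm1(h) / h            # (e^h - 1)/h  ->  1  as h -> 0+
    return math.exp(a) * sum((t * r) ** n / math.factorial(n)
                             for n in range(terms))

# As h shrinks, the sum approaches exp(0 + 1) = e = 2.71828...:
for h in (0.1, 0.01, 0.001):
    print(h, hille_sum_exp(0.0, 1.0, h))
```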
I've added references. Both the Hille and Phillips source and the Feller source refer to the theorem as a generalization of Taylor series, so hopefully that settles at least the first point (even if you yourself are not convinced). Sławomir Biały (talk) 12:54, 23 July 2010 (UTC)
With regard to point #3, here I have stated only a special case of the theorem in a way that generalizes the Taylor series. But the full version of the theorem applies to any continuous semigroup of operators, and gives essentially the Borel summability of the exponential series of the semigroup generator. With the translation semigroup, the generator is the derivative operator, and the exponential series is just the Taylor series. It may be appropriate to have a separate article in which the full theorem is discussed more completely. However, I think that it is also appropriate to have a discussion of the special case here, even if just to bring in a link to the Newton series at a relevant place. Exactly how to organize the discussion of the theorem here may evolve over time. Ultimately I would like to see, for instance, a section on generalizations and extensions that includes applications to infinite dimensions. This potential section could then also contain a discussion of Hille's theorem. Sławomir Biały (talk) 13:09, 23 July 2010 (UTC)
Thank you. I think I understand well enough, as I wrote the two expressions showing how the sum and the limit are placed, so you need not be either sorry or patronizing. The problem is the use of the word "generalization". For the function $e^{-x^{-2}}$ the Hille expression is not equal to the Taylor series, and so the Hille expression is not a generalization of the Taylor series. Am I correct? Bo Jacoby (talk) 13:49, 23 July 2010 (UTC).
I see. I've included two very good references that use the word "generalization" in this way, so that should settle the matter. Sławomir Biały (talk) 14:14, 23 July 2010 (UTC)
The article now answers this objection implicitly: "When the function f is analytic at a, the terms in the series converge to the terms of the Taylor series, and in this sense generalizes the usual Taylor series." Your example is non-analytic at a = 0. The Hille series is a generalization of the Taylor series of an analytic function in the sense that the function is equal to its series expansion regardless of analyticity considerations. (Here "series expansion" needs to be understood in a "Borel summation" sense.) Sławomir Biały (talk) 14:42, 23 July 2010 (UTC)

I suggest

1. that this subsection be moved to the bottom of the article, as it is less elementary than the rest of the article
2. that some formula like $\frac{d^nf}{dx^n}=\lim_{h\to 0^+}\frac{\Delta_h^nf}{h^n}$ be included to clarify the connection between Taylor series and Hille expression,
3. that the presentation "generalization of the Taylor series" be changed to "generalization of Taylor series of analytic functions" to avoid the above misunderstanding regarding generalization. Bo Jacoby (talk) 18:39, 23 July 2010 (UTC).
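The formula suggested in point 2 above can be spot-checked numerically (a sketch; the helper name is ours). The n-th forward difference is Δ_h^n f(a) = Σ_k (-1)^(n-k) C(n,k) f(a + kh), and Δ_h^n f(a)/h^n approaches the n-th derivative as h → 0+:

```python
import math

def forward_difference(f, a, h, n):
    """n-th forward difference: sum over k of (-1)^(n-k) * C(n, k) * f(a + k*h)."""
    return sum((-1) ** (n - k) * math.comb(n, k) * f(a + k * h)
               for k in range(n + 1))

# For f = exp at a = 0 every derivative equals 1, and the scaled
# differences converge to it:
for n in (1, 2, 3):
    approx = forward_difference(math.exp, 0.0, 1e-4, n) / 1e-4 ** n
    print(n, approx)   # each value is close to 1
```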
I'm not sure I agree with #1. There are parts of the article that are decidedly less elementary. Pretty much anyone with a good grasp of high school level calculus should be able to understand the statement of Hille's theorem, whereas having a good understanding of analytic functions and entire functions goes well beyond the standard high school curriculum. Probably many people with college degrees in mathematics will not be able to understand the paragraph about second category sets in Frechet spaces, and there is a list of comparatively sophisticated examples of applications in the "Convergence" section. If we move this small new addition of mine to the bottom, then I would like to see it expanded as well into a fully fledged "Generalizations" section (see my comment above). This would probably also mitigate any potential confusion as to whether this is a "generalization" in the appropriate sense. In the mean time, I think the section fits most naturally where it currently resides. Perhaps to address your third concern, the section should be retitled just "Hille's theorem" for now, until someone comes up with some concrete ideas for how to structure my proposed "Generalizations" section. Sławomir Biały (talk) 19:05, 23 July 2010 (UTC)

## Proposed example of application in cosmology

If the cosmological redshift is assumed to be a Doppler shift, then the universe looks as if it were expanding, with the expansion accelerating at about dH/dt = −0.5(H0)², where H is the Hubble parameter, H0 is its value at t=0, t is time, and RE is the Einstein radius (radius of curvature of space).

It is possible to demonstrate though, with simple Newtonian math, that if energy is conserved globally then the universe couldn't be expanding, since then the cosmological redshift Z(r) comes out as a special type of relativistic time dilation resulting in redshift Z(r) = exp(r/RE) − 1, and the acceleration of this alleged "expansion" comes out as the second term of the Taylor series of the presented Z(r) around r=0, which was observed in 1998 by the Supernova Cosmology Project team, less than one standard deviation off the dH/dt presented above.

So if one assumes that our universe is a stationary Einstein's universe then the cosmological redshift can't be the redshift resulting from Doppler shift but it might be the redshift resulting from the (relativistic) dynamical friction of photons (slowing down of proper time in deep space with distance from observer) and then one must come to a conclusion that the observed cosmological redshift Z(r) brings in natural way, as the second term of Taylor series, the "acceleration of expansion of universe", as a difference between Taylor series of the "observed expansion" Z(r) and the uniform expansion (Zu = r/RE + (r/RE)2 + ...) presently observed but considered by some cosmologists to be an action of "dark energy". Jim (talk) 01:00, 20 December 2010 (UTC)

I'll reiterate what I did in my edit summary. This content seems to be very offtopic for a general article about Taylor series. The Taylor approximation to first or second order appears throughout the sciences as a way to approximate nonlinear models. But this particular application seems very poorly selected, in part because the physics is not what most readers are likely to be familiar with (even most physicists have never studied cosmology), but also because Taylor series enter only in a very trivial manner and the rest of the text dwells excessively on the physics (which is irrelevant here). Sławomir Biały (talk) 12:06, 20 December 2010 (UTC)
It also is an uncited nonstandard cosmology to boot, essentially original research. jps (talk) 22:47, 23 December 2010 (UTC)

## Proposed change to notation

As a PhD Chemical Engineer I speak from the distinctly biased POV of a reader, not editor, of mathematical articles. I'd like to gauge consensus about changing the example for the multivariate expansion:

\begin{align} f(x,y) & \approx f(a,b) +(x-a)\, f_x(a,b) +(y-b)\, f_y(a,b) \\ & {}\quad + \frac{1}{2!}\left[ (x-a)^2\,f_{xx}(a,b) + 2(x-a)(y-b)\,f_{xy}(a,b) +(y-b)^2\, f_{yy}(a,b) \right], \end{align}

to

\begin{align} f(x,y) & \approx f(x_0,y_0) +(x-x_0)\, \frac{\partial {f}}{\partial {x}}(x_0,y_0) +(y-y_0)\, \frac{\partial {f}}{\partial {y}}(x_0,y_0) \\ & {}\quad + \frac{1}{2!}\left[ (x-x_0)^2\,\frac{\partial^2 {f}}{\partial {x}^2}(x_0,y_0) + 2(x-x_0)(y-y_0)\,\frac{\partial^2 {f}}{\partial {x} \partial {y}}(x_0,y_0) +(y-y_0)^2\, \frac{\partial^2 {f}}{\partial {y}^2}(x_0,y_0) \right] \end{align}

to render it more readily comprehensible. I'd suggest that examples such as this second order expansion are designed to allow a glimpse of the practical application of a theory for technical nonspecialists so the verbosity is justified.

Thanks for the great article and please consider this request. Doug (talk) 20:04, 26 March 2011 (UTC)
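As a concrete check that both notations above describe the same second-order approximation, here is a sketch for a specific choice f(x, y) = sin(x)·e^y about (x0, y0) = (0, 0), where f = 0, ∂f/∂x = 1, ∂f/∂y = 0, ∂²f/∂x² = 0, ∂²f/∂x∂y = 1, ∂²f/∂y² = 0 (the function and names are our illustration, not from the article):

```python
import math

def taylor2(x, y):
    """Second-order expansion of sin(x)*exp(y) about (0, 0): only the
    first-order x term and the mixed second-order term survive, giving x + x*y."""
    return 1.0 * x + 0.0 * y + (1 / math.factorial(2)) * (2 * x * y)

x, y = 0.1, 0.1
exact = math.sin(x) * math.exp(y)
print(exact, taylor2(x, y))
print(abs(exact - taylor2(x, y)) < 1e-3)   # True: the error is third order
```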

I have no objection to using the upright partial derivative, although I don't think the expansion point should be labeled $(x_0,y_0)$ because that section uses subscripts as indices. Ideally it should probably be $(a_1,a_2)$, and x should be $x_1$ and y should be $x_2$. Sławomir Biały (talk) 20:33, 26 March 2011 (UTC)

## Note re GAN nomination

(copied from review page accidentally started by nominator) Jezhotwells (talk) 19:09, 11 April 2011 (UTC) I started this review process as a recent author of the page Taylor's theorem, and naturally checking out the contents of this page. This is my first GAN suggestion and I am not quite familiar with the process outside what was said in Wikipedia:Reviewing good articles. My apologies.

It seems that while not perfect, this article is well above the typical "B class" articles. It has some helpful pictures, while some of them might benefit from editing, but I don't see this as a crucial flaw. The small number of in-line references is probably the biggest flaw. In general the page is very informative, very helpful for students and researchers with a number of explicit Taylor expansions (which I have not verified) and has a good coverage of the relationship of Taylor series to other subjects in mathematical analysis. While it might lead to stepping on thin ice regarding POV, I feel that this trap has been successfully avoided. Even if this review process is not favourable, I hope it initiates a final thrust for the devoted authors of this page to finish the great job. Lapasotka (talk) 10:17, 11 April 2011 (UTC)

I have nominated the false start for deletion. Please follow the instructions and don't start the review page when nominating. Thanks. Jezhotwells (talk) 19:09, 11 April 2011 (UTC)

## References

I suggest some references:

• MR1411907 Boas, Ralph P. A primer of real functions. Fourth edition. Revised and with a preface by Harold P. Boas. Carus Mathematical Monographs, 13. Mathematical Association of America, Washington, DC, 1996. xiv+305 pp. ISBN: 0-88385-029-X
• MR1916029 Krantz, Steven G.; Parks, Harold R. A primer of real analytic functions.

Second edition. Birkhäuser Advanced Texts: Basler Lehrbücher. [Birkhäuser Advanced Texts: Basel Textbooks] Birkhäuser Boston, Inc., Boston, MA, 2002. xiv+205 pp. ISBN: 0-8176-4264-1

• MR1234937 Ruiz, Jesús M. The basic theory of power series. Advanced Lectures in Mathematics. Friedr. Vieweg & Sohn, Braunschweig, 1993. x+134 pp. ISBN: 3-528-06525-7

There are also books on analytic geometry that would be relevant:

• MR1760953 de Jong, Theo; Pfister, Gerhard Local analytic geometry. Basic theory and applications. Advanced Lectures in Mathematics. Friedr. Vieweg & Sohn, Braunschweig, 2000. xii+382 pp. ISBN: 3-528-03137-9
• MR1131081 Łojasiewicz, Stanisław Introduction to complex analytic geometry. Translated from the Polish by Maciej Klimek. Birkhäuser Verlag, Basel, 1991. xiv+523 pp. ISBN: 3-7643-1935-6

Kiefer.Wolfowitz  (Discussion) 10:11, 13 April 2011 (UTC)

## Why does "Series expansion" redirect here?

Currently, Series expansion redirects here. I wonder if this is correct. A Taylor series is just one example of a series expansion. Others are Maclaurin series, Laurent series, Legendre polynomials, Fourier series, Zernike polynomials and several others.

I would suggest changing Series expansion into a short article defining the term "series expansion" and showing a list of links to articles on all kinds of series.

HHahn (Talk) 12:03, 19 May 2011 (UTC)

Good idea. Go ahead! Jakob.scholbach (talk) 13:56, 19 May 2011 (UTC)
Thanks. I did. Please have a look for "englishification" (I am not a native speaker of English). HHahn (Talk) 17:39, 21 May 2011 (UTC)

## Too many examples?

I think this page is overburdened by example calculations. Remember that Wikipedia is not supposed to be a textbook. I suggest making the section "List of Maclaurin series" its own page and collecting all other examples in one section, leaving only two or three of them. Lapasotka (talk) 15:35, 24 January 2012 (UTC)

I agree. Sławomir Biały (talk) 21:03, 24 January 2012 (UTC)

"The Greek philosopher Zeno considered the problem of summing an infinite series to achieve a finite result, but rejected it as an impossibility: the result was Zeno's paradox."

This is a gross misstatement of Zeno's Paradox: see Atomism and Its Critics (Pyle) for an extensive discussion of this. — Preceding unsigned comment added by GeneCallahan (talkcontribs) 04:12, 20 September 2012 (UTC)

## Eponym

From a recent edit summary: "The name origin needs to be defined right after the name itself" (as opposed to discussing the eponym in the second sentence, as the original version of the lead does). I have two issues with this. First, it seems very unlikely that we have any such guideline. If we do, then I would like a link. I have seen many articles that do not do this, so if there is a guideline, it seems to be largely ignored. Secondly, writing is typically clearer when each sentence expresses just one idea. Insisting that we cram all kinds of mandatory information into the first sentence only garbles the text. Sławomir Biały (talk) 21:09, 21 November 2012 (UTC)

Where the name comes from is important; it is not *more* important than a clear statement of what a Taylor series is. The current formulation is better than the proposed alternative. --JBL (talk) 03:13, 22 November 2012 (UTC)

## Taylor Series for arctan(x)

Doesn't the Taylor series for arctan(x) converge on the interval [-1,1]? It is listed as converging on the open interval (-1,1). — Preceding unsigned comment added by 70.164.249.253 (talk) 05:26, 21 January 2013 (UTC)

That's right, I've corrected it. --JBL (talk) 15:37, 21 January 2013 (UTC)
These series all assume a complex argument, so the disc |x|<1 is correct. One could say also that it converges at all boundary points except $x=\pm i$. I don't know if it's worth it though. Sławomir Biały (talk) 17:14, 21 January 2013 (UTC)

## Euler's Formula

This page would be a good place to derive exp(ix) = cos(x) + i sin(x) by adding their Taylor series, and thus exp(πi) + 1 = 0, which everyone loves. — Preceding unsigned comment added by 108.39.200.125 (talk) 12:51, 27 April 2013 (UTC)

Surely a better place to derive such a formula would be at the Euler's formula page. Lo and behold, among the proofs there is the power series one that you suggest. Sławomir Biały (talk) 12:55, 27 April 2013 (UTC)
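For the curious, the series computation being suggested really is only a few lines; a sketch (function name ours) summing the exponential series at an imaginary argument:

```python
import math

def exp_series(z, terms=40):
    """Partial sum of exp(z) = sum of z^n / n!  (z may be complex)."""
    return sum(z ** n / math.factorial(n) for n in range(terms))

# exp(ix) = cos(x) + i sin(x), checked at x = 1:
print(abs(exp_series(1j) - (math.cos(1) + 1j * math.sin(1))) < 1e-12)  # True

# and the special case everyone loves, exp(pi*i) + 1 = 0:
print(abs(exp_series(1j * math.pi) + 1) < 1e-12)                       # True
```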

## Small Tip

I'm not a native English speaker, so I could be wrong, but:

"The Taylor series of a real or complex-valued function ƒ(x) that is infinitely differentiable in a neighborhood of a real or complex number a is the power series " Why asking to be in a whole neiborhood, why not just in a point? You don't need absolutely the neiborhood, because you are just doing derivatives in that point, and using limits of other points, but you don't need all the neiborhood to be diferentiable.Sorry for the grammar. — Preceding unsigned comment added by Santropedro1 (talkcontribs) 06:28, 16 June 2013 (UTC)

If you put $x=a$ into the definition of the Taylor series, you just get that the whole Taylor series is $f(a)$, so there's really no "series" at a single point. Sławomir Biały (talk) 11:59, 16 June 2013 (UTC)
I see that my reply was not quite the point you were making. You're right that we only need infinite differentiability at $a$ in order to define the Taylor series. Sławomir Biały (talk) 13:27, 16 June 2013 (UTC)

2. It's hard to understand this part: "By integrating the above Maclaurin series we find the Maclaurin series for log(1 − x), where log denotes the natural logarithm", in the beginning of the article, in the Examples part. I don't get it; it should be made clearer. — Preceding unsigned comment added by Santropedro1 (talkcontribs) 07:44, 16 June 2013 (UTC)

Could you be more specific about the difficulty you're having with that statement? Sławomir Biały (talk) 11:59, 16 June 2013 (UTC)
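In case it helps other readers of this thread, the step in question is: integrate the geometric series 1/(1 − x) = 1 + x + x² + ⋯ term by term to get −log(1 − x) = x + x²/2 + x³/3 + ⋯. A numeric sketch (helper name ours):

```python
import math

def log_one_minus_x(x, terms=200):
    """log(1 - x) = -(x + x^2/2 + x^3/3 + ...), obtained by integrating
    the geometric series 1/(1 - x) = sum of x^n term by term."""
    return -sum(x ** n / n for n in range(1, terms + 1))

x = 0.3
print(abs(log_one_minus_x(x) - math.log(1 - x)) < 1e-12)   # True for |x| < 1
```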

## Multi-index notation

I see a lot of people replacing $\alpha!$ in the multivariable version of the Taylor series with $|\alpha|!$. First, let me say that $\alpha!$ is obviously correct. All one needs to do is to compute the $\alpha$ partial of the Taylor series based at $a$ and see that it agrees on the nose with $D^\alpha f(a)$.

I am aware that some textbooks have a $|\alpha|!$ in them, and it seems worthwhile to explain why this is also correct and agrees with what we have in the article. In these texts, the nth term of the Taylor series is not expressed with partial derivatives, but instead a term of the form

$\frac{1}{n!}D^nf(a)(x-a)\text{ or }\frac{1}{n!}\frac{d}{dt}_{t=0} f(a+t(x-a))$

or any number of other obviously equivalent forms. If you expand these expressions out in partials and then use commutativity of mixed partials, there is a multinomial coefficient of $\binom{n}{\alpha}$ multiplying $D^\alpha f(a)$ because it appears this many times in the expansion. Now, observing that

$\binom{n}{\alpha}\frac{1}{n!} = \frac{1}{\alpha!}$

explains why our formula agrees with this alternative one. Sławomir Biały (talk) 21:28, 16 November 2013 (UTC)
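The closing identity can be spot-checked mechanically (names ours): the multinomial coefficient is C(n; α) = n!/(α₁!⋯α_d!) with n = |α|, so dividing by n! leaves exactly 1/α!:

```python
import math

def multinomial(alpha):
    """Multinomial coefficient C(n; alpha) = n! / (alpha_1! * ... * alpha_d!),
    where n = |alpha| is the sum of the entries."""
    n = sum(alpha)
    return math.factorial(n) // math.prod(math.factorial(a) for a in alpha)

# Check C(n; alpha) / n! == 1 / alpha! for a few multi-indices:
for alpha in [(2, 1), (1, 1, 1), (3, 0, 2)]:
    n = sum(alpha)
    lhs = multinomial(alpha) / math.factorial(n)
    rhs = 1 / math.prod(math.factorial(a) for a in alpha)
    print(alpha, math.isclose(lhs, rhs))   # True each time
```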

## Very unclear section

Wikipedia is supposed to be understandable to everybody, even nonmathematicians. Normally when an article talks about numbers, people implicitly assume it's only dealing with real numbers, but the section "Analytic functions" suddenly switches to complex numbers without saying at the beginning that it's doing so, and uses the term "open disk" without ever having mentioned complex numbers even once earlier in the article. That section ought to be about real numbers instead and say a function is analytic at a if there exists a number r>0 such that it's equal to its Taylor series centred at a for all x in the open interval (a - r, a + r). Maybe there could instead be a separate section later in the article about Taylor series of functions of complex numbers.

There's also a possible alternate definition of a function being analytic at a. Maybe the article should just say a function is analytic at a if and only if there exists a positive number r such that there exists a way to express it as an infinite sum of powers of x - a in the open interval (a - r, a + r). The reason for the alternate definition is that all functions that can be expressed as an infinite sum of that form in that interval are already equal to their Taylor series in that interval. Blackbombchu (talk) 00:11, 27 November 2013 (UTC)

Why should it deal with real numbers only? I don't agree with that at all. The article talks about both real and complex numbers. Leaving out complex numbers would be a serious omission. Sławomir Biały (talk) 01:52, 27 November 2013 (UTC)

## Much better looking way to write all those mathematical expressions in the Examples section.

$(x-1)-\frac{1}{2}(x-1)^2+\frac{1}{3}(x-1)^3-\frac{1}{4}(x-1)^4+\cdots,\!$ was written by writing $(x-1)-\frac{1}{2}(x-1)^2+\frac{1}{3}(x-1)^3-\frac{1}{4}(x-1)^4+\cdots,\!$ in the code. It could instead easily be made to look like (x - 1) - 1/2(x - 1)² + 1/3(x - 1)³ - 1/4(x - 1)⁴ + …, by writing (x - 1) - {{sfrac|1|2}}(x - 1)<sup>2</sup> + {{sfrac|1|3}}(x - 1)<sup>3</sup> - {{sfrac|1|4}}(x - 1)<sup>4</sup> + …, in the code. The same can be done for all of the expressions in that section except for the last one, because of the sigma notation, and the second last one, because of a combination of a superscript and a subscript. I think those 2 expressions should instead be made to look like log(x0) + 1/x0(x - x0) - 1/${x_0^2}$(x - x0)²/2 + … and 1 + x¹/1! + x²/2! + x³/3! + x⁴/4! + x⁵/5! + … = 1 + x + x²/2 + x³/6 + x⁴/24 + x⁵/120 + … = $\sum_{n=0}^\infty$ xⁿ/n!. I will edit all parts of mathematical expressions that can be converted from images to text in the article if no one opposes it in the next 5 days. Maybe that will start the wiki evolving to have a source code that can handle even more complex expressions than superscripts, subscripts, and fractions. Blackbombchu (talk) 01:38, 27 November 2013 (UTC)

Formulas that appear on their own line should normally be displayed using latex rather than HTML. The latex code supports a variety of different output formats and is more easily maintainable than html alternatives. When formulae are inline, then opinions among editors are evenly split between those that prefer latex and those that prefer HTML, but it's considered inappropriate (in the spirit of WP:RETAIN) to change from one style to another in an article. Sławomir Biały (talk) 01:49, 27 November 2013 (UTC)
Maybe just $a = x_{0}$ from the Examples section and $(a,b) = (0,0)$ from the Example section should be edited into regular text since they're in the middle of a line with regular text. Blackbombchu (talk) 02:21, 27 November 2013 (UTC)
Yes, I agree with that proposed change. Sławomir Biały (talk) 02:30, 27 November 2013 (UTC)
Basically we keep LaTeX for large formulas in the hope that one day the software folks will figure out how to display it in a size compatible with the surrounding text. Apparently it is quite a hard problem. 150.203.160.15 (talk) 03:13, 28 November 2013 (UTC)
Wolfram MathWorld already did that. Maybe Wikipedia could interact more with Wolfram MathWorld to find out how to solve that problem. Don't tell me this belongs in a proposal. I already suggested in a proposal that Wikipedia can probably do it if Wolfram MathWorld did it. Blackbombchu (talk) 02:54, 3 December 2013 (UTC)
You can enable MathJax under preferences, but it's only in beta at the moment. Historically, Wikipedia has been rather slow at rolling out new software because of the need to support a large variety of legacy software (*cough* IE 7 *cough*). But if you're bothered by the way latex equations display, then you should enable this option. Sławomir Biały (talk) 16:02, 3 December 2013 (UTC)

## Taylor series do not need to represent the (or indeed any) function

I take issue with the opening sentence of this article "a Taylor series is a representation of a function as an infinite sum of terms that are calculated from the values of the function's derivatives at a single point". While there is no doubt that Taylor used the series to represent the original function, and that in many cases representing the function is what one has in mind when forming the Taylor series, it is also a fact that any formal series occurs as the Taylor series of some smooth function R → R (indeed of infinitely many of them). Thus in general there is nothing one can say about the coefficients of a Taylor series, nor therefore about its convergence. I think the article should be honest about this, and not say anything about Taylor series representing functions, unless the context is sufficiently narrow (for instance if one starts from an analytic function) to ensure that this is true. And it would be good to state explicitly that a Taylor series is a formal power series. Marc van Leeuwen (talk) 07:45, 2 December 2013 (UTC)

I agree with this. An easy fix is to replace "function" in the first sentence with "analytic function". I'll go ahead and do this. On second thought, part of the lead elaborates on the issue of convergence, so it might be better to remove "representation of a function" in favor of something like "power series". Sławomir Biały (talk) 13:06, 2 December 2013 (UTC)

## Whole lot of missing information in the Analytic functions section

I'm pretty sure that a real function with domain R can be infinitely differentiable without being analytic, but all complex functions with domain C that are differentiable on C are also infinitely differentiable and analytic on C. For that reason, I think it would be better for that section to define analytic for real functions and also define it the same way for complex functions, but discuss the criterion of being analytic for a complex function being pretty much meaningless. I can't think of a proof but maybe someone could try and hunt for that information in a reliable source. I think a complex function can be infinitely differentiable at a point without being analytic at that point, but it can't be singly differentiable in an open disc without being analytic in that open disc. Even the function that assigns a + 2bi to a + bi for all real numbers a and b is not differentiable. I was using the same definition of a derivative for complex functions as for real functions. Blackbombchu (talk) 02:13, 3 December 2013 (UTC)

In either the real or complex case, infinite differentiability at a point does not guarantee analyticity. The Cauchy-Goursat theorem tells you that if a function is complex differentiable in a whole disc, then it's analytic in the disc, but this is a much stronger condition than smoothness at a single point. Sławomir Biały (talk) 16:05, 3 December 2013 (UTC)

## Alternating "Taylor" and "Maclaurin" in the Examples section

Maybe it's just me, but does anyone else find the constantly alternating names for the series irritating? Thomas J. S. Greenfield (talk) 00:30, 30 March 2014 (UTC)

It's a little jarring, but it seems that the idea is to get both terms into play at the beginning. This might be helpful to some readers (even if I personally would just call everything a "Taylor series"). Sławomir Biały (talk) 12:35, 30 March 2014 (UTC)
Okay...Thomas J. S. Greenfield (talk) 14:39, 29 April 2014 (UTC)

## Taylor series in several variables

First formula in the above-mentioned section of the article: shouldn't there be the factorial of the sum of the indices instead of the product of the factorials of the individual indices? — Preceding unsigned comment added by 213.150.1.136 (talkcontribs) 2014-09-11T16:29:33

No, see above. Sławomir Biały (talk) 16:58, 11 September 2014 (UTC)
To 213.150.1.136: Perhaps you are confused by thinking that the factorial of the sum is less than the product of the factorials. In fact, the reverse is true: 2!·5!·3!<(2+5+3)! . JRSpriggs (talk) 06:40, 12 September 2014 (UTC)

## Suggested summation with 00 undefined

$f(a)+\frac {f'(a)}{1!} (x-a)+ \frac{f''(a)}{2!} (x-a)^2+\frac{f^{(3)}(a)}{3!}(x-a)^3+ \cdots.$

$= \sum_{n=0} ^ {\infty} \frac {f^{(n)}(a)}{n!} \, (x-a)^{n}$

$= f(a)+ \sum_{n=1} ^ {\infty} \frac {f^{(n)}(a)}{n!} \, (x-a)^{n}$

The latter representation having the advantage of not having to define $0^0.$ Some indication of the controversy over the definition of $0^0$ can be found at Exponentiation under the heading "History of differing points of view."

--Danchristensen (talk) 04:29, 12 May 2015 (UTC)
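As a side note on this thread, most programming languages also adopt the empty-product convention for integer exponents, so both forms of the summation evaluate identically in code; a sketch comparing them for f = exp (function names ours):

```python
import math

print(0.0 ** 0)   # 1.0: Python follows the empty-product convention

def taylor_exp_from0(x, a=0.0, terms=30):
    """Taylor series of exp about a, summed from n = 0 (relies on (x-a)**0 == 1)."""
    fa = math.exp(a)
    return sum(fa / math.factorial(n) * (x - a) ** n for n in range(terms))

def taylor_exp_split(x, a=0.0, terms=30):
    """The same series with the n = 0 term written separately, as proposed above."""
    fa = math.exp(a)
    return fa + sum(fa / math.factorial(n) * (x - a) ** n
                    for n in range(1, terms))

print(taylor_exp_from0(0.0) == taylor_exp_split(0.0))   # True: both give exactly 1.0
print(abs(taylor_exp_split(1.0) - math.e) < 1e-12)      # True
```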

In this setting $x^0=1$ even if x is zero. See Exponentiation#Zero to the power zero for explanation. On this point, it is important that the article should agree with most sources on the subject. The article does explain what $0!$ and $x^0$ are. By inventing our own way of writing power series, we would have the unintended effect of making the article more confusing vis a vis most sources, instead of less confusing. So I disagree strongly with the proposed change, unless it can be shown to be a common convention in mathematics sources. Sławomir Biały (talk) 11:17, 12 May 2015 (UTC)
$0^0$ is usually left undefined on the reals. --Danchristensen (talk) 14:01, 12 May 2015 (UTC)
That's a common misconception. For power functions associated with integer exponents, exponentiation is defined inductively, by multiplying n times. For x^0, this is an empty product, so equal to unity. It's true that if we are looking at the real exponential, then x^r is defined as $\exp(r\log x)$. But that actually refers to a different function. For more details, please see the link I provided. It is completely standard in this setting to take 0^0=1. See the references included in the article. Sławomir Biały (talk) 14:37, 12 May 2015 (UTC)
"A common misconception?" Really? --Danchristensen (talk) 15:50, 12 May 2015 (UTC)
This discussion page is not a forum for general debate. If you have sources you want us to consider, please present them. Otherwise, I regard this issue as settled, per the sources cited in the article. Sławomir Biały (talk) 16:15, 12 May 2015 (UTC)

After this edit, the proposed text now includes a passage "The latter representation having the advantage of not having to define $0^0.$" According to whom is this an advantage? What secondary sources make this assertion? Sławomir Biały (talk) 11:57, 13 May 2015 (UTC)

As pointed out in the original article, the summation there depends on defining $0^0=1$, a controversial point for many. I present a version that does not depend on any particular value for $0^0.$ --Danchristensen (talk) 15:13, 13 May 2015 (UTC)
You have still not presented any textual evidence that x^0=1 is remotely controversial in the setting of power series and polynomials. And the consensus among editors and sources alike appears to contradict this viewpoint. Sławomir Biały (talk) 16:08, 13 May 2015 (UTC)
Some indication of the controversy can be found at Exponentiation under the heading "History of differing points of view." --Danchristensen (talk) 17:08, 13 May 2015 (UTC)
Taylor series and polynomials do not appear to be listed there. Sławomir Biały (talk) 17:20, 13 May 2015 (UTC)
Key points: "The debate over the definition of $0^0$ has been going on at least since the early 19th century... Some argue that the best value for $0^0$ depends on context, and hence that defining it once and for all is problematic. According to Benson (1999), 'The choice whether to define 0^0 is based on convenience, not on correctness.'... [T]here are textbooks that refrain from defining $0^0.$" --Danchristensen (talk) 17:40, 13 May 2015 (UTC)
Yes, and abundant reliable sources make the decision in the context of Taylor series and polynomials to define 0^0 = 1, and this article is rightly written to reflect these sources. Whether you happen to think this consensus (of both reliable sources and editors of this page) is morally right or not is totally irrelevant. --JBL (talk) 17:51, 13 May 2015 (UTC)
Morally right??? Come now. As we see in Exponents, there is some controversy -- opposing camps, if you will -- on the matter of $0^0.$ This article would be more complete with at least a nod in the direction of, if not an endorsement of, the "other camp" in this case. The summation I suggested is not anything radical. It follows directly from the original summation. It simply does not depend on any particular value for $0^0$. --Danchristensen (talk) 18:33, 13 May 2015 (UTC)
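For reference, the two representations under discussion in this thread can be written side by side (my paraphrase of the proposal, not a quote from either version of the article):

```latex
% Standard form: the n = 0 term involves (x-a)^0, which at x = a
% is 0^0 and so requires the convention 0^0 = 1.
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x-a)^n

% Proposed variant with the n = 0 term split off,
% avoiding any appeal to a value for 0^0:
f(x) = f(a) + \sum_{n=1}^{\infty} \frac{f^{(n)}(a)}{n!} (x-a)^n
```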
See: "Technically undefined..." --Danchristensen (talk) 18:56, 13 May 2015 (UTC)
This is obviously not a reliable source, and also does not support your view that we should write Taylor series in a nonstandard way. If you do not have a reliable source, there is no point in continuing this conversation. (Actually there is no point whether or not you have a reliable source because it is totally clear that there is not going to be consensus to make the change that you want, but it is double-extra pointless without even a single reliable source to reference.) --JBL (talk) 19:53, 13 May 2015 (UTC)
P.S. If you want someone to explain "morally right" or why the history section of a different article is irrelevant, please ask on someone's user talk page instead of continuing to extend and multiply these repetitive discussions on article talk pages. --JBL (talk) 19:53, 13 May 2015 (UTC)
It shows how one author worked around $0^0$ being undefined for a power series -- using an idea similar to the one I proposed for Taylor series. See link to textbook at bottom of page. The relevant passage is an excerpt. --Danchristensen (talk) 20:05, 13 May 2015 (UTC)
Here are some sources that do not make this special distinction for Taylor series: G. H. Hardy, "A course of pure mathematics", Walter Rudin "Principles of mathematical analysis", Robert G. Bartle "Elements of real analysis", Lars Ahlfors "Complex analysis", Antoni Zygmund "Measure and integral", George Polya and Gabor Szego "Problems and theorems in analysis", Erwin Kreyszig "Advanced engineering mathematics", Richard Courant and Fritz John, "Differential and integral calculus", Jerrold Marsden and Alan Weinstein, "Calculus", Serge Lang, "A first course in calculus", Michael Spivak "Calculus", George B. Thomas "Calculus", Kenneth A. Ross "Elementary Analysis: The Theory of Calculus", Elias Stein "Complex analysis". I've only included sources by mathematicians notable enough to have their own Wikipedia page. I assume we should go with the preponderance of sources on this issue, per WP:WEIGHT. Sławomir Biały (talk) 00:34, 14 May 2015 (UTC)
It would be interesting to hear how they justified their positions, if they did. Was it correctness, or, as Benson (1999) put it, simply convenience? Or doesn't it matter? --Danchristensen (talk) 02:49, 14 May 2015 (UTC)
It doesn't matter. --JBL (talk) 04:03, 14 May 2015 (UTC)
Agree, it doesn't matter. We just go by reliable sources, not our own feelings about their correctness. Sławomir Biały (talk) 11:18, 14 May 2015 (UTC)

Nicolas Bourbaki, in "Algèbre", Tome III, writes: "L'unique monôme de degré 0 est l'élément unité de $A[(X_i)_{i\in I}]$; on l'identifie souvent à l'élément unité 1 de $A$" ("The unique monomial of degree 0 is the unit element of $A[(X_i)_{i\in I}]$; it is often identified with the unit element 1 of $A$"). Sławomir Biały (talk) 11:35, 14 May 2015 (UTC)

Also from Bourbaki Algebra p. 23 (which omits the "often", and deals with exponential notation on monoids very clearly):
Let E be a monoid written multiplicatively. For $n \in \mathbf{Z}$ the notation $\top^n x$ is replaced by $x^n$. We have the relations
$x^{m+n} = x^m \cdot x^n$
$x^0 = 1$
$x^1 = x$
$(x^m)^n = x^{mn}$
and also $(xy)^n = x^n y^n$ if x and y commute.
Quondum 14:06, 14 May 2015 (UTC)
(Shouldn't that be $x^0 = e$, where e is the multiplicative identity of E?) Have they not simply defined $x^0=e?$ Note that natural numbers have two identity elements: 0 for addition, 1 for multiplication. --Danchristensen (talk) 17:58, 14 May 2015 (UTC)
I'm simply quoting exactly from Bourbaki (English translation). They are dealing with a "monoid written multiplicatively", where they seem to prefer denoting the identity as 1. Just before this, they give the additively written version with 0. And before that they use the agnostic notation using the operator ⊤, and there they use the notation e. —Quondum 18:41, 14 May 2015 (UTC)
The problem with normal exponentiation on N is that you have two inter-related monoids on the same set (a semi-ring?). Powers of the multiplicative identity 1 are not the problem. The problem is with powers of the additive identity 0. It's a completely different structure. --Danchristensen (talk) 21:09, 14 May 2015 (UTC)
How so? With an operation thought of as addition, we use an additive notation, and change the terminology, as well as the symbols. We could call it "integer scalar multiplication" or whatever instead of "exponentiation"; I'd have to see what Bourbaki calls it (ref. not with me at the moment). Instead of $x^n$, we write $n.x$, meaning x+...+x (n copies of x). The entire theory of exponentiation still applies. —Quondum 22:16, 14 May 2015 (UTC)
Please: this is not a forum! --JBL (talk) 22:42, 14 May 2015 (UTC)

An editor recently added an explanation " the next term would be smaller still and a negative number, hence the term x^9/9! can be used to approximate the value of the terms left out of the approximation. " for the error estimate in the series of sin(x). This explanation is just nonsense: the next term might be positive or negative (depending on the sign of x), and the sign of that term together with the magnitude of the next term (which might or might not be smaller, depending on the magnitude of x) is simply not enough information to make the desired conclusion, even in the real case. More importantly, it is simply not necessary to justify this claim here, and it distracts from the larger point being made in this section. --JBL (talk) 18:41, 13 July 2015 (UTC)

As I expected, your explanation is very poor, and you do need to provide one if you are going to revert a good edit. Yes, I see in this particular example how the signs alternate, and I also see that each term, in this particular example, is increasingly small. So, what precisely is your objection? I took the original explanation and expanded it just a little, saying that further terms are small and the next term in particular is negative, hence the term x^9/9! is a good approximation of the error introduced by the truncation. You need to do a number of things. First you need to learn to read, second you need to learn to respect others' edits. If an edit is completely off the mark, it should be deleted. If the edit is pretty close, then you should consider editing the edit to improve things just that much more. But if you are of the opinion that if one single thing is wrong with the edit then deleting it is the answer, we could extrapolate that attitude to the whole of Wikipedia, and in the end we will have nothing left, as Wikipedia is shot full of errors. My edit was not completely off the mark, hence it should be left and possibly improved. Please read the original material and then read my edits. Finally, if you are squatting on this article in the mistaken belief that you should be the arbiter of the "truth", you need to move to one side. I did not start a reversion war, you did. Thank you Zedshort (talk) 19:37, 13 July 2015 (UTC)
Before I chime in on this, both of you have to stop edit warring over this (and both of you know that). @Zedshort: in particular I think your comment above is unnecessarily confrontational.
Now, as for the content: it is true that Joel's objections over the sign of the error term are valid. The next term is $-\frac{x^{11}}{11!}$, which is negative if x is positive but positive if x is negative. Hence it is not prudent to refer to the error being "positive" or negative. In short I agree with Joel on this, although I will say that Zedshort is correct that the next error terms are not bigger in magnitude, because of Taylor's theorem.--Jasper Deng (talk) 19:47, 13 July 2015 (UTC)
Yes, I see that I assumed the value of x to be positive. But the result is the same if dealing with negative values of x, as the next term is opposite in sign to the x^9/9! term; further terms are diminishingly small, and hence the x^9/9! term provides an upper bound on the error introduced by the truncation. I will not apologize for being direct and to the point with someone regardless of who they are. Zedshort (talk) 20:04, 13 July 2015 (UTC)
You would want to say then that the next term is opposite in sign or something along those lines. But I don't think it's necessary. Whatever its sign, the validity of the truncation is guaranteed by Taylor's theorem. All the terms of the exponential function's Taylor series are positive for positive x, but that doesn't change anything. In other words, I'd not want to imply to the reader that the sign of the terms have anything to do with it.--Jasper Deng (talk) 20:15, 13 July 2015 (UTC)
Jasper Deng is right that the correct explanation is by Taylor's theorem. Zedshort's attempted version is not salvageable: in addition to the error about the sign, it is simply not true that the contributions from subsequent terms of the Taylor series get smaller and smaller in absolute value. At x = 12, the term x^9/9! is about 14000 and the term x^11/11! is about 18000. The error of the 7th-order polynomial at x = 12 is about 5000, but the fact that 5000 < 14000 does not follow from anything written by Zedshort.
Even if the argument weren't wrong in all respects, it is unnecessary where placed and distracts from the point of the section. --JBL (talk) 20:53, 13 July 2015 (UTC)
That, however, is incorrect in general. Please see the article on Taylor's theorem. For the series to converge, subsequent terms must tend to zero. Therefore I can always find a point at which the error introduced by subsequent terms is less than any given positive error, for a given x. It may not be the 7th-order. It could be higher-order. But at some point, it is true that subsequent terms' contributions tend to zero.--Jasper Deng (talk) 21:01, 13 July 2015 (UTC)
Yes, of course for fixed x they eventually go to zero (and for the sine it is even true that they eventually go monotonically to zero in absolute value, which need not be true in general), but there is no way to use that to rescue the edits in question. --JBL (talk) 21:15, 13 July 2015 (UTC)
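As a quick sanity check of the magnitudes quoted above (the terms of the sine series at x = 12 and the error of the 7th-order polynomial), here is a hypothetical Python sketch; it is only a verification aid, not proposed article content:

```python
import math

x = 12.0

# The next omitted term of the 7th-order sine polynomial, x^9/9! ...
term9 = x**9 / math.factorial(9)    # ≈ 14218, i.e. "about 14000"

# ... and the term after that, x^11/11!, which is LARGER in magnitude at x = 12
term11 = x**11 / math.factorial(11)  # ≈ 18614, i.e. "about 18000"

# 7th-order Maclaurin polynomial for sin(x)
p7 = x - x**3/math.factorial(3) + x**5/math.factorial(5) - x**7/math.factorial(7)
error = abs(math.sin(x) - p7)        # ≈ 5311, i.e. "about 5000"

print(term9, term11, error)
```

This bears out the point that the terms need not shrink monotonically outside a neighborhood of the expansion point, even though for fixed x they eventually tend to zero.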

I agree with the revert. This edit gives three false impressions: (1) that more terms in the Taylor series always leads to a better approximation, (2) that the error in the Taylor approximation is never greater than the next term of the Taylor series, and (3) that the sign of the next term in the Taylor series is relevant to reckoning the error. (Regarding the second item, in case it is not already clear, Taylor's theorem is what gives the actual form of the error, as well as estimates of it. The fact that $x^9/9!$ is the next term of the Taylor series for sin(x) is only of peripheral relevance.) Reinforcing these misconceptions works against what the section tries to achieve, which is to emphasize the problems that can arise when applying the Taylor approximation outside the interval of convergence. Sławomir Biały (talk) 22:23, 13 July 2015 (UTC)

I've reverted the edit as it seems that consensus is pretty much against the edit in question.--Jasper Deng (talk) 23:12, 13 July 2015 (UTC)