Talk:Asymptotic expansion

From Wikipedia, the free encyclopedia
WikiProject Mathematics (Rated Start-class, Mid-importance; Field: Analysis)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.

Some points of organisation[edit]

For an asymptotic scale, I also know the definition of an (arbitrary) family of functions S = {f_m} such that for any two distinct indices m and m', either f_m = o(f_{m'}) or f_{m'} = o(f_m).

Then this induces an evident ordering on the set of indices, and asymptotic expansions are defined in the same way, as finite sums over m ≤ m', such that the difference is negligible w.r.t. f_m for all m < m'.

Without being too general, we should allow at least for negative indices, e.g. for Laurent series, but maybe there is really a need for an indexing set other than N, in order to be able to develop on the scale x^a (log x)^b, for example. MFH: Talk 14:00, 24 May 2005 (UTC)

Yes. There is quite a big subject - orders of infinity. Charles Matthews 11:38, 25 May 2005 (UTC)
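The x^a (log x)^b scale mentioned above can be checked numerically (a Python sketch of my own; the particular exponents are an arbitrary illustration, not from the thread). On this scale the index set is pairs (a, b) ordered lexicographically:

```python
import math

# On the scale x^a (log x)^b (as x -> infinity), the indices are pairs (a, b)
# ordered lexicographically: phi_{a,b} = o(phi_{a',b'}) iff (a, b) < (a', b').
def phi(a, b, x):
    return x**a * math.log(x)**b

# (1, 0) < (1, 1): x = o(x log x), so the ratio tends to 0 as x grows.
for x in [1e3, 1e6, 1e9]:
    print(x, phi(1, 0, x) / phi(1, 1, x))
```

The ratio here is 1/log x, which tends to 0 only logarithmically, so such terms cannot be separated by any power scale alone.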

the definition[edit]

An asymptotic expansion is NOT a series but a polynomial approximation in a neighborhood of a point, so that the function is equal to its polynomial approximation plus a remainder. For instance, Taylor's formula with remainder is an asymptotic expansion, but it often happens, in number theory for instance, that asymptotic expansions with only one or two terms are known.

Edmund Landau's little-oh notation is often used to give the size of the remainder. See Apostol, p. 370.

See for instance the French Wikipedia article fr:Développement_limité.

Examples of asymptotic expansions include the Best Constant Approximation which leads to continuity, Best Affine Approximation which leads to differentiability, Best Quadratic Approximation which leads to second differentiability, etc. In general, this leads to Lagrange's treatment of the differential calculus without recourse to limits.

Assimilating expansions and series is a VERY SERIOUS ERROR with much potential for confusion.

Schremmer (talk) 19:26, 21 November 2011 (UTC)
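The little-oh remainder in Taylor's formula can be illustrated numerically (a Python sketch of my own, using exp(x) = 1 + x + o(x) at x = 0 as the example; this is not taken from Apostol):

```python
import math

# Taylor's formula with remainder, viewed as an asymptotic expansion at x = 0:
# exp(x) = 1 + x + R(x), where R(x) = o(x) as x -> 0.
def remainder(x):
    return math.exp(x) - (1.0 + x)

# R(x)/x should shrink toward 0 as x -> 0, witnessing the little-oh bound.
for x in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(x, remainder(x) / x)
```

The printed ratios behave like x/2, consistent with the next term x^2/2 of the full Taylor series.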

I think that

is unnecessarily limiting. I think this is enough:

If you agree, pls change it. --Zero 14:58, 22 August 2005 (UTC)

Just a point on clarity: should the definition refer to these as a `class' of functions instead of a `series'? What we currently have,

`is a formal series of functions which has the property that truncating the series after a finite number of terms'

seems not to make sense; what we actually mean is that asymptotic expansions form a class of functions obtained by truncating a series expansion after a finite number of terms. (talk) 14:53, 3 May 2009 (UTC)

differentiation and integration[edit]

My question is: if we have f(x) ~ g(x), then is it correct that

f'(x) ~ g'(x) and ∫ f(x) dx ~ ∫ g(x) dx?

In several books I have read that this is true; however, I'm not completely sure. —The preceding unsigned comment was added by (talk) 09:52, 11 January 2007 (UTC).

The first one is certainly not true: imagine that 'g' is the same as 'f' except that it has very fine wriggles that get smaller quickly but not flatter. The second one might be true most of the time, perhaps with some sanity conditions required. --Zero talk 12:09, 11 January 2007 (UTC)
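The "fine wriggles" objection can be made concrete (my own choice of example, not from the thread): take f(x) = x and g(x) = x + sin(x^2)/x as x → ∞. Then g/f → 1, so f ~ g, but g' oscillates and is not ~ f' = 1.

```python
import math

# f ~ g as x -> infinity, yet the derivatives are not asymptotic:
#   g'(x) = 1 + 2*cos(x**2) - sin(x**2)/x**2
# oscillates between about -1 and 3, while f'(x) = 1.
def f(x): return x
def g(x): return x + math.sin(x**2) / x

def g_prime(x):
    return 1 + 2 * math.cos(x**2) - math.sin(x**2) / x**2

for x in [10.0, 100.0, 1000.0]:
    print(x, g(x) / f(x), g_prime(x))
```

The ratio g/f settles to 1 (the wriggle sin(x^2)/x is negligible next to x), but the wriggle's slope 2x·cos(x^2)/x stays of size 2 forever, which is exactly the "smaller but not flatter" picture.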

another example[edit]

Stirling's approximation is another important example - should it be added to the main page? Lavaka 00:23, 15 February 2007 (UTC)
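For reference, the leading term of Stirling's expansion is easy to check numerically (a Python sketch; the 1/(12n) error estimate is the standard next-term heuristic):

```python
import math

# Stirling's approximation, the leading term of the asymptotic expansion of n!:
#   n! ~ sqrt(2*pi*n) * (n/e)**n   as n -> infinity.
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# The relative error behaves like 1/(12n), shrinking as n grows.
for n in [5, 10, 20]:
    exact = math.factorial(n)
    print(n, stirling(n) / exact)
```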

derivation of asymptotic expansion of error function[edit]

Here is a derivation of the asymptotic expansion of the error function (PDF, Proposition 2.10). (talk) 00:09, 9 April 2008 (UTC)
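For readers without the linked PDF, the classical asymptotic expansion of the complementary error function can be checked numerically (a Python sketch of my own; I have not verified that it matches the PDF's Proposition 2.10):

```python
import math

# Classical asymptotic expansion, as x -> infinity:
#   erfc(x) ~ exp(-x**2)/(x*sqrt(pi)) * sum_{n>=0} (-1)**n * (2n-1)!! / (2*x**2)**n
def erfc_asymptotic(x, terms):
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= -(2 * n + 1) / (2 * x * x)  # ratio of consecutive terms
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * total

# Compare the first few truncations against the library value.
for terms in [1, 2, 3, 4]:
    print(terms, erfc_asymptotic(3.0, terms), math.erfc(3.0))
```

Even a handful of terms gives several correct digits at x = 3, although the series diverges if summed indefinitely.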

halting summation[edit]

Sometimes books say we truncate an asymptotic series "when it begins to diverge". What does this mean? If you know exactly what this means could you add it to the article? But if you just have a rough idea I would appreciate if you could explain here on the talk page. (talk) 04:18, 24 April 2008 (UTC)

It means "stop summing the series when you get to the smallest term, and do not continue summing after that". Remarkably, this often gives the very best approximation for whatever thing it is that you are trying to compute, and the remaining error is often smaller the smallest term (at which you stopped summing). Although this is "common knowledge" and regularly used in numerical applications, I have to admit I have never seen a proof or discussion that explains why this works. It would be nice to have a reference and a proof sketch for this. linas (talk) 03:42, 31 January 2010 (UTC)