Talk:Asymptotic analysis


Notation query: f ~ g can be written with the keyboard character ~. HTML isn't needed. For my browser this is fine.

Charles Matthews 11:15, 20 Oct 2004 (UTC)

On my current browser it looks fine, but whatever I was using before displayed it too high. I guess it is pot luck according to the font one is using. Now I see there is an HTML entity: &sim;, aka ∼. --Zero 11:43, 20 Oct 2004 (UTC)


To Michael's question: "Do some people prefer \ldots to \cdots between plus signs? Why?", my answers are (1) they have no artistic taste, (2) they are so old that they learned their typography on typewriters where temporarily shifting the baseline by a fraction of a line was too much trouble, (3) none of the above cos I just made them all up. ;-) -- Zero 03:59, 6 Nov 2004 (UTC)
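(For anyone following the typography point, a minimal LaTeX comparison of the two commands; the visible difference is the vertical position of the dots:)

 % \cdots centers the dots on the math axis, level with the plus signs:
 $a_1 + a_2 + \cdots + a_n$
 % \ldots places the dots on the baseline, like a textual ellipsis:
 $a_1 + a_2 + \ldots + a_n$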


Could somebody please include some layman's language in this article at the top; just a sentence saying that asymptotic analysis covers how long something takes to be processed, or whatever the definition is. Thanks. -- Ian Howlett 19:30, 4 Jul 2005 (UTC)


I would like to second the idea of a layman's explanation of asymptotic "equality." As an outsider looking in on the math community, a plain-English explanation of the concept would go a long way toward understanding what you're trying to describe. I recommend that you ask someone outside the math community whether they understand what you're writing before you post it; that way you've taken care of your "lowest common denominator." 76.111.8.135 (talk)

≃[edit]

ISO 31-11 uses the Unicode character ≃ to denote "is asymptotically equal to". Is this character actually in usage? Wouldn't it then be a good addition to this article? --Abdull 21:16, 28 May 2006 (UTC)

I don't think there is consensus on the use of several such "equivalence relation symbols" - some use ~ for asymptotic equivalence, some use it for proportionality, and some use a kind of alpha (the ∝ sign) for the latter. I think this is a fashion phenomenon under constant evolution. You might compare the HTML entity codes vs the amstex commands for the following symbols (from http://www.acronet.net/tags/characters.html):
 &sim;   ∼   tilde operator: varies with / similar to / proportional to / asymptotically equivalent to
 &cong;  ≅   approximately equal to / (geometrically) congruent to
 &asymp; ≈   almost equal to / asymptotically equal to (\approx in LaTeX)
 &equiv; ≡   identical to (used by mathematicians for (arithmetic) congruence)
(and the symbol of this section's title is displayed as division sign in the "edit summary" line of my browser...)— MFH:Talk 22:19, 12 February 2007 (UTC)
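To make the comparison with the amstex side concrete, a minimal pairing of the entities above with the standard LaTeX commands (as far as I can tell these pairings are uncontroversial):

 &sim;    = \sim     ∼
 &cong;   = \cong    ≅
 &asymp;  = \approx  ≈   (the entity is named "asymp" but renders the \approx glyph)
 &equiv;  = \equiv   ≡
 &#8771;  = \simeq   ≃   (the ISO 31-11 character discussed above; no named HTML4 entity)
 (LaTeX \asymp renders yet another glyph, ≍, despite its name)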

I agree there are different "fashions", but it would be good to clarify that. For example, what is the LaTeX symbol \asymp good for? -- Christian.Mercat (talk) 15:22, 15 February 2010 (UTC)

Just wanted to note that this usage (which is fine with me, but I'm no expert) conflicts with the discussion of the ≈ character over at approximation (in the Mathematics section). People with the knowledge and the refs may want to help bring the treatments together. /ninly(talk) 05:14, 11 August 2010 (UTC)

Definition of asymptotic equivalence[edit]

Someone did not like my more general definition of asymptotic equivalence, which I took to be f=g(1+o(1)). He argued that "my" definition (which is common in asymptotic analysis) was not more general, because asymptotic equivalence only deals with limits... Please explain to me, using the present definition of the article (lim f/g = 1), whether or not 0 is asymptotically equivalent to 0, and then explain to me why my definition is not more general than the present definition. According to my definition, 0 is asymptotically equivalent to 0. ASlateff 128.131.37.74 05:27, 11 February 2007 (UTC)

I agree that the current definition (f ~ g iff lim f/g = 1) is not good. Among other things, the function f/g may not be defined even though the function g/f is defined, so there is clearly a problem. I prefer the f = (1+o(1)) g definition. (I prefer putting it that way round because many non-specialists would get confused by the other one, thinking that 1+o(1) would be an argument to g.) Alternatively, one can put it as f-g = o(g) (which is of course the same). — MFH:Talk 15:37, 12 February 2007 (UTC)
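For concreteness, here are the two competing definitions side by side in LaTeX, together with the zero-function test case raised above:

 % limit definition (the article's current one); needs g eventually nonzero:
 f \sim g \quad\iff\quad \lim_{x\to\infty} \frac{f(x)}{g(x)} = 1
 % Landau-style definition; equivalently f - g = o(g):
 f \sim g \quad\iff\quad f = \bigl(1 + o(1)\bigr)\, g
 % test case: for f = g = 0 the quotient f/g is undefined, so the limit
 % definition gives no verdict, while f = (1 + o(1)) g holds trivially.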
May I suggest reinstating my version of the article? ASlateff 128.131.37.74 00:43, 17 February 2007 (UTC)

The current text "This defines an equivalence relation (on the set of functions being nonzero for all n large enough (most mathematicians prefer the definition f-g=o(g) in terms of Landau notation, which avoids this restriction))." is incorrect. There is no function which is o(g) unless g is nonzero for large enough argument, and then f would need to have that property too. Actually the version f = (1+o(1))g is the only one mentioned here that allows f and g to be 0 infinitely often (provided all but finitely many of these places are in common). I'm changing it. Zerotalk 10:46, 16 September 2009 (UTC)

That is not correct: if g equals 0 infinitely often, then f-g=o(g) just implies that f also equals 0 at the same places (maybe with finitely many exceptions); it is strictly the same statement as f = (1+o(1))g or f = g + o(g). But the WP definition of little-o notation was incorrect in this respect until I corrected it just now, which probably led you astray. I'll try to see if I can get things right in this article as well. Marc van Leeuwen (talk) 12:38, 14 September 2011 (UTC)
Yes, my comment was with respect to the definition of o() existing at that time. Your new definition of o(1) is mathematically better, but I didn't want to change it since the version with ε is harder for novices to understand. Same with your definition of f\sim g. But I won't argue. On the other hand I wonder why you want f=g+o(g) at all, rather than just the equivalent f=(1+o(1))g. The latter has several clear advantages, (1) it fits well with both the new and the old definitions of o(), (2) it requires no thought at all about what o() means when its argument is 0, (3) it is transparent that f\sim g\implies g\sim f whereas f=g+o(g) needs the triangle inequality to see that. Zerotalk 14:45, 14 September 2011 (UTC)
I'm not sure I can appreciate your remarks fully, but that's because I'm not entirely comfortable with the meaning of expressions with buried little-o expressions. As far as I have seen, their meaning is defined operationally by algebraically manipulating until the little-o expressions are isolated. Then f=(1+o(1))g means exactly the same as f=g+o(1)g, f=g+o(g) and f-g=o(g), so whatever follows from one follows from any other of them. In particular, I cannot easily see the symmetry of the relation for any one of them without some special consideration for the set of values where f and g vanish (their symmetric difference should be finite), contrary to what you say in (3). The main point seems that f=(1+o(1))g does not exactly mean that f is the product of g and a function that is in 1+o(1): that would force f to share every single zero of g, no exceptions allowed (and it would not define a symmetric relation!). Also, since one wants to have o(1)g=o(g)=o(1)g for any g (I think), I don't think I agree with your points (1) and (2), in spite of appearances. But maybe the point of view that the meaning of any statement involving little-o notation can only mean anything asymptotic (it says nothing away from infinity) can somehow resolve these subtleties. Marc van Leeuwen (talk) 16:56, 14 September 2011 (UTC)
My main concern is not with correctness (except that we need to avoid incorrectness) but with clarity for the likely readership of this page. Someone coming here is likely to be a complete novice wrt this sort of notation and certainly won't understand the subtleties that fascinate us mathematicians (we have met, btw). So what we decide here in this detailed discussion is not necessarily what is best for the article. Leaving that aside, I am a specialist in asymptotics but I don't recall ever seeing "their meaning is defined operationally by algebraically manipulating until the little-o expressions are isolated". I wouldn't know how to define that anyway; what if the o()s are buried inside other functions, even inside other o()s? In my neck of the woods, an equation involving O() or o() on the right side (not on the left side, to make this simpler) is interpreted so that each o() and O() can be replaced by a function in that class such that the equation becomes true for large enough n. So f=(1+o(1))g means that there is some h in "o(1)" such that f=(1+h)g for large enough n. People consider in practice that the predicate "for sufficiently large n" applies to any expression with o() in it, but I'm not sure I could find a written source which states that explicitly. There is a related matter which I think is a more serious notational issue than zeros. I'm reading a paper that says some function f(n) derived from random cubic graphs of n vertices is o(1) as n goes to infinity, but f(n) is not even defined if n is odd. Very few people would bother writing this in a pedantically precise manner and would probably answer "everyone knows what I mean" if challenged. Zerotalk 00:58, 15 September 2011 (UTC)
Btw, to complete the interpretation as I understand it, consider an equation with o(), O() etc on both sides. This means that no matter which actual functions the Evil Demon substitutes for the o()s and O()s on the left side, there are functions which can be substituted for the o()s and O()s on the right side so that the equation is true for large enough n, where "large enough" might depend on the Evil Demon's choices. Zerotalk 01:04, 15 September 2011 (UTC)
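Written out as a quantified statement, the interpretation described in the last two comments reads as follows (shown for an o() occurring on the right side only):

 % "f = (1 + o(1)) g" unpacks, per the convention above, as:
 \exists\, h \;\text{with}\; \lim_{n\to\infty} h(n) = 0 \;\text{such that}\;
 f(n) = \bigl(1 + h(n)\bigr)\, g(n) \;\text{for all sufficiently large } n.
 % with o()/O() on both sides, the left-side occurrences are quantified
 % universally (the Evil Demon's choices) and the right-side ones
 % existentially, "large enough n" being allowed to depend on those choices.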

Why single out computer science?[edit]

I understand that asymptotic analysis is an important tool in the analysis of computational complexity, but there are other fields besides CS which also regularly use this. (Physics is an obvious example, but you could also include all physical sciences, theoretical biology and ecology, etc.) I suggest removing the emphasis on CS and leaving Applied Mathematics as the principal area of application. Tim 136.186.1.187 (talk) 02:50, 4 May 2010 (UTC)

Right, and add statistics as an example at least as big as CS. Plus lots of other stuff. I'm not sure about your version, but I agree the existing text needs revision. Zerotalk 10:36, 11 August 2010 (UTC)

Method of dominant balance[edit]

Does anyone else find the example in the "Method of dominant balance" section to be hard to follow? Assumptions seem to be pulled out of a hat; also the constant A that appears along the way is mysteriously discarded. If y(x) is a solution, then so is Ay(x), so this is an actual error. Zerotalk 09:11, 17 October 2010 (UTC)
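For readers unfamiliar with the scaling argument: assuming, as the remark implies, that the equation in the article's example is linear and homogeneous, the point is the one-line computation

 % L a linear differential operator, y a solution of L[y] = 0:
 L[A\,y] \;=\; A\, L[y] \;=\; 0 \qquad \text{for every constant } A,
 % so Ay is also a solution; the solutions form a one-parameter
 % family and the constant A cannot simply be discarded.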