Talk:Big O notation

This article is of interest to the following WikiProjects:
* WikiProject Computing (rated B-class, Mid-importance)
* WikiProject Mathematics (rated B-class, Mid-importance; field: Analysis; one of the 500 most frequently viewed mathematics articles)
* WikiProject Computer science (rated B-class, Top-importance)
This article has an assessment summary page.

Algorithms and their Big O performance

I'd like to put in some mention of computer algorithms and their Big O performance: selection sort being N^2, merge sort N log N, travelling salesman, and so on, and implications for computing (faster computers don't compensate for big-O differences, etc). Think this should be part of this write-up, or separate but linked?
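Whichever page it ends up on, a small illustration of that last point might help readers. Here is a minimal Python sketch (the function names and the sample values of n are mine, purely for illustration) comparing the nominal comparison counts n^2 and n log n:

 import math

 def ops_selection_sort(n):
     # Selection sort performs on the order of n*n comparisons.
     return n * n

 def ops_merge_sort(n):
     # Merge sort performs on the order of n*log2(n) comparisons.
     return n * math.log2(n)

 for n in (1000, 1000000):
     ratio = ops_selection_sort(n) / ops_merge_sort(n)
     print("n = %d: n^2 is roughly %.0f times more work than n log n" % (n, ratio))

 # Even a machine 100 times faster running the n^2 algorithm is eventually
 # overtaken by the n log n algorithm, since the ratio keeps growing with n.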

I think separate would be better, to increase the article count :-). Then you can have links from the Complexity, Computation and Computer Science pages. Maybe you can call it "Algorithm run times" or something like that. --AxelBoldt
Or something like analysis of algorithms or Algorithmic Efficiency since you may sometimes choose based on other factors as well. --loh
I'd recommend putting it under computational complexity, which earlier I made into a redirect to complexity theory. It should be a page of its own, but I didn't want to write it ;-) --BlckKnght

Removed polylogarithmic

Reinstated. My VERY bad; I missed a bracket.

"Tight" bounds?[edit]

The article uses the terms "tight" and "tighter", but these terms are never defined! Alas, other pages (e.g. "bin packing") refer to this page as the one giving a formal definition of the term; moreover, "Asymptotically tight bound" redirects here. Yes, the term is intuitively clear, but a math-related page should have clear definitions.

I think a short section giving a formal definition of the term "tight bound" and referring to Theta-notation is needed (e.g. as a subsection of section "8. Related asymptotic notations"), and once such a section is created, the redirect from "Asymptotically tight bound" should point directly there.
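To make the suggestion concrete, here is a sketch of the kind of definition such a section could give (standard Theta-notation; the exact wording would have to follow the article's conventions): a bound g is asymptotically tight for f when f(n) = \Theta(g(n)), that is, when there exist constants c_1, c_2 > 0 and n_0 such that

0 \le c_1 g(n) \le f(n) \le c_2 g(n) \text{ for all } n \ge n_0,

or equivalently, when both f(n) = O(g(n)) and f(n) = \Omega(g(n)) hold.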

Little-o notation: Graham.Knuth.Patashnik.1994 vs Sipser.1997

Sapphorain reverted my remarks on a deviating definition by Sipser, stating that Sipser.1997's definition is equivalent to that of Graham.Knuth.Patashnik.1994 (given earlier in the article). She/he is probably right, as I didn't think much about this issue.

However, the article says about Graham.Knuth.Patashnik.1994's definition:

If g(x) is nonzero, or at least becomes nonzero beyond a certain point, the relation f(x) = o(g(x)) is equivalent to \lim_{x \to \infty}\frac{f(x)}{g(x)}=0.

while Sipser.1997 says f(n) ∈ o(g(n)) if

\lim_{n \to \infty}\frac{f(n)}{g(n)}=0

i.e. he doesn't require g(n) to become nonzero beyond a certain point.

Moreover, the article says about Graham.Knuth.Patashnik.1994's definition:

g itself is not [in little-o of g], unless it is identically zero near ∞

while Sipser.1997 says:

g(n)∉o(g(n)) for all g

i.e. he doesn't require g not to be identically zero near ∞.

For these reasons, I felt (and still feel) that both definitions slightly differ.

In particular, Sapphorain's statement (in the edit summary) "a function is never a o of itself (in either definition!)" appears to challenge the restriction "unless it is identically zero near ∞" made in the article; if s/he is right, that restriction is confusing and should be removed. - Jochen Burghardt (talk) 15:44, 7 March 2015 (UTC)

There is no reference for any paper or book by Sipser published in 1997 on MathSciNet. Anyway, if one uses the usual definition for a limit, once g(x) becomes identically zero beyond a certain point, the ratio f(x)/g(x) is not defined, for any f, and in particular g(x)/g(x) is not defined. So the limit does not exist, and in particular cannot be 0. Thus, as you present it here, what you write doesn't make sense. So I assume something must be missing in what you are trying to reproduce from Sipser's publication. Sapphorain (talk) 16:26, 7 March 2015 (UTC)

You are right: g can't be 0 infinitely often when the limit exists. I overlooked that Sipser (see "Further reading" in the article for the full reference) requires the range of f and g to be the set of positive (i.e. >0) reals. Sorry for the confusion. - Jochen Burghardt (talk) 18:52, 7 March 2015 (UTC)
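For later readers, a short worked check of how the two definitions line up under Sipser's convention (my own illustration, not a quote from either book): since f and g take only positive real values, g can never be identically zero near ∞, and

\lim_{n \to \infty}\frac{g(n)}{g(n)} = 1 \neq 0,

so g(n) ∉ o(g(n)), exactly as the Graham.Knuth.Patashnik.1994 formulation says once its "identically zero" exception is ruled out.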

What about O(x^x)?

Is it its own class? Or where does it belong? --RokerHRO (talk) 13:55, 19 March 2015 (UTC)

Using exponential identities it can be shown that x^x = e^(x ln x), so it is faster than an exponential but slower than e^(x^(1+ε)) for any ε > 0. It is about as fast as the factorial, as explained here. 74.111.162.230 (talk) 16:04, 1 May 2015 (UTC)

It grows more quickly than factorial, but only by a lower-order (single exponential) factor. —David Eppstein (talk) 16:14, 1 May 2015 (UTC)
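For the archive, a short worked comparison backing this up (using Stirling's approximation; the algebra is my own illustration): Stirling gives n! \sim \sqrt{2\pi n}\,(n/e)^n, hence

\frac{n^n}{n!} \sim \frac{e^n}{\sqrt{2\pi n}},

so n^n = e^{n \ln n} exceeds n! only by a single-exponential factor, while it still grows faster than c^n for every fixed c > 1 and slower than e^{n^{1+\varepsilon}} for every ε > 0.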