Talk:Big O notation
|This is the talk page for discussing improvements to the Big O notation article.|
|Archives: 1, 2|
|This article is of interest to the following WikiProjects:|
|This article has an assessment summary page.|
Algorithms and their Big O performance
I'd like to put in some mention of computer algorithms and their Big O performance: selection sort being N^2, merge sort N log N, travelling salesman, and so on, and implications for computing (faster computers don't compensate for big-O differences, etc). Think this should be part of this write-up, or separate but linked?
- I think separate would be better, to increase the article count :-). Then you can have links from the Complexity, Computation and Computer Science pages. Maybe you can call it "Algorithm run times" or something like that. --AxelBoldt
- Or something like analysis of algorithms or Algorithmic Efficiency since you may sometimes choose based on other factors as well. --loh
- I'd recommend putting it under computational complexity, which earlier I made into a redirect to complexity theory. It should be a page of its own, but I didn't want to write it ;-) --BlckKnght
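The point in the opening comment that faster computers don't compensate for big-O differences can be made concrete with a small sketch (illustrative Python; the function names are our own, not from any proposed article):

```python
import math

# Idealized step counts for an O(n^2) sort (e.g. selection sort)
# versus an O(n log n) sort (e.g. merge sort).
def quadratic_steps(n):
    return n * n

def linearithmic_steps(n):
    return n * math.log2(n)

for n in (1_000, 1_000_000):
    ratio = quadratic_steps(n) / linearithmic_steps(n)
    print(f"n={n}: the n^2 algorithm needs ~{ratio:.0f}x more steps")

# Even a machine 10x faster running the quadratic algorithm
# still loses badly at large n:
n = 1_000_000
assert quadratic_steps(n) / 10 > linearithmic_steps(n)
```

At n = 1,000,000 the quadratic algorithm needs tens of thousands of times more steps, so no constant-factor hardware speedup closes the gap.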
Reinstated my VERY bad. Missed a bracket
The article refers to terms "tight" and "tighter", but these terms are never defined! Alas, other pages (e.g. "bin packing") refer to this page as the one giving a formal definition of this term; moreover, "Asymptotically tight bound" redirects here. Yes, the term is intuitively clear, but a math-related page should have clear definitions.
I think a short section giving a formal definition of the term "tight bound" and referring to Theta-notation is needed (e.g. as a subsection of section "8. Related asymptotic notations"), and once such a section is created the redirect from "Asymptotically tight bound" should point directly there.
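Such a subsection might read along the following lines (a sketch in the usual Theta-style formulation, not the article's current wording):

```latex
% An upper bound is asymptotically tight when it is also a lower bound:
f(x) = O(g(x)) \text{ is asymptotically tight} \iff f(x) = \Omega(g(x)),
\text{ i.e. } f(x) = \Theta(g(x)):
\exists c_1, c_2 > 0 \ \exists x_0 \ \forall x > x_0 : \quad
c_1\, g(x) \le f(x) \le c_2\, g(x).
```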
Little-o notation: Graham.Knuth.Patashnik.1994 vs Sipser.1997
Sapphorain reverted my remarks on a deviating definition by Sipser, stating that Sipser.1997's definition is equivalent to that of Graham.Knuth.Patashnik.1994 (given before in the article). She/he is probably right, as I didn't think much about this issue.
However, the article says about Graham.Knuth.Patashnik.1994's definition:
If g(x) is nonzero, or at least becomes nonzero beyond a certain point, the relation f(x) = o(g(x)) is equivalent to lim_{x→∞} f(x)/g(x) = 0,
while Sipser.1997 says f(n) ∈ o(g(n)) if lim_{n→∞} f(n)/g(n) = 0,
i.e. he doesn't require g(x) to become nonzero beyond a certain point.
Moreover, the article says about Graham.Knuth.Patashnik.1994's definition:
g itself is not [in little-o of g], unless it is identically zero near ∞
while Sipser.1997 says:
g(n) ∉ o(g(n)) for all g
i.e. he doesn't require g not to be identically zero near ∞.
For these reasons, I felt (and still feel) that both definitions slightly differ.
In particular, Sapphorain's statement (in the edit summary) "a function is never a o of itself (in either definition!)" appears to challenge the restriction "unless it is identically zero near ∞" made in the article; if s/he is right, that restriction is confusing and should be removed. - Jochen Burghardt (talk) 15:44, 7 March 2015 (UTC)
- There is no reference for any paper or book by Sipser published in 1997 on MathSciNet. Anyway, if one uses the usual definition for a limit, once g(x) becomes identically zero beyond a certain point, the ratio f(x)/g(x) is not defined, for any f, and in particular g(x)/g(x) is not defined. So the limit does not exist, and in particular cannot be 0. Thus, as you present it here, what you write doesn't make sense. So I assume something must be missing in what you are trying to reproduce from Sipser's publication. Sapphorain (talk) 16:26, 7 March 2015 (UTC)
You are right: g can't be 0 infinitely often when the limit exists. I overlooked that Sipser (see "Further reading" in the article for the full reference) requires the range of f and g to be the set of positive (i.e. >0) reals. Sorry for the confusion. - Jochen Burghardt (talk) 18:52, 7 March 2015 (UTC)
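For reference, the two definitions under discussion can be set side by side (our transcription, hedged: the first follows the ε-threshold style attributed to Graham–Knuth–Patashnik, the second Sipser's limit style):

```latex
% GKP style: no nonzero requirement on g.
f(x) = o(g(x)) \iff
\forall \varepsilon > 0 \ \exists x_0 \ \forall x > x_0 : \ |f(x)| \le \varepsilon\, |g(x)|.
% Sipser style: f and g take positive real values, so the quotient
% is always defined and the two definitions coincide on that domain.
f(n) \in o(g(n)) \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0.
```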
What about O(x^x)?
188.8.131.52 (talk) 16:04, 1 May 2015 (UTC) Using exponential identities it can be shown that x^x = e^(x ln x), so it grows faster than any exponential but slower than e^(x^(1+ε)) for any ε > 0. It is about as fast as the factorial, as explained here.
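These growth claims can be checked numerically (an illustrative sketch of our own; comparisons are done on the log scale to avoid overflow):

```python
import math

# Identity: x^x = e^(x ln x).
for x in (2.0, 5.0, 10.0):
    assert math.isclose(x ** x, math.exp(x * math.log(x)))

# Growth ordering, compared via logarithms:
#   log(c^x) = x log c  <  log(x^x) = x log x  <  log(e^(x^(1+eps))) = x^(1+eps)
# (the right inequality needs x large enough that x^eps > log x).
x, eps, c = 1e30, 0.1, 2.0
assert x * math.log(c) < x * math.log(x) < x ** (1 + eps)

# Stirling: log n! = n log n - n + O(log n), so x^x and n! share the
# same Theta(n log n) exponent ("about as fast as the factorial").
n = 1000
assert math.isclose(math.lgamma(n + 1), n * math.log(n) - n, rel_tol=0.01)
```

The last assertion uses `math.lgamma(n + 1)` to evaluate log(n!) without computing the huge factorial itself.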