Talk:Big O notation

From Wikipedia, the free encyclopedia
Algorithms and their Big O performance[edit]

I'd like to put in some mention of computer algorithms and their big-O performance: selection sort being N^2, merge sort N log N, travelling salesman, and so on, and the implications for computing (faster computers don't compensate for big-O differences, etc.). Do you think this should be part of this write-up, or separate but linked?
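A quick sketch of the "faster computers don't compensate" point, using hypothetical step counts (N^2 vs. N log N comparisons; the 100x speedup factor is just an illustration, not a claim about real hardware):

```python
import math

# Hypothetical step counts for two sorting algorithms (illustrative only).
def selection_sort_steps(n):
    return n * n  # ~N^2 comparisons

def merge_sort_steps(n):
    return n * math.ceil(math.log2(n))  # ~N log N comparisons

# Even a machine 100x faster running the N^2 algorithm loses to the
# N log N algorithm once n is large enough.
speedup = 100
n = 1_000_000
fast_quadratic = selection_sort_steps(n) / speedup  # 1e10 steps-equivalent
slow_nlogn = merge_sort_steps(n)                    # ~2e7 steps
assert fast_quadratic > slow_nlogn
```

The gap only widens as n grows, which is the point: a constant-factor speedup cannot close an asymptotic gap.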

I think separate would be better, to increase the article count :-). Then you can have links from the Complexity, Computation and Computer Science pages. Maybe you can call it "Algorithm run times" or something like that. --AxelBoldt
Or something like analysis of algorithms or Algorithmic Efficiency since you may sometimes choose based on other factors as well. --loh
I'd recommend putting it under computational complexity, which earlier I made into a redirect to complexity theory. It should be a page of its own, but I didn't want to write it ;-) --BlckKnght

Removed polylogarithmic[edit]

Reinstated. My VERY bad: I missed a bracket.

"Tight" bounds?[edit]

The article uses the terms "tight" and "tighter", but these terms are never defined! Alas, other pages (e.g. "bin packing") refer to this page as the one giving a formal definition of the term; moreover, "Asymptotically tight bound" redirects here. Yes, the term is intuitively clear, but a math-related page should have clear definitions.

I think a short section giving a formal definition of the term "tight bound" and referring to Theta-notation is needed (e.g. as a subsection of section "8. Related asymptotic notations"), and once such a section is created the redirection from "Asymptotically tight bound" should link directly there.
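For reference, the standard definition such a subsection would give (Θ-notation as the asymptotically tight bound; this is the usual formulation, not a quote from the article):

```latex
% g is an asymptotically tight bound for f when f is bounded
% both above and below by constant multiples of g eventually:
f(n) = \Theta(g(n)) \iff \exists c_1, c_2 > 0,\ \exists n_0 \ \text{s.t.}\
  0 \le c_1 g(n) \le f(n) \le c_2 g(n) \quad \text{for all } n \ge n_0.
% Equivalently: f = \Theta(g) iff f = O(g) and f = \Omega(g).
```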

Base of log?[edit]

Sorry if I missed it, but I couldn't find it specified anywhere which base of log is being used. Does it not matter, since it is only comparing orders? Or is it assumed to be base 2 since it's computer science? Thanks for any clarification. — Preceding unsigned comment added by 68.250.141.172 (talk) 20:06, 23 January 2014 (UTC)

The base of the log does not matter once it is inside Big-O: changing the log base means multiplying by a constant, and Big-O absorbs constant factors. Glrx (talk) 22:21, 25 January 2014 (UTC)
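Concretely, log_b(n) = ln(n) / ln(b), so any two log bases differ by the same constant factor for every n (a small sanity check, not from the article):

```python
import math

# log2(n) / log10(n) = ln(10) / ln(2) for every n > 1,
# so O(log2 n) and O(log10 n) describe the same class.
constant = math.log(10) / math.log(2)  # ~3.3219
ratios = [math.log2(n) / math.log10(n) for n in (10, 1000, 10**6)]
assert all(abs(r - constant) < 1e-9 for r in ratios)
```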

Hardy and Littlewood's Ω Notation Section Is Problematic[edit]

Not only is the paper incorrectly cited, but if you read the paper (available here: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1091181/pdf/pnas01947-0022.pdf), the letter Omega does not appear.

Discussion of the definition of limit superior as x approaches infinity is also absent from the paper, which is a relief because \limsup_{x \to \infty} \left|\frac{f(x)}{g(x)}\right| appears nonsensical to me. How can x approach infinity from above?

I'm inclined to strike this section from the page.

Thoughts?

76.105.173.109 (talk) 20:49, 17 February 2014 (UTC)

You are looking at the wrong paper. The right one is here (probably behind a paywall, sorry). Also, there is nothing in that formula which says that x is approaching infinity from above. Incidentally, they do not use limsup in their definition, and returning to the original version might help people with a poor analysis background (like most computer scientists). Here is what they say:
We define the equation f = Ω(φ), where φ is a positive function of a variable, which may be integral or continuous but which tends to a limit, as meaning that there exists a constant H and a sequence of values of the variable, themselves tending to the limit in question, such that |f| > Hφ for each of these values. In other words, f = Ω(φ) is the negation of f = o(φ).
I think that the condition H > 0 is required and was omitted accidentally. McKay (talk) 04:13, 18 February 2014 (UTC)
Incidentally, this is the definition most used in computer science even though that is rarely admitted. Consider an algorithm that takes time n^2 for even n and is trivial (constant time) for odd n. This happens all the time but everyone writes Ω(n^2) even though it does not satisfy the textbook definition of Ω that they claim to be using. McKay (talk) 04:13, 18 February 2014 (UTC)
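The even/odd example above can be checked directly. A sketch with hypothetical step counts (the function `steps` and the constant H = 0.5 are illustrative choices, not from the discussion):

```python
# An algorithm whose step count is n^2 for even n and constant for odd n.
def steps(n):
    return n * n if n % 2 == 0 else 1

H = 0.5  # any fixed constant 0 < H < 1 works here

# Hardy-Littlewood Omega: some H > 0 and an infinite sequence of n with
# steps(n) > H * n^2 -- the even numbers witness it.
assert all(steps(n) > H * n * n for n in range(2, 100, 2))

# The textbook "for all sufficiently large n" definition fails:
# every odd n gives steps(n) = 1, far below H * n^2.
assert all(steps(n) < H * n * n for n in range(3, 100, 2))
```

So the running time is Ω(n^2) in the Hardy-Littlewood sense (true along the even subsequence) but not under the every-large-n textbook definition, which is exactly McKay's point.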