Talk:Big O notation
This is the talk page for discussing improvements to the Big O notation article. This is not a forum for general discussion of the article's subject.
Archives: 1, 2, 3, 4 · Auto-archiving period: 90 days
This article has not yet been rated on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
Please add the quality rating to the {{WikiProject banner shell}} template instead of this project banner. See WP:PIQA for details.
Abuse of notation in definition of big O
There is no algorithm that "equals" big O of anything. Big O is an operator that takes a function to a set of functions, so when one writes O(expression), this denotes a set of functions. It is inconsistent to write that any algorithm equals O(expression), because an algorithm is not a set. Furthermore, even if that issue were fixed, it would still be imprecise and misleading to say that an algorithm is in a big O set: the algorithm itself cannot be compared with the elements of a big O set, which are mathematical functions. It is more precise to say that a mathematical function expressing a property of the algorithm is in some big O set.
For example, it would be incorrect to say that mergesort is in O(n log n), but it would be correct to say that the work of mergesort as a function of input size n is in O(n log n). — Preceding unsigned comment added by 128.237.218.106 (talk) 11:33, 16 October 2013 (UTC)
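To make the distinction concrete, the correct statement can be written in standard set-membership notation (a sketch; T here denotes a cost function introduced for illustration, not anything defined in the article):

```latex
% Let T(n) denote the work mergesort performs on inputs of size n.
% Correct: the *function* T belongs to the class O(n log n).
T \in O(n \log n)
  \quad\iff\quad
  \exists\, C > 0,\ n_0 \text{ such that } T(n) \le C\, n \log n \ \text{for all } n \ge n_0.
% Category error: "mergesort \in O(n \log n)" -- an algorithm is not a function.
```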
- No. There is no abuse of notation. The relation "=" used here is NOT the standard equality, and I agree that this is misleading. According to Hardy and Wright (An Introduction to the Theory of Numbers, Oxford 1938), the expression "f=O(g) means that |f|<Ag, [...], for all values of [the variable] in question" (so in this first definition "f=O(g)" is considered as one single symbol, and "=" has no meaning of its own). They write: "O(g) denotes an unspecified f such that f=O(g)"; thus the symbol "O(g)" does not denote a set, but an unspecified element of a certain set. They give the example "O(1)+O(1)=O(1)=o(x)", and finally note: "It is to be observed that the relation '=', asserted between O or o symbols, is not usually symmetrical". And if one finds this too confusing, there is the alternative Vinogradov notation <<, which entirely avoids all these problems... Sapphorain (talk) 15:14, 16 October 2013 (UTC)
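For readers comparing the two conventions mentioned above, the Hardy–Wright usage and Vinogradov's notation can be sketched side by side (a paraphrase, not a quotation):

```latex
% Hardy--Wright: "f = O(g)" is read as a single indivisible symbol meaning
|f(x)| \le A\, g(x)
  \quad \text{for all } x \text{ in question, for some constant } A > 0.
% Vinogradov's notation expresses the same relation without the asymmetric "=":
f \ll g \quad\iff\quad f = O(g).
```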
Algorithms and their Big O performance
I'd like to put in some mention of computer algorithms and their Big O performance: selection sort being N^2, merge sort N log N, travelling salesman, and so on, and implications for computing (faster computers don't compensate for big-O differences, etc). Think this should be part of this write-up, or separate but linked?
- I think separate would be better, to increase the article count :-). Then you can have links from the Complexity, Computation and Computer Science pages. Maybe you can call it "Algorithm run times" or something like that. --AxelBoldt
- Or something like analysis of algorithms or Algorithmic Efficiency since you may sometimes choose based on other factors as well. --loh
- I'd recommend putting it under computational complexity, which earlier I made into a redirect to complexity theory. It should be a page of its own, but I didn't want to write it ;-) --BlckKnght
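The point above that faster computers don't compensate for big-O differences can be illustrated by counting comparisons directly. This is a minimal sketch (the function names are my own, not from any article), contrasting selection sort's fixed n(n-1)/2 comparisons with merge sort's roughly n log n:

```python
# Sketch: contrast O(n^2) selection sort with O(n log n) merge sort
# by counting comparisons rather than measuring wall-clock time.

def selection_sort_comparisons(n):
    # Selection sort always performs n(n-1)/2 comparisons, whatever the input.
    return n * (n - 1) // 2

def merge_sort_comparisons(a):
    """Return (sorted list, number of comparisons actually performed)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort_comparisons(a[:mid])
    right, cr = merge_sort_comparisons(a[mid:])
    merged, i, j, count = [], 0, 0, cl + cr
    while i < len(left) and j < len(right):
        count += 1  # one comparison per loop iteration
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, count

if __name__ == "__main__":
    for n in (1000, 10000):
        _, mc = merge_sort_comparisons(list(range(n, 0, -1)))
        print(n, selection_sort_comparisons(n), mc)
```

Even a computer a hundred times faster loses to the better algorithm once n grows: at n = 10000 the selection sort count is already several thousand times the merge sort count.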
Removed polylogarithmic
Reinstated. My VERY bad: I missed a bracket.
"Tight" bounds?
The article refers to terms "tight" and "tighter", but these terms are never defined! Alas, other pages (e.g. "bin packing") refer to this page as the one giving a formal definition of this term; moreover, "Asymptotically tight bound" redirects here. Yes, the term is intuitively clear, but a math-related page should have clear definitions.
I think a short section giving a formal definition of the term "tight bound" and referring to Theta-notation is needed (e.g. as a subsection of section "8. Related asymptotic notations"), and once such a section is created the redirection from "Asymptotically tight bound" should link directly there.
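For reference, the standard formal definition such a section would presumably state (consistent with the Θ-notation already discussed in the article) is:

```latex
% f is bounded asymptotically tightly by g (Knuth's Theta):
f \in \Theta(g)
  \quad\iff\quad
  \exists\, c_1, c_2 > 0,\ n_0 \text{ such that }
  c_1\, g(n) \le f(n) \le c_2\, g(n) \ \text{for all } n \ge n_0.
% A bound f \in O(g) is called "tight" precisely when f \in \Theta(g) also holds.
```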
Mnemonics
(Below "Family of Bachmann-Landau notation" table)
This "mnemonics" business appears very unclear and clumsy to me. I strongly suspect it is the private production of a well-intentioned wikipedia contributor. Neither Bachmann, nor Landau, nor Knuth (at least in the references cited) mentioned anything close to what is claimed below the "Bachmann-Landau" notation table. Knuth uses the word "mnemonic" in his paper, but only to refer to the fact that the symbol O is commonly used and has become a reference. I note in passing that the "mnemonic" concerning the Omega symbol refers to the Knuth version of this symbol, and not to the Hardy-Littlewood version (which is the only one Landau knew): so who in the world is supposed to have devised these mnemonic "recipes"? I still think a precise reference is very much needed to justify the conservation of this part in the article. Sapphorain (talk) 21:21, 3 September 2013 (UTC)
- It looks much like OR to me. It would definitely need a proper source attribution, but anyway, I’m unconvinced this bit of trivia needs to be included in the (already quite long) article at all.—Emil J. 12:17, 4 September 2013 (UTC)
- I removed the bit of trivia. The bibliographic references inside were removed too, but can be found elsewhere in the page. Sapphorain (talk) 22:08, 16 November 2013 (UTC)
f(x) versus f
Currently in this article, the notation f(x) is used to denote a function. In the articles Function (mathematics), Limit (mathematics) etc., f is used to denote a function (a rule for mapping numbers to numbers) and f(x) is unambiguously used to denote its value (a number). These are two different meanings of the word function. In the first approach (used in this article), a function means a dependent variable or (physical) quantity, and when talking about the function's behavior one must always say which variables are regarded as the independent variables or experiment parameters that it depends on. In the second approach (in Function (mathematics) etc.), a function is defined as a rule mapping objects to other objects. Perhaps the notation of function should be unified in Wikipedia? 90.190.113.12 (talk) 12:34, 9 January 2014 (UTC)
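A minimal illustration of the distinction the comment draws, using Python only for concreteness (the name f is arbitrary):

```python
# f is the function itself (the rule); f(x) is its value at x (a number).
def f(x):
    return x * x

print(callable(f))  # the rule is a function object -> True
print(f(3))         # its value at 3 is the number  -> 9
```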