
Wikipedia:Reference desk/Archives/Computing/2020 October 4

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 4

Big-O

Is it theoretically possible to write a program that can establish the Big-O time complexity of any other program? Taumata994 (talk) 16:56, 4 October 2020 (UTC)

First thing that comes to mind is the halting problem; if we can't tell generally if an arbitrary program will halt, how can we determine its time complexity? I think Rice's theorem is applicable. --jpgordon𝄢𝄆 𝄐𝄇 17:52, 4 October 2020 (UTC)
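A minimal sketch of the standard reduction behind this objection, modelling a program run as a Python generator that yields once per simulated step; the names (make_wrapper, loops_forever, halts_quickly) are illustrative assumptions for the example, not anything defined in the thread. Any analyser that could even distinguish "runs in O(n)" from "runs in Θ(n²)" for wrappers like this one could decide whether the simulated program halts.

from itertools import islice

def make_wrapper(run):
    """run() returns a generator that yields once per simulated step of some
    program p on a fixed input x. The returned q(n) performs about n units of
    work if p never halts on x, but about n^2 units of work (for every large
    enough n) if p halts on x."""
    def q(n):
        steps = sum(1 for _ in islice(run(), n))   # simulate p on x for at most n steps
        halted = steps < n                         # p finished within the step budget
        busy_work = n * n if halted else n         # quadratic vs. linear amount of work
        for _ in range(busy_work):
            pass
    return q

# Two toy "programs": one that never halts, one that halts after 5 steps.
def loops_forever():
    while True:
        yield

def halts_quickly():
    for _ in range(5):
        yield

q1 = make_wrapper(loops_forever)   # runs in O(n)
q2 = make_wrapper(halts_quickly)   # runs in Theta(n^2)
q1(1000)
q2(1000)

Deciding between those two asymptotic classes for arbitrary wrappers would therefore decide the halting problem, which is the intuition behind the objection above.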
I had the same thought. But Big-O complexity is not a semantic property. And, on the other hand, any program that has a Big-O bound has to terminate. I suspect it is possible to get a Big-O limit at least for primitive recursive functions (and corresponding programs). --Stephan Schulz (talk) 21:02, 4 October 2020 (UTC)
The determiner "any" is ambiguous. It can be existential, as in "is there any person who can fly?", where the question is, in the notation of mathematical logic, whether ∃x Flies(x). Or it can be universal, as in "any person can claim to be unique". This becomes ∀x ClaimsUnique(x). I assume that in the question the universal sense is intended. What does it mean that program A "establishes" the Big-O time complexity of program P? I assume that this means that, given P as input, A produces a formula f such that the time complexity of P is O(f(n)), where n is the size of the input for program P. Let us restrict the input to A to programs that halt for all inputs, so that our efforts are not immediately stymied by the halting problem. The next question is, what is acceptable as a formula? Can this be a lambda expression for computing the function f? If so, construct a program A that, on input P, produces a lambda expression that, given input n, simulates P for all inputs of length n, counts how many steps this all takes, and returns the result. Since P halts for all inputs, this defines a total function f, and the running time of P on an input of length n is dominated by f(n).  --Lambiam 21:51, 4 October 2020 (UTC)
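A sketch of that construction, under the assumption that a total program p is modelled as a Python function taking an input string and returning a generator that yields once per step; the names (step_count_bound, count_ones) are made up for the example.

from itertools import product

def step_count_bound(p):
    """Given a total program p -- here p(input_str) returns a generator that
    yields once per step and eventually stops -- return f, where f(n) is the
    total number of steps p takes over all binary inputs of length n.
    The running time of p on any single input of length n is then at most f(n)."""
    def f(n):
        total = 0
        for bits in product("01", repeat=n):              # every binary input of length n
            total += sum(1 for _ in p("".join(bits)))     # run p to completion, count its steps
        return total
    return f

# Toy program whose step count on input s equals the number of "1" characters in s.
def count_ones(s):
    for ch in s:
        if ch == "1":
            yield

f = step_count_bound(count_ones)
print(f(3))   # 12: the 8 binary strings of length 3 contain 12 ones in total

The bound f obtained this way is valid but very generous, which connects to the point raised below about how sharp a Big-O bound has to be.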
I see one issue in doing this for all n, as that could be theoretically infinite. Also, is the Big-O always well defined? Graeme Bartlett (talk) 02:54, 7 October 2020 (UTC)
Well, semi-formally, O(f) is the set of all functions that in the limit (towards positive infinity) grow at most a constant factor faster than f. So everything that is in O(n²) is also in O(n³), is also in O(n⁴), and of course also in O(2ⁿ). In common usage, we usually assume that we give a reasonably sharp upper bound, but mathematically, that is not necessary. And Lambiam is right - if we have a total program p, it is easy to create from that a program P that will always return an upper bound to the first program's runtime - for input n, P just simulates p on n and counts the steps, then returns them. You won't get a nice, compact, symbolic representation, of course... --Stephan Schulz (talk) 09:06, 7 October 2020 (UTC)
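Spelling that out with the usual textbook definition of O (a routine worked example, not something stated in the thread):

\[
f \in O(g) \iff \exists C > 0\ \exists n_0\ \forall n \ge n_0:\ f(n) \le C \cdot g(n).
\]
\[
\text{Taking } f(n) = n^2,\ g(n) = 2^n,\ C = 1,\ n_0 = 4:\quad n^2 \le 2^n \text{ for all } n \ge 4,\ \text{so } n^2 \in O(2^n).
\]

So n² is (correctly, if unhelpfully) in O(2ⁿ), which is why a valid Big-O bound need not be a sharp one.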