Wikipedia:Reference desk/Archives/Mathematics/2010 January 28

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 28

Golden Ratio

What is the closest rational number to the golden ratio?

Also, express this as a fraction. 174.3.98.236 (talk) 00:51, 28 January 2010 (UTC)

It doesn't exist; there are rational numbers arbitrarily close to the golden ratio (as is the case with any real number). However, "best rational approximations" with minimal denominators can be derived from the continued fraction expansion of the number, and these turn out to be ratios of successive Fibonacci numbers in the case of the golden ratio. --COVIZAPIBETEFOKY (talk) 00:58, 28 January 2010 (UTC)
Precisely: $F_{2n}/F_{2n-1} < \varphi < F_{2n+1}/F_{2n}$, and $F_{2n+1}/F_{2n} - F_{2n}/F_{2n-1} = 1/(F_{2n}F_{2n-1})$. --84.220.118.69 (talk) 08:27, 28 January 2010 (UTC)
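For anyone who wants to see this numerically, here is a small Python sketch checking the sandwich inequality and the gap identity above (the fib helper and the range of n printed are just illustrative choices, not anything from the articles):

    from fractions import Fraction

    def fib(k):
        """Return the k-th Fibonacci number, with F(1) = F(2) = 1."""
        a, b = 0, 1
        for _ in range(k):
            a, b = b, a + b
        return a

    phi = (1 + 5 ** 0.5) / 2
    for n in range(1, 8):
        lower = Fraction(fib(2 * n), fib(2 * n - 1))   # F_{2n}/F_{2n-1} < phi
        upper = Fraction(fib(2 * n + 1), fib(2 * n))   # phi < F_{2n+1}/F_{2n}
        assert lower < phi < upper
        # the gap is exactly 1/(F_{2n} * F_{2n-1}), by Cassini's identity
        assert upper - lower == Fraction(1, fib(2 * n) * fib(2 * n - 1))
        print(n, float(lower), float(upper))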
Plausible rational approximations include (amongst many others) 2, 1.6, 1.62, 1.618, 1.6180, 1.61803, 1.618034 etc. How much precision do you want? -- SGBailey (talk)
Oh yes: 2/1, 16/10, 162/100, 1618/1000, 16180/10000, 161803/100000, 1618034/1000000. -- SGBailey (talk) 12:35, 28 January 2010 (UTC)
Decimal fractions $p/q$ like this will only give an approximation with error roughly $1/q$. As others noted above, the best rational approximations are ratios of successive Fibonacci numbers, which have error of order $1/q^2$. — Emil J. 12:48, 28 January 2010 (UTC)
... for example, 987/610 gives an answer accurate to six significant figures. Dbfirs 09:54, 1 February 2010 (UTC)
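To put rough numbers on the $1/q$ versus $1/q^2$ point, here is a short Python comparison (a sketch only; the particular denominators printed are arbitrary choices):

    from math import sqrt

    phi = (1 + sqrt(5)) / 2

    # decimal truncations p/q: error is on the order of 1/q
    for q in (10, 100, 1000):
        p = int(phi * q)                   # 16/10, 161/100, 1618/1000
        print(f"{p}/{q}: error {abs(phi - p / q):.1e}  (1/q = {1 / q:.1e})")

    # Fibonacci convergents F(k+1)/F(k): error is on the order of 1/q^2
    a, b = 1, 2
    while a <= 1000:
        print(f"{b}/{a}: error {abs(phi - b / a):.1e}  (1/q^2 = {1 / a**2:.1e})")
        a, b = b, a + b

The loop passes through 987/610, the example Dbfirs gives above, where the error is already down around $10^{-6}$ despite the three-digit denominator.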

Reversion to the mean

What is the technical difference between reversion to the mean and regression to the mean? The PDF by Samuels (1991), linked towards the bottom of the regression to the mean article, distinguishes between the two. Could anyone explain, in layperson's language, what the difference is? Thanks 89.242.92.249 (talk) 13:52, 28 January 2010 (UTC)

I only have access to the first page from where I am, but it looks to me like he's just making a moderate generalization: regression towards the mean means that the expected value of a second instance of a random variable is closer to the mean than the given value of the first instance of that random variable; reversion (as he wants to define it) would mean that the expected value of a second instance of a random variable is closer to the mean than the expected value of the first instance of that random variable. Note the italics. If I'm correct in what he's getting at, that doesn't strike me as unreasonable, but it also doesn't strike me as a well-established way of looking at things. --Ludwigs2 18:55, 28 January 2010 (UTC)
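To make the standard (regression) notion concrete, here is a small Python simulation sketch; the correlation value, the cutoff, and the sample size are arbitrary choices, and this illustrates only the usual definition, not the distinction Samuels draws:

    import random

    random.seed(0)
    rho = 0.6        # assumed correlation between first and second measurements
    n = 100_000

    extreme_seconds = []
    for _ in range(n):
        x = random.gauss(0, 1)
        # second measurement correlated with the first (both mean 0, variance 1)
        y = rho * x + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)
        if x > 2:                          # condition on an extreme first value
            extreme_seconds.append(y)

    avg = sum(extreme_seconds) / len(extreme_seconds)
    # E[Y | X = x] = rho * x, so the second measurement is expected to sit
    # closer to the overall mean (0) than the extreme first one did
    print(f"mean of second measurement given first > 2: {avg:.3f}")

With these numbers the printed mean should land somewhere around 1.4: between the cutoff of 2 and the overall mean of 0, which is exactly the regression effect.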

NP definition

I was rereading the NP and NP-complete pages. If a solution requires polynomial time but exponential space, is it in NP? The pages make it clear that exponential time is NP, but not exponential space. -- kainaw 16:49, 28 January 2010 (UTC)

Space is always bounded by time: an algorithm working in polynomial time cannot use more than polynomial space. Exponential time is not included in NP (or so we hope); it is the other way round. The only known (and only presumed valid) inclusions among these classes are that NP is included in PSPACE (polynomial space), which in turn is included in EXP (exponential time), which is included in EXPSPACE (exponential space). — Emil J. 17:11, 28 January 2010 (UTC)
I do not see the justification for the claim that space is always bounded by time. The time to access space in a non-parallel system is bounded by time. However, deterministic Turing machines are commonly parallel. So, increasing parallelism will increase space without increasing time. -- kainaw 17:32, 28 January 2010 (UTC)
A parallel machine can only increase the space:time ratio by a constant, which isn't enough to allow a problem to have polynomial time and exponential space, which differ by more than a constant. --Tardis (talk) 17:41, 28 January 2010 (UTC)
Not really; parallel machines usually studied in complexity theory allow for a non-constant number of processors. — Emil J. 18:09, 28 January 2010 (UTC)
Hm. OK; I was thinking of a parallel machine not in terms of complexity theory (where of course various allowed scalings of processor count with problem size make sense) but in terms of implementing a normal Turing machine with a (real, fixed) parallel system. Thanks for the correction. --Tardis (talk) 19:29, 28 January 2010 (UTC)
First, get your terminology straight. Polynomial time without further qualification always means deterministic non-randomized sequential polynomial time. If you want polynomial time in some parallel model, you have to explicitly say so. Now, a polynomial-time parallel algorithm with polynomially many processors uses only polynomial space, and can be simulated by a polynomial-time sequential algorithm, so it is in P (and a fortiori NP). A polynomial-time algorithm using superpolynomially many processors may take you outside NP. The exact strength depends on the parallel computation model, but in models obeying the parallel computation thesis, the problems solvable in parallel polynomial time (which automatically implies at most exponentially many processors and thus at most exponential space) comprise exactly PSPACE, i.e., coincide with problems solvable sequentially in polynomial space. PSPACE contains NP, and is believed to be strictly larger. — Emil J. 18:09, 28 January 2010 (UTC)
Everything I've read on NP states that it is based on a Turing machine that is assumed to be parallel. Many of the P algorithms depend on the parallelism of the Turing machine. I don't see how there would be confusion when using the notation NP and P in the original question; I assumed it implies that I am discussing NP (complexity) and not some random concept of polynomial time. -- kainaw 18:16, 28 January 2010 (UTC)
You misunderstand the basic definitions then. P and NP are defined in terms of sequential Turing machines. — Emil J. 18:22, 28 January 2010 (UTC)
Perhaps Kainaw is confusing non-determinism with parallelism: non-determinism is kinda-sorta like parallelism in its many-worlds (instead of lucky-guess) interpretation. But that parallelism, despite its indefinite number of allowed PEs, is much weakened by its restriction to effectively random branching. --Tardis (talk) 19:29, 28 January 2010 (UTC)
I just don't see "sequential" in NP (complexity). Further, many of the algorithms in P that I've read use more than one tape in the machine. So, they are running more than one tape in parallel. It is possible that they are parallel versions of the more accepted serial algorithm, but it leads me to assume that NP is defined as simply requiring a Turing machine, not a serial (or sequential) Turing machine. -- kainaw 23:24, 28 January 2010 (UTC)
Multi-tape Turing machines are equivalent to the single-tape variety under (merely) a polynomial reduction; for complexity analysis (which is always in the limit of large problem size), the only parallelism that counts is the kind EmilJ mentioned where the number of processors is allowed to increase (in some fashion) with the size of the input. The phrase "sequential Turing machine" is rarely used because, for a fixed machine that can't scale its tape count with the input size, it makes no difference. --Tardis (talk) 23:53, 28 January 2010 (UTC)
So the claim that started this all, "space is bounded by time", really means "space in a fixed machine is bounded by time." That makes a lot more sense. It also explains that an algorithm that requires exponential space will require exponential time, which is what I was looking for. Regardless of how parallel the machine is (in Turing machines, that means it has multiple tapes), exponential space will require exponential time. -- kainaw 01:45, 29 January 2010 (UTC)
Emil, why is space always bounded by time? Is it that we are assuming that space can be allocated only in linear time (or at least not constant time)? I have seen an algorithm that requires the allocation of exponential space, but only accesses polynomially much of it, and if the allocation is considered to be a constant-time (or even polynomial-time) operation the algorithm runs in polynomial time. Is such an algorithm considered to have polynomial or exponential space complexity? —Bkell (talk) 23:38, 29 January 2010 (UTC)
Wait, I take my specific example back. The algorithm I had in mind requires the allocation of quadratically much space but only accesses linearly much of it. Even so, the idea can easily be extended to what I described above, so my question still stands. —Bkell (talk) 23:45, 29 January 2010 (UTC)
I believe (as I pointed out just above) that he meant "space in a fixed machine is bounded by time." We are talking about Turing machines, so space is a tape that moves back and forth. By "fixed", we mean that there may be multiple tapes all running in parallel, but the number of them is fixed. Now, assume you have t tapes and your algorithm requires 2^n space (exponential in the size of the problem). If the space required is less than t, you can access all of it in fixed time: just read/write the tape you need. Once the space required exceeds t, you need to store more than one item of data per tape, and it takes time to move to the item to be used. Because space is increasing exponentially and t is fixed, you quickly require exponential time to seek a data item on one of the tapes.
If you don't throw in the stipulation of a "fixed" machine, the statement is false: space is not bounded by time. For a problem of size n that requires 2^n space, I just get a machine where t = 2^n and seek time is fixed. -- kainaw 00:09, 30 January 2010 (UTC)
And what on earth would be an algorithm using a "non-fixed machine"??? If you can vary the machine depending on the input, you can "compute" anything, including arbitrary noncomputable problems; this simply does not make sense. In any case, this is not how it is defined: an algorithm is by definition given by a single, fixed Turing machine. In the standard Turing machine model, any problem computable in time t(n) is also computable in space t(n) (since the linear blow-up due to multiple tapes can be cancelled by increasing the working tape alphabet; see the tape compression theorem). (This is also true for the RAM model, since there are no multiple tapes to take care of.) — Emil J. 13:48, 1 February 2010 (UTC)
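The core of the "space ≤ time" bound is simply that a machine can visit at most one new cell per step. Here is a toy Python sketch of that argument; the run_tm simulator and the example transition table are made up for illustration and are not from any standard library:

    def run_tm(delta, tape, state="start", max_steps=10_000):
        """Run a toy single-tape Turing machine, tracking visited cells.

        delta maps (state, symbol) -> (new_state, new_symbol, move),
        with move in {-1, +1}.  Returns (steps, cells_visited).
        """
        tape = dict(enumerate(tape))       # sparse tape; blank symbol is "_"
        head, visited, steps = 0, {0}, 0
        while state != "halt" and steps < max_steps:
            symbol = tape.get(head, "_")
            state, tape[head], move = delta[(state, symbol)]
            head += move
            visited.add(head)              # at most one new cell per step
            steps += 1
        return steps, len(visited)

    # made-up machine: walk right to the end of the input, then halt
    delta = {
        ("start", "1"): ("start", "1", +1),
        ("start", "_"): ("halt", "_", +1),
    }
    steps, cells = run_tm(delta, "1" * 20)
    assert cells <= steps + 1              # space is bounded by time
    print(steps, cells)

The same counting works for any fixed number of tapes: k tapes visit at most k new cells per step, which only changes the bound by the constant k.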
@Bkell: no standard computation model I've ever seen (Turing machines, RAM, ...) involves the concept of allocation, so you have to consult whatever source you saw it in to see how they define space complexity. In any case your problem certainly does not require exponential space: it is a trivial exercise to rewrite such an algorithm (at the expense of a polynomial increase in time) so that it only accesses (and allocates, whatever that is needed for) consecutive memory cells, and thus uses only polynomial space. — Emil J. 13:48, 1 February 2010 (UTC)

j-invariant

I'm working on a proof showing that the j-invariant is a bijection between the standard fundamental domain and the Riemann sphere. I see that the j-invariant has a simple pole at infinity. Does a simple pole at infinity always imply that the function sends infinity to infinity? StatisticsMan (talk) 18:39, 28 January 2010 (UTC)

The statement that f has a pole at infinity means that f(z) tends to infinity as z tends to infinity. So if we continuously extend f to a function defined at infinity, then the extended function must send infinity to infinity. Algebraist 18:43, 28 January 2010 (UTC)
Well, I have been reading through the first part of the textbook to try to understand this better and I still don't get it. I actually understood what you said already, though I am glad you said it because it made me more sure of it. But I have a specific function, so it is already defined as something at infinity. I understand that the limit of the function as |z| goes to infinity is infinity, but how do I know it is "continuous" at infinity, so that the function value is the same as the limit? One piece of additional information on this function: it is a modular function of weight 0. Does that help at all? Thanks. StatisticsMan (talk) 22:38, 30 January 2010 (UTC)
If you check, this point was also included in the answer above. "The limit of the function as |z| goes to infinity is infinity" is exactly the same as "the function extended by putting f(∞):=∞ is continuous at infinity". It may help you to recall that topologically the Riemann sphere is the one-point compactification of the complex plane. The open neighborhoods of ∞ in C∪{∞} are the complements of compact subsets of C. --pma 00:26, 1 February 2010 (UTC)
How is your function defined at infinity, then? Algebraist 00:31, 1 February 2010 (UTC)
This is my question. I do not know how it is defined at infinity. It is defined as $j(\tau) = 1728\,g_2(\tau)^3/(g_2(\tau)^3 - 27\,g_3(\tau)^2)$, where the $g_k$'s are Eisenstein series of weight $2k$. It is then a theorem that the function gives a bijection between the standard fundamental domain, with $\Gamma$-equivalent sides identified and the point at infinity included, and the Riemann sphere. This is the proof I am looking at. And it somehow concludes that infinity is sent to infinity, but I am not understanding why. I know that it has a simple pole at infinity, is holomorphic on the upper half plane, and is a bijection between the fundamental domain (but excluding the point at infinity) and $\mathbb{C}$. From this, it should be immediate that $\infty$ is sent to itself, I guess. StatisticsMan (talk) 15:49, 1 February 2010 (UTC)
Just to be clear, $g_2 = 60\,G_4$ and $g_3 = 140\,G_6$, where the $G_k$'s are the usual weight-$k$ Eisenstein series. StatisticsMan (talk) 17:35, 1 February 2010 (UTC)
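For what it's worth, one concrete way to see the behaviour at the cusp is via the standard q-expansion of j (a sketch; this may or may not be the route the proof being read takes). Writing $q = e^{2\pi i\tau}$, so that $\tau \to i\infty$ corresponds to $q \to 0$,

    $j(\tau) = \dfrac{1}{q} + 744 + 196884\,q + 21493760\,q^2 + \cdots$

The simple pole at the cusp is literally the $1/q$ term, so $|j(\tau)| \to \infty$ as $\operatorname{Im}\tau \to \infty$, and setting $j(i\infty) := \infty$ is then the unique continuous extension of $j$ to the compactified fundamental domain.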