Wikipedia:Reference desk/Archives/Mathematics/2012 September 16
Welcome to the Wikipedia Mathematics Reference Desk Archives. The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
September 16
Probability question
Hi, this is for personal interest and is not a homework question.
1. Say I choose n independent random numbers uniformly in the range 0 to 1, and then sort them in ascending order, calling the resulting sequence x_1, x_2, ..., x_n.
2. Now suppose I start with x_0 = 0 and then recursively set x_{i+1} = x_i + r, where r is a random variable from a distribution P. When I get to x_{n+1} I stop, and then I divide all of x_1 through x_n by x_{n+1}. Is there any distribution P that will ensure that the resulting distribution of x_1, x_2, ..., x_n is identical to that obtained in method 1? Failing that, is there any sequence of different (i-dependent) distributions* that would achieve that result?
* But not x_i-dependent, since I already know how to do that.
86.179.116.254 (talk) 00:15, 16 September 2012 (UTC)
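One way to probe the question is simply to simulate both methods and compare; a minimal sketch (the exponential is used as the candidate P purely for illustration, since normalised exponential spacings are classically connected to uniform order statistics):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000

# Method 1: n sorted uniforms per trial.
m1 = np.sort(rng.uniform(size=(trials, n)), axis=1)

# Method 2: partial sums of n+1 i.i.d. positive steps from a candidate P,
# normalised by the final sum x_{n+1}.
steps = rng.exponential(size=(trials, n + 1))   # candidate P (an assumption)
cum = np.cumsum(steps, axis=1)
m2 = cum[:, :n] / cum[:, [n]]

# Compare, e.g., the quartiles of the smallest point x_1 under both methods.
print(np.quantile(m1[:, 0], [0.25, 0.5, 0.75]))
print(np.quantile(m2[:, 0], [0.25, 0.5, 0.75]))
```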
- I don't quite understand part 2, could you provide an example, please? StuRat (talk) 17:04, 16 September 2012 (UTC)
- I think it is clear. Which bit don't you understand? I can't provide an example because I do not know any solution. 86.176.214.83 (talk) 17:35, 16 September 2012 (UTC)
- To attempt to rephrase 2 slightly (from someone other than the OP), he's generating a one-dimensional random walk starting from zero, drawing the step size from an unspecified distribution P. (My guess is that there is an implicit assumption that P is always non-negative.) He's then normalizing the points such that the total distance traveled (in n+1 points) is 1. His question is, what distribution P would make the set of the first n points visited during this random walk be uniformly distributed across the unit interval? -- 71.35.125.16 (talk) 20:05, 16 September 2012 (UTC)
- (OP) Right. Over many trials, the sets of x's delivered by method 2 should be statistically indistinguishable from those delivered by method 1. By the way, the non-negative constraint is not an implicit assumption but follows from the requirements (if negative r is allowed then you are not guaranteed to end up with all the x's between 0 and 1, so it could never be a solution). Also by the way, interestingly (or perhaps obviously, I don't know), making P the same distribution as the intervals between the x's in (1) does not seem to work. 86.176.214.83 (talk) 20:42, 16 September 2012 (UTC)
- Hmm. I'm not going to give an answer here, because I don't know; but I'm going to restate the question a bit. If we just consider the case n = 1, then we are seeking a probability distribution P such that if X and Y are independent random variables distributed according to P, then X/(X + Y) is uniformly distributed on [0, 1]. Or maybe we'll settle for two probability distributions P1 and P2 such that if X and Y are independent random variables with X distributed according to P1 and Y distributed according to P2, then X/(X + Y) is uniformly distributed on [0, 1]. Right? —Bkell (talk) 05:41, 17 September 2012 (UTC)
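For the n = 1 restatement there is a distribution worth checking directly: the exponential. A short derivation, assuming X and Y are i.i.d. Exp(1), and using the standard fact that the ratio of two i.i.d. exponentials satisfies P(X/Y ≤ z) = z/(1+z) for z ≥ 0:

```latex
P\!\left(\frac{X}{X+Y}\le t\right)
  = P\!\left(\frac{X}{Y}\le\frac{t}{1-t}\right)
  = \frac{t/(1-t)}{1+t/(1-t)}
  = t, \qquad 0\le t\le 1,
```

i.e. X/(X+Y) is uniform on [0, 1] in this case.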
- Yes, that is correct in the simple case of n = 1. Obviously the elegant and desirable solution is to have just one distribution P, and even more so if it worked for any n. 86.128.1.50 (talk) 11:35, 17 September 2012 (UTC)
- I won't attempt a formal proof, but I'm convinced that the answer is no. The Central Limit Theorem says that if n is reasonably large, the distribution of x_n obtained by method 2 will be approximately Gaussian. The distribution of x_n obtained by method 1 will be nothing like Gaussian. Looie496 (talk) 06:29, 17 September 2012 (UTC)
- I'm not sure, but I think you may have misunderstood the question. This is not about the distribution of x_n alone. It is about the distribution of the whole set of n variables, once they have been normalised into the range 0 to 1 using the method explained. There appears to be nothing "Gaussian" about this for any choice of P or n. 86.128.1.50 (talk) 11:31, 17 September 2012 (UTC)
- Sure, but if we can get the distribution of x_1, x_2, ..., x_n to be the same in the two methods, then, in particular, the distribution of x_n alone will be the same in the two methods. So, if it is impossible to do the latter, then it is also impossible to do the former. —Bkell (talk) 14:23, 17 September 2012 (UTC)
- Oh, but I see what you mean: after normalization, the distribution of x_n in the second method is not Gaussian. —Bkell (talk) 14:30, 17 September 2012 (UTC)
Space battle with only one shot
Here is a mathematical question. In a space war, two ships are fighting each other to the death: a large ship and a small ship. Each ship has a single weapon which can be fired only once. The weapon is so powerful that one hit from it results in total destruction. The small ship has a probability of hitting of S_hit(x) = 0.9^x, where x is the distance; the large ship has a probability of hitting of L_hit(x) = 0.6^x.
Assume that both ships are approaching each other in a straight line. What is the best strategy for the small ship? What is the best strategy for the large ship? It is obvious that the large ship should wait until the small ship approaches as close as possible before firing its weapon, but if it waits too long, the small ship will fire first and destroy it. Likewise, the small ship wants to get as close as possible before firing its weapon, because if it fires while too far away it will probably miss, giving the large ship a chance to shoot from a closer distance. 220.239.37.244 (talk) 03:48, 16 September 2012 (UTC)
- Here you get into game theory, as you would probably want to fire just before your opponent, who presumably will do the same. StuRat (talk) 03:54, 16 September 2012 (UTC)
- You haven't stated what the payoffs are, so I'm going to make some assumptions. Each ship derives utility 1 from surviving, and utility epsilon from destroying the other ship while surviving (small enough to be ignored in our calculations, but positive to ensure that if one ship misses, the other ship will fire at distance 0). If the large ship fires at distance x, then the small ship has two options: fire at distance x, or wait to fire until distance 0 (assuming it survives). One can evaluate the payoffs for each of these. Similarly if the small ship chooses to fire at distance x. Finding equilibrium then comes down to solving the equation 0.9^x + 0.6^x = 1. Let a be the solution: the equilibria occur when one ship fires at distance a and the other fires at distance 0 (if it survives). Of course, this all changes if you change the payoff structure.--121.73.35.181 (talk) 06:36, 16 September 2012 (UTC)
- That distance is approximately 2.72 (thanks Wolfram Alpha!). I agree, that is the distance at which the two captains will be indifferent between firing and waiting (and will simply do whatever the other one doesn't). So, there are two equilibria: one with the large ship firing at 2.72 and the small ship waiting, and one the other way around. In either case, plugging in 2.72 gives a probability of the small ship winning of 75% (and the large ship 25%). --Tango (talk) 13:44, 16 September 2012 (UTC)
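For reference, the indifference distance can be pinned down numerically; a minimal sketch, solving the equation 0.9^x + 0.6^x = 1 from above:

```python
# Solve 0.9**x + 0.6**x = 1 by bisection; the left-hand side is strictly
# decreasing in x, so there is exactly one root.
f = lambda x: 0.9**x + 0.6**x - 1.0

lo, hi = 0.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:        # still above 1: the root lies farther out
        lo = mid
    else:
        hi = mid

x_star = (lo + hi) / 2
print(x_star)             # ~2.72
print(0.9**x_star)        # small ship's survival chance at equilibrium, ~0.75
print(0.6**x_star)        # large ship's survival chance, ~0.25
```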
- It's pretty clear that the objective is to destroy the enemy ship. Therefore the strategy is one where the probability of the destruction of the enemy ship is maximised. However, mathematically that occurs when the distance is zero, which makes this a paradox. No real captain would wait until the distance is zero before firing, because that would almost certainly mean the enemy fires first.
- It's obvious the strategy is as follows.
- If the enemy has already fired (and you are still alive), then wait until the distance is zero and fire the weapon.
- If the enemy has not fired yet, then wait until the optimum distance X_optimum and fire the weapon.
- The distance X_optimum is the one at which the probability that the enemy ship will be destroyed is greatest. But how does one calculate X_optimum?
- It seems that X_optimum must be a function of the current distance to the enemy. Obviously at the start the distance is infinite and X_optimum has some value. Later, when the distance is 10, X_optimum may be a completely different value, because you are still alive at distance 10. 220.239.37.244 (talk) 10:34, 16 September 2012 (UTC)
- The optimum distance is only a function of the current distance in as much as if you've already crossed it you can't fire at that distance. As long as you are at a distance greater than 2.72, then you shouldn't fire until you reach 2.72 (and then you should fire if your enemy doesn't and wait if your enemy does fire - although basic game theory breaks down a bit there since you won't actually know what they are going to do until they do it). If you start out closer than 2.72, then everyone just wants to fire straight away and it isn't entirely clear what happens because the rules didn't set out what happens if you fire simultaneously. --Tango (talk) 13:44, 16 September 2012 (UTC)
- It goes outside the realm of math when <s>game theory</s> human behavior is added in. Knowing how your enemy is likely to behave becomes important. Also, is there any delay between when the enemy commits to firing and when you fire? StuRat (talk) 17:01, 16 September 2012 (UTC)
- I thought the whole point of "game theory" is that it does use mathematical models and methods to solve these kinds of problems. Therefore it is not outside the realm of math. 86.176.214.83 (talk)
- But that assumes that your opponent behaves rationally and can do the math. If not, then it comes down to what you guess your opponent will do. Then there's also an opponent who is too smart, so also considers what you will do in response to what he will do, etc. StuRat (talk) 21:54, 16 September 2012 (UTC)
- I'm not disputing that real-life game problems may involve imponderable aspects of human behaviour that are not amenable to exact mathematical analysis. My quibble was about your statement "It goes outside the realm of math when game theory is added in". As I understand it, game theory covers exactly those aspects of the subject that are amenable to mathematical (or logical) analysis. Even the game theory article defines the subject as "the study of mathematical models of ... etc." (my italics). 86.176.214.83 (talk) 22:33, 16 September 2012 (UTC)
- Point taken and correction made. StuRat (talk) 22:53, 16 September 2012 (UTC)
- You don't need your opponent to be able to do the maths. There are plenty of examples where people (and other animals) have naturally come to the same solution as game theory would suggest, simply by empirical observation or even evolutionary pressures. --Tango (talk) 23:07, 17 September 2012 (UTC)
- When the ships are further apart than the critical distance of c. 2.72, both ships have a lower chance of survival if they fire first, compared to if the other ship fires first. Therefore, neither should fire. When the ships are closer than c. 2.72, both ships have a greater chance of survival if they fire first. However, both also further increase their chances if they can fire first as late as possible, so the behaviour for <= 2.72 may not be so simple as just firing as fast as possible. 86.176.214.83 (talk) 17:33, 16 September 2012 (UTC)
- There was an analogy I was trying to think of at the time, which came to me later, which is the game of "chicken". In fact, we have an article Chicken (game), but I haven't looked at it yet, and I don't know how relevant it actually is to this problem. 86.128.1.50 (talk) 12:04, 17 September 2012 (UTC)
- There may be more complications than already mentioned. First of all, there are no turns in this game, and it's ill-defined in that we don't know what happens if both fire at the same time. This can be worked around by assuming that at distances of, say, 0.01n, the captain of the small ship gets their opportunity to fire, and at 0.01(n - 1/2) the other captain does. The question is what happens if we replace the constant 0.01 and repeat the problem with smaller constants. We can start at values of n which are so huge that firing during the first turn would be "a bad idea" by any standards.
- The fundamental dilemma is that waiting is beneficial insofar as it increases the chance of hitting later on, but detrimental in that it adds to the chance that the other captain will hit, should they fire during their upcoming turn. We can assume that the captain with the better chance to hit (in this example, the one going up against the bigger target) will never want to fire at a chance of <50%. It boils down to the question of what would happen if the enemy fired during the next turn, vs. if I fire first. As one can see quite easily, it is advantageous to fire if the risk of missing is below the enemy's chance of hitting me during their turn: 1 - a^x = b^x, with a = 0.9 and b = 0.6 or vice versa, determines the critical value x. If one adds b^x, one gets the equation posted by IP user 121.73
- Oops. It must read, 'If one adds a^x...' - ¡Ouch! (hurt me / more pain) 06:35, 19 September 2012 (UTC)
- If one solves the equation for x, one unfortunately does not get the solution to the continuous problem, as both captains would fire at the same instant, and we don't know what happens in that case. It does, however, solve the turn-based problem, as it provides a very good approximation of the distance at which one captain will fire. This still assumes that there are only two choices, fire and hold, with no "run" option for the captain of the faster ship if they shot first and missed. - ¡Ouch! (hurt me / more pain) 06:05, 19 September 2012 (UTC)
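To make the turn-based variant concrete, here is a minimal backward-induction sketch. The step size, the alternating turn order, and the rule that a survivor always closes to distance 0 and hits for certain are assumptions following the setup above:

```python
S = lambda x: 0.9 ** x   # small ship's hit probability at distance x
L = lambda x: 0.6 ** x   # large ship's hit probability at distance x

delta = 0.001
max_dist = 10.0

# Alternating turns, listed from the innermost outward: the large ship
# decides at (k - 1/2)*delta, the small ship at k*delta.
turns = []
k = 1
while k * delta <= max_dist:
    turns.append((k * delta - delta / 2, "large"))
    turns.append((k * delta, "small"))
    k += 1

# After a miss the survivor closes to distance 0 and hits for certain, so
# firing at x gives the firer a survival chance equal to its hit
# probability at x, and gives the other ship the complement.
cont = None            # (small survival, large survival) if play continues
farthest_fire = None
for i, (x, mover) in enumerate(turns):
    fire = (S(x), 1 - S(x)) if mover == "small" else (1 - L(x), L(x))
    idx = 0 if mover == "small" else 1
    # At the innermost turn the mover fires (the epsilon payoff for a kill);
    # farther out, fire only if it beats the continuation value.
    if cont is None or fire[idx] > cont[idx]:
        cont = fire
        farthest_fire = (x, mover)
    # otherwise the mover holds and the continuation value passes through

print(farthest_fire)   # fires at ~2.72, matching the continuous analysis
```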
Root of cubic formula
The formula for the root of a quadratic is
$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},$$
but what is the formula for the root of a cubic? Sunny Singh (DAV) (talk) 16:06, 16 September 2012 (UTC)
- The more terms are added, the more possible solutions. With the quadratic formula we already have the possibility of 2 roots, introduced by the ± sign, but larger formulas get unwieldy fast. However, since you asked, here it is, along with the reasons it's not recommended for common use: [1]. Wikipedia has a slightly different version: Cubic_function#General_formula_of_roots. StuRat (talk) 16:38, 16 September 2012 (UTC)
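Since the thread is about whether the closed form is usable at all, here is a minimal sketch of Cardano's method in code (the function name and edge-case handling are my own; in practice a general root finder such as numpy.roots is the saner choice):

```python
import cmath

def cubic_roots(a, b, c, d):
    """All three roots of a*x**3 + b*x**2 + c*x + d = 0 (a != 0),
    via Cardano's method on the depressed cubic t**3 + p*t + q = 0."""
    p = (3*a*c - b*b) / (3*a*a)
    q = (2*b**3 - 9*a*b*c + 27*a*a*d) / (27*a**3)
    shift = -b / (3*a)                    # x = t + shift
    disc = (q/2)**2 + (p/3)**3
    u = (-q/2 + cmath.sqrt(disc)) ** (1/3)
    if abs(u) < 1e-14:                    # wrong branch of the square root
        u = (-q/2 - cmath.sqrt(disc)) ** (1/3)
    if abs(u) < 1e-14:                    # p == q == 0: triple root
        return [shift] * 3
    omega = cmath.exp(2j * cmath.pi / 3)  # primitive cube root of unity
    return [u * omega**k - p / (3 * u * omega**k) + shift for k in range(3)]

# Roots of (x - 1)(x - 2)(x - 3): ~1, ~2, ~3 up to ordering and rounding.
print(cubic_roots(1, -6, 11, -6))
```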
- "Larger" formulas cease to exist fast! 86.176.214.83 (talk) 17:42, 16 September 2012 (UTC)
- Actually, the brilliant but tragically short-lived Évariste Galois showed, via Galois theory, that it is impossible to have algebraic solutions for general polynomials of degree 5 and higher. His theory also provides the tools for telling whether or not a given higher-degree polynomial may be a special case with algebraic roots (e.g. x^8 - 2 = 0). Dragons flight (talk) 17:31, 17 September 2012 (UTC)
Before the computer, tables of square roots, cube roots and trigonometric functions were used, so a problem was considered solved when it had been reduced to such functions. But now the computation of a square root is not much easier or faster than solving the equation in the first place like this. The root-of-cubic formula has lost its importance. Bo Jacoby (talk) 06:30, 17 September 2012 (UTC).
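A minimal sketch of the kind of direct numerical solution alluded to here (a plain Newton iteration; the function name, starting guess and tolerance are illustrative assumptions):

```python
def newton_root(a, b, c, d, x0=0.0, tol=1e-12, max_iter=100):
    """One real root of a*x**3 + b*x**2 + c*x + d by Newton's method.
    Which root it finds depends on the starting guess, as usual."""
    f  = lambda x: ((a*x + b)*x + c)*x + d      # Horner evaluation
    fp = lambda x: (3*a*x + 2*b)*x + c          # derivative
    x = x0
    for _ in range(max_iter):
        step = f(x) / fp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(newton_root(1, -6, 11, -6, x0=0.0))      # converges to the root 1
```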
- I sort of doubt that the importance of that formula ever had much to do with getting concrete numerical approximations to the root. It's too awkward to use. I can practically guarantee that the quartic formula was never important for that reason. The formulas are important for theoretical reasons, not because you can look up roots in tables. --Trovatore (talk) 20:07, 20 September 2012 (UTC)
Thanks, I got my answer. I have another question that has been revolving in my mind, but it is not from mathematics. The question is:
"We, in India, learn the quadratic equation in standard 10. In which standard is the quadratic equation taught to the students of the USA?" Sunny Singh (DAV) (talk) 17:16, 17 September 2012 (UTC)
- What's a "standard"? If you mean the same as grade level, then my recollection was that it was taught around 7th or 8th grade in the USA. Dragons flight (talk) 17:31, 17 September 2012 (UTC)
- However, that's for college prep students. Those who don't intend to go to college may not encounter the quadratic formula at all, depending on state minimum curriculum requirements. StuRat (talk) 18:48, 17 September 2012 (UTC)
By "quadratic equation" I mean the root-of-quadratic formula. Did you, Dragons flight, mean the same? Sunny Singh (DAV) (talk) 14:50, 18 September 2012 (UTC)
- Yes, he probably did, because that is the context of the question and it sounds about right to me too - I think I learned it around 7th grade. As StuRat mentioned, mathematics teaching standards will vary from state to state in the US, but I think that they all follow pretty much the same schedule. It would be great if someone could find a reference with a state-by-state summary of basic mathematics education requirements, but I haven't found anything with a few quick searches. 209.131.76.183 (talk) 12:57, 20 September 2012 (UTC)
A problem with units
The following differential equation came up when I was solving a simple electrostatics problem: $\frac{dV}{dr} = \frac{k}{r}$, where V (which represents the electrical potential) is a function of r (which represents a distance). The solution is obviously $V = k \ln r + C$. But unit-wise, this answer doesn't make sense, because you can't take the logarithm of a quantity with units. How can this dilemma be resolved? 65.92.7.148 (talk) 17:34, 16 September 2012 (UTC)
- Rewrite the solution as $V = k \ln(r/C)$, where C is an arbitrary constant that has the same units as r. — Quondum 18:10, 16 September 2012 (UTC)
- You can just take the logarithm of your units, it won't hurt them. Or in other words, formally calculating with logarithms of units works fine. You'll have that $\ln(r) = \ln(r/\mathrm{m}) + \ln(\mathrm{m})$, and you can use the additive constant to cancel the logarithm of the meter. —Kusma (t·c) 12:36, 17 September 2012 (UTC)
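Both suggestions are the same resolution in different clothes; writing $r_0$ for an arbitrary reference length (notation introduced here for illustration):

```latex
V(r) \;=\; k\ln r + C \;=\; k\ln\frac{r}{r_0} + \bigl(C + k\ln r_0\bigr),
```

so the logarithm's argument becomes dimensionless once the term $k \ln r_0$ is absorbed into the integration constant.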