Wikipedia:Reference desk/Archives/Mathematics/2007 August 17

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 17[edit]

Summation[edit]

OK another stupid question for this evening. Does 1+1+1/2+1/4+1/4 etc add up to 10/3? Jack Daw 01:00, 17 August 2007 (UTC)[reply]

1 + 1/2 + 1/4 + 1/8... = 2 = 6/3
1 + 1/4 + 1/16 + 1/64... = 1/(1-1/4) = 4/3
4/3 + 6/3 = 10/3
So yes. It does. Gscshoyru 01:40, 17 August 2007 (UTC)[reply]

I do not understand what is so hard about this question.

1+1+1/2+1/4+1/4
4/4 + 4/4 + 2/4 + 1/4 + 1/4
(4+4+2+1+1)/4
12/4
3/1
3
202.168.50.40 03:17, 17 August 2007 (UTC)[reply]
Um... there's ...'s. That means continuation... i.e. 1 + 1/2 + 1/4 + 1/8 ... means that there's a + 1/16 and a + 1/32, etc. Ok? (Actually it's an etc... but it means the same thing) Gscshoyru 03:34, 17 August 2007 (UTC)[reply]
As Gscshoyru says, the 'etc' in the original post can be read as ... which means "and the pattern continues". However I don't see a pattern. There are two ones, one 1/2 and two 1/4. What is the pattern here? -- SGBailey 07:50, 17 August 2007 (UTC)[reply]
The pattern is two of a kind, one of a kind, two of a kind, one of a kind and so on. 1+1+1/2+1/4+1/4+1/8+1/16+1/16+1/32+1/64+1/64+1/128+1/256+1/256 etc. Jack Daw 12:46, 17 August 2007 (UTC)[reply]
For the sake of preserving mathematics's reputation for pedantry, I feel someone should point out that Gscshoyru's argument above implicitly relies upon the fact that, since every summand is positive, the convergence is absolute and so we can reorder and rebracket however we wish without affecting the result. Algebraist 00:32, 18 August 2007 (UTC)[reply]
With the above result, then we can rewrite the sum as
This... gives the wrong answer. And because you're using the wrong sums -- the first sum is 1/2 + 1/8 + 1/32... the second is 1/2 + 1/4 + 1/8... The first sum should be 1/2^(2n) plain, and the other should be 1/2^n. Gscshoyru 03:06, 18 August 2007 (UTC)[reply]
No, it seems correct to me. Perhaps you said that when I was trying to fix the TeX up and it looked like .
No... I said it later than that. And it's still wrong, because you're getting the wrong answer. Look at what I said your patterns were, as opposed to what they should be.
That's what it should be. Gscshoyru 05:21, 18 August 2007 (UTC)[reply]
The pattern, as Jack Daw states, is "1+1+1/2+1/4+1/4+1/8+1/16+1/16+1/32+1/64+1/64+1/128+1/256+1/256+...". Algebraist helpfully notes that we can rearrange the sums as we please as all terms are positive. So, we pick out all the "doubled terms" first:
2(1 + 1/4 + 1/16 + 1/64 + ...) = 2 · 1/(1 − 1/4) = 8/3
That's 8/3 for the doubled terms. This is indeed correct.
It's for the "single terms" where I have made the error:
1/2 + 1/8 + 1/32 + 1/128 + ... = (1/2) · 1/(1 − 1/4) = 2/3
So
8/3 + 2/3 = 10/3
And 6/3 + 4/3 = 10/3 also, as in the first calculation.
While I flubbed up the original expression, the first sum was always correct: we've merely summed up the expression in two different ways, and neither way of summing is more correct than any other (although it would help for me to actually get the sums right!).
Ah. I see what you were doing. I saw the wrong answer and assumed you were doing something completely wrong... when in fact your error was only minor. Yes, this demonstrates that re-bracketing still works (at least for those two re-brackets). Sorry for assuming otherwise. Gscshoyru 11:49, 18 August 2007 (UTC)[reply]
No worries, I shouldn't have made the silly error!
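The rearrangement above can be checked numerically. This is a minimal sketch (our own, not from the thread) that sums the pattern group by group, where group n contributes two copies of (1/4)^n plus one copy of (1/2)(1/4)^n, and confirms the partial sums approach 10/3:

```python
from fractions import Fraction

def partial_sum(groups):
    """Sum the first `groups` repetitions of the pattern
    1 + 1 + 1/2 + 1/4 + 1/4 + 1/8 + ...: group n contributes
    two copies of (1/4)**n and one copy of (1/2)*(1/4)**n."""
    total = Fraction(0)
    for n in range(groups):
        q = Fraction(1, 4) ** n
        total += 2 * q + Fraction(1, 2) * q
    return total

print(float(partial_sum(30)))  # very close to 10/3
```

Using exact rationals (`Fraction`) avoids any floating-point doubt about the limit.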

Over the reals[edit]

what does it mean?

Depends on the context... sorta... what's the context, please? Gscshoyru 01:36, 17 August 2007 (UTC)[reply]

solve angles in radians added together, then it says kEI at the end

kEI? Like k ∈ I? That means that the k that appears in the answer belongs to some previously defined set I. Do you know what this I is? –King Bee (τγ) 01:59, 17 August 2007 (UTC)[reply]

yeah, yeah, k is an element of the integers. But what does that mean with "over the reals"? I've been searching in Google and Wikipedia and you guys use it in context as if the person reading it knows already

"Over the reals" refers to the fact that your underlying field is the real numbers, and is normally used when you're finding solutions to an equation that could have complex solutions, to say that you're only solving "over the reals", i.e. only looking for real solutions. You can also use it to say things like "the set of functions over the reals" (i.e. the set of functions whose domain and range are subsets of the real numbers), or "the complex numbers form a vector space over the reals" (i.e. you can treat the set of complex numbers as a vector space over the field of real numbers). Confusing Manifestation 02:11, 17 August 2007 (UTC)[reply]
The OP appears to be trying to solve trig problems. If you are asked to solve a trig problem "over the reals", then you are expected to provide a trig general solution, and k ∈ ℤ is a part of the answer. Splintercellguy 12:26, 17 August 2007 (UTC)[reply]
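A concrete worked example (our own choice, not from the thread): solving sin(θ) = 1/2 over the reals gives the general solution θ = π/6 + 2kπ or θ = 5π/6 + 2kπ with k ∈ ℤ. A quick numeric spot-check for a few integer values of k:

```python
import math

# General solution of sin(theta) = 1/2 over the reals:
#   theta = pi/6 + 2*k*pi   or   theta = 5*pi/6 + 2*k*pi,   k an integer.
for k in range(-3, 4):
    assert math.isclose(math.sin(math.pi / 6 + 2 * math.pi * k), 0.5)
    assert math.isclose(math.sin(5 * math.pi / 6 + 2 * math.pi * k), 0.5)
print("general solution checked for k = -3 .. 3")
```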

What are the chances...?![edit]

Hi! I was playing a card game (palace to be exact) when me and my friends became really bored and decided to shuffle the cards and choose a random card out of the standard 52. The first time I shuffled, the card "six of hearts" (6 of ♥) appeared. I reshuffled and again I get the card: "six of hearts"! I kept reshuffling and again and again I get the card "six of hearts'"! Now the deck IS a normal 52 card deck and all the cards are there. And I end up drawing the card "six of hearts" exactly thirteen (13) times...in a Row! By my calculations (and if I'm wrong, please correct me) the odds of that happening are:

(4x52)^13

4- Because there are four different types (♠,♣,♥,♦) and the card was always hearts.

52- Because there are 52 cards in a standard deck

13- Because if an act of probability happens again then the odds increase by the amount of possibilities multiplied by itself (1/2, 1/4, 1/8, 1/16...)

Assuming this information is correct the answer becomes:

(4x52)^13= 1.364028217e30

ergo:

1.364028217e30 : 1

Is this correct? It seems to be such an astronomically high number that it seems inconceivable, but, it just happened, so would I have a much better chance to win the lottery? Or be in a plane crash? What other statistic could be higher than this?!

Many thanks! ECH3LON 02:25, 17 August 2007 (UTC)

The fact that there are four suits doesn't matter; only that each card in the deck is distinct. With a truly random setup, the chance of you drawing the six of hearts is 1/52. Do that twice, and it becomes (1/52)^2 = 1/2704, etc. So the probability is given by (1/52)^13. Yes, it is astronomically unlikely. My guess is that the six of hearts in your deck is bent or that your method of shuffling and picking a card is somehow nonrandom. Strad 02:44, 17 August 2007 (UTC)[reply]

It's 1 in 52^13 or 1/(52^13) which is a most unlikely result. I suspect trickery! 202.168.50.40 03:20, 17 August 2007 (UTC)[reply]

There's nothing special about the six of hearts, and therefore nothing special about the first draw; what's notable is that the 12 other cards drawn are the same as the first one. So the correct exponent is 12, i.e. 1/(52^12). This is still too small a number for it to be plausible that this really happened. The age of the Earth in milliseconds is less than 52^12. --Anonymous, August 17, 04:33 (UTC).
The first draw still factors in to the probability. Drawing any card from the deck has a probability of 1, drawing a six has a probability of 4/52, drawing a heart has a probability of 13/52, drawing the six of hearts has a probability of 1/52. The correct exponent is 13. Strad 04:45, 17 August 2007 (UTC)[reply]
But, as Anonymous says, before it's drawn there's nothing special about the six of hearts. Certainly the probability of drawing the six of hearts thirteen times in a row is 1/52^13, but what's so amazing is that the same card came up thirteen times in a row, which means that it would have been just as remarkable had the Jack of Clubs been the one drawn repeatedly. Of course, one other issue with the assumption that the draws were independent is how the OP was shuffling - if, for example, they were doing the kind of shuffle that would tend to put, say, the top card consistently near the middle, then that would affect the probabilities significantly. Confusing Manifestation 06:55, 17 August 2007 (UTC)[reply]
The correct exponent is NOT thirteen. The exponent indicates the number of consecutive draws after the first. The probability of drawing the 6 of hearts three times in a row would in fact be (1/52)^3, but the scenario drawn here, where the person drew any particular card three times in a row, becomes (1/52)^2. You need to go back to your math teacher and ask for your money back because they clearly didn't teach you well. Donald Hosek 23:48, 17 August 2007 (UTC)[reply]
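The two exponents, and Anonymous's age-of-the-Earth comparison, can be checked in a few lines. This sketch assumes an age of about 4.54 billion years; the key identity is that 52 · (1/52)^13 = (1/52)^12, i.e. "some card repeats thirteen times" is 52 times likelier than "the six of hearts specifically repeats thirteen times":

```python
p_specific = (1 / 52) ** 13        # the six of hearts, 13 draws in a row
p_any_card = 52 * (1 / 52) ** 13   # any one card repeated; equals (1/52)**12

# Age of the Earth in milliseconds, assuming ~4.54 billion years
age_of_earth_ms = 4.54e9 * 365.25 * 24 * 3600 * 1000  # ~1.4e20

print(p_any_card)                   # ~2.6e-21
print(52 ** 12)                     # ~3.9e20
print(52 ** 12 > age_of_earth_ms)   # confirms Anonymous's comparison
```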

Just suppose you perform a skewed riffle shuffle like this: you take the upper half of the deck with your right hand and the lower half with the left hand, then interleave the cards so that the last card falling is the one released by the right thumb. This way the topmost card is still the same. If you draw the top card and return it to the top of the deck, you can get the same card as many times as you want with probability 1. CiaPan 10:11, 17 August 2007 (UTC)[reply]
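CiaPan's skewed riffle can be modelled in a few lines. This is an assumption-laden toy (exact halves, strict alternation, the right thumb releasing last), not a claim about how the OP actually shuffled, but it shows the top card is invariant:

```python
def skewed_riffle(deck):
    """Cut in half, release cards alternately from the bottom of each half,
    with the right thumb's last card (the original top card) falling last."""
    half = len(deck) // 2
    right, left = deck[:half], deck[half:]  # right hand holds the upper half
    released = []                            # order the cards fall, bottom first
    i, j = len(left) - 1, len(right) - 1
    lefts_turn = True                        # left releases first, right last
    while i >= 0 or j >= 0:
        if lefts_turn and i >= 0:
            released.append(left[i]); i -= 1
        elif j >= 0:
            released.append(right[j]); j -= 1
        lefts_turn = not lefts_turn
    return released[::-1]  # top of the new deck is the last card released

deck = list(range(52))     # card 0 is the top card
d = deck
for _ in range(10):
    d = skewed_riffle(d)
print(d[0])  # still 0: the top card never moves, shuffle after shuffle
```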

You had better not report an unbelievable experience. If you are believed, then you were lying. Bo Jacoby 11:44, 17 August 2007 (UTC).[reply]
But since it's unbelievable, there's no danger of that... Tesseran 17:37, 17 August 2007 (UTC)[reply]
We are missing important details in the story, and that affects our analysis. As others have remarked, "shuffle" has many meanings, and we cannot know what was actually done. Nor do we know if the top card was chosen, or — as is more usual — if the deck was cut to a random card.
Persi Diaconis and colleagues have famously analyzed some random card shuffles, a difficult problem, while a simple group theory analysis explains a deterministic "perfect shuffle" (with two different versions). The ideal result would be a uniformly distributed random permutation of the cards. There are ways to achieve that with a single shuffle, but they are laborious and unnatural for hand shuffling. (For example, start with 52 cards in any order, and select a card at random with a uniform distribution; repeat until every card has been selected.)
If we assume a "riffle shuffle" follows a certain mathematical model, we discover some intriguing behavior. Here's one model: First, we cut the deck, not exactly in half but approximately so. Then we assume the cards fall in clumps alternately from the right and left hands. The size of a clump favors single cards, with larger clumps being increasingly unlikely. (Diaconis has noted that he himself shuffles more precisely than this model!) One clever idea we can use in the analysis is Fourier theory on the group of permutations; other approaches have also yielded insight. And the peculiar behavior that emerges is that two or three passes leave considerable order, and in fact, a great deal of order persists even after five passes; but abruptly at seven passes we achieve excellent near-uniformity (depending on the measure).
In the more likely scenario that someone else is shuffling the deck, we suspect chicanery. A well-known classic effect in card conjuring is that a card appears at the top of the deck again and again, despite apparent efforts to prevent it and despite apparent attempts to preclude trickery. Speculation about how this is accomplished is useless, because a good magician can show the card at the top of the deck a dozen times with a dozen different methods, many of which demand skill acquired through much practice.
My guess is that the poster is either self-deluded or insincere; either way, I would prefer to have someone else shuffle the deck. ;-) --KSmrqT 09:33, 18 August 2007 (UTC)[reply]
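The group-theory aside about the deterministic "perfect shuffle" can be illustrated concretely. The well-known fact is that a perfect "out" shuffle of a 52-card deck sends the card at position i to position 2i mod 51, and since 2^8 ≡ 1 (mod 51), eight such shuffles restore the deck. A small sketch confirming this:

```python
def out_shuffle(deck):
    """Perfect 'out' riffle: cut exactly in half and interleave, starting
    with the original top card, so the top and bottom cards stay fixed."""
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    out = []
    for a, b in zip(top, bottom):
        out.append(a)
        out.append(b)
    return out

deck = list(range(52))
d = out_shuffle(deck)
count = 1
while d != deck:
    d = out_shuffle(d)
    count += 1
print(count)  # 8: eight perfect out-shuffles restore a 52-card deck
```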

Dice game[edit]

Imagine a game where on a player's turn he rolls three dice. If two of the dice match, he gets a point, and wins as soon as he reaches five. But if all three match, he wins instantly. What is the average number of rolls it should take to win?

I know that the odds of a pair on three dice are 5 in 18, and the odds of a trio one in 36.

18 rolls × a 5/18 chance gives five pairs in (on average) 18 rolls. Now, the odds of a trio in eighteen rolls are 1 − (35/36)^18, i.e. 0.3978

This gives an average of 17.6 rolls to win. But this to me seems far too high. Any thoughts? EamonnPKeane 17:26, 17 August 2007 (UTC)[reply]

Are you sure the chance of a pair isn't 5/12? Chance of trio is (1)*(1/6)*(1/6) = 1/36, chance of three unique is (1)*(5/6)*(4/6) = 20/36, so chance of pair is 1 - (1/36 + 20/36) = 15/36 = 5/12?
Regardless of the right probabilities for one roll of three, here is a hint if you ignore the possibility of rolling a trio. Baccyak4H (Yak!) 17:48, 17 August 2007 (UTC)[reply]
(More) And if you wish to address the possibility of rolling a trio, consider this. Baccyak4H (Yak!) 17:55, 17 August 2007 (UTC)[reply]
(Last) Ask yourself, how can we win on roll number x? Note that to do so, two things have to happen: one is that we actually have to roll x times, the other is that the xth roll has to win(!). Baccyak4H (Yak!) 18:04, 17 August 2007 (UTC)[reply]
I agree, the chance of rolling a pair which is not also a triple with three 6-sided dice is 5/12. I figured it out by saying there are 6 possible doubles, each of which can have a third die with one of 5 values to avoid a triple, making 30 possible double combos. However, the doubles can be in any of three positions: DDs, DsD, or sDD, giving us 90 possible double combos when position is considered, out of 6^3 = 216 possible rolls for three six-sided dice. 90/216 reduces to 5/12. StuRat 06:55, 18 August 2007 (UTC)[reply]
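StuRat's count can be verified by brute force over all 216 ordered rolls:

```python
from itertools import product

pair = trio = 0
for roll in product(range(1, 7), repeat=3):
    kinds = len(set(roll))
    if kinds == 1:
        trio += 1   # all three dice match
    elif kinds == 2:
        pair += 1   # exactly one pair
print(pair, trio)   # 90 and 6 out of 216, i.e. 5/12 and 1/36
```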
A way of solving this is by using this generalization: what is the expected number of rolls needed to win, if you win when you have accumulated n points or roll three equals? Calling this number E_n, you want to find E_5. We have E_0 = 0. You can further express E_{n+1} as a function of E_n, so E_{n+1} = f(E_n). Then you can compute E_1 = f(E_0), E_2 = f(E_1), ..., E_5 = f(E_4). If the function has the form f(x) = a + rx, you have in general
E_n = a(1 − r^n) / (1 − r).
 --Lambiam 21:01, 17 August 2007 (UTC)[reply]
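Carrying out Lambiam's recurrence with the corrected per-roll probabilities (trio 1/36, pair-only 5/12): conditioning on one roll gives E_{n+1} = 1 + p_pair·E_n + (1 − p_trio − p_pair)·E_{n+1}, which rearranges to f(x) = 9/4 + (15/16)x. A short exact computation:

```python
from fractions import Fraction

p_trio = Fraction(6, 216)    # 1/36: instant win
p_pair = Fraction(90, 216)   # 5/12: scores one point
p_end = p_trio + p_pair      # 4/9: a roll either scores or wins outright

# E_{n+1} = (1 + p_pair * E_n) / p_end  =  9/4 + (15/16) * E_n
E = Fraction(0)              # E_0 = 0
for _ in range(5):
    E = (1 + p_pair * E) / p_end
print(float(E))              # E_5, roughly 9.93 rolls on average
```

The closed form E_5 = 36(1 − (15/16)^5) gives the same value.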

By experimentation the answer seems to be about 11, which makes sense given all of the above.

I think it is closer to 9.9289.  --Lambiam 21:01, 18 August 2007 (UTC)[reply]

What is the name for the bottle or shape without an opening?[edit]

What is it?

Perhaps you're thinking of the Klein bottle? Friday (talk) 19:59, 17 August 2007 (UTC)[reply]
"Bottle" in the question hints at that. "shape without an opening" might refer to convex. PrimeHunter 20:10, 17 August 2007 (UTC)[reply]
Closed surface or closed manifold? —Keenan Pepper 05:14, 18 August 2007 (UTC)[reply]

Stability of a solution[edit]

I have a boundary value problem defined as follows:

u'(s) = F(u(s)),

with boundary conditions g(u(0), u(1)) = k.

Let the solution of this be u0(s) . I have to find whether there exists another solution to this system u1(s) infinitesimally close to u0(s) (distance based on some norm). If there is, then the solution u0 is unstable because any disturbance can take it to u1. Is there a way to do this numerically (F and g are nonlinear so, general theoretical analysis might be hard)? Thanks! deeptrivia (talk) 22:13, 17 August 2007 (UTC)[reply]

Interesting idea. Perhaps you should linearise F and g around u0 and see if the resulting linear problem has more than one solution. —Bromskloss 08:54, 18 August 2007 (UTC)[reply]
Take F(u)=0, g(u,v)=u-v. Then u=C is a solution for all constants C. But does it make u=0 unstable? (Igny 14:45, 18 August 2007 (UTC))[reply]
Just to make sure I understood correctly before I answer, was this a comment on my suggestion? —Bromskloss 15:36, 18 August 2007 (UTC)[reply]
Oh, well, here's what I meant anyway. Let u1(s) = u0(s) + εv(s), for a small ε.
Similarly for the boundary condition. Does that get you anywhere? (If it's even correct.) —Bromskloss 16:08, 18 August 2007 (UTC)[reply]
Ah, I see what you mean now and I don't have a good answer. Perhaps we (I) could use a more precise definition of stability. For a given u(0), the equation could very well determine u(1) uniquely. What does it then mean to be unstable? I mean, you can only be unstable for a while, so to speak, because when s = 1, you have to end up at the predetermined u(1). —Bromskloss 16:16, 18 August 2007 (UTC)[reply]
The "unstable" here is more like buckling of a beam. I am also thinking along the lines of linearization and then checking for the uniqueness of the linear BVP, but am not able to find the conditions for uniqueness of a system of differential equations when constraints can be on either end (unlike an initial value problem). Does someone know the conditions that a linear system of differential equations must satisfy for uniqueness (keeping in mind that some conditions are at s = 0, but some other at s = 1). Thanks, deeptrivia (talk) 17:29, 18 August 2007 (UTC)[reply]
I see. How about this? Take any u(0), solve the equation (if possible) and observe what you get. You can write u(1) as a function of u(0). This relation, together with the boundary conditions (for the linearised problem), should determine what combinations of u(0) and u(1) are possible, and you just have to count them. (I think the function will even be linear, right?) —Bromskloss 18:13, 18 August 2007 (UTC)[reply]
The problem has mixed boundary conditions, so let's say in a system of six equations (and six functions u1(s) .. u6(s)), the boundary conditions are u1(0) = u10, u2(0) = u20, u3(0) = u30, u4(1) = u40, u5(1) = u50 and u6(1) = u60. Even if we have infinitely many solutions of the linear system, all solutions must have the same values at {u1(0),u2(0),u3(0),u4(1),u5(1),u6(1)}. So, do the u(0) and u(1) you are talking about only contain the unconstrained u's? It looks like I haven't clearly understood your idea. deeptrivia (talk) 18:29, 18 August 2007 (UTC)[reply]
I meant to forget about the boundary conditions for a while, accepting any u(0) and calculating the corresponding u(1). Then you apply the boundary conditions to discard most of them. Bah, I'm not sure it got much clearer. (I noted your question mark, btw. :-) Are you French?) —Bromskloss 19:10, 18 August 2007 (UTC)[reply]
Oh, I think it's a bit clearer now. Do you mean generating N different u(0)s and solving an initial value problem with each of them, then checking how many of these N starting points give the right u(1)? If I understood it right, doesn't N need to be very large to ensure a good probability of getting all possible solutions? In fact don't we need to consider infinitely many different u(0)s ? I'm not French but am curious to know which question mark made you think I was :) ? deeptrivia (talk) 19:29, 18 August 2007 (UTC)[reply]
I think I now understand it even better. Isn't it similar to the approach given in [1] for proving existence of solution to a BVP? Could this be adapted for finding uniqueness as well? deeptrivia (talk) 23:36, 18 August 2007 (UTC)[reply]

Let us say, for simplicity, that F(u) is a bounded Lipschitz function for all u. Then the IVP u'(s) = F(u(s)), u(0) = c has a unique solution u(s; c) for every c. Then we have u(1; c) determined uniquely. The next question is to determine c by forcing the solution to satisfy the BC g(c, u(1; c)) = k.

(hint) if you can find an initial value c0 satisfying the BC for which the derivative of the boundary residual g(u(0), u(1)) − k with respect to u(0) exists and is invertible, you can invoke the implicit function theorem to show that a solution exists in some neighborhood of c0. Is that all helpful? (Igny 01:05, 19 August 2007 (UTC))[reply]
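The idea of determining the initial value from the boundary condition is the classical shooting method, and it can be sketched numerically. This is a toy of our own choosing (not the OP's system): F(u) = u, g(a, b) = a + b, k = 1, so the exact solution of u' = u gives c(1 + e) = 1, i.e. c = 1/(1 + e). We integrate the IVP with RK4 from a guessed u(0) = c and bisect on the boundary residual:

```python
import math

def F(u):
    return u

def integrate(c, steps=1000):
    # classical RK4 for u' = F(u) on [0, 1], starting from u(0) = c
    h = 1.0 / steps
    u = c
    for _ in range(steps):
        k1 = F(u)
        k2 = F(u + h * k1 / 2)
        k3 = F(u + h * k2 / 2)
        k4 = F(u + h * k3)
        u += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return u

def residual(c):
    # g(u(0), u(1; c)) - k  with  g(a, b) = a + b  and  k = 1
    return c + integrate(c) - 1.0

# bisection on the initial value c; residual(0) < 0 < residual(1)
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = (lo + hi) / 2
print(c, 1 / (1 + math.e))  # the two values agree closely
```

Counting how many initial values satisfy the boundary condition is then a matter of studying the roots of the residual, which connects back to the uniqueness question.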

This looks quite promising. Please check it out. deeptrivia (talk) 02:59, 19 August 2007 (UTC)[reply]
What exactly is your goal here? Are you studying a particular problem or you want to study a (more) general theory? (Igny 22:51, 19 August 2007 (UTC))[reply]
I am studying a particular set of equations. In fact, apparently the paper I mentioned above is not practically useful because it seems to require evaluation of an infinite series of integrals (if I understand it right). deeptrivia (talk) 01:24, 20 August 2007 (UTC)[reply]