Wikipedia:Reference desk/Mathematics: Difference between revisions

From Wikipedia, the free encyclopedia
How can I export spreadsheet data from the TI-Nspire CAS Student Software into something like an Excel file or a CSV file? I ask this question here because I would think that people who visit this desk are more likely to use this software. --[[User:Melab-1|Melab±1]] [[User_talk:Melab-1|☎]] 18:49, 2 January 2012 (UTC)

Revision as of 00:02, 3 January 2012

Welcome to the mathematics section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


December 27

In what way is the sum of all positive integers −1/12?

Per Hardy and Ramanujan. I don't know much math but it's pretty obvious to me that no amount of positive integers can add up to a negative number. — Preceding unsigned comment added by 80.98.112.4 (talk) 01:45, 27 December 2011 (UTC)[reply]

Well, it depends on what sort of addition you have in mind. We have an article on 1 + 2 + 3 + 4 + …. You're certainly correct that, in the usual sense in which infinite sums are interpreted in mathematics, this sum is infinite. --Trovatore (talk) 01:58, 27 December 2011 (UTC)[reply]

Right, that article doesn't specify the special way/approach under which it isn't. I do understand infinities can be weird; for example, in one particular approach there are as many positive integers as positive and negative integers (1:1, 2:−1, 3:2, 4:−2 and so on ad infinitum), but in another way there is one more positive integer than positive and negative integers: (1: no correspondence, 2:1, 3:−1, 4:2, 5:−2 etc.). It depends on how you count.

So how can you count such that you converge on 1/12? 188.156.156.39 (talk) 02:10, 27 December 2011 (UTC)[reply]

Sorry, not 1/12. Weird enough, since they're all whole numbers and no operation like division is involved, just adding; but it's not even 1/12, it's −1/12. How? (Under what approach?) 188.156.156.39 (talk) 02:14, 27 December 2011 (UTC)[reply]
You can't, in the ordinary sense. Any finite subset of the terms has positive sum, and in any usually applied topology, those sums either do not converge on anything, or converge to +∞.
However, there is a function called the Riemann zeta function, which in certain parts of the complex plane equals the following sum: ζ(s) = ∑_{n≥1} 1/n^s.
Now, if you substitute −1 in for s, of course the series diverges. However, if you take the function where the series does converge, and travel around the complex plane to get to −1, avoiding poles, you get the value ζ(−1) = −1/12.
Now, is this enough to say that the sum is −1/12? In my opinion, no, usually. But there are a few special circumstances where it seems to make sense. --Trovatore (talk) 02:22, 27 December 2011 (UTC)[reply]


(edit conflict) By Zeta function regularization. The idea is that the sum ∑_{n≥1} n^(−s) defines an analytic function of s. This can be analytically continued to points outside the domain where the summation makes sense. In particular, the value of its analytic continuation at s = −1 is −1/12. It's not, however, the same thing as the infinite series 1 + 2 + 3 + ⋯, since this properly diverges to infinity. Sławomir Biały (talk) 02:23, 27 December 2011 (UTC)[reply]
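The regularized value can be double-checked numerically. A sketch, using Hasse's globally convergent series for ζ (my choice of method, not one anyone in the thread mentioned):

```python
# Check the regularized value zeta(-1) = -1/12 via Hasse's globally
# convergent series, valid for all s != 1:
#   zeta(s) = 1/(1 - 2**(1-s)) * sum_{n>=0} 2**-(n+1)
#             * sum_{k=0..n} (-1)**k * C(n,k) * (k+1)**(-s)
from math import comb

def zeta_hasse(s, terms=60):
    total = 0.0
    for n in range(terms):
        # n-th forward difference of (k+1)**(-s), alternating-sign binomial sum
        inner = sum((-1) ** k * comb(n, k) * (k + 1) ** (-s) for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total / (1 - 2 ** (1 - s))

print(zeta_hasse(-1))  # -0.08333... = -1/12
```

At s = −1 the inner sums vanish for n ≥ 2, so the series terminates after two terms and the value is exact.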
The thing that really needs to be explained is, why should anyone interpret the sum 1 + 2 + 3 + ⋯ as an instance of ∑_{n≥1} n^(−s) at s = −1, rather than, say, of some other analytic family evaluated at, say, s = −2? I haven't checked but I assume that gives you a different answer. Seems very ad hoc, not remotely natural. The surprising thing is that there are a few contexts, not obviously related, where it seems to be the right answer. --Trovatore (talk) 02:32, 27 December 2011 (UTC)[reply]
There's some sense in which the zeta function is the natural regularization from the point of view of Fourier analysis and spectral theory. The zeta function of an elliptic operator A is the trace of the kernel of A^(−s). (Here A^(−s) is usually already defined by analytic continuation. The trace is regarded in a suitable distribution sense, which effectively means that the trace operation commutes with analytic continuation.) The zeta function is natural because it contains most of the asymptotic information of the spectrum (think of the prime number theorem). For instance, the Atiyah–Singer index is expressible in terms of zeta functions, and the Wiener–Ikehara theorem describes the asymptotics of the eigenvalues. Sławomir Biały (talk) 13:02, 27 December 2011 (UTC)[reply]

What is it called when an irrelevant outcome between 3rd place and 4th place determines who finishes 1st or 2nd?

I took a creative mathematics class back in high school that discussed how in ice skating, depending on how well (or how badly) the last contestant performs--even if that contestant has no possibility of finishing in first or second place--the ranking of the other ice skaters can flip. I looked for it on the article Apportionment paradox but I don't know what it is called, or where to begin looking. It's when an athlete indirectly influences the ranking system in bizarre ways, such as if Alex is in first place, Brad is in second place, Charlie is in third place, and if Donald moves from 4th place to 3rd place, then suddenly Brad has more points than Alex. (I think it has to do with competitions which have a running total, and points are awarded based on relative ranking in the component events.) I found it fascinating because it is something similar to what's actually happening in a company I do business with regarding how it awards contracting, so any guidance would be tremendously appreciated! Thank you all, 완젬스 (talk) 06:24, 27 December 2011 (UTC)[reply]

See Independence of irrelevant alternatives. Additionally, see Arrow's impossibility theorem. 77.125.138.22 (talk) 10:45, 27 December 2011 (UTC)[reply]
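A minimal illustration of an independence-of-irrelevant-alternatives failure, using a Borda count with made-up numbers (a sketch, not the actual skating scoring rules): with candidate C on the ballot B wins, yet head to head the voters prefer A.

```python
# Toy Borda-count election: 5 voters rank A > B > C, 4 voters rank B > C > A.
# Head to head, A beats B 5-4; but with the "irrelevant" C present,
# Borda scoring puts B on top.
def borda(profiles):
    """profiles: list of (voter_count, ranking_string); returns {candidate: score}."""
    scores = {}
    for count, ranking in profiles:
        m = len(ranking)
        for place, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + count * (m - 1 - place)
    return scores

with_c = borda([(5, "ABC"), (4, "BCA")])
without_c = borda([(5, "AB"), (4, "BA")])
print(with_c)     # B scores higher than A
print(without_c)  # A scores higher than B
```

The same mechanism is at work in any point-total system built from per-event placements: moving a fourth competitor up or down reshuffles the points the leaders receive.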
Yes! Thank you very much, that's it. 완젬스 (talk) 12:31, 27 December 2011 (UTC)[reply]


December 28

Optional stopping

Say I want to convince people that I can predict the result of a coin flip to better than 50% accuracy. I set up an experiment where I call heads or tails, flip the coin to determine whether I'm right, and repeat the process. However, I have the right to stop the experiment whenever I want, and if I stop the experiment while I'm ahead, I could get an accuracy higher than 50%.

If I want to get the highest expected percentage accuracy, what is the optimal strategy? Using this strategy, what accuracy can I obtain? What happens if I'm only allowed to stop the experiment after at least 10 coin flips, or after at least N coin flips, where N approaches infinity? --99.237.252.228 (talk) 00:12, 28 December 2011 (UTC)[reply]

I would guess that the best approach would be to stop flipping whenever more than 50% of the flips are in your favor. Half the time the first flip should go your way, and, of the half of the times it doesn't, in 1/4 of those you will get the next two flips your way. So, that's 5/8 of the time in your favor with just 3 flips. StuRat (talk) 00:17, 28 December 2011 (UTC)[reply]
Don't guess. If you've identified the dynamic, then why not give a general expression and its subsequent conclusion/s? Fly by Night (talk) 01:22, 28 December 2011 (UTC)[reply]
I expect a rapid regression to the mean. Others can do the math. StuRat (talk) 01:31, 28 December 2011 (UTC)[reply]
One has to be careful about how to phrase such a question. The expectation value of the accuracy will be 50%, regardless of what optional stopping protocol is used. This is the optional stopping theorem. By implementing a protocol such as the one Stu suggests, you will increase the likelihood of having slightly above average runs at the expense of having some very bad runs as well. In other words, you should not "expect" to look like you are able to predict the outcome of coin tosses. Sławomir Biały (talk) 01:35, 28 December 2011 (UTC)[reply]
It doesn't work as a betting strategy, due to the assumption that the bet must be doubled each time you continue. Thus, the few losses cost you as much as the far greater number of wins. However, in this setup, there's no monetary bet to double, so the few losses are not weighted more heavily than the many wins. StuRat (talk) 03:47, 28 December 2011 (UTC)[reply]
Again, you missed the nuance in my reply. I indicated that it is true that you can increase the chances of slightly-above-50% runs, but in doing so you both kill some very good runs while not eliminating some very poor runs. This tradeoff ensures that the expectation value of your accuracy remains 50%. Note that the original poster's question was specifically about expected value. Now, the optional stopping theorem states that a martingale stopped at a stopping time is a martingale. In this case, the original martingale is the discrete random walk obtained from the tosses: X_i = +1 if the ith toss is heads, X_i = −1 if it's a tails; the accuracy is a time-average of this process. Let S_t be the stopped walk. Since S_t is a martingale, the expectation value of S_t at the stopping time is 0, its starting value. So regardless of what protocol you use, you can only have even chances. (A basic meta-axiom of probability is that you can't get something for nothing.) Sławomir Biały (talk) 13:30, 28 December 2011 (UTC)[reply]
It's simple enough to show that you can increase the average percentage of heads. Let's say you stop after 1 flip if you get heads, and flip once more if you get tails. That would give you 100% heads 50% of the time, 50% heads 25% of the time, and 0% heads 25% of the time. This works out to an average of { 100(50) + 50(25) + 0(25) } / 100 = 62.5% heads. Your argument is only correct if getting one head does not count as much as getting two tails, and the OP said this is NOT the case here. StuRat (talk) 00:55, 29 December 2011 (UTC)[reply]
Two observations.
  • There's no strategy that guarantees a win, because there's an infinitesimal but nonzero probability that you'll go below 50% with the first toss and remain below 50% as long as you keep playing.
  • The only strategy that maximizes the likelihood of a win is to stop playing as soon as more than 50% of flips are in your favor, because, if you don't stop, there's an infinitesimal but nonzero chance of losing, as before.--Itinerant1 (talk) 01:45, 28 December 2011 (UTC)[reply]
That argument doesn't seem too rigorous. There's no strategy that guarantees a win, but the probability of losing is 0%, so I'm almost certain to win. I'm also not convinced that I should stop after getting at least 50%. There's an infinitesimal chance of losing, but there's a non-infinitesimal chance of increasing my accuracy, so why not continue? --99.237.252.228 (talk) 04:11, 28 December 2011 (UTC)[reply]
Is correctly predicting 1 toss out of 1 (100%) deemed better than, for example, correctly predicting 99 tosses out of 100 (99%)? That doesn't seem a very sensible scoring method... 81.159.105.243 (talk) 03:20, 28 December 2011 (UTC)[reply]
Yes, it is. That's why I introduced the condition that I can only stop after at least 10 tosses. Also, if my strategy is to always stop after the 1st flip, my expected accuracy would only be 50%, so that doesn't work. --99.237.252.228 (talk) 04:11, 28 December 2011 (UTC)[reply]
I wasn't suggesting that the strategy should be to always stop after the first flip. 81.159.105.243 (talk) 13:35, 28 December 2011 (UTC)[reply]
I ran some simulations. Here are the results of a run where I stop whenever more than 50% of the tosses have been heads, with no minimum number of flips allowed, and the maximum number allowed listed (I skipped even numbers since those would have lots of ties):
 MAX      WIN 
FLIPS      %
=====  =========  
  1    50.000000
  3    62.500000
  5    68.750000
  7    72.656250
  9    75.390625
 11    77.441406
 13    79.052734
 15    80.361938
 17    81.452942
 19    82.380295
 21    83.181190
 23    83.881973
Here's another run, but this time the minimum number of flips is 11:
 MAX      WIN 
FLIPS      %
=====  =========  
  11   50.000000
  13   55.639648
  15   59.466553
  17   62.361908
  19   64.676094
  21   66.590591
  23   68.213211
So, not only does setting a minimum number of flips mean more flips are required to do better than 50%, it also means that the rate at which the winning percentage grows is reduced from then on. StuRat (talk) 06:22, 28 December 2011 (UTC)[reply]
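The first table above can be reproduced exactly, with no simulation noise, by a small dynamic program; a sketch, treating heads-minus-tails as a ±1 random walk and asking for the probability it ever hits +1 within the allowed number of flips:

```python
# Exact version of the simulated table: the probability that the running
# count of correct calls ever exceeds 50% within at most max_flips flips.
from fractions import Fraction

def p_ever_ahead(max_flips):
    # dist maps walk position (heads minus tails) to probability, restricted
    # to paths that have never yet been ahead; 'won' absorbs the rest.
    dist = {0: Fraction(1)}
    won = Fraction(0)
    for _ in range(max_flips):
        nxt = {}
        for pos, p in dist.items():
            for step in (1, -1):
                q = pos + step
                if q >= 1:
                    won += p / 2          # went ahead: stop, count as a win
                else:
                    nxt[q] = nxt.get(q, 0) + p / 2
        dist = nxt
    return won

for n in (1, 3, 5, 7, 9):
    print(n, float(p_ever_ahead(n)))  # 0.5, 0.625, 0.6875, 0.7265625, 0.75390625
```

The exact values agree with the simulated table (50, 62.5, 68.75, 72.65625, 75.390625 percent), which suggests the simulation is measuring exactly this hitting probability.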
Next I repeated the above runs, but set the goal as winning at least 55% of the flips instead of 50%. Here it is with no minimum number of flips:
 MAX    OVER 55% 
FLIPS  PERCENTAGE
=====  ==========
   1   50.000000
   3   62.500000
   5   68.750000
   7   72.656250
   9   75.390625
  11   75.390625
  13   76.416016
  15   77.478027
  17   78.462219
  19   79.352188
  21   79.752640
  23   80.213654
And here it is with the minimum number of flips set to 11:
 MAX    OVER 55% 
FLIPS  PERCENTAGE
=====  ========== 
  11   27.441406
  13   38.720703
  15   44.360352
  17   48.388672
  19   51.548386
  21   52.846050
  23   54.267372
As you can see, if you want a higher percentage of heads, it requires more flips. If you actually wanted to do this for real, the time it takes to do all these flips would soon become a serious constraint. You could probably get 99% of the flips to be heads every time, except that the coin flipping would be interrupted by the death of the universe. In other words, the optimal strategy is entirely dependent on how much time you have. StuRat (talk) 06:35, 28 December 2011 (UTC)[reply]
(i) Is your "WIN %" measuring the probability that 50% heads was exceeded at any point? Is that what the question asked? I thought it was asking about the expected proportion of correct predictions in the run. (ii) I'm not sure about "You could probably get 99% of the flips to be heads every time", if I'm understanding it correctly. Any given surplus of heads over tails (or vice versa), however large, is certain to be achieved (in the "probability tends to one" sense) if we continue long enough, but is the same true for proportions? If we continue long enough, are we guaranteed to always eventually achieve 99% heads? It sounds very unlikely to me. In fact, it is not at all obvious to me that we are always guaranteed to even achieve 50.000001%. (iii) It seems plausible to me that the optimum strategy is to always stop if the number of heads is one more than the number of tails, but I don't think that has been anything like proved so far in this thread. (Note: in case not obvious, I'm using "heads" synonymously with "successful predictions" because we may assume that you always predict heads, since it makes no difference what you predict.) 81.159.105.243 (talk) 13:34, 28 December 2011 (UTC)[reply]
(i) Yes. The original question is unanswerable, because you could get any proportion you wished, if you had an infinite amount of time. StuRat (talk) 20:36, 28 December 2011 (UTC)[reply]
Let me make this very black and white: There is no optimal strategy that will maximize the expected proportion of heads to tails. This is a mathematical theorem. If you don't believe mathematics, trust casinos: go play roulette and bet $1 on black each time. You think you can expect to beat the house? Sławomir Biały (talk) 13:52, 28 December 2011 (UTC)[reply]
It's always possible to get ahead if you go on for long enough, in the sense that the probability of doing so tends to one. It may not be practical because you may have to go on playing for years (potentially even millions of years). 81.159.105.243 (talk) 14:02, 28 December 2011 (UTC)[reply]
But your question is not about the probability of exceeding 50%. It's about expected value. The probability of exceeding 50% can approach 1 while the expected value remains the same. This has nothing to do with practicality. There are some very poor runs in the tail of the distribution that average out with the modest slightly-above-50% runs. See my replies above. Sławomir Biały (talk) 14:06, 28 December 2011 (UTC)[reply]
See also Gambler's ruin. Sławomir Biały (talk) 14:14, 28 December 2011 (UTC)[reply]
Actually it's not my question (I'm not the OP). But, as I understand it, the question is about when to stop so as to maximise one's proportion of successful predictions. If we go on long enough, we are "certain", in the appropriate probabilistic sense, to reach the point where the number of successes exceeds the number of failures by one. Clearly it is advantageous* to continue to that point if we have not yet reached it. We next need to show (assuming it's true) that it is never advantageous to continue beyond that -- even though, if we continue, we are "certain" to eventually reach the point where our successes outnumber our failures by any stated fixed (i.e. not proportional) amount. 81.159.105.243 (talk) 14:24, 28 December 2011 (UTC) *advantageous in principle, but not necessarily in real life, since we don't have an indefinite amount of time...[reply]
In the original post, the game is stopped after a large number N. The expected proportion of heads (regardless of the stopping protocol) is 50%, even if the probability of being less than this is very small. (There is a difference between expected value and probability.) Now let N tend to infinity. Sławomir Biały (talk) 14:47, 28 December 2011 (UTC)[reply]
I'm envisaging that we do not stop tossing until we are one ahead, so your "poor runs in the tail of the distribution" simply never happen to even out the expectation. Perhaps the legitimacy of that is more of a philosophical question? Actually, though, there are possibly some more prosaic quirks thrown up by this "highest expected percentage accuracy" measurement. Suppose our strategy is to (always guess heads and) stop if the first toss is heads, otherwise continue for two more tosses. Unless I have just made some silly mistake, this gives 1/2 chance of 100% accuracy, 1/8 chance of 2/3 accuracy, 1/4 chance of 1/3 accuracy and 1/8 chance of 0 accuracy, for an "expected" accuracy of 2/3 -- even though obviously we cannot make money at roulette this way! Is that how we're meant to calculate it? 81.159.105.243 (talk) 15:22, 28 December 2011 (UTC) *advantageous in principle, but not necessarily in real life, since we don't have an indefinite amount of time...[reply]
Good point. Sławomir Biały (talk) 15:33, 28 December 2011 (UTC)[reply]
This seems like a very strange way of counting, though. (As you note, it counts one round of 100% accuracy as the same as 1 million rounds of 99% accuracy.) I was thinking that the right way to compute the expected proportion was to add up all the heads and then divide by the total number of coin flips, among all the sample paths. This will give 50%, from the theorem I quoted. I suppose context is important. Sławomir Biały (talk) 21:31, 28 December 2011 (UTC)[reply]
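The 2/3 figure above can be confirmed by direct enumeration; a short sketch of that computation:

```python
# Enumerate the strategy exactly: always call heads, stop after one toss
# if it's heads, otherwise toss twice more and stop.
from fractions import Fraction
from itertools import product

outcomes = [(Fraction(1, 2), Fraction(1))]     # first toss H: stop at 1/1 correct
for t2, t3 in product("HT", repeat=2):         # first toss T: play two more
    correct = (t2 == "H") + (t3 == "H")        # the first (tails) toss was wrong
    outcomes.append((Fraction(1, 8), Fraction(correct, 3)))

expected = sum(p * acc for p, acc in outcomes)
print(expected)  # 2/3
```

Averaging the per-run proportions this way does give 2/3, while pooling all heads over all flips (the martingale-friendly accounting) gives 1/2; the two counting conventions really do disagree, which is the point of the exchange above.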

in an infinite amount of time you will get any possible percentage, so you could stop whenever you have some arbitrarily high percentage. — Preceding unsigned comment added by 86.174.173.187 (talk) 15:35, 28 December 2011 (UTC)[reply]

StuRat said the same thing above, I believe. Is everyone certain that this really is true? As I mentioned above, I know it is true for surpluses of heads over tails (or vice versa), but is it true for percentages? Generally, we need to know the limit of the probability of getting at least x% heads anywhere (i.e. at any intermediate point) in a run of N tosses, as N -> infinity. If that number is 1 for any x then you guys are correct. It would surprise me though, on the basis that once you get to huge N it becomes hopelessly unlikely to get even 50.00001% heads (if you haven't already), and (I speculate) these increasingly hopeless odds overwhelm even the fact that N can keep getting bigger and bigger without limit. Any further thoughts about this? Am I wrong? 86.179.2.210 (talk) 18:21, 29 December 2011 (UTC)[reply]
Well, let's look at the probability of anywhere on a run getting >=75% heads. The probability of exceeding this on the first toss is 1/2. Thereafter, we can only pass from <75% to >=75% (actually, always to =75%) on turns that are a multiple of 4, say turn 4n. The probability of this happening on turn 4n is, according to my calculation, C(4n-1, n)/2^(4n). If we sum all these probabilities, we get 1/2 + sum(n = 1,2,3...) C(4n-1, n)/2^(4n). This is an overestimate of the required probability because of multiple counting: we might pass from <75% to >=75% several times in a run. Even though it is an overestimate, it still looks very unlikely to me that that sum will converge to 1. It looks to me as if it converges to about 0.85526. Of course, it is possible I made a mistake, but for now I remain unconvinced that the claim is correct. 86.179.2.210 (talk) 00:13, 30 December 2011 (UTC)[reply]
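That sum is easy to evaluate numerically; a sketch (the 200-term cutoff is my choice; the terms shrink geometrically, with ratio tending to 16/27, so the truncation error is negligible):

```python
# Evaluate the overestimate 1/2 + sum_{n>=1} C(4n-1, n) / 2**(4n).
from math import comb

total = 0.5
for n in range(1, 200):
    # int/int true division avoids float overflow for the huge intermediate values
    total += comb(4 * n - 1, n) / 2 ** (4 * n)
print(total)  # ~0.8553, comfortably short of 1
```

So the 0.85526 figure checks out, and since it is an upper bound, the probability of ever reaching 75% heads is bounded strictly below 1.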
I've extended my simulation:
 MAX      WIN 
FLIPS      %
=====  =========  
  1    50.000000
  3    62.500000
  5    68.750000
  7    72.656250
  9    75.390625
 11    77.441406
 13    79.052734
 15    80.361938
 17    81.452942
 19    82.380295
 21    83.181190
 23    83.881973
 25    84.501900
 27    85.055405
 29    85.553558
 31    86.005005
 33    86.4166   (Switched to floating point here, so accuracy is reduced slightly.)
 35    86.7939
It's not converging very quickly, so I'd expect the max value to be quite a bit higher, perhaps even 100% with an infinite number of flips allowed. StuRat (talk) 08:28, 1 January 2012 (UTC)[reply]
If I'm understanding correctly, your simulations are measuring the probability of getting more than 50% heads anywhere in the run. That probability does indeed tend to 1 as the number of tosses tends to infinity. What I am disputing is the claim that any percentage of heads (say 99%, or 75%, or even 51%) will eventually be reached with probability tending to one. That is quite a different matter. 86.171.174.74 (talk) 20:29, 1 January 2012 (UTC)[reply]
I agree that 99% HEADS may not be achievable every time. However, I'm not so sure 51% HEADS wouldn't be. Are you sure about that? StuRat (talk) 22:47, 1 January 2012 (UTC)[reply]
No. My intuition says that the probability of attaining any fixed percentage of heads other than above 50% does not tend to 1 as the number of tosses tends to infinity, but I am not certain. As I mentioned above, you need to bear in mind that, as the number of tosses becomes huge, the probability of attaining even, say, 50.000001% heads (if not already) becomes so increasingly vanishingly small that even the ever-increasing number of tosses may not be able to overcome it. Another consideration is that, if you allow that 99% heads isn't eventually "certain", but 51% is, then there will be some number between 99% and 51% at which there is some, if you like, "discontinuity". Intuitively it seems unlikely to me that this "discontinuity" could happen anywhere other than at 50%. 86.171.174.74 (talk) 23:15, 1 January 2012 (UTC)[reply]
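The disputed quantity can be computed exactly for any finite horizon with a dynamic program over (tosses, heads); a sketch (the 75% target and the horizons are my choices). For a 75% target the probability stays bounded below the 0.855 union-bound estimate above rather than climbing toward 1:

```python
# Exact probability that the running proportion of heads ever reaches
# at least `target` within `max_flips` tosses.
from fractions import Fraction

def p_ever_reaches(target, max_flips):
    dist = {0: Fraction(1)}   # heads count -> probability, never yet at target
    hit = Fraction(0)
    for toss in range(1, max_flips + 1):
        nxt = {}
        for heads, p in dist.items():
            for h in (heads + 1, heads):           # next toss heads / tails
                if Fraction(h, toss) >= target:
                    hit += p / 2                   # reached the target: absorb
                else:
                    nxt[h] = nxt.get(h, 0) + p / 2
        dist = nxt
    return hit

for n in (20, 40, 80):
    print(n, float(p_ever_reaches(Fraction(3, 4), n)))
```

The successive values increase, as they must, but the increments shrink rapidly, consistent with the limit being well below 1 for any target above 50%.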

Is there an algorithm which, for every first-order proposition (with identity and predicate symbols), determines whether that proposition is consistent?

77.124.12.169 (talk) 10:22, 28 December 2011 (UTC)[reply]

Consistent with what?
But, basically, no, whatever you might mean, even if you just mean "consistent with itself". A proposition "inconsistent with itself" would be the negation of a logical validity, and if there were an algorithm to determine whether the negation of a sentence is a validity, then there would also be one to determine if a sentence is a validity, and there isn't. --Trovatore (talk) 10:27, 28 December 2011 (UTC)[reply]
Yes, consistent with itself. 77.124.12.169 (talk) 11:17, 28 December 2011 (UTC)[reply]

Is there an algorithm which, for every consistent first-order proposition (with identity and predicate symbols), determines that the proposition is consistent?

77.124.12.169 (talk) 11:16, 28 December 2011 (UTC)[reply]

If every input is consistent, then the algorithm that always returns "is consistent" would work. The same would apply to your question below. Maybe I'm misreading, though. Phoenixia1177 (talk) 05:34, 2 January 2012 (UTC)[reply]
Yes, you were misreading my question. I didn't ask whether there is an algorithm such that, if every given proposition is consistent, then the algorithm determines that its given proposition is consistent; rather, I asked whether there is an algorithm such that, for every given consistent proposition, the algorithm determines that the proposition is consistent; i.e. the algorithm determines that its given proposition is consistent if that given proposition is really consistent.
For example, let's assume that the algorithm has two inputs, one of which is a consistent proposition, the other one being an inconsistent proposition; then the algorithm should determine that the first proposition is consistent. The algorithm is not expected to determine also whether the second input is a consistent proposition. You know, not every input must have an output: there are inputs for which a few algorithms don't halt...
84.228.187.129 (talk) 00:53, 3 January 2012 (UTC)[reply]
Validity is semidecidable by Gödel's completeness theorem, so for your inconsistent sentences, you could run this on their negations. Phoenixia1177 (talk) 13:00, 3 January 2012 (UTC)[reply]
Also, as an aside, you should tighten up your second part to say that it halts exactly when the input is consistent; currently, it sounds like you're saying only that it halts if the input is consistent. But if it also halts for some inconsistent proposition, then halting doesn't tell you the proposition is consistent, just that it halted. Not trying to be an ass, just thought it was worth mentioning. Phoenixia1177 (talk) 13:34, 3 January 2012 (UTC)[reply]
Anyways, I haven't got an answer: Is there an algorithm which, for every consistent first-order proposition (with identity and predicate symbols), determines that the proposition is consistent? 77.127.135.82 (talk) 00:24, 4 January 2012 (UTC)[reply]
What are you talking about? You have an answer: the inconsistent propositions cannot be satisfied, thus their negations are valid, and the valid formulas are, essentially, an RE set. So your consistent propositions would be, essentially, a co-RE set. So, in short, no, there is no such algorithm for the consistent case. On an aside, you come off as kind of rude, perhaps, oddly, condescending. Phoenixia1177 (talk) 08:05, 4 January 2012 (UTC)[reply]

Is there an algorithm which, for every inconsistent first-order proposition (with identity and predicate symbols), determines that the proposition is inconsistent?

77.124.12.169 (talk) 11:16, 28 December 2011 (UTC)[reply]

betting game

we throw a dice. If I get a 1 I lose all my money, but if I get any other number I double my money. The laws of probability suggest that I should always continue playing, as it is more likely I will gain than lose. Yet obviously I will eventually get a 1 and lose all my money. I assume this is a case of gambler's ruin. So what's the best strategy? — Preceding unsigned comment added by 86.174.173.187 (talk) 15:39, 28 December 2011 (UTC)[reply]

This is a variant of the St. Petersburg paradox. The game has infinite expectation value, yet at some point it is absurd to continue playing. Sławomir Biały (talk) 16:14, 28 December 2011 (UTC)[reply]
Mathematically, though, the St. Petersburg paradox makes sense to pay any value for, while mathematically it is stupid here to continue playing indefinitely. Anyway, the solutions to the St. Petersburg paradox don't help here. — Preceding unsigned comment added by 86.174.173.187 (talk) 17:50, 28 December 2011 (UTC)[reply]
Why is it "mathematically" stupid to continue playing indefinitely? You have a 5/6 chance of doubling your money with no risk but your initial investment. Surely "mathematically" the most rational thing to do is to continue playing the game. Actually, you would probably want to sell shares in this game. This hedges some of your own risk. Sławomir Biały (talk) 19:30, 28 December 2011 (UTC)[reply]
Note that there's a chance you could win more than all the money in the world. For example, the chance of getting over a trillion times your initial bet is around 0.07%. StuRat (talk) 20:47, 28 December 2011 (UTC)[reply]
This illustrates the same phenomenon as the St. Petersburg paradox. When the risk of losing the pot outweighs the benefit of doubling down, we would stop playing. But that depends on our individual utility functions. If we had linear utility functions, there would be no incentive ever to stop playing (which leads to a preposterous conclusion, of course). Sławomir Biały (talk) 21:21, 28 December 2011 (UTC)[reply]

There is a 100% chance you will eventually get a one and lose everything. — Preceding unsigned comment added by 86.174.173.187 (talk) 11:16, 30 December 2011 (UTC)[reply]

...and the game has infinite expected value. Sławomir Biały (talk) 01:42, 31 December 2011 (UTC)[reply]
Well, it would, except that you can't throw "a dice" anyway, there being no such thing. --Trovatore (talk) 01:47, 31 December 2011 (UTC) [reply]
Al contrario, amico mio :): The OP geolocates to the United Kingdom and in Commonwealth English the correct singular of 'dice' is in fact 'dice'; 'die' is an archaism outside of the States. Source: OED. 24.92.85.35 (talk) 17:47, 1 January 2012 (UTC)[reply]
I'm gonna go full-bore prescriptivist on this one: The Brits were wrong to make that change. A dice is a barbarism; makes my teeth itch. It's as bad as a criteria. --Trovatore (talk) 19:43, 1 January 2012 (UTC) [reply]
More mathematically, when you say the game has infinite expected value, presumably you mean that the expected value of the strategy "play the game for n rounds or until you bust" has an expected value that goes to infinity as n goes to infinity.
However, it's probably more natural to read your statement as being the expected value of the strategy "play the game forever, or until you bust", and the expected value of that strategy is zero. The payoff is infinite if you're allowed to play forever, but the probability of that happening is zero, and in measure theory zero times infinity is generally taken to be zero. --Trovatore (talk) 04:47, 31 December 2011 (UTC)[reply]
Yes, I mean the first statement. The expected value tends to infinity as the number of rounds tends to infinity. The paradox here is that this would seem to imply that a risk-neutral party would play the game indefinitely, and go bust almost surely. Sławomir Biały (talk) 12:24, 31 December 2011 (UTC)[reply]
Suppose someone offers you the opportunity to do this for a single throw. Unless you are extremely risk averse, you would probably play the game for one throw. It represents an extremely good, but risky, investment. Your expected return is 166%, although there is some volatility to worry about. Now, if we're in for more throws, the volatility grows much faster than the return the more tosses you're in for, since it's always "all or nothing": this is the 100% chance that you refer to. One needs to come up with a reasonable model of a rational risk-averse person or market (capital asset pricing model or Harold Markowitz#Choosing the best Portfolio), and work out the variance of the process to maximize your indifference curve. (Also note that if you were allowed to reinvest the cash between turns, that would substantially reduce the risk, although it would likely be a poorer expected payout.) Sławomir Biały (talk) 03:02, 31 December 2011 (UTC)[reply]
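The tension in this game can be made concrete with a two-line computation (a sketch; the stake and the round counts are arbitrary): after n rounds the expected bankroll grows like (5/3)^n while the chance of still being solvent shrinks like (5/6)^n.

```python
# Exact survival probability and expected bankroll after n rounds of the
# dice game: roll a 1 and lose everything, else double the stake.
from fractions import Fraction

def after_rounds(n, stake=1):
    survive = Fraction(5, 6) ** n           # never rolled a 1 in n rounds
    expected = stake * Fraction(5, 3) ** n  # (5/6) * 2 expected growth per round
    return survive, expected

for n in (1, 10, 50):
    s, e = after_rounds(n)
    print(n, float(s), float(e))
```

The n = 1 row is the "166% expected return" mentioned above; by n = 50 the expected value is astronomical while the survival probability is roughly one in ten thousand, which is the whole paradox in two numbers.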

Now, what is the conclusion of all this? That the mean value is only relevant when you can repeat the experiment? That gamblers and statisticians and economists are insane? That gambling is more fun than winning? Bo Jacoby (talk) 11:02, 5 January 2012 (UTC).[reply]


December 29

15 Coin game

What is the name of the game that uses 15 coins in 5 horizontal rows, with 1 coin in the top row, increasing by one coin per row to 5 coins in the bottom row? Two players take turns, and on each turn a player may remove as many coins as desired from any single (horizontal) row. The player forced to remove the last coin loses. Thanks, hydnjo (talk) 05:17, 29 December 2011 (UTC)[reply]

It's a Nim variation. PrimeHunter (talk) 05:30, 29 December 2011 (UTC)[reply]
Arghh, of course. Thanks PH. hydnjo (talk) 05:59, 29 December 2011 (UTC)[reply]

Order statistics

Is there a nice "physical" interpretation for order statistics of fractional order? For example, I keep encountering beta distributions involving parameters close to half-integers and to integers ± 1/3. --HappyCamper 07:24, 29 December 2011 (UTC)[reply]

Sum of digits of 2^1000

I'm trying to solve this problem, which is to sum the digits of 2^1000. The obvious approach seems to be brute force: work out 2^1000 (using big ints) and then sum the digits. However, I feel that that would be missing the point, and I think there may be a cleverer way of doing it, perhaps based on the digits behaving in a predictable way every time they are doubled. I'd rather not get the full answer, but could you tell me whether I'm right, or give me a hint otherwise? I can see, for example, that every digit is doubled, and then if the result is 10 or more you keep the least significant digit and carry 1 to the next digit after it is doubled; but with everything you need to keep track of, that looks a lot like what would be done simply by using big ints. --178.208.197.58 (talk) 15:46, 29 December 2011 (UTC)[reply]

This was first posted to Wikipedia:Reference desk/Computing#Sum of digits of 2^1000. PrimeHunter (talk) 15:49, 29 December 2011 (UTC)[reply]
Note that log₁₀ 2 ≈ 0.30103, so log₁₀ 2^1000 ≈ 301.03, so 2^1000 must have 302 digits. 84.228.166.2 (talk) 17:38, 29 December 2011 (UTC)[reply]
...if written in decimal, of course. CiaPan (talk) 17:51, 29 December 2011 (UTC)[reply]
And more precisely: 10^301 < 2^1000 < 10^302, so ⌊log₁₀ 2^1000⌋ = 301, so 2^1000 has 1 + 301 = 302 digits in decimal notation. --CiaPan (talk) 17:58, 29 December 2011 (UTC)[reply]

The mean value of a random digit is (0+1+2+3+4+5+6+7+8+9)/10 = 4.5. The mean value of the sum of 302 random digits is 302×4.5 = 1359. The mean value of the square of a random digit is (0+1+4+9+16+25+36+49+64+81)/10 = 28.5. The variance is 28.5−4.5² = 28.5−20.25 = 8.25. The variance of the sum of 302 random digits is 302×8.25 = 2491.5. The standard deviation is the square root of the variance, about 50. So the sum of the digits of 2^1000 is 1359±50. Bo Jacoby (talk) 20:00, 29 December 2011 (UTC).[reply]
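That back-of-the-envelope estimate is easy to check in a few lines of Python (a sketch; the exact value relies on Python's built-in big integers, and the variable names are made up for illustration):

```python
# Treat the 302 decimal digits of 2^1000 as roughly uniform random digits
# and compare the resulting estimate with the exact digit sum.
mean_digit = sum(range(10)) / 10                          # 4.5
var_digit = sum(d * d for d in range(10)) / 10 - mean_digit ** 2  # 8.25
n_digits = len(str(2 ** 1000))                            # 302
estimate = n_digits * mean_digit                          # 1359.0
spread = (n_digits * var_digit) ** 0.5                    # ~49.9, the "±50"
actual = sum(int(d) for d in str(2 ** 1000))              # 1366
print(estimate, round(spread, 1), actual)
```

The exact answer, 1366, indeed falls well within one standard deviation of the estimate.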

That's quite close to the actual answer, which is 1366... but I think what the OP is more interested in is an algorithm, or some mathematical property of binary-to-decimal conversion that doesn't require big integers. Shadowjams (talk) 22:17, 29 December 2011 (UTC)[reply]
Actually, the math part of this question is whether there are known properties of the sequence of decimal digit sums of the numbers 2^n, or in other words, whether the digit sum of repeatedly doubled numbers follows a set pattern, or behaves essentially randomly (on a graph it looks random). The first few numbers are 1 2 4 8 7 5 10 11 13 8 7 14 19 20 22 26 25 14 19 29 31 26 25 41 37 29 40 35 43 41 37 47 58 62 61 59 64 56 67 71 61 50 46 56 58 62 70 68... I searched the OEIS for that string and nothing came up, which suggests to me it's not a well-known sequence. That said, I'm always amazed at the Math desk, so maybe someone knows something about it.
Unless there's some programmer's trick (still waiting for the programmer who knows it to answer), all the straightforward methods seem to either directly rely on big ints or somehow duplicate their logic. If you could calculate 2^n incrementally in decimal digits, then you could do this without ever outright using a bigint... That might work... maybe someone can expand on that idea.
I also got the whole implementation down to 50 characters in perl (commit to the code: a10dcea4a0e707a77bd9b7cf80b47ed4). Golf anyone? Shadowjams (talk) 23:04, 29 December 2011 (UTC)[reply]
Here it is in 27 characters in Mathematica: +##&@@IntegerDigits[2^1000]. Sławomir Biały (talk) 00:18, 31 December 2011 (UTC)[reply]
I thought we shouldn't do homework, so I didn't post the relevant OEIS link earlier, but the answer has now been posted, so here it is: OEIS:A001370. It links to a table going to 10000. Perhaps you didn't put your sequence in quotes and overlooked the right result among other hits. There is no easy algorithm for manual computation; it would be a lot of work with any algorithm. PrimeHunter (talk) 01:27, 30 December 2011 (UTC)[reply]
I didn't realize OEIS was so picky about quotes. I don't think this is a homework question (particularly given the time of year) and even if it is, 1366 isn't the answer, the program to find it is. I'm pretty confident the OP knows how to solve the problem (although I didn't post my perl code for the same reason... just in case). I find the underlying question interesting though, and I'm curious if anyone has any insight on that part. (I'm not the OP btw) Shadowjams (talk) 06:13, 30 December 2011 (UTC)[reply]
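The "incremental decimal" idea mentioned above can be sketched as follows (an illustrative sketch, not from the thread; in Python the big-integer route `sum(int(d) for d in str(2**1000))` would of course work directly, and the function name here is made up):

```python
# Keep 2^n as a little-endian list of decimal digits and double it n times,
# so no explicit big-integer type is ever used.
def double_digits(digits):
    """Double a little-endian list of decimal digits, propagating carries."""
    out, carry = [], 0
    for d in digits:
        carry, r = divmod(2 * d + carry, 10)
        out.append(r)
    if carry:
        out.append(carry)
    return out

digits = [1]                 # 2^0
for _ in range(1000):
    digits = double_digits(digits)

print(len(digits), sum(digits))
```

This reproduces the figures in the thread: 302 digits summing to 1366. As the OP suspected, the carry bookkeeping is exactly what a big-integer library does internally.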
I wonder if the original poster got the question wrong and was being asked for the sum if you kept on adding the digits together till you just got one digit, i.e. the digital root rather than the digital sum. Dmcq (talk) 09:45, 30 December 2011 (UTC)[reply]

Elementary calculation modulo 9: 2^1000 = 2^(6×166+4) = (2^6)^166 × 2^4 = 64^166 × 16 ≡ 1^166 × 7 = 7 (mod 9). So the sum of the digits in 2^1000 is of the form 7+9n, where the integer n ≈ (1359±50−7)/9 = 150.22±5.55 ≈ 150±5. Actually n = 151, because 7+9×151 = 1366. Bo Jacoby (talk) 11:43, 30 December 2011 (UTC).[reply]

Excuse my ignorance.... I'm with you up to 64^166 × 16, but I don't understand the next operation that goes to 1^166 × 7. Shadowjams (talk) 23:15, 30 December 2011 (UTC)[reply]

See modular arithmetic. 64 = 9×7+1 and 16 = 9×1+7. This means that 64 is congruent to 1 (modulo 9), and 16 is congruent to 7 (modulo 9). And so 64^166 × 16 is congruent to 1^166 × 7 (modulo 9). Bo Jacoby (talk) 23:37, 30 December 2011 (UTC).[reply]
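These congruences can be verified directly with Python's three-argument `pow`, which computes modular powers without ever forming 2^1000 itself (a quick check, not part of the original thread):

```python
# Verify the mod-9 argument above step by step.
assert pow(2, 1000, 9) == 7        # 2^1000 ≡ 7 (mod 9)
assert 64 % 9 == 1 and 16 % 9 == 7 # the two reductions used above
assert pow(64, 166, 9) == 1        # so 64^166 ≡ 1^166 = 1 (mod 9)
assert 7 + 9 * 151 == 1366         # the digit sum has the form 7 + 9n
assert sum(int(d) for d in str(2 ** 1000)) % 9 == 7
print("mod-9 argument checks out")
```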

I'm not the OP, but the question is Euler problem 16 so I think the actual sum of the digits is desired, not the digital root. 67.122.210.96 (talk) 09:08, 5 January 2012 (UTC)[reply]

December 30

cavitation

I need an equation to get the final speed of the water when a cavitation bubble collapses. P.S.: any equations about cavitation would be useful. Thanks, Jake1993811 (talk) 10:17, 30 December 2011 (UTC)[reply]

The final speed is zero. Bo Jacoby (talk) 11:50, 30 December 2011 (UTC).[reply]
It's not the speed but the pressure that matters; see Cavitation. --Aspro (talk) 19:48, 30 December 2011 (UTC)[reply]
I believe the water is just accelerated inwards by the pressure, so it would just depend on the pressure and the original size of the cavity. The cavity volume goes down as the cube of the radius, so the pressure must get pretty large at the end, driven by the inertia. Dmcq (talk) 20:05, 30 December 2011 (UTC)[reply]

No offence, but you people are completely not getting what I'm talking about. I know I was not too clear, and I'll fix that tomorrow, but right now I'm too tired and enraged at your server admin to do so. Jake1993811 (talk) 04:13, 31 December 2011 (UTC)[reply]

Sorry, "we don't know" is the result. You're best off looking through Google Scholar or a book about it. Why should anybody here know much? Collapsing bubbles are singularities as far as any straightforward simple maths is concerned. There is no server admin to be enraged at, offended, or whatever. Dmcq (talk) 12:55, 31 December 2011 (UTC)[reply]

A cavitation bubble with volume V in water of pressure P and density ρ has potential energy PV. When the bubble collapses, this potential energy is converted into kinetic energy Vρv²/2. So the order of magnitude of the velocity is v = √(P/ρ). A detailed solution to the equation of motion will provide this characteristic velocity with a dimensionless factor depending on the distance from the center of the bubble. Happy new year! Bo Jacoby (talk) 14:03, 31 December 2011 (UTC).[reply]

The water was at rest when the radius of the bubble was R₀. When the bubble has contracted to radius R < R₀, the inwards velocity of the surface of the bubble has increased to v > 0. Then the velocity of the water at distance r > R from the center of the bubble is vR²/r², because of the incompressibility of the water. The volume of water having this velocity is 4πr²dr. The kinetic energy of the flow is

∫_R^∞ (ρ/2)(vR²/r²)²·4πr² dr = 2πρv²R³.

Equating this to the work done by the ambient pressure on the shrinking bubble,

P·(4π/3)(R₀³ − R³),

gives

v = √((2P/3ρ)(R₀³/R³ − 1)).
For R→0 the assumption of incompressibility breaks down and an acoustic wave is emitted. Of course the velocity does not actually become infinite. Bo Jacoby (talk) 13:56, 1 January 2012 (UTC).[reply]

The time for the bubble to collapse is

t = ∫₀^R₀ dR/v = √(3ρ/2P) ∫₀^R₀ dR/√(R₀³/R³ − 1) ≈ 0.915·R₀√(ρ/P).

The mean velocity of the surface of the bubble, R₀/t ≈ 1.09·√(P/ρ), is independent of the size of the bubble.

The ugly integral was computed by [1]. Bo Jacoby (talk) 16:40, 1 January 2012 (UTC).[reply]
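Plugging in illustrative numbers gives a feel for the scales involved. The sketch below uses the standard Rayleigh results for incompressible collapse, v(R) = √((2P/3ρ)(R₀³/R³ − 1)) and t ≈ 0.91468·R₀√(ρ/P); the pressure, density, and bubble radius are assumed example values, not from the thread:

```python
from math import sqrt

P = 101325.0    # ambient pressure, Pa (assumed: 1 atm)
rho = 1000.0    # water density, kg/m^3 (assumed)
R0 = 1e-3       # initial bubble radius, m (assumed: 1 mm)

# Characteristic velocity sqrt(P/rho), the order-of-magnitude estimate:
v_char = sqrt(P / rho)                       # ~10 m/s

def wall_speed(R):
    """Rayleigh wall speed once the radius has shrunk from R0 to R."""
    return sqrt((2 * P / (3 * rho)) * ((R0 / R) ** 3 - 1))

# Rayleigh collapse time, t = 0.91468 * R0 * sqrt(rho / P):
t_collapse = 0.91468 * R0 * sqrt(rho / P)

print(v_char, wall_speed(R0 / 2), t_collapse)
```

For a 1 mm bubble at atmospheric pressure this gives a characteristic speed of about 10 m/s, a wall speed of about 22 m/s by the time the radius has halved (diverging as R → 0, where incompressibility fails as noted above), and a collapse time of roughly 91 microseconds.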

Determinants

Back in high school I learned about matrices and determinants. I learned how to calculate them, but recently I realized my math teacher never provided word problems to illustrate how they are applied in real life. Now I cannot find any examples online; also, the article on determinants is of no help. What are some simple examples of how they are used to solve real life problems? — Preceding unsigned comment added by 75.36.223.223 (talk) 21:33, 30 December 2011 (UTC)[reply]

There are presumably very many ways, and others will definitely add to this, but one use is when you have a system of equations (like 4x + 2y = 5; 3x + 7y = 17) that you solve by inverting the matrix. For huge systems of equations you can't do this by hand, so you get software to do it for you. The software has to calculate the inverse of the matrix, but it has to know ahead of time whether this is possible, so it computes the determinant. If the determinant is even close to zero, it spits out a warning, because the numerical computation will be prone to error. IBE (talk) 22:37, 30 December 2011 (UTC)[reply]
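For a 2×2 toy version of that example, the determinant check plus Cramer's rule can be sketched as follows (an illustration only; as noted elsewhere in this thread, real solvers use factorizations rather than explicit inverses, and the function name and `eps` threshold here are arbitrary assumptions):

```python
def solve_2x2(a, b, c, d, e, f, eps=1e-12):
    """Solve ax + by = e, cx + dy = f by Cramer's rule,
    refusing when the determinant is (numerically) zero."""
    det = a * d - b * c
    if abs(det) < eps:
        raise ValueError("matrix is (numerically) singular")
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# The system from the comment above: 4x + 2y = 5, 3x + 7y = 17.
x, y = solve_2x2(4, 2, 3, 7, 5, 17)
print(x, y)   # x = 1/22, y = 53/22
```

A singular input such as `solve_2x2(1, 2, 2, 4, 1, 1)` raises the warning instead of dividing by (nearly) zero, which is the role the determinant plays in the explanation above.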
I'm not sure that's entirely true; generally something like LU decomposition would be used to find the inverse or the determinant, so it wouldn't really save anything to compute the determinant first and then compute the inverse. Perhaps a better application would be the formula for the volume of the tetrahedron with four given vertices in space. Determinants don't really have many elementary applications, else they would have been studied long before they actually were, but the more math you study the more you see them popping up in unexpected places.--RDBury (talk) 03:17, 31 December 2011 (UTC)[reply]
The determinant of a matrix is equal to the product of its eigenvalues. Widener (talk) 04:26, 31 December 2011 (UTC)[reply]

You know that the difference between two fractions is computed like this:

a/b − c/d = (ad − bc)/(bd).

The numerator is the determinant

| a  c |
| b  d | = ad − bc.

If the determinant is zero, then of course

a/b = c/d,

and then the tuples (a,b) and (c,d) are proportional, or linearly dependent. Bo Jacoby (talk) 09:01, 31 December 2011 (UTC).[reply]

One "real life" application of determinants (as noted above by RDBury) is to calculate areas and volumes. The area of a parallelogram with vertices at (0,0), (a,b), (c,d) and (a+c, b+d) is |ad − bc|, i.e. the absolute value of the determinant

| a  b |
| c  d | = ad − bc.
Similarly, the determinant of a 3x3 matrix is related to the volume of the parallelepiped formed from its rows (or from its columns). If the determinant is 0, then in the 2x2 case this means that (a,b) and (c,d) are proportional (as Bo pointed out) and the "parallelogram" is in fact a straight line through the origin. In the 3x3 case, a determinant of 0 means that the "parallelepiped" is flattened into a plane (or a line, or a point) through the origin. Gandalf61 (talk) 12:04, 31 December 2011 (UTC)[reply]
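Both formulas are easy to check numerically (a small sketch; the helper names are made up for illustration, and the 3×3 determinant is computed as the scalar triple product of the rows):

```python
def parallelogram_area(a, b, c, d):
    """Area of the parallelogram spanned by (a,b) and (c,d): |ad - bc|."""
    return abs(a * d - b * c)

def parallelepiped_volume(u, v, w):
    """|u . (v x w)|, the absolute 3x3 determinant with rows u, v, w."""
    cx = v[1] * w[2] - v[2] * w[1]
    cy = v[2] * w[0] - v[0] * w[2]
    cz = v[0] * w[1] - v[1] * w[0]
    return abs(u[0] * cx + u[1] * cy + u[2] * cz)

print(parallelogram_area(3, 0, 0, 2))   # 6: a 3-by-2 rectangle
print(parallelogram_area(1, 2, 2, 4))   # 0: (1,2), (2,4) are proportional
print(parallelepiped_volume((2, 0, 0), (0, 3, 0), (0, 0, 4)))  # 24: a box
```

The zero case is exactly the degenerate "straight line through the origin" described above.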

Another useful application is in vector (or cross) products; see cross_product.

83.100.189.252 (talk) 15:47, 31 December 2011 (UTC) Yet another application is finding the eigenvalues of a matrix (see Eigenvalue_algorithm), although as above it's more of a technique or algorithm. The best 'real-world' application is probably finding areas/volumes as described above - at least for 'worded' maths problems. — Preceding unsigned comment added by 83.100.189.252 (talk) 15:52, 31 December 2011 (UTC)[reply]

This is a mnemonic for remembering the cross product formula; it is not truly an application of the determinant. Matrices cannot have vectors as entries. Widener (talk) 16:32, 31 December 2011 (UTC)[reply]

The OP says that the determinant article is not good, yet it has its own applications section. I would direct the OP to the latter half of the article. Fly by Night (talk) 23:55, 31 December 2011 (UTC)[reply]

It seems like that would be hard to follow for someone who apparently hasn't touched the subject since high school, and the OP has clearly tried to look it up first. It also might not appear comprehensive - one could be forgiven for thinking there should be more than just those listed, as I myself had assumed. IBE (talk) 07:32, 2 January 2012 (UTC)[reply]
To me, the best intuitive way to understand the determinant is with volumes. A few people touched on this, but I think it's worth emphasizing. An n by n matrix is a linear transformation, mapping the points in n-dimensional space to other points in n-dimensional space. The determinant is the scaling factor for volume under this transformation. (When I say "volume" here I mean n-dimensional volume, so for n=3 this is traditional volume, for n=2 this is area, but we can also generalize the notion to larger n.) So if I have some solid with a volume of 1 (the shape of it doesn't matter), and then apply a matrix with determinant c, the resulting solid will have a volume of c. One place this idea shows up is in multivariable calculus, when you do a change of variables on an integral using the Jacobian. Rckrone (talk) 23:08, 2 January 2012 (UTC)[reply]
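This volume-scaling description can be checked numerically for n = 2: apply a 2×2 matrix to the unit square and compare the image's area (via the shoelace formula) with the absolute determinant. A sketch, with made-up names and an arbitrary example matrix:

```python
def shoelace_area(pts):
    """Area of a simple polygon given its vertices in order."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

M = ((2, 1), (1, 3))                      # determinant 2*3 - 1*1 = 5
square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # unit square, area 1
image = [(M[0][0] * x + M[0][1] * y, M[1][0] * x + M[1][1] * y)
         for x, y in square]

print(shoelace_area(image))               # 5.0, matching |det M| = 5
```

The shape really doesn't matter, as stated above: any region of area 1 would map to a region of area 5 under this matrix.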

December 31

Exporting data from TI-Nspire Student Software

How can I export spreadsheet data from the TI-Nspire CAS Student Software into something like an Excel file or a CSV file? I ask this question here because I would think that people who visit this desk are more likely to use this software. --Melab±1 18:49, 2 January 2012 (UTC)[reply]

January 3

January 2

January 1