Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

The Wikipedia Reference Desk covering the topic of mathematics.

Welcome to the mathematics reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Provide a short header that gives the general topic of the question.
  • Type ~~~~ (four tildes) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Post your question to only one desk.
  • Don't post personal contact information – it will be removed. We'll answer here within a few days.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we’ll help you past the stuck point.


How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
 
See also:
Help desk
Village pump
Help manual


November 20

Proving the law of large numbers without defining probability

Is it possible to prove the law of large numbers without introducing the concept of probability?

This might seem like a strange question, so here's my motivation for asking it. In the frequentist interpretation of probability, the probability that an event E occurs in an experiment is defined as \lim_{N \rightarrow \infty} \frac{N_E}{N}, where N is the number of times the experiment is performed, and N_E is the number of times event E actually occurred in those experiments.

Obviously, this definition presumes that the above limit actually converges, a claim which isn't exactly trivial. It's usually stated as the law of large numbers, but the law of large numbers is typically proved only after the concept of probability has already been defined, making the whole thing a bit circular.

I happen to like the frequentist interpretation and would like to salvage it. This could be done if the law of large numbers could be proved without explicitly introducing ideas of probability. Does anyone know how this can be done? Thanks. 24.37.154.82 (talk) 00:09, 20 November 2014 (UTC)

I'd think the law of large numbers is mostly an empirical thing. — Melab±1 03:46, 20 November 2014 (UTC)
No, it isn't. Our article law of large numbers has a bad intro IMO. We can use the LLN as a result of a model of something, and then apply it to some experiment, but the statement and proof of the theorem are just regular pure mathematics. SemanticMantis (talk) 16:29, 20 November 2014 (UTC)
It isn't circular, and you can define probabilities without resorting to the limit you describe (Which does have sort of an empirical feel, and that might explain Melab's comment). For example, I can define the probability of getting heads (H) or tails (T) on a fair coin toss (x) without using any limit statements or invoking LLN: \mathbb{P}(x=H)=1/2, \mathbb{P}(x=T)=1/2 -- see? Maybe you were confused because we can invoke the LLN as a means of justifying a method to estimate a probability when only empirical experiments are available. There's nothing wrong with the frequentist interpretation either. Or rather, it has some problems, but all such interpretations do. The Bayesian approach also has problems, and rather than there being some huge distinction in world views or philosophy of math, people tend to just use whichever framework is most suitable for the problem at hand. SemanticMantis (talk) 16:29, 20 November 2014 (UTC)
Sure, you can define the probability that a fair coin lands on heads as \frac{1}{2}, but this definition carries zero content. You might as well define \mathbb{P}(x=H)=\frac{1}{e}. The fact that we favour one value over another for the probability of a fair coin landing on heads tells us that it is more than a mere definition. 24.37.154.82 (talk) 17:01, 20 November 2014 (UTC)
Of course it's just a definition, made up to conform with our notion of "fair". How can you say it has zero content? It fully specifies the probability of the events occurring. Of course, some weighted coin might indeed have 1/e chance of ending up heads, and it's your prerogative if you want to define a certain mathematical probability in that manner. Backing away from this issue: the LLN is a theorem about probability theory, and as such, it depends on axioms of probability. There's really no way around that, and it doesn't entail any specific problem with the frequentist interpretation. You might want to read up on other Probability_interpretations, but all of them have the LLN depending on the same axioms and definitions that compose the classical discrete probability. SemanticMantis (talk) 20:42, 20 November 2014 (UTC)
I should further clarify: the different interpretations of probability have to do with how we make inferences about the real world based on mathematical constructs. None of the interpretations change the mathematics of probability. From a mathematics perspective, all probability theory can be constructed axiomatically, and has no technical need for any outside interpretation. SemanticMantis (talk) 20:46, 20 November 2014 (UTC)
We're talking over each other.
Until you define what you mean by "probability", you can't present an argument for why a fair coin should have probability of 1/2 to land on heads. If you try to avoid a discussion of what probability actually is by just defining the probability that a coin lands on heads as 1/2 -- which is what I thought you were doing above -- I would reply that the definition carries zero content. 76.68.233.159 (talk) 01:30, 21 November 2014 (UTC)

──────────────────────────────────────────────────────────────────────────────────────────────────── If you're still reading, I apologize for the confusion. You're right, I defined a specific probability without really defining what a probability really is. There is a bit of a break between the machinery necessary to define the probability associated with discrete outcomes and finite numbers of experiments, compared to the way we define probabilities for the continuous case or for an infinite number of coin tosses. The former can be done relatively simply, but the latter is somewhat difficult. See probability axioms for the latter case. Rigorously defining "probability" in a general and axiomatic way is rather recent, and was only completed by Kolmogorov. Prior to that, people had been a little fast and loose with the foundations, and perhaps that is what you are picking up on. But to do all this rigorously means some time has to be spent constructing measure theory, and few people outside a graduate program in math will go that far. Anyway, the treatment of defining probability axiomatically as described in our article doesn't depend on any notion of LLN or the limit statements you have at the top. It is not circular, though it may often be presented in a semi-circular fashion if the students and instructor don't have the time and background to go through Lebesgue measure and all that. Does that help answer your question? SemanticMantis (talk) 14:37, 21 November 2014 (UTC)
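
For reference, the Kolmogorov axioms mentioned above amount to three requirements on a probability measure \mathbb{P} defined on a collection of events in a sample space \Omega:

\mathbb{P}(E) \ge 0, \qquad \mathbb{P}(\Omega) = 1, \qquad \mathbb{P}\left(\bigcup_{i=1}^{\infty} E_i\right) = \sum_{i=1}^{\infty} \mathbb{P}(E_i) \text{ for pairwise disjoint events } E_i.

No frequency limit appears in the axioms; the LLN is then a theorem about measures satisfying them, which is why the construction is not circular.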

See Principle of indifference for why the probabilities for heads and tails should be equal. Bo Jacoby (talk) 22:48, 20 November 2014 (UTC).

\frac{N_E}{N} is the probability that a randomly chosen one among the N experiments was E.
\frac{1+N_E}{2+N} is the probability that the next experiment will be E.

Bo Jacoby (talk) 09:28, 21 November 2014 (UTC).

I'm surprised that nobody brought up sums of binomial coefficients yet.
Let n be an even integer. Then, let S_n be the sum of the 2n+1 coefficients \binom{n^2}{n^2/2-n} through \binom{n^2}{n^2/2+n}. For large n, S_n / 2^{n^2} (the denominator being the sum of all BCs with upper index n^2) approaches a non-zero value, even though those coefficients represent a diminishing fraction of the BCs with upper index n^2.
Now, the law of large numbers says something else, namely that the sum of a fixed fraction of BCs should approach unity. However, if we look at other sums, for example, the sum of the 2kn+1 coefficients \binom{n^2}{n^2/2-kn} through \binom{n^2}{n^2/2+kn}, for increasing k, we find that these approach fractions closer and closer to unity with increasing k. This should yield the LoLN for the case p=1/2.
I hope that's accessible to the OP. 217.255.188.220 (talk) 07:12, 26 November 2014 (UTC)
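
The claims in the post above are easy to check numerically. A minimal sketch in Python (the function name and the sample values of n and k are mine): it computes the fraction of the total 2^{n^2} contributed by the binomial coefficients within kn of the midpoint, which for large n approaches roughly 0.954 for k=1 (two standard deviations, since the standard deviation is n/2) and tends to 1 as k grows.

  from math import comb

  def central_mass(n, k=1):
      # Fraction of 2^(n^2) contributed by C(n^2, j) for j within k*n
      # of the midpoint n^2/2. (n is assumed even, as in the post above.)
      N = n * n
      lo, hi = N // 2 - k * n, N // 2 + k * n
      return sum(comb(N, j) for j in range(lo, hi + 1)) / 2 ** N

  for n in (10, 20, 40):
      print(n, central_mass(n, 1), central_mass(n, 3))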

edit on https://en.wikipedia.org/wiki/Cubic_Hermite_spline#Interpolation_on_the_unit_interval_without_exact_derivatives

I changed 1/2 (x vector dot p vector) in the C_int equation to -1/2 (x vector dot p vector). I'm pretty sure this is correct by experiment, but I hope someone can confirm that the edit is correct.

PLEASE disregard. +1/2 is correct. Satisfied by using +1/2 in a corrected experiment.

I guess I'll mark this Resolved. Also, please remember to sign posts with four tildes, like this: ~~~~ SemanticMantis (talk) 16:15, 20 November 2014 (UTC)

Integration by parts

Why is it sometimes necessary to integrate twice using integration by parts? For example, when calculating the response to a unit step function in vibration theory, Duhamel's integral is used. In many of these cases integration by parts is used twice, but what is the mathematical reasoning behind this? 217.33.132.194 (talk) 19:27, 20 November 2014 (UTC)

I put this question in a new section, which will happen automatically if you use the button at the top of the page. Have you read our article on integration by parts? If you can follow it, it gives a proof of why that formula is the correct formula. It just depends on the product rule and the fundamental theorem of calculus. It's just sort of the inverse of the product rule. If you don't understand the proof, tell us where you are stuck and we might be able to help further. SemanticMantis (talk)
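
For reference, the whole derivation is one integration of the product rule:

(u v)' = u' v + u v' \implies u v = \int u' v \, dx + \int u v' \, dx \implies \int u v' \, dx = u v - \int u' v \, dx.

Applying the formula a second time is needed whenever one application still leaves an integral of the same general shape, which is what happens with products like e^x \sin(x) (see the thread of 21 November below).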

Minimum angle between edges of a circle in a grid

I completely messed up this question a few days ago. I was going to reply back in the mess I made, but thought it better to give this a fresh start and state my solution - in case it is easier to make it better...

This problem takes place in a grid. It is pixels on a computer screen. Each is a square. The coordinate 0,0 is the center of the grid. It is the center of the pixel in the center of the grid. That is important to note. 0,0 is not the corner shared by four pixels. It is the center pixel itself. 0,1 is the pixel directly above 0,0. 1,0 is the pixel directly to the right of 0,0.

The goal is to find the maximum angle increment such that, if I make a circle beginning at 0,r (for an integer radius r) and consider that an angle of 0 radians, I can step around the circle in increments of that angle, filling in each pixel that is touched by the edge of the circle. The algorithm for drawing the circle is:

  • Let a=0 (angle in radians - this is where I messed up before. I called the angle r)
  • Let i be the increment
  • Draw a line from 0,0 that is r units long (a unit is the width of a pixel) at an angle a where 0 radians is straight up to 0,r, 0.5PI is at r,0, PI is at 0,-r, 1.5PI is at -r,0, and 2PI is back to 0,r.
    • I hope I got that right. A circle is 2PI radians, right?
  • Increment a by i and go back to the previous step until a>=2PI

My solution is very much a cheat. I create an image that is r by r pixels. I draw a quarter arc on it using the graphics libraries in the computer program. I then create another image that is r by r pixels. I draw a single dot at 0,r, increment a, draw the next dot wherever that angle places the line, increment a again, and draw. I keep incrementing until a>0.5PI. Then, I compare the image I drew to the original one. If I missed a pixel, the increment needs to be smaller.

One suggestion was that the increment must be the angle between 0,r and 1,r as seen from 0,0. I don't think that is actually the maximum i that still allows every pixel around the edge of the circle to be touched. 209.149.113.112 (talk) 21:07, 20 November 2014 (UTC)

Borrowed from Bresenham's circle algorithm to help clarify the question.
Are these old contest problems available for us to view online? I ask because the precise statement of the problem is still not clear to me, and I was hoping to see the official wording. -- ToE 21:51, 20 November 2014 (UTC)
Or perhaps you can clarify it directly by saying if the pixels shaded in the image to the right are sufficient, even though there are several which do intersect the circle (their corners are clipped) but are not shaded? -- ToE 22:52, 20 November 2014 (UTC)
They are old contest problems (from the ACM programming competition). This is one that I technically got correct, but I didn't like my solution. I haven't been able to find it online, which is why I'm planning to use it for the upcoming end-of-semester competition. I didn't realize the subtlety of this that you noted. The problem (from memory) stated that a complete circle must be drawn such that there is no gap. So, your image meets the requirement even though there are pixels touched by the circle's outline that are not shaded. That will increase the complication of writing the problem. If you don't mind, I will be stealing that image. 75.139.70.50 (talk) 00:51, 21 November 2014 (UTC)
The standard solution to this problem is to use the property that for a circle, x^2+y^2 = r^2. You can solve this for y and use it to draw 1/8th of a circle, from the top down to 45 degrees from the top. The other 7 parts can be generated by mirroring twice (so you get two quarter circles, one on top, one on the bottom) and then exchanging the axes for the remaining two quarters left and right. If you do it that way, your angle of the circle will never be more than 45 degrees, so you will never miss a pixel on the y axis when you step along the x axis. --Stephan Schulz (talk) 01:06, 21 November 2014 (UTC)
If you needed every pixel which intersects the circle in the slightest to be shaded, then I suspect that not only will there not be a simple formula for i in terms of r, but there won't even be a simple formula for a positive lower bound for i based on r. For some r, the circle will just happen to clip one pixel so close to the edge that i will have to be very small in order to touch that pixel and its 7 mirror images, but for r one larger or one smaller a much larger i would be suitable. Perhaps someone here can speak to the mathematics behind this pseudorandom(?) behavior.
But if the goal is simply no gap, and this is satisfied by corner touching, then it is much easier. Note that any two points which are at most 1 unit apart are either in the same pixel or on neighboring pixels (an 8-neighbor that is, sharing either sides or corners). So, what increment i yields points which are at most 1 unit apart on the circle? An easy answer is to let i = 1/r, because the distance along the arc of a circle of radius r between two points separated by θ radians is rθ. So two points on your circle separated by i radians are ri = r(1/r) = 1 unit apart along the arc. We are interested in straight line distances, so we can tighten it up a bit by measuring along the chord. Do the trig and you will find that an increment of i = 2 arcsin(1/(2r)) = 2 arccsc(2r) will give you points exactly 1 unit apart. Either considering this geometrically or looking at the infinite series expansion for arccsc, it is clear that for large r, 2 arccsc(2r) ≈ 1/r, approaching 1/r from above (so it is a slightly better, as in larger, increment, but not by much).
I suspect that, in practice, this will be the best, simple answer. In theory, a slightly larger i should work for most r, because the pixels present their smallest diameter (when viewed from the origin) along the axes, and you are already starting in the middle of the pixel there, so you could afford a slightly larger increment and still catch neighboring pixels as you worked along the first 45°. But your algorithm does not stop there and mirror the results. It instead continues around the full circle, passing in the vicinity of the axes several more times, risking skipping a pixel along the way. Still, if you solved this numerically for each r, you should find values for i slightly larger than 2 arccsc(2r), but that, or simply 1/r, offers a good lower bound for i.
Your solution was ingenious, but it does have some problems. Look at the diagram and note the pixel with the dot at 45°. If the pixel below it was shaded, instead of the one below and to the right, that would still be a valid pixelated circle. So there are some values of i which will give a valid result, but will fail your test because it won't match the circle drawn by your graphics library. Back to the diagram, if the pixel below the 45° dotted pixel was shaded in addition to the one below and to the right, then that is still a valid pixelated circle, just not a minimal one. Once you get your i small enough, you will be picking up additional pixels not needed by the 8-neighbor no-gap rule. If you are testing your result against that drawn by the graphics library for equality, this could be a problem as your algorithm may never find a solution for some r, but if you are just testing that you didn't miss any of their pixels (as you stated), then that should be OK. Finally, it is possible that you might happen upon an i for some r which is slightly larger than 2 arccsc(2r), and which works for the first 45°, but fails at some point in the following 315°. -- ToE 11:50, 21 November 2014 (UTC) I ♥ pixels.
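
To make the chord bound above concrete, here is a minimal sketch in Python of the OP's angle-stepping loop with the increment i = 2 arcsin(1/(2r)): the chord subtending angle θ on a circle of radius r has length 2r sin(θ/2), so this choice makes consecutive points exactly 1 unit apart. The round-to-nearest-centre pixelation (each pixel addressed by its centre, as the OP describes) and all names are my assumptions.

  import math

  def circle_pixels(r):
      # Step the angle a from 0 to 2*pi in increments i chosen so that
      # consecutive points on the circle are exactly 1 unit apart.
      i = 2 * math.asin(1 / (2 * r))
      pixels, a = set(), 0.0
      while a < 2 * math.pi:
          # OP's convention: a = 0 points at (0, r), a = pi/2 at (r, 0).
          x = round(r * math.sin(a))
          y = round(r * math.cos(a))
          pixels.add((x, y))
          a += i
      return pixels

  print(len(circle_pixels(10)))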

If each pixel has 8 neighbors (N NE E SE S SW W NW), then a king can follow a curve of pixels. Such a set of pixels may be called a king-curve. If each pixel has 4 neighbors (N E S W), then a rook can follow a curve of pixels. Such a set of pixels may be called a rook-curve. King-curves without common pixels may cross one another, but a king-curve and a rook-curve without common pixels do not cross. You may define that each pixel has 6 neighbors (N NE SE S SW NW), as in Chinese checkers. Chinese-checkers-curves without common pixels do not cross. The coordinates of the pixels may be

(x , y) = (3 i , √3 j)

where i and j are integers and where i+j is even. Bo Jacoby (talk) 19:38, 21 November 2014 (UTC).


November 21

Integration by parts involving e and sin

How do you solve an integral that includes a function involving e multiplied by a function involving sin? I keep having to integrate by parts over and over again, and it doesn't get me anywhere, as e just differentiates and integrates to itself. 194.66.246.41 (talk) 10:37, 21 November 2014 (UTC)

Often, after a few iterations you get back the integral you started with. Then you treat this integral as a variable and solve the equation for it. At other times, the best solution you can find is an infinite series. -- Meni Rosenfeld (talk) 12:34, 21 November 2014 (UTC)
Have you tried expressing the sin as the sum of complex exponentials? See Euler's formula and the section relation to trigonometry. P.S. you can see the integral of an exponential multiplied by sin at List of integrals of exponential functions, but then you wouldn't have the fun of doing it yourself, would you ;-) Dmcq (talk) 13:06, 21 November 2014 (UTC)
I wouldn't normally do it but one of the Yahoo Answers pages really looks good on this. https://answers.yahoo.com/question/index?qid=20080120225443AAVhiAm Naraht (talk) 16:05, 21 November 2014 (UTC)
To elaborate on Meni Rosenfeld's method:
\int e^x \sin(x) dx = -e^x \cos(x) + \int e^x \cos(x) dx = -e^x \cos(x) + e^x \sin (x) - \int e^x \sin(x) dx
Then add \int e^x \sin(x) dx to both sides:
2\int e^x \sin(x) dx = -e^x \cos(x) +e^x \sin(x)
\int e^x\sin(x) dx = \frac{-e^x \cos(x) + e^x \sin(x)}{2} + C
This is often taught in math textbooks as "solving for the unknown integral" and is applicable to any integral of this form.--Jasper Deng (talk) 18:41, 21 November 2014 (UTC)
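
If you want to check the result (without spoiling the fun of the derivation), a computer algebra system reproduces it; for instance, in Python with SymPy:

  import sympy as sp

  x = sp.symbols('x')
  print(sp.integrate(sp.exp(x) * sp.sin(x), x))
  # exp(x)*sin(x)/2 - exp(x)*cos(x)/2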

Abs Value Inequality Question

I'm trying to follow a proof that takes a jump I can't see. It says "it follows from |f(x) - L| < 1 that |f(x)| < |L| + 1." I can work out -1 < f(x) - L < 1, then get f(x) < L + 1 from the middle and right terms, then reason since L is always less than or equal to |L|, that f(x) < |L| + 1. But how from here can I get to saying the absolute value of f(x) is less? Peter Michner (talk) 16:59, 21 November 2014 (UTC)

One way is to use the fact that |f(x) - L| = |L - f(x)|, giving you |L - f(x)| < 1, then repeat what you did before. Another is to take the left and middle terms of your -1 < f(x) - L < 1 and multiply both of those sides by -1, changing the sign of all terms and reversing the inequality. I'll give more than a hint if you wish, but I thought you might enjoy working it out here yourself. -- ToE 17:40, 21 November 2014 (UTC)
Alternately, just use the triangle inequality directly. This is often expressed as |x + y| ≤ |x| + |y|, but by substituting x = a - b and y = b, you get |a| ≤ |a - b| + |b| which you can rearrange as |a| - |b| ≤ |a - b|. Apply that to your formula and you get |f(x)| - |L| ≤ |f(x) - L| < 1. Take the left and right terms, |f(x)| - |L| < 1, and add |L| to both sides. -- ToE 17:52, 21 November 2014 (UTC)
I see that |a| - |b| ≤ |a - b| is part of what is called the reverse triangle inequality. -- ToE 18:04, 21 November 2014 (UTC)
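
For reference, the shortest route is the forward triangle inequality applied to f(x) = (f(x) - L) + L:

|f(x)| = |(f(x) - L) + L| \le |f(x) - L| + |L| < 1 + |L|.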

November 22

Barbie and birthday problem

I just read this paragraph from our article on Barbie:

"In July 1992, Mattel released Teen Talk Barbie, which spoke a number of phrases including 'Will we ever have enough clothes?', 'I love shopping!', and 'Wanna have a pizza party?' Each doll was programmed to say four out of 270 possible phrases, so that no two dolls were likely to be the same."

Is it really true that no two dolls were likely to be the same? It seems that there are 270 choose 4 = 216,546,345 combinations of phrases that each Barbie could say. According to this formula for the generalized birthday problem, if more than sqrt(2C*ln2)=17,327 dolls are issued where C=216,546,345, there would be more than 50% chance that two dolls say the same thing. Is this calculation correct? If so, it seems our article needs fixing (or Mattel made a mistake--but I assume they can calculate basic probabilities, whatever their other flaws). --Bowlhover (talk) 06:51, 22 November 2014 (UTC)
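
(The arithmetic above checks out; a quick verification in Python, with names of my choosing:)

  from math import comb, log, sqrt

  C = comb(270, 4)                  # 216,546,345 possible four-phrase sets
  print(C, sqrt(2 * C * log(2)))    # ~17,327 dolls for a >50% collision chance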

You're right if the four phrases for each doll are chosen randomly. However a manufacturer could choose them sequentially from the whole 216 million possibilities, thus making a same-set coincidence 'almost impossible'. --CiaPan (talk) 09:51, 22 November 2014 (UTC)
I think Bowlhover overestimates most people's ability to calculate what he or she calls basic probabilities. There's a reason why the birthday problem is commonly called the "birthday paradox": most people don't have an intuition for it, and the same would apply with the dolls. And I don't think Mattel would have called in a mathematician just to justify a claim like that. Also, in any case, whoever wrote "no two dolls" might well really have intended it to mean no two dolls that, in practice, people would compare. --65.94.50.4 (talk) 10:08, 22 November 2014 (UTC)
Perhaps they used Apple's pseudo-random algorithm (as for iPod shuffle etc.) which guarantees no repeats until 216,546,345 dolls have been manufactured. As CiaPan mentions above, the Birthday paradox doesn't apply unless the choice algorithm was close to truly random. Dbfirs 10:32, 22 November 2014 (UTC)
The source for the passage in our Barbie article states that "a computer chip... randomly selects four phrases for each doll" (4 from 269, after they removed the phrase "math class is tough" from the list of possible phrases after complaints) [1] As to whether the 'chip' was a genuine random-number generator, rather than the more usual quasi-random hardware or software, it doesn't say - the latter could have gone into a loop quite soon, depending on the sophistication of the design, as true randomness isn't something you can program, or create with ordinary digital hardware. A genuine random-number generator would of course give the results that the birthday paradox predicts (assuming they filtered out the cases where the same phrase was picked for a given doll), but I'd be wary of assuming they used one. AndyTheGrump (talk) 10:40, 22 November 2014 (UTC)
Yes, that's just 213,338,251 different combinations of the 269 phrases. Didn't Steve Jobs say something like "iPods are now less random in order to seem random" when the Apple algorithm was changed to shuffle a playlist quite a few years ago? As Andy says, we can't assume what type of algorithm Mattel used, so I've changed the article to read "no two given dolls were likely to be the same" in case they did use a cheap "random" chip. Dbfirs 11:20, 22 November 2014 (UTC)
The 'chip' dates to 1992 or earlier too - which might have made it less sophisticated than Apple's current efforts. I'm fairly sure that the principles of good quasi-random number generator design had been figured out by then, but actually implementing them rather than something simpler and 'near enough' for Barbie might not have seemed worth the effort. AndyTheGrump (talk) 11:36, 22 November 2014 (UTC)
Actually, on checking, I think I may have got the terminology wrong here; quasi-randomness appears to be something slightly different from pseudorandomness - though I'll leave it to the mathematicians here to explain the difference. AndyTheGrump (talk) 11:42, 22 November 2014 (UTC)
The birthday paradox applies when comparing a large set of Barbies all at once. If one brought 17K Barbies into a single room (the mind boggles), there is a good chance two of them have the same set of four phrases. But the article is talking about two Barbies at a time, as when two children bring their Barbies together for a play date. The chance that two Barbies brought together have the same set of phrases is small; using the rule of thumb in the birthday problem article, the probability that the pair has the same phrase set is  p \approx 2^2/(2*216,546,345) \approx 9.2\times 10^{-9}. --Mark viking (talk) 11:44, 22 November 2014 (UTC)
The "so that no two dolls were likely to be the same" phrase is not in the cited source, nor was it in earlier versions of the Wikipedia article which said, "so chances were good that no two dolls owned by a girl or her friends would be exactly the same". The change was made in good faith by Ianmacm in this October 2006 edit, which was part of a string of edits they did tightening up the text. A web search shows that our new, mathematically questionable phrasing now appears in many sources, such as William C. Harris's 2008 book, An Integrated Architecture for a Networked Robotics Laboratory Using an Asynchronous Distance Learning Network Tool. -- ToE 12:21, 22 November 2014 (UTC)
There can't be many people on Wikipedia who would turn up to respond to a query about an edit that they made in 2006. If I've caused confusion over the mathematics, I'm sorry, but I won't take the blame for other people ripping off things that I have written on Wikipedia, which has happened before :). Of this 1992 issue of Barbie dolls, the most famous controversy was that one of the phrases was "Math class is tough!" As for whether a birthday paradox was intended by Mattel, I'm not sure. My rewording was (I think) intended to imply that no two dolls bought off the shelf at random would say the same phrases, which is pretty much correct. The rewording in this edit makes it clearer.--♦IanMacM♦ (talk to me) 15:47, 22 November 2014 (UTC)
Ianmacm, I hope you understood that I pinged you here because I thought you might find both the discussion of the mathematical implications of your 8-year-old edit and the interesting places to which your work has diffused to be amusing. I certainly meant no blame. It is seldom that I track down an ancient edit and find that editor to still be active. Thank you for your years of editing Wikipedia, which I see date back all the way to 2005, well before I ever clicked the edit button. Cheers! -- ToE 17:06, 22 November 2014 (UTC)
No offence taken, it's just interesting to learn that some writers use Wikipedia for their research and copy things out of it word for word, which I knew already. For the average reader, I think that the phrase "two dolls" would be taken as meaning "two children meeting who each have one of the dolls". If a person bought an enormous number of the dolls, eventually two of them would be likely to say the same phrases, assuming that everything is random.--♦IanMacM♦ (talk to me) 17:21, 22 November 2014 (UTC)

Casino strategy

What are the chances that at a casino -$500 or 12 hours happens before +$50? With good strategy of course. What /is/ the best strategy? What game and bet sequence? Assuming you lost the first bet of this strategy, what strategy has the shortest odds of getting it back within 12 hours before minus $500? Is there a way to find this on your own for different ratios like 1:5, 1:2? 172.56.23.91 (talk) 19:31, 22 November 2014 (UTC)

I read somewhere that the best strategy for roulette (other than not betting at all of course) was to decide how much you intended to gamble during your entire lifetime, place the whole lot at once on red (or black), and never bet again. If this is correct, any strategy that takes 12 hours to carry out is worse... AndyTheGrump (talk) 19:52, 22 November 2014 (UTC)
I don't think this ("place the whole lot at once") is correct. Of course, the optimal strategy will depend on the utility function, and there are functions for which one big bet is optimal. But if we assume the standard logarithmic utility (or anything downward convex), with the added gratuitous requirement that you must bet X, the optimal solution is to bet it in many small increments. Your expectation is the same as with a big bet, but the variance is lower, which is good. -- Meni Rosenfeld (talk) 01:07, 23 November 2014 (UTC)
I said 12 hours so that there's only a small chance that the best strategy would take too long. Also, the Banker bet in baccarat is 50.6% likely to win* (*paying slightly less than your stake), which is a smaller house edge than roulette, so the "even money" bets of roulette can't be the best strategy. Also, your strategy assumes you can afford to lose all the money you'd ever bet till death right now, and would rather take an over-50% chance of losing a ton, never winning anything in your life, and not being able to lose less than the whole lot. I'd think it's extremely unlikely that you'd lose every bet you ever make in your life.
Best casino strategy: don't visit them. A large number of betting systems, most famously the Martingale, are based on the idea that strategies will work at a casino and recoup losses within a given period of time. Assuming that a game is random, this is never true. Also, the house has an edge which will wear down the player's original stake over a long period of time. Nobody would apply for a casino licence without this permanent built-in advantage. The only way to win at a casino is to quit while you are ahead. Old advice, but still true.--♦IanMacM♦ (talk to me) 19:55, 22 November 2014 (UTC)
The question doesn't specify what game the bettor is playing, and therefore what the house edge is, or how long a single round takes (one minute? three minutes?), or what the standard wager is. The OP says "with good strategy, of course". What is good strategy depends on what game is being played. There is no concept of good and bad strategy in roulette, because the edge is the same for all bets; the wheel is either an American wheel (approximately 6%) or a European wheel (either approximately 3%, or approximately 1.5%, depending how the zero is handled). Good strategy at craps is to bet with the shooter (approximately 0.4%, and the house makes money because of the bad side bets). Perfect strategy at blackjack depends on correctly memorizing the rules of basic strategy (which by most calculations is very close to dead-even, so that the house only makes money because most bettors do not memorize basic strategy). What is the house edge, what is the standard wager, and how often are there rounds of betting? Robert McClenon (talk) 20:04, 22 November 2014 (UTC)
Casinos would rather a person bet $10 a hundred times than bet $1000 once. This allows more opportunities for the house edge to come into play, buy more drinks at the bar etc. The limits at a table are designed to discourage betting large amounts on a single outcome. This means that strategies are a poor idea at any casino game where the house has the edge.--♦IanMacM♦ (talk to me) 20:21, 22 November 2014 (UTC)
Surely the drinks are free? Anyway, the best strategy is to own the casino. DuncanHill (talk) 22:00, 22 November 2014 (UTC)
I'm not an expert on casino free drinks policies, so this USA Today article is useful. Apparently many US states do not allow free drinks, and they are becoming less commonplace. Casinos have always encouraged drinking while gambling, as it can make gamblers feel better and lose track of how much they are betting.[2]--♦IanMacM♦ (talk to me) 22:25, 22 November 2014 (UTC)
A few, at least, outside of the US did as of a year ago, mainly smaller ones on various tropical islands - the nice part about that was they would give you free appetizers too, on occasion, as long as you were playing. They didn't really check to see how much you were actually gambling, so you could have come out ahead if you played slow; but, honestly, slow gambling to get free drinks and some nachos isn't my idea, or most people's idea, of a free evening, so you aren't really winning. Phoenixia1177 (talk) 05:44, 23 November 2014 (UTC)
To return to the question posed by 172.56.23.91, betting systems are junk and always have been. No finite number of bets guarantees that the player will come out ahead. The more you bet, the more you can lose. The problem is made worse by some games such as American roulette and craps field bets giving the house an excessive edge. The real risk is that gambling to recoup losses will lead to even further losses. This is the classic road to problem gambling.--♦IanMacM♦ (talk to me) 11:08, 23 November 2014 (UTC)
Trying to answer the original question: I would think the martingale already mentioned is the approach most likely to win back your $50. So first place $50 on black (say). If that loses, place $100. If that loses, place $200. If that loses, place $400. You are then at your limit and have to stop.
The odds are simple to calculate. You've four chances so a 15/16 chance of winning one of these bets and regaining your $50. The problem is you've 1/16 chance of losing all four and being out a further $750. In gambling terms that's odds of 15:1 on, a fair bet but not very attractive odds if you can't afford to lose your stake. And that ignores any edge the house has. The edge changes the probability of each bet, making the chance of losing higher than 1/16 and so making the bet worse than fair.
This is intuitively an optimal strategy as smaller bets are worse. With small enough bets it's almost certain you'll hit $500 before $50; the reasoning is similar to the biased coin flipping example of gambler's ruin, except with red and black bets (say) instead of a biased coin.--JohnBlackburnewordsdeeds 17:42, 23 November 2014 (UTC)
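
A rough Monte Carlo sketch of the original question in Python (the stopping rules, the 10,000-bet stand-in for the 12-hour cap, and the American-roulette win probability 18/38 are my assumptions, not from the thread): it estimates how often a $50-base martingale reaches +$50 before -$500.

  import random

  def session(p_win=18/38, base=50, target=50, limit=500, max_bets=10_000):
      # Martingale: bet `base`, double after each loss, reset after a win.
      # True if the session reaches +target before -limit.
      net, bet = 0, base
      for _ in range(max_bets):
          if bet > limit + net:          # cannot cover the next doubled bet
              return False
          if random.random() < p_win:
              net, bet = net + bet, base
          else:
              net, bet = net - bet, bet * 2
          if net >= target:
              return True
          if net <= -limit:
              return False
      return False

  trials = 100_000
  print(sum(session() for _ in range(trials)) / trials)

With these parameters a session is a single martingale run of at most three bets (50, 100, 200; a fourth doubling cannot be covered), so the printed estimate should land near 1 - (20/38)^3 ≈ 0.854, consistent with the near-15/16 figure above once the house edge is included.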
Your first strategy is the real reason casinos have betting limits. Otherwise, a bet-doubling strategy would always be able to stop at a profit (provided sufficient funds to continue playing). It is still a pretty good strategy, as if you start low enough your ability to stop while ahead is pretty high. However, most people don't do this because it can take a long time and you only win 2^{n+1} -\sum_{k=1}^{n} 2^k, which works out to just your initial stake. Also in real life you'd have to switch games/tables a few times, because even if the betting range accepted at a casino is fairly high, the range on one table is much lower. Another relevant article is Gambler's ruin. SemanticMantis (talk) 17:35, 24 November 2014 (UTC)
No, martingale is not the reason for betting limits. The right link is Table limit. See the last paragraph. PrimeHunter (talk) 22:41, 24 November 2014 (UTC)
Thanks for the link correction. I read the last paragraph, and it says in part "In reality casinos are not at risk from Martingale players." -- [citation needed] -- does not a bet doubling strategy at near-even odds let a player with no limits stop at a profit? Perhaps the casinos think that it is not a big risk due to lack of people attempting the strategy, but as I understand it is a real and present mathematical risk. The real reasons are only known to casino owners, but WP is not a WP:RS, and I think the sentence I quoted is just wrong. SemanticMantis (talk) 23:45, 24 November 2014 (UTC)
Casinos are not stupid. They don't let people bet a million times their fortune. No limit means you can bet whatever you are able to put on the table (an honest casino might stop you when it exceeds what the casino can pay out). If you could continue to bet meaningless amounts like $2^{1000000000} then sure, you couldn't walk away with a loss because you would either win at some time or die while still betting. But that would be the case for any strategy which involves betting until you are ahead or die. If there was no limit (meaning you can bet whatever you own) then I don't imagine billionaires would be dumb enough to play martingale with a modest start bet they can afford to double a lot of times before losing their fortune. The casino can only lose the start bet in martingale. The realistic risk for the casino would be very rich players making large individual bets and not playing a martingale strategy. PrimeHunter (talk) 01:49, 25 November 2014 (UTC)
"a player with with no limits" – there is no such thing. There are always limits, if only those imposed by the money supply of that currency. Suppose your minimum bet is $1 and your limit is $1bn. With the bet doubling strategy (betting on black to win) and fair odds you can win back your $1 almost every time. Red would have to come up 30 times for you to lose, but it will do that every 230 bets. Tour stake rises to 230 dollars and hits your limit. And the one time that happens you lose all the 230 − 1 dollars you've won in your remaining bets, so breaking even. Of course that's with fair odds; the odds are a less than fair at a casino so rather than being expected to break even you're expected to make a loss. Casinos lose no sleep over losing strategies like this.--JohnBlackburnewordsdeeds 03:30, 25 November 2014 (UTC)
Yes well we are on the math desk here, and I don't think money supply is really relevant to the math. I only wanted to point out that the martingale strategy is viable from a pure math perspective. I don't think casino policies are dumb, but the elaboration here has helped me to understand that the sentence I quoted above is indeed incorrect from a theoretical perspective: casinos are at risk from martingale players in theory. In practice, casinos may believe the risks are minimal, but that is a question of sociology/psychology, not math. SemanticMantis (talk) 15:22, 25 November 2014 (UTC)
The story of William Lee Bergstrom is relevant here. Nowadays most casinos are run by accountants who would say no if a customer turned up out of the blue and offered to place a huge amount of money on a single bet. Benny Binion was one of the old timers who was prepared to allow this type of bet. Modern casino table limits make any type of betting system impractical.--♦IanMacM♦ (talk to me) 11:17, 25 November 2014 (UTC)

November 23

Real Analysis: Continuous Polynomials

I am trying to prove that polynomials are continuous using induction and the delta/epsilon definition. I am a little confused on one area of the k+1 part of the induction process.

How do I show that x^{k+1} is continuous, i.e. that |x^{k+1} - c^{k+1}| can be made small, in order to obtain an epsilon? I know the product of 2 continuous functions is continuous; should I implement that?

After I show that it is continuous, do I take \epsilon = \frac{\epsilon}{2|a|} or \left(\frac{\epsilon}{2|a|}\right)^{-(k+1)}? I need to get |a||x^{k+1} - c^{k+1}| + \epsilon/2 < \epsilon.

If you can use that products are continuous, then it follows that powers are too; so, x^k is because x is, and every polynomial is a sum of such functions, so, again, continuous. Another way to see this, quickly, is that real functions are continuous iff they preserve limits of sequences; you know that taking powers commutes with limits, ergo, x^k is continuous. If you require an epsilon-delta proof: follow the proof showing f * g is cont. if f and g are, substituting x for f and x^k for g; using induction on k, you get the result (or use it for hints to get the result yourself).


If you're looking for something a little less textbookish, but interesting to try: you can use oscillations and the binomial theorem with induction to prove it if you can show they are continuous at 0; or the fact that differentiable functions must be continuous, induction, and that ((x + h)^k - x^k)/h gets rid of k-power terms to show that they are all diff, hence cont.Phoenixia1177 (talk) 22:26, 23 November 2014 (UTC)
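
One algebraic identity that makes the inductive step go through directly (a standard trick, not specific to any particular textbook):

|x^{k+1} - c^{k+1}| = |x^k(x - c) + c(x^k - c^k)| \le |x|^k |x - c| + |c| \, |x^k - c^k|.

The first term is handled by choosing \delta \le 1 so that |x| \le |c| + 1, and the second by the inductive hypothesis; splitting \epsilon between the two terms then gives the bound the OP is after.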

November 24

November 25

A Great circle on 2-sphere

Given two points with defined latitude and longitude, I want to know if there exists an analytical formula that can describe the great circle drawn through these two points. In particular, if such a formula exists, I will need to evaluate whether a point lying on the 2-sphere and chosen at random is in fact located on the great circle drawn between these two points. Thanks --AboutFace 22 (talk) 01:47, 25 November 2014 (UTC)

The cross product of the two vectors will give you the normal vector of the great circle's plane. The dot product of the candidate vector with that normal vector, which (if I'm not mistaken) is the determinant of all three vectors, will be zero if they lie on the same great circle. —Tamfang (talk) 08:45, 25 November 2014 (UTC)
Also known as the scalar triple product. 129.234.186.11 (talk) 10:04, 25 November 2014 (UTC)
I'll add that if this is for computational work that doesn't do symbolic manipulation, you won't get exact zeros, and it would likely make sense to accept results below some threshold in absolute value, e.g. |D|<\epsilon. SemanticMantis (talk) 15:17, 25 November 2014 (UTC)
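
A minimal sketch in Python/NumPy of the test described above (function names and the tolerance are mine):

  import numpy as np

  def unit(lat_deg, lon_deg):
      # Latitude/longitude in degrees -> unit vector on the 2-sphere.
      la, lo = np.radians(lat_deg), np.radians(lon_deg)
      return np.array([np.cos(la) * np.cos(lo),
                       np.cos(la) * np.sin(lo),
                       np.sin(la)])

  def on_great_circle(p, q, c, eps=1e-12):
      # The scalar triple product c . (p x q) vanishes exactly when c lies
      # in the plane of the great circle through p and q.
      return abs(np.dot(c, np.cross(p, q))) < eps

  # Three points on the equator:
  print(on_great_circle(unit(0, 0), unit(0, 90), unit(0, 45)))   # True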

Thank you everyone who contributed. Yes, it is for computational work. In particular I am considering a large letter H on a screen outside of the 2-sphere; I will project the letter onto it, with the center of projection at the center of the sphere, and then expand it into a functional basis and do the transforms. I will then need to find all the boundaries inside the sphere. It is not difficult to find the projection of key points, corners, etc. I thought I would make my life easier if I could determine the linear boundaries of the projection of the letter inside the sphere with a simple rule. I will need it for integration. Not homework, unfortunately :-) Many thanks, --AboutFace 22 (talk) 16:50, 25 November 2014 (UTC)

Function for ellipse area inscribed within the two-unit square...

Consider the ellipses which are inscribed in a [-1,1]x[-1,1] square. Let A(y) be the function for the area of the ellipse which touches at (1,y) (and thus also at (y,1), (-1,-y) and (-y,-1)). A(-1)=A(1)=0, A(0)=\pi. Does anyone have a closed form for A(y)? Naraht (talk) 02:45, 25 November 2014 (UTC)

Rotate coordinates 45 degrees and rescale, so your ellipse has equation x^2/a^2 + y^2/b^2 = 1 and passes through the point (t, 1-t) with slope -1 at that point. So you need to solve the equations t^2/a^2 + (1-t)^2/b^2 = 1 and 2t/a^2 - 2(1-t)/b^2 = 0 simultaneously for a, b. Then the area is straightforward. --JBL (talk) 03:02, 25 November 2014 (UTC)
It might make the computations simpler to parameterize the ellipses by (x, y) = (cos t, cos(t + a)), so the ellipses touch the square at (1, cos a), (-1, -cos a), (cos a, 1), (-cos a, -1) at t = 0, π, -a, π - a resp. Then apply area = 1/2 |∫(ydx-xdy)|, which works out to π|sin a|. --RDBury (talk) 12:06, 25 November 2014 (UTC)
(To be clear, the integral in the comment above is a line integral around the ellipse and it is justified by Green's theorem).--Jasper Deng (talk) 04:54, 26 November 2014 (UTC)
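
Putting the two answers together: the ellipse touches the square at (1, cos a), so writing y = cos a gives the closed form the OP asked for, A(y) = \pi\sqrt{1 - y^2}, which indeed satisfies A(\pm 1) = 0 and A(0) = \pi. A numerical sanity check of RDBury's line integral in Python (the discretization is mine):

  import numpy as np

  def area(a, n=200_000):
      # Green's-theorem area of the ellipse (x, y) = (cos t, cos(t + a)).
      t = np.linspace(0, 2 * np.pi, n)
      x, y = np.cos(t), np.cos(t + a)
      dx, dy = -np.sin(t), -np.sin(t + a)
      return 0.5 * abs(np.trapz(y * dx - x * dy, t))

  for a in (0.3, 1.0, 2.5):
      print(area(a), np.pi * abs(np.sin(a)))   # the two columns agree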

math

10% of 100

Same as 100% of 10. --CiaPan (talk) 13:31, 25 November 2014 (UTC)
This related question [3] explains how to calculate it, if needed. Phoenixia1177 (talk) 19:58, 25 November 2014 (UTC)
Even better link: [4] (a specific version rather than a diff, plus anchor/section link added). --CiaPan (talk) 21:59, 25 November 2014 (UTC)

What is the significance of quaternions and octonions being normed division algebras

Of course, I understand why they're normed division algebras, but in as much as I've seen (quaternions) used, that property doesn't seem to be exploited. I'm aware that it matters if a "strong" derivative is being defined, but it turns out that even with the quaternions, defining a strong derivative is far too restrictive. Does the normed division property "help" in some sense?--Leon (talk) 19:11, 25 November 2014 (UTC)

The norm property has a certain intuitive appeal; if you think you should be able to associate quaternions, octonions etc. with geometric objects such as transformations, then the fact that they're nicely normed means you can associate a 'size' with them which is preserved by composition of transformations. And this is certainly true of complex numbers and quaternions, used to describe transformations in 2 and 3 dimensions respectively. More abstractly it's simply a way to classify the sequence generated by the Cayley–Dickson construction. At each stage you lose something, until in passing from the octonions to the sedenions you lose being nicely normed (and alternative).--JohnBlackburnewordsdeeds 22:57, 25 November 2014 (UTC)
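
The "nicely normed" property is easy to see concretely for the quaternions, where |pq| = |p||q|. A quick check in Python (the tuple representation and names are mine):

  import random

  def qmul(p, q):
      # Hamilton product of quaternions represented as (w, x, y, z).
      w1, x1, y1, z1 = p
      w2, x2, y2, z2 = q
      return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
              w1*x2 + x1*w2 + y1*z2 - z1*y2,
              w1*y2 - x1*z2 + y1*w2 + z1*x2,
              w1*z2 + x1*y2 - y1*x2 + z1*w2)

  def norm2(q):
      return sum(c * c for c in q)

  p = tuple(random.uniform(-1, 1) for _ in range(4))
  q = tuple(random.uniform(-1, 1) for _ in range(4))
  print(norm2(qmul(p, q)), norm2(p) * norm2(q))   # agree up to rounding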
There are useful algebras that neither have a norm nor are division algebras. In particular, Clifford algebras form a family of associative algebras that are extremely useful, especially in the geometric context. They include the reals, complexes and quaternions, but not the octonions. They define division, but include nonzero zero divisors, so checks for division by zero divisors become a little more complicated. The norm may be replaced by something similar, which can produce a negative result, but as you say, one does not necessarily look for a direct analogue. I'm not sure how derivatives tie in, though. —Quondum 23:15, 25 November 2014 (UTC)
Another theorem that follows directly from the nicely normed property on octonions is Degen's eight-square identity. Similar identities exist in four, two and trivially one dimensions but no other.--JohnBlackburnewordsdeeds 23:44, 25 November 2014 (UTC)

Normalization of Associated Legendre Polynomials.

I have a few questions concerning orthonormality of Associated Legendre Polynomials (ALPs). I want to stress the word orthonormality as opposed to simply orthogonality. The reason for that is computational. It is a well-known fact that when ALPs with large indices l and m are computed, the functional values grow in magnitude to the point that the exponents overflow. Double precision is required, and in some cases even quadruple precision is needed. The normalization diminishes the absolute values of the functions considerably, but not universally. I want to make sure that I understand normalization correctly. The Wikipedia article on ALPs gives two formulas.

\int_{-1}^{1}P_k^m(x)P_l^m(x)dx = \frac{2(l+m)!}{(2l+1)(l-m)!}\delta_{k,l}

Thus the normalization factor here will be:

N_1 = \sqrt{\frac{(2l+1)(l-m)!}{2(l+m)!}}

I call it normalization with respect to l.

For my task it is more important to normalize with respect to m. It is given by this formula:

\int_{-1}^{1}\frac{P_l^m(x)P_l^n(x)}{(1-x^2)}dx = \frac{(l+m)!}{m(l-m)!} \,\,\,(m = n \ne 0)

The normalization factor for each subspace with a given l but differing m should be this:

N_2 = \sqrt{\frac{m(l-m)!}{(l+m)!}}

I call it normalization with respect to the index m. I am uncertain about what to do with the weight factor (function) (1-x^2)^{-1} under the integral, however. Does it have to be included in the normalization formula, perhaps as \sqrt{(1-x^2)^{-1}} ?

I would appreciate it if both normalization formulas (N_1 and N_2) could be confirmed. Thanks. --AboutFace 22 (talk) 22:49, 25 November 2014 (UTC)
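
Both factors can be checked numerically. A sketch in Python with SciPy (as I read the two integrals, the weight (1-x^2)^{-1} stays inside the inner product rather than being folded into N_2, and the check below reflects that reading; the sample indices are mine):

  from math import factorial, sqrt
  from scipy.integrate import quad
  from scipy.special import lpmv

  l, m = 5, 3

  N1 = sqrt((2*l + 1) * factorial(l - m) / (2 * factorial(l + m)))
  i1, _ = quad(lambda x: (N1 * lpmv(m, l, x))**2, -1, 1)
  print(i1)   # ~1.0: N1 normalizes with respect to l

  N2 = sqrt(m * factorial(l - m) / factorial(l + m))
  i2, _ = quad(lambda x: (N2 * lpmv(m, l, x))**2 / (1 - x*x), -1, 1)
  print(i2)   # ~1.0: N2 normalizes with respect to m (weight kept under the integral)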

November 26[edit]