Wikipedia:Reference desk/Mathematics: Difference between revisions

From Wikipedia, the free encyclopedia
Legolas52 (talk | contribs)


= October 31 =

==Pumping Lemma==
Hi, I've been trying to understand the pumping lemma, to no avail. I understand that it has something to do with the pigeonhole principle and that the idea is to get a repeating string x(y^i)z. I just still don't understand how to pick the pumping string, how to figure out the minimum pumping length, or how to structure a proof. The Wikipedia page made some sense, but it was confusing too :/

Revision as of 00:09, 31 October 2010

Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

October 24

Differentiate a function

How would I compute d/dn (n!), assuming that non-integer values of n! exist and can be computed using the gamma function/Stirling's approximation. 24.92.78.167 (talk) 02:56, 24 October 2010 (UTC)[reply]

Technically, non-integer values of n! do not exist, by definition. If we were to use n! as a shorthand for , which is equal for integer values of n, then, according to the page on the gamma function,
where is the Euler–Mascheroni constant. According to Stirling's approximation,
Since Stirling's approximation is different from the gamma function and they are not even equal at integer values, it is no surprise that this is a very different formula. 76.67.73.90 (talk) 03:31, 24 October 2010 (UTC)[reply]
The formulas aren't that different. . It is well known that and slightly less well known that it is . -- Meni Rosenfeld (talk) 06:55, 24 October 2010 (UTC)[reply]

"76.67.73.90", you wrote:

Can you explain what you mean by m? What is m? Michael Hardy (talk) 04:43, 24 October 2010 (UTC)[reply]

Looking at the linked page on the Gamma function, they meant x instead of m. It should be noted x is an integer in that formula (...since otherwise x! isn't defined). 67.158.43.41 (talk) 04:50, 24 October 2010 (UTC)[reply]
Remarks. It turns out that what is easier to write is the logarithmic derivative of Γ, that is ψ = Γ′/Γ, rather than Γ′ itself. Actually, the construction (or one construction, at least) of the Γ function starts from the construction of the digamma function ψ as a solution of a simple linear functional equation. For this reason, the simplest formula for the derivative of Γ is Γ′ = ψΓ. As to the series expansion for ψ, you can find it here; the formulae given above by anonymous users are wrong or nonsensical for non-integer x. --pma 13:43, 27 October 2010 (UTC)[reply]
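The relation Γ′ = ψΓ (equivalently, ψ = (log Γ)′) can be sanity-checked numerically. This sketch compares a truncated series for the digamma function, ψ(x) = -γ + Σ_{k≥0} (1/(k+1) - 1/(k+x)), against a finite-difference derivative of `math.lgamma`; the truncation length and step size are illustrative choices, so only a few digits agree.

```python
import math

EULER_GAMMA = 0.5772156649015329

def psi(x, terms=200000):
    """Digamma via the (truncated) series psi(x) = -gamma + sum_{k>=0} (1/(k+1) - 1/(k+x))."""
    return -EULER_GAMMA + sum(1.0 / (k + 1) - 1.0 / (k + x) for k in range(terms))

def lgamma_derivative(x, h=1e-6):
    """Numerical derivative of log Gamma, which should equal psi(x)."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

print(psi(3.5), lgamma_derivative(3.5))  # agree to about 4-5 digits
```

The tail of the truncated series shrinks roughly like (x-1)/N, which is why a couple hundred thousand terms are needed for even modest accuracy.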

Finite subgroups of the multiplication group of the unit quaternions

I want a list of all finite subgroups of the multiplication group of the unit quaternions. --84.61.153.119 (talk) 08:30, 24 October 2010 (UTC)[reply]

You really should do your own homework. At least show some effort. If you don't know what a quaternion is, you may find the page quaternion group helpful. Jkasd 08:50, 24 October 2010 (UTC)[reply]
Also the list of small groups may be useful. --JohnBlackburnewordsdeeds 11:37, 24 October 2010 (UTC)[reply]

I want a list of all finite subgroups of the group O(4). --84.61.153.119 (talk) 14:37, 24 October 2010 (UTC)[reply]

Does the position of the median change if a plot is distorted like this?

Let's say I have a large number of values, creating a one dimensional plot, and I figure out what the median is and in what places on the graph it occurs. Then I want to change the values so that the highest and lowest values remain the same, while the current median value is changed, and all values in-between are "scaled" to allow this to happen. If I calculate the median again, will I always find it in the same places on the graph as before? I think it should be but I lack the knowledge to verify this. Thanks. (Sorry, I can't express this in any other way than with words, but I can try to rephrase it if it's hard to understand) Obiha (talk) 11:12, 24 October 2010 (UTC)[reply]

The median for a finite number of discrete values is simply the middle value: more precisely, if there are an odd number of values, n say, it's the (n + 1)/2-th value; if there are an even number it's midway between the n/2-th and (n/2 + 1)-th values. To change the median you need to change just these values, but keeping them at their positions, which may mean moving other values too. E.g. if the values are
1  5  9 13 17 21 25 29 33
the median is 17, the 5th element. To change the median to 21 you could change the numbers like so
1  6 11 16 21 23 25 28 31
The first and last numbers are unchanged but everything else has changed.--JohnBlackburnewordsdeeds 11:47, 24 October 2010 (UTC)[reply]
What the OP asked is: He has an unsorted sequence of values, say
1 4 3 9 4 5 4
He finds their median, 4, and then notes that it appears in positions 2, 5 and 7. Then he applies a transformation which changes the median - I don't know if he means specifically a linear scaling, but for now assume that the transformed values are:
1 6 4 9 6 7 6
What he wants to know is - if he takes these new values, find their median and note where it appears, will he also get 2, 5 and 7? -- Meni Rosenfeld (talk) 12:04, 24 October 2010 (UTC)[reply]
[ec] Yes, if either there are an odd number of values or the middle two values are equal (otherwise it could depend on your definition of median, and your method for finding it). Let x_1, ..., x_n be the values, f be the transformation you apply to each, y_i = f(x_i) be the transformed values, and M_x and M_y be the medians of x and y. A median is characterized by the fact that at most half the values are less than it and at most half the values are greater than it. So if f merely satisfies f(a) < f(M_x) whenever a < M_x, and f(a) > f(M_x) whenever a > M_x, it will follow that f(M_x) is the median of y. From these conditions it will also follow that M_y = f(M_x), which means that y_i = M_y exactly when x_i = M_x, so the median will appear in the same places for both sets. -- Meni Rosenfeld (talk) 11:59, 24 October 2010 (UTC)[reply]
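The argument above is easy to check on the thread's own numbers: apply the transformation and confirm the median turns up at the same indices (a sketch, using the odd-length example from the discussion).

```python
def median_positions(values):
    """Return the median of an odd-length list and the 0-based indices where it occurs."""
    m = sorted(values)[len(values) // 2]
    return m, [i for i, v in enumerate(values) if v == m]

original = [1, 4, 3, 9, 4, 5, 4]
transformed = [1, 6, 4, 9, 6, 7, 6]  # the increasing remapping from the thread

m1, pos1 = median_positions(original)
m2, pos2 = median_positions(transformed)
print(m1, pos1)  # 4 [1, 4, 6]
print(m2, pos2)  # 6 [1, 4, 6] -- same positions, new median value
```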

Venn Diagram Word Problems

I recently undertook a logic and problem solving course at a local community college as part of an IT diploma. We just commenced Venn diagrams and I seem to be having a lot of difficulty with them. Can anyone recommend a resource or site which contains questions on which to practice? I have used this site so far http://www.math.tamu.edu/~kahlig/venn/venn.html and I am looking for more?24.89.210.71 (talk) 12:16, 24 October 2010 (UTC)[reply]

Problem

Sir, I have one problem, please solve it.

If:

  7-3=124
  6+3=279
  5-2=63
  11+2=2613

then 15-3=? —Preceding unsigned comment added by Apsirkeel (talkcontribs) 12:58, 24 October 2010 (UTC)[reply]

I'll give you a hint:
7-3=12|4
6+3=27|9
5-2=6|3
11+2=26|13
15-3=??|??
-- Meni Rosenfeld (talk) 13:11, 24 October 2010 (UTC)[reply]
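One reading consistent with Meni's split (this rule is my inference, not stated in the thread): the right part is the stated sum or difference, and the left part is that result times the second operand. Under that reading, the answer to 15-3 would be 36|12.

```python
def decode(a, op, b):
    """Right part: a op b. Left part: b times the right part (one consistent reading)."""
    right = a + b if op == '+' else a - b
    left = b * right
    return f"{left}{right}"

for a, op, b in [(7, '-', 3), (6, '+', 3), (5, '-', 2), (11, '+', 2)]:
    print(f"{a}{op}{b} = {decode(a, op, b)}")  # reproduces 124, 279, 63, 2613
print("15-3 =", decode(15, '-', 3))  # 3612 under this reading
```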

Choosing a pineapple at a supermarket

The pineapples, for example, at a supermarket are all the same price but vary in quality according to a Gaussian distribution. If I choose three pineapples at random from the pile, and then select the best of those three, what is the probability that the selected pineapple is within the top ten percent of quality of the whole pile?

How many pineapples do I need to compare to have at least a 50% chance of being in the top x percent of quality? Not a homework question, but a practical problem. Thanks 92.15.31.47 (talk) 16:57, 24 October 2010 (UTC)[reply]

ditto for wives, guys. how many past candidates should you first reject in order to have a sample base, and afterward pick the first one better than all of these? Assume that although normally distributed, the "value" is hidden to you, and you have no absolute way to compare candidates, but instead can only do a comparison function on candidates you have already rejected? You don't want to waste your time, so we're looking at the least number to have some high confidence (as with the number above) 84.153.247.182 (talk) 17:48, 24 October 2010 (UTC)[reply]
See Secretary problem. —Bkell (talk) 18:13, 24 October 2010 (UTC)[reply]
The answer to the first is straightforward, and the distribution doesn't matter. The chance of picking 1 that's in the top 10% is 0.1. The chance it's not is 0.9. If you pick three, the chance all three are not in the top 10% is 0.9^3 = 0.729. The probability at least one is in the top 10%, so that the best is in the top 10%, is therefore 0.271.
If you want to choose n so there's at least a 50% chance that the best of those is in the top 10% then you need n = 7, as 0.9^7 ≈ 0.478, so there's about a 52% chance one of the seven will be in the top 10%.--JohnBlackburnewordsdeeds 18:17, 24 October 2010 (UTC)[reply]
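JohnBlackburne's calculation, as a sketch (the 10% threshold and 50% target are the ones from the question; other values can be passed in):

```python
def p_best_in_top(n, frac=0.10):
    """Probability the best of n independent draws lies in the top `frac` of quality."""
    return 1 - (1 - frac) ** n

def picks_needed(frac, target=0.5):
    """Smallest n giving at least `target` probability of catching the top `frac`."""
    n = 1
    while p_best_in_top(n, frac) < target:
        n += 1
    return n

print(round(p_best_in_top(3), 3))  # 0.271
print(picks_needed(0.10))          # 7
```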
Nit-picking, but would the fact that you are not replacing the chosen pineapple make any difference? 92.28.246.6 (talk) 20:12, 24 October 2010 (UTC)[reply]
Yes, if the qualities are not independently, identically distributed (we may discard this difficulty here, I guess) Pallida  Mors 20:21, 24 October 2010 (UTC)[reply]
The questioner stated "top 10% of quality", not "top 10% of the population of pineapples." So, if quality goes from 0 to 99, he wants a quality of 90-99. If the distribution is skewed towards 99, more than 10% of the population will have the top 10% of quality. -- kainaw 23:00, 24 October 2010 (UTC)[reply]
"top x percent of quality". I normally read that as the corresponding percentile. However, your point can be answered, considering the nth order statistic distribution, computing , the probability that the highest of the n pineapples extracted has a quality as good as the threshold x* [again, considering the iid assumption]. (Forgotten sign of Pallida  Mors 00:04, 25 October 2010 (UTC))[reply]
A relevant article is order statistic. Pallida  Mors 19:53, 24 October 2010 (UTC)[reply]

This is not the same as the secretary problem, or the wives problem suggested by 84.153.., since with the pineapples you can choose any of the pineapples you have previously inspected. It's more like a quality control (poor article) or Sampling (statistics) problem. 92.28.246.6 (talk) 20:03, 24 October 2010 (UTC)[reply]

Nitpicking: If the whole pile is N pineapples, and K of them are within the top ten percent of quality of the whole pile, and you buy n pineapples, then the probability that k of these are in the top ten percent of quality is
C(K, k) C(N - K, n - k) / C(N, n).

See hypergeometric distribution. Bo Jacoby (talk) 22:14, 24 October 2010 (UTC).[reply]
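Bo Jacoby's formula is the hypergeometric pmf; with `math.comb` it reads as below (the pile size N=50 and K=5 top-decile pineapples are illustrative numbers, not from the thread):

```python
from math import comb

def hypergeom_pmf(N, K, n, k):
    """P(exactly k of the n bought pineapples are among the K top-decile ones)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

N, K, n = 50, 5, 3  # illustrative: pile of 50, of which 5 are in the top ten percent
probs = [hypergeom_pmf(N, K, n, k) for k in range(n + 1)]
print(probs)
print(sum(probs))  # a pmf, so this is 1
```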


October 25

Roulette

Forgive my weakness in math. If I had $100 and I played roulette with the intent of walking away with as much money as possible, would I be better off placing all $100 on red/black and having it out in one spin, or incrementally wagering on red/black, say $5 at a time 20 times (so that the total wager is the same)? With a 1-to-1 payout, the possible results for my first option are $200 or $0. For the second, it'd be $105/$95, then $110/$100/$100/$90, and continue branching for 18 more levels. My gut says the most likely outcome would be something like $45 at the end, but I'm not sure how to do the calculations save for the brute force approach. The Masked Booby (talk) 02:12, 25 October 2010 (UTC)[reply]

You have to specify your goals more carefully. What do you mean by "as much money as possible"? Literally speaking, you could be saying that you want the maximum probability of breaking the house, even though that chance is going to be vanishingly small. Is that what you mean? That's probably not too hard to figure out, once we know how much money the house has, and which kind of roulette we're talking about (is there 0 and 00, or just 0?). --Trovatore (talk) 02:55, 25 October 2010 (UTC)[reply]
Ah, never mind; I didn't read carefully enough. I didn't realize you were asking only about two discrete alternatives. --Trovatore (talk) 06:43, 25 October 2010 (UTC)[reply]
The House has some (small) advantage, since not every number is red or black. The expected value on a $1 bet is a $0.053 loss, in American roulette. Either strategy has an expected loss of 100*$0.053 = $5.30. The variance of the first strategy is much higher than the second strategy--you either win big or lose big, instead of winning small and losing small. The second strategy basically follows the multinomial distribution with p1=18/38, p2=18/38, p3=2/38. 67.158.43.41 (talk) 03:21, 25 October 2010 (UTC)[reply]
Make that the binomial distribution with p=18/38. The above is also correct, though if you bet "black" all the time, for winning total purposes, "red" and "neither black nor red" are the same outcome, so p2 and p3 can be combined into the implicit loss fraction of the binomial distribution. 67.158.43.41 (talk) 04:20, 25 October 2010 (UTC)[reply]
If there's no house advantage, then the distribution of the number of wins with 20 plays is distributed binomially with p = 1/2, which can be seen here. With house advantage it's p = 18/38, which can be seen here.
Of course, since the house does have an advantage, if you want to maximize your expected money, the correct answer is not to play. -- Meni Rosenfeld (talk) 07:33, 25 October 2010 (UTC)[reply]
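The two strategies from the question can be compared directly with exact fractions; as the thread says, the expected final bankroll is identical either way (a sketch, using the American-wheel win probability 18/38 and the $100-in-one-spin vs. twenty-$5-bets setups from the question):

```python
from fractions import Fraction
from math import comb

p = Fraction(18, 38)  # win probability on red/black, American wheel

# Strategy 1: one $100 bet -- end with $200 on a win, $0 otherwise.
ev_one = p * 200

# Strategy 2: twenty $5 bets; wins are Binomial(20, p), each bet swings +/- $5.
ev_many = sum(comb(20, k) * p**k * (1 - p)**(20 - k) * (100 + 5 * k - 5 * (20 - k))
              for k in range(21))

print(ev_one, ev_many)  # both 1800/19, i.e. about $94.74 -- same expected loss of ~$5.26
```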
To answer your question specifically, I think on average you'd be better off making one big bet, since making many smaller bets would give a higher expectation of losing to the house edge when the ball falls into the zero pocket(s).
The start of the articles Martingale (betting system) and Martingale (probability theory) describe a system where you bet on black or red only, and double your bet if you lose the previous bet. This system trades a high probability of making a small profit against a small probability of losing all your money. The golden rule would seem to be: walk away when you are winning. If you enjoy gambling, then making a series of small bets will entertain you for longer.
I believe it is impossible to have any simple chance gambling system which makes any profit in the long run and on average, as the house always has an edge. This does not apply to poker, which involves skill, not just chance. You may also be interested in the Kelly criterion which is used more in racehorse betting etc. 92.24.186.217 (talk) 17:10, 26 October 2010 (UTC)[reply]
For every bet of $n on red/black, you expect to lose $0.053*n as above. If you spread these bets out over time into $n/3, $n/3, and $n/3, you still expect to lose 3*$0.053*(n/3) = $0.053*n. There is no difference in the expectation value of the different strategies. Put another way, if you implemented both strategies a million times, the average loss for both over all trials would be (virtually) the same. In a game with such well-defined odds, barring cheating or otherwise breaking the rules, we can confidently say there is no gambling system which makes a long-term profit, betting a fixed total, period. Every single one has the same expected loss, $0.053*n, where n is the total amount you decide beforehand to bet. Other games are more complicated to analyze, like Poker, depending on your rules. They might actually give a chance of winning long-term, but of course casinos try to avoid such a situation.
The betting system you mention is really quite interesting, but it just yields a massive variance in your profit (or loss). That is, you either win huge or you lose huge using that strategy. The expected value is the same as any other strategy--for every $1 you end up betting, you'll lose $0.053, on average.
Overall, the point is (a) if you just want to make money, the only sane thing to do is not to play; (b) if you enjoy gambling, but don't want to lose much money, make lots and lots of very small bets so your eventual loss takes forever; (c) if you enjoy gambling for the risk, find some help (my own opinion) or another hobby unless you're rich, 'cause you'll want to bet large and you'll just lose big. 67.158.43.41 (talk) 22:17, 27 October 2010 (UTC)[reply]
My thinking was that if you are churning the same money on small bets over and over again, then you are being repeatedly "taxed" by the house edge. Whereas if you only make one bet, then you are only "taxed" once. There are other more complicated betting systems that may have the behaviour I described. 92.15.25.142 (talk) 18:33, 28 October 2010 (UTC)[reply]
Certainly if you "recycle" your winnings you'll keep losing more and more. That is, you'd expect to get $100 - $5.30 = $94.70 back with either strategy. If you then go back and bet the $94.70, you'll expect to lose another $0.053 per dollar. But the same problem pervades both strategies if you recycle your winnings in this way. With more smaller bets, you'll just take longer to lose the same amount (on average, of course). 67.158.43.41 (talk) 23:24, 28 October 2010 (UTC)[reply]
If you make one big bet then you probably won't lose any money to the house, although you have a small chance of losing everything. If you make lots of little bets you will certainly lose a proportion of your money to the house, but have almost no chance of losing everything. The expected value is the same, but your betting system allows you to change the risk profile. 92.24.186.230 (talk) 21:24, 31 October 2010 (UTC)[reply]
While there's no way to win on roulette, sometimes it can make sense to ask how to lose the least, for example when you get gift tokens for roulette that you can't directly cash, but you can cash the other sort of tokens you get as winnings. – b_jonas 13:18, 29 October 2010 (UTC)[reply]
I have an old "casino suite" game for PC, and have a strategy for roulette that doesn't build up winnings all that quickly, but is pretty safe. Instead of choosing a single number or any of the even-money ("black or red," "odd or even," or "1-18 or 19-36") or 2-to-1 (1-12, 13-24, 25-36) payouts, I usually put a chip on 35 different numbers. That way, I spend 35 chips on the bet, but I win 36 back. (The stinker is when the spin hits "0" or "00" and I completely lose.) I'm not sure if the person running the roulette table in a real-life casino would actually allow someone to bet that way, though. (I suspect not...) Kingsfold (Quack quack!) 15:30, 29 October 2010 (UTC)[reply]
Really, why shouldn't they? You aren't improving your expectations by means of that strategy; and the casino still has the edge, at each one of your bets. The only reason for not liking it would be if they thought you were cluttering the table in that manner, discouraging others from betting.
Since the available capital for the casino is considerably larger than the amount that any single player may bet at once, there is no problem for the casino in considering the bets as independent. If you put 35 units on one number, or 35 units spread over 35 numbers, you have exactly the same expected loss, and the bank thus has the same expected gain, in either case. "Expected" may be interpreted as "almost sure in the long run" - and the casino can afford the long perspective.
The elegant way to see this is not to calculate the two separate situations; but let's do it, if that may convince you. If you put 35 on one number, you have the probability 1/38 of winning (if your wheel contains the numbers 1-36 and 0 and 00, and there is no cheating or mechanical failure involved; else modify according to the wheel involved), and you end up with 36x35 = 1260 units if you win. The expected value of your 'revenue', if you use this strategy, thus is
(1/38) x 1260 = 630/19 ≈ 33.2 units,
an expected (i. e., average) loss of 35/19 units.
If instead you bet one unit each on the numbers 1 to 35, then you have the probability 35/38 of ending up with 36 units, and 3/38 of losing all (since not only 0 or 00, but also that number on which you didn't bet, say 36, is a loser). Your expected 'revenue' with the second strategy thus is
(35/38) x 36 = 630/19 ≈ 33.2 units,
with an expected loss of 35/19 units, on the average and in the long run.
There is no difference whatsoever between the expected average outcomes.
The casino does take the long run view. For them, it does not matter if you want to balance a small chance of winning much against a large chance of losing little, or the other way around; as long as you play at their odds, they should be happy.
There is one and just one way to win in the long run in casino gambling: Be the casino owner!. JoergenB (talk) 19:34, 29 October 2010 (UTC)[reply]
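The two computations above can be reproduced with exact rational arithmetic; both strategies yield the same 630/19 expected revenue and 35/19 expected loss (a sketch of JoergenB's numbers, nothing more):

```python
from fractions import Fraction

# 35 units on a single number: win 36 * 35 = 1260 units with probability 1/38.
revenue_single = Fraction(1, 38) * 36 * 35

# One unit on each of 35 numbers: end with 36 units with probability 35/38.
revenue_spread = Fraction(35, 38) * 36

print(revenue_single, revenue_spread)            # 630/19 both
print(35 - revenue_single, 35 - revenue_spread)  # expected loss 35/19 each
```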

Mentally generating random numbers

Suppose I am playing a game like poker, and my strategy includes bluffing with a certain (fixed) probability each hand. Of course, humans are notoriously bad at generating "random" numbers themselves. The bluff (poker) article mentions using the colors of my hidden cards or the second hand on my watch as randomizing devices, but suppose I want to use some kind of pseudorandom number generator. Nearly all of the results about PRNGs are for computers that can easily remember and manipulate large numbers, which humans cannot do. Are there results for "human-implementable" PRNGs that use numbers with no more than, say, four decimal digits and computations that can easily be performed mentally? —Bkell (talk) 14:14, 25 October 2010 (UTC)[reply]

My initial thought was to do something like this:
  • Choose prime numbers p, q.
  • Choose m and n, small generators of the multiplicative groups modulo p and q, respectively (this will be easier if m and n are equal, more so if they are equal to 2)
  • Choose thresholds so that is equal to the probability of bluffing.
  • Choose seeds .
  • At each iteration, take . Bluff if .
However, testing this gave a sequence with longer runs than random. Maybe some modification of this can give better results.
Another approach is to generate a sequence in advance and memorize it using some mnemonic. -- Meni Rosenfeld (talk) 15:19, 25 October 2010 (UTC)[reply]
I would just remember a long string of numbers. Or you could use some other more easily remembered pattern. E.g. if you've foolishly remembered π to 50 decimal places, or know the phone numbers of all your friends. You can extend the use of these by looking at them more than one way. For example you could alternate between bluffing if a digit is odd and bluffing when a digit is 5 or more. It even need not be numbers. A text, which might be easier to remember, could do the same job, though you need to be smarter at generating random numbers from it as the frequency and distribution of letters in most texts is far from random (or if it is random the text isn't easy to remember).--JohnBlackburnewordsdeeds 15:29, 25 October 2010 (UTC)[reply]
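A simplified single-modulus variant of the generator Meni sketches (not his exact two-prime scheme) is a Lehmer-style multiplicative congruential generator with human-sized numbers. The modulus 101 and multiplier 2 are illustrative choices; 2 is a primitive root mod 101, so the state cycles through all 100 nonzero residues, and a tiny generator like this will still show the longer-than-random runs Meni observed.

```python
def lehmer_stream(seed=17, m=2, p=101):
    """Multiplicative congruential generator x -> m*x mod p, small enough to do mentally."""
    x = seed
    while True:
        x = (m * x) % p
        yield x

def bluff_decisions(n, threshold=33, seed=17):
    """Bluff when the state is small; threshold/100 is the long-run bluffing frequency."""
    gen = lehmer_stream(seed)
    return [next(gen) <= threshold for _ in range(n)]

decisions = bluff_decisions(100)
print(sum(decisions), "bluffs in 100 hands")  # exactly 33, by the full-period property
```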

complete the sequence

5+3+2 = 151012
9+2+4 = 183662
8+6+3 = 482466
5+4+5 = 202504
 Then
7+2+5 = ?  —Preceding unsigned comment added by 119.154.134.131 (talk) 17:42, 25 October 2010 (UTC)[reply] 

sorry, i have just solved it answer is 143542 —Preceding unsigned comment added by 119.154.134.131 (talk) 17:56, 25 October 2010 (UTC)[reply]

For anyone looking at this, the solution is, given a+b+c, a*b concatenated with a*c concatenated with b(a+c) reversed. So, 7*2=14, 7*5=35, and 2(7+5)=24, reversed is 42. Concatenated, it is 143542. -- kainaw 12:56, 27 October 2010 (UTC)[reply]
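Kainaw's rule, written out as a sketch:

```python
def encode(a, b, c):
    """a*b, then a*c, then b*(a+c) with its digits reversed, all concatenated."""
    return f"{a * b}{a * c}{str(b * (a + c))[::-1]}"

for triple in [(5, 3, 2), (9, 2, 4), (8, 6, 3), (5, 4, 5)]:
    print(triple, encode(*triple))  # reproduces the four given lines
print("7+2+5 =", encode(7, 2, 5))  # 143542
```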

Ergodicity

I'm teaching myself a little ergodic theory. Looking at the article ergodicity, I have a couple of questions. First, in the second item in the formal definition, that A should be an E, yes?

Also, am I mistaken in thinking that the third item in the formal definition implies the other three?--130.195.2.100 (talk) 21:53, 25 October 2010 (UTC)[reply]

Nevermind on the second. I just looked closer and noticed that the article mentions they're equivalent definitions.--130.195.2.100 (talk) 21:55, 25 October 2010 (UTC)[reply]
You can read the source of the definitions you indicate at Google Books. Maybe a book would be better anyway. Thm. 1.5 says that, yup, you're right, A should be E. Apparently it was badly copied. I've updated the page. 67.158.43.41 (talk) 09:06, 26 October 2010 (UTC)[reply]


October 26

Monomial Ordering

Resolved

How and why does monomial ordering affect the quotient of a polynomial ring by an ideal? For example, take a polynomial ring over a field of characteristic zero, with variables x and y, and with the so-called degree reverse lexicographical monomial ordering. If we calculate

However, if we use the so-called negative degree reverse lexicographical monomial ordering, then we have

I understand the first one, but I don't understand why terms are missing in the second one. I used SINGULAR to do the second calculation. More information on SINGULAR's monomial orderings can be found at this page. It seems that the second line involves the localized polynomial ring; whatever that might be. Any suggestions? Fly by Night (talk) 13:04, 26 October 2010 (UTC)[reply]

How do you understand the first one? It seems to me that and shouldn't be there, if it's an ordinary ideal with two generators, and an ordinary ring quotient. –Henning Makholm (talk) 13:21, 26 October 2010 (UTC)[reply]
(e/c) I don't understand either, as far as I can see the result should be . The quotient of a ring by an ideal depends only on the ring and the ideal, nothing else; monomial ordering only comes into the picture when looking for a basis.—Emil J. 13:26, 26 October 2010 (UTC)[reply]
There's a typo in my post; that's all. It should have been y^3 instead of y^2. I deleted this question, but Henning Makholm seems to have reinstated it. I removed the question because I found the answer. The two monomial orderings change the calculations. One gives you the whole polynomial ring as the numerator and the other gives you the localized polynomial ring. It's all in the link I posted. Anyway, thanks all the same. Fly by Night (talk) 14:14, 26 October 2010 (UTC)[reply]
Maybe you already found a source for what's going on, in which case you can ignore this, but here goes anyway. The quotient ring K[x,y]/<x^2+x, y^2> has dimension 4 (the total multiplicity of the roots of these polynomials). In reverse lex order, the monomial basis is 1, x, y, xy. For the local version, you're no longer dealing with K[x,y], but its localization at the origin (in other words, its localization by the maximal ideal <x,y>). This ring consists of all rational functions which are defined at the origin. In this setting every polynomial with a non-zero constant term is a unit. The ideal <x^2+x, y^2> ⊂ K[x,y] extends to the local ring, but there it behaves a bit differently. In particular we can only see the properties of the ideal near the origin (this is where the term "localization" comes from). x^2+x = x(1+x), but 1+x is a unit, so <x^2+x, y^2> = <x, y^2>. That's why Loc_<x,y> K[x,y]/<x^2+x, y^2> only has dimension 2 (the multiplicity of the root at the origin). In negative reverse lex order the monomial basis of this quotient ring is 1, y. The monomial order chosen can affect the monomial basis of the quotient (and equivalently the Grobner basis of the ideal), but it won't change the dimension. However, global orderings (where higher degree terms are larger) only make sense in the context of the global ring, while local orderings (where the smaller degree terms are larger) only make sense in the context of the local ring, so SINGULAR deduces which one of those you want to be working in based on your choice of a global or local ordering. Rckrone (talk) 05:18, 27 October 2010 (UTC)[reply]
What a fantastic answer: superb! Thanks a lot Rckrone. Fly by Night (talk) 08:49, 27 October 2010 (UTC)[reply]

Averaging points on a sphere

If I have a cluster of points on a sphere, is there some way of averaging them analytically? In other words, is there any method other than brute-force computation to find the point that minimises the distance between those points (in terms of angular displacement, obviously - the point should be on the surface of the sphere itself, so simply finding the euclidean distance wouldn't be much use)? If it helps, the particular case I'm looking at only involves a small section of a sphere, which should avoid the weird effects you'd get if you tried to average antipodal points. Thanks! Laïka 17:52, 26 October 2010 (UTC)[reply]

You can use the vector cosine to easily calculate the angle from each axis to the point. Average the angles to place a new point at an average angle to each axis. -- kainaw 17:55, 26 October 2010 (UTC)[reply]
Unless you care about which exact kind of average you find for widely separated points, an easy thing to do would be to average the points in 3D and normalize the resulting vector. If you try to average exact antipodes (so the 3D sum ends up being zero) it's not quite well-defined what one would want the operation to yield anyway. –Henning Makholm (talk) 18:35, 26 October 2010 (UTC)[reply]
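Henning's suggestion in code: average the unit vectors in 3D, then project back onto the sphere (a sketch, assuming points are given as (polar angle, azimuth) pairs in radians; the 5-degree/355-degree pair from later in the thread is used to show the wrap-around case comes out right).

```python
import math

def spherical_mean(points):
    """Average unit vectors in 3D and renormalize. Each point is (theta, phi):
    theta the polar angle from +z, phi the azimuth, both in radians."""
    sx = sy = sz = 0.0
    for theta, phi in points:
        sx += math.sin(theta) * math.cos(phi)
        sy += math.sin(theta) * math.sin(phi)
        sz += math.cos(theta)
    r = math.sqrt(sx * sx + sy * sy + sz * sz)
    if r == 0:
        raise ValueError("mean undefined (antipodal cancellation)")
    return math.acos(sz / r), math.atan2(sy, sx)

# Two equatorial points straddling phi = 0 average to phi = 0, not to the naive 180.
theta, phi = spherical_mean([(math.pi / 2, math.radians(5)),
                             (math.pi / 2, math.radians(355))])
print(math.degrees(theta), math.degrees(phi))  # ~90.0, ~0.0
```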
Life might be easier if you convert to spherical coordinates first. Or, you can use Euler angles to give precedence to a particular rotation. Nimur (talk) 18:45, 26 October 2010 (UTC)[reply]
How does this problem become easier in spherical coordinates? –Henning Makholm (talk) 18:49, 26 October 2010 (UTC)[reply]
Normalization is free. Nimur (talk) 19:20, 26 October 2010 (UTC)[reply]
But vector addition is very expensive. –Henning Makholm (talk) 20:45, 26 October 2010 (UTC)[reply]
Ah, should have mentioned, I'm working in spherical co-ordinates anyway. Laïka 18:53, 26 October 2010 (UTC)[reply]
To compute the cartesian arithmetic mean in this case, it doesn't seem to suffice to compute the mean of the spherical angles--take a couple of points in the x-y plane, one at 5 degrees and the other at 355 degrees. Is there any way to avoid lots of conversions that are equivalent to just converting to cartesian anyway? 67.158.43.41 (talk) 22:17, 26 October 2010 (UTC)[reply]

There is quite a bit of work on how to do this correctly. Directional statistics is the field where this is discussed, and "Statistics of Directional Data" by Mardia and Jupp is probably a good introduction.--Salix (talk): 19:08, 26 October 2010 (UTC)[reply]

A point that minimizes the sum of the distances is not what an average is. The average of a list of numbers is the number that minimizes the sum of the squares of the distances from them. The number that minimizes the sum of the distances is the median. Michael Hardy (talk) 22:54, 28 October 2010 (UTC)[reply]

This might also be relevant here. Michael Hardy (talk) 22:55, 28 October 2010 (UTC)[reply]

prove that a + b will have the same value as b + a

How do you do that? What if b is negative. Then a + -b is the same as a - b. So if a = 5 and b = 3, it's easy: 5-3. But let's look at b + a. Now it's -3 + 5, then you have to start counting back from 3, (you count -2, -1, 0) and then go forward. (...1, 2, finishing the 5). So in this case b + a would be 3 + 2, which is -b + (|a|-|b|). And there are a bewildering number of other similar arrangements (both negative? only b negative, only a negative, etc). Then there's zero. After all is said and done, I'm not even sure if a+b = b+a at all. (Like if they are both zero -- is 0+0=0+0; when I try to verify that I just get divide-by-zeros, so maybe it's not a valid operation at all!!)

Please help. Is a+b = b+a for ALL cases??? How do you prove that??? Thanks a million!!! 84.153.187.197 (talk) 18:50, 26 October 2010 (UTC)[reply]

Addition is commutative, subtraction is not. I think you are getting confused between the two. On a number line, addition means "first do this, then do that": positives move the counter to the right, negatives to the left. So A+B means count A, then count B. If A = 5 and B = -3, A+B means start at zero, move 5 to the right, then move 3 to the left; the answer is where you end up. For B+A it means move 3 to the left then 5 to the right -- oh look, you get the same answer. I don't know if it can be proved or if it is just observed that addition is commutative. Theresa Knott | Hasten to trek 18:59, 26 October 2010 (UTC)[reply]

Of course it can be proved, but how to prove it depends heavily on what your basic definitions and axioms look like. (Such as: what is a negative number, anyway?) With the definitions in Integer#Construction, commutativity of integer addition follows immediately from commutativity of addition of natural numbers. For natural numbers, you can either say, well, it's obvious, or prove it from the Peano axioms using induction. –Henning Makholm (talk) 19:16, 26 October 2010 (UTC)[reply]
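For anyone curious what that induction looks like when fully formalized, here is a sketch in Lean 4, building a fresh copy of the naturals from scratch (all names here are mine; the standard library proves the same fact for its own Nat):

```lean
-- A from-scratch copy of the Peano naturals.
inductive N where
  | zero : N
  | succ : N → N

open N

-- Addition by recursion on the second argument.
def add : N → N → N
  | a, zero   => a
  | a, succ b => succ (add a b)

theorem zero_add' (a : N) : add zero a = a := by
  induction a with
  | zero => rfl
  | succ a ih => simp [add, ih]

theorem succ_add' (a b : N) : add (succ a) b = succ (add a b) := by
  induction b with
  | zero => rfl
  | succ b ih => simp [add, ih]

-- Commutativity, by induction using the two helper lemmas.
theorem add_comm' (a b : N) : add a b = add b a := by
  induction b with
  | zero => simp [add, zero_add']
  | succ b ih => simp [add, succ_add', ih]
```

The two helper lemmas are exactly the "hidden" inductions the informal argument glosses over: the definition only says how addition behaves on the *second* argument, so its behavior on the first must itself be proved.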

Proof by pictures?: For adding two positive integers, a + b, the picture is a dots on the left and b dots on the right. Count up all the dots to compute a + b. If I rotate the picture 180 degrees, now all the dots that were on the right are on the left, and vice-versa. So now the total number of dots represents b + a, but since I haven't erased any dots, or drawn any extras, this count is the same as the count I got before.

One could extend this to the sum of a positive integer and a negative integer by drawing o's for the positive number and x's for the negative integer. Connect an o with an x using a line to "cancel" them against each other, and repeat until you've run out of o's or x's. The count of o's or x's that survive gives you the result of the sum. Rotate the picture 180 degrees and, again you get that a + b = b + a. Quantling (talk) 20:32, 26 October 2010 (UTC)[reply]

I "define" addition similarly. I imagine counting clearly distinct stones on a beach. For some total number of stones, we can clearly group the stones into two piles, call one pile, say, "8" and the other "3" depending on how many stones are in each pile. We may reconstruct the total pile by either creating an "8" pile and then setting a "3" pile next to it, or by reversing this order. There we have commutativity from a few physical properties--conservation of stones, and the ability to move them about in space from pile to pile using different timings, along with our ability to recognize piles and stones. These arguments can give the basic properties of addition of natural numbers, where integer properties follow as Henning Makholm mentioned. The more rigorous approach of Peano Arithmetic (or similar), I would say, relies on an implicit connection between the axioms and reality, where the user of those axioms is at all interested in them because they list a sort of minimal set of intuitively obvious properties that a physical definition of addition satisfies. For instance, I believe PA "mirrors" my intuitive definition of addition. If I didn't, I wouldn't be able to add, and I wouldn't be able to trust calculators, so I wouldn't be able to pay for both an item and shipping--which is a stupid place to be. There's not really much more you can do than accept some axioms as given. 67.158.43.41 (talk) 06:00, 27 October 2010 (UTC)[reply]

While mathematicians will address the commutativity directly, it is very reasonable and logical of you to consider the situation on a case-by-case basis. Just what are your "bewildering number of ... arrangements"? Well, if a < 0 then b < a or b = a or a < b < 0 or b = 0 or b > 0, for five arrangements. (And yes, I am picturing a number line as I write this.) If a = 0 then b < a or b = a or b > a, for three arrangements. If a > 0 then b < 0 or b = 0 or 0 < b < a or b = a or b > a, for five arrangements. So, there are only thirteen arrangements, which is not all that many for you to check, case by case, on a number line to see that it really does work. Compare this to the proof of the four color theorem, where the initial proof required examining 1,936 separate cases. I think that you may be experiencing one of the wonders of mathematics. When an equality is proven, then it will hold true. There is a certain, never-diminishing joy which occurs when the results are obtained through a previously unanticipated means, and that equality still holds. You knew that it would (as long as the logical basis for mathematics was sound) but it brings joy nonetheless. Enjoy! -- 124.157.254.112 (talk) 21:42, 26 October 2010 (UTC)[reply]

See Proofs involving the addition of natural numbers#Proof of commutativity.
Wavelength (talk) 22:03, 26 October 2010 (UTC)[reply]
Note that the OP asked about addition where at least one of the numbers is negative, having no difficulty when only the natural numbers are considered. Taemyr (talk) 00:02, 28 October 2010 (UTC)[reply]
I disagree. It mentions 0+0 specifically, and 0 isn't negative. (0 is arguably not a natural.) For that case, 0+0 = 0+0 by hand-wavey rules of substitution I've never actually heard clearly articulated. In some sense, one 0 is indistinguishable from another in that we simply call them all "equal", whatever that means. By assumption the result of the two-argument addition function + is +(0, 0) = 0 (where +(0, 0) is using prefix instead of infix notation). We then have 0+0=0=0+0, so 0+0=0+0 follows from another assumption, the transitivity (and I suppose symmetry) of equality. One might wish to define a sort of structural equality, where two structurally identical formulas are always equal. That seems problematic for some types of definitions: like the function which returns the number of times it's been evaluated. Mathematical functions, at least, are static, though, so I'd be willing to accept such structural equality in some cases. This is a surprisingly interesting question! 67.158.43.41 (talk) 10:21, 28 October 2010 (UTC)[reply]
This book gives a proof: Edmund Landau, Foundations of Analysis. (orig. title: Grundlagen der Analysis) – b_jonas 13:02, 29 October 2010 (UTC)[reply]

Sounds like you're a mathematician or logician in the making; if so, may I recommend An Introduction to Symbolic Logic, by Susanne Langer? It's a gem, a delightful book, and very readable by any intelligent student with no background in logic or mathematics. The latest Dover edition is entirely sufficient (you don't need the expensive reprinting by a new publisher) and it should be available used for five dollars or so, online.

Some directions to rigorous proofs have been given above. If you'd like a more immediately intuitive demonstration, maybe I can help. But the answer to your question, like any proof, depends on the assumptions you're willing to start from. Will you grant the following assumptions?

  1. addition and subtraction are associative, e.g. (x+y) + z = x + (y+z).
  2. a well-formed proposition is either true or false.
  3. when the same number is subtracted from both sides of an inequality, the two sides remain unequal.
  4. when you take a first number, add a second one, then subtract the first, you get the second one, i.e. (x+y)-x = y.

Actually, we can't really call the following a "proof", because that last statement above, at least, is an informal assumption. So let's call what follows an "intuitive demonstration".

(The demonstration is by the reduction to absurdity method. If you're not familiar with the method, it's extremely useful. It's based on the fact that a true statement will have only true consequences. If some statement has false consequences, then the original statement itself is false. You start by assuming the opposite of what you actually want to prove, and then show, through a series of allowed steps, that this opposite assumption implies a false consequence, a contradiction, or (same) "absurdity". If all your steps are allowed ones, then the only conclusion is that the original "opposite" assumption was itself false.)

We begin by considering the statement that's the "opposite" of the one you want to prove. That is, we consider the inequality a+b < > b+a and see what follows from it, by allowed steps:

a + b < > b + a

(a+b) - b < > (b+a) - b

a + (b - b) < > a  note this step relies on the unproved, intuitive notion that (b+a)-b = a

a < > a

But the statement "a < > a" is false ( "absurd" ) for all values of a. Since all of the three preceding steps are allowed by our assumptions, the only conclusion must be that the original statement, a + b <> b + a, is false. And if it's false that two terms are unequal, the only alternative is that they are equal, i.e. the only alternative possibility is that a + b = b + a.

It's common to write something like "proof by reductio ad absurdum" at the end of such a demonstration, btw. Hope that helps. Cheers,  – OhioStandard (talk) 07:06, 1 November 2010 (UTC)[reply]

I don't mean to nitpick, but the final condition, (x+y)-x = y is equivalent to ((x+y)-x)+x = (y)+x or (x+y)+(-x+x) = (x+y)+(0) = x+y = y+x, allowing associativity and the behavior of 0, which you actually implicitly used as well. That is, you're assuming something equivalent to what you wanted to prove, but which is less intuitive (in my opinion). It's unfortunate since the rest of the post illustrates a number of nice basic proof techniques so well. I would almost be inclined to reason in the reverse direction, from a+b = b+a to (a+b)-a = b, since the same points may be illustrated. 67.158.43.41 (talk) 07:40, 1 November 2010 (UTC)[reply]
No, not "nitpicking" at all; you're just using correct principles. God is in the details in logic and mathematics, after all, so thank you. I had actually been aware that I was using the "to be proved" in this "demonstration" ( I can't call it a proof ) I gave, and considered making that explicit, but I wasn't sure whether the OP was actually seeking a real proof, or just an intuitive assist. I felt the "take a first number, add second number, then subtract the first one" might feel intuitive enough to satisfy the OP, if that's all he might have been seeking. But you're of course correct that (x+y)-x = y and x+y = y+x are equivalent statements. I just thought it might make the OP feel better to have a look at it this way. But it's worth stressing that this can't be considered a proof at all, and I appreciate your doing so. I probably should have been more clear about that. I have the Landau book (somewhere) that Jonas mentioned above, perhaps I'll take a look, too, at what he has to say. Good stuff, this, and I'm always very impressed at the extraordinary degree of mathematical sophistication represented here. Thanks again for your comment, very much. Best,  – OhioStandard (talk) 08:50, 1 November 2010 (UTC)[reply]
I just realized how dense I was being, above. I shouldn't try to answer ref desk questions while sleepy, it appears. Of course all the running about the room shouting reductio ad absurdum was unnecessary. If the OP was willing to accept the "take a first number, add a second number, then subtract the first one" or (same) (a+b)-a = b premise on an intuitive basis, then just adding "a" to both sides would have done the trick, as you showed above. It would have "done the trick" for an intuitive acceptance, I mean; again, no sort of proof. Thanks,  – OhioStandard (talk) 09:52, 1 November 2010 (UTC)[reply]
I figured the rest of the post was worth the meandering through inefficiency :). It might be cleaner to use a series of equalities, but it's almost certainly not clearer for novices. 67.158.43.41 (talk) 21:24, 1 November 2010 (UTC)[reply]
Hmm ... I also just noticed that above I incorrectly gave "a + b < > b + a" as the "opposite" of the sentence "a + b = b + a". The use of the informal word "opposite" was careless; the word I should have used was "negation". If I'd used that correct and precisely-defined word, negation, I might have noticed the implicit quantifier "for any (a,b)" that precedes "a + b = b + a", and then realized that the correct negation of the equality isn't "for any (a,b) a + b < > b + a" as I tacitly assumed, but rather "there exists at least one (a,b) such that a + b < > b + a".  – OhioStandard (talk) 15:16, 1 November 2010 (UTC)[reply]

Linear transformation (map) problem, shortcut?

Given a basis of R^n, B=[a,b,c . . . ], the images T(a), T(b), T(c) . . . (where T is an unknown transformation from R^n to R^(n+1)), and u (a vector in R^n),

find: T(u), a basis for the kernel, and a basis for the range.

I think I understand one way to solve this class of problem. T can be represented by a matrix, of known dimensions, I could do the multiplication T(a, b, c) with the entries of T as unknowns, and come up with a system of equations, and thus find a representation of T. Is this necessary? Is there a way to solve this class of problem without finding T explicitly? --hacky (talk) 21:33, 26 October 2010 (UTC)[reply]

A better way is to decompose u into a linear combination of your basis vectors. You can then apply linearity to compute T's effect on u. For instance, if you find u = c1 a1 + c2 a2 + ... + cn an (where I've written your basis as B=[a1, a2, ..., an]), you have T(u) = c1 T(a1) + c2 T(a2) + ... + cn T(an).
Decomposing u can be done in a number of ways. You might be given u as a linear combination of basis vectors directly. You might be given u in another basis, in which case you could apply a change of basis transformation. 67.158.43.41 (talk) 22:31, 26 October 2010 (UTC)[reply]
Also, [T(a1), T(a2), ..., T(an)] should itself be almost a basis for the range of T. However, it might not be linearly independent, so not really a true basis. I believe Gram-Schmidt Orthogonalization (or a slightly modified form, which can deal with intermediate 0's) should cull out the extra vectors while producing a nice orthonormal basis for the range. The 0's should give information on the null space as well.... 67.158.43.41 (talk) 03:46, 27 October 2010 (UTC)[reply]
There's a better way to find the matrix for T than how you described. Suppose M is the matrix for T. Then Ma = T(a), Mb = T(b), etc. So if [a,b,c,...] is the matrix with the elements of B as the columns, and [T(a), T(b),T(c),...] is the matrix with their images as the columns, M[a,b,c,...] = [T(a),T(b),T(c),...]. You can solve this by right multiplying by [a,b,c,...] inverse, or better yet, making the augmented matrix [a,b,c,...|T(a),T(b),T(c),...] and performing Gaussian elimination.--130.195.2.100 (talk) 01:29, 27 October 2010 (UTC)[reply]
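A concrete sketch of this (toy numbers of my own, with NumPy standing in for the hand Gaussian elimination):

```python
import numpy as np

# Invented example: basis a = (1, 0), b = (1, 1) of R^2, and the images
# T(a) = (1, 0, 2), T(b) = (1, 1, 3) under some unknown T : R^2 -> R^3.
B = np.array([[1.0, 0.0],           # a
              [1.0, 1.0]]).T        # b    (columns of B are the basis)
TB = np.array([[1.0, 0.0, 2.0],     # T(a)
               [1.0, 1.0, 3.0]]).T  # T(b) (columns are the images)

# M B = TB  =>  B^T M^T = TB^T, solved without forming B^{-1} explicitly:
M = np.linalg.solve(B.T, TB.T).T

u = np.array([3.0, 5.0])
Tu = M @ u                       # T(u), here (3, 5, 11)

rank = np.linalg.matrix_rank(M)  # dimension of the range
nullity = B.shape[1] - rank      # dimension of the kernel (rank-nullity)
```

Once M is in hand, bases for the range and kernel follow from standard row reduction or an SVD of M.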
It is possible that you didn't want these answers. If I understand your question, you wanted to know if there was a simpler method than one using matrices. The answers so far discuss simpler ways to use matrices.
The reason is that in many or most situations, matrix calculations do provide the simplest method. There are also numerous situations where they don't, but these usually will not involve arbitrary transforms. If your unknown transform derives from some known situation, where you possess extra information, then there may well be a faster way. If not, I'm inclined to answer your question thus:
No, there isn't; thus, you should concentrate on learning efficient matrix calculation methods, in the first place. JoergenB (talk) 20:14, 29 October 2010 (UTC)[reply]

physical constants as reported in Wikipedia

Why are Wikipedia physical constants displayed with parens around the last two digits? —Preceding unsigned comment added by J 0JFCfrmAyw59oVFk (talkcontribs) 22:09, 26 October 2010 (UTC)[reply]

They're reporting the measurement uncertainty in current measurements. For instance, 1.34(12) means the value is known to lie between 1.34-0.12 and 1.34+0.12. 67.158.43.41 (talk) 22:23, 26 October 2010 (UTC)[reply]
I haven't seen this in articles, but if it is enWP practice, then there should be a link to an explanation. Perhaps some template {{foo|1.34|12}} should yield "1.34(12)" or something.—msh210 14:41, 27 October 2010 (UTC)[reply]
The physical constant article uses that notation extensively. 67.158.43.41 (talk) 22:47, 27 October 2010 (UTC)[reply]
As a layman, I've always assumed that 1.34(12) meant that the best estimate or measurement was 1.3412, but the last of these digits are not known to be accurate. Different from 1.34±0.12. Have I been wrong? -- Tom N (tcncv) talk/contrib 04:55, 28 October 2010 (UTC)[reply]
I'm afraid so. See Uncertainty#Measurements. It's also wrong to say that 1.34(12) means the value is known to lie within 1.34-0.12 and 1.34+0.12. The value 0.12 is an estimate of the standard error. Qwfp (talk) 06:48, 28 October 2010 (UTC)[reply]
I should have mentioned I was glossing over the uncertainty of the bounds. I figured the OP wouldn't mind the difference, but it's a good thing to mention that nothing is certain in experimental physics. 67.158.43.41 (talk) 10:05, 28 October 2010 (UTC)[reply]
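For concreteness, here's a hypothetical little helper implementing the convention being discussed (the parenthesized digits give the estimated standard uncertainty in the last digits shown; the function name is made up):

```python
def parse_concise(s):
    """Parse concise uncertainty notation: "1.34(12)" -> (1.34, 0.12).
    The digits in parentheses give the (estimated standard-error)
    uncertainty in the last decimal places of the value."""
    value, unc = s.rstrip(")").split("(")
    decimals = len(value.split(".")[1]) if "." in value else 0
    return float(value), int(unc) * 10.0 ** -decimals

# e.g. a CODATA-style constant such as 6.67430(15) x 10^-11:
val, err = parse_concise("6.67430(15)")   # -> 6.67430, 0.00015
```

So the notation scales the quoted uncertainty to the same decimal place as the value's last printed digit, which is why 1.34(12) means 1.34 with standard uncertainty 0.12, not "1.3412 with shaky final digits".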

October 27

Primitive roots modulo 5^n

Hi all,

Could anyone suggest an elegant (number theoretic) proof of the fact that both 2 and 3 are primitive roots modulo 5^n for all n, i.e. 2^i or 3^i both generate the multiplicative group of Z/((5^n)Z)? I've looked at inductive proof but it seems a bit messy and I'm hoping there's a nice clean/concise solution - I have no doubt the reference desk brains are far more capable of coming up with an elegant proof than I am ;) (or if there is a particularly concise inductive proof, I have nothing against induction!) Could anyone suggest anything, please?

Thankyou for the help! Delaypoems101 (talk) 00:02, 27 October 2010 (UTC)[reply]

The group (Z/5^nZ)^* is cyclic of order 4·5^(n-1). If a = 2, 3 were not a primitive root, its order would have to be a proper divisor of 4·5^(n-1), or in other words, a would be a square mod 5^n, or n > 1 and a would be a fifth power mod 5^n. In the former case, a would also be a square mod 5, which it is not (the only squares mod 5 are ±1). In the latter case, a would be a fifth power mod 25, which it also is not. (You can check that by simply enumerating all 20 possibilities, or you can reverse the argument above: if a were a fifth power mod 25, then a^4 ≡ 1 (mod 25), which does not actually hold.)—Emil J. 15:14, 27 October 2010 (UTC)[reply]
The moral of all this is that when p is an odd prime and n ≥ 2, then r is a primitive root modulo pn if and only if r is a primitive root modulo p2.—Emil J. 15:39, 27 October 2010 (UTC)[reply]
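For anyone who wants to see it numerically, a quick brute-force check using the standard order-divisor test (illustrative code, assuming gcd(a, n) = 1):

```python
def prime_factors(m):
    """Distinct prime factors by trial division (fine for small m)."""
    fs, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            fs.add(d)
            m //= d
        d += 1
    if m > 1:
        fs.add(m)
    return fs

def is_primitive_root(a, n, group_order):
    """a (coprime to n) generates the units mod n iff
    a^(group_order/q) != 1 (mod n) for every prime q | group_order."""
    return all(pow(a, group_order // q, n) != 1
               for q in prime_factors(group_order))

# phi(5^n) = 4 * 5^(n-1); check 2 and 3 for the first few n:
for n in range(1, 8):
    assert is_primitive_root(2, 5**n, 4 * 5**(n - 1))
    assert is_primitive_root(3, 5**n, 4 * 5**(n - 1))
```

The loop passes, consistent with the proof above; by contrast 7 fails already mod 25 (its order there is 4).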

Standard error and standard deviation

In lit review today, the professor was going on about how poor the author's data was, and how we can infer this point from his use of standard error rather than standard deviation -- he explained that because standard error is equal to standard deviation/square root of the population (I think that's what he said), if the standard deviation appears too large for him to publish without everyone laughing at his research data, he can trick readers into viewing a smaller number by reporting standard error instead, and only the most erudite will catch on because most people haven't the foggiest notion of what any of this means anyway. I tried to read both articles above, but they a) both confuse me (the former much more than the latter) and b) they didn't really discuss either in terms of the other, which is what I'm really trying to get at. I guess that's my question...if you could please explain it without sigmas and thetas, 'cause then I might as well read the article again. Thanx! DRosenbach (Talk | Contribs) 02:08, 27 October 2010 (UTC)[reply]

Your formula for the standard error of the mean is right (standard deviation / sqrt(n), n = sample size), so generally the standard error of the mean is much smaller than the standard deviation. The specifics really matter, though... it can be perfectly reasonable to report standard error of the mean instead of standard deviation. It would of course also be possible to mislead readers by reporting one where the other was appropriate, since they measure quite different things.
The standard deviation measures how dispersed your sample data is. Standard error of the mean measures how accurate the average you compute from your sample data is. For instance, if you measure the heights of your classmates, you can get a standard deviation from that sample which says how diverse the heights are. You can then compute the average height of your class, but you don't know how accurate that average is. The standard error of the mean is a "reasonable" estimate of the accuracy of your class's average height as an estimate of, say, the average height of everyone in the world.
For the sake of the argument, suppose half of everyone is 2 feet tall and the other half is 8 feet tall. The standard deviation of any sample of those people's heights would be relatively large--around 3 feet (exactly 3 feet for the whole population), roughly the same for any sample size. However, if you were to compute the average height from any sample, you'd get pretty close to 5 feet. In fact, as you took more and more people in your sample, you'd expect to get sample averages very very close to 5 feet--the error of the mean would start to vanish. In a crude way, that's why you have to divide by an expression involving the number of data points in your sample to compute the error of the mean.
So again, standard deviation = dispersion of your sample; standard error of the mean = accuracy of the average you calculate from your sample.
Note: I've dropped out a lot of details and technicalities that you clearly don't want to hear about, like that the above formula is an estimate for an infinitely large population where your samples are independent. But, I avoided almost all equations, sigmas, and thetas! 67.158.43.41 (talk) 05:41, 27 October 2010 (UTC)[reply]
Maybe a bit simpler (if not precisely correct): the standard deviation measures the variation expected from one individual item in the group, while the standard error measures the variation expected in the average of a group of size n of the items. For instance, you know that if you pick one person at random you have a reasonable chance of getting someone with a high IQ or a low IQ, but if you pick twenty people you have a much lower likelihood of the average IQ of the group being high or low (to get a very high or very low average IQ you'd need to pick 20 people all with high IQs or all with low IQs). Standard deviation is a descriptive statistic (it describes a population) and is usually reported but isn't often used in meaningful ways. Standard error is used extensively in hypothesis testing, to test if samples come from different populations (this is why sample size matters in hypothesis testing - the larger n is, the smaller standard error is, and the easier it is to see differences between the sample mean and the expected mean of the population). --Ludwigs2 06:12, 27 October 2010 (UTC)[reply]
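The 2-foot/8-foot thought experiment above is easy to simulate; a quick sketch (numbers invented for illustration):

```python
import random
import statistics

random.seed(0)

# The toy population above: half 2 ft tall, half 8 ft tall.
population = [2.0, 8.0] * 50000   # population sd is exactly 3 ft

def observed_standard_error(n, trials=2000):
    """Draw many samples of size n and return the spread (sd) of the
    sample means -- an empirical standard error of the mean."""
    means = [statistics.fmean(random.choices(population, k=n))
             for _ in range(trials)]
    return statistics.stdev(means)

se_25 = observed_standard_error(25)    # theory: 3 / sqrt(25)  = 0.6
se_100 = observed_standard_error(100)  # theory: 3 / sqrt(100) = 0.3
```

The standard deviation within each sample stays near 3 ft no matter how big the sample is, but the spread of the sample *means* shrinks like 1/sqrt(n), which is exactly the standard deviation vs. standard error distinction.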
Nice answers, both of you. – b_jonas 12:58, 29 October 2010 (UTC)[reply]

1 = -1 ?

in my textbook, a puzzle (not homework)


we have learnt i = √(−1) and i² = −1,

also √(a/b) = √a/√b,

thus i = √(−1) = √(−1/1) = √(1/−1) = √1/√(−1) = 1/i,

multiply both sides by i: we have i² = 1, therefore −1 = 1,

hence 1 = −1 !

can you spot the wrong reasoning ??? --Umar1996 (talk) 15:14, 27 October 2010 (UTC)[reply]

Try Mathematical_fallacy#Power_and_root. -- SGBailey (talk) 15:20, 27 October 2010 (UTC)[reply]
Specifically, the rule √(a/b) = √a/√b works when b is positive, but not in general. -- Meni Rosenfeld (talk) 15:24, 27 October 2010 (UTC)[reply]
yes! √(a/b) ≠ √a/√b is impossible if b is positive. --Umar1996 (talk) 11:55, 28 October 2010 (UTC)[reply]
Not impossible, but different. —Anonymous DissidentTalk 12:05, 28 October 2010 (UTC)[reply]
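If the rule in question is read as √(a/b) = √a/√b, Python's complex square root (which takes the principal branch) shows the sign flip directly:

```python
import cmath

# sqrt(a/b) versus sqrt(a)/sqrt(b), with a = 1 and b = -1:
lhs = cmath.sqrt(1 / -1)              # sqrt(-1)  ->  1j
rhs = cmath.sqrt(1) / cmath.sqrt(-1)  # 1 / 1j    -> -1j
# Both are square roots of -1, but they differ by a sign --
# exactly the spurious "1 = -1" in the puzzle.
```

"Not impossible, but different", as said above: both sides are square roots of the same number, just not the same root.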

how high do cryptographic functions scale (how big primes can you pick?)

take an algorithm like RSA, which uses big primes. Now, it is always big news when we generate a large prime, so, obviously there are limits to how many digits the primes can be. What is that limit? Can it work with 5000 (decimal) digit numbers, or 10,000 digit ones? Million digit ones? Where is the limit, before it becomes too slow to use realistically? 84.153.230.5 (talk) 16:13, 27 October 2010 (UTC)[reply]

In general it's not a really interesting question. The important question is: does it scale better than the algorithms for cracking the cryptograms? Good encryption algorithms should run in polynomial time while requiring exponential time to crack. So the answer is that, long before the algorithm becomes unusable, it will be essentially impossible to break the encryption. As to specific limits, it will depend on the application. Taemyr (talk) 23:57, 27 October 2010 (UTC)[reply]
Primality testing can be done in polynomial time in the number of digits in the input using the AKS algorithm. The polynomial exponent is (off the top of my head, so take it with a grain of salt) 6. That is, a million digit prime would take 1,000,000^6 = 10^36 operations to prove primality for. Probable primes are easier to check. The key step in RSA is modular exponentiation by squaring, which is shockingly fast. I recently had Python compute the largest known Mersenne prime, something in the neighborhood of 40 million binary digits, so, say, 10 million decimal ones, and take it modulo around 40 million. I didn't even combine the steps, which should have been much quicker. It completed several seconds later. Last I heard, RSA usually uses ~thousand digit primes, which, as Taemyr said, should be good enough.
Note: I've actually glossed over many details. I understand AKS is merely asymptotically polynomial, that certain primes take different amounts of time to prove primality for, that Mersenne primes are historically easier to find, that my example isn't representative, etc.; I don't believe the extra complication is worth it, for me or the OP. 67.158.43.41 (talk) 04:32, 28 October 2010 (UTC)[reply]
Was that several seconds for a single modulo operation? Keep in mind that while exponentiation by squaring is efficient, it still takes time proportional to the log of the exponent. I think in RSA decryption you use an exponent roughly the size of the primes. So for 40 million-bit primes, you'll need to multiply by 40 million the time it takes for a single operation, and several seconds quickly become several years. -- Meni Rosenfeld (talk) 08:41, 28 October 2010 (UTC)[reply]
If I recall RSA, you exponentiate by the encoding string, which can be on the order of the prime (product of the primes?). So you certainly have a good point--if the exponent was gigantic it would indeed take much, much longer than my admittedly nonrepresentative anecdote. If the OP is interested, perhaps they could try implementing RSA and see how quickly it works if they feed in some gigantic primes. I would be genuinely interested in the results. 67.158.43.41 (talk) 10:01, 28 October 2010 (UTC)[reply]
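To put a rough number on this, a single big modular exponentiation is easy to time directly (toy values, not a real key -- the operands are just random odd integers, not primes):

```python
import random
import time

random.seed(1)

# RSA-sized operands (~2048 bits, a realistic modern key size).
n = random.getrandbits(2048) | 1   # stand-in odd "modulus"
e = random.getrandbits(2048) | 1   # full-size exponent
m = random.getrandbits(2000)       # "message"

start = time.perf_counter()
c = pow(m, e, n)                   # square-and-multiply: ~2048 squarings mod n
elapsed = time.perf_counter() - start
```

With a full-size exponent this is roughly 2000 modular squarings, so it is thousands of times slower than exponentiating by a tiny exponent -- but still far below a second at this size, which is why ~1000-digit-scale RSA remains practical.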

finding angles in 3D figures

Could you please give me an expression that gives the angle subtended by points placed symmetrically on a spherical surface. For instance, I understand that the angle between tetrahedrally-disposed covalent bonds on a carbon atom is ~109.5deg. How is this derived?

As a more general example suppose we had to place radio transmitters to produce optimum coverage over the Earth's surface (assumed spherical). For certain numbers e.g. 6, one of the many valid placements would be self-evident (e.g one at each pole and four at 90deg. intervals on the equator). But suppose we had a number such as 7 or 17? And would there be a unique configuration (disregarding "symmetry" equivalents) or could there be alternatives?

(Digression: It occurs to me that a physical analogue could be devised - with difficulty - to yield a solution: e.g. by placing the required number of equal repulsive electric charges confined within a thin spherical shell or surface).

Thanks. —Preceding unsigned comment added by Paul Renshaw (talkcontribs) 20:11, 27 October 2010 (UTC)[reply]

You can generate a tetrahedron, and so the distribution needed, by placing points at four pairwise non-adjacent vertices of a cube. The angle is then the angle between the vectors from the origin to any two of these, such as (1, 1, 1) and (1, -1, -1). Similar solutions exist, I think, for other regular polyhedra such as the octahedron that you describe. On the more general problem there is information at circle packing, which links to Tammes problem and Thomson problem.--JohnBlackburnewordsdeeds 20:20, 27 October 2010 (UTC)[reply]
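For the record, the arithmetic behind the ~109.5° figure is short (an illustrative computation of the cube-vertex construction just described):

```python
import math

# Two non-adjacent vertices of the cube [-1, 1]^3, seen from the center,
# are two corners of a regular tetrahedron:
u = (1.0, 1.0, 1.0)
v = (1.0, -1.0, -1.0)

dot = sum(x * y for x, y in zip(u, v))         # 1 - 1 - 1 = -1
norm_sq = 3.0                                  # |u|^2 = |v|^2 = 3
angle = math.degrees(math.acos(dot / norm_sq)) # acos(-1/3) ~ 109.47 deg
```

So the tetrahedral bond angle is exactly arccos(−1/3) ≈ 109.4712°.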
There are only a few platonic solids, that is, regular (i.e. "symmetric", by what I think you mean) shapes that fit on the surface of a sphere. These each have well-known angles. The number of vertices in these shapes determines the allowable solutions to your question: 4, 8, 6, 20, and 12. Apparently, if you place any other number of repelling charges on a sphere (edit, see below: under a to-be-determined and non-standard potential function), they'll end up in a configuration slightly different from perfect symmetry. A special case is <= 3 points, since they don't form a polyhedron in that case, and the platonic solids argument doesn't apply. Clearly 1 point doesn't provide symmetry; 2 points do if they're placed on opposite sides of the sphere; 3 points do not, since they determine a plane which, from symmetry, must divide the sphere in half--which isn't a spherically symmetric configuration. 67.158.43.41 (talk) 22:44, 27 October 2010 (UTC)[reply]
Interestingly, with 8 repelling charges on a sphere, the corresponding platonic solid (a cube) will not occur. Rather, 8 points arranged around a central atom typically form a Square antiprism, rather than a cube. It's not hard to see why. Even though some of the symmetry is broken in going from the cube to the anticube, you actually increase the average distance between the points. There aren't many chemical compounds that are 8 coordinated, but there are some: TeF82-, for example. Buddy431 (talk) 03:12, 29 October 2010 (UTC)[reply]
Addendum: presumably the twenty coordinate case would also do something similar, forming some sort of skewed Dodecahedron. I don't know of any 20 coordinate compounds, though, so the point is largely moot. Buddy431 (talk) 03:24, 29 October 2010 (UTC)[reply]
That makes me wonder under what potential function point charges on a sphere arrange themselves into the platonic solids, where possible. I wonder if the usual potential function with distance taken to be geodesic distance works. Anybody know? 67.158.43.41 (talk) 04:13, 29 October 2010 (UTC)[reply]

Clustering algorithm

Is an efficient algorithm known for the kind of problem of which this is an example:

The figures show the distribution of population in a 5 by 5 block of 1 km squares, the total being 198. The aim is to cluster the squares into six groups each of size 198/6 = 33, or as close to this as possible in terms of minimising total absolute deviation. The groups can be of any shape, but no component square may be attached at just one or more of its corners.

The general problem will have a rectangular array, probably with some zero elements.→86.155.186.37 (talk) 22:17, 27 October 2010 (UTC)[reply]

I'm sorry, I do not know an algorithm. The general case feels quite hard to me. Thinking a little more, your problem is harder than the partition problem, a known NP-Complete problem. (Proof: take a 1D array of inputs; create a 2D array with 0's off the diagonal, stringing the 1D array along the diagonal. Cluster the 2D array into two groups. The clustering must consider cases where the first cluster takes the lower triangle and the second takes the upper triangle. From here, you can get all possible subsets of the diagonal split between the two clusters by choosing which cluster to put each diagonal element in. This is precisely the set of subsets considered by the partition problem, and you're considering them with respect to the same fitness function. Any other clustering also divides the diagonal into two sets, so is redundant. This instance of your problem is thus precisely equivalent to the full partition problem.) So, short of giving up, I would just throw a greedy algorithm at it and hope for the best. The article I linked is perhaps more hopeful: "The pseudo-polynomial time dynamic programming solution for the subset sum problem applies to the partition problem as well, and gives an exact answer in polynomial time when the size of the given integers is bounded." Perhaps you could use a variant of that solution and execute it in reasonable time. Good luck! 67.158.43.41 (talk) 04:19, 28 October 2010 (UTC)[reply]
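To make the partition-problem connection concrete, here is a minimal sketch of the pseudo-polynomial subset-sum dynamic programme mentioned above (textbook version only; it ignores the grid problem's contiguity constraint):

```python
def can_partition(values):
    """Subset-sum DP: decide whether a list of non-negative integers can
    be split into two groups with equal sums. Runs in
    O(len(values) * sum(values)) time -- polynomial when the values
    themselves are bounded."""
    total = sum(values)
    if total % 2:
        return False
    target = total // 2
    reachable = {0}                    # sums achievable by some subset
    for v in values:
        reachable |= {s + v for s in reachable if s + v <= target}
    return target in reachable
```

The no-corner-touching adjacency rule would have to be layered on top of this; the DP only handles the value-balancing part, which is already the NP-hard core.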
This may be generalized into an NP problem, so an absolutely correct solution would be exhaustive to run - it would be a brute force check of every possible combination. However, a heuristic may be used. One that comes to mind would be to initially divide this into two subsets of equal size. It is similar to the partition problem mentioned above, but it is a bit easier because of the limitations on which items may be included once another number is included. Once you have two equal size sets, divide each one into two equal size sets. If the heuristic fails to produce four nice sets, try rotating the entire set to see if your algorithm grabs a better group (since it will likely be biased to look in one direction for new numbers, rotating the set will make it technically look in a different direction). -- kainaw 12:39, 28 October 2010 (UTC)[reply]
Thanks for the response. There are analogies with the partition problem, as stated, and also with bin packing for a fixed number of bins. The latter analogy breaks down, as each "bin" has to be able to hold more than its notional capacity to cater for under-filling elsewhere, and the spatial nature of the inputs prevents clustering purely by value. The problem is a simplified version of the one involved in constructing UK parliamentary constituencies, the latest set of which contains the oddity of York Outer, colloquially known as Polo because of its resemblance to the eponymous sweet made by the million in the city.→86.155.186.37 (talk) 15:16, 28 October 2010 (UTC)[reply]
If (somehow) you knew that two particular values simply had to be in separate groups, you could use the two of them as starting points to create two equal-sized groups. Then, the problem becomes much easier because you'll add some to one group and some to another - back and forth. Because you have no means of knowing immediately if two values must be in different groups, you don't have a starting point. You have to guess at one, then try it. If you don't like the results, try another. That is the real reason that this is a hard problem. -- kainaw 18:20, 28 October 2010 (UTC)[reply]
Even if you knew beforehand two values were in separate groups, you might well make a mistake in creating equal-sized groups and take a sub-optimal part of the map which goes to the other cluster in the optimal solution. I don't think the problem is much easier even with this magically derived information, especially considering there are k instead of just 2 clusters total. Also, to be clear, Kainaw had said "It is similar to the partition problem mentioned above, but it is a bit easier because the limitations on which items may be included once another number is included" but my first post shows that this problem is actually strictly harder than the partition problem inasmuch as any solution of this problem automatically solves the partition problem exactly. Disallowing 0's (which you've explicitly allowed), it might become easier than the partition problem.
Again I would just try a greedy algorithm a whole bunch of times and hope for the best--like randomly start your k clusters in k places and keep adding to them while respecting the bounds as well as possible. A genetic algorithm might do well too, come to think of it. Mating clusters cleverly would be the key. The method would need to make clusters of nearly the right size change very little, while clusters of the wrong size should be quite flexible. Given two adjacent clusters, one with too many people and one with too few, you could extend the border of the smaller one into the larger relative to the difference needed. 67.158.43.41 (talk) 02:11, 29 October 2010 (UTC)[reply]
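The randomized greedy idea is easy to prototype. Below is a rough Python sketch; the function names, the grow-the-smallest-cluster rule, and the pick-the-lightest-cell rule are all my own illustrative choices, and only edge adjacency counts (no corner-only attachment, per the original question):

```python
import random

def greedy_cluster(grid, k, trials=200, seed=0):
    """Randomized greedy partition of a 2D population grid into k
    edge-connected clusters of near-equal population (a heuristic
    sketch, with no optimality guarantee)."""
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    target = sum(map(sum, grid)) / k

    def neighbours(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # edges only
            if 0 <= r + dr < rows and 0 <= c + dc < cols:
                yield r + dr, c + dc

    best, best_dev = None, float("inf")
    for _ in range(trials):
        starts = rng.sample(cells, k)
        owner = {cell: i for i, cell in enumerate(starts)}
        pop = [grid[r][c] for r, c in starts]
        frontier = set(cells) - set(starts)
        while frontier:
            grown = False
            # try to grow the currently lightest cluster first
            for i in sorted(range(k), key=lambda i: pop[i]):
                cands = [cell for cell in frontier
                         if any(owner.get(nb) == i for nb in neighbours(*cell))]
                if cands:
                    cell = min(cands, key=lambda cl: grid[cl[0]][cl[1]])
                    owner[cell] = i
                    pop[i] += grid[cell[0]][cell[1]]
                    frontier.discard(cell)
                    grown = True
                    break
            if not grown:          # no cluster can grow; abandon trial
                break
        if not frontier:
            dev = sum(abs(p - target) for p in pop)
            if dev < best_dev:
                best, best_dev = owner, dev
    return best, best_dev
```

Restarting from many random seed placements is doing the work here; a genetic algorithm as suggested above would replace the restarts with recombination of good clusterings.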
This sounds related to elections. – b_jonas 12:55, 29 October 2010 (UTC)[reply]

October 28

Obscure 3D shape that folds

In a kids' mathematics magazine I used to get back in the early 2000s, there was a free gift of a card net of an obscure geometric shape to fold 'n' glue yourself

The shape had a really long name which I think ended in "hedron" (narrows it down! :-\) which I cannot remember; the magazine may even have been forced to make up the name themselves, but it sounded authentic (inside the magazine the name was explained - which brings to mind lots of uses of "hex" and "dec", but my knowledge of shapes since may have distorted this memory).

To add to the challenge, I cannot remember the name of the magazine itself, but it had a sister publication, "Young Scientist" (try googling for that lol - it's not the first one), which had a version for older children called "SciTec" or "SciTech", and there was an equivalent older kids' version of the maths magazine, whose name also escapes me. The publisher of all four of these was Educational Publishing, under the name of which I can find at least three different publishing houses, and the relevant one seems to have gone under, since its website domain is up for sale. The editor of the Young Scientist magazine (at least) was Jenny Smith.

Every issue of each of the mags came with some kind of science or maths toy, such as 3D glasses. I want to find the name and net of the shape so I can make another one - it folded much like a Paper fortune teller, but on both sides, and looked to be made of cubes with triangular "hinges".

I have looked for the shape on Wikipedia - I wouldn't be surprised if there was at least a stub for it, but I am at a loss for how to search with no clue of the name. I have googled several combinations of the magazine names to find a publisher's website - but as I said, it has been taken down and replaced with a "for sale" notice: http://www.educationalpublishing.com/ (.co.uk has a similar notice under a different agency; I'm not sure which is correct).

The end goal is finding the shape, not the magazine, but if anyone gets as far as the magazine I'd be interested anyway мдснєтє тдлкЅТЦФФ 00:11, 28 October 2010 (UTC)[reply]

Hexaflexagon? -- 119.31.121.84 (talk) 00:45, 28 October 2010 (UTC)[reply]
Study our article polyhedron. Bo Jacoby (talk) 09:27, 28 October 2010 (UTC).[reply]
This site has something that sounds similar, which it calls a "kaleidocycle" or "flexahedron". the wub "?!" 12:08, 28 October 2010 (UTC)[reply]

Thanks 119.31.121.84! That was it - the name seems so obvious now. It was actually Hexahexaflexagon, but it was on the Flexagon page you linked. мдснєтє тдлкЅТЦФФ 04:25, 30 October 2010 (UTC)[reply]

Another Logic query - uncountability

Hi all, me again! I asked a question on axiomatic deductions a few days ago, but I was curious about something I couldn't figure out on my own. Won't need as thorough an answer this time, I don't actually -need- to do a question on this, I'm just curious :)

In the lecture course I attend we always assume (for propositional logic) that our primitive propositions are countable, and therefore so are all possible propositions. However, I'm asked here what happens when we allow uncountability, and I haven't really got a clue!

We call a set of propositions S independent if for all s in S, we cannot deduce s from S-{s} (i.e. s is required to deduce s from S). We call two proposition sets S, T equivalent if for every s in S, we can deduce s from T, and for every t in T, we can deduce t from S. I've shown that for every countable proposition set, there exists an independent set equivalent to it, but now what happens if we allow our initial propositions to be uncountable? Is there still always an independent equivalent subset?

By the nature of the question, I expect there to be a counterexample rather than a proof. Is there any way we can map this into the reals or [0,1] perhaps, then use a more well-known set of results to derive the result? I know you can consider a topological space on the valuations as a Stone space (though I don't know much about Stone spaces), hence the name of the compactness theorem, as we reach a conclusion on a compact T.S. However, is there anything we can do in this case, any map into R we can use to help us for example?

Thankyou (time and time again!), 62.3.246.201 (talk) 00:39, 28 October 2010 (UTC)[reply]

I'm not terribly well versed in logic, or set theory for that matter, but I believe the uncountable case follows from Zorn's lemma. Take P as the set of all subsets of S, an uncountable set of propositions. Take E as the set of all elements of P equivalent to S, which is clearly nonempty, since it contains S itself. Partially order E by reverse inclusion. Zorn's lemma provides us with at least one maximal element M, which, from our reverse ordering, is actually minimal under standard inclusion. Any subset M' in E of M would be comparable to M under our partial order, so, since M is minimal, M' must be M. That is, no proper subset of M is equivalent to S, therefore, to M. Take M-{m} for any m, which is now not equivalent to M, so m cannot be deduced from M-{m}. Thus M is independent and equivalent to S.
I wouldn't be at all surprised if this question is (in a more technical sense than I've used) equivalent to Zorn's Lemma, which is itself equivalent to the Axiom of Choice and a number of other things, under the proper set theory. 67.158.43.41 (talk) 04:05, 28 October 2010 (UTC)[reply]
Zorn's lemma fails here: the relevant poset is not chain-complete. This can be seen even in the countable case: take S={p1,p1∧p2,p1∧p2∧p3,...}, and let En be S with the first n elements removed. Then each En is equivalent to S, and they form a chain in E, but the only set contained in all of them is the empty set, which is not in E. Algebraist 09:56, 28 October 2010 (UTC)[reply]
Ach, this is so stupid of me. When I stated the uncountable case, I said something different to the countable case! Sorry. I meant to say, in the uncountable case, is there always an equivalent independent set, not necessarily subset? I actually used that very example to show there isn't always such a subset, so this was doubly foolish of me. Does there always have to be an independent set equivalent to the original by Zorn's Lemma then? Thankyou again, 131.111.185.74 (talk) 17:48, 28 October 2010 (UTC)[reply]
That's what I assumed you meant, since it's the natural question and the one Imre set. If Zorn's lemma solves the problem, it does so in some rather subtle way; all the obvious approaches I've tried fail. Algebraist 18:23, 28 October 2010 (UTC)[reply]
Who's Imre?—Emil J. 10:30, 29 October 2010 (UTC)[reply]
Imre Leader, who is fond of this problem and whose undergraduate logic course the OP appears to be taking. Algebraist 10:43, 29 October 2010 (UTC)[reply]
Bah, how embarrassing. Forgetting to check for bounded chains when applying Zorn's lemma is a bit like making an omelet while forgetting the eggs.... Modifying the above, I've gotten a bit farther, at least. If you take P to be the set of all independent sets provable from S, P contains the singletons from S so is nonempty. Any chain C = {I_a} has upper bound J = Union of C. To the contrary, if J isn't independent, some j in J is proven by J-{j}. Assuming proofs are of finite length, they can apply at most finitely many axioms in J-{j}, so say the finite subset J_0 of J-{j} proves j. The elements of J_0 ∪ {j} are each contained in some finite set of I_a's, where the union of this finite set must itself be an I_a since they form a totally ordered finite sequence through inclusion. Call this union I_0. But then I_0 is independent and contains J_0 ∪ {j}, so I_0 - {j} doesn't prove j, so J_0 doesn't prove j, a contradiction. So, each chain is bounded and Zorn's lemma gives a maximal independent set containing elements provable from S. Unfortunately, I haven't been able to show such a maximal set is equivalent to S, or to provide a counterexample. 67.158.43.41 (talk) 02:39, 30 October 2010 (UTC)[reply]
I think Martin's Axiom can extend the countable case to sizes less than continuum. It seems like it should fail at size continuum, but I'm having trouble showing that. My first thought: let your language contain a unary predicate for every real , and have the axioms for every . This doesn't seem to have an independent axiomatization, but I can't show that.--203.97.79.114 (talk) 20:39, 28 October 2010 (UTC)[reply]
Note that the OP asked about propositional logic, so you can't use any predicates. It probably does not make much of a difference, though.—Emil J. 10:30, 29 October 2010 (UTC)[reply]
Whoops. In that case, the natural analog would be what you have below.--203.97.79.114 (talk) 10:42, 29 October 2010 (UTC)[reply]
This is a nontrivial question. I was actually thinking about the problem some time ago, but IIRC I did not reach any conclusion. It seems that it should not hold in general for uncountable sets of formulas. The specific counterexample I had in mind is to take an uncountable linear order L and put using propositional variables . However, one can actually show that this has an independent axiomatization for quite a large class of linear orders, so if it works at all, extra conditions are needed. I think that it should be enough to make L an -saturated dense order.—Emil J. 10:30, 29 October 2010 (UTC)[reply]
Does that have an independent axiomatization for L=R? Algebraist 10:44, 29 October 2010 (UTC)[reply]
That I don't know. The class I mentioned consists of total orders L which contain |L| (nondegenerate) countable subintervals. One can check that this includes all scattered orders, and more generally, scattered sums of countable orders. In contrast, all intervals in the reals are uncountable, hence it is perfectly possible that the corresponding theory has no independent axiomatization, but on the other hand, the reals are generally quite well-behaved, so I would be rather cautious with making this a conjecture.—Emil J. 12:06, 29 October 2010 (UTC)[reply]
Incidentally you're quite right about Imre, I am currently taking his course, he is a fantastic lecturer - unfortunately he always puts an 'extra hard' question at the end of the sheet. I didn't have time to go through it in my supervision on this work, so I'm just doing it now because I'm a masochist ;-) I'm told the next worksheet's final question took a group of around 10 Cambridge professors a number of hours to solve, so I always have that to look forward to! Now I'm going to try and read through everything you've all written and get my head around it, thankyou everyone for the stimulating discussion thus far! 131.111.16.20 (talk) 13:33, 29 October 2010 (UTC)[reply]

Integral calculus

Given and , find . What is the link between the first two identities and the question ? ?--115.178.29.142 (talk) 01:05, 28 October 2010 (UTC)[reply]

Hint: How is the numerator of the second identity related to its denominator? More hint; you should try yourself before reading past this word: It's the derivative. 67.158.43.41 (talk) 03:15, 28 October 2010 (UTC)[reply]
Unbelievable that I didn't notice that myself. Thanks. 115.178.29.142 (talk) 03:34, 28 October 2010 (UTC)[reply]

Tough probability question

If four cards are chosen at random from a standard deck (with all jokers removed, and face cards assigned value as follows: Jack = 11, Queen = 12, King = 13), what is the probability that using the elementary functions of addition, subtraction, multiplication, and division, they cannot be made to equal 24? In other words, for four cards with values a, b, c, and d (not necessarily distinct), what is the probability that no choice of elementary functions ₪, ♥, ♯ (also not necessarily distinct) and no arrangement of the cards makes a ₪ b ♥ c ♯ d equal 24? You cannot use the same card more than once, for example you can't say a ♦ a ₪ b ♥ c ♯ d, and you must use all the cards (so a ₪ b ♥ c does not work either). Thanks. 24.92.78.167 (talk) 02:26, 28 October 2010 (UTC)[reply]

How are parentheses handled? Aces are 1, I hope? A standard deck has 4 copies of each card, so there are 13^4 = 28561 possible distinct ordered four-card hands. There are four functions you listed. If parentheses are ignored and evaluation is assumed to be (((a ₪ b) ♥ c) ♯ d) there are only 4^3 = 64 choices for each (₪, ♥, ♯), leading to just 64*28561 = 1827904 possible combinations. If parentheses can be supplied arbitrarily, there are a few more possibilities:
((a ₪ b) ♥ c) ♯ d
(a ₪ b) ♥ (c ♯ d)
(a ₪ (b ♥ c)) ♯ d
a ₪ ((b ♥ c) ♯ d)
a ₪ (b ♥ (c ♯ d))
leading to 28561 * 64 * 5 = 9139520 possibilities to compute. These numbers are quite small for computers, so you could write a script to simply compute the answer. I'd be delighted to hear an analytic solution, but I suspect it's a horrifically complicated case-based analysis (or nearly equivalent to what I just wrote). There are a few optimizations. First, multiplication and addition are associative, so many of the extra parenthesizations are actually equivalent. Similarly multiplication and addition are commutative, so many hand rearrangements are equivalent. You could actually cut the run time by an ~eighth by running those (₪, ♥, ♯) made of only multiplication and addition on the set of unordered distinct four-card hands, which is of size (13 multichoose 4) = (13+4-1 choose 4) = 1820. Some combinations can be eliminated by inspection as well; for instance, 4 aces can't possibly get large enough with the operations you've given, and neither can anything with all 2's or under. You also can stop evaluation of a given (a, b, c, d) when you've found a combination that totals 24. I doubt these optimizations are worthwhile in your case, though, considering a script should be able to brute force the problem in a few seconds. 67.158.43.41 (talk) 03:43, 28 October 2010 (UTC)[reply]
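A brute-force script along these lines might look like the following Python sketch. Combining the remaining numbers two at a time, in every order, covers all five bracketings listed above; exact `Fraction` arithmetic avoids floating-point trouble with division, and each rank multiset is weighted by its number of suit choices out of C(52, 4) hands:

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import comb

def can_make_24(values):
    """Can the four values reach exactly 24 using +, -, *, / and any
    parenthesisation?  Folding numbers pairwise in every order covers
    all bracketings."""
    def solve(ns):
        if len(ns) == 1:
            return ns[0] == 24
        for i in range(len(ns)):
            for j in range(len(ns)):
                if i == j:
                    continue
                rest = [ns[k] for k in range(len(ns)) if k not in (i, j)]
                a, b = ns[i], ns[j]
                results = [a + b, a - b, a * b]
                if b != 0:
                    results.append(a / b)
                if any(solve(rest + [r]) for r in results):
                    return True
        return False
    return solve([Fraction(v) for v in values])

def prob_unmakeable():
    """Probability that a random 4-card hand (ranks 1..13, four suits
    of each) cannot be made into 24 -- brute force, no closed form."""
    bad = 0
    for hand in combinations_with_replacement(range(1, 14), 4):
        weight = 1
        for r in set(hand):
            weight *= comb(4, hand.count(r))  # suit choices for this rank
        if not can_make_24(hand):
            bad += weight
    return Fraction(bad, comb(52, 4))
```

Note that `prob_unmakeable()` enumerates all 1,820 rank multisets and runs the solver on each, so it takes a little while with exact arithmetic; `can_make_24` itself is instant.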
If anyone else is curious, the OP's question is related to the popular mathematical game 24 Game in which you try to reach 24 given 4 numbers and the permissible mathematical operations above. Zunaid 16:22, 29 October 2010 (UTC)[reply]

Pairs of antihomologous points lie on a circle

Could someone please review this section of the Homothetic center article? I believe it to be wrong and there is a discussion on the article's talk page, but would like an expert opinion. If confirmed incorrect, is anything from that section salvageable? Thanks. -- Tom N (tcncv) talk/contrib 04:30, 28 October 2010 (UTC)[reply]

Rather wrong proof; correct end result, I think. JoergenB (talk) 21:36, 29 October 2010 (UTC)[reply]

Fox calculus on [F', F']

I'm looking at words w in the free group, F, of rank n, with basis xi, whose images in the abelianisation of F := Fo are trivial. I've modified the domains of the Fox calculus derivatives to be ZFo, rather than ZF, and I'm trying to show that the only such words w which have zero (modified) Fox derivative with respect to all the basis elements of F are the members of [F ',F '], i.e. commutators of commutators. Unfortunately, I'm not having much luck.

I've tried working in the simple case where we can write , where w only contains xi (and hence xi-1) once. Then the derivative of w with respect to xi is which equals zero if and only if the image of is trivial in the abelianisation, which forces the image of to also be trivial. I'm tempted to try some sort of induction argument, but even then, I'm not sure how to get a foothold in the general case, since the Fox derivative is so heavily dependent upon where the xi 's appear in the word. Any help would be great. (I've considered that the statement I'm trying to prove isn't actually true, but I'm using it to try to flesh out the proof from a textbook, and I believe it boils down to the above situation). Icthyos (talk) 11:40, 28 October 2010 (UTC)[reply]

Differential equation

Hi, can anyone see whether there is any closed-form solution to the following?

d^2y/dx^2 = k*y*sqrt(1 + (dy/dx)^2)

k is an arbitrary constant. —Preceding unsigned comment added by 86.184.26.179 (talk) 13:50, 28 October 2010 (UTC)[reply]

Just formatting for legibility: d²y/dx² = k·y·√(1 + (dy/dx)²). msh210 17:02, 28 October 2010 (UTC)
It's a long time since I did this sort of thing, but I remember being taught to use the substitution P = dy/dx, which gives d²y/dx² = P dP/dy, after which you have a differential equation in P and y only, and can separate the variables. AndrewWTaylor (talk) 17:11, 28 October 2010 (UTC)[reply]
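Working the reduction-of-order substitution through (my own reconstruction of the steps, assuming P = dy/dx as usual):

```latex
% Substituting P = dy/dx, so that d^2y/dx^2 = P\,dP/dy:
P\,\frac{dP}{dy} = k\,y\,\sqrt{1+P^{2}}
\;\Longrightarrow\;
\int \frac{P\,dP}{\sqrt{1+P^{2}}} = \int k\,y\,dy
\;\Longrightarrow\;
\sqrt{1+P^{2}} = \tfrac{1}{2}\,k\,y^{2} + C.
% Solving for P = dy/dx then gives
\frac{dy}{dx} = \pm\sqrt{\left(\tfrac{1}{2}\,k\,y^{2}+C\right)^{2}-1},
% which separates once more, but the remaining integral in y
% is not elementary in general.
```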
Thank you. I end up with an integral that doesn't seem to be solvable in terms of "ordinary" functions, but I guess that's as good as it gets... —Preceding unsigned comment added by 86.173.172.30 (talk) 19:13, 28 October 2010 (UTC)[reply]
Maybe try making some trig substitutions ... If I have it right, you now have , and putting gives ... Like Andrew, though, I'm very rusty on this stuff. --81.153.109.200 (talk) 20:02, 28 October 2010 (UTC)[reply]

certainly does not give . The identity you are thinking of is

Dammit, you're completely right. I was thinking of cosh and sinh ... --81.153.109.200 (talk) 20:46, 28 October 2010 (UTC)[reply]
Hint - the integral with P should be easy, even without a substitution. AndrewWTaylor (talk) 22:35, 28 October 2010 (UTC)[reply]
Sure, that one's easy; it's at the next stage that things get sticky. 86.173.172.30 (talk) 23:00, 28 October 2010 (UTC)[reply]
Allow me to introduce you to my new best friend, who unfortunately this time has come up with a solution that I can barely make sense of. On the other hand copying and pasting your question into Google exactly as written led me to this page. Answer in third post, however some initial conditions are assumed so it's not the most general solution. Zunaid 16:17, 29 October 2010 (UTC)[reply]

property of primes

if you picked a random prime of a certain number of large digits, say 1000, and told someone that you had done so, would the person know anything about the last n-1 of n digits (in the following example, the last 4 of 5 digits), or would it be random? For example, would the prime end in [...]0000x through [...]9999x with equal distribution? Or, could they know that, for example, ending in 9999x was far more likely than ending in 0000x? Thank you. 84.153.204.215 (talk) 17:24, 28 October 2010 (UTC)[reply]

sorry if it was unclear. My question can also be phrased like this. "Is it a good random number generator to find a prime of a thousand digits, and the first such prime you find, use its last n digits, discarding the last one first? For example, if you need a random number 0-10, you would use the second-to-last digit, and if you need a random number 0-100, you would use the third-to-last and second-to-last digits (so if the prime ends in ...563, then you get '56' as your random number). "84.153.204.215 (talk) 17:26, 28 October 2010 (UTC)[reply]
That would depend a lot on how you are generating your prime. If "the first such prime you find" is the same one every time, then it won't be random at all. If you have some really good way of generating the primes randomly, you can probably just generate random numbers directly anyway. Staecker (talk) 21:29, 28 October 2010 (UTC)[reply]
I'm guessing that the OP might be more interested in the distribution of digits in prime numbers than practical methods of generating random numbers? —Preceding unsigned comment added by 86.173.172.30 (talk) 21:40, 28 October 2010 (UTC)[reply]
How do you pick a 1000-digit prime? Doesn't it take a lot of work to figure out whether a specified 1000-digit number is prime? If you can already pick a 1000-digit number at random, then, given what you seem to want, why not stop there instead of worrying about whether it's prime or not? Michael Hardy (talk) 22:50, 28 October 2010 (UTC)[reply]
Asymptotically when n tends to infinity, Dirichlet's theorem on arithmetic progressions#Distribution says the digits have the same frequency. But for a fixed n, consider that the primes thin out on average per the prime number theorem. So 1000-digit primes starting with 1 are expected to be a little more common than those starting with 9. Starting with 10 should be slightly more common than starting with 19, and so on. And since starting with 20 should be more common than 29 and so on until 90 and 99, we also expect a second digit of 0 to be more common in total than a second digit of 9. If you ignore the first few digits then the difference becomes negligible for most purposes. But as others have pointed out, if you have a method to select a large random prime then you should be able to select a large random number without wasting time on prime generation. And if your prime isn't random then things depend on how the prime was selected. By the way, there are approximately 3.9×10^996 1000-digit primes. PrimeHunter (talk) 23:52, 28 October 2010 (UTC)[reply]
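As an empirical sanity check of the above (an illustration, not a proof), a short sieve makes the point about trailing digits: primes above 5 end only in 1, 3, 7 or 9, while the second-to-last digit is very nearly uniform over 0-9:

```python
from collections import Counter

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, n + 1) if sieve[i]]

def digit_distribution(limit=10**6):
    """Tally the last and second-to-last decimal digits of the primes
    between 100 and `limit`."""
    ps = [p for p in primes_up_to(limit) if p > 100]
    last = Counter(p % 10 for p in ps)
    second = Counter((p // 10) % 10 for p in ps)
    return last, second
```

This matches Dirichlet's theorem: among primes up to a bound, each residue class coprime to 10 (or to 100) gets asymptotically the same share.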

October 29

binomial identity

Resolved

I have been reading from the book of Concrete Mathematics and have a small doubt about a binomial identity on Pg 199. Having shown the identity , the book claims that multiplication by yields the identity . I am confused as to why in the latter identity the sum does not run over , as I feel it should after multiplying and reindexing n+k as k. Or is this a misprint?-Shahab (talk) 03:44, 29 October 2010 (UTC)[reply]

Well, and are both correct in this case. If , then . —Bkell (talk) 03:51, 29 October 2010 (UTC)[reply]
The expression can be written simply thus avoiding the problem. Bo Jacoby (talk) 07:38, 29 October 2010 (UTC).[reply]
Okay. Thanks-Shahab (talk) 09:00, 29 October 2010 (UTC)[reply]

chance to win lotto in 70 years

If you play LOTTO every day for 70 years, what would be the probability of winning at least one time? Assume the lotto holds a draw every day. —Preceding unsigned comment added by 99.16.144.232 (talk) 12:43, 29 October 2010 (UTC)[reply]

That depends on the workings of whatever thing it is you're calling "LOTTO". Algebraist 12:47, 29 October 2010 (UTC)[reply]
If you are referring to any of the standard lotteries with the odds of winning being about 1 in 50 trillion, purchasing a lottery ticket every day does not increase your odds of winning the lottery. The odds that you will receive a winning ticket as a gift or simply find a winning ticket on the ground are just as significant as the odds that you may purchase a winning ticket at some point in the future. -- kainaw 12:52, 29 October 2010 (UTC)[reply]
50 trillion?! That sounds off by five to six orders of magnitude. A rule of thumb that I use (primarily based on observation of US state lotteries) is that the state keeps half of the proceeds and pays out the remainder in prizes. Half of the prize money goes into the jackpot pool and the remainder is paid off each week in smaller prizes for partial matches. The jackpot prize is typically quoted as the 20-30 year total annuity payment of about twice the actual value. So a 50 million:1 lotto will often see jackpots in the order of 12.5 million (times the cost of a ticket -- typically $1 in the US), described as a 25 million annuity. A 50 trillion:1 lotto would not be popular as the public would want to see on the order of a couple of winners a year to feel as if they have a chance, and would be unlikely to participate in a game with an ever growing but never paid out jackpot. -- 119.31.126.67 (talk) 22:05, 29 October 2010 (UTC)[reply]
The lottery is called LOTTO in the UK. You pick six numbers from a total of 49, then they pick six numbers and a bonus ball from a total of 49. According to this section, the odds of winning are 13,983,815 to 1. So the probability of winning on a fixed day is 1/13,983,815 ≈ 7.15×10^-8.
Ignoring leap years, then in 70 years there are 25,550 draws. The probability of winning in those 70 years is 25,550 × (1/13,983,815) ≈ 0.0018.
To get the probability up to 1, you'd need to play about 14 million times. That would take you around 38,356 years. I think this is right, but I haven't studied probability since high school. Fly by Night (talk) 13:45, 29 October 2010 (UTC)[reply]
That's the expected number of wins, not the probability of at least one win. Did you really think you can guarantee a lottery win by buying one ticket in enough draws? Also, there are many things in the world called "lotto", and the OP geolocates to Texas. Algebraist 13:55, 29 October 2010 (UTC)[reply]
Well why don't you enlighten us with the correct answer instead of belittling mine? Fly by Night (talk) 14:04, 29 October 2010 (UTC)[reply]
Because I don't know what lotto we're talking about yet. Algebraist 14:07, 29 October 2010 (UTC)[reply]
Texas_Lottery#Lotto_Texas Fly by Night (talk) 14:15, 29 October 2010 (UTC)[reply]
Anyway, if p is the probability of winning in a single draw, then the probability of winning at least once in n independent draws is 1 − (1 − p)n. Since here p is probably much smaller than 1/n, the estimate np is nearly accurate.—Emil J. 14:26, 29 October 2010 (UTC)[reply]

I guess that this is because p is so small? Assuming that 0 ≤ p ≤ 1 and that n is a positive integer, we have 1 – (1 – p)n = np if and only if p = 0. For very small p, the two are almost equal. In fact, the correct answer to the OP's question was (ignoring leap years) given by EmilJ's answer but using p = 13,983,815–1 and n = 25,550; i.e. 1 – (1 – p)n ≈ 0.001825. If we use my incorrect school boy method then we get np ≈ 0.001827. Although, away from very small p, my school-boy answer becomes really wrong. If p = 1/2 and n = 10 then the real answer is almost 1, but my school-boy answer gives me 5; meaning I would have won every other time. That's why I didn't continue my education in probability: I'm rubbish at it! Fly by Night (talk) 19:39, 29 October 2010 (UTC)[reply]
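Plugging the thread's numbers into both formulas shows just how close the two estimates are at this scale (a sketch using the UK 1-in-13,983,815 odds and one draw a day for 70 years):

```python
p = 1 / 13_983_815        # single-draw jackpot probability
n = 25_550                # daily draws over 70 years, ignoring leap days

exact = 1 - (1 - p) ** n  # probability of at least one win
approx = n * p            # the linear "expected wins" estimate

# For p much smaller than 1/n the two agree to about three
# significant figures, but the linear estimate always overshoots.
assert approx > exact
```

With p = 1/2 and n = 10, by contrast, `exact` is about 0.999 while `approx` is 5, which is exactly the failure mode described above.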

When your linear algorithm tells you that the expected value is 1, then the true probability of winning (at least) once is ("almost exactly") (e-1)/e. You more or less already deduced that result. When the expected value is 5 (which actually means that the expected average number of wins is 5), then your chance of not winning even once is
(1 − p)^n ≈ e^(−5) ≈ 0.0067, more or less.
"More or less" and "almost exactly" here indicates that in practice there are rather small differences between the numbers e and (1 + 1/n)^n, when the integer n is as large as either of the ones discussed here. JoergenB (talk) 22:14, 29 October 2010 (UTC)[reply]
It wasn't really explicitly said, but you'd expect to lose money with any lottery strategy, for otherwise the organizers wouldn't make money. Of course, if you value the emotional experience of playing the lottery, you might actually come out ahead on total value by purchasing lottery tickets. (Some lotteries probably purposefully lose money--e.g. a charity raffling off a car that was itself donated--but with for-profit lotteries, the above applies.) 67.158.43.41 (talk) 03:01, 30 October 2010 (UTC)[reply]

Lotto is a special tax for poor mathematicians. Bo Jacoby (talk) 19:58, 30 October 2010 (UTC).[reply]

Endomorphisms of finite cyclic groups

I want a proof relating to the kernel of endomorphisms of finite cyclic groups. --84.61.153.119 (talk) 14:42, 29 October 2010 (UTC)[reply]

Here's an easy proof: Suppose some element maps to 0. Then maps to 0. But as has prime order [this was an assumption in the statement of the proposition, which wasn't sought by the OP], , so the map is the zero map.—msh210 15:58, 29 October 2010 (UTC)[reply]

Normal subgroups of prime index

Let G be a group, N be a normal subgroup of G with index p prime, and x an element of N. What is the relation between ℂ_N(x) and ℂ_G(x)? --84.61.153.119 (talk) 17:52, 29 October 2010 (UTC)[reply]

How to prove that exactly one of the following two cases hold:

(a) ℂ_N(x) is a normal subgroup of ℂ_G(x) with index p, and x^G = x^N;

(b) ℂ_N(x) = ℂ_G(x), and x^G = x_1^N ∪ … ∪ x_p^N with certain elements x_1, …, x_p of N, such that ℂ_N(x_i) = ℂ_G(x_i) ≅ ℂ_G(x) for all 1 ≤ i ≤ p? --84.61.153.119 (talk) 19:15, 29 October 2010 (UTC)[reply]

Could you give some context? Why do you need to know all of these things? What were you doing to make you arrive at needing to know such things? These seem quite isolated problems, and a little background and motivation would help the reference desk find your answer. Fly by Night (talk) 20:03, 29 October 2010 (UTC)[reply]

For any subset S of G and any element x in G, x^S = {s⁻¹xs | s ∈ S}. --84.61.153.119 (talk) 07:58, 30 October 2010 (UTC)[reply]

For any group G and any element x in G, ℂ_G(x) = {g ∈ G | gx = xg}. --84.61.153.119 (talk) 14:58, 30 October 2010 (UTC)[reply]

getting the first four of the LAST 5 digits? 1234 from 12349

Is there anything better than:

TheNumber = (x mod 10000) * 1000 + (x mod 1000) * 100 + (x mod 100) * 10 + (x mod 10)

And if there isn't, do I need the parentheses, or does order of operations take care of it? 84.153.201.148 (talk) 20:18, 29 October 2010 (UTC)[reply]

What about the floor function: floor(12349/10) = 1234? Fly by Night (talk) 20:24, 29 October 2010 (UTC)[reply]
Thanks! Forgot about that... Actually I meant that it's a much bigger number though. So, should I write: x mod 10,000 to get the first four digits, then use floor? Like this:

floor( (x mod 10,000) / 10)

In this case is there a better, more elegant way, and do I need the parentheses? 84.153.201.148 (talk) 20:32, 29 October 2010 (UTC)[reply]

If you have a positive integer n with d digits, and you want to know the first ƒ digits, then you need to know floor(n/10^(d−ƒ)). For example, if n = 123,456,789 (so that d = 9) and you want to know the first five digits (so ƒ = 5), then you have floor(123456789/10^(9−5)) = floor(12345.6789) = 12345. Fly by Night (talk) 20:43, 29 October 2010 (UTC)[reply]

Please note: I have just undone this change. Please don't change the question after people have started to answer. Just say that you asked the wrong question and rephrase what you wanted to ask. Fly by Night (talk) 20:45, 29 October 2010 (UTC)[reply]

OP here, IP change. I already said "of the LAST five digits", so it was not a real change. I just made it caps just now, so I hope this doesn't irk you. Very clearly, this is what I want: the first four of the last five digits. 92.230.71.220 (talk) 21:50, 29 October 2010 (UTC)[reply]
Divide by an appropriate power of 10 and then take the floor function. That will give you an appropriate decimal digit... Fly by Night (talk) 22:29, 29 October 2010 (UTC)[reply]
By "last" I think the OP means "least significant", in which case their mod solution seems quite good. 67.158.43.41 (talk) 02:48, 30 October 2010 (UTC)[reply]
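Reading "last" as "least significant", the mod and floor suggestions combine into a one-liner: keep the last five digits with mod 100000, then drop the final digit with an integer division. A sketch (function name is illustrative):

```python
# First four of the last five (least-significant) digits:
# x mod 100000 keeps the last five digits; integer-dividing by 10
# then discards the final one.
def first_four_of_last_five(x: int) -> int:
    return (x % 100_000) // 10

print(first_four_of_last_five(12349))       # 1234
print(first_four_of_last_five(9876512349))  # 1234, regardless of size
```

The parentheses around the mod are needed in most languages, since `/` and `%` usually share precedence and associate left to right, so `x % 100000 / 10` happens to work but is easy to misread.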

what guarantee is there that numbers aren't very easy to factor (o(n))

what guarantee is there that large composite numbers aren't ridiculously easy to factor, just needing someone to think up and code a breakthrough algorithm that afterward anyone can use? 92.230.71.220 (talk) 22:20, 29 October 2010 (UTC)[reply]

There is no guarantee. All we know is that thousands (if not millions) of people have tried to figure out how to factor large numbers quickly. So far, nobody has figured out a way to do it quickly enough to make applications dependent on difficulty of factoring vulnerable. -- kainaw 03:01, 30 October 2010 (UTC)[reply]
A few more details can be found at the relevant section of the integer factorization article. Shor's algorithm shows that factorization can be done quickly on quantum computers (though these are incredibly delicate machines and progress has been slow on making large enough ones to be harmful to RSA--I hope). 67.158.43.41 (talk) 04:28, 30 October 2010 (UTC)[reply]

Some integers such as 10^25 or 2^64 are ridiculously easy to factor, so there is obviously no guarantee that they aren't ridiculously easy to factor. Bo Jacoby (talk) 18:09, 30 October 2010 (UTC).[reply]

There are lots of algorithms for checking the non-primality of a number. For example, I can ask Maple if 10^100 + 1 is prime and it tells me within a fraction of a second that it isn't prime. But when I ask it to factorize the number it just hangs. Even though the number before it, 10^100, could be factorized by my dog! It's already factorized. Okay, slight exaggeration; I don't have any pets. Fly by Night (talk) 18:51, 30 October 2010 (UTC)[reply]
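The asymmetry in the comment above can be sketched directly: compositeness can be certified without exhibiting a factor. This example uses the same number, 10^100 + 1, which happens to have a small factor (10^4 + 1 divides it, since 100/4 = 25 is odd), but a Fermat test to base 2 also flags it as composite without producing any factor:

```python
# Compositeness can be certified without finding a factor.
# Here n happens to have the small factor 10**4 + 1 = 10001 = 73 * 137,
# but the Fermat test on the last line knows nothing about factors.
n = 10**100 + 1

print(n % (10**4 + 1) == 0)    # True: 10001 divides n, so n is composite
print(pow(2, n - 1, n) != 1)   # Fermat test to base 2; True means "composite"
```

Modular exponentiation makes the Fermat test run in milliseconds even for 100-digit numbers, which is why "is it prime?" is so much easier in practice than "what are its factors?".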

Conjecture

I have been reading about conjectures and proofs on Wikipedia, but I am wondering if there are any criteria for a hypothesis to be formally called a conjecture. For example, if I noted that no prime number ever identified contained both the digits 5 and 7 (which I am sure is not the case, but let's say that it is), would that be sufficient for me to claim that "no prime number contains the digits 5 and 7" was a conjecture? Is that too trivial, or poorly defined, or is there some other criterion it would fall foul of? Any clarification would be appreciated. Thanks CrispinWhittaker (talk) 23:05, 29 October 2010 (UTC)[reply]

You might like to compare and contrast the articles conjecture and hypothesis. The former says that "...a conjecture is a proposition that is unproven but appears correct and has not been disproven." while the latter says that a hypothesis is "...a proposed explanation for an observable phenomenon." For example, I might conjecture that all churches contain crosses, while you might hypothesise that all churches contain crosses because they are Christian places of worship. In short: a conjecture says that something is true, even though it cannot be proven. A hypothesis says that something is the way it is because of something or another. The problem with hypotheses is that they may be based upon false assumptions: there may be churches without crosses. Fly by Night (talk) 23:15, 29 October 2010 (UTC)[reply]
Ah yes, OK, so I have misused (quite badly) the term hypothesis. What I really meant, I suppose, is a proposition. So, are there any criteria that must be met for a proposition (e.g. my "no prime number has a 5 and a 7" proposition), to be formally considered (by the "mathematics community") as a conjecture? Thanks CrispinWhittaker (talk) 23:24, 29 October 2010 (UTC)[reply]
In this case, it comes down to provability. A conjecture is, as the article says, "...a proposition that is unproven but appears correct and has not been disproven." Fly by Night (talk) 23:49, 29 October 2010 (UTC)[reply]
In mathematical writing, the word "hypothesis" is used differently from the scientific usage discussed above. Hypothesis is almost always used in the sense of Antecedent (logic). Rarely "hypothesis" is used as a synonym of "conjecture", as in Riemann hypothesis, but the terms would be regarded as interchangeable- calling the RH a hypothesis instead of a conjecture is just a convention. The term "proposition" is typically not used for conjectures, but for proved statements (like "theorem" or "lemma"), or more generally for any logical statement (as described at Proposition) whether true or false or provable or not. Staecker (talk) 12:31, 30 October 2010 (UTC)[reply]
Also, in mathematics, there is no formal criterion (and thus there are no criteria) for when a statement may be called a conjecture. I think that the closest you can get is "a statement whose truth value was not known, but someone guessed probably should be true". There are some fairly clear cases, where a prominent mathematician has said or written "I conjecture that...", or words to that effect. There are other cases, which are not at all as clear. (S)he may have written something like "It would be interesting to know whether this property holds in general". In such cases, the statement "This property holds in general" sometimes is referred to as 'N.N.'s conjecture'; but this is not quite proper - and, in particular, if a case later is found where the property doesn't hold, this should most politely be referred to as "a negative answer to the question by N.N.".
In either case, the term "disproof" instead of "counterexample" or "negative answer" is unnecessary, and may be misleading, since some readers may get the impression that someone found an error in what another person has claimed to be proven. JoergenB (talk) 15:20, 30 October 2010 (UTC)[reply]
I would say that the article's definition is spot on: "...a proposition that is unproven but appears correct and has not been disproven." Fly by Night (talk) 16:49, 30 October 2010 (UTC)[reply]
I don't like that formulation at all; it could contribute to the confusion between proof (mathematics) and proof (otherwise) which is fairly common.
Have you ever seen any of those letters from "trisectors", which universities often are plagued with, and which may start e.g. thus:
"Well, I know that the mathematicians have proved that it impossible to trisect an angle; but did they think of the following construction?"
A "proposition" in mathematics is essentially the same as a "theorem". A statement "appears correct" (i.e., appears to be a proposition), if there seems to be a proof for it; but possibly very complex, whence we have not yet checked the details. It is definitely "disproved", if we find an error in that suggested proof.
In my (less humble than usual) opinion, mathematics is not an empirical science. There is a really important difference between a "conjecture" in mathematics, and a "conjecture" in an empirical science. This is not reflected very well by the present summary in Conjecture. A quick look in the history shows that (at least for mathematics), the formulation was a good bit better six years ago:
In mathematics, a conjecture is a mathematical statement which has been proposed as a true statement, but which no one has yet been able to prove or disprove. (I should prefer "refute" to "disprove", here, but otherwise it seems fairly OK.) JoergenB (talk) 19:27, 30 October 2010 (UTC)[reply]
Your original formulation was "a statement whose truth value was not known, but someone guessed probably should be true". Your new formulation is "a mathematical statement which has been proposed as a true statement, but which no one has yet been able to prove or disprove." How do those differ from "..a proposition that is unproven but appears correct and has not been disproven"? This is exactly what your new formulation says:
  • "a mathematical statement which has been proposed as a true statement" ~ "a proposition"
  • "no one has yet been able to prove or disprove" ~ "unproven... has not been disproven"
Fly by Night (talk) 19:45, 30 October 2010 (UTC)[reply]
I start to worry that this goes beyond the Reference desk issue; in which case we might continue elsewhere.
My "new" formulation is a quotation from a six year old one. I explained why I do not consider that as similar to the statement you provided (quoting our article conjecture):
  • a proposition that is unproven but appears correct and has not been disproven.
As you can see, I wrote some minutes ago: A "proposition" in mathematics is essentially the same as a "theorem". Let me amplify: In modern mathematical texts, the terms proposition, theorem, lemma, and corollary are used for statements which are proved, either before or after the actual statement; either by the author(s), or by others (in an earlier article, book, or other context). There is some distinction between the usage of these terms; e.g., a corollary ordinarily follows after a statement with one of the other three labels, and its proof's main ingredient often is a direct application of that preceding statement.
Now, you seem to disagree. Do you disagree; and, if so, on what grounds? Do you think that my description of ordinary modern mathematical usage of the terms is wrong; or do you concede that the terms are used in that manner, but hold that using "proposition" as "proven statement of intermediate independent interest" is an abuse of the term? I can concede that we often use these four terms also for the statements themselves, in contexts where we discuss the proofs; we may start a proof with "Proof of the main theorem", or end with "Thus, the lemma is proved". However, we do not call this a theorem, proposition, lemma or corollary any longer, if the "proof" was shown to be defective. Actually, we may express this by saying something like
No, this is not a proposition; look here, that's a clear error (in the proposed proof).
Do you use these terms radically different? JoergenB (talk) 20:41, 30 October 2010 (UTC)[reply]

I'm sorry, you're straying from the point here. You said that a conjecture was "a mathematical statement which has been proposed as a true statement, but which no one has yet been able to prove or disprove." The article says that a conjecture is "...a proposition that is unproven but appears correct and has not been disproven." I say those two statements are the same. That was the only point I made, I wasn't questioning anything else. Fly by Night (talk) 20:53, 30 October 2010 (UTC)[reply]

Well, I wrote, twice now, that in modern mathematics the term "proposition" is reserved for proven statements. Isn't that to the point? (You may well disagree factually, but I don't think you reasonably can claim that if I am right, a conjecture still could be called a proposition.) JoergenB (talk) 21:05, 30 October 2010 (UTC)[reply]
No, the word proposition is not reserved for proven statements. A proposition is simply something that is either true or false.
Now, there is a secondary meaning of proposition, usually not used stand-alone but rather in the form "Proposition 3.7" or whatever, that means "theorem that doesn't really seem worthy of the name theorem or maybe even lemma, or maybe I'm not calling it a "lemma" because it's of independent interest and not just a stepping stone". But that's not usually what proposition means except in the form "Proposition 3.7". --Trovatore (talk) 21:13, 30 October 2010 (UTC)[reply]
(e/c) I think we're having two totally different discussions here. Let's just leave it at that, shall we? Fly by Night (talk) 21:16, 30 October 2010 (UTC)[reply]
Also note that whether a proposition is a conjecture is relative to the person who makes it. It sometimes happens that one makes a conjecture about something, not knowing that it has already been proven, or disproven. Of course, a conjecture becomes famous when everybody would like to decide it, but nobody is able to for a while. --pma 21:47, 30 October 2010 (UTC)[reply]
Fly by Night, I really think this is one discussion; but with different aspects. If you agree with Trovatore, you may use "proposition" for a conjecture; if you agree with me, you can't.
When I was young, several books mentioned "Fermat's Last Theorem". This was later deemed improper: since it is not believed that Fermat really had a correct proof for all cases, it should have been called "Fermat's last conjecture".
With that, I agree that this leads too far, now and in this place. We might discuss it later at e.g. Talk:Conjecture, or at Talk:Proposition. JoergenB (talk) 22:05, 30 October 2010 (UTC)[reply]
There is really no possible argument about what proposition means. 2+2=5 is absolutely a proposition; this is indisputable. --Trovatore (talk) 23:40, 30 October 2010 (UTC)[reply]

October 30

Divisor of a Mersenne prime

Resolved

Given a Mersenne prime M_q = 2^q − 1, empirically I've tested that M_q | 2^(M_q) − 2 for all 47 known Mersenne primes. Is this a general property? The converse isn't true--that is, if n | 2^n − 2, n isn't necessarily a Mersenne prime. The first counterexample is n = 341: 2^341 − 2 is divisible by 11 and 31, so by 11*31 = 341. I haven't been able to make much progress and it seems like this would have been studied before. 67.158.43.41 (talk) 05:13, 30 October 2010 (UTC)[reply]

2^q ≡ 1 (mod M_q). Therefore, if q is odd, then M_q | 2^n − 1 if q | n. But Fermat's little theorem says that if q is an odd prime then q | 2^(q−1) − 1, and hence q | M_q − 1 = 2(2^(q−1) − 1). This explains why M_q | 2^(M_q − 1) − 1 for all Mersenne primes (note that q must be prime if M_q is a Mersenne prime). However, as you observe, the converse is not true - there are composite numbers called pseudoprimes to base 2 (also known as Poulet numbers) which also satisfy n | 2^(n−1) − 1, and hence n | 2^n − 2. The smallest Poulet numbers are 341, 561 and 645. Gandalf61 (talk) 09:28, 30 October 2010 (UTC)[reply]
Ah of course, how simple. Thanks for the name for Poulet numbers! 67.158.43.41 (talk) 11:18, 30 October 2010 (UTC)[reply]
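The property and its failure for Poulet numbers are easy to check numerically; this sketch tests 2^n ≡ 2 (mod n) for the first few Mersenne primes and for the three Poulet numbers named above:

```python
# Check 2**n ≡ 2 (mod n), i.e. n | 2**n - 2, for some Mersenne primes
# and for the first base-2 pseudoprimes (Poulet numbers) 341, 561, 645.
mersenne_primes = [2**q - 1 for q in (2, 3, 5, 7, 13, 17, 19, 31)]
poulet_numbers = [341, 561, 645]

for n in mersenne_primes + poulet_numbers:
    assert pow(2, n, n) == 2, n   # holds in every case, prime or not
print("n | 2**n - 2 for all", len(mersenne_primes) + len(poulet_numbers), "test values")
```

The three-argument `pow` does modular exponentiation, so even n = 2^31 − 1 is instant; the Poulet numbers show the test is necessary but not sufficient for being a (Mersenne) prime.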

The mathematical spontaneity of coffee stirrers

Is this http://i56.tinypic.com/2urqjpj.png the only arrangement of four straight lines that allows each line to cross all the other lines without any triple or quadruple crossing points? And if it is, how can I prove it? —Preceding unsigned comment added by 89.243.205.241 (talk) 20:51, 30 October 2010 (UTC)[reply]

I don't know but here's a math joke: Why do we stir coffee? Answer: To make it sweet! All right, wise guy, then why do we put in sugar first? So that when it dissolves we know we can stop stirring! 84.153.183.246 (talk) 21:00, 30 October 2010 (UTC)[reply]

To prove anything about this I think you need to make the question a bit more precise. What makes two arrangements the same? I think I understand what you're asking and I'm pretty sure the answer is "yes", but I'm not so sure about how to state it in a precise way. Maybe someone else can help with that. Rckrone (talk) 22:43, 30 October 2010 (UTC)[reply]
I do not have much maths education, but I'll try. First off I'll say the colours in the picture don't mean anything. Secondly, an arrangement would be the same as another one if it was a rotation, a reflection (although obviously in this case my current arrangement has reflectional symmetry) or if it just had the crossover points shifted along the lines without the order of crossover points on any of the lines actually changing. 89.243.205.241 (talk) 23:02, 30 October 2010 (UTC)[reply]
Here's a sketch of an argument: the first 3 stirrers form a triangle. The fourth stirrer (call it A) can cross each of first 3 either within the segment that forms a side of the triangle, or outside of it. If stirrer A crosses into the triangle, it has to later cross back out. So then it must cross the triangle an even number of times (which means 0 or twice). If it crosses the triangle zero times, then choose the first and last stirrer it crosses and call these B and C, with the remaining one being D. Stirrers A,B,C form their own triangle, and stirrer D crosses stirrer A between A's intersection with B and C, so D crosses the triangle formed by A,B,C and therefore must cross it twice. So the only possible arrangement is for one of the stirrers to cross the triangle formed by the other 3 exactly twice, and the remaining crossing is outside the triangle. You would then need to verify that there's really only one way for that to happen. Rckrone (talk) 23:59, 30 October 2010 (UTC)[reply]

Algorithm for next term in series

A while back, my teacher showed the class a way to construct a polynomial function that could satisfy any series (for example, for a series like -2, 4, 8, -2.3, and 0, this algorithm would produce a function f(n) such that f(1) = -2, f(2) = 4, etc). I forget how it worked, although I remember it has something to do with subtracting each term from the one next to it, and continuing this until they're all zero. 76.68.247.201 (talk) 22:06, 30 October 2010 (UTC)[reply]

That's one way of doing it. Another is given in Lagrange polynomial. Algebraist 22:22, 30 October 2010 (UTC)[reply]
What you're remembering is maybe Newton's forward difference formula. You can also do it by solving a system of linear equations; see [1], for example. —Bkell (talk) 23:52, 30 October 2010 (UTC)[reply]
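Newton's forward-difference formula matches the "subtract adjacent terms" memory: difference the sequence repeatedly, keep the leading entry of each row, and rebuild f from binomial coefficients. A sketch using the sample series from the question (function names are illustrative):

```python
# Newton's forward-difference formula: f(n) = sum_k d_k * C(n-1, k),
# where d_k is the leading entry of the k-th difference row.
from math import comb

def fit(values):
    diffs, row = [], list(values)
    while row:
        diffs.append(row[0])                     # leading forward difference
        row = [b - a for a, b in zip(row, row[1:])]  # next difference row
    return lambda n: sum(d * comb(n - 1, k) for k, d in enumerate(diffs))

f = fit([-2, 4, 8, -2.3, 0])
print([round(f(n), 9) for n in range(1, 6)])  # [-2.0, 4.0, 8.0, -2.3, 0.0]
```

The differencing "continues until they're all zero" in the sense that a degree-d polynomial has constant d-th differences, so the (d+1)-th row vanishes; here five points give a polynomial of degree at most 4.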

October 31

Pumping Lemma

Hi, I've been trying to understand the pumping lemma to no avail. I understand that it has something to do with the pigeonhole principle and that the idea is to get a repeating string x(y^i)z. I just still don't understand how to pick the pumping string, or figure out the minimum pumping length, or how to solve a proof. The Wikipedia page made some sense but it was confusing too :/
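One way to see where the x, y, z decomposition comes from is to watch a DFA repeat a state: within the first p = (number of states) steps some state must recur (pigeonhole), and the loop between the two visits is the pumpable y. This sketch uses a made-up three-state DFA, not anything from the original question:

```python
# Where the pumping decomposition comes from: run a DFA on a long word;
# by pigeonhole a state repeats within the first p steps, and the
# substring read between the two visits is the pumpable y.
def pump_split(dfa, start, word, p):
    seen, state = {start: 0}, start
    for i, ch in enumerate(word[:p], 1):
        state = dfa[state][ch]
        if state in seen:                        # pigeonhole: repeat found
            j = seen[state]
            return word[:j], word[j:i], word[i:]  # x, y, z with |xy| <= p
        seen[state] = i

dfa = {0: {'a': 1}, 1: {'a': 2}, 2: {'a': 0}}    # accepts a^(3k); p = 3 states
x, y, z = pump_split(dfa, 0, 'aaaaaa', p=3)
print(repr(x), repr(y), repr(z))                 # '' 'aaa' 'aaa'

def accepts(w):                                  # pumping y keeps us accepted
    s = 0
    for ch in w:
        s = dfa[s][ch]
    return s == 0

assert all(accepts(x + y * i + z) for i in range(5))
```

The minimum pumping length is at most the number of DFA states (often smaller), and in a typical non-regularity proof you let the adversary pick p, choose a word of length at least p, and show no decomposition with |xy| ≤ p and |y| ≥ 1 survives pumping.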