
Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 78.146.249.32 (talk) at 11:08, 17 April 2009 (→‎Is there an infinate number of different types of probability distribution?). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the mathematics section of the Wikipedia reference desk.
How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


April 10

x^y = y^x

Is it possible to express y explicitly from the equation x^y = y^x? I.e., is it possible to write y as a function of x from this equation? Rkr1991 (talk) 04:16, 10 April 2009 (UTC)[reply]

y = -x W(-ln(x)/x)/ln(x) where W is the Lambert W function. McKay (talk) 04:29, 10 April 2009 (UTC)[reply]
In LaTeX you can write it more clearly as $y = -\frac{x\,W\left(-\frac{\ln x}{x}\right)}{\ln x}$. --CiaPan (talk) 05:33, 10 April 2009 (UTC)[reply]
And there is y=x, also. Any pair x>0, y>0 satisfying x^y = y^x is of either this form or the other. --pma (talk) 08:58, 10 April 2009 (UTC)[reply]
In fact, y=x is a special case of the Lambert W solution, for appropriately chosen branches. Fredrik Johansson 23:56, 10 April 2009 (UTC)[reply]
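McKay's closed form is easy to sanity-check numerically. Below is a small sketch (my own code, not from the thread): it implements the real branch W₋₁ of the Lambert W function with Newton's method and recovers the classic nontrivial pair (2, 4).

```python
import math

def lambert_w(z, w0):
    """Solve w*exp(w) = z by Newton's method from starting guess w0."""
    w = w0
    for _ in range(100):
        e = math.exp(w)
        w_next = w - (w * e - z) / (e * (w + 1.0))
        if abs(w_next - w) < 1e-15:
            return w_next
        w = w_next
    return w

def y_from_x(x):
    """Nontrivial solution of x**y == y**x via y = -x*W(-ln(x)/x)/ln(x).
    Starting Newton below -1 selects the W_{-1} branch; the principal
    branch W_0 would just return the trivial solution y = x."""
    z = -math.log(x) / x
    w = lambert_w(z, -2.0)
    return -x * w / math.log(x)

y = y_from_x(2.0)
print(y)               # ~4.0, since 2**4 == 4**2
print(2.0**y, y**2.0)  # both ~16
```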

Nontrivial Normal Abelian Subgroups of a Solvable Group

I'm working through Dummit & Foote's book on algebra and cannot seem to figure out one of the exercises. It asks for a proof that if G is solvable and has a nontrivial normal subgroup H, then there is a nontrivial abelian subgroup A of H such that A is also a normal subgroup of G. I can demonstrate this in the case that G is finitely generated, or in the case that H has a nontrivial center; but I can't seem to show it in general. My best guess is that I'm overcomplicating things and that the solution involves some clever use of the fact that there is H' such that H/H' is abelian, together with an application of one of the isomorphism theorems, but I'm not sure. Any help would be greatly appreciated; I'd actually be happier with a decent hint :) Thanks Phoenix1177 (talk) 09:35, 10 April 2009 (UTC)[reply]

one word hint: centres. 129.67.186.200 (talk) 09:51, 10 April 2009 (UTC)[reply]

Thank you. I'm assuming it's something along the lines of: H solvable implies Z(H) is nontrivial; Z(H) char H implies Z(H) is a normal subgroup of G. I just can't seem to see why Z(H) needs to be nontrivial... though I'm sure I'll get it eventually. 66.202.66.78 (talk) 10:13, 10 April 2009 (UTC)[reply]

The symmetric groups S3 and S4 are solvable and have trivial centres, so this can't quite work. — Emil J. 10:35, 10 April 2009 (UTC)[reply]
I was thinking something might be wrong with the approach since it wasn't going anywhere fruitful. The methods of approach I can think of would be either to use the fact that if 1 < A < ... H < B < ... G is a normal series with abelian factors for G, then A is an abelian subgroup of H, but this seems a long shot; or to use induction over the minimal-length normal series with abelian factors, but there doesn't seem to be any guarantee that the series containing H will be minimal. I imagine that there is a rather obvious way of proving this, and I'm just not seeing it. Any further help would be greatly appreciated, I'm rather stuck :) 76.125.236.118 (talk) 13:09, 10 April 2009 (UTC)[reply]
What is H? If H = G, the result is easy (so in this case, given a solvable group G, you want to find a non-trivial Abelian normal subgroup of G given that G is not simple). Consider the minimal normal subgroup of G (this proof does not even require non-simplicity). Otherwise, what is H supposed to be? --PST 13:28, 10 April 2009 (UTC)[reply]
By the way, are you familiar with the result that for solvable groups, all chief factors are Abelian p-groups? --PST 13:37, 10 April 2009 (UTC)[reply]
As I understand it, H denotes the nontrivial normal subgroup of G assumed to exist in the beginning of the second sentence of the original post. That is, the problem is: given a solvable group G and its nontrivial normal subgroup H, find a nontrivial abelian subgroup of H which is normal in G. — Emil J. 13:43, 10 April 2009 (UTC)[reply]
In that case, consider my previous hint - consider the minimal normal subgroup of G (this subgroup is solvable as a group so what can you say about its commutator subgroup?). Furthermore, what is the relation between this subgroup and H? --PST 22:11, 10 April 2009 (UTC)[reply]
The terms of the derived series of H are characteristic in H, so normal in G. The last non-identity term of the derived series is abelian. Some people use nontrivial to also mean "proper", but with this usage there need not be any such subgroup: take G to be a finite solvable group of composite order and H to be a minimal normal subgroup, then H contains no non-identity proper subgroups that are normal in G. JackSchmidt (talk) 14:24, 10 April 2009 (UTC)[reply]
If "non-trivial subgroups" are precisely those that are neither the trivial group or the whole group, then non-simplicity (which is one of the hypothesis mentioned) should do it (along with solvability, of course). Just consider the minimal normal subgroup of G (which is not G because of non-simplicity). But basically, I am still not totally sure as to what the OP is asking because of his undefined terminology so I can't be sure whether this answers his/her question. --PST 04:15, 11 April 2009 (UTC)[reply]
The solvable group G = Z has no minimal normal subgroup. The OP has already handled the case of finitely generated solvable groups. Also the requirement is that the abelian G-normal subgroup is contained in a given normal subgroup H. JackSchmidt (talk) 15:57, 11 April 2009 (UTC)[reply]
Apologies - I was thinking of the case for finite groups. Thanks for correcting me. But for finite solvable groups that are not simple, it is always true that there exists a proper Abelian non-trivial normal subgroup (the minimal normal subgroup). --PST 02:53, 12 April 2009 (UTC)[reply]
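JackSchmidt's derived-series argument can be checked by brute force on a small finite example. The sketch below (my own code, not from the thread) computes the derived series of S4, with elements represented as permutation tuples; the last non-identity term turns out to be the Klein four-group, which is abelian (and characteristic, hence normal).

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def generated(gens, identity):
    """Closure of a generating set under composition (finite group)."""
    group = {identity}
    frontier = set(gens)
    while frontier:
        group |= frontier
        frontier = {compose(a, b) for a in group for b in group} - group
    return group

def derived(group, identity):
    """Commutator subgroup, generated by all a*b*a^-1*b^-1."""
    comms = {compose(compose(a, b), compose(inverse(a), inverse(b)))
             for a in group for b in group}
    return generated(comms, identity)

e = (0, 1, 2, 3)
s4 = set(permutations(range(4)))
series = [s4]
while len(series[-1]) > 1:
    series.append(derived(series[-1], e))

print([len(g) for g in series])  # [24, 12, 4, 1]: S4 > A4 > V4 > 1
last = series[-2]                # last non-identity term (the Klein four-group)
print(all(compose(a, b) == compose(b, a) for a in last for b in last))  # True
```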


April 11

Limit involving moment generating functions

Hi there - I'm looking to prove the result that if X is a real R.V. with moment generating function M(θ) = E[e^(θX)], and the MGF is finite for some θ > 0, then lim_{x→∞} x^n P(X > x) = 0 for every n. I really have absolutely no clue where to start - is it central limit theorem related? I could really use a hand on this one, thanks a lot!

Otherlobby17 (talk) 06:51, 11 April 2009 (UTC)[reply]

Look at the integral that defines M(θ), namely ∫ e^(θx) dF(x). If it is finite, then P(X > x) ≤ e^(−θx) M(θ), which is stronger than you need. McKay (talk) 09:14, 11 April 2009 (UTC)[reply]
Fantastic, thanks very much buddy :) Otherlobby17 (talk) 04:15, 12 April 2009 (UTC)[reply]
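McKay's bound is Markov's inequality applied to e^(θX): P(X > x) = P(e^(θX) > e^(θx)) ≤ e^(−θx) M(θ). A quick numeric sanity check (my own sketch, not from the thread) for a standard normal, whose MGF is M(θ) = e^(θ²/2) and whose exact tail is erfc(x/√2)/2:

```python
import math

def normal_tail(x):
    """Exact P(X > x) for X ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def chernoff_bound(x, theta):
    """e^(-theta*x) * M(theta), with M(theta) = exp(theta**2 / 2)."""
    return math.exp(theta**2 / 2.0 - theta * x)

for x in [1.0, 2.0, 3.0, 4.0]:
    tail = normal_tail(x)
    bound = chernoff_bound(x, theta=x)  # theta = x minimizes the bound here
    print(x, tail, bound, tail <= bound)
```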

Graph

Is it possible for the graph of a continuous function to touch the x-axis without there being a repeated root? 92.0.38.75 (talk) 13:02, 11 April 2009 (UTC)[reply]

Are you referring to polynomial functions, rather than the less restrictive class of continuous functions? It may be my own ignorance, but I don't know the definition of a repeated root for an arbitrary continuous function. But in the case of a polynomial function, any instance where the function touches but does not pass through the x-axis must be a repeated root. To see this (assuming you have had calculus), note that the derivative at the root must be zero for the curve to just barely kiss the axis. Write the polynomial in factored form, f(x) = a(x − r_1)(x − r_2)⋯(x − r_n), and take the derivative first:
f′(x) = a Σ_i Π_{b ≠ i} (x − r_b).
If the root is r_i, the term (x − r_i) appears in every product except one (the summand that omits b = i). Now set x = r_i, as well as setting the expression equal to zero (knowing that the derivative is zero at the root); all that is left is
a Π_{b ≠ i} (r_i − r_b) = 0.
Ergo there must be another root with r_b = r_i, or equivalently, the multiplicity of the root is greater than one.
Hope this helps, --TeaDrinker (talk) 15:01, 11 April 2009 (UTC)[reply]

What type of root does the absolute value function |x| have at 0? I think the concept "repeated root" is usually only defined for functions with derivatives. The order of the root is the number of successive values f(x), f′(x), f″(x), ... which are zero. McKay (talk) 01:50, 12 April 2009 (UTC)[reply]

You could extend that to say that if a function is continuous but not differentiable at a zero, then the zero has order 0, analogously to the way one sometimes thinks of continuous functions as being 0-times differentiable, or C^0. I don't know if this is useful for anything though. Algebraist 11:44, 12 April 2009 (UTC)[reply]
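TeaDrinker's point is easy to check numerically; here is a sketch (my own example, not from the thread) with p(x) = (x − 1)²(x + 2) = x³ − 3x + 2, which touches the x-axis at x = 1 and crosses it at x = −2:

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial given coefficients [a0, a1, a2, ...]."""
    return sum(c * x**i for i, c in enumerate(coeffs))

def poly_deriv(coeffs):
    """Coefficients of the derivative."""
    return [i * c for i, c in enumerate(coeffs)][1:]

# p(x) = (x - 1)**2 * (x + 2): double root at 1, simple root at -2
p = [2, -3, 0, 1]
dp = poly_deriv(p)

print(poly_eval(p, 1.0), poly_eval(dp, 1.0))    # 0.0 0.0  (touches: double root)
print(poly_eval(p, -2.0), poly_eval(dp, -2.0))  # 0.0 9.0  (crosses: simple root)
```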

What symbol is this?

Ran across this symbol in a mathematics book but I have no idea what it is, or how to type it up in LaTeX. It's a bit like a cursive capital X with a short horizontal line through the middle. My first instinct was but it looks nothing like it. 128.86.152.139 (talk) 14:17, 11 April 2009 (UTC)[reply]

Hmm, I don't know anything fitting that description, but have you tried the comprehensive LaTeX symbols list? --TeaDrinker (talk) 14:23, 11 April 2009 (UTC)[reply]
I'm going through it now.  :( But it's incredibly long... 128.86.152.139 (talk) 14:36, 11 April 2009 (UTC)[reply]
What book is it? What's the context in which the symbol was used? --TeaDrinker (talk) 14:45, 11 April 2009 (UTC)[reply]
If you'd said a vertical line, I'd have said Zhe (Ж). It's a shame you didn't. 163.1.176.253 (talk) 14:57, 11 April 2009 (UTC)[reply]


don't forget that a lot of people cross weird things. a close acquaintance crosses their v's! Maybe it's just a fancy x, so that you don't think it's a normal x?  :) 79.122.103.33 (talk) 15:49, 11 April 2009 (UTC)[reply]

Do you mean 𝔛? That's just a fraktur capital X. Algebraist 16:37, 11 April 2009 (UTC)[reply]
Brilliant, thanks. For context, my lecturer uses it to denote a set of data. Is there a non-blackletter-style font, though? 128.86.152.139 (talk) 02:34, 12 April 2009 (UTC)[reply]
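For reference, the common capital-letter alphabets in LaTeX are shown below (a minimal sketch; \mathfrak and \mathbb need the amssymb package). \mathcal is the usual non-blackletter "script" alternative:

```latex
\documentclass{article}
\usepackage{amssymb} % provides \mathfrak and \mathbb
\begin{document}
$X$            % italic
$\mathcal{X}$  % calligraphic (script)
$\mathfrak{X}$ % fraktur (blackletter)
$\mathbb{X}$   % blackboard bold
\end{document}
```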

how does variable-length testing work?

Let's say I buy a rigged coin (it slightly favors one side) but forget which side is favored.

Could I write a script that I feed the throws into one after the other, where at each stage it simulates that many throws with a fair coin (for example, at throw #5 it throws a fair coin five times), but REPEATEDLY, a MILLION times, to see in what percentage of runs the fair coin behaves the same way under that many throws?

Then if the fair coin only behaves that way in 4% of the million runs, the script would be 96% confident that the currently winning side is weighted?

Here are real examples I just ran with a script: if at throw #10 the count is 7 heads to 3 tails (70% heads), it ran a million runs of ten throws and in 172331 of them (17%) got at least that many heads. So it would report 83% confidence that heads are weighted.

If at throw #50 the count is 35 heads to 15 tails (70% heads), it ran a million runs of fifty throws and in 3356 of them (0.33%) got at least that many heads. So it would report 99.67% confidence that heads are weighted.

#1: t
0 head 1 tails
50% conf. heads weighted

#2: t
0 heads 2 tails
50% conf. heads weighted

#3: h
1 head 2 tails
50% conf. heads weighted

...
#10: h
7 heads 3 tails
83% conf. heads weighted
...
#50:h
35 heads 15 tails
99.7% conf. heads weighted

Is that really how statistics works? If I write my script as I intend, will it be accurate? Also, how many decimal places should I show if I am running the 'Monte Carlo' simulation with a million runs?

Is a million runs accurate enough to see how often a fair coin behaves that way, or should I up it to a billion or even more? Could I use a formula instead, and if so which one? (I don't know stats.)

Thanks! 79.122.103.33 (talk) 15:30, 11 April 2009 (UTC)[reply]

The confidence interval is much tighter than that, see binomial distribution. 66.127.52.118 (talk) 20:13, 11 April 2009 (UTC)[reply]
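The Monte Carlo estimates above can also be checked against the exact binomial tail: P(at least 7 heads in 10 fair flips) = 176/1024 ≈ 17.2%, matching the 172331/1000000 estimate. A sketch (my own code, not from the thread):

```python
from math import comb

def tail_prob(n, k):
    """Exact P(at least k heads in n flips of a fair coin)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

print(tail_prob(10, 7))   # 0.171875 -> "83% confidence" in the OP's terms
print(tail_prob(50, 35))  # ~0.0033  -> "99.67% confidence"
```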

why does the modulus of a random integer in a high range favor lower answers SO SLIGHTLY?

say rand() returns 0-32767 but you want 0-99 - you can just do rand() % 100, which is pretty much an accepted programming practice but results in a very slightly skewed distribution.

data

My question is: how come the skew in the distribution is so incredibly slight? Here I did it a million times:

0:10137
1:9967
2:10225
3:10157
4:9921
5:10096
6:10087
7:9924
8:9876
9:9994
10:10052
11:10022
12:10098
13:9940
14:10080
15:9939
16:9967
17:10067
18:9930
19:10058
20:10072
21:9882
22:9940
23:9793
24:10051
25:10105
26:10079
27:9970
28:9998
29:10197
30:9868
31:9979
32:10006
33:10014
34:9991
35:10062
36:9641
37:10054
38:9938
39:10221
40:9957
41:10064
42:9913
43:9858
44:10050
45:10080
46:10010
47:10009
48:10147
49:9971
50:10107
51:10083
52:9943
53:9998
54:9926
55:10036
56:9965
57:10048
58:10130
59:10049
60:9889
61:9843
62:10067
63:9918
64:10109
65:10201
66:10037
67:10049
68:9940
69:10011
70:10061
71:9946
72:10017
73:9781
74:9946
75:9986
76:10180
77:9888
78:9850
79:10034
80:10186
81:9803
82:9948
83:10040
84:9984
85:10109
86:9986
87:10006
88:9883
89:9834
90:9921
91:10002
92:10191
93:10091
94:9990
95:9910
96:9837
97:9793
98:10097
99:9894

I barely see the effect unless I know to look for it (0 does come up more than 99, but then again 1 doesn't...)

Here they are as percentages of 10,000 (the actual expected number):

101.37%
99.67%
102.25%
101.57%
99.21%
100.96%
100.87%
99.24%
98.76%
99.94%
100.52%
100.22%
100.98%
99.4%
100.8%
99.39%
99.67%
100.67%
99.3%
100.58%
100.72%
98.82%
99.4%
97.93%
100.51%
101.05%
100.79%
99.7%
99.98%
101.97%
98.68%
99.79%
100.06%
100.14%
99.91%
100.62%
96.41%
100.54%
99.38%
102.21%
99.57%
100.64%
99.13%
98.58%
100.5%
100.8%
100.1%
100.09%
101.47%
99.71%
101.07%
100.83%
99.43%
99.98%
99.26%
100.36%
99.65%
100.48%
101.3%
100.49%
98.89%
98.43%
100.67%
99.18%
101.09%
102.01%
100.37%
100.49%
99.4%
100.11%
100.61%
99.46%
100.17%
97.81%
99.46%
99.86%
101.8%
98.88%
98.5%
100.34%
101.86%
98.03%
99.48%
100.4%
99.84%
101.09%
99.86%
100.06%
98.83%
98.34%
99.21%
100.02%
101.91%
100.91%
99.9%
99.1%
98.37%
97.93%
100.97%
98.94%

As you can see they're all over the place.

So I'll do it a billion times:

0:10038140
1:10008197
2:10009360
3:10006955
4:10011825
5:10010609
6:10009413
7:10006938
8:10011526
9:10010894
10:10010597
11:10009374
12:10009683
13:10007576
14:10011881
15:10009578
16:10010504
17:10009339
18:10009367
19:10010843
20:10006451
21:10006077
22:10009165
23:10014474
24:10006321
25:10006088
26:10007508
27:10007083
28:10008172
29:10009126
30:10011141
31:10011209
32:10009601
33:10011616
34:10006668
35:10008558
36:10012031
37:10011200
38:10008657
39:10011348
40:10012982
41:10012670
42:10011145
43:10008010
44:10011152
45:10009978
46:10011937
47:10010535
48:10008799
49:10006801
50:10009905
51:10009997
52:10007276
53:10012822
54:10012214
55:10005860
56:10010537
57:10010839
58:10008926
59:10011667
60:10008250
61:10012131
62:10003874
63:10005923
64:10014245
65:10009392
66:10009417
67:9982730
68:9978860
69:9980179
70:9978155
71:9982744
72:9977599
73:9976077
74:9981662
75:9977978
76:9982794
77:9981410
78:9982701
79:9978788
80:9977564
81:9980187
82:9980063
83:9976760
84:9980559
85:9978017
86:9980910
87:9981715
88:9978261
89:9981133
90:9979202
91:9976322
92:9977249
93:9976058
94:9977878
95:9984202
96:9980344
97:9981362
98:9978432
99:9979728

the effect becomes clear... however it is TINY!!!

Here they are as percentages (of the expected 10million):

100.3814%
100.08197%
100.0936%
100.06955%
100.11825%
100.10609%
100.09413%
100.06938%
100.11526%
100.10894%
100.10597%
100.09374%
100.09683%
100.07576%
100.11881%
100.09578%
100.10504%
100.09339%
100.09367%
100.10843%
100.06451%
100.06077%
100.09165%
100.14474%
100.06321%
100.06088%
100.07508%
100.07083%
100.08172%
100.09126%
100.11141%
100.11209%
100.09601%
100.11616%
100.06668%
100.08558%
100.12031%
100.112%
100.08657%
100.11348%
100.12982%
100.1267%
100.11145%
100.0801%
100.11152%
100.09978%
100.11937%
100.10535%
100.08799%
100.06801%
100.09905%
100.09997%
100.07276%
100.12822%
100.12214%
100.0586%
100.10537%
100.10839%
100.08926%
100.11667%
100.0825%
100.12131%
100.03874%
100.05923%
100.14245%
100.09392%
100.09417%
99.8273%
99.7886%
99.80179%
99.78155%
99.82744%
99.77599%
99.76077%
99.81662%
99.77978%
99.82794%
99.8141%
99.82701%
99.78788%
99.77564%
99.80187%
99.80063%
99.7676%
99.80559%
99.78017%
99.8091%
99.81715%
99.78261%
99.81133%
99.79202%
99.76322%
99.77249%
99.76058%
99.77878%
99.84202%
99.80344%
99.81362%
99.78432%
99.79728%
ahhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh. Now the effect becomes nice and clear. (But is still tiny)

My questions are:

  1. Why did I have to do it a BILLION (which is a huge number) times to see this nice and clear pattern?
  2. Why is the pattern so TINY?
  3. Why does the switch from over the expected count (were the distribution even) to under it CLEARLY happen at 66? Wouldn't 50 make more sense? It's a huge and clear shift, and I bet it would repeat around there, given that the numbers above and below are clearly all in line; it's not a statistical fluke (e.g. it's not that 64 through 67 were just a hair away from each other and 66 "happened to" win out) - it follows the pattern decisively. I wonder why 66 (about two thirds of 99) is where it happens; it seems odd. Why?

So: what are the mathematical reasons for such a tiny, tiny favoring of the lower modulus numbers (about 0.1%, it seems, over a billion iterations), why doesn't it show up over a million iterations, and why does the shift from over to under the expected number happen at 66 of 99 instead of something sensible like 50?

Thank you! 79.122.103.33 (talk) 17:43, 11 April 2009 (UTC)[reply]

I'm not quite sure I understand the problem. The remainders from 0 through 67 occur 328 times in the range, while the remainders from 68 through 99 only occur 327 times. So we would expect any low number (0–67) to occur with probability 328/32768 = 1.001% and any high number (68–99) with probability 327/32768 = 0.998%. It easily drowns in sheer randomness for too few trials. I have no idea why your result seems to treat 67 as a high number though. That seems weird. —JAOTC 18:03, 11 April 2009 (UTC)[reply]
Jao's answer is correct. For another explanation of why the shift happens where it does, think of a system where rand() returns numbers from 0-5 (like a die). Now, if you wanted a number in the range 0-3, you could do rand() % 4, with the following possible outcomes:
  • rand() returns 0 - result 0
  • rand() returns 1 - result 1
  • rand() returns 2 - result 2
  • rand() returns 3 - result 3
  • rand() returns 4 - result 0
  • rand() returns 5 - result 1
It is easy to see that 0 and 1 would each occur twice as often as the other numbers. If you made a similar table for 0-32767 and 0-99, you would see that the numbers 0-67 occur once more than the other numbers (namely, when the generator returns a number between 32700 and 32767). decltype (talk) 18:43, 11 April 2009 (UTC)[reply]
Thanks for the answers! They make sense. So how would you correctly choose a number between 0 and 3 inclusive, using a generator that goes from 0-5 inclusive, in an equally distributed way? Should you just re-roll on a 4 or 5, no matter how many times you keep getting them? 79.122.103.33 (talk) 21:25, 11 April 2009 (UTC)[reply]
If you want an algorithm that's sure to stop, you can just flip it twice and see which ordered pair (like (1,1), (0,4) or (5,1)) you get. As there are 36 such ordered pairs, any scheme that maps nine of them to 0, nine to 1, nine to 2 and nine to 3 works. —JAOTC 22:12, 11 April 2009 (UTC)[reply]
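Jao's pair scheme can be realized with a single formula: 6a + b is uniform on 0-35, and since 36 is divisible by 4, reducing it mod 4 is exactly uniform. A sketch (my own code, not from the thread) checking all 36 ordered pairs:

```python
from collections import Counter
from itertools import product

# All 36 equally likely ordered pairs (a, b) of rolls of a fair 0-5 "die";
# 6*a + b is uniform on 0..35, and 36 % 4 == 0, so mod 4 is exactly uniform.
counts = Counter((6 * a + b) % 4 for a, b in product(range(6), repeat=2))
print(counts)  # each of 0, 1, 2, 3 occurs exactly 9 times
```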

To get uniformity at little cost, just reject values of rand() greater than 32699. Incidentally, it is usually recommended to use division rather than mod for this problem. That is, instead of rand() % 100, use rand() / 327 (after rejecting values above 32699). If rand() was perfectly random, it wouldn't make any difference, but division is considered less likely to magnify the imperfections of imperfect random number generators. McKay (talk) 01:45, 12 April 2009 (UTC)[reply]
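Both suggestions (rejection, then either mod or division) can be verified exhaustively over the generator's 32768 equally likely outputs; 32700 = 327 × 100 is the largest multiple of 100 that fits. A sketch (my own code, not from the thread):

```python
from collections import Counter

RAND_RANGE = 32768  # rand() returns 0..32767

# Biased: plain modulo. Remainders 0-67 occur 328 times, 68-99 only 327.
biased = Counter(r % 100 for r in range(RAND_RANGE))

# Unbiased: reject r > 32699, then either mod or division works.
accepted = [r for r in range(RAND_RANGE) if r <= 32699]
by_mod = Counter(r % 100 for r in accepted)
by_div = Counter(r // 327 for r in accepted)

print(biased[0], biased[99])  # 328 327
print(set(by_mod.values()))   # {327}
print(set(by_div.values()))   # {327}
```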

okay then what is the answer

This is in relation to my thread two above (about weighted coins): how many heads out of how many flips would I need to be sure my coin isn't fair, at 75%, 90%, 95%, 98.5%, 99%, 99.9% confidence...

What is the formula? (This is not homework.) 79.122.103.33 (talk) 21:21, 11 April 2009 (UTC)[reply]

The number of flips of the weighted coin that are necessary to ascertain which direction the weight is in (or to ascertain that it is weighted at all) to a specified level of confidence depends on the extent of the weight. At a fixed level of confidence, a coin with 2/3 probability of landing heads will be determined to be weighted much sooner than a coin with 501/1000 probability of landing heads. So the answer depends upon the amount of skew. Eric. 131.215.159.99 (talk) 23:32, 11 April 2009 (UTC)[reply]
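To put rough numbers on this: under the normal approximation, detecting a coin with P(heads) = 1/2 + ε at one-sided confidence level c takes on the order of n ≈ (z_c / 2ε)² flips, where z_c is the normal quantile. A sketch (my own back-of-envelope formula and code, not from the thread; it targets the point where the true mean sits exactly at the rejection threshold, i.e. about 50% power - real designs add a power term):

```python
from math import ceil
from statistics import NormalDist

def flips_needed(eps, confidence):
    """Rough n so that a coin with P(heads) = 0.5 + eps lands, on average,
    just at the one-sided rejection threshold (about 50% power)."""
    z = NormalDist().inv_cdf(confidence)
    return ceil((z / (2.0 * eps))**2)

for conf in [0.75, 0.90, 0.95, 0.985, 0.99, 0.999]:
    print(conf, flips_needed(0.1, conf))  # a 60/40 coin
```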

how to do monte carlo properly

If after doing n flips and getting a certain number of heads, I want to be exactly 95% sure that the results show my coin favors heads (isn't fair), but I'm really bad at statistics and want to do Monte Carlo instead: could I look at the most heads that comes up in 20 runs (20 because 19/20 is 95%), by making a list of "most heads out of n flips in 20 runs" a billion times, average those numbers, and get my 95% threshold for n?

For example, if I want to see what number out of 50 flips my coin has to beat for me to be exactly 95% sure that it isn't fair, do I average a billion meta-runs of "most heads out of 50 flips, in 20 tries"?

Sorry that I'm such a zero in statistics; this must be so frustrating to anyone who actually knows what's going on. Anyway, does this proposed methodology work for my intended confidence level (95%)?

If not this, then what is the correct Monte Carlo method for the 95% interval? Thanks! 79.122.103.33 (talk) 21:45, 11 April 2009 (UTC)[reply]

The distribution of the number of heads in 20 tosses of a fair coin is approximately a normal distribution with a mean of 10 and a variance of 5, so a standard deviation of sqrt(5). In a normal distribution, 95% of observations are less than 1.65 standard deviations above the mean (since your question is "does my coin favour heads", this is a one-sided test). 10 + 1.65 × sqrt(5) is approximately 13.7. So the probability that a coin that does not favour heads will score more than 13 heads in 20 tosses is less than 5%. So if your coin scores 14 or more heads, you can be "95% certain" that it favours heads. See hypothesis testing and Z-test for more details. Gandalf61 (talk) 12:25, 12 April 2009 (UTC)[reply]
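Note that 13.7 comes from the normal approximation; with the exact binomial distribution the tail probability at 14 heads is about 5.8%, so a strict 95% one-sided test actually needs 15 heads. A sketch (my own code, not from the thread):

```python
from math import comb

def tail(n, k):
    """Exact P(at least k heads in n fair flips)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

print(tail(20, 14))  # ~0.0577: just over 5%
print(tail(20, 15))  # ~0.0207: comfortably under 5%
```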
You need to know the prior probability that the coin is biased before you can answer such a question precisely. See hypothesis testing (mentioned by Gandalf) for some discussion. Really, you've asked the question a couple very different ways: 1) you have a coin that is known to be biased, but you're not sure if it's towards heads or towards tails (let's say that's an implicit assumption that it's 50-50 guess between heads-biased or tails-biased); or 2) you have a coin that might be biased (with some unknown probability) and you want to check. Case 1 is straightforward: the null hypothesis is that the coin is biased towards tails, then compare the outcome of your experiment with the binomial or normal distribution. Case 2 is harder: for example, say you have 1000 coins, of which 999 are fair and one is 60-40 biased towards heads. You pick one of those coins uniformly at random, flip 20 times and get 14 heads. Are you 95% confident that you picked the biased coin? Nope, because of the prior probability distribution. In this case you'd use Bayes's theorem in conjunction with the known 0.999 prior probability to interpret the result. But if you don't know the prior probability, it's not so clear what to do. 66.127.52.118 (talk) 12:46, 12 April 2009 (UTC)[reply]
Not knowing the prior probability, what can one do? If a magician lets you check a coin he's using, what are you supposed to guess for the chances it's fair? How would you check (to be 90% / 95% / 98% / 99% etc. sure in your conclusion)? Thanks 94.27.222.70 (talk) 22:29, 12 April 2009 (UTC)[reply]
That is to some extent a question about philosophy rather than mathematics. See Bayesian probability and frequency probability (as well as statistical hypothesis testing already mentioned) for some discussion. 66.127.52.118 (talk) 01:19, 13 April 2009 (UTC)[reply]


April 12

When (or how do I find out when) does the integral of x^P exp(-x^Q) converge over (0, infinity)?

Not much more to explain than what the title says, really! I'm brushing up on my analysis, but I'm uncertain how to go about finding out where (if anywhere), in terms of P and Q, the integral converges, where P, Q > 0. If someone could just point me in the direction of a theorem or a wiki page which I could begin to use to approach the problem (or help me begin to approach it, if you're feeling generous!), I'd be hugely appreciative - thanks a lot,

Mathmos6 (talk) 04:20, 12 April 2009 (UTC)[reply]

First, change variable: u = x^Q. This reduces to the case Q = 1 (with another P, which turns out to be > -1), which is the Euler integral for the gamma function. So you can actually compute the integral in terms of P & Q. Then, is the convergence of the Euler integral clear to you (i.e. in your notation, P > -1, Q = 1)? --pma (talk) 09:02, 12 April 2009 (UTC)[reply]
Since exp(-x^Q) goes to 0 pretty quickly and x^P grows only polynomially, this integral converges for all positive values of P and Q. In general, determining the convergence of improper integrals has more to do with asymptotics than with integration. The techniques tend to be the same as those for determining the convergence of infinite series. Jim (talk) 16:38, 12 April 2009 (UTC)[reply]
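pma's substitution can be written out explicitly (a sketch of the computation; the gamma integral at the end converges exactly when its argument is positive, i.e. P > −1):

```latex
\int_0^\infty x^P e^{-x^Q}\,dx
  \;\overset{u = x^Q}{=}\;
  \int_0^\infty u^{P/Q}\, e^{-u}\, \frac{1}{Q}\, u^{1/Q - 1}\, du
  \;=\; \frac{1}{Q}\,\Gamma\!\left(\frac{P+1}{Q}\right),
\qquad \text{convergent for } \frac{P+1}{Q} > 0 .
```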

Intrinsic equation

Given that the intrinsic equation of a curve is , where a is a nonzero constant, I have to show that . I start with the Whewell equation, namely . However I get and from here cannot make any progress. Integrals.com can't integrate that expression, so I doubt I'll have any luck with it. Where have I gone wrong? Thanks 92.0.38.75 (talk) 11:04, 12 April 2009 (UTC)[reply]

Notice that the intrinsic equation is translation invariant, so you also have to assume (0,0) belongs to the curve if you want to show it. That said, I do not see anything wrong in your approach, although there are possibly other ways. You have
Integrate, using and :
From these you can easily obtain your and the cartesian equation
So, it's the old semicubical parabola. (PS: notice the TeX for trigonometric functions and parentheses.) --pma (talk) 11:35, 14 April 2009 (UTC)[reply]

Invariant sets

Let X be a set and T : X → X a map,

and

Λ = ∩_{n≥0} T^n(X);

is it true that

T(Λ) = Λ? --pokipsy76 (talk) 15:47, 12 April 2009 (UTC)[reply]

No, in general you only have T(Λ) ⊆ Λ. There are simple examples; maybe you can see it: try with X a countable tree. pma (talk) 16:31, 12 April 2009 (UTC)[reply]
Which T do you have in mind on the tree?--pokipsy76 (talk) 17:43, 12 April 2009 (UTC)[reply]
T is the map going one step up the tree towards the root, as in the example below. Algebraist 20:01, 12 April 2009 (UTC)[reply]
I don't see how it could work. If the tree is a complete binary tree then it seems to me that T(X)=X.--pokipsy76 (talk) 07:34, 13 April 2009 (UTC)[reply]
Sorry, should've been clearer: you need to pick the right tree. Algebraist 12:17, 13 April 2009 (UTC)[reply]
Or this example with : define ; for all n>0; otherwise. Then and . (Uè, paisà!) --pma (talk) 17:11, 12 April 2009 (UTC)[reply]
Well if we take instead of it works. (Uè uè!)--pokipsy76 (talk) 17:41, 12 April 2009 (UTC)[reply]
Either works. This is in fact one of the tree examples. Algebraist 20:01, 12 April 2009 (UTC)[reply]
Ok, you are right, I didn't see that also produces expanding "holes" starting from every .--pokipsy76 (talk) 08:28, 13 April 2009 (UTC)[reply]
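For readers who want something concrete: here is a finite-truncation sketch (my own encoding, not necessarily the example pma had in mind) of the tree/chain idea. Take a fixed point 0, a point 1 with T(1) = 0, and for every n a disjoint chain of length n feeding into 1. Every chain point eventually falls out of some image T^m(X), but 1 stays in all of them, so Λ contains 1 while T(Λ) does not:

```python
def build_map(N):
    """Fixed sink 0; T(1) = 0; for each n <= N a chain (n,n) -> ... -> (n,1) -> 1."""
    T = {0: 0, 1: 0}
    for n in range(1, N + 1):
        T[(n, 1)] = 1
        for k in range(2, n + 1):
            T[(n, k)] = (n, k - 1)
    return T

N = 30
T = build_map(N)
X = set(T)

images = [X]                     # images[m] is T^m(X)
for _ in range(N):
    images.append({T[x] for x in images[-1]})

Lam = set.intersection(*images)  # finite stand-in for the full intersection
print(Lam)                       # {0, 1}
print({T[x] for x in Lam})       # {0}: strictly smaller than Lam
```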

I'm wondering if there are examples where X is a topological subspace of a Euclidean space and T is a continuous function. --pokipsy76 (talk) 18:19, 12 April 2009 (UTC)[reply]

Of course there are. Any countable example (such as pma's) can be embedded in N in R. Algebraist 20:01, 12 April 2009 (UTC)[reply]
Yes, that was a stupid question and not really what I had in mind... the slightly harder problem could be to find examples with topological subspaces which are connected and have nonempty interior. For example, the intermediate value theorem makes it impossible to extend the above map to R. --pokipsy76 (talk) 07:34, 13 April 2009 (UTC)[reply]
You can get a connected example in the plane with a tree again, only this time you put in the edges. You can easily fiddle that to have nonempty interior by fattening out one of the edges. Algebraist 12:17, 13 April 2009 (UTC)[reply]

Ok, so what about this "improvement":

Let

Λ_0 = X, Λ_{α+1} = T(Λ_α),

and for any limit ordinal λ, Λ_λ = ∩_{α<λ} Λ_α.

Is it true that

Λ_{α+1} = Λ_α for some ordinal α, or (at least)

T(Λ_α) = Λ_α for some α

(assuming this makes sense)? --Pokipsy76 (talk) 20:32, 13 April 2009 (UTC)[reply]

Yes, this version works. We must have Λ_{α+1} = Λ_α for some α, otherwise Λ_α loses at least one point at every step, and so loses card(ON) many overall, which can't happen since X is a set. Algebraist 21:09, 13 April 2009 (UTC)[reply]
And any ordinal of the same cardinality as X may be required in the iteration before the sequence becomes stationary: in the sense that for any ordinal α with card(α) ≤ card(X) there is a map T : X → X for which the sequence (Λ_β) first becomes stationary at β = α. --pma (talk) 10:01, 14 April 2009 (UTC)[reply]

Factorial proof

How can it be proved that 0! = 1? —Preceding unsigned comment added by 201.130.196.160 (talk) 19:14, 12 April 2009 (UTC)[reply]

It cannot be proved; it is defined to be that value because it makes formulas work. See Factorial#Definition for more info. meshach (talk) 19:24, 12 April 2009 (UTC)[reply]
The continuity of the relationship between the gamma function and the factorial requires that 0! = 1. The gamma function is an extension of the factorial to all real and complex numbers, and is subject to the relationship Γ(n+1) = n!. If you admit the relationship between gamma and factorial (which there is proof of in the gamma function article), then evaluating gamma gives proof that 0! = 1, since 0! = Γ(1) = 1. This is the closest thing to a proof I can offer you. Elocute (talk) 20:25, 12 April 2009 (UTC)[reply]
We have n! = n × (n−1)!, which leads us to (n−1)! = n!/n. Supposing that 0! is defined, then its value must be 0! = 1!/1 = 1. Readro (talk) 21:53, 12 April 2009 (UTC)[reply]

It is the number of bijections from one empty set to another, which is 1. Strictly speaking it is a convention, but any convention other than 0! = 1 would cause endless trouble. McKay (talk) 00:51, 13 April 2009 (UTC)[reply]

Others above have offered quite a bit of techno-babble, but I can show you a very simple derivation by working backwards from, say, 4!:

4! = 24
3! = 4!/4 = 6
2! = 3!/3 = 2
1! = 2!/2 = 1
0! = 1!/1 = 1

Make sense? --69.91.95.139 (talk) 01:55, 13 April 2009 (UTC)[reply]
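The backwards recurrence in that derivation, as a loop (a trivial sketch, my own code):

```python
fact = {4: 24}                  # start from 4! = 24
for n in (4, 3, 2, 1):
    fact[n - 1] = fact[n] // n  # (n-1)! = n!/n
print(fact)  # {4: 24, 3: 6, 2: 2, 1: 1, 0: 1}
```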

The questioner has specifically requested a proof of the claim that 0! = 1, and yet your "techno-babble" above fails to constitute a proof. Although your argument provides some insight into the claim, it does not constitute a mathematical proof. Furthermore, the other comments above were far better reasoned arguments. Please see proof (mathematics) for more details. --PST 04:57, 13 April 2009 (UTC)[reply]
Furthermore, User:Readro has provided a well-expressed argument equivalent to yours. Your argument lacks the assumption that 0! is defined, which is necessary to conclude that this entity equals 1. --PST 05:00, 13 April 2009 (UTC)[reply]
PST, please use the reference desk for constructive contributions. I have left a note at your talk page. Eric. 131.215.159.99 (talk) 08:32, 13 April 2009 (UTC)[reply]
I see nothing wrong in my comment. The previous user seems to have criticized the comments given by the other users when, in fact, he/she is the one to have a flawed argument. And "please use the reference desk for constructive comments" is too general a statement, asserted without sufficient evidence. As I have not contributed anything that is prohibited, and the discussion that you have initiated is not relevant to the question, it is not of my interest to take further part in it. --PST 08:54, 13 April 2009 (UTC)[reply]

I apologize for providing Readro's 'proof' more explicitly. In the future I will let the concepts remain written in abstract, potentially confusing symbols rather than written out with numbers. There is no reason, after all, that any argument should be written explicitly for the reader unfamiliar or uncomfortable with the abstract concepts of algebra; that's just silly.

And none of the contributions were formal proofs. You can't prove a convention. You can, however, provide a good argument for it, which is what we've done here. --69.91.95.139 (talk) 11:43, 13 April 2009 (UTC)[reply]

The question as to whether your argument is well-designed or not, is not what I am challenging. This is merely my opinion. The point is that you must assume 0! to be defined, before concluding it to be 1. --PST 12:46, 13 April 2009 (UTC)[reply]
The other important thing to note is that your comment would have been perfectly OK (it is not of my interest to attack people for their errors) had you not challenged the other arguments. --PST 12:48, 13 April 2009 (UTC)[reply]
Then I probably should have said "... which is okay if you're at a high enough level to understand them, but I'm guessing you're not, so...", because that's what I meant. I didn't mean their arguments were bad, just a little too technical for the kind of person who is likely to ask why 0! is 1 (ie, high school algebra level). --69.91.95.139 (talk) 13:35, 13 April 2009 (UTC)[reply]
OK. As I said, I am not too concerned about this and nor is anyone else. But in my opinion, you could have phrased your intended comment in a better manner (or just avoiding that comment would not have done any harm). Furthermore, in my understanding, your argument was not logically structured well enough (concluding something equivalent to: f(2) = 2^2 => f(1) = 1^2; and this implication is, of course, not logical), but of course others may see this differently. As there is no longer merit in continuing this argument, noticing that the original poster has left, I no longer see purpose in replying. --PST 03:22, 14 April 2009 (UTC)[reply]

Conjugate Diameter?

In A treatise on analytical geometry, conjugate and transverse axes are noted regarding oblate and prolate spheroids, which agrees with the definition of conjugate and transverse diameters of an ellipse. These appear related to the idea of conjugate points. But, back on pg. 107 of "A treatise...", the concept of conjugate diameters in an ellipse appears to be discussed.

(Diagram of conjugate diameters)

How can that be? Aren't the red and blue lines actually oblique diameters, with the vertical diameter along y being the conjugate diameter, and the horizontal along x, the transverse?
Are there two different meanings of conjugate diameter? ~Kaimbridge~ (talk) 19:46, 12 April 2009 (UTC)[reply]

Mandelbrot Set

What are the root and apex points of the Mandelbrot set? 72.197.202.36 (talk) 20:38, 12 April 2009 (UTC)[reply]

I believe I have heard 0.25 (the inner point of the cusp of the cardioid) referred to as the root. “Apex point” might refer to −2, which is the point at the very left end. Jim (talk) 01:45, 13 April 2009 (UTC)[reply]
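A quick escape-time check is consistent with Jim's identification of those two boundary points; this sketch is illustrative (the function name and iteration budget are ad-hoc choices):

```python
def stays_bounded(c, iterations=500):
    """Iterate z -> z^2 + c from z = 0; report whether |z| stays <= 2."""
    z = 0
    for _ in range(iterations):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# 0.25 (cusp of the cardioid) and -2 (left tip) both lie in the set...
assert stays_bounded(0.25) and stays_bounded(-2)
# ...while a point just past the left tip escapes immediately
assert not stays_bounded(-2.01)
```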


April 13

Length of a Continued Fraction

Alright, I have several questions regarding the length of a continued fraction. Feel free to answer any, all, or none of them (though, preferably, if you intend to do the last, don't respond at all :). These questions are similar, but distinct.

  1. Does the length of the continued fraction expansion of a rational number say anything meaningful about the number itself? I can already kind of guess it is kind of a measure of the complexity of the number, but I'm wondering if there's any significantly less vague property to speak of.
  2. Can the numerator/denominator form of a rational number be used to predict the length of that number's continued fraction, without actually calculating its expanded form?
  3. Would it be easier to predict the length if the fraction were given in reduced form?
  4. What about with the numerator and denominator decomposed into prime factors?

BTW, just to deflect any misunderstanding from question # 3, I am aware that converting a numerator/denominator fraction into its continued fraction form and back will automatically put it in reduced form.

All responses appreciated, --69.91.95.139 (talk) 01:46, 13 April 2009 (UTC)[reply]

Have you thought about the questions? What have you worked out with regards to each question? Please list some of your ideas. --PST 04:53, 13 April 2009 (UTC)[reply]
Hint: the length of the continued fraction expansion of a/b (which may or may not be in reduced form) is related to the number of steps required to find the greatest common divisor of a and b using the Euclidean algorithm - can you see why ? Our Euclidean algorithm article contains some results on the "worst case" and average number of steps. Can you find a fraction a/b, where a and b are both less than 100, with a continued fraction expansion of length 10 ? Gandalf61 (talk) 05:49, 13 April 2009 (UTC)[reply]
Is the last partial quotient allowed to be 1? —Tamfang (talk) 04:53, 14 April 2009 (UTC)[reply]
Oops, yes it was. So make that "...with a shortest continued fraction expansion of length 9". Gandalf61 (talk) 11:01, 14 April 2009 (UTC)[reply]
I've figured out the connection algebraically. Thanks for the tip, Gandalf. I'll post my formulation here for the curious later. --69.91.95.139 (talk) 22:30, 14 April 2009 (UTC)[reply]
Let a and b be integers satisfying 0 < a < b. The Euclidean algorithm and the derivation of the continued fraction both proceed by finding the unique integer n such that b = na or na < b < (n+1)a. (Strictly speaking, this is not how the Euclidean algorithm is defined, but I will use n in the Euclidean algorithm to aid in finding the modulus, which is effectively 'subtracting a from b as many times as possible').
Let's look at deriving the continued fraction form of b/a first: b/a = n + (b − na)/a = n + 1/(a/(b − na)).
This is one step. If b = na, then the fraction portion can be left out, and the continued fraction expansion has ended. If this is not the case, then we know na < b < (n+1)a, so 0 < b − na < a, the lower fraction a/(b − na) is larger than 1, and the expansion continues.
Now take a look at the same for the Euclidean algorithm. We're trying to find the GCD of a and b. In order to do so, the Euclidean algorithm tells us that GCD(a, b) = GCD(a, b − na).
That was also one step. We notice that this is the same pair of numbers as was left in the fraction above. Thus, the continued fraction expansion and the Euclidean algorithm are doing essentially exactly the same thing.
There is a difference to note, that the continued fraction expansion ends sooner. When we end up with two integers of the form a and b = na, the continued fraction ends, but the Euclidean algorithm continues 1 more step, to give the pair (a, 0). Thus, the length of the continued fraction of a fraction is 1 less than the number of steps it takes to perform the Euclidean algorithm of calculating the GCD of the numerator and denominator, and this is so regardless of whether they are given in reduced form, and is not helped at all by giving them in factored form.
That answers all my questions. Thanks again for the help. --69.91.95.139 (talk) 00:43, 15 April 2009 (UTC)[reply]
Hey! Is it just me, or is "banana" written everywhere in those formulas... I think I'll go for another banana split! ;-) --pma (talk) 21:11, 15 April 2009 (UTC)[reply]
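The correspondence between partial quotients and Euclidean division steps discussed above is easy to check by computing both side by side; a small sketch (with the convention that one "step" is one integer division, under which the two counts coincide exactly):

```python
def continued_fraction(p, q):
    """Partial quotients of p/q (p, q positive integers)."""
    terms = []
    while q:
        terms.append(p // q)
        p, q = q, p % q
    return terms

def euclid_divisions(p, q):
    """Number of integer divisions the Euclidean algorithm performs on (p, q)."""
    count = 0
    while q:
        p, q = q, p % q
        count += 1
    return count

# Consecutive Fibonacci numbers give the longest expansions below 100:
assert continued_fraction(89, 55) == [1, 1, 1, 1, 1, 1, 1, 1, 2]
for p, q in [(7, 3), (89, 55), (100, 64), (355, 113)]:
    assert len(continued_fraction(p, q)) == euclid_divisions(p, q)
```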

constructing a table

I want to let someone try, blindly, to guess with a machine how dice will land (while they are still rolling in the air); they don't see how the dice land. I thought I would say they can stop whenever they want, after 5 throws or 50, and I would tell them, for each number of throws, how well they would need to be predicting (the size of the effect) for this to show up at the 95, 98, and 99% confidence levels. Can this be done? How would my chart look? (And how would I calculate it?) Thanks! 94.27.151.13 (talk) 14:10, 13 April 2009 (UTC)[reply]

How many dice - 1 ? 10 ? 100 ? What exactly is the other person guessing ? What is the "effect" that you want to test for ? Is this another variation on the coin tossing questions that you were asking above ? And have you read hypothesis testing, normal distribution and Z-test yet ? Those articles will help you to answer your questions. Gandalf61 (talk) 15:53, 13 April 2009 (UTC)[reply]
The provided links were very difficult, so I am asking a simpler question!
Q. How many dice?
A. One standard six-sided die.

Q. What exactly is the other person guessing?
A. They claim a machine can deduce to some extent, from the observed position and spin of the die, on which face it will land. The machine cannot see the results (or what happens once the die falls below a certain height).


Q. What is the "effect" that you want to test for ?
A. Can the machine predict the die to any statistically significant extent for the number of throws the experiment consists of? (I.e. is the machine predicting the results of the fair die throws statistically better than I can by saying 1, 1, 1, 1, 1 etc.) The device-wrangler can stop after any number of throws (they don't see the results until the end) and I want them to be able to consult my chart to see when they want to stop...

Q. Is this another variation on the coin tossing questions that you were asking above ?
A. Not quite. I'm looking for a table that shows, for each number of throws, what the threshold is at a few different confidence levels for proving that their machine works to ANY statistically significant extent -- and the corresponding strength of the effect that this would demonstrate.

My reasoning is that the effect strength would have to be close to 100% for an effect to be proven in 6 die throws, so if they have just thrown 6 they would consult my chart and see that the machine would have to be very accurate to prove any efficacy in so few throws. By reading down the chart, they can find the number of throws corresponding to the effect strength they think they have -- if they think the machine works 1% better than chance, they would read down the list and see the 1% somewhere, maybe at 100 throws for 95% confidence and at 115 throws for 99% confidence, I don't know -- and I don't know how to calculate these numbers!

Could someone prepare such a chart for me or give me simple formulas so I can do so myself from any programming language? Thank you. 94.27.151.13 (talk) 18:33, 13 April 2009 (UTC)[reply]
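One way such a chart could be computed, sketched below under the assumption of a one-sided binomial test against pure chance (p = 1/6 for a fair die); `binom_tail` and `threshold` are illustrative names, not standard library functions:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more correct
    guesses out of n throws by luck alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def threshold(n, alpha, p=1/6):
    """Fewest correct guesses out of n throws needed before 'pure chance'
    can be rejected at significance level alpha (confidence 1 - alpha)."""
    for k in range(n + 1):
        if binom_tail(n, k, p) <= alpha:
            return k
    return None  # not reachable even if all n guesses are correct

# One chart row per number of throws; columns are 95%, 98%, 99% confidence.
for n in (6, 12, 30, 60, 120):
    print(n, [threshold(n, a) for a in (0.05, 0.02, 0.01)])
```

For example, after only 6 throws the machine must get 4 of them right before even the 95% level is reached, which matches the intuition that very short runs can only demonstrate very strong effects.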
Statistical power. Unless your machine can make perfect guesses you will never be able to set up your experiment such that you can be certain that a 95% confidence will be reached. Taemyr (talk) 06:04, 14 April 2009 (UTC)[reply]
when you said "you will never sbe able to set up your experiment such that you can be certain that a 95% confidence can be reached" my brain exploded. You owe me a new brain, Mister. And then you owe me an explanation. What could that statement possibly mean? 79.122.35.239 (talk) 10:55, 14 April 2009 (UTC)[reply]
Well, first off, I made a typo. It should be will rather than can. Other than that, as long as there is a non-zero possibility that your machine guesses wrong it's possible that it guesses wrong on all throws of the dice. So you have the concept of Statistical power, which is the probability that an experiment detects an effect that is present. Power is in tension with confidence, in that if you want a higher confidence you must either change the experiment or allow for a lower power. Taemyr (talk) 03:19, 16 April 2009 (UTC)[reply]

Gaussian Derivative

In Gaussian_function it says "Mathematically, the derivatives of the Gaussian function are the Hermite functions", but it seems like the actual derivative is the inverse of a Hermite function. Where does the extra -1 factor come from? Is this an error in the article or am I missing something? Truthforitsownsake (talk) 16:00, 13 April 2009 (UTC)[reply]

OK, let's say "up to a multiplicative factor". Notice also that the definition of several special functions differs by some constant among the various authors, so you may even find a definition of Hermite functions without "extra -1". --pma (talk) 14:34, 14 April 2009 (UTC)[reply]

Polynomial question/thing

This question has always baffled me: f(x) is a monic polynomial of degree 6. It has the following values: f(0)=0, f(1)=1, f(2)=2, f(3)=3, f(4)=4, f(5)=5. What is the value of f(6)? Considering this is beyond my current working knowledge of polynomials, how would I even begin to solve this? Furthermore, what would the correct equation be of said polynomial, and how would I come up with it? Does this involve calculus of some sort? 141.153.216.159 (talk) 16:06, 13 April 2009 (UTC)[reply]

Follow the example at Lagrange polynomial. No calculus, just a bit of algebra.  Pt (T) 16:29, 13 April 2009 (UTC)[reply]
If we want a monic polynomial of degree 6, the best thing to do is to create an arbitrary monic polynomial of degree 6, f(x) = x⁶ + c₅x⁵ + c₄x⁴ + c₃x³ + c₂x² + c₁x. As f(0)=0 there is obviously no constant term. There may be an easier way to solve this, but the method I used was to create a coefficient matrix for all five non-zero values f(x) takes that are provided. Using row operations, I transformed it to reduced row echelon form, whereby I could read off the coefficient values. The equation I got is f(x) = x⁶ − 15x⁵ + 85x⁴ − 225x³ + 274x² − 119x. f(6)=726. Readro (talk) 17:21, 13 April 2009 (UTC)[reply]
Messing around with matrices is a lot more complicated than Lagrange polynomials, I think. Algebraist 18:41, 13 April 2009 (UTC)[reply]
Indeed. And if one doesn't want to use the Lagrange polynomial formula explicitly, one may observe that f(x)-x is a sixth degree monic polynomial vanishing at x=0,1,2,3,4,5 : therefore f(x)=x(x-1)(x-2)(x-3)(x-4)(x-5)+x and f(6)=6!+6. --pma (talk) 19:58, 13 April 2009 (UTC)[reply]
That's a very nice and elegant argument. Wish I'd spotted that! Readro (talk) 21:13, 13 April 2009 (UTC)[reply]

Doing the computations with matrices is laborious, but understanding how they're done is what makes it obvious that there must be a solution. But pma's solution above is the best of those proposed here. Michael Hardy (talk) 21:24, 13 April 2009 (UTC)[reply]
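pma's closed form is easy to verify directly; a two-line check (sketch):

```python
def f(x):
    # monic degree-6 polynomial: vanishes at x = 0..5 before the "+ x" shift
    return x * (x - 1) * (x - 2) * (x - 3) * (x - 4) * (x - 5) + x

assert all(f(k) == k for k in range(6))
assert f(6) == 726   # 6! + 6
```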

statistics: are there real effects that are impossible to show statistically with very high confidence?

I am new to statistics and I was wondering: are there real physical effects that are impossible to show statistically with very high confidence, no matter what test you devise? I mean that you can show it with 98% confidence, but the effect is such that by its nature you cannot devise an experiment with 99.9% confidence (maybe because it is not possible to repeat the 98% confidence test many many times for some reason, or to have the sample space be much larger than is enough for the 98% confidence level)?

Thank you! 94.27.151.13 (talk) 18:54, 13 April 2009 (UTC)[reply]

Many measurements of cosmological parameters are subject to uncertainty from cosmic variance; that is, we can only observe a small fraction of the universe, which surely has slight statistical deviations from the whole universe. For example, the measurement of the Hubble constant is subject to errors from bulk flows in the visible universe. In one of the best measurements of the Hubble constant, from the HST Key Project, the systematic error due to bulk flows is given as 5% (Table 14). -- Coneslayer (talk) 19:16, 13 April 2009 (UTC)[reply]
Some economic effects are very difficult to show statistically. The reason is that (1) as with astronomy, it is not always possible to conduct controlled experiments, (2) as with fluid dynamics applied to small quantities of liquids, sometimes the number of people involved is small enough that individual idiosyncratic behavior drowns out the effect. Wikiant (talk) 19:34, 13 April 2009 (UTC)[reply]
Thank you, both respondents! These answers are just what I was looking for! :) -- follow-up question: In cases like this, what is the lowest confidence interval that is even worth mentioning the results over (in a paper or any other source)? 95% as you said, or 90% or even lower? Thanks! 94.27.151.13 (talk) 20:10, 13 April 2009 (UTC)[reply]
The appropriate confidence interval depends on the cost of being wrong. In clinical medicine, there is a huge downside to erroneous measurement, so one would tend to go with large confidence intervals. In market research or portfolio management, you can be wrong in many individual cases as long as you are right more often than wrong, so smaller confidence intervals are acceptable. The lowest I've seen (and in the field of market research) is 75%. Wikiant (talk) 22:17, 13 April 2009 (UTC)[reply]
thank you Wikiant, both for the concrete and qualified number, and especially for your explanation preceding it. Totally informative. 94.27.151.13 (talk) 22:35, 13 April 2009 (UTC)[reply]

Finding P

I'm not one to ask the REFDESK to do my homework, but I would appreciate the help. I need to find the "...probability distribution for the sum of the numbers given", all 2 through 10. Percentages are given for the outcomes, but I need the equation to "Find P (Sum is prime)." If I could get the equation, I can be on my way. Thank you. —Mr. E. Sánchez (that's me!)What I Do / What I Say 22:01, 13 April 2009 (UTC)[reply]

If you're saying that probabilities P(sum=2), P(sum=3) and so on are given, then all you need to do is add up the values for the primes. —Tamfang (talk) 04:46, 14 April 2009 (UTC)[reply]
Resolved.
I see. Thank you! —Mr. E. Sánchez (that's me!)What I Do / What I Say 00:50, 15 April 2009 (UTC)[reply]
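In code, with a hypothetical distribution (the OP's actual percentages are not given here), Tamfang's recipe is just a filtered sum over the prime sums 2, 3, 5 and 7:

```python
# hypothetical probabilities for each sum from 2 to 10 (they total 1)
dist = {2: 0.05, 3: 0.10, 4: 0.10, 5: 0.15, 6: 0.15,
        7: 0.20, 8: 0.10, 9: 0.10, 10: 0.05}
primes = {2, 3, 5, 7}

# P(sum is prime) = sum of the probabilities at the prime sums
p_prime = sum(p for s, p in dist.items() if s in primes)
assert abs(p_prime - 0.50) < 1e-9
```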

Connection between frequency modulation and Bessel functions

I am trying to understand the presence of spectral side-bands when the frequency of an oscillation itself oscillates. Most books on the subject assume I have much more or much less mathematical background than I do. It seems to ride on the following identity, which I can't see how to derive: cos(ω_c t + β sin(ω_m t)) = Σ_n J_n(β) cos((ω_c + n ω_m) t), with the sum over all integers n.

Your help is most appreciated.128.223.130.198 (talk) 22:46, 13 April 2009 (UTC)[reply]

These are named Jacobi-Anger expansions and follow easily from the generating series of the Bessel functions; have also a look here [1] and here [2], and ask for details if needed. Put t = e^(iθ) in the generating series of the J_n, namely e^((x/2)(t − 1/t)) = Σ_n J_n(x) t^n, and use t − 1/t = 2i sin θ.--pma (talk) 13:12, 14 April 2009 (UTC)[reply]
Thanks a lot, that did the trick. 128.223.23.171 (talk) 18:50, 15 April 2009 (UTC)[reply]
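The expansion can also be verified numerically without any special-function library, by computing J_n(z) from its integral representation J_n(z) = (1/π)∫₀^π cos(nθ − z sin θ) dθ; a sketch (quadrature step count and test values chosen ad hoc):

```python
import cmath
import math

def bessel_j(n, z, steps=4000):
    """J_n(z) for integer n, via trapezoidal quadrature of the integral
    representation (1/pi) * integral_0^pi cos(n*t - z*sin(t)) dt."""
    h = math.pi / steps
    f = lambda t: math.cos(n * t - z * math.sin(t))
    total = 0.5 * (f(0.0) + f(math.pi)) + sum(f(k * h) for k in range(1, steps))
    return total * h / math.pi

# Jacobi-Anger: e^{i z sin(theta)} = sum over n of J_n(z) e^{i n theta}
z, theta = 1.3, 0.4
lhs = cmath.exp(1j * z * math.sin(theta))
rhs = sum(bessel_j(n, z) * cmath.exp(1j * n * theta) for n in range(-10, 11))
assert abs(lhs - rhs) < 1e-6
```

Truncating the sum at |n| ≤ 10 is harmless here because J_n(1.3) falls off roughly like (z/2)^n / n! for large n.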

April 14

Puzzle

I am not a math professor, or expert but was posed with this question and cannot get it out of my mind. What multi-digit number is such that when the last digit is placed at the beginning, the number is doubled? [i.e. - abcd=1/2 (dabc)] There may be an obvious answer but I have tried it for a while and have come up with nothing yet.

I have come up with some parameters so far. If these are true or false could someone tell me either way and explain.
  • the second to last digit is 0 - since the only single digit that, when multiplied by 2 equals itself is 0, the second to last digit must be 0 due to the fact that it will end up being the last digit when the transfer is made.
  • The last digit is 5 - in order for the second to last digit to be 0 the last one must be 5 since it is the only number that, when multiplied by 2, will give you a number with a 0 in it.
  • In the original number, the first two digits must be 25 - if the last digit is 5 then the first two of the original number must be something that will give you 1/2 of the first digit in the final number (5), thus 25.

I understand that all of these premises are based off of the integrity of the first one. If the first one is wrong then I am really lost!!:):) Any help is appreciated. jondn (talk) 01:33, 14 April 2009 (UTC)[reply]

There are no 4-digit solutions to that problem other than 0000 (I used a computer to check). Also, your reasoning makes no sense. 207.241.239.70 (talk) 02:54, 14 April 2009 (UTC)[reply]
Too blunt a comment was posted above - ignore it. Note that the value of 'abcd' is 1000a + 100b + 10c + d, and the value of 'dabc' is 1000d + 100a + 10b + c. Therefore, 2a = d, 2b = a, 2c = b, and 2d = c, because twice 'abcd' is 'dabc'. Therefore, 8c = d (to see why, combine the first three equations) and 2d = c. In particular, we conclude that 8c is both equal to d and 16d. Therefore, we see that c = 0 and d = 0. From the initial equations, we conclude that a = 0 and b = 0. This demonstrates that the number is 0000, as desired. P.S. I must emphasize that this is not the type of problem that math professors solve, contrary to what you suggest. --PST 03:35, 14 April 2009 (UTC)[reply]
I don't think 2a=d, 2b=a, etc. necessarily hold, because there can be carries from one place to another. Note that if you don't mind ignoring a remainder, 1052 = 1/2 of 2105. 207.241.239.70 (talk) 04:48, 14 April 2009 (UTC)[reply]


A number that can be multiplied by moving the last digit to the front is called a parasitic number. The Wikipedia article on parasitic numbers gives 105263157894736842 as a solution to your puzzle and provides a general method for finding an n-parasitic number for any n. The example I've given here shows that your first premise (that the next-to-last digit must be 0) is false, since the next-to-last digit here is 4. In fact, all of the premises are false, but as you say, the truth of the second and third premises depends on that of the first. By the way, I wouldn't call this solution obvious, but there is a way of thinking about the problem that leads straight to this solution. Michael Slone (talk) 04:03, 14 April 2009 (UTC)[reply]
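Michael Slone's 18-digit answer is easy to check by string rotation; a sketch:

```python
def rotate_last_to_front(n):
    """Move the last decimal digit of n to the front."""
    s = str(n)
    return int(s[-1] + s[:-1])

n = 105263157894736842
assert rotate_last_to_front(n) == 2 * n   # moving the 2 to the front doubles it
```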
There have been two recent New York Times articles on this puzzle:
The second explains the solution. Jim (talk) 04:36, 14 April 2009 (UTC)[reply]
Btw, if you need to recall the number 105263157894736842 (to show your ability in multiplications), note that its digits are the leading decimal digits of 2/19 = 0.105263157894736842... --pma (talk) 06:55, 14 April 2009 (UTC)[reply]

Thanks a lot for the help!! I was not necessarily implying that math professors do these problems, rather was just saying that I am in no way an expert but the problem was still driving me crazy!! Thanks again jondn (talk) 14:02, 14 April 2009 (UTC)[reply]

What is ∫√(dt² − dr²)?

What is ∫√(dt² − dr²)? This is not homework. I just got stuck integrating this by hand and Wolfram Mathematica's Integrator thought that the "d"s were constants (except for the last one, obviously!)! Please help me!The Successor of Physics 15:09, 14 April 2009 (UTC)[reply]


As a first remark, your notation looks a bit like a weird thing. Either you mean d²t/dr², which is a notation for the second derivative of a function t(r) with respect to the variable r; or you mean (dt/dr)², which is a notation for the square of the first derivative of t(r). In both cases there is no general identity that allows a reduction to a simpler formula: you have to keep the integral as it is. Moreover, even if there were such an identity, that integrator will not give it to you, for it only contains a data-base of antiderivatives of explicit functions. Using it to look for the integral of "f(x)", without specifying what "f(x)" is, is like sending a letter addressed "to my grandfather": it will not be able to understand what you want. If you specify what the unknown function t(r) is, for instance if you choose t(r)=cos(r) or t(r)=r², then you may look for an antiderivative of your expression in terms of elementary functions, and in this case that integrator may help. --pma (talk) 16:06, 14 April 2009 (UTC)[reply]


This integral gives you the spacetime interval along a path in 1+1 dimensional spacetime. As pma said, you can't integrate it unless you have a relation between t and r. If you have a function t(r) (which would be rather unusual) then the integral is . If you have a function r(t) (more likely) then it's . If you have functions t(q) and r(q), where q is an arbitrary parameter, then the integral is . If you don't care about the sign of r', which is the usual case, then you can bring it inside the square root and simplify the last two integrals to and . -- BenRG (talk) 23:05, 14 April 2009 (UTC)[reply]
Well, I meant . Also, (I know the answer is s after the substitution but I don't want it to be done). Does that help?The Successor of Physics 14:12, 15 April 2009 (UTC)[reply]
The answer will depend on the limits of integration. I don't think any simple limits will give an answer of s (assuming s is a constant). Are you sure you don't mean ? -- BenRG (talk) 18:38, 15 April 2009 (UTC)[reply]

Repeated Root

Following on from an earlier question, how does one describe the behaviour of at ? Is this a repeated root? If not what is it? 92.0.38.75 (talk) 17:38, 14 April 2009 (UTC)[reply]

Yes, it is a repeated root. See Multiplicity_of_a_root#Multiplicity_of_a_zero_of_a_function. Both the function and its derivative are zero at that point, but its second derivative is nonzero, so that point is a root with multiplicity 2. Black Carrot (talk) 19:22, 14 April 2009 (UTC)[reply]
Also, we can define multiplicity of zeros via power series expansions. If y(x) = Σ_k a_k (x − x₀)^k and n is the index of the least non-zero term, we say that n is the multiplicity of x₀ as a zero of y(x) (in this case it's 2). So we say that the multiplicity of x₀ is zero if x₀ is not a zero ;-)
(Notice also that the order of a pole is defined analogously. Indeed, if you allow any integer value, the two concepts are unified: one can then say that the multiplicity as a zero is minus the order as a pole. Of course it should sound a bit strange to speak of negative multiplicities, but it is only a matter of language, while in the substance it's very natural). --pma (talk) 19:54, 14 April 2009 (UTC)[reply]
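As a concrete instance of the derivative test above (a made-up example, not the OP's function): y(x) = (x − 1)² has y(1) = y′(1) = 0 but y″(1) ≠ 0, so x = 1 is a zero of multiplicity 2:

```python
def y(x):
    return (x - 1) ** 2

def dy(x):           # first derivative, 2(x - 1)
    return 2 * (x - 1)

def d2y(x):          # second derivative, the constant 2
    return 2

# y and y' vanish at x = 1, y'' does not: multiplicity 2
assert y(1) == 0 and dy(1) == 0 and d2y(1) != 0
```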

April 15

Composition of functions and integrability

Is there an example of a discontinuous integrable f and a continuous integrable g such that f(g(x)) is non-integrable? I know the converse is not true - if f is continuous and integrable, and g is integrable, then f(g(x)) is integrable, but what about with g continuous? I'm fairly confident there is no counterexample, but I'm not totally sure how to prove it - how would I go about it if that is the case?

Thanks very much,

Mathmos6 (talk) 13:38, 15 April 2009 (UTC)[reply]

The following homeomorphism g should help you to build a counterexample with f Riemann integrable and not even measurable in Lebesgue sense. Let h(x):=x+c(x) where c : [0,1] → [0,1] is the Cantor function; then h : [0,1] → [0,2] is a homeomorphism (for it is continuous & strictly increasing) taking the Cantor set C into a closed subset A of [0,2] of measure 1 (for the complement of C in [0,1] is sent into a set with the same measure, that is 1). Therefore g:=h-1 : [0,2] → [0,1] is a homeomorphism taking A into the Cantor set. Notice that, since A has positive measure, it contains a non-measurable set (you don't need this, if you are just happy with not Riemann integrable). Can you see how to go ahead? --pma (talk) 14:42, 15 April 2009 (UTC)[reply]


Well the standard example for f-integrable g-integrable with composition fg not integrable is g Thomae's function, f(x)=1 everywhere except f(0)=0 - so is the next step something similar to that? I can't honestly say I'm completely sure how to go ahead, despite the fact you've already given me a lot of help - sorry, my head's obviously having a slow night!

Mathmos6 (talk) 19:06, 15 April 2009 (UTC)[reply]

The function you describe is (Lebesgue) integrable. Are you interested in Riemann integrability? Algebraist 19:10, 15 April 2009 (UTC)[reply]
OK, start with an example with f Riemann integrable, g a homeomorphism (the one defined above) and with composition not Riemann integrable. Choose f to be a characteristic function, f = χ_S. Then f∘g = χ_{g⁻¹(S)} is also a characteristic function. Thus you want a set S such that χ_S is Riemann integrable and χ_{g⁻¹(S)} is not. To this end you just have to clarify a bit to yourself which sets S have a Riemann integrable characteristic function (these are called Jordan measurable sets; if you are familiar with the characterization of Riemann integrable functions by means of their points of continuity it's quite immediate). Then, if you recall the relevant property of the above homeomorphism g, you are done. But we still do not know if you are interested in Riemann or in Lebesgue integrability. In the latter case, if you want a stronger example, you can in fact choose a Jordan measurable set S such that the set g⁻¹(S) is not even Lebesgue measurable (in this case use the little remark above). --pma (talk) 21:05, 15 April 2009 (UTC)[reply]

That's brilliant! I understand completely I think, except for one thing - what's the relevance of h being homeomorphic? Is that just so we know it's continuous, or does it have additional relevance? As a matter of interest, is it safe to 'adjust' the example so that g is [0,1]->[0,1], say by simply multiplying h(x) by (1/2)? Thanks so much for the help, Mathmos6 (talk) 04:46, 16 April 2009 (UTC)[reply]

You're welcome. Of course, h(x)/2 is even nicer, for the interval remains the same. The fact that h is a homeomorphism has no particular relevance for your needs, except that being an additional property, makes the counterexample stronger. Moreover, it shows that such properties of sets, like: having zero measure, or: being measurable either in the Lebesgue or in the Jordan sense, are not topological invariants.
Note also: the measurability problem may be bypassed if one chooses to deal with functions f that are measurable in the sense of Borel, which is a topological concept, rather than in the sense of Lebesgue, which is not (the subtle difference is that in the former case the sublevel sets {f<c} are required to be Borel measurable; in the latter, only Lebesgue measurable). After all, any Lebesgue measurable function is (canonically) equal a.e. to a Borel measurable function, so you would lose nothing in terms of classes of functions defined a.e. For Borel measurable f, of course, it is true that f∘g is still Borel (with g continuous, or even Borel measurable). BUT the problem essentially remains, in the sense that the map f ↦ f∘g, although now well-defined in a suitable class of measurable functions, is not well-behaved: changing f in a set of zero measure could still change f∘g in a set of positive measure.
On the positive side, if g is a homeomorphism and g⁻¹ is Lipschitz, then f∘g is Riemann (or Lebesgue) integrable if such is f; the reason is that now g⁻¹ cannot expand the null sets where f is bad-behaved to fat ones. In this case, the map f ↦ f∘g gives rise to a nice linear continuous operator between the corresponding L¹ spaces. --pma (talk) 08:55, 16 April 2009 (UTC)[reply]

Isolated singularities

Do poles count as isolated singularities? —Preceding unsigned comment added by 59.96.30.227 (talk) 15:08, 15 April 2009 (UTC)[reply]

Yes, a pole is a special case of an isolated singularity. — Emil J. 15:36, 15 April 2009 (UTC)[reply]

Algebra: Olympiad question

This is an Olympiad question which I cannot think of any way to start. "The +ve integers a and b are such that 15a + 16b and 16a - 15b are both squares of +ve integers. What is the least possible value that can be taken by the smaller of these squares?" Please help. --Siddhant (talk) 16:54, 15 April 2009 (UTC)[reply]

This is quite easy, as IMO questions go. The answer is 231361. Algebraist 17:27, 15 April 2009 (UTC)[reply]

How? I know the answer but I need the method. Thanks for your effort.--Siddhant (talk) 17:32, 15 April 2009 (UTC)[reply]

Show that both squares are divisible by 481. Algebraist 17:53, 15 April 2009 (UTC)[reply]

Do I need to use Fermat's theorem for that?--Siddhant (talk) 18:44, 15 April 2009 (UTC)[reply]

I didn't use any theorem of Fermat. That doesn't mean there's not an approach that would use such a theorem, however. Algebraist 18:46, 15 April 2009 (UTC)[reply]
Well, Siddhant, have you proved the answer? If you still have not got it, you can always ask again and we will certainly help you. The trick with these problems is not to think too hard because usually the solution only requires a few basic facts from number theory (and minor algebraic manipulations). --PST 02:27, 17 April 2009 (UTC)[reply]

I set r² = 15a + 16b and s² = 16a − 15b. I squared both expressions and added them to get r⁴ + s⁴ = 481(a² + b²). Now how to prove that each of the squares is divisible by 481? What arguments need to be given after that to reach the answer? Is there another approach possible to answer this question? Thanks.--Siddhant (talk) 08:06, 17 April 2009 (UTC)[reply]

Write a and b in terms of r² and s² and then consider what happens if 481 doesn't divide one of the squares. Zain Ebrahim (talk) 10:19, 17 April 2009 (UTC)[reply]
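Following up on the hints above, the key claim (that 481 = 13·37 must divide both squares) can be sanity-checked numerically, and the bound can be shown to be attained. The witness pair (a, b) = (14911, 481) below is my own construction, obtained by solving 15a + 16b = 16a − 15b = 481², not something given in the thread:

```python
# From r**2 = 15a + 16b and s**2 = 16a - 15b one gets
# 15*r**2 + 16*s**2 = 481*a, so 15r^2 + 16s^2 == 0 (mod 481).
# Check exhaustively that this congruence forces r == s == 0 (mod 481);
# since 481 = 13*37 is squarefree, 481 | r and 481 | s, so each
# square is at least 481**2 = 231361.
M = 481  # = 13 * 37
sols = [(r, s) for r in range(M) for s in range(M)
        if (15 * r * r + 16 * s * s) % M == 0]
assert sols == [(0, 0)]

# A (hypothetical) witness showing the bound 481**2 is attained:
a, b = 14911, 481
assert 15 * a + 16 * b == 231361  # = 481**2, a perfect square
assert 16 * a - 15 * b == 231361  # the other square, equal here
```

The exhaustive loop replaces the quadratic-residue argument (15r² + 16s² ≡ 0 has no nonzero solution mod 13 or mod 37) with brute force; for a written solution one would argue via residues.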

April 16

Prime number

Which is the biggest prime number? —Preceding unsigned comment added by 59.92.243.47 (talk) 11:16, 16 April 2009 (UTC)[reply]

See Prime number#The number of prime numbers. —JAOTC 11:18, 16 April 2009 (UTC)[reply]
Or if you're interested in the largest known prime number, try (surprise!) Largest known prime number. —JAOTC 11:21, 16 April 2009 (UTC)[reply]
There can be no largest prime number. If there were, list them: {p₁, ..., pₖ}. Now the product of all the prime numbers in this list is divisible by every prime in this list. Therefore, this product + 1 can be divisible by none of the primes in the list (the remainder will be 1 if you try to divide it by any prime). Therefore, this product + 1 must have a prime factor. This prime factor cannot be any of the primes listed, for otherwise it would divide 1 - a contradiction; we specifically assumed those pₖ's to be the only primes. Therefore, there cannot be finitely many primes, but rather infinitely many. Therefore, there is no largest prime number. If you are interested in a topological proof of this fact, try Furstenberg's proof of the infinitude of primes. The proof I gave is due to Euclid. --PST 12:23, 16 April 2009 (UTC)[reply]
Ummm... The product + 1 must have a prime factor other than the ones you started with. It doesn't need to be prime itself. 2*3*5*7*11*13+1 = 59*509. McKay (talk) 12:52, 16 April 2009 (UTC)   On second thoughts, it should be said like this: (1) Every integer > 2 must have a prime factor. (2) If there were finitely many primes, their product + 1 would, by (1), have a prime factor not in the list - a contradiction. McKay (talk) 13:09, 16 April 2009 (UTC)[reply]
No - I am not convinced. My proof assumed the hypothetical situation that there are only finitely many primes, with a largest prime (say) pₖ. The product of all these primes + 1 must be larger than all these prime numbers. Furthermore, it must be prime, because by assumption the only primes are those pⱼ's, and if it were not prime, it would have to be divisible by some pⱼ. Your counterexample does not demonstrate a fallacy in the proof, because we conclude the product + 1 is prime precisely because we have already listed all the primes (so it cannot contain a prime factor "not in this list"). Anyway, I may well be wrong - I'm a bit sleepy. :) --PST 14:07, 16 April 2009 (UTC)[reply]
I am wrong. I have corrected my (embarrassing) mistake - thank you for the correction. --PST 14:29, 16 April 2009 (UTC)[reply]
However your previous proof by contradiction seems perfectly correct to me.--pma (talk) 15:15, 16 April 2009 (UTC)[reply]
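McKay's numerical point is easy to check directly; a small sketch (the `prime_factors` helper is my own, plain trial division):

```python
# Trial-division factorization; fine for small numbers like this one.
def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

primes = [2, 3, 5, 7, 11, 13]
n = 2 * 3 * 5 * 7 * 11 * 13 + 1  # 30031
assert prime_factors(n) == [59, 509]    # not prime itself...
assert all(n % p == 1 for p in primes)  # ...but no listed prime divides it
```

So the product plus one supplies a *new* prime factor (here 59), which is exactly what the corrected proof needs.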
Well, I think the OP wanted to ask for the largest known prime number so far, which is 2^43,112,609 − 1, a Mersenne prime. The newly discovered largest primes are almost always Mersenne primes because of the algorithmic and computational ease of verifying their primality. - DSachan (talk) 15:41, 16 April 2009 (UTC)[reply]
Really? I always thought it was because Mersenne numbers (with a prime exponent n) were much more likely to be prime than a randomly selected number... But it looks like you are referring to the Mersenne-specific primality tests, such as the Lucas-Lehmer test for Mersenne numbers. I stand corrected. Anythingapplied (talk) 16:32, 16 April 2009 (UTC) [reply]
That's also a reason. That's why GIMPS exists, and the existence of the Lucas-Lehmer test helps their cause. - DSachan (talk) 10:19, 17 April 2009 (UTC) [reply]
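As an aside, the Lucas-Lehmer test mentioned above is short enough to sketch. This is a toy version for small exponents, not how GIMPS actually runs it (they use FFT-based multiplication for the squarings):

```python
# Lucas-Lehmer test: for an odd prime p, M_p = 2**p - 1 is prime
# iff s_{p-2} == 0 (mod M_p), where s_0 = 4 and s_{k+1} = s_k**2 - 2.
def lucas_lehmer(p):
    if p == 2:
        return True  # M_2 = 3 is prime; the recurrence below needs p > 2
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Prime exponents p <= 37 for which 2**p - 1 is prime:
mersenne_exponents = [p for p in [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
                      if lucas_lehmer(p)]
# mersenne_exponents == [2, 3, 5, 7, 13, 17, 19, 31]
```

Note that a prime exponent is necessary but not sufficient: p = 11 is prime, yet 2^11 − 1 = 2047 = 23 × 89.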

Not to sound grumpy, but don't the articles I linked to cover pretty much all of this? —JAOTC 16:33, 16 April 2009 (UTC)[reply]

Yes, they do. Did you expect people to bother with reading what other people wrote before posting? — Emil J. 16:59, 16 April 2009 (UTC)[reply]
Right, but then you too shouldn't have written that sentence, but only put a link to it, for it has been repeated soo many times... ;-) pma (talk) 17:52, 16 April 2009 (UTC)[reply]

I hate those "...3 football fields long" or "...to the moon and back 11 times" comparisons, but I can't help it: how many pages would it take to write out 2^43,112,609 − 1? -hydnjo (talk) 22:44, 16 April 2009 (UTC)[reply]

It has 12978189 decimal digits. Decide how many digits you can fit on a page and work it out. Algebraist 23:28, 16 April 2009 (UTC)[reply]
Right, that's the hard part. I have no clue nor could I find a reliable source as to how many "standard" size characters fit on a "standard" size page. -hydnjo (talk) 23:42, 16 April 2009 (UTC)[reply]
This exact issue has been discussed in Wikipedia. Mersenne prime#List of known Mersenne primes says: "To help visualize the size of the 46th known Mersenne prime, it would require 3,461 pages to display the number in base 10 with 75 digits per line and 50 lines per page." Earlier there was an unsourced claim about "a standard word processor layout", but I changed it in [3] after discussion at Talk:Mersenne prime#Word Processor quote. PrimeHunter (talk) 23:51, 16 April 2009 (UTC)[reply]
Thanks PrimeHunter, I obviously (embarrassedly) missed that factoid. -hydnjo (talk) 00:15, 17 April 2009 (UTC)[reply]
Oooh, a pile of pages 25 inches high! -hydnjo (talk) 00:29, 17 April 2009 (UTC)[reply]
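For what it's worth, the digit and page counts quoted above are easy to reproduce without ever writing out the number (75 digits per line and 50 lines per page, as in the Mersenne prime article):

```python
import math

# Decimal digits of 2**p - 1 for p = 43112609.  Since 2**p is not a
# power of 10, 2**p - 1 has exactly floor(p * log10(2)) + 1 digits,
# so we never need to construct the 12.9-million-digit integer.
p = 43112609
digits = math.floor(p * math.log10(2)) + 1
pages = math.ceil(digits / (75 * 50))  # 75 digits/line, 50 lines/page
print(digits, pages)  # 12978189 3461
```

Here p·log10(2) ≈ 12978188.5, comfortably far from an integer, so double-precision floating point is accurate enough for the floor.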

water disturbance equation

Is there a mathematical equation that would describe the amount of time it takes for ripples to stop after a stone is dropped into a nice flat pond if we can assume there are no further disturbances? 65.121.141.34 (talk) 19:56, 16 April 2009 (UTC)[reply]

Suppose you were to throw the pebble at point P on the pond. If there are no further disturbances, I would assume the ripples expand until they reach the edges of the pond. So let R be the minimum of all distances from P to points on the boundary of the pond. Assuming the ripples enclose a circular area, they will stop expanding once the circle they enclose has radius R. Now all we need to know is the area of the circular region enclosed by the initial ripple and the rate at which the largest ripple's radius is increasing, or the rate of change of the area enclosed by the ripples. Then, using calculus, we could work out the answer. If, assuming a perfect model, you wanted to work out the time it takes for the ripples to stop (i.e. hit the edges of the lake) from the trajectory of the pebble when you threw it into the pond, you would have to use wave mechanics. In real-life situations, mathematical equations rarely give you the exact answer. --PST 02:20, 17 April 2009 (UTC)[reply]

April 17

Is there an infinite number of different types of probability distribution?

Probability distributions seem to be discovered by individuals and are often named after them, such as the Gaussian. See List of probability distributions. But the essence of a probability distribution is described by a mathematical formula. Are there therefore as many different probability distributions as there are maths formulas? I assume there is an infinite number of maths formulas. In other words, are the named probability distributions just a tiny subset of all possible probability distributions? 78.146.249.32 (talk) 11:07, 17 April 2009 (UTC)[reply]