Wikipedia:Reference desk/Mathematics

Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

July 13

Points on square grid

I presume that this is a well-known result, but I couldn't find it. What is the greatest number of points which can be marked on an n by n square grid so that no three are collinear? Drawing successive cases suggests that for n=0, 1, 2, 3 and 4, the answer is 0, 1, 4, 5 and 6 - is this correct so far, and what's the general result?—86.132.235.208 (talk) 18:21, 13 July 2009 (UTC)[reply]

You're wrong with the last two values: for n=3, the answer is 6 (take all except one diagonal) and for n=4, it's 8 (take the inner two of each side of the square). For arbitrary n, the number can be no higher than 2n (since every line or column can only contain two marked points), but I'm not sure if this value can always be achieved for n>4 (for n=5, I get no further than 9 marked points). --Roentgenium111 (talk) 22:02, 13 July 2009 (UTC)[reply]
10 points are possible for n=5:
**---
*--*-
---**
-**--
--*-*
For n=6, 12:
**----
---*-*
-*--*-
*-*---
----**
--**--
And it does not seem completely implausible that this generalizes to every n. -- Meni Rosenfeld (talk) 15:29, 14 July 2009 (UTC)[reply]
Some Google-fu turned up this summary from Research Problems in Discrete Geometry by Peter Brass, W. O. J. Moser, János Pach, p417: "In a long sequence of papers ... many examples were constructed, up to n = 52, for which the bound 2n is obtained. Most of these sets were found by computer search and no general pattern has emerged". Gandalf61 (talk) 16:22, 14 July 2009 (UTC)[reply]
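A quick way to sanity-check such configurations is a brute-force collinearity test. Below is a small Python sketch (not part of the thread; the grid is the 5×5 example given above) that verifies no three marked points are collinear.

```python
from itertools import combinations

grid = ["**---",
        "*--*-",
        "---**",
        "-**--",
        "--*-*"]

points = [(r, c) for r, row in enumerate(grid)
                 for c, ch in enumerate(row) if ch == "*"]

def collinear(p, q, s):
    # Zero cross product of (q - p) and (s - p) means the three points are collinear.
    return (q[0] - p[0]) * (s[1] - p[1]) - (q[1] - p[1]) * (s[0] - p[0]) == 0

assert not any(collinear(p, q, s) for p, q, s in combinations(points, 3))
print(len(points), "points, no three collinear")
```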

Average of the first billion digits of the decimal expansion of Pi

Is it possible to know what the answer to that is - in which case what is it and how did you work it out - without laboriously adding them up? -- JackofOz (talk) 20:58, 13 July 2009 (UTC)[reply]

In other words, you add the numeral values of the digits and divide by 1,000,000,000? If that's what you mean, couldn't you just put them all into a spreadsheet? Nyttend (talk) 21:02, 13 July 2009 (UTC)[reply]


Do you want an exact answer? I doubt there's any clever trick to speed this calculation up faster than simply adding up the digits. (Well, actually there is — just print out the right answer. But I suppose you want a justified method.)
On the other hand, if you're OK with an answer that's good to 3 or 4 decimal digits, the answer is 4.5 . --Trovatore (talk) 21:09, 13 July 2009 (UTC)[reply]
That's close enough. Thanks for the quick responses. -- JackofOz (talk) 21:18, 13 July 2009 (UTC)[reply]
Using the table here, the first billion digits of π add up to 4500057062. Also to be found in this OEIS sequence. Fredrik Johansson 21:28, 13 July 2009 (UTC)[reply]

"Trovatore" didn't really state his answer except for its bottom line number. Here's the rest: The average of the ten digits 0 through 9 is

(0 + 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9)/10 = 45/10 = 4.5.

That's Trovatore's answer.

That's approximately correct if the ten digits occur equally frequently in the long run. But here's a hard question: Do the ten digits really occur equally frequently?

For that we have only statistical evidence, not a mathematical proof.

In one sense, the answer is clearly "no": if the sum is 4500057062 instead of 4500000000 (i.e. 4.5 billion) then it deviates slightly from exact equality. But it is conjectured that you can get as close as you want to equality of those ten frequencies by making the number of digits big enough.

If you want to get into statistical evidence, then we'd also talk about pairs of consecutive digits, and triples of consecutive digits, and so on. The number cited above, 4500057062, does not deviate from 4.5 billion by more than would be predicted by the full-fledged conjecture dealing with pairs, triples, etc. occurring equally frequently. One could get into details of how that conclusion was reached as well.

But the way Trovatore came up with 4.5 is just that it's the average of the ten digits 0 through 9. Michael Hardy (talk) 23:25, 13 July 2009 (UTC)[reply]

It is conjectured but unproven that pi is a normal number (Trovatore's answer being wrong would have been interesting evidence against the conjecture). However, if you just want the billionth digit without having to compute all the preceding ones, there is a beautiful spigot algorithm for obtaining it, at least if you don't mind a hexadecimal rather than decimal expansion. 70.90.174.101 (talk) 01:08, 14 July 2009 (UTC)[reply]

I'm supremely indifferent to what the billionth digit is, but thanks anyway. I was too naive in accepting Trovatore's answer as the answer to the question I asked. It's probably very close to 4.5, but just for the fun of it, can anyone produce the actual average I'm seeking, to, say, 9 decimal places? -- JackofOz (talk) 09:23, 14 July 2009 (UTC)[reply]
4.500057062 - see Fredrik Johansson's response above. Gandalf61 (talk) 09:46, 14 July 2009 (UTC)[reply]
Why were you "too naive"? I was right, wasn't I? --Trovatore (talk) 21:46, 14 July 2009 (UTC)[reply]
Note that Trovatore's answer was right even with respect to the precision: three or four decimal digits. --pma (talk) 14:26, 15 July 2009 (UTC)[reply]
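For anyone who wants to reproduce a smaller version of this computation, here is a rough sketch using mpmath; N = 10^5 is an arbitrary choice, since the full billion digits would need a precomputed table like the one linked above.

```python
import mpmath

N = 10**5                               # number of digits to average (a stand-in for 10**9)
mpmath.mp.dps = N + 10                  # working precision with a few guard digits
pi_str = mpmath.nstr(+mpmath.pi, N + 5)
digits = pi_str.replace(".", "")[:N]
print(sum(int(d) for d in digits) / N)  # close to 4.5, as discussed above
```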

Randomly choosing one object out of a countably infinite set

During a physical-world discussion of a subject that has caused much distress here, the person I was discussing it with suggested that it would be much easier to look at from the point of view of picking a door from an infinite set of doors. This has caused me great distress independent of the problem. Let's look at the odds of randomly selecting one element from N with equal probability, that is P(1) = P(k), for any k in N. By the definition of probability, P(1) is either 0 or an element of (0, 1]. The sum of all P(x) (= P(1)), x in N, equals 1, and by the Archimedean property, if P(1) > 0, there exists some n such that n * P(1) > 1, which is impossible, so P(1) = 0. Thus P(1..n) = n * P(1) = 0, and the limit as n goes to infinity of 0 is 0. The only conclusion I can come to is that it's not meaningful to speak of selecting an element from N with equal probability?!? I'm not sure what I'm missing here, but I think I'm missing something.--Prosfilaes (talk) 23:17, 13 July 2009 (UTC)[reply]

You're missing nothing. There is no uniform countably additive probability measure on the natural numbers. Algebraist 00:07, 14 July 2009 (UTC)[reply]
In many cases, such as the German tank problem, a probability distribution f(k)=1/Ω, for k=1...Ω, and f(k)=0 elsewhere, for some large value of Ω, may serve as a prior distribution. After computation of a posterior distribution you may take the limit of infinite Ω. It is incorrect to take the limit first and compute afterwards. Bo Jacoby (talk) 10:15, 14 July 2009 (UTC).[reply]
As a consequence of what is stated in the first reply, if you want the distribution to be uniform you have to give up countable additivity. There are translation-invariant, finitely additive probability measures on N; they all live in the dual of ℓ∞. -pma (talk) 16:13, 14 July 2009 (UTC)[reply]


July 14

Proving inequalities

Say a + b + c = 1. Due to the AM-GM inequality, (a + b + c)/3 ≥ (abc)^(1/3), which means that the maximum value of abc is 1/27. Now, what values of a, b and c give this maximum value? Obviously it's when they're all 1/3, but is there some way to prove this? On a related topic, if I want to prove where , is it fair to say that (since you can swap the variables around...?) and somehow prove the inequality using this? Is there a name for the kinds of things I'm talking about, where you have inequalities with permutations of variables? --wj32 t/c 11:27, 14 July 2009 (UTC)[reply]

The AM-GM inequality says that the AM and GM are equal if and only if all terms are equal, which in your example gives a = b = c = 1/3. For the second question, what you say isn't true in general - e.g. . AndrewWTaylor (talk) 11:38, 14 July 2009 (UTC)[reply]
You can only "swap variables around" when there's symmetry. Suppose you start with the assumption . So far a, b and c are symmetrical. Then you evaluate your sum, starting with . After this step, a and b are still symmetrical, but c is not symmetrical with them. You have included a summand which has a and b, but not one that contains c. So from this point forth, is very different from and you can't interchange them.
By the way, in your first question you have forgotten to state that a, b, c ≥ 0 (not sure if this is required for the second). -- Meni Rosenfeld (talk) 12:55, 14 July 2009 (UTC)[reply]

My favorite way to see the AGM inequality is like this. Say x = abc where a + b + c = 1. Now let u = (a + b)/2 and v = (a − b)/2, so that ab = u² − v². Suppose you adjust a and b while keeping their sum constant. That means that u doesn't change, while the product is clearly at maximum when v = 0, which means a = b. By symmetry (since you could have permuted the variables in any way) the product is at maximum when a = b = c. —Preceding unsigned comment added by 70.90.174.101 (talk) 12:24, 14 July 2009 (UTC)[reply]

Lagrange multipliers shows the general technique for this sort of problem. HTH, Robinh (talk) 19:45, 14 July 2009 (UTC)[reply]
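As a numerical illustration of the constrained-maximization viewpoint, here is a small sketch (my own, not from the thread) using scipy's SLSQP solver; the starting point and bounds are arbitrary choices.

```python
from scipy.optimize import minimize

objective = lambda v: -v[0] * v[1] * v[2]                 # maximize abc by minimizing -abc
constraint = {"type": "eq", "fun": lambda v: sum(v) - 1}  # a + b + c = 1
res = minimize(objective, x0=[0.2, 0.3, 0.5],
               bounds=[(0, 1)] * 3, constraints=[constraint], method="SLSQP")
print(res.x, -res.fun)    # expect roughly [1/3, 1/3, 1/3] and 1/27 ≈ 0.037
```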
Thanks for all your answers. Say I changed the problem to that of proving where and . Could I use symmetry to prove this inequality, rewriting it as ? --wj32 t/c 01:20, 15 July 2009 (UTC)[reply]
No, are you sure you have read my explanation above?
Symmetry allows you to do things like assuming, without loss of generality, an ordering on the variables (say a ≤ b ≤ c). Since both the condition and the result are, as a whole, symmetrical, and there must be some ordering, you may as well assume that a is the least and c is the greatest. But you can't just change the inequality arbitrarily.
The whole point in such theorems is that to satisfy the condition, if one variable increases, another must decrease. Thus an increase in one summand is offset by a decrease in another. You destroy all that if you keep only one summand - you can change its value by playing with the variables without any consequences.
Anyway, neither of these statements can be valid, since the LHS is unbounded under the condition. -- Meni Rosenfeld (talk) 10:02, 15 July 2009 (UTC)[reply]

regarding Bernoulli trials

Dear Wikipedians:

The question reads "Bernoulli trials with probability p of success and q of failure, what is the probability of 3rd success on 7th trial and 5th success on 12th trial?"

I reasoned that 3rd success on 7th trial means exactly 2 successes in first 6 trials, and 5th success on 12th trial means exactly 1 success among the 8th, 9th, 10th and 11th trials (4 trials in total), therefore my answer to this question is:

(6 choose 2)p²q⁴ + (4 choose 1)pq³.

How sound is my reasoning?

Thanks for the help.

70.31.152.197 (talk) 16:46, 14 July 2009 (UTC)[reply]

First, 3rd success on the 7th trial means exactly 2 successes in the first 6 trials and a success on the 7th trial, so it's (6 choose 2)p³q⁴. Similarly you need a success on the 12th trial, so the second part is (4 choose 1)p²q³. Finally, you want both to happen, so you need to multiply the parts rather than add them. So it should be (6 choose 2)(4 choose 1)p⁵q⁷ = 60p⁵q⁷. -- Meni Rosenfeld (talk) 17:02, 14 July 2009 (UTC)[reply]
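A quick Monte Carlo check of this formula (my own sketch; p = 0.4 and the sample size are arbitrary):

```python
import random
from math import comb

p, trials, hits = 0.4, 200_000, 0
for _ in range(trials):
    succ = [random.random() < p for _ in range(12)]
    third_on_7 = sum(succ[:7]) == 3 and succ[6]
    fifth_on_12 = sum(succ[:12]) == 5 and succ[11]
    hits += third_on_7 and fifth_on_12
print(hits / trials)                                  # empirical estimate
print(comb(6, 2) * comb(4, 1) * p**5 * (1 - p)**7)    # 60 p^5 q^7 from above
```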

Simplifying this sum

Is there a simple expression for this sum?:

1/1² + 1/2² + 1/3² + ... + 1/N²
Bromskloss (talk) 20:46, 14 July 2009 (UTC)[reply]

(pi^2)/6 --pma (talk) 20:56, 14 July 2009 (UTC)[reply]
That's the limit as N tends to infinity. I don't think there's a nice expression for the partial sums. Algebraist 21:00, 14 July 2009 (UTC)[reply]
Oops, too much in a hurry, didn't notice the "N". However, we can easily write an analytic function S(z) such that the sum above is S(N). Could that be of any interest? --pma (talk) 21:07, 14 July 2009 (UTC)[reply]
Maple gives π²/6 − ψ^(1)(N+1), where ψ^(n) is the nth polygamma function. Whether or not you consider that simple is up to you. It's not elementary so I wouldn't consider it simple. Irish Souffle (talk) 21:05, 14 July 2009 (UTC)[reply]
If you want to exhibit your sum as a value of an analytic function at z=N, just write
S(z) = Σ_{n=1}^∞ ( 1/n² − 1/(n+z)² ).
You can further expand each term in a geometric series and rearrange into a power series; you can do it by absolute convergence (however, within a radius of convergence 1). This way in fact you find the quoted polygamma. Added note: the above S(x), restricted to the positive semi-axis, is the unique increasing solution to the functional equation S(x) = S(x−1) + 1/x² for all x > 1. --pma (talk) 21:28, 14 July 2009 (UTC)[reply]
PS: Irish Souffle, I don't agree with your "not elementary hence not simple"; there are elementary things that are not at all simple and simple things that are not at all elementary; in fact usually in math we make things less elementary exactly in order to make them simpler --pma (talk) 21:34, 14 July 2009 (UTC)[reply]
It's simple in the sense that it is nice, neat, and compact, but if he was planning on calculating things by hand I'm not sure that he'd think of it as simple (unless there's an easy way to evaluate the polygamma function at integers; I've never used it before). Irish Souffle (talk) 21:46, 14 July 2009 (UTC)[reply]
It's also called a generalized harmonic number, specifically H_N^(2). The Euler–Maclaurin formula can probably be used to evaluate it numerically. -- Meni Rosenfeld (talk) 22:09, 14 July 2009 (UTC)[reply]
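For a numerical check of the polygamma form quoted above, here is a short mpmath sketch (mine, not from the thread); N = 1000 is arbitrary.

```python
from mpmath import nsum, polygamma, pi

N = 1000
direct = nsum(lambda n: 1/n**2, [1, N])      # the partial sum itself
closed = pi**2/6 - polygamma(1, N + 1)       # pi^2/6 minus the trigamma term
print(direct, closed)                        # both approximately 1.64393...
```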


July 15

Bounded variation

So, I'm working on a qualifier problem, as usual. Here, I just want to know if there is a typo. It asks about a function being of bounded variation on (0, 1):

If f is uniformly continuous on the open interval (0, 1), then f is of bounded variation on (0, 1).

Prove or disprove. See, the thing is, Royden, de Barra, Wikipedia all define bounded variation only on closed intervals. So, is it even defined on an open interval? And, if so, what does it mean? There seems to be some sort of minor/major error on every qualifying exam so I would not be surprised if it's wrong. But, I could also see the definition just being f is BV(a, b) when f is BV[c, d] for every a < c < d < b. StatisticsMan (talk) 01:40, 15 July 2009 (UTC)[reply]

A uniformly continuous function on (0, 1) extends uniquely to a uniformly continuous function on [0, 1], so it might just mean that the latter function is of BV. Algebraist 01:46, 15 July 2009 (UTC)[reply]
Exactly. The variation of f on an open interval I is defined the same way as in the case of closed bounded intervals. If you (SM) are OK with the definitions on closed intervals, you can just say it is the sup of the variation of f over all bounded closed subintervals. For the easy counterexample on (0,1), the hint is: such an f should oscillate a lot, so as to have unbounded variation, yet not too much, because it needs to have limits at 0 and 1. So, look for the right α in f(x) := x^α sin(1/x)..... Note: the definition you (SM) suggest is a weaker thing; it means: f is BV_loc(I), locally BV.--pma (talk) 07:59, 15 July 2009 (UTC)[reply]
Also notice that if f is both continuous and of bounded variation on (0,1), then it is uniformly continuous; in any case, if f is BV on (0,1) then it admits limits at 0 and at 1; the corresponding extension F to [0,1], chosen so that it is continuous at 0 and 1, has of course the same variation as f on (0,1) (the variation of F on [0,x] is continuous at x if F is). So, this gives you another equivalent definition of a BV function f on (0,1): it is the restriction to (0,1) of a BV function on [0,1], which you can assume continuous at the end-points. --pma (talk) 09:11, 15 July 2009 (UTC)[reply]
BV functions are differentiable almost everywhere, so a continuous nowhere-differentiable function will fail spectacularly. Or, without using known properties or standard weird functions, you could build a counterexample by hand by just making a function with a spike of height 1/n centred at 1/n. Algebraist 10:03, 15 July 2009 (UTC)[reply]
nice example... but be careful not to sit on it ;-) --pma (talk) 10:13, 15 July 2009 (UTC)[reply]
I already know that x sin(1/x) is the counterexample, though I do not have a proof. But, I just looked at the page for uniform continuity and I found the Heine-Cantor theorem, which says a continuous function on a compact space is uniformly continuous. If I define f to be x sin(1/x) except at 0, and 0 at 0, then it is continuous on [0, 1] and thus uniformly continuous. It seems pretty clear from the definition that if I now delete the endpoints from the domain then it's still uniformly continuous, as the definition on [0, 1] says that for any epsilon, there exists a delta such that for any x, y with d(x, y) < delta, we have d(f(x), f(y)) < epsilon. In particular, this is true for any x, y in (0, 1). As for bounded variation, I have seen this before: just pick a sequence of points that hit the max of each oscillation. You get a divergent sequence of partial sums, so it's not of bounded variation.
Speaking of that (since I was thinking it was BV at first but it's not), Royden does absolute continuity only with finite sums. But, the Wiki article says "(finite or infinite)". So, my question is, are the three definitions equivalent if you do it with 1) finite sums only, 2) infinite sums only, 3) both? I think they're all equivalent. StatisticsMan (talk) 14:31, 15 July 2009 (UTC)[reply]
Yes, they're all pretty obviously equivalent. Algebraist 14:44, 15 July 2009 (UTC)[reply]
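To see numerically why x·sin(1/x) fails to be of bounded variation, one can sum the jumps between successive peaks; this little sketch (not part of the thread) uses the peak points x_k = 2/((2k+1)π).

```python
import math

def partial_variation(K):
    # Peaks of x*sin(1/x) at x_k = 2/((2k+1)*pi), listed in increasing order in (0, 1).
    xs = [2 / ((2*k + 1) * math.pi) for k in range(K, 0, -1)]
    f = lambda x: x * math.sin(1 / x)
    return sum(abs(f(b) - f(a)) for a, b in zip(xs, xs[1:]))

for K in (10, 100, 1000, 10000):
    print(K, partial_variation(K))    # grows without bound, roughly like a harmonic series
```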

elliptical tube cuts a plane

The cutting line of an elliptical tube and a plane, is it an ellipsis? 88.72.242.144 (talk) 02:17, 15 July 2009 (UTC)[reply]

Yes, unless the axis of the tube is parallel to the plane, in which case you get either nothing, one line, or two parallel lines. --Spoon! (talk) 04:23, 15 July 2009 (UTC)[reply]
Yes, though it's an ellipse, because an ellipsis is something else. —Preceding unsigned comment added by 83.100.250.79 (talk) 16:13, 15 July 2009 (UTC)[reply]
Which is strange enough, given that both come from the same Greek word (ἔλλειψις). — Emil J. 16:56, 15 July 2009 (UTC)[reply]

Thank you Spoon!, Emil and 83.100.250.79 for the quick answers. Do you know where to find a proof of or reference for this result? 88.72.242.144 (talk) 21:24, 15 July 2009 (UTC)[reply]

Well, here's a crude sketch: If the tube were perpendicular to the plane, then the cross section is obviously an ellipse. As you tilt the plane, it just "stretches" the cross section (the cross section as you look down the tube is still the same, but now it lies on a tilted plane, so distances are farther in one direction). A stretched ellipse is still an ellipse, because stretching preserves the sign of the discriminant of a conic section. --Spoon! (talk) 05:01, 16 July 2009 (UTC)[reply]
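The stretching argument can also be checked symbolically; the following sympy sketch (my own, with k as the stretch factor) confirms that the conic discriminant stays negative, so the stretched curve is still an ellipse.

```python
import sympy as sp

x, y, a, b, k = sp.symbols("x y a b k", positive=True)
ellipse = x**2/a**2 + y**2/b**2 - 1      # cross-section seen along the tube
stretched = ellipse.subs(y, y/k)         # tilting the plane stretches one direction by k
poly = sp.Poly(stretched, x, y)
A = poly.coeff_monomial(x**2)
B = poly.coeff_monomial(x*y)
C = poly.coeff_monomial(y**2)
print(sp.simplify(B**2 - 4*A*C))         # -4/(a**2*b**2*k**2) < 0, hence an ellipse
```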
You might find a perusal of Conic sections useful (remembering that a tube is merely a special case of cone). 87.81.230.195 (talk) 00:04, 21 July 2009 (UTC)[reply]

functions

please can anyone give me hints on these questions?

2f(x-1) - f(1/x - 1) = x

what is the value of f(x)?

next....

f(1) = 2005;  f(1) + f(2) + f(3) + ...... + f(n) = n² · f(n),  n > 1

what is the value of f(2004)? —Preceding unsigned comment added by 122.50.137.12 (talk) 12:47, 15 July 2009 (UTC)[reply]

What is the domain of f supposed to be for the first question? Algebraist 12:59, 15 July 2009 (UTC)[reply]
As for the second one, rearranging the equation to (n² − 1)f(n) = f(1) + f(2) + f(3) + … + f(n − 1) gives you a recursive definition of f. In order to solve the recurrence, compute the first few values of the function, and see whether a pattern emerges. (Note that all values of the function are linear in the value of f(1); the picture will be much clearer if you start with f(1) = 1 at first, and only obfuscate it by multiplying by 2005 after you solve it.) Then prove that your pattern is correct by induction on n, and plug in n = 2004 to get the result. — Emil J. 13:15, 15 July 2009 (UTC)[reply]
Better yet, subtracting the equation for n and n − 1 gives
f(n) = n²f(n) − (n − 1)²f(n − 1).
This recurrence is much easier to solve than the one above. — Emil J. 14:58, 15 July 2009 (UTC)[reply]
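Iterating this recurrence with exact rational arithmetic makes the pattern obvious; a small sketch (not from the thread):

```python
from fractions import Fraction

# f(n) = n^2 f(n) - (n-1)^2 f(n-1)  rearranges to  f(n) = (n-1)/(n+1) * f(n-1).
f = Fraction(2005)           # f(1) = 2005
for n in range(2, 2005):
    f = Fraction(n - 1, n + 1) * f
print(f)                     # telescoping gives 2*f(1)/(n*(n+1)), i.e. 1/1002 at n = 2004
```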
As for the first one, substitute 1/x for x to get
2f(1/x − 1) − f(x − 1) = 1/x.
Together with the original equation, you now have a system of two linear equations in two unknowns f(x − 1) and f(1/x − 1). Solve it. — Emil J. 13:34, 15 July 2009 (UTC)[reply]
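The same substitution trick can be carried out mechanically; here is a sympy sketch (mine) treating A = f(x − 1) and B = f(1/x − 1) as the two unknowns.

```python
import sympy as sp

x = sp.symbols("x", nonzero=True)
A, B = sp.symbols("A B")                       # A = f(x-1), B = f(1/x - 1)
sol = sp.solve([sp.Eq(2*A - B, x), sp.Eq(2*B - A, 1/x)], [A, B])
f_shifted = sp.simplify(sol[A])                # f(x-1) in terms of x
print(f_shifted)
print(sp.simplify(f_shifted.subs(x, x + 1)))   # hence f(x) itself
```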

Equation solving

I have just added the Wikiproject Mathematics template to the talk page of Equation solving. The article seems to have been pretty much ignored until now and it needs a lot of work. I have filled in the bits on ratings etc.. If someone wants to do a more official assessment then please do. Yaris678 (talk) 16:44, 15 July 2009 (UTC)[reply]

Wikipedia talk:WikiProject Mathematics is a more appropriate place for such messages than the Reference desk. — Emil J. 16:49, 15 July 2009 (UTC)[reply]
OK. Thanks. I will post it there. Yaris678 (talk) 18:01, 15 July 2009 (UTC)[reply]
On the wider point, am I right in thinking that Wikipedia talk:WikiProject Mathematics is for bringing up issues with mathematics articles, whereas the helpdesk is for questions about mathematics itself? Perhaps that should be made clear at the top of the article. Yaris678 (talk) 18:21, 15 July 2009 (UTC)[reply]
Yes, that is correct. At the top of which article? If you mean the Mathematics Reference desk (which is not an article), then since the header is shared with the other desks, it should probably be discussed in Wikipedia talk:Reference desk (the irony is not lost on me). -- Meni Rosenfeld (talk) 20:20, 15 July 2009 (UTC)[reply]
Yes, by article I mean desk. I have made a post on Wikipedia talk:Reference desk. Thanks, Yaris678 (talk) 12:40, 17 July 2009 (UTC)[reply]

test edit

TeX is dead while the image server is down, I think. Algebraist 20:46, 15 July 2009 (UTC)[reply]

Is this alleged fact that is stated by saying "the image server is down" supposed to be somehow known to the Wikipedia public? Where does one find such information? Michael Hardy (talk) 20:48, 15 July 2009 (UTC)[reply]

There should be a note at the top of the page related to the downage, though it doesn't mention maths rendering specifically: 'Uploads and Thumbnails generation have been temporarily halted while we upgrade our image store.' The best way to have up-to-date technical information is via IRC. Algebraist 20:51, 15 July 2009 (UTC)[reply]

The top of the page is where I'd never think of looking. One edits talk pages at the bottom of the page. Michael Hardy (talk) 20:54, 15 July 2009 (UTC)[reply]

Well, it's the established place for sitewide notices. More details here, btw. Algebraist 20:58, 15 July 2009 (UTC)[reply]

Thank you. I'd never have guessed that "techblog" exists. I do have some vague suspicion that the hardware and software that make Wikipedia work were not actually brought down from Heaven by an archangel at the beginning of Time, but it's only a vague suspicion. Michael Hardy (talk) 21:06, 15 July 2009 (UTC)[reply]

And now the initial line I posted above is getting properly rendered. Maybe we're back to normal. Michael Hardy (talk) 21:12, 15 July 2009 (UTC)[reply]

July 16

Change the subject of equation

How do I solve a V = pi·r²·h problem for r —Preceding unsigned comment added by 71.244.44.98 (talk) 00:13, 16 July 2009 (UTC)[reply]

sorry I meant for h, with r=6 and V=72·pi —Preceding unsigned comment added by 71.244.44.98 (talk) 00:15, 16 July 2009 (UTC)[reply]

Adding section header. I think you should go and talk to your teacher. Rearranging an equation like that is a fundamental bit of algebra that you need to learn properly. --Tango (talk) 12:49, 16 July 2009 (UTC)[reply]

Continuity and differentiability

Show that the function

f(x) = |2x − 3| · [x]  for x ≥ 1

f(x) = sin(pi·x/2)  for x < 1

is continuous but not differentiable at x = 1,

where pi = 180 degrees and [x] is the greatest integer function —Preceding unsigned comment added by 122.174.90.103 (talk) 05:27, 16 July 2009 (UTC)[reply]

Adding section header --Tango (talk) 12:49, 16 July 2009 (UTC)[reply]
If you can let us know what work you have done for this problem, and where you have gotten stuck, then we can help you with solving it. Do you know and understand the definitions of continuous and differentiable? Eric. 216.27.191.178 (talk) 22:22, 16 July 2009 (UTC)[reply]

Unique representation of an element of a local ring

What I want to show is that an element u of Z_{p^n} (p is prime, n is a natural number) can be written uniquely in the form u = u_0 + u_1·p + u_2·p² + ... + u_{n−1}·p^{n−1}, where u_i ∈ Z_p. I vaguely feel that this is something related to p-adic numbers but I never studied those. What I do know is that Z_p is the field of p elements. If someone can point me in the right direction I will be grateful.--Shahab (talk) 09:27, 16 July 2009 (UTC)[reply]

First, your notation does not make much sense, as Z_p is not a subset of Z_{p^n}. Assuming you wanted to write u_i ∈ {0, 1, …, p − 1}, existence follows immediately from the fact that any natural number can be written in a base-p representation. As for uniqueness, the sets {0, 1, …, p − 1}^n and Z_{p^n} have the same finite number of elements, and we have just established that the mapping from the former to the latter sending each sequence (u_0, …, u_{n−1}) to the sum u_0 + u_1·p + ... + u_{n−1}·p^{n−1} is surjective, hence it is also injective. Or you can simply use the fact that the base-p representation of integers is unique; since the sum as an integer is smaller than p^n, and distinct natural numbers below p^n are incongruent modulo p^n, the representation is also unique qua elements of Z_{p^n}. — Emil J. 10:32, 16 July 2009 (UTC)[reply]
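The existence half is just base-p digit extraction; a minimal Python sketch (not from the thread; p, n and the sample element are arbitrary choices):

```python
def base_p_digits(u, p, n):
    """Return u_0, ..., u_{n-1} with u = sum(u_i * p**i) and 0 <= u_i < p."""
    digits = []
    for _ in range(n):
        u, d = divmod(u, p)
        digits.append(d)
    return digits

p, n = 3, 4
u = 58 % p**n                       # a representative of an element of Z_{p^n}
ds = base_p_digits(u, p, n)
print(ds, sum(d * p**i for i, d in enumerate(ds)) % p**n == u)
```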
Thank you. (I am however confused by your statement about my notation.)--Shahab (talk) 13:43, 16 July 2009 (UTC)[reply]
As Emil pointed out, your notation was nonsensical. Z_p is the ring of integers modulo p, while Z_{p^n} is the ring of integers modulo p^n. There's a natural map from the latter to the former, so it makes sense to treat elements of Z_{p^n} as also being in Z_p if you want to. In elementary terms, if you know an integer is 7 mod 9, then you know it is 1 mod 3. However, there is no natural map from Z_p to Z_{p^n} (for n>1), so it doesn't make sense to regard elements of Z_p as being in Z_{p^n}. In elementary terms, if you know an integer is 1 mod 3, then it could be 1, 4, or 7 mod 9, and there's no reason to prefer one above the others. Algebraist 14:14, 16 July 2009 (UTC)[reply]
There seems to be a misunderstanding here. I understand the difference between Z_n and Z_{p^n}. What I don't understand is what notation of mine claims that one is contained in the other. Emil says "Assuming you wanted to write u_i ∈ {0, 1, …, p − 1}" and I say "u_i ∈ Z_p". It's the same thing, isn't it? Thanks for bearing with me.--Shahab (talk) 16:45, 16 July 2009 (UTC)[reply]
The elements of {0, 1, …, p − 1} are integers; a typical element is 2. The elements of Z_p are equivalence classes of integers with respect to congruence modulo p; a typical element of Z_p is the equivalence class of 2, i.e. 2 + pZ. (Furthermore, it's not a terribly good idea to rely on this internal representation, as it's not unimaginable that someone defines it in another way. One can only be sure what the object looks like up to isomorphism.) — Emil J. 16:56, 16 July 2009 (UTC)[reply]
Yes I know that. I was making the assumption that the elements of {0,1,...p-1} are really the equivalence classes {[0],[1],...[p-1]}. That is just a loose way of writing I have seen in books. However this does not clear my doubt as to why you said that I think Z_n is in Z_p^n.--Shahab (talk) 17:18, 16 July 2009 (UTC)[reply]
No one mentioned Z_n, it was Z_p. You wrote (omitting some irrelevant bits): an element u of Z_{p^n} can be written uniquely in the form u_0 + u_1·p + ... + u_{n−1}·p^{n−1}, where u_i ∈ Z_p. Here, you take elements u_i of Z_p, and try to interpret them as elements of Z_{p^n} in the sum. — Emil J. 17:30, 16 July 2009 (UTC)[reply]
Actually, what I wrote two posts above is misleading. {0, 1, …, p − 1} was not supposed to stand for a set of integers, but for the set of ring elements 0, 1, 1 + 1, …, 1 + 1 + ... + 1 (p − 1 ones), which are also commonly denoted by 0, 1, 2, …; the ring in question being Z_{p^n} in the context. — Emil J. 17:40, 16 July 2009 (UTC)[reply]
OK I understand what you are saying. In my defense I was reading from this book here (p. 21) and it does say u_i ∈ Z_p. (This book assumes a lot of knowledge which I sadly don't have.) I get your point though. Thanks--Shahab (talk) 17:56, 16 July 2009 (UTC)[reply]
On thinking more, I realize that the uis could be in , if you interpret juxtaposition as the natural action of on , rather than as multiplication. Algebraist 16:52, 17 July 2009 (UTC)[reply]
How do you intend to define a "natural action" of on elements u of that do not satisfy pu = 0? How is it different from mapping to (using whatever is the "action" on 1), and then doing multiplication in ? — Emil J. 17:48, 17 July 2009 (UTC)[reply]
It does satisfy pu = 0. It's just the natural action of Z on (i.e. nu is the sum of n copies of u), which descends to the quotient Z/pZ because pZ acts trivially. Algebraist 18:13, 17 July 2009 (UTC)[reply]
The annoying thing is that there is no standard notation for the set of the first n natural numbers {0,1,..,n-1}. Von Neumann's " n " is extremely elegant but ambiguous outside ordinals (what is n+m or f(n)?). I have seen [n] used sometimes; that maybe deserves more popularity... So one is tempted to use Z_n, forgetting about its algebraic structure and its quotient nature. After all we do not use different symbols for Z as a set or as a ring. I don't know. --pma (talk) 20:52, 16 July 2009 (UTC)[reply]
Agreed. I have also found distinguishing between Z_n as a group and Z_n as a ring very troublesome. I would like a notation to mean explicitly "the cyclic group of n elements"; I know that chemists use C_n but I find that ugly. (For whatever reason, the additive group comes up rarely enough that the ambiguity there doesn't bother me.) Eric. 216.27.191.178 (talk) 22:17, 16 July 2009 (UTC)[reply]
C_n? My fourth year geometric group theory lecturer used C_n; he was a pure mathematician through and through. Maelin (Talk | Contribs) 23:18, 16 July 2009 (UTC)[reply]
Ah, C_n is standard notation when studying symmetry groups? That would explain where the inorganic chemists got it from. I would prefer something with blackboard-bold, though, myself. Unfortunately is already taken, and I think I've seen used for something else, too. Perhaps we can introduce ?
I'll admit to not being fond of group presentations... feels unnecessarily complicated here. Just gut feeling. Eric. 76.21.115.76 (talk) 05:28, 17 July 2009 (UTC)[reply]
I don't know about it being a "standard" notation, it's just what he used. I don't remember whether I saw it in the literature. I wouldn't expect a blackboard bold thing though, given that the symmetric groups Sn don't get bolded either. Maelin (Talk | Contribs) 07:31, 17 July 2009 (UTC)[reply]
My point is that usually a basic mathematical object admits so many different structures of interest, that having a special notation for it as endowed with each of these structures is just hopeless. I think that a reasonable choice is to reserve a special symbol only for the basic, precise set without structure (e.g. deciding that ); then, at each use, one specifies which structures that object is endowed with, and in which category it is to be seen. But we do not have enough standard symbols to distinguish from as a set, as an ordered set, cyclic group, topological space, dynamical system, topological group, ring, &c. And the same for the elements: I prefer to think that 3 is always 3; not a different thing as element of , , or &c... --pma (talk) 08:10, 17 July 2009 (UTC)[reply]
It's safe to think of 3 as referring to the same thing in Z, Q, etc, since there are natural embeddings of these various rings into the larger ones. But the element 3 in Z_n really has a totally different meaning since there's no embedding of Z_n into any of these others. Rckrone (talk) 16:47, 17 July 2009 (UTC)[reply]
Ah, of course, we can't have separate notation for everything. But I use Z_n-the-ring and Z_n-the-group often enough that I would like notation to distinguish those two cases. Although now that I think of it, the fact that I use it frequently as a ring and as a group is just a consequence of the particular choice of what courses I am taking; someone else might want notation to distinguish the set, someone else might want notation to distinguish the space, etc. Eric. 216.27.191.178 (talk) 19:57, 17 July 2009 (UTC)[reply]
Rckrone, yes, but in fact there is a natural embedding of Z_p in Z_q for p<q; it is not a group homomorphism, but it's there. As I see it, what is totally different is the algebraic structure of Z_p compared to Z_q, but why should this make 3 a different thing as an element of either set? Take for example me: I belong to totally different groups, but I'm always the same person. But of course everybody is free to see it according to his or her way. --pma (talk) 21:49, 17 July 2009 (UTC)[reply]

Rope cutting formula?

My math knowledge is hopeless and this particular problem can be solved by most teenagers even though I am an adult. Consider that there is a rope that is about 30 thousand feet long. It is cut exactly into two equal-length ropes. Those two ropes are cut again into 4 equal ropes. This goes on and on until the ropes are about 3 hundred feet long. The question is: what is the formula that gives the number of cuts needed so that the ropes are around 3 hundred feet long? Maybe the formula will not work when the original rope is too long or too short, I have no idea. —Preceding unsigned comment added by 131.220.46.25 (talk) 12:00, 16 July 2009 (UTC)[reply]

Assuming you mean 1 cut gives two pieces of 15,000 ft and 1 more cut then gives four pieces of 7,500 ft... You are dividing the rope into 2^n parts after n rounds of cutting. 300 ft is 1/100 of 30,000, so you have to choose between 2^6=64 and 2^7=128. -- SGBailey (talk) 13:39, 16 July 2009 (UTC)[reply]
The general solution: with each step you are multiplying the number of ropes by 2 (replacing every single rope with 2 ropes half the size of the original). So, after n steps, you have 2^n equal-sized ropes, of length l = L/2^n, where L is the original length (30 thousand feet, in this case), and you want to solve for n given a particular l. We can rearrange to get:
2^n = L/l
This can be solved for n by the use of a logarithm, to get:
n = log2(L/l)
Since most calculators have a logarithm button, you can just punch this in to get your answer. You may have to use the base conversion formula, if your calculator only has a base 10 logarithm, to get:
n = log(L/l) / log(2)
For your particular numbers (that is, L = 30000 and l = 300), you should get about 6.6. This means, as SGBailey alluded to, that after 6 cuts the ropes will be longer than 3 hundred feet, but after 7 cuts the ropes will be shorter than 3 hundred feet. That is to say, they will never be exactly 3 hundred feet long. --COVIZAPIBETEFOKY (talk) 13:52, 16 July 2009 (UTC)[reply]
Wikipedia seems to be having some difficulties with the mathematical notation. My guess is that the server is just being slow, and the problems will work themselves out eventually. I'm pretty sure I wrote everything correctly. --COVIZAPIBETEFOKY (talk) 13:55, 16 July 2009 (UTC)[reply]
If you start with 30000 feet and cut exactly in half, the sequence of lengths you get is
30000.0,15000.0,7500.0,3750.0,1875.0,937.5,468.75,234.375...

so you never quite get 300. 67.117.147.249 (talk) 16:58, 16 July 2009 (UTC)[reply]
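The formula above fits in a couple of lines of Python (my own sketch):

```python
import math

L, l = 30000, 300
print(math.log2(L / l))                   # about 6.64: 6 halvings are too few, 7 overshoot
print([L / 2**k for k in range(8)])       # the sequence of lengths listed above
```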

Subspace complemented -> Hilbert space

Let X be a Banach space. Suppose every closed subspace M of X has a complementary subspace (that is assumed to be closed); i.e., X is a direct sum of M and some other closed subspace. Does it follow that X is a Hilbert space? In other words, construct a subspace that is not complemented when X is not a Hilbert space (i.e., it doesn't satisfy the polarization identity). Either a textbook on Functional Analysis or Wikipedia probably has an answer, but I'm lazy :) -- Taku (talk) 22:58, 16 July 2009 (UTC)[reply]

Finite-dimensional Banach spaces have complementary subspaces. Algebraist 23:10, 16 July 2009 (UTC)[reply]
Sure :) Assume X is infinite-dimensional in addition. (In fact, in view of the Hahn-Banach theorem, such a counterexample must be infinite-dimensional). -- Taku (talk) 23:12, 16 July 2009 (UTC)[reply]
The answer is yes, but you are not going to find it in every textbook on Functional Analysis. It's Lindenstrauss and Tzafriri's characterization of Hilbert spaces (more precisely: of the Banach spaces that admit an equivalent Hilbert norm; in particular, of course, any finite dimensional space). Notice that "all closed subspaces have a complement" is a topological-vector-space property, which is not affected by re-norming. In other words, a Banach space has a non-complemented closed subspace if and only if it admits no equivalent Hilbert norm (so your statement has to be slightly corrected, as A. already pointed out). --pma (talk) 07:13, 17 July 2009 (UTC)[reply]

Thank you for the information. Right, this is a topological question, not a geometric one (in the sense that one can replace the original norm by an equivalent one). It sounded like this isn't a trivial result. (Somehow I was thinking: since every Banach space contains a copy of, umm, c_0?, you first assume X = c_0, then somehow generalize it. I guess it's probably not that simple.) -- Taku (talk) 10:39, 18 July 2009 (UTC)[reply]

Yes, it's quite a deep theorem. If you are interested, and you want to work on related problems (there are still open variants of the complementary subspace problem), or if you are just curious to learn the techniques, maybe a good starting point is Lindenstrauss and Tzafriri's 2-volume book (Classical Banach spaces I&II). After the work of Banach, the geometry of Banach spaces had a great, increasing development, culminating with the impressive boom of the '70s. In that golden period many of the main problems were solved; still the topic remained active in the subsequent decades, and new spectacular results have been proven, especially in connection with measure theory. --pma (talk) 12:41, 18 July 2009 (UTC)[reply]

July 17

Asking people for their names

OK, this is going to be a really odd question.

Say you've got a person who you don't know the name of, but you've got a list of names it might be, and you're allowed to ask yes-or-no questions. What is the number of questions you've got to ask to be guaranteed of knowing their name?

For example, say their name might be Mario, Luigi, Peach or Toad. You can ask them whether they're each name in turn, and be sure of their name after the third question ("Are you Mario?" "No." "Are you Luigi?" "No." "Are you Peach?" "No." So they must be Toad). But there is a quicker way:

"Are you Mario or Luigi?"

If YES, you can simply ask them if they're Mario, and if NO you can ask them if they're Peach. Either way you're guaranteed to know their name in only two questions.

If you have seven names, Mario, Luigi, Peach, Toad, Bowser, Wario and Daisy, you can ask them:

"Are you Mario, Luigi, or Peach?"

If they say YES, you can ask them each name in turn and so guaranteed to know their name in three questions (Are you Mario, Luigi, or Peach? Are you Mario? Are you Luigi?) If they say NO, you simply do what you did when there were only four names, and so you're also guaranteed to know their name in 3 questions. So the minimum number of questions you've got to ask to be guaranteed of knowing their name is 3. As far as I've figured it out, the sequence for the number of questions for each number of names is 0 for 1 (if there's only one name you don't have to ask any questions!), 1 for 2, 2 for 3, 2 for 4, 3 for 5, 3 for 6, 3 for 7, 3 for 8, and I think 4 for 9 and 4 for 10 but my brain is totally melting so I can't be sure. What I want to know is, is there an easier way to figure this out and does this fit with any mathematical sequences already discovered? Vitriol (talk) 11:30, 17 July 2009 (UTC)[reply]

Put it this way: if you are allowed to ask n questions, how many different n-tuples of answers can you get? (from Y,Y,...,Y to N,N,...,N). It's 2^n, right? So there is no way to determine the name if there are more than 2^n names. But if there are 2^n names, divide them into two equal groups and ask which one contains the name, as you were saying, and in n steps you have it (and a fortiori if there are fewer than 2^n). So the answer is log2(number of names), rounded. It's the basis of information theory...--pma (talk) 11:47, 17 July 2009 (UTC)[reply]
Rounded up. BTW, for reference, the algorithm that you are implementing here is known in computer science as a binary search, and it is the most efficient algorithm possible for this scenario, as pma demonstrated. --COVIZAPIBETEFOKY (talk) 11:55, 17 July 2009 (UTC)[reply]
I think "binary search" refers specifically to the case of searching a sorted list. What we have here is the more general dichotomic search. -- Meni Rosenfeld (talk) 15:14, 17 July 2009 (UTC)[reply]

Residue of exp(1/sin(z)) at z=0.

Is there a simpler way to find the residue of exp(1/sin(z)) at z=0 than plugging 1/sin(z) into the power series for the exponential and then doing long division on the first term, as no other term should include a 1/z term? It's not too hard or anything but I am just wondering. Thanks StatisticsMan (talk) 17:43, 17 July 2009 (UTC)[reply]

There's the residue formula: Res_{z0} f = (1/(2πi)) ∮_C f(z) dz, where C is a closed curve around z0 and f is holomorphic everywhere else inside the curve. Or if you know the order of the pole is n you can use Res_{z0} f = (1/(n−1)!) lim_{z→z0} d^{n−1}/dz^{n−1} [ (z − z0)^n f(z) ], which is on the residue page.
However, it looks like the singularity you're asking about is an essential singularity, so no residue. Rckrone (talk) 18:15, 17 July 2009 (UTC)[reply]
What? Essential singularites have residues, just like any other isolated singularity. Algebraist 18:19, 17 July 2009 (UTC)[reply]
Oh sorry. I guess I was confused about that. I've only ever seen residues discussed in the context of poles. Rckrone (talk) 18:22, 17 July 2009 (UTC)[reply]
I will answer my own question. Consider the power series expansion of 1/sin(z)^n at z = 0. Since sin(z)'s expansion has a z term but no constant term, only for n = 1 will this power series expansion have a 1/z term. So, the coefficient in front of 1/z in the expansion for exp(1/sin(z)) is just the coefficient for 1/z in the expansion for 1/sin(z). And, now we're just dealing with a simple pole so you multiply by z and take the limit to get 1. This, I consider to be simpler than actually doing the long division on the term, though you don't have to go far. StatisticsMan (talk) 19:26, 17 July 2009 (UTC)[reply]


It's not that simple. Say that your argument is OK with res_0(1/sin z) = 1 (it's equivalent to lim sin(z)/z = 1 as z→0). But I fear that after that, you are confusing 1/sin(z) with sin(1/z). Indeed, res_0(1/sin(z)^n) is 0 for even n, but it is not for odd n. You can easily compute it by means of the Lagrange inversion theorem, using the power series expansion of arcsin(z). You should find
res_0( 1/sin(z)^(2m+1) ) = (2m choose m) / 4^m,  for m ≥ 0.
From this you get, expanding the exponential in a series of powers of 1/sin(z):
res_0( exp(1/sin(z)) ) = Σ_{m≥0} (2m choose m) / ( 4^m (2m+1)! ) ≈ 1.086521097...
There are other ways of course; maybe somebody has a more elegant solution.--pma (talk) 20:57, 17 July 2009 (UTC)[reply]
Note. I tried googling the first digits of the decimal expansion, 086521097, and as a matter of fact it corresponds to a telephone number. Actually they say they just moved there and do not have much information about that exp(1/sin(z)), so please do not bother them further. --pma (talk) 08:39, 18 July 2009 (UTC)[reply]
If you wanted to find a simpler representation of this number, surely Plouffe's Inverter would be more effective than Google (though I got no hit in this case). -- Meni Rosenfeld (talk) 20:04, 18 July 2009 (UTC)[reply]
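A direct numerical cross-check of the series is to evaluate the defining contour integral; this mpmath sketch (mine, with an arbitrary small radius of 0.5) should reproduce the digits 1.086521097... quoted above.

```python
from mpmath import mp, exp, sin, pi, j, quad

mp.dps = 30
f = lambda z: exp(1 / sin(z))
r = mp.mpf("0.5")                            # small enough that 0 is the only singularity enclosed
integrand = lambda t: f(r * exp(j*t)) * j * r * exp(j*t)
print(quad(integrand, [0, 2*pi]) / (2*pi*j)) # residue at 0, about 1.0865210970
```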
Good work with the series. This is a hypergeometric series, 1F2(1/2; 1, 3/2; 1/4). Using an identity, this in turn can be rewritten using the modified Bessel function and the modified Struve function as
which can be checked to agree numerically with the integral. The hypergeometric form is the simplest form yet of the solution, though, I think. Fredrik Johansson 08:58, 18 July 2009 (UTC)[reply]
Agree. I see now Maple also returns the hypergeometric form to the series. Talking about a residue of the form , the above computation admits the following generalization (as I write it, it's not exactly a generalization, but I liked the symmetric form...).
Let and be entire functions, with and . So and have local inverses at 0, respectively and . Then we have:
.
Indeed,
because the first series converges uniformly on any small circle around 0. Using Lagrange's
,
we get the above symmetric expression in and .
Since it seems quite a standard computation we could maybe mention it somewhere in residue (complex analysis)#Series methods. BTW, is there a direct way for seeing that is the derivative of some function meromorphic at 0? --pma (talk) 09:50, 18 July 2009 (UTC)[reply]

July 18

Polynomial whose roots are powers of another polynomial

Hello again. This is part of a theorem I am reading on the single Hensel lift of a polynomial. Suppose I am given a polynomial hm which divides x^k − 1 as well. By Hensel's lemma I can construct a polynomial h(x) which is equivalent to it mod p^m and which divides x^k − 1 too, but mod p^(m+1). If we let x be a root of hm and y = x + p^m·i a root of h, then we have x^k = 1 + p^m·e, as hm divides x^k − 1. Also y^p = x^p and y^(kp) = 1. All this is fine. My book now claims (and so does the link) that if I consider another polynomial hm+1 whose roots are the pth powers of the roots of h, then these roots coincide mod p^m with those of hm. I can't justify this statement. I'll be grateful for any help. Thanks--Shahab (talk) 10:17, 18 July 2009 (UTC)[reply]

Irreducibility of multivariate polynomials

(redirect from sci desk) I cannot find anything about that. I can think of two reasons for this: either it's utterly complicated and not well understood or it can be so easily reduced to the question of irreducibility of polynomials of a single variable that no one cares to tell. What is it? 93.132.138.254 (talk) 09:35, 18 July 2009 (UTC) —Preceding unsigned comment added by 83.100.250.79 (talk) [reply]

Start maybe from here : Algebraic geometry.... --pma (talk) 11:47, 18 July 2009 (UTC)[reply]
Ah, you're right! I had already guessed this had something to do with mathematics somehow! 93.132.138.254 (talk) 14:44, 18 July 2009 (UTC)[reply]
My understanding was that Algebraic geometry primarily studied polynomials over algebraically-closed fields. What about polynomials over general rings? Can you provide any particular insight there? Eric. 76.21.115.76 (talk) 21:30, 18 July 2009 (UTC)[reply]
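In practice, computer algebra systems factor multivariate polynomials over Q directly; a small sympy sketch (mine, with arbitrary example polynomials):

```python
import sympy as sp

x, y = sp.symbols("x y")
for expr in (x**2 - y**2, x**2 + y**2, x**2 + y**2 + 1):
    content, factors = sp.factor_list(expr)             # factorization over QQ
    irreducible = len(factors) == 1 and factors[0][1] == 1
    print(expr, "->", sp.factor(expr), "| irreducible over QQ:", irreducible)
```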

How does this work?

How does this work? I recently had it emailed to me and it seems to work. Thanks. Computerjoe's talk 22:33, 18 July 2009 (UTC)[reply]

Warning: link plays unsolicited audio. Algebraist 22:34, 18 July 2009 (UTC)[reply]
Let your initial number have first digit x and second y, so the number is 10x+y. Subtracting the sum of digits from this gives 9x. So all the page has to do is label all multiples of 9 with the same option and it wins. Algebraist 22:37, 18 July 2009 (UTC)[reply]
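A brute-force confirmation of this explanation (my own sketch):

```python
# For every two-digit number, (10x + y) - (x + y) = 9x, always a multiple of 9.
results = {(10*x + y) - (x + y) for x in range(1, 10) for y in range(10)}
print(sorted(results))    # multiples of 9 only, so labelling all of them alike always works
```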
Ah okay. Thanks. Computerjoe's talk 22:46, 18 July 2009 (UTC)[reply]

July 19

Extrema Finding and Integral Transforms

Is there a transform which "simplifies" finding of minima or maxima? Or to put the question more generally, what are the "frequency domain" analogues for extrema finding for various transforms, such as the Fourier, Laplace, and Mellin transforms? (Or any others, if a straightforward analogue exists.) I'm looking at a class of non-trivial minimization problems, and am wondering if a transform might help with conceptualizing them, much like the Fourier transform helps with conceptualizing wave behavior with its time domain/frequency domain duality. -- 76.201.158.47 (talk) 00:37, 19 July 2009 (UTC)[reply]