Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Grey1618 (talk | contribs) at 06:08, 6 February 2009 (Beat Frequency for Pulses: new section). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the mathematics section
of the Wikipedia reference desk.
How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

January 31

formula to calculate pi

this formula, which has been given by the great Indian mathematician Srinivasa Ramanujan, is the cause of my headache. I am an Indian and without understanding this formula, I won't be a true Indian. Please explain this for a ninth-grade student, and this also

Do you just want an explanation of the meaning of the terms in the formulae, or do you want an explanation of why the formulae are actually true? The first is substantially easier than the second. Algebraist 01:49, 31 January 2009 (UTC)[reply]

Even if I did understand Ramanujan's formulas I would probably not be a true Indian. :-) The formula components are the summation sign Σ, the square root sign √, the factorial sign !, the exponentiation notation, and the pi symbol π. Click on the colored words to find explanations. You may like to check the formulas numerically, for instance using the J (programming language). The left-hand side is

  %o.1
0.31831

and five terms of the right hand side is

  ((2*%:2)%9801)*+/(!4*k)*(1103+26390*k)%((!k)^4)*396^4*k=.i.5
0.31831

the terms of the sum decrease rapidly:

  ((2*%:2)%9801)*(!4*k)*(1103+26390*k)%((!k)^4)*396^4*k=.i.5
0.31831 7.74334e_9 6.47985e_17 5.7575e_25 5.30811e_33

Bo Jacoby (talk) 08:00, 31 January 2009 (UTC).[reply]
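For readers without J, the same check can be sketched in Python (floating point only; the series is Ramanujan's 1/π = (2√2/9801) Σ (4k)!(1103+26390k)/((k!)⁴ 396⁴ᵏ)):

```python
import math

def ramanujan_inv_pi(terms: int) -> float:
    """Partial sum of Ramanujan's series for 1/pi."""
    prefactor = 2 * math.sqrt(2) / 9801
    total = 0.0
    for k in range(terms):
        total += (
            math.factorial(4 * k) * (1103 + 26390 * k)
            / (math.factorial(k) ** 4 * 396 ** (4 * k))
        )
    return prefactor * total

# Each term adds roughly eight correct digits:
print(ramanujan_inv_pi(1))  # agrees with 1/pi to about 7 decimal places
print(1 / math.pi)
```

Two or three terms already exhaust double precision, which is the point pma makes below about how fast these series converge.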

Some further remarks: The first formula was discovered by Ramanujan in 1910 and published in 1914, with no proof, as was his habit. The first proof appeared in a 1987 book by Jonathan and Peter Borwein, where they reconstructed Ramanujan's work on the subject. The other formula is not due to Ramanujan, but to the Chudnovsky brothers, following Ramanujan's method. The point of both formulae is that they converge very quickly, as Bo Jacoby shows here above. For instance, 4 terms of the second series already give pi with more than 50 correct decimal digits. It was used by its authors in the '80s to compute 4 billion digits of pi. (If you ask why: it seems that computing digits of pi is a sort of competition. I do not know what the current record is; it was over 1 trillion around 2000, by the Japanese mathematician Kanada and his team.) pma (talk) 17:26, 31 January 2009 (UTC)[reply]

Euler's proof that god exists

This is an excerpt from wiki's page on Leonhard Euler:

There is a famous anecdote inspired by Euler's arguments with secular philosophers over religion, which is set during Euler's second stint at the St. Petersburg academy. The French philosopher Denis Diderot was visiting Russia on Catherine the Great's invitation. However, the Empress was alarmed that the philosopher's arguments for atheism were influencing members of her court, and so Euler was asked to confront the Frenchman. Diderot was later informed that a learned mathematician had produced a proof of the existence of God: he agreed to view the proof as it was presented in court. Euler appeared, advanced toward Diderot, and in a tone of perfect conviction announced, "Sir, (a+b^n)/z = x, hence God exists—reply!". Diderot, to whom (says the story) all mathematics was gibberish, stood dumbstruck as peals of laughter erupted from the court. Embarrassed, he asked to leave Russia, a request that was graciously granted by the Empress. However amusing the anecdote may be, it is apocryphal, given that Diderot was a capable mathematician who had published mathematical treatises.[1]

  1. ^ Brown, B.H. (May 1942). "The Euler-Diderot Anecdote". The American Mathematical Monthly. 49 (5): 302–303. doi:10.2307/2303096.; Gillings, R.J. (February 1954). "The So-Called Euler-Diderot Incident". The American Mathematical Monthly. 61 (2): 77–80. doi:10.2307/2307789.

"Sir, (a+b^n)/z = x, hence God exists."

What does that mean? Was it just a joke by Euler to humiliate Diderot, and has no meaning, or does it mean something else? Because no matter how many times I try to interpret it, the sentence makes no sense to me. Could anyone clarify this for me?? Johnnyboi7 (talk) 05:36, 31 January 2009 (UTC)[reply]

Isn't this anecdote taken from Bell's booklet "Men of Mathematics"? I've also been puzzled by its meaning, but first I'd like to check the sources. If it's just the author's version, as I suspect, things could be much different, and easier. I'll have a look at the article you quoted...--pma (talk) 07:34, 31 January 2009 (UTC) PS: Bell quotes the anecdote from De Morgan's book "A Budget of Paradoxes". PPS: I have read the interesting Gillings article you mentioned [1], which reports the tradition of the anecdote, going back to Thiébault's version (1804), which is probably the origin. (Note that initially the denominator had n instead of z, not that this makes the use of the formula any less nonsensical.) --pma (talk) 09:03, 31 January 2009 (UTC)[reply]
Now that you have convinced me that God exists, I'm going to confront Richard Dawkins and say to his face:

"Sir, (a+b^n)/z = x, hence God exists."

The look on Richard Dawkins' face when I say this will be priceless. 122.107.205.162 (talk) 10:24, 31 January 2009 (UTC)[reply]
The difficult thing is to organize everything as a public event, as Catherine II was able to do, according to the anecdote. Of course, she was particularly interested, as it was God who gave her the kingdom of all the Russias (according to her and to her colleague kings all around Europe). Euler himself was interested, as he got a salary from Catherine (and not from the French Republic, which paid for Diderot, of course). Just to say that nothing has really changed so much. For instance, today a mathematician could gain some funds by proving with mathematical authority that the economy is governed by suitable mathematical rules (is it so different from (a+b^n)/z=x?). Personally, I suspect that it's always been the same story, in paleolithic times, in Catherine's times, and today: just brute force proofs, so to speak. --pma (talk) 16:37, 31 January 2009 (UTC)[reply]
To answer the original question, Euler's alleged quote is indeed nonsense. The mathematical sentence is a statement regarding the relationship between a, b, n, z, and x, but in this context these variables have no meaning, so the sentence is meaningless. It would be as if I had said "I am writing a novel where Jane's second cousin is named Richard" -- I've told you absolutely nothing about my novel, because you don't know anything about Jane and Richard, so knowing that they are second cousins is useless. Eric. 131.215.45.82 (talk) 23:32, 31 January 2009 (UTC)[reply]
Indeed and, since it is just nonsense, there is no possible logical retort (other than pointing out that it's just nonsense, which is never very impressive). --Tango (talk) 00:42, 1 February 2009 (UTC)[reply]
Well, we can't reject a proof as nonsense just because you do not understand it. Let's say: "the author should fill in some passages that are not completely clear, or provide a reference for them. Some variables are not defined. The existence result is in any case quite poor as it stands, and calls for further properties of the solution found". pma (talk) 01:44, 1 February 2009 (UTC)[reply]
So the ruling monarch comes to you and says - "The guy Diderot is a pain in the ass - please prove to him that God exists so he'll shut up about the atheism thing."...Well, when the ruling monarch tells you to do something - it's generally a good idea to put other matters aside and attend to it right away. Euler knows he can't prove the existence of God - but he knows that Diderot doesn't know squat about math - so he writes down any old mathematical-looking gibberish that's sufficiently complicated that nobody is going to argue about it and challenges Diderot to prove that it's NOT true. With the onus suddenly on him - and with no knowledge whatever about math - poor Diderot can neither challenge nor disprove Euler's assertions. This is nothing to do with math or religion - it's a piece of clever social engineering. SteveBaker (talk) 05:36, 1 February 2009 (UTC)[reply]

We have to rely on mathematicians when we do not understand a mathematical proof. So it is sad if Euler really testified that he had a proof of God's existence. I do not understand Andrew Wiles' proof of Fermat's last theorem, and perhaps some day it will be published that an error in Wiles' proof is found and that Fermat's theorem is false after all. SteveBaker's argument applies to Wiles as well: more is earned by providing a proof than by trying in vain. Bo Jacoby (talk) 18:12, 1 February 2009 (UTC).[reply]

The crucial point here is that Euler basically posed a question to Diderot that he couldn't answer. Euler, likewise, probably believed that it was impossible to prove God's existence, and ALSO believed it impossible to disprove. The question of God's existence is an unanswerable question, based on the extent of knowledge and intelligence given to humans. This is ironic, because Diderot had no problem advocating atheism, which faces that same problem. Euler's little gag exposed his hypocrisy.

Localization Morphism

Hello all. I am currently reading a book on rings. Unfortunately the book is assuming a lot of basic knowledge which I possess only in bits and parts. In an example for showing that a morphism between local rings is not necessarily a local morphism (i.e. doesn't map the maximal ideal into the maximal ideal) the book says:

Let A be a local ring with a prime ideal P, such that P ≠ M, where M denotes the unique maximal ideal. If we denote by φ : A → S⁻¹A the localization morphism with respect to the multiplicative system S=A\P, then φ is not local.

My problem is that I do not understand what a localization morphism is (how it is defined, and what idea it conveys). I believe that S⁻¹A has S⁻¹P as its unique maximal ideal, but that's about all I understand here. Any help will be appreciated.--Shahab (talk) 14:35, 31 January 2009 (UTC)[reply]

A_P is the ring formed from A by adding multiplicative inverses for every element not in P. Thus the elements are of the form a/b for a in A and b in A\P. There is a natural map from A into A_P sending a to a/1. This is the localization map. The canonical example is to take A to be the integers and P={0}. Then A_P is the rationals and the localization map is the natural embedding of Z into Q. Algebraist 15:22, 31 January 2009 (UTC)[reply]
Thanks. I don't quite understand your first sentence though. Shouldn't it be: A_P is the ring formed from A by multiplying A by multiplicative inverses for every element not in P. Also if m is in P it is going to be mapped to m/1. In that case it is not a unit and so possibly in the maximal ideal of A_P. How can I conclude that the morphism isn't local?--Shahab (talk) 15:45, 31 January 2009 (UTC)[reply]
Perhaps 'adjoining' would have been better than 'adding'. On your last point, yes of course P is mapped into the maximal ideal of A_P. The point is that M is not (we're assuming here that M ≠ P). Algebraist 15:50, 31 January 2009 (UTC)[reply]
An explicit example is A = { a/b : b odd, a,b in Z }, P = 0, M = 2A = { a/b : a even, b odd, a,b in Z }, A_P = Q = { a/b : b nonzero, a,b in Z }, and φ : A → A_P : x ↦ x. The maximal ideal of Q is 0, and the image of M under φ is much larger than 0. JackSchmidt (talk) 16:14, 31 January 2009 (UTC)[reply]
Thanks--Shahab (talk) 17:09, 31 January 2009 (UTC)[reply]
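JackSchmidt's example can be played with concretely in Python using exact rationals (a sketch; the helper names `in_A` and `in_M` are my own). A is the ring of fractions with odd denominator, M = 2A is its maximal ideal, and the localization map into Q is just inclusion, under which the non-unit 2 becomes a unit:

```python
from fractions import Fraction

def in_A(q) -> bool:
    """q lies in A = Z localized at 2: odd denominator in lowest terms."""
    return Fraction(q).denominator % 2 == 1

def in_M(q) -> bool:
    """q lies in the maximal ideal M = 2A: even numerator, odd denominator."""
    q = Fraction(q)
    return q.denominator % 2 == 1 and q.numerator % 2 == 0

# 2 lies in M, so it is not a unit of A (its inverse 1/2 falls outside A)...
assert in_M(Fraction(2)) and not in_A(Fraction(1, 2))
# ...but under the localization map A -> Q (plain inclusion) it becomes a unit,
# so the image of M is not contained in the maximal ideal (0) of Q.
```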

elementary graph theory

(This isn't the question... just background...) I was looking at a problem that introduced dropping a node from a connected graph while ensuring that the graph remains connected. My first thought was to have the dropped node add connections between all neighboring nodes, but that will be nasty if there are, say, 100 neighbors. So, I thought about the minimum number of edges required to ensure connectivity. For 1 neighbor, no edges. For 2 neighbors, 1 edge. For 3 neighbors, 2 edges. For four neighbors, 3 edges (wrong - I noticed it is 4 later). So, it is n-1. Well, I then thought about five neighbors. It takes 7 edges to ensure connectivity. Then, I realized that I don't need this at all and went on to the next step.

(This is the question...) Is there a common proof for the minimum number of edges required to connect n nodes? I don't need it, but now the idea is stuck in my head and I have a lot more pressing things to work on. -- kainaw 22:23, 31 January 2009 (UTC)[reply]

There are many ways of showing that n-1 edges are required to connect n vertices. I don't know which, if any, is most common. The only textbook I have to hand is Bollobás's Modern Graph Theory, which does it by observing that the two algorithms for finding spanning trees he's given obviously make a graph with n-1 edges. Algebraist 22:33, 31 January 2009 (UTC)[reply]
I know that n-1 edges can create connectivity, but that doesn't ensure connectivity. With 4 nodes, ABCD, I can have vectors AB, BC, AC. That is n-1, but it is not connected. I must have 4 vectors to ensure connectivity. With 5 nodes, I must have 7 vectors. I was wondering about ensuring connectivity. -- kainaw 00:52, 1 February 2009 (UTC)[reply]
Oh, I see. We had that question here a little while ago. The answer is that the disconnected n-vertex graph with the most edges is a complete graph on n-1 of the vertices plus an extra isolated vertex. Thus (n-1)(n-2)/2+1 edges are required to ensure connectivity: not much of an improvement on having all edges. Algebraist 00:57, 1 February 2009 (UTC)[reply]
Previous discussion. Algebraist 00:58, 1 February 2009 (UTC)[reply]
Thanks. Luckily, all the nodes in the program I was writing have unique IDs. So, when dropping a node with n neighbors, I only need n-1 vectors. If I consider the neighbors a line of unconnected nodes and put a vector between each pair along the line, I've ensured connectivity. The unique IDs make it very easy to do that. -- kainaw 01:10, 1 February 2009 (UTC)[reply]
For completeness, it should be noted that you are assuming the graph is simple - that is, contains no self-loops or duplicated edges. --Tango (talk) 20:56, 1 February 2009 (UTC)[reply]
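Algebraist's threshold (n-1)(n-2)/2 + 1 can be confirmed by brute force for small simple graphs (a quick sketch; `min_edges_forcing_connectivity` is a name I made up):

```python
from itertools import combinations

def is_connected(n, edges):
    """Depth-first search from vertex 0 over an undirected edge list."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == n

def min_edges_forcing_connectivity(n):
    """Smallest m such that EVERY simple n-vertex graph with m edges is connected."""
    possible = list(combinations(range(n), 2))
    for m in range(len(possible) + 1):
        if all(is_connected(n, es) for es in combinations(possible, m)):
            return m

# Matches (n-1)(n-2)/2 + 1: the worst case is K_{n-1} plus an isolated vertex.
print([min_edges_forcing_connectivity(n) for n in range(2, 6)])  # [1, 2, 4, 7]
```

The n = 5 value 7 is exactly what kainaw found by hand above.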

Modular arithmetic/number theory

Hi. For some work I'm doing, I need to work with PSL(3,2ⁿ). Trying to find the centre of SL(3,2ⁿ) involves solving the equation

x³ ≡ 1 (mod 2ⁿ)

which some straight calculations for low n show to be true only for x=1. Is this true for any n? I was never any good at number theory...SetaLyas (talk) 23:28, 31 January 2009 (UTC)[reply]

Yes. The group of units of Z mod 2ⁿ has 2ⁿ⁻¹ elements, so every element has order a power of 2. If x³=1, then x must have order dividing 3, so the order must be 1, so x is 1. Algebraist 23:30, 31 January 2009 (UTC)[reply]
Wow, thanks for the speedy answer! SetaLyas (talk) 00:21, 1 February 2009 (UTC)[reply]
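Algebraist's argument is easy to confirm by direct search (a sketch; the units mod 2ⁿ are exactly the odd residues):

```python
def cube_roots_of_unity_mod_2n(n):
    """All units x of Z/2^n with x^3 = 1 (units mod 2^n are the odd residues)."""
    m = 2 ** n
    return [x for x in range(1, m, 2) if pow(x, 3, m) == 1]

# Only the trivial cube root survives, for every modulus checked:
print(all(cube_roots_of_unity_mod_2n(n) == [1] for n in range(1, 16)))  # True
```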

Vectors & Potentials

A particle at position vector r experiences a force (aR⁻³ + bR⁻⁴)r, where R=|r|. How would one find the function (say V(r)) of the potential in this case? I know with 1-D work done it's just the integral of F with respect to x, but how does one approach it in 3 dimensions? Can you simply integrate the function before the r with respect to R and ignore the r to get −(a/(2R²) + b/(3R³))r? It seems horribly wrong to me to just ignore the position vector in the integral but I'm not honestly sure where to go otherwise - what's the correct method?

On a similar vector-y calculus-y note, how would one differentiate 1/R with respect to time, when again R=|r| for the position vector r?

Thanks for the help,

131.111.8.104 (talk) 23:40, 31 January 2009 (UTC)Zant[reply]

While the force is a vector, the potential is a scalar. As the force is in the radial direction and depends numerically on the distance only and not on the direction, it is the negative gradient of a potential which also depends on the distance only. The potential −a/(2R²)−b/(3R³) will do. Bo Jacoby (talk) 08:47, 2 February 2009 (UTC).[reply]
Sorry about the terse response. Unless I am confused (a strong possibility), the first question has no solution for nonzero b. See conservative vector field and scalar potential for information. See scalar potential#Integrability conditions for how to calculate the potential -- it requires a line integral.
For the second question, use the chain rule. You may find R² = r·r helpful. (See dot product#Derivative for taking the derivative of a dot product.) Eric. 131.215.158.184 (talk) 09:38, 2 February 2009 (UTC)[reply]
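The chain-rule computation for the second question can be sketched like this (my reconstruction in LaTeX; bold r is the position vector, R = |r| its magnitude):

```latex
R^2 = \mathbf{r}\cdot\mathbf{r}
\;\Longrightarrow\;
2R\,\dot{R} = 2\,\mathbf{r}\cdot\dot{\mathbf{r}}
\;\Longrightarrow\;
\frac{d}{dt}\!\left(\frac{1}{R}\right)
  = -\frac{\dot{R}}{R^{2}}
  = -\frac{\mathbf{r}\cdot\dot{\mathbf{r}}}{R^{3}}.
```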


February 1

Power Series Expansion

I'm trying to find an approximation for the following formula:

and I want to "obtain by expanding kT as a power series in ε", the approximation

I've expanded the exponent, which I presume is the correct approach, but there are so many ways it seems I could go from there - dividing by kT, subtracting kT from each side and so on. I've got as close as but no better - can anyone see what I should be doing? On an unrelated formatting note, how come my first LaTeX formula is smaller than the other two?

Thanks,

Spamalert101 (talk) 00:02, 1 February 2009 (UTC)Spamalert[reply]

The first formula is smaller because it only contains simple symbols so can be displayed in HTML. The others contain more complicated symbols (the fractions, probably), so have to be done as an image, which for some reason is always bigger. I'm a little confused by your main question, though - have you copied out the first formula incorrectly? That formula expanded as a power series in epsilon is simply , no approximation. I can't see any way you can get any higher powers from that formula - it's just a polynomial in epsilon (with complicated, but constant, coefficients). --Tango (talk) 00:40, 1 February 2009 (UTC)[reply]
Tango, I think you got confused. You need to solve for kT. That means you shouldn't have kT anywhere on the right side; it should appear only on the left side. You can't do that in closed form, but you can give as many terms of the power series as you want. See below.... Michael Hardy (talk) 02:41, 1 February 2009 (UTC)[reply]

OK, start with

Separate the two variables:

Expand both sides as power series:

Differentiate with respect to ε:

Since u = 0 when ε = 0, setting ε to 0 gives us

Differentiating again (applying the product rule to the right side), we get

When ε = 0 then u = 0 and du/dε = 2, so we have

Therefore

The power series we seek is

where a, b, c, d, e, ... are the values of the 0th, 1st, 2nd, 3rd, 4th, ... derivatives of u with respect to ε at ε = 0. So a = 0, b = 2, c = −4/3, and that gives us

Michael Hardy (talk) 01:51, 1 February 2009 (UTC)[reply]

The general theory behind Michael's approach can be found at Lagrange inversion theorem. Another approach is to find a contraction mapping. Start from Michael's step

Rearrange it like this:

Call the right side F(u). Now define a sequence

You will find the sequence converges at least one term per iteration. McKay (talk) 09:32, 1 February 2009 (UTC)[reply]

On the LaTeX issue: try \textstyle and \scriptstyle. --pma (talk) 15:01, 2 February 2009 (UTC)[reply]

Number sequence help

What is the next number in this sequence (thankfully this isn't homework)

1 11 21 1211 111221 312211 ?

thanks —Preceding unsigned comment added by 70.171.234.117 (talk) 07:59, 1 February 2009 (UTC)[reply]

Haha. This is an old riddle. Each term describes the previous term. How would you say 111221? It has three ones, two twos, and one one, which gives you 312211. If it is any consolation, I had to have this one explained to me too when I first saw it. Anythingapplied (talk) 08:38, 1 February 2009 (UTC)[reply]
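The "describe the previous term" rule is mechanical enough to code up (a short sketch):

```python
from itertools import groupby

def look_and_say(s: str) -> str:
    """Read off each run of identical digits as '<count><digit>'."""
    return "".join(str(len(list(run))) + digit for digit, run in groupby(s))

print(look_and_say("111221"))  # 312211
print(look_and_say("312211"))  # 13112221 -- the next term of the sequence
```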

wow my friend really did me over then... this isn't even a mathematical sequence at all -__-

Not sure why it's disqualified from being mathematical, but the sequence does appear in the Encyclopedia of Integer Sequences. --Ben Kovitz (talk) 22:18, 1 February 2009 (UTC)[reply]

just for fun, what would be a quadratic function that would actually produce:

f(1) = 1, f(2) = 11, f(3) = 21, f(4) = 1211, f(5) = 111221, f(6) = 312211

? —Preceding unsigned comment added by 70.171.234.117 (talk) 08:52, 1 February 2009 (UTC)[reply]

I very much doubt there is anything as simple as a "quadratic function" for this sequence. For more information see look-and-say sequence. Gandalf61 (talk) 09:22, 1 February 2009 (UTC)[reply]
If you want a polynomial that will go through a given 6 points then, in general, you need at least a quintic (degree 5). --Tango (talk) 13:39, 1 February 2009 (UTC)[reply]
Polynomial interpolation would give you a polynomial of degree at most 5.--Shahab (talk) 13:48, 1 February 2009 (UTC)[reply]
According to [2], the unique polynomial of least degree that fits those six points is -(11597/6)x^5+(100285/3)x^4-(416905/2)x^3+(1766885/3)x^2-(2247664/3)x+337211. I haven't checked whether that's right. Black Carrot (talk) 16:57, 1 February 2009 (UTC)[reply]
Indeed, when I say that in general you need at least degree 5 I mean that there is a way of getting a degree 5 or lower solution for all such problems. You can, however, do it with higher degree if you like (although the solution ceases to be unique), that's why it's "at least" not "precisely". --Tango (talk) 20:52, 1 February 2009 (UTC)[reply]

LOL sorry I meant polynomial function not "quadratic". Thanks anyways —Preceding unsigned comment added by 70.171.234.117 (talk) 18:17, 1 February 2009 (UTC)[reply]

Probability function

Hi there - was hoping to get a hand with this question, I'm useless with probability and it's really doing my head in!

N doctors go into a meeting, leaving their labcoats at the door (they all have coats). On leaving the meeting they each choose a coat at random - what is the probability k doctors leave with the correct coat?

Would I be right in thinking that you have the number of selections of k doctors with the correct coats, multiplied by the number of arrangements of the remaining doctors to wrong coats, over n!? If so, how do you find the latter?

Thanks a lot, 131.111.8.98 (talk) 09:59, 1 February 2009 (UTC)Mathmos6[reply]

My knowledge about probability is limited but I'd say you have a binomial distribution here. So the answer should be --Shahab (talk) 11:01, 1 February 2009 (UTC)[reply]

Oh, I think it's not the binomial distribution after all, since the doctors will most certainly pick up coats one by one, so the probability of success changes from trial to trial.--Shahab (talk) 11:19, 1 February 2009 (UTC)[reply]
It's more a problem of enumeration: what you want are the rencontres numbers (for the probability, of course, divide by n!). You also have to decide whether you mean "exactly k" or "at least k", the two answers being immediately related to each other. --pma (talk) 12:58, 1 February 2009 (UTC)[reply]
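Numerically, the rencontres probabilities fall out of the derangement recurrence D(n) = (n-1)(D(n-1) + D(n-2)) (a sketch; `prob_exactly_k` is my own name):

```python
from math import comb, factorial

def derangements(n: int) -> int:
    """D(n): permutations of n items with no fixed point."""
    a, b = 1, 0  # D(0), D(1)
    for i in range(2, n + 1):
        a, b = b, (i - 1) * (a + b)
    return b if n >= 1 else a

def prob_exactly_k(n: int, k: int) -> float:
    """Probability that exactly k of n doctors grab their own coat."""
    return comb(n, k) * derangements(n - k) / factorial(n)

# The k = n-1 case is impossible: if all but one coat is right, so is the last.
print(prob_exactly_k(5, 4))  # 0.0
```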

We have a Wikipedia article about this problem: rencontres numbers. Michael Hardy (talk) 17:42, 1 February 2009 (UTC)[reply]

There is also an article on rencontres numbers, which may be interesting too.--pma (talk) 10:57, 2 February 2009 (UTC)[reply]

Zero divisors in polynomial rings

Hello all. I am trying to prove the following theorem: Let f(x) be a polynomial in R[x] where R is a commutative ring with identity and suppose f(x) is a zero divisor. Show that there is a nonzero element a in R such that af(x)=0.

Now I start by letting f=(a_0,a_1,...,a_n) and g=(b_0,b_1,...,b_m) where g is of least positive degree such that fg=0. I can see that a_n b_m=0 from here and so I can conclude that a_n g must be zero (for else a_n g would contradict g's minimality). The hint in the book I am reading asks to show that a_{n-r} g=0 where 0≤r≤n. Equating the next coefficient in fg=0 gives me a_n b_{m-1} + a_{n-1} b_m = 0 but I can't figure out what to do next.

Can anyone help please?--Shahab (talk) 10:12, 1 February 2009 (UTC)[reply]

If a_n g is the zero polynomial, what does this tell you about each a_n b_k (0 ≤ k ≤ m)? So what does this tell you about a_{n-1} b_m? And what can you conclude about a_{n-1} g? And if you repeat this argument, what can you conclude eventually about a_{n-r} g? Gandalf61 (talk) 11:19, 1 February 2009 (UTC)[reply]
Thanks. I proved it.--Shahab (talk) 11:31, 1 February 2009 (UTC)[reply]
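The theorem being proved here (McCoy's theorem) can be illustrated by a tiny computation over Z/4Z, where f = 2x + 2 is a zero divisor and the constant a = 2 annihilates every coefficient (a sketch; both helper names are made up):

```python
def polymul_mod(f, g, m):
    """Multiply coefficient lists (constant term first) over Z/mZ."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % m
    return out

def annihilating_constant(f, m):
    """A nonzero a in Z/mZ with a*f = 0, which McCoy's theorem promises
    to exist whenever f is a zero divisor in (Z/mZ)[x]."""
    for a in range(1, m):
        if all(a * c % m == 0 for c in f):
            return a
    return None

f = [2, 2]  # f(x) = 2x + 2 in (Z/4Z)[x]
print(polymul_mod(f, [2], 4))       # [0, 0]: the constant 2 kills f
print(annihilating_constant(f, 4))  # 2
```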

what is the most important mathematical operation

if you could only have 1 mathematical operation, which one would you have? —Preceding unsigned comment added by 82.120.227.157 (talk) 14:36, 1 February 2009 (UTC)[reply]

I'd go with addition. As long as you restrict yourself to the integers, multiplication is just repeated addition and exponentiation is just repeated multiplication, so if you have addition you can do all three, it just takes longer. Even if you work in larger number systems, or even things without numbers, most of the standard operations are ultimately based on addition (e.g. in the rational numbers, multiplication of fractions is defined in terms of multiplication of integers, which is defined in terms of addition). --Tango (talk) 14:42, 1 February 2009 (UTC)[reply]
I would have to agree, addition is most important. However, it should be stated that only integer exponentiation is repeated multiplication; non-integer exponentiation gets rather more complicated. -mattbuck (Talk) 14:59, 1 February 2009 (UTC)[reply]
I did state that... --Tango (talk) 20:50, 1 February 2009 (UTC)[reply]
You can only get multiplication, exponentiation, etc. from addition if you allow yourself recursion, which for this reason I would say is a more important operation. Algebraist 17:15, 1 February 2009 (UTC)[reply]
If you can do something once, you can do it lots of times - I think you get recursion for free. (As with any question like this, there are slightly different interpretations which get different answers.) --Tango (talk) 20:50, 1 February 2009 (UTC)[reply]
In that case, why not go the whole way and start with the successor function? Algebraist 21:20, 1 February 2009 (UTC)[reply]
A few thoughts:
1. The question is not entirely well-formed, because an operation requires some objects to apply it to. But that's easy to fix: we just include your choice of objects as part of your choice of operation.
2. If all you could do was recursion, then you couldn't do anything, because recursion is just the ability to do some other thing any number of times.
3. Opposing #2, take a look at the lambda calculus. The lambda calculus contains only one kind of object: functions that take a single argument. There is only one operation: function-application. All you can give to a function is a unary function, and all a function can return is a unary function. It is easy in the lambda calculus to define integers, addition, multiplication, recursion, Boolean operations, etc. Once you have the integers, you can define real numbers, irrational exponents, and anything else you like. So, I guess I'll take "function application" as my sole operation, with "unary functions" as my objects. That buys me everything.
Are there other known ways in math to get everything with just one operation?
--Ben Kovitz (talk) 21:06, 1 February 2009 (UTC)[reply]
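Ben's lambda-calculus point can be made concrete in Python, whose lambdas are expressive enough for Church numerals (a sketch; the decoder `to_int` exists only to display results):

```python
# Church numerals: the number n is "apply f, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral by counting applications of an increment."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)), to_int(mul(two)(three)))  # 5 6
```

Everything here is built from the single operation of function application, which is exactly the sense in which the lambda calculus "buys everything".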
I suppose the project to embed all of mathematics in set theory can be seen as reducing everything to the single operation that takes a property φ and outputs the class of all objects satisfying φ. (This idea works better in some set theories than others) Algebraist 21:18, 1 February 2009 (UTC)[reply]
But you still need a way of combining properties (AND, OR, etc [or unions and intersections, depending on point of view]). Is there a version of set theory in which those aren't taken as undefined? --Tango (talk) 21:23, 1 February 2009 (UTC)[reply]
If you can have ω-recursion for free, I can have first-order formulae for free. Algebraist 22:02, 1 February 2009 (UTC)[reply]
It was my understanding that much of modern set theory intended to build up the rest of mathematics from this sort of "single operation" approach. The section Set_theory#Axiomatic_set_theory explains some of the efforts, which seem to have been largely successful. "Nearly all mathematical concepts are now defined formally in terms of sets and set theoretic concepts. For example, mathematical structures as diverse as graphs, manifolds, rings, and vector spaces are all defined as sets having various (axiomatic) properties. Equivalence and order relations are ubiquitous in mathematics, and the theory of relations is entirely grounded in set theory." Nimur (talk) 21:24, 1 February 2009 (UTC)[reply]
That's also my understanding of the main goal of set theory. What would you say is the single operation of set theory? Function-application actually depends on sets for its definition, since a function is a kind of set (of ordered pairs, themselves sets). "One operation" and "defined in terms of" are different, at least as I understand them. --Ben Kovitz (talk) 21:54, 1 February 2009 (UTC)[reply]
As I said above, I take the basic operation of set theory to be 'take a property, form the set/class of all sets/objects with that property'. Of course, some work needs to be done to avoid paradox. Algebraist 22:02, 1 February 2009 (UTC)[reply]
Thanks for repeating your point, Algebraist. I hadn't given it proper consideration the first time, because I was thinking that "property" is too vague for a mathematical operation. For example, the property of "good government" or "wise choice". What do they call that operation (mapping a property to the set/class of all the things that have it)? (Most folks I've talked with about this say that a property is that set/class, so "having a property" simply means "being a member of that set/class", but those folks were philosophers, and I think that's a dumb theory, anyway.) I had been thinking of operation as meaning a function mapping a tuple of elements from a set A to the set A, or something close to that; so, for example, integer addition is an operation that maps two integers to an integer, etc. Am I being too narrow? --Ben Kovitz (talk) 23:02, 1 February 2009 (UTC)[reply]
Are there other known ways in math to get everything with just one operation?: for instance (talking a little bit more about constructions rather than operations) in category theory every universal construction turns out to be a particular case of an initial object, the simplest concept in the theory. The trick of course is that the category changes -and becomes possibly more complicated. --pma (talk) 00:38, 2 February 2009 (UTC)[reply]
Well, there's the fact that all Boolean functions can be formed from iterating the Sheffer stroke. Michael Hardy (talk) 15:49, 3 February 2009 (UTC)[reply]
Sole sufficient operator is a Wikipedia article devoted to answering the question above. Somewhat stubby for now. Michael Hardy (talk) 15:51, 3 February 2009 (UTC)[reply]

Ratio of binomial coefficients

Hiya,

Having shown that for all , and supposing , how would one show that the limit of the ratio of the two sides of the above inequality as equals 1?

Many thanks for the help!

Spamalert101 (talk) 16:02, 1 February 2009 (UTC)BS[reply]

I'm not sure, but here's the first thought that comes to mind. It might or might not be fruitful. "Ratio = 1" means "they're equal", so just prove that the difference between them gets smaller than any epsilon. You might be able to do that with an inductive proof. --Ben Kovitz (talk) 22:03, 1 February 2009 (UTC)[reply]
Hiyatoo! Look at the LHS: it is, starting the sum from the last term (and the largest):
.
Notice that it is a finite sum, although with an increasing number of terms. The kth term in the sum converges to as . In general, this should not be enough to conclude that the inner sum converges to
,
as you want, BUT it's also true that each term is less than the corresponding term . Then you conclude applying the dominated convergence theorem for series (a toy version of the usual dominated convergence theorem for integrals; it's a particular case, as is a particular case of ). Is it ok? In case, ask for further details. Note that in the same way you can prove (try it) an analogous asymptotics for your sum in the more general case of an integer multiple of m, that is , instead of . If you got the geometric series & dominated convergence thing, you can write down immediately the limit of the ratio in terms of p. pma (talk) 23:58, 1 February 2009 (UTC)[reply]

Question on sequences & series

This is a question on sequences & series. Help me: 1+2+4+8+16+... Find the nth term of the series and the sum of the first n terms. —Preceding unsigned comment added by 117.196.34.27 (talk) 16:11, 1 February 2009 (UTC)[reply]

nth term is 2^(n−1). Sum of first n terms is 1+2+4+8+...+2^(n−1) = 2^n − 1. Bo Jacoby (talk) 17:10, 1 February 2009 (UTC).[reply]
Hi. Have you actually tried doing this yourself? -mattbuck (Talk) 18:03, 1 February 2009 (UTC)[reply]
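Bo Jacoby's closed forms are easy to check numerically; here is a minimal Python sketch (not part of the original thread, just a sanity check):

```python
# Geometric series 1 + 2 + 4 + 8 + ...:
# nth term is 2^(n-1); sum of the first n terms is 2^n - 1.

def nth_term(n):
    return 2 ** (n - 1)

def partial_sum(n):
    return sum(nth_term(k) for k in range(1, n + 1))

# Verify the closed form for the first few n.
for n in range(1, 11):
    assert partial_sum(n) == 2 ** n - 1

print(nth_term(5), partial_sum(10))  # 16 1023
```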

Seemingly straightforward problem

Hi, I'm trying to solve the following, seemingly simple, problem but I'm stumped by something. I want to find the maxima and minima of the following function:

I find the following derivative:

Which, when set equal to zero, I rewrite to this polynomial:

Using the quadratic formula gives me the solutions x = 0.738 and x = 0.6020. Plotting the function shows that the first is actually the maximum of the function, but the second makes no sense at all. I've gone over it a million times, and I can't find any errors. I was thinking that there might be some complex business that I'm not aware of (like when I assume that ). Can anybody elucidate? risk (talk) 20:19, 1 February 2009 (UTC)[reply]

The second term in your derivative is (fairly obviously) wrong: I haven't worked it through, but fixing that should help. AndrewWTaylor (talk) 20:30, 1 February 2009 (UTC)[reply]
Sorry, I copied it out wrong. I've fixed it in my original post (to avoid confusion). risk (talk) 20:33, 1 February 2009 (UTC)[reply]
Where did you get that polynomial from? I get , which has only one solution. Algebraist 20:45, 1 February 2009 (UTC)[reply]
I used the following steps
In the first step I multiply by . In the fourth, I square both sides. Any illegal moves? risk (talk) 20:55, 1 February 2009 (UTC)[reply]
You will see the error if you try to plug the solution into the equation ; the left side becomes negative .776, and the right side becomes positive .776.
You didn't make any algebraic errors, but rather a subtle logic error. What your algebraic manipulations show is the following: if x is a solution to , then x is a solution to . However, because you squared both sides in one step (and squaring is not an injective function), your proof does not go in reverse; it is not necessarily true that all solutions to the latter equation must also be solutions to the former equation. In fact, you have even constructed an example of a solution to the latter equation which is not a solution to the former equation. Eric. 131.215.158.184 (talk) 21:26, 1 February 2009 (UTC)[reply]
See extraneous solution. --Tango (talk) 21:27, 1 February 2009 (UTC)[reply]

Of course. Thank you both. I should take some time to read the Extraneous Solution article. Could you tell me how you would solve it from , or how you could tell that it had only one solution? risk (talk) 21:34, 1 February 2009 (UTC)[reply]

It's just a quadratic in . Use your favourite way of solving quadratics, and remember that is by definition non-negative. Algebraist 21:39, 1 February 2009 (UTC)[reply]
I solved it by mapping to a dummy variable, s, and solving the cubic equation in s. ; taking the derivative, , solve s by quadratic formula. Then note that one of the zeros is negative and so mapping back into x yields the square root of a negative number. That is the extraneous root. Nimur (talk) 21:37, 1 February 2009 (UTC)[reply]
That approach relies implicitly on the chain rule and the fact that sqrt(x) has no stationary points. Algebraist 21:47, 1 February 2009 (UTC)[reply]
Although your approach can produce extraneous solutions, it definitely won't omit any correct solutions. So you can just take both solutions that you found and plug them into the original equation to verify their correctness, and throw out any extraneous solutions. But Algebraist's approach is better. Eric. 131.215.158.184 (talk) 22:33, 1 February 2009 (UTC)[reply]

Summarizing, it seems worth repeating Nimur's remark. Since you are looking for maxima and minima, it is convenient to make the substitution from the beginning and look for max & min of over all --pma (talk) 00:24, 2 February 2009 (UTC)[reply]
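A standard stand-in example shows the extraneous-solution effect Eric and Tango describe: squaring sqrt(x) = x − 2 gives x² − 5x + 4 = 0, whose roots must then be checked against the original equation. A quick Python sketch (the equation is illustrative, not the OP's, whose formulas are not reproduced above):

```python
import math

# Illustrative equation: sqrt(x) = x - 2.
# Squaring both sides gives x^2 - 5x + 4 = 0.
a, b, c = 1.0, -5.0, 4.0
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])  # [1.0, 4.0]

# Squaring is not injective, so test each root in the ORIGINAL equation.
valid = [x for x in roots if math.isclose(math.sqrt(x), x - 2)]
print(roots, valid)  # [1.0, 4.0] [4.0]  (x = 1 is extraneous)
```

Only x = 4 survives the check: sqrt(1) = 1, not −1, so x = 1 solves the squared equation but not the original one.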


February 2

"Complement" of an automorphism group

I am confused as to how the notion of a "complement" (in the group-theoretic sense) can apply to non-group-elements. In the example I have, we have a quotient , and it is written "Let L- be the complement in L under the action of " where is some automorphism of Q. How can L- be defined as the complement of a group of automorphisms - what is it the complement in, and if it is how can that even be defined?? I'm lost! SetaLyas (talk) 00:34, 2 February 2009 (UTC)[reply]

Try giving a little more context; perhaps there is some minor typo. It is not uncommon for Q to be a q-group, ρ to be an automorphism of coprime order, and to ask about a complement of the centralizer of ρ in Q/[Q,Q], which is likely equal to image of [ Q, ρ ] in L, also known as L1−ρ, another important subgroup. JackSchmidt (talk) 03:44, 2 February 2009 (UTC)[reply]
Thanks ^_^ I'm reading from a paper filled with typos, so that's not unlikely! You're correct in some of your guesses... Q is a 2-group, is an automorphism of order (so of coprime order). It is to do with considering the associated Lie ring of the group, and then just says "Let be the complement in L under the action of ". So you are saying by "the complement in L under the action of ρ" means the complement of the centralizer of ρ (or <ρ>?) in Q/[Q,Q], which could equal [Q,ρ]/[Q,Q]? Are there any texts where this terminology is used that you know of? SetaLyas (talk) 12:27, 2 February 2009 (UTC)[reply]
Especially if the Lie ring methods are being used to talk about fixed points or fixed point free automorphisms, I think the text means the complement of the fixed points/centralizer. I don't think leaving out the words "of the fixed points" is standard terminology, but if it is a preprint, it could be a very plausible typo. The index of the centralizer in Q of ρ should be equal to the (group theoretic, not vector space) index of L in L (because ρ has coprime order; Khukhro's p-group book, page 81) if this is the case, so that might be something to check.
In general if ρ acts coprimely on Q, then Q = [Q,ρ]·CQ(ρ), and if Q is abelian, then this is a direct product (so true for instance in the quotient Q/[Q,Q]). This can be found in Aschbacher's or Kurzweil and Stellmacher's or Gorenstein's group theory textbooks (I think always under "coprime action"). Again, you could check this in each factor to see if this is what the author means; it should basically just be finding the Fitting decomposition of 1−ρ.
For Lie ring methods: Chapter VIII of Huppert-Blackburn 2 is probably reasonable place to compare to, as it covers the regular automorphism case, which is similar (especially section 9). Khukhro's p-automorphisms of finite p-groups has a reasonable description of Lie ring methods, but focused on the not-coprime case. Vaughan-Lee's book on the Restricted Burnside Problem had some good material if I recall correctly, but I don't think it focused on automorphisms at all. Leedham-Green and McKay's structure of p-groups book I think uses similar techniques to the other books, but is fairly specialized and fast paced. If the paper is online, I can probably check if this is reasonable. JackSchmidt (talk) 18:52, 2 February 2009 (UTC)[reply]

mathematics

Who is the father of geometry? —Preceding unsigned comment added by 74.125.74.37 (talk) 02:55, 2 February 2009 (UTC)[reply]

In the classical western tradition, this is usually ascribed to Euclid. You may want to evaluate History of geometry to define your question more precisely, as well as consider a more world-wide perspective. Nimur (talk) 03:23, 2 February 2009 (UTC)[reply]
y? —Preceding unsigned comment added by 82.120.227.157 (talk) 15:13, 2 February 2009 (UTC)[reply]
Euclid is listed in Fathers of scientific fields#Mathematics which also lists fathers of some subfields of geometry. PrimeHunter (talk) 18:01, 2 February 2009 (UTC)[reply]

Name of a type of puzzle

When I was in elementary school, teachers often gave us math puzzles during lunch for whatever reason. They were the types of puzzles where you had to form a number after being presented with a set of numbers. For example, if I am given the numbers 1,2,4,5 then I would have to manipulate them in a way where the result would be 24. The answer, of course would be 4(5+2-1). Is there a name for this? Thanks, Vic93 (t/c) 04:28, 2 February 2009 (UTC)[reply]

Your variant sounds like 24 Game. The best known variant may be four fours. I don't know whether there is a general name for this type of puzzle, but see Krypto (game). PrimeHunter (talk) 04:37, 2 February 2009 (UTC)[reply]
Hi, thanks. I certainly didn't know about that game before. However, looking back now, I may have misworded my original query. While 24 does sound a lot like what I described, I neglected to mention that the end result could be any integer, not just 24. For example, I would be given the numbers 3,7,5,2 with the goal being say, 8. The answer would be ((7*3)-5)/2. Also, I don't believe it was limited to four numbers. You could be given three, five, thirty (although that would be extremely difficult), etc. Perhaps this is just another variant of the game, but I'd like to know if it has another name (if it is not indeed 24). Vic93 (t/c) 22:15, 2 February 2009 (UTC)[reply]
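Whatever the puzzle's name, the generalized variant (any numbers, any target) is easy to brute-force: repeatedly combine two of the remaining values with +, −, ×, ÷, in every order, until one value equals the target. A small Python solver (the function names are my own, not from any of the games mentioned):

```python
def solutions(numbers, target, eps=1e-9):
    """Yield expressions over `numbers` (each used once) evaluating to `target`."""
    def solve(vals):  # vals: list of (value, expression-string) pairs
        if len(vals) == 1:
            if abs(vals[0][0] - target) < eps:
                yield vals[0][1]
            return
        for i in range(len(vals)):
            for j in range(len(vals)):
                if i == j:
                    continue
                (a, ea), (b, eb) = vals[i], vals[j]
                rest = [vals[k] for k in range(len(vals)) if k not in (i, j)]
                cands = [(a + b, f"({ea}+{eb})"),
                         (a - b, f"({ea}-{eb})"),
                         (a * b, f"({ea}*{eb})")]
                if abs(b) > eps:  # avoid division by zero
                    cands.append((a / b, f"({ea}/{eb})"))
                for cand in cands:
                    yield from solve(rest + [cand])

    yield from solve([(float(n), str(n)) for n in numbers])

print(next(solutions([1, 2, 4, 5], 24)))  # one way to make 24 from 1,2,4,5
print(next(solutions([3, 7, 5, 2], 8)))   # one way to make 8 from 3,7,5,2
```

Both of the questioner's examples have solutions (4×(5+2−1) = 24 and ((7×3)−5)/2 = 8), so the search succeeds on each.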

Is it possible to generate any integer, as constrained in the Four fours article (using addition, multiplication, concatenation, factorial, exponentiation)? No logs, as it states that it is trivial to do with them. Nadando (talk) 05:37, 2 February 2009 (UTC)[reply]

The book Mathematical Recreations and Essays (mentioned in the article) is available on google books. Please check out page 14. Depending on what operations are allowed you can go up to different numbers. Anythingapplied (talk) 17:13, 2 February 2009 (UTC)[reply]

What is this type of problem called?

For example, you have a container of capacity 3 and another of capacity 5, and have to use fill and transfer operations to measure out, for example, 2 units. One solution is to fill the 5 container, then transfer 3 units to fill the other container, leaving 2 units behind. My question:

(a) Does this type of problem have a name? (b) For general containers of capacity m and n units, which values of quantity can be produced, and what sequence of steps are required in each case?→86.132.164.81 (talk) 13:11, 2 February 2009 (UTC)[reply]

(b) Values: all multiples of the greatest common divisor d of m and n, which is also characterized as the smallest positive integer of the form d=xn+ym over integers x,y; the corresponding algorithm is Euclid's. Not by chance: if you replace "containers & capacity" with "segments & length" the geometric origin of the problem appears. --131.114.72.215 (talk) 13:23, 2 February 2009 (UTC)[reply]
I don't see how to implement Euclid's algorithm with two containers. If you have a units in one container, and b in the other, how do you replace a with a mod b or a − b while leaving b intact? I don't think you can produce gcd(m, n) in general without allowing emptying a container as another operation (and even in that case the algorithm is not Euclid's, but a sort of "counting x times n modulo m" for a suitable x). Furthermore, you obviously cannot fit more than m + n units in containers of size m and n, so arbitrary multiples of the gcd are out of the question. — Emil J. 14:14, 2 February 2009 (UTC)[reply]
Example: let m = 5 and n = 7. If we denote by (a, b) the state where a units are in the smaller container and b units in the larger, then it is easy to see that the following set of states is closed under the operations of filling and transferring: (0, 0), (5, 0), (0, 7), (5, 7), (0, 5), (5, 2). Thus you cannot produce gcd(5, 7) = 1. — Emil J. 14:31, 2 February 2009 (UTC)[reply]
Yes, I was just wondering whether to add a remark... If you are allowed to use only the two containers, filling and emptying them (e.g. you are at the sea), then of course you get exactly all multiples of the gcd up to n+m (and potentially any multiple if e.g. you drink it). If you impose the constraint that you can't waste water, then it is your situation (and the answer is different as you say). The OP refers to "fill" and "transfer" operations indeed (86: is unfill=-fill allowed??). But your ecological version is somehow more attractive. --pma (talk) 14:44, 2 February 2009 (UTC)[reply]
I believe this is called the Die Hard with a Vengeance problem. -mattbuck (Talk) 15:35, 2 February 2009 (UTC)[reply]
Fill 7; Pour 7 into 5 leaving 2; Throw 5 away; Pour 2 in 7 into 5; Fill 7; Pour 3 of 7 into 5 leaving 4 in 7; Throw 5; Pour 4 in 7 into 5; Fill 7; Pour 1 of 7 into 5 leaving 6 in 7; Throw 5; Pour 5 of 6 in 7 into 5 leaving 1 in 7; Throw 5. You have 1 in 7. -- SGBailey (talk) 20:47, 2 February 2009 (UTC)[reply]

If you were allowed negative amounts, I think the greatest common divisor answer would be right. Michael Hardy (talk) 01:36, 3 February 2009 (UTC)[reply]
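The reachability question can be settled by exhaustive search over states. The breadth-first sketch below (my own code, not from the thread) confirms that with fill and transfer alone, jugs of 5 and 7 never isolate 1 unit — though the search also turns up the states (5, 5) and (3, 7) beyond the six listed above — while allowing an empty move makes every amount from 0 to 7 reachable, as the gcd argument predicts:

```python
from collections import deque

def reachable(m, n, allow_empty=True):
    """All (a, b) states for jugs of capacity m and n, starting empty,
    using fill, transfer, and (optionally) empty operations."""
    seen = {(0, 0)}
    queue = deque(seen)
    while queue:
        a, b = queue.popleft()
        moves = [(m, b), (a, n)]          # fill either jug
        if allow_empty:
            moves += [(0, b), (a, 0)]     # empty either jug
        t = min(a, n - b)                 # pour m-jug into n-jug
        moves.append((a - t, b + t))
        t = min(b, m - a)                 # pour n-jug into m-jug
        moves.append((a + t, b - t))
        for s in moves:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return seen

def amounts(states):
    return sorted({a for a, b in states} | {b for a, b in states})

print(amounts(reachable(5, 7, allow_empty=False)))  # [0, 2, 3, 5, 7] -- no 1
print(amounts(reachable(5, 7, allow_empty=True)))   # [0, 1, 2, 3, 4, 5, 6, 7]
```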

Expanding an integral

I am trying to expand the following expression for small ξ:

Just expanding the integrand and integrating term by term does not work, since it runs into ever more divergent integrals. I guess the expansion will involve log terms and the like... Does anybody have an idea of how to do it? Thanks, MuDavid Da Vit 15:00, 2 February 2009 (UTC)[reply]

Ok, I found it myself:
MuDavid Da Vit 10:29, 3 February 2009 (UTC)[reply]
Wow, that's impressive. — Emil J. 12:01, 3 February 2009 (UTC)[reply]
Well done. So the non-analyticity is in the third and in the fourth term (and maybe it's better to write them with so the expansion is ok for negative too) --pma (talk) 13:29, 3 February 2009 (UTC)[reply]
Yes I'd be interested in how the term with the ln was extracted.Dmcq (talk)
I followed a rather sinuous path. In fact I was calculating the analytic continuation of
for d = 3. Summing first and taking the integrals together gives π–2 times the integral I wrote above. If, on the other hand, you integrate first (for suitable values of d sum and integral can be switched at will), you get
(The integrals may look suspicious, but some juggling with analytic continuations gives the right result. These expressions can be found in about any textbook on quantum field theory.) Now you set the term with n = 0 apart (this one will give the ξ3 term), and you rewrite the remainder of the sum as going from one to infinity. Then you expand in ξ. For suitable values of d, the summation over n can be switched with the sum of the expansion. Performing the sum over n gives Riemann zeta functions. Then you take the limit d to 3. One of the terms in the expansion has ζ(1), while the second piece (without the sum) has Γ(–2). The poles cancel exactly and a term with ln(ξ) emerges as a second-order term in the expansion of ξd+1.
I have two more expression like that. They are more complicated, but the procedure can be readily applied (albeit less beautifully; I don't have a closed expression for the general term). MuDavid Da Vit 08:45, 4 February 2009 (UTC)[reply]
Wow. Analytic continuations with Riemann zeta functions plus a throwaway line about finding an expression in any textbook about quantum field theory all in one paragraph. ;-) No really I'm seriously impressed. Thanks very much. Dmcq (talk) 10:49, 4 February 2009 (UTC)[reply]
;-) Well, I do quantum field theory all day long, so this "finding an expression in any textbook about quantum field theory" is not much of a feat, really. I'm proud of the Riemann zeta functions, though. MuDavid Da Vit 13:40, 4 February 2009 (UTC)[reply]

Capitalization conundrum

Hello all. I'm writing because my office mate and I were having a discussion concerning the capitalization of terms in mathematics that use a person's name. In particular, we're concerned with those names that get turned into adjectives. We're thinking "Boolean", "Abelian", "Cauchy", "Lipschitz", things like that. (For instance, a function can be Lipschitz, but no one would ever say "the graph is Petersen").

We noticed that almost everyone gets their name capitalized except for Abel. The word "abelian" appears in lowercase all over the place. What gives? Can anyone explain this to us? Also, does anyone have other examples of lowercase typeset names?

Thanks! –King Bee (τγ) 17:42, 2 February 2009 (UTC)[reply]

I always think of it as being an extra honour, not getting a capital letter (a bit like members of the Royal College of Surgeons going by Mr. not Dr.). Abel is such a great mathematician that something named after him has become a word in its own right and is no longer thought of as eponymous. (Take a group of students that have just finished a first year algebra course and see how many of them even know where the term "abelian" comes from!) --Tango (talk) 18:09, 2 February 2009 (UTC)[reply]
The late professor Børge Jessen said that the highest honor for a mathematician is to become an adjective spelled without capitalization. He mentioned, apart from abelian, also galois groups, and hermitean and pythagorean and euclidean. I think he also wrote 'hilbert space'. Bo Jacoby (talk) 18:16, 2 February 2009 (UTC).[reply]
I definitely capitalise Galois group. I may be inconsistent with the others. Abelian is the only one that I would think it odd to see capitalised. --Tango (talk) 18:19, 2 February 2009 (UTC)[reply]
I agree with Tango (although the honor is slightly diluted by having an exact synonym -- commutative -- in common use), but I also never capitalize boolean (largely because of programming). I don't have any explanation for why these conventions are used, though... maybe "Abel" and "Boole" just don't sound like typical western last names? By the way, the term for something named after a person is eponym. Eric. 131.215.158.184 (talk) 18:50, 2 February 2009 (UTC)[reply]
It's also worth noting your sample set there was quite strange. Lipschitz and Cauchy will always be capitalised, as they are the surnames themselves, not adjectives derived from them. Adjectives derived from names are uncapitalised through increased usage, there is no "correct" way... SetaLyas (talk) 21:22, 2 February 2009 (UTC)[reply]
Compare to the standardized metric system notation, "Symbols for units are written in lower case, except for symbols derived from the name of a person. For example, the unit of pressure is named after Blaise Pascal, so its symbol is written "Pa", whereas the unit itself is written "pascal"." Nimur (talk) 09:18, 3 February 2009 (UTC)[reply]

Lots of people who write Wikipedia articles don't capitalize "boolean", and since many of those are in computer science, I wonder if they know "Boole" is a person's name (in computer science it's compulsory to start every word with a capital letter except when there's a reason to do so). I've noticed people not capitalizing "gaussian" and I wonder if they know that Gauss was the most famous person to live on earth in the 19th century (except among those who did not work in the physical and mathematical sciences). Michael Hardy (talk) 15:14, 3 February 2009 (UTC)[reply]

Isn't platonic and christian written without capitals, even if people do know that Platon and Christ are famous people? Bo Jacoby (talk) 11:53, 4 February 2009 (UTC).[reply]
In English it isn't, is it? — Emil J. 16:36, 4 February 2009 (UTC)[reply]
I would certainly write Christian with a capital C. Platonic, I'm not so sure about... Incidentally, isn't it Plato, not Platon? --Tango (talk) 22:48, 4 February 2009 (UTC)[reply]
Plato is the usual English version, but Platon is a direct transliteration of (the nominative form of) his name in Greek, and I believe it's the standard form in some modern languages. Algebraist 22:51, 4 February 2009 (UTC)[reply]

Golden ratio tessellation?

http://www.cantonese.ca/quilt-model.png I want to adjust the ratios and angles in this tessellation to make the square bigger, possibly into a golden rectangle, and adjust the rhombi accordingly. Can anyone work out what dimensions and angles to use for it to "work" mathematically? --Sonjaaa (talk) 20:14, 2 February 2009 (UTC)[reply]

What ratio? All the edges are the same length. All you can do is alter the angle between the squares. At one extreme (0 degrees) the rhombi will cease to exist and you'll have tessellating squares; at the other extreme (90 degrees) the rhombi will become more squares, so you'll have tessellating squares again. An obvious compromise is to use 45 degrees. -- SGBailey (talk) 20:52, 2 February 2009 (UTC)[reply]

Is it possible to turn the square into a rectangle, a golden rectangle, and adjust the rhombi accordingly? Or would that be geometrically impossible?--Sonjaaa (talk) 21:42, 2 February 2009 (UTC)[reply]

Hi Sonjaaa... yes, you can do the same with any rectangle, also a golden one, in place of the red squares; the only thing is that the pink and magenta rhombi will not be equal. You can decide both edges of the rectangle, say they are a and b. Then you also have all rhombi of one color with all edges =a, of course, and rhombi of the other color with all edges =b. The rectangles of course have 90 degree angles; the pink and magenta rhombi are similar, all of them have 2 angles of α degrees, and the other 2 of 180 − α degrees, and you can also decide what α is. You may think that there are only the red rectangles and that they are only joined by the corners, like in your picture, and that the rhombi are just background. As SGBailey suggests, you can move all rectangles together, making the angle more or less open. When it's 90 degrees too, you get a figure like a Scottish design, if you know what I mean. When it vanishes, or when it becomes 180 degrees, the rectangles join together like bricks. You can make it with a deck of cards. Was it as simple as that, or do you want another thing? Also, if you take graph paper you may draw it easily :) pma (talk) 00:11, 3 February 2009 (UTC)[reply]

You could try other tessellations, or go with Penrose tiling for instance if you really want to make bigger and bigger patterns. Dmcq (talk) 09:31, 3 February 2009 (UTC)[reply]
If I change the square to a golden rectangle with dimensions 1000 by 1618, then what would be the dimensions and angles of the rhombi? I want to cut out a pattern I can use for a quilt, but I don't know how to get the exact measurements for the rhombi except by trial and error until something fits at the right angles.--Sonjaaa (talk) 17:41, 3 February 2009 (UTC)[reply]
Oh I see, it's like an algebra puzzle and you gave me the formula with the alpha thingy.--Sonjaaa (talk) 17:42, 3 February 2009 (UTC)[reply]
Sonjaaa, since you still have to fix the shape of the rhombi, if you want to be super-golden, why don't you choose golden rhombi too? That is, they have the longer diagonal and the shorter diagonal in golden ratio (all rhombi, small and large). Dimensions:
  • Rectangle's Shorter Edge: 1000
  • Rectangle's Longer Edge: 1618
  • Small Rhombus' Edge: 1000
  • Small Rhombus' Shorter Diagonal: 1051
  • Small Rhombus' Longer Diagonal: 1701
  • Large Rhombus' Edge: 1618
  • Large Rhombus' Shorter Diagonal: 1701
  • Large Rhombus' Longer Diagonal: 2753
This choice also makes the small rhombus' longer diagonal equal to the large rhombus' shorter diagonal (appreciate the chiasmus), which makes the whole thing veeery harmonious --you know... ;) I didn't write angles because the diagonals are enough to draw a rhombus (it's easier and more precise). Do you like the hint? pma (talk) 20:48, 3 February 2009 (UTC)[reply]
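pma's numbers can be checked directly: for a rhombus of edge e whose diagonals p ≤ q are in golden ratio, the half-diagonals satisfy (p/2)² + (q/2)² = e², so p = 2e/√(1+φ²) and q = φp. A short check in Python:

```python
import math

phi = (1 + math.sqrt(5)) / 2  # golden ratio, about 1.618

def golden_rhombus_diagonals(edge):
    """Diagonals (shorter, longer) of a rhombus with the given edge,
    whose diagonals are in golden ratio: q/p = phi, (p/2)^2 + (q/2)^2 = edge^2."""
    p = 2 * edge / math.sqrt(1 + phi ** 2)
    return p, phi * p

for edge in (1000, 1618):
    p, q = golden_rhombus_diagonals(edge)
    print(edge, round(p), round(q))
# edge 1000 -> diagonals 1051 and 1701
# edge 1618 -> diagonals 1701 and 2753
```

The rounded values match the 1051 / 1701 / 2753 dimensions listed above.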

Function proof, hard one

g(g(k)+g(m))=k+m.

k, m are greater than or equal to zero; g(x) is defined for x greater than or equal to 0, and g(x) is greater than or equal to 0.

(all defined to be greater or equal to zero)

Prove or Disprove:

g(x)=c (with restrictions for greater than or equal to 0, constant c) and g(x)=x (with restrictions) are the ONLY TWO FUNCTIONS that satisfy this.

It's easy to see that they satisfy.

If g(x)=x, then g(g(k)+g(m))=g(k+m)=k+m, so the left side is obviously k+m and the equation is satisfied...

It's also easy to check for a constant c.

However, HOW DO YOU FIND OTHER FUNCTIONS!? (like exponential or log ones)

Thanks —Preceding unsigned comment added by 208.119.135.108 (talk) 21:00, 2 February 2009 (UTC)[reply]

I don't understand. If g(x)=c, then g(g(k)+g(m))=c, which is not necessarily equal to k+m, surely? Algebraist 21:03, 2 February 2009 (UTC)[reply]
But K+M is a constant (since they are both fixed integers), so it is a constant. —Preceding unsigned comment added by 208.119.135.108 (talk) 22:14, 2 February 2009 (UTC)[reply]
Oh, are k and m fixed integers? I assumed they were variable (nonnegative) reals. In that case, there are enormous numbers of such functions. Since the given condition only involves the value of g at three points (k, m and g(k)+g(m)), you can choose g freely at all other points and still get a function satisfying the condition. Algebraist 22:18, 2 February 2009 (UTC)[reply]

See, well I think g has to be degree less than 1, because g(g(x)) is degree m, let's say, well m+k is degree 1... g(g(x)) cannot be degree one if g(x) is greater than degree one

anybody know how to best approach this? —Preceding unsigned comment added by 208.119.135.108 (talk) 22:24, 2 February 2009 (UTC)[reply]

What is this 'degree' you're talking about? I've already given you many solutions. Here's one particular one: set g(m)=m, g(k)=k, g(m+k)=m+k and g(x)=e^x for all other values of x. Algebraist 22:27, 2 February 2009 (UTC)[reply]
Polynomial degree? If you require that the function be a polynomial then your observation about degree is a proof; just consider what properties are needed for a first degree polynomial to satisfy g(g(k)+g(m))=k+m. I have no idea if g(x)=x and g(x)=c=k+m are the only analytic functions, and if you allow general functions then, as Algebraist notes, counterexamples are easy to construct. Taemyr (talk) 23:18, 2 February 2009 (UTC)[reply]
Never mind. This is bullshit. Even if you limit yourself to polynomials you get counterexamples. For example, three points define a second degree polynomial. For higher degrees than that you are underspecified and get infinitely many functions. Taemyr (talk) 23:22, 2 February 2009 (UTC)[reply]
Yes, there are lots of polynomial solutions. The only question is which of them are everywhere nonnegative (as required by the question). This will depend on the values of k and m. Algebraist 23:23, 2 February 2009 (UTC)[reply]
(edit conflict) I think Algebraist is getting a bit annoyed. You can change the g(x)=e^x to anything in the above solution and it will work. Algebraist gave this answer based on the fact that you said K and M are CONSTANTS, which I don't believe is what you meant. They probably should be variable integers. When you say g(x)=c is a solution, that implies g(x)= a constant should work no matter what constant, so g(x)=2 should work. Clearly this does not work unless k+m=2. So it only works on a very specific set of k, m. If I saw this problem I would assume that for a given function g(x) you should get g(g(k)+g(m))=k+m for ALL positive integers k and m. That being said, I think Algebraist is right that the way you've posed this problem, g(x)=c is not a solution.
Now that that is hopefully understood, let me try and lend a hand. First of all, a proof that g(0)=0. Suppose g(0)=x for some integer x not equal to 0. Then by the property 0+0=g(g(0)+g(0))=g(x+x)=g(2x). So g(2x)=0. Thus by the property 2x+2x=g(g(2x)+g(2x))=g(0+0)=g(0). So g(0)=4x, which together with g(0)=x forces x=0, a contradiction. Thus g(0)=0 for all g(x) that abide by the property (another reason why g(x)=c is not a solution).
Take g(1)=x for some integer x. By the property (and prior proof)
1+0=g(x+0)=g(x)
1+1=g(x+x)=g(2x)
x+0=g(1+0)=g(1)
x+x=g(1+1)=g(2)
2x+x=g(2+1)=g(3)
2x+2x=g(2+2)=g(4)
2+1=g(2x+1x)=g(3x)
2+2=g(2x+2x)=g(4x)
You can see that if we continue in this way we can show that g(kx)=k for any k and g(k)=kx for any k. I think this shows that the only possible value for x is 1 (I'm trying to think why, but am running out of time). And using this value of x=1 we've shown g(x)=x. Thus this is the only solution. Anythingapplied (talk) 23:39, 2 February 2009 (UTC)[reply]
You must have x=1, since you have g(x)=1 and (taking k=x in g(k)=kx) g(x)=x^2; hence x^2=1, so x=1. If we take everything to range over the nonnegative reals (rather than integers), then g(x)=x is still the only solution, but the proof is somewhat more effort. Algebraist 23:52, 2 February 2009 (UTC)[reply]
If you have proven that g(x)=x is the only solution that works for the domain of the non negative integers you have also proven that no function other than g(x)=x works for the domain of nonnegative reals. Taemyr (talk) 01:31, 3 February 2009 (UTC)[reply]
How so? Algebraist 01:40, 3 February 2009 (UTC)[reply]
I don't think this question is appropriate for the reference desk. This problem is on the USAMTS, a mathematical talent search based on the honor system that gives you a month to solve problems like these. [3]. The answers to these questions should not be given until after March 9th. Indeed123 (talk) 02:11, 3 February 2009 (UTC)[reply]
Well, at least now we know the actual question. I think the real case is more interesting, though. Algebraist 02:21, 3 February 2009 (UTC)[reply]
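For readers following along: if the identity g(g(k)+g(m)) = k+m is required for all nonnegative integers k, m (the reading most respondents settled on), a constant function visibly fails while the identity function works. A two-line check over a finite range (a sketch, not a proof):

```python
def satisfies(g, upto=20):
    """Check g(g(k) + g(m)) == k + m for all 0 <= k, m <= upto."""
    return all(g(g(k) + g(m)) == k + m
               for k in range(upto + 1) for m in range(upto + 1))

print(satisfies(lambda x: x))   # True: the identity works
print(satisfies(lambda x: 2))   # False: a constant returns 2 for every k, m
```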

February 3

Solving the diffusion equation

The temperature θ(x, t) in a very long rod is governed by the one-dimensional diffusion equation

where D is constant. At time t = 0, the point x = 0 is heated to a high temperature. At all later times, conservation of energy implies

where Q is some constant.

Having shown by dimensional analysis that θ(x, t) can be written in the form

where ,

I need to show that and integrate it to obtain a first order D.E. I haven't tried the second integrating part yet because I'm stubborn and don't want to give up on the first bit, but how on earth do I show that the big F/z mess sums to 0 without calculating god knows how many derivatives?

Any help would be -greatly- appreciated,

Mathmos6 —Preceding unsigned comment added by 131.111.8.98 (talk) 06:09, 3 February 2009 (UTC)[reply]

It's not that bad. Start with
which becomes simpler once you notice that
Then we have
Now you just need to work out (I'll leave that bit to you) and the rest is just algebra. Gandalf61 (talk) 11:05, 3 February 2009 (UTC)[reply]
Get rid of the D by using Dt as a new independent variable. Also set Q=1 in order to simplify the formulas. Bo Jacoby (talk) 14:17, 3 February 2009 (UTC).[reply]
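As a numerical check (a sketch, not part of the original problem statement): the standard point-source similarity solution θ = Q/√(4πDt)·exp(−x²/(4Dt)) can be verified against θ_t = D θ_xx by finite differences. The values of D, Q, x and t below are arbitrary test choices.

```python
import math

def theta(x, t, D=0.7, Q=2.0):
    """Point-source solution: theta = Q/sqrt(4*pi*D*t) * exp(-x^2/(4*D*t))."""
    return Q / math.sqrt(4 * math.pi * D * t) * math.exp(-x * x / (4 * D * t))

def pde_residual(x, t, D=0.7, h=1e-4):
    """theta_t - D*theta_xx via central finite differences; should be ~0."""
    th_t = (theta(x, t + h, D) - theta(x, t - h, D)) / (2 * h)
    th_xx = (theta(x + h, t, D) - 2 * theta(x, t, D) + theta(x - h, t, D)) / (h * h)
    return th_t - D * th_xx

residual = pde_residual(0.3, 1.5)

# Conservation of energy: the integral of theta over x stays equal to Q = 2.
dx = 0.01
total = sum(theta(-10 + i * dx, 1.5) for i in range(2001)) * dx
```

The small residual and the recovered total Q illustrate both the PDE and the conservation condition at once.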

Anti-derivatives vs Riemann sums

I was cleaning up my room and came across my old course notes and calculus textbook (James Stewart). Reading up the section on integrals and finding areas, I thought up the following problems:

  1. Is there any example of a function for which one can compute the integral as the limit of the Riemann sum in closed form, but for which it is difficult (or impossible) to find an anti-derivative as an elementary function, except at a finite/countable set of discontinuities? (Note: italicised text added on 13:04, 3 February 2009 (UTC))
  2. How does one prove that a given function does not have an anti-derivative in terms of the elementary functions? Just some links to relevant articles or a proof outline would be nice. Thanks. Zunaid 08:13, 3 February 2009 (UTC)[reply]
For 2: According to our Risch algorithm article, Risch's approach can be used to prove that does not have an elementary antiderivative. I haven't checked it. -- Jao (talk) 08:51, 3 February 2009 (UTC)[reply]
See differential Galois theory. — Emil J. 11:54, 3 February 2009 (UTC)[reply]
Addressing part 1 of your question, the fundamental theorem of calculus guarantees that if f is Riemann integrable with indefinite integral F then f = dF/dx - in other words, F is an anti-derivative of f. So if f is Riemann integrable then we always know an anti-derivative exists - but it is not necessarily expressible in terms of functions that we already know and love. But mathematicians are always happy to expand their circle of acquaintances, so since we know that the anti-derivative exists, we can just use the Riemann integral as a definition of this new function - this is how functions such as the error function, the logarithmic integral function and the Fresnel integrals are defined. Gandalf61 (talk) 12:23, 3 February 2009 (UTC)[reply]
What I should have said was "impossible to find an anti-derivative in terms of elementary functions. Zunaid 13:04, 3 February 2009 (UTC)[reply]
The fundamental theorem of calculus only holds for continuous functions. For example, the indicator function of the interval [0,1] is Riemann integrable, but it does not have an antiderivative (as derivatives are Darboux). — Emil J. 12:37, 3 February 2009 (UTC)[reply]

Ah dammit. I think my question 1 is poorly phrased, and in fact is impossible. The correct phrasing is as corrected above, hopefully it is more water-tight. However if it were the case that there is a closed form for the Riemann sum, then integrating f from a to x would automatically give an anti-derivative in the form of F(x), obtained as the closed form expression of the summation. For example, I thought of exp(-x2) which doesn't have an elementary anti-derivative. I thought perhaps one could compute a closed form Riemann sum for it, or perhaps for any other example. Thanks for the answers. Zunaid 13:04, 3 February 2009 (UTC)[reply]
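For what it's worth, the Riemann sums of exp(−x²) converge perfectly well even though no elementary antiderivative exists; the limit simply defines the new function (√π/2)·erf, as Gandalf61 describes. A minimal numerical sketch:

```python
import math

def midpoint_riemann(f, a, b, n):
    """Midpoint Riemann sum of f over [a, b] with n equal subintervals."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# Riemann sum of exp(-x^2) on [0, 1] ...
approx = midpoint_riemann(lambda x: math.exp(-x * x), 0.0, 1.0, 10_000)

# ... versus the "named" antiderivative evaluated at the endpoints.
exact = math.sqrt(math.pi) / 2 * math.erf(1.0)
```

The two values agree to many decimal places, but the closed form on the right is a *definition* of erf, not an elementary formula.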

You may transform integrals into sums, if that appears more elementary to you:
Your questions are treated in Concrete Mathematics. Bo Jacoby (talk) 22:15, 3 February 2009 (UTC).[reply]

Surds Question

Well... I thought I knew this subject. We learnt it early last year, but I have a bad memory and it seems to have all fallen out of my head along with my babelfish for understanding math.

I almost completely don't understand it. I managed to do one, but I was working backwards from the answer. I tried a multitude of different ways; none of them worked. I hope you guys will be able to help (sorry, I don't have much idea on the method; I really wish I was able to do this, but I need help unfortunately).

Without further delay,

Simplify:

Try factoring the numbers under the radicals and see if that gives you any ideas. -- Jao (talk) 11:39, 3 February 2009 (UTC)[reply]
I tried doing that but I didn't end up with any similar numbers under the square root sign :S, so I was not able to add them.

I know I have to add/subtract them but I can't get the radicals to become similar numbers (this is actually question no. 2; I was only able to do the first one, as I said, by working backwards. I'm sure once I understand the fundamentals I'll be able to do the rest). —Preceding unsigned comment added by 124.180.230.234 (talk) 11:44, 3 February 2009 (UTC)[reply]

28=2*2*7, 180=2*2*3*3*5, I see some common factors there (in fact, I see square factors, so you can deal with them pretty easily, you don't even need to worry about combining surds). Remember, you won't necessarily get it all simplified down to one term, just get it as simple as possible. --Tango (talk) 11:53, 3 February 2009 (UTC)[reply]

(Explicit answer removed - I did not even realize that the previous answers were maieutic, sorry) pma (talk) 11:57, 3 February 2009 (UTC)[reply]

The numbers in front of the radicals don't matter? —Preceding unsigned comment added by 124.180.230.234 (talk) 12:26, 3 February 2009 (UTC)[reply]
They matter only in that when you find a perfect square inside the radical, you take its square root, remove it from under the radical, and multiply the result by the coefficient in front of the radical. StuRat (talk) 13:06, 3 February 2009 (UTC)[reply]
OK, let's start from the basics:
1) Find all the factors for each number under the radical symbol. Do this by trying to divide by 2 until you can't anymore (and still get an integer result). Then try to divide by 3, then 5, then 7. I think that's as high as any of those factors go.
2) If the factors contain two 2's, you can take those out from under the radical and multiply the coefficient in front of the radical by a single 2. The same goes if they contain two 3's, 5's, or 7's.
3) Multiply any remaining factors back together and leave them under the radical.
4) After you've done all of this, if any of the radicals contain the same values, these may be combined together by adding the coefficients in front of them. Remember that when you add a negative number, that's the same as changing it to positive, then subtracting it.
Show us your work and we will tell you if you did it right. StuRat (talk) 13:06, 3 February 2009 (UTC)[reply]
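StuRat's steps can be sketched in code. The example expression below is made up, since the OP's exact surds aren't reproduced above:

```python
from collections import defaultdict

def extract_square(n):
    """Write n = c*c*r with r squarefree; return (c, r).
    E.g. 180 = 6*6*5, so extract_square(180) == (6, 5)."""
    c, r, d = 1, n, 2
    while d * d <= r:
        while r % (d * d) == 0:   # steps 1-2: divide out square factors
            r //= d * d
            c *= d
        d += 1
    return c, r

def simplify_surds(terms):
    """terms is a list of (coefficient, radicand) pairs meaning coeff*sqrt(radicand).
    Returns {squarefree radicand: combined coefficient}."""
    combined = defaultdict(int)
    for coeff, n in terms:
        c, r = extract_square(n)   # step 3: leave the squarefree part under the radical
        combined[r] += coeff * c   # step 4: combine like radicals
    return dict(combined)

# Example: 3*sqrt(28) - 2*sqrt(63) = 6*sqrt(7) - 6*sqrt(7) = 0
result = simplify_surds([(3, 28), (-2, 63)])
```

Here 28 = 2·2·7 and 63 = 3·3·7, so both terms reduce to multiples of √7 and can be combined.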

February 4

Faber and Faber offered a one million dollar prize for proving the Goldbach Conjecture within 2 years (from 2000 until 2002). They actually insured themselves against someone winning the prize. Does anyone know how much they paid the insurance company, and how this premium (or the probability of someone coming up with a proof) was calculated? Icek (talk) 04:25, 4 February 2009 (UTC)[reply]

I googled and found this, which says that the premium was "in the five figures". I guess that it must be reasonably common for people to take out insurance against things which are so rare that there is no way of getting a good estimate of the probability, so presumably insurers have some kind of standard policy on these things? 81.98.38.48 (talk) 17:08, 5 February 2009 (UTC)[reply]

∫f(x) Integration without dx

Can there be an ∫f(x) (I took away the dx on purpose)? I tried to define it as because so . The Successor of Physics 06:21, 4 February 2009 (UTC)[reply]

Good question. The answer is no, however. It is a notation. You can't have a "(" without a ")", for example. It is the same thing here. The ∫ symbol by itself is meaningless without considering the limit that it represents. When you remove the dx, it doesn't make any sense, because the reader now has no idea what limit it represents and therefore that particular collection of symbols becomes meaningless. In fact, dy/dx is not a fraction. You can't just multiply by dx. It simply is a notation that represents the rate of change of the variable y over the variable x, and you can't separate that out without losing that meaning. In some classes the teacher may "abuse the notation" and pretend that you can in order to suggest the correct intuition, when in reality they are not using the notation correctly. Please see Abuse of notation for further details. Anythingapplied (talk) 07:00, 4 February 2009 (UTC)[reply]
A better example might be the "+" symbol. Your question is the equivalent of asking what just "1+" means, without having a second number. The normal definition of that symbol doesn't apply. So unless you've given "+" another definition in this special case, it is meaningless to say "1+". You will find that some people do use "1+", for example computer programmers, but they have an established definition. Likewise, you will find that in many classes a teacher may write ∫f(x) on the board. The meaning can change depending on the context or what math class you are in. All these symbols depend on having a shared understanding of their definition. Anythingapplied (talk) 07:05, 4 February 2009 (UTC)[reply]
Thanks! The Successor of Physics 07:55, 4 February 2009 (UTC)[reply]
Note also that it is a good rule to write the symbol dx on the right of the integrand, because it clearly tells you at once what the integration variable is and what the integrand is (it is the expression between the ∫ and the dx). Still, you can find an abbreviated notation without dx, whenever there is no ambiguity (I suggest that you always write it, however, when you do computations). Anyway, you have this list of successive simplified forms:
.
You can find them all, in books or at the blackboard. Each one carries less information than the preceding ones. The important thing is to state clearly what is the adopted notation, and not to change it in the middle of a text. Here Anythingapplied's preceding sentence applies too.--pma (talk) 09:36, 4 February 2009 (UTC)[reply]
Sometimes it is more convenient to write dx on the left of the integrand, especially with nested integrals: vs. . — Emil J. 11:39, 4 February 2009 (UTC)[reply]
That's true, but then I'd also like
... pma (talk) 13:47, 4 February 2009 (UTC)[reply]
Sure, as long as the limits give the variables explicitly. But would be very ambiguous. —JAOTC 14:19, 4 February 2009 (UTC)[reply]
Well if that is acceptable, wouldn't the clearest presentational form be to have the integrand on the left?
 ~Kaimbridge~ (talk) 14:09, 4 February 2009 (UTC)[reply]
No. Formally (in the real nonnegative case), the integral is a sum of formal rectangle areas. It doesn't matter if you compute them by multiplying the height and width and then add them () or compute them by multiplying the width and height and then add them () but you can't add them together and then compute them. —JAOTC 14:19, 4 February 2009 (UTC)[reply]
Jao, there is no ambiguity in
as the order of the integrations is clearly indicated: first z, then y, then x (if one agree with the relative "parenthesis-like" convention for the integral signs, of course). IMO, allowing other permutations of integral signs/integrand/differential symbols, certainly does not make it any clearer; it possibly adds a chance of ambiguity. pma (talk) 14:48, 4 February 2009 (UTC)[reply]

dx shouldn't be thought of as only notation. It not only identifies which variable is being integrated with respect to, but also gets the dimensions (or "units", if you like) right. If ƒ(x) is in meters per second and dx is in seconds, then ƒ(xdx is in meters. And so on. Think of dx as an infinitely small increment of x. That is not logically rigorous, but logical rigor isn't everything, and sometimes logical rigor is out of place. Michael Hardy (talk) 00:45, 5 February 2009 (UTC)[reply]

Confidence interval

Please help me understand this. Let's say I suffer a loss of $100 with probability 10% and I suffer a loss of $0 with probability 90%. My college days have long passed but I seem to recall this being a Bernoulli trial with variance 10% * 90% * $100 and mean 10% * $100. What kind of assumptions would I need to calculate a 99.5% percentile for my potential losses given only this information and how would I do it? Is the following correct: I assume my potential losses are normal with the mean and variance above and since the 99.5th percentile for N(10,9) is 33, the percentile for my losses is 10 + 33*9 = X? --Rekees Eht (talk) 15:40, 4 February 2009 (UTC)[reply]

The variance is 900, actually. More importantly, if you're only undergoing this trial once (which seems to be the case), the assumption of normality is totally unjustifiable. Also, this isn't what confidence intervals are about (the term is usually used to refer to estimating population parameters from sample data). In answer to the actual question, you can say that there's a more than 99.5% chance that your losses are in [0,100] (indeed, there's a 100% chance of that), but you can't say the same of any smaller interval. Algebraist 15:49, 4 February 2009 (UTC)[reply]

Thanks. I see I messed up the variance calc - it is 900. And sorry for using the wrong word. Let me ask the question like this: What is the level of potential losses, say L*, for which the probability that the potential losses is less than L* is 99.5%? What assumptions do I need to make to calculate this? --Rekees Eht (talk) 16:45, 4 February 2009 (UTC)[reply]

You've already stated all the assumptions you need: you suffer a loss of $100 with probability 10% and $0 with probability 90%. Given this, the probability that your loss is less than $x is 90% for 0<x<100 and 100% for x>100. There is no x such that the probability is exactly 99.5%. Statistical ideas, gaussian/normal approximations, and so on will only crop up if you repeat your Bernoulli trial lots of times. Algebraist 17:53, 4 February 2009 (UTC)[reply]

Ah ok I understand that now - thanks. Let's say it was theoretically possible to repeat this trial 100 independent times. How would I calculate L* if L* is now the level of loss for which the average loss over the 100 trials is lower than L* with probability 99.5%? --Rekees Eht (talk) 07:12, 5 February 2009 (UTC)[reply]

Consider the random loss L, its mean value μ and its standard deviation σ. If you repeat your experiment many times, the average loss approximately follows the normal distribution, and the percentiles are looked up in a table. The inequality μ−2.8σ < L < μ+2.8σ is satisfied 99.5% of the time. Bo Jacoby (talk) 09:27, 5 February 2009 (UTC).[reply]
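Assuming the 100 trials are independent, the normal (CLT) approximation can be placed next to the exact binomial answer. Numbers follow the thread: a $100 loss with probability 10%, so the average has mean 10 and standard deviation √(0.1·0.9)·100/√100 = 3. A sketch:

```python
from math import comb
from statistics import NormalDist

p, loss, n = 0.10, 100.0, 100

# CLT approximation: average loss ~ N(mean, sd^2) with mean = 10, sd = 3,
# so the one-sided 99.5% level is mean + 2.576*sd, about 17.7 dollars.
mean = p * loss
sd = (p * (1 - p)) ** 0.5 * loss / n ** 0.5
L_star = NormalDist(mean, sd).inv_cdf(0.995)

# Exact version: the average loss equals k dollars where k ~ Binomial(100, 0.1);
# find the smallest k with P(K <= k) >= 0.995.
cdf, k = 0.0, -1
while cdf < 0.995:
    k += 1
    cdf += comb(n, k) * p**k * (1 - p) ** (n - k)
```

The exact binomial quantile lands within a dollar or so of the CLT figure, which is about as good as the approximation gets for a discrete loss.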

Units going bad

I have 15 units in the field. One has gone bad in three years - How many will go bad in eight years? Thank you - Bruce Prather [e-mail removed] —Preceding unsigned comment added by Bruceprather (talkcontribs) 16:31, 4 February 2009 (UTC)[reply]

Who knows? You haven't given nearly enough information to give a sensible answer. What, for example, is a 'unit'? Algebraist 16:33, 4 February 2009 (UTC)[reply]
Since there are 15 of them, Bruce's field is GF(24), of course. Over time, some elements lose their inverses. It happens to us all sooner or later. (Sorry, bored.) —JAOTC 16:51, 4 February 2009 (UTC)[reply]
Ha. Instinct says 8/3, or around 3 units, but you can't be at all confident of that (perhaps someone even more bored could do some kind of confidence interval). Maybe the failed unit was a badly-built dud and the rest will last 100 years, or maybe they're all going to fail around the 3 year mark. The probability of failure in mechanical systems varies over time in a complex way (see Bathtub curve) so as Algebraist says you'd need a lot more data (plotting a large number of failures against time would be a good start). And if the units are connected to each other, there's a whole other set of problems. --Maltelauridsbrigge (talk) 16:58, 4 February 2009 (UTC)[reply]
My instinct says 15(1 − (14/15)8/3) = 2.52… as in exponential decay. That still rounds up to 3, though. — Emil J. 17:25, 4 February 2009 (UTC)[reply]
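Emil J.'s figure, computed directly (the constant-hazard survival model behind it is an assumption, as the thread notes):

```python
# Constant-hazard estimate: each unit independently survives 3 years with
# probability 14/15, so the per-unit probability of failing within 8 years
# is 1 - (14/15)**(8/3), and the expected number failed out of 15 is:
expected_failures = 15 * (1 - (14 / 15) ** (8 / 3))   # about 2.52 units
```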

If the probability that some unit goes bad in some year is , then the probability that the unit survives the year is , and the probability that it survives three years is , and the probability that it has gone bad in three years is , and the probability that out of 15 units go bad in three years is , and the probability that is , and the likelihood that is where is a constant. The probability that out of the 15 units go bad in 8 years is . These expressions can be simplified. The mean value of is and the standard deviation is . The answer to your question is , that the true value is approximately equal to the mean value, but the uncertainty is of the order of magnitude of the standard deviation. Bo Jacoby (talk) 19:59, 4 February 2009 (UTC).[reply]

Why are you assuming independence? Algebraist 20:18, 4 February 2009 (UTC)[reply]

I have got no information on dependence. That's why. The result depends on the information given. More information usually leads to a smaller standard deviation meaning a better result. Bo Jacoby (talk) 22:19, 4 February 2009 (UTC).[reply]

Constructing a series

Suppose that Σaₙ diverges and aₙ > 0. How does one show there exist bₙ with bₙ/aₙ → 0 and Σbₙ divergent? I did this question ages ago but really can't recall!

Thanks,

131.111.8.104 (talk) 17:03, 4 February 2009 (UTC)BTS[reply]

Dunno what the official proof is but if you let
I think this should do it, because for groups of terms summing up to 1 the corresponding terms sum up to something like an entry in the Harmonic series (mathematics). Well, that's the idea; I'm sure one would have to be a bit more careful to do the job properly. Dmcq (talk) 17:29, 4 February 2009 (UTC)[reply]


Exact, Dmcq's one is perfect. Actually if you define
,
the proof of the divergence is immediate: you can see the nth partial sum as an upper Riemann sum for 1/x relative to the subdivision whose points are exactly the first partial sums of the aₙ, so Σbₙ diverges. (In fact, the partial sums of the bₙ are asymptotically the log of the partial sums of the aₙ). --pma (talk) 18:09, 4 February 2009 (UTC)[reply]

Is it true to say ? —Preceding unsigned comment added by 131.111.8.102 (talk) 19:00, 4 February 2009 (UTC)[reply]

Not in general, no. Algebraist 19:01, 4 February 2009 (UTC)[reply]

Then how would we know bn/an → 0? I'm probably being stupid here, sorry =P —Preceding unsigned comment added by 131.111.8.102 (talk) 19:17, 4 February 2009 (UTC)[reply]

We don't. Better hastily redefine to be . Algebraist 19:22, 4 February 2009 (UTC)[reply]

Oh dear, I was being stupid! Thanks :D 131.111.8.102 (talk) 19:26, 4 February 2009 (UTC)BTS[reply]

My fault. You can also go back to Dmcq's form; then you obtain lower Riemann sums, but you can still do an estimate from below, whereas now the integral gives a log estimate from above. Note that the asymptotics with the log that I mentioned needs some mild assumption on the aₙ (I forgot to say). Note also that if the series of the aₙ converges, so does the series of the bₙ, and you have some nice bounds both from below and from above on its sum. --pma (talk) 20:53, 4 February 2009 (UTC)[reply]
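A numerical illustration of the construction bₙ = aₙ/sₙ, where sₙ is the nth partial sum of the aₙ (the choice aₙ = 1/√n below is mine, purely for demonstration): the partial sums of bₙ track log sₙ, so Σbₙ diverges, while bₙ/aₙ = 1/sₙ → 0.

```python
import math

N = 100_000
s = 0.0       # partial sums s_n of a_n = 1/sqrt(n)
sum_b = 0.0   # partial sums of b_n = a_n / s_n
for n in range(1, N + 1):
    a = 1 / math.sqrt(n)
    s += a
    sum_b += a / s

ratio_tail = 1 / s   # = b_N / a_N, tending to 0 since sum a_n diverges
```

With N = 100000, sₙ ≈ 2√N ≈ 631 and the bₙ-sum sits near log 631 ≈ 6.4, exactly the Riemann-sum comparison above.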

February 5

Trig integral?

How do you do the integral:

∫ dv / (k − g·v²)
I tried partial fractions and a substitution of v = sec(theta) but I can't get it. Thanks. Inasilentway (talk) 00:47, 5 February 2009 (UTC)[reply]

Try using tanh instead. Partial fractions should work fine, though. What went wrong? Algebraist 01:01, 5 February 2009 (UTC)[reply]

If k and g are positive, then partial fractions will do it without getting into imaginary numbers, etc.

Do some algebra to figure out what the two "something"s are, and then you have

etc.

If you use trigonometric substitutions, you should say

so that

and then

and

That will lead you to the integral of 1/sin. That's a hard one, but you don't need to do it since it's in the standard books and tables. Finally, you need to undo the substitution at the end, changing it back from a function of θ to a function of v.

Probably simpler just to go with partial fractions. Michael Hardy (talk) 01:14, 5 February 2009 (UTC)[reply]

Algebraist and Michael Hardy, thank you. Here's what went wrong when I tried using partial fractions. I set up the equation as Michael stated, using A and B as the somethings, respectively. I solved for A and B and one step in there was:

which led to the system:

and then

which has got to be wrong... Inasilentway (talk) 01:38, 5 February 2009 (UTC)[reply]

However did you get ? That's not right at all. Algebraist 01:44, 5 February 2009 (UTC)[reply]
Bad algebra is how I did it. lol. So it should really be

... which leaves me without a solution for A or B...

Can I just use this identity: http://www.integral-table.com/eq13.png and set a = k, b = 0, c = -g, x = v?

That's for cases where the discriminant b² − 4ac is negative. Michael Hardy (talk) 02:18, 5 February 2009 (UTC)[reply]
You have
Multiply both sides by the common denominator and you get:
On the left side you have
and the first is 0 and the second is 1.
On the right side you have
and the first is
and the second is
Therefore (since √k ≠ 0) you have
and
From the first equation you get B = −A. Then you can substitute  −A for B in the second equation. Michael Hardy (talk) 02:15, 5 February 2009 (UTC)[reply]
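A numeric sanity check of the partial-fraction route, under one common sign convention (the decomposition below is my own arrangement, not necessarily the exact one above; it is valid for |v| < √(k/g), and k, g are arbitrary positive test values):

```python
import math

# Convention checked here:
#   1/(k - g v^2) = (1/(2 sqrt(k))) * [1/(sqrt(k) - sqrt(g) v) + 1/(sqrt(k) + sqrt(g) v)]
# which integrates to (1/(2 sqrt(k g))) * ln((sqrt(k) + sqrt(g) v)/(sqrt(k) - sqrt(g) v)).

k, g = 2.0, 3.0
sk, sg = math.sqrt(k), math.sqrt(g)

def integrand(v):
    return 1 / (k - g * v * v)

def split(v):
    """The partial-fraction decomposition."""
    return (1 / (2 * sk)) * (1 / (sk - sg * v) + 1 / (sk + sg * v))

def antiderivative(v):
    return math.log((sk + sg * v) / (sk - sg * v)) / (2 * math.sqrt(k * g))

v, h = 0.4, 1e-6   # v lies inside (-sqrt(k/g), sqrt(k/g)) ~ (-0.816, 0.816)
deriv = (antiderivative(v + h) - antiderivative(v - h)) / (2 * h)
```

The decomposition matches the integrand exactly, and the numerical derivative of the antiderivative matches it to finite-difference accuracy.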

A difficult algebra problem involving logarithms

I would like to express α as a function of f and p given the following:

0 = p * ln(1 - α + αf) + (1 - p) * ln(1 - α)

--Tigerthink (talk) 04:05, 5 February 2009 (UTC)[reply]

But can you specify what kind of numbers p and f are, and what kind of solution you are looking for (real/integer/positive/close to zero...)? The behaviour and number of the solutions change quite a lot. You are not happy with α = 0, I presume. So, (putting g := 1 − f) you want to solve with respect to α the equation
,
which at least looks more like an algebra problem; is it so? (By the way, if you use LaTeX format, people will start answering earlier... it is in human nature, I guess) --pma (talk) 13:07, 5 February 2009 (UTC)[reply]
p is the probability of something (and therefore between 0 and 1). f is a positive real number. fp is greater than 1. I would be surprised if α were ever less than 0 or greater than 1 if those two conditions hold. Sorry, I don't know LaTeX. And you might find it useful to know that the thing I'm really interested in is the limit of the value of α as f goes to infinity, in terms of p.--Tigerthink (talk) 16:36, 5 February 2009 (UTC)[reply]
It looks to me like as . The first term becomes (if you excuse the very informal notation) , so the only way the total could be 0 is for the second term to be . That can only happen if . --Tango (talk) 16:54, 5 February 2009 (UTC)[reply]
But then the first term goes to log{-∞) or log(∞), depending on the exact rates at which f and α diverge, so it must be more complicated than that. α=0 always works, though. Algebraist 17:06, 5 February 2009 (UTC)[reply]
Good point, but it's irrelevant because I was answering the wrong question anyway (see below). α=0 is a rather boring answer... --Tango (talk) 22:47, 5 February 2009 (UTC)[reply]
If the second term is , then actually , not . — Emil J. 17:11, 5 February 2009 (UTC)[reply]
More precisely, if 0 < p < 1 is fixed, then I think as . — Emil J. 17:26, 5 February 2009 (UTC)[reply]
Wow, that's a pretty complicated formula. Can you give me any clues as to how you derived it? And if you'll excuse my mathematical ignorance, what does O mean in this context? Also, if it's the limit as f goes to infinity, then why is f present in the expression?--Tigerthink (talk) 20:52, 5 February 2009 (UTC)[reply]
See Big O notation. Algebraist 20:58, 5 February 2009 (UTC)[reply]
Oh, yeah, for some reason I was thinking p>1, I'm not sure why... For the correct values of p, α=1 (in the limit) would work, you're right. --Tango (talk) 22:47, 5 February 2009 (UTC)[reply]
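For what it's worth, the nonzero root can also be found numerically by bisection (a sketch; the function names and test values are my own). For p = 1/2 the equation reduces to (1 + (f−1)α)(1 − α) = 1, whose nonzero root is α = (f−2)/(f−1), which makes a handy exactness check and also shows α → 1 as f → ∞:

```python
import math

def h_val(a, p, f):
    """p*ln(1 - a + a*f) + (1-p)*ln(1 - a)."""
    return p * math.log(1 - a + a * f) + (1 - p) * math.log(1 - a)

def solve_alpha(p, f):
    """Bisect for the nonzero root: near a = 0 the expression behaves like
    a*(p*f - 1) > 0 when p*f > 1, and it tends to -infinity as a -> 1,
    so there is a sign change strictly inside (0, 1)."""
    lo, hi = 1e-9, 1 - 1e-12
    for _ in range(200):
        mid = (lo + hi) / 2
        if h_val(mid, p, f) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha_small = solve_alpha(0.5, 3.0)      # closed-form root: 1/2
alpha_large = solve_alpha(0.5, 1000.0)   # closed-form root: 998/999, near 1
```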

What's the best tool for signal intersection?

Say I have two signals, S₁ and S₂. Both contain a similar signal S, with a particular noise each. Basically:

S₁ = S + N₁,  S₂ = S + N₂.

Now, what can I do to extract S? Given there are three unknowns and only two equations, I suppose it can't be solved so easily. In the case of signal processing, then, what tools are available to solve this kind of problem? What I really want here is some theoretical tool. Cheers! — Kieff | Talk 05:39, 5 February 2009 (UTC)[reply]

If you know nothing about what the signal might be, and nothing about how the noise might have occurred, then you can't really say anything. If you're able to experiment on the channel (measuring the output from a given input), you can get a very good picture of how the probability density function of the noise looks (it might be both white and Gaussian, for instance), and then it's easy to make an educated guess and also fairly easy to compute the reliability of that guess. The Statistical signal processing and Wiener filter articles might help. —JAOTC 17:21, 5 February 2009 (UTC)[reply]
I might have some sort of frequency/spatial mixup, but if this were a measurement of physical quantities, then a very sane approach is just to average the two signals, S3 = (S1 + S2)/2. Assuming the noises N1 and N2 have a mean of 0 (and possibly that they are a sum of a uniform (white) and a normal (gaussian) random variable with mean 0), then the noise of S3 should have smaller variance than the noise of either S1 or S2. I tried reading the Wiener filter article, but it appeared to only look at one signal. I might have missed something due to several implicit fourier transforms. JackSchmidt (talk) 17:41, 5 February 2009 (UTC)[reply]
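JackSchmidt's averaging idea, sketched with made-up data (independent zero-mean Gaussian noise on both copies): the average of the two observations should carry roughly half the noise variance of either one.

```python
import math
import random
import statistics

random.seed(42)
n = 50_000
signal = [math.sin(0.01 * t) for t in range(n)]

# Two noisy copies of the same signal, independent N(0, 1) noise on each.
s1 = [s + random.gauss(0, 1.0) for s in signal]
s2 = [s + random.gauss(0, 1.0) for s in signal]
s3 = [(a + b) / 2 for a, b in zip(s1, s2)]   # the averaged estimate

def noise_variance(observed):
    """Variance of the residual against the known true signal."""
    return statistics.pvariance([o - s for o, s in zip(observed, signal)])

v1, v3 = noise_variance(s1), noise_variance(s3)
```

Here v1 comes out near 1 and v3 near 0.5, the variance-halving one expects when averaging two independent noisy measurements.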

Open interval as a disjoint union of closed intervals

Is it possible to write the open interval (0,1) as a disjoint union of closed positive-length intervals, or not?

131.111.8.96 (talk) 06:18, 5 February 2009 (UTC)BTS[reply]

No. Let T be the set of endpoints of the intervals. Then T ∪ {0,1} is a perfect set of reals (nonempty, closed and without isolated points), hence uncountable. But this is a contradiction since there clearly can be only countably many intervals. Joeldl (talk) 09:03, 5 February 2009 (UTC)[reply]

Sorry - why can there only be countably many intervals? Is it because each must uniquely enclose a rational?

131.111.8.97 (talk) 17:11, 5 February 2009 (UTC)BTS[reply]

Yes. Now go and do your own example sheet. Algebraist 17:14, 5 February 2009 (UTC)[reply]

Log normal distribution

Hello. Given the parameters (mu and sigma), how would I calculate the 99.5th percentile for a lognormal distribution? My calculator can calculate the percentiles for normal distributions. --Rekees Eht (talk) 08:36, 5 February 2009 (UTC)[reply]

The article lognormal distribution has formulas for computing the parameters for the corresponding normal distribution. Bo Jacoby (talk) 09:31, 5 February 2009 (UTC).[reply]

Sorry but I don't understand. Let's say I have a lognormal variable and I know its mean and variance. I can use the formulas to work out the corresponding normal parameters. But how would I work out the 99.5th percentile for the original lognormal variable? That article is hard to understand. --Rekees Eht (talk) 09:54, 5 February 2009 (UTC)[reply]

Okay I did something. Please let me know if it's right. Say I have a lognormal variable with mean 0.000325 and std dev 0.00900802 and I want to find the 99.5th percentile. Using the formulas you provided, I calculated that the corresponding parameters for the normal distribution is mu = -11 and sigma = 3. Then I used excel's "LOGNORMDIST" function with the parameters and did a goal seek on the "x" until the CDF equals 99.5%. The answer I got is 0.995566. Does this sound reasonable? --Rekees Eht (talk) 15:21, 5 February 2009 (UTC)[reply]

An easier approach is to use excel's inverse logdist function as follows: =LOGINV(0.995,0.000325,0.00900802). The solution this returns is 1.023807. Wikiant (talk) 15:49, 5 February 2009 (UTC)[reply]
Thanks but I think you are supposed to use the corresponding normal parameters (-11 and 3). When I do that I get an answer of 0.008967. Can you explain why that is so small? --Rekees Eht (talk) 16:15, 5 February 2009 (UTC)[reply]
If I am understanding you correctly, -11 and 3 are the parameters for the *normal* distribution. The LOGINV function is expecting parameters for the *lognormal* distribution. Wikiant (talk) 16:28, 5 February 2009 (UTC)[reply]
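The computation can be done end-to-end from the quoted mean and sd (a sketch): convert the lognormal's mean/sd to the underlying normal parameters, then use percentile = exp(μ + σ·z). Note the μ = −11, σ = 3 quoted above are rounded from about −11.35 and 2.58, and the unrounded values reproduce the 0.008967 figure.

```python
import math
from statistics import NormalDist

m, s = 0.000325, 0.00900802          # mean and sd of the lognormal variable

# Parameters of the underlying normal (formulas from the lognormal article):
sigma2 = math.log(1 + (s / m) ** 2)  # variance of the underlying normal
sigma = math.sqrt(sigma2)            # about 2.58
mu = math.log(m) - sigma2 / 2        # about -11.35

z = NormalDist().inv_cdf(0.995)      # 99.5% point of the standard normal
p995 = math.exp(mu + sigma * z)      # about 0.00897
```

The answer is small because the distribution is extremely skewed: the median is e^μ ≈ 1.2e-5, far below the mean of 0.000325.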

Countable series in the rationals

Is there any possible way to list the rationals Q as a₁, a₂, a₃, … such that Σ (aₙ₊₁ − aₙ)² converges?

Thanks, 131.111.8.97 (talk) 16:42, 5 February 2009 (UTC)Mathmos6[reply]

Yes. For more details, consult your Analysis I supervisor, since he/she is paid to teach you this material, and we are not. Algebraist 16:54, 5 February 2009 (UTC)[reply]
An example would be . Anythingapplied (talk) 19:43, 5 February 2009 (UTC)[reply]
I think the OP wants *all* the rationals to be included. If it's possible, it certainly wasn't taught in my first year analysis course... --Tango (talk) 19:46, 5 February 2009 (UTC)[reply]
It wasn't taught in mine, either, but it can be solved using results from the course. If it was taught, I might be willing to explain it, since it would be something the OP would have to know. It's supposed to be an interesting problem for students to attack, and it doesn't do someone any good to tell them the solution to an interesting problem. Attacking interesting problems is how one becomes better at mathematics. Algebraist 21:02, 5 February 2009 (UTC)[reply]
nb: this is a homework question. Algebraist 21:09, 5 February 2009 (UTC)[reply]
But how can we know if (s)he really has a paid supervisor... And in that case, what's wrong if (s)he wants our help? As a first hint for 131, I suggest reflecting on this fact: a finite sum of the form can be made arbitrarily small by choosing suitably the number and the , and one can do that even under the condition that the first and the last of the are two given numbers a and b, and there is still the freedom to choose the others avoiding some numbers, if one wants... I hope this will be of help, but not too much help. (Algebraist: I'll remove this post if it is not opportune) pma (talk) 21:30, 5 February 2009 (UTC)[reply]


I'm actually a Physics student rather than a Mathematics student - I simply do the maths example sheets too, to try and improve my ability and as a matter of interest, since the physics isn't particularly challenging and I enjoy them; so I don't actually have an analysis supervisor. I try to read up as much as I have time to on whatever the topic is, in this case analysis, but this sort of problem hasn't been brushed upon in any of my reading so far. I wasn't expecting an explicit answer, because I was under the impression the reference desk was intended for guidance rather than that, but I apologize if I made it sound like I expected you to 'teach me' the material; I merely wanted a nudge in the right direction, if nothing else just so I could read up further on the appropriate topics in analysis, but any additional assistance was always welcome. Thank you for the help - I'll go away and have a think about it. 131.111.8.99 (talk) 21:38, 5 February 2009 (UTC)Mathmos6[reply]

We'll be happy to help you if you come back with your ideas after thinking about it. A hint: I doubt you'll find much useful in further reading; this requires thought, not knowledge. Algebraist 21:44, 5 February 2009 (UTC)[reply]

I do not know the answer, but I think that the answer is no. If the series converges, then for each positive real there exists a positive integer such that . So all, except a finite number, of the are very close to one another. I do not expect these numbers to include all, except a finite number, of the rationals. Perhaps a proof can be made along this line. Good luck. Bo Jacoby (talk) 22:21, 5 February 2009 (UTC).[reply]

I already gave the answer. It is yes. Algebraist 22:23, 5 February 2009 (UTC)[reply]
I thought my hint was even too explicit. So, to start try to go from to with a corresponding sum =1/100 or so . --pma (talk) 22:48, 5 February 2009 (UTC)[reply]

So then, could you split the infinite sum up into an infinite series of finite sums, the values and bounds of which you 'choose' as pma suggested so that the size of the sums tends to 0 but the full collection of sums contains every single rational? —Preceding unsigned comment added by 131.111.8.104 (talk) 23:04, 5 February 2009 (UTC)[reply]

Well suppose you have an enumeration r0, r1, r2, ... of the rationals; start your sum defining a0 = 0; your first task is to reach r0; you can make as many steps as you want, till you get an = r0 for some n with a partial sum say < 1/2. Then go on. Remember, you have to reach all the rk's, each one only once... --pma (talk) 23:22, 5 February 2009 (UTC)[reply]
(PS: formally: define the an by induction accordingly)
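To make the cost of each leg quantitative (a sketch of my own; the formulas in the posts above were stripped from this archived revision): travelling a distance d in m equal steps costs d²/m, which can be made as small as desired.

```latex
% Travelling from the current point a_n to the target r_k, a distance
% d = |r_k - a_n|, in m equal steps contributes
\[
  \sum_{j=n}^{n+m-1} (a_{j+1}-a_j)^2 \;=\; m\Bigl(\frac{d}{m}\Bigr)^{2} \;=\; \frac{d^2}{m},
\]
% which is below 2^{-k} as soon as m > d^2\,2^k.  Hence the whole series
% is bounded:
\[
  \sum_{n=0}^{\infty} (a_{n+1}-a_n)^2 \;\le\; \sum_{k=0}^{\infty} 2^{-k} \;=\; 2,
\]
% with small perturbations of the intermediate steps available to keep
% every a_n a distinct rational.
```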

How about this: Let a1, a3, a5, ... list all the rationals. Then for n odd, let an+1 = an + 1/n. Then at least the sum of (an+1 − an)² over odd n would converge. Next try to figure out how to tweak that idea to make it work when the terms (an+2 − an+1)² for n odd are also added. Michael Hardy (talk) 23:38, 5 February 2009 (UTC)[reply]

But don't you have repetitions this way, since the a with odd indices already cover all rationals? I am also curious to see Algebraist's construction for he has some elegant trick for sure.... The OP is not homework! pma (talk) 00:22, 6 February 2009 (UTC)[reply]

So then - sorry if I haven't quite followed - you could take r0 to be, say, 1, reaching r0 from 0 with a partial sum of less than or equal to 1/2; then, since you have more omitted rationals on either side of 1, you need to go 'both ways', so you could go from r0 = 1 to r1 = -1, say, omitting the already counted rationals as pma pointed out is possible earlier, with a new partial sum < 1/4; then back to r2 = 2, say, omitting previously chosen rationals to obtain a partial sum < 1/8; then to -2, 3, -3 and so on by the same process? As your steps must get 'smaller' in order to reduce the partial sum, would this ensure that you didn't miss out any rationals? I suppose going from 0 to 1 you could just take 1 step, or 0 -> 1/2 -> 1; then on the way back you could choose the path of previously unchosen 'thirds' to make sure you didn't omit any rationals (1 -> 2/3 -> 1/3 -> 0), and then choose halves (0 -> -1/2 -> -1) for our newly included interval, adding as many interstitial previously unchosen rationals as needed in between each pair to make sure your sum is less than 1/4? (Would this step need to be formalized more to avoid causing problems later on? You should be able to 'influence' the sum as necessary, since the closer together the rationals you choose are, the smaller the sum will be.) Continuing onwards with -1 -> -2/3 -> -1/3 -> 0 -> 1/4 -> ... -> 2, and then adding in more interstitial rationals to influence your sum - would this method work, since the rationals are dense and you could always squeeze in further rationals as needed? Apologies for the lack of formatting, too tired to LaTeX! Thanks,

131.111.8.102 (talk) 03:29, 6 February 2009 (UTC)Mathmos6[reply]
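A quick numerical check of the scheme sketched in this thread (my sketch; the particular enumeration and the per-leg budget of 2^(-k) are choices of mine, not from the original posts): travel to the k-th rational in m equal steps, with m large enough that the leg contributes less than 2^(-k) to the sum of squared increments.

```python
from fractions import Fraction

def rationals(limit):
    """The first `limit` rationals in a simple enumeration:
    0, then +-p/q in lowest terms, ordered by increasing p + q."""
    out = [Fraction(0)]
    s = 2
    while len(out) < limit:
        for p in range(1, s):
            q = s - p
            f = Fraction(p, q)
            if f.denominator == q:  # p/q was already in lowest terms
                out.extend([f, -f])
        s += 1
    return out[:limit]

# Visit the k-th rational via m equal steps, m chosen so that the leg
# contributes less than 2^(-k) to sum (a_{n+1} - a_n)^2.  (A full
# construction would also perturb steps so no rational is revisited;
# this numeric check skips that bookkeeping and just totals the cost.)
targets = rationals(400)
total = 0.0
for k in range(1, len(targets)):
    d = float(abs(targets[k] - targets[k - 1]))
    m = int(d * d * 2**k) + 1  # guarantees (d*d)/m < 2**(-k)
    total += d * d / m

print(total)  # bounded by sum over k of 2^(-k) = 1, for any number of targets
```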

February 6

Series

Is there a way to determine whether the infinite series

∑ 1/(n ln(n) ln(ln(n)))

converges or diverges? I've tried several tests repeatedly, but to no avail. —Preceding unsigned comment added by 70.52.46.213 (talk) 02:14, 6 February 2009 (UTC)[reply]

Consider your function over all x rather than just the natural numbers - is it the derivative of something similar to part of your denominator, perhaps? See if you can work out the integral of the summand first, in terms of a single function. In which case, do you know a series convergence test related to the derivative of a function? 131.111.8.102 (talk) 02:57, 6 February 2009 (UTC)Mathmos6[reply]

Well, I know that the integral of 1/(n ln(n)) is ln(ln(n)), so by the integral test it diverges... but how would I go from there? —Preceding unsigned comment added by 70.52.46.213 (talk) 03:01, 6 February 2009 (UTC)[reply]

The integral of 1/n is ln(n), the integral of 1/(n ln(n)) is ln(ln(n)), the integral of 1/(n ln(n) ln(ln(n))) is ___? Can you spot the pattern? 131.111.8.102 (talk) 03:04, 6 February 2009 (UTC)Mathmos6[reply]

Ah thank you very much. —Preceding unsigned comment added by 70.52.46.213 (talk) 03:16, 6 February 2009 (UTC)[reply]
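Assuming the series in question is ∑ 1/(n ln(n) ln(ln(n))) (the displayed formula did not survive in this archived revision), the integral-test pattern in the hint can be checked numerically:

```python
from math import log

def f(n):
    # General term of the series sum 1/(n ln(n) ln(ln(n))), valid for n >= 3.
    return 1.0 / (n * log(n) * log(log(n)))

def lll(x):
    # Antiderivative suggested by the pattern: ln(ln(ln(x))).
    return log(log(log(x)))

# Integral test: a tail sum should track the matching integral closely,
# since f is positive and decreasing on this range.
s = sum(f(n) for n in range(101, 10**5 + 1))
i = lll(10**5) - lll(100)
print(s, i)  # the two agree to within f(100), about 0.0014

# Since ln(ln(ln(x))) -> infinity (very slowly!), the series diverges.
```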

Question on roots

This has left me totally flummoxed: how do I prove that the equation:

has only one root, lying between 1 and 2, which is irrational? While the first and second parts can be solved by plotting the graph of the function, the third part stumps me. A helping hand, please? I know the policy of not asking homework questions full well and respect it. This is just a practice question from one of the tougher Indian high-school-level books.--Leif edling (talk) 03:31, 6 February 2009 (UTC)[reply]
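The equation itself was lost from this archived revision, so as a stand-in take x³ + x − 3 = 0 (my example, not necessarily the original): it is strictly increasing, so it has exactly one real root, which lies between 1 and 2; irrationality then follows from the rational root theorem, sketched here:

```python
from fractions import Fraction

def rational_root_candidates(coeffs):
    """Rational root theorem: any rational root p/q (in lowest terms) of an
    integer polynomial has p dividing the constant term and q dividing the
    leading coefficient.  `coeffs` runs from highest to lowest degree."""
    def divisors(k):
        k = abs(k)
        return [d for d in range(1, k + 1) if k % d == 0]
    cands = set()
    for p in divisors(coeffs[-1]):
        for q in divisors(coeffs[0]):
            cands.update({Fraction(p, q), Fraction(-p, q)})
    return cands

def evaluate(coeffs, x):
    # Horner's rule, in exact arithmetic.
    result = Fraction(0)
    for c in coeffs:
        result = result * x + c
    return result

# Hypothetical stand-in equation: x^3 + x - 3 = 0.
coeffs = [1, 0, 1, -3]
rational_roots = [c for c in rational_root_candidates(coeffs)
                  if evaluate(coeffs, c) == 0]
print(rational_roots)  # [] -> no rational root, so the real root is irrational
```

Here evaluate(coeffs, 1) = −1 and evaluate(coeffs, 2) = 7, which also confirms (by the intermediate value theorem) that the root lies between 1 and 2.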

Beat Frequency for Pulses

Hello,

I was looking at the Wikipedia article for beat frequency, and there it lists this formula for calculating the beat frequency of two sine waves:

fbeat = |f1 − f2|
It doesn't, however, say how to calculate the beat frequency of binary pulses. In a simple example, I'm thinking of two blinking lights that are blinking at slightly different frequencies. Or in a more complicated way, pulses of electricity that are very brief, with longer times of no pulse.

Does anyone know the formulas for these examples? Thank you in advance.

--Grey1618 (talk) 06:08, 6 February 2009 (UTC)[reply]
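For the sine-wave case, the quoted formula (fbeat = |f1 − f2|; the displayed version did not survive in this archive) comes from the product-to-sum identity, which a quick numerical check confirms (f1 = 5 Hz and f2 = 4 Hz are arbitrary choices of mine):

```python
from math import cos, pi

f1, f2 = 5.0, 4.0  # arbitrary example frequencies, in Hz

def two_tones(t):
    # Superposition of two unit-amplitude cosines.
    return cos(2 * pi * f1 * t) + cos(2 * pi * f2 * t)

def envelope_form(t):
    # cos A + cos B = 2 cos((A - B)/2) cos((A + B)/2): a carrier at the
    # mean frequency, modulated by an envelope at half the difference.
    return 2 * cos(2 * pi * (f1 - f2) / 2 * t) * cos(2 * pi * (f1 + f2) / 2 * t)

# The identity holds for every t; the *perceived* beat rate is |f1 - f2|
# (not half of it) because loudness follows the squared envelope, which
# peaks twice per envelope cycle.
for t in (0.0, 0.1, 0.37, 1.234):
    assert abs(two_tones(t) - envelope_form(t)) < 1e-9

print("beat frequency:", abs(f1 - f2), "Hz")
```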