Wikipedia:Reference desk/Mathematics

Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


October 10

Intersection of 2 dense subspaces of a normed vector space necessarily dense?

As the title says, is the intersection of 2 dense subspaces of a normed vector space necessarily dense? I suspect not, but I'm struggling to come up with an example - I imagine there's one where the intersection is the zero vector only, but please could anyone help me a bit further since this line of thought has got me nowhere? Thanks! 62.3.246.201 (talk) 02:25, 10 October 2010 (UTC)[reply]

Consider Q and (√2)Q as subspaces of the reals. 71.167.238.13 (talk) 02:41, 10 October 2010 (UTC)[reply]
Gotcha, thanks! 62.3.246.201 (talk) 11:22, 10 October 2010 (UTC)[reply]
Usually, normed vector spaces are meant to be vector spaces over the field of real or complex numbers. To quote an example with real vector spaces, just think of the Banach space L^1(0,1) with the subspace of the continuous functions, and that of the simple functions. Both are dense linear subspaces, but their intersection is a line - the constant functions. --pma 19:17, 11 October 2010 (UTC)[reply]

Very straightforward probability

I'm kind of embarrassed that I'm stuck on this, but here's my question:

Suppose you have a box with N black marbles and 1 gold marble. If you randomly select n marbles from the box, what's the probability that you have selected a gold marble?

So I tried to set this up as #number of combinations with a gold marble/# total number of combinations, but I keep getting answers that don't make sense. I don't even know if I should treat the black marbles as distinct or not! Please help! 74.15.136.172 (talk) 04:45, 10 October 2010 (UTC)[reply]

Put N+1 white marbles in a box; take n of them up and put on the table beside the box. Now close your eyes while a demon appears and paints all the marbles black except a randomly selected one which he paints gold. What is the probability that the one he selected was one of those you've already taken from the box? –Henning Makholm (talk) 05:08, 10 October 2010 (UTC)[reply]
Right, obviously. Thanks a lot! 74.15.136.172 (talk) 15:05, 10 October 2010 (UTC)[reply]

the way to do these things is to subtract from 1. what is the probability you will roll a 6 at least once if you roll a die 6 times? Well, the probability that you don't once is 5/6, that you don't twice is (5/6)*(5/6) and that you don't six times is (5/6)^6. That's about 0.334897977, so if that's the probability you don't roll even one six in the six rolls, if you subtract it from one you get the probability that you do. Do the same thing for your problem. 92.224.205.56 (talk) 10:25, 10 October 2010 (UTC)[reply]

Makholm's way is far easier. Taemyr (talk) 10:30, 10 October 2010 (UTC)[reply]
And more relevant - dice-rolling is selection with replacement, removing marbles is without replacement. I do wish that all replies were helpful →81.131.164.39 (talk) 16:11, 10 October 2010 (UTC)[reply]
I said do the "same thing" for your problem. If N is 5, and n is 3, you would do 1-(5/6) * (4/5) * (3/4). I thought it would be obvious. you can work out the general formula for N and n in the same way. 92.224.205.56 (talk) 20:43, 10 October 2010 (UTC)[reply]
So why not give this example straight away, instead of claiming that a power of a constant fraction is the "same thing" as the product of fractions with numerator and denominator each decreasing by 1? Why claim that "the way to do these things" is a method which arrives at the straightforward (to quote the OP) answer only after tedious algebraic cancellation?→81.131.164.39 (talk) 19:51, 12 October 2010 (UTC)[reply]



See Hypergeometric distribution. Bo Jacoby (talk) 17:38, 10 October 2010 (UTC).[reply]

See Overkill. Seriously though, knowledge of different distributions isn't supposed to replace probability common sense. -- Meni Rosenfeld (talk) 19:00, 10 October 2010 (UTC)[reply]
Right. But the purpose of a reference desk is to provide reference. Bo Jacoby (talk) 04:41, 12 October 2010 (UTC).[reply]
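For anyone who wants to see the closed form spelled out: with N black marbles and 1 gold marble, drawing n of them gives P(gold among the n drawn) = n/(N+1), which agrees both with the "subtract from 1" product above and with simulation. A small Python sketch (N = 5, n = 3 are just example values):

    import random
    from fractions import Fraction

    N, n = 5, 3                    # N black marbles plus 1 gold marble; draw n

    # Closed form from the "demon" argument: each of the N+1 marbles is equally
    # likely to be the gold one, so P = n/(N+1).
    p_direct = Fraction(n, N + 1)

    # "Subtract from 1": P = 1 - prod_{k=0}^{n-1} (N-k)/(N+1-k)
    prod = Fraction(1)
    for k in range(n):
        prod *= Fraction(N - k, N + 1 - k)
    p_product = 1 - prod

    # Monte Carlo check (marble number 0 plays the role of the gold marble)
    trials = 200_000
    hits = sum(1 for _ in range(trials) if 0 in random.sample(range(N + 1), n))

    print(p_direct, p_product, hits / trials)   # 1/2, 1/2, ~0.5 for N=5, n=3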

A mirror problem

Dear all:

I am working on the following problem:

A mirror has the property that whenever a ray of light emanates from the origin it reflects parallel to the x-axis. Find the equation of the mirror.

I have produced the following illustration:

and got as far as working out the following equations for the three lines:

Incident line:

Normal line:

Reflected line:

However, I am stuck with trying to relate the three line equations into a single ODE that captures the relationship between these three lines: namely that the angle between the incident line and the normal line is equal to the angle between the normal line and the reflected line. Could anyone help me out?

Thanks for all your help.

L33th4x0r (talk) 12:46, 10 October 2010 (UTC)[reply]

Let be the angle between the reflected and normal line. The (negative of the) slope of the normal line is . The slope of the incident line is . Now use the trigonometric identity . -- Meni Rosenfeld (talk) 13:03, 10 October 2010 (UTC)[reply]
You can solve it using some vector calculus. In particular the Frenet–Serret frame for plane curves. At each point of a regular plane curve you have a unit tangent vector T and a unit normal vector N. Let the plane curve be given by γ(t) = (x(t),y(t)). Then we have
The chord joining the origin to the curve is exactly γ(t). This can be written in the form γ = αT + βN where α and β are some numbers. Since T⋅N = 0, T⋅T = N⋅N = 1 we can take the dot products to show that α = γ⋅T and β = γ⋅N, thus γ = (γ⋅T)T + (γ⋅N)N. The light ray reflects in the normal line, so the tangential component is reversed. That means that reflecting γ in the normal line gives ρ = (γ⋅N)N – (γ⋅T)T. The vector ρ lies on the line of the reflected light. For that to be parallel with the x-axis we need ρ⋅(0,1) = 0. Doing the substitutions and taking the denominator gives a differential equation:
For the problem we know that the curve's tangent line must be vertical when it crosses the x-axis. So we can assume that y(t) = t, giving:
We can solve this differential equation, and we see that for non-zero k:
Since y(t) = t you now have your final equation for the curve. You actually have two one-parameter families of curves, given for k < 0 and k > 0.
You could replace k by the x-intercept if you liked. When y = 0 you have x = –k/2, i.e. k = –2x. Putting this into the formula, the equations become
where p ≠ 0 is the x-intercept. Fly by Night (talk) 14:48, 10 October 2010 (UTC)[reply]

The solution is well-known to be a parabola whose axis is parallel to the x-axis, but some of the ways of showing that above are cumbersome. There are some high-school-geometry-style arguments for it. I'll see if I can put one here later if there's some interest. Michael Hardy (talk) 15:44, 10 October 2010 (UTC)[reply]

There are other non-continuous, piecewise smooth solutions which aren't parabolas. Fly by Night (talk) 17:24, 10 October 2010 (UTC)[reply]
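To make the parabola claim concrete, here is one quick verification (my own bookkeeping, not necessarily the high-school-geometry argument Michael Hardy has in mind, and it only concerns the smooth solutions): take the family y^2 = 4p(x + p), p ≠ 0, whose focus is at the origin. Implicit differentiation gives y' = 2p/y, so (up to signs) the tangent at (x, y) makes an angle with the horizontal reflected ray whose tangent is 2p/y. The incident ray from the origin has slope y/x, and the tangent of the angle it makes with the tangent line is

    (y/x − 2p/y) / (1 + (y/x)(2p/y)) = (y^2 − 2px) / (y(x + 2p)) = 2p/y,

using y^2 = 4px + 4p^2 in the last step. The two angles are equal, so the angle of incidence equals the angle of reflection at every point, i.e. every parabola in this family reflects rays from the origin parallel to the x-axis. (This only checks that the parabolas work; that they are the only smooth solutions is what the differential-equation argument above establishes.)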


October 11

Markov Chains

Hello everyone, I have a question here which I have been pondering for a while but can't figure out how to do a general case. So we are studying first step analysis in a Markov Chain class and I am trying to figure out how to do this. Let P be a transition probability matrix for a Markov chain and let m_i(j) be the expected number of visits starting at i to state j before the next visit to state i. Let m_i denote the row vector (m_i(1), m_i(2), ...). If the chain is irreducible and recurrent show that m_i satisfies m_i P = m_i and that m_i(j) < ∞ for all j in the state space. Note that by this definition m_i(i) = 1. I suppose this would help us solve for such expectations by simply solving a left eigenvalue/eigenvector problem (with the constraint that the i-th component is 1, to have a unique solution). So I can do specific examples and I have done numerical examples to convince myself that it is true but I don't know where to begin for a general case. Any help would be appreciated. Thanks! -Looking for Wisdom and Insight! (talk) 05:53, 11 October 2010 (UTC)[reply]

It is hard to provide really helpful answers to questions like this because we don't know which useful helper properties your text has already established. Here are some hints that point toward a proof from first principles:
  1. Let Pi be the matrix whose ith row is that of P (that is, the transition probabilities away from state i) and the other rows are 0.
  2. Compute each mi(j) separately following the hint I gave 174.29.63.159 yesterday (see above). It turns out that mi is exactly the ith row of the inverse of the matrix I–P+Pi.
  3. Therefore mi(I–P+Pi) = ei, and the rest is just algebra, remembering that the probabilities in Pi sum to 1.
It may be helpful while thinking through the matter to fix i=1 (which you can do without loss of generality). –Henning Makholm (talk) 14:43, 11 October 2010 (UTC)[reply]
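In case a numerical sanity check helps before attempting the proof: the property being asked about appears to be the usual invariance of the expected-visit vector, and it is easy to test on a small chain. A rough Monte Carlo sketch in Python (the 3-state matrix is arbitrary, and m_i(j) is taken to count visits to j during one excursion that starts at i and ends just before the return, so that m_i(i) = 1):

    import numpy as np

    P = np.array([[0.2, 0.5, 0.3],
                  [0.4, 0.1, 0.5],
                  [0.3, 0.6, 0.1]])      # an arbitrary irreducible transition matrix
    i = 0                                 # reference state
    excursions = 100_000
    visits = np.zeros(3)

    for _ in range(excursions):
        visits[i] += 1                    # the starting visit to i counts once
        state = np.random.choice(3, p=P[i])
        while state != i:                 # count visits until the chain returns to i
            visits[state] += 1
            state = np.random.choice(3, p=P[state])

    m = visits / excursions               # estimate of the row vector m_i
    print("m_i  :", m)
    print("m_i P:", m @ P)                # should agree with m_i up to Monte Carlo noise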

Understanding the Double Dual

Resolved

Dual_space#Injection_into_the_double-dual mentions that for an infinite dimensional vector space V, V** and V are not isomorphic, but I'm having trouble understanding why. Using the notation in the article, if I have the vector space V = K^∞ (the space of vectors (a1,a2,...) with each ai in K and all but a finite number equal to zero), then V* = K^N (the space of all vectors (a1,a2,...) with each ai in K). But then what does the double dual V** look like? What is an example of an element of V** that is not in the image of the natural injection Ψ:V → V**?

Consider the subset of V* that consists of all the projection functions (which just return one of the components of their argument), together with the function that sums all of the components. This set is linearly independent (because no finite set of projection functions add up to the sum function). Extend it to a basis for V* (you may need the axiom of choice for this). Now consider the element of V** that expresses its argument in that basis and gives you the coefficient of the sum function. –Henning Makholm (talk) 17:49, 11 October 2010 (UTC)[reply]

In particular I'm trying to answer the following question: if I have some subspace W of V* from above, does the set W0∩Ψ(V) uniquely determine W? Here W0 ⊂ V** is denoting the set of annihilators of W and Ψ(V) is the elements of V** that can be represented as a vector with only finitely many non-zero terms. If anybody could help me understand this better I would appreciate it. Rckrone (talk) 17:26, 11 October 2010 (UTC)[reply]

I don't think you can distinguish between the entire V* and the subspace spanned by the projection functions (that is, elements of V* with finitely many nonzero coefficients). You get the empty set in both cases, because nothing in Ψ(V) eliminates every projection function. –Henning Makholm (talk) 18:07, 11 October 2010 (UTC)[reply]
Yeah, you're right. Thanks a lot for the explanation. It's a lot clearer now. Rckrone (talk) 18:14, 11 October 2010 (UTC)[reply]

As a concrete example, think of the set of all trigonometric polynomials with period 2π with complex coefficients, with inner product given by

    ⟨f, g⟩ = (1/2π) ∫_0^{2π} f(θ) conj(g(θ)) dθ.

Each trigonometric polynomial has only finitely many terms. But the double dual of this space is a space of Fourier series, and generally such a series has infinitely many terms. Michael Hardy (talk) 23:42, 11 October 2010 (UTC)[reply]

Sorry, I should have specified in my question, but I was talking about the algebraic dual, rather than the continuous dual. I think in the algebraic case, the double dual can't contain any infinite series of (the images of) the basis elements of the original space since there will be some functional in the dual space for which the "inner product" diverges. There's probably a more precise way to say that. Rckrone (talk) 02:10, 12 October 2010 (UTC)[reply]


But note that there is a further issue here. In the notation of the linked article, the canonical inclusion Ψ of the F-vector space V in its bi-dual V** is linear and injective, and it is surjective if and only if the dimension of V is finite, and Henning Makholm has shown a proof. Saying that V and V** are not isomorphic is a stronger statement (between isomorphic vector spaces of infinite dimension there are of course linear injections that are not surjective).
There was an interesting discussion here at RD/M on September 25, 2009 on the case of countable dimension. For future reference (this one!) that time I put here a last example that provides a continuum-cardinality family of subsets of N whose characteristic functions are linearly independent in K^N (whatever the field K is). (I did not happen to think about uncountable dimensions.) --pma 20:03, 13 October 2010 (UTC)[reply]
Assuming the generalized continuum hypothesis, your example generalizes nicely to uncountable dimensions. Let V be any vector space of infinite dimension. I claim that the algebraic dual V* has strictly larger dimension than V. Proof. Choose a basis B for V. Any subset of B naturally corresponds to an element of V*. A chain in the power set of B corresponds to a linearly independent set in V*. The following lemma guarantees that there are very long chains.
Lemma. Assume GCH, and let S be any infinite set. Then the power set of S contains a chain of cardinality 2^|S|.
  • Without loss of generality, assume that is an initial ordinal.
  • Let with the lexicographic ordering -- that is, iff where is the least element on which the two sets differ. (Intuition for : A is the set of binary fractions between 0 and 1).
  • Let . (Intuition: dyadic rationals are countable yet dense).
  • Because , we have . (GCH used for ).
  • Define an order-embedding by . (Intuition: Dedekind cuts). It is trivial that this mapping is monotonic, and fairly easy to see that it is injective.
Thus, the image of is a chain in of cardinality the same as , namely . –Henning Makholm (talk) 21:55, 16 October 2010 (UTC)[reply]
Here's a proof that works without GCH. Let the scalar field be and the dimension of be . I use the notation for the set of all functions as a vector space over , that is, .
  1. When : Then . (There are scalar multiples of base vectors and the number of finite sums of such vectors is ). On the other hand contains at least elements. Since and have different cardinality, they cannot be isomorphic. In particular, must have dimension .
  2. For arbitrary : Let be the smallest subfield of , that is, if has characteristic 0 and for characteristic . In either case, is small enough to match the condition in case 1. Let be a basis for . By case 1, . But each member of is also a member of , and is still linearly independent over . (Namely, suppose that is an -linear relation among vectors . Then the 's are a solution to a homogenous system of linear equations with coefficients in , and this system has only the trivial solution in . Then of the equations must be linearly independent over , which means that their determinant is nonzero whether evaluated in or in . Therefore the 's must all be zero, too). So the dimension of is at least , Q.E.D.
Henning Makholm (talk) 03:11, 19 October 2010 (UTC)[reply]
excellent! pma 08:37, 20 October 2010 (UTC)[reply]

Reverse Fourier

I'm into burning new digital waveform patches into my homebrewed synthesizer... Here's the deal: I know the absolute values for each overtone, but not their relative phases. So for each set of overtones there's a practically infinite number of possible waveforms. Being close to original waveform is not really important (there's plenty of processing downstream, so the starting patch will be mushed anyway). I suspect the only meaningful criteria for setting relative phases will be maximum usage of available bits (bus width less processing headroom). The waveform, say, for 16-bit word size, must touch either +32767 or -32767 rail, and must be centered around precisely 0 (if it's asymmetrical it shouldn't touch both rails). That's quite obvious.

But the second objective is to maximize (or optimize?) the energy carried by the wave. A pure sine wave spanning 1 volt from peak to peak has an RMS content of 0.5 V / 1.414 ≈ 354 mV. But a fixed set of harmonic overtones fitted into the same peak-to-peak range may be 400 or may be 200 millivolts RMS, depending on the set of phases. So my question is: is there a simple empirical algorithm for maximizing the form factor of the synthesized wave for a given peak-to-peak limit?

TIA, East of Borschov 18:56, 11 October 2010 (UTC)[reply]

Are you sure that is what you want? Even though the waveform you synthesize fits within ±32767, there's no guarantee that this will also be the case after downstream processing -- even passive filtering could create peaks higher than your original ones. So if you're too clever choosing phases that will knapsack a lot of energy into your sample space, you're probably just setting yourself up for some clipping distortion further down the line. –Henning Makholm (talk) 22:01, 11 October 2010 (UTC)[reply]
"Downstream" is mostly analog. Digital processing is only for frequency modulation, shaping and mixing noise and delay effects - so a fixed headroom of two bits is more then enough. Perhaps I chose the wrong word: the target is not really maximizing but normalizing different voices to roughly the same (not necessarily high) energy. East of Borschov 02:49, 12 October 2010 (UTC)[reply]

Map from free abelian group onto the integers

I'm currently reading through a proof, and at one point it asserts that given any element in the free abelian group of rank n, it is possible to find a homomorphism onto the integers, whose kernel contains this element. I have no problem if the chosen element has at least one zero coefficient, for example (1,1,1,0) in Z4 (in that case, the map could send the first three basis elements to zero, and the fourth to 1), however I cannot see how to show the existence of such a map if the chosen element has no zero coefficients, for example (1,1,1,1) in Z4.

It just seems like by permitting such an element to be in the kernel, there will be (non-inverse) elements in the free abelian group which sum to something in this kernel, whose images cannot sum to zero in the integers, because it would cause us to fail to preserve the structure, under the map. Is there something that I'm missing? Thanks, Icthyos (talk) 20:33, 11 October 2010 (UTC)[reply]

The map that sends (a,b,c,d) to a−b is a homomorphism onto the integers with (1,1,1,1) in the kernel. Algebraist 20:36, 11 October 2010 (UTC)[reply]
Ah, of course - thanks. Is there a general way to spot such a map? I tried to extend what you did - say we are in Zn and we want the kernel to contain (a1,a2,...,an). Then the map sending (x1,x2,...,xn) to a2·x1 − a1·x2 has such a kernel, but obviously isn't onto unless a1 and a2 are co-prime. For instance, I can't see how to construct a map from Z4 whose kernel contains (2,4,8,16), since none of the coefficients are pairwise co-prime. Icthyos (talk) 21:07, 11 October 2010 (UTC)[reply]
Just cancel the common factors: 2a−b. Algebraist 21:12, 11 October 2010 (UTC)[reply]
Right! Thanks for the help, it's something I'd never thought about before. Brain couldn't make sense of it! Icthyos (talk) 22:20, 11 October 2010 (UTC)[reply]
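To make the general recipe explicit: if the element has a zero coordinate, send that basis vector to 1 and the rest to 0; otherwise take two of its coordinates, divide by their gcd, and use them (with a sign flip) as the coefficients of the map. A small Python sketch of that construction (the function name is mine):

    from math import gcd

    def hom_killing(v):
        # Return coefficients c so that (x1,...,xn) -> sum(c[k]*x[k]) is a
        # homomorphism Z^n -> Z that is onto and has v in its kernel.
        n = len(v)
        if 0 in v:                          # a zero coordinate: the dual basis vector works
            c = [0] * n
            c[v.index(0)] = 1
            return c
        a, b = v[0], v[1]                   # otherwise use the first two coordinates
        g = gcd(abs(a), abs(b))
        return [b // g, -(a // g)] + [0] * (n - 2)

    v = (2, 4, 8, 16)
    c = hom_killing(v)
    print(c)                                             # [2, -1, 0, 0]
    print(sum(ci * vi for ci, vi in zip(c, v)))          # 0, so v is in the kernel
    print(gcd(abs(c[0]), abs(c[1])))                     # 1, so the map is onto Z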


October 12

Rings with infinite height

Is there an example of a ring (commutative, with unit) in which every prime ideal has infinite height? The closest example I came up with is

but I'm not sure it is right.--151.71.87.31 (talk) 11:33, 12 October 2010 (UTC)[reply]

Take a look at this paper. It looks at polynomial extension rings of commutative unitary rings, and proves that prime ideals have finite height if and only if they are finitely generated. So to find your example, you need to find one where none of the prime ideals are finitely generated. Fly by Night (talk) 12:21, 12 October 2010 (UTC)[reply]

A simple exponentiation math problem

I'm not sure how to solve this: 2^n + 3^n = 4. Solve for n. I'm pretty sure it's nothing I had to learn in high school or college, but I could be wrong. I'm trying to simplify some mortgage amortizations and I'm not sure the best way to solve them. (I could probably use Newton's method, but I'm trying to look for an exact or direct approach.) Thanks in advance! ~a (usertalkcontribs) 19:45, 12 October 2010 (UTC)[reply]

n=0.7604913577476414 by Newton's method. I don't think there is an elementary solution. 72.89.116.142 (talk) 20:02, 12 October 2010 (UTC)[reply]
Awww, that's too bad. Thanks for your time! ~a (usertalkcontribs) 20:37, 12 October 2010 (UTC)[reply]

Here's a link: Newton's method. Michael Hardy (talk) 20:21, 12 October 2010 (UTC)[reply]

Yeah I know about Newton's method (I mentioned it above), but I was hoping for (what anon calls) an "elementary solution". Regardless, thanks for your help! ~a (usertalkcontribs) 20:37, 12 October 2010 (UTC)[reply]
I just tried Newton's method and the iterations seem to be periodic, i.e. they don't settle down to a limit and just jump back and forth between two values. Which initial value did you use 72.89.116.142? Fly by Night (talk) 21:39, 12 October 2010 (UTC)[reply]
I started with n=0 and got the same answer. "for(int i=0;i<10;i++)n-=f(n)/fp(n);" and f(n) is "2^n+3^n-4" and fp(n) is "2^n*ln(2)+3^n*ln(3)". ~a (usertalkcontribs) 22:03, 12 October 2010 (UTC)[reply]
I've just run it again, with x0 = 0 and the answer oscillates between 0.7604913576 and 0.7604913579. I ran the iteration 500 times. Asking Maple to solve the equation for me gives x = 0.7604913576. Doing some algebra I get:

    x_{n+1} = x_n − (2^{x_n} + 3^{x_n} − 4) / (2^{x_n}·ln 2 + 3^{x_n}·ln 3)
Putting xn = 0.7604913576 into this gives xn+1 = 0.7604913579. Maybe it's a rounding problem in Maple. Could you do that last substitution at your end and let me know what the output is, please? Fly by Night (talk) 22:49, 12 October 2010 (UTC)[reply]
It's obviously a rounding error. -- Meni Rosenfeld (talk) 08:32, 13 October 2010 (UTC)[reply]
Thanks for that Meni... Fly by Night (talk) 10:47, 13 October 2010 (UTC)[reply]

Using the J (programming language), expand the function as a truncated (i.13) taylor series (t.) and use the polynomial equation solver (p.).

  {:{:>p.((2&^)+(3&^)-4:)t.i.13 
0.76049135775

Bo Jacoby (talk) 23:35, 12 October 2010 (UTC).[reply]

Ah, Taylor series? So Σ_{k≥0} ((ln 2)^k + (ln 3)^k)·n^k/k! − 4 = 0? That's kind of hard to find an exact solution for n too, right? ~a (usertalkcontribs) 13:53, 13 October 2010 (UTC)[reply]
Right, but there are some special-purpose numerical algorithms for polynomials, and apparently J only has a numerical polynomial solver, not a general root-finder. Anyway, it should be k! in the denominator. -- Meni Rosenfeld (talk) 14:44, 13 October 2010 (UTC)[reply]
Truncating the taylor expansion to 13 terms gives a polynomial of degree 12 having 12 roots, and J computes them all: 4.44295j5.75916 4.44295j_5.75916 _5.91012 _5.18925j2.80887 _5.18925j_2.80887 _3.22387j4.90441 _3.22387j_4.90441 _0.567437j5.75809 _0.567437j_5.75809 1.63828j5.44476 1.63828j_5.44476 0.760491. The untruncated series has an infinite number of roots. No programming language has a root-finder computing that infinitely long row of complex numbers. Bo Jacoby (talk) 18:14, 13 October 2010 (UTC).[reply]
Oops! Thanks, fixed. ~a (usertalkcontribs) 15:55, 13 October 2010 (UTC)[reply]
I think the problem is that 2x + 3x = 4 does not have an exact solution in terms of elementary functions. The best you can hope for is a numerical solution.Fly by Night (talk) 15:02, 13 October 2010 (UTC)[reply]
For this to make sense, you'd have to exclude constant functions from being elementary. The solution is trivially elementary according to the definition in the linked article.—Emil J. 15:10, 13 October 2010 (UTC)[reply]
You're right. I had an inverse of y(x) = 2x + 3x in mind. Thanks. Fly by Night (talk) 16:40, 13 October 2010 (UTC)[reply]
Ah, ok. Thanks. So, then, solving the generalized problem a^n + b^n = c is impossible? ~a (usertalkcontribs) 15:55, 13 October 2010 (UTC)[reply]
Very much so. Just look at Fermat's Last Theorem. Looks really easy on the face of it. Number theory is full of problems that are easy to state and seemingly impossible to solve. Fly by Night (talk) 16:40, 13 October 2010 (UTC)[reply]
For those without a knowledge of Newton's Method or Taylor series expansions, the "Goal Seek" function in Excel (or any similar spreadsheet) gives a very easy way to solve a large number of such equations, but Excel achieves only five significant figures, even with a thousand iterations. (Excel actually uses the Newton-Raphson method) Dbfirs 15:23, 13 October 2010 (UTC)[reply]
Does Excel really use Newton-Raphson? I find it hard to believe that Excel can symbolically differentiate, and if it evaluates the derivatives numerically then it may as well just use the secant method. -- Meni Rosenfeld (talk) 15:36, 13 October 2010 (UTC)[reply]
Agreed. According to this MS support page Excel Goal Seek uses "a simple linear search". From the description, it could be the secant method or possibly the false position method. Gandalf61 (talk) 16:15, 13 October 2010 (UTC)[reply]
Apologies for my error. I jumped to a false conclusion based on inadequate research, and if I'd thought about it (as Meni did), I'd have realised that it was unlikely! Sorry! Dbfirs 01:53, 14 October 2010 (UTC)[reply]
Wolfram Alpha can also solve this, to arbitrary precision. -- Meni Rosenfeld (talk) 15:42, 13 October 2010 (UTC)[reply]
Oh wow, that's cool. I'll have to keep that URL. ~a (usertalkcontribs) 15:55, 13 October 2010 (UTC)[reply]
There is no need for symbolic differentiation with Newton's method. Numerical differentiation works just fine and AFAIK it's usually done that way. 72.229.127.238 (talk) 05:32, 14 October 2010 (UTC)[reply]
As I said, I am fairly sure that the secant method is superior to Newton's method with numerical derivatives. Personally I have used Newton's method many times with symbolic derivatives, never with numerical. -- Meni Rosenfeld (talk) 07:06, 14 October 2010 (UTC)[reply]

If the n's don't have to be the same, on the theory that "you can't step into the same river twice", an exact solution is 2^0 + 3^1 = 4. also the average of those exponents (0 and 1) is 0.5, or 0.75 if you'd rather have something than nothing, which is pretty close to the above answers by more complicated or less whole/precise methods. 92.230.70.59 (talk) 23:25, 14 October 2010 (UTC)[reply]

Someone please look the numerical solution up in Plouffe's Inverter in case it has an explicit form we're missing. The website doesn't load for me right now. – b_jonas 22:40, 16 October 2010 (UTC)[reply]

The equation can also be solved without using differentiation (such as Newton's method and Taylor's formula do). Rewrite the equation

    3^n = 4 − 2^n

Take the logarithm

    n·log 3 = log(4 − 2^n)

Divide by log 3

    n = log(4 − 2^n) / log 3

Now the unknown is nicely isolated on the left hand side, but alas it also occurs on the right hand side. So try the iteration

    n_{k+1} = log(4 − 2^{n_k}) / log 3

for k = 0,1,2,3,... The J expression is simply

   3&^.@(4-2&^)^:_]0
0.76049135775

Bo Jacoby (talk) 16:52, 17 October 2010 (UTC).[reply]
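For completeness, both approaches in this thread are only a few lines of Python; in double precision neither one oscillates (a sketch using only the math module):

    from math import log

    f  = lambda x: 2**x + 3**x - 4
    fp = lambda x: 2**x * log(2) + 3**x * log(3)

    # Newton's method, starting from 0
    x = 0.0
    for _ in range(20):
        x -= f(x) / fp(x)
    print("Newton     :", x)            # ~0.76049135775

    # Bo Jacoby's fixed-point iteration  x <- log(4 - 2**x) / log(3)
    y = 0.0
    for _ in range(60):
        y = log(4 - 2**y) / log(3)
    print("fixed point:", y)            # same value, just slower convergence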

October 13

a paradox?

There is a very exciting new casino - it is exciting because you can come out ahead using the martingale system! For you see, there is but one counter, and it makes what seems to be "almost even" bets on $1 that are all slightly in the house's favor, but the actual function used crosses the zero every 15 bets!


So, if the winner goes:
House, Player, Player, House, House,
the graph moves (1, -1) to (2, 0), (2, 1), (2, -1), (2, -2)
The graph fluctuates up and down, but one thing is for sure: it crosses the zero at most every 15 bets.


There is a long line behind this table, all composed of misguided gamblers who believe that a non-doubling version of the Martingale system really can work in this one particular case -- if they just come with $16, they can be guaranteed to weather the longest possible losing streak, and make at least $1 (less a slight house advantage).

Of course, once they are starting to win the house kicks them out, so they each leave with just short of $1.

So now we have a line, snaking around. Each player comes with $16 and leaves when they make $1, and the next player steps in immediately.

Therefore, it would seem if 10 players have left already, the house is down nearly $10, and so on for 100 players or 1000 or what have you.

But this is my problem: The house is just a counter that makes a series of linear bets, and each and every one has a slight advantage for the house!!!

In fact, I can make the paradox even stronger: what if the graph that I mentioned only ever rises to $1, but routinely dips much deeper into the negative?

how do we reconcile the house advantage on each bet, with the fact that everyone can play until they are a winner and let the next person start? 84.153.240.143 (talk) 17:30, 13 October 2010 (UTC)[reply]

I don't believe that the system you described has a house advantage. For example, if the first seven bets are all won by the house, then the next bet certainly is disadvantageous to the house, because it is certain to be won by the player. So your argument that the system as a whole is advantageous to the house because all of the individual bets are doesn't work (because the individual bets are not all advantageous to the house), and in fact what you've showed in your reasoning is that the system as a whole has a house disadvantage. —Bkell (talk) 23:54, 13 October 2010 (UTC)[reply]

Laplace transform of a matrix?

My goal is to find the inverse of a filter. The problem is, the filter is given by a matrix of discrete values, so actually what I need is to find a matrix which is its inverse with respect to convolution. The obvious way to do this with a function would be to transform it to the Laplace domain, raise it to the power of -1, and transform it back. I never heard about transforming a matrix filled with numerical values to the Laplace domain (I doubt the solution would be just multiplying every element by 1/s). Does anyone know a way to do it? Violating causality is not a problem. A solution in the form of pointing me to a Matlab function would be sufficient, but I couldn't find any. Using the "deconv" function, with a Dirac impulse as the parameter, would do it in the one-dimensional case. However, my filter is two-dimensional, and, while there exists a conv2 for 2d convolution, there is no deconv2. --131.188.3.21 (talk) 18:30, 13 October 2010 (UTC)[reply]

For discrete systems (such as your matrix), it is more common to operate in the Fourier domain or the Z transform domain, rather than the Laplace domain. The Z-transform relates to the Laplace Transform via the Bilinear transform. You can then calculate the Z transform trivially (multiply by a Vandermonde matrix with α=zj+k). You can also approximate the Laplace transform using a modified Vandermonde matrix; this corresponds to a discrete fourier transform (which is related to the Laplace transform).
If you only need a Laplace transform so you can calculate an inverse, consider a more standard alternative. A typical work-flow to invert a 2D filter would be to convert to Fourier domain, numerically invert, and then perform an inverse Fourier transform. The difficulty lies in the numerical inversion, which may be ill-conditioned. To help condition this problem, you might perform spectral factorization on the frequency representation of the matrix, and then attempt to invert. Of course, an even better solution is to switch to a 1-dimensional representation: as pointed out in my favorite text on practical numerical inversion, Geophysical Estimation by Example, switch to a 1-D coordinate system before inverting: "Many (2D inversion) problems appear to be multidimensional, but actually they are not." Nimur (talk) 18:45, 13 October 2010 (UTC)[reply]
Thanks for the idea, it seems that all the filters are linearly separable. The only problem with using deconvolution is that it is actually like a polynomial division: so if I have f*x = d, deconvoluting by x = deconv(d,f) will just give me zero, with a remainder of d. (d is the dirac impulse).
And yes, the FFT of the matrix turned out to be singular :( --131.188.3.21
I think if I don't find any better idea, I'll just have to resort to constructing a big system of equations out of the whole convolution, and apply a least square estimator. Not the most elegant approach. (talk) 19:31, 13 October 2010 (UTC)[reply]
What you are describing as "not elegant" is called numerical inversion. If you have actual numbers in your matrix, and you want to have actual numbers in your inverse, it's time to give up on "elegance" - you're doing the best that can be done with actual values. As an aside, my red-link on spectral factorization is covered in the "spectral theorem" article. It might help you create an invertible matrix out of your FFT result. Nimur (talk) 20:30, 13 October 2010 (UTC)[reply]
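For the record, here is what the Fourier-domain route looks like when the ill-conditioning is handled with a simple Tikhonov-style regularisation term (this is one standard workaround, not the spectral-factorisation or 1-D reformulation Nimur describes, and eps is a knob you have to tune to your noise level). A Python/NumPy sketch:

    import numpy as np

    def approx_inverse_filter(h, shape, eps=1e-3):
        # Approximate deconvolution kernel: G = conj(H) / (|H|^2 + eps),
        # so that (circular) convolution of h with g is close to a delta.
        H = np.fft.fft2(h, s=shape)          # zero-padded 2-D transfer function
        G = np.conj(H) / (np.abs(H) ** 2 + eps)
        return np.real(np.fft.ifft2(G))

    h = np.array([[0.0, 0.1, 0.0],
                  [0.1, 0.6, 0.1],
                  [0.0, 0.1, 0.0]])          # an example blur kernel
    g = approx_inverse_filter(h, (64, 64))

    # Check: the composite response of h followed by g should be close to a delta.
    H = np.fft.fft2(h, s=(64, 64))
    composite = np.real(np.fft.ifft2(H * np.fft.fft2(g)))
    print(composite[0, 0])                   # close to 1 when eps is small relative to |H|^2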

October 14

direction relative velocity

In an inertial coordinate system, a particle's position as a function of time is given by r=i(1 m/s^3)t^3 -j(2m/s^2)t^2 & a student's position by r=-j(2m/s^2)t^2 +k(3m/s)t, for the interval 0 to 2s

I'm asked to compute the direction of a particle's average velocity relative to the student for this 2 second interval (I know I need either two angles, or a plane and one angle)

I've calculated that the relative average velocity of the particle to the student is -4i m/s +3k m/s

How do I find the direction from that?24.63.107.0 (talk) 05:43, 14 October 2010 (UTC)[reply]

The direction of the vector −4i m/s +3k m/s is the unit vector −0.8i+0.6k. Bo Jacoby (talk) 07:11, 14 October 2010 (UTC).[reply]
Or if you need the direction as an angle, you can convert it to polar coordinates ( arctan(3/4) ). --131.188.3.21 (talk) 07:57, 14 October 2010 (UTC)[reply]
I think you have your signs reversed - to calculate the velocity of the particle relative to the student, you should subtract the student's velocity from the particle's velocity, not vice versa. Gandalf61 (talk) 08:59, 14 October 2010 (UTC)[reply]
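For the record, plugging the endpoints of the interval into the given positions: the particle moves from the origin to 8i − 8j (metres), so its average velocity is 4i − 4j m/s; the student moves from the origin to −8j + 6k, so his average velocity is −4j + 3k m/s. The difference (particle minus student) is 4i − 3k m/s, which has magnitude 5 m/s, unit vector 0.8i − 0.6k, and lies in the x-z plane at an angle arctan(3/4) ≈ 36.9° from the +x axis toward −z. (That is the sign correction Gandalf61 points out; the original −4i + 3k is the student relative to the particle.)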

October 15

Division by zero

1×0=0

       Therefore, 1=0÷0

2×0=0

       Therefore, 2=0÷0

Therefore, 1=2=0÷0

Similarly any number is equal to any number Please tell me the mistake in this. Jijo (talk) 05:55, 15 October 2010 (UTC)[reply]

Your error is assuming that division by zero is a legitimate operation. —Anonymous DissidentTalk 06:03, 15 October 2010 (UTC)[reply]
Or, in greater generality, assuming that division by zero follows the same rules as division by other numbers. -- Meni Rosenfeld (talk) 06:14, 15 October 2010 (UTC)[reply]
You can also visit the Mathematical fallacy article for similar problems.--Email4mobile (talk) 06:15, 15 October 2010 (UTC)[reply]
I am not dividing it to get a result. I have simply used a division symbol and left it alone. Jijo (talk) 19:02, 15 October 2010 (UTC) —Preceding unsigned comment added by Jijo925 (talkcontribs) 19:01, 15 October 2010 (UTC)[reply]
Asserting that 1×0=0 implies 1=0/0 is in some sense a correct statement, because 0/0 is an indeterminate form; but the statement itself as given is technically nonsense, as you can't perform division by 0, so it is nonsense to write it that way. What you are alluding to is the fact that 0/0 is an indeterminate form. A better way to write this would be lim_{x→0} x/x = 1 and lim_{x→0} 2x/x = 2. A math-wiki (talk) 01:03, 16 October 2010 (UTC)[reply]
I don't think the OP is necessarily alluding to indeterminate forms. I think they're following the rules of algebra more or less like a machine that was never told division by 0 doesn't follow those rules. In general, certainly, a*b = c means a = c/b, whenever b is not 0. In fact, "c/b" is merely a convenient symbol meaning "c times the multiplicative inverse of b", where that inverse must exist for "c/b" to be meaningful. 67.158.43.41 (talk) 06:40, 16 October 2010 (UTC)[reply]
You are dividing by 0. You're not evaluating the expression, but that doesn't matter.--203.97.79.114 (talk) 08:30, 16 October 2010 (UTC)[reply]
Partially correct. He isn't evaluating the zero division on the right, but he *is* evaluating it on the left. Just moving the number and changing the operation is a shortcut - what you're really doing is dividing both sides by the same number. That is, if you start with x×y=z, you pass through (x×y)/y=z/y in order to get to x=z/y. You have to evaluate the (x×y)/y in order to simplify it to x. That simplification works with all numbers except zero, as (x×y)/0 is undefined, for any value of x and y. You can't simplify (x×0)/0 to x anymore than you can go from the general simplification of "w/w is 1" to the specific statement "0/0 is 1". -- 174.24.199.14 (talk) 15:47, 16 October 2010 (UTC)[reply]

Variance

Hi! Say where ~ Poisson and and Y ~ Lognormal. The 's are all iid (and independent of N) and K is some positive number. I am trying to find the variance of L in terms of the parameters...

Using the law of total variance on L and N, I get

Then using an answer to an earlier question I asked here (thanks for that), I get

My question is: is this correct? I don't think it is because a simulation exercise gives me a much smaller variance... Thanks for any help. --Mudupie (talk) 09:26, 15 October 2010 (UTC)[reply]

This looks correct to me. I've also done my own simulation and the values seem to match. What parameters did you use, and what result did you get? -- Meni Rosenfeld (talk) 11:18, 15 October 2010 (UTC)[reply]
Thank you for the confirmation - I guess there must be something wrong with my simulation then. I'm using the following parameters:
The analytic std deviation then comes to 117,938,227 but my simulated std deviation is only 78,057,536. I used 500,000 trials... --Mudupie (talk) 11:33, 15 October 2010 (UTC)[reply]
I think you may need a bigger sample. Try running 5 batches of 100,000 trials and find the variance of each batch. If the results vary significantly it will support this hypothesis. -- Meni Rosenfeld (talk) 12:17, 15 October 2010 (UTC)[reply]
Thanks. I think you are right. Surprisingly, though, the simulated std deviation does not vary significantly among the batches. I narrowed the problem down to the simulation of lognormal random numbers - the problem occurs when σ is large (more than 4-ish) even when K is zero. I'm using Excel's LOGINV function. I simulated 6,000,000 numbers from the Lognormal(7,8) distribution and the simulated variance was significantly lower than what it should have been (even though the simulated 90th, 91st, ... 99th, 99.1th, 99.2th, ... 99.9th and even 99.99th percentiles were accurate). I'll post a follow up question when I have the data. One more question for now: how does one determine what an appropriate number of simulations is for these types of problems? --Mudupie (talk) 20:56, 16 October 2010 (UTC)[reply]
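A concrete illustration of why the simulated standard deviation comes out low even though all the simulated percentiles look right: the variance of a lognormal with large σ is dominated by a part of the tail so far out that even millions of draws essentially never sample it, while the 99.99th percentile only needs the well-sampled 3.7-sigma region of the underlying normal. A quick Python/NumPy sketch, using σ = 3 rather than 8 so the true value is still representable:

    import numpy as np

    rng = np.random.default_rng(1)
    mu, sigma, n = 0.0, 3.0, 6_000_000

    y = rng.lognormal(mean=mu, sigma=sigma, size=n)

    true_std = np.sqrt((np.exp(sigma**2) - 1.0) * np.exp(2.0*mu + sigma**2))
    print("true std      :", true_std)
    print("sample std    :", y.std())                       # typically well below the true value
    print("sample 99.9th :", np.quantile(y, 0.999))
    print("true 99.9th   :", np.exp(mu + sigma * 3.0902))   # Phi^-1(0.999) ~ 3.0902

As for the number of trials: for a heavy-tailed quantity like this the sample variance converges extremely slowly (its own variability involves the fourth moment of the lognormal, which is astronomically larger still), so increasing the batch size helps far less than one would expect; it is usually better to compute the variance analytically and reserve simulation for quantities the tail does not dominate.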

Vector Fields and the Indices

Let M be a smooth (n–1)-dimensional manifold in Rn. Let X be a smooth vector field over M with only isolated singularities. Let p be such an isolated singularity, i.e. X(p) = 0. How do I compute the index of X at p using connections? Does anyone know a nice formula? Fly by Night (talk) 12:39, 15 October 2010 (UTC)[reply]

Coefficients in lengthy expansions

For an expression such as (b+c+d)(a+c+d)(a+b+d)(a+b+c), is there an efficient way to determine how many times abcd, say, will appear in the expansion? My feeling is that the binomial coefficients could be used, but I can't put my finger on how. Thanks for the help. —Anonymous DissidentTalk 13:01, 15 October 2010 (UTC)[reply]

After running a few of these "cyclic" products through Wolfram|Alpha and entering the sequence of coefficients into the OEIS, it's clear that the coefficient of the term that is degree one in each of the n variables corresponds to the number of derangements of n objects (... 2, 9, 44, 265, ...). I don't really understand this result, though; can anyone explain? —Anonymous DissidentTalk 13:30, 15 October 2010 (UTC)[reply]
Rearrange as
then note that coefficient of abcd is number of ways of permuting abcd such that a occurs in position 2,3 or 4 but not position 1; b occurs in position 1, 3 or 4 but not 2 etc. i.e. coefficient of abcd is counting derangements of abcd, as you said. Gandalf61 (talk) 13:48, 15 October 2010 (UTC)[reply]
That makes good sense. Thanks. Unfortunately, counting derangements is about as time-intensive as just expanding the product, so I suppose a particularly convenient method does not exist. —Anonymous DissidentTalk 00:59, 16 October 2010 (UTC)[reply]
The article on derangements includes closed form formulas for computing the n-th derangement. The most useful is that the number of derangements of n objects is the integer nearest to n!/e. If you need to compute other coefficients, however, it seems more difficult. Eric. 82.139.80.27 (talk) 18:07, 16 October 2010 (UTC)[reply]
Here's a suggestion that may or may not help you: Let s = a + b + c + d, and rewrite your product as

    (s − a)(s − b)(s − c)(s − d) = s^4 − e_1·s^3 + e_2·s^2 − e_3·s + e_4,

where the e_k are the elementary symmetric polynomials. Furthermore s is simply e_1, so depending on what you want to do with the product, you may be able to save work by staying in the ring of symmetric functions as far as possible. –Henning Makholm (talk) 00:12, 17 October 2010 (UTC)[reply]
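A quick check of the derangement claim with sympy, for anyone who wants to see it concretely (the helper names here are mine):

    from functools import reduce
    from operator import mul
    from sympy import symbols, Poly

    def derangements(n):
        # D(n) via the recurrence D(n) = (n-1) * (D(n-1) + D(n-2))
        d = [1, 0]
        for k in range(2, n + 1):
            d.append((k - 1) * (d[-1] + d[-2]))
        return d[n]

    n = 5
    x = symbols('x0:%d' % n)
    s = sum(x)
    product = reduce(mul, (s - xi for xi in x))       # each factor omits one variable
    monomial = reduce(mul, x)
    coeff = Poly(product.expand(), *x).coeff_monomial(monomial)
    print(coeff, derangements(n))                     # both 44 for n = 5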

could you explain cayley graph

Hey, I am an 8th grade student interested in graph theory but I am having trouble understanding your articles. :-( Is there any way you can explain "Cayley graph" to me in an easier way? Thanks a million!!!! —Preceding unsigned comment added by 85.181.50.90 (talk) 21:41, 15 October 2010 (UTC)[reply]

The trouble is that a simplified explanation will lack formal mathematical rigor, and these are complicated and subtle concepts. A Cayley graph is a representation of a discrete group. This is a concept that requires subtle understanding of continuity and discreteness (as well as formal definitions of graphs and groups). These are concepts to which you probably have not been introduced, even if you are an advanced eighth-grader. Don't be discouraged - graph theory is very complicated and takes a while to wrap your mind around. You might start with the Graph theory article. Then, read about formal definitions for a "group"; and finally, discrete group. A Cayley graph is just a graph for a particular discrete group. Nimur (talk) 21:55, 15 October 2010 (UTC)[reply]

I'm not dumb! Lady Ada invented binary language which all computers everywhere speak, so girls can learn these things too. I just need someone to explain to me in a simple way, I am very smart, what is a discrete group? I read your graph theory article and understood it PERFECTLY 100% ALL OF IT. But your discrete group article is too hard. I don't want to give up, I know I can understand if I try!!! Please just explain discrete group to me very easy, I understand graph theory!!!! I'M SMART I JUST NEED SOME HELP —Preceding unsigned comment added by 85.181.50.90 (talk) 22:16, 15 October 2010 (UTC)[reply]


maybe there could be a better name for it. if you could call it anything, what would you call a discrete group if you didn't know what discrete group was? 85.181.50.90 (talk) 22:40, 15 October 2010 (UTC)[reply]

No one was implying that you were dumb, it's not a matter of intelligence, it's a matter of knowledge. I'm currently taking my first upper-division (300 level) mathematics courses myself and I couldn't make sense of the information in that article either as I simply don't know enough about the relevant material. I would recommend learning about topology, group theory, and graph theory a fair amount before returning to this particular object, perhaps then you'd have the necessary information to make sense of the article. Graph Theory is a very interesting subject I had the opportunity to study it some in high school Discrete Math and I very much enjoyed studying them. A math-wiki (talk) 01:12, 16 October 2010 (UTC)[reply]
To the original questioner: I think it's awesome that you're interested in this stuff. Here are my suggestions for you. First, you need to understand just a little bit of group theory. In particular, you need to understand what a group is, and you should study some examples of groups to get a feel for how they work and to build up a collection of examples for yourself to play with. You also need to understand the idea of a generating set of a group. Then a Cayley graph is sort of a "road map" of a group. There's one vertex that represents the identity element of the group, and for each generator of the group there is an edge leading to a vertex representing another element, and then edges leading from those vertices, and so on. If the group you're thinking about is finite (which is a good place to start), then these paths will eventually lead back around in loops and cycles and create a finite graph, with one vertex for every element in the group. If there are two or more ways to get from the identity element to some other element in this graph, then that means there are two or more ways to write that element in terms of the generators of the group. This is only a very brief introduction to this whole thing, so I haven't explained anything in very much detail. If you have other questions, please feel free to ask. How much of the things I have mentioned in this reply did you already know? If you already understand what a group is, and you can give me an example of a finite group that you're comfortable thinking about, then I can give you an explanation of its Cayley graph that I think you'll understand. —Bkell (talk) 05:55, 16 October 2010 (UTC)[reply]
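In case a concrete finite example helps alongside Bkell's description: below is the Cayley graph of the cyclic group Z_12 with generating set {1, 3}, built as an edge list in a few lines of Python. There is one vertex per group element and, for each generator s, an edge from g to g+s (mod 12); with the generator 1 alone you would just get a 12-cycle, and the extra generator 3 adds chords joining every vertex to the one three steps ahead.

    n = 12                      # the group Z_12 under addition mod 12
    generators = [1, 3]

    edges = [(g, (g + s) % n, s) for g in range(n) for s in generators]
    for g, h, s in edges:
        print(f"{g} --{s}--> {h}")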

October 16

1/n in base b

Hello all, I have been working with the behavior of 1/n in base b off and on for some time and recently had a breakthrough in understanding how long the period of 1/n in base b is for n coprime to b. What I found in essence is that for m=1,2,3,... the first m for which b^m − 1 is divisible by n is precisely the period of 1/n in base b. My question is, given that this is closely related to Fermat's little theorem, has anyone seen this conjecture before? Has it been proven? A math-wiki (talk) 01:19, 16 October 2010 (UTC)[reply]

I'm a bit puzzled by your question. Do you mean by "period" the length of the string of digits which repeats when 1/n is represented in base b? --TeaDrinker (talk) 04:46, 16 October 2010 (UTC)[reply]
Midy's theorem might be of interest to you. Also, the phenomenon you mentioned can be explained by using standard long division. The intermediate remainder after k digits have been computed is b^k mod n. I'm sure a quick induction can prove this rigorously. Writing it out for 1/7 in base 10 is a convenient example case. If this value is 0, division terminates--this can only occur if b and n are not relatively prime (a basic number theory result I'm sure you can prove or someone can dig up). If this value is 1 (which must occur at some point when they're relatively prime, which is a consequence of Fermat's little theorem), and only when it is 1, the operation repeats itself. This occurs precisely when b^k = 1, i.e. when b^k - 1 = 0, for k > 0 as small as possible, where equality is taken mod n. 67.158.43.41 (talk) 06:04, 16 October 2010 (UTC)[reply]
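The observation is also easy to test mechanically: the sketch below computes the repeating-block length of 1/n in base b by running the long division until a remainder repeats, and compares it with the least m such that b^m − 1 is divisible by n (the multiplicative order of b mod n). A Python sketch:

    from math import gcd

    def period_by_division(n, b):
        # length of the repeating block of 1/n in base b, for n coprime to b
        seen = {}
        r, pos = 1 % n, 0
        while r not in seen:
            seen[r] = pos
            r = (r * b) % n          # next long-division remainder
            pos += 1
        return pos - seen[r]

    def multiplicative_order(b, n):
        # least m > 0 with b**m == 1 (mod n)
        m, x = 1, b % n
        while x != 1:
            x = (x * b) % n
            m += 1
        return m

    for n in range(2, 200):
        for b in (2, 3, 10, 16):
            if gcd(n, b) == 1:
                assert period_by_division(n, b) == multiplicative_order(b, n)
    print("the two quantities agree for every tested (n, b)")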

Factorise this polynomial....

Can someone please tell me how to factorise x^3-4x+1 modulo 229? I really need to know this for some other problem but I don't want to compute it. I know it's a perfect cube modulo 229 but nothing else. I want its exact factorization modulo 229. Urgent help required. Thanks! —Preceding unsigned comment added by 180.200.136.222 (talk) 03:00, 16 October 2010 (UTC)[reply]

Mathematica gave (x + 58)(x + 200)^2. 174.29.63.159 (talk) 04:56, 16 October 2010 (UTC)[reply]

That can't be. I know for a fact that it's a perfect cube. There must be an error. —Preceding unsigned comment added by 180.200.136.222 (talk) 05:12, 16 October 2010 (UTC)[reply]

I disagree.... Brute force shows 1, 94, and 134 are the only numbers mod 229 which, when cubed, are 1. Any perfect cube (Ax + a)^3 forces a^3 = A^3 = 1, giving only 9 possibilities. None of these satisfies 3a A^2 = 0. Edit: I forgot to mention 229 is prime, so for any B and k>0, B^k = 0 only for B=0 mod 229. This prevents terms higher than linear order from appearing inside the (Ax + a) term above. 67.158.43.41 (talk) 06:27, 16 October 2010 (UTC)[reply]

You don't have to take my word for it. Just verify it yourself. Foil it out and you get x^3 + 458x^2 + 63200x + 2320000, and then do simple division on each of the coefficients and you get what you are supposed to get. 458 gives you remainder zero. 63200 gives you remainder 225 (which is -4 mod 229). And 2320000 is 1 mod 229. Done! 174.29.63.159 (talk) 08:06, 16 October 2010 (UTC)[reply]

Ah right. Sorry for the confusion. You're right. I was a bit confused. There's this theorem which states it has to have at least one factor with exponent GREATER than 1. I thought the theorem stated it had to be a perfect cube. Obviously different things. My mistake. Would you please tell whether there's an online tool that can factor such things modulo a given prime. It'd be really handy. Thanks! —Preceding unsigned comment added by 180.200.136.222 (talk) 09:01, 16 October 2010 (UTC)[reply]

Another question: can you tell me a prime p such that x^3-4x+1 is a product of three distinct linear factors mod. p? I can tell you that the polynomial x^3-4x+1 is either irreducible mod. p, has exactly one root mod. p, or is a product of three distinct linear factors mod. p EXCEPT when p=229. So you just need to check when it has three distinct roots mod. p. Can you tell me a prime p when x^3-4x+1 has three distinct roots mod. p? Thanks! —Preceding unsigned comment added by 180.200.136.222 (talk) 09:20, 16 October 2010 (UTC)[reply]

Please help. I really need your help. —Preceding unsigned comment added by 180.200.136.222 (talk) 09:37, 16 October 2010 (UTC)[reply]

Anyone??? —Preceding unsigned comment added by 180.200.136.222 (talk) 11:47, 16 October 2010 (UTC)[reply]

Using my formula above, the cube of a linear factor Ax+a has 3a A^2 as the coefficient of the quadratic term. This must be 0, mod p. Since the integers mod n are a field for prime n=p, they have no zero divisors. That is, 3a A^2 = 0 (mod p) forces 3, a, or A = 0. a or A = 0 won't work since then the cubic or constant term is zero when it should be 1. So, 3=0. That is, p=3. However, then the linear term 3a^2 A x has 3a^2 A = 0, yet it should be -4 = 2. So, nope, looks like you can never make your cubic equation a cube modulo any prime. 67.158.43.41 (talk) 00:16, 17 October 2010 (UTC)[reply]

Sorry, you misunderstood me. I wanted to know whether you can make x^3-4x+1 a PRODUCT OF THREE DISTINCT LINEAR FACTORS modulo some prime p. I didn't want it to be a cube modulo p. In fact it's impossible as you say. So is x^3-4x+1 a product of THREE DISTINCT LINEAR factors modulo some prime p? Thanks for all you help. It's much appreciated. But please tell me the answer. —Preceding unsigned comment added by 180.200.136.222 (talk) 03:38, 17 October 2010 (UTC)[reply]
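Since nobody has answered the follow-up yet: a prime p for which x^3 − 4x + 1 splits into three distinct linear factors mod p can be found by a direct search; for each prime just collect the roots mod p and see whether there are three of them. A brute-force Python sketch (slow but fine for small p; I have not hard-coded an answer, the loop prints the first such prime it finds):

    def primes(limit):
        sieve = [True] * limit
        sieve[0:2] = [False, False]
        for i in range(2, int(limit ** 0.5) + 1):
            if sieve[i]:
                sieve[i*i::i] = [False] * len(sieve[i*i::i])
        return [p for p, ok in enumerate(sieve) if ok]

    def roots_mod_p(p):
        return [x for x in range(p) if (x**3 - 4*x + 1) % p == 0]

    for p in primes(2000):
        r = roots_mod_p(p)
        if len(r) == 3:                 # three distinct roots means three
            print(p, r)                 # distinct linear factors mod p
            break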

Posterior distribution of lambda

Hi again, this time I have a question which has nothing to do with complex numbers!
Suppose X ~ Poisson(λ) and our prior knowledge about λ is λ ~ Gamma(α, β).
i) Identify the posterior distribution of λ given an observation x.
ii) One possible loss function is . Show when the loss is minimised. What is an expression of this for the above posterior distribution?

By the way, I actually haven't started the introductory statistics course yet, but I know what poisson and gamma distributions are, as well as loss functions, but not how solve problems like this. This question is from a past exam.--MrMahn (talk) 06:58, 16 October 2010 (UTC)[reply]

Gamma(α, β) could mean the density is proportional to

    λ^(α−1) e^(−βλ)

or it could mean the density is proportional to

    λ^(α−1) e^(−λ/β).

This likelihood function would be proportional to

    e^(−λ) λ^x.

Assuming the first alternative above, multiplying the likelihood by the prior density gives

    λ^(x+α−1) e^(−(β+1)λ).

This is a Gamma density. α has been replaced by x + α, and β by β + 1. Michael Hardy (talk) 15:29, 16 October 2010 (UTC)[reply]
Thanks, that makes sense actually. But how do you do part ii of the question?---MrMahn (talk) 04:53, 17 October 2010 (UTC)[reply]
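On part ii: the loss function did not survive in the question as quoted above, but for the two usual choices the answer is standard. If the loss is squared error, L(λ, d) = (d − λ)^2, then the expected posterior loss E[(d − λ)^2 | x] is minimised by taking d equal to the posterior mean: differentiate with respect to d and set the derivative to zero to get d = E[λ | x]. For the Gamma posterior above (shape x + α, rate β + 1) that expression is d = (x + α)/(β + 1). If instead the loss is absolute error, the minimiser is the posterior median. Which of these (if either) was intended depends on the formula that is missing above.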

Series of bounded functions

Q: Let f1, ..., fn: R → R be bounded functions and . Suppose f: R → R is such that, whenever we have x, y with , then for some i. Show that f is bounded.

A: At first glance this looks like fairly straightforward analysis, but I'm having problems with the fact that you don't get to fix the 'i' in the condition, so it seems hard to construct anything concrete which applies to just one i in particular, which I presume is what we need, and particularly when we're working over all the reals, so we can't guarantee in what sense the function f would be unbounded (if working towards a composition). I'm usually fairly comfortable when it comes to analysis so if someone could just point me the right way, I wouldn't need a complete answer, just a suggestion of a method to get started :) thanks very much!

Incidentally, this is a question on a Graph Theory worksheet, so if there's a graph theoretic solution then I'd prefer that! Thankyou. 62.3.246.201 (talk) 15:04, 16 October 2010 (UTC)[reply]

I don't believe the version you've written is true. Take f(x) = x, which is of course unbounded. Set f1 equal to a sawtooth wave with discontinuities at each integer, slope 1 elsewhere, minimum 0, maximum 1. Pick (perhaps you meant for each there exists a satisfying your condition, or some variation on that theme, but it's not what's written). Any requires . If both x and y happen to land on the same linear segment of f1, . Otherwise x and y can only land on adjacent linear segments. The leftmost, say x, must be in for some integer n, and the rightmost, y, must be in . This forces and , so they differ by at least 1/3rd. 67.158.43.41 (talk) 00:44, 17 October 2010 (UTC)[reply]
No, read the question again. Your example does not meet the assumptions. For example, for arbitrary , set . Then , but . –Henning Makholm (talk) 01:03, 17 October 2010 (UTC)[reply]
Ah, of course. Yup, the above is garbage. 67.158.43.41 (talk) 01:55, 17 October 2010 (UTC)[reply]

First, since you have no assumptions of continuity, you can think of all functions as for some opaque set . You should probably also scale your functions to make wlog, just to cut down on the number of symbols you need to keep in mind.

More specifically, is Ramsey's theorem in your syllabus? If so, label vertices with elements of , and ponder which kind of edge relations your assumptions might allow you to consider ... –Henning Makholm (talk) 00:54, 17 October 2010 (UTC)[reply]

Yes, Ramsey theory has come up on one or two of the questions on this sheet already, but I hadn't thought of using it in the infinite dimensional case. I'll have a think and see if I can come up with anything of use then, thankyou everyone already for all your replies! Also I'm not aware of what an opaque set is, or at least I haven't heard the terminology before - is it just a subset of the reals satisfying some sort of measure/density related property, or something along those lines?62.3.246.201 (talk) 01:15, 17 October 2010 (UTC)[reply]
By "opaque" I just mean that you can forget it has any structure -- i.e. you cannot usefully exploit the fact that it's a field, or a total order, or a topological space. It is just a set with some elements. (I don't think it is standard terminology. Real mathematicians might probably just say "an arbitrary set"). –Henning Makholm (talk) 02:12, 17 October 2010 (UTC)[reply]
By the way, you're not really in an "infinite dimensional case". You work with finite graphs corresponding to some subset of the x's. Ramsey will tell you that you cannot keep adding vertices to the graph indefinitely, which allows you to bound the variation of f(x). –Henning Makholm (talk) 02:21, 17 October 2010 (UTC)[reply]
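To make the hint a little more concrete (this is only a guess at the intended set-up, with ε scaled to 1 as suggested above and with M a common bound for all the |fi|): pick points x_1, ..., x_N whose f-values are pairwise at least 1 apart, and colour the edges of the complete graph on them according to which fi witnesses the hypothesis, say
c(\{x_j, x_k\}) = \min \{\, i : |f_i(x_j) - f_i(x_k)| \ge 1 \,\}.
A set of points that is monochromatic in colour i has pairwise fi-differences of at least 1, so it can have at most 2M + 1 elements; Ramsey's theorem then bounds N, and hence the variation of f.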

Improper Integral

How would I integrate e^(-x^2) dx from 0 to infinity? I tried using the chain rule but I get a plus/minus and I'm not sure what to do with that. Thanks, 24.92.78.167 (talk) 15:39, 16 October 2010 (UTC)[reply]

This function has no elementary anti-derivative, but you can calculate the value of the integral using a trick. Consider
I = \int_{-\infty}^{\infty} e^{-x^2} \, dx = \int_{-\infty}^{\infty} e^{-y^2} \, dy .
We can multiply these together to give
I^2 = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-(x^2+y^2)} \, dx \, dy .
We can use polar coordinates to simplify this. Let x = r cos(θ) and y = r sin(θ), so that x^2 + y^2 = r^2 and dx dy = r dr dθ. Thus
I^2 = \int_{0}^{2\pi} \int_{0}^{\infty} e^{-r^2} \, r \, dr \, d\theta = 2\pi \left[ -\tfrac{1}{2} e^{-r^2} \right]_{0}^{\infty} = \pi , \qquad \text{so} \qquad I = \sqrt{\pi} .
Notice that your function is an even function, and so we have
\int_{0}^{\infty} e^{-x^2} \, dx = \tfrac{1}{2} \int_{-\infty}^{\infty} e^{-x^2} \, dx = \frac{\sqrt{\pi}}{2} .
The Erf function has been defined because of this problem. Fly by Night (talk) 15:52, 16 October 2010 (UTC)[reply]
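For reference, the erf connection mentioned above is just the standard definition
\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} \, dt ,
so that \int_{0}^{\infty} e^{-x^2} \, dx = \frac{\sqrt{\pi}}{2} \lim_{x \to \infty} \operatorname{erf}(x) = \frac{\sqrt{\pi}}{2}.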
What formal proofs or definitions are needed to state that
\int_{-\infty}^{\infty} e^{-x^2} \, dx \int_{-\infty}^{\infty} e^{-y^2} \, dy = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-(x^2+y^2)} \, dx \, dy ?
I understand that
\int_{-\infty}^{\infty} e^{-x^2} \, dx
is (assuming it converges) just a number, so
\int_{-\infty}^{\infty} e^{-x^2} \, dx \int_{-\infty}^{\infty} e^{-y^2} \, dy = \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} e^{-x^2} \, dx \right) e^{-y^2} \, dy ,
but how do I get from there to
\int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} e^{-x^2} e^{-y^2} \, dx \right) dy ?
I assume that it is because exp(-y^2) is independent of x, but I worry about playing fast and loose with notation without understanding the justification. -- 124.157.218.5 (talk) 02:44, 17 October 2010 (UTC)[reply]
Yes (and it is good to worry about such things in general). First you consider the entire x integral to be "just a number" in order to move it inside the y integral. Then you consider just the integrand of the y integral. For any particular y, exp(-y^2) is "just a number", so you can move it inside the x integral without changing the value of the y integrand for that y. The only formal prerequisite you need for that is that integration is linear, which is a basic requirement of integration.
The polar coordinate transformation that follows is justified by the theory of multiple integrals, which is a bit more complicated than this. –Henning Makholm (talk) 03:14, 17 October 2010 (UTC)[reply]
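For what it's worth, spelled out symbolically (with all limits of -∞ to ∞ omitted), the two moves just described are
\left( \int e^{-x^2} \, dx \right) \left( \int e^{-y^2} \, dy \right) = \int \left( \int e^{-x^2} \, dx \right) e^{-y^2} \, dy = \int \left( \int e^{-x^2} e^{-y^2} \, dx \right) dy ,
where the first equality treats the whole x integral as a constant multiplying the y integrand, and the second treats exp(-y^2) as a constant for each fixed y; both are instances of the linearity rule ∫ c·g(t) dt = c ∫ g(t) dt.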
Thank you. I see that my problem was really one of going from iterated integrals to multiple integrals, that is,
\int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} e^{-x^2} e^{-y^2} \, dx \right) dy = \iint_{\mathbb{R}^2} e^{-(x^2+y^2)} \, dx \, dy ,
and is justified by Fubini's theorem. -- 124.157.218.5 (talk) 04:55, 17 October 2010 (UTC)[reply]

We have an article about this very integral that gives at least two methods of finding it: Gaussian integral. Michael Hardy (talk) 03:11, 17 October 2010 (UTC)[reply]
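As a quick numerical sanity check of the value discussed above, here is a minimal sketch assuming SciPy is available (the choice of scipy.integrate.quad is incidental, not something used in the thread):

# Numerically confirm that the integral of exp(-x^2) over [0, inf) is sqrt(pi)/2.
import math
from scipy.integrate import quad

value, abserr = quad(lambda x: math.exp(-x * x), 0.0, math.inf)
print(value)                    # roughly 0.8862269254527579
print(math.sqrt(math.pi) / 2)   # roughly 0.8862269254527580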

Final coordinate

Suppose I am given several (say 5) x,y coordinates and the ratios of the matching radii from those points, but NOT which ratio applies to which point and NOT an absolute distance. How would one work out the final coordinate where the 5 circles meet, given that such a point exists? (Not homework, but part of me planning to set a puzzle for a treasure hunt.) Either a theoretical or an empirical answer is fine, but I would want to be able to get the final x & y to at least 6 sig figs. -- SGBailey (talk) 15:47, 16 October 2010 (UTC)[reply]

Are you going to allow the participants access to wp:rd/math to help them solve your puzzle? Fly by Night (talk) 16:29, 16 October 2010 (UTC)[reply]
I can't think of an easy way to do this, but the following might help. The set of all points whose distances from two given points are in a given ratio is a circle (the circle of Apollonius), and it is relatively easy to construct: given the two points A and B, draw the line through them and plot on it the two points that divide AB in the given ratio (one between the points, one outside), then draw the circle through these two points with its centre on the line. As it's easy to draw, it's easy to calculate. If the ratio is 1:1 the "circle" is a straight line, the perpendicular bisector of the segment between the points. So given three points and the ratios of the distances to them it should be possible to calculate three such circles and so find the point.
But here you don't know which ratio to use with which point. The only way I can see around that is a search through all possible triplets of ratios, i.e. assign ratios to three of the points in each possible way and calculate the resulting point, then calculate the distance to a fourth point and see if its ratio matches an unused one. This could be very tedious for a large number of points, depending on the data: some datasets might be easier to work with than others. It might only be feasible using a computer.
Another way would be more statistical: pick points at random or on a grid, find the ratios of their distances to the given coordinates, then compare them to the given ratios. Done on a computer, you should be able to quickly get a point close to the target (one where, e.g., the sorted ratios roughly match), which can then be refined to the required accuracy.--JohnBlackburnewordsdeeds 21:51, 16 October 2010 (UTC)[reply]
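A rough sketch of that statistical approach in Python; the sample points, the sorted-ratio mismatch score and the shrinking pattern search are all illustrative choices, not part of the original puzzle:

import numpy as np

# Five made-up known points and a target we pretend not to know.
points = np.array([(0.0, 0.0), (4.0, 1.0), (1.0, 5.0), (6.0, 6.0), (-2.0, 3.0)])
target = np.array([2.0, 3.0])
dists = np.linalg.norm(points - target, axis=1)
ratios = np.sort(dists / dists.min())   # the given ratios, assignment unknown

def mismatch(p):
    # How badly the sorted distance ratios at p disagree with the given ones.
    d = np.linalg.norm(points - p, axis=1)
    if d.min() < 1e-9:                  # sitting on a known point; not a candidate
        return np.inf
    return np.sum((np.sort(d / d.min()) - ratios) ** 2)

# Coarse grid search, then a simple shrinking pattern search around the best cell.
step = 0.5
best = min((np.array([x, y]) for x in np.arange(-10.0, 10.0, step)
            for y in np.arange(-10.0, 10.0, step)), key=mismatch)
for _ in range(60):
    step *= 0.7
    best = min((best + np.array([dx, dy]) for dx in (-step, 0.0, step)
                for dy in (-step, 0.0, step)), key=mismatch)

print(best)   # with luck this recovers `target` to better than 6 significant figures

A real treasure-hunt setter would probably follow this with a proper least-squares refinement once the ratio-to-point assignment has been pinned down, but the sketch shows the idea.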

connect the opposite sides of a square with lines that don't cross

This seems intuitively impossible to me, as after you connect one set of opposite sides, you've blocked off one of the two remaining sides completely. Is the observation true? What is a rigorous proof that you cannot connect opposite sides of a square in two dimensions without the two connecting lines crossing? This is not homework, I'm just curious. (I prefer an algebraic proof to a geometric one, since I never got how geometry proves anything, but if you just have a geometric proof it's ok). Thank you. 93.186.31.238 (talk) 20:48, 16 October 2010 (UTC)[reply]

It's only impossible if you require the lines doing the connecting to be completely straight. If you're assuming that, you haven't said so, and if you don't have that criterion, you can just connect one pair of sides with a line inside the square, and the other pair with a curving line (or a set of straight lines abruptly changing direction as needed) going around the outside. Here, let me draw you some bad ASCII art:
     ____B____
 ___/____     \
|        |     \
|        |      \
|---A----|       \
|        |       /
|________|      /
    \__________/
A connects the vertical sides, B connects the horizontal sides, and they don't cross.--81.153.109.200 (talk) 21:04, 16 October 2010 (UTC)[reply]
(EC) I assume the "lines" can be curvy things, but must at all times stay inside the square? (If not, you can do it like this.) If they need to stay inside the square, think of the two lines as being the graphs of two continuous functions (say f and g) defined on the interval [0,1] with values in [0,1] so that f(0)=0 and f(1)=1 (this is the function that connects the lower-left to upper-right) and g(0)=1 and g(1)=0 (this is the other one). Now consider g(x)-f(x). This difference is 1 at x=0, and -1 at x=1. And right about now you should start feeling like the intermediate value theorem. Staecker (talk) 21:05, 16 October 2010 (UTC)[reply]
Your diagram is much prettier than mine! :D --81.153.109.200 (talk) 21:07, 16 October 2010 (UTC)[reply]
And now I see you asked about opposite sides rather than corners. You can do a similar argument in that case though. Staecker (talk) 21:08, 16 October 2010 (UTC)[reply]
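For what it's worth, one way the "similar argument" could go for opposite sides, keeping the same restriction that each connecting line is the graph of a continuous function: if the left-right line is the graph of y = f(x) and the bottom-top line is the graph of x = g(y), with f, g : [0,1] → [0,1] continuous, set
h(t) = g(f(t)) - t , \qquad h(0) = g(f(0)) \ge 0 , \qquad h(1) = g(f(1)) - 1 \le 0 ,
so the intermediate value theorem gives some t_0 with g(f(t_0)) = t_0, and the point (t_0, f(t_0)) = (g(f(t_0)), f(t_0)) lies on both lines.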
That's a very restricted class of curves. For example, your curve from the left side to the right side moves only from left to right (and maybe up and down too) without ever wiggling back left. Can your approach be adapted to more general curves, or do we need the Jordan curve theorem for that case? Algebraist 21:16, 16 October 2010 (UTC)[reply]

OH DEAR

I MEANT ANY CURVY/SQUIGGLY "LINE" YOU WANT BUT I ALSO MEANT ALL ON THE OUTSIDE!!!!

Where I wrote: "after you connect one set of opposite sides, you've blocked off one of the two remaining sides completely."
I imagined:

     ____1____
 ___/____     \
|        |     \
|        |      \
|        |       \
|        |       /
|________|      /
    \__________/
Now what?

(In this case the right side is the one you "blocked off".)

Rereading my question this way, anyone have any proofs? Thank you and sorry about the confusion!!! 93.186.31.236 (talk) 22:57, 16 October 2010 (UTC)[reply]

Here's a proof using graph theory. Consider the graph in the picture. If there were a way to connect vertices A and C, and vertices B and D, on the outside of the square without crossing, that would produce a plane drawing of K5; but K5 isn't planar, so that's impossible. Edit conflict apparently, but I got no error message when I posted. Rckrone (talk) 01:53, 17 October 2010 (UTC)[reply]
That really just begs the question why the complete 5-graph is not planar. –Henning Makholm (talk) 02:02, 17 October 2010 (UTC)[reply]
Sure, but that's a well known result and it's not hard to track down a proof. For example the first hit on google [1] has one which uses the Euler characteristic of the plane. If we're not allowed to refer to known theorems, then any non-trivial math result is going to take years of explaining. Rckrone (talk) 02:29, 17 October 2010 (UTC)[reply]
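For reference, the Euler characteristic argument being alluded to runs roughly like this (taking for granted that a plane drawing has well-defined faces, which is exactly the step questioned in the reply below): for a connected simple plane graph,
V - E + F = 2 , \qquad 3F \le 2E \;\Rightarrow\; E \le 3V - 6 , \qquad \text{but for } K_5 :\; E = \binom{5}{2} = 10 > 3 \cdot 5 - 6 = 9 .
For K_{3,3} every face has length at least 4, so the same idea gives E ≤ 2V - 4, and 9 > 2·6 - 4 = 8, which is the non-planarity fact mentioned further down.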
We are indeed not allowed to refer to known theorems when the proofs of those known theorems assume what we're trying to prove (or some very similar variant of it).
Just as an example, your Google hit simply asserts, early on: "If G is a planar graph, then any plane drawing of G divides the plane into regions, called faces. One of these faces is unbounded, and is called the infinite face." And it cannot even start talking about Euler characteristics without asserting this. The claim is intuitively obvious, but it is not more obvious than the property the OP wanted proved in the first place, and it represents exactly the kind of geometric intuition that he refers to as "never got how geometry proves anything".
If we want to prove, rather than just assert, that a plane drawing of a graph divides the plane into faces, we end up doing something extremely similar to the Jordan applications below. So appealing to graph theory here is not a shortcut but a detour. –Henning Makholm (talk) 02:57, 17 October 2010 (UTC)[reply]
I simply disagree that this square problem is somehow more fundamental than the Euler characteristic and the ideas in graph theory relating to planar graphs (although this is essentially subjective). For example, it's not hard to show from the Jordan curve theorem that a plane graph cuts the plane into separate regions if and only if it contains a cycle.
Regardless, graph theory provides a toolbox that's well suited for this sort of problem. If the OP is interested in understanding how that toolbox is developed, then he/she has some direction for where to go from here, which would be to find a good intro treatment of the subject. Rckrone (talk) 03:34, 17 October 2010 (UTC)[reply]
In fact Jordan_curve_theorem#History_and_further_proofs even mentions a proof of the theorem based on the fact that K3,3 is not planar. That said, I don't myself know how one proves K3,3 is not planar without the Jordan curve theorem, but I guess it's either possible or that paper is junk. Rckrone (talk) 03:53, 17 October 2010 (UTC)[reply]
A rigorously written proof is, to be honest, very tedious in this case. As Algebraist mentioned, the Jordan curve theorem is the one you're looking for. Loosely, say you've connected the top and bottom edges of your square with a continuous curve C. Say the curve starts on the top edge at point T, ends on the bottom edge at point B, doesn't intersect itself, and doesn't intersect the square anywhere else. There are two ways to connect T and B traveling only on the square--left from T, follow the left edge, right until you hit B; or right from T, follow the right edge, left until you hit B. Call the curves thus generated C1 and C2. Now combining C1 with C satisfies the requirements of the Jordan Curve Theorem and generates two connected components, E1 and E2, with C1+C as the boundary between them. The right edge (except perhaps a corner) didn't take part in the curve C1+C, so doesn't intersect the boundary C1+C, so is entirely within one of the connected components E1 or E2. If the right edge is within the interior (explained on the linked page), great. Any line starting in the exterior must cross the boundary C1+C to get to the interior, so restricting such a line to not cross any existing lines (the square and C) prevents that line from touching the right edge. If the right edge is within the exterior, the inside of the square + the right edge (except the corners) does not cross C1+C by assumption, so is entirely within E1 or E2--so, apparently, entirely within the exterior. Union the interior, the interior of the square, and C1 except T and B to create a new connected component. Since this component is bounded, and has boundary C2+C, the left edge, which is in this component, is in the interior of C2+C. This can be made a bit more rigorous, particularly the final union step, showing the union preserves connected component-ness and has the given boundary; but, this outline could certainly be turned into a rigorous (algebraic) proof if one had the patience. 67.158.43.41 (talk) 01:49, 17 October 2010 (UTC)[reply]

(Edit conflict .. the following is essentially "what he said" with slightly different details):

Inside or outside is actually more or less the same thing in this case; a solution to the outside problem turns into a solution of the inside one (and vice versa) by a circle inversion centered on your square followed by some additional stretching and squishing to make the distorted square look like a square again.
Your intuition is right; one cannot connect both pairs of sides without intersections.
Proof. Connect the top and bottom sides on the outside in whichever way you please, as long as the line does not cross itself. Then add a straight line right down the middle of the square. Together with parts of the top and bottom, this completes a closed curve which we now apply the Jordan curve theorem to. This is the crucial step! The theorem tells us:
  1. Every point not on the curve is either inside the curve or outside the curve.
  2. These two parts are connected components, which means (among other things) that there is no continuous path that contains both an inside point and an outside point, but does not cross your closed curve.
  3. The closed curve constitutes the boundary of the inside, and also the boundary of the outside. That means (among other things) that any point on the curve will be next to a part of the inside and next to part of the outside.
The point in the middle of the square sits on the closed curve, and the third property now says that somewhere in the square is a point A that is outside the curve, and somewhere in the square is a point B that is inside the curve. These two points cannot both be in the right half of the square, for then the straight line that connects them would violate the second property. For the same reason they cannot both be to the left of the straight line. Assume that A is to the left and B is to the right. (Otherwise just swap "left" and "right" below).
Now assume (for contradiction) that you could connect the left and right side outwith the square without touching the wiggly top-bottom connection. Then the left-right connection could be extended from its left endpoint to A and from its right endpoint to B, again without touching the closed curve. This gives us a continuous path from A to B that does not cross the closed curve. But that is forbidden by the second property! Therefore our assumption must be false. We cannot connect the left to the right. Q.E.D.
You may consider this presentation more "geometric" than "algebraic", but except for the Jordan theorem, all of the tricky details happen inside the square, where they are easily formalized with coordinates and inequalities instead of appeals to geometric intuition. –Henning Makholm (talk) 01:51, 17 October 2010 (UTC)[reply]

October 17

human values

what are the human values that we learn through maths?????????? —Preceding unsigned comment added by Chharish775 (talkcontribs) 04:08, 17 October 2010 (UTC)[reply]