Wikipedia:Reference desk/Mathematics
::By the same reasoning, wouldn't the next term need the same coefficient after finitely many terms? If the first terms of the polynomials had the same coefficient, they cancel out and you're left with a polynomial of degree 1 less. So, would the answer be that the sequence of polynomials must differ only in the constant term after finitely many terms of the sequence, and those constant terms must converge to some real number? [[User:StatisticsMan|StatisticsMan]] ([[User talk:StatisticsMan|talk]]) 20:16, 16 August 2009 (UTC)
:::Yes --[[User:PMajer|pma]] ([[User talk:PMajer|talk]]) 20:21, 16 August 2009 (UTC)
:::Of course; my blunder. [[User:Fredrik|Fredrik Johansson]] 22:09, 16 August 2009 (UTC)
Revision as of 22:09, 16 August 2009
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
August 10
TeX help
Looking for TeX code for an integral sign having a horizontal dash midway (Dixmier trace related symbol). This would be similar to but with a horizontal line instead of a closed line. Tried a strikethrough (∫) but would prefer better. Ref: Noncommutative integral. If TeX doesn't have such a symbol, can custom symbols be created in TeX? If so, how? Please point me to a reference. Thank you. Henry Delforn (talk) 07:54, 10 August 2009 (UTC)
- The MnSymbol library might have what you need. Check here and look on page 29. Otherwise search that PDF for "integral" and you might find some others. Maelin (Talk | Contribs) 12:04, 10 August 2009 (UTC)
- You can try
\def\dashint{\mathop{\mathpalette\dodashint{}}\!\int}
\def\dodashint#1{\setbox0\hbox{$#1\int$}\setbox0\hbox to \wd0{\hss$#1-$\hss}\wd0=0pt\box0}
- It won't work in Wikipedia's texvc, if that's what you're asking for. — Emil J. 13:08, 10 August 2009 (UTC)
- No didn't work. Came up with:
which is better than ∫ but still bad. Henry Delforn (talk) 14:59, 10 August 2009 (UTC)
- You may be interested in the unicode equivalent, ⨍, U+2A0D, or the slanty version ⨏, U+2A0F. JackSchmidt (talk) 15:12, 10 August 2009 (UTC)
- Here's a crude approximation which is accepted by texvc:
\,\,{-}\;\;\!\!\!\!\!\!\!\!\!\int
(the two initial spaces are only needed if something precedes the integral). — Emil J. 15:40, 10 August 2009 (UTC)
- I don't care what they may say, I don’t care what they may do, you guys are just alright with me, oh yeah. Henry Delforn (talk) 20:14, 10 August 2009 (UTC)
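For readers using ordinary LaTeX rather than texvc, the overstrike trick above can be wrapped in a macro. A sketch, where the macro name \dashint and the −17mu kern (the net negative space in Emil J.'s texvc version) are arbitrary choices that may need tuning; if I recall correctly, the esint package also provides a ready-made dashed integral as \fint:

```latex
% Dashed integral via overstrike; the kern value is tuned by eye
% and may need adjusting for different fonts or sizes.
\newcommand{\dashint}{{-}\mkern-17mu\int}
% usage: $\dashint_X f \, d\mu$
```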
Time Simulation in Matlab
Hi,
I want to produce a plot of two parametrised functions in real time, so I can see the path of the particle being drawn out as time increases, in Matlab. I'm currently studying fluid flows, so I want to be able to see the path of a particle given the x and y equations in time. My tutor produces simulations with Maple quite easily, but apparently it is possible to produce them with Matlab, and since I have a copy of Matlab I'd love to know how.
I've done a bit of searching and have come across Simulink models, but they seem very complicated for what I want to be doing (I have a novice level of understanding of Matlab; I can write scripts with ease, but models are beyond me).
Thanks for any help.
Pete 124.184.72.1 (talk) 10:45, 10 August 2009 (UTC)
- You can try something like this:
figure;
hold on;
x = 0;
y = 0;
plot(10, 10, 'b-');    % pre-plot the far corner so the axes don't rescale mid-animation
for i = 1:10
    x1 = x;            % remember the previous point
    y1 = y;
    x = x + 1;         % replace these two lines with your x(t), y(t) equations
    y = y + 1;
    plot([x1, x], [y1, y], 'b-');  % draw the newest path segment
    drawnow;           % force the figure window to update immediately
    pause(0.1);        % delay between frames, in seconds
end
- Hope that helps... --Martynas Patasius (talk) 23:04, 10 August 2009 (UTC)
Complex integration
Hi, it's me again. I'm trying to evaluate with . I know that the integral converges (comparison with x^{-2}, and well behaved at the origin), and numerical estimation yields .
So... I define . It's got one singularity, a simple pole at the origin, and we have residue equal to i. Thus, any contour with winding number n around the origin has for its integral. It makes sense to me to take as a contour something like the upper half of , the lower half of (with R large and r small), and the lines that join them. Oriented in the usual way, this should lead to an integral that contains the one we're looking for, with other parts that can be dealt with separately, we hope.
I have no problem with the integral on the large circle going to 0. However, it seems to blow up on the smaller semicircle, and I can't see what I'm doing wrong. Should I be using a different contour? Thanks in advance for any tips. -GTBacchus(talk) 22:04, 10 August 2009 (UTC)
- Heh, I just came here to ask about the same exact problem and even the exact same part of it, the inner circle. The answer is exactly by the way. I was doing it slightly differently, though, the upper half of . I have a solution from someone else and I am just trying to understand what they do for this. Call this contour . Then
- .
- The solution I am looking at makes the claim "We then see that the first integrand has a removable singularity at z = 0 and so as , this integral goes to 0." I don't get it. StatisticsMan (talk) 22:27, 10 August 2009 (UTC)
- That makes some sense to me. Since has a simple pole with residue i, then subtracting takes away that pole. As long as there are no other negative powers of z in the Laurent expansion, the singularity becomes removable.
I mean, I think that makes sense. Must scribble madly on whiteboard now... -GTBacchus(talk) 22:34, 10 August 2009 (UTC)
- Yea, I get that there is a removable singularity, but why does that make the integral 0? StatisticsMan (talk) 22:36, 10 August 2009 (UTC)
- Wait, which part of the contour are we talking about? If it says "as ", are we talking about the big circle? -GTBacchus(talk) 22:40, 10 August 2009 (UTC)
- If the singularity is removable, then the residue is zero, so that makes an integral around it equal zero. -GTBacchus(talk) 23:03, 10 August 2009 (UTC)
- First, I didn't use R and r as you did. Instead, I used R and 1/R, so sorry I confused you. So, as R goes to infinity, the radius of the inner circle goes to 0. Second, if the singularity is removable, then the residue is zero, so that makes the integral around the whole inner circle 0 but we have half of the inner circle whether you do the upper half (what I was doing) or lower half (what you were doing). We don't have the whole circle. StatisticsMan (talk) 23:25, 10 August 2009 (UTC)
Let's not use "the residue theorem" directly. By writing out the function as a Laurent series, one sees that residues close to, but outside, contours can contribute to integrals. In the present case: if is an upper semi-circle of radius centered at 0, we have: , where is analytic. Now if you parameterize by and choose clockwise orientation (which is what we need for our computation here) . Needless to say, this holds in the limit , where the contribution of the bounded term to the integral vanishes. To wrap up the computation of the real integral, note that the integral along a large circle decays as the radius grows, by Jordan's lemma. Phils 02:18, 11 August 2009 (UTC)
- I figured out how to finish it up. The point is, if it is a removable singularity, it means there exists some r such that the function is bounded on a disk of radius r about that point. In other words as long as you make r (or 1/R) small enough, the function must be bounded. So, you use the ML inequality to say the integral is less than or equal to whatever the bound is times the length which is (in my case). And, goes to 0 as R goes to infinity. StatisticsMan (talk) 03:05, 11 August 2009 (UTC)
- I get it. Thank you both very much! -GTBacchus(talk) 17:48, 12 August 2009 (UTC)
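For the record, the small-semicircle step the thread needed can be stated as a lemma. A sketch in LaTeX; the residue is written r here, since the transcription above lost the exact function:

```latex
% Lemma: if f(z) = \frac{r}{z} + g(z) with g analytic (hence bounded) near 0,
% and C_\varepsilon is the upper semicircle |z| = \varepsilon traversed clockwise,
\int_{C_\varepsilon} f(z)\,dz
  = \int_{\pi}^{0} \left( \frac{r}{\varepsilon e^{i\theta}}
      + g(\varepsilon e^{i\theta}) \right) i \varepsilon e^{i\theta} \, d\theta
  = -i\pi r + O(\varepsilon)
% The O(\varepsilon) term is at most \pi\varepsilon \sup_{C_\varepsilon} |g|
% by the ML inequality, so it vanishes as \varepsilon \to 0.
```

This is exactly the "half the residue" contribution discussed above: the whole circle would give −2πir, the semicircle gives −iπr in the limit.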
August 11
Integral
Let and for . Prove that a.e.
I have someone's solution but it quotes some result on Fourier series saying "Since G is continuous and G(0) = G(1), then the sequence of the arithmetic means of the partial sums of the Fourier series of G converges uniformly to it." They say this is a result of some theorem in baby Rudin. I am guessing we are not supposed to just know such a result. So, do any of you have an idea how to do this using qualifying exam real analysis? Thanks. StatisticsMan (talk) 14:38, 11 August 2009 (UTC)
- The statement of your problem is equivalent to the statement that the Fourier basis is complete. I don't think there are too many elementary proofs of that floating around. The shortest one I found was in Zygmund's Trigonometric Series, section 1.5.
- Briefly, Zygmund's proof starts by proving the statement for continuous periodic functions. This is done in the following way: if one assumes, to the contrary, that there is a continuous function whose Fourier coefficients are all zero, but which is nonzero, then it must be greater than some in a -neighborhood of some point . It is then only necessary to construct a trigonometric polynomial p which is bounded below one in absolute value outside this neighborhood, and exceeds 1 uniformly inside this neighborhood; this can be done by appropriate translations of the cosine function. If one then considers the interaction of , this is supposed to be zero, but can be shown to grow to infinity, which is the desired contradiction.
- Zygmund then considers the general integrable case by letting and establishing that if the Fourier coefficients of f are all zero, then so are the Fourier coefficients of F. However, F is a continuous periodic function, so by the previous special case F is uniformly zero. That means that its derivative, f, is zero almost everywhere. RayTalk 15:24, 11 August 2009 (UTC)
So, G is continuous and 1-periodic, and we need to show that the Cesàro sum of its Fourier series converges uniformly to G(x). Let
be the kth Fourier coefficient. The Nth partial Cesàro sum is
and we have
thus
First, if we temporarily change G to the constant 1 function, we can easily compute directly G(x,N) = 1, hence
Now, we return to our original G. For any there exists such that whenever . We have
by (*) and (**). We can bound
For , we have
for some c depending only on G, hence |G(x,N) − G(x)| ≤ 2ε if N is large enough. — Emil J. 15:33, 12 August 2009 (UTC)
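The formulas in the derivation above were lost in transcription. For reference, a sketch of the standard (Fejér) ingredients it relies on, in one common notation for a continuous 1-periodic G; Emil J.'s G(x, N) appears to be the Cesàro mean written σ_N(G)(x) here:

```latex
\hat G(k) = \int_0^1 G(t)\,e^{-2\pi i k t}\,dt,
\qquad
\sigma_N(G)(x) = \frac{1}{N+1}\sum_{n=0}^{N}\,\sum_{|k|\le n}\hat G(k)\,e^{2\pi i k x}
              = \int_0^1 G(x-t)\,F_N(t)\,dt,
\qquad
F_N(t) = \frac{1}{N+1}\left(\frac{\sin\big((N+1)\pi t\big)}{\sin(\pi t)}\right)^{2}.
```

Since F_N ≥ 0, ∫₀¹ F_N = 1, and F_N → 0 uniformly outside any neighbourhood of 0, the usual split of the convolution integral into |t| < δ and |t| ≥ δ gives σ_N(G) → G uniformly, which is the claim quoted from the other solution.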
Lottery odds
Hi,
I'm trying to work out the odds of the following situation:
49 balls in a draw, 6 balls drawn, and I want to know how to work out the probability of 3 and only 3 matching balls being drawn (with the other 3 being "duds") as a "condensed formula" and not a "series formula".
For (not) example:
P = 1/N * 1/(N-1) * .... * 1/(N-(n-2)) * 1/(N-(n-1))
For example:
P = N! / ((N-n)! * n!)
Rixxin (talk) 16:02, 11 August 2009 (UTC)
- What do you mean by matching balls and duds? Bo Jacoby (talk) 16:53, 11 August 2009 (UTC).
- A match ball is one that matches your lottery ticket and a dud is one that doesn't. --Tango (talk) 17:03, 11 August 2009 (UTC)
(outdent response) Hint. Figure out how many ways there are to draw 6 balls from the 49. Then figure out how many ways there are to draw exactly three matching. The second can be broken into two parts: how many ways to pick exactly three of the six you have, then how many ways to pick the rest (the other three) from the numbers you don't have. Baccyak4H (Yak!) 17:14, 11 August 2009 (UTC)
- Thanks for the reply. I'm pretty familiar with the method to derive this, it's the formula itself I'm having trouble with.--Rixxin (talk) 18:17, 11 August 2009 (UTC)
Assuming that the lottery ticket also contains 6 numbers, the probability is that of the hypergeometric distribution for N=49, m=n=6, k=3. Bo Jacoby (talk) 08:32, 12 August 2009 (UTC).
- The hypergeometric distribution is a continuous distribution though, is there not a discrete version of this, similar to the OP's initial example formula for working out the odds of picking numbers from a set of , and matching all of them? ReeKorl (talk) 10:30, 12 August 2009 (UTC)
I can't see where your n and N come from. So let's assume there are 49 balls. You choose 6, and then 6 are drawn from the 49. What are the odds of getting exactly 3 of your 6 to match the 6 drawn? The odds are given by (number of losing draws)/(number of winning draws). Here losing draws are the number of 6-ball combinations that don't include exactly 3 of your chosen 6. Winning draws are the number of 6-ball combinations that include exactly 3 of your chosen 6. Well, there are
ways of choosing 3 from your 6. If you want to match exactly 3 then 3 of the 6 drawn must be different from your chosen 6. There are
ways of choosing 3 from the 49 − 6 = 43 unchosen balls. It follows that there are 20 × 12,341 = 246,820 possible ways of drawing 6 from 49 so that exactly 3 will match your chosen 6. There are a total of
ways to draw 6 numbers from 49. So the number of losing draws is total - winning = 13,736,996. The odds are the
The odds are then about 55.7-to-1. If you want to introduce variables then you'll get an explicit formula, but with products and differences of factorials. Notice that odds of, say, m-to-n mean that you expect something to happen n times out of m + n. In the case above, you can expect not to match a single number more often than not. I forget the exact figure; it's about 70% of the time that you won't get a single number. But don't quote me on that, I haven't worked it out for a long time. Also, the 55.7-to-1 means that if the lottery prize were £56 for 3 numbers then the lottery would still make a slight profit over time. Interesting that the English lottery pays £10 for exactly 3 numbers. Quite a profit margin, eh? ~~ Dr Dec (Talk) ~~ 11:27, 12 August 2009 (UTC)
- The hypergeometric distribution is discrete. My n and N come from the article hypergeometric distribution.
- Bo Jacoby (talk) 14:47, 12 August 2009 (UTC).
- I was asking about Rixxin's n and N. There aren't any variables in the formulation of his problem, but then s/he starts writing expressions in terms of n and N. Bo, you seem to have given the probability (0.018) instead of the odds (55.7-to-1). ~~ Dr Dec (Talk) ~~ 22:04, 13 August 2009 (UTC)
- "N" was the total number of balls, and "n" was the number of balls drawn, so in my example this would be 49 and 6 respectively.--Rixxin (talk) 15:58, 14 August 2009 (UTC)
- OK Dr Dec. Odds are 490607 to 8815, to be exact. Bo Jacoby (talk) 20:43, 14 August 2009 (UTC).
- True, but 56-to-1 would be better since you can either play the lottery or not. A whole-number answer is good enough; a one-decimal-place answer is as accurate as is useful. An answer like 490607-to-8815 is not very practical. Like any question in maths: we should put our answers in the context of the question. ~~ Dr Dec (Talk) ~~ 21:52, 14 August 2009 (UTC)
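The counting argument in this thread is easy to check mechanically. A sketch in Python, using the thread's numbers (49 balls, 6 drawn, 6 chosen, exactly 3 matching); the function name is an arbitrary choice:

```python
from math import comb

def exact_match_probability(total=49, drawn=6, chosen=6, matches=3):
    """Hypergeometric probability that exactly `matches` of the chosen
    numbers appear among the drawn ones."""
    ways_win = comb(chosen, matches) * comb(total - chosen, drawn - matches)
    ways_all = comb(total, drawn)
    return ways_win / ways_all

p = exact_match_probability()   # 246820 / 13983816, about 0.0177
odds_against = (1 - p) / p      # losing draws per winning draw
```

This reproduces the figures above: C(6,3)·C(43,3) = 20 × 12,341 = 246,820 winning draws out of C(49,6) = 13,983,816, giving odds of 13,736,996 to 246,820, about 55.7-to-1.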
Parametrisation
Hi. I've just finished a piece of work and would appreciate it if someone here could check through it.
'A curve is given by where n is an odd integer and a is a positive constant. Find a parametrisation that describes the curve anticlockwise as t ranges from 0 to . The area enclosed by such a curve is given by . Determine the area for the case n=3.'
For the parametrisation I get and (though I see no reason why it couldn't be the other way around) and for the area I get (I'm slightly concerned about the negative sign). Is this correct? 92.3.137.99 (talk) 20:32, 11 August 2009 (UTC)
- Switching x and y switches the direction that the curve goes with increasing t, which reverses the sign of the integral. The version you chose goes clockwise which is why the integral comes out negative. Rckrone (talk) 01:50, 12 August 2009 (UTC)
- The area enclosed by such a curve is given by . Bo Jacoby (talk) 05:37, 12 August 2009 (UTC).
- Ahhhh, I see. It never crossed my mind to more closely examine why it specifically said 'anticlockwise'. I, hopefully, won't make that mistake again. Thank you Rckrone! 92.3.137.99 (talk) 12:57, 12 August 2009 (UTC)
- BTW, with n=3 you get a classic curve, the Astroid. Other values of the exponent give more or less known curves, still enjoying some geometric properties. --pma (talk) 16:01, 12 August 2009 (UTC)
- With n = 1, you also get a fairly well-known curve. I daresay it is much better known than the astroid. — Emil J. 14:22, 13 August 2009 (UTC)
- of course, but he was interested in n=3; what he wants is in the linked article... pma —Preceding unsigned comment added by 79.38.22.37 (talk) 19:44, 14 August 2009 (UTC)
- As a second question, why is the area given by rather than which is how I was taught to find the area under a curve when it's given in parametric form? 92.3.137.99 (talk) 19:01, 13 August 2009 (UTC)
- For a closed curve, integration by parts shows that the two are equivalent. -- Meni Rosenfeld (talk) 20:52, 13 August 2009 (UTC)
- By the way, the area is (note the ). -- Meni Rosenfeld (talk) 21:10, 13 August 2009 (UTC)
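The n=3 case can be checked numerically. A sketch in Python, assuming (as the thread's pointers to the astroid suggest) the parametrisation x = a·cos³t, y = a·sin³t, and comparing the line-integral area ½∮(x dy − y dx) against the closed form 3πa²/8:

```python
import math

def astroid_area(a=1.0, steps=100_000):
    """Numerically evaluate (1/2) * integral over [0, 2*pi] of
    (x*dy/dt - y*dx/dt) dt for x = a*cos(t)**3, y = a*sin(t)**3,
    traversed anticlockwise (t increasing)."""
    total = 0.0
    dt = 2 * math.pi / steps
    for k in range(steps):
        t = k * dt
        x = a * math.cos(t) ** 3
        y = a * math.sin(t) ** 3
        dx = -3 * a * math.cos(t) ** 2 * math.sin(t)  # dx/dt
        dy = 3 * a * math.sin(t) ** 2 * math.cos(t)   # dy/dt
        total += (x * dy - y * dx) * dt
    return total / 2

area = astroid_area()   # close to 3*pi/8 for a = 1
```

Here x·dy/dt − y·dx/dt simplifies to 3a²·sin²t·cos²t ≥ 0, which also shows why the anticlockwise orientation gives a positive area while swapping x and y flips the sign.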
- With n = 1, you also get a fairly well-known curve. I daresay it is much better known that the astroid. — Emil J. 14:22, 13 August 2009 (UTC)
August 12
Using the δ/ε method to show that lim(x→0) of 1/x does not exist
So just when I thought I was getting the hang of the δ/ε method, I got stuck on showing that doesn't exist. I started by assuming the contrary, i.e., that there was a limit L such that . After setting up the basic δ/ε inequalities, I got
a) whenever b) . I then decided to look at two different cases for L, namely when L ≥ 0 and when L < 0.
For L ≥ 0, the rightmost side of a) is always positive (since we choose ε > 0), so I thought it would be safe to say that is always positive as well. It then seemed logical to use as an effective delta, i.e., say that (however, I could not satisfactorily explain to myself why it seemed logical to do that - explanations appreciated =] ). However, after multiplying this latest inequality by , the contradiction between the resulting statement and the inequality a) led me to deduce that there isn't an L ≥ 0 that satisfies the limit, and that I'd done something correctly.
So then I moved on to the case where L < 0.
I noticed the leftmost side of a) would always be negative, and hence would always be negative as well. Then I got stuck. I know I'm looking for contradiction between a) and some manipulation this last inequality, but I'm not entirely sure how to get there, especially since multiplication by negative terms changes the direction of the < signs...
Would somebody be able to explain the completion of this proof to me, please? Korokke (talk) 06:37, 12 August 2009 (UTC)
- You can make a similar argument as you did in the first case. Specifically, for any δ>0, there is always an x with -δ < x < δ such that . Note that in both cases it's not enough just to argue that a specific δ doesn't work, but that there is no possible δ that works. Rckrone (talk) 07:41, 12 August 2009 (UTC)
- And you only need to show that for a single ε, although in this case no ε will work. For example, it's sufficient to show that ε=2 will not work, by showing that, no matter how small δ is made, 1/x will still get further than ε from a given L. For δ≥1, you can use x = ±1/2 (depending on the value of L), and for δ<1, you can use x = ±δ/2 (likewise). I'll let you complete the argument from there. --COVIZAPIBETEFOKY (talk) 13:09, 12 August 2009 (UTC)
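The construction COVIZAPIBETEFOKY describes for the δ < 1 case can be illustrated numerically. A sketch: for ε = 2 and any candidate limit L, the point x = ±δ/2 (sign chosen so that 1/x has the opposite sign to L) defeats the candidate δ; the function name is an arbitrary choice:

```python
def witness(L, delta):
    """For a candidate limit L and a candidate delta < 1, return an x with
    0 < |x| < delta but |1/x - L| >= 2, so delta fails for epsilon = 2."""
    # Choosing the sign of x opposite to L makes |1/x - L| = |1/x| + |L|,
    # and |1/x| = 2/delta > 2 whenever delta < 1.
    return -delta / 2 if L >= 0 else delta / 2

for L in (-10.0, -0.5, 0.0, 0.5, 10.0):
    for delta in (0.9, 0.1, 1e-6):
        x = witness(L, delta)
        assert 0 < abs(x) < delta and abs(1 / x - L) >= 2
```

Since this works for every δ < 1 (and x = ±1/2 handles δ ≥ 1), no δ satisfies the definition, so no L is the limit.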
Random Walk
I was reading up on the random walk article, which says that in a situation where someone flips a coin to see if they will step right or left, and do this repeatedly, eventually their expected distance from the starting point should be sqrt(n), where n = number of flips. This is derived from saying that D_n = D_(n-1) + 1 or D_n = D_(n-1) - 1, then squaring the two and adding them, then dividing by two to get (D_n)^2 = (D_(n-1))^2 + 1. So if (D_1)^2 = 1, it follows that (D_n)^2 = n, and that D_n = sqrt(n). But if we instead work with absolute values, we seem to get a different result: abs(D_n) = abs(D_(n-1)) + 1 or abs(D_n) = abs(D_(n-1)) - 1, therefore abs(D_n) = abs(D_(n-1)), so D_n should stay around 0 and 1. Is there a way around this apparent contradiction? —Preceding unsigned comment added by 76.69.240.190 (talk) 17:19, 12 August 2009 (UTC)
- For one thing, your argument fails when D_(n-1) is 0. Algebraist 17:23, 12 August 2009 (UTC)
- It is true that D_n = D_(n-1) + 1 or D_n = D_(n-1) - 1, but squaring the two and adding them, then dividing by two to get (D_n)^2 = (D_(n-1))^2 + 1 is not legitimate. Multiplying the two gives (D_n)^2 = (D_(n-1))^2 − 1 which is not correct. Sometimes a bad argument leads to a good result. Bo Jacoby (talk) 04:21, 13 August 2009 (UTC).
- Adding the two then dividing by two amounts to computing the expectation, as both possibilities have probability 1/2. So, as far as I can see, this makes a valid argument that the expected value of Dn2 is n (which is however different from the expected value of |Dn| being , indeed the latter is false by Michael Hardy's comment below). The subsequent computation directly with |Dn| is wrong, because the two possibilities do not always have probability 1/2: as Algebraist pointed out, it fails when Dn−1 = 0. — Emil J. 10:53, 13 August 2009 (UTC)
- Nevertheless, the argument is fixable. |Dn| = |Dn−1| + 1 when Dn−1 = 0, otherwise the difference is +1 or −1 with equal probability. Thus the expectation of |Dn| − |Dn−1| equals
- Using Stirling's approximation and linearity of expectation,
If you read carefully, you see that it says
Michael Hardy (talk) 10:36, 13 August 2009 (UTC)
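The distinction the thread draws — E[Dn²] = n holds exactly, while E[|Dn|] grows like √n but with a smaller constant (√(2n/π) asymptotically) — can be checked by simulation. A sketch; the function name and the seed are arbitrary choices:

```python
import random

def walk_stats(n=100, trials=20_000, seed=1):
    """Simulate `trials` simple random walks of n fair +/-1 steps and
    return (mean of D_n**2, mean of |D_n|)."""
    rng = random.Random(seed)
    sum_sq = sum_abs = 0
    for _ in range(trials):
        d = sum(rng.choice((-1, 1)) for _ in range(n))
        sum_sq += d * d
        sum_abs += abs(d)
    return sum_sq / trials, sum_abs / trials

mean_sq, mean_abs = walk_stats()
# mean_sq comes out near n = 100, since E[D_n^2] = n exactly;
# mean_abs comes out near sqrt(2n/pi), roughly 7.98, which is
# noticeably less than sqrt(100) = 10.
```

This illustrates why the root-mean-square distance √(E[Dn²]) = √n is not the same quantity as the expected absolute distance E[|Dn|].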
Using the identity theorem
Here's the problem I'm working on:
Let A be the annulus . Then there exists a positive real number r such that for every entire function f, .
So, if the claim is not true, then we can get a sequence of entire functions with . These functions converge uniformly on the annulus A, so their limit f is also analytic on A, and agrees on that set with . Therefore, by the identity theorem, f and g have the same Taylor series expansion around any point in A, and this expansion converges in the largest radius possible, avoiding singularities. We know that g has one simple pole at the origin, so our function is defined and unbounded on the disk , where is any point inside the annulus. This disk lies inside the disk , where any entire function would have to be bounded...
Here I'm stuck. I think I just proved that f isn't entire, but what's the contradiction, exactly? Why can't f be the uniform limit of entire functions in some domain, without itself being an entire function? -GTBacchus(talk) 18:58, 12 August 2009 (UTC)
- It is possible for f to be the uniform limit of entire functions in some domain, without being entire. The geometry here (with the domain enclosing the singularity of f) is crucial. Try taking contour integrals around a circle in the annulus. Algebraist 19:10, 12 August 2009 (UTC)
- That does it; thank you. The integral for each is zero, while that for 1/z is 2pi*i. The identity theorem isn't needed here; I just didn't think to integrate. Complex integration sure does a lot of stuff that real integration doesn't. I think I'm still getting used to that.
As for the example where entire functions uniformly converge to a non-entire function, I can just take the Taylor expansion of 1/z around 1, and then look at the domain B(1,1/2). The partial sums of the series are polynomials, and therefore entire, but their uniform limit has its singularity just over the horizon to the west. Is that right? -GTBacchus(talk) 21:58, 12 August 2009 (UTC)
- Looks OK to me. Michael Hardy (talk) 23:56, 12 August 2009 (UTC)
- Thanks. :) -GTBacchus(talk) 00:49, 13 August 2009 (UTC)
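Algebraist's suggestion can be written out in one line. A sketch, assuming (as the rest of the discussion indicates) that the uniform limit is g(z) = 1/z and C is a circle inside the annulus winding once around the origin:

```latex
0 = \lim_{n\to\infty} \oint_C f_n(z)\,dz  % Cauchy's theorem: each f_n is entire
  = \oint_C \frac{dz}{z}                  % uniform convergence on C lets the limit pass inside
  = 2\pi i
```

This is a contradiction, so no sequence of entire functions can converge uniformly to 1/z on the annulus; the enclosed singularity is what makes the geometry crucial.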
August 13
Absolute continuity
Assume f is a real-valued continuous function of bounded variation on [0, 1], and f is absolutely continuous on every interval [a, 1] for 0 < a < 1. Prove f is absolutely continuous on [0, 1]. Can anyone help me out a bit? Thanks StatisticsMan (talk) 00:19, 13 August 2009 (UTC)
- This may be overkill, but any function of bounded variation on the line is decomposable as the sum of an absolutely continuous function, a continuous function whose derivative is 0 almost everywhere but is not constant, and a collection of jump discontinuities. Since your function is continuous of bounded variation, it must be the sum of an absolutely continuous function and a continuous function whose derivative is 0 almost everywhere but is not constant. The latter function is 0, so therefore the function itself must be absolutely continuous. (see, for example, Stein's Real Analysis, Chapter 3). RayTalk 02:33, 13 August 2009 (UTC)
- How can the latter function be 0 if you have specified that it is not constant? --Tango (talk) 02:43, 13 August 2009 (UTC)
- Oops. Amend to "not necessarily." RayTalk 03:21, 13 August 2009 (UTC)
- How can the latter function be 0 if you have specified that it is not constant? --Tango (talk) 02:43, 13 August 2009 (UTC)
- You can argue that the total variation of f on the interval [0, a] goes to zero as a goes to zero. That said, for any ε>0, choose a so that the total variation on [0, a] is less than ε/2, and choose a δ like you normally would on [a, 1] for ε/2. I think that should work. Rckrone (talk) 02:46, 13 August 2009 (UTC)
- Actually, I have a solution for the method you are talking about Rckrone (as in someone else's solution). And, I think I understand everything after what you just said. But, showing that the total variation goes to zero as a goes to zero is the part I don't get. All this solution says is since f is continuous at 0, goes to 0 as a goes to 0 (where the x_i's come from some partition of [0, a] of course). But, I don't get that. StatisticsMan (talk) 03:02, 13 August 2009 (UTC)
- Well if the total variation g is bounded and continuous on [0, 1], then the limit of g(x) as x goes to 0 must be g(0). Rckrone (talk) 03:35, 13 August 2009 (UTC)
- Okay, Royden doesn't define the total variation as a function, though I have seen this in other books. So, I will check that out. Thanks! StatisticsMan (talk) 16:12, 13 August 2009 (UTC)
- Hmm, okay, so I only found such a function in one of my undergrad books (not in my two graduate books) and it doesn't say that the total variation function is continuous. It does say that if the total variation function is continuous at a point, then the function itself is continuous at that same point... but nothing about the other way around. And, what you say makes sense that it would be true, but I don't understand exactly why. StatisticsMan (talk) 01:15, 15 August 2009 (UTC)
- RMK: for a BV function f on [a,b] it is true that the total variation of f on [a,x] is continuous at x if and only if f is continuous at x. For an elementary proof see e.g. G.Choquet's "Cours d'Analyse" (t.2, Topologie). I do not have an English reference in mind at the moment. Another way to prove the original statement is via the fundamental theorem of calculus for AC functions: you can rephrase the original problem in terms of the (a.e.) derivative g of the function f. It translates into the easy: "If g is measurable on [0,1] and for 0 < a < 1, then g is integrable on [0,1]". --pma (talk) 14:43, 15 August 2009 (UTC)
- I'll just take your word on being continuous at x if and only if f is. Thanks! As far as your translation, it does look like it's pretty much immediate to prove. But, I don't see exactly where it comes from. Hmm, maybe I'm beginning to get it. I'll try to work it out the first way and keep thinking about the second! Thanks. StatisticsMan (talk) 17:53, 15 August 2009 (UTC)
- The fact behind is that for an absolutely continuous one has . (By the way, this formula holds true for Rn valued functions, and any given norm on Rn). --pma (talk) 19:03, 15 August 2009 (UTC)
- I think I get that, but showing g is integrable is equivalent to showing f is absolutely continuous? StatisticsMan (talk) 20:43, 15 August 2009 (UTC)
- Sure. A function is AC on [a,b] if and only if it is of the form
- for all x in [a,b], for some in L1([a,b]), in which case is unique a.e., in fact a.e., and ). Here, changing a bit the hypotheses into:
- "Assume is a real-valued continuous function of bounded variation on [0,1], and is absolutely continuous on every interval [0,a] for 0<a<1", we have immediately that
- for all ;
- moreover for all . This implies (by Beppo Levi's theorem) that is in L1([0,1]), that is what we had to show (well, the last point is proving the equality
- for x=1 too, but that follows because it holds true for x<1 and by continuity of both sides: the LHS by hypothesis, the RHS because g is in L1([0,1]) ). --pma (talk) 12:13, 16 August 2009 (UTC)
- Well, that solution is much nicer, easier, simpler than the other one! Thanks. StatisticsMan (talk) 15:54, 16 August 2009 (UTC)
- What's Beppo Levi's theorem? I could also just say that since f is of bounded variation, f' exists a.e. and is measurable (this is true for monotone functions, hence for BV functions), and ∫_0^1 |f'| ≤ Var(f; [0,1]) < ∞, so f' is integrable. Then the function x ↦ f(0) + ∫_0^x f'(t) dt is absolutely continuous. StatisticsMan (talk) 16:13, 16 August 2009 (UTC)
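pma's identity above, Var(f; [a,b]) = ∫_a^b |f'|, is easy to sanity-check numerically. A rough sketch (the choice f = sin on [0, 2π], where the integral of |cos| is 4, is mine):

```python
import math

def total_variation(f, a, b, n=100_000):
    """Approximate Var(f; [a,b]) using a uniform partition with n subintervals."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(abs(f(xs[i + 1]) - f(xs[i])) for i in range(n))

# For f = sin on [0, 2*pi], the integral of |f'| = |cos| over [0, 2*pi] is 4.
tv = total_variation(math.sin, 0, 2 * math.pi)
print(tv)  # very close to 4
```

Since sin is piecewise monotone, the discrete sum is always at most the true variation and converges to it as the partition refines.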
Radius and Volume
A pizza with radius z and a thickness of a has a volume of pi*z*z*a. Is that right? I just read it, and I'm not a mathematician, I'm a linguist. --KageTora - (영호 (影虎)) (talk) 01:57, 13 August 2009 (UTC)
- If the pizza's cylindrical, yes. Algebraist 02:00, 13 August 2009 (UTC)
- Yum, I could go for a nice slice of pi*z*z*a right now... 70.90.174.101 (talk) 08:28, 13 August 2009 (UTC)
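For what it's worth, the joke is just the cylinder volume formula V = πr²h with r = z and h = a; a throwaway check with arbitrary numbers:

```python
import math

z, a = 7.0, 0.5  # arbitrary radius and thickness
cylinder_volume = math.pi * z ** 2 * a   # V = pi * r^2 * h
pizza = math.pi * z * z * a              # "pi z z a"
print(cylinder_volume == pizza)
```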
ordered sets and order types
For well-ordered sets we say the set has an "order type" which we label with an ordinal number. So the natural numbers with the usual order have order type ω, the set of pairs of natural numbers ordered lexicographically has order type ω², etc. But ordinals are all about well-ordering so we can do induction over the sets.
What about just ordinary ordering as opposed to well-ordering? I.e. is there something we'd call an "order type" for the reals and the standard < ordering? What about pairs of reals, etc.? We're not trying to support induction over the whole set, just comparisons between two elements.
I guess this is a question more of terminology than actual math. The objects I want to order are somewhat complicated tree structures with real numbers and some other things stuck in them. I'd like some nomenclature analogous to "order type" that identifies the order relation on a given set of these trees.
Thanks 70.90.174.101 (talk) 08:25, 13 August 2009 (UTC)
- I think I've seen the term "order type" applied more generally than to well-orderings. There's no "standard" ordering of pairs of reals; there are many different ways in which they can be ordered. Michael Hardy (talk) 10:38, 13 August 2009 (UTC)
- (e/c) In general, an order type of a partial order is an object which identifies the order up to isomorphism. This is a somewhat vague description, since the actual representation does not matter, but for definiteness you may define the order type of P to be the set of all posets isomorphic to P of minimal rank. — Emil J. 10:42, 13 August 2009 (UTC)
Isn't this talk of "vagueness" and a need for "representation" (which does not matter!) and "definiteness" and "defining" the thing as a particular "set" (chosen to be of minimal rank...) just a consequence of trying to fit the idea into a particular, not necessarily sacred, theory of sets? It seems to me it's a perfectly good concept that people have qualms about because they feel a need to fit it into that particular theory, and the fact that it takes some work to make it fit should really be viewed as clearly indicating that that particular theory really isn't the last word. Michael Hardy (talk) 14:17, 14 August 2009 (UTC)
- I agree. However, the fact that Scott's trick always works for such questions shows that ZF, if not canonical, is at least good enough for this sort of thing. Algebraist 00:53, 15 August 2009 (UTC)
- So "order type" is fine for non-wellordered sets, and as Michael notes, we needn't trouble ourselves excessively about details of representation except when we need to for technical reasons. However, there are a couple of reasons that the original poster might not have come across this usage. First, order types of general linear orders don't tell you as much. If two wellorderings have the same order type, then you know not merely that there's an order isomorphism between them, but that this isomorphism is unique.
- Secondly, the ordinal numbers are much more important than order types of general linear orders. They're used all over the place; even the intended interpretation of set theory, the von Neumann hierarchy, starts with ordinals before it gets to sets (although it is later shown that ordinals may be represented as sets). Ordinals are in some sense the most concrete sort of object that set theory deals with, this ultra-rigid structure that sticks up like a spine through the set-theoretic universe. As soon as you can pin something down by an ordinal, you feel you have about as good a grasp on it as it is possible to have. --Trovatore (talk) 02:09, 15 August 2009 (UTC)
Vogel's approximate method
The transportation algorithm is applicable to the problem of moving goods from suppliers to customers at least total cost, by determining how much should go on each possible route. The algorithm is akin to the Simplex method of linear programming, but takes advantage of the special structure of a transportation problem by carrying out successive iterations of improvement in a simple table whose rows/columns represent the suppliers/customers and whose cells represent the routes. It is important to get a good initial solution, which Vogel's approximate method does by considering the opportunity cost of not using the cheapest-cost route in each row and column, then allocating as much material as possible to the route which has the highest such value. My question is, who was Vogel and when and where was the method first described? It seems too obvious an idea to have a particular name attached to it, but someone must have thought it worthwhile to do this, and since then the label has stuck.→86.148.186.112 (talk) 16:14, 13 August 2009 (UTC)
- The standard reference text seems to be Reinfeld, N V and W R Vogel (1958). Mathematical Programming. Englewood Cliffs, New Jersey: Prentice-Hall. I haven't been able to find any biographical information yet. MuDor (talk) 01:16, 14 August 2009 (UTC)
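For the curious, the penalty rule described in the question can be sketched in a few lines. This is a minimal illustration rather than a textbook-faithful implementation (tie-breaking, and the penalty convention when only one row or column remains, vary between presentations), and the small balanced example is invented:

```python
def vogel(costs, supply, demand):
    """Vogel's approximation method: build a starting solution for a
    balanced transportation problem (total supply == total demand)."""
    supply, demand = supply[:], demand[:]
    m, n = len(costs), len(costs[0])
    rows, cols = set(range(m)), set(range(n))
    alloc = [[0] * n for _ in range(m)]

    def penalty(cs):
        # opportunity cost: gap between the two cheapest costs still available
        s = sorted(cs)
        return s[1] - s[0] if len(s) > 1 else s[0]

    while rows and cols:
        best = None  # (penalty, kind, index)
        for i in rows:
            p = penalty([costs[i][j] for j in cols])
            if best is None or p > best[0]:
                best = (p, 'row', i)
        for j in cols:
            p = penalty([costs[i][j] for i in rows])
            if p > best[0]:
                best = (p, 'col', j)
        if best[1] == 'row':
            i = best[2]
            j = min(cols, key=lambda j: costs[i][j])  # cheapest cell in that row
        else:
            j = best[2]
            i = min(rows, key=lambda i: costs[i][j])  # cheapest cell in that column
        moved = min(supply[i], demand[j])  # ship as much as possible on that route
        alloc[i][j] += moved
        supply[i] -= moved
        demand[j] -= moved
        if supply[i] == 0:
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)
    return alloc

costs = [[1, 2, 3], [4, 5, 6]]
alloc = vogel(costs, [30, 70], [40, 30, 30])
total = sum(costs[i][j] * alloc[i][j] for i in range(2) for j in range(3))
print(alloc, total)  # [[30, 0, 0], [10, 30, 30]] 400
```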
Sudoku Puzzles
I am curious to determine how many possible ways the digits 1 to 9 can be placed in a 9x9 square such that they follow the rules of a sudoku puzzle; i.e., no digits repeated in any 3x3 square, row, or column. How could this be calculated? CalamusFortis 21:50, 13 August 2009 (UTC)
- 6,670,903,752,021,072,936,960, as stated in Sudoku. Algebraist 22:22, 13 August 2009 (UTC)
- This is more thoroughly covered in Mathematics of Sudoku, sections Enumerating Sudoku solutions and Enumeration results. --COVIZAPIBETEFOKY (talk) 22:26, 13 August 2009 (UTC)
- Note that this enumeration is a complete enumeration; it considers two grids as different if they have different values in any of their 81 pairs of corresponding cells. Given one grid, you can trivially produce 9! similar grids by permuting the digits 1 to 9. You can also create more grids by rotating, reflecting, permuting horizontal and vertical bands, permuting rows within a horizontal band, and permuting columns within a vertical band. Each of these "similar" grids is counted as different in the complete enumeration. Gandalf61 (talk) 12:14, 14 August 2009 (UTC)
- If similar grids are counted as the same, the number of solutions is 5,472,730,538. — Emil J. 12:18, 14 August 2009 (UTC)
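The 9×9 count is far beyond naive brute force, but the same complete enumeration runs instantly on the 4×4 analogue (sometimes called Shidoku, with 2×2 boxes), whose total is known to be 288. A quick backtracking sketch:

```python
def count_grids(n=4, box=2):
    """Count complete n x n grids obeying the row, column and box rules."""
    grid = [[0] * n for _ in range(n)]

    def ok(r, c, v):
        # value v must not already appear in row r, column c, or its box
        if v in grid[r]:
            return False
        if any(grid[i][c] == v for i in range(n)):
            return False
        br, bc = r - r % box, c - c % box
        return all(grid[i][j] != v
                   for i in range(br, br + box)
                   for j in range(bc, bc + box))

    def fill(cell):
        if cell == n * n:
            return 1
        r, c = divmod(cell, n)
        total = 0
        for v in range(1, n + 1):
            if ok(r, c, v):
                grid[r][c] = v
                total += fill(cell + 1)
                grid[r][c] = 0
        return total

    return fill(0)

print(count_grids())  # 288
```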
August 14
Infinity--Question/comment/request for comments
Gauss quote from Actual infinity: "I protest against the use of infinite magnitude as something completed, which is never permissible in mathematics. Infinity is merely a way of speaking, the true meaning being a limit which certain ratios approach indefinitely close, while others are permitted to increase without restriction." In Kenneth Kunen's book on forcing, I recall he said roughly that proper classes are not defined in the most popular theory, ZFC, but are frequently talked about in an informal way and used as an aid to intuition. That is pretty akin to being not permissible, to being a manner of speaking, as Gauss described actual infinity. That makes me wonder: although mathematicians have genuinely tackled infinity and have some really big infinite sets like the reals and more, aren't we confronting the same issue Gauss had? That is, we've tried to deal with his issue by grappling with infinite sets but ran into it again with proper classes? Rich (talk) 06:47, 14 August 2009 (UTC)
- I'm not sure what the question is. What are you hoping for? Some comment that's better than what's already at that article? The reference desk isn't really a discussion board though sometimes it might seem otherwise. Dmcq (talk) 13:19, 14 August 2009 (UTC)
- ZFC doesn't axiomatize classes but other set theories like NBG do. ZFC itself postulates the existence of certain collections of objects (called "sets") that obey a certain bunch of axioms (the ZFC axioms). It turns out that any model of ZFC must necessarily also contain some collections of objects (e.g. the ordinals in the model) that can't follow the axioms without leading to contradiction. This was considered paradoxical for a while (Burali-Forti paradox) but it just means that not every collection is a set. 70.90.174.101 (talk) 01:24, 15 August 2009 (UTC)
- Well, actually, from a realist point of view, it means a bit more than that. The class of all ordinals is not a completed totality at all (if it were, it would have to be a set). So when we speak of "the class of all ordinals", what we're actually referring to, via a metalinguistic circumlocution, is the predicate "is an ordinal", rather than a collection of any sort. --Trovatore (talk) 01:43, 15 August 2009 (UTC)
- Responding to the original question: It seems to me that what you are talking about is the notion of the absolute infinite. As I understand it (and this is sort of a reconstruction from a modern point of view, one for which I don't exactly have clear references to point you to), you can look at it like this: Gauss says, "you can't treat infinite collections as being actual". Cantor says "you're right, but many of the things that were formerly thought of as being infinite, say the set of all natural numbers, are not 'infinite' in that sense. They are transfinite; that is, beyond some limit, but not infinite or entirely without limit."
- See also limitation of size. Our article on that could use some (or a lot of) work. I started it but never got around to doing the literature search to really bring it up to snuff. --Trovatore (talk) 01:51, 15 August 2009 (UTC)
- Maybe Gauss had a notion of absolute infinite, since he was very smart and didn't publish many of his brilliant ideas? If he did, maybe that means his quote was a denial of what he really thought, to avoid controversy, sort of like what some historian claimed was Gauss's lack of courage for not publishing his thoughts on noneuclidean geometry, though I've never heard that he actually REJECTED noneuclidean geometry. (I don't know if the claim of lack of courage is fair, since Gauss was a perfectionist and didn't publish things he hadn't had time to polish.) But if Gauss didn't have a notion of absolute infinity, then I think his quote must mean he rejected all quantities that weren't finite, that no infinity could be actual or really exist, like a ZFCer would say some classes could not exist as sets, and only as collections "in a manner of speaking." Also, do we know if Gauss thought in terms of "cardinality of sets" for measuring quantity? Thanks to both of you for your thoughtful answers. Rich (talk) 15:21, 16 August 2009 (UTC)
Regifting Robin
http://www.regiftable.com/regiftingrobinpopup.html
What's the secret behind this little game? --Halcatalyst (talk) 13:54, 14 August 2009 (UTC)
- The same as it was last time. Algebraist 13:57, 14 August 2009 (UTC)
- Sometimes, though, it comes up with something other than "board game." --Halcatalyst (talk) 16:29, 14 August 2009 (UTC)
- When you take a 2-digit number and subtract both digits, you always get a number divisible by 9. So all that game has to do is label all the numbers divisible by 9 with the same item. --COVIZAPIBETEFOKY (talk) 17:58, 14 August 2009 (UTC)
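COVIZAPIBETEFOKY's observation is easy to check exhaustively: a two-digit number 10t + u minus both of its digits is 9t, so only the nine multiples 9, 18, ..., 81 can ever come up:

```python
results = set()
for n in range(10, 100):
    t, u = divmod(n, 10)        # tens and units digits
    results.add(n - t - u)      # the game's "subtract both digits" step

print(sorted(results))                    # [9, 18, 27, 36, 45, 54, 63, 72, 81]
print(all(r % 9 == 0 for r in results))   # True
```

So the trick only needs to put the same gift on every multiple of 9, and it can swap which gift that is between rounds.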
Showcase Showdown strategy
When should contestants spin again or stay to have the best chance of winning? I guess it would be different depending on whether you go first or second, and whoever goes third doesn't have to worry about it since they are just trying to beat the best score. Recury (talk) 17:29, 14 August 2009 (UTC)
What's the difference?
My statistics text book says this:
Suppose we calculate from one sample in our battery example the following confidence interval and confidence level: "We are 95 percent confident that the mean battery life of the population lies within 30 and 42 months." This statement does not mean that the chance is 0.95 that the mean life of all our batteries falls within the interval established from this one sample. Instead, it means that if we select many random samples of the same size and calculate a confidence interval for each of these samples, then in about 95 percent of these cases, the population mean will lie within that interval.
My question is - don't the two statements that are bolded mean the same thing? That is, doesn't one imply the other? What's the difference between the two? I have been scratching my head for a long time over this but can't figure it out and I'm feeling extremely stupid now =/ --ReluctantPhilosopher (talk) 20:47, 14 August 2009 (UTC)
- I think that "We are 95 percent confident that the mean battery life of the population lies within 30 and 42 months" is talking about the accuracy of the calculation of the mean value. The statement "...the chance is 0.95 that the mean life of all our batteries falls within the interval established" is talking about the distribution of the data.
For example, let's say you tested one million batteries, and you found that 500,000 batteries had lives of 1 month and that 500,000 had lives of 71 months. In this case the mean battery life would be exactly 36, but none of the batteries would have a battery life within 30 and 42 months.~~ Dr Dec (Talk) ~~ 22:19, 14 August 2009 (UTC)
- 95% of the time the confidence interval will contain the population mean. That means if you have independent repetitions of the experiment with the mean remaining the same throughout, in 95% of cases that will happen. But you're looking at just one case, where you got the interval from 30 to 42. The conclusion that they're saying is not justified is that you can be 95% sure in that one case. The difference is that being 95% sure in that one case—that one repetition of the experiment—is not a statement that says in 95% of all repetitions of the experiment, a specific thing happens.
- In fact, sometimes you may even find something in your data that tells you that the one specific repetition of the experiment is one of the other 5% of repetitions, where the specified method of finding an interval gives you an interval that fails to include the population mean. And sometimes you might find information in your data that doesn't tell you for sure that you've got one of the other 5%, but makes it probable. That only happens when you've got a badly designed method of finding confidence intervals, but nonetheless the 95% confidence level is correctly calculated. Ronald Fisher's technique of "conditioning on an ancillary statistic" was intended to remedy that problem.
- The statement that in one particular repetition of the experiment, which gave you the interval from 30 to 42, the population mean has a 95% chance of being in that interval, is a statement about 95% of possible values of the population mean, not about 95% of repetitions of the experiment.
- The argument that one should be 95% sure, conditional on the outcome of that one particular repetition of the experiment may actually be reasonable in cases where all of the information in the data was taken into account in forming the interval, but it's not actually backed up by the math involved. Something other than mathematics, not as well understood, is involved. Michael Hardy (talk) 22:33, 14 August 2009 (UTC)
- Summary: One statement is about 95% of all independent repetitions of the experiment. The other is about 95% of all equally (epistemically) probable values of the population mean, given the outcome of one particular repetition of the experiment. "95% of repetitions" is a relative frequency, not an epistemic probability. Michael Hardy (talk) 22:36, 14 August 2009 (UTC)
So, Michael, please tell us: what's the Wikipedia convention that made you put a line (like the one above) before your answer? ~~ Dr Dec (Talk) ~~ 22:53, 14 August 2009 (UTC)
- You should not be feeling stupid. Your professor should. The concept of confidence interval is low quality science. The dispute between frequentist and Bayesian statistics is behind this sad state of affairs. Bo Jacoby (talk) 11:56, 15 August 2009 (UTC).
I didn't want to indent at a different level from the previous comment but I want the boundary between the previous comment and mine to be clear. Michael Hardy (talk) 19:29, 15 August 2009 (UTC)
One of the problems you're facing is that the population mean is not a random variable. The mean lifetime of your batteries is a fixed number. We don't know what it is, but in repeated experiments it will never change. This raises issues with the frequency view of probability. You can't really assign a (frequency based) probability to the population mean, as it's always exactly the same, no matter how you conduct your sample. Any discussion of probability with respect to the population mean would refer, rather, to our state of knowledge (or lack thereof) about the mean - a Bayesian or epistemic probability. The catch is that the standard confidence intervals were derived from frequency-based statistics. Bayesian statistics has its own related measure, the credible interval, but the two are not necessarily equivalent. So formally, the "95%" has to refer to a frequency-based probability for a random variable: "If you carry out random sampling multiple times, 95% of the time the calculated confidence interval (the endpoints are random variables) will enclose the population mean." You can't say: "If you carry out random sampling multiple times, 95% of the time the population mean (NOT a random variable) will be within 30 and 42 months", because the population mean either is in that interval or it isn't. It doesn't jump around with repeated sampling. -- 128.104.112.102 (talk) 19:30, 15 August 2009 (UTC)
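The repeated-sampling reading of "95%" can be illustrated by simulation. A sketch with invented numbers: a normal population with true mean 36 and known σ = 10 (echoing the battery example), samples of size 50, and the usual z-interval:

```python
import math
import random

random.seed(0)
MU, SIGMA, N, REPS = 36.0, 10.0, 50, 2000
Z = 1.96  # two-sided 95% critical value for the standard normal

covered = 0
for _ in range(REPS):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    mean = sum(sample) / N
    half = Z * SIGMA / math.sqrt(N)     # half-width of the z-interval
    if mean - half <= MU <= mean + half:
        covered += 1

# The *procedure* captures the fixed mean in roughly 95% of repetitions;
# the mean itself never moves, the interval endpoints do.
print(covered / REPS)
```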
Lipschitz, absolute continuity
I'm working through problems in Royden today. Problem 5.20b says:
Show that an absolutely continuous function f satisfies a Lipschitz condition if and only if |f'| is bounded.
This isn't true is it? I mean, |x| is Lipschitz but the derivative does not exist at x = 0 so it is not bounded. Something that doesn't exist can not be bounded. But, whenever it does exist, it is bounded. Would it be correct if it were changed to "if and only if |f'| is bounded whenever it exists."??? StatisticsMan (talk) 23:45, 14 August 2009 (UTC)
- Yes; more precisely, recall that an absolutely continuous function f on an interval I is differentiable a.e. in I, so that f' is defined a.e.; then, f is Lipschitz if and only if f' is essentially bounded. If I is a bounded interval you may also rephrase it saying that Lip(I) coincides with the Sobolev space W1,∞(I). --pma (talk) 07:54, 15 August 2009 (UTC)
- Okay, thanks. So, say f is absolutely continuous and Lipschitz with constant M. Then, if the derivative exists at a point a, it is lim_{h→0} (f(a+h) - f(a))/h. By the Lipschitz condition, |f(a+h) - f(a)|/|h| ≤ M, so the limit is also within these bounds, when it exists. Now, assume f is absolutely continuous and the derivative is bounded, whenever it exists. I'll have to think about this one. Thanks! StatisticsMan (talk) 13:01, 15 August 2009 (UTC)
- And, as to the first implication, remember that a Lipschitz function is in particular absolutely continuous. For the other implication, remember the generalization of the fundamental theorem of calculus in the setting of absolutely continuous functions. --pma (talk) 13:57, 15 August 2009 (UTC)
- Okay, so this is pretty simple too. Assume |f'| <= M whenever it exists, which is almost everywhere since f is absolutely continuous on [a, b]. Also, the derivative is measurable and f is equal to the antiderivative of its derivative, f(x) = f(a) + ∫_a^x f'(t) dt. Then, for any x < y in [a, b], we have |f(y) - f(x)| = |∫_x^y f'(t) dt| ≤ ∫_x^y |f'(t)| dt ≤ M(y - x), which is the Lipschitz condition.
Measurable functions/Derivatives
Let f : [0, 1] to R be a measurable function and E a subset of {x : f'(x) exists}. If m(E) = 0, show that m(f(E)) = 0.
This is a qual problem from the past that I have not seen a solution for. Any ideas? Thanks StatisticsMan (talk) 23:46, 14 August 2009 (UTC)
- Some hints for a completely elementary proof.
- For any positive integer k consider
- E_k := {x in E : |f'(x)| < k}.
- Since by assumption m(E_k) = 0, for every ε > 0 there exists a relatively open nbd A of E_k, such that m(A) ≤ ε.
- Consider the collection F of all those (relatively) open intervals J ⊆ A such that diam f(J) < k diam J, that is, such that f(J) is contained in an interval of length less than k times the length of J.
- Consider the union U of these intervals. The relevant facts that you can check are: U is a relatively open neighborhood of E_k, and U ⊆ A; moreover (key point) each connected component of U is an interval that belongs to the class F.
- This implies that m(f(E_k)) ≤ k m(U) ≤ k m(A) ≤ kε, so m(f(E_k)) = 0, whence m(f(E)) = 0 since E is the increasing countable union of the E_k. --pma (talk) 09:53, 15 August 2009 (UTC)
August 15
L^p(R^n) limit
I am working through a solution a friend presented at our study group for a problem and I am not sure his first step is even true. All we are given is f ∈ L^q(R^n) for some q < ∞. He says for p > q,
- ||f||_p^p = ∫ |f|^{p-q} |f|^q ≤ ||f||_∞^{p-q} ||f||_q^q.
Well, a function can be essentially unbounded but still be integrable (x^{-1/2} on [0, 1], say) but he is claiming the essential supremum is finite. Is this true with just f ∈ L^q? Thanks! Sorry so many questions, but I have just a few days before my qual. And, if it bothers you, I am fine with you not answering them. StatisticsMan (talk) 02:01, 15 August 2009 (UTC)
- I guess I should mention the point of the problem is to show lim_{p→∞} ||f||_p = ||f||_∞ and the first step is just to show that this makes sense by showing ||f||_p from a certain point on is finite. After that, I have the rest of the solution. StatisticsMan (talk) 02:11, 15 August 2009 (UTC)
- Umm, isn't it true that for the constant function f ≡ 1 on R^n one has ||f||_p = ∞ for every finite p and ||f||_∞ = 1? (Igny (talk) 02:34, 15 August 2009 (UTC))
- Yes, that's why we have the assumption that f ∈ L^q for some q. The second part of the question asks for a counterexample if we do not have that assumption and the one you gave is the one. StatisticsMan (talk) 03:09, 15 August 2009 (UTC)
- In fact I do not understand what statement you want to prove. In general, a given function on RN belongs to Lp for a set of exponents p that is an interval in [1,∞]. Conversely, for any interval J of [1,∞] there is a measurable function f on RN such that f is in Lp if and only if p is in J. --pma (talk) 07:32, 15 August 2009 (UTC)
- Okay, well let me give you the exact question, just to be sure.
- If f ∈ L^q(R^n) for some q < ∞, show that lim_{p→∞} ||f||_p = ||f||_∞. Also, show by example that the conclusion may be false without the assumption that f ∈ L^q for some finite q. StatisticsMan (talk) 12:52, 15 August 2009 (UTC)
- I think this can still be true. Say f is in L^q for q in [100,10000]. Then, it is measurable so this means that the L^q norm for q > 10000 is infinity. In that case, this is simply saying that the infinity norm is also infinity. And, I think to prove this, we just do 2 cases. One is where the infinity norm is infinity and the other is where it is finite. The finite one is the one I put up there. Thus, in that case, we are assuming it is finite so in that case f is in L^p for every p ≥ 100, though as you said, this is not true in general. StatisticsMan (talk) 13:31, 15 August 2009 (UTC)
- Yes, it's a well-known property of Lp norms; unfortunately the article here does not have the proof but it's a simple thing that you can find in almost all textbooks on the subject. (PS: how could one imagine that the question was that one?) --pma (talk) 13:39, 15 August 2009 (UTC)
- My second post, right under my first one, says that the first post is the first step in the proof of showing the p-norm goes to the infinity norm. StatisticsMan (talk) 13:58, 15 August 2009 (UTC)
- uh yeah I missed it --pma (talk) 14:06, 15 August 2009 (UTC)
- Yea, I should have put it there in the first place so it would be less likely to be missed! StatisticsMan (talk) 14:26, 15 August 2009 (UTC)
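For intuition about the limit discussed above, here is a concrete f lying in every L^p: for f(x) = x on [0,1], ||f||_p = (1/(p+1))^{1/p}, which climbs toward ||f||_∞ = 1 as p grows:

```python
# ||f||_p for f(x) = x on [0,1] is (integral of x^p)^(1/p) = (1/(p+1))**(1/p),
# while ||f||_inf = 1; the p-norms approach the sup norm as p -> infinity.
norms = {p: (1.0 / (p + 1)) ** (1.0 / p) for p in (1, 10, 100, 1000)}
for p, v in norms.items():
    print(p, round(v, 5))
```

The values rise from 0.5 at p = 1 to above 0.99 at p = 1000.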
N disjoint solutions to N-queens puzzle
What attention, if any, has been given to the problem of dividing an N-by-N board into N solutions to the N-queens puzzle? NeonMerlin 02:05, 15 August 2009 (UTC)
- We can generalise the solution given in the article to any n × n board where n is divisible by 4, say n = 4m for some positive integer m ≥ 1. Assume that the coordinates are given by (x, y) where 1 ≤ x, y ≤ n. We have three families of queens:
- ~~ Dr Dec (Talk) ~~ 11:53, 15 August 2009 (UTC)
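A different construction, which if I recall correctly works whenever gcd(n, 6) = 1: layer k puts the queen of row r in column (2r + k) mod n. Distinctness of r + c and r - c modulo n forces distinctness of the actual diagonals, so each layer is a valid solution, and the layers tile the board since (2r + k) mod n sweeps out every column of row r as k varies. A sketch with the check built in:

```python
def disjoint_queen_layers(n):
    """For gcd(n, 6) == 1: layer k places row r's queen in column (2r + k) mod n.
    Claim: each layer is a valid n-queens solution and the layers tile the board."""
    return [[(2 * r + k) % n for r in range(n)] for k in range(n)]

def is_solution(cols):
    # cols[r] is the column of the queen in row r; check columns and both diagonals
    n = len(cols)
    return (len(set(cols)) == n
            and len({r + c for r, c in enumerate(cols)}) == n
            and len({r - c for r, c in enumerate(cols)}) == n)

for n in (5, 7, 11):
    layers = disjoint_queen_layers(n)
    assert all(is_solution(layer) for layer in layers)
    cells = {(r, layer[r]) for layer in layers for r in range(n)}
    assert len(cells) == n * n  # the n layers use every cell exactly once
    print(n, "ok")
```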
What is summation of r! from r=1 to r=n?
I've tried adding manually to find a pattern, but can't find any from 1, 3, 9, 33, 153, 873, ...
The original question is to find the summation of r(r!) from r=1 to r=n, so I split up the summation into the summation of r (I know it's n(n+1)/2) and the summation of r! which I can't find a conjecture for. Doing the summation of r(r!) manually gives 1, 5, 23, 119, 719, 5039, ... which I can't find a pattern for either. But since we are not expected to know the formula for summation of r!, I think it's more likely I need to find a pattern from one of the two manual summations. Any hints?
So this is PART of a homework question I have problems with. Later I have to prove the conjecture using mathematical induction. —Preceding unsigned comment added by 59.189.57.133 (talk) 02:07, 15 August 2009 (UTC)
- If it's not too big a hint, find out what "OEIS" stands for ;) 70.90.174.101 (talk) 03:01, 15 August 2009 (UTC)
- Let's go back to the original question: your first step was not the best thing to do. Just observe that r(r!)=(r+1)!-r!. --pma (talk) 08:47, 15 August 2009 (UTC)
- Exactly! summing r·r! is a lot easier than summing r! alone. ~~ Dr Dec (Talk) ~~ 10:38, 15 August 2009 (UTC)
- Even if the simpler-looking sum had been easy to compute, there's no reason to believe that knowing ∑r and ∑r! would help in determining ∑r(r!). ∑(a_r b_r) = (∑a_r)(∑b_r), for instance, is very seldom true. —JAO • T • C 11:01, 15 August 2009 (UTC)
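For the record, the telescoping identity pma points to collapses the sum: ∑_{r=1}^n r·r! = (n+1)! - 1, which matches the values 1, 5, 23, 119, 719, 5039 computed by hand above. A quick numerical confirmation (leaving the induction proof to the homework):

```python
from math import factorial

# telescoping: r*r! = (r+1)! - r!, so the partial sums collapse to (n+1)! - 1
for n in range(1, 11):
    assert sum(r * factorial(r) for r in range(1, n + 1)) == factorial(n + 1) - 1

sums = [factorial(n + 1) - 1 for n in range(1, 7)]
print(sums)  # [1, 5, 23, 119, 719, 5039]
```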
Differential definition
An editor tried to change the definition of Differential (infinitesimal) with this edit [1] to have:
dy = f'(x) Δx
instead of
dy = f'(x) dx
I reverted this and argued against him on the talk page at Differential_(infinitesimal)#The precise definition of a differential, as he had derived it from his own reasoning, and dx isn't Δx, it was just a possible value. However I have now looked at the Springer Maths dictionary entry for "differential" and it does the same sort of thing. Is what he was putting in really right? Dmcq (talk) 14:47, 15 August 2009 (UTC)
- I would have agreed with you because you need to take the limit for the editor's expression to work, which was what he apparently did not do. If he had done that then both your expressions would become equivalent and equally valid. Or in more mathematical terms:
For
Seeing where the discussion on the talk page seems to lead, I propose that the article Differential (infinitesimal) be renamed Infinitesimals in calculus, as that seems to be the subject of the article, and that where it says "a differential is traditionally an infinitesimally small change in a variable" the wording be replaced with a reference to Leibniz notation, taking dy and dx as infinitesimals, and how that is related to differentials. As I see it, differentials are one thing (we have two references now on that) and infinitesimals are another. And then create an article about differentials. Usuwiki (talk) 16:08, 15 August 2009 (UTC)
- Shouldn't that be ? -- 128.104.112.102 (talk) 18:27, 15 August 2009 (UTC)
- No. That is an infinitesimal. When you take the limit for Δx to approach 0 you are turning the definition of the differential into an infinitesimal.
- Unfortunately Leibniz's notation has every one of us confused. dx can be a differential, or an infinitesimal, depending on which notation you take. Usuwiki (talk) 20:32, 15 August 2009 (UTC)
Saying that it's
dy = f'(x) Δx
is a bit of nonsense that modern textbook writers have adopted out of squeamishness about infinitesimals, stemming from the fact that you can't present infinitesimals to freshmen in a logically rigorous way. Insisting on logical rigor is clearly a mistake—typical freshmen can't be expected to appreciate that. The absurdity of that convention becomes apparent as soon as you think about expressions like
Michael Hardy (talk) 19:17, 15 August 2009 (UTC)
Here's the actual edit. Michael Hardy (talk) 19:22, 15 August 2009 (UTC)
- People, those of you thinking of dx, dy, dt as infinitesimals have to stop calling dx a differential. Either you say dx is a differential or you say dx is an infinitesimal; if you stick to the first interpretation you are on standard calculus, if you stick to the second you are on Leibniz's notation. Do not call dx a differential when you will interpret it as an infinitesimal.
- Where did you get that a differential is an infinitesimal?
- It seems you got confused because Leibniz used dx, dy, dt as infinitesimals and then you read that dy is a differential in standard calculus. Usuwiki (talk) 20:02, 15 August 2009 (UTC)
- This is why my proposal stands. From the title, the article is wrong. Perhaps changing the title as I suggest, separating the concepts, and clarifying the notation that is used will be helpful. Usuwiki (talk) 20:32, 15 August 2009 (UTC)
- My understanding is that the traditional idea in calculus is that dx etc are indefinitely small quantities and Δx etc refers to finite amounts. More recently they have become viewed as linear maps or in other terms a covariant basis for the tangent space so they have escaped the constraint of being infinitesimals and become more what the word 'differential' means in English. However I can't imagine myself ever mixing Δx and dy and having them linked to each other the way the analysis textbooks seem to do now. I'd simply write f'(x)Δx rather than dy as mixing the two just will cause confusion. f'(x) is dy/dx and is defined as the limit of Δy/Δx at the point, then to mix the two so dy linearly depends on Δx just sounds like it is asking for trouble.
- Despite my dislike for the usage I guess it is notable and that's really all that matters. So it will have to be accommodated somehow. This is an encyclopaedia though so both the old and new views and both what happens in topology as well as this analysis view have to be dealt with. In my view the article as it stands is not wrong, just incomplete. Dmcq (talk) 21:49, 15 August 2009 (UTC)
- Ok, differential can have another interpretation in English; that's my bad. In that case the title of the article stands for something like an infinitesimal differential of something? Disregarding what is mathematically called a differential?
- Anyway, I have created this article on the differential of a function. Usuwiki (talk) 22:20, 15 August 2009 (UTC)
- Yes, a differential is normally thought of as an infinitesimal in straightforward calculus. As far as I'm aware there is no difference between the English and Spanish treatment of the word in this context. Differential calculus refers to the limit differential and not a finite version. The use of differential as you have pointed out in analysis is an extension of its redefinition as a linear map. Saying dy is a linear function of Δx, using what one might in this context call the infinitesimal differential which defines the tangent space, is not an obvious way of proceeding for most people, as you can see from the comments above. Dmcq (talk) 00:08, 16 August 2009 (UTC)
- We need to get together on this. There needs to be literature about the subject you are pointing to, that is, some book that defines the differential as a limit or something. Otherwise the only thing I understand is that you are confused.
- I'm trying to use the time I have right now to move Wikipedia forward; I want the next edit to be this one. But we need to get together and unify concepts before I continue with this. Usuwiki (talk) 00:36, 16 August 2009 (UTC)
- Just go ahead with the edit; it doesn't remove anything and it's obviously a good place to put it. The problem with the other article was changing the lead into a rather confusing business which didn't reflect the contents, and citing a book which didn't actually give the equation you wrote down. As to the article you set up: a major aim of Wikipedia is to explain things for the audience likely to reach the article, and it will be edited in strange ways by people who are confused unless it is explained well. Dmcq (talk) 08:11, 16 August 2009 (UTC)
Traditionally, differentials are infinitesimals. As I explained, the recent (probably less than 50 years ago) meme that differentials are finite was invented out of unjustified squeamishness about heuristics, and is seen to be absurd when you apply it to integrals.
Usuwiki, where did you pick up the weird idea that differentials are not infinitesimals? Michael Hardy (talk) 18:33, 16 August 2009 (UTC)
Ricci Tensor
I don't understand the article on Ricci Tensor. Can anyone elaborate the method of obtaining the Ricci Tensor from the Riemann Tensor in simpler terms?
The Successor of Physics 15:14, 15 August 2009 (UTC)
- John Baez gives a very clear geometric explanation here. Gandalf61 (talk) 15:03, 16 August 2009 (UTC)
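For reference, the contraction in question can also be written out in index notation. This is a sketch using one common sign and index convention (conventions vary between authors, so check them against whatever text you are using): the Ricci tensor is the trace of the Riemann tensor over its first and third indices, which in terms of the Christoffel symbols reads

```latex
% Ricci tensor as a contraction of the Riemann tensor
% (one common convention: contract the first and third indices)
R_{\mu\nu} = R^{\lambda}{}_{\mu\lambda\nu}
           = \partial_{\lambda}\Gamma^{\lambda}_{\mu\nu}
           - \partial_{\nu}\Gamma^{\lambda}_{\mu\lambda}
           + \Gamma^{\lambda}_{\lambda\sigma}\Gamma^{\sigma}_{\mu\nu}
           - \Gamma^{\lambda}_{\nu\sigma}\Gamma^{\sigma}_{\mu\lambda}
```

The geometric picture in Baez's explanation corresponds to the algebraic fact that this trace measures how a small ball of test particles changes volume, averaging the sectional information carried by the full Riemann tensor.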
P(n+1) more likely to be prime if P(n) is?
I have a lot of empirical evidence supporting the notion that primes tend to cluster somewhat among the values of irreducible polynomials over the integers. That is, it seems that given an irreducible polynomial P, if P(n) is prime then this in some way increases the likelihood that P(n+1) is also prime (i.e., primes appear to arise in pairs, triples, etc. among the values of a given polynomial). Can this be right? Are there any theorems which either confirm or refute this idea? It doesn't seem to make sense to me, but I have quite a lot of specific data. Perhaps I'm still looking at too small a sample to judge. Julzes (talk) 23:09, 15 August 2009 (UTC)
- Formula for primes and Ulam spiral have some related information, but nothing that I can see to answer this question. I don't quite know how to word the conjecture well either, because you need to take a probability over all irreducible polynomials. It's obviously untrue for some IPs, like P(n) = n² + 1 (if P(n) is a prime larger than 2, then P(n+1) is even). —JAO • T • C 12:11, 16 August 2009 (UTC)
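One way to probe the question empirically is to compare the unconditional frequency with which P(n+1) is prime against the frequency conditioned on P(n) being prime. The sketch below does exactly that; the polynomial (Euler's prime-rich quadratic) and the sample size are arbitrary illustrative choices, not anything claimed in the thread.

```python
# Empirical check (not a proof) of the clustering idea:
# compare Pr[P(n+1) prime] with Pr[P(n+1) prime | P(n) prime].
# The polynomial below is an arbitrary illustrative choice.

def is_prime(m: int) -> bool:
    """Trial division; fine for the modest values used here."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def P(n: int) -> int:
    return n * n + n + 41  # Euler's prime-rich quadratic

N = 5000
flags = [is_prime(P(n)) for n in range(1, N + 1)]

unconditional = sum(flags[1:]) / (N - 1)
given_prev = [b for a, b in zip(flags, flags[1:]) if a]
conditional = sum(given_prev) / len(given_prev)

print(f"Pr[P(n+1) prime]              ~ {unconditional:.3f}")
print(f"Pr[P(n+1) prime | P(n) prime] ~ {conditional:.3f}")
```

The same comparison run over many different irreducible polynomials would be needed to say anything beyond one example, as JAO's point about n² + 1 shows: parity and other congruence obstructions can force the conditional probability down rather than up.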
August 16
Bounded linear operator on sigma-finite measure space
Let (X, μ) be a σ-finite measure space. Let g ∈ L^∞(μ) and 1 ≤ p ≤ ∞. Show that the operator T : L^p(μ) → L^p(μ), Tf = gf, is bounded, and ‖T‖ = ‖g‖_∞.
Proof starts:
T is bounded: If 1 ≤ p < ∞, then
- ‖Tf‖_p^p = ∫ |gf|^p dμ ≤ ‖g‖_∞^p ∫ |f|^p dμ = ‖g‖_∞^p ‖f‖_p^p.
If p = ∞ then it's easier, ‖gf‖_∞ ≤ ‖g‖_∞ ‖f‖_∞. So, for all f, we have ‖Tf‖_p ≤ ‖g‖_∞ ‖f‖_p. So, T is bounded and ‖T‖ ≤ ‖g‖_∞, which is half the inequality.
I'm not exactly sure where to go from here. I have a friend's solution but they do something which is obviously wrong at this point. Any ideas? Thanks! StatisticsMan (talk) 00:56, 16 August 2009 (UTC)
- For simplicity, make ‖g‖_∞ = 1 and fix ε > 0. By definition of the norm, there is a set E, with μ(E) > 0, such that |g| ≥ 1 − ε on E. Let us put f = μ(E)^(−1/p) on E, and 0 otherwise. Then ‖f‖_p = 1, and ‖Tf‖_p ≥ 1 − ε. Phils 02:45, 16 August 2009 (UTC)
- I was thinking about a similar thing, but if g ∈ L^∞, that just means it's essentially bounded (we can just say bounded). So, g = 1 is such a g, even satisfying ‖g‖_∞ = 1. But then ‖Tf‖_p = ‖f‖_p no matter what f is. StatisticsMan (talk) 03:25, 16 August 2009 (UTC)
- Oh, I get it. You're not defining E to be the set where |g| = ‖g‖_∞. If that set is really big, just take a part that has finite measure. And, since X = ∪_n X_n with each μ(X_n) < ∞, we can look at the set where |g| ≥ 1 − ε holds and intersect it with some X_n to get a set where it holds that also has finite measure, just as you said. Thanks! StatisticsMan (talk) 03:28, 16 August 2009 (UTC)
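For anyone who wants to see the two halves of the inequality concretely, here is a finite-dimensional sanity check: counting measure on a 10-point space with p = 2, random g, and the normalized indicator of the point where |g| is maximal. This is only an illustration of the argument above, not part of the qual proof; all the numbers are made up.

```python
# Numerical illustration (finite counting-measure space, p = 2) of
# ||T_g|| = ||g||_inf for the multiplication operator T_g f = g f.
import random

random.seed(0)
p = 2
g = [random.uniform(-3, 3) for _ in range(10)]
g_inf = max(abs(v) for v in g)  # ||g||_inf under counting measure

def p_norm(f):
    return sum(abs(x) ** p for x in f) ** (1 / p)

# Upper bound: ||gf||_p <= ||g||_inf ||f||_p for random test functions.
for _ in range(100):
    f = [random.gauss(0, 1) for _ in range(10)]
    Tf = [a * b for a, b in zip(g, f)]
    assert p_norm(Tf) <= g_inf * p_norm(f) + 1e-12

# Lower bound: the normalized indicator of the set where |g| is
# maximal attains the norm (exactly here, since the space is finite;
# in general one only gets within epsilon, as in the thread).
k = max(range(len(g)), key=lambda i: abs(g[i]))
f = [1.0 if i == k else 0.0 for i in range(len(g))]
print(p_norm([a * b for a, b in zip(g, f)]), g_inf)  # both ~ ||g||_inf
```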
Entropy
What is a simple derivation for the formula for calculating entropy? Mo-Al (talk) 08:14, 16 August 2009 (UTC)
- In what context? Do you mean in statistical mechanics? In which case, entropy is S = −k_B Σ_i p_i ln p_i, which sums over the different microstates i that correspond to a given macrostate. But I wouldn't call this "derivable"; rather, it's a definition of entropy.--Leon (talk) 14:38, 16 August 2009 (UTC)
- Well I suppose my question is really what motivates the definition -- why is this formula a natural thing to use? Mo-Al (talk) 19:03, 16 August 2009 (UTC)
- Ah! I was taught that the definition allows one to "correctly" derive ALL the thermodynamics of any particular system; that is, a non-trivially different definition would lead to different thermodynamic predictions that disagree with experiment. This definition, however, allows one to correctly predict thermodynamic behaviour. There is also an intuitive "logic" to it, in that the more microstates corresponding to a given macrostate, the less information is communicated by the macrostate variables. For instance, in a lowest-entropy configuration, with one microstate corresponding to the macrostate in question, the macrostate tells us EVERYTHING about the system. For a high-entropy configuration, the system contains much more information than the macrostate variables (temperature, pressure etc.) can communicate.
Does that make any sense?--Leon (talk) 19:20, 16 August 2009 (UTC)
- And see this.--Leon (talk) 19:25, 16 August 2009 (UTC)
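The formula is easy to experiment with numerically. A minimal sketch (taking k_B = 1 and using made-up distributions): the uniform distribution over Ω microstates gives the maximal value S = ln Ω, while sharper distributions, where the macrostate pins down the microstate more tightly, give less.

```python
import math

def gibbs_entropy(probs):
    """S = -sum_i p_i ln p_i  (k_B = 1; terms with p = 0 contribute 0)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

omega = 8
uniform = [1 / omega] * omega                      # maximal ignorance
peaked = [0.9] + [0.1 / (omega - 1)] * (omega - 1)  # macrostate tells us a lot

print(gibbs_entropy(uniform))  # ln(8) ~ 2.079
print(gibbs_entropy(peaked))   # smaller than ln(8)
print(gibbs_entropy([1.0]))    # essentially 0: one microstate, full information
```

This matches Leon's description directly: entropy measures how little the macrostate variables tell you about which microstate the system is actually in.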
Please fill the gaps in a table
Please help me to fill the gaps in this table: Derivatives and integrals of elementary functions in alternative calculi--MathFacts (talk) 08:25, 16 August 2009 (UTC)
- This sort of thing should be filled in from a book or journal, and I doubt you'll find much in that way for those systems; they're pretty obscure! Anyway, they can mostly be filled in fairly automatically by formulae in the systems once you can do the normal version, so overall I'm not certain about the point. Dmcq (talk) 12:50, 16 August 2009 (UTC)
- My computer algebra systems do not give expressions for the empty cells.--MathFacts (talk) 13:39, 16 August 2009 (UTC)
- I don't think an algebra system producing results from things you feed in is counted as a notable source. And I had to cope with such a result stuck in an article just a day or so ago where the results were not quite right and had ambiguities and besides weren't in simplest terms. Dmcq (talk) 13:45, 16 August 2009 (UTC)
- A source cannot be notable or non-notable; it is reliable or unreliable. It is the topic that may be notable or non-notable.--MathFacts (talk) 13:56, 16 August 2009 (UTC)
- Running expressions you think of through a program and sticking them in a table counts as original research. The stuff in Wikipedia really does have to have some notability, and if you can't find tables giving the expressions, or something very similar, then the subject doesn't satisfy the notability criteria. Wikipedia isn't for publishing facts you dreamt might be useful and worked out yourself; they have to be notable. Dmcq (talk) 14:25, 16 August 2009 (UTC)
- Nobody will publish something that can be derived with a machine; it would be simply ridiculous. Only scientific discoveries are published. Using a machine or calculator is not research, of course.--MathFacts (talk) 15:16, 16 August 2009 (UTC)
- What you put in is original research as far as Wikipedia is concerned. Please read the lead of that article; it is quite specific, and it is a core Wikipedia policy. I know maths doesn't always follow that to the letter, and it shouldn't either for straightforward things. However, you have set up an article that reflects nothing in published literature, full of things you thought of yourself, with the results generated by a program and no results in sources to check them against. That really is going way beyond the bounds. Interesting articles I would have preferred kept, where the person had cited the facts but the synthesis was not something that had been written about, have been removed because of that rule. Dmcq (talk) 16:11, 16 August 2009 (UTC)
- There are published rules on how to compute such things, and anyone can prove them either by himself or using some mathematical software. Regarding integrals, anyone can take a derivative to verify.--MathFacts (talk) 16:19, 16 August 2009 (UTC)
- I would like to point out that there is a link at the top of each column in this article (except for 1) which takes you to the article on that specific subject. And, those likely have most of the formulas in the table. So, it's not original research, at least mostly. StatisticsMan (talk) 16:29, 16 August 2009 (UTC)
- Check them yourself and you'll see they don't. Dmcq (talk) 16:31, 16 August 2009 (UTC)
- And the ones which have seem to have been filled in by MathFacts, presumably the same way as he did this list: generating content using his computer without looking things up. Dmcq (talk) 16:41, 16 August 2009 (UTC)
- (ec) One of the gaps to be filled in your table asks for a "discrete integral", that is, a solution F of the functional equation F(x+1) − F(x) = f(x). By the way, the definition there is a bit misleading: you should be aware that the solution is unique only up to the addition of a 1-periodic function, not just up to an additive constant C (and the analogous remark holds for your "multiplicative discrete integral"). That said, a particular solution can be written down, though that is not particularly relevant information as far as I know. --pma (talk) 16:39, 16 August 2009 (UTC)
- Yes, there is an inconsistency in that article. It needs clarification; I know about it but still haven't had enough time to clarify it. The abovementioned equation is not enough to define the sum, but it is usually defined through Faulhaber's formula or an equivalent.--MathFacts (talk) 17:33, 16 August 2009 (UTC)
- So, I don't see a real case of original research, but I do not see any reason for the name "alternative calculi", either. --pma (talk) 18:46, 16 August 2009 (UTC)
- Any suggestions?--MathFacts (talk) 18:47, 16 August 2009 (UTC)
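The uniqueness remark is easy to verify numerically. A sketch, using the hypothetical choice f(x) = x: the antidifference F(x) = x(x−1)/2 satisfies F(x+1) − F(x) = f(x), and so does F plus any 1-periodic function, which is exactly why the functional equation alone does not pin the "discrete integral" down.

```python
import math

def F(x):
    """An antidifference ("discrete integral") of f(x) = x."""
    return x * (x - 1) / 2

def G(x):
    """Another solution: F plus a 1-periodic perturbation."""
    return F(x) + math.sin(2 * math.pi * x)

# Both satisfy the defining functional equation F(x+1) - F(x) = x.
for x in [0.0, 0.3, 1.7, 5.25]:
    assert abs((F(x + 1) - F(x)) - x) < 1e-9
    assert abs((G(x + 1) - G(x)) - x) < 1e-9  # same f, different solution
print("both F and G satisfy F(x+1) - F(x) = x")
```

Faulhaber-style conventions single out one solution among this 1-periodic family, which is the clarification the article apparently still needs.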
Learning about multiple regression online
It is many years since I was last conversant with multiple regression, so I need a refresher, and I have never used any recent MR software. Could anyone recommend any easy online learning materials, please? I want to do multiple regression on a number of economic time series with the aim of forecasting the dependent variable. Forecasting, not modelling; I think this means that correlations between the variables are not important, as they would be if I were modelling, but I'm not sure. I'm also aware of the different types of MR and unsure which would be best to use. Thanks. 78.144.207.41 (talk) 17:12, 16 August 2009 (UTC)
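While waiting for pointers to learning materials, here is a bare-bones sketch of what multiple-regression software does under the hood: ordinary least squares via the normal equations (X'X)b = X'y. All the data below is synthetic and the coefficient values are invented for illustration; real forecasting work should use a proper statistics package rather than hand-rolled code like this.

```python
# Minimal multiple regression by solving the normal equations
# (X'X) b = X'y on synthetic data with known true coefficients.
import random

random.seed(1)
n = 200
# design matrix: an intercept column of ones plus two predictors
X = [[1.0, random.gauss(0, 1), random.gauss(0, 1)] for _ in range(n)]
# true model: y = 2.0 + 0.5*x1 - 1.5*x2 + small noise
y = [2.0 + 0.5 * r[1] - 1.5 * r[2] + random.gauss(0, 0.1) for r in X]

k = 3
XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]

# Gauss-Jordan elimination with partial pivoting on [XtX | Xty]
A = [row[:] + [b] for row, b in zip(XtX, Xty)]
for c in range(k):
    piv = max(range(c, k), key=lambda r: abs(A[r][c]))
    A[c], A[piv] = A[piv], A[c]
    for r in range(k):
        if r != c:
            m = A[r][c] / A[c][c]
            A[r] = [a - m * b for a, b in zip(A[r], A[c])]
beta = [A[i][k] / A[i][i] for i in range(k)]
print(beta)  # close to the true coefficients [2.0, 0.5, -1.5]
```

For actual economic time series, serially correlated errors make plain OLS standard errors unreliable, which is one reason the choice among the "different types of MR" matters for forecasting.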
Uniformly convergent sequence of polynomials
Characterize those sequences (p_n) of polynomials such that the sequence converges uniformly on the real line.
Here's another qual question for which I have no solution. If you happen to know this is not true or the solution is very complicated, you can just say that. If you know of a somewhat elementary solution, any help would be great. Thanks. StatisticsMan (talk) 19:27, 16 August 2009 (UTC)
- Any such sequence must be a sequence of either constant polynomials, or polynomials with identical leading coefficient after finitely many terms, no? Otherwise p_m − p_n is a polynomial of degree at least 1 for arbitrarily large m and n, and such a difference is arbitrarily large as |x| → ∞, contradicting the uniform Cauchy criterion. Fredrik Johansson 19:45, 16 August 2009 (UTC)
- By the same reasoning, wouldn't the next coefficient need to be the same after finitely many terms? If the leading terms of the polynomials have the same coefficient, they cancel out and you're left with a polynomial of degree one less. So, would the answer be that the polynomials in the sequence must differ only in the constant term after finitely many terms, and those constant terms must converge to some real number? StatisticsMan (talk) 20:16, 16 August 2009 (UTC)
- Yes --pma (talk) 20:21, 16 August 2009 (UTC)
- Of course; my blunder. Fredrik Johansson 22:09, 16 August 2009 (UTC)
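The characterization above can be illustrated numerically. A sketch with arbitrarily chosen example polynomials: two polynomials differing only in the constant term stay a fixed distance apart everywhere, while two whose quadratic coefficients differ drift arbitrarily far apart as the interval grows, so no sequence of the latter kind can be uniformly Cauchy on the whole line.

```python
# p-type: x**2 + 1/n differs from x**2 + 1/m only in the constant
# term, so sup |difference| = |1/n - 1/m| independent of x.
# q-type: (1 + 1/n) x**2 vs (1 + 1/m) x**2 differ in the quadratic
# coefficient, so sup |difference| on R is infinite.

def sup_on_grid(f, xs):
    return max(abs(f(x)) for x in xs)

xs = [x / 10 for x in range(-1000, 1001)]  # grid on [-100, 100]

p_diff = lambda x: (x**2 + 1/10) - (x**2 + 1/20)
q_diff = lambda x: (1 + 1/10) * x**2 - (1 + 1/20) * x**2

print(sup_on_grid(p_diff, xs))  # stays ~0.05 over the whole grid
print(sup_on_grid(q_diff, xs))  # ~0.05 * 100**2: grows with the radius
```

Widening the grid leaves the first supremum unchanged and makes the second one as large as you like, which is the quantitative content of Fredrik's argument.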