Wikipedia:Reference desk/Mathematics: Difference between revisions

From Wikipedia, the free encyclopedia


A related question would be whether mathematical concepts or techniques are copyrightable or patentable. It's a relevant question when you consider cryptology and compression technologies. I wonder what would be the effect of someone having a patent on the [[Pythagorean theorem]]? -- [[User:Tcncv|Tcncv]] ([[User talk:Tcncv|talk]]) 07:58, 16 February 2009 (UTC)
: I believe there is prior art. [[Special:Contributions/76.126.116.54|76.126.116.54]] ([[User talk:76.126.116.54|talk]]) 08:11, 16 February 2009 (UTC)



Revision as of 08:11, 16 February 2009

Welcome to the mathematics section of the Wikipedia reference desk.

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


February 8

Linear Forms of Quadratic Equations

Yesterday I was reading some things about absolute values and suddenly I thought you could express quadratic functions with absolute-valued linear functions! This is how you transform quadratic functions into absolute-valued linear functions:
t+│sx+k│=ax^2+bx+c=0
s=±√a
k=b/(2sx)
t=±√(k^2-c)
ax^2+bx+c=s^2x^2+2ksx+k^2-t^2=t+│sx+k│=0
and this is how you transform absolute valued linear functions into quadratic functions:
ax^2+bx+c=t+│sx+k│=0
│sx+k│=-t=√((sx+k)^2)
(sx+k)^2=t^2=s^2x^2+2ksx+k^2
s^2x^2+2ksx+k^2-t^2=t+│sx+k│=ax^2+bx+c=0
I want to ask: has anyone discovered these forms of quadratic functions yet, and did I do anything wrong in my calculations above? The Successor of Physics 03:24, 8 February 2009 (UTC)[reply]

Your use of the equal sign is wrong ... you mean that s^2x^2+2ksx+k^2-t^2=0 implies that t+│sx+k│=0, don't you (but it rather implies that |t|-|sx+k|=0)? Anyway, the latter equation is not at all linear in x because k=b/(2sx). Icek (talk) 08:10, 8 February 2009 (UTC)[reply]
Oh! The implies sign must have changed into an equals sign when I transferred it from Microsoft Word to Wikipedia! Sorry about that. Also, I don't really mean linear; I only mean its form is linear. The formula for finding the roots of these equations is a lot simpler than the quadratic formula: x=(±t-k)/s, and for quadratic functions with a non-unit coefficient for x^2, my method is simpler than factorizing. The Successor of Physics 10:07, 11 February 2009 (UTC)[reply]
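As an illustrative aside (not part of the original thread): once k is taken as the constant b/(2s) rather than the b/(2sx) queried above, the transformation is just completing the square, and the root formula x=(±t-k)/s does agree with the quadratic formula. A quick numerical check in Python (function names are made up for this sketch):

```python
import math

def abs_linear_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 via the s, k, t parametrisation
    (completing the square), assuming a > 0 and real roots exist."""
    s = math.sqrt(a)          # s = sqrt(a)
    k = b / (2 * s)           # constant k = b/(2s), not b/(2sx)
    t = math.sqrt(k * k - c)  # real exactly when b^2 - 4ac >= 0
    return sorted([(t - k) / s, (-t - k) / s])

def quadratic_roots(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return sorted([(-b + d) / (2 * a), (-b - d) / (2 * a)])

# The two formulas agree, e.g. for 2x^2 + 3x - 5 = 0 (roots 1 and -5/2):
print(abs_linear_roots(2, 3, -5))   # [-2.5, 1.0]
print(quadratic_roots(2, 3, -5))    # [-2.5, 1.0]
```

This works because ax^2+bx+c = (sx+k)^2 - t^2 with those constant values of s, k, t.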

Vector fields and surface normals

If a vector field B(x) is parallel to the normals of a family of surfaces of f(x)=constant, what do we know about it?

I'm eventually aiming at proving B.curlB=0, but initially I'm just looking for a starting point here. I'm not even totally sure about the 'family of surfaces' - would it be as the constant varies or as some parameter in f(x) varies?

Cheers, Spamalert101 (talk) 14:40, 8 February 2009 (UTC)Spamalert[reply]

If ƒ is continuous then the set of all values of x for which ƒ(x) is equal to a specified constant is a surface. If you change the constant you get a different surface. (Maybe I'll come back to your other question.) Michael Hardy (talk) 15:19, 8 February 2009 (UTC)[reply]
Well, an example of such a B would be ∇f, right? Now, for such B, curl B is zero, which is stronger than the result you want, but it seems like this might help. Ctourneur (talk) 19:19, 9 February 2009 (UTC)[reply]
If f is reasonably well-behaved (I don't know what's needed, but it's certainly enough for f to be smooth and the constant a regular value), then B must be parallel to ∇f, say B = λ∇f where λ is a scalar field; then curl B = ∇λ × ∇f, which is perpendicular to B, as required. Algebraist 19:28, 9 February 2009 (UTC)[reply]

Ahh brilliant - I love vector calculus but it does frustrate me so: thank you! —Preceding unsigned comment added by Spamalert101 (talkcontribs) 02:11, 10 February 2009 (UTC)[reply]

Interest Banks pay.

The generic function for interest is an exponential function of the form Y = Y_0 e^(rt),

where Y_0 is the initial amount you put into your account, r is the interest rate, and t is the number of years you have the money in your bank. Y is therefore the total amount in your account after t years. Find out for yourself the approximate value of e. What is its approximate value? Graph the equation assuming your interest rate on a corporate bond is 8% and your initial investment is 10,000. Use a graphing program. Estimate on the graph the total value of your account after 5 years. Calculate exactly how much money you will have after 25 years. —Preceding unsigned comment added by Freemanjr2 (talkcontribs) 16:25, 8 February 2009 (UTC)[reply]

We're not going to do your homework for you. If there is a specific bit you are stuck on, show us what you've got so far and we'll try and help. --Tango (talk) 16:30, 8 February 2009 (UTC)[reply]


February 9

Solenoidal Fields, Curl and Div

Sorry to bombard the forum with vector calc questions today but I'm ill and stubbornly refusing to give up on my work which seems to be a bad combination, I'd really appreciate it if someone could just point me in the right direction to start this question off:

Consider .

I'm meant to show that curl(A)=B if div(B)=0 everywhere - as far as I'm aware from my reading, this would make B a solenoidal field, with a vector potential of A, right? I'm not sure how I make the jump (if that's true) from the integral to though - can anyone give me a little help in the right direction? I've been told to use - but I'm unaware as to how that helps me, sigh!

Thanks very much for the help, Spamalert101 (talk) 00:02, 9 February 2009 (UTC)Spamalert[reply]

Right, I've been working on this a little more and figured I'd write up what I've got so far in the hopes someone might be able to give me a hint or point out a mistake:

(subbing in X=xt)

(I think you can take the curl under the integral - is this not doable? Would be nice to know why not if so)

(Using the expansion for and )

(since I hope?)

- and then could I integrate this by parts? What next?

I'm not sure whether this was the right direction but it was the only thing I could see to do... Thanks for any help, Spamalert101 (talk) 05:36, 10 February 2009 (UTC)Spamalert101[reply]

Master Theorem

Regarding the master theorem, for case 2, i.e. where , the recurrence is , does that logarithm have a base? Copysan (talk) 04:36, 9 February 2009 (UTC)[reply]

The base doesn't matter in that case. It is big O notation so multiplying by a constant factor can be ignored. Changing logarithm base corresponds to multiplying by a constant. PrimeHunter (talk) 04:54, 9 February 2009 (UTC)[reply]
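PrimeHunter's point can be seen numerically: a change of logarithm base only rescales by a constant factor, which big-O notation absorbs. An illustrative Python sketch (not part of the original thread):

```python
import math

# log_b(n) = ln(n)/ln(b), so log_2(n)/log_10(n) equals the constant
# log_2(10) for every n - changing base multiplies by a constant,
# which O-notation ignores.
for n in [10, 1000, 10**6, 10**9]:
    print(round(math.log(n, 2) / math.log(n, 10), 6))  # always ~3.321928
```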

is there a proof that no "betting system" can affect the expected return

Is there a mathematical proof that no "betting system" could offset losses in a memoryless game offering percentages all in the favor of the house? —Preceding unsigned comment added by 82.120.236.246 (talk) 11:11, 9 February 2009 (UTC)[reply]

Yes. The expectation of the sum of random variables is the sum of the expectations, and a sum of negative numbers is negative. Since it is memoryless there is nothing you can do to increase the future expectations using knowledge of past results, so the expectations will always be negative. --Tango (talk) 11:29, 9 February 2009 (UTC)[reply]
Sorry, this is not rigorous enough for me, since it does not address the reason "betting systems" cannot leverage betting differences: betting systems work by increasing and decreasing the size of bets in response to winning and losing streaks. Intuitively, this should not be possible if the game is memoryless. But that is not a proof. (Your argument is good if you don't have a chance to affect the size of the bet.) Is there a proof that leveraging can in no way affect the expected return?
By the way, I have a proof that it is possible to make any amount of money if you have an infinite bankroll, without affecting your infinite bankroll in any way, and I want to know if it is mathematically sound. Let's say you want to make $1 billion, and have an infinite bankroll to help you do it (there will be no net effect on this bankroll). My method is to open a bank account (for the winnings from this method), write yourself a check for $5 billion drawn on your infinite bankroll, and deposit it. When the check clears, your infinite bankroll will not be affected in any way but your new bank account will be $5b richer for it. Is my reasoning true and correct? Thank you! —Preceding unsigned comment added by 82.120.236.246 (talk) 12:02, 9 February 2009 (UTC)[reply]
Yes, but why would you bother giving yourself five billion dollars if you already had an infinite bankroll? Algebraist 13:02, 9 February 2009 (UTC)[reply]
Because you don't want to keep using your infinite bankroll? Or you are just using someone else's, and don't want them to notice on their balance (which should remain infinite)? Honestly, I don't know: I just know that a lot of sites actually say that the martingale betting system is great, because it works with probability 1 if you have an infinite bankroll! So, I wonder if my system would be a good response to these people, since it doesn't even involve handling bets, etc. So it's simpler. Any thoughts? —Preceding unsigned comment added by 82.120.236.246 (talk) 13:17, 9 February 2009 (UTC)[reply]
Yes, if someone still believes in the Martingale system even after it has been pointed out that you need an infinite bankroll for it to work, then I guess your example might make its flaw more obvious. —JAOTC 13:38, 9 February 2009 (UTC)[reply]
Systems like the one you describe require a very long time to recoup losses and so ignore an important cost -- the time value of money. For example, suppose prevailing interest rates are 5%. When you withdraw the $1 billion, you immediately begin incurring an opportunity cost of $5,700 per hour (the interest you would have earned had you not withdrawn the money). So, your system has to generate an income of $5,700 per hour *in addition* to the income it must generate to overcome gambling losses. Wikiant (talk) 13:26, 9 February 2009 (UTC)[reply]
Assuming that it's possible to hold an infinite amount of money in an account, and assuming that the bank still promises a 5% interest, I don't see any loss here. If you had in the account, you'll now get each hour, but that's still . —JAOTC 13:57, 9 February 2009 (UTC)[reply]
What you are saying is that if you have an infinite amount of money you can spend a finite amount of money and still have the same amount of money left. That's basically the definition of infinity. As for my proof, I think it is rigorous. Increasing the size of your bet just multiplies the expectation by a positive number, a negative expectation times a positive number is still negative. You bet more, you just lose more. The Martingale system is well known and only works if you have an infinite bankroll, if you don't (and if we're talking about the real world, you don't) then your expectation is still negative (and actually works out to exactly the same expectation as just betting your whole bankroll in one go, I believe). --Tango (talk) 13:41, 9 February 2009 (UTC)[reply]
Thank you, what you just added ("Increasing the size of your bet just multiplies the expectation by a positive number, a negative expectation times a positive number is still negative. You bet more, you just lose more.") is what I was looking for. I guess I'm not used to thinking rigorously, so that I could request your addition but not come up with it myself. This answers my question.
Resolved
The fact that the expectation is negative isn't quite enough, though. It doesn't exclude the possibility that some betting scheme could allow you to win large amounts of money with high probability, it just means there would have to be a concomitant small chance of losing very very large amounts of money, as in the St. Petersburg paradox. Algebraist 13:49, 9 February 2009 (UTC)[reply]
True, and if you factor in the diminishing marginal utility of money you could theoretically end up with a positive expectation from something like that. Is there a proof that such a system is impossible? --Tango (talk) 14:01, 9 February 2009 (UTC)[reply]
Well, that's pretty much what the martingale system gives you: if you're willing to risk a huge enough bankroll, you can win as much money as you want with a probability as close to 1 as you want. I think you can get some kind of useful theorem if you impose a condition equivalent to the fact that casinos have maximum bets, but that's based on a vague memory of a book I glanced at several years ago. Algebraist 14:08, 9 February 2009 (UTC)[reply]
Martingale only allows you to win the size of your initial bet. I guess you can make that arbitrarily large but it requires an even larger bankroll to get your probability of winning to whatever level you require (it rapidly increases to unrealistic levels - if you want to win $1 betting on red on a double-zero roulette table with a probability of 99% you need $128 [assuming I can use a calculator], that obviously scales with an increased desired win, if you want 99.9% it becomes $1024, and so on). If you factor in the house limit that puts a limit on how many times you can bet - it should be pretty easy to calculate your expectations. Incidentally, I don't think what I said about diminishing marginal utility of money applies, at least not directly, the fact that you can declare bankruptcy and limit your losses could give you a positive expectation, though. --Tango (talk) 14:28, 9 February 2009 (UTC)[reply]
The result I was trying to remember is the Optional stopping theorem. It says that a gambler in a fair casino with a finite lifespan and a house limit will, on average, leave with as much money as he came in, regardless of his strategy. It still doesn't say anything that isn't about expectation, though. Algebraist 01:25, 10 February 2009 (UTC)[reply]
If memory serves, that theorem can be extended to unfair casinos as well - your expected bankroll when you leave is the same as your expected bankroll had you just bet everything on the first spin/toss/deal/whatever. I don't remember the proof, if I ever saw one, though. --Tango (talk) 01:33, 10 February 2009 (UTC)[reply]
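The negative expectation of the martingale system with a finite bankroll is easy to check by simulation. An illustrative Python sketch (not part of the original thread; the 18/38 win probability for roulette red and the $127 bankroll are assumed parameters chosen for the example):

```python
import random

def martingale(p_win=18/38, bankroll=127, target=1):
    """One martingale session: double the bet after each loss, stop after
    one win or when the next bet can no longer be covered.
    Returns the net result of the session."""
    bet, lost = 1, 0
    while bet <= bankroll - lost:
        if random.random() < p_win:
            return target          # one win recoups all losses plus $1
        lost += bet
        bet *= 2
    return -lost                   # wiped out: cannot cover the next bet

random.seed(0)
trials = 200_000
mean = sum(martingale() for _ in range(trials)) / trials
print(round(mean, 3))  # negative: the rare wipe-out outweighs the frequent $1 wins
```

With these numbers the exact expectation is 1·(1−q^7) − 127·q^7 ≈ −0.43 for q = 20/38, which the simulated mean approaches.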

Given that set A = {1,2,{3,4}}, is 3 a member of A?

Is {3} also a subset of A? 3 itself is probably not a subset of A, right?

Sorry for asking such an elementary question, but Wikipedia isn't making this very clear, especially between elements and subsets. Could you update your articles? They are alarmingly sparse. 137.54.10.188 (talk) 20:15, 9 February 2009 (UTC)[reply]

The notation 'A={B,C,D}' means that A is the set whose elements are B, C and D. The statement 'A is a subset of B' means that every element of A is also an element of B. Does that answer your questions? Algebraist 20:22, 9 February 2009 (UTC)[reply]
Yes, but is {3} an element of the set {1,2,{3,4}}? That is, what about nested set membership?? Because I'm just trying to sort this out from the idea that {3} != 3. So is {1,2} a subset of {11,4,{1,2,3}}? Or {{11,4,{5,6,{1,2}}}? Or {{11,4,{5,6,{1,2,3}}}? And so if A = {{11,4,{5,6,{1,2,3}}}, is 3 an element in A? Please help! 137.54.10.188 (talk) 20:28, 9 February 2009 (UTC)[reply]
The elements of {11,4,{1,2,3}} are 11, 4 and {1, 2, 3}. Thus 1 and 2 are not elements of {11,4,{1,2,3}}, so {1, 2} is not a subset. Algebraist 20:30, 9 February 2009 (UTC)[reply]
I am slightly confused because I read somewhere that A can belong to B, and B can belong to C, but it's possible to have A not belong to C. But is {A} a subset of C? What's the difference between membership and subset inclusion in nested cases?? 137.54.10.188 (talk) 20:32, 9 February 2009 (UTC)[reply]
Consider the set {{A}}. It has only one element, namely {A}. In turn, {A} has only one element, namely A. Thus A is not an element of {{A}}, and so {A} is not a subset of {{A}} (since by the definition of subsethood, {A} is a subset of B exactly if A is an element of B). Thus {A} is an element of {{A}} but not a subset. Algebraist 20:38, 9 February 2009 (UTC)[reply]
But are 1 and 2 members of the set {11,4,{1,2,3}}? The thing is I'm trying to sort out BOTH set membership and subset inclusion simultaneously. 137.54.10.188 (talk) 20:34, 9 February 2009 (UTC)[reply]
As I said above, the elements (aka members) of {11,4,{1,2,3}} are 11, 4 and {1,2,3}. That's just what the notation means. Are any of these 1 or 2? Algebraist 20:38, 9 February 2009 (UTC)[reply]
However, in one way of looking at it, 3 is an element of {{0,1,2},{3,4}}. (This is sort of a pun.) --Trovatore (talk) 20:46, 9 February 2009 (UTC)[reply]
True, but horrible. It's probably best to treat numbers as urelemente for the purposes of basic set theory of this sort. Algebraist 20:48, 9 February 2009 (UTC)[reply]
Aha, so all we have to do is expand the definition of as , of which clearly is not an element… I think, but I can't really see because my eyes are watering. —Bromskloss (talk) 00:52, 14 February 2009 (UTC)[reply]
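The element/subset distinction discussed above can be made concrete with Python's frozenset, which allows sets to be elements of other sets (an illustrative aside, not part of the thread):

```python
A = {1, 2, frozenset({3, 4})}   # the set {1, 2, {3, 4}}

print(3 in A)                   # False: 3 is not an element of A
print(frozenset({3, 4}) in A)   # True: {3, 4} is an element of A
print({3} <= A)                 # False: {3} is not a subset, since 3 is not in A
print({1, 2} <= A)              # True: 1 and 2 are both elements of A
```

Here `<=` is Python's subset test, matching the definition Algebraist gives: {3} is a subset of A exactly if 3 is an element of A.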

Divisibility

Here's an interesting problem I came across: If a^2 + b^2 is divisible by 3, prove that ab is divisible by 3. This actually isn't a homework problem. I've given it some thought, but I can't think of a good way to approach this. But I tried to come up with some numbers a and b which actually satisfy the first condition, and all the numbers I found were divisible by three. And if either a or b is divisible by 3, then ab will be divisible by 3. But I can't find any way to prove that either a or b must be divisible by 3. I've tried a bunch of algebraic manipulations, but none of them have gotten me anywhere. Could you help me? —Preceding unsigned comment added by 70.52.46.213 (talk) 23:33, 9 February 2009 (UTC)[reply]

Our article on modular arithmetic gives background. Briefly, if a^2 + b^2 ≡ 0 (mod 3), then your apparent options are a ≡ b ≡ 0 (mod 3), or a^2 ≡ 1 and b^2 ≡ 2 (mod 3) (or vice versa). You can readily check that the second option is not possible, and if a ≡ b ≡ 0 (mod 3) that means 3 divides ab. Ray (talk) 23:41, 9 February 2009 (UTC)[reply]
A hint for that "readily check" (which could prove rather tricky if you've never seen anything like it before) - check which integers mod 3 are perfect squares. --Tango (talk) 01:37, 10 February 2009 (UTC)[reply]
If you are not familiar with modular arithmetic then think of it like this: a must be of the form 3n or 3n+1 or 3n+2 where n is an integer. a^2 must be of the form 3m or 3m+1 or 3m+2, but can you say which of the three based on the form of a? Similar for b and finally for a^2+b^2. PrimeHunter (talk) 02:00, 10 February 2009 (UTC)[reply]
In fact, we can go further and say that if a^2 + b^2 is divisible by 3 (with a and b integers) then ab is divisible by 9. Gandalf61 (talk) 07:06, 10 February 2009 (UTC)[reply]
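Both the original claim and the stronger statement are easy to verify exhaustively for small integers (an illustrative Python check, not part of the thread):

```python
# Squares mod 3 are only 0 or 1, so a^2 + b^2 ≡ 0 (mod 3) forces
# a ≡ b ≡ 0 (mod 3), which gives 9 | a*b (and hence 3 | a*b).
assert {a * a % 3 for a in range(3)} == {0, 1}

for a in range(1, 200):
    for b in range(1, 200):
        if (a * a + b * b) % 3 == 0:
            assert a % 3 == 0 and b % 3 == 0
            assert (a * b) % 9 == 0
print("verified up to 200")
```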


February 10

Different sized infinitesimals

Cantor's theorem proves that there are different sized infinities. By analogy, are there different sized infinitesimals as well? such that d(dx) < dx? --Yanwen (talk) 01:10, 10 February 2009 (UTC)[reply]

Firstly, you should be aware that in conventional analysis, notations such as dx do not refer to infinitesimals; in fact infinitesimals are not involved in conventional real analysis at all. There are, however, various settings other than conventional real analysis in which infinitesimals (or things like them) are allowed. In all of these that I am aware of, such as the hyperreal numbers and surreal numbers, there are infinitely many different sizes of infinitesimal. By the way, if you do (like Leibniz) interpret dx as an infinitesimal, then it makes sense to say d(dx) < dx, since d(dx) is an infinitesimal increment in dx. Algebraist 01:19, 10 February 2009 (UTC)[reply]
This makes me think: you can work with derivatives and infinitesimals quite nicely in the dual numbers, for example; as the article on dual numbers explains, if we have a polynomial f, then f(x+ε) = f(x) + f′(x)ε. This allows us to reverse the process and consider derivatives as a quotient of infinitesimals. But this then fails when trying to do second derivatives, as we just get 0 out, and working in a ring with ε³ = 0 loses many of the properties we wanted. Is there a nice way to get this working? --XediTalk 02:04, 10 February 2009 (UTC)[reply]
Why do you just get zero out? The derivative of a polynomial is a polynomial, so the same definition works just fine. --Tango (talk) 11:35, 10 February 2009 (UTC)[reply]
Yeah, sorry, I didn't really explain; the polynomial thing was just a motivational example. What I meant was that you use ε to define derivatives instead, so that f(x+ε) = f(x) + f′(x)ε. You then run into trouble if you try to do second derivatives. You have a valid point though, I guess you can consider f′ as another function and repeat the process, but you can't write things like --XediTalk 17:54, 10 February 2009 (UTC)[reply]
Is that how you usually define 2nd derivatives? I just define them as the derivative of a derivative... Using that definition, once you have a definition of a derivative you're set. --Tango (talk) 17:56, 10 February 2009 (UTC)[reply]
Of course, with the standard definition of a derivative, the two are equivalent - assuming you replace epsilon with h and some limits and those limits all behave themselves nicely. --Tango (talk) 17:58, 10 February 2009 (UTC) [reply]
If ε squares to zero, it's what's called a nilpotent element and you can't divide by it in the ordinary way. The standard way to get around that is to rewrite all rules in terms of multiplication. In other words, f′(x) = (f(x+ε) − f(x))/ε becomes f(x+ε) − f(x) = f′(x)ε, where f′ might be any function that satisfies it, if any such function exists. Repeating this gives definitions for higher-order derivatives, but doesn't give a formula for them, because the next step would be f′(x+ε) − f′(x) = f″(x)ε. Black Carrot (talk) 23:35, 10 February 2009 (UTC)[reply]
Although, come to think of it, you could use the Im() notation from complex numbers to take the infinitesimal part of this, making it f′(x) = Im(f(x+ε)−f(x)) and f″(x) = Im(Im(f(x+2ε)−f(x+ε))−Im(f(x+ε)−f(x))). Black Carrot (talk) 14:16, 12 February 2009 (UTC)[reply]
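The dual-number idea sketched in this thread is easy to try out. A minimal illustrative implementation in Python (the class and function names are made up for this sketch; only addition and multiplication are supported, which is enough for polynomials):

```python
class Dual:
    """Minimal dual numbers a + b*eps with eps**2 = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

y = f(Dual(5.0, 1.0))              # evaluate at x = 5 + eps
print(y.a)   # 86.0 = f(5)
print(y.b)   # 32.0 = f'(5): the eps-coefficient is the derivative
```

As the thread notes, a single pass like this only yields first derivatives; higher derivatives need either repeated application or a ring with higher-order nilpotents.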

Is this some kind of joke?

From [1], "There are, however, some rather counterintuitive properties of coin tossing. For example, it is twice as likely that the triple TTH (tails, tails, heads) will be encountered before THT than after it, and three times as likely that THH will precede HHT. Furthermore, it is six times as likely that HTT will be the first of HTT, TTH, and TTT to occur than either of the others (Honsberger 1979). "

and

"More amazingly still, spinning a penny instead of tossing it results in heads only about 30% of the time (Paulos 1995)."

A couple questions. First, WTF is going on? Second, is this (if it's even true) due more to a counterintuitive property of mathematics or a counterintuitive property of physics? Recury (talk) 20:33, 10 February 2009 (UTC)[reply]

The second, if true (I make no claim either way) would be due to physical asymmetries of the coin. I would believe it is not exactly 50% but 30% or thereabouts sounds dubious to me. Baccyak4H (Yak!) 20:41, 10 February 2009 (UTC)[reply]
The first set of claims can be checked by probability calculus (assuming 50%, ignoring the second claim for now). For example, consider TTH vs. THT. As we start the series, any Hs don't contribute to either pattern...yet. As soon as we get a T, then let's see what happens. If the next flip is another T (50%), then TTH will happen first...as soon as the first H is flipped. If it is an H, (50%), then a T following (50%, or 25% total) gives the second. If another H (50% or 25% total), then neither triple can now be made and the flips continue until another T appears, and the same calculus applies, except we have now conditioned on the event that we did not see either triple after the first T, which is an event with 25% probability. One can use either induction (2:1 ratio of probabilities after the first T, 2:1 ratio if not after the first but after the second, etc.) or geometric series (TTH = 50% + (25% × 50%) + (25% × 25% × 50%) + ...) to realize that TTH does appear first twice as often. Similar arguments could be used to check the other claims (which I have not done). Baccyak4H (Yak!) 20:54, 10 February 2009 (UTC)[reply]
I have verified the other claims: there is a 3/4 probability of THH being the first of (THH, HHT), and there is a 3/4 probability of HTT being the first of (HTT, TTH, TTT), with 1/8 probability for each of the other two. The reasoning I used is analogous to the above. Eric. 131.215.158.184 (talk) 21:22, 10 February 2009 (UTC)[reply]
The last one is especially easy to see. TTT and TTH can only occur first if they're the first three tosses. (Otherwise the first occurrence would follow an H or a T, but if it was an H then HTT already happened, and if it was a T then TTT already happened.) The chance of that is 1/8 each, which leaves 3/4 for HTT. -- BenRG (talk) 00:58, 11 February 2009 (UTC)[reply]
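All three pattern claims can also be confirmed by simulation. An illustrative Python sketch (not part of the original thread):

```python
import random

def first_pattern(patterns, rng):
    """Flip a fair coin until one of the given triples appears; return it."""
    window = ""
    while True:
        window = (window + rng.choice("HT"))[-3:]
        if window in patterns:
            return window

rng = random.Random(1)
n = 100_000

wins = sum(first_pattern({"TTH", "THT"}, rng) == "TTH" for _ in range(n))
print(round(wins / n, 2))   # ~0.67: TTH precedes THT twice as often as not

wins = sum(first_pattern({"HTT", "TTH", "TTT"}, rng) == "HTT" for _ in range(n))
print(round(wins / n, 2))   # ~0.75: HTT is the first of the three 3/4 of the time
```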
Thanks, I admit I don't totally have my head around all of that, but I can at least see how it would be possible. The 30% on the penny on the other hand... Recury (talk) 21:35, 10 February 2009 (UTC)[reply]

It is easy (and fun) to check for yourself what happens when you spin a coin on its edge. The ratio of heads to tails varies from coin to coin. It is seldom close to 50% and can be quite far from 50%. McKay (talk) 21:37, 10 February 2009 (UTC)[reply]

It's not easy - getting a coin to spin properly is hard, I just tried... I spun a UK 1p coin 5 times and got 4 heads (so 80%), but it took far more than 5 attempts! --Tango (talk) 21:41, 10 February 2009 (UTC)[reply]
If it were 30%, the odds of 4 or more heads would be a tiny bit over 3%, so that's pretty statistically significant evidence that 30% is not true for my coin. --Tango (talk) 22:04, 10 February 2009 (UTC)[reply]
There might also be a psychological reason for the coin-spinning example. People might tend to always hit the coin on either the face or tail to start it spinning, which may make it more likely to land a particular way. StuRat (talk) 23:00, 10 February 2009 (UTC)[reply]

There was something within the past three years or so in the Monthly about this. I'll see if I can find it.

This is NOT about physical asymmetries. Michael Hardy (talk) 21:17, 11 February 2009 (UTC)[reply]

This dude seems to think it is about mass distribution or the nature of the edge of the coin (for American pennies). [2] 68.144.30.20 (talk) 23:13, 11 February 2009 (UTC)[reply]
The claim that spinning a penny gives heads 30% of the time is about physical asymmetries, if it's true at all. Algebraist 23:43, 11 February 2009 (UTC)[reply]

Note that fact 1 has nothing to do with coins; it is just quite easy combinatorics. You can find all about statistics on the occurrences of a given substring within a random word, e.g. in Knuth's Concrete Mathematics. The second claim is most likely exaggerated, but in any case is clearly related to the physics of coin tossing. Coupling these two statements with no comment about their explanation is somewhat misleading: a person with no scientific background may think that the first claim also refers to a mysterious behaviour of coins. After checking it experimentally, this person may think that the second fact is also true. So either, yes, it is a kind of joke, or it is a case of quite bad popular-science writing (maybe the source was a joke and contained a further explanation, as in the style of the great Martin Gardner, and then the explanation was missed by the writer of the online article. Very bad!) pma (talk) 08:39, 12 February 2009 (UTC)[reply]

The asymmetry of the probabilities of heads or tails when spinning a coin arises because the decision is made by the unavoidable imbalance at the very start of the spin. Thus if you always hold the coin a particular way around to start the spin, the average result will be biased away from the ideal 50/50. By how much is an empirical result, and the alleged 30% heads is only anecdotal. Cuddlyable3 (talk) 15:25, 12 February 2009 (UTC)[reply]

February 11

Help me grok e

This is a result that mathematicians everywhere seem to take for granted:

lim_{n→∞} (1 + r/n)^n = Σ_{k=0}^∞ r^k/k! = e^r

Can someone provide me with a proof of it? --Tigerthink (talk) 01:02, 11 February 2009 (UTC)[reply]

Which equals sign are you looking for a proof of? The second one can be taken as a definition of e. There are various alternative definitions, which you can prove are equivalent - is there a particular definition you favour? The first equality is intuitively obvious, I think (just expand out brackets for the first few terms), it may require a little more work to make it rigorous (I can't do rigorous analysis at 1:20am...). --Tango (talk) 01:21, 11 February 2009 (UTC)[reply]
Take a logarithm. Find the limit of the result. Re-exponentiate. Ray (talk) 02:18, 11 February 2009 (UTC)[reply]

Assuming that you already know that e = lim_{j→∞} (1 + 1/j)^j, then

lim_{n→∞} (1 + r/n)^n = lim_{j→∞} (1 + 1/j)^{jr}, by making the substitution jr = n,

= (lim_{j→∞} (1 + 1/j)^j)^r = e^r. This should take care of both equalities. -Looking for Wisdom and Insight! (talk) 07:25, 11 February 2009 (UTC)[reply]

If you want to do it well and in an elementary way, here is the program.
Program: prove that for any real number r the sequence (1 + r/n)^n is increasing as soon as n > |r|. Prove that it is bounded. So it is convergent: define exp(r) as its limit. Prove that exp(r+s) = exp(r)exp(s) for all real numbers. Prove the equality with the exponential series. etc. --194.95.184.74 (talk) 09:39, 11 February 2009 (UTC)[reply]
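The convergence of (1 + r/n)^n to e^r is also easy to watch numerically (an illustrative Python check, not part of the thread):

```python
import math

r = 2.0
for n in [10, 1000, 100_000, 10_000_000]:
    approx = (1 + r / n) ** n
    print(n, approx, abs(approx - math.exp(r)))
# the error shrinks roughly like 1/n as n grows
```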

I don't understand this proof

This was a proof for a statement on the binomial coefficient page.

Furthermore,

C(n, k) ≡ 0 (mod n)

for all 0 < k < n if and only if n is prime.

We can prove this as follows: when p is prime, p divides

C(p, k) = p! / (k! (p−k)!)

for all 0 < k < p,

because C(p, k) is a natural number and its numerator has a prime factor p but its denominator does not have a prime factor p. So C(p, k) ≡ 0 (mod p)

Unfortunately, I don't understand how the conclusion that C(p, k) ≡ 0 (mod p) is reached. It's probably very simple and I'm just missing it, but could someone explain it please? By the way, I'm somewhat familiar with modular arithmetic, but I would prefer it was explained in another context (i.e. by divisibility). Thanks. —Preceding unsigned comment added by 65.92.237.46 (talk) 06:12, 11 February 2009 (UTC)[reply]

The numerator is p(p−1)⋯(p−k+1) with k > 0, so it is clear that this is a multiple of p. The denominator is k!, and this is not a multiple of p because k is less than p and p is prime - if the denominator were a multiple of p then we could find a non-trivial factorisation of p, which is impossible. So we are dividing a multiple of p by something that is not a multiple of p - we know the result is an integer, but it must also be a multiple of p because there is no factor of p in the denominator to cancel the factor of p in the numerator. Gandalf61 (talk) 09:24, 11 February 2009 (UTC)[reply]
In case it's the last step that's troubling you, be aware that 'a≡0 (mod n)' is just a fancy way of saying that n divides a. Algebraist 11:51, 11 February 2009 (UTC)[reply]

Many thanks. —Preceding unsigned comment added by 65.92.237.46 (talk) 11:53, 11 February 2009 (UTC)[reply]
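Gandalf61's argument is easy to spot-check; a small Python sketch (not part of the original thread):

```python
from math import comb

# For prime p, C(p, k) is divisible by p for all 0 < k < p;
# for composite n, this fails for at least one such k.
for p in [2, 3, 5, 7, 11, 13]:
    assert all(comb(p, k) % p == 0 for k in range(1, p))
for n in [4, 6, 8, 9, 10, 12]:
    assert any(comb(n, k) % n != 0 for k in range(1, n))
```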

Degree of Relatedness

Not sure if this is more math or biology, but here goes. What is the degree of relatedness between two children of an incestuous relationship between a half-sister and half-brother? I feel like it's 0.75, but I'm not sure. Any ideas? 169.229.75.128 (talk) 07:12, 11 February 2009 (UTC)[reply]

The expected value of the degree of relatedness of the offspring of two half-siblings (that is, sharing one parent) is 9/16. To get an expected value of degree of relatedness of 3/4 you need to consider the offspring of two clones.
A few things: the degree of relatedness of two offspring is a distribution (in theory, the offspring could be identical, or could have no genes in common, although these two extremes are extremely unlikely), so properly you should be talking about the expected value of their degree of relatedness.
Secondly, I find the term "degree of relatedness" a bit confusing and loaded. For some species, such as ants, where different individuals may have different numbers of genes, it is possible for the degree of relatedness of A and B to be different from the degree of relatedness of B and A. It is somewhat clumsier but more descriptive to say: "given a gene of A, what is the probability that that gene is found in B?". Thinking about the problem in that manner may also make it easier to go about solving it. Eric. 131.215.158.184 (talk) 10:13, 11 February 2009 (UTC)[reply]
How did you get 9/16, though. 169.229.75.128 (talk) 16:43, 11 February 2009 (UTC)[reply]
I believe that was explained on another Desk. I agree that it's correct. Posting on multiple Desks isn't generally allowed, as it leads to us having to repeat ourselves. StuRat (talk) 00:01, 12 February 2009 (UTC)[reply]
By the way, just in case someone finds their way back to this thread in the future, I was in fact using the wrong definition of coefficient of relationship and the answer is actually 5/8. Eric. 131.215.158.184 (talk) 07:11, 14 February 2009 (UTC)[reply]

Absolute Integrability

This issue came up defining the Fourier transform in my PDE class. Let us say that a function f is absolutely integrable if ∫|f(x)|dx over the whole real line is a finite number. My question is, if a real function f is absolutely integrable, then can't we already say (without assuming anything extra) that lim_{x→∞} f(x) = 0? I mean, how can a function have a nonzero limit as x grows without bound and still be absolutely integrable? If the limit is any nonzero number, then wouldn't the absolute integral be infinite? The same will be true as x goes to negative infinity. That limit must be zero as well. Is my reasoning correct or wrong?-Looking for Wisdom and Insight! (talk) 07:33, 11 February 2009 (UTC)[reply]

The limit could simply fail to exist. Imagine (a smoothed version of) Σ n·χ_[n, n+1/n³). It is a non-negative function with integral Σ 1/n² ≤ 2, where all sums are over the positive integers. However, its limsup on every interval [a,∞) is +∞, its liminf on every interval [a,∞) is 0, and the limit as x → ∞ does not exist. JackSchmidt (talk) 09:09, 11 February 2009 (UTC)[reply]
By the way, such functions are normally called simply integrable (or L1). Algebraist 11:49, 11 February 2009 (UTC)[reply]

Or they're called "Lebesgue-integrable".

So say you have a non-negative function with a pulse of height 1 at 1, another at 2, another at 3, and so on. But the pulses keep getting narrower, so the sum of all the areas under them is a convergent series. Then the integral from 0 to ∞ is finite but the function does not approach 0 at ∞. Michael Hardy (talk) 21:12, 11 February 2009 (UTC)[reply]

But you can always say without any extra assumption that . Unfortunately, it is possible that (Igny (talk) 05:09, 15 February 2009 (UTC))[reply]
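JackSchmidt's example can be checked numerically: the bump at n has height n and width 1/n³, hence area 1/n², so the total area converges even though the heights are unbounded. A quick sketch:

```python
# Partial sums of the areas of the bumps n·χ_[n, n+1/n³):
# each bump contributes n * (1/n³) = 1/n² to the integral.
total = sum(n * (1 / n**3) for n in range(1, 10**6))
assert total < 2   # the full sum is π²/6 ≈ 1.645
```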

GENERAL QUESTION

I'm not asking for an answer for this question so it ain't homework; I just couldn't figure out a logic to go about this sum. It's as follows: Calculate the total natural numbers that exist from 0 to 2000 which have a square such that its sum of digits is 21. Now no need to tell me any answer, but just tell me a basic logic I can use to go about doing this sum. —Preceding unsigned comment added by Vineeth h (talkcontribs) 14:32, 11 February 2009 (UTC)[reply]

It's a trick question. Algebraist 14:52, 11 February 2009 (UTC)[reply]
The only sum of digits you get are: 0, 1, 4, 7, 9, 10, 13, 16, 18, 19, 25, 27, 28, 31, 34, 36, 37, 40, 43, 45, 46, 49. -- SGBailey (talk) 15:05, 11 February 2009 (UTC)[reply]
You missed 22. Algebraist 15:15, 11 February 2009 (UTC)[reply]
Consider divisibility rules. — Emil J. 15:11, 11 February 2009 (UTC)[reply]

SGBailey -> you're right; turns out 21 isn't divisible by any number, so the final answer turns out to be 0 natural numbers. So could you please tell me how you generalised that the only sums of digits you get are the ones you mentioned? What's the logic behind that? Cause one can't possibly remember all the numbers you mentioned there just to solve a sum like this! Vineeth h (talk) 17:12, 11 February 2009 (UTC)[reply]

I must have deleted 22 by mistake when doing the previous edit. The "algorithm" I used was to do the calculation 2001 times x -> x^2 -> sum(digits(x^2)). -- SGBailey (talk) 20:11, 11 February 2009 (UTC)[reply]
No need for scare quotes - brute force is a perfectly legitimate algorithm! --Tango (talk) 20:13, 11 February 2009 (UTC)[reply]
You don't need to know the list. Just remember the divisibility criteria for 3 and 9: if a number has sum of digits 21, then it is divisible by 3, but not 9, and that's impossible for a square. — Emil J. 17:23, 11 February 2009 (UTC)[reply]
This BASIC code prints out the sums of digits that occur and how often they occur. Sorry, the sum 21 never occurs. Cuddlyable3 (talk) 15:09, 12 February 2009 (UTC)[reply]
deflng a-z
dim f(49)
for x=0 to 2000
 x2=x^2
 sd=0
 for p=6 to 0 step -1  'powers of 10
  dp=int(x2/10^p)
  sd=sd+dp
  x2=x2-dp*10^p
 next p
 f(sd)=f(sd)+1         'count this digit sum
next x
for n=0 to 49
 if f(n)>0 then print n;f(n)
next n
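The same brute-force count in Python, together with a check of EmilJ's divisibility argument (a number's digit sum is congruent to the number mod 9, and a square is ≡ 0, 1, 4 or 7 mod 9, so a digit sum of 21 ≡ 3 mod 9 is impossible):

```python
# Digit sums of the squares of 0..2000, as in the BASIC program above.
sums = {sum(int(d) for d in str(x * x)) for x in range(2001)}
assert 21 not in sums
# Squares are 0, 1, 4 or 7 mod 9, and so are their digit sums.
assert all(s % 9 in {0, 1, 4, 7} for s in sums)
```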

February 12

Matrix property

If:

What is this property called? It's something -symmetric but I can't quite remember what the prefix is.

Thanks in advance. 128.86.152.139 (talk) 01:52, 12 February 2009 (UTC)[reply]

Skew-symmetric matrix. Algebraist 02:02, 12 February 2009 (UTC)[reply]
Ah yes, genius! Thanks. 128.86.152.139 (talk) 02:11, 12 February 2009 (UTC)[reply]

Points in [0,1] whose decimal expansions contain only the digits 4 and 7

I've posted this question on the math portal talk section and was told the answer, but I tried and don't know how to prove it.

Let E be the set of all x in [0,1] whose decimal expansion contains only the digits 4 and 7. How do I show that it is closed?

If x is in [0,1] and not in E, it'll have a digit different from 4 and 7. Then I tried to find a neighborhood of x that's disjoint from E, but it's difficult as there are many cases each requiring separate treatment. Can anyone offer a proof? —Preceding unsigned comment added by IVI JAsPeR IVI (talkcontribs) 13:27, 12 February 2009 (UTC)[reply]

If it has a digit different from 4 and 7, then it will have a first such digit. You can do what you like to digits after that and always stay outside E. Does that help? --Tango (talk) 13:55, 12 February 2009 (UTC)[reply]
As always with decimal expansions, there's the annoying matter of non-uniqueness to be dealt with. Algebraist 13:59, 12 February 2009 (UTC)[reply]
You have the right idea; just show that the complement is open. There will be several cases, because you have to worry about numbers like 0.3999999... and 0.474740000... . Proofs of statements that refer to decimal digits are always difficult because of the non-uniqueness. It is much easier to prove this statement for Baire space (set theory), and that space is sufficiently similar to the real line to guide intuition. — Carl (CBM · talk) 14:03, 12 February 2009 (UTC)[reply]
Why would you have to worry about that? He says "only 4 and 7." Anyway, I would use convergent sequences. If a convergent sequence in [0,1] consists of numbers containing only 4 and 7, it converges to a number made of only 4 and 7. The set is closed. Black Carrot (talk) 14:11, 12 February 2009 (UTC)[reply]
Well, you would need to either know that fact about pointwise convergence of the decimal expansion, or prove it. And in general one cannot say that if a sequence x converges to y then any sequence of decimal expansions of x converges pointwise to any given decimal expansion of y. This is the usual headache with decimal expansions. — Carl (CBM · talk) 14:16, 12 February 2009 (UTC)[reply]
I guess it depends on whether we count trailing 0's as digits. If we do, then both recurring 9's expansions and terminating expansions contain digits other than 4 and 7, so there's no problem. If we don't, then 0.6999... and 0.7 are in different categories despite being the same number. --Tango (talk) 14:19, 12 February 2009 (UTC)[reply]
Even in the former case, you have to worry (briefly) about this in your approach: which is the first digit not 4 or 7 in 7/10? Algebraist 14:24, 12 February 2009 (UTC)[reply]
It doesn't matter. You don't need to know which is the first such digit, just that it exists. Just call it the nth digit and get on with it. --Tango (talk) 14:39, 12 February 2009 (UTC)[reply]
Hence the briefness of the worry. Algebraist 14:46, 12 February 2009 (UTC)[reply]
If a decimal expansion has n digits after the point there are n2^(n-1) possible sequences of digits comprising exclusively 2 digits. But there is no limit to increasing n. Therefore the set E is infinite. Cuddlyable3 (talk) 14:22, 12 February 2009 (UTC)[reply]
Yes. So what? Algebraist 14:24, 12 February 2009 (UTC)[reply]
(I added a descriptive title.) I think this is pretty easy with just two cases. For a nonterminating decimal (which has no alternate terminating expansion), find the first illegal digit and choose a neighborhood small enough that that digit doesn't vary. For a terminating decimal... you fill in the blank. -- BenRG (talk) 14:26, 12 February 2009 (UTC)[reply]
The non-uniqueness of decimal expansion is definitely a plague. I've thought about it a bit more and came up with this: Let x be outside of E, and let the first digit different from 4 and 7 be α, at the nth place. The next digit must be a digit from 0 to 9 (since we are using base 10). If it's 1-8, the neighborhood of radius 1/10^(n+1) centered at x has no element in common with E (because adding any amount less than 1/10^(n+1) to x will not change the digit α: even if x is 0.(... ...)α8999999... ..., we must add or subtract something with absolute value strictly less than 0.00000... ...01 - the digit 1 is at the (n+1)th place - and so it can't get to 0.(... ...)α999999... ... = 0.(... ...)(α+1)00000... ... (or if α is already 9, the α+1 will carry to the digit on its left).
If the digit after α is 0 or 9, we still use the neighborhood of radius 1/10^(n+1) centered at x, but it gets a bit more difficult to demonstrate (and I'm not entirely sure it's correct due to the decimal non-uniqueness); it's easier to do it with a diagram, but frankly I really don't know how to use Wikipedia. If anyone can point out any mistakes I made (which I'm pretty sure I did), please correct them. And also, with regards to Baire space: isn't it uncountable? Because the countable cartesian product of the natural numbers simply means the set of all functions from N to N, so why do they use ω^ω? Isn't that ordinal countable? (From my intuition it's somewhat like the union of all finite cartesian products of natural numbers - it's defined to be the supremum of ω^n, n running over the naturals, and ω^n is somewhat like N^n. Homeomorphism is probably the word, but that's just from casual reading.) Sorry I'm not quite at your level yet.--IVI JAsPeR IVI (talk) 12:44, 14 February 2009 (UTC)[reply]
I think there is some confusing notation going on. ω is used both to represent the set of natural numbers and the first infinite ordinal (they are, after all, the same thing), but how you manipulate the symbol depends on which meaning you are giving it. If it's the set of natural numbers then ω^ω means the set of all functions from the natural numbers to themselves, which is uncountable. If it's an ordinal, then ω^ω means the limit of ω^n as n goes to infinity, which is a countable ordinal. I think the answer is to avoid using ω to refer to the set of natural numbers and just use it for the ordinal (I think that's the most common notation - I've only seen ω used for the natural numbers in rather old books). --Tango (talk) 13:06, 14 February 2009 (UTC)[reply]
The use of ω for the naturals is still common in logic and, I believe, in set theory. One slight advantage of this notation is that ω quite definitely includes 0, while with ℕ it's anyone's guess. Algebraist 14:27, 14 February 2009 (UTC)[reply]
Another benefit of using ω to refer to the set of finite ordinals is that it's very clear exactly which set is intended. This usage is extremely common in practice. The corresponding solution in practice to the issue Jasper and Tango mentioned is that you need to say so explicitly if you are using ordinal exponentiation (which is used somewhat rarely in practice). This leads to the following conventions:
  • ω^ω is the set of infinite sequences of natural numbers.
  • ω^{<ω} is the set of finite sequences of natural numbers.
  • [ω]^ω is the set of infinite sets of natural numbers.
  • [ω]^{<ω} is the set of finite sets of natural numbers.
— Carl (CBM · talk) 14:37, 14 February 2009 (UTC)[reply]
I tend to use and to avoid the ambiguity. --Tango (talk) 15:42, 14 February 2009 (UTC)[reply]

Thanks. I thought the symbol ω (and anything containing ω that "looks" like elementary operations) was used exclusively for ordinals and arithmetic/exponentiation on ordinals.--IVI JAsPeR IVI (talk) 08:56, 15 February 2009 (UTC)[reply]

Book recommendations

Could someone recommend me easy to understand and interesting to read book(s) covering the following topics: Rodrigues' rotation formula, Clifford algebra, Rotation groups, Lie groups, Exponential map, etc. Thanks a ton! deeptrivia (talk) 18:39, 12 February 2009 (UTC)[reply]

Wrt Clifford algebras, search the net for stuff from John Baez; his weekly column has two specials on it, and book recommendations. I would expect to find similar recommendations for the other things in similar places. --Ayacop (talk) 19:17, 12 February 2009 (UTC)[reply]

Simple Math

This is a really easy question compared to most of the ones here so I'm sure someone can help me.

How do I solve for 'm' in the following equation:

900 = 1500(0.95)^m

I will admit right now that this is from my homework but I have tried really hard but just can't get it. Thanks in advance.

This page isn't for homework problems. I suggest you reread the chapter in your textbook from which the problem originated, paying particular attention to the worked examples. Ray (talk) 23:35, 12 February 2009 (UTC)[reply]
Are you familiar with logarithms? If yes, this is easy. If no, you should read your textbook's section on them. Algebraist 23:36, 12 February 2009 (UTC)[reply]
And, if you haven't had logarithms in school yet, and don't care to learn them, either, this could also be solved by a trial-and-error approach. I'll get you started:
900/1500 = (1500/1500)(0.95)^m
0.6 = (1)(0.95)^m

0.6 = (0.95)^m
Now try a range of values for m:
m  (0.95)^m
-- ---------
 1  0.95
10  0.598737
Since 0.6 is between 0.95 and 0.598737, but much closer to 0.598737, we should try a value for m between 1 and 10, but much closer to 10. I'll try 9.9:
m      (0.95)^m
--     ---------
 1     0.95
 9.9   0.601816
10     0.598737
Since 0.6 is approximately halfway between 0.601816 and 0.598737, next try an m approximately halfway between 9.9 and 10. Continue with this process until you have the desired number of significant digits for m. StuRat (talk) 01:23, 13 February 2009 (UTC)[reply]
The trial-and-error approach that StuRat showed is also called "successive approximation" and at this link you can see it used in an electronic circuit. You have yet another method of solving your equation if you can still find a slide rule that has LL scales (and the ancient knowledge of how to use them). Cuddlyable3 (talk) 19:59, 13 February 2009 (UTC)[reply]
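With logarithms the answer drops out directly; a minimal sketch:

```python
import math

# 900 = 1500 * 0.95**m  =>  0.6 = 0.95**m  =>  m = log(0.6) / log(0.95)
m = math.log(0.6) / math.log(0.95)   # m ≈ 9.959
assert abs(1500 * 0.95**m - 900) < 1e-9
```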

February 13

Finding x in logs

log_{2} x + log_{2} (x+5) = log_{2} 9

I use the law of logs and multiply x into x+5, then I raise 2 to both sides and end up with x^2+5x=9 but I can't factor that and get nice numbers, and I know I'm not supposed to use the quadratic formula. What am I doing wrong here? 98.221.85.188 (talk) 03:40, 13 February 2009 (UTC)[reply]

Complete the square from first principles like an honest man? Algebraist 03:46, 13 February 2009 (UTC)[reply]
5 doesn't divide evenly... 98.221.85.188 (talk) 04:19, 13 February 2009 (UTC)[reply]
Your problem does not have a nice round answer. So either you have to accept that or you wrote down the wrong equation above. Dragons flight (talk) 04:24, 13 February 2009 (UTC)[reply]
Well it's a webwork problem meaning we have to type in the answer on the web, and then it tells us if we got it right or wrong. So it's supposed to have an exact answer, but I keep getting an approximation of 1.4. The problem looks like it's written correctly. I don't know where the problem is. 98.221.85.188 (talk) 04:27, 13 February 2009 (UTC)[reply]
Nvm, I put sqrt15.25-2.5 and it says I was correct 98.221.85.188 (talk) 04:29, 13 February 2009 (UTC)[reply]
You missed something: You need to reject the extraneous root. You can't take the logarithm of a negative number. Michael Hardy (talk) 22:52, 13 February 2009 (UTC)[reply]

To complete the square, you divide 5 by 2 and then square, and add that amount to both sides:

Then:

so that

etc. Now the complication: One of the solutions is negative. You need to reject that one since there is no base-2 logarithm of a negative number. Michael Hardy (talk) 22:51, 13 February 2009 (UTC)[reply]
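A quick numerical check that the positive root from completing the square, x = √15.25 − 2.5, satisfies the original log equation, and that the other root must be rejected:

```python
import math

# Roots of x**2 + 5x - 9 = 0 by completing the square: (x + 2.5)**2 = 15.25
x_pos = math.sqrt(15.25) - 2.5    # ≈ 1.4051, the admissible root
x_neg = -math.sqrt(15.25) - 2.5   # negative: its base-2 log is undefined, reject
assert abs(x_pos * (x_pos + 5) - 9) < 1e-9
assert abs(math.log2(x_pos) + math.log2(x_pos + 5) - math.log2(9)) < 1e-9
```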

Finding distance given initial speed and friction

I'm making a simple game that involves throwing balls around a complex level. The ground has some friction on the balls that slows them down. I'm not going for a perfect simulation, so as it is I'm just multiplying their velocity by a constant f slightly less than 1. So, given an initial position and speed, and a certain friction, I'm trying to predict where the ball will stop so I can plug in some AI code. I'm guessing it'll involve some calculus, but it's been a while... What I have looks like:

But I'm kinda lost there. If anyone could point out the way on how to solve this thing, I'd be very grateful. Thanks! -- JSF —Preceding unsigned comment added by 189.112.59.185 (talk) 13:34, 13 February 2009 (UTC)[reply]

I don't think you want , you probably want . That gives you:
(Note, since f<1, log(f) is negative, so that minus sign does make sense.) That's if it's done continuously, if you are actually simulating it using discrete time steps, you'll get a slightly different answer (but not too far off if the time steps are small enough). --Tango (talk) 14:31, 13 February 2009 (UTC)[reply]
Watch out for the calculus because it tells you that with constant friction the ball never stops completely. Possibly you want a procedure like this pseudocode:
Enter P0 = start position (distance units e.g. inches)
      V0 = start velocity (distance per time step)
      F  = friction (velocity multiplier per time step)
 p=P0
 v=V0
NEX:
 p = p + v*(1+F)/2
 v = v * F
 if v > 0.01 goto NEX
REM The ball stops at position p.  

From these start values

P0, V0, F = 1, .1, .8

the ball rolled for 11 time steps (loops to NEX) and stopped at position p = 1.41. Cuddlyable3 (talk) 20:52, 13 February 2009 (UTC)[reply]
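The pseudocode translates directly to Python, reproducing the figures quoted above (11 time steps, stopping near p = 1.41):

```python
# Direct translation of the pseudocode: trapezoidal position update,
# multiplicative friction, stop once the speed drops below a threshold.
p, v, f = 1.0, 0.1, 0.8   # start position, start velocity, friction
steps = 0
while v > 0.01:
    p += v * (1 + f) / 2  # mean of the speed before and after this step
    v *= f
    steps += 1
assert steps == 11
assert round(p, 2) == 1.41
```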

College Math Problem Plea ....

A steel block of weight W rests on a horizontal surface in an inaccessible part of a machine. The coefficient of friction between the block and the surface is "u". To extract the block, a magnetic rod is inserted into the machine and this rod is used to pull the block at a constant speed along the surface with a force F. The magnetic force of attraction between the rod and the block is M. Explain why

(a) M > u × W   (b) F = u × W


Math problem was under the chapter : "NEWTON'S THIRD LAW" in Mechanics M1 Book of Mathematics Course PLEASE DO HELP!!!!


—Preceding unsigned comment added by 202.72.235.204 (talk) 21:33, 13 February 2009 (UTC)[reply]

We are not going to do your homework for you. J.delanoygabsadds 21:37, 13 February 2009 (UTC)[reply]


Actually I was unable to do this one, so I thought what better place for help than Wiki and of course you guys!!!! —Preceding unsigned comment added by 202.72.235.204 (talk) 21:49, 13 February 2009 (UTC)[reply]

What bit are you stuck on? If you show us your working so far, we'll try and help you with the next bit, but we're not going to do the whole question for you. If you don't even know where to start, you should go and talk to your teacher. --Tango (talk) 21:50, 13 February 2009 (UTC)[reply]


Okay, WORKING:

The question's first part (a) suggests that the magnetic attraction M should be greater than the frictional force uW experienced by the steel block. When this happens, in fact, the block will start to accelerate, as the net force on the block exceeds 0 (M > friction). But the block travels with steady speed, as stated in the question. Yet M is taken to be constant, so it seems the block should always accelerate.

Now, whats the deal with M and F. How does pulling the rod with a force of F change anything?

When the magnetic rod is in contact with the block there is a reactive force between them that partially cancels out M. If you weren't pulling the rod, it would completely cancel it out; by pulling the rod just the right amount you leave just enough resultant force on the block to counteract the friction, allowing for constant velocity. Try drawing a diagram showing the block, the surface and the rod with all the forces (I count 7 forces in total). --Tango (talk) 22:43, 13 February 2009 (UTC)[reply]

CLARIFICATION:

Okay, I didn't consider the steel block to be in contact with the magnet. There are few things for clarification though:

Lets take the steel block into consideration: Taking the steel block travels to the left

Forces to the right: The magnetic force M and the pull from the magnetic rod F / uW

Forces to the left (all horizontally): Reaction contact force R = M and friction uW

Resultant force horizontally = 0

My question is: it is logical that the contact reaction force on the block from the magnet would decrease incrementally as the pull increases, but mathematically speaking, can we take the reaction force as constant and work out an arithmetic summation to find the net force, taking it for granted that there is a constant pull from the magnet on the block?


WHAT REALLY HAPPENS: DOES THE CONTACT FORCE DECREASE AS THE PULL INCREASES, OR DOES IT REMAIN CONSTANT??? —Preceding unsigned comment added by 202.72.235.208 (talk) 16:54, 14 February 2009 (UTC)[reply]

The pulling force from the rod isn't a separate force, it's just the difference between the magnetic attraction and the contact force. I suggest you include the rod in your diagram and consider the forces on it too (F is a force on the rod). --Tango (talk) 17:36, 14 February 2009 (UTC)[reply]
You're overthinking this Q. Yes, in reality any force which gets the block moving would also cause it to accelerate, but just ignore that. When they said it moves at a constant speed, what they really meant was "it will move at a slight acceleration, which is minimal enough that you need not consider it in your calculations". Similarly, there isn't a single coefficient of friction, but rather are two, a higher static one and a lower dynamic one. So, you would need to pull with a greater force to "break the block loose", then decrease the pulling force to prevent acceleration. StuRat (talk) 16:35, 15 February 2009 (UTC)[reply]
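For anyone finding this thread later, the force balance Tango describes can be sketched as follows (horizontal components only, writing μ for the coefficient of friction; an illustration, not necessarily the textbook's intended wording):

```latex
% On the block, at constant speed: magnetic pull M forward,
% contact reaction R and friction \mu W backward:
\[ M - R - \mu W = 0 \qquad\Longrightarrow\qquad R = M - \mu W. \]
% A contact force can only push the bodies apart, so R > 0, giving (a):
\[ M > \mu W. \]
% On the rod: applied pull F and contact reaction R forward,
% magnetic attraction toward the block M backward (Newton's third law):
\[ F + R - M = 0 \qquad\Longrightarrow\qquad F = M - R = \mu W, \]
% which is (b).
```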

February 14

Mathematical fraction

The term "a third of a mil" in reference to a dollar amount is used in a New Jersey state statute. Please advise what that fraction is and what the decimal number is that should be used when multiplying another larger number. For instance what would I multiply $1,000,000 by to find a "third of a mil" of that amount?

Thank you,

Frank J. Mcmahon Mahwah, NJ [email removed] —Preceding unsigned comment added by 69.127.4.198 (talk) 02:07, 14 February 2009 (UTC)[reply]

Try a lawyer? I'd assume offhand "a third of a mil" means dollars. If there's some special legal meaning of the phrase, then it is something a lawyer would know, not a mathematician. Maybe it would help if we could see the full sentence that "a third of a mil" first appears in. By the way, we never email responses, so I removed yours to lessen visibility to spam-bots.... Eric. 131.215.158.184 (talk) 07:17, 14 February 2009 (UTC)[reply]
In some contexts I believe that, just like "1/3 per cent" means X / 300, this could mean "1/3 per thousand" or X / 3000. -- SGBailey (talk) 09:48, 14 February 2009 (UTC)[reply]
I would think "a third of a mil" is just short for "a third of a million" or $333,333.33. As SGBailey says, it could mean "a third per mil", or 1/3000 times. Either way, it's not a very common way of saying it, but then lawyers like to make things as confusing as possible - it keeps them in work. --Tango (talk) 12:53, 14 February 2009 (UTC)[reply]
I would assume that it refers to mill (currency). "A third of a mil" is 1/3000 of a dollar. -- BenRG (talk) 13:10, 14 February 2009 (UTC)[reply]
In light of [3], about library funding in New Jersey, it looks like SGBailey got it right; it's a third per mil. Of course, that also means it's a third of a mill per dollar (in this case, per dollar of assessed property value). So to answer the original question, for a property assessed at $1,000,000, a "third of a mil" would be around $333. —JAOTC 14:54, 14 February 2009 (UTC)[reply]
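Under JAO's reading (a third of a mill per dollar of assessed value), the arithmetic is just:

```python
# A mill is 1/1000 of a dollar, so "a third of a mil" as a tax rate
# is (1/3)/1000 of the assessed value.
rate = (1 / 3) / 1000
assert round(1_000_000 * rate) == 333   # about $333 on a $1,000,000 assessment
```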

Numbers

I have this series of numbers and I know they are related, but I don't know what they are called.

0,1,3,6,10,15, 21,28...

0,+1,+2,+3,+4,+5...

Thanks --68.231.197.20 (talk) 06:47, 14 February 2009 (UTC)[reply]

You should look at the triangular numbers. Eric. 131.215.158.184 (talk) 07:07, 14 February 2009 (UTC)[reply]
The OEIS is a good place to answer these questions. Algebraist 14:23, 14 February 2009 (UTC)[reply]

They're the perfect squares: 0*0=0, 1*1=1, 2*2=4, 3*3=9, etc.; that is, +1+3+5 etc. Do you notice a relationship with your series? How would you express that as an equation - or maybe two? Note: this may be BS on my part —Preceding unsigned comment added by 82.120.236.246 (talk) 21:59, 14 February 2009 (UTC)[reply]

What are the perfect squares? None of the numbers in the list (other than 0 and 1) are squares... what are you talking about? --Tango (talk) 00:02, 15 February 2009 (UTC)[reply]
Perhaps the sum of any two consecutive terms;) hydnjo talk 13:56, 15 February 2009 (UTC)[reply]
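Both observations check out in a couple of lines: the sequence is the triangular numbers T(n) = n(n+1)/2, and hydnjo's remark holds because T(n) + T(n+1) = (n+1)²:

```python
# Triangular numbers: consecutive differences are 1, 2, 3, ... and the
# sum of any two consecutive terms is a perfect square.
t = [n * (n + 1) // 2 for n in range(8)]
assert t == [0, 1, 3, 6, 10, 15, 21, 28]
assert all(t[i] + t[i + 1] == (i + 1) ** 2 for i in range(7))
```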

Dice game problem

I was thinking about a problem this morning and got a bit stuck on figuring out how to calculate the answer. It seemed like the sort of problem that has been calculated before, but I can't seem to find anything on it. Here's how it works:

You are playing a dice game with three six-sided dice. You roll the dice and set aside any that come up 1. You then reroll any dice that aren't one and continue the process of rerolling and setting aside 1's until all three dice have come up 1. The question is how many times on average do you need to roll the dice before they all individually have come up 1?

(Note: The original problem I'm actually trying to solve is very similar, except that instead of the probability of individual success being 1/6 the probability of individual success is 1/11.)

My friend and I worked out a brute force way to approximate it by computer (I think, I haven't typed it up yet), but I'm wondering if there's a more elegant or exact solution. Any suggestions? Thanks for the help. 71.60.89.143 (talk) 20:54, 14 February 2009 (UTC)[reply]

Just a quick follow-up - I used Excel to calculate that the expected number of rolls would be approximately 4.878 in order to get at least one success on each of the three dice. For my original problem where the chance of success is 1/11, the expected number of rolls is 8.623. Please feel free to confirm if you like, and if you have a nifty way of solving the problem I'd be interested in reading it. 71.60.89.143 (talk) 22:56, 14 February 2009 (UTC)[reply]

Your answers are certainly wrong. The expected number of rolls of one die until you get a 1 is 6 (or 11), and getting the right number on all three is clearly harder. Algebraist 23:03, 14 February 2009 (UTC)[reply]
For three dice, probability p of success each time, I get , giving 10566/1001 for probability 1/6 and 45727/2317 for probability 1/11. Unfortunately, all I have for the general case (n dice, probability p) is a messy infinite sum, and I don't have time right now to do this properly. Algebraist 23:10, 14 February 2009 (UTC)[reply]
Yeah, after I typed the above I realized I made a mistake in my formula in Excel. I'm still not getting the same answer you got above, though. I get 7.38 for the expected value of p=1/6. Hmmm.... 71.60.89.143 (talk) 23:26, 14 February 2009 (UTC)[reply]
To clarify what I'm doing in Excel, I let P(k,n) be the probability that at least k of the three dice have succeeded (k=0 to 3) after n rolls. So P(2,5) would be the chance that at least two of the three dice hit a success after five rolls. Given that definition for P(k,n), I get the following formula for P(3,n) for n>1.
P(3,n) = P(3,n-1) + (P(2,n-1) - P(3,n-1))*(P(1,1) - P(2,1)) + (P(1,n-1) - P(2,n-1))*(P(2,1) - P(3,1)) + (1 - P(1,n-1))*P(3,1)


Above, the expression P(2,n-1) - P(3,n-1) is the probability that exactly two of the dice succeeded after n-1 rolls. The other expressions are similar. 71.60.89.143 (talk) 23:46, 14 February 2009 (UTC)[reply]
Alright, I found the error in my Excel sheet and the formula above.  :) The second term in the last few products is a probability involving rolling all three dice, but it should actually be only partial rolls. I fixed the error and, lo and behold, it matches your answer Algebraist. Good work! 71.60.89.143 (talk) 01:48, 15 February 2009 (UTC)[reply]

Just my take. Let X be the number of throws before you get 1 on an n-faced die. Its cdf is

If you try to throw k dice, it is equivalent to look at the maximum of k iid random variables, which has cdf

Thus the expected value of number of throws should be

which I am pretty sure is possible to calculate explicitly. (Igny (talk) 02:49, 15 February 2009 (UTC))[reply]

Yeah, that's the infinite sum I alluded to above. Algebraist 03:16, 15 February 2009 (UTC)[reply]
So if I did not screw up, for 3 dice with 6 faces the average number of throws is 10.56, and for 11 faces it is 19.74. (Igny (talk) 21:17, 15 February 2009 (UTC))[reply]
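Igny's infinite sum can be evaluated in closed form, confirming Algebraist's fractions: with q = 1 − p, the probability that all three dice are done within n rolls is (1 − qⁿ)³, so E = Σ_{n≥0} [1 − (1 − qⁿ)³] = 3/(1−q) − 3/(1−q²) + 1/(1−q³). A quick check with exact arithmetic:

```python
from fractions import Fraction

def expected_rolls(p):
    """Expected rolls until three dice (success probability p each) all succeed."""
    q = 1 - Fraction(p)
    # Sum of 3q**n - 3q**(2n) + q**(3n) over n >= 0: three geometric series.
    return 3 / (1 - q) - 3 / (1 - q**2) + 1 / (1 - q**3)

assert expected_rolls(Fraction(1, 6)) == Fraction(10566, 1001)    # ≈ 10.56
assert expected_rolls(Fraction(1, 11)) == Fraction(45727, 2317)   # ≈ 19.74
```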

is it possible to fuck up when combining two random number generators?

If you're combining two random number generators that you think are pretty random, but who knows, maybe every so often they aren't random enough, is there any way to do that which looks okay and basically has the correct distribution, but in fact is now far less random? Thanks.

P.s. this isn't malicious! I wouldn't be asking it this way, and on this forum, if it were —Preceding unsigned comment added by 82.120.236.246 (talk) 22:23, 14 February 2009 (UTC)[reply]
This isn't exactly an answer to your question, but: for the most part, any attempt to combine two good PRNGs, or to modify the output of a PRNG to make it "more random", will actually reduce its quality. That's why there are so few of them that are considered cryptographically strong. By the way, if you do get a straight answer to your question, be sure to remember it in case you ever want to participate in the Underhanded C Contest. « Aaron Rotenberg « Talk « 23:16, 14 February 2009 (UTC)[reply]
Yes, it is possible to fuck up. By not defining what distribution you really need. Cuddlyable3 (talk) 23:39, 14 February 2009 (UTC)[reply]
If you can reduce the randomness of a PRNG by combining its output in some simple way with another PRNG, then it certainly wasn't cryptographically strong in the first place. However it is usually possible to improve the randomness of even weak PRNGs by combining them. I flatly disagree with Aaron when he claims that it "usually" reduces the quality. It can happen, if there is some unsuspected connection between the two PRNGs (or, of course, if you make some silly mistake in how you "combine" them), but it usually helps rather than hurts.
The simplest way to combine two PRNGs (say, normalized to return a value between 0 and 1) is simply to add their outputs modulo 1. If you do this to two PRNGs of relatively prime period, the period of the new PRNG is the product of the original periods. (Period by itself is not a good measure of randomness, but short period is always a problem.)
An even better way is the MacLaren–Marsaglia method, in which you cache values from one of the PRNGs, and use the other one to select a value from the stream. --Trovatore (talk) 23:52, 14 February 2009 (UTC)[reply]
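For concreteness, here is a rough Python sketch of both combining methods (the LCG parameters and the table size of 97 are just illustrative; this is not a vetted implementation):

```python
# Two toy linear congruential generators, normalized to [0, 1).
def lcg(seed, a, c, m):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

# Method 1: add the two outputs modulo 1.
def combined_add(g1, g2):
    while True:
        yield (next(g1) + next(g2)) % 1.0

# Method 2 (MacLaren-Marsaglia): cache values from g1 and let g2 choose
# which cached value to emit, refilling the used slot from g1.
def maclaren_marsaglia(g1, g2, table_size=97):
    table = [next(g1) for _ in range(table_size)]
    while True:
        j = int(next(g2) * table_size)
        out, table[j] = table[j], next(g1)
        yield out

g = combined_add(lcg(1, 1103515245, 12345, 2 ** 31),
                 lcg(7, 69069, 1, 2 ** 32))
print([round(next(g), 3) for _ in range(5)])
```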
"if you can reduce the randomness..by combining its output in some simple way ..then it certainly wasn't ..strong" WTF!! How's this for starters:
perl -we "for(1..10000000){if (int rand 2 + int rand 2){$one++}else{$zero++} }; print qq/got $one ones and $zero zeros\n/"
I suppose the result "got 5831006 ones and 4168994 zeros" means that I just proved Perl's random number generator is way, way insecure!!! —Preceding unsigned comment added by 82.120.236.246 (talk) 00:58, 15 February 2009 (UTC)[reply]
I don't actually speak Pathologically Eclectic Rubbish Lister so I'm not quite sure what you're doing here. It looks like you're demanding that two random values chosen from 0 to 2 both be less than 1 in order to increment $zero, in which case I'd expect only 25% zeroes from a good RNG. But I certainly wouldn't be surprised if Perl had a bad RNG — in fact, that seems more likely than not.
In any case you seem to have ignored both of my stipulations — that the two PRNGs be unrelated, and that you not make some silly mistake when combining them (like choosing random bits from a 25-75 proposition). --Trovatore (talk) 03:28, 15 February 2009 (UTC)[reply]
int rand 2 + int rand 2 in Perl means int(rand(2 + int(rand(2)))), which should be 0 with probability 5/12 ≈ 0.4167, so Perl's RNG passed this test—though I suspect it wasn't the intended one. -- BenRG (talk) 04:21, 15 February 2009 (UTC)[reply]
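To see BenRG's 5/12 without Perl, here is a Python sketch simulating the expression as Perl parses it (trial count and seed are arbitrary):

```python
import random

# Perl parses "int rand 2 + int rand 2" as int(rand(2 + int(rand(2)))):
# the inner roll picks the range (2 or 3 with probability 1/2 each), so
# P(result == 0) = 1/2 * 1/2 + 1/2 * 1/3 = 5/12.
rng = random.Random(0)
trials = 100_000
zeros = sum(int(rng.uniform(0, 2 + int(rng.uniform(0, 2)))) == 0
            for _ in range(trials))
print(zeros / trials)  # close to 5/12, i.e. about 0.4167
```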

February 15

Formula for partial derivative of A with respect to B given that f(A,B) is constant, applied to vectors

Suppose that we have a vector-valued function of the type f : R^n × R^m → R^k, (A, B) ↦ f(A, B).

I am trying to find a formula for the value of

∂A/∂B, holding f(A, B) constant.

This is equivalent to the Jacobian matrix of a function g such that if f(A, B) = c, then A = g(B).

There is a technique for finding this kind of partial derivative of scalar functions, and I tried to generalize it to vector-valued functions:

Holding f constant, we get:

(∂f/∂A) dA + (∂f/∂B) dB = 0

Multiplying both sides by (∂f/∂A)^−1 and (dB)^−1:

dA/dB = −(∂f/∂A)^−1 (∂f/∂B)

I am not sure, but I believe that dA/dB is equivalent to ∂A/∂B with f held constant, which, if it is correct, gives me an answer to my original question. Is this valid, or did I make a mistake somewhere?

On a side note, I'm new to both Wikipedia's formatting and LaTeX; if you have any comments on my formatting, I'd like to hear them.

24.130.128.99 (talk) 02:14, 15 February 2009 (UTC)[reply]

I believe you are right, see implicit function theorem. Upd: k must be equal to n for the inverse matrix to make sense. (Igny (talk) 04:58, 15 February 2009 (UTC))[reply]
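If a numerical sanity check helps: on a linear example f(A, B) = MA + NB (with made-up 2×2 matrices M and N), the formula dA/dB = −(∂f/∂A)^(−1)(∂f/∂B) reduces to −M^(−1)N, which is exactly the Jacobian of g(B) = M^(−1)(c − NB). A dependency-free Python sketch:

```python
# 2x2 matrix helpers, kept dependency-free for the sketch.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det, X[0][0] / det]]

# f(A, B) = M A + N B held constant gives A = g(B) = M^(-1)(c - N B),
# whose Jacobian is exactly -M^(-1) N.
M = [[2.0, 1.0], [0.0, 3.0]]   # made-up df/dA
N = [[1.0, 4.0], [5.0, 6.0]]   # made-up df/dB
jacobian = [[-v for v in row] for row in matmul(inv2(M), N)]
print(jacobian)
```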

The Problem I am stuck with

Question


A car of mass 1200 kg, towing a caravan of mass 800 kg, is travelling along a motorway at a constant speed of 20 m/s. There are air resistance forces on the car and the caravan, of magnitude 100 N and 400 N respectively. Calculate the magnitude of the force on the caravan from the towbar, and the driving force on the car.

The car brakes suddenly, and begins to decelerate at a rate of 1.5 m/s2. Calculate the force on the car from the towbar. What effect will the driver notice?


I did get the first part, which is quite easy. Given that the objects travel at constant speed, the pull on them must equal the resistive forces to achieve equilibrium (constant speed).

Thus: Magnitude of the force on the caravan from towbar = 400N

     Driving force = 500N

MY PROBLEM

I am totally lost at the second part of the question:


The car brakes suddenly, and begins to decelerate at a rate of 1.5 m/s2. Calculate the force on the car from the towbar. What effect will the driver notice?


The book says the answer to this part is: 800N forwards; it will appear that the car is being pushed from behind.

But it doesn't say anything about why this is so. My question is: why is this so? —Preceding unsigned comment added by 202.72.235.208 (talk) 08:54, 15 February 2009 (UTC)[reply]

If the 800 kg caravan is decelerating at 1.5 m/s2, what net force must be acting on it ? 400N of this force comes from air resistance - where does the rest come from ? The car exerts a force on the caravan through the towbar - what does Newton's third law then tell you about the force exerted by the caravan on the car ? Gandalf61 (talk) 09:24, 15 February 2009 (UTC)[reply]
First, a translation for US readers: "caravan" = "trailer". Next, there is an apparent assumption that only the car is braking. Next, we need a diagram:
         400N -> +------+
           __    |      |
         _/  \_  | 800kg|
100N -> |1200kg|-|      |
        +-O--O-+ +-O--O-+ 
Now calculate the total deceleration force needed on the trailer:
F = ma = (800kg)1.5m/s2 = 1200kg•m/s2 = 1200N
                
Now, if there's a 1200N deceleration force on the trailer, and 400N of that is initially provided by wind resistance, the additional 800N must be provided by the tow bar. Note that either the rate of deceleration will decrease, or the braking force and force transmitted by the tow bar must increase, as the speed (and therefore wind resistance) decreases. StuRat (talk) 16:17, 15 February 2009 (UTC)[reply]
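Gandalf61's and StuRat's accounting can be summarized in a few lines of Python (forward taken as positive; all numbers from the problem statement):

```python
# Forces from the problem statement; forward is taken as positive.
m_caravan = 800.0                         # kg
drag_car, drag_caravan = -100.0, -400.0   # N (air resistance acts backwards)
a = -1.5                                  # m/s^2 while braking

# Constant speed: the towbar pull on the caravan balances its drag,
# and the driving force balances the total drag on car plus caravan.
towbar_steady = -drag_caravan               # 400 N forwards on the caravan
driving_force = -(drag_car + drag_caravan)  # 500 N
print(towbar_steady, driving_force)         # 400.0 500.0

# Braking: the net force on the caravan is m*a = -1200 N; drag supplies
# -400 N, so the towbar supplies the remaining -800 N (backwards on the
# caravan). By Newton's third law the caravan pushes the car 800 N forwards.
towbar_on_caravan = m_caravan * a - drag_caravan
print(towbar_on_caravan)                    # -800.0
```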
...and the driver may notice that he must apply increasing force on the brake pedal to keep the deceleration constant, and that the wind, engine and tire noises decrease. Cuddlyable3 (talk) 21:01, 15 February 2009 (UTC)[reply]
StuRat, that's a simply phenomenal piece of AsciiArt. Kudos! --DaHorsesMouth (talk) 21:28, 15 February 2009 (UTC)[reply]

Collatz-like sequence

Does anyone know of any results concerning the Collatz-like sequences generated by

n → n/2 if n is even,  n → 3n − 1 if n is odd?
I have found papers on various generalisations of the Collatz conjecture, but I haven't found any results on this specific case.

As far as I can tell, there are three loops:

(1, 2), (5, 14, 7, 20, 10), and (17, 50, 25, 74, 37, 110, 55, 164, 82, 41, 122, 61, 182, 91, 272, 136, 68, 34),

and every sequence I have tested eventually enters one of these loops. Having found three loops, I was surprised not to find more - why just three ? Gandalf61 (talk) 09:14, 15 February 2009 (UTC)[reply]
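For what it's worth, a brute-force search over the map n → n/2 (n even), n → 3n − 1 (n odd) can be sketched in Python (the cutoff of 10000 is arbitrary):

```python
# Brute-force search for loops of n -> n/2 (n even), n -> 3n - 1 (n odd).
# A sketch: each loop is identified by its smallest element.
def step(n):
    return n // 2 if n % 2 == 0 else 3 * n - 1

def loop_id(n):
    seen = []
    while n not in seen:
        seen.append(n)
        n = step(n)
    return min(seen[seen.index(n):])  # smallest member of the cycle entered

loops = {loop_id(n) for n in range(1, 10000)}
print(sorted(loops))  # [1, 5, 17]
```

Every start below 10000 falls into the loop containing 1, 5 or 17, matching the three loops listed above.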

Never mind - I have just realised that these are essentially the same as Collatz sequences if we replace n with −n. Gandalf61 (talk) 11:37, 15 February 2009 (UTC)[reply]
Resolved

StuRat (talk) 15:31, 15 February 2009 (UTC)[reply]

Name the curve

What is the name of the curve (of four cusps) described by the envelope of a moving straight line of length A, wherein the end points of the line move along their respective X and Y axes?

The equation given for this curve is: x^(2/3) + y^(2/3) = A^(2/3). The curve is similar to the hypocycloid of four cusps (the astroid); however, the line generation appears to be different.

Vaughnadams (talk) 19:26, 15 February 2009 (UTC)[reply]

According to our article, that is an astroid. Algebraist 19:44, 15 February 2009 (UTC)[reply]
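A quick numerical check, using the standard parametrization of the astroid's point of tangency, (A cos³t, A sin³t), for the segment from (A cos t, 0) to (0, A sin t), confirms the x^(2/3) + y^(2/3) = A^(2/3) equation:

```python
import math

# Check that the envelope point of the sliding segment of length A, with
# endpoints on the x and y axes, satisfies x^(2/3) + y^(2/3) = A^(2/3).
A = 1.0
vals = []
for t in [0.3, 0.7, 1.2]:
    x, y = A * math.cos(t) ** 3, A * math.sin(t) ** 3  # envelope point
    vals.append(x ** (2 / 3) + y ** (2 / 3))
print([round(v, 6) for v in vals])  # each value is A^(2/3) = 1.0
```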
It looks a bit like this. Cuddlyable3 (talk) 20:51, 15 February 2009 (UTC)[reply]

Maths: discovery or invention?

Are mathematical developments discoveries or inventions, and does someone's answer to this question effect the conclusions they can draw? Thanks in advance. 86.8.176.85 (talk) 19:46, 15 February 2009 (UTC)[reply]

This depends on your Philosophy of mathematics. That article has some positions that various people have held. Algebraist 19:53, 15 February 2009 (UTC)[reply]
This is a perennial source of discussion for philosophers; there is no clearly correct answer. It's analogous to solving a crossword puzzle – would you say that you created the solution, or that you discovered it? Even usage among mathematicians is varied. I typically say that I discover a new mathematical object but I invent a new technique. — Carl (CBM · talk) 19:58, 15 February 2009 (UTC)[reply]
One often speaks of constructing a new object also. Algebraist 20:45, 15 February 2009 (UTC)[reply]
Mathematics is discovery in an abstract universe that does not exist but is useful to invent. I think the questioner means affect not effect the conclusions one can draw. The answer to the first question does not affect the conclusions one can draw, only the mathematics can prove or disprove a mathematical conclusion. Cuddlyable3 (talk) 20:46, 15 February 2009 (UTC)[reply]
It can, actually. A realist (about mathematical objects) is forced to conclude that the continuum hypothesis must be either true or false, and may be able to convince himself one way or the other. Some types of antirealist, on the other hand, are able to conclude that CH is without truth value. Algebraist 21:36, 15 February 2009 (UTC)[reply]

A related question would be whether mathematical concepts or techniques are copyrightable or patentable. It's a relevant question when you consider cryptology and compression technologies. I wonder what would be the effect of someone having a patent on the Pythagorean theorem? -- Tcncv (talk) 07:58, 16 February 2009 (UTC)[reply]

I believe there is prior art. 76.126.116.54 (talk) 08:11, 16 February 2009 (UTC)[reply]

Central Force Problem

Never mind, I worked it out ;)

Not serious series

1. Counters of beans. Fine mathematicians with too much free time, can you supply the last term of this series:

3 , 3 , 5 , 4 , 4 , 3 , 5 , ?

2. Riddle me this: why does six fear seven ? Cuddlyable3 (talk) 21:19, 15 February 2009 (UTC)[reply]

The answer to 1. is 5. Algebraist 21:23, 15 February 2009 (UTC)[reply]
2: Because 7 8 9. As for 1, I say the answer is pi. -mattbuck (Talk) 21:24, 15 February 2009 (UTC)[reply]
1. is A005589 at the OEIS. Algebraist 21:26, 15 February 2009 (UTC)[reply]
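Indeed, reading the terms as letter counts of English number names (as in A005589), a couple of lines confirm the answer of 5:

```python
# Letter counts of "one" through "eight"; the eighth term is len("eight") = 5.
names = ["one", "two", "three", "four", "five", "six", "seven", "eight"]
print([len(n) for n in names])  # [3, 3, 5, 4, 4, 3, 5, 5]
```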
Interesting how they stopped at "one hundred", nicely sidestepping the issue of one hundred one versus one hundred and one (though still vulnerable to the challenge from a hundred). --Trovatore (talk) 23:02, 15 February 2009 (UTC)[reply]
The value of 4 for the noughth entry is also arguable. Algebraist 00:01, 16 February 2009 (UTC)[reply]

February 16