Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Jammie (talk | contribs) at 17:31, 17 February 2009 (→‎February 17). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

February 11

Help me grok e

This is a result that mathematicians everywhere seem to take for granted:

lim_{n→∞} (1 + r/n)^n = Σ_{k=0}^{∞} r^k/k! = e^r

Can someone provide me with a proof of it? --Tigerthink (talk) 01:02, 11 February 2009 (UTC)[reply]

Which equals sign are you looking for a proof of? The second one can be taken as a definition of e. There are various alternative definitions, which you can prove are equivalent - is there a particular definition you favour? The first equality is intuitively obvious, I think (just expand out brackets for the first few terms), it may require a little more work to make it rigorous (I can't do rigorous analysis at 1:20am...). --Tango (talk) 01:21, 11 February 2009 (UTC)[reply]
Take a logarithm. Find the limit of the result. Re-exponentiate. Ray (talk) 02:18, 11 February 2009 (UTC)[reply]

Assuming that you already know that e = lim_{j→∞} (1 + 1/j)^j, then

lim_{n→∞} (1 + r/n)^n = lim_{j→∞} (1 + 1/j)^{jr} = (lim_{j→∞} (1 + 1/j)^j)^r = e^r

by making the substitution jr = n. This should take care of both equalities.-Looking for Wisdom and Insight! (talk) 07:25, 11 February 2009 (UTC)[reply]

If you want to do it well and in an elementary way, here is the program.
Program: prove that for any real number r the sequence (1 + r/n)^n is increasing as soon as n > |r|. Prove that it is bounded. So it is convergent: define exp(r) as its limit. Prove that exp(r+s) = exp(r)exp(s) for all real numbers. Prove the equality with the exponential series. etc.--194.95.184.74 (talk) 09:39, 11 February 2009 (UTC)[reply]
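The program above is easy to test numerically. Here is a minimal Python sketch (the helper names are mine, not from the thread) comparing the limit definition with the series definition:

```python
import math

def exp_by_limit(r, n):
    """Approximate exp(r) by the sequence (1 + r/n)^n."""
    return (1 + r / n) ** n

def exp_by_series(r, terms=30):
    """Approximate exp(r) by a partial sum of sum_{k} r^k / k!."""
    return sum(r ** k / math.factorial(k) for k in range(terms))

# For a fixed r > 0, the limit sequence increases toward math.exp(r),
# and the series converges to the same value.
r = 1.5
for n in (10, 1000, 100000):
    print(n, exp_by_limit(r, n))
print("series:", exp_by_series(r), "math.exp:", math.exp(r))
```

For r > 0 the successive values of `exp_by_limit` visibly increase toward `math.exp(r)`, matching the monotonicity claim in the program above.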

I don't understand this proof

This was a proof for a statement on the binomial coefficient page.

Furthermore,

C(n, k) ≡ 0 (mod n)

for all 0 < k < n if and only if n is prime.

We can prove this as follows: when p is prime, p divides

C(p, k) for all 0 < k < p

because C(p, k) is a natural number and the numerator has a prime factor p but the denominator does not have a prime factor p. So C(p, k) ≡ 0 (mod p)

Unfortunately, I don't understand how the conclusion that C(p, k) ≡ 0 (mod p) is reached. It's probably very simple, and I'm just missing it, but could someone explain it please? By the way, I'm somewhat familiar with modular arithmetic, but I would prefer it was explained in another context (i.e. by divisibility). Thanks. —Preceding unsigned comment added by 65.92.237.46 (talk) 06:12, 11 February 2009 (UTC)[reply]

The numerator is p(p−1)(p−2)…(p−k+1) with k > 0, so it is clear that this is a multiple of p. The denominator is k! = k(k−1)…1, and this is not a multiple of p because k is less than p and p is prime - if the denominator were a multiple of p then we could find a non-trivial factorisation of p, which is impossible. So we are dividing a multiple of p by something that is not a multiple of p - we know the result is an integer, but it must also be a multiple of p because there is no factor of p in the denominator to cancel the factor of p in the numerator. Gandalf61 (talk) 09:24, 11 February 2009 (UTC)[reply]
In case it's the last step that's troubling you, be aware that 'a≡0 (mod n)' is just a fancy way of saying that n divides a. Algebraist 11:51, 11 February 2009 (UTC)[reply]
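The divisibility pattern above is easy to see concretely. A small Python check (the helper name is illustrative):

```python
from math import comb

# For prime p, every interior binomial coefficient C(p, k) is divisible by p;
# for composite n this fails for at least one k (e.g. C(4, 2) = 6, and 4 does
# not divide 6).
def interior_coeffs_divisible(n):
    return all(comb(n, k) % n == 0 for k in range(1, n))

for n in range(2, 20):
    print(n, interior_coeffs_divisible(n))
```

Running this prints True exactly at the primes, matching the "if and only if" statement quoted from the binomial coefficient page.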

Many thanks. —Preceding unsigned comment added by 65.92.237.46 (talk) 11:53, 11 February 2009 (UTC)[reply]

Degree of Relatedness

Not sure if this is more math or biology, but here goes. What is the degree of relatedness between two children of an incestuous relationship between a half-sister and half-brother? I feel like it's 0.75, but I'm not sure. Any ideas? 169.229.75.128 (talk) 07:12, 11 February 2009 (UTC)[reply]

The expected value of the degree of relatedness of the offspring of two half-siblings (that is, sharing one parent) is 9/16. To get an expected value of degree of relatedness of 3/4 you need to consider the offspring of two clones.
A few things: the degree of relatedness of two offspring is a distribution (in theory, the offspring could be identical, or could have no genes in common, although these two extremes are extremely unlikely), so properly you should be talking about the expected value of their degree of relatedness.
Secondly, I find the term "degree of relatedness" a bit confusing and loaded. For some species, such as ants, where different individuals may have different numbers of genes, it is possible for the degree of relatedness of A and B to be different from the degree of relatedness of B and A. It is somewhat clumsier but more descriptive to say: "given a gene of A, what is the probability that that gene is found in B?". Thinking about the problem in that manner may also make it easier to go about solving it. Eric. 131.215.158.184 (talk) 10:13, 11 February 2009 (UTC)[reply]
How did you get 9/16, though? 169.229.75.128 (talk) 16:43, 11 February 2009 (UTC)[reply]
I believe that was explained on another Desk. I agree that it's correct. Posting on multiple Desks isn't generally allowed, as it leads to us having to repeat ourselves. StuRat (talk) 00:01, 12 February 2009 (UTC)[reply]
By the way, just in case someone finds their way back to this thread in the future, I was in fact using the wrong definition of coefficient of relationship and the answer is actually 5/8. Eric. 131.215.158.184 (talk) 07:11, 14 February 2009 (UTC)[reply]

Absolute Integrability

This issue came up defining the Fourier transform in my PDE class. Let us say that a function f is absolutely integrable if ∫_{−∞}^{∞} |f(x)| dx is a finite number. My question is, if a real function f is absolutely integrable, then can't we already say (without assuming anything extra) that lim_{x→∞} f(x) = 0? I mean how can a function have a nonzero limit as x grows without bound and still be absolutely integrable? If the limit is any nonzero number, then wouldn't the absolute integral be infinite? The same will be true as x goes to negative infinity. That limit must be zero as well. Is my reasoning correct or wrong?-Looking for Wisdom and Insight! (talk) 07:33, 11 February 2009 (UTC)[reply]

The limit could simply fail to exist. Imagine (a smoothed version of) Σ n·χ_[n, n+1/n³). It is a non-negative function with integral Σ 1/n² ≤ 2, where all sums are over the positive integers. However, its limsup on every interval [a,∞) is +∞, its liminf on every interval [a,∞) is 0, and the limit as x → ∞ does not exist. JackSchmidt (talk) 09:09, 11 February 2009 (UTC)[reply]
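A quick numerical illustration of the spike function just described (height n on an interval of width 1/n³); the Python function name is mine:

```python
import math

# The spike function above: height n on [n, n + 1/n^3), zero elsewhere.
# Each spike contributes area n * (1/n^3) = 1/n^2, so the total integral is
# finite (the sum converges to pi^2/6 < 2) even though the spike heights
# grow without bound, so f(x) has no limit as x goes to infinity.
def spike_area_total(nmax):
    return sum(n * (1 / n ** 3) for n in range(1, nmax + 1))

print(spike_area_total(10 ** 6))
print(math.pi ** 2 / 6)
```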
By the way, such functions are normally called simply integrable (or L1). Algebraist 11:49, 11 February 2009 (UTC)[reply]

Or they're called "Lebesgue-integrable".

So say you have a non-negative function with a pulse of height 1 at 1, another at 2, another at 3, and so on. But the pulses keep getting narrower, so the sum of all the areas under them is a convergent series. Then the integral from 0 to ∞ is finite but the function does not approach 0 at ∞. Michael Hardy (talk) 21:12, 11 February 2009 (UTC)[reply]

But you can always say without any extra assumption that lim inf_{x→∞} |f(x)| = 0. Unfortunately, it is possible that lim sup_{x→∞} |f(x)| = +∞. (Igny (talk) 05:09, 15 February 2009 (UTC))[reply]

General question

I'm not asking for an answer to this question, so it isn't homework; I just couldn't figure out a logic to go about this sum. It's as follows: calculate how many natural numbers from 0 to 2000 have a square whose sum of digits is 21. Now no need to tell me the answer, but just tell me a basic logic I can use to go about doing this sum. —Preceding unsigned comment added by Vineeth h (talkcontribs) 14:32, 11 February 2009 (UTC)[reply]

It's a trick question. Algebraist 14:52, 11 February 2009 (UTC)[reply]
The only sum of digits you get are: 0, 1, 4, 7, 9, 10, 13, 16, 18, 19, 25, 27, 28, 31, 34, 36, 37, 40, 43, 45, 46, 49. -- SGBailey (talk) 15:05, 11 February 2009 (UTC)[reply]
You missed 22. Algebraist 15:15, 11 February 2009 (UTC)[reply]
Consider divisibility rules. — Emil J. 15:11, 11 February 2009 (UTC)[reply]

SGBailey -> you're right; it turns out no square has digit sum 21, so the final answer is 0 natural numbers. So could you please tell me how you generalised that the only sums of digits you get are the ones you mentioned? What's the logic behind that? Because one can't possibly remember all the numbers you mentioned there just to solve a sum like this! Vineeth h (talk) 17:12, 11 February 2009 (UTC)[reply]

I must have deleted 22 by mistake when doing the previous edit. The "algorithm" I used was to do the calculation 2001 times x -> x^2 -> sum(digits(x^2)). -- SGBailey (talk) 20:11, 11 February 2009 (UTC)[reply]
No need for scare quotes - brute force is a perfectly legitimate algorithm! --Tango (talk) 20:13, 11 February 2009 (UTC)[reply]
You don't need to know the list. Just remember the divisibility criteria for 3 and 9: if a number has sum of digits 21, then it is divisible by 3, but not 9, and that's impossible for a square. — Emil J. 17:23, 11 February 2009 (UTC)[reply]
This BASIC code prints out the sums of digits that occur and how often they occur. Sorry, the sum 21 never occurs. Cuddlyable3 (talk) 15:09, 12 February 2009 (UTC)[reply]
deflng a-z
dim f(49)
for x=0 to 2000
 x2=x^2
 sd=0
 for p=6 to 0 step -1  'powers of 10
  dp=int(x2/10^p)
  sd=sd+dp
  x2=x2-dp*10^p
 next p
 incr f(sd)
next x
for n=0 to 49
 if f(n)>0 then print n;f(n)
next n
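For readers without a BASIC interpreter handy, here is a Python sketch of the same tally (variable names are mine):

```python
from collections import Counter

# Tally the digit sums of x^2 for x = 0..2000, mirroring the BASIC program above.
freq = Counter(sum(int(d) for d in str(x * x)) for x in range(2001))

for s in sorted(freq):
    print(s, freq[s])
print("digit sum 21 occurs:", 21 in freq)  # -> digit sum 21 occurs: False
```

Every digit sum that occurs is congruent to 0, 1, 4 or 7 mod 9 (the residues of squares mod 9), which is exactly why 21 (≡ 3 mod 9) never appears.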


February 12

Matrix property

If:

A^T = −A

What is this property called? It's something-symmetric but I can't quite remember what the prefix is.

Thanks in advance. 128.86.152.139 (talk) 01:52, 12 February 2009 (UTC)[reply]

Skew-symmetric matrix. Algebraist 02:02, 12 February 2009 (UTC)[reply]
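A tiny Python sanity check of the defining property A^T = −A (pure lists, no external libraries; helper names are illustrative):

```python
# A skew-symmetric matrix satisfies transpose(A) == -A, which forces
# every diagonal entry to be zero (a[i][i] = -a[i][i]).
def transpose(a):
    return [list(row) for row in zip(*a)]

def is_skew_symmetric(a):
    return transpose(a) == [[-x for x in row] for row in a]

A = [[0, 2, -1],
     [-2, 0, 3],
     [1, -3, 0]]
print(is_skew_symmetric(A))  # True
```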
Ah yes, genius! Thanks. 128.86.152.139 (talk) 02:11, 12 February 2009 (UTC)[reply]

Points in [0,1] whose decimal expansions contain only the digits 4 and 7

I've posted this question on the math portal talk section and was told the answer, but I tried and don't know how to prove it.

Let E be the set of all x in [0,1] whose decimal expansion contains only the digits 4 and 7. How is it closed?

If x is in [0,1] and not in E, it'll have a digit different from 4 and 7. Then I tried to find a neighborhood of x that's disjoint from E, but it's difficult as there are many cases each requiring separate treatment. Can anyone offer a proof? —Preceding unsigned comment added by IVI JAsPeR IVI (talkcontribs) 13:27, 12 February 2009 (UTC)[reply]

If it has a digit different from 4 and 7, then it will have a first such digit. You can do what you like to digits after that and always stay outside E. Does that help? --Tango (talk) 13:55, 12 February 2009 (UTC)[reply]
As always with decimal expansions, there's the annoying matter of non-uniqueness to be dealt with. Algebraist 13:59, 12 February 2009 (UTC)[reply]
You have the right idea; just show that the complement is open. There will be several cases, because you have to worry about numbers like 0.3999999... and 0.474740000... . Proofs of statements that refer to decimal digits are always difficult because of the non-uniqueness. It is much easier to prove this statement for Baire space (set theory), and that space is sufficiently similar to the real line to guide intuition. — Carl (CBM · talk) 14:03, 12 February 2009 (UTC)[reply]
Why would you have to worry about that? He says "only 4 and 7." Anyway, I would use convergent sequences. If a convergent sequence in [0,1] consists of numbers containing only 4 and 7, it converges to a number made of only 4 and 7. The set is closed. Black Carrot (talk) 14:11, 12 February 2009 (UTC)[reply]
Well, you would need to either know that fact about pointwise convergence of the decimal expansion, or prove it. And in general one cannot say that if a sequence x converges to y then any sequence of decimal expansions of x converges pointwise to any given decimal expansion of y. This is the usual headache with decimal expansions. — Carl (CBM · talk) 14:16, 12 February 2009 (UTC)[reply]
I guess it depends on whether we count trailing 0's as digits. If we do, then both recurring 9's expansions and terminating expansions contain digits other than 4 and 7, so there's no problem. If we don't, then 0.6999... and 0.7 are in different categories despite being the same number. --Tango (talk) 14:19, 12 February 2009 (UTC)[reply]
Even in the former case, you have to worry (briefly) about this in your approach: which is the first digit not 4 or 7 in 7/10? Algebraist 14:24, 12 February 2009 (UTC)[reply]
It doesn't matter. You don't need to know which is the first such digit, just that it exists. Just call it the nth digit and get on with it. --Tango (talk) 14:39, 12 February 2009 (UTC)[reply]
Hence the briefness of the worry. Algebraist 14:46, 12 February 2009 (UTC)[reply]
If a decimal expansion has n digits after the point, there are 2^n possible sequences comprising exclusively the 2 digits 4 and 7. But there is no limit to increasing n. Therefore the set E is infinite. Cuddlyable3 (talk) 14:22, 12 February 2009 (UTC)[reply]
Yes. So what? Algebraist 14:24, 12 February 2009 (UTC)[reply]
(I added a descriptive title.) I think this is pretty easy with just two cases. For a nonterminating decimal (which has no alternate terminating expansion), find the first illegal digit and choose a neighborhood small enough that that digit doesn't vary. For a terminating decimal... you fill in the blank. -- BenRG (talk) 14:26, 12 February 2009 (UTC)[reply]
The non-uniqueness of decimal expansion is definitely a plague. I've thought about it a bit more and came up with this: let x be outside of E, and let the first digit different from 4 and 7 be α, at the nth place. The next digit must be a digit from 0 to 9 (since we are using base 10). If it's 1-8, the neighborhood of radius 1/10^(n+1) centred at x has no element in common with E, because if we add any amount less than 1/10^(n+1) to x it will not change the digit α: even if x is 0.(... ...)α8999999..., we must add or subtract something with absolute value strictly less than 0.00000...01 (the digit 1 at the (n+1)th place), so it can't get to 0.(... ...)α999999... = 0.(... ...)(α+1)00000... (or if α is already 9, the carry will contribute to the digit on its left).
If the digit after α is 0 or 9, we still use the neighborhood of radius 1/10^(n+1) centred at x, but it gets a bit more difficult to demonstrate (and I'm not entirely sure it's correct, due to the decimal non-uniqueness); it's easier to do it with a diagram, but frankly I really don't know how to use Wikipedia. If anyone can point out any mistakes I made (which I'm pretty sure I did), please correct them. And also, with regards to Baire space: isn't it uncountable? Because the countable cartesian product of copies of the natural numbers simply means the set of all functions from N to N, so why do they use ω^ω? Isn't that ordinal countable? (From my intuition it's somewhat like the union of all finite cartesian products of natural numbers - it's defined to be the supremum of ω^n, with n running over the naturals, and ω^n is somewhat like N^n. Homeomorphism is probably the word, but that's just from casual reading.) Sorry I'm not quite at your level yet.--IVI JAsPeR IVI (talk) 12:44, 14 February 2009 (UTC)[reply]
I think there is some confusing notation going on. ω is used both to represent the set of natural numbers and the first infinite ordinal (they are, after all, the same thing), but how you manipulate the symbol depends on which meaning you are giving it. If it's the set of natural numbers then ω^ω means the set of all functions from the natural numbers to themselves, which is uncountable. If it's an ordinal, then ω^ω means the limit of ω^n as n goes to infinity, which is a countable ordinal. I think the answer is to avoid using ω to refer to the set of natural numbers and just use it for the ordinal (I think that's the most common notation - I've only seen ω used for the natural numbers in rather old books). --Tango (talk) 13:06, 14 February 2009 (UTC)[reply]
The use of ω for the naturals is still common in logic and, I believe, in set theory. One slight advantage of this notation is that ω quite definitely includes 0, while with ℕ it's anyone's guess. Algebraist 14:27, 14 February 2009 (UTC)[reply]
Another benefit of using ω to refer to the set of finite ordinals is that it's very clear exactly which set is intended. This usage is extremely common in practice. The corresponding solution in practice to the issue Jasper and Tango mentioned is that you need to say so explicitly if you are using ordinal exponentiation (which is used somewhat rarely in practice). This leads to the following conventions:
  • ω^ω is the set of infinite sequences of natural numbers.
  • ω^{<ω} is the set of finite sequences of natural numbers.
  • [ω]^ω is the set of infinite sets of natural numbers.
  • [ω]^{<ω} is the set of finite sets of natural numbers.
— Carl (CBM · talk) 14:37, 14 February 2009 (UTC)[reply]
I tend to use and to avoid the ambiguity. --Tango (talk) 15:42, 14 February 2009 (UTC)[reply]

Thanks. I thought the symbol ω (and anything containing ω that "looks" like elementary operations) was used exclusively for ordinals and arithmetic/exponentiation on ordinals.--IVI JAsPeR IVI (talk) 08:56, 15 February 2009 (UTC)[reply]

Book recommendations

Could someone recommend easy-to-understand and interesting-to-read book(s) covering the following topics: Rodrigues' rotation formula, Clifford algebra, Rotation groups, Lie groups, Exponential map, etc.? Thanks a ton! deeptrivia (talk) 18:39, 12 February 2009 (UTC)[reply]

Wrt Clifford algebras, search the net for stuff from John Baez; his weekly column has two specials on it, and book recommendations. I would expect to find similar recommendations for the other things in similar places. --Ayacop (talk) 19:17, 12 February 2009 (UTC)[reply]

Simple Math

This is a really easy question compared to most of the ones here so I'm sure someone can help me.

How do I solve for 'm' in the following equation:

900 = 1500(0.95)^m

I will admit right now that this is from my homework but I have tried really hard but just can't get it. Thanks in advance.

This page isn't for homework problems. I suggest you reread the chapter in your textbook from which the problem originated, paying particular attention to the worked examples. Ray (talk) 23:35, 12 February 2009 (UTC)[reply]
Are you familiar with logarithms? If yes, this is easy. If no, you should read your textbook's section on them. Algebraist 23:36, 12 February 2009 (UTC)[reply]
And, if you haven't had logarithms in school yet, and don't care to learn them, either, this could also be solved by a trial-and-error approach. I'll get you started:
900/1500 = (1500/1500)(0.95)^m
0.6 = (1)(0.95)^m

0.6 = (0.95)^m
Now try a range of values for m:
m  (0.95)^m
-- ---------
 1  0.95
10  0.598737
Since 0.6 is between 0.95 and 0.598737, but much closer to 0.598737, we should try a value for m between 1 and 10, but much closer to 10. I'll try 9.9:
m      (0.95)^m
--     ---------
 1     0.95
 9.9   0.601816
10     0.598737
Since 0.6 is approximately halfway between 0.601816 and 0.598737, next try an m approximately halfway between 9.9 and 10. Continue with this process until you have the desired number of significant digits for m. StuRat (talk) 01:23, 13 February 2009 (UTC)[reply]
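Both routes - logarithms, and the trial-and-error idea above made systematic as bisection - can be sketched in a few lines of Python (a sketch, not part of the original thread):

```python
import math

# Closed form with logarithms: 0.95^m = 0.6, so m = log(0.6) / log(0.95).
m_exact = math.log(900 / 1500) / math.log(0.95)

# The same answer by bisection, systematizing the trial-and-error above.
lo, hi = 1.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if 0.95 ** mid > 0.6:   # 0.95^m decreases as m grows
        lo = mid
    else:
        hi = mid

print(m_exact, lo)  # both ≈ 9.959
```

Sixty halvings of the interval [1, 10] pin m down far beyond the accuracy of the hand computation in the table above.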
The trial-and-error approach that StuRat showed is also called "successive approximation" and at this link you can see it used in an electronic circuit. You have yet another method of solving your equation if you can still find a sliderule that has LL scales (and the ancient knowledge of how to use them). Cuddlyable3 (talk) 19:59, 13 February 2009 (UTC)[reply]


February 13

Finding x in logs

log_{2} x + log_{2} (x+5) = log_{2} 9

I use the law of logs and multiply x into x+5, then I raise 2 to both sides and end up with x^2+5x=9 but I can't factor that and get nice numbers, and I know I'm not supposed to use the quadratic formula. What am I doing wrong here? 98.221.85.188 (talk) 03:40, 13 February 2009 (UTC)[reply]

Complete the square from first principles like an honest man? Algebraist 03:46, 13 February 2009 (UTC)[reply]
5 doesn't divide evenly... 98.221.85.188 (talk) 04:19, 13 February 2009 (UTC)[reply]
Your problem does not have a nice round answer. So either you have to accept that or you wrote down the wrong equation above. Dragons flight (talk) 04:24, 13 February 2009 (UTC)[reply]
Well it's a webwork problem meaning we have to type in the answer on the web, and then it tells us if we got it right or wrong. So it's supposed to have an exact answer, but I keep getting an approximation of 1.4. The problem looks like it's written correctly. I don't know where the problem is. 98.221.85.188 (talk) 04:27, 13 February 2009 (UTC)[reply]
Nvm, I put sqrt15.25-2.5 and it says I was correct 98.221.85.188 (talk) 04:29, 13 February 2009 (UTC)[reply]
You missed something: You need to reject the extraneous root. You can't take the logarithm of a negative number. Michael Hardy (talk) 22:52, 13 February 2009 (UTC)[reply]

To complete the square, you divide 5 by 2 and then square, and add that amount to both sides:

x² + 5x + 25/4 = 9 + 25/4

Then:

(x + 5/2)² = 61/4

so that

x + 5/2 = ±√61/2

etc. Now the complication: one of the solutions is negative. You need to reject that one since there is no base-2 logarithm of a negative number. Michael Hardy (talk) 22:51, 13 February 2009 (UTC)[reply]
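As a quick check that the surviving positive root (which is √15.25 − 2.5, the value accepted by the webwork system above) really satisfies the original equation, a short Python sketch:

```python
import math

# The positive root from completing the square: x = sqrt(61)/2 - 5/2 = sqrt(15.25) - 2.5.
x = math.sqrt(61) / 2 - 5 / 2

# It satisfies x^2 + 5x = 9, and hence log2(x) + log2(x+5) = log2(9).
print(x, x * x + 5 * x)          # x ≈ 1.405
lhs = math.log2(x) + math.log2(x + 5)
rhs = math.log2(9)
print(lhs, rhs)
```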

Reals-chauvinist! —Tamfang (talk) 06:42, 18 February 2009 (UTC)[reply]

Finding distance given initial speed and friction

I'm making a simple game that involves throwing balls around a complex level. The ground exerts some friction on the balls that slows them down. I'm not going for a perfect simulation, so as it is I'm just multiplying their velocity by a constant f slightly less than 1. So, given an initial position and speed, and a certain friction, I'm trying to predict where the ball will stop so I can plug in some AI code. I'm guessing it'll involve some calculus, but it's been a while... What I have looks like:

But I'm kinda lost there. If anyone could point out the way on how to solve this thing, I'd be very grateful. Thanks! -- JSF —Preceding unsigned comment added by 189.112.59.185 (talk) 13:34, 13 February 2009 (UTC)[reply]

I don't think you want a constant deceleration, you probably want v(t) = v0·f^t. That gives you:
x(t) = x0 + v0·(f^t − 1)/log(f), which approaches x0 − v0/log(f). (Note, since f<1, log(f) is negative, so that minus sign does make sense.) That's if it's done continuously, if you are actually simulating it using discrete time steps, you'll get a slightly different answer (but not too far off if the time steps are small enough). --Tango (talk) 14:31, 13 February 2009 (UTC)[reply]
Watch out for the calculus because it tells you that with constant friction the ball never stops completely. Possibly you want a procedure like this pseudocode:
Enter P0 = start position (distance units e.g. inches)
      V0 = start velocity (distance per time step)
      F  = friction (velocity change per time step)
 p=P0
 v=V0
NEX:
 p = p + v*(1+F)/2
 v = v * F
 if v > 0.01 goto NEX
REM The ball stops at position p.  

From these start values

P0, V0, F = 1, .1, .8

the ball rolled for 11 time steps (loops to NEX) and stopped at position p = 1.41. Cuddlyable3 (talk) 20:52, 13 February 2009 (UTC)[reply]
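The pseudocode above can be reproduced directly and compared with the geometric-series total it approximates; a Python sketch (the function name is mine):

```python
# Reproduce the pseudocode above. With v multiplied by F each step and a
# displacement of v*(1+F)/2 per step, the total travel from v0 (ignoring the
# stop threshold) is a geometric series:
#   v0*(1+F)/2 * (1 + F + F^2 + ...) = v0*(1+F) / (2*(1-F))
def roll(p0, v0, f, threshold=0.01):
    p, v, steps = p0, v0, 0
    while True:
        p += v * (1 + f) / 2
        v *= f
        steps += 1
        if v <= threshold:
            return p, steps

p, steps = roll(1.0, 0.1, 0.8)
closed_form = 1.0 + 0.1 * (1 + 0.8) / (2 * (1 - 0.8))
print(p, steps)     # ≈ 1.411 after 11 steps, matching the figures above
print(closed_form)  # 1.45 (no stop threshold)
```

The small gap between 1.41 and 1.45 is exactly the distance forfeited by stopping the ball once v falls below the 0.01 threshold.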

College Math Problem Plea ....

A steel block of weight W rests on a horizontal surface in an inaccessible part of a machine. The coefficient of friction between the block and the surface is u. To extract the block, a magnetic rod is inserted into the machine and this rod is used to pull the block at a constant speed along the surface with a force F. The magnetic force of attraction between the rod and the block is M. Explain why

(a) M > uW    (b) F = uW


This problem was under the chapter "Newton's Third Law" in the Mechanics M1 book of the mathematics course. Please do help!


—Preceding unsigned comment added by 202.72.235.204 (talk) 21:33, 13 February 2009 (UTC)[reply]

We are not going to do your homework for you. J.delanoygabsadds 21:37, 13 February 2009 (UTC)[reply]


Actually I was unable to do this one, so I thought what better place for help than Wiki and of course you guys!!!! —Preceding unsigned comment added by 202.72.235.204 (talk) 21:49, 13 February 2009 (UTC)[reply]

What bit are you stuck on? If you show us your working so far, we'll try and help you with the next bit, but we're not going to do the whole question for you. If you don't even know where to start, you should go and talk to your teacher. --Tango (talk) 21:50, 13 February 2009 (UTC)[reply]


Okay, WORKING:

The question's first part (a) suggests that the magnetic attraction M should be greater than the frictional force uW experienced by the steel block. But when this happens, the block should start to accelerate, as the net force on the block exceeds 0 (M > friction) - yet the block travels with steady speed as stated in the question. And since M is taken to be constant, it seems the block should always be accelerating.

Now, what's the deal with M and F? How does pulling the rod with a force of F change anything?

When the magnetic rod is in contact with the block there is a reactive force between them that partially cancels out M. If you weren't pulling the rod, it would completely cancel it out; by pulling the rod just the right amount you leave just enough resultant force on the block to counteract the friction, allowing for constant velocity. Try drawing a diagram showing the block, the surface and the rod with all the forces (I count 7 forces in total). --Tango (talk) 22:43, 13 February 2009 (UTC)[reply]

CLARIFICATION:

Okay, I didn't consider the steel block to be in contact with the magnet. There are few things for clarification though:

Let's take the steel block into consideration, assuming the steel block travels to the left:

Forces to the right: The magnetic force M and the pull from the magnetic rod F / uW

Forces to the left (all horizontally): Reaction contact force R = M and friction uW

Resultant force horizontally = 0

My question is this: it seems logical that the contact reaction force on the block from the magnet would decrease incrementally as the pull increases, but mathematically speaking, can we take the reaction force as constant and work out an arithmetic summation to find the net force, given that there is a constant pull from the magnet on the block?


What really happens: does the contact force decrease as the pull increases, or does it remain constant? —Preceding unsigned comment added by 202.72.235.208 (talk) 16:54, 14 February 2009 (UTC)[reply]

The pulling force from the rod isn't a separate force, it's just the difference between the magnetic attraction and the contact force. I suggest you include the rod in your diagram and consider the forces on it too (F is a force on the rod). --Tango (talk) 17:36, 14 February 2009 (UTC)[reply]
You're overthinking this Q. Yes, in reality any force which gets the block moving would also cause it to accelerate, but just ignore that. When they said it moves at a constant speed, what they really meant was "it will move at a slight acceleration, which is minimal enough that you need not consider it in your calculations". Similarly, there isn't a single coefficient of friction, but rather are two, a higher static one and a lower dynamic one. So, you would need to pull with a greater force to "break the block loose", then decrease the pulling force to prevent acceleration. StuRat (talk) 16:35, 15 February 2009 (UTC)[reply]

February 14

Mathematical fraction

The term "a third of a mil" in reference to a dollar amount is used in a New Jersey state statute. Please advise what that fraction is and what the decimal number is that should be used when multiplying another larger number. For instance what would I multiply $1,000,000 by to find a "third of a mil" of that amount?

Thank you,

Frank J. Mcmahon Mahwah, NJ [email removed] —Preceding unsigned comment added by 69.127.4.198 (talk) 02:07, 14 February 2009 (UTC)[reply]

Try a lawyer? I'd assume offhand "a third of a mil" means dollars. If there's some special legal meaning of the phrase, then it is something a lawyer would know, not a mathematician. Maybe it would help if we could see the full sentence that "a third of a mil" first appears in. By the way, we never email responses, so I removed yours to lessen visibility to spam-bots.... Eric. 131.215.158.184 (talk) 07:17, 14 February 2009 (UTC)[reply]
In some contexts I believe that, just like "1/3 per cent" means X / 300, this could mean "1/3 per thousand" or X / 3000. -- SGBailey (talk) 09:48, 14 February 2009 (UTC)[reply]
I would think "a third of a mil" is just short for "a third of a million" or $333,333.33. As SGBailey says, it could mean "a third per mil", or 1/3000 times. Either way, it's not a very common way of saying it, but then lawyers like to make things as confusing as possible - it keeps them in work. --Tango (talk) 12:53, 14 February 2009 (UTC)[reply]
I would assume that it refers to mill (currency). "A third of a mil" is 1/3000 of a dollar. -- BenRG (talk) 13:10, 14 February 2009 (UTC)[reply]
In light of [1], about library funding in New Jersey, it looks like SGBailey got it right; it's a third per mil. Of course, that also means it's a third of a mill per dollar (in this case, per dollar of assessed property value). So to answer the original question, for a property assessed at $1,000,000, a "third of a mil" would be around $333. —JAOTC 14:54, 14 February 2009 (UTC)[reply]
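Under that "third per mil" reading, the arithmetic for the original question is one line (a Python sketch; the variable names are mine):

```python
# "A third of a mil" read as a third of a mill per dollar of assessed value,
# i.e. a rate of (1/3)/1000 = 1/3000.
assessment = 1_000_000
rate = (1 / 3) / 1000
print(assessment * rate)  # ≈ 333.33
```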

Numbers

I have these series of numbers and I know they are related, but I don't know what they are called.

0, 1, 3, 6, 10, 15, 21, 28, ...

0, +1, +2, +3, +4, +5, ...

Thanks --68.231.197.20 (talk) 06:47, 14 February 2009 (UTC)[reply]

You should look at the triangular numbers. Eric. 131.215.158.184 (talk) 07:07, 14 February 2009 (UTC)[reply]
The OEIS is a good place to answer these questions. Algebraist 14:23, 14 February 2009 (UTC)[reply]

They're the perfect squares: 0*0=0, 1*1=1, 2*2=4, 3*3=9, etc.; that is, +1, +3, +5, etc. Do you notice a relationship with your series? How would you express that as an equation - or maybe two? Note: this may be BS on my part —Preceding unsigned comment added by 82.120.236.246 (talk) 21:59, 14 February 2009 (UTC)[reply]

What are the perfect squares? None of the numbers in the list (other than 0 and 1) are squares... what are you talking about? --Tango (talk) 00:02, 15 February 2009 (UTC)[reply]
Perhaps the sum of any two consecutive terms;) hydnjo talk 13:56, 15 February 2009 (UTC)[reply]
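The patterns discussed above - triangular numbers T(n) = n(n+1)/2, successive differences, and consecutive-term sums being perfect squares - can be checked in a few lines of Python (helper name mine):

```python
# Triangular numbers: each term adds the next integer, and any two
# consecutive terms sum to a perfect square: T(n-1) + T(n) = n^2.
def T(n):
    return n * (n + 1) // 2

seq = [T(n) for n in range(8)]
print(seq)      # [0, 1, 3, 6, 10, 15, 21, 28]
diffs = [seq[i + 1] - seq[i] for i in range(7)]
print(diffs)    # [1, 2, 3, 4, 5, 6, 7]
squares = [T(n - 1) + T(n) for n in range(1, 8)]
print(squares)  # [1, 4, 9, 16, 25, 36, 49]
```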

Dice game problem

I was thinking about a problem this morning and got a bit stuck on figuring out how to calculate the answer. It seemed like the sort of problem that has been calculated before, but I can't seem to find anything on it. Here's how it works:

You are playing a dice game with three six-sided dice. You roll the dice and set aside any that come up 1. You then reroll any dice that aren't one and continue the process of rerolling and setting aside 1's until all three dice have come up 1. The question is how many times on average do you need to roll the dice before they all individually have come up 1?

(Note: The original problem I'm actually trying to solve is very similar, except that instead of the probability of individual success being 1/6 the probability of individual success is 1/11.)

My friend and I worked out a brute force way to approximate it by computer (I think, I haven't typed it up yet), but I'm wondering if there's a more elegant or exact solution. Any suggestions? Thanks for the help. 71.60.89.143 (talk) 20:54, 14 February 2009 (UTC)[reply]

Just a quick follow-up - I used Excel to calculate that the expected number of rolls would be approximately 4.878 in order to get at least one success on each of the three dice. For my original problem where the chance of success is 1/11, the expected number of rolls is 8.623. Please feel free to confirm if you like, and if you have a nifty way of solving the problem I'd be interested in reading it. 71.60.89.143 (talk) 22:56, 14 February 2009 (UTC)[reply]

Your answers are certainly wrong. The expected number of rolls of one die until you get a 1 is 6 (or 11), and getting the right number on all three is clearly harder. Algebraist 23:03, 14 February 2009 (UTC)[reply]
For three dice, probability p of success each time, I get <math>E = \frac{3}{1-q} - \frac{3}{1-q^2} + \frac{1}{1-q^3}</math> (where q = 1 − p), giving 10566/1001 for probability 1/6 and 45727/2317 for probability 1/11. Unfortunately, all I have for the general case (n dice, probability p) is a messy infinite sum, and I don't have time right now to do this properly. Algebraist 23:10, 14 February 2009 (UTC)[reply]
Yeah, after I typed the above I realized I made a mistake in my formula in Excel. I'm still not getting the same answer you got above, though. I get 7.38 for the expected value of p=1/6. Hmmm.... 71.60.89.143 (talk) 23:26, 14 February 2009 (UTC)[reply]
To clarify what I'm doing in Excel, I let P(k,n) be the probability that at least k of the three dice have succeeded (k=0 to 3) after n rolls. So P(2,5) would be the chance that at least two of the three dice hit a success after five rolls. Given that definition for P(k,n), I get the following formula for P(3,n) for n>1.
P(3,n) = P(3,n-1) + (P(2,n-1) - P(3,n-1))*(P(1,1) - P(2,1)) + (P(1,n-1) - P(2,n-1))*(P(2,1) - P(3,1)) + (1 - P(1,n-1))*P(3,1)


Above, the expression P(2,n-1) - P(3,n-1) is the probability that exactly two of the dice succeeded after n-1 rolls. The other expressions are similar. 71.60.89.143 (talk) 23:46, 14 February 2009 (UTC)[reply]
Alright, I found the error in my Excel sheet and the formula above.  :) The second term in the last few products is a probability involving rolling all three dice, but it should actually be only partial rolls. I fixed the error and, lo and behold, it matches your answer Algebraist. Good work! 71.60.89.143 (talk) 01:48, 15 February 2009 (UTC)[reply]

Just my take. Let X be the number of throws before you get 1 on an n-faced die. Its cdf is

<math>P(X \le m) = 1 - \left(\frac{n-1}{n}\right)^m</math>

If you try to throw k dice, it is equivalent to look at the maximum of k iid random variables, which has cdf

<math>P(\max(X_1,\dots,X_k) \le m) = \left(1 - \left(\frac{n-1}{n}\right)^m\right)^k</math>

Thus the expected value of number of throws should be

<math>E = \sum_{m=0}^{\infty}\left(1 - \left(1 - \left(\frac{n-1}{n}\right)^m\right)^k\right)</math>

which I am pretty sure is possible to calculate explicitly. (Igny (talk) 02:49, 15 February 2009 (UTC))[reply]

Yeah, that's the infinite sum I alluded to above. Algebraist 03:16, 15 February 2009 (UTC)[reply]
So if I did not screw up for 3 dice with 6 face the average number of throws is 10.56 and for 11 faces it is 19.74 (Igny (talk) 21:17, 15 February 2009 (UTC))[reply]
That's right, Igny. And thanks for breaking down your solution; it's a little simpler and more generalizable than what I used. 63.95.36.13 (talk) 14:56, 16 February 2009 (UTC)[reply]
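For anyone checking the numbers: summing Igny's infinite series by inclusion–exclusion over the k dice gives a finite closed form that can be evaluated exactly. A Python sketch with exact rationals (function name is mine):

```python
from fractions import Fraction
from math import comb

def expected_rolls(p, k=3):
    # E[max of k iid geometric(p) variables]
    #   = sum_{j=1}^{k} C(k,j) (-1)^(j+1) / (1 - q^j),  where q = 1 - p
    q = 1 - Fraction(p)
    return sum(Fraction((-1) ** (j + 1) * comb(k, j)) / (1 - q ** j)
               for j in range(1, k + 1))

print(expected_rolls(Fraction(1, 6)))   # 10566/1001, about 10.56
print(expected_rolls(Fraction(1, 11)))  # 45727/2317, about 19.74
```

This reproduces both exact fractions quoted in the thread.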

is it possible to fuck up when combining two random number generators?

If you're combining two random number generators that you think are pretty random, but who knows, maybe every so often they aren't random enough, is there any way to do that which looks okay and basically has the correct distribution, but in fact now is way not so random? Thanks.

P.s. this isn't malicious! I wouldn't be asking it this way, and on this forum, if it were —Preceding unsigned comment added by 82.120.236.246 (talk) 22:23, 14 February 2009 (UTC)[reply]
This isn't exactly an answer to your question, but: for the most part, any attempt to combine two good PRNGs, or to modify the output of a PRNG to make it "more random", will actually reduce its quality. That's why there are so few of them that are considered cryptographically strong. By the way, if you do get a straight answer to your question, be sure to remember it in case you ever want to participate in the Underhanded C Contest. « Aaron Rotenberg « Talk « 23:16, 14 February 2009 (UTC)[reply]
Yes, it is possible to fuck up. By not defining what distribution you really need. Cuddlyable3 (talk) 23:39, 14 February 2009 (UTC)[reply]
If you can reduce the randomness of a PRNG by combining its output in some simple way with another PRNG, then it certainly wasn't cryptographically strong in the first place. However it is usually possible to improve the randomness of even weak PRNGs by combining them. I flatly disagree with Aaron when he claims that it "usually" reduces the quality. It can happen, if there is some unsuspected connection between the two PRNGs (or, of course, if you make some silly mistake in how you "combine" them), but it usually helps rather than hurts.
The simplest way to combine two PRNGs (say, normalized to return a value between 0 and 1) is simply to add their outputs modulo 1. If you do this to two PRNGs of relatively prime period, the period of the new PRNG is the product of the original periods. (Period by itself is not a good measure of randomness, but short period is always a problem.)
An even better way is the MacLaren–Marsaglia method, in which you cache values from one of the PRNGs, and use the other one to select a value from the stream. --Trovatore (talk) 23:52, 14 February 2009 (UTC)[reply]
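A toy sketch of the two combination methods described above (adding mod 1, and the shuffle-table method); the LCG constants below are illustrative choices of mine, not recommendations:

```python
def lcg(seed, a, c, m):
    # A bare linear congruential generator, normalized to [0, 1)
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def add_mod1(g1, g2):
    # Combine two streams by adding their outputs modulo 1
    while True:
        yield (next(g1) + next(g2)) % 1.0

def shuffle_combine(filler, indexer, size=64):
    # MacLaren-Marsaglia style: one PRNG fills a table, the other
    # picks which table slot to emit (and refill) next
    table = [next(filler) for _ in range(size)]
    while True:
        j = int(next(indexer) * size)
        out = table[j]
        table[j] = next(filler)
        yield out

g = shuffle_combine(lcg(1, 1103515245, 12345, 2**31),
                    lcg(7, 16807, 0, 2**31 - 1))
sample = [next(g) for _ in range(1000)]
```

Neither sketch is cryptographically meaningful; it only illustrates the plumbing of the two techniques.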
"if you can reduce the randomness..by combining its output in some simple way ..then it certainly wasn't ..strong" WTF!! How's this for starters:
perl -we "for(1..10000000){if (int rand 2 + int rand 2){$one++}else{$zero++} }; print qq/got $one ones and $zero zeros\n/"
I suppose the result "got 5831006 ones and 4168994 zeros" means that I just proved Perl's random number generator is way, way insecure!!! —Preceding unsigned comment added by 82.120.236.246 (talk) 00:58, 15 February 2009 (UTC)[reply]
I don't actually speak Pathologically Eclectic Rubbish Lister so I'm not quite sure what you're doing here. It looks like you're demanding that two random values chosen from 0 to 2 both be less than 1 in order to increment $zero, in which case I'd expect only 25% zeroes from a good RNG. But I certainly wouldn't be surprised if Perl had a bad RNG — in fact, that seems more likely than not.
In any case you seem to have ignored both of my stipulations — that the two PRNGs be unrelated, and that you not make some silly mistake when combining them (like choosing random bits from a 25-75 proposition). --Trovatore (talk) 03:28, 15 February 2009 (UTC)[reply]
int rand 2 + int rand 2 in Perl means int(rand(2 + int(rand(2)))), which should be 0 with probability 5/12 ≈ 0.4167, so Perl's RNG passed this test—though I suspect it wasn't the intended one. -- BenRG (talk) 04:21, 15 February 2009 (UTC)[reply]
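BenRG's reading of the precedence can be checked empirically; here is a Python mock-up of the (mis-parenthesized) Perl expression, with my own names:

```python
import random

def perl_expr(rng):
    # Mimics Perl's  int rand 2 + int rand 2,  which parses as
    # int(rand(2 + int(rand(2)))): half the time the outer range
    # is [0, 2), half the time it is [0, 3)
    return int(rng.random() * (2 + int(rng.random() * 2)))

rng = random.Random(0)
n = 200_000
zeros = sum(perl_expr(rng) == 0 for _ in range(n))
# P(result == 0) = 1/2 * 1/2 + 1/2 * 1/3 = 5/12, about 0.4167
print(zeros / n)
```

The observed frequency of zeros should sit near 5/12, matching the "got 5831006 ones and 4168994 zeros" run above.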

February 15

Formula for partial derivative of A with respect to B given that f(A,B) is constant, applied to vectors

Suppose that we have a vector-valued function of the type <math>f : \mathbb{R}^n \times \mathbb{R}^k \to \mathbb{R}^n</math>.

I am trying to find a formula for the value of

<math>\left(\frac{\partial A}{\partial B}\right)_{f}</math>, the derivative of A with respect to B with f(A, B) held constant.

This is equivalent to the Jacobian matrix of a function <math>g</math> such that if <math>f(A, B) = c</math>, then <math>A = g(B)</math>.

There is a technique for finding this kind of partial derivative of scalar functions, and I tried to generalize it to vector-valued functions:

<math>df = \frac{\partial f}{\partial A}\,dA + \frac{\partial f}{\partial B}\,dB</math>

Holding <math>f</math> constant, we get:

<math>0 = \frac{\partial f}{\partial A}\,dA + \frac{\partial f}{\partial B}\,dB</math>

Multiplying both sides by <math>\left(\frac{\partial f}{\partial A}\right)^{-1}</math> and <math>dB^{-1}</math>:

<math>\frac{dA}{dB} = -\left(\frac{\partial f}{\partial A}\right)^{-1}\frac{\partial f}{\partial B}</math>

I am not sure, but I believe that <math>\frac{dA}{dB}</math> is equivalent to <math>\left(\frac{\partial A}{\partial B}\right)_{f}</math>, which, if it is correct, gives me an answer to my original question. Is this valid, or did I make a mistake somewhere?

On a side note, I'm new to both Wikipedia's formatting and LaTeX; if you have any comments on my formatting, I'd like to hear them.

24.130.128.99 (talk) 02:14, 15 February 2009 (UTC)[reply]

 —Preceding unsigned comment added by 24.130.128.99 (talk) 02:12, 15 February 2009 (UTC)[reply] 
I believe you are right, see implicit function theorem. Upd: k must be equal to n for the inverse matrix to make sense. (Igny (talk) 04:58, 15 February 2009 (UTC))[reply]
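A quick numerical sanity check of the scalar (n = k = 1) case of the implicit function theorem, using a smooth test function of my own choosing: along f(A, B) = const, the slope dA/dB should equal −(∂f/∂A)⁻¹(∂f/∂B).

```python
def f(A, B):
    return A**3 + A*B + B**2      # arbitrary smooth test function

def dfdA(A, B): return 3*A**2 + B
def dfdB(A, B): return A + 2*B

def solve_A(B, c, A0):
    # Newton's method: recover A with f(A, B) = c, starting near A0
    A = A0
    for _ in range(50):
        A -= (f(A, B) - c) / dfdA(A, B)
    return A

A0, B0 = 1.0, 2.0
c = f(A0, B0)
h = 1e-6
# Central finite difference of the implicitly defined A(B)
numeric = (solve_A(B0 + h, c, A0) - solve_A(B0 - h, c, A0)) / (2 * h)
analytic = -dfdB(A0, B0) / dfdA(A0, B0)   # -(df/dA)^(-1) (df/dB)
```

At (A, B) = (1, 2) the analytic value is −(1 + 4)/(3 + 2) = −1, and the finite difference agrees to many digits.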

The Problem I am stuck with

Question


A car of mass 1200 kg, towing a caravan of mass 800 kg, is travelling along a motorway at a constant speed of 20 m/s. There are air resistance forces on the car and the caravan, of magnitude 100 N and 400 N respectively. Calculate the magnitude of the force on the caravan from the towbar, and the driving force on the car.

The car brakes suddenly, and begins to decelerate at a rate of 1.5 m/s². Calculate the force on the car from the towbar. What effect will the driver notice?


I did get the first part, which is quite easy. Given that the objects travel at constant speed, the pull on them must equal the resistive forces to achieve equilibrium (constant speed).

Thus: Magnitude of the force on the caravan from towbar = 400 N

     Driving force = 500 N

MY PROBLEM

I am totally lost at the second part of the question:


The car brakes suddenly, and begins to decelerate at a rate of 1.5 m/s². Calculate the force on the car from the towbar. What effect will the driver notice?


The book says the answer to this part is: 800 N forwards; it will appear that the car is being pushed from behind.

But it doesn't say anything about why this is so. My question or help required is that why this is so? —Preceding unsigned comment added by 202.72.235.208 (talk) 08:54, 15 February 2009 (UTC)[reply]

If the 800 kg caravan is decelerating at 1.5 m/s2, what net force must be acting on it ? 400N of this force comes from air resistance - where does the rest come from ? The car exerts a force on the caravan through the towbar - what does Newton's third law then tell you about the force exerted by the caravan on the car ? Gandalf61 (talk) 09:24, 15 February 2009 (UTC)[reply]
First, a translation for US readers: "caravan" = "trailer". Next, there is an apparent assumption that only the car is braking. Next, we need a diagram:
         400N -> +------+
           __    |      |
         _/  \_  | 800kg|
100N -> |1200kg|-|      |
        +-O--O-+ +-O--O-+ 
Now calculate the total deceleration force needed on the trailer:
F = ma = (800 kg)(1.5 m/s²) = 1200 kg·m/s² = 1200 N
                
Now, if there's a 1200 N deceleration force on the trailer, and 400 N of that is initially provided by wind resistance, the additional 800 N must be provided by the tow bar. Note that either the rate of deceleration will decrease, or the braking force and force transmitted by the tow bar must increase, as the speed (and therefore wind resistance) decreases. StuRat (talk) 16:17, 15 February 2009 (UTC)[reply]
...and the driver may notice that he must apply increasing force on the brake pedal to keep the deceleration constant, and that the wind, engine and tire noises decrease. Cuddlyable3 (talk) 21:01, 15 February 2009 (UTC)[reply]
StuRat, that's a simply phenomenal piece of AsciiArt. Kudos! --DaHorsesMouth (talk) 21:28, 15 February 2009 (UTC)[reply]
Thanks ! StuRat (talk) 17:41, 16 February 2009 (UTC)[reply]

Thanks, people, for your help, and special thanks to StuRat for his visual description; I really appreciate that kind of helping hand. By the way, air resistance was to be kept constant in this problem unless otherwise stated (it was written at the top of the chapter), SORRY, FORGOT TO MENTION. The answer is that the car has to provide an extra 800 N of braking force on top of the 1700 N it needs to brake itself at that deceleration.


                                  400N ->+------+
           __                            |      |
         _/  \_                          | 800kg|
100N -> |1200kg|<-800N-------------800N->|      |
        +-O--O-+                         +-O--O-+ 

Braking(1700 + 800)N ->


The towbar carries equal 800 N forces at its two ends, acting in opposite directions; thus the reaction force is 800 N. —Preceding unsigned comment added by 202.72.235.202 (talk) 18:34, 16 February 2009 (UTC)[reply]
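The arithmetic of this thread, collected in one place (numbers straight from the problem; everything is horizontal, so g never enters):

```python
m_car, m_trailer = 1200.0, 800.0        # kg
air_car, air_trailer = 100.0, 400.0     # N, held constant per the textbook

# Constant speed: the driving force balances total air resistance,
# and the towbar tension balances the trailer's air resistance
driving_force = air_car + air_trailer   # 500 N
towbar_cruise = air_trailer             # 400 N, pulling the trailer forward

# Braking at 1.5 m/s^2: the trailer needs 800 * 1.5 = 1200 N of net
# rearward force; air supplies 400 N, the towbar must supply the rest
a = 1.5
towbar_braking = m_trailer * a - air_trailer   # 800 N, pushing the car forward
```

The 800 N result matches the book's answer and StuRat's diagram.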

Collatz-like sequence

Does anyone know of any results concerning the Collatz-like sequences generated by

<math>f(n) = \begin{cases} n/2 & \text{if } n \text{ is even} \\ 3n-1 & \text{if } n \text{ is odd} \end{cases}</math>

I have found papers on various generalisations of the Collatz conjecture, but I haven't found any results on this specific case.

As far as I can tell, there are three loops:

1 → 2 → 1
5 → 14 → 7 → 20 → 10 → 5
17 → 50 → 25 → 74 → 37 → 110 → 55 → 164 → 82 → 41 → 122 → 61 → 182 → 91 → 272 → 136 → 68 → 34 → 17

and every sequence I have tested eventually enters one of these loops. Having found three loops, I was surprised not to find more - why just three ? Gandalf61 (talk) 09:14, 15 February 2009 (UTC)[reply]

Never mind - I have just realised that these are essentially the same as Collatz sequences if we replace n with −n. Gandalf61 (talk) 11:37, 15 February 2009 (UTC)[reply]
Resolved

StuRat (talk) 15:31, 15 February 2009 (UTC)[reply]
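For anyone curious, the three loops are easy to find by brute force; a sketch assuming the map is n → n/2 for even n and n → 3n − 1 for odd n (the reading consistent with Gandalf61's n → −n remark):

```python
def step(n):
    # The "3n - 1" variant: equivalent to Collatz run on -n
    return n // 2 if n % 2 == 0 else 3 * n - 1

def cycle_min(n):
    # Iterate until a value repeats, then return the smallest
    # member of the cycle as a canonical label for that loop
    seen = set()
    while n not in seen:
        seen.add(n)
        n = step(n)
    start, smallest = n, n
    n = step(n)
    while n != start:
        smallest = min(smallest, n)
        n = step(n)
    return smallest

labels = {cycle_min(n) for n in range(1, 2000)}
print(labels)  # every tested start lands in the 1-, 5-, or 17-cycle
```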

Name the curve

What is the name of the curve (with four cusps) traced out as the envelope of a moving straight line segment of length A, where the endpoints of the segment slide along the X and Y axes respectively?

The equation given for this curve is: x^(2/3) + y^(2/3) = A^(2/3). The curve is similar to the hypocycloid of four cusps (the astroid); however, the line generation appears to be uniquely different.

Vaughnadams (talk) 19:26, 15 February 2009 (UTC)[reply]

According to our article, that is an astroid. Algebraist 19:44, 15 February 2009 (UTC)[reply]
It looks a bit like this. Cuddlyable3 (talk) 20:51, 15 February 2009 (UTC)[reply]
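A quick numerical check that the sliding-segment envelope really is the astroid: for the segment from (A cos t, 0) to (0, A sin t), the point of tangency with the envelope is (A cos³t, A sin³t), which lies both on the segment's line and on the given curve.

```python
import math

A = 2.0  # arbitrary segment length for the check
for t in (0.3, 0.7, 1.1):
    x, y = A * math.cos(t) ** 3, A * math.sin(t) ** 3
    # the envelope point lies on the sliding line x/(A cos t) + y/(A sin t) = 1 ...
    on_line = x / (A * math.cos(t)) + y / (A * math.sin(t))
    # ... and on the curve x^(2/3) + y^(2/3) = A^(2/3)
    on_curve = x ** (2 / 3) + y ** (2 / 3)
    assert abs(on_line - 1.0) < 1e-12
    assert abs(on_curve - A ** (2 / 3)) < 1e-12
```

Both identities reduce to cos²t + sin²t = 1, which is why the check passes for any t in (0, π/2).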

Maths: discovery or invention?

Are mathematical developments discoveries or inventions, and does someone's answer to this question effect the conclusions they can draw? Thanks in advance. 86.8.176.85 (talk) 19:46, 15 February 2009 (UTC)[reply]

This depends on your Philosophy of mathematics. That article has some positions that various people have held. Algebraist 19:53, 15 February 2009 (UTC)[reply]
This is a perennial source of discussion for philosophers; there is no clearly correct answer. It's analogous to solving a crossword puzzle – would you say that you created the solution, or that you discovered it? Even usage among mathematicians is varied. I typically say that I discover a new mathematical object but I invent a new technique. — Carl (CBM · talk) 19:58, 15 February 2009 (UTC)[reply]
One often speaks of constructing a new object also. Algebraist 20:45, 15 February 2009 (UTC)[reply]
Mathematics is discovery in an abstract universe that does not exist but is useful to invent. I think the questioner means affect not effect the conclusions one can draw. The answer to the first question does not affect the conclusions one can draw, only the mathematics can prove or disprove a mathematical conclusion. Cuddlyable3 (talk) 20:46, 15 February 2009 (UTC)[reply]
It can, actually. A realist (about mathematical objects) is forced to conclude that the continuum hypothesis must be either true or false, and may be able to convince himself one way or the other. Some types of antirealist, on the other hand, are able to conclude that CH is without truth value. Algebraist 21:36, 15 February 2009 (UTC)[reply]

A related question would be whether mathematical concepts or techniques are copyrightable or patentable. It's a relevant question when you consider cryptology and compression technologies. I wonder what would be the effect of someone having a patent on the Pythagorean theorem? -- Tcncv (talk) 07:58, 16 February 2009 (UTC)[reply]

I believe there is prior art. 76.126.116.54 (talk) 08:11, 16 February 2009 (UTC)[reply]
Theorems are just statements of fact, I don't think they are copyrightable or patentable. Mathematical algorithms can be patented, but are not copyrightable, as I recall (a given software implementation of the algorithm is copyrightable, though). --Tango (talk) 11:59, 16 February 2009 (UTC)[reply]
Mathematical algorithms are really no different from other mathematical facts and therefore should not be patentable. Indeed, generally they are not patentable outside the United States, see software patent. — Emil J. 13:48, 16 February 2009 (UTC)[reply]
Any copyrighted image or text that can be digitised becomes just a big binary number that may be judged to be an Illegal number. Psssst the secret number is 42 but you didn't hear that from me. Cuddlyable3 (talk) 15:11, 16 February 2009 (UTC)[reply]

Not serious series

1. Counters of beans. Fine mathematicians with too much free time, can you supply the last term of this series:

3 , 3 , 5 , 4 , 4 , 3 , 5 , ?

2. Riddle me this: why does six fear seven ? Cuddlyable3 (talk) 21:19, 15 February 2009 (UTC)[reply]

The answer to 1. is 5. Algebraist 21:23, 15 February 2009 (UTC)[reply]
2: Because 7 8 9. As for 1, I say the answer is pi. -mattbuck (Talk) 21:24, 15 February 2009 (UTC)[reply]
1. is A005589 at the OEIS. Algebraist 21:26, 15 February 2009 (UTC)[reply]
Interesting how they stopped at "one hundred", nicely sidestepping the issue of one hundred one versus one hundred and one (though still vulnerable to the challenge from a hundred). --Trovatore (talk) 23:02, 15 February 2009 (UTC)[reply]
The value of 4 for the noughth entry is also arguable. Algebraist 00:01, 16 February 2009 (UTC)[reply]
Mattbuck, the mission I give you is to take a LARGE piece of paper and write down the exact answer in very small print. Cuddlyable3 (talk) 14:29, 16 February 2009 (UTC)[reply]
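For completeness, the counting behind puzzle 1 (the English names are hard-coded here; a general number-speller is overkill for eight terms):

```python
# Letter counts of the English names of 1 through 8 (A005589)
names = ["one", "two", "three", "four", "five", "six", "seven", "eight"]
counts = [len(name) for name in names]
print(counts)  # [3, 3, 5, 4, 4, 3, 5, 5] -- the missing term is 5
```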

3. I gave a junior class an introduction to using x as a variable (which some of them found baffling) by getting them to evaluate these expressions:

a) when
b) when
One student evaluated a) as 28 which is wrong. He got b) wrong too but at least he was consistent in his method. What was his answer to b) ? Cuddlyable3 (talk) 14:29, 16 February 2009 (UTC)[reply]
57.5 --Tango (talk) 14:46, 16 February 2009 (UTC)[reply]
That's not really wrong, is it? Concatenation can be denoted by juxtaposition just like multiplication. — Emil J. 15:02, 16 February 2009 (UTC)[reply]
You can denote anything you like however you like, but it's not standard. If you want to concatenate digits you would usually use some kind of symbol to denote it (perhaps ). --Tango (talk) 15:11, 16 February 2009 (UTC)[reply]
But the standard notation for concatenation of strings over a finite alphabet in combinatorics, theoretical computer science, and related areas is simple juxtaposition, that's the point. People also use stuff like when they want to be extra explicit, but then again, there is explicit notation for multiplication as well. I've never seen used for concatenation. — Emil J. 15:35, 16 February 2009 (UTC)[reply]
In those areas the symbols usually just represent an arbitrary alphabet, not actual digits (they may use the same symbols, but they aren't actually representing numbers). If you are doing addition and subtraction with them then it is clear that they are numbers. Juxtaposition of numbers (with at least one represented by a variable - obviously the juxtaposition of '2' and '3' means twenty-three) almost universally denotes multiplication. --Tango (talk) 16:15, 16 February 2009 (UTC)[reply]
I have seen || used as well when there are other operations floating around using juxtaposition, as in GromXXVII (talk) 19:56, 16 February 2009 (UTC)[reply]

Mattbuck, almost finished are you? Cuddlyable3 (talk) 20:02, 16 February 2009 (UTC)[reply]

February 16

CASINOS

hello:D

i have a high school assignment on probability.. basically, my group and i have to design a simple, creative and original casino game. and we have to explain the winning/losing probabilities of the game and the concept of the game.. how do you suggest we go about creating the game? any possible suggestions? and what are the key things we have to consider when creating a new game?

thanks. please respond asap. thanks:D --218.186.12.201 (talk) 09:20, 16 February 2009 (UTC)Pearl[reply]

I suggest something involving dice - they are a very easy way of getting randomness. (You could use coins, but they have fewer options, which will probably make the game more boring.) The key thing to remember when making a casino game is that there needs to be a "house edge" - the odds have to favour the casino, otherwise you won't make any money. A very simple game would be for the player to bet some money, then the player and someone from the casino each roll a die; if the player gets a higher roll, the casino gives him money equal to his bet, and if he gets a lower or equal roll, he loses the bet. The fact that the casino wins when there is a tie is what gives the house edge. I suggest you come up with something a little more interesting, though! --Tango (talk) 11:45, 16 February 2009 (UTC)[reply]
Roulette where the wheel is a Reuleaux polygon and the ball is a Meissner tetrahedron. You can bet that's original. Cuddlyable3 (talk) 14:53, 16 February 2009 (UTC)[reply]
Simply giving the house an advantage is one thing, but the most successful games will give the house an advantage without appearing to do so. Or, better yet, they will appear to give the players the advantage. This is why so many carnival games are "fixed". So, let's use this quote about coin tossing from an early post on this Desk: ("It is six times as likely that HTT will be the first of HTT, TTH, and TTT to occur than either of the others" ). So, we could give the players a 2X payout if they manage to flip the TTH or TTT sequence before the HTT sequence, and keep their money otherwise, and they will be certain the odds are in their favor, when they really aren't.
You could also give the players the option to try for HHT or HHH before the THH sequence, with the same 2X payout when they win. This will help if they think the game is fixed in some way. Note that this game requires that they flip the coin continuously and count every sequence of 3. So, don't just count the 1st-3rd sequences and 4th-6th sequences as possible matches, but also look at the 2nd-4th and 3rd-5th coin tosses.
Also note that this doesn't need to use a coin, but any binary event will work. You could use a roulette wheel with black being one outcome and red the other (you'd also need to assign the green zero and double zero to act as either red or black). You could use card draws being red or black, as well. You could use dice throws with odds or evens. StuRat (talk) 18:00, 16 February 2009 (UTC)[reply]
I believe casinos are required by law to publish the odds for their games (in many jurisdictions, probably not all), so that kind of trick doesn't really work. Carnivals are far less regulated. --Tango (talk) 18:12, 16 February 2009 (UTC)[reply]
Yes, but gamblers either don't understand odds or don't care about them, or they wouldn't play at all, would they ? Gamblers tend to go on "instinct", which is frequently wrong, allowing them to be fleeced of their money. StuRat (talk) 18:22, 16 February 2009 (UTC)[reply]
Well, yes, many gamblers are just idiots. Some do know they are likely to lose but consider the enjoyment of playing to be worth it. --Tango (talk) 18:36, 16 February 2009 (UTC)[reply]
Someone said earlier to use dice, which is a good idea. You can design some really cool games with big jackpots and things like that. For example, a 6 could pay you $5 AND give you an extra free roll (you could make a lot if you go on a run of 6's). You can do almost anything you want as far as payouts; the only trick is that once you design the game you have to properly calculate the expected payout and then make the fee to play slightly higher (maybe round up to the nearest dollar for your profit). The core of the assignment is being able to calculate the fair price of the game and then charge a little more. Having fixed payouts like this is, in my opinion, easier, but it is more like a carnival game (where you charge $x a try) than a casino game, because most casino games allow you to bet whatever amount you like. Anythingapplied (talk) 22:01, 16 February 2009 (UTC)[reply]
To make the game interesting the player needs to have choices to make along the way. Perhaps the extra roll could cost something (less than the usual bet), so you have to decide whether to take it or not. (That's a pretty rubbish choice - if you're playing the game, then obviously you think the usual bet is a fair price so, of course, you'll take it if it is cheaper, but you get the idea.) --Tango (talk) 22:14, 16 February 2009 (UTC)[reply]
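StuRat's quoted coin-flip fact is easy to verify by simulation; a Python sketch (the scoring scans every overlapping window of three flips, as he notes):

```python
import random

def first_to_appear(rng, patterns=("HTT", "TTH", "TTT")):
    # Flip coins until one of the 3-flip patterns shows up as a window
    window = ""
    while True:
        window = (window + rng.choice("HT"))[-3:]
        if window in patterns:
            return window

rng = random.Random(1)
trials = 50_000
wins = {"HTT": 0, "TTH": 0, "TTT": 0}
for _ in range(trials):
    wins[first_to_appear(rng)] += 1
# HTT should win about 3/4 of the time; TTH and TTT about 1/8 each,
# so HTT is six times as likely to appear first as either rival
print({p: w / trials for p, w in wins.items()})
```

The reason: the first occurrence of "TT" in the flip stream is either at the very start (probability 1/4, splitting evenly between TTH and TTT) or is preceded by an H, completing HTT on the spot.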

Didn't understand

I couldn't understand the mechanical dialect in this mechanics question.

A drop-forge hammer, of mass 1500 kg, falls under gravity on to a piece of hot metal which rests in a fixed die. From the instant the hammer strikes the piece of metal until it comes to rest, the hammer decelerates at 1.5 m/s².

Find the magnitude of the force exerted by the hammer on the piece of metal (a) while the hammer is decelerating (b) after the hammer has come to rest

Answers written in the book : (a) 17250 N !!!!!!!!!!!!!!WOWOWOWOWOW!!!!!!!!!! (b) 15000 N

PLEASE NOTE THE BOLD PART OF THIS QUESTION, IT MADE ME THINK HARD AND I AM AT ODDS

My View: I understand that there would be a normal reaction force when the hammer hits the metal. Naturally this reaction force would be much greater than the hammer's weight, to provide upward acceleration against the motion (the hammer travels down into the metal piece), as the flexibility of the hot metal allows.

My question is, why would the downward contact force on the piece of metal be 17250 N (the force exerted by the hammer on the metal)?

It should be that the force exerted by the metal on the hammer is 17250 N (2250 N more than the hammer's weight), which is exactly what is needed to provide that deceleration (an upward acceleration against the motion). If the contact force on the metal from the hammer is 17250 N, the reaction would have to be more (but I don't see how and why this is so).


                 |     |
                 |  ?  |  HAMMER
                 |   ^ |
            |---------------|
            |               |
            |     METAL     |



WHAT ARE YOUR VIEWS PEOPLE? —Preceding unsigned comment added by 202.72.235.202 (talk) 19:04, 16 February 2009 (UTC)[reply]

The key to this kind of question is not to worry about the scenario but just to extract the relevant information. The relevant information here is that the hammer has a mass of 1500kg and is decelerating at 1.5 m/s2. Those are the only two facts you need to know. --Tango (talk) 19:15, 16 February 2009 (UTC)[reply]
Item b is the weight of the hammer only. Item a is the weight (W = mg) plus the force required to decelerate that mass at that rate (F = ma). It's also apparent from the answers that they use the rather imprecise value of g = 10 m/s², rather than 9.81. Also, while not part of the problem, you should realize that not all of the force exerted on the metal is passed on to the die. Some of the force is converted into heat and noise or used to deform the metal. StuRat (talk) 03:05, 17 February 2009 (UTC)[reply]
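The book's numbers, reconstructed in two lines (with g rounded to 10 m/s², as StuRat infers):

```python
m = 1500.0   # hammer mass, kg
g = 10.0     # m/s^2, the rounded value the book evidently uses
a = 1.5      # deceleration while in contact, m/s^2

force_decelerating = m * (g + a)  # weight plus F = ma -> 17250 N
force_at_rest = m * g             # weight alone      -> 15000 N
```

By Newton's third law, the same 17250 N the metal exerts upward on the hammer is exerted downward by the hammer on the metal, which resolves the questioner's puzzle: the "reaction" is equal, not larger.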

February 17

Finding unknown constant in function

Disclaimer: I'm not at all good at math, so this may be a stupid question.

I have no idea if the title I chose is at all descriptive, but it was the best I could think of. This isn't homework; in fact I asked my school math teacher and he couldn't tell me. My problem in practical terms would take forever to explain, but I've formalized it (hopefully correctly) as this:

and . Given , and , how can I find ?.

Aforementioned math teacher assures me it's possible; he just couldn't tell me how off the top of his head.

I could easily write a program to brute-force it, but that would be too slow for what I want to use it for (even given OCaml or C) (and besides it's really ugly and I want to know how to do it properly).

Thanks a lot, and apologies if I've screwed up the problem description - let me know if I've missed anything. --Aseld talk 12:47, 17 February 2009 (UTC)[reply]

. Rearrange and solve. Algebraist 13:16, 17 February 2009 (UTC)[reply]
Ah, using the indefinite integral. Thanks a lot. --Aseld talk 13:25, 17 February 2009 (UTC)[reply]
Well, no, it's a definite integral, but you do them by finding the antiderivative (which is the indefinite integral) and substituting in. --Tango (talk) 13:48, 17 February 2009 (UTC)[reply]
What is the difference between an antiderivative and an indefinite integral? Or are they just two names for the same thing? 92.1.184.208 (talk) 15:01, 17 February 2009 (UTC)[reply]
They're the same thing (well, almost. Some authors define the indefinite integral to be the set of all antiderivatives). Antiderivative is a much better term, to my mind. Algebraist 15:04, 17 February 2009 (UTC)[reply]

Simplifying an Expression

This is for part of my homework on potential dividers; I've got this far, but I can't seem to simplify this equation. The equation I'm trying to reach is the one at Potential_divider#Voltage_source. I've got these two equations:

<math>V = I(R_1 + R_2)</math>

and

<math>V_\mathrm{out} = I R_2</math>

and they are supposed to simplify to this equation:

<math>V_\mathrm{out} = \frac{R_2}{R_1 + R_2} \cdot V</math>

Any help with the steps of simplifying this would help, thanks Jammie (talk) 17:31, 17 February 2009 (UTC)[reply]
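Assuming the two starting equations are the usual divider pair, V = I(R₁ + R₂) and V_out = I·R₂ (which is what the linked section derives), eliminating I by substituting I = V/(R₁ + R₂) into the second equation gives the target formula. A numeric spot-check with made-up values:

```python
V, R1, R2 = 12.0, 100.0, 200.0   # arbitrary illustrative values

I = V / (R1 + R2)                 # solve the first equation for I
V_out = I * R2                    # substitute into the second
V_out_formula = V * R2 / (R1 + R2)  # the simplified target

assert abs(V_out - V_out_formula) < 1e-9
```

Both routes give 8.0 V here, confirming that the simplification is just the substitution step.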