Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Fredrik (talk | contribs) at 11:42, 1 May 2009 (→‎Complex integral). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


April 24

Growth of a bacterial colony

Hi, I was wondering if I could obtain some help with the following question: Starting with 1 bacterium that divides every 15 minutes, how many bacteria would there be after 6 hours? Assume none die etc. I thought, OK, first of all how many sets of 15 minutes are there in 6 hours? There are 4 lots of 15 mins in 1 hour, therefore 4 x 6 = 24 fifteen-minute intervals in 6 hours. Nothing too difficult there. I thought then to work it out I would do e^(original number x time), making the equation e^(1x24), which gives an answer of 2.649 x 10^10. I had a look at the answer and it said 16,777,216, which is obtained by the equation 2^24, but I have no idea where this equation 2^24 comes from. I'm also wondering why my calculation involving e was incorrect. If anyone could explain these two things to me it would be a great help. Thanks. —Preceding unsigned comment added by 92.22.189.144 (talk) 08:24, 24 April 2009 (UTC)[reply]

Dividing every 15 minutes means doubling, so there will be twice as many as before. Hence it is the appropriate power of 2 which is wanted. 81.154.108.6 (talk) 08:52, 24 April 2009 (UTC)[reply]
Right. After 15 minutes, there will be 2 (2^1) bacteria. They both will divide after 30 minutes (15x2), and there will be 4 (2^2) bacteria. After 45 minutes (15x3), these 4 bacteria will double and the number of bacteria will be 8 (2^3), so you see there is a pattern. After 15xn minutes, there will be 2^n bacteria. In your case n is 24, so there will be 2^24 bacteria after 15x24 = 360 minutes = 6 hours. I have no idea why you think there should be a connection with 'e'. - DSachan (talk) 11:21, 24 April 2009 (UTC)[reply]

Many thanks for the reply. I am sure there is an equation of exponential growth involving e or would it be log ekt? Not sure. Thanks anyway! —Preceding unsigned comment added by 92.21.233.141 (talk) 13:08, 24 April 2009 (UTC)[reply]

The article on exponential growth is quite helpful in this regard. We could write the number of bacteria as a function of the number of minutes t that have passed as

N(t) = 2^(t/15),

but as the original poster guessed, we could just as well write this formula using base e, as N(t) = e^(t/T).
To get the value of T, we set the two formulas equal and just take the log of both sides:

2^(t/15) = e^(t/T), so (t/15) ln 2 = t/T, giving T = 15/ln 2 ≈ 21.64 minutes.

The number of bacteria at a given time can now be written as N(t) = e^(t ln 2 / 15). (any suggestions to make my math look better would be appreciated). mislih 13:48, 24 April 2009 (UTC)[reply]
A suggestion to make the math look better: try \scriptstyle. Bo Jacoby (talk) 14:13, 24 April 2009 (UTC).[reply]
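For anyone checking the arithmetic, here is a minimal Python sketch (my addition, not part of the original thread) that computes both the base-2 and the base-e forms of the answer:

    from math import exp, log

    intervals = 6 * 60 // 15            # 24 doubling periods in 6 hours
    print(2 ** intervals)               # 16777216 bacteria

    # the same growth written in base e: N(t) = e**(k*t) with k = ln(2)/15 per minute
    k = log(2) / 15
    print(exp(k * 360))                 # 16777216.0 (up to floating-point rounding)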

sci.math.research newsreader question

What's an easy newsreader to use for posting there? (They say you have to use a newsreader now, and I don't know a newsreader from Adam.) thanks, Rich (talk) 09:28, 24 April 2009 (UTC)[reply]


Just go to groups.google.com and follow instructions. McKay (talk) 12:02, 24 April 2009 (UTC)[reply]
thanks Rich (talk) 06:07, 25 April 2009 (UTC)[reply]

Polynomial division

This is actually a homework sum. I tried as many ways as possible but I didn't get an answer. The question is:

If the polynomial x^4 - 6x^3 + 16x^2 - 25x + 10 is divided by another polynomial x^2 - 2x + k, the remainder comes out to be x + a. Find k and a.

I tried the following method:

putting the remainder on the L.H.S.,

If I divide the L.H.S. by g(x), I get a quotient q(x) and a remainder r(x). I equate r(x) to zero because g(x) is a factor of the L.H.S., but I am unable to continue. Please help me--harish (talk) 13:51, 24 April 2009 (UTC)[reply]

The remainder must be the zero polynomial, meaning that both of its coefficients are zero; solving the two resulting equations gives k = 5 and a = -5. Bo Jacoby (talk) 14:07, 24 April 2009 (UTC).[reply]
(e/c) If the remainder is the zero polynomial, this means that all its coefficients are zero, i.e., you get one equation from the coefficient of x and one from the constant term. You can extract k from the first equation, and then you get a from the second equation. However, I think that you made a numerical error; I got a different result for q and r. — Emil J. 14:09, 24 April 2009 (UTC)[reply]
solve(identity(x^4-6*x^3+16*x^2-25*x+10 = (x^2-2*x+k)*(x^2+b*x+c) + (x+a),x),{a,b,c,k}); McKay (talk) 00:40, 25 April 2009 (UTC)[reply]
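For what it's worth, here is a small sympy sketch of the same computation (my addition, assuming sympy is available; it just automates the coefficient matching):

    from sympy import symbols, div, solve

    x, k, a = symbols('x k a')
    p = x**4 - 6*x**3 + 16*x**2 - 25*x + 10
    g = x**2 - 2*x + k

    q, r = div(p, g, x)        # polynomial division of p by g, with x as the variable
    print(r.expand())          # the remainder, linear in x with coefficients involving k
    # the remainder must be identically x + a, so match the coefficients of x^1 and x^0
    print(solve([r.coeff(x, 1) - 1, r.coeff(x, 0) - a], [k, a]))   # k = 5, a = -5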

Holomorphic mapping

Hello, I am trying to prove that any one-to-one and onto holomorphic mapping of the punctured plane C \ {0} onto itself, with a removable singularity at 0, satisfies f(z) → 0 as z → 0, and I am stuck.

I assumed it was something else, c. Then, there must exist some other point that also maps to c, say p. I assume I need to do something with an integral to count the number of zeros of the function f(z) - c, which has two (if I just abuse notation and let this f now represent f with the removable singularity removed). But, I am not sure what I can do. Form a circle big enough to contain 0 and p, call it D. Then, consider the function

(1/2πi) ∮_∂D f'(z)/(f(z) - q) dz,

which is a function of q. At c, the value is 2. I am guessing that I want to prove that for some small neighborhood around c, all numbers in that disc must also have value 2 for this integral, which will contradict the fact that the original map was one-to-one.

So, is this integral function a continuous function of q at c? I am pretty sure I know it is continuous in z but that's not what I want here. I am not sure exactly how this works.

Thanks StatisticsMan (talk) 21:52, 24 April 2009 (UTC)[reply]

Your argument is correct, but just observe that the extended function is a nonconstant holomorphic function, hence it is an open map. Therefore if the pre-image of c has at least 2 points, the same is true in a neighbourhood of c, losing the injectivity on the punctured domain. --pma (talk) 06:56, 25 April 2009 (UTC)[reply]
Let me see if I understand this. If I take a disc around 0, maybe with radius |p|/2 to ensure p is not in it. Then I map this with the holomorphic function, it must be an open set and it contains c. Then, by continuity, the inverse image of that is an open set. But, it now contains p and also at least a small disc around p. But, that means every point in that disc is an inverse image of some point (I didn't name these discs so hard to talk about discs now), which means every non-p point in that disc around p goes to the same value as some nonzero point in the disc around 0, contradiction. Okay, this makes sense and it's much simpler.
Just for my knowledge, can someone please help me understand why that integral is continuous in q? (Nevermind on that, I actually found this in my book finally... and it's in the proof of the open mapping theorem, which makes sense.) Thanks for the help! StatisticsMan (talk) 14:08, 25 April 2009 (UTC)[reply]

How to compute statistical significance of correlation

I have a list of correlations R[] and the associated number of samples for each correlation N[], and I am not sure how to calculate a weight for each correlation which is proportional to the chance that the correlation is non-spurious. Right now I have the weight set to sqrt(N)*R^2, but this does a lousy job of discriminating one pair of (R, N) from another.

I believe you mean "statistically insignificant" instead of "non spurious." Spurious (at least in econometrics) refers to a correlation that is statistically significant due to random chance. Determining whether a correlation is spurious is, at best, non-trivial and, depending on your particular circumstances, quite possibly impossible. Wikiant (talk) 23:22, 24 April 2009 (UTC)[reply]

If you have an assumption of joint normality, and if you mean what I suspect you might mean (but I can't be sure, given what you've said and what you haven't said), an F-test should do it. Michael Hardy (talk) 03:21, 26 April 2009 (UTC)[reply]
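In case it helps, here is one standard recipe (a sketch using scipy; my addition, and not necessarily what the original poster's data calls for): under the null hypothesis of zero correlation, t = r*sqrt((N-2)/(1-r^2)) follows a t distribution with N-2 degrees of freedom, which gives a p-value for each (R, N) pair.

    from math import sqrt
    from scipy import stats

    def corr_p_value(r, n):
        # two-sided p-value for a sample Pearson correlation r from n samples,
        # testing the null hypothesis that the true correlation is zero
        t = r * sqrt((n - 2) / (1 - r * r))
        return 2 * stats.t.sf(abs(t), df=n - 2)

    print(corr_p_value(0.3, 50))   # roughly 0.03

Smaller p-values could then serve as larger weights, though as noted above this measures significance, not whether the correlation is spurious.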


April 25

higher derivatives

Hi, I read the following surprising comment in a maths textbook: "Let f be a function such that the n-th derivative f^(n)(c) exists. Then f^(k) exists in an interval around c for k < n." Surely this is false. Take the function that equals x^n for rational x (for some integer n > 2) and 0 for all irrational x. At x = 0, all derivatives below the nth will exist, but the function isn't even continuous around 0. Have I got this right? It's been emotional (talk) 00:21, 25 April 2009 (UTC)[reply]

It looks like the first derivative exists, but do the higher ones? The first derivative is only defined at 0, don't you need your function to be defined on an interval to differentiate it? --Tango (talk) 00:34, 25 April 2009 (UTC)[reply]

<sheepish grin> oops - I was only thinking with reference to the squeeze theorem, in which case the idea makes sense, but I think you are quite right, and there is no such thing as the higher derivatives. However, if instead of using the lower derivatives in the definition of the n-th derivative, you use a definition of the n-th derivative directly in terms of f, it might be a different story. Then I suspect the squeeze theorem would apply, and the result would follow. Am I right now? It's been emotional (talk) 04:24, 25 April 2009 (UTC)[reply]

The comment in your book is correct, and it's not a surprising result but rather a remark on the definitions. The n-th order derivative of f at c is the derivative at c of the function f^(n-1), therefore as Tango says you need the latter to exist in a neighborhood of c. The same for all lower-order derivatives. Maybe you have in mind a weaker definition of the second derivative, as a limit of second difference quotients of f alone? --pma (talk) 07:16, 25 April 2009 (UTC)[reply]

Exactly what I was thinking. That was in fact the form I came up with, and in the function I gave, you get an easy limit, since f(0) = 0 and 2h is rational exactly when h is, so they are taken from the same subcomponent of f(x) for any h. Is there anything wrong with using this as the definition of the second derivative? If not, it would seem advantageous, since differentiation is usually considered a "good" property for a function to have. Thanks for the help; I have always got good marks in maths, but never really had my rigour checked, so I make careless errors like this. It's good getting things "peer reviewed" so to speak. Much appreciated, It's been emotional (talk) 00:15, 26 April 2009 (UTC)[reply]

I think the primary difficulty is the loss of the property mentioned. It would be a bit of a problem if taking the derivative of something twice did not produce the second derivative, which would be the case if the second derivative was defined in that way. Black Carrot (talk) 02:47, 26 April 2009 (UTC)[reply]
Yes, it seems a notion too weak to be useful, as your example shows (nice!). Also, in the symmetric form I wrote, any even function would trivially satisfy the condition at 0. A somewhat richer property, closer to what you have in mind, is maybe having an n-th order polynomial expansion: f(c+h) = a_0 + a_1 h + ... + a_n h^n + o(h^n) as h → 0. So, n=0 is continuity at c; n=1 is differentiability at c; however for any n it doesn't even imply continuity at points other than c (therefore in particular it doesn't imply the existence of the higher derivatives at c). An interesting result here is that Taylor's theorem has a converse: f is of class C^n if and only if it has n-th order expansions at all points x, with continuous coefficients and remainder o(h^n), locally uniformly. Then the coefficients are of course the derivatives. The analogous characterization of C^n maps holds true in the case of Banach spaces. So "having an n-th order polynomial expansion at a point" is a reasonable alternative notion to "having the n-th order derivative at a point"; they are not equivalent, but having them everywhere continuously is indeed the same. --pma (talk) 10:56, 26 April 2009 (UTC)[reply]

dice

If I roll 64 standard non-weighted 6-sided dice, what are my odds of rolling <= 1 six?

Could you explain this further, please? Are you referring to your score on each die individually, or your total score, or what? It's been emotional (talk) 04:25, 25 April 2009 (UTC)[reply]

I mean: what are the odds that 63 of the 64 dice will not have 6 pips facing up. —Preceding unsigned comment added by 173.25.242.33 (talk) 04:39, 25 April 2009 (UTC)[reply]

If you mean 0 or 1 of them showing a 6, the probability is which is about 5.068 E-48, or 0.0000...005068, with 47 zeros after the decimal point. If you mean exactly 1 six, then that's just , or 5.02 E-48. It's been emotional (talk) 05:44, 25 April 2009 (UTC)[reply]

You surely mean . Bikasuishin (talk) 10:05, 25 April 2009 (UTC)[reply]
And about 1.2*10^-4 for the other possibility. Algebraist 14:33, 25 April 2009 (UTC)[reply]
Thanks, that was my thesis doing things to my brain ;) It's been emotional (talk) 23:56, 25 April 2009 (UTC)[reply]
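For the record, the numbers above are easy to reproduce with a few lines of Python (a sketch of the binomial calculation; my addition, not part of the original exchange):

    from math import comb

    def p_exactly(k, n=64, p_six=1/6):
        # probability of exactly k sixes among n fair dice
        return comb(n, k) * p_six**k * (1 - p_six)**(n - k)

    print(p_exactly(0))                  # about 8.6e-06: no sixes at all
    print(p_exactly(1))                  # about 1.1e-04: exactly one six
    print(p_exactly(0) + p_exactly(1))   # about 1.2e-04: at most one six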

Poker theory and cardless poker

Inspired by This question, I was wondering if any math(s) types could give me an idea of how this game would have to be played. The idea is you pick a poker hand at the start, write it down and then play as if you had been dealt that hand. Clearly to prevent everyone from choosing royal flushes or allowing one hand to become the optimum pick, there need to be rules limiting payment to high hands etc. What rules would be a good start? Any help is hugely appreciated 86.8.176.85 (talk) 05:07, 25 April 2009 (UTC)[reply]

The simplest variant would be to give everyone a pack, and they choose whatever cards, but when you've chosen a hand, that gets discarded. If you need to avoid having a pack, just get everyone to write the cards down, and strike out your cards as they are used, then you can't use them again for that game. With 52 cards, of course that's 10 hands, with two for each player that don't get used, then you start over. That limits a proper game to multiples of 10 rounds, or you just accept that you finish when everyone gets sick of it, and if Fred's been saving his royal flush up, and Sally has already played hers, too bad for him. It's been emotional (talk) 05:50, 25 April 2009 (UTC)[reply]

A large slice of pie

The 50,366,472nd digit onwards of pi is 31415926 (it's true, I'm not making it up). I was wondering what the expectation value is of the digit in pi after which all the digits up to that point are repeated. I think the probability of this happening must be

the sum over n of 10^-n (one term for each possible length n of the repeated block),

which limits to

1/9,

which is less than 0.5. Does this mean that expectation value is meaningless here? SpinningSpark 13:06, 25 April 2009 (UTC)[reply]

Your first assumption would have to be that pi is a normal number, which is not actually known. —JAOTC 13:48, 25 April 2009 (UTC)[reply]
Yes, I was making that assumption (and indeed I knew that it was an assumption - forgive my sloppiness, I am an engineer, not a mathematician). I would still like to know how this can have a finite probability but not have an expectation value. SpinningSpark 14:19, 25 April 2009 (UTC)[reply]
Expectations are only meaningful for random events, and the digits of pi are not random, so you're never going to get a meaningful expectation here. Furthermore, even if pi is normal, it's not obvious (at least to me) that it must contain such a repetition. If instead of considering pi, we consider a random number (between 0 and 1, with independent uniformly-distributed digits, say), then the probability that such a repetition occurs is not 1 (it's not 1/9, either; your calculation is an overestimate due to some double counting). Thus to get a meaningful expectation value for the point at which the first repetition occurs, you need to decide what value this variable should take if no such repetition occurs. The obvious value to choose is infinity, in which case the expectation is also infinite. Algebraist 14:31, 25 April 2009 (UTC)[reply]
The digits of pi certainly are random in the Bayesian paradigm. Robinh (talk) 20:55, 25 April 2009 (UTC)[reply]

One could say the sequence of digits itself is random (regardless of whether π is normal or not) in the sense that it defines a probability measure, thus: the probability assigned to any sequence abc...x of digits is the limit of its relative frequency of occurrence as consecutive digits in the whole decimal expansion. Then one could ask about the expected value. However, if it is not known that π is a normal number, then finding such expected values could be a very hard problem, whose answer one would publish in a journal rather than posting here.

As for being "random in the Bayesian paradigm", from one point of view the probability that they are what they are is exactly 1. Bayesianism usually takes provable mathematical propositions to have probability 1 even though in reality there may be reasonable uncertainty about conjectures. The Bayesian approach to uncertainty is quite mathematical in that respect. Michael Hardy (talk) 03:12, 26 April 2009 (UTC)[reply]

(@MH) As to the frequencies, note that at the moment it is not even known if they have a limit.--pma (talk) 08:27, 26 April 2009 (UTC)[reply]
The digits of pi are random in the Bayesian paradigm because it identifies uncertainty with randomness. In my line of research, we treat deterministic computer programs as having 'random' output (the application is climate models that take maybe six months to run). Sure, the output is knowable in principle but the fact is that if one is staring at a computer monitor waiting for a run to finish, one does not know what the answer will be. One can imagine people taking bets on the outcome. This qualifies the output to be a random variable from a Bayesian perspective. I see no difference between a pi-digits-program and a climate-in-2100 program (some people would take bets on the googol-th digit of pi, presumably). I write papers using this perspective, and it is a useful and theoretically rigorous approach. Best, Robinh (talk) 19:59, 26 April 2009 (UTC)[reply]
I wouldn't advise such a bet due to the existence of these algorithms. I'm not sure what the constants are for the running time, so it is difficult to know if calculating the googolth digit is practical, but I suspect it is. --Tango (talk) 21:54, 26 April 2009 (UTC)[reply]
In 1999 they computed the 4*10^13-th binary digit of pi (and some subsequent ones) this way. It took more than one year of computing. I don't know of any further results. It's a 0, btw. --pma (talk) 23:13, 26 April 2009 (UTC)[reply]

Upper bound on the number of topologies of a finite set

Hi there - I'm looking to prove that the number of topologies on a finite set ({1,2,...,n} for example) doesn't have an upper bound of the form k^n (assuming this is true!), probably by contradiction, having proved 2^n is a lower bound (n>1) - but I'm not sure how to get started - could anyone give me a hand please?

Thanks very much, Otherlobby17 (talk) 20:48, 25 April 2009 (UTC)[reply]

2^n is an upper bound, isn't it? A topology has to be a subset of the power set, which has cardinality 2^n. --20:58, 25 April 2009 (UTC)
Which, of course, means an upper bound of 2^2^n. I apologise for my idiocy. --Tango (talk) 21:09, 25 April 2009 (UTC)[reply]
A topology on a finite set is the same thing as a preorder (OEIS:A000798). But even the number of total orders is already more than k^n for any k. —David Eppstein (talk) 21:03, 25 April 2009 (UTC)[reply]

D. J. Kleitman and B. L. Rothschild, The number of finite topologies, Proc. Amer. Math. Soc., 25 (1970), 276-282 showed that the logarithm (base 2) of the number of topologies on an n-set is asymptotic to n^2/4. So it is smaller than any expression of the form 2^(k^n) for any k>1, which is probably the question you meant to ask. McKay (talk) 01:31, 26 April 2009 (UTC)[reply]

Very helpful thankyou - but I did mean to ask about rather than , having already known that is a (crude) upper bound, I was wondering if it could be improved to the extent of being OTF for some k - since I generally see it quoted as I assumed there was no such form, hence my question. Thanks very much for the information! How do we know it's a preorder? Is the number of total orders smaller than the number of topologies then? Thanks again, Otherlobby17 (talk) 01:58, 26 April 2009 (UTC)[reply]

How do we know: simply define a preorder by x ≤ y iff every open set containing x also contains y; see the quoted link.--pma (talk) 08:09, 26 April 2009 (UTC)[reply]
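A brute-force check of the small cases is short enough to show here (my addition, not from the original thread): it counts the topologies on an n-element set by testing every family of subsets for closure under union and intersection, reproducing the first terms of OEIS A000798.

    from itertools import combinations

    def count_topologies(n):
        universe = (1 << n) - 1                     # the whole set, as a bitmask
        others = [m for m in range(universe + 1) if m not in (0, universe)]
        count = 0
        for k in range(len(others) + 1):
            for extra in combinations(others, k):
                family = set(extra) | {0, universe}
                # a topology must be closed under pairwise union and intersection
                if all((s | t) in family and (s & t) in family
                       for s in family for t in family):
                    count += 1
        return count

    for n in range(5):
        print(n, count_topologies(n))               # 1, 1, 4, 29, 355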

Homework problem

As the title suggests, this is a homework problem but I only need to be told what a question means. Given the equation of an ellipse I am told that "The point N is the foot of the perpendicular from the origin, O, to the tangent to the ellipse at P.' I'm confused by the use of the word foot because if it's being used in the way I've seen it used before then in this case it would mean the origin but that can't be right. What is it meant to mean? Thanks 92.3.150.200 (talk) 21:33, 25 April 2009 (UTC)[reply]

I would guess it simply means the point where the two lines (L1 = the tangent and L2 = the line through the origin and perpendicular to the tangent) intersect. This MathWorld article seems to think the same. —JAOTC 21:45, 25 April 2009 (UTC)[reply]
Yes, that's what the foot of a perpendicular usually means. --Tango (talk) 21:50, 25 April 2009 (UTC)[reply]
So if this makes it clear, you would also find that if the tangent was a vertical or horizontal line, then N and P would be the same point, on the ellipse. Otherwise, N would be outside the ellipse. It's been emotional (talk) 00:02, 26 April 2009 (UTC)[reply]
I think it's not whether it's vertical or horizontal but whether dy/dx at P corresponds to the slope of a circle going through P. The slope of a circle at (x,y) is -x/y. The slope of an ellipse at (x,y) is -x/ay. So when their slopes times a particular constant factor are equal, the OP's N and P are the same point. I guess an ellipse's tangent is equal to a circle's tangent (at the same point) either 4 times (there's your "vertical or horizontal") or at every point (if the ellipse is a circle). .froth. (talk) 03:46, 26 April 2009 (UTC)[reply]
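A quick numerical check of the claim that N and P coincide exactly where the tangent is vertical or horizontal (a sketch with made-up semi-axes, my addition, not part of the original discussion):

    import numpy as np

    a, b = 3.0, 2.0                                   # semi-axes of x^2/a^2 + y^2/b^2 = 1

    def foot_of_perpendicular(t):
        P = np.array([a * np.cos(t), b * np.sin(t)])  # point on the ellipse
        d = np.array([-a * np.sin(t), b * np.cos(t)]) # direction of the tangent at P
        # N is the point on the tangent line closest to the origin
        return P - (P @ d) / (d @ d) * d

    for t in (0.0, np.pi / 2, 1.0):
        P = np.array([a * np.cos(t), b * np.sin(t)])
        N = foot_of_perpendicular(t)
        print(np.round(N, 3), "N == P" if np.allclose(N, P) else "N != P")

At t = 0 and t = π/2 (horizontal and vertical tangents) the two points agree; at a generic t they do not, and N lies outside the ellipse.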


April 26

Convert a computer file to a non-negative Integer number

How do I convert an ordinary computer file into a non-negative Integer number. What I'm interested in is the contents of the file. The name of the file is irrelevant and need not be saved.

For example: a file with the size of 1 byte and the value of 16 can easily be converted to the binary number 10000, which has the decimal value 16.

However, I realize that this method does not work, because what if I have

a file with a size of 2 bytes and the hexadecimal value "00 10"? Would this not also convert to the decimal value 16?

Clearly I must somehow also encode the size of the files in bytes as well as the actual value of the file.

What is the best way of converting an ordinary computer file to a non-negative Integer number which uses the least amount of numerical digits? 122.107.207.98 (talk) 00:36, 26 April 2009 (UTC)[reply]

Quick and dirty fix; tack a 1 in front of every file. So the first file would become x110, in decimal 272. The second file becomes x10010, or in decimal 65552. Taemyr (talk) 00:42, 26 April 2009 (UTC)[reply]
Since the total number of files with n or fewer bits is 2^(n+1)-1, Taemyr's method is exceedingly close to optimal. McKay (talk) 01:37, 26 April 2009 (UTC)[reply]
I think that's wrong. Try it for n=3 or n=4; your constant is off by one. I think it should be 2^(n+1)-2. .froth. (talk) 04:24, 26 April 2009 (UTC)[reply]
You forget the empty file. I have lots of them on my computer so I know they exist. :) McKay (talk) 08:23, 26 April 2009 (UTC)[reply]
Oh and have a look at arithmetic encoding although it doesn't really help your integer situation curse you tiredness it works fine. I suspect this is the optimal that taemyr's bit of waste approaches. .froth. (talk) 04:25, 26 April 2009 (UTC)[reply]

Taemyr, your method is brilliant.

  • no file has the decimal representation value of 0
  • a file with zero bytes has the decimal representation value of 1
  • a file with 1 byte and hex value of 00 has the decimal representation value of 256
  • a file with 2 bytes and hex value of 00 00 has the decimal representation value of 65536
  • a file with 1 byte has the decimal representation value ranging from 256-511
  • a file with 2 bytes has the decimal representation value ranging from 65536-131071

eh? what kind of file has the decimal representation value of 2? It does seem to me that the range 2-255 is not being used. And the range 512-65535 is also not being used. 122.107.207.98 (talk) 10:46, 26 April 2009 (UTC)[reply]

Here's a better way: treat the file as a base-256 number, but with the bytes representing digit values of 1–256 instead of 0-255. This maps the empty file to 0, the one-byte files to 1–256, the two-byte files to 257–65792, and so on, and it's easy to compute:
   Integer int_of_file(FILE* f) {
       Integer n = 0, place = 1;
       int c;
       while ((c = getc(f)) != EOF) {
           n += place * (c + 1);
           place *= 256;
       }
       return n;
   }
   void file_of_int(FILE* f, Integer n) {
       while (n) {
           putc((n - 1) % 256, f);
           n = (n - 1) / 256;
       }
   }
That treats the file as little-endian. Big-endian is a bit more tricky as you have to search for the maximum place value in file_of_int. -- BenRG (talk) 11:04, 26 April 2009 (UTC)[reply]
For reference, here are the working python codes, ready to use

file2num.py

#!/usr/bin/python
import sys

if __name__ == "__main__":
  if len(sys.argv) != 2:
    print "Usage: file2num.py inputfile"
    sys.exit()
  num=0
  place=1
  myfile=open(sys.argv[1])
  while 1:
    char=myfile.read(1)
    if char:
      num = num + place * (ord(char) + 1)
      place = place * 256
    else:
      break
  print num

num2file.py

#!/usr/bin/python
from __future__ import division
import sys

if __name__ == "__main__":
  if len(sys.argv) != 2:
    print "Usage: num2file.py numberfile"
    sys.exit()
  numfile=open(sys.argv[1])
  num=int(numfile.readline())
  while num:
    c = (num - 1) % 256
    num = (num - 1) // 256
    sys.stdout.write(chr(c))

122.107.207.98 (talk) 03:14, 2 May 2009 (UTC)[reply]

Birthday paradox

If A doesn't share a birthday with B and B doesn't share a birthday with C, then there's no way A can share a birthday with C. Doesn't this ruin the calculations that "prove" the unintuitive result of the birthday paradox? Also there are circular relationships with 4 people, and 5 people, and n people.. What's the actual graph look like, or at least what's the 50/50 point? .froth. (talk) 03:26, 26 April 2009 (UTC)[reply]

Yes there is - if A and C are born on the 1st January, B is born on the 2nd of January, then A doesn't share a birthday with B who doesn't share a birthday with C, but A shares a birthday with C. 'Not sharing a birthday' does not have transitivity, whereas sharing a birthday does, so in fact sharing a birthday is an equivalence relation - don't expect to see it turning up on exams any time soon though... Otherlobby17 (talk) 03:45, 26 April 2009 (UTC)[reply]
Oh right >_< But sharing is transitive so should the graph actually be higher? .froth. (talk) 03:52, 26 April 2009 (UTC)[reply]
What makes you think that somehow the calculation would ignore the possibility of more than two sharers? It doesn't. (The actual, real-life, 50/50 point is a little higher due to leap days but may also be influenced by systematic biases in when people are born—but I'm pretty sure it's still above 23 and below 24.) —JAOTC 09:39, 26 April 2009 (UTC)[reply]
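The exact calculation behind the 23-person figure is short enough to show here (a sketch, my addition, not part of the original thread):

    def p_shared_birthday(n, days=365):
        # probability that at least two of n people share a birthday,
        # assuming all days are equally likely and ignoring leap years
        p_all_distinct = 1.0
        for i in range(n):
            p_all_distinct *= (days - i) / days
        return 1 - p_all_distinct

    print(p_shared_birthday(22))   # about 0.476
    print(p_shared_birthday(23))   # about 0.507 -- 23 is the smallest group exceeding 50%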

The Gateaux derivative - how is it defined?

Our article on the Gateaux derivative defines it this way:

A function f : U ⊂ V → W is called Gâteaux differentiable at x ∈ U if f has a directional derivative along all directions at x. This means that there exists a function g : V → W such that
g(h) = lim (t→0) of [f(x + th) - f(x)]/t
for any chosen vector h in V, and where t is from the scalar field associated with V (usually, t is real).

My question is whether anyone can confirm that this is correct, because I thought it required that t approach 0 from above, ie. t is always positive. The reason is that the influence function is considered a special Gateaux derivative, and that definitely requires that t approach from above. Thanks in advance, It's been emotional (talk) 08:35, 26 April 2009 (UTC)[reply]

Well, the standard definition (for V, W Banach spaces or also TVS, U open in V) is: f is Gâteaux differentiable at x iff what you wrote happens, with g a linear continuous operator; in this case you may equivalently take t positive in the definition, for g(-h) = -g(h). (By the way, I do not like so much the distinction between G-derivative and G-differential as it is made in the link; if g is not a linear continuous operator, people would just say "f has directional derivatives in all directions h", with no further names for this). --pma (talk) 09:06, 26 April 2009 (UTC)[reply]
Aren't you describing the Frechet derivative? My understanding was that the Gateaux derivative differed from the Frechet derivative in that the derivative did not have to be linear.76.126.116.54 (talk) 20:51, 26 April 2009 (UTC)[reply]
No, both differentials are linear continuous operators, but Fréchet differentiability is a stronger condition, in that it is required that f(x+h)-f(x)-Lh = o(h) as h tends to 0. A standard example of a function on the plane that is differentiable at the origin in the Gâteaux but not the Fréchet sense is f(x,y) = x^2 y/(x^4 + y^2) (with f(0,0) = 0), which is not even continuous. --pma (talk) 21:26, 26 April 2009 (UTC)[reply]

Thanks, pma that's much clearer (though I'm still trying to work out if the function you gave really is discontinuous, rather than just not Frechet differentiable). I got the stuff I cut and pasted from our article Frechet derivative, which seems completely wrong. If you can confirm that for me, I'll get to work editing it, either soon or at least I'll make a note of it for when my thesis is done (I'm under the hammer at the moment). I'll add an acknowledgement of you and the ref desk for the help too. Thanks also to 76.126, because I also had the same question. It's been emotional (talk) 08:27, 27 April 2009 (UTC)[reply]

Well, maybe it's not completely wrong, but it uses a definition of Gâteaux derivative that is not the standard one. Also, usually differentiability and derivability are synonymous (both in the F. and in the G. context). At most, some authors distinguish between "differential" and "derivative", preferring the latter for functions of one (real or complex) variable, so that the differential is always the linear map, and the derivative is the usual limit vector, the two being linked by the identities df(x)[h]=f'(x)h and f'(x)=df(x)[1]. Great books on differential calculus in Banach spaces: Cartan; Dieudonné; also, the first chapter of Hörmander has a short but complete and perfect introduction. Going back to the function of the example, I think it is constant on the graph of any parabola (x, cx^2), x>0, with a constant depending on c (this shows the discontinuity at the origin). --pma (talk) 11:57, 27 April 2009 (UTC)[reply]

Thx, all clear now! I think I'll at least tag that page, because it does need editing - it's inconsistent with the page on Gateaux derivatives. cheers, It's been emotional (talk) 02:35, 29 April 2009 (UTC)[reply]

Pólya enumeration theorem

I am having trouble in reconciling the Pólya enumeration theorem I am reading from my book, and what is given here on wikipedia.

My book states: Suppose S is a set of n objects and G is a subgroup of the symmetric group Sn. Let PG(X) be the cycle index of G. Then the pattern inventory for the nonequivalent colorings of S under the action of G using colors y1, y2, ..., ym is PG(y1 + ... + ym, y1^2 + ... + ym^2, ..., y1^n + ... + ym^n).

Here a pattern inventory of the colorings of n objects using the m colors is the generating function Σ c(n1, ..., nm) y1^n1 y2^n2 ... ym^nm. The sum here runs over all vectors of nonnegative integers (n1, ..., nm) satisfying n1 + ... + nm = n; c(n1, ..., nm) represents the number of nonequivalent colorings of the n objects where the color yi occurs precisely ni times. For example, by looking at the pattern inventory of the colorings of 4 objects (beads) by 3 colors (r, g and b) and taking G=D4, we can see that the coefficient of r^2 g b is 2, so there are two necklaces with 4 beads possible using these three colors. I understand this fully.


The wikipedia article however seems to take a more general approach. It starts off with two sets X and Y. I assume that X stands for the 4 beads and Y the colors {r,g,b}. The colors are then accorded some weights. Then the colorings are also assigned weights. Now it defines a generating function c(t) whose coefficients are the number of colors of a particular weight. I am having difficulty in reconciling this with what I have understood from my book. Specifically it would help if someone could clarify these things to me:

  • Does the WP article take the approach that the number of possible colors is infinite?
  • What are the weights in the necklace problem that I have outlined?
  • What do weights signify in general?
  • How is my book's PET equivalent (or a special case of) WP's PET?

Thanks--Shahab (talk) 09:44, 26 April 2009 (UTC)[reply]
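As an illustration of the pattern inventory described above, here is a small sympy sketch (my own illustration, not from the book or the article) that plugs the power sums of the three colours into the cycle index of D4 and reads off the coefficient of r^2 g b:

    from sympy import symbols, expand, Rational

    r, g, b = symbols('r g b')
    def power_sum(i):
        return r**i + g**i + b**i

    # cycle index of the dihedral group D4 acting on the 4 beads of a necklace
    Z = Rational(1, 8) * (power_sum(1)**4 + 2*power_sum(4)
                          + 3*power_sum(2)**2 + 2*power_sum(1)**2 * power_sum(2))

    inventory = expand(Z)
    print(inventory.coeff(r, 2).coeff(g, 1).coeff(b, 1))   # 2 necklaces of type r,r,g,b
    print(inventory.subs({r: 1, g: 1, b: 1}))              # 21 necklaces in total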

Example in the article about the Hermite interpolation

I looked at the article about Hermite interpolation [1] and I tried to reproduce the example, however I could not figure out where the 28 comes from (third row, fourth column).

It's also not clear to me what the columns (after the x and f(x) values) contain. The column which starts with -8 contains the first derivative; if the values on the left are equal to the values one row above, then it seems to contain the derivative, but what otherwise?

It would be great if you could explain this a bit, so we could edit the example and make it easier to understand.

Thx in advance!

--F7uffyBunny (talk) 18:27, 26 April 2009 (UTC)[reply]

--- http://en.wikipedia.org/wiki/Hermite_interpolation

Figured it out and updated the article —Preceding unsigned comment added by F7uffyBunny (talkcontribs) 21:06, 26 April 2009 (UTC)[reply]


While waiting for a more specific answer to your question, notice that you can easily make free links using double square brackets. As to the Hermite interpolation, also have a look at Chinese theorem#Applications. --pma (talk) 21:10, 26 April 2009 (UTC)[reply]

Digit distribution

Periodically in "detective stories" you get a plot involving someone faking a set of accounts or some other list of numbers and they don't meet the normal statistical usage of digits wherein 1 is much more frequent than 9. Two parts to this: (1) does this apply for item sales records where I would expect an excess of ".99" to come up since stores love this price break. (2) If the list is the sort where that analysis applies, how should you fake it from a list of random numbers wherein each digit is equally likely - no I'm not planning anything fraudulent. -- SGBailey (talk) 22:10, 26 April 2009 (UTC)[reply]

Benford's law is about the leading digit. I haven't seen statistics about prices but I would guess it partially applies there. Selection of prices just below a round number could very well cause a deviation from it. PrimeHunter (talk) 23:09, 26 April 2009 (UTC)[reply]
For realistic distribution of the first digit, use 10**random(). Then you might want to round the result to an integer, and subtract .01 or .05. You'll also need to adjust this formula somehow for a distribution of magnitudes (number of digits). —Tamfang (talk) 23:42, 26 April 2009 (UTC)[reply]
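Tamfang's 10**random() recipe is easy to test; a quick sketch (my addition, not part of the original answers):

    import random
    from collections import Counter

    # leading digits of 10**u with u uniform in [0,1) follow Benford's law
    samples = [str(10 ** random.random())[0] for _ in range(100000)]
    print(sorted(Counter(samples).items()))
    # roughly 30.1% ones, 17.6% twos, ..., 4.6% nines, i.e. log10(1 + 1/d) for digit d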


April 27

The Greatest Integer Function

Hello, I am trying to prove that



is true for all positive integers where the square brackets represent the greatest integer function. I reasoned that if I can show that



for an integer m, then I am done because that is a definition of the floor function. So in order to prove this, I have shown that the difference between these functions is always between zero and one. Furthermore, two of the inequalities are easy to show but the other two are hard. Any ideas?--68.121.32.160 (talk) 03:08, 27 April 2009 (UTC)[reply]

If the claim is not true, there is some integer m and some value a between 1 and 2 such that
The solution to this equation is
For a in (1,2) the fractional part of the right side lies in
for even and odd m, respectively, so it can't be an integer. McKay (talk) 04:12, 27 April 2009 (UTC)[reply]

I understand everything perfectly except for the fractional part. How did you arrive at those bounds for the fractional part? Why does it matter if the integer is odd or even and how did you get those intervals? Thanks!68.126.127.36 (talk) 07:51, 28 April 2009 (UTC)[reply]

If m is even, say m=2k, then
Now you can check that for a in (1,2), that value is always strictly between k^2-1 and k^2.
If m is odd, say m=2k+1, then
which is strictly between k^2+k-1 and k^2+k. McKay (talk) 08:20, 28 April 2009 (UTC)[reply]

Why is "dense-in-itself" a useful notion?

After committing the embarrassing rookie's mistake of linking the phrase "dense in itself" in the sentence "a nowhere dense set is always dense in itself" to dense-in-itself, I got myself thinking: why is the notion of a set being dense-in-itself at all useful? Are there any interesting non-trivial properties of topological spaces without isolated points? The topology books I know may define the notion but only to never mention it again. — Tobias Bergemann (talk) 07:06, 27 April 2009 (UTC)[reply]

Perfect sets are, by definition, closed dense-in-itself sets, and they appear in various contexts, see e.g. Cantor–Bendixson theorem. — Emil J. 10:27, 27 April 2009 (UTC)[reply]
A complete metric space with no isolated points essentially has a subspace that is the continuous injective image of the cantor space. A similar result that (possibly) applies to a larger class of spaces asserts that any locally compact Hausdorff space without isolated points, has cardinality at least that of the continuum. The proofs of these facts are relatively simple if you were to attempt the proofs yourself. The idea embedded within the previous assertions is that spaces with no isolated points, having certain properties, must essentially be "large". --PST 10:39, 27 April 2009 (UTC)[reply]
Thank you both for your answers. I really should have thought of perfect sets and the Cantor-Bendixson theorem myself. I just couldn't think of anything interesting to say about a topological space about which it only is known that it has no isolated points and nothing else (a perfect space). — Tobias Bergemann (talk) 12:29, 27 April 2009 (UTC)[reply]

Proving or disproving a homeomorphism with [0,1]

Hi there guys - I was wondering about how to go about showing that is or is not homeomorphic to ? I don't want you to tell me how to do it, but what would you suggest to get started? I know the 2 sets have the same cardinality so that won't rule the possibility of bijection out, but I'm less certain about continuity - could anyone suggest anything to get me going? I imagine if they aren't homeomorphic I'll simply want to find a topological property they don't share, but I'm not sure where to start looking or whether that's even the case...

Also, I'm trying to find a homeomorphism between and - is the function continuous in the topological sense between these 2 sets?

I hope I'm not asking too much - Thanks a lot! Spamalert101 (talk) 08:39, 27 April 2009 (UTC)[reply]

(On a formatting note, how do I get my 2 sets to display in the same sized font?) Spamalert101 (talk) 08:40, 27 April 2009 (UTC)[reply]
To begin my response, let me stress that much of topology is intuitive. I do not think that it is worth it to worry too much about proving whether two spaces are homeomorphic or not if you absolutely see it intuitively - an exception being when the equivalence of two spaces may be of crucial importance in a theorem. If you have first learnt the concept however, it is nice to construct a few homeomorphisms.
To expand on my previous point, finding such a property is equivalent to solving the first problem. Initially, the idea is to recall the connection between the factors of a product space (assuming the product topology - not that it matters in this case, even if you were to choose the box topology) and the product itself. Often one can say a lot about the product given information about its factors; this assertion lies within the continuous projection maps onto the factors. Therefore, it is necessary to find a property that is preserved under continuous maps, and that is shared by [0,1] but not by a finite discrete space.
Secondly, to check the continuity of the map given is equivalent to checking continuity of its restriction to each "piece". This is because, essentially, the two pieces are "far from each other" (or more precisely, their closures are disjoint), and since continuity is "points close together get mapped to points close together", we only need to consider the map defined on each piece separately. Doing so is simply basic calculus.
Hope this helps. Let me add that it is nice to have a question on topology once in a while! Rarely are questions on fields outside calculus asked. --PST 10:24, 27 April 2009 (UTC)[reply]
To the first question: [0,1] is connected, whereas is disconnected (indeed, totally disconnected), hence they are not homeomorphic.
To the second question: yes, your function is continuous, and in fact a homeomorphism. However, you are making it unnecessarily complicated: works just as well. — Emil J. 10:19, 27 April 2009 (UTC)[reply]
Let me note, Emil J., that the OP requested specifically that only a hint be given (to get him/her started) rather than the answer. --PST 10:25, 27 April 2009 (UTC)[reply]
Don't worry, having the answers will be useful to check my own suggestions against; having (oddly) read upwards from the bottom of the post I managed to avoid the given solutions themselves whilst reading, but will certainly come back to them after attempting the rest of the problem - Thank you both for the help, and I'll be sure to bring a couple more topology questions your way in the future! ;) Spamalert101 (talk) 11:26, 27 April 2009 (UTC)[reply]

Can every unit algebraic number be expressed as a root of unity?

What I mean by "unit algebraic number" is an algebraic number which has an absolute value of 1. Root of unity, of course, means a solution to Z^n - 1 = 0 for some positive integer n.

I believe this is equivalent (in light of the fundamental theorem of algebra and closure of algebraic numbers under multiplication) to saying that every polynomial with rational coefficients can be expressed, by multiplying it by some other polynomial and then factoring, as a product of polynomials of the form (aZ)^n - 1 for some quadratic a and positive integer n, in addition to some polynomial of the form Z^n for positive integer n, for the zero roots. Of course, it doesn't matter whether a is a coefficient of Z^n or directly with Z, but in the latter case, it has the more intuitive meaning of being the reciprocal of the magnitude of Z.

Is that statement true for polynomials with any complex coefficients, letting a be any real number?

All responses appreciated. --COVIZAPIBETEFOKY (talk) 12:47, 27 April 2009 (UTC)[reply]

I think the answer to your first question is "no". Consider the ring of algebraic integers in Q(sqrt(2)). Then 3+sqrt(2) is a unit in this ring, because its minimal polynomial is x^2 - 6x + 1 (its associate is 3-sqrt(2)). But 3+sqrt(2) is clearly not a root of unity - all its integer powers are greater than 1. Gandalf61 (talk) 13:17, 27 April 2009 (UTC)[reply]
Note that the OP uses nonstandard terminology. The absolute value of 3+sqrt(2) is not 1, so it is not a "unit number" the way he defined it. — Emil J. 13:21, 27 April 2009 (UTC)[reply]
Nevertheless, the answer is still "no". The algebraic number (3 + 4i)/5 has absolute value 1, but it is not a root of unity, as its minimal polynomial is 5x^2 − 6x + 5. — Emil J. 13:34, 27 April 2009 (UTC)[reply]
This result gives a large number of counterexamples, namely any a+bi where (a/b)^2 is rational and not in the set {0, 1/3, 1, 3}. -- BenRG (talk) 14:00, 27 April 2009 (UTC)[reply]
A simple example is u:=2+i. It is easy to see (induction) that for all even natural numbers n, 5 divides u^n+u (i.e. it divides both the real and the imaginary part). Therefore no positive integer power of u is a real number. Hence u/|u| is an algebraic number of modulus 1, not a root of unity. --pma (talk) 14:32, 27 April 2009 (UTC)[reply]
I'm not sure I understand how you can use the minimal polynomial to predict whether a number will be a root of unity; after all, the minimal polynomial of -1/2+i√(3)/2 is X^2 + X + 1, but it is a 3rd root of unity.
However, I do have an understanding of why (3+4i)/5 wouldn't be a root of unity, unrelated to its minimal polynomial: because its angle (roughly 0.927295 radians or 53.130102 degrees) is an irrational multiple of 2π, adding the angle to itself many times will never give a multiple of 2π. Thus, multiplying the number by itself will never yield 1. This sheds some light on BenRG's set of counterexamples. I also might understand pma's explanation if I mull it over a bit.
Also, apologies for using "nonstandard terminology". I haven't taken an actual class on the material, and I thought I could get away with it if I actually explained what I meant thoroughly in the text. But who reads the actual text? Silly me...
Thanks for the help. I suppose I should have been able to figure this out by myself, but I was thinking about it in terms of polynomials (the latter half of my question) rather than the numbers, which seems to have muddied things up a bit. Any pointers to getting a better understanding of the behavior of the polynomial side of the question? --COVIZAPIBETEFOKY (talk) 15:00, 27 April 2009 (UTC)[reply]
Well, yes, the angle atan(4/3) is an irrational multiple of 2π, but how do you prove that? It's no easier than showing that (3 + 4i)/5 is not a root of unity in the first place.
As for minimal polynomials: sorry I wasn't more clear on this point. Roots of unity are algebraic integers, hence their primitive minimal polynomials are monic (or equivalently, their monic minimal polynomials have integer coefficients). 5x^2 − 6x + 5 is a primitive irreducible polynomial and it is not monic, hence its roots are not roots of unity. — Emil J. 15:13, 27 April 2009 (UTC)[reply]
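A quick way to see both facts at once is to ask a computer algebra system for the minimal polynomial (a sketch using sympy; my addition, not part of the original replies):

    from sympy import I, Abs, minimal_polynomial, simplify, symbols

    x = symbols('x')
    z = (3 + 4*I) / 5
    print(simplify(Abs(z)))            # 1, so z lies on the unit circle
    print(minimal_polynomial(z, x))    # 5*x**2 - 6*x + 5: not monic over the integers,
                                       # so z is not an algebraic integer, hence not a root of unity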

Disjoint balls into a ball in R^n

Hi there, since you enjoyed the last topology question so much I figured I might send another one or two your way! I'm revising it right now so you might end up getting a good few if you don't mind lending me a little more help!

I've shown that there do not exist 2 disjoint closed balls of radius 1 inside a closed ball of radius 2 in Euclidean space, but I'm now trying to find how many disjoint closed unit balls fit inside balls of radii 3.001 and 2.001 - the first is apparently at least k^n for some k>0, but how do I go about beginning to prove it? Thanks very much again for the help and if I'm asking too much just say!

Spamalert101 (talk) 14:54, 27 April 2009 (UTC)[reply]

So you want to pack k disjoint unit n-dimensional Euclidean balls into a ball B_r of radius r. I took the liberty of using open unit balls instead of closed, since the problem remains essentially the same, and things are easier to describe. Now, although these problems are generally difficult, at least in the case of few balls (k ≤ n+1) the situation is quite simple: take the balls pairwise tangent, that is, put their centers at a distance 2 from each other. With a small computation this gives a radius 1 + sqrt(2 - 2/k) for the minimal ball containing them. Also, it is not immediate but not even hard to prove that this is actually the least radius r such that there are k disjoint unit open balls inside B_r: in other words, the minimizing configuration for the k balls is the one above, where they are pairwise tangent. In particular, in dimension at least 2, there are three disjoint open unit balls inside B_r iff r ≥ 1 + 2/sqrt(3). Another consequence is that if r < 1 + sqrt(2), the number of unit balls inside B_r is bounded independently from the dimension. On the other hand, as soon as r > 1 + sqrt(2), in dimension n the number of disjoint unit balls that fit inside B_r grows with n, so the number of balls is unbounded as dimension increases, and very difficult to count exactly. For r around 3 I guess that the maximum number of unit balls in B_r is obtained with one ball concentric with B_r and all the others tangent to this one (as far as I see, it could be trivially true or trivially false, or an open problem). If this is true, the max number of unit balls in such a ball is 1 plus the kissing number, which is still a topic of current research. PS: Your question has the following nice Hilbert space version: there are infinitely many disjoint unit open balls inside the ball of radius 1 + sqrt(2) of an infinite dimensional Hilbert space (just take them centered at sqrt(2)·e_n, where (e_n) is an orthonormal basis). But if you take the radius any smaller, then only finitely many disjoint unit open balls can be located in a ball of that radius. Life in Hilbert space is curious... (PPS: of course, feel free to ask for details if needed) pma (talk) 21:47, 27 April 2009 (UTC)[reply]
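To make the pairwise-tangent configuration above concrete, here is a small numerical sketch (my own addition; it assumes, as in the reply above, that the k centres sit at the vertices of a regular simplex with edge length 2):

    from math import sqrt

    def enclosing_radius(k):
        # 1 + circumradius of a regular simplex on k vertices with edge length 2
        return 1 + sqrt(2 - 2 / k)

    for k in (2, 3, 4, 10, 100):
        print(k, round(enclosing_radius(k), 4))
    # 2 -> 2.0, 3 -> 2.1547, 4 -> 2.2247, ... approaching 1 + sqrt(2) ≈ 2.4142 as k grows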

April 28

Łukasiewicz notation for propositional functions

Jan Łukasiewicz used C to denote implication, K to denote conjunction, A for disjunction, and E for logical equivalence, as noted at Polish_notation#Polish_notation_for_logic. Why these letters? Are they the initial letters of some relevant Polish words? If so, what words? —Dominus (talk) 04:15, 28 April 2009 (UTC)[reply]

Hmm, I always had a vague impression that the letters were based on Latin, but Polish actually makes more sense now that you mention it. Koniunkcja, alternatywa, ekwiwalencja (though równoważność appears to be the more common name), negacja, możliwość and dysjunkcja (which, strangely enough, does not mean disjunction in Polish, but Sheffer stroke) are transparent. I do not understand the source of C for implication (implikacja) and L for necessity (konieczność). — Emil J. 11:47, 28 April 2009 (UTC)[reply]
I suppose C might have come from czyni (makes), but I can't figure out the source for L, either. --CiaPan (talk) 14:56, 28 April 2009 (UTC)[reply]
Thanks. The article is probably wrong when it says that Łukasiewicz originated the use of L and M for modal operators. It is certainly wrong when it says that he originated the use of Σ and Π for quantifiers. —Dominus (talk) 15:04, 28 April 2009 (UTC)[reply]
Well, the article does not actually claim that Łukasiewicz originated all the notation, so it is not wrong. You may be right that L and M may come from a different source. But then the question remains, what is the source and what does it mean. According to modal logic#Axiomatic Systems, the usual ◊ and □ notation was already used by the founder of modern modal logic, C. I. Lewis. J. J. Zeman confirms it in the case of ◊, but he notes that □ is a later addition. Either way, M and L must have been introduced when ◊ was already in use, which seems to suggest that they indeed originated in the context of the Polish prefix notation, even though nowadays they are also used in infix notation. — Emil J. 15:42, 28 April 2009 (UTC)[reply]

Easy probability question

  • Event occurs: probability 1
  • Result 1 occurs: probability 1/x
  • Result 2 occurs: probability (x-1)/x

What are the chances of result 1 happening if the event occurs x times? Vimescarrot (talk) 15:51, 28 April 2009 (UTC)[reply]

Sounds a bit like homework. Assuming the trials are independent, you can calculate the probability that only event 2 ever occurs using the multiplication rule, from which you can derive the result you want. You may also have a look at e to turn it into a neatly-looking approximation for large x. — Emil J. 16:08, 28 April 2009 (UTC)[reply]
And have a look at binomial distribution in case the number of occurrences of Result 1 is of interest.81.132.236.12 (talk) 16:14, 28 April 2009 (UTC)[reply]
I realised I forgot to specify "once or more", but never mind. It's not homework, it's just that... Well, this applies in computer games a lot (if the chances of this monster dropping this item are 1/100, what are the chances of getting it after killing it 100 times?) Anyways, thanks very much for the help. Vimescarrot (talk) 18:03, 28 April 2009 (UTC)[reply]

What event occurs x times?? Could it be that you meant that there are x trials, and on each trial the probability of success is 1/x? It gets confusing when you don't use terminology in a standard way. And why do you mention "Result 2" if it has nothing to do with your question? Michael Hardy (talk) 19:33, 28 April 2009 (UTC)[reply]

I think the question was quite clear. A event occurs and can have one of two outcomes, the probability of outcome 1 is 1/x, the probability of outcome two is 1-1/x (=(x-1)/x). What is the probability of outcome one occurring at least once in x (independent) trials? The question has been answered by EmilJ, and the OP seems to be happy, so another question successfully resolved! --Tango (talk) 19:45, 28 April 2009 (UTC)[reply]
...OK, further guesses: What you meant was that "Result 2" was the complement of Result 1, i.e. to say that "Result 2" happens just means "Result 1" doesn't happen. Really, it wouldn't have hurt to say so, but even better would have been not to mention Result 2 at all. At any rate, if my guesses are right then the probability that "Result 1" never occurs in x trials is (1 − 1/x)^x. That number approaches 1/e as x grows (where e is the base of natural logarithms). So the probability that "Result 1" occurs at least once is 1 minus that. Michael Hardy (talk) 19:48, 28 April 2009 (UTC)[reply]
I didn't use standard terminology because I don't know standard terminology. Vimescarrot (talk) 21:54, 28 April 2009 (UTC)[reply]
Your terminology was fine, I don't know what Michael is complaining about. Perhaps he missed the fact that (x-1)/x=1-1/x, which makes it clear that one is the complement of the other? --Tango (talk) 11:09, 29 April 2009 (UTC)[reply]
You're mistaken, Tango. If the sum of the two probabilities is 1, that does not mean they are complements of each other. The probability of getting a "1" when rolling a die is 1/6. The probability of getting a number no more than 5 is 5/6. The sum of those two is 1. But they are not complements. And the poster used the word "event" where he probably meant "trial". Michael Hardy (talk) 01:55, 30 April 2009 (UTC)[reply]
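For the monster-drop example mentioned above (a 1-in-100 drop, 100 kills), the formula is easy to evaluate (a short sketch, my addition, not part of the original exchange):

    from math import e

    def p_at_least_once(x):
        # chance of seeing a 1-in-x result at least once in x independent tries
        return 1 - (1 - 1 / x) ** x

    print(p_at_least_once(100))   # about 0.634
    print(1 - 1 / e)              # about 0.632, the large-x limit mentioned above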

The set of complex numbers is the largest possible set of numbers?

I once saw a proof that claimed that the set of all complex numbers was the largest possible set of numbers that can be conceived of, but I do not remember the details of that proof. Does anyone know if this is true, and if so, have a link to a proof? JIP | Talk 18:50, 28 April 2009 (UTC)[reply]

Probably you are thinking of the fundamental theorem of algebra, which says that the only proper algebraic field extension of the field of real numbers is the field of complex numbers. Similar results for systems of numbers larger than the complex numbers are described in the articles Frobenius theorem (real division algebras) and Hurwitz's theorem#Hurwitz's theorem for composition algebras. JackSchmidt (talk) 19:06, 28 April 2009 (UTC)[reply]

The definition of "number" is not fully standard. Sometimes things like non-standard real numbers are considered numbers. Transfinite cardinal and ordinal numbers are called "numbers". Sometimes things like quaternions or members of finite fields are considered "numbers".

Maybe Jack Schmidt's guess as to what you remember is right. The term "fundamental theorem of algebra" is something of a misnomer. It says you don't need to extend your set of "numbers" beyond the complex numbers in order to have solutions of all algebraic (i.e. polynomial) equations. Michael Hardy (talk) 19:36, 28 April 2009 (UTC)[reply]

April 29

Evaluating very small natural logs

I am interested in the ratio of 2 probabilities, where the numerator is the probability associated with a value of x (x~), and the denominator is the sum of the probabilities of all possible values of x (Sigma x).

I apologise for not being able to show this in LaTex notation.

The problem is that the probability for each value of x is tiny and is only available as a natural log, e.g. ln(P(x)) = −50000. I cannot see how to evaluate this function without calculating the exp. of each term, then taking the ratio. But the number is too small to be dealt with... can I use algebra to express the answer in terms of natural logs?

Thanks!

Ironick (talk) 10:02, 29 April 2009 (UTC)[reply]

There is something wrong here, you can't take the log of a negative number. Are you saying that you take the log of a small number (e.g. 10^-50) and the log of that is a large negative number? But again, a negative number isn't right for a probability, which should be between 0 and 1. Please clarify -- SGBailey (talk) 10:30, 29 April 2009 (UTC)[reply]

Sorry, my mistake. Edited for clarity (I hope). The natural log terms come out around -50000, so the natural log of the true probability is about -50000, making the true probability too small to deal with directly. Thanks. Ironick (talk) 10:38, 29 April 2009 (UTC)[reply]

I understand you have some events, say x, y, z..., with probabilities Px, Py, Pz... respectively.
And the probabilities are not known explicitly, instead you have their logs, ie. values: Lx=log(Px), Ly=log(Py), Lz=log(Pz)... which are 'large negative numbers'.
And you say you are interested in ratios, say Px/Py or Px/Pz — is that right?
If so, utilize the most important property of logarithms, that they reduce multiplication to addition:
log(a·b) = log(a) + log(b)
and consequently division to subtraction:
log(a/b) = log(a) − log(b)
Thus your ratios can be calculated by
Lxy = log( Px/Py ) = log( Px ) − log( Py ) = Lx − Ly
and finally
Px/Py = exp( Lxy ) = exp( Lx − Ly )
CiaPan (talk) 11:04, 29 April 2009 (UTC)[reply]
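In code this is just one subtraction followed by one exponential; a minimal Python sketch (the log values below are made up for illustration):

 import math

 Lx = -50000.0   # log(Px), hypothetical value
 Ly = -50003.0   # log(Py), hypothetical value

 Lxy = Lx - Ly          # log of the ratio Px/Py
 print(math.exp(Lxy))   # Px/Py = exp(3), about 20.09, even though Px and Py themselves underflow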
Isn't the sum of the probabilities of all possible values of x simply 1? That's part of the definition of probability. --Tango (talk) 11:07, 29 April 2009 (UTC)[reply]
I guess you're trying to calculate ln(e^x + e^y) where x and y are too small (or huge) to evaluate the exponentials directly. In that case, a good approximation is
ln(e^x + e^y) ≈ x + e^(y - x) when y is much smaller than x, ≈ x + ln(1 + e^(y - x)) when x ≈ y, and ≈ y + e^(x - y) when x is much smaller than y.
The second case is exact, and you can calculate it directly on your floating point unit when x ≈ y. The first case is obtained from the second by using the fact that ln(1 + z) ≈ z when z is small. The third is obtained by swapping the variables in the second and doing the same thing. Chances are you can ignore the exponential part of the first and third formulas and just use max(x, y).
-- BenRG (talk) 11:50, 29 April 2009 (UTC)[reply]
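A minimal Python sketch of that trick (the sample values are made up; scipy also ships a ready-made scipy.special.logsumexp if you would rather not hand-roll it):

 import math

 def log_add(x, y):
     # Return log(exp(x) + exp(y)) without underflow or overflow.
     if x < y:
         x, y = y, x                           # ensure x >= y
     return x + math.log1p(math.exp(y - x))    # exp(y - x) <= 1, so this is safe

 print(log_add(-50000.0, -50003.0))   # about -49999.95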

@CiaPan. Thanks, but unfortunately Lx-Ly is not computable, as too small.

@Tango. Yes, but I am dealing with the probability of data given a set of parameters. Likelihood is a better word. So they won't add to 1.

@BenRG. Thanks, that helps a lot, I think that's the solution. 130.88.243.41 (talk) 12:41, 29 April 2009 (UTC)[reply]

I don't understand this: unfortunately Lx-Ly is not computable, as too small.
If both Lx and Ly are negative, then their difference may be either negative or positive, but its absolute value does not exceed the larger of |Lx| and |Ly|. So, if Lx and Ly are computable (and they are, as you have computed them), then their difference is computable, too.
Example: for given log values −500 and −503 (assume they are 'large') the difference is either 3 or −3, which is certainly smaller (by absolute value) than 500 and 503.
In case you mean something like Lx=−500, Ly=−500.0001, you just need higher-precision arithmetic for your computations.
CiaPan (talk) 14:17, 29 April 2009 (UTC)[reply]

Apologies. I misunderstood/misexplained. I thought you meant the difference between exp(-50000) and exp(-60000), rather than the difference between 50k and 60k, which is 10k. The issue is that I need to compute not the ratio of 2 likelihoods but, for example, (a)/(a + b + c) where a, b and c are all very small likelihoods, which can be broken down to 1/(1 + (b/a) + (c/a)). I am interested in the relative values of (a)/(a + b + c) compared to (b)/(a + b + c) and (c)/(a + b + c). 130.88.243.41 (talk) 15:59, 29 April 2009 (UTC)[reply]

By "relative values" do you mean ratios? They all have the same denominator, so you can just ignore it. The ratio of the first two is just a/b. --Tango (talk) 18:18, 29 April 2009 (UTC)[reply]

Integral - confusion

I'm trying to find the antiderivative of sqrt(C^2 + x^2) dx

So I tried x = iC cos(iy), giving dx/dy = C sin(iy) and y = -ln(C) + ln(-x + sqrt(C^2 + x^2))

Substituting y for x gives me

sqrt(C^2 - C^2 cos^2(iy)) C sin(iy) dy

ie

C^2 sin^2(iy) dy

ie

C^2 (1/2) (1 - cos(2iy)) dy

So the antiderivative is

C^2 (1/2) (y - (1/(2i)) sin(2iy))

Without going further, is there a mistake? (Partly because when I use the alternative substitution x = iC sin(iy) I get a change of sign in the antiderivative,

i.e. C^2 (1/2) (-y - (1/(2i)) sin(2iy)).)

(I've been looking at the whole thing for about a week and can't find where I'm going wrong - I'm sure I used the same method some time ago for the "length of curve of a parabola" problem, which involved sqrt(1 + 4x^2), and I don't remember having any big problems.) HappyUR (talk) 18:11, 29 April 2009 (UTC)[reply]

If you don't want to just look at the first integral at List of integrals of irrational functions for a solution, the problem is mixing i and sqrt. Square root can be positive or negative and you have to choose the right one to solve the problem, here by checking by differentiating again. You might find it easier to use tan instead of sin or cos and then you don't need to drag i into the equation so to speak. Dmcq (talk) 18:45, 29 April 2009 (UTC)[reply]
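If it helps, the "check by differentiating again" step can be done symbolically; a minimal sketch, assuming sympy is available and taking C to be a positive real constant:

 import sympy as sp

 x = sp.symbols('x', real=True)
 C = sp.symbols('C', positive=True)

 F = sp.integrate(sp.sqrt(C**2 + x**2), x)     # sympy's antiderivative
 print(F)
 print(sp.simplify(sp.diff(F, x) - sp.sqrt(C**2 + x**2)))   # expect 0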
Can you clarify a bit about the square root - did you mean in:
y = -ln(C) + ln(-x + sqrt(C^2 + x^2))

Here I must take the positive root for a positive log.

or in converting

sqrt(C^2 - C^2 cos^2(iy)) C sin(iy) dy

to

C^2 sin^2(iy) dy

Here I don't see what difference it makes whether I take a positive or negative root - it doesn't seem to solve the issue I had?HappyUR (talk) 19:07, 29 April 2009 (UTC)[reply]

I'm afraid I didn't look too closely. It could be either or both, and also the log could have bits added too when you start getting into complex logarithms. However, just looking at your first bit
So I tried x = iC cos(iy) giving dx/dy = C sin(iy) and y = -ln(C) + ln(-x + sqrt(C^2 + x^2))
I believe the expression for y is wrong. The transformation for arccos in Inverse trigonometric function#Logarithmic forms is arccos(x) = -i ln(x + i sqrt(1 - x^2)) = π/2 + i ln(ix + sqrt(1 - x^2)).
You'd have to use the version with the π/2 to get rid of the i in the ln. Dmcq (talk) 20:16, 29 April 2009 (UTC)[reply]
OK, I need y = -i arccos(x/(iC))
I think I missed a +ln(i) from my expression for y (ln(i) = i(π/2 + 2nπ) or something like that, isn't it?). If I add that to my expression for y I think it's still right (it doesn't affect the integral when evaluated, I was aware of that).
As I said, I really have been doing this for a week and I'm starting to go a bit blind.. I'm sure it's an obvious typo or missed sign. But I'm at a can't-see-the-wood-for-the-trees state now - which is why I ask - typos aside - is the substitution OK? (I'm 100% sure it worked before - my head is actually starting to hurt..) Will thank profusely anyone who can sort this out for me. Thanks HappyUR (talk) 21:24, 29 April 2009 (UTC)[reply]
I still haven't looked right through the calculation carefully, but just on an off chance, you're not getting confused because you've used y for two different things and the results look rather similar? Dmcq (talk) 10:45, 30 April 2009 (UTC)[reply]
Another thing, where you say you have to take the positive sqrt to get a positive log in ln(-x + sqrt(C^2 + x^2)), that isn't true. If you've got i around the place one could have ln(x + sqrt(C^2 + x^2)) + ln(-1). Complex logs are allowed to work on negative or complex numbers. Dmcq (talk) 10:53, 30 April 2009 (UTC)[reply]
There's only one type of y I've used, also in my equation - there is no i in the roots - if you look at the equation for y (it's at the top) (both C and x are real). Please read question before answering.HappyUR (talk) 14:50, 30 April 2009 (UTC)[reply]
Unless I'm misreading there is one y in x=iC cos(iy) and another different y in x=iC sin(iy). Dmcq (talk) 16:53, 30 April 2009 (UTC)[reply]
Plus, as to the business about being real, what you should have written I believe is x = C cos(iy) for the first substitution if you want both x and y to be real. The i is okay in the second. Also cosh doesn't cover the whole range of reals, which could be another problem, but not what I think is happening here. Dmcq (talk) 17:10, 30 April 2009 (UTC)[reply]
Let me clarify - I used the substitution x = iC cos(iy) and didn't get the right antiderivative; I also tried the substitution x = iC sin(iy) and got a different (change of sign) antiderivative.
So I have two problems - not getting the right antiderivative and getting different antiderivatives depending on the substitution - I must have made more than one mistake?
I need to use x=iC cos(iy) for the substitution to work if I use x=C cos(iy) it gets me nowhere.
y is not real - it's complex. I don't need y to be real?HappyUR (talk) 17:57, 30 April 2009 (UTC)[reply]
Please see Hyperbolic function. cos(iy) is the same as cosh(y) and -i sin(iy) is the same as sinh(y). If using C cos(iy) went nowhere it was possibly because cosh as a real function is always greater than or equal to 1. Just sticking in an extra i won't make both x and y real; the sinh version should work okay though. Putting in i's and using sin though confuses it quite a bit. Dmcq (talk) 09:15, 1 May 2009 (UTC)[reply]
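For the record, the purely real route described above can be sketched symbolically (assuming sympy; the comments carry the algebra rather than asking sympy to prove the hyperbolic identity):

 import sympy as sp

 t = sp.symbols('t', real=True)
 C = sp.symbols('C', positive=True)

 # With x = C*sinh(t): sqrt(C^2 + x^2) = C*cosh(t) and dx = C*cosh(t) dt,
 # so the integrand sqrt(C^2 + x^2) dx becomes C^2*cosh(t)^2 dt, with no i anywhere.
 F = sp.integrate(C**2 * sp.cosh(t)**2, t)
 print(F)   # C**2*(t + sinh(t)*cosh(t))/2, or an equivalent form
 # Substituting back t = asinh(x/C) gives the usual result
 # (x*sqrt(C^2 + x^2) + C^2*asinh(x/C))/2.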

April 30

house payment

If I buy a $140,000 house with $7,000 down, $1,400/month going to the mortgage, and a one time $7,000 payment 6 months after the down payment, how many months will it take me to own the house fully if my interest rate is 4.875% fixed (no early repayment fees apply)? 65.121.141.34 (talk) 19:51, 30 April 2009 (UTC)[reply]

This could, plausibly, be a homework question, so I'm not going to give you an actual answer, but I suggest you use a spreadsheet to calculate it. Have one row for each month with the amount owed in one column and the amount paid that month in the other and another for the interest being added on. --Tango (talk) 20:06, 30 April 2009 (UTC)[reply]
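If a spreadsheet is awkward, the same month-by-month tally is only a few lines of code; a minimal sketch (the compounding and payment-timing conventions here are assumptions, since the problem doesn't state them):

 # Assumes: the quoted annual rate is compounded monthly, payments are made at the
 # end of each month, and the extra $7000 arrives together with month 6's payment.
 balance = 140000 - 7000          # price minus the down payment
 monthly_rate = 0.04875 / 12
 payment = 1400
 months = 0
 while balance > 0:
     months += 1
     balance *= 1 + monthly_rate  # add the month's interest
     balance -= payment           # regular payment
     if months == 6:
         balance -= 7000          # one-time extra payment
 print(months)                    # number of payments; the final one is smaller than $1400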
Of course it's a homework question. I am doing the homework before buying a house. Perhaps you could check my number. I got 113 payments (the last being lower than the standard $1400). 65.121.141.34 (talk) 20:19, 30 April 2009 (UTC)[reply]
Make that 112 since I forgot that the title line counts as 1 65.121.141.34 (talk) 20:24, 30 April 2009 (UTC)[reply]
Assuming that 4.75% is the APR compounded monthly for an effective annual rate of about 4.855%, that the first $1400 payment is made 1 month after the start of the loan, and that the $7000 payment at 6 months is in addition to the regular $1400 payment, I concur. -- Tcncv (talk) 05:24, 1 May 2009 (UTC)[reply]

May 1

Pairing colours, permutations and such

Argh numbers. I've been trying to work this out manually all evening and I have no idea if it's mathematically possible. I'll try to explain it clearly...bear with me...

  • I have a pool of eight colours, four hot and four cold.
  • I have six groups of four tiles. Imagine them as currency, valued as 1, 2, 3, and 4.
  • I know that a selection of four colours, two hot and two cold, can make six differing pairings, not counting pairing a colour with itself.
  • I'm trying to colour each tile so that no colour pairing is repeated anywhere, attempting to use a selection of four colours as above to colour each tile 'denomination'.

Thus far I've always hit a snag. I've no idea how to work it out using magic formulae. Is there a way of doing this?

Lady BlahDeBlah (talk) 00:42, 1 May 2009 (UTC)[reply]

Could you clarify a bit - how many colours per tile. 4?
Also if you have four hot colours, and four cold - and pick two of each, then I think you will have 4*3/2=6 choices for hot, and the same for cold. So there should be 6*6 hot/cold combinations = 36 Is that the answer you wanted?
ie there are 36 ways to colour a tile with 2 hot and 2 cold colours, all colours different.
ie you've got six hot tiles (bi-colored) and six cold tiles (bi-colored) to make 36 quad tiles.
(This gets more complex if it matters which way round the colours are placed on your tile - e.g. like a 2x2 checkerboard - does it matter if hot is next to hot, or diagonal?) HappyUR (talk) 01:16, 1 May 2009 (UTC)[reply]
Eeep. I knew I was being unclear, bugger. It's only two colours per tile. Lady BlahDeBlah (talk) 01:36, 1 May 2009 (UTC)[reply]
I'm confused. OK, you have four hot colours (a,b,c,d) and four cold colours (w,x,y,z). You can have two colours per tile - is this (two hot or two cold), or (one hot and one cold), or anything with anything? I.e. which of the following are valid tiles: The hot tiles: ab, ac, ad, bc, bd, cd; The cold tiles: wx, wy, wz, xy, xz, yz; The hot&cold tiles aw, ax, ay, az, bw, bx, by, bz, cw, cx, cy, cz, dw, dx, dy, dz. This is a set of 6 hot pairs, 6 cold pairs and 16 mixed pairs. (I excluded aa, bb, ww etc from the list as you said you couldn't pair a colour with itself.) Where do "four colours" come into it? Are you trying to then join a hot pair to a cold pair (there will be 6*6=36 ways of doing this)? -- SGBailey (talk) 08:08, 1 May 2009 (UTC)[reply]
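If it helps to see the possibilities listed rather than just counted, here is a minimal Python sketch enumerating the pairs above (the colour names are just the placeholder letters from the previous post):

 from itertools import combinations, product

 hot = ['a', 'b', 'c', 'd']
 cold = ['w', 'x', 'y', 'z']

 hot_pairs = list(combinations(hot, 2))     # ab, ac, ad, bc, bd, cd  (6 tiles)
 cold_pairs = list(combinations(cold, 2))   # wx, wy, wz, xy, xz, yz  (6 tiles)
 mixed_pairs = list(product(hot, cold))     # aw, ax, ..., dz         (16 tiles)
 print(len(hot_pairs), len(cold_pairs), len(mixed_pairs))

 # If instead each item is a hot pair joined with a cold pair (four colours in all):
 print(len(list(product(hot_pairs, cold_pairs))))   # 36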

Complex integral

I know I can do this integral, the integral of 1/(1 + x^2008) from 0 to infinity, by using residues. But I'd probably have to add together like 1004 2008th roots of -1. Is there a trick to do that, or is there some other way to do this? This was on a recent analysis qualifying exam and I'm not sure how to go about it.

Thanks StatisticsMan (talk) 02:06, 1 May 2009 (UTC)[reply]

That wasn't 2048 instead of 2008 was it? i.e. 2^11. That's pretty bad and I'm not sure how to do it quickly and cleanly, but it's nowhere near as bad as 2008 because lots of stuff will cancel out. Dmcq (talk) 10:01, 1 May 2009 (UTC)[reply]
The trick to add 1004 2008th roots of −1 together is to observe that they are arranged in a geometric series. — Emil J. 10:37, 1 May 2009 (UTC)[reply]
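As a quick numerical sanity check of that observation (a sketch only, not part of the original answers): summing the 1004 poles in the upper half-plane really does collapse to a simple closed form.

 import cmath, math

 n = 2008
 poles = [cmath.exp(1j * k * math.pi / n) for k in range(1, n, 2)]   # e^(k*pi*i/n), k odd
 print(sum(poles))                  # approximately 0 + i/sin(pi/n)
 print(1 / math.sin(math.pi / n))   # about 639.2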
So, the integral (let's denote it by I) is 1/2 of the integral of f(z) = 1/(1 + z^2008) over a path which goes from −R to R and then by a half-circle in the upper half-plane back to −R, for large enough R. Singularities of f in this region are simple poles at a_k = e^(kπi/2008) for 0 < k < 2008, k odd. The residue of f at a_k is 1/(2008 a_k^2007) = −a_k/2008,
hence
and the original integral is
Not sure I haven't made any mistakes there, but the fact that the result is real and positive is an encouraging sign. — Emil J. 10:56, 1 May 2009 (UTC)[reply]
It can't be quite right, as it's easy to see that the integral should be numerically ≈ 1.0. Changing the "+" to a "−" in the denominator gives the right result. Fredrik Johansson 11:29, 1 May 2009 (UTC)[reply]
Further simplified, it is (π/2008)/sin(π/2008). Fredrik Johansson 11:42, 1 May 2009 (UTC)[reply]
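For what it's worth, a numerical check of that value (a sketch assuming mpmath is available; the split point at 1 is only there to help the quadrature cope with the integrand's sharp drop):

 import mpmath as mp

 n = 2008
 closed_form = (mp.pi / n) / mp.sin(mp.pi / n)
 numeric = mp.quad(lambda x: 1 / (1 + x**n), [0, 1, mp.inf])
 print(closed_form)   # about 1.0000004
 print(numeric)       # should agree to many digits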

Exponential Addition

What is u where c is a constant and . This question may also be stated as the following:
Solve for u where The Successor of Physics 05:09, 1 May 2009 (UTC)[reply]

Consider the logarithm to the base x on either side of the equation. The resultant equation after the performance of this operator is and thus u is the composition of a logarithmic function with a linear function. The most important link of these is logarithmic function. --PST 06:47, 1 May 2009 (UTC)[reply]
You might also be interested in Bring radical Dmcq (talk) 10:10, 1 May 2009 (UTC)[reply]