User talk:Bo Jacoby

From Wikipedia, the free encyclopedia


Root-finding algorithm[edit]

Hello, and welcome to Wikipedia. I have some questions about your addition to root-finding algorithm. I don't remember seeing this method before, but that does not say much, as I never really studied the numerical solution of polynomial equations. Do you have some reference for this method (this is required for verifiability)? Is there some analysis; for instance, does the iteration always converge, and is anything known about the speed of convergence? Just a small remark: we sign our contributions on talk pages, but not in the articles themselves; see Wikipedia:Ownership of articles#Guidelines. I hope that you continue contributing. Please drop by at Wikipedia:WikiProject Mathematics and feel free to ask me any questions on User talk:Jitse Niesen. Cheers, Jitse Niesen (talk) 22:20, 12 September 2005 (UTC)

Hello Jitse. Thank you very much for your comment on my article on the root-finding algorithm. You request a reference for verifiability and some analysis, and you ask whether the method always converges and what the speed of convergence is. I agree that such theoretical material would be nice, but alas I do not have it. I do have a lot of practical experience with the method. It was taught at an engineering school in Copenhagen for more than 5 years, and the students implemented it on computers and solved thousands of examples. I have not received any nontrivial reports of nonconvergence. So much for verifiability. Does the method always converge? The answer is no, for topological reasons. This is why. Consider an initial guess p,q,r,s converging towards roots P,Q,R,S. For reasons of symmetry the initial guess q,p,r,s will converge towards Q,P,R,S. Both solutions are satisfactory, but they are not the same point in four-dimensional complex space. Consider f(t) = (1−t)(p,q,r,s) + t(q,p,r,s), 0 ≤ t ≤ 1. This line joins the two initial guesses. Note that the iteration function, g, is continuous no matter how many times we iterate, because we iterate only a finite number of times. Let A and B be open disjoint sets such that A contains (P,Q,R,S), B contains (Q,P,R,S), g(f(0)) is in A, and g(f(1)) is in B. No continuous curve can jump from A to B, so for some value of t, 0 < t < 1, g(f(t)) lies outside both A and B, and hence the method does not converge everywhere.

I do not think that this immature argument belongs in a Wikipedia article.

However, I believe that the method converges 'almost everywhere' in the sense of Lebesgue, but I have no proof. Nevertheless, the question of convergence is not the right question to pose. As you only iterate a finite number of times, you will not necessarily get close to a solution even if the method converges eventually. So, the method is good for no good theoretical reason! The solutions are attracting fixed points of the iteration function. That's all.

Bo Jacoby 07:12, 13 September 2005 (UTC)
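The iteration discussed in this thread (identified later on this page as the Durand–Kerner method) can be sketched as follows. This is a minimal illustrative implementation, not code from any of the referenced publications; the function and variable names are invented here.

```python
def durand_kerner(coeffs, iterations=100):
    """All roots of the monic polynomial
    x^n + coeffs[n-1]*x^(n-1) + ... + coeffs[1]*x + coeffs[0]."""
    n = len(coeffs)

    def f(x):                          # Horner evaluation of the polynomial
        y = 1.0
        for c in reversed(coeffs):
            y = y * x + c
        return y

    # Non-real starting points: purely real guesses can cycle forever on
    # polynomials such as x^2 + 1, as noted later on this page.
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iterations):
        for i in range(n):
            d = 1.0
            for j in range(n):
                if j != i:
                    d *= roots[i] - roots[j]
            roots[i] -= f(roots[i]) / d    # new value used immediately
    return roots
```

For example, `durand_kerner([1.0, 0.0])` (the monic polynomial x^2 + 1) returns approximations of the two roots ±i.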

Please vote[edit]

Hello. Please vote at Wikipedia:Featured list candidates/List of lists of mathematical topics. Michael Hardy 23:01, 14 October 2005 (UTC)

Mathematical notation style conventions (non-TeX should match TeX as much as possible)[edit]

Hello. Please note the differences between the first and second versions of each of the following:

ln(-1) is a solution to e^x=-1.
ln(−1) is a solution to e^x = −1.

(Proper minus sign instead of nearly invisible hyphen. Spacing on both sides of "=".)

If x=it is
If x = it is


e^x=e^it is a point on
e^x = e^it is a point on


from point 1 (=1+0i) to e^it.
from point 1 (= 1 + 0i) to e^it.

(Spacing. Italicizing i BOTH times, not just the second time.)

at the point -1 (=-1+0i). So e^iπ=-1.
at the point −1 (= −1 + 0i). So e^iπ = −1.

(Proper minus sign. Spacing. Italicizing i BOTH times. Digits, including "1", should not be italicized in non-TeX mathematical notation; neither should punctuation, although that point doesn't arise here.)

And so ln(-1)=iπ
And so ln(−1) = iπ

(Proper minus sign. Spacing. Consistently italicizing i.)

Michael Hardy 19:36, 3 November 2005 (UTC)

Thank you very much ! Bo Jacoby 07:27, 4 November 2005 (UTC)

Hot & Cold Photons?[edit]

I've left some comments on the thermodynamic evolution talk page. Let me know if you have suggestions. Thanks:--Wavesmikey 04:38, 26 November 2005 (UTC)

why the difference in notation?[edit]

Consider the expression

{n \choose i}p^i(1-p)^{n-i}

Fixing (n, p), it is the binomial distribution in i. Fixing (n, i), it is the (unnormalized) beta distribution in p. The article does not clarify this.

Bo Jacoby 10:02, 15 September 2005 (UTC)
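The point can be made concrete with a short numerical check (a sketch; the helper name `w` is invented here): the single expression sums to 1 over i for fixed (n, p), and integrates to 1/(n+1) over p for fixed (n, i).

```python
from math import comb

def w(n, i, p):
    """The single expression C(n, i) * p^i * (1-p)^(n-i)."""
    return comb(n, i) * p**i * (1 - p)**(n - i)

# Fixed (n, p): a probability distribution in i (binomial) -- sums to 1.
assert abs(sum(w(10, i, 0.3) for i in range(11)) - 1) < 1e-12

# Fixed (n, i): an unnormalized density in p (beta) -- integrates to
# 1/(n+1), here 1/11, approximated by a Riemann sum.
dp = 1e-5
integral = sum(w(10, 4, k * dp) for k in range(100000)) * dp
assert abs(integral - 1 / 11) < 1e-3
```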

This is mentioned only implicitly in the current version, which describes the beta distribution as the conjugate prior for the binomial. You could add a section on occurrence and uses of the beta distribution that would clarify this point further. --MarkSweep 12:50, 15 September 2005 (UTC)

I don't see what makes you think the article is not explicit about this point. You wrote this on September 15th, when the version of September 6th was there, and that version is perfectly explicit about it. It says the density f(x) is defined on the interval [0, 1], and x where it appears in that formula is the same as what you're calling p above. How explicit can you get? Michael Hardy 23:07, 16 December 2005 (UTC)

... or did you mean it fails to clarify that the same expression defines both functions? OK, maybe you did mean that ... Michael Hardy 23:08, 16 December 2005 (UTC)

Yes, precisely! Bo Jacoby 16:54, 31 December 2005 (UTC) See Inferential statistics, where the same simple expression is used for deductive and inductive distributions, and where the limiting cases are the binomial distribution, the beta distribution, the Poisson distribution, the gamma distribution and, of course, the normal distribution. I find this unified approach very attractive. Bo Jacoby 09:58, 4 January 2006 (UTC)

Mathematical notation conventions[edit]

Hello. Your comments at talk:normal distribution inspire this comment. In editing mathematics articles, you may find it useful to bear in mind the difference in (1) sizes of parentheses and (2) the dots at the end in these two expressions:


Michael Hardy 23:56, 8 January 2006 (UTC)

Thank you very much. I totally agree. Please feel free to edit on the spot. Bo Jacoby 07:52, 13 January 2006 (UTC)


"Bo Jacoby, Nulpunkter for polynomier, CAE-nyt 1988" — could you please write out "CAE-nyt" in full? Is it a journal, a technical report, something else? I have no idea where to find this reference. Thanks. -- Jitse Niesen (talk) 11:26, 11 January 2006 (UTC)

"CAE-nyt" = 'Computer Aided Engineering News', a periodical for "Dansk CAE Gruppe" = 'Danish CAE Group'. I can fax the article to you if you are interested in history. For mathematical reasons you need not read it, because the explanation in the WP-article is better than that of the old article. Bo Jacoby 08:00, 12 January 2006 (UTC)

I found the method from scratch, but I don't know who was the first to do so. I gave a lecture to 'Dansk Selskab for Bygningsstatik' on December 10th, 1991. My lecture was published, and a reference to that publication has now been added to the WP article. After the lecture I had some correspondence with Jørgen Sand. He says that the method is the Durand–Kerner method, and he gave the following reference, which I have not checked.

Terano, T., et al. (1978): An Algebraic Equation Solver with Global Convergence Property. Research memorandum RMI 78-03, Tokyo.

For topological reasons strict global convergence is impossible, but the method converges almost everywhere, and the convergence is fast. Bo Jacoby 07:49, 13 January 2006 (UTC)

Excellent. I found some references to articles about the Durand-Kerner method. I'll check them when I have some time and see whether this is indeed the same method as described in Jacoby's method. -- Jitse Niesen (talk) 13:53, 13 January 2006 (UTC)
Ohh. I had to look. Cool. How about some pictures of the basin of attraction for this solver? We have those famous pictures of the basin of attraction for the Newton zero finder; I wonder how this compares. In particular, it's not clear to me how/why the initial guesses can end up in different basins. linas 05:46, 31 January 2006 (UTC)

The space C^4 contains 4! = 24 open basins of attraction, one for each permutation of the four roots of a degree 4 polynomial. I don't know how to make a picture of that. If an initial guess p,q,r,s is in one of the basins, then q,p,r,s is in another basin. Bo Jacoby 07:33, 31 January 2006 (UTC)
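The permutation symmetry claimed here is easy to observe numerically, at least for a quadratic. The sketch below (hypothetical names, not from the thread) uses a sweep computed entirely from the old values, so that permuting the guesses exactly permutes the result; it runs the iteration from a guess (p, q) and from the swapped guess (q, p).

```python
def sweep(roots, f):
    """One sweep of the simultaneous iteration, computed entirely from
    the old values so that permuting the guesses permutes the result."""
    def delta(i):
        d = 1.0
        for j in range(len(roots)):
            if j != i:
                d *= roots[i] - roots[j]
        return f(roots[i]) / d
    return [roots[i] - delta(i) for i in range(len(roots))]

f = lambda x: x * x + 1            # roots are i and -i
a = [0.4 + 0.9j, -0.2 - 0.5j]      # initial guess (p, q)
b = [a[1], a[0]]                   # swapped guess (q, p)
for _ in range(60):
    a, b = sweep(a, f), sweep(b, f)
# a ends near (i, -i); b, started from the swapped point, ends near (-i, i):
# the two starting points lie in different basins, as described above.
```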

Style remarks[edit]

Hi Bo. I have a few style remarks. First is that one should make variables italic, so italic x instead of plain x. Second, per the math style manual one should not force PNG images if inline, so one should write a_n in plain text instead of a_n\, in math mode, which renders as an image. These are small things, but they are good practice to follow. :) Oleg Alexandrov (talk) 01:17, 21 January 2006 (UTC)

Thanks, Oleg. You've got a point. I need to find out how to make a little not-equal sign in 'math'.

x=0 \, and x \ne 0, rendering as: x = 0, x ≠ 0 ?

Without 'math' it can be done, but then the font is different:

x = 0, x ≠ 0

I'd like it if the same variable took exactly the same typographical shape throughout the article. Bo Jacoby 05:49, 21 January 2006 (UTC)

In short, the math display on the web sucks. :) Oleg Alexandrov (talk) 06:17, 21 January 2006 (UTC)

One more style remark[edit]

Hi Bo. Just one remark. Writing links as Ordinary_differential_equation#Homogeneous_linear_ODEs_with_constant_coefficients is not a good idea, as they mess up the diffs, as you can see here. Then it is hard to see what changed. I will fix that right now, but a tip for future reference is to remove the underscores. (And by the way, I don't know if it is a good idea to link to sections to start with; those section names can (will) change eventually, and then the link breaks down. But I see, it does not hurt either). Oleg Alexandrov (talk) 00:23, 24 January 2006 (UTC)

Thanks. It's a very good tip. But why didn't you like my other edits to root-finding algorithm? Please note Wikipedia:Simplified_Ruleset point 9. Take your time to produce what we agree is a step forwards, rather than making what I must consider a step backwards. For example, I don't think that the words 'Much attention has been given' belong in an encyclopedia. Bo Jacoby 10:32, 24 January 2006 (UTC)

Splitting circle method[edit]

I noticed this comment of yours. I created the article after stumbling across the algorithm's name, but didn't write an explanation as I couldn't figure out much from the sources I found. I created a stub anyway in the hope that someone with more knowledge in this domain will be able to expand it. If you could, the work would certainly be appreciated. Fredrik Johansson - talk - contribs 11:11, 24 January 2006 (UTC)

Thanks to this I read about Jacoby's method. That's a quite interesting algorithm, and remarkably simple to implement. Though it evidently works, it would be nice to have an online reference outside of Wikipedia for verification purposes. You don't have a website where you could put a description? Fredrik Johansson - talk - contribs 11:31, 24 January 2006 (UTC)
Hi Fredrik. Isn't it remarkable that an algorithm 'evidently works'? Alas, my references are all too old to be online. A new website would basically contain the same information as the WP article. Look at Talk:Root-finding algorithm for some discussion. There is not much more to be said. Try it and convince yourself that the problem is solved. Bo Jacoby 13:35, 24 January 2006 (UTC)
There's no problem, just an opportunity to make verification more convenient for future readers. Fredrik Johansson - talk - contribs 15:58, 24 January 2006 (UTC)

n-ary operations[edit]

I haven't looked at your edit to function (mathematics) on this point, but there are such things as n-ary operations, no matter what the abstract algebra article may say... Randall Holmes 22:40, 29 January 2006 (UTC)

I agree. But the group operation is an example of a binary operation, and not of an n-ary operation for n>2. Bo Jacoby 23:08, 29 January 2006 (UTC)


  • JA: Bo, I moved your question to the end (I gave warning in the edit line), as it's best to put new talk at the end, or else people tend to miss it. I'm writing a reply as we speak, well, not just this second, but in a second. Jon Awbrey 15:04, 3 February 2006 (UTC)


Hi. Some of your changes to the Combinations page removed info that is relevant, without making it really clear that the material was moved to another article. IMO - it would be better to have a complete, self-contained article on combinations, or to move all of the info to binomial distribution and then have combinations redirect there. Just my $0.02. dryguy 19:20, 8 February 2006 (UTC)

Surely the information is relevant, but it is also stated in binomial coefficient, so it does not need to be repeated everywhere. A link is sufficient. Bo Jacoby 10:44, 9 February 2006 (UTC)
Sorry, I mis-typed. I meant to say move to the binomial coefficient article. In any event, my point was that some of the info that was moved was highly relevant to the combinations article, and probably best belongs there. If the duplication bothers you, why not pick one of the two articles and place all of the combinations info in one place? I think that the combinations article is now a bit too thin. It could either be restored, or the remaining info moved to binomial coefficient with combinations redirecting to the binomial coefficient article. dryguy 13:32, 9 February 2006 (UTC)
There is a discussion going on regarding merging of the two articles, as you also suggest, but not everybody is in favour of a merge. The present cleanup is a compromise. I don't mind at all that an article is thin, if it contains a definition and a link to more detail in another article. The concept of 'combination' is equivalent to 'subset', so there is not much to be said, I think. Bo Jacoby 14:30, 9 February 2006 (UTC)
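As a tiny illustration of the 'combination = subset' point (an invented example, not from either article under discussion): counting k-element subsets directly agrees with the binomial coefficient.

```python
from itertools import combinations
from math import comb

# A k-combination of an n-element set is just a k-element subset, so
# enumerating the subsets and applying the binomial coefficient agree.
n, k = 5, 2
subsets = list(combinations(range(n), k))
assert len(subsets) == comb(n, k) == 10
```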


Hello. Please note my recent edits to that article. Michael Hardy 00:40, 9 February 2006 (UTC) Thank you, Michael. Bo Jacoby 10:31, 9 February 2006 (UTC)


Bo, just one remark, and I may have said it before. Per the math style manual, variables should be italic. Thanks. Oleg Alexandrov (talk) 16:02, 8 March 2006 (UTC)

Ordinal fraction listed for deletion[edit]

An article that you have been involved in editing, Ordinal fraction, has been listed at Wikipedia:Articles for deletion/Ordinal fraction. Please look there to see why this is, if you are interested in it not being deleted. Thank you.


You may want to take a look and comment at Wikipedia talk:WikiProject Mathematics#Problem editor.

The moral of the story: please modify articles only on topics you are very sure about, and only when you have good published references for whatever you are writing.

Also, if a couple or more of editors tell you to drop something, then drop it, especially if you are not completely sure you perfectly understand the topic at hand. Oleg Alexandrov (talk) 05:09, 17 August 2006 (UTC)


I moved your comment about the article to Talk:Exponentiation because discussions about article content belong on article talk pages. I assure you I don't have any personal grudge with you. The article Kepler's laws of planetary motion seems much improved due to your editing. CMummert · talk 14:17, 12 January 2007 (UTC)

Thank you! Bo Jacoby 15:31, 12 January 2007 (UTC).

Mathematics CotW[edit]

I am writing you to let you know that the Mathematics Collaboration of the Week (soon to be "of the Month") is getting an overhaul of sorts, and I would encourage you to participate in whatever way you can, i.e. nominate an article, contribute to an article, or sign up to be part of the project. Any help would be greatly appreciated, thanks--Cronholm144 17:46, 13 May 2007 (UTC)

Thanks! Bo Jacoby 18:04, 19 May 2007 (UTC).

definition of a subgroup[edit]

A subgroup is a pair (S, *) closed under the operation * and under selection of the unity, which is what we call a 'nullary operation'. This means that the only unity allowed in the construction of a subgroup is the unity of the group. --VKokielov 18:37, 4 June 2007 (UTC)

By the way, 0 is never part of the multiplicative group of a field. --VKokielov 18:42, 4 June 2007 (UTC)

Thanks. The pair ({0},·) is closed under the operation of multiplication · . The element 0 satisfies 0·x=x for x in {0}, because 0·x = 0·0 = 0 = x. The group ({0},·) is not a proper subgroup of a larger group of complex numbers, but still it is a group. It is isomorphic to ({1},·) . Bo Jacoby 22:18, 4 June 2007 (UTC).
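The group axioms claimed above for ({0},·) can be checked mechanically; a throwaway sketch:

```python
# Check that S = {0} with ordinary multiplication satisfies the group
# axioms, with 0 acting as its own identity element (as argued above).
S = {0}
# closure under multiplication
assert all(a * b in S for a in S for b in S)
# identity: 0*x == x for every x in S
assert all(0 * x == x for x in S)
# inverses: for each x there is a y with x*y equal to the identity, 0
assert all(any(x * y == 0 for y in S) for x in S)
```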


Thanks for such a precise and detailed answer to my question about statistical significance on the Talk:Standard deviation page. Luzhin 17:13, 23 July 2007 (UTC)

you are welcome, my friend. Bo Jacoby 20:24, 23 July 2007 (UTC).

Please don't use Wikipedia for self-promotion[edit]

Howdy, I noticed the little disagreement over at Wikipedia:Reference_desk/Mathematics#What_is_the_addition_equivalent_of_a_factorial.3F and it struck me as odd that a mathematical disagreement would get people saying "please take it elsewhere." In looking into this further, I came across things like Wikipedia_talk:WikiProject_Mathematics/Archive_16#Problem_editor and Wikipedia:Articles_for_deletion/Ordinal_fraction. It appears there has been an ongoing problem for quite some time with you trying to promote your own nonstandard notation. I'm not much of a subject matter expert in this field, so I can only go by what other people have written, but do you agree with this assessment? I must remind you once again that editors are expected to cooperate with each other, and this includes observing Wikipedia's policies and guidelines. Friday (talk) 18:56, 6 August 2007 (UTC)

\cdots on Negative binomial distribution[edit]

Does the Negative binomial distribution article really need product dots between each variable? I've never seen it written like this in statistics books, or in many other articles.--Vince | Talk 07:55, 2 November 2007 (UTC)

No, it is not strictly necessary, but it makes the formulas safer to read, and it does no harm. It avoids confusion, since f(k) does not imply multiplication while p(1-p) does imply multiplication. Bo Jacoby 23:53, 2 November 2007 (UTC).

Durand-Kerner-Weierstrass method[edit]

Thank you for considering my proposed change regarding subscripts. I believe the two formulations are equivalent. I found the alternate slightly easier to implement for arbitrary n, as each iteration relies only on results from previous iterations, rather than the iteration in progress. This makes it easier to halt iteration if changes fall below a specified precision.

May I impose on you to comment on the suitability of this algorithm to polynomials with complex coefficients?

John Matthews JMatthews 09:01, 2 November 2007 (UTC)

Thanks for your comment. The two formulations are not quite equivalent. In the original case the newly computed values of the approximations are used as soon as possible. In your formulation they are not used until after the loop. Both formulations provide useful algorithms. The original formulation is the one consistent with the numerical example.
Regarding the halting condition: You do not want the computer to loop infinitely at any rate. So in very exceptional cases you must stop the iteration even if the roots have not been found with the specified accuracy. How many iterations are you prepared to perform in that case? Why not use the same number of iterations in the normal case? So don't bother testing against a precision, but iterate in the normal case the same number of times as you iterate in the worst case. Have a nice day. Bo Jacoby 09:18, 2 November 2007 (UTC).
Thank you for this thoughtful analysis. In this particular case, I am implementing a generic procedure. The user may have instantiated the code with a more or less precise numerical type, depending on space and time constraints. I'm not sure I see the value in iterating beyond the useful precision of the specified type. John Matthews JMatthews 11:54, 2 November 2007 (UTC)
Congratulations! If you skip the change condition the program becomes slower on average, but not in the worst case. Once users have grown accustomed to fast response they complain about slow response. Don't create an expectation that you cannot maintain. Bo Jacoby 11:55, 8 November 2007 (UTC).

Hi John. You will find that the 'old' version of the algorithm is slightly space-saving as only one version of the variables p,q,r,s is needed.
Yes, I see this. Of course, for a given polynomial order, n, each iteration takes O(n^2) effort, while the array copy takes only O(n). It is still an appealing optimization. John Matthews JMatthews 19:06, 4 November 2007 (UTC)
You will also find that the criterion for halting the iteration loop is a little complicated, involving first the computation of the size of the last step taken, and secondly an emergency brake preventing the program from looping infinitely in exceptional cases. If you try to solve x^2+1=0 using real initial guesses, the algorithm will loop forever. That's why real initial guesses are avoided. But no matter which initial guesses you choose, some equation exists that remains unsolvable using those initial guesses. So, theoretically, you cannot safeguard against the infinite loop. That's a hard fact of life. So you must stop the program after a finite number of iterations. If your experiments show that the roots usually have stopped moving after 5 iterations, you may choose to limit the number of iterations to, say, 20. Having done this you may contemplate the cost and the benefit of performing these 20 iterations every time. The cost is the time of doing 15 needless iterations in the normal situation, and the benefit is the simplification of coding the halting criterion. It may seem insensitive and brutal and ungentlemanly to keep beating on roots after they have stopped moving, but actually it does no harm. Have fun! Bo Jacoby 00:22, 3 November 2007 (UTC).
My exit condition looks at both max_count and change. The former is O(1); the latter is only O(n). I'm looking at scaling max_count as a function of n now. Yes, it is indeed fun! John Matthews JMatthews 19:06, 4 November 2007 (UTC)
Yes, and the program becomes clearer if you simply omit the change part of the exit condition. You need the max_count part anyway. If one exit condition is sufficient then the other one is unnecessary from a logical point of view, if not from an optimization point of view. For sufficiently small values of n the algorithm is fast no matter what, and for sufficiently big values of n the algorithm is too slow no matter what. So why complicate the program for the sake of the intermediate values of n only? Scaling max_count is a good idea. I believe that max_count=7*n is sufficient, but I am not sure. It depends on the 'typical' equations to solve. Your report will be interesting. Bo Jacoby 12:26, 6 November 2007 (UTC).
I'm getting excellent results up to order 38 with the floating point precision I have available. Optimizing the max_count parameter proved unreliable, and the exit condition is really quite simple. Rather than reference my site, I'll post a link here. Thanks! John Matthews JMatthews 18:29, 7 November 2007 (UTC)
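The buffered formulation discussed in this thread, combined with the fixed-iteration-count halting recommended above, might look like this sketch (illustrative names only, not John's actual code):

```python
def roots_buffered(f, guesses, max_count=100):
    """Buffered (Jacobi-style) variant: every update within a sweep uses
    only values from the previous sweep, and the loop simply runs
    max_count times instead of testing a change tolerance."""
    roots = list(guesses)
    for _ in range(max_count):
        prev = list(roots)                 # O(n) copy per O(n^2) sweep
        for i in range(len(roots)):
            d = 1.0
            for j in range(len(prev)):
                if j != i:
                    d *= prev[i] - prev[j]
            roots[i] = prev[i] - f(prev[i]) / d
    return roots

# Solving x^2 - 1 = 0 from non-real guesses yields the roots -1 and +1:
r = sorted(roots_buffered(lambda x: x * x - 1, [0.4 + 0.9j, -0.3 - 0.2j]),
           key=lambda z: z.real)
```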

please comment on recent edit at Negative binomial distribution[edit]

In this edit you essentially reverted an edit I made. I'd appreciate it if you started a discussion of it on the talk page. Also, your edit comment makes little sense to me. Pdbailey (talk) 13:56, 25 January 2008 (UTC)

Hi, I generally like to keep comments in one spot, so I replied to you over on my talk page. I'm only posting here because I don't know if you know that this is usual and that it is customary to watch a talk page that you post on. Pdbailey (talk) 17:39, 26 January 2008 (UTC)

power of one[edit]

Thank you for commenting on my edit to exponentiation and correcting the content.
I know I added inappropriate content AFTER my edit. However, I did NOT correct it back because I wanted to wait for somebody else to correct it. QQ (talk) 06:17, 25 February 2008 (UTC)

My friend, I do not understand. Do you expect me to make some corrections? Bo Jacoby (talk) 14:46, 25 February 2008 (UTC).

Quick personal question[edit]

Howdy, your posts are almost always fluent English, but I noticed a weird grammar mistake in your last talk post. Since I have been writing assuming you are a native speaker, I wanted to catch myself before I made a mistake in how I read your posts.

The specific thing is that you said you were "willing to work against consensus" which normally means "willing to sabotage/counteract/fight consensus", as in, an enemy of consensus, not a friend.

It is important to both you and me that I know whether you are a (near) native speaker, because I believe you may be acting uncivilly and disruptively, but this is based on the assumption that you are completely comfortable with the language.

In other words, I need to know whether the failure to communicate is just a simple language barrier.

I guess in either case, it would be helpful to know:

  • Do you think I am listening to your side?

or more negatively:

  • Do I seem to be ignoring you on the talk page?

My intention is to listen to you carefully, find every possible thing that you say that could help the encyclopedia, and then ensure that Oleg sees that they are helpful too (and similarly from Oleg to you). JackSchmidt (talk) 16:03, 26 February 2008 (UTC)

No, I am a Dane and not a native English speaker, and yes, I do make mistakes in grammar. I should have written, "willing to work towards consensus" - that was a very bad mistake. I'll fix it. I do intend to act civilly and to cooperate. Yes, I think you are listening to my side, and no, you do not seem to be ignoring me on the talk page. Thank you very much for being careful on these matters. Bo Jacoby (talk) 21:34, 26 February 2008 (UTC).
Ah, I am very glad to hear I was mistaken too. I apologize if I said anything rude, was impatient, or otherwise irritating. :) Now when I read your talk comments, it is very easy to see you trying very hard to work together with people who do not yet agree. A few more people have joined in the discussion, so I am not worried at all; it will work out.
We now have the two extremes laid out clearly for people to examine: your van der Waerden/Bourbaki approach (which is probably the ancestor of all algebra in continental Europe, so who can argue!), and Arthur Rubin's very precise statement, but with 5cm parentheses!
It may not be possible for everyone to agree which single formula is best, but it should be easy to include both (with fixed tex). I think the summation with "informal" limits (i+j=k) is easier for young students (well American students at least) to read, and a "purer" notation is better for the formal definition.
Thanks again for being patient. It can be very hard for people to communicate, both in formulas, and in words. JackSchmidt (talk) 22:02, 26 February 2008 (UTC)

Dash to minus[edit]

Thanks for catching it twice! I restored your fix, and left this note on Silly Rabbit's talk page too. JackSchmidt (talk) 19:18, 28 February 2008 (UTC)


As in ZFC, objects are not typed in NF.  --Lambiam 08:11, 1 April 2008 (UTC)

Compressibility misunderstanding[edit]

Thanks for catching that! What I meant to say was more along the lines of losing "compressibility effects"... so something like when you have a shock wave/blast wave and the gas loses its "ideal-ness" and compressibility effects come into play (along with changes in chemical composition and whatever else). As far as formatting goes, I don't know if it should be left as is, adjusted to fit the other sections in the article, or that portion of the article somehow rewritten so that the formatting is uniform. I'm not sure what to do with it. I'm trying to keep that section as clear as possible because it's an area of great disagreement (cause no one does their homework). Thanks for the help! Katanada (talk) 23:07, 2 April 2008 (UTC)

Product of Planck constant and Gravitational constant[edit]

Hi Bo Jacoby: Would you look at "Talk:Black hole electron", Item 9, "Quantum-gravitational effect" when you have some time. Clearly, the product of h and G is a constant value. We will know this product value when the L1 wavelength (and/or Planck length) value is more precisely known. We need to know if any correction factor is needed in the equation: (L2) squared = (L3)(L1). This is clearly defined in the article. Any correction factor that may apply must be very small. If you have interest in this, let me know on my Talk page. DonJStevens (talk) 15:12, 13 April 2008 (UTC)

Hi Bo Jacoby: The one second Schwarzschild radius that you described is interesting. With frequency 5.48x10 exp 85 rotations per second, the wavelength (c/frequency) would be 5.47x10 exp -78 meter.

Some theorists expect that photon energy has an upper limit, so that the expression "photon energy equals h times frequency" has an upper cut-off limit. As photon energy approaches the Planck mass energy, wavelength approaches the Planck length and time per cycle approaches the Planck time. The values of length and time then become either quantized or uncertain, so that the (E=hv) expression does not apply at some high energy level. In my opinion, this is probably correct. DonJStevens (talk) 18:27, 14 April 2008 (UTC)
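The figures quoted at the start of this message are mutually consistent, as a one-line check shows (constants approximate):

```python
c = 2.99792458e8          # speed of light, m/s
frequency = 5.48e85       # rotations per second, as quoted above
wavelength = c / frequency
# within 1% of the quoted 5.47e-78 m
assert abs(wavelength / 5.47e-78 - 1) < 0.01
```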

Hi Bo Jacoby: Some things in nature are clearly quantized by limits; electric charge, electron mass, muon mass, impedance of space, velocity of light and so on. The acceleration limit is fixed at (c squared)(1/radius). Photon energy is fixed (has a charge acceleration limit) because the alternating current required to generate a continuous em wave cannot be less than one charge per 1/2 cycle, and the front-to-back dimension of a minimum segment of the continuous wave cannot be less than one wavelength. For a continuous wave with minimum radiated energy, excitation current will be (2e) times frequency. Then (h)(frequency) will equal (2e) squared, times frequency, times impedance. With some algebra, we find frequency divided by voltage equals (2e) divided by (h). This is the Josephson junction frequency-to-voltage ratio. The charge acceleration is (c squared)/radius, where radius is wavelength/2 pi. Acceleration increases linearly with frequency and excitation current increases linearly with frequency. DonJStevens (talk) 14:42, 16 April 2008 (UTC)

Hi Bo Jacoby: As photon wavelength is shortened (frequency increased; energy increased) three significant energy levels are attained. The electron Compton wavelength is first, 2.426·10^−12 m; next is 1.213·10^−12 m. This second wavelength has energy equal to the mass energy of one electron plus one positron. As wavelength L is shortened ever more, a wavelength is found that has energy determined either by the Planck constant or the gravitational constant: E = hc/L and also E = Lc^4/(3πG). This wavelength is L = (3πhG/c^3)^1/2 = (2π)(Planck length)(3/2)^1/2. This wavelength is related to the electron Compton wavelength in a specific way. I expect (can't yet prove) that the L1 photon wavelength is the upper energy limit photon wavelength. DonJStevens (talk) 14:27, 19 April 2008 (UTC)
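The claimed identity L = (3πhG/c^3)^1/2 = 2π·(Planck length)·(3/2)^1/2 does hold algebraically, since the Planck length is (ħG/c^3)^1/2 with ħ = h/2π; a numerical check with approximate CODATA-style constants:

```python
from math import pi, sqrt

h = 6.62607015e-34    # Planck constant, J s (approximate)
G = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8      # speed of light, m/s

planck_length = sqrt((h / (2 * pi)) * G / c**3)   # ~1.6e-35 m
L1 = sqrt(3 * pi * h * G / c**3)

# 2*pi*sqrt(h*G/(2*pi*c^3))*sqrt(3/2) = sqrt(3*pi*h*G/c^3), so:
assert abs(L1 / (2 * pi * planck_length * sqrt(1.5)) - 1) < 1e-12
```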

Hi Don. I edited your text above for readability. Why should there be an upper limit for photon energy? There is no energy, nor wavelength, determined by the Planck constant. How should the gravitational constant be related to the Compton wavelength? Bo Jacoby (talk) 16:00, 19 April 2008 (UTC).

Hi Bo Jacoby; When the wavelength of a photon is known, then its frequency is known and its energy is known. E = h (frequency). The frequency equals (c)(one second)/ wavelength. E = (hc/wavelength)(1 sec). The photon wavelength that has energy equal to one electron mass is h/mc or 2.426x10 exp -12 m. This is electron Compton wavelength. The gravitational constant defines a photon sphere radius (or photon orbit radius) value for any gravitationally collapsed mass. Radius = 3Gm/(c squared). When the radius (or circumference) is known, then its mass (and mass energy) is known. Radius (c squared/3G) = m. Radius (c exp 4)/3G = E . We need to translate from electron Compton wavelength to photon sphere circumference in order to use either the Planck constant or the gravitational constant (or both of these constants) to specify electron mass. The applicable gravitational time dilation factor at the photon sphere radius is needed to perform the translation. The idea of an upper limit for photon energy is not new. It is based on a maximum energy density value. When photon energy is great enough to cause limit space curvature, then no greater energy density (no smaller wavelength) is possible. DonJStevens (talk) 21:05, 19 April 2008 (UTC)

Hi Don. Assume a single photon makes a black hole. The radius, r, of that black hole is computed from the mass, m, by r = 2Gc⁻²m. The mass is computed from the energy, E, by m = c⁻²E. The energy is computed from the frequency, ν, by E = hν. The frequency is computed from the wavelength, L, by ν = cL⁻¹. So r = 2Gc⁻²m = 2Gc⁻⁴E = 2Gc⁻⁴hν = 2Gc⁻³hL⁻¹. (Note that Lr = 2Gc⁻³h is an area). What prevents it from being a big black hole (r >> L)? Where does the electron mass enter into the calculation? Bo Jacoby (talk) 06:30, 20 April 2008 (UTC).
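Bo's chain of substitutions can be checked numerically. The following sketch is an editorial addition, not part of the original exchange; it uses standard values for G, c and h and the 1.213×10⁻¹² m wavelength quoted earlier in the thread:

```python
# Numeric check of r = 2G c^-2 m = 2G c^-4 E = 2G c^-4 h*nu = 2G c^-3 h L^-1
# for a photon treated (hypothetically, per the discussion) as a black hole.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8     # speed of light, m/s
h = 6.62607015e-34   # Planck constant, J s

L = 1.213e-12        # example photon wavelength from the thread, m
nu = c / L           # frequency
E = h * nu           # photon energy
m = E / c**2         # equivalent mass
r = 2 * G * m / c**2 # Schwarzschild radius of that mass

# All four expressions agree, and L*r = 2Gh/c^3 is a constant area (~3.3e-69 m^2)
print(r, L * r)
```

The invariant area L·r is independent of the chosen wavelength, which is Bo's parenthetical point.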

Hi Bo Jacoby: I like your equations because they follow the logic used to define the Planck length. You said r = 2Gh/(c³L). The L value relates to a circumference. The particle turns around twice to complete a spin cycle, so the L value is 2(2πr), or 4πr. You will find that when L is equal to 4πr, the radius is required to be equal to the Planck length, (hG/(2πc³))^(1/2). The Planck length computation explains how constants can be used to define a critical length value. A critical length based on a gravitational photon orbit radius, 3Gm/c² rather than 2Gm/c², is larger than the Planck length by the factor (3/2)^(1/2). The proposed critical circumference is 2π(Planck length)(3/2)^(1/2). This circumference length can be directly related to the electron Compton wavelength by analyzing gravitational time dilation. The electron as modeled by Burinskii does not have an event horizon. It is a gravitationally confined (naked) ring singularity. It has some, but not all, of the properties predicted for a black hole. DonJStevens (talk) 16:40, 20 April 2008 (UTC)

Hi Don. I don't think the L value relates to a circumference. It is the wavelength of a photon moving at the speed of light. It is not orbiting like an electron around a nucleus. I have no information on the theory of Burinskii. The Compton wavelength of the electron depends on the mass of the electron, and this mass does not appear here, so I do not understand where the Compton wavelength comes into the picture. Bo Jacoby (talk) 20:32, 20 April 2008 (UTC).

Hi Bo Jacoby, A different starting point may be more useful. The electron, separated from an atom, with no displacement velocity, has the property of "spin". It has the angular momentum value h/(4π), so it seems to spin like a top. The full description of spin is more involved, but we can analyze it as a thin ring spinning at light velocity. Then mcr = h/(4π), so the r value is h/(4πmc). This radius provides a circumference equal to 1/2 of the electron Compton wavelength. It must turn through two revolutions to complete a spin cycle. If this spinning ring is gravitationally collapsed to the radius value 3Gm/c² (where m is the electron mass), the energy density is just right so that self-gravitational attraction (space curvature) allows the blueshifted electron Compton photon to chase its tail in a circular path. With gravitational collapse, angular momentum remains constant and total energy (mass energy) remains constant. This allows the circumference to relate to the electron Compton wavelength. Theory predicts infinite blueshift (time rate, zero seconds per second) at the Schwarzschild radius, but not at the photon orbit radius. DonJStevens (talk) 22:23, 20 December 2008 (UTC)

Hi Bo Jacoby, Here is a little more of this story. Notice that two radius values are defined: radius = h/(4πmc) and radius = 3Gm/c². The ratio of the smaller radius to the larger defines a size ratio, about 1.051×10⁻⁴⁴ to one. The square root of this size ratio is the implied blueshift ratio at the electron-mass photon orbit radius. Gravitational time dilation will shorten space to match time dilation. The blueshift ratio is the square root of the size ratio, or 1.025×10⁻²² to one. This is equal to the ratio 2π(Planck length)(3/2)^(1/2) divided by 1.213×10⁻¹² meters. The photon with wavelength 1.213×10⁻¹² meters, when gravitationally collapsed, can materialize a pair of electron particles, each with a photon orbit radius 3Gm/c². DonJStevens (talk) 18:23, 22 April 2008 (UTC)
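Don's quoted figures can be reproduced arithmetically. This sketch is an editorial addition using standard constant values; it only checks the numbers, not the physical interpretation, which remains the poster's conjecture:

```python
# Reproduce the "size ratio" between the two radii defined in the thread.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8     # speed of light, m/s
h = 6.62607015e-34   # Planck constant, J s
m = 9.1093837e-31    # electron mass, kg

r_spin  = h / (4 * math.pi * m * c)  # from spin angular momentum m*c*r = h/(4*pi)
r_orbit = 3 * G * m / c**2           # proposed photon-orbit radius for mass m

ratio = r_orbit / r_spin             # "size ratio", about 1.051e-44
blueshift = math.sqrt(ratio)         # about 1.025e-22
print(ratio, blueshift)
```

The circumference 2π·r_spin also comes out to about 1.213×10⁻¹² m, half the electron Compton wavelength, as stated in the previous post.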

Hi Don. It is difficult for me to follow, partly perhaps because your formulas use nonstandard notation. You talked about a cutoff frequency of photons. Then you talk about the electron. Why does the electron enter into that question? The electron spin is usually not interpreted classically as a thin ring spinning at light velocity. The Compton radius involves the charge of the electron, while the r value h/(4πmc) does not. Where does the Compton wavelength enter into the game? A photon materializes into a positron-electron pair, but that has nothing to do with gravitation. A photon does not have an orbit radius. So, I do not follow. Have a nice day. Bo Jacoby (talk) 19:36, 22 April 2008 (UTC).

Hi Bo Jacoby; A growing number of theorists are concluding that the electron must be gravitationally confined. This is not a new idea but studies by J. Wheeler, B. Greene, A. Burinskii and others have added evidence supporting this concept. This will continue to be evaluated until we find a way to merge general relativity and quantum mechanics. I appreciate your communication with me and wish you the best. DonJStevens (talk) 14:33, 23 April 2008 (UTC)

Hi Bo Jacoby; Though you may not wish to follow this (your choice), I concluded that I must reply to your question: where does the electron Compton wavelength enter into the game? I will write the applicable equations and then define each value. L1/L2 = L2/L3 = L4/L1 = (L4/L2)^(1/2)

L1 = 2π(Planck length)(3/2)^(1/2)

L2 = 1.213×10⁻¹² meter

L3 = (2π)² times c times one second

L4 = 2π(3Gm/c²), where m is the electron mass

From the L (length) definitions, it is clear that (L1/L2)² is equal to L4/L2. Then L2 is equal to L4(L2/L1)². With L2/L1 equal to L3/L2, a substitution is made.

L2 = L4(L3/L2)²

The electron Compton wavelength is 2(L2), or 2(L4)(L3/L2)². When L2/L1 is equal to L3/L2, the applicable G value must be very close to 6.6717456×10⁻¹¹. DonJStevens (talk) 16:31, 26 April 2008 (UTC)
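The four lengths and the claimed ratio chain can be evaluated numerically. This sketch is an editorial addition with standard constant values; it checks only the arithmetic (the ratios agree to roughly a part in a thousand, depending on the constants used), not the physical argument:

```python
# Evaluate L1..L4 as defined in the thread and compare the four ratios.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.99792458e8     # m/s
h = 6.62607015e-34   # J s
m = 9.1093837e-31    # electron mass, kg

planck_length = math.sqrt(h * G / (2 * math.pi * c**3))

L1 = 2 * math.pi * planck_length * math.sqrt(3 / 2)
L2 = 1.213e-12                        # half the electron Compton wavelength, m
L3 = (2 * math.pi)**2 * c * 1.0       # (2*pi)^2 times c times one second, m
L4 = 2 * math.pi * 3 * G * m / c**2

print(L1 / L2, L2 / L3, L4 / L1, math.sqrt(L4 / L2))  # all close to 1.025e-22
```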

Hi Don. To improve readability, please use subscripts and superscripts like this: L1 and 10−12. Bo Jacoby (talk) 21:25, 26 April 2008 (UTC).
Hi Bo: Good suggestion. L1 = (L2)² divided by L3. This allows L1 to be very precisely determined. The product of h and G may then be precisely determined. DonJStevens (talk) 13:09, 27 April 2008 (UTC)

Hi Bo Jacoby: The relationship between photons, electrons and angular momentum is well described in the paper "Is the electron a photon with toroidal topology?" (1997). You can see this if you search for it by title. If you don't see this, let me know: I will help. DonJStevens (talk) 16:30, 4 May 2008 (UTC)

Nu vs theta for true anomaly[edit]

Hi Bo Jacoby, I changed θ (theta) to ν (nu) in the Kepler's Laws article because it denotes 'true anomaly', a parameter denoted universally by ν (nu) in all the other literature I've seen -- including other parts of this same article.

True anomaly is the angle PCL, where P is the satellite perifocus, C is the center of mass of the large central body (planet or sun), and L is the current location of the satellite, measured along the orbit plane. This same angle is already denoted as ν in several other places in the same article. In fact, the article as it stands has a lot of repetition that should be merged and removed. The derivation from Newton's laws should be merged into the rest of the article.

If I have misunderstood the meaning of the "angle formerly referred to as θ", i.e., if it is something other than the true anomaly, then please explain and accept my apologies. Karn (talk) 03:27, 7 June 2008 (UTC)


The Reference Desk Barnstar
Thanks for helping me on the Mathematics Reference Desk! --Ye Olde Luke (talk) 17:16, 31 July 2008 (UTC)

Math help desk[edit]

Thanks for your reply on the help desk! One quick question though: is there a particular name for the method you gave me? Thanks again. - But I Played One On TV (talk) 17:49, 5 August 2008 (UTC)

Quick answer: No. Slow answer: It may partially be original research on my part. Bo Jacoby (talk) 10:03, 6 August 2008 (UTC). See also Talk:Inferential_statistics#material_cut_from_article. Bo Jacoby (talk) 08:50, 11 August 2008 (UTC).

Hi again! I just wanted to ask you a quick followup question to my original question. How would I go about determining the probability that an element is part of "Collection B" if it has the three most significant features, in this case Feature 1, Feature 2, and an absence of Feature 5 (with individual probabilities of 0.96±0.03, 0.88±0.04, and 0.88±0.04 respectively)? Many thanks for your help! But I Played One On TV (talk) 15:47, 11 September 2008 (UTC)

See [[1]]. Bo Jacoby (talk) 05:07, 12 September 2008 (UTC).
Thank you kindly! Once again, you've been an invaluable help. May I bug you one last time regarding uncertainty in the approach you gave? Would it be inappropriate to use the same technique using the population frequencies we got from the beta distribution as the input values, and propagating the uncertainty values in the standard way? (For convenience I also asked this on the original question section.) - But I Played One On TV (talk) 15:16, 16 September 2008 (UTC)

Style note[edit]

Hi Bo. A small note. There should be a period at the end of a sentence, as I put here. You're a long-timer on Wikipedia, perhaps it is time to cultivate the joy of seeing a carefully written text with proper notation and style. :) Oleg Alexandrov (talk) 03:00, 20 August 2008 (UTC)

Hi Oleg. Thank you. I'll try to be careful. I appreciate your improving rather than reverting. Bo Jacoby (talk) 07:18, 20 August 2008 (UTC).
I revert only when I see edits that are dubious or beyond repair. That's because I think that Wikipedia is hurt more by having a piece of dubious/nonnotable/poorly written information than by not having that piece of information at all. Oleg Alexandrov (talk) 03:23, 21 August 2008 (UTC)

Statistics Question[edit]

Hi Bo, thanks for answering my question on the help desk very promptly. The problem is my ability in statistics is very limited, so I am struggling to get my head around your answer. I even looked at the other similar answer you directed me to, but I also got quite lost. Perhaps it would help if I told you what data I'm working with. Basically, the Japan and Spain data were collected from different sources and represent the number of foreign companies entering a foreign market for the first time. Assume none of the companies could have entered each market at the same time. I wish to analyse this data so as to test for regional preferences. How can I test the extent of these preferences? And how can I say, with a certain degree of certainty, that these preferences exist? Perhaps if you have the time you could work through an example and explain what the findings show. I would be greatly appreciative.

Region         Japan   Spain
North America  3/45    5/89
Asia           26/45   34/89
Europe         7/45    34/89
Middle East    9/45    10/89
Oceania        0/45    6/89

Me again[edit]

OK, so I have now done the test you suggested, and assuming I was correct to both add and subtract against each root, I have found the following....

For Japan:

America: 0.125382329 - 0.044830437
Asia: 0.645831957 - 0.503104213
Europe: 0.224457654 - 0.115967878
Middle East: 0.271838054 - 0.153693861
Oceania: 0.042105213 - 0.000447979

For Spain:

America: 0.091807244 - 0.040060888
Asia: 0.435336959 - 0.33389381
Europe: 0.435336959 - 0.33389381
Middle East: 0.154865589 - 0.086892653
Oceania: 0.104704428 - 0.049141726

I take it these data refer to some kind of range? How do I go about proving something along the lines of a preference of, say, European firms towards Spain over Japan, which there ostensibly is when one looks at the original data? Thanks (talk) 19:02, 29 August 2008 (UTC)

Thank you![edit]

The E=mc² Barnstar
Both for your excellent contributions to the subjects of mathematics and statistics on Wikipedia, and for your service beyond the call of duty to the Mathematics Help Desk (especially to me) I reward you with this handsome barnstar. - But I Played One On TV (talk) 19:41, 16 September 2008 (UTC)

Thank you very much for the honour. Bo Jacoby (talk) 06:17, 17 September 2008 (UTC).

Lebesgue Integration Comment[edit]

I noticed that you made a comment on the talk page of the Lebesgue Integration article a little over a year ago. Well, I've just written a reply. I hope it helps. If there's anything else I can do then drop me a line.  Declan Davis   (talk)  16:49, 1 October 2008 (UTC)

On inspection the link doesn't go straight to the section as it ought. It's the Lebesgue Integration Talk Page, and the section is the Extended Real Number Line section.  Declan Davis   (talk)  16:59, 1 October 2008 (UTC)

Math reference desk[edit]

I think claiming as much accuracy as you do when you give 0.016947 as the probability is rash, since 1/64 ≈ 0.0156, from a continuity correction, is a reasonable approximation. That's one of the hazards of using the continuous to approximate the discrete. I've added a comment on this to the reference desk discussion. Michael Hardy (talk) 01:12, 8 October 2008 (UTC)

The Radial Equation[edit]

Bo Jacoby, thank goodness you wrote out that radial component of the planetary orbital equation on the talk page of Kepler's laws. I have often mentioned it on the centrifugal force talk page, and it is always denied. They demand citations. I give the citations and they still deny it. So thank you for writing it out so clearly. David Tombe (talk) 13:06, 4 November 2008 (UTC)

nu to theta[edit]

Hi. Please don't forget talk:Kepler's laws of planetary motion#Symbol for Angular Displacement. Bo Jacoby (talk) 10:10, 8 November 2008 (UTC).

I changed the figure. It is not mathematically exact, but I hope it will do. Brews ohare (talk) 12:27, 8 November 2008 (UTC)


What a sorehead eh? Hope he doesn't break any fingers asking the next question!  ;) hydnjo talk 00:20, 24 November 2008 (UTC)

hi jacoby

Kepler's law of areal velocity[edit]

Bo, it's been a while since we were in contact. Are you still having difficulty reconciling the fact that in elliptical, parabolic, and hyperbolic orbital motion the net tangential acceleration is zero, even though there are two distinct tangential accelerations apparent in the motion? The angular acceleration is visibly obvious, as is the Coriolis acceleration. These two accelerations are equal in magnitude and opposite in direction, and as such we have conservation of angular momentum. But these two accelerations do not actually cancel each other out physically. Are you having difficulty reconciling this concept? Unless we have a couple, equal and opposite forces normally cancel each other out completely. David Tombe (talk) 06:33, 27 December 2008 (UTC)

Bo, regarding your reply on my talk page, tangential acceleration only has one meaning. It is the second meaning that you gave. It is acceleration perpendicular to the radius vector. You stated the expression for that correctly, and as you can see, it has two components.
In a Keplerian orbit, these two components cancel each other out mathematically, and hence we have conservation of angular momentum. However, both of these components are clearly observable in the motion. Can you think of any other dynamic examples in nature, apart from the case of a couple, where two forces mutually cancel mathematically, but where their actions can be individually observed? Obviously in statics, two mutually cancelling forces can produce a pressure or a tension. But I want an example in dynamics where two mutually cancelling forces can still be observed to be producing accelerations, and where a couple is not involved.
By the way, we ought to take this very interesting issue to the talk page of Kepler's laws. I'd be obliged if you were to reply there. At any rate, I will raise this point on the talk page of Kepler's laws.
I've just looked at your reply again. The point with which you disagree is where I said that the Coriolis force and the angular force do not cancel physically. But it is manifestly obvious that they do not cancel physically. The action of the Coriolis force can be clearly seen in changing the direction of the radial motion. And the action of the angular force can be clearly seen by virtue of the fact that the tangential speed increases and decreases cyclically in an elliptical orbit. Yes, they cancel mathematically in magnitude and direction. But you can still see them both in operation. David Tombe (talk) 06:22, 28 December 2008 (UTC)

Hi David!

  1. See acceleration#Tangential_and_centripetal_acceleration showing the other meaning of tangential acceleration in use.
  2. The split of the total tangential acceleration r\ddot\theta + 2\dot r\dot\theta into the two components r\ddot\theta and the Coriolis acceleration 2\dot r\dot\theta depends on the choice of coordinate system. So it is not a physical effect.
  3. This issue may not have sufficient maturity to be raised on the talk page of Kepler's laws.
  4. There is no angular force, even if the angular speed \dot\theta and the tangential speed r\dot\theta change. The Coriolis force is not a force. Your claims are not manifestly obvious.

Bo Jacoby (talk) 21:53, 28 December 2008 (UTC).

Bo, OK, let's just concentrate on the r\ddot\theta + 2\dot r\dot\theta definition of tangential acceleration. And let's just talk about acceleration as opposed to force. And let's choose an inertial frame of reference. And let's consider an elliptical orbit.
I can see an angular acceleration by virtue of the tangential speed increasing and decreasing cyclically. I can also see a Coriolis acceleration by virtue of the radial direction of motion continually deflecting tangentially. The two effects cancel out mathematically in magnitude and direction, leading to conservation of angular momentum. But they are both clearly visible nevertheless.
Can you not also see these two effects individually? David Tombe (talk) 09:40, 29 December 2008 (UTC)

Calling \ddot\theta an angular acceleration is a misnomer because \ddot\theta is not a mechanical acceleration. This may be a source of confusion. The second-order derivative of a coordinate is sometimes called an acceleration, even when it is not one. (See Covariant derivative#Coordinate description). Consider uniform motion on a straight line: the angular velocity \dot\theta and the tangential speed r\dot\theta vary, even though there is no acceleration at all. Consider circular motion with constant speed: the acceleration is nonzero even though r\ddot\theta = 0 and \ddot\theta = 0 and \ddot r = 0 and \dot r = 0, because r\dot\theta^2 \ne 0. Bo Jacoby (talk) 19:35, 29 December 2008 (UTC).

OK, let's not call it angular acceleration then. Let's call it Billy. Can you see Billy in operation in an elliptical orbit? I can. Yet it cancels out mathematically with Harry (which I would prefer to call the Coriolis acceleration). So we have Billy and Harry cancelling out, yet we can still observe them both in action individually.
On your other point, yes, I am fully aware that straight-line motion contains Harry, Billy, and Andrew (which I would prefer to call centrifugal force). I tried unsuccessfully to argue this point on the centrifugal force talk page a few months back. David Tombe (talk) 05:00, 30 December 2008 (UTC)

Fine with me! Yes, a noncircular Kepler motion has varying angular speed. Why not just call it \dot\theta? No physical force is needed to cause it to vary. Newton's first law says that in the absence of forces the velocity is constant. \dot\theta looks like a speed, and so people like to assign a force to cause it to vary, but that is a misunderstanding: no force is needed. The base vectors (\hat{\mathbf{r}}, \hat{\boldsymbol\theta}) vary in time, and so it is not an inertial frame of reference. "We can still observe them both in action individually": well, the individual terms in the tangential acceleration \ddot\mathbf r\cdot\hat{\boldsymbol\theta} = r\ddot\theta + 2\dot r\dot\theta can be identified. Iff \ddot\mathbf r\cdot\hat{\boldsymbol\theta} = 0 then r\ddot\theta = -2\dot r\dot\theta. I don't think anybody disagrees with that. If you are trying to say something else, then you are likely to meet opposition. Don't try to argue controversial information into Wikipedia, but improvement in clarity is always welcome. Greetings! Bo Jacoby (talk) 14:12, 30 December 2008 (UTC).
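The statement that vanishing transverse acceleration forces r\ddot\theta = -2\dot r\dot\theta is equivalent to the conservation of r^2\dot\theta (Kepler's second law). As an illustrative sketch (an editorial addition, not part of the exchange), a short numerical integration of an inverse-square orbit shows that quantity staying constant; in Cartesian coordinates r^2\dot\theta is x·vy − y·vx:

```python
# Semi-implicit Euler integration of motion under a -a/r^2 central acceleration.
# For a central force this scheme conserves angular momentum exactly (up to
# floating-point rounding), so x*vy - y*vx should not drift.
a_strength = 1.0            # strength of the inverse-square acceleration (arbitrary units)
dt = 1e-4
x, y = 1.0, 0.0
vx, vy = 0.0, 1.2           # speed chosen to give a noncircular (elliptical) orbit

L0 = x * vy - y * vx        # initial r^2 * theta_dot
for _ in range(100_000):
    r3 = (x * x + y * y) ** 1.5
    vx += -a_strength * x / r3 * dt   # update velocity first ...
    vy += -a_strength * y / r3 * dt
    x += vx * dt                      # ... then position (semi-implicit Euler)
    y += vy * dt

print(L0, x * vy - y * vx)  # both ~1.2
```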

Bo, that was basically it. Two distinctly observable dynamical effects cancel out mathematically. There must be no other precedent for it in dynamics.
I think that it is indeed telling us something important that has been overlooked. But the important thing is to highlight it for the purposes of making people have a better understanding of Kepler's second law.
We have zero net tangential acceleration, hence conservation of angular momentum. But that zero can be split into two distinctly observable effects. David Tombe (talk) 03:02, 31 December 2008 (UTC)

Understanding seems to come as a wave of psychological phases. Initially there is no problem. Then comes a mystery. Then the feeling that something important has been overlooked. Then comes clarification. Finally the solution becomes trivial, 0 = a − a, and there is again no problem. However, I too think that the article on Kepler's laws can be improved. Emphasize the fact that Kepler's laws are kinematic, not dynamic; force, mass and angular momentum should not be included. But the differential equation \ddot\mathbf{r} = (\ddot r - r\dot\theta^2)\hat\mathbf r + (r\ddot\theta + 2\dot r\dot\theta)\hat\boldsymbol\theta = (-a/r^2)\hat\mathbf r + 0\hat\boldsymbol\theta should be derived from Kepler's laws. The derivation of Kepler's laws from Newton's laws should then be omitted. Bo Jacoby (talk) 11:12, 2 January 2009 (UTC).
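The polar decomposition of the acceleration quoted in the preceding post can be verified symbolically. The following is an editorial sketch: it differentiates the Cartesian position twice and projects onto the radial and transverse unit vectors, recovering the two components being debated:

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)
theta = sp.Function('theta')(t)

# Cartesian position of the planet in the orbital plane
x = r * sp.cos(theta)
y = r * sp.sin(theta)
ax = sp.diff(x, t, 2)
ay = sp.diff(y, t, 2)

# Project the acceleration onto the radial and transverse unit vectors
a_radial     = sp.simplify(ax * sp.cos(theta) + ay * sp.sin(theta))
a_transverse = sp.simplify(-ax * sp.sin(theta) + ay * sp.cos(theta))

rd, rdd = sp.diff(r, t), sp.diff(r, t, 2)
thd, thdd = sp.diff(theta, t), sp.diff(theta, t, 2)
print(sp.simplify(a_radial - (rdd - r * thd**2)))            # 0
print(sp.simplify(a_transverse - (r * thdd + 2 * rd * thd))) # 0
```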

Bo, on your final equation, you left out the centrifugal term. That is an essential term in solving the radial equation to obtain the conic-section solutions. On the disappearance of the tangential component, there is still a mystery. The two individual terms sum to zero, yet they are still present in the motion in an elliptical orbit. There is no kinematical precedent for that anywhere else in nature.
Also, since you don't like the term r\ddot\theta being referred to as angular acceleration, what do you suggest we refer to it as? Normally I would simply have called it tangential acceleration, but unfortunately that is not good enough in this situation, because we also have the Coriolis term, which is also a tangential acceleration. So what do you suggest we refer to r\ddot\theta as? Some might call it the Euler acceleration, but even that has problems, because it is already used in Wikipedia to describe the fictitious version which we observe when we are in an angularly accelerating frame of reference.
On the issue of kinematics, yes, I agree that the whole topic could be treated without involving force or mass. But at the same time, we should not overlook the fact that we can only do it all kinematically because Kepler's laws are based on a solar reference frame in which reduced mass is involved. For small masses, the mass effectively cancels out. David Tombe (talk) 08:49, 3 January 2009 (UTC)
  1. I don't think I left out the centrifugal term r\dot\theta^2. Balancing of forces occurs in many places: the gravitational force on a person at rest equals the upward force from the floor.
  2. r\ddot\theta is the acceleration measured from a (noninertial) rotating frame of reference.
  3. Kepler's laws were found from observations: Tycho's observations --> Kepler's laws --> the acceleration formula --> Newton's generalization. The concepts of mass and force do not enter until the last step, Newton's.

Bo Jacoby (talk) 10:18, 3 January 2009 (UTC).

Bo, in an elliptical orbit, the r\ddot\theta term represents a very real and observable angular acceleration relative to the inertial frame. Rotating frames of reference don't enter into it.

Also, in the radial equation, the centrifugal acceleration and gravity both operate in tandem. The only time that they are balanced is in the special case of a circular orbit.

On your final point, yes, Kepler's laws only involve accelerations. There is no consideration of mass or force. I pointed this out myself a couple of months back. Kepler's laws can lead us to conclude the involvement of an inverse cube law centrifugal repulsive term and an inverse square law attractive term.

We then require Newton to complete the picture with his involvement of mass. The product Mm on the numerator is related to the concept of reduced mass Mm/(M+m). With the total acceleration numerator coming to G(M+m), the force numerator GMm follows from multiplying G(M+m) by the reduced mass. David Tombe (talk) 05:13, 4 January 2009 (UTC)

  1. The orthogonal unit vectors \hat\mathbf r, \hat\boldsymbol\theta do define a rotating frame of reference. So the polar coordinate system is not an inertial frame of reference, and zero acceleration is not \ddot r = \ddot\theta = 0.
  2. In an inertial frame there is no centrifugal acceleration and no balancing of accelerations.
  3. Newton's completion of the picture need not be included in an article on Kepler's laws. According to Newton, Kepler's laws are only approximately true.

Bo Jacoby (talk) 13:18, 4 January 2009 (UTC).

Bo, when I look at an elliptical orbit, I can see an angular acceleration relative to the inertial frame. It's as simple as that. Rotating frames of reference don't need to come into it.

And as regards a circular orbit, the outward centrifugal acceleration is balanced by the inward gravitational acceleration. The radial equation on the main article is correct. But in your letter above, you have left out the centrifugal term in the radial equation.

Finally, as regards Newton, by all means leave him out of it. When I first entered this debate, I was saying that Kepler came first. But all the emphasis prior to my arrival seemed to be on deriving Kepler's laws from Newton's laws.

So I'll be in favour of wiping all that Newton stuff out of the main article, because it is very badly written anyhow.

The main points are that Kepler's first law equates with the radial equation, which includes a centrifugal term and a gravitational term working in tandem. And the second law deals with the fact that the two tangential terms add to zero, hence leading to conservation of angular momentum. The two individual tangential terms are both however still visibly present. David Tombe (talk) 05:30, 5 January 2009 (UTC)


  1. Centrifugal acceleration and Coriolis acceleration come from the rotation of the cartesian frame of reference (\hat\mathbf r, \hat\boldsymbol\theta). (Same question, same answer.) I wrote \ddot\mathbf{r} = (\ddot r - r\dot\theta^2)\hat\mathbf r + (r\ddot\theta + 2\dot r\dot\theta)\hat\boldsymbol\theta = (-a/r^2)\hat\mathbf r + 0\hat\boldsymbol\theta. This I think is correct.
  2. The radial acceleration component is \ddot r - r\dot\theta^2 = -a/r^2. When you consider radial acceleration to be composed of the outward centrifugal acceleration and the inward gravitational acceleration you mean that \ddot r = r\dot\theta^2 - a/r^2, which is a correct equation, but \ddot r is not the radial component of the planet's acceleration. The differential equation \ddot r = r\dot\theta^2 - a/r^2 looks like the equation of motion of a particle having a cartesian coordinate r and being subject to an acceleration \ddot r consisting of two terms. That particle should not be confused with the planet.
  3. The angular acceleration component is r\ddot\theta + 2\dot r\dot\theta = 0. When you consider the angular acceleration to be nonzero you mean that \ddot\theta = -2\dot\theta\dot r/r, which is correct, but \ddot\theta is not the angular component of the planet's acceleration. The differential equation \ddot\theta = -2\dot\theta\dot r/r looks like the equation of motion of a particle having a cartesian coordinate \theta and being subject to an acceleration \ddot\theta consisting of a term -2\dot\theta\dot r/r, which may be called a Coriolis acceleration. That particle should not be confused with the planet.
  4. Let's not wipe anything out of the article until we have something better.

Bo Jacoby (talk) 10:27, 5 January 2009 (UTC).

Bo, your first sentence in section (2) above is correct. That is the correct equation. But in section (1) above, you have put the right-hand side of this equation onto both sides. You have avoided the left-hand side altogether, and in doing so you have dropped the centrifugal acceleration out of the equation entirely.

Once again, back to names. If \ddot r is not the radial acceleration, then what is it? If we call it Terry, then I'll repeat my point. In a circular orbit, the outward centrifugal acceleration balances the inward gravitational acceleration, and Terry is zero.

We need that centrifugal term to be exposed because it is a necessary part of what leads to the conic section solution, along with the tangential equation. In section (1) above, you hid the centrifugal term.

On your next point, section (3), you completely lost me when you said that it becomes nonzero when you took one term across to the other side and equated the two terms. The total tangential acceleration is always zero. Also, you said that

"but \ddot\theta is not the angular component of the planet's acceleration"

It is. If it's not the angular acceleration of the planet, then what is it?

By the way, I've taken the discussion to the talk page of Kepler's laws of planetary motion. David Tombe (talk) 12:34, 5 January 2009 (UTC)

Hi David

  1. You have put the right-hand side of this equation onto both sides. I don't follow you. You are welcome to explain in more detail.
  2. If \ddot r is not the radial acceleration, then what is it? It is the acceleration of a fictive particle having the equation of motion \ddot r = r\dot\theta^2 - a/r^2, as I said before. It is not the radial component of the planet's acceleration, which is \ddot r - r\dot\theta^2.
  3. You are right that \ddot\theta is called angular acceleration, but it is not the same thing as the angular component of the acceleration, which is r\ddot\theta + 2\dot r\dot\theta and which you called the tangential acceleration, even though that name is reserved for the component of the acceleration in the direction of the motion, as discussed earlier. It is important to distinguish between these concepts.

Bo Jacoby (talk) 14:07, 5 January 2009 (UTC).

Bo, the two relevant equations are now clearly laid out on the talk page of Kepler's laws, under the section "Simplification of the Article". Nobody is disputing these two equations. The problem seems to be entirely about what name to use for each of the terms.

You managed to hide the centrifugal term completely in the way that you re-arranged the radial equation. The centrifugal term, which is very important, had simply disappeared. You hid it.

If you go over now to the Kepler talk page, we are making progress because we have narrowed it all down to those two equations. We may even have to avoid using names altogether for the individual terms, if a consensus can't be reached. David Tombe (talk) 04:24, 6 January 2009 (UTC)

David, please quote precisely the equation where you think I hid the centrifugal term. I can't find it! Bo Jacoby (talk) 09:38, 6 January 2009 (UTC).

Bo, It's exactly at the end of the paragraph above where you began

Understanding seems to come as a wave of psychological phases - - -

You wrote the equation, \scriptstyle \ddot\mathbf{r}=(\ddot r - r\dot\theta^2)\hat\mathbf r+(r\ddot\theta+2\dot r\dot\theta) \hat\boldsymbol\theta=(-a/r^2)\hat\mathbf r+0\hat\boldsymbol\theta

You successfully dropped the centrifugal term right out of the final result on the extreme right. You took gravity to one side of the radial equation and then equated it with itself, hence eliminating the centrifugal term from view.

Anyhow, I am continuing this discussion on the talk page of Kepler's laws because the issue of terminology for the five disputed terms needs to be brought into an open arena. David Tombe (talk) 05:55, 7 January 2009 (UTC)

Thanks. The equation \scriptstyle \ddot\mathbf{r}=-(a/r^2)\hat\mathbf r says that the planet's acceleration \scriptstyle \ddot\mathbf{r} is in the sun's direction \scriptstyle -\hat\mathbf r, and that the magnitude \scriptstyle a/r^2 is in inverse proportion to the square of the distance \scriptstyle r. The radial part of the motion can be split into two terms. The first term is \scriptstyle \ddot r , which you call the radial acceleration, not to be confused with the radial component of the acceleration. The second term is the centrifugal acceleration \scriptstyle r\dot\theta^2 . The so-called radial acceleration \scriptstyle \ddot r satisfies the differential equation \scriptstyle \ddot r =r\dot\theta^2-a/r^2, so it is written as a sum of the centrifugal acceleration \scriptstyle r\dot\theta^2 and the gravitational acceleration \scriptstyle -a/r^2. If you disagree, then write down what you believe is correct. Let's keep this clearing-up of misunderstandings away from the talk page. Bo Jacoby (talk) 11:38, 7 January 2009 (UTC).
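[Editor's note: the distinction Bo draws here can be checked numerically. The sketch below is an illustration, not part of the thread; it uses the convenient choice a = 1 and a circular orbit r = 1, θ̇ = 1, so that the "radial acceleration" r̈ stays zero while the radial component of the acceleration stays −a/r².]

```python
# Integrate the polar equations of motion quoted in the thread, with
# simple Euler steps (a = 1, circular orbit chosen for illustration):
#   rddot     = r*thetadot**2 - a/r**2    (Bo's "radial acceleration")
#   thetaddot = -2*rdot*thetadot/r        (zero angular component)
a = 1.0
r, rdot, theta, thetadot = 1.0, 0.0, 0.0, 1.0
dt = 1e-3
for _ in range(10_000):
    rddot = r * thetadot**2 - a / r**2
    thetaddot = -2.0 * rdot * thetadot / r
    r, rdot = r + rdot * dt, rdot + rddot * dt
    theta, thetadot = theta + thetadot * dt, thetadot + thetaddot * dt

# On the circular orbit the "radial acceleration" \ddot r is zero ...
radial_acceleration = r * thetadot**2 - a / r**2
# ... but the radial COMPONENT of the acceleration is -a/r**2, not zero:
radial_component = radial_acceleration - r * thetadot**2
print(radial_acceleration, radial_component)  # 0.0 -1.0
```

So the two quantities being argued about really are different objects; on a circular orbit one vanishes and the other equals the gravitational term.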

Bo, I'm not going to reply anymore on your talk page. The matter is already being discussed on the talk page of Kepler's laws of planetary motion. You need to ask yourself why you cannot accept that the sum of the centrifugal acceleration and the gravitational acceleration amounts to the total radial acceleration. You seem to insist on taking the centrifugal term to the other side and defining the radial acceleration as being exclusively gravity. David Tombe (talk) 06:55, 8 January 2009 (UTC)

David, you are not listening. You must distinguish between the radial acceleration \scriptstyle \ddot r, and the radial component of the acceleration \scriptstyle \ddot r -r\dot\theta^2. Bo Jacoby (talk) 08:52, 8 January 2009 (UTC).

Bo, I've given a very concise answer to that specific question on the talk page of Kepler's laws of planetary motion. David Tombe (talk) 03:18, 9 January 2009 (UTC)


Hi, I'm posting this on your talk page (and those of other members of the Maths WikiProject), as we need editors who are knowledgeable about mathematics to evaluate the following discussion and check out the editors and articles affected. Please follow the link below and comment if you can help.


Thank you. Exxolon (talk) 18:02, 1 July 2009 (UTC)

Question about your answer @ math helpdesk[edit]

I am the guy who wrote this: [2]

In your answer, does your algorithm give me the best car where:

"power ranking" := the sum of the power rankings of the engine, body, and wheels; or

"power ranking" maximizes the minimum of the power rankings of the engine, body, and wheels?

Thanks! Quilby (talk) 23:23, 24 July 2009 (UTC)

The sum. Bo Jacoby (talk) 04:29, 25 July 2009 (UTC).
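[Editor's note: the two objectives only differ when something, such as a budget, prevents simply picking the best part in every slot; with unconstrained independent choices, both are maximized by the per-slot best. A toy sketch, with all rankings, costs, and the budget invented for illustration:]

```python
from itertools import product

# Hypothetical (rank, cost) per part and a budget -- invented numbers.
engines = {"E1": (10, 7), "E2": (5, 3)}
bodies  = {"B1": (8, 6),  "B2": (4, 2)}
wheels  = {"W1": (7, 5),  "W2": (6, 4)}
BUDGET = 13

feasible = [c for c in product(engines.items(), bodies.items(), wheels.items())
            if sum(cost for _, (_, cost) in c) <= BUDGET]

# Objective 1: maximise the SUM of the three rankings (Bo's reading).
best_sum = max(feasible, key=lambda c: sum(rank for _, (rank, _) in c))
# Objective 2: maximise the MINIMUM (worst) of the three rankings.
best_min = max(feasible, key=lambda c: min(rank for _, (rank, _) in c))

print([name for name, _ in best_sum])  # best total ranking
print([name for name, _ in best_min])  # best worst-part ranking
```

With these numbers the sum objective picks E1/B2/W2 while the maximin objective picks E2/B1/W2, so the two readings of "best car" genuinely diverge.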


Hello and thank you for the reply on the Likelihood talk page for the chart section. This is off topic, but do you happen to know how to solve for the median in terms of n and i, also stated as:

find m such that the integral of p from 0 to m equals the integral of p from m to 1, for your expression and a given n and i?

I do not know how to carry on conversations on wiki. If you are able to reply on this page, could you please ping me at wikiping AT so that I know to check back at this page.

Thank you again, Full Decent (talk) 19:46, 3 September 2009 (UTC)

See Talk:Likelihood_function#Median. The median m is such that (the integral from 0 to m) = (1/2 of the integral from 0 to 1). But don't use the median if the mean value is defined; the mean has nicer algebraic properties. Conversations are held on Wikipedia:Reference desk/Mathematics. I don't know how to 'ping at wikiping AT'. Bo Jacoby (talk) 22:39, 3 September 2009 (UTC).
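[Editor's note: the median asked about here can be found numerically. The sketch below assumes the likelihood in question is proportional to p^i (1−p)^(n−i), i.e. a Beta(i+1, n−i+1) density, as the mean formula (i+1)/(n+2) quoted later in the thread suggests; the helper names are mine.]

```python
from math import gamma

def beta_pdf(x, a, b):
    """Density of Beta(a, b), normalised with the gamma function."""
    norm = gamma(a) * gamma(b) / gamma(a + b)
    return x**(a - 1) * (1 - x)**(b - 1) / norm

def beta_cdf(m, a, b, steps=4000):
    """Midpoint-rule integral of the density from 0 to m."""
    h = m / steps
    return h * sum(beta_pdf((k + 0.5) * h, a, b) for k in range(steps))

def beta_median(a, b):
    """Bisect for the point m with CDF(m) = 1/2."""
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if beta_cdf(mid, a, b) < 0.5 else (lo, mid)
    return (lo + hi) / 2

n, i = 10, 3
print((i + 1) / (n + 2))              # mean of Beta(i+1, n-i+1): 1/3
print(beta_median(i + 1, n - i + 1))  # median: ~0.324, close to the mean
```

For moderate n the median and mean nearly coincide, which is consistent with Bo's advice that the mean is the handier quantity.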
Ah, very enlightening. I followed your posting at Talk:Likelihood_function#Median, and found more discussions between you and Michael Hardy at There you mention "Generally the mean value of the likelihood function, (which is a beta distribution), is (i+1)/(n+2)". I will use this in my application rather than the median. Full Decent (talk) 14:35, 4 September 2009 (UTC)
I'm pleased that my writing is useful. Good luck! Bo Jacoby (talk) 15:31, 4 September 2009 (UTC).

Your comments[edit]

Regarding your post. I'm amazed by this. You keep saying that I've been impolite, but fail to give any tangible evidence of this. I have pointed out that I have not been impolite, and that pma is the one who has been impolite. I gave you examples of his comments that were impolite. So, how is that childish? You are casting aspersions towards me in defence of pma when those charges should be filed with pma and not me. That isn't childish: that's showing you, with examples and evidence, the facts of the matter. ~~ Dr Dec (Talk) ~~ 21:39, 3 October 2009 (UTC)


I wish to warmly thank you and PST for your defense. I see that the incident is not yet closed, although I have decided to retire. So maybe some further explanation may help, since clearly there is a more general problem behind it. Let me start from the quotation of the Bible in RD/M of Oct 1 2009, in two posts. The first post ("The answer to this system of equations?") received a short, simple and complete answer from you. Then PST provided further information, well written and with useful general remarks, intended as a hint for further reflection, of course not only for the OP. This is what I consider a well organized RD/M. Then DrDec added his post, totally ignoring the first answer, mocking the second one ("Your little monologue helped whom? How?! To the OP:." &c), subsequently vaguely accusing PST of trying to show off, and starting the n-th useless debate. So he essentially added nothing but noise, which can only leave the questioners confused. This is what I call spoiling the work of a well organized team. In the second post, "Limits on discrete sets", another questioner asked about a limit in a way that, according to our experience at RD/M, clearly indicates a beginner trying to work with the definition of a sequence of real numbers. Accordingly, user:EmilJ shortly gave a first answer and put the customary question to the OP, in order to understand the case better. In these cases, it is agreed to wait for the OP's response, so as to address the subsequent answer better. Again Davis Davies, following his habits, couldn't help adding noise, in the form of both wrong and off-topic considerations about the complex Gamma function. I wonder what idea was left in the OP's mind about his doubts on limits; in any case, he disappeared. I found it quite funny that DrDec's remark on PST's previous post fitted his own post exactly: wasn't his an attempt, although awkward, at showing off? 
I thought that quoting the sentence from Matthew in small characters was just a polite and humorous way to suggest to him the incongruence of his previous behaviour.

The Gamma function was perfectly ON-topic. Read the thread. The OP was talking about the factorial applied to non-integer values. This is exactly Γ(n+1). And your petty insistence on misspelling my surname is testimony to your lack of respect. ~~ Dr Dec (Talk) ~~ 14:14, 5 October 2009 (UTC)

Unfortunately, as he always does with any form of criticism, he took it as a personal attack, as the edit history shows, and started a series of nonsensical remarks about Latin and religion being off topic in the RD/M. Now, here is the key point: in the past 2 years or so I have dedicated part of my time to Wikipedia, because I believe, as many others do, in sharing my knowledge. But, as for many others of us, my spare time is the limited time left free from work duties. So, while I am happy to spend time answering questions about maths, I now consider it a waste of my time to explain to a Declan Davis Davies why his comments are idiotic; especially because he has shown he is not able to understand the explanation. It is not my habit to lose my patience and be impolite: this is the first and the last time I do it here. I found it a desperate case, and I lost my patience (not a premeditated attack, I hope that is clear). I am sorry for that, essentially because I set a bad example, one that I hope nobody will follow, as it clearly resolves nothing and damages the environment.

It would not be a big deal were it only a problem with me. But this is only the latest in a very long series of incidents that DrDec has had with several people. As some of us recall, in the past he even created a Hispanic sock puppet, Raul, User:Dharma6662000,[3] to support himself in his disputes (with comic dialogues indeed). Later, everybody tried to help him, but it seems that he only improved his formal behaviour, in order to provoke people better within the formal respect of the rules. Formally, an advanced troll. Now, my energy and my patience are not up to the task. At the moment, I do not see the conditions for doing further useful work at the RD/M. Answering maths questions now implies the drudgery of keeping the RD/M clean of DrDec's noise; he has shown he is not willing to follow any rule shared by the regular and authoritative people there, starting with the most elementary one: avoid pretending you know a subject when you are ignorant of it. Recently User:Meni_Rosenfeld, in a both polite and professional way, tried to remind DrDec of the common rules followed at the RD/M, with no effect; eventually he lost his patience too, after having tried repeatedly to make him see reason. This is also why I decided to retire. I see that very recently User:Sławomir_Biały did the same, after an exhausting editorial war with DrDec. I hope DrDec will eventually learn how to behave, and, why not, the definition of the integers, before everybody else gets disgusted and leaves. In the meanwhile I warmly wish you all good luck. pma -- (talk) 16:28, 4 October 2009 (UTC)

I noticed that you posted here, and so I thought that I should at least attempt to persuade you out of your retirement. In my opinion, you add an extra dimension to the reference desk which will be lost if you retire. For example, once in a while a non-trivial problem is asked at the reference desk which usually remains unanswered until an editor like you comes along. In fact, there are not many others with whom I can debate about those nice examples in point-set topology (remember the irrational "shadow" topology? :)). Let me add that I hope you perhaps come out of retirement at some point in the future, but if not, I wish you the best.
Regarding Dr. Dec, I have attempted to communicate with him in a non-confrontational manner with my last remark. I think that he could possibly become a useful editor if he focuses on the questions being asked at the reference desk rather than on the other editors answering them. Nevertheless, I think that the damage inflicted by Dr. Dec is ongoing, and I fear that it will not stop. I did not wish to take action against him (despite my saying so), and that stance still remains. Communicating with him personally appears to be a waste of time, since he is under the impression that three editors are against him. In fact, I think that he feels his actions are completely appropriate. If he repeats the behaviour he is accused of in the future, I shall make a note of it on the talk page of RD/M. Otherwise, I fear that he will lead to the retirement of more editors.
Anyway, I do hope that you will come out of retirement at some point; otherwise it is a great loss to the project. --PST 01:04, 5 October 2009 (UTC)
Thanks to PMA and PST. I welcome your remarks above. As Wikipedia is an anarchistic project where everyone can contribute, no one is personally responsible for the quality. Each editor can try to provide good contributions and show good attitude, but it is pointless to get emotionally involved in bad contributions and bad attitude. People hear what you say, and see what you do, even if they don't tell you. So don't underestimate the power of your example. Let's patiently give Dr.Dec a chance to develop. Don't worry, be happy! Bo Jacoby (talk) 08:25, 5 October 2009 (UTC).
I take solace in this remark. If "People hear what you say, and see what you do, even if they don't tell you." then I will surely be vindicated by the community at large. ~~ Dr Dec (Talk) ~~ 15:39, 5 October 2009 (UTC)

I'm sick and tired of the bullying that I'm suffering at your hands. Either put up or shut up. Get some other uninvolved people to look at the events of this last week and to give their opinions. Report me to whomever you like, take whatever action you feel necessary. Maybe then you will see how deluded you are all being. You all have the same viewpoints and you're all whipping yourselves into an anti-me frenzy; it's quite worrying to watch. So to repeat: stop bad-mouthing me and take some formal action. You won't, because you know that I've done nothing wrong! One thing that I find funny is that you mention User:Sławomir_Biały. Well, PST and Sławomir Biały seem to have been having their own little run-ins. If you check Sławomir Biały's history he was always apologising for being rude. And pma himself has had to apologise for his rudeness towards me. No-one seems to be reading a word I've written. I've shown you politely and patiently how rude other people have been, all the while following Wikipedia procedure. Yet I am the one branded a fool. Interesting. I look forward to reading the results of any enquiries. Although I know none of you will solicit one; you'll just carry on bad-mouthing me. If you're not going to take further action then SHHHH! This is becoming boring. ~~ Dr Dec (Talk) ~~ 13:32, 5 October 2009 (UTC)

Dr. Dec, do you understand why people who know neither you nor each other agree to 'bad-mouth' you? Why do some become so rude to you that they have to apologize? Do you intend to find out? Did you ask? Bo Jacoby (talk) 14:36, 5 October 2009 (UTC).
  • The problems with Sławomir Biały don't worry me. He's already had problems with PST ([1] & [2]) and so it's clear that he likes to argue. The problems with pma don't bother me either; he clearly has a hot temperament and has shown a lack of civility and patience repeatedly, although the last time he had the good grace to apologise ([3]). The simple fact that I am one of the more vocal editors on the reference desk explains why I have drawn attacks and criticism from Sławomir Biały and pma. As for PST, well, he was clearly upset by the comments that I made about his post; that's why he jumped to the defence of pma. However, having read what I had to say about pma's actions and my own reactions, PST has even said that he "understand[s] that some of the circumstances of events were different" than he originally believed when he went to pma's defence. So it looks like PST's stance is changing. Although, I was most surprised to see how much his tone had changed from when he made this post to when he made the post above.
  • Basically pma still held a grudge because of something from the past, so when I commented on PST's post he jumped at the chance to draw his dagger. I think that you might have been unhappy because I pointed out the errors in your post about odds and probabilities. PST was clearly unhappy because of the comments I made regarding his post. So you all had the means, the motive, and the opportunity. So it's not surprising that you all attacked me. What does surprise me is your show of solidarity with pma despite his repeated personal attacks and incivility. I might be annoying, but I don't openly insult people! (Before you look for a link to an insult I might have made over a year ago, don't. This seems to be pma's line: instead of judging me on my current conduct he harks back to things I did, or things I said, over a year ago when I had been editing a matter of days and didn't at all understand Wikipedia.)
  • If you believed that my behaviour could be improved; which undoubtedly all of ours can be, then is insulting me and attacking me the way to do things? Is anyone going to listen when they are subjected to a string of unfounded, biased and personal attacks? Would you? Then to have to listen to threats about people taking action against me for using Wikipedia procedure to protect myself? Well, that's a joke. Maybe I should have resorted to name-calling like pma did? ~~ Dr Dec (Talk) ~~ 15:20, 5 October 2009 (UTC)


Pma seems to be calling me an "advanced troll". I think the following content from our discussion on my talk page details events and explains the actions I have taken in my defence. Calling me a troll, well, this seems to follow pma's modus operandi: when the facts stack up in favour of an idea contrary to his viewpoint he resorts to name calling. ~~ Dr Dec (Talk) ~~ 15:14, 5 October 2009 (UTC)

"You mean this post? As I said on your talk page: you are casting aspersion towards me in defence of pma when those charges should be filed with pma and not me. Pointing this out is not childish, it is showing you that your accusations are misdirected and showing you towards whom they ought to be directed. Your bias in this matter is clear. Pma has made a series of personal attacks towards me, yet you have not written a single word in condemnation towards him. All of the edit histories show that I explained my problems to him calmly (1), that he continued to attack me in the edit summary (2), that I used Wikipedia procedure to caution him (3), and that he continued to attack me (4). Notice that this wasn't in the heat of the moment. After writing his last series of insults at 22:04 he came back at 05:58 to edit them. This was a premeditated attack. So it amazes me that you accuse me of being childish and leave pma blame free. I think that I have been mature by not being baited and by using Wikipedia procedure to defend myself."

Does your understanding of the causes of the trouble have a constructive aspect? Bo Jacoby (talk) 16:47, 5 October 2009 (UTC).
Quite clearly: it shows that I'm not an "advanced troll", but an editor using Wikipedia procedure to defend himself instead of resorting to name calling. It shows that pma's actions are indefensible, and highlights your blind bias in the support of pma. Given these two facts any editor with any power of reason will see your comments as misguided and ultimately irrelevant. ~~ Dr Dec (Talk) ~~ 17:03, 5 October 2009 (UTC)
Would you yourself consider your last comment as polite, if it was written by someone else and referring to you? Why should I have a blind bias in the support of pma? Do you know of 'any editor with any power of reason', besides yourself? Bo Jacoby (talk) 20:43, 5 October 2009 (UTC).
It was both polite and forthright. If you disagree then I suggest that you seek a second opinion. As to why you are blindly biased in the support of pma, well, I don't know. I have demonstrated such bias. It's up to you to explain it. ~~Dr Dec (Talk)~~ 22:23, 5 October 2009 (UTC)

Dr Dec, as you are confident that you will surely be vindicated by the community at large, why not lean back and relax while waiting for that to happen? You suggest that I seek a second opinion. Well, my opinion is already a second opinion, because PMA and PST have expressed a first opinion. There are no personal attacks involved here, simply because we do not know each other in person. This is merely WP editors commenting on each other's contributions. When you refer to Wikipedia procedure rather than trying to understand the meaning of other editors' contributions, then in my humble opinion you may fairly be described as an "advanced troll". (See wp:What is a troll?). You feel insulted by the name, but you do not understand the meaning. Your feelings are not the subject discussed on Wikipedia. Let other people's contributions speak for themselves, as you let your own contributions speak for themselves. Bo Jacoby (talk) 05:15, 6 October 2009 (UTC).

You're being silly now. I think it's you that's being the troll here. You seem to ignore everything I say, and forget everything that's been written and then make a point which has already been dealt with earlier. The whole problem began because pma was commenting on contributors instead of content. I have commented on content all along. I refer you to the original discussion following my warning to pma. I'll assume that you have read the original discussion, now tell me: What is there to understand about pma's contributions? They were personal attacks and insults! ~~Dr Dec (Talk)~~ 13:58, 6 October 2009 (UTC)

Just to let you know: I won't be replying to any more posts. We seem to be going round and around in circles. I'm removing this page from my watch list. If you wish to write a comment that I will read then please post it on my talk page. ~~Dr Dec (Talk)~~ 14:04, 6 October 2009 (UTC)

Your 1^n notation[edit]

Sadly, I wasn't registered when that took place, but I do remember seeing someone use the notation in a book somewhere. The book used 1^{\frac{1}{2}} to mean both roots of unity, +1 and -1. Protactinium-231 (talk) 02:34, 23 December 2009 (UTC)

Thank you! The notation (-1)^{\frac 2 n} for e^{\frac{2\pi i}n} is less controversial than 1^{\frac 1 n}. The J (programming language) does it this way:
   _1^2r3        NB. minus one power two thirds
   ^0j2r3p1      NB. e power two thirds of pi times the square root of minus one
   1^1r3         NB. one power one third

Bo Jacoby (talk) 11:59, 23 December 2009 (UTC).
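[Editor's note: the same principal-value convention shows up in other languages' complex arithmetic, not just J. A quick check in Python, added as an illustration:]

```python
import cmath

# Python takes the principal value: (-1)**(2/3) is computed as
# exp((2/3) * Log(-1)) = exp(2*pi*i/3), not as the real cube root of 1.
z = (-1) ** (2 / 3)
w = cmath.exp(2j * cmath.pi / 3)
print(z)                   # roughly (-0.5+0.866j)
print(abs(z - w) < 1e-12)  # True
```

So "minus one power two thirds" lands on e^{2πi/3}, just as the J line `_1^2r3` intends.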

I suspect that this has something to do with how \sqrt{-1} (or, equivalently, (-1)^{\frac 1 2}) is defined. It would explain a lot if it was defined as being i or −i (which I think is the case) rather than only i. Double sharp (talk) 05:02, 4 March 2012 (UTC)

2+2=5 becomes a really neat bit of mathematics[edit]

Hi Bo,

Two weeks ago you suggested on the Math Desk that one could make 2+2=5 in a "non-cheating" manner (my math terminology isn't great) by stating X+X \ne 2X for a random variable X. My response was the "wondrous proof" that I think invites a bit of further investigation. You'll note that if you take a gaussian sample for X, the result becomes X+X = \alpha X for some normalization constant alpha (it's predetermined, but I really didn't care to remember the normalization of the gaussian - sue me). However, it does not work for X sampled from a uniform distribution.

The point is that this works for other distributions like the gaussian - for example, a distribution in the cosine from -pi/2 to pi/2. We see we need a convolution identity such that f*f = \alpha f with, this time, a non-predetermined normalization constant, and this is apparently solvable with a large, but not covering, class of exponential functions.

I do see another interesting thing: it is possible that this is applicable to still-open questions of general probability arithmetic (for an arbitrary distribution - we know gaussian algebra and uniform algebra decently, but not much expansion beyond that) and possibly to computation theories separating probabilistic from deterministic automata. However, it's also possibly more possible that this is nothing new, just interesting. Either way, it's neato, and I'd like to put the 2+2=5 thing online to start before talking to my advisor about whether it's expandable, so how would I credit you via web (that site is waaaay out of date)? SamuelRiv (talk) 08:16, 16 June 2010 (UTC)

The point is that 2(\mu\pm\sigma)=2\mu\pm 2\sigma while (\mu\pm\sigma)+(\mu\pm\sigma)=2\mu\pm \sqrt 2\sigma. Here \mu\pm\sigma signifies any random variable having mean value \mu and standard deviation \sigma. It doesn't matter if the distributions are normal or not, but the terms in the sum must be independent random variables. The notation \mu\pm\sigma is nice because the rules a+(\mu\pm\sigma)=(a+\mu)\pm\sigma and a(\mu\pm\sigma)=a\mu\pm a\sigma look so familiar, but X+X\ne 2X does not look familiar. See Multiset#Cumulant_generating_function. Bo Jacoby (talk) 13:27, 16 June 2010 (UTC).
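[Editor's note: Bo's point is easy to confirm by simulation. The sketch below is mine; any distribution with finite variance works, and the Gaussian is chosen purely for convenience.]

```python
import random
from statistics import pstdev

random.seed(0)
N, MU, SIGMA = 200_000, 0.0, 1.0
x1 = [random.gauss(MU, SIGMA) for _ in range(N)]
x2 = [random.gauss(MU, SIGMA) for _ in range(N)]   # independent of x1

sd_2x  = pstdev([2 * v for v in x1])                # 2X: sd is 2*sigma
sd_sum = pstdev([a + b for a, b in zip(x1, x2)])    # X1+X2: sd is sqrt(2)*sigma
print(round(sd_2x, 2), round(sd_sum, 2))            # about 2.0 and 1.41
```

The scaled variable 2X has twice the spread, while the sum of two independent copies has only √2 times the spread, which is exactly why X+X ≠ 2X here.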
That is correct, of course, but it doesn't make a complete algebra (I don't think). What I am looking at actually (on a double-check) doesn't work for the Gaussian (because on my notes, as noted in jest above, I never actually did the renormalization... so... oops... but it may actually work for a specific mean-sd combination), but it does work for a class of exponentials including 1+Cos[x] over a finite symmetric interval for which the result takes the same functional form as what was added (which consequently no longer means we're in strict probability theory since the normalization constant changes). As far as the generating function, I had looked at that earlier and will have to look again for this class (one has to take in all moments, it seems, to get it so that you can "cancel out" the Xs at the end). But it again looks like the sets of self-eigenfunctions (which were PDFs at first) of the convolution - my guess is that not normalizing like a regular probability distribution is what makes all the difference. SamuelRiv (talk) 17:37, 16 June 2010 (UTC)
I am sorry, but I do not follow. The formulas work for PDFs having finite mean and standard deviation, including Gaussian distributions. It has nothing to do with normalization (let alone renormalization). The '+' is sum of random variables, not of PDFs. Bo Jacoby (talk) 18:17, 16 June 2010 (UTC).



Hello. Your account has been granted the "reviewer" userright, allowing you to review other users' edits on certain flagged pages. Pending changes, also known as flagged protection, is currently undergoing a two-month trial scheduled to end 15 August 2010.

Reviewers can review edits made by users who are not autoconfirmed to articles placed under pending changes. Pending changes is applied to only a small number of articles, similarly to how semi-protection is applied but in a more controlled way for the trial. The list of articles with pending changes awaiting review is located at Special:OldReviewedPages.

For the guideline on reviewing, see Wikipedia:Reviewing. Being granted reviewer rights doesn't change how you can edit articles even with pending changes. The general help page on pending changes can be found here, and the general policy for the trial can be found here.

If you do not want this userright, you may ask any administrator to remove it for you at any time. Tiptoety talk 15:11, 12 July 2010 (UTC)

Kepler's laws of planetary motion[edit]

>> Your contribution to the talk page makes no sense. Are you joking? Bo Jacoby (talk) 01:28, 16 September 2010 (UTC).

These were some calculations I made using Mathematica. I have no reference for them, so I didn't add them to the main page; maybe you can find one. It is not that serious.... —Preceding unsigned comment added by Paclopes (talkcontribs) 14:29, 24 September 2010 (UTC)

Mathematical notation[edit]

I see that you are interested in mathematical notation like me. If you understand German or what an online translator produces, you may be interested in HenningThielemann (talk) 19:05, 19 October 2010 (UTC)

Thanks. Bo Jacoby (talk) 19:20, 19 October 2010 (UTC).

Negative multinomial distribution[edit]

If you feel you can provide an unbiased, technical and scientific review of the Negative_multinomial_distribution article, please read this talk page. A couple of users have attempted to simplify the NMD description and in the process have introduced a number of technical errors. I believe we may need to revert the content to the 11 November 2009 version by User:Atama, if not earlier. Thanks. Iwaterpolo (talk) 18:48, 29 November 2010 (UTC)

License tagging for File:German tank problem.pdf[edit]

Thanks for uploading File:German tank problem.pdf. You don't seem to have indicated the license status of the image. Wikipedia uses a set of image copyright tags to indicate this information; to add a tag to the image, select the appropriate tag from this list, click on this link, then click "Edit this page" and add the tag to the image's description. If there doesn't seem to be a suitable tag, the image is probably not appropriate for use on Wikipedia.

For help in choosing the correct tag, or for any other questions, leave a message on Wikipedia:Media copyright questions. Thank you for your cooperation. --ImageTaggingBot (talk) 21:05, 3 January 2011 (UTC)

Request for Comment[edit]

I've opened a request for comment on the maths reference desk. Hopefully we can get some closure. Fly by Night (talk) 21:48, 27 March 2011 (UTC)

thanks for the link[edit]

Thanks for the link to Wolfram Alpha! How did you find out about it, this seems to be quite new. --Thebackofmymind (talk) 21:17, 7 May 2011 (UTC)

You are welcome. I do not recall how I found out about it. We have an article on wolframalpha. Bo Jacoby (talk) 18:15, 4 March 2012 (UTC).

Disambiguation link notification for April 19[edit]

Hi. When you recently edited German tank problem, you added a link pointing to the disambiguation page Bayesian (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ • Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 09:28, 19 April 2012 (UTC)

Wikipedia Help Survey[edit]

Hi there, my name's Peter Coombe and I'm a Wikimedia Community Fellow working on a project to improve Wikipedia's help system. At the moment I'm trying to learn more about how people use and find the current help pages. If you could help by filling out this brief survey about your experiences, I'd be very grateful. It should take less than 10 minutes, and your responses will not be tied to your username in any way.

Thank you for your time,
the wub (talk) 18:08, 14 June 2012 (UTC) (Delivered using Global message delivery)


Find the factors of (factorize):
x² + 4y² + 4y − 4xy − 2x − 8
You gave the correct answer but you didn't show the steps. Will you please show me the steps? You also provided me a link to a very nice mathematical website. Can you provide the names of some mathematical websites which solve word problems as well as equations? I know you remain very busy throughout the day, but it would be very kind of you to provide the names of such websites. (talk) 03:27, 2 August 2012 (UTC)

The steps are these. You want a factorization:
(a₁x+b₁y+c₁)(a₂x+b₂y+c₂) = a₁a₂x² + b₁b₂y² + (a₁b₂+b₁a₂)xy + (a₁c₂+c₁a₂)x + (b₁c₂+c₁b₂)y + c₁c₂ = x²+4y²−4xy−2x+4y−8. Equating coefficients gives: a₁a₂=1, b₁b₂=4, a₁b₂+b₁a₂=−4, a₁c₂+c₁a₂=−2, b₁c₂+c₁b₂=4, c₁c₂=−8. The equation a₁a₂=1 has the solution a₁=a₂=1. The equations b₁b₂=4, b₂+b₁=−4 have the solution b₁=b₂=−2. The equations c₂+c₁=−2, c₁c₂=−8 have the solution c₁=2, c₂=−4. The last equation b₁c₂+c₁b₂=4 is satisfied. So x²+4y²−4xy−2x+4y−8 = (x−2y+2)(x−2y−4) is the factorization you are looking for. is your friend! Bo Jacoby (talk) 08:35, 2 August 2012 (UTC).
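[Editor's note: the result can be sanity-checked without any computer algebra system. Two polynomials of total degree 2 that agree on a 3-by-3 grid of points must be identical, so a few point evaluations suffice. A sketch of mine:]

```python
def original(x, y):
    return x*x + 4*y*y - 4*x*y - 2*x + 4*y - 8

def factored(x, y):
    return (x - 2*y + 2) * (x - 2*y - 4)

# Degree-2 polynomials in two variables that agree at x, y in {0, 1, 2}
# must coincide, so this grid check proves the factorization.
agree = all(original(x, y) == factored(x, y)
            for x in range(3) for y in range(3))
print(agree)  # True
```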

Aryabhata's calculation of Pi[edit]

I would like to ask a question at the mathematics reference desk, but I am not fully sure if it is a subject for the desk there. Can you please advise? User_talk:Titodutta#Aryabhata.27s_Calculation_of_Pi --Tito Dutta 14:48, 9 August 2012 (UTC)

That depends on your question. If you are requesting a reference regarding mathematics, the mathematics reference desk is the proper place to ask. If you want to discuss Aryabhata's work, it is not. Bo Jacoby (talk) 16:23, 9 August 2012 (UTC).

Kepler' s laws and time derivative of anomalies[edit]

Hi. I've seen your answer at the Reference desk. It seems that the article about Kepler's laws does not mention the three types of angular velocities. -- (talk) 14:35, 2 October 2012 (UTC)

Austrian case[edit]

Hi, regarding the question that you've answered at the Reference desk, what I actually would like to know is how probable it is that an Austrian architect born in ca. 1865 and still alive in 1909 was still alive in 1945. I need to know this to resolve a copyright case. Thanks a lot if you can help me here. --Eleassar my talk 18:22, 16 November 2012 (UTC)

Hi. Can stating a probability really resolve a copyright case? You had better find out whether this particular Austrian architect was alive in 1945. (I am not supposed to give legal advice!) Look up the number of 44-year-old persons in 1909 and relate it to the number of 80-year-old persons in 1945. Then find the probability. Bo Jacoby (talk) 22:14, 16 November 2012 (UTC).
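A minimal sketch of that cohort-ratio estimate; the counts below are hypothetical placeholders, not real Austrian demographic data:

```python
# Cohort-ratio estimate as suggested above. The two counts are assumed
# placeholder values, NOT real demographic figures.
n_age44_in_1909 = 50_000   # assumed: persons aged 44 alive in 1909
n_age80_in_1945 = 15_000   # assumed: persons aged 80 alive in 1945

# Crude conditional survival probability
# P(alive at 80 in 1945 | alive at 44 in 1909),
# ignoring migration and cohort-size differences.
p_survive = n_age80_in_1945 / n_age44_in_1909
print(p_survive)
```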
Sometimes such information just can't be found (see orphan works). Otherwise, thanks for the hint. By the way, don't worry, I'll not make any decision based on this information. It's just a debate. --Eleassar my talk 23:45, 16 November 2012 (UTC)

Kepler's laws[edit]

Hi Bo Jacoby, I saw your edits in Kepler's law. I guess you're right that the article is on his laws and not on Newtonian mechanics. But don't you think it helps to understand these laws if we show that they can also be explained by later mechanics? After all, we live in the 21st century and have the advantage of being able to look back and use more modern science to appreciate the achievements of earlier scientists. Cheers, Wikiklaas (talk) 00:09, 25 December 2012 (UTC)

Hi Wikiklaas! Thank you for asking. I do agree that Newton's laws put Kepler's laws in perspective, but several other articles in Wikipedia do that, so this article need not do it. Kepler's laws are about geometric and kinematic concepts such as position, distance, angle, time, and angular velocity. Knowledge of dynamical concepts such as mass, force, energy, linear momentum, angular momentum, the gravitational constant, and center of mass is not necessary in order to understand Kepler's laws. It seems to me that the deleted contributions were not helpful to the reader, because they assumed unnecessary knowledge. I think that understanding Kepler's laws is a prerequisite for deriving and understanding Newton's laws, and not the other way round. Bo Jacoby (talk) 00:31, 25 December 2012 (UTC).
Thanks for your answer. I hope you don't mind me disagreeing. It's not that readers need only be familiar with everything that went on before Kepler devised his laws in order to best understand them. A little help coming from later discoveries, and the fact that Kepler's description of the phenomena (not his explanation for them) was later proven to be rather accurate, may provide the reader with a piece of text that makes it easier to understand the concept. I think there's nothing wrong with stating that his descriptions were confirmed, and when and how that was done (Kepler's laws combined with Newtonian mechanics make a strong case). I see I'm not the only one holding this opinion, as you already had to remove a large piece on Newtonian mechanics for a second time. Maybe you will have to think about how you would like to include this yourself, if you don't want to be forced to check the article for "unwanted" additions on Newtonian stuff by others every week. Wikiklaas (talk) 00:39, 26 December 2012 (UTC)
The article Kepler orbit contains the derivation of Kepler's laws based on Newton's laws, while the article Kepler's laws of planetary motion contains the derivation of Newton's laws based on Kepler's laws. Not every article needs to say everything, IMO. The articles on Newton's laws of motion and Newton's law of universal gravitation and classical mechanics and even history of classical mechanics say nothing about Kepler. Bo Jacoby (talk) 03:59, 26 December 2012 (UTC).
That's maybe because you don't need to know Kepler's laws to understand Newtonian mechanics. Wikiklaas (talk) 00:38, 28 December 2012 (UTC)
That is true, but Newton needed Kepler's laws in order to derive his own laws. Bo Jacoby (talk) 12:10, 28 December 2012 (UTC).
OK. Let's see if users start to complain or state they miss some elementary explanation, or else leave it like you wish. Best wishes, Wikiklaas (talk) 21:25, 28 December 2012 (UTC)


Volume of cone circumscribed about a sphere[edit]

Even I have trouble following your solution to the problem, because I don't see where you get all the starting equations (and I also feel that the implicit differentiation you used is needlessly confusing). Jasper Deng (talk) 05:01, 15 December 2013 (UTC)

Statistical induction and prediction[edit]

My article is original research and so this important result is not included in wikipedia. Bo Jacoby (talk) 22:19, 16 December 2013 (UTC).

Disambiguation link notification for December 18[edit]

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Kepler's laws of planetary motion, you added a link pointing to the disambiguation page Perturbation (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ • Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 08:56, 18 December 2013 (UTC)


Hi, I noticed that you have made substantial contributions to the German tank problem. I was hoping you could have a look at mark-recapture and improve that article? Jamesmcmahon0 (talk) 11:41, 21 February 2014 (UTC)

Thanks for noticing! I'll give it a look. Bo Jacoby (talk) 10:43, 25 February 2014 (UTC).
Is this what you had in mind? Bo Jacoby (talk) 07:08, 4 March 2014 (UTC).
Thanks for help here: Wikipedia:Reference_desk/Archives/Mathematics/2014_March_13#telescoping_series

I finally managed to find the closed form for the series here. Bo Jacoby (talk) 21:03, 8 July 2014 (UTC).


Hi, Bo Jacoby! I have seen your comments on talk:spin (physics) re the integer values of spin instead of 1/2 and integer. Could you give more details on how you started your reasoning? -- (talk) 14:36, 13 January 2015 (UTC)

Well, spin is angular momentum. Our article on the Planck constant says that h/2π is called the reduced Planck constant, and then that: The reduced Planck constant is the quantum of angular momentum in quantum mechanics. This latter statement is not quite true. The z-component of the angular momentum of an electron is either h/4π or −h/4π, so the quantum of angular momentum is actually h/4π rather than h/2π. Expressed in units of h/2π the electron has spin 1/2, but expressed in units of h/4π the electron has spin 1: (1/2)⋅(h/2π)=1⋅(h/4π). Bo Jacoby (talk) 10:49, 14 January 2015 (UTC).
Very sound reasoning. It is a convenient way to express the difference in nature of orbital and spin angular momentum. Using this expression, it seems that an important objection to the proton-electron structure of the neutron as envisioned by Rutherford disappears. Please give feedback on the issue of the spin of the neutron, as it follows from nitrogen-14 spectral data in apparent inconsistency with the proton-electron structure, on talk:neutron. -- (talk) 11:14, 20 January 2015 (UTC)
My contribution is actually about language rather than physics. Orbital angular momentum is n⋅(h/4π) where n is even. Spin angular momentum is n⋅(h/4π) where n may be even or odd. Bo Jacoby (talk) 22:14, 22 January 2015 (UTC).
This language convention is very important actually: by expressing the spin of a neutron as an integer under this convention, an important experimental spectroscopic objection (the spin of the N-14 nucleus) to the proton-electron structure of the neutron disappears. -- (talk) 12:56, 27 January 2015 (UTC)
I respectfully disagree. The physics does not depend on the unit of measurement. Bo Jacoby (talk) 09:19, 30 January 2015 (UTC).
Of course the physical aspect does not depend on the unit of measurement, but the situation mentioned is based on a problematic terminological (mis)understanding of the relation between integer multiples of h/4π and integer multiples of h/2π. -- (talk) 10:59, 20 February 2015 (UTC)
Our article on Quantum harmonic oscillator has the formula
    E_n = \hbar \omega \left(n + {1\over 2}\right)
I prefer
    E_n =(2 n + 1) {\hbar \over 2} \omega
emphasizing that the energy is an odd number of energy quanta. Bo Jacoby (talk) 13:05, 28 February 2015 (UTC).
I agree with your preference. (It can be noticed that the energy quantum of the oscillator has a different value than the energy quantum of light, namely the photon hν).-- (talk) 12:11, 3 March 2015 (UTC)
The energy of a monochromatic electromagnetic field is an even number of energy quanta.
    E_n =n h \nu = 2 n {\hbar \over 2} \omega
Bo Jacoby (talk) 19:56, 7 March 2015 (UTC).

Beta Function used in Beta distribution[edit]

Could you tell me whether the denominator of the normalized beta distribution indicates the total number of outcomes of a single event? The denominator in the binomial distribution is the total probability of the event; for example, as you stated, in tossing four coins we divide by 16 to normalize the binomial distribution (since the total number of outcomes is 16), isn't it? Is there any relation between this (the denominator in the binomial distribution) and the beta distribution? JUSTIN JOHNS (talk) 07:54, 23 March 2015 (UTC)

Thanks for asking. In the binomial distribution the success probability p is a real number satisfying the inequality 0≤p≤1. So it has infinitely many possible values. An infinity of possibilities is a nuisance, because you cannot assign the same probability to an infinity of possibilities: if the assigned probability is zero then the sum is zero, and otherwise the sum is infinite. In no case is it one. So the principle of insufficient reason does not apply. This problem is circumvented by use of measure theory, but the result is not satisfactory. It is better to study a finite number of possibilities first. So you should start with the hypergeometric distribution, where the success probability has but finitely many possible values. See [4]. Bo Jacoby (talk) 14:18, 23 March 2015 (UTC).

Thanks for the link. It's a very good site and I joined it. I hope it will be helpful for learning these concepts. I tried to download the pdf but it's not accessible due to network issues. Anyway, I'll try looking at it online and come back. JUSTIN JOHNS (talk) 12:31, 24 March 2015 (UTC)

Of course I understood the hypergeometric distribution. Could you now tell me what else I need to study, beyond the beta distribution, to reach the Dirichlet distribution? (talk) 13:00, 26 March 2015 (UTC)

The (unnormalized) hypergeometric distribution is
f(k) = \binom{K}{k}\binom{N-K}{n-k}
Let k_1=k, k_2=n-k, K_1=K, K_2=N-K, then
f(k_1, k_2) = [n = k_1 + k_2]\binom{K_1}{k_1}\binom{K_2}{k_2}
where [n = k_1 + k_2] is an Iverson bracket. f(k_1, k_2) describes the odds that a sample of n balls, taken from an urn containing K_1 white balls and K_2 black balls, contains k_1 white balls and k_2 black balls.
The limiting case where K_1 + K_2 >> n is the (unnormalized) binomial distribution.
f(k_1, k_2) = [n = k_1 + k_2]{p_1}^{k_1} {p_2}^{k_2}
where p_1={K_1 \over K_1+K_2} and p_2={K_2 \over K_1+K_2} .
The (unnormalized) multivariate hypergeometric distribution is the obvious generalization
f(k_1, \cdots , k_I) = [n = \sum_{i=1}^I k_i]\prod_{i=1}^I\binom{K_i}{k_i}
The limiting case where \sum_{i=1}^I K_i >> n is the (unnormalized) multinomial distribution.
f(k_i) =[n = \sum_{i=1}^I k_i]\prod_{i=1}^I {p_i}^{k_i}
where p_i={K_i \over \sum_{i=1}^I K_i} .
The hypergeometric distribution is a deduction distribution describing how information on the population K_i is translated into information on the sample k_i .
The corresponding induction distribution describes how information on the sample k_i is translated into information on the population K_i .
The (unnormalized) multivariate induction distribution
f(K_1, \cdots , K_I) = [N = \sum_{i=1}^I K_i]\prod_{i=1}^I\binom{K_i}{k_i}
describes the odds that a population of N balls contains K_i balls of color i for i= 1, . . . , I, when a sample contained k_i balls of color i.
The limiting case where N>>\sum_{i=1}^I k_i defines the (unnormalized) Dirichlet distribution of p_i={K_i \over N}.
The special case I=2 is the beta distribution.
Bo Jacoby (talk) 21:55, 26 March 2015 (UTC).
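The deduction and induction odds above can be computed directly with binomial coefficients; a minimal Python sketch of the two-color (I=2) case, with the urn sizes chosen only as an illustration:

```python
from math import comb

# Unnormalized hypergeometric odds from the formulas above:
# f(k1, k2) = [n = k1 + k2] * C(K1, k1) * C(K2, k2).
def f(k1, k2, K1, K2, n):
    if n != k1 + k2:   # Iverson bracket [n = k1 + k2]
        return 0
    return comb(K1, k1) * comb(K2, k2)

# Deduction: odds over k1 of drawing k1 white balls in a sample of n = 4
# from an urn with K1 = 2 white and K2 = 8 black balls (N = 10).
deduction = [f(k1, 4 - k1, 2, 8, 4) for k1 in range(5)]
print(deduction)   # [70, 112, 28, 0, 0] -- sums to C(10, 4) = 210

# Induction: the same product read as a function of the population,
# giving the odds over K1 that the urn held K1 white balls when a
# sample of 4 contained 2 white and 2 black balls.
induction = [f(2, 2, K1, 10 - K1, 4) for K1 in range(11)]
print(induction)   # sums to C(11, 5) = 462
```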

I admit that I'm too much of a novice to learn these concepts and equations. I've learned the hypergeometric distribution; could you tell me the next step to understand the beta distribution and the Dirichlet distribution? It might be true that I can't learn how the equations are formed in the beta distribution, but could you give an explanation of the parameters in the beta distribution and how it is used in probability? I would suggest an explanation that doesn't use many symbols, equations, or jargon terms. JUSTIN JOHNS (talk) 12:28, 27 March 2015 (UTC)

The hypergeometric distribution predicts the constitution of a sample from the known constitution of the population. This is deduction. The corresponding induction distribution predicts the constitution of the population from the known constitution of a sample. The limiting case where the population is big is the beta distribution. The parameters of the beta distribution are the constitution of the sample. Bo Jacoby (talk) 17:27, 27 March 2015 (UTC).

Yeah, this gives me a sense of relief. Do you mean to say that the beta distribution predicts the probability of the population rather than the probability of the sample, as in the hypergeometric distribution? As far as my knowledge goes, the beta distribution contains two parameters \alpha and \beta and the beta function. Are \alpha and \beta the parameters of the population or of the sample? Is there any criterion for selecting values for \alpha and \beta? I couldn't understand what you meant by the limiting case in the beta distribution. JUSTIN JOHNS (talk) 05:44, 28 March 2015 (UTC)

Yes, that is correct. The beta distribution predicts the success probability based on the observed number of successes and failures. Consider a big jar containing N balls, out of which K balls are white and N−K balls are black. Consider a white ball to be a success and a black ball to be a failure. Then the success probability is p=K/N and the failure probability is 1−p. Now take a sample from the jar. The total number of balls in the sample is n, the number of white balls (successes) in the sample is k, and the number of black balls (failures) in the sample is n−k. The deduction problem is to estimate k from p and n. The answer is the binomial distribution. The induction problem is to estimate p from k and n. The answer is the beta distribution with parameters α = k+1 and β = n−k+1. I do not know why α and β are conventionally chosen rather than k and n−k. The beta function is used to normalize the beta distribution. Strictly speaking, all this is only true in the limiting case where N is very big. For finite values of N the deduction distribution is the hypergeometric distribution (rather than the binomial distribution) and the induction distribution is a discrete distribution of K (rather than the continuous beta distribution of p). Bo Jacoby (talk) 10:39, 28 March 2015 (UTC).

That too is a great explanation. Now I have understood the numerator of the beta distribution, but when it comes to the denominator there's a bit of confusion. Could you show a simple example by substituting values for n and k into the beta distribution? Using Wolfram Alpha I realized that the beta function gives a value between 0 and 1. I still can't get how the normalization takes place in the beta distribution by the beta function, because even if it (the beta function) provides a value between 0 and 1, how could it restrict a big N (very large) and its factorial to between 0 and 1 while computing the beta distribution? Also, when N becomes very large, is it permissible to use the binomial distribution or should we strictly use the beta distribution? Why couldn't we use the beta function in the binomial distribution for normalization? JUSTIN JOHNS (talk) 12:01, 30 March 2015 (UTC)

I may be repeating myself. The binomial distribution is
f(k)=\binom n k p^k (1-p)^{n-k}
The sum of the odds is (using the binomial theorem)
\sum_{k=0}^n f(k)=\sum_{k=0}^n \binom n k p^k (1-p)^{n-k}=(p+(1-p))^n = 1^n = 1
so the distribution function is normalized.
The beta distribution is the same expression, but considered a function of p rather than of k
f(p)=\binom n k p^k (1-p)^{n-k}
The sum of the odds is an integral because the variable p is continuous
\int_0^1 f(p)dp
Evaluating this integral you need the beta function
\mathrm{\Beta}(k+1,n-k+1) = \int_0^1 p^{k}(1-p)^{n-k}\,\mathrm{d}p
and the normalized beta distribution is
{f(p)\over\int_0^1 f(p)dp}={p^k (1-p)^{n-k}\over \mathrm{\Beta}(k+1,n-k+1)}
A simple example: You got one success out of eight trials. k=1 and n=8. The distribution function for the unknown success probability p is
{p(1-p)^7\over\mathrm{\Beta}(2,8)}=72 p (1-p)^7
Bo Jacoby (talk) 17:59, 30 March 2015 (UTC).
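A short Python sketch of this example, checking the density value and the normalization numerically (math.comb supplies the binomial coefficient):

```python
from math import comb

# The example above: k = 1 success out of n = 8 trials, so the normalized
# density is f(p) = 72 * p * (1-p)**7.
n, k = 8, 1
norm = (n + 1) * comb(n, k)        # 72 = 1 / B(k+1, n-k+1)
def f(p):
    return norm * p**k * (1 - p)**(n - k)

# A density may exceed 1; at p = 0.3 it is about 1.77885.
print(round(f(0.3), 5))

# Midpoint-rule check that the density integrates to 1 over [0, 1].
m = 100_000
integral = sum(f((i + 0.5) / m) for i in range(m)) / m
assert abs(integral - 1) < 1e-6
```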

I tried substituting p=0.3, but the value gets over 1 for the example you have shown. Does this mean the example you provided isn't normalized, or that it's not permissible to use p=0.3? JUSTIN JOHNS (talk) 09:23, 31 March 2015 (UTC)

It means that the credibility that 0.3<p<0.3+dp is 1.77885 dp for small values of dp. The hypothesis that 30%<p<31% has 1.8% credibility. Bo Jacoby (talk) 18:13, 31 March 2015 (UTC).

Could you explain in layman's terms the meaning of credibility? Is it the likelihood, or something else? Do you mean to say that the beta function calculates the likelihood of a certain value occurring? Is the value 1.77885 normalized? JUSTIN JOHNS (talk) 05:40, 1 April 2015 (UTC)

Yes, the value 1.77885 is normalized. But 1.77885 is not a probability. It is a probability density. When multiplied by a small interval size it becomes a probability. So 1.77885⋅0.01=0.0177885 is a probability. But some people use the word probability only when talking about outcomes of experiments that can be repeated. So you compute the probability
P=8 p (1-p)^7
that the total number of successes is one (k=1) when you perform an experiment eight times (n=8), assuming that you know the success probability for one experiment (p). If, however, you do not know the probability p, then you cannot compute P. But knowing k and n gives a clue to the value of p. If k=1 and n=8 then it is highly incredible that p>0.9. You may talk about the credibility of an hypothesis. The credibility of the hypothesis a<p<b is
72\int_a^b p (1-p)^7 dp
Computationally a credibility is like a probability, but they have different interpretations. You talk about the degree of probability that some experiment gets a certain result, and the degree of credibility that some hypothesis is true. Bo Jacoby (talk) 06:43, 1 April 2015 (UTC).

Sorry for the long gap in responding to your answer; I was at home and couldn't access the internet. Thanks for the kindness and patience you have taken in helping me. I couldn't get how the probability p could be greater than 0.9 in the example you have given. Could you help me? Could you also give the equation for probability density? Is it (the probability density) the probability divided by the chosen interval? Have we used the beta function to get the value that we calculated as 1.77885? Even though you gave the equation 72 p (1-p)^7, I couldn't see any beta function in it. Do you mean to say that the value obtained from the beta function might be 0.01 (the interval), and when we multiply this by 1.77885 we would get a probability? Does the beta function actually act as a normalizing constant? JUSTIN JOHNS (talk) 12:55, 6 April 2015 (UTC)

Welcome back! I never taught probability before and I am learning from doing it here. The credibility density that the success probability is around p is
(n+1)\binom n k p^k (1-p)^{n-k}.
Here k is the number of successes and n is the number of trials, so n−k is the number of failures. The normalizing divisor is a beta function value because
\int_0^1 p^k (1-p)^{n-k} dp=B(k+1,n-k+1)={1 \over(n+1)\binom n k}.
(n+1)\binom n k \int_0^1 p^k (1-p)^{n-k}dp=1,
meaning that this credibility density is normalized.
The credibility that the success probability is between a and b (where 0≤ab≤1) is
(n+1)\binom n k \int_a^b p^k (1-p)^{n-k}dp.
Assume now that k=1 and n=8: one success out of eight trials.
(8+1)\binom 8 1=9\cdot 8= 72.
So the credibility that the success probability is between 0.3 and 0.3+0.01 is
72 \cdot 0.3 \cdot (1-0.3)^7 \cdot 0.01= 0.0177885 .
The credibility that the success probability is greater than 0.9 is
72\int_{0.9}^1 p(1-p)^7 dp=0.000000082
so the hypothesis that p≥0.9 is highly incredible.
Bo Jacoby (talk) 14:27, 6 April 2015 (UTC).
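Both example values can be checked in closed form, since p(1−p)^7 has the elementary antiderivative (1−p)^9/9 − (1−p)^8/8; a Python sketch:

```python
# Closed-form check of the example credibilities above (k = 1, n = 8).
# Antiderivative of p*(1-p)**7 is F(p) = (1-p)**9/9 - (1-p)**8/8.
def F(p):
    return (1 - p)**9 / 9 - (1 - p)**8 / 8

def credibility(a, b):
    # 72 * integral from a to b of p*(1-p)**7 dp
    return 72 * (F(b) - F(a))

# Total credibility over [0, 1] is 1.
assert abs(credibility(0.0, 1.0) - 1.0) < 1e-12

# Credibility that p > 0.9, matching the 0.000000082 above.
assert abs(credibility(0.9, 1.0) - 8.2e-8) < 1e-12

# Credibility that 0.3 < p < 0.31: the exact integral is about 0.0172;
# the 0.0177885 above is the density-times-interval approximation f(0.3)*0.01.
print(round(credibility(0.3, 0.31), 4))
```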

Okay, that's good, but I'm still confused about credibility density. Could you tell me what credibility density really is? Also, how did the (n+1) term come about in the beta distribution? Also, if \alpha=k+1 and \beta=(n-k)+1, what really is n? Is it  \alpha + \beta , which would mean n=n+2? Then why do we have the equation  \binom n k=\frac{n!} {k! (n-k)!} instead of  \binom {n+2} k = \frac{(n+2)!}{k! ((n+2)-k)!} for the beta distribution? JUSTIN JOHNS (talk) 13:29, 7 April 2015 (UTC)

A credibility density is a function f(p) such that the credibility that a\le p\le b is
\int_a^b f(p)dp.
The formula from Beta_function#Properties
{n \choose k} = \frac 1 {(n+1) \Beta(n-k+1, k+1)}
generalizes the binomial coefficient to noninteger arguments, but here we can do with binomial coefficients with integer arguments. Forget all about the confusing α and β variables. Of course n=k+(n-k). Bo Jacoby (talk) 13:58, 7 April 2015 (UTC).

Even though I got a picture of the beta distribution, I'm still confused about normalization and credibility density. Do both (normalization and credibility density) take place in the beta distribution? Could you tell me, if the beta function provides normalization, which factor provides credibility? Could you list any link that might help a novice like me study credibility density? Also, as you stated before: So the credibility that the unknown success probability is located somewhere between the numbers p and p+dp is

\frac{f(p)dp}{\int_0^1 f(p)dp}=\frac{p^k(1-p)^{n-k}dp}{\int_0^1 p^k(1-p)^{n-k}dp}.

Could you explain how you added a 'dp' term in the numerator even though there isn't any integration taking place in the numerator? JUSTIN JOHNS (talk) 07:59, 9 April 2015 (UTC)

Yes, the beta distribution is a normalized density function. See the article probability density function. There is no factor providing credibility. The difference between the concept of probability and the concept of credibility is merely that probability is about events that have not yet happened, and credibility is about hypotheses that may or may not be true. The credibility that the unknown success probability p is located somewhere between the numbers a and b is
\frac{\int_a^b f(p)dp}{\int_0^1 f(p)dp}
=\frac{\int_a^b p^k(1-p)^{n-k}dp}{\int_0^1 p^k(1-p)^{n-k}dp}
=(n+1)\binom n k \int_a^b p^k(1-p)^{n-k}dp
If a=p and b=p+dp then the credibility is
\frac{f(p)dp}{\int_0^1 f(p)dp}
=(n+1)\binom n k p^k(1-p)^{n-k}dp
Bo Jacoby (talk) 05:21, 10 April 2015 (UTC).

I'm still confused about credibility. Could you give an example that shows what credibility really is? JUSTIN JOHNS (talk) 12:36, 10 April 2015 (UTC)

No big deal. The result of tossing a die may be 1 or 2 or 3 or 4 or 5 or 6. When you haven't yet tossed the die, the probability that the result will be 6 is 1/6. When you have tossed the die, but before anybody has seen the result, it is awkward to talk about the probability that the result is 6, because the outcome is now a fact, albeit unknown. Frequentists don't like to consider the probability of a fact. I quote:
In a frequentist approach to inference, unknown parameters are often, but not always, treated as having fixed but unknown values that are not capable of being treated as random variates in any sense, and hence there is no way that probabilities can be associated with them. In contrast, a Bayesian approach to inference does allow probabilities to be associated with unknown parameters, where these probabilities can sometimes have a frequency probability interpretation as well as a Bayesian one. The Bayesian approach allows these probabilities to have an interpretation as representing the scientist's belief that given values of the parameter are true [see Bayesian probability - Personal probabilities and objective methods for constructing priors].
In order not to make the frequentists upset I use the word credibility to signify a Bayesian probability. So the credibility that the unseen result of tossing a die is 6, is 1/6. Bo Jacoby (talk) 16:35, 10 April 2015 (UTC).

I think credibility means the likelihood of an event, doesn't it? Could you explain the difference between the binomial distribution and the beta distribution? Is the beta distribution a binomial distribution? What makes the beta distribution different from the binomial distribution? JUSTIN JOHNS (talk) 04:27, 11 April 2015 (UTC)

Let n be the number of trials.
If you know the success probability p then you can compute the probability that the number of successes k will be between integer values a and b,
0 ≤ akbn,
by the binomial distribution formula
\sum_{k=a}^b \binom n k p^k (1-p)^{n-k}
If you know the number of successes k then you can compute the credibility that the success probability p is between real values a and b,
0 ≤ apb ≤ 1,
by the beta distribution formula
\int_a^b (n+1)\binom n k p^k (1-p)^{n-k}dp
That makes beta distribution different from binomial distribution.
Bo Jacoby (talk) 21:05, 11 April 2015 (UTC).
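The contrast above can be sketched side by side in Python (the function names are mine; the credibility integral is approximated with a midpoint rule):

```python
from math import comb

n, k, p = 8, 1, 0.25   # p is only needed for the deduction direction

# Deduction (binomial): probability that the number of successes lies
# in [a, b], given the success probability p.
def prob_successes(a, b):
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(a, b + 1))

# Induction (beta): credibility that the success probability lies in
# [a, b], given k successes in n trials (midpoint-rule integration).
def cred_probability(a, b, m=100_000):
    norm = (n + 1) * comb(n, k)
    h = (b - a) / m
    return sum(norm * q**k * (1 - q)**(n - k)
               for q in (a + (i + 0.5) * h for i in range(m))) * h

# Both distributions are normalized: totals are 1.
assert abs(prob_successes(0, n) - 1) < 1e-12
assert abs(cred_probability(0, 1) - 1) < 1e-6
```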

Okay. Thanks. Now I'm getting a sense of relief about how the beta distribution differs from the binomial distribution. Is there any way to figure out that the beta distribution provides a probability density rather than a probability? Also, does the binomial distribution provide a probability density or a probability? Does normalization make the probability density lie in the range 0 to 1, or is it something else? JUSTIN JOHNS (talk) 06:32, 13 April 2015 (UTC)

When the unknown variable is an integer, like k in the binomial distribution, then f(k) is a probability. It can be summed from a to b.
When the unknown variable is a real number, like p in the beta distribution, then f(p) is a probability density function. It can be integrated from a to b.
Normalization means that the total probability is one.
Bo Jacoby (talk) 17:58, 13 April 2015 (UTC).

Okay. Could you tell me whether the value {p(1-p)^7\over\mathrm{\Beta}(2,8)}=72 p (1-p)^7 is normalized? Also, why do we need to make the total probability 1 in normalization when it's quite sure that the probability lies between 0 and 1? Could you show how f(k) and f(p) are related, using their equations? Is there any term in f(p) that's not in f(k) that makes it a probability distribution rather than a probability? JUSTIN JOHNS (talk) 07:13, 16 April 2015 (UTC)

Yes, 72 p (1-p)^7 is normalized because \int_0^1 72 p (1-p)^7 dp=1. The credibility that 0≤p≤1, is 1. Bo Jacoby (talk) 07:38, 16 April 2015 (UTC).

Yeah, got the concept of normalization. Could you explain how f(k) and f(p) are related? I would really like to know what makes the beta distribution a probability density function rather than a probability function. JUSTIN JOHNS (talk) 11:33, 16 April 2015 (UTC)

Hypergeometric deduction and induction[edit]

Good! My advice is to progress from the finite case. The binomial and beta distributions are limiting cases. Consider a lottery with K prizes and N−K blanks. Draw k prizes and n−k blanks, leaving K−k prizes and (N−K)−(n−k) blanks behind. These numbers are non-negative integers. Knowing N and n and K you can compute the probability distribution for k. Knowing N and n and k you can compute the credibility distribution for K. Try to do that! I have given clues above. Bo Jacoby (talk) 22:43, 16 April 2015 (UTC).
Try first a simple example, say N=10, n=4. Compute this table of unnormalized distributions:
210 126  70  35 15   5  1   0   0   0   0
  0  84 112 105 80  50 24   7   0   0   0
  0   0  28  63 90 100 90  63  28   0   0
  0   0   0   7 24  50 80 105 112  84   0
  0   0   0   0  1   5 15  35  70 126 210
The columns K=0 to N are deduction distributions. They are normalized by dividing by 210.
The rows k=0 to n are induction distributions. They are normalized by dividing by 462.
Bo Jacoby (talk) 09:34, 17 April 2015 (UTC).
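The table above can be reproduced with a few lines of Python using binomial coefficients, including the normalizing column and row sums just stated:

```python
from math import comb

# Rebuild the table above: the entry in row k, column K is
# C(K, k) * C(10 - K, 4 - k), rows k = 0..4, columns K = 0..10.
N, n = 10, 4
table = [[comb(K, k) * comb(N - K, n - k) for K in range(N + 1)]
         for k in range(n + 1)]
for row in table:
    print(row)

# Columns (deduction distributions) each sum to C(10, 4) = 210;
# rows (induction distributions) each sum to C(11, 5) = 462.
assert all(sum(r[K] for r in table) == 210 for K in range(N + 1))
assert all(sum(r) == 462 for r in table)
```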

Sorry for the gap; I was having exams this week, and two more exams are left. Thanks for your answer. I'm okay with the first answer. Could you tell me how you created the table given in the second answer? I can't see how 210, 126, 70, etc. came about in the table. Could you explain the formula used to create it? JUSTIN JOHNS (talk) 12:27, 22 April 2015 (UTC)

The table is \binom{K}{k}\binom{10-K}{4-k} for k=0,...,4 and K=0,...,10.
The examples are \binom{0}{0}\binom{10}{4}=210 and \binom{1}{0}\binom{9}{4}=126,
etc. Bo Jacoby (talk) 22:08, 22 April 2015 (UTC).

I can't really get the meaning of this term: \binom{K}{k}\binom{10-K}{4-k} . Could you tell me what \binom{K}{k} and \binom{10-K}{4-k} represent? Why do we multiply these terms to get \binom{K}{k}\binom{10-K}{4-k} ? JUSTIN JOHNS (talk) 12:19, 23 April 2015 (UTC)

I made a subsection header so that we don't need to edit the whole section but only the last subsection.
Sorry, I thought you knew. See binomial coefficient. The number of ways to select k prizes out of K prizes is \binom{K}{k}. The number of ways to select 4−k blanks out of 10−K blanks is \binom{10-K}{4-k}. The number of ways to select k prizes and 4−k blanks out of K prizes and 10−K blanks is \binom{K}{k}\binom{10-K}{4-k}. Bo Jacoby (talk) 20:08, 23 April 2015 (UTC).

Thanks. That's good. I'll study the binomial coefficient and come back later. JUSTIN JOHNS (talk) 12:50, 24 April 2015 (UTC)

Okay, I understood the binomial coefficient. Could you tell me why you called the columns K=0 to N deduction distributions and the rows k=0 to n induction distributions? Why are these distributions hypergeometric? JUSTIN JOHNS (talk) 06:40, 25 April 2015 (UTC)

Deduction is top-down reasoning and induction is bottom-up reasoning. See the article deductive reasoning. The hypergeometric distribution is a deduction distribution. Its cumulative distribution function can be expressed as a hypergeometric function. That is the reason why it has that name. The corresponding induction distribution is not described in wikipedia as it is WP:original research on my part. The column (15,80,90,24,1) shows the odds that the number of prizes in the sample (n=4) will be k=(0,1,2,3,4) when the number of prizes in the lottery (N=10) was K=3. These odds are unnormalized deduction probabilities. The row (0,0,0,7,24,50,80,105,112,84,0) shows the odds that the number of prizes in the lottery was (0,1,2,3,4,5,6,7,8,9,10) when the number of prizes in the sample was k=3. These odds are unnormalized induction credibilities. Bo Jacoby (talk) 19:33, 25 April 2015 (UTC).

I would like to know whether the column (15,80,90,24,1) shows the number of prizes in the lottery (N=10) when K=3 or when K=4. Doing the calculation by substituting K=4 and k=0 you get \binom{4}{0}\binom{10-4}{4-0}=15, which is the first value in the column (15,80,90,24,1), but if you put K=3 you get 35. Am I wrong, or which one is correct? Also, I can't see any top-down reasoning in deduction when speaking about the columns K=0 to N, or bottom-up reasoning in the rows k=0 to n. Could you help me? JUSTIN JOHNS (talk) 07:19, 27 April 2015 (UTC)

You are gloriously right and I was shamefully wrong! K=4 is the correct label to the column (15,80,90,24,1). Counting from 0 it is column number 4. Top-down means reasoning from the lottery variable K downwards to the sample variable k. So that is deduction. Bottom-up means reasoning from the sample variable k upwards to the lottery variable K. So that is induction. Bo Jacoby (talk) 17:52, 27 April 2015 (UTC).
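(The whole table of odds discussed above can be rebuilt in a few lines of Python, a sketch added for readers who want to verify the corrected column label.)

```python
from math import comb

# Table of odds: table[k][K] = number of ways to draw k prizes in a
# sample of n=4 tickets from a lottery of N=10 tickets containing K prizes.
# (math.comb returns 0 when asked to choose more items than are available.)
N, n = 10, 4
table = [[comb(K, k) * comb(N - K, n - k) for K in range(N + 1)]
         for k in range(n + 1)]

print([row[4] for row in table])  # column K=4: [15, 80, 90, 24, 1]
print(table[3])                   # row k=3: [0, 0, 0, 7, 24, 50, 80, 105, 112, 84, 0]
```

Column K=4 (read downwards) gives the deduction odds; row k=3 (read across) gives the induction odds.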

Okay. Aren't we taking both values k and K while going from columns K=0 to N? Could you tell me what you mean by saying that we reason from K downwards to the sample variable k? Does that mean we use K to find k? But here we are using both values K and k to find the topmost row, aren't we? Do you mean that going down a column is top-down reasoning, or is it going across a row? Also, do you mean that we keep K fixed and change k? JUSTIN JOHNS (talk) 07:58, 28 April 2015 (UTC)

"The whole is greater than the part" says Euclid. The lottery is the whole. The tickets you buy are the part. Deduction is reasoning about the part when knowing the whole. Induction is reasoning about the whole when knowing the part.
If you know that a lottery with N=10 tickets has K=4 prizes and you buy n=4 tickets, then you reason that the number of prizes may be (0,1,2,3,4) with odds (15,80,90,24,1). This is deduction. The mean value is μ=(0⋅15+1⋅80+2⋅90+3⋅24+4⋅1)/(15+80+90+24+1)=1.6. The standard deviation σ satisfies σ²+μ²=(0²⋅15+1²⋅80+2²⋅90+3²⋅24+4²⋅1)/(15+80+90+24+1)=3.2. So σ=0.8. The estimated number of prizes in the sample is summarized: k≈1.6±0.8.
If you buy n=4 tickets and get k=3 prizes from a lottery with N=10 tickets, then you reason that the number of prizes in the lottery may have been (0,1,2,3,4,5,6,7,8,9,10) with odds (0,0,0,7,24,50,80,105,112,84,0). This is induction. The mean value is μ=(0⋅0+1⋅0+2⋅0+3⋅7+4⋅24+5⋅50+6⋅80+7⋅105+8⋅112+9⋅84+10⋅0) /(0+0+0+7+24+50+80+105+112+84+0)=7. Check that the standard deviation is σ=1.51186. The estimated number of prizes in the lottery is summarized: K≈7±1.5. Bo Jacoby (talk) 10:51, 28 April 2015 (UTC).
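(Both summaries can be checked with a small Python sketch; the helper name `mean_sd` is just illustrative.)

```python
from math import sqrt

def mean_sd(odds):
    """Mean and standard deviation of the outcomes 0, 1, ..., len(odds)-1
    weighted by the given odds."""
    total = sum(odds)
    mu = sum(i * a for i, a in enumerate(odds)) / total
    second = sum(i * i * a for i, a in enumerate(odds)) / total
    return mu, sqrt(second - mu * mu)

mu, sd = mean_sd([15, 80, 90, 24, 1])                          # deduction odds
mu2, sd2 = mean_sd([0, 0, 0, 7, 24, 50, 80, 105, 112, 84, 0])  # induction odds
print(round(mu, 3), round(sd, 3))    # 1.6 0.8
print(round(mu2, 3), round(sd2, 5))  # 7.0 1.51186
```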

I couldn't get what you meant by 'odds'. Could you explain the term 'odds (15,80,90,24,1)'? Also, could you tell me how you calculated the mean and standard deviation, or write the formula used to find these values? For induction, since there are k=3 prizes, I am thinking that the number of prizes would be (0,1,2,3). Could you say why the number of prizes in the lottery would be (0,1,2,3,4,5,6,7,8,9,10) and not (0,1,2,3) for induction? Still I can't get what makes induction different from deduction in this lottery prize problem. I can see that the only difference between deduction and induction is that in deduction you're considering K and in induction you're considering k. Is there any other difference? JUSTIN JOHNS (talk) 04:18, 29 April 2015 (UTC)

Deduction. K=4 prizes and N−K=6 blanks in the lottery. Buy n=4 tickets.

The number of bought prizes may be k=0 in 15 ways.
The number of bought prizes may be k=1 in 80 ways.
The number of bought prizes may be k=2 in 90 ways.
The number of bought prizes may be k=3 in 24 ways.
The number of bought prizes may be k=4 in 1 way.
These numbers of ways to obtain an outcome are called odds.
The total number of ways is 15+80+90+24+1=210.
The mean value of 15 zeros, 80 ones, 90 twos, 24 threes and one four is computed by (15⋅0+80⋅1+90⋅2+24⋅3+1⋅4)/210. See the article mean. See also the article standard deviation.

Induction. N=10 tickets in the lottery. Buy k=3 prizes and n−k=1 blank.

You do not know the number K of prizes in the lottery. There are these possibilities: K=3, K=4, K=5, K=6, K=7, K=8, and K=9. You bought 3 prizes and 1 blank, so K=0, K=1, K=2 and K=10 are not possible and have odds zero.
If K=3 then the observation k=3 can occur in 7 ways.
If K=4 then the observation k=3 can occur in 24 ways.
If K=5 then the observation k=3 can occur in 50 ways.
If K=6 then the observation k=3 can occur in 80 ways.
If K=7 then the observation k=3 can occur in 105 ways.
If K=8 then the observation k=3 can occur in 112 ways.
If K=9 then the observation k=3 can occur in 84 ways.
See the point? Bo Jacoby (talk) 20:52, 29 April 2015 (UTC).

Wow, that's good. Thanks for your great explanation. Could you tell me why, when we pick the number of bought prizes k=3 and the number of tickets in the lottery N=10, there are only chances of K=3 up to K=9? Since the number of prizes bought is k=3, it's definite that out of 10 tickets 3 would be prizes, isn't it? So the rest 7 can be prizes or blanks, isn't it? So if that's the case there might be a chance of getting 10 prizes in 10 tickets, isn't it? Could you help me? JUSTIN JOHNS (talk) 06:14, 2 May 2015 (UTC)

The number of blanks in the lottery must have been at least equal to the number of blanks bought. So N−K ≥ n−k. So K ≤ N−(n−k) = 10−1 = 9. The table provided zero odds for impossible options. Odds may be written with colons instead of commas like this: 0:0:0:7:24:50:80:105:112:84:0. Bo Jacoby (talk) 15:42, 2 May 2015 (UTC).
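(A Python sketch, not part of the original discussion, shows the zero odds for the impossible options appearing automatically.)

```python
from math import comb

# Row k=3 of the table: odds for each possible K when 3 prizes and 1 blank
# were bought. math.comb returns 0 when more items are requested than exist,
# so the impossible options K=0, 1, 2 and K=10 get zero odds automatically.
N, n, k = 10, 4, 3
odds = [comb(K, k) * comb(N - K, n - k) for K in range(N + 1)]
print(odds)  # [0, 0, 0, 7, 24, 50, 80, 105, 112, 84, 0]
```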

Mean value and standard deviation

If outcomes 0, 1, 2, 3, ⋅ ⋅ ⋅ , n have odds a₀:a₁:a₂:a₃: ⋅ ⋅ ⋅ :aₙ then the mean value, μ, of the outcome is

μ=(a₀⋅0+a₁⋅1+a₂⋅2+a₃⋅3+ ⋅ ⋅ ⋅ +aₙ⋅n)/(a₀+a₁+a₂+a₃+ ⋅ ⋅ ⋅ +aₙ)

and the standard deviation, σ, satisfies

σ²+μ²=(a₀⋅0²+a₁⋅1²+a₂⋅2²+a₃⋅3²+ ⋅ ⋅ ⋅ +aₙ⋅n²)/(a₀+a₁+a₂+a₃+ ⋅ ⋅ ⋅ +aₙ)

It turns out to be a useful trick to replace powers with binomial coefficients.

\mu=\frac{\sum_{i=0}^n a_i \binom i 1}{\sum_{i=0}^n a_i \binom i 0}
\frac 1 2 \left( \frac{\sigma^2}\mu +\mu-1\right)=\frac{\sum_{i=0}^n a_i \binom i 2}{\sum_{i=0}^n a_i \binom i 1}

because for the deduction and induction odds the sums

\sum_{i=0}^n a_i \binom i Q

are easier to compute than

\sum_{i=0}^n a_i i^Q

for Q = 0, 1, 2. Bo Jacoby (talk) 13:02, 3 May 2015 (UTC).
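(The two binomial-coefficient identities above can be verified numerically on the deduction odds, sketched here in Python.)

```python
from math import comb

# Check the identities above on the deduction odds (15, 80, 90, 24, 1).
odds = [15, 80, 90, 24, 1]
s0 = sum(a * comb(i, 0) for i, a in enumerate(odds))  # 210
s1 = sum(a * comb(i, 1) for i, a in enumerate(odds))  # 336
s2 = sum(a * comb(i, 2) for i, a in enumerate(odds))  # 168

mu = s1 / s0                           # mean: 336/210 = 1.6
var = mu * (2 * s2 / s1 - mu + 1)      # second identity solved for sigma^2
print(mu, round(var, 3))               # 1.6 0.64
```

The recovered variance 0.64 gives σ=0.8, matching the direct computation earlier on this page.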


I would like to have a pause in this conversation since I am having university exams from May 6th to June 1st. Thanks for your cooperation and kindness. See you back on June 1st. JUSTIN JOHNS (talk) 05:57, 4 May 2015 (UTC)

I wish you good exams! Bo Jacoby (talk) 09:05, 4 May 2015 (UTC).

Hi, nice to see you. Due to a reschedule in exams I am having exams up to June 4th. We'll meet back on June 4th. JUSTIN JOHNS (talk) 10:07, 1 June 2015 (UTC)

Thanks! I look forward to resuming our conversation. Bo Jacoby (talk) 19:44, 1 June 2015 (UTC).

Really our exam finishes on June 5th 2015 due to reschedules. Anyway, shall we resume the conversation? I think the conversation takes place like a talk between a novice and a professor. I admit that I'm totally weak in basic concepts of statistics like probability distribution function, normalization, likelihood, etc. So should I go and take the prerequisites and come back, or can we resume without taking the prerequisites? JUSTIN JOHNS (talk) 10:05, 4 June 2015 (UTC)

Feel free to come back. We can do prerequisites one thing at a time. Bo Jacoby (talk) 16:15, 4 June 2015 (UTC).

Thanks for your assurance. So let's start. Which prerequisite should I take first? JUSTIN JOHNS (talk) 09:32, 5 June 2015 (UTC)

probability mass function

The idea of a probability mass function is useful. Consider a lottery with K = 4 prizes and N – K = 6 blanks. Buy n = 4 tickets chosen randomly from the lottery. What is the probability mass function of the number k of prizes bought? Bo Jacoby (talk) 21:53, 5 June 2015 (UTC).

Sorry for the gap in replying; I couldn't get to college since I have already completed my four-year course. I'll try my best to keep in contact with you. Could you give me a hint on where to start to answer this question? JUSTIN JOHNS (talk) 05:23, 11 June 2015 (UTC)

The number k of prizes bought may be 0, 1, 2, 3, or 4. The odds have previously been computed to be 15:80:90:24:1. The sum of the odds is 210. When you divide the odds by their sum you get a probability mass function.
I use the J (programming language) which may be unfamiliar to you. You may prefer to use a spreadsheet.
   odds=: (* ];.0)@(!/&i.&>:)
   4 odds 10
210 126  70  35 15   5  1   0   0   0   0
  0  84 112 105 80  50 24   7   0   0   0
  0   0  28  63 90 100 90  63  28   0   0
  0   0   0   7 24  50 80 105 112  84   0
  0   0   0   0  1   5 15  35  70 126 210
   4{|:4 odds 10
15 80 90 24 1
   (,:~i.@#)(%+/)4{|:4 odds 10
        0        1        2        3         4
0.0714286 0.380952 0.428571 0.114286 0.0047619
Bo Jacoby (talk) 09:02, 12 June 2015 (UTC).
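(For readers unfamiliar with J, the same computation can be sketched in Python; the names `odds` and `pmf` mirror the J session but are otherwise illustrative.)

```python
from math import comb

# Python analogue of the J session above: odds for n=4 tickets from a
# lottery of N=10 tickets with K=4 prizes, normalized to a probability
# mass function by dividing each odd by the sum of the odds.
N, n, K = 10, 4, 4
odds = [comb(K, k) * comb(N - K, n - k) for k in range(n + 1)]
pmf = [a / sum(odds) for a in odds]
print(odds)                        # [15, 80, 90, 24, 1]
print([round(p, 7) for p in pmf])  # [0.0714286, 0.3809524, 0.4285714, 0.1142857, 0.0047619]
```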

Nice to see you again. Still can't get how you computed the odds. Could you help me? JUSTIN JOHNS (talk) 17:45, 13 June 2015 (UTC)

Yes, reread User_talk:Bo_Jacoby#Hypergeometric_deduction_and_induction. We have done it before! Bo Jacoby (talk) 23:19, 13 June 2015 (UTC).

Yeah, I got the answer for 'odds'. I'm still confused with k, K, N, n. Could you mention what they are? JUSTIN JOHNS (talk) 10:33, 16 June 2015 (UTC)

The lottery had K prizes and N – K blanks. You buy n tickets: k prizes and n – k blanks. Bo Jacoby (talk) 07:15, 17 June 2015 (UTC).

Okay, that's done. Could you tell me why we need to consider n and k when we already have N (blanks) and K (prizes) to make a binomial distribution? JUSTIN JOHNS (talk) 07:31, 17 June 2015 (UTC)

We know N (tickets) and K (prizes) and of course N – K (blanks), and n (tickets in the sample), but we do not know k (prizes in the sample). We compute the probability mass function of k. It is not a binomial distribution, however. It is called the hypergeometric distribution. Bo Jacoby (talk) 11:42, 17 June 2015 (UTC).
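(The difference between the two distributions can be seen numerically; this Python sketch compares the hypergeometric pmf above with the binomial pmf one would get by sampling with replacement.)

```python
from math import comb

# Hypergeometric (without replacement) vs. binomial (with replacement)
# for N=10 tickets, K=4 prizes, n=4 drawn.
N, K, n = 10, 4, 4
hyper = [comb(K, k) * comb(N - K, n - k) / comb(N, n) for k in range(n + 1)]
binom = [comb(n, k) * (K / N) ** k * (1 - K / N) ** (n - k) for k in range(n + 1)]
print(round(hyper[4], 5), round(binom[4], 5))  # 0.00476 0.0256
```

Drawing all four prizes is rarer without replacement: each prize removed from the lottery makes the next draw less likely to be a prize.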


Hey Bo! You have me intrigued about J. I was surprised to see that it hasn't been packaged for Debian yet, though there was a request for packaging back in 2011.

Anyhow, I wanted to point out a typo in your "1000–300+30–10" (the last 0) over at WP:RDMA. Cheers! -- ToE 14:47, 3 June 2015 (UTC)

Thanks! I just fixed the error you pointed out. I know nothing about Debian I'm afraid. Bo Jacoby (talk) 08:50, 4 June 2015 (UTC).