Clarifying a revised equation
In the section of the article headed “Speed of Convergence and Error Estimates”, following the text "Stirling's formula is in fact the first approximation to the following series (now called the Stirling series):", an equation with the factor (1 + 1/(12n) +•••) is presented.
Shortly after this equation, a rewritten version is presented, prefaced by the text: "Writing Stirling's series in the form:".
I think that the derivation of the rewritten equation would be clearer if the line,"Writing Stirling's series in the form:", were changed to the following: “Stirling’s series can be rewritten in the form below, where the approximation that ln(1 ± |x|) ≈ ±|x| for the series ln(1 + 1/(12n) + •••) is used.” — Preceding unsigned comment added by Lcaretto (talk • contribs) 16:44, 20 June 2013 (UTC)
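For context, the step being discussed is the standard move of pulling the correction series into an exponent and linearizing the logarithm. A sketch of that identity (my reconstruction, not quoted from the article or the comment above):

```latex
n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}\left(1 + \frac{1}{12n} + \cdots\right)
     = \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}
       \exp\!\left(\ln\!\left(1 + \frac{1}{12n} + \cdots\right)\right)
\approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n} e^{1/(12n)},
```

where the last step uses ln(1 + x) ≈ x for small x, which is exactly the approximation the proposed wording would make explicit.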
Speed of Convergence and Error Estimates
The text says, "It is not a convergent series; for any particular value of n there are only so many terms of the series that improve accuracy, after which point accuracy actually gets worse." This seems incorrect. The series obviously converges for any reasonably large n, and I'd guess for any n >= 1. Perhaps the author meant to say that in a floating point implementation, the terms soon get smaller than 1 unit in the least significant place (ULP), and additional terms risk adding only rounding error. But that can be avoided by adding terms in the series from right to left. —Preceding unsigned comment added by DMJ001 (talk • contribs) 20:10, 12 June 2008 (UTC)
Okay, I'm going to retract the above paragraph. Using Mathematica (in a sort of hack way), I reproduced the series out to 200 terms. I'm going to display them in floating point, but they were actually computed using exact integer arithmetic. Here are the terms out to the coefficient of 1/n^35: 1., 0.0833333, 0.00347222, -0.00268133, -0.000229472, 0.000784039, 0.0000697281, -0.000592166, -0.0000517179, 0.000839499, 0.000072049, -0.00191444, -0.000162516, 0.00640336, 0.000540165, -0.0295279, -0.00248174, 0.17954, 0.0150561, -1.3918, -0.116546, 13.398, 1.1208, -156.801, -13.1079, 2192.56, 183.191, -36101.1, -3015.08, 691346., 57721.3, -1.52358*10^7, -1.27174*10^6, 3.82848*10^8, 3.19498*10^7, -1.08809*10^10, and the coefficient of the 1/n^200 is -7.63651*10^209.
At first the terms are small, then for a little while every other term is large, and then they all get large, growing the farther out you go. I also computed the ratios of adjacent coefficients and the ratios of coefficients two apart. Here are ratios of coefficients two apart, where the last one is the coefficient of 1/n^200 divided by the coefficient of 1/n^198: -928.912, -948.417, -948.416, -968.124, -968.123, -988.034, -988.033. You can see these ratios grow without bound, so the series is wildly divergent, and it gets worse the farther out you go. It appears that for any n, there is a point in the series beyond which the partial sums diverge. DMJ001 (talk) 23:40, 11 July 2008 (UTC)
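The blow-up described above can be reproduced without Mathematica. A quick sketch (mine, not DMJ001's computation, and using the additive series for ln(n!) rather than the multiplicative series for n!, so the coefficients differ): the terms B_{2k}/(2k(2k-1)n^{2k-1}) of the asymptotic series for ln(n!) first shrink, then grow without bound for any fixed n.

```python
from fractions import Fraction
import math

def bernoulli_numbers(m):
    """Bernoulli numbers B_0..B_m (B_1 = -1/2 convention), computed exactly
    from the recurrence B_n = -1/(n+1) * sum_{k<n} C(n+1, k) B_k."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        acc = sum(Fraction(math.comb(n + 1, k)) * B[k] for k in range(n))
        B[n] = -acc / (n + 1)
    return B

def log_stirling_terms(n, kmax):
    """Terms B_{2k} / (2k (2k-1) n^{2k-1}), k = 1..kmax, of the asymptotic
    series for ln(n!) beyond the ln(sqrt(2*pi*n) * (n/e)^n) part."""
    B = bernoulli_numbers(2 * kmax)
    return [float(B[2 * k] / (2 * k * (2 * k - 1))) / n ** (2 * k - 1)
            for k in range(1, kmax + 1)]

# Fix n = 2 and take 20 correction terms: the first term is 1/24, the
# magnitudes dip for a few terms, then explode -- asymptotic, not convergent.
terms = log_stirling_terms(2, 20)
```

For any fixed n the same pattern appears, just with the turning point pushed further out, which matches the observation that there is an optimal truncation point beyond which accuracy degrades.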
Over the last several months, I've added two graphs. The first is intended to be informative of how accurate Stirling's approximation is. The second is intended to graphically demonstrate the non-convergence. DMJ001 (talk) 05:13, 18 February 2009 (UTC)
The "convergent version" section is unsatisfactory. What is the analytic value of that integral in terms of Γ(z)? What does "entails evaluating" mean? --Zero 00:53, 13 Mar 2005 (UTC)
- Ok, fixed it myself. Someone might like to check. --Zero 01:22, 13 Mar 2005 (UTC)
How large is the remainder R in the Euler-Maclaurin series? It's traditional, I thought, to indicate that R = O(1/n) or somesuch. Slaniel 22:41, 28 February 2006 (UTC)
The link to Landau notation does not explain anything about what the tilde means.
- See "Generalizations and related usages" there.
In the section A version suitable for calculators square brackets are referenced, but do not appear in the formula. -- Meekohi 16:09, 16 April 2007 (UTC)
- Yep, would be nice if someone could fix that, especially as the reference is dead now! (thomas9987 10:53, 24 April 2007 (UTC))
I removed the external link (An overview and comparison of different approximations of the factorial function.) which isn't working at the moment, as well as the "square brackets" thing in the section A version suitable for calculators . Hope someone can put the square brackets back in the expression! --thomas9987 10:58, 24 April 2007 (UTC)
Claims about QM
I am removing the 2006 additions to the history section. It is entirely too speculative, and the discussion for small n is inappropriate. For background see Wikipedia talk:WikiProject Physics/Archive 13 and Talk:Planck's law. Melchoir (talk) 05:55, 5 April 2008 (UTC)
What about these Stirling's approximation approximations (yes, an approximation of an approximation!)?
Are these proven as n goes to infinity? If n is not large enough then the approximation of e is not good enough, so this is an approximation of Stirling's formula, which is itself an approximation of n! (a double approximation, so to speak, which is likely a worse approximation of n!; but the last form, a rewriting of the previous one, looks so cool that I believe it to be true in the limit to infinity!). Since this will always underestimate e, it follows that it will always overestimate Stirling's approximation, so it seems that it might always be a worse approximation of n! than Stirling's; but could it be that for very large n it would approximate n! better than Stirling's, especially if Stirling's happens to be an underestimate of n!? (I have to check whether Stirling's is always an underestimate, always an overestimate, or alternates between under- and overestimating n!.)
The two limit relations cited in the comment above are false
The correct value of the limits is . Indeed, these formulas are equivalent to saying that can be replaced by in Stirling's formula; in other words, that their quotient goes to 1. But in fact the limit of their quotient is This can be seen by taking the logarithm of the quotient and using the power series of . Lb561 (talk) 05:05, 16 October 2008 (UTC)
I guess the correct formula for the Gamma function should be
History: First appeared in de Moivre's or Stirling's book?
I put a "verify credibility" template behind a quote stating that Stirling's formula originally appeared in de Moivre's book. This quote is actually in Le Cam's article. Yet, it fails to prove the claim: Stirling's book appeared in 1730, *before* the edition of de Moivre's book mentioned in the quote, and Stirling's book contains the formula in its first edition. However, there *are* editions of de Moivre's book even before 1730. So the correct thing to do would be to read an *early* edition of de Moivre's book and verify whether the formula appears therein. I won't do this, but if somebody volunteers that would be nice! :-) --Thomas Bliem (talk) 11:21, 10 May 2009 (UTC)
Wrong equation - what was intended ?
The equation following the words "computing two-order expansion using Laplace's method yields" currently reads
but the integral on the left is infinite for positive , and it is unclear what is meant by "computing two-order expansion using Laplace's method". Might it be possible for whoever put this up to check and clarify? Thank you. Rfs2 (talk) 15:19, 17 December 2013 (UTC)
Deriving n! from log(n!)
Could someone please clarify why "the next term in the O(ln(n)) is (1/2)ln(2πn)"? If that is the case, then, trivially, the stated equivalence can be obtained by taking the exponential of both sides of the equation; but why do we choose this particular term, (1/2)ln(2πn), out of the O(ln(n)) class? — DmitriyZotikov (talk) 15:18, 22 January 2015 (UTC)
- The line you quoted is in the lead section of the article, which is supposed to be a concise summary of things explained in more detail later in the article. Your question is answered in the "Derivation" section, which immediately follows the lead. —David Eppstein (talk) 17:41, 22 January 2015 (UTC)
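As a practical illustration of why that particular term is kept (an editorial sketch, not part of the exchange above): retaining (1/2)ln(2πn) before exponentiating is what makes the result approximate n! with vanishing relative error, whereas any term dropped into O(ln(n)) would leave a multiplicative error growing polynomially in n.

```python
import math

def stirling(n):
    # exponentiate  ln(n!) ~ n ln(n) - n + (1/2) ln(2 pi n)
    return math.exp(n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n))

# With the (1/2) ln(2 pi n) term kept, the relative error is about 1/(12n);
# dropping it would leave a multiplicative error of order sqrt(n).
rel_err = abs(stirling(50) - math.factorial(50)) / math.factorial(50)
```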
\sim vs. =
An editor has recently replaced two copies of asymptotic equality with actual equality in non-convergent asymptotic series. Since the series don't converge, surely = is wrong; but somehow \sim misses the point, too (it is a much weaker statement). What are the usual notations used for asymptotic series like this? --JBL (talk) 00:46, 29 April 2015 (UTC)
Reference for Convergent Version
Faster convergence for 'integral' Derivation of ln(n!) by switching bounds
The "Derivation" section uses the integral:
Would it be useful to note that changing the bounds to gives an even tighter fit? In other words
(See plot below)
I realize this makes no difference for larger n, but it makes a difference for smaller n.
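The proposed bounds did not survive in the text above, but one natural candidate is a midpoint-rule shift of half a unit at each end (an assumption on my part, purely for illustration), which does noticeably tighten the fit to ln(n!) for small n:

```python
import math

def log_int(a, b):
    # closed form: integral of ln(x) dx from a to b = [x ln(x) - x] from a to b
    F = lambda x: x * math.log(x) - x
    return F(b) - F(a)

n = 5
exact = math.log(math.factorial(n))   # ln(n!) = ln(120)
original = log_int(1, n)              # bounds as in the "Derivation" section
shifted = log_int(0.5, n + 0.5)       # hypothetical midpoint-style bounds
# for n = 5 the shifted bounds track ln(n!) much more closely
```

Whatever the exact proposal was, the comparison above shows the general point: the error of the shifted version is far smaller for small n, and both converge for large n.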
New figure for relative error
Glosser.ca recently added this figure showing the relative error of Stirling's approximation, replacing this older figure. Obviously the new figure is more attractive, except for one issue: there are "kinks" in two of the curves. This suggests, falsely, that these two approximations get abruptly better, then worse again, and this is not the case. I want to suggest that the figure should be changed to avoid this issue. Glosser.ca, are you willing to do this?
(The cause of the kinks is that the figure actually shows relative error as an approximation of Gamma(n + 1). Some of the approximating series begin as under-estimates and later become over-estimates, and so by continuity they actually are equal to the gamma function at some point. Thus, the relative error at those points is 0, and the log relative error goes to negative infinity, and we see kinks. I think the "right" solution is to interpolate the relative error, rather than interpolating the factorial function and then taking the relative error of that. I am open to other opinions.)
- Joel_B._Lewis is exactly right, and I had considered this when I constructed the new figure (though not your interpolation solution). Because the (continuous) approximation is largely an approximation *to* the (continuous) gamma function, which conveniently also fits factorials, I'm inclined to say we keep the kinks, comment on them in the figure caption, and re-work the corresponding section to use the gamma function instead of factorials. Alternatively, interpolating the relative error should be really straightforward, or I can just make the plot discrete due to the discreteness of factorials (though this may not look as nice as the interpolation thing). Glosser.ca (talk) 17:51, 26 April 2016 (UTC)
- Thanks for your response. I am happy with your suggestion (implemented by David Eppstein) to keep the kinks, with a short explanation. (I should have thought to do this myself.) I do not think it is necessary to rewrite the section, as the relationship with the gamma function is treated in the immediately following section. --JBL (talk) 22:31, 26 April 2016 (UTC)
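For anyone reproducing the figure, the sign changes behind the kinks are easy to locate numerically. A sketch (my own, using the leading series coefficients; not the script that produced the figure):

```python
import math

# Relative error of truncated Stirling series as approximations to Gamma(x+1),
# computed via lgamma so x need not be an integer (this is what the figure plots).
COEFFS = [1.0, 1 / 12, 1 / 288, -139 / 51840]  # leading series coefficients

def rel_error(x, nterms):
    series = sum(c / x ** k for k, c in enumerate(COEFFS[:nterms]))
    approx = math.sqrt(2 * math.pi * x) * (x / math.e) ** x * series
    return approx / math.exp(math.lgamma(x + 1)) - 1.0

# A sign change of rel_error(x, nterms) between two sample points is exactly
# where the corresponding curve "kinks" (log|error| dips to -infinity).
```

Interpolating rel_error itself at integer points, rather than interpolating the factorial and then dividing, would give kink-free curves, as suggested above.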