Euler–Maclaurin formula

From Wikipedia, the free encyclopedia

In mathematics, the Euler–Maclaurin formula provides a powerful connection between integrals (see calculus) and sums. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using integrals and the machinery of calculus. For example, many asymptotic expansions are derived from the formula, and Faulhaber's formula for the sum of powers is an immediate consequence.

The formula was discovered independently by Leonhard Euler and Colin Maclaurin around 1735 (and later generalized as Darboux's formula). Euler needed it to compute slowly converging infinite series while Maclaurin used it to calculate integrals.

The formula

If m and n are natural numbers and f(x) is a smooth (meaning: sufficiently often differentiable) function defined for all real numbers x in the interval [m, n] (the stronger hypothesis that f is analytic of exponential type < 2π matters only for convergence of the full infinite expansion), then the integral

I = \int_m^n f(x)\,dx

can be approximated by the sum (or vice versa)

S = f\left(m + 1\right) + \cdots + f\left(n - 1\right) + f(n)

(see trapezoidal rule). The Euler–Maclaurin formula provides expressions for the difference between the sum and the integral in terms of the higher derivatives f^{(k)} evaluated at the endpoints of the interval, m and n. Explicitly, for any natural number p, we have

 S - I = \sum_{k=1}^p {\frac{B_k}{k!}\left(f^{(k - 1)}(n) - f^{(k - 1)}(m)\right)} + R

where B1 = +1/2, B2 = 1/6, B3 = 0, B4 = −1/30, B5 = 0, B6 = 1/42, B7 = 0, B8 = −1/30, … are the Bernoulli numbers, and R is an error term which is normally small for suitable values of p and depends on n, m, p and f.

The formula is often written with the subscript taking only even values, since the odd Bernoulli numbers are zero except for B1, in which case we have

\sum_{i=m+1}^n f(i) =
    \int^n_m f(x)\,dx + B_1 \left(f(n) - f(m)\right) +
    \sum_{k=1}^p\frac{B_{2k}}{(2k)!}\left(f^{(2k - 1)}(n) - f^{(2k - 1)}(m)\right) + 
    R.
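
As a quick numerical sanity check, the even-index form above can be evaluated for a concrete function. The sketch below (plain Python, not from the article) takes f = exp, so that every derivative is again e^x and all the endpoint derivative differences collapse to e^n − e^m:

```python
import math

# Bernoulli numbers used by the even-index form: B_1 = +1/2, B_2 = 1/6,
# B_4 = -1/30, B_6 = 1/42 (the later odd ones vanish)
B = {1: 0.5, 2: 1/6, 4: -1/30, 6: 1/42}

m, n, p = 0, 10, 3
diff = math.exp(n) - math.exp(m)       # f^(k)(n) - f^(k)(m) for every k when f = exp

integral = math.exp(n) - math.exp(m)   # ∫_m^n e^x dx
approx = integral + B[1] * diff
for k in range(1, p + 1):
    approx += B[2 * k] / math.factorial(2 * k) * diff

exact = sum(math.exp(i) for i in range(m + 1, n + 1))
print(abs(approx - exact))  # remainder after p = 3 correction terms: about 0.02
```

The sum itself is about 3.5 × 10^4, so three correction terms already give roughly six significant digits.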

The remainder term

The remainder term R is most easily expressed using the periodic Bernoulli functions Pn(x). The Bernoulli polynomials Bn(x), n = 0, 1, 2, … are defined recursively as

\begin{align}
   B_0(x) &= 1 \\
  B_n'(x) &= nB_{n - 1}(x) \text{ and } \int_0^1 B_n(x)\,dx = 0\text{ for }n \ge 1
\end{align}

Then the periodic Bernoulli functions Pn are defined as

 P_n(x) = B_n \left(x - \lfloor x\rfloor\right)\text{ for } x > 0

where \lfloor x\rfloor denotes the largest integer that is not greater than x. Then, in terms of Pn(x), the remainder term R can be written as

 R = \int_m^n f^{(2p)}(x) {P_{2p}(x) \over (2p)!}\,dx

or equivalently, integrating by parts (assuming f^{(2p)} is differentiable once more, and recalling that all odd Bernoulli numbers except the first are zero):

 R = -\int_m^n f^{(2p+1)}(x) {P_{2p+1}(x) \over (2p + 1)!}\,dx \qquad (p > 0)

When n ≥ 2 and 0 ≤ x ≤ 1, it can be shown that

\left| B_n \left( x \right) \right| \le 2 \cdot \frac{n!}{\left( 2\pi \right)^n}\zeta \left( n \right)

where ζ denotes the Riemann zeta function (see Lehmer; one approach to prove the inequality is to obtain the Fourier series for the polynomials Bn). The bound is achieved for even n when x is zero. Using this inequality, the size of the remainder term can be estimated using

\left|R\right| \leq \frac{2 \zeta(2p)}{(2\pi)^{2p}} \int_m^n \left|f^{(2p)}(x)\right|\,dx
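
Both the bound and the fact that it is attained at x = 0 for even n can be checked numerically. The sketch below (plain Python, an illustration not from the article) evaluates the explicit polynomials B_2 and B_4 listed later on this page on a grid in [0, 1], using the closed forms ζ(2) = π²/6 and ζ(4) = π⁴/90:

```python
import math

# explicit Bernoulli polynomials B_2(x) = x^2 - x + 1/6, B_4(x) = x^4 - 2x^3 + x^2 - 1/30
polys = {2: lambda x: x * x - x + 1/6,
         4: lambda x: x**4 - 2 * x**3 + x * x - 1/30}
zeta = {2: math.pi**2 / 6, 4: math.pi**4 / 90}   # closed forms for ζ(2), ζ(4)

for n, B in polys.items():
    bound = 2 * math.factorial(n) * zeta[n] / (2 * math.pi)**n
    max_abs = max(abs(B(i / 1000)) for i in range(1001))
    assert max_abs <= bound + 1e-12   # the Lehmer-type bound holds on [0, 1]
    # for even n the bound is attained at x = 0: |B_n(0)| = 2 n! ζ(n)/(2π)^n
    assert math.isclose(abs(B(0)), bound)
```

For n = 2 the bound works out to exactly 1/6 = |B_2(0)|, and for n = 4 to exactly 1/30 = |B_4(0)|.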

Applicable formula

In the end, we get the following simple formula:[1]

 I = \int_{x_0}^{x_N} f(x)\,dx = h\left(\frac{f_0}{2} + f_1 + f_2 + \cdots + f_{N-1} + \frac{f_N}{2}\right) + \frac{h^2}{12}\left[f'_0 - f'_N\right] - \frac{h^4}{720}\left[f'''_0 - f'''_N\right] + \cdots

where h = (x_N - x_0)/N is the step size, f_k denotes f(x_0 + kh), and N is the number of subintervals between  x_0  and  x_N  (so there are N + 1 equally spaced points).

This is just the trapezoidal rule with correction terms.[2]
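
A minimal sketch of this corrected trapezoidal rule in plain Python (the function name and the `df(x, k)` derivative callback are illustrative choices, not from the cited text):

```python
import math

def corrected_trapezoid(f, df, a, b, N, terms=2):
    """Trapezoidal rule plus the first Euler-Maclaurin correction terms.
    df(x, k) must return the k-th derivative of f at x (k = 1, 3, 5)."""
    h = (b - a) / N
    s = h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, N)) + f(b) / 2)
    coeffs = [1 / 12, -1 / 720, 1 / 30240]   # B_2k / (2k)! for k = 1, 2, 3
    for k in range(1, terms + 1):
        s += coeffs[k - 1] * h**(2 * k) * (df(a, 2 * k - 1) - df(b, 2 * k - 1))
    return s

# every derivative of exp is exp, so the callback can ignore the order k
exact = math.e - 1.0
approx = corrected_trapezoid(math.exp, lambda x, k: math.exp(x), 0.0, 1.0, N=10)
print(abs(approx - exact))  # far below the ~1e-3 error of the plain trapezoid rule
```

With two correction terms the error drops from order h² to order h⁶, with no extra function evaluations.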

Applications

The Basel problem

The Basel problem asks to determine the sum

 1 + \frac14 + \frac19 + \frac1{16} + \frac1{25} + \cdots = \sum_{n=1}^\infty \frac{1}{n^2}

Euler computed this sum to 20 decimal places with only a few terms of the Euler–Maclaurin formula in 1735. This probably convinced him that the sum equals \pi^2/6, which he proved in the same year.[3] Parseval's identity for the Fourier series of f(x) = x gives the same result.
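
Euler's calculation can be replayed in a few lines. The sketch below (plain Python, an illustration rather than Euler's actual scheme) sums the first N − 1 terms directly and handles the tail with the Euler–Maclaurin corrections 1/N + 1/(2N²) + B₂/N³ + B₄/N⁵ + B₆/N⁷:

```python
import math

def zeta2(N=10):
    """zeta(2) via direct summation up to N - 1 plus Euler-Maclaurin tail terms."""
    head = sum(1 / n**2 for n in range(1, N))
    # tail: sum_{n >= N} 1/n^2 ~ 1/N + 1/(2N^2) + B2/N^3 + B4/N^5 + B6/N^7
    tail = 1 / N + 1 / (2 * N**2) + (1/6) / N**3 - (1/30) / N**5 + (1/42) / N**7
    return head + tail

print(abs(zeta2() - math.pi**2 / 6))  # already below 1e-10 with only ten direct terms
```

Summing 1/n² directly would need about 10^10 terms for the same accuracy; the first omitted correction, B₈/N⁹, bounds the error.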

Sums involving a polynomial

If f is a polynomial and p is large enough, then the remainder term vanishes. For instance, if f(x) = x^3, we can choose p = 2 to obtain, after simplification,

\sum_{i=0}^n i^3 = \left(\frac{n(n + 1)}{2}\right)^2

(see Faulhaber's formula).
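
A quick check of this identity in plain Python:

```python
# Faulhaber's formula for cubes: 0^3 + 1^3 + ... + n^3 = (n(n+1)/2)^2
for n in range(100):
    assert sum(i**3 for i in range(n + 1)) == (n * (n + 1) // 2) ** 2
print("verified for n = 0..99")
```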

Numerical integration

The Euler–Maclaurin formula is also used for detailed error analysis in numerical quadrature. It explains the superior performance of the trapezoidal rule on smooth periodic functions and is used in certain extrapolation methods. Clenshaw–Curtis quadrature is essentially a change of variables to cast an arbitrary integral in terms of integrals of periodic functions where the Euler–Maclaurin approach is very accurate (in that particular case the Euler–Maclaurin formula takes the form of a discrete cosine transform). This technique is known as a periodizing transformation.

Asymptotic expansion of sums

In the context of computing asymptotic expansions of sums and series, usually the most useful form of the Euler–Maclaurin formula is

\sum_{n=a}^b f(n) \sim \int_a^b f(x)\,dx + \frac{f(b) + f(a)}{2} + \sum_{k=1}^\infty \,\frac{B_{2k}}{(2k)!}\left(f^{(2k - 1)}(b) - f^{(2k - 1)}(a)\right)

where a and b are integers.[4] Often the expansion remains valid even after taking the limits a \to -\infty or b \to +\infty, or both. In many cases the integral on the right-hand side can be evaluated in closed form in terms of elementary functions even though the sum on the left-hand side cannot. Then all the terms in the asymptotic series can be expressed in terms of elementary functions. For example,

\sum_{k=0}^\infty \frac{1}{(z + k)^2} \sim \underbrace{\int_0^\infty\frac{1}{(z + k)^2}\,dk}_{= \frac{1}{z}} + \frac{1}{2z^2} + \sum_{t = 1}^\infty \frac{B_{2t}}{z^{2t + 1}}

Here the left-hand side is equal to \psi^{(1)}(z), namely the first-order polygamma function defined through \psi^{(1)}(z) = \frac{d^2}{dz^2}\ln \Gamma(z); the gamma function \Gamma(z) is equal to (z - 1)! if z is a positive integer. This results in an asymptotic expansion for \psi^{(1)}(z). That expansion, in turn, serves as the starting point for one of the derivations of precise error estimates for Stirling's approximation of the factorial function.

Examples

  •  \sum_{k=1}^n \frac{1}{k^s} = \frac{1}{n^{s-1}} + s \int_1^n \frac{\lfloor x\rfloor}{x^{s+1}}\,dx \qquad \text{with }\quad s \in \mathbb{R} \setminus \{1\}
  •  \sum_{k=1}^n \frac{1}{k} = \log n + 1 - \int_1^n \frac{x-\lfloor x\rfloor}{x^{2}} dx
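
Both identities can be verified numerically: since ⌊x⌋ is constant on each interval [k, k + 1), the integrals split into elementary pieces. A sketch in plain Python (the variable names are illustrative):

```python
import math

n, s = 50, 2.0

# first identity: ∫_k^{k+1} floor(x)/x^(s+1) dx = k * (k^-s - (k+1)^-s) / s
integral1 = sum(k * (k**(-s) - (k + 1)**(-s)) / s for k in range(1, n))
lhs1 = sum(1 / k**s for k in range(1, n + 1))
assert math.isclose(lhs1, n**(1 - s) + s * integral1)

# second identity: ∫_1^n (x - floor(x))/x^2 dx = log(n) - sum_{k=2}^n 1/k
integral2 = math.log(n) - sum(1 / k for k in range(2, n + 1))
lhs2 = sum(1 / k for k in range(1, n + 1))
assert math.isclose(lhs2, math.log(n) + 1 - integral2)
print("both identities check out")
```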

Proofs

Derivation by mathematical induction

We follow the argument given in Apostol.[5]

The Bernoulli polynomials Bn(x), n = 0, 1, 2, … may be defined recursively as follows:

\begin{align}
   B_0(x) &= 1 \\
  B_n'(x) &= nB_{n - 1}(x) \text{ and } \int_0^1 B_n(x)\,dx = 0\text{ for }n \ge 1
\end{align}

The first several of these are

\begin{align}
  B_1(x) &= x - \frac{1}{2} \\
  B_2(x) &= x^2 - x + \frac{1}{6} \\
  B_3(x) &= x^3 - \frac{3}{2}x^2 + \frac{1}{2}x \\
  B_4(x) &= x^4 - 2x^3 + x^2 - \frac{1}{30} \\
         & \vdots
\end{align}
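
The recursion is easy to run mechanically. The sketch below (plain Python with exact rational arithmetic; the function name is an illustrative choice) builds B_0 … B_4 as coefficient lists and reproduces the polynomials above:

```python
from fractions import Fraction

def bernoulli_polys(N):
    """Build B_0..B_N as coefficient lists [c0, c1, ...] (meaning sum c_i x^i),
    using B_n' = n B_{n-1} and the normalization ∫_0^1 B_n(x) dx = 0."""
    polys = [[Fraction(1)]]                  # B_0(x) = 1
    for n in range(1, N + 1):
        prev = polys[-1]
        # antiderivative of n * B_{n-1}: coefficient of x^(i+1) is n * c_i / (i+1)
        poly = [Fraction(0)] + [Fraction(n) * c / (i + 1) for i, c in enumerate(prev)]
        # choose the constant term so that ∫_0^1 B_n(x) dx = 0
        poly[0] = -sum(c / (i + 1) for i, c in enumerate(poly))
        polys.append(poly)
    return polys

polys = bernoulli_polys(4)
print(polys[2])  # [Fraction(1, 6), Fraction(-1, 1), Fraction(1, 1)], i.e. x^2 - x + 1/6
```

The constant terms polys[n][0] are the Bernoulli numbers Bn(0), e.g. polys[4][0] = −1/30.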

The values Bn(0) are the Bernoulli numbers. Notice that for n ≠ 1 we have

B_n(0) = B_n(1) = B_n\quad(n\text{th Bernoulli number})

For n = 1,

 B_1(1) = -B_1(0) = B_1

We define the periodic Bernoulli functions Pn by

 P_n(x) = B_n(x - \lfloor x\rfloor)

where  \lfloor x\rfloor denotes the largest integer that is not greater than x. So Pn agree with the Bernoulli polynomials on the interval (0, 1) and are periodic with period 1. Thus,

 P_n(0) = P_n(1) = B_n \quad (n \neq 1)

Let k be an integer, and consider the integral

 \int_k^{k + 1} f(x)\,dx = \int_k^{k + 1} u\,dv

where

\begin{align}
   u &= f(x) \\
  du &= f'(x)\,dx \\
  dv &= P_0(x)\,dx && \text{since }P_0(x) = 1 \\
   v &= P_1(x)
\end{align}

Integrating by parts, we get

\begin{align}
 \int_k^{k + 1} f(x)\,dx &= \Big[uv\Big]_k^{k + 1} - \int_k^{k + 1} v\,du \\
                         &= \Big[f(x)P_1(x)\Big]_k^{k + 1} - \int_k^{k+1} f'(x)P_1(x)\,dx \\[8pt]
                         &= B_1(1)f(k+1)-B_1(0)f(k) - \int_k^{k+1} f'(x)P_1(x)\,dx
\end{align}

Summing the above from k = 0 to k = n − 1, we get

\begin{align}
\int_0^1 f(x)\,dx + \dotsb + \int_{n-1}^n f(x)\,dx &= \int_0^n f(x)\, dx  \\
&= \frac{f(0)}{2}+ f(1) + \dotsb + f(n-1) + {f(n) \over 2} - \int_0^n f'(x) P_1(x)\,dx
\end{align}

Adding (f(n) - f(0))/2 to both sides and rearranging, we have

 \sum_{k=1}^n f(k) = \int_0^n f(x)\,dx + {f(n) - f(0) \over 2} + \int_0^n f'(x) P_1(x)\,dx\qquad (1)
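
Equation (1) can be spot-checked numerically. The sketch below (plain Python, with f = cos as an arbitrary test function) approximates the last integral by a composite midpoint rule applied separately on each [k, k + 1], since P1 jumps at the integers:

```python
import math

def P1(x):
    """Periodic Bernoulli function P_1(x) = (x - floor(x)) - 1/2."""
    return (x - math.floor(x)) - 0.5

f, fp = math.cos, lambda x: -math.sin(x)   # test function and its derivative
n, M = 6, 20000                            # upper limit and midpoint subdivisions

lhs = sum(f(k) for k in range(1, n + 1))

# ∫_0^n f'(x) P_1(x) dx, integrated piecewise because P_1 jumps at the integers
corr = sum(fp(k + (j + 0.5) / M) * P1(k + (j + 0.5) / M) / M
           for k in range(n) for j in range(M))

rhs = (math.sin(n) - math.sin(0)) + (f(n) - f(0)) / 2 + corr
print(abs(lhs - rhs))  # agreement up to the midpoint-rule discretization error
```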

The last two terms therefore give the error when the integral is taken to approximate the sum.

Next, consider

 \int_k^{k+1} f'(x)P_1(x)\,dx = \int_k^{k + 1} u\,dv

where

\begin{align}
   u &= f'(x) \\
  du &= f''(x)\,dx \\
  dv &= P_1(x)\,dx \\
   v &= \frac{1}{2}P_2(x)
\end{align}

Integrating by parts again, we get

\begin{align}
  \Big[uv\Big]_k^{k + 1} - \int_k^{k + 1} v\,du &= \left[ {f'(x)P_2(x) \over 2} \right]_k^{k+1} - {1 \over 2}\int_k^{k+1} f''(x)P_2(x)\,dx \\
                  &= {B_2 \over 2}(f'(k + 1) - f'(k)) - {1 \over 2}\int_k^{k + 1} f''(x)P_2(x)\,dx
\end{align}

Then summing from k = 0 to k = n − 1, and then replacing the last integral in (1) with what we have thus shown to be equal to it, we have

 \sum_{k=1}^n f(k) = \int_0^n f(x)\,dx + {f(n) - f(0) \over 2} + \frac{B_2}{2}(f'(n) - f'(0)) - {1 \over 2}\int_0^n f''(x)P_2(x)\,dx.

By now the reader will have guessed that this process can be iterated. In this way we get a proof of the Euler–Maclaurin summation formula by mathematical induction, in which the induction step relies on integration by parts and on the identities for periodic Bernoulli functions.

Notes

  1. ^ DeVries, Paul L.; Hasbun, Javier E. A First Course in Computational Physics. 2nd ed. Jones and Bartlett Publishers, 2011, p. 156.
  2. ^ DeVries, Paul L.; Hasbun, Javier E. A First Course in Computational Physics. 2nd ed. Jones and Bartlett Publishers, 2011, p. 156.
  3. ^ David J. Pengelley, "Dances between continuous and discrete: Euler's summation formula", in: Robert Bradley and Ed Sandifer (Eds), Proceedings, Euler 2K+2 Conference (Rumford, Maine, 2002), Euler Society, 2003.
  4. ^ Abramowitz & Stegun (1972), 23.1.30
  5. ^ Apostol, T. M. (May 1999). "An Elementary View of Euler's Summation Formula". The American Mathematical Monthly (Mathematical Association of America) 106 (5): 409–418. doi:10.2307/2589145. ISSN 0002-9890. JSTOR 2589145.
