Trapezoidal rule

The function f(x) (in blue) is approximated by a linear function (in red).

In mathematics, and more specifically in numerical analysis, the trapezoidal rule (also known as the trapezoid rule or trapezium rule) is a technique for approximating the definite integral

    \int_a^b f(x)\,dx.

The trapezoidal rule works by approximating the region under the graph of the function f(x) as a trapezoid and calculating its area. It follows that

    \int_a^b f(x)\,dx \approx (b-a)\,\frac{f(a)+f(b)}{2}.

The trapezoidal rule may be viewed as the result obtained by averaging the left and right Riemann sums, and is sometimes defined this way.
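
As a minimal illustration of this equivalence, the following Python sketch (the integrand, interval, and partition are arbitrary choices) computes the composite trapezoidal estimate and the average of the left and right Riemann sums over the same partition; the two agree exactly:

    import numpy as np

    def left_riemann(f, x):
        """Left Riemann sum over the partition points x[0] < ... < x[-1]."""
        dx = np.diff(x)
        return np.sum(f(x[:-1]) * dx)

    def right_riemann(f, x):
        """Right Riemann sum over the same partition."""
        dx = np.diff(x)
        return np.sum(f(x[1:]) * dx)

    def trapezoid(f, x):
        """Composite trapezoidal rule over the same partition."""
        dx = np.diff(x)
        return np.sum((f(x[:-1]) + f(x[1:])) / 2 * dx)

    # Example: integrate exp on [0, 1]; the exact value is e - 1.
    x = np.linspace(0.0, 1.0, 11)
    print(trapezoid(np.exp, x))                                        # trapezoidal estimate
    print((left_riemann(np.exp, x) + right_riemann(np.exp, x)) / 2)    # identical by construction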

Illustration of "chained trapezoidal rule" used on an irregularly-spaced partition of [a, b].

The integral can be even better approximated by partitioning the integration interval, applying the trapezoidal rule to each subinterval, and summing the results. In practice, this "chained" (or "composite") trapezoidal rule is usually what is meant by "integrating with the trapezoidal rule". Let \{x_k\} be a partition of [a, b] such that a = x_0 < x_1 < \cdots < x_{N-1} < x_N = b, and let \Delta x_k be the length of the k-th subinterval (that is, \Delta x_k = x_k - x_{k-1}); then

    \int_a^b f(x)\,dx \approx \sum_{k=1}^{N} \frac{f(x_{k-1}) + f(x_k)}{2}\,\Delta x_k.

The approximation becomes more accurate as the resolution of the partition increases (that is, for larger N). When the partition has a regular spacing, as is often the case, the formula can be simplified for calculation efficiency.

As discussed below, it is also possible to place error bounds on the accuracy of the value of a definite integral estimated using a trapezoidal rule.


A 2016 paper reports that the trapezoid rule was in use in Babylon before 50 BC for integrating the velocity of Jupiter along the ecliptic.[1]

Numerical implementation

Non-uniform grid

When the grid spacing is non-uniform, one can use the formula

    \int_a^b f(x)\,dx \approx \sum_{k=1}^{N} \frac{f(x_{k-1}) + f(x_k)}{2}\,\Delta x_k,

where \Delta x_k = x_k - x_{k-1}.
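
A short Python sketch of this formula for samples on an irregular grid (the integrand and the random nodes below are arbitrary illustrations; NumPy's numpy.trapz evaluates the same expression):

    import numpy as np

    def trapezoid_nonuniform(y, x):
        """Composite trapezoidal rule for samples y_k = f(x_k) on an
        arbitrary increasing grid x_0 < x_1 < ... < x_N."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        dx = np.diff(x)                              # subinterval lengths Delta x_k
        return np.sum((y[:-1] + y[1:]) / 2 * dx)

    # Irregularly spaced nodes on [0, pi]; the exact integral of sin is 2.
    x = np.sort(np.concatenate(([0.0, np.pi], np.random.uniform(0.0, np.pi, 50))))
    print(trapezoid_nonuniform(np.sin(x), x))        # close to 2
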
Uniform grid

For a domain discretized into N equally spaced panels, considerable simplification may occur. Let

    \Delta x_k = \Delta x = \frac{b-a}{N}

for every k; the approximation to the integral becomes

    \int_a^b f(x)\,dx \approx \frac{\Delta x}{2} \left( f(x_0) + 2f(x_1) + 2f(x_2) + \cdots + 2f(x_{N-1}) + f(x_N) \right) = \Delta x \left( \frac{f(x_0) + f(x_N)}{2} + \sum_{k=1}^{N-1} f(x_k) \right),
which requires fewer evaluations of the function to calculate.
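
A minimal Python sketch of the uniform-grid formula (the integrand and panel count are arbitrary choices), weighting the endpoints by one half and each interior point by one:

    import numpy as np

    def trapezoid_uniform(f, a, b, n):
        """Composite trapezoidal rule with n equal panels on [a, b]:
        dx * (f(x_0)/2 + f(x_1) + ... + f(x_{n-1}) + f(x_n)/2)."""
        dx = (b - a) / n
        x = a + dx * np.arange(n + 1)
        y = f(x)
        return dx * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

    # Example: integrate x**2 on [0, 1]; the exact value is 1/3.
    print(trapezoid_uniform(lambda x: x**2, 0.0, 1.0, 100))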

Error analysis

An animation showing how the trapezoidal rule approximation improves with more strips for an interval [a, b]. As the number of intervals increases, so too does the accuracy of the result.

The error of the composite trapezoidal rule is the difference between the value of the integral and the numerical result:

    E = \int_a^b f(x)\,dx - \frac{b-a}{N} \left[ \frac{f(a) + f(b)}{2} + \sum_{k=1}^{N-1} f\!\left(a + k\,\frac{b-a}{N}\right) \right].

There exists a number ξ between a and b, such that[2]

    E = -\frac{(b-a)^3}{12 N^2}\, f''(\xi).

It follows that if the integrand is concave up (and thus has a positive second derivative), then the error is negative and the trapezoidal rule overestimates the true value. This can also be seen from the geometric picture: the trapezoids include all of the area under the curve and extend over it. Similarly, a concave-down function yields an underestimate, because area under the curve is left unaccounted for while none is counted above it. If the interval being approximated includes an inflection point, the sign of the error is harder to identify.
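
A quick numerical check of this sign behaviour (a sketch with arbitrary concave-up and concave-down test integrands; the error is taken, as above, to be the exact integral minus the trapezoidal estimate):

    import numpy as np

    def trapezoid_uniform(f, a, b, n):
        dx = (b - a) / n
        x = a + dx * np.arange(n + 1)
        y = f(x)
        return dx * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

    # exp is concave up on [0, 1]: the rule overestimates, so the error is negative.
    print((np.e - 1.0) - trapezoid_uniform(np.exp, 0.0, 1.0, 10))

    # sqrt is concave down on [1, 4]: the rule underestimates, so the error is positive.
    print(14.0 / 3.0 - trapezoid_uniform(np.sqrt, 1.0, 4.0, 10))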

In general, three techniques are used in the analysis of error:[3]

  1. Fourier series
  2. Residue calculus
  3. Euler–Maclaurin summation formula[4][5]

An asymptotic error estimate for N → ∞ is given by

    E = -\frac{(b-a)^2}{12 N^2} \left[ f'(b) - f'(a) \right] + O(N^{-3}).

Further terms in this error estimate are given by the Euler–Maclaurin summation formula.
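
This O(N^{-2}) behaviour is easy to verify numerically; the sketch below (using exp on [0, 1] as an arbitrary test integrand) compares the observed error with the leading term -(b-a)^2/(12 N^2) [f'(b) - f'(a)]:

    import numpy as np

    def trapezoid_uniform(f, a, b, n):
        dx = (b - a) / n
        x = a + dx * np.arange(n + 1)
        y = f(x)
        return dx * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

    a, b = 0.0, 1.0
    f, exact = np.exp, np.e - 1.0          # integral of exp on [0, 1]
    fprime = np.exp                        # f' = exp as well

    for n in (8, 16, 32, 64):
        err = exact - trapezoid_uniform(f, a, b, n)
        lead = -(b - a) ** 2 / (12 * n**2) * (fprime(b) - fprime(a))
        print(n, err, lead)                # the error quarters as n doubles and tracks the leading term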

It has been argued that the speed of convergence of the trapezoidal rule reflects, and can be used to define, classes of smoothness of a function.[6]

Periodic functions

The trapezoidal rule converges rapidly for periodic functions. This is an easy consequence of the Euler–Maclaurin summation formula, which says that if f is p times continuously differentiable with period T, then

    h \left[ \frac{f(0) + f(T)}{2} + \sum_{k=1}^{N-1} f(kh) \right] = \int_0^T f(x)\,dx + \sum_{k=1}^{\lfloor p/2 \rfloor} \frac{B_{2k}}{(2k)!}\, h^{2k} \left( f^{(2k-1)}(T) - f^{(2k-1)}(0) \right) + (-1)^{p+1}\, h^{p} \int_0^T \frac{\tilde{B}_p(x/h)}{p!}\, f^{(p)}(x)\,dx,

where h := T/N and \tilde{B}_p is the periodic extension of the p-th Bernoulli polynomial.[7] Due to the periodicity, the derivative terms at the endpoints cancel, and the error is O(h^p).
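
A short numerical illustration of this rapid convergence (the smooth periodic integrand exp(cos x) is an arbitrary choice; SciPy is used only to supply the exact value 2*pi*I_0(1)):

    import numpy as np
    from scipy.special import i0               # modified Bessel function I_0

    def trapezoid_periodic(f, period, n):
        """Trapezoidal rule over one full period: because f(0) == f(period),
        the endpoint half-weights merge and the rule is h times the sum of f
        at n equispaced points."""
        h = period / n
        x = h * np.arange(n)
        return h * np.sum(f(x))

    f = lambda x: np.exp(np.cos(x))            # smooth and 2*pi-periodic
    exact = 2.0 * np.pi * i0(1.0)              # integral of exp(cos x) over one period

    for n in (2, 4, 8, 16, 32):
        print(n, abs(exact - trapezoid_periodic(f, 2.0 * np.pi, n)))
    # for this analytic integrand the error decays geometrically in n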

Although some effort has been made to extend the Euler–Maclaurin summation formula to higher dimensions,[8] the most straightforward proof of the rapid convergence of the trapezoidal rule in higher dimensions is to reduce the problem to the convergence of Fourier series. This line of reasoning shows that if f is periodic on a d-dimensional space with p continuous derivatives, the speed of convergence is O(N^{-p/d}), where N is the total number of sample points. For very large dimension, this shows that Monte-Carlo integration is most likely a better choice, but for 2 and 3 dimensions, equispaced sampling is efficient. This is exploited in computational solid state physics, where equispaced sampling over primitive cells in the reciprocal lattice is known as Monkhorst-Pack integration.[9]
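
A schematic two-dimensional analogue (the doubly periodic integrand and the unit cell are arbitrary choices; this only illustrates equispaced sampling of a periodic function and is not an implementation of the Monkhorst-Pack scheme):

    import numpy as np
    from scipy.special import i0

    def trapezoid_2d_periodic(f, n):
        """Product trapezoidal rule for a function with period 1 in x and y:
        for periodic integrands it reduces to the mean of f over an n x n
        equispaced grid."""
        t = np.arange(n) / n                   # n equispaced points in [0, 1)
        X, Y = np.meshgrid(t, t, indexing="ij")
        return np.mean(f(X, Y))

    f = lambda x, y: np.exp(np.cos(2 * np.pi * x) + np.sin(2 * np.pi * y))
    exact = i0(1.0) ** 2                       # the integral factorizes; each factor is I_0(1)

    for n in (2, 4, 8, 16):
        print(n, abs(exact - trapezoid_2d_periodic(f, n)))   # the error drops very rapidly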

"Rough" functions[edit]

For various classes of functions that are not twice-differentiable, the trapezoidal rule has sharper bounds than Simpson's rule.[10]

Applicability and alternatives

The trapezoidal rule is one of a family of formulas for numerical integration called Newton–Cotes formulas; the midpoint rule is the member most similar to the trapezoidal rule. Simpson's rule is another member of the same family, and in general has faster convergence than the trapezoidal rule for functions which are twice continuously differentiable, though not in all specific cases. However, for various classes of rougher functions (ones with weaker smoothness conditions), the trapezoidal rule generally converges faster than Simpson's rule.[10]

Moreover, the trapezoidal rule tends to become extremely accurate when periodic functions are integrated over their periods, which can be analyzed in various ways.[6][11]

For non-periodic functions, however, methods with unequally spaced points such as Gaussian quadrature and Clenshaw–Curtis quadrature are generally far more accurate; Clenshaw–Curtis quadrature can be viewed as a change of variables to express arbitrary integrals in terms of periodic integrals, at which point the trapezoidal rule can be applied accurately.
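
A minimal sketch of that viewpoint (the coefficient handling, truncation, and test integrand are illustrative choices, not a reference implementation of Clenshaw–Curtis quadrature): substitute x = cos(theta), estimate the cosine-series coefficients of f(cos(theta)) with the trapezoidal rule in theta, and integrate the series term by term.

    import numpy as np

    def clenshaw_curtis(f, n):
        """Approximate the integral of f over [-1, 1] with n + 1 Chebyshev points:
        the trapezoidal rule in theta supplies the cosine coefficients of
        F(theta) = f(cos(theta)), which are then integrated term by term."""
        theta = np.pi * np.arange(n + 1) / n      # equispaced points in theta
        F = f(np.cos(theta))                      # samples at the Chebyshev points
        w = np.ones(n + 1)
        w[0] = w[-1] = 0.5                        # trapezoidal endpoint weights
        total = 0.0
        for k in range(0, n + 1, 2):              # odd-k terms integrate to zero
            a_k = (2.0 / n) * np.sum(w * F * np.cos(k * theta))
            if k == 0:
                total += a_k                      # contribution of the constant term
            else:
                term = 2.0 * a_k / (1.0 - k * k)  # integral of cos(k theta) sin(theta) on [0, pi]
                total += term / 2 if k == n else term   # conventional halving of the last term
        return total

    # Example: the integral of exp over [-1, 1] is e - 1/e, about 2.3504.
    print(clenshaw_curtis(np.exp, 8))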

See also

References

  1. ^ Ossendrijver, Mathieu (Jan 29, 2016). "Ancient Babylonian astronomers calculated Jupiter's position from the area under a time-velocity graph". Science. 351: 482–484. doi:10.1126/science.aad8085. PMID 26823423. 
  2. ^ Atkinson (1989, equation (5.1.7))
  3. ^ (Weideman 2002, p. 23, section 2)
  4. ^ Atkinson (1989, equation (5.1.9))
  5. ^ Atkinson (1989, p. 285)
  6. ^ a b (Rahman & Schmeisser 1990)
  7. ^ Kress, Rainer (1998). Numerical Analysis, volume 181 of Graduate Texts in Mathematics. Springer-Verlag. 
  8. ^ "Euler-Maclaurin Summation Formula for Multiple Sums". 
  9. ^ Thompson, Nick. "Numerical Integration over Brillouin Zones". Retrieved 19 December 2017. 
  10. ^ a b (Cruz-Uribe & Neugebauer 2002)
  11. ^ (Weideman 2002)


External links