Adaptive Simpson's method

From Wikipedia, the free encyclopedia

Adaptive Simpson's method, also called adaptive Simpson's rule, is a method of numerical integration proposed by G.F. Kuncir in 1962.[1] It is probably the first recursive adaptive algorithm for numerical integration to appear in print,[2] although more modern adaptive methods based on Gauss–Kronrod quadrature and Clenshaw–Curtis quadrature are now generally preferred. Adaptive Simpson's method uses an estimate of the error incurred in calculating a definite integral using Simpson's rule. If the error exceeds a user-specified tolerance, the algorithm subdivides the interval of integration in two and applies adaptive Simpson's method to each subinterval recursively. The technique is usually much more efficient than composite Simpson's rule, since it uses fewer function evaluations in places where the function is well approximated by a cubic.

A criterion for determining when to stop subdividing an interval, suggested by J.N. Lyness,[3] is

    |S(a,c) + S(c,b) − S(a,b)| < 15 ε

where [a,b] is an interval with midpoint c; S(a,b), S(a,c), and S(c,b) are the estimates given by Simpson's rule on the corresponding intervals; and ε is the desired tolerance for the interval.

Simpson's rule is an interpolatory quadrature rule which is exact when the integrand is a polynomial of degree three or lower. Using Richardson extrapolation, the more accurate Simpson estimate S(a,c) + S(c,b) for six function values is combined with the less accurate estimate S(a,b) for three function values by applying the correction (S(a,c) + S(c,b) − S(a,b))/15. The estimate thus obtained is exact for polynomials of degree five or lower.
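The degree-five exactness of the extrapolated estimate can be checked numerically. The following short Python sketch (not part of the original article; names are illustrative) compares the plain Simpson estimate with the extrapolated one on a degree-5 polynomial:

```python
# Illustrative check: the extrapolated estimate S2 + (S2 - S)/15 reproduces
# the integral of a degree-5 polynomial exactly (up to rounding), while a
# single application of Simpson's rule does not.

def simpson(f, a, b):
    "Basic Simpson estimate for f on [a, b]."
    c = (a + b) / 2.0
    return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

def extrapolated(f, a, b):
    "Richardson-extrapolated estimate: exact for polynomials of degree <= 5."
    c = (a + b) / 2.0
    S = simpson(f, a, b)           # coarse estimate, 3 points
    S2 = simpson(f, a, c) + simpson(f, c, b)  # refined estimate
    return S2 + (S2 - S) / 15.0

f = lambda x: x**5     # degree-5 test polynomial
exact = 1.0 / 6.0      # integral of x^5 over [0, 1]
print(abs(simpson(f, 0.0, 1.0) - exact))       # noticeable error
print(abs(extrapolated(f, 0.0, 1.0) - exact))  # near machine epsilon
```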

Sample implementations

Python

Here is an implementation of adaptive Simpson's method in Python. Note that this is explanatory code, written without regard for efficiency: every call to recursive_asr entails six function evaluations, four of which duplicate values already computed by the caller. For actual use, it should be modified so that only the minimum of two new function evaluations is performed per call.

def simpsons_rule(f, a, b):
    "Simpson's rule estimate for f on [a, b]."
    c = (a + b) / 2.0
    h3 = abs(b - a) / 6.0
    return h3 * (f(a) + 4.0 * f(c) + f(b))

def recursive_asr(f, a, b, eps, whole):
    "Recursive implementation of adaptive Simpson's rule."
    c = (a + b) / 2.0
    left = simpsons_rule(f, a, c)
    right = simpsons_rule(f, c, b)
    if abs(left + right - whole) <= 15 * eps:
        return left + right + (left + right - whole) / 15.0
    return recursive_asr(f, a, c, eps / 2.0, left) + recursive_asr(f, c, b, eps / 2.0, right)

def adaptive_simpsons_rule(f, a, b, eps):
    "Calculate integral of f from a to b with max error of eps."
    return recursive_asr(f, a, b, eps, simpsons_rule(f, a, b))

from math import sin
print(adaptive_simpsons_rule(sin, 0, 1, 1e-9))
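The efficiency fix described above can be sketched as follows (this variant is not part of the article's listing; the names are illustrative). Each recursive call receives the three function values its caller already computed, so only the two new midpoints are evaluated:

```python
# Sketch of an efficient adaptive Simpson's rule: two function
# evaluations per recursive call, reusing cached endpoint/midpoint values.

def _asr(f, a, fa, b, fb, c, fc, eps, whole):
    "Refine the Simpson estimate `whole` on [a, b] with cached fa, fb, fc."
    d, e = (a + c) / 2.0, (c + b) / 2.0
    fd, fe = f(d), f(e)          # the only two new evaluations
    left = (c - a) / 6.0 * (fa + 4.0 * fd + fc)
    right = (b - c) / 6.0 * (fc + 4.0 * fe + fb)
    if abs(left + right - whole) <= 15.0 * eps:
        return left + right + (left + right - whole) / 15.0
    return (_asr(f, a, fa, c, fc, d, fd, eps / 2.0, left) +
            _asr(f, c, fc, b, fb, e, fe, eps / 2.0, right))

def adaptive_simpsons_rule2(f, a, b, eps):
    "Calculate integral of f from a to b with max error of eps."
    c = (a + b) / 2.0
    fa, fb, fc = f(a), f(b), f(c)
    whole = (b - a) / 6.0 * (fa + 4.0 * fc + fb)
    return _asr(f, a, fa, b, fb, c, fc, eps, whole)

from math import sin
print(adaptive_simpsons_rule2(sin, 0, 1, 1e-9))  # approximately 1 - cos(1)
```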

C

Here is an implementation of adaptive Simpson's method in C99 that avoids redundant evaluations of f and redundant quadrature computations. The memory used is O(d), where d is the maximum recursion depth; each stack frame caches computed values needed by subsequent calls.

#include <math.h>  // for fabs and sin
#include <stdio.h> // for printf

//
// Recursive auxiliary function for adaptiveSimpsons() below
//
double adaptiveSimpsonsAux(double (*f)(double), double a, double b, double epsilon,
                           double S, double fa, double fb, double fc, int bottom) {
  double c = (a + b)/2, h = b - a;
  double d = (a + c)/2, e = (c + b)/2;
  double fd = f(d), fe = f(e);
  double Sleft  = (h/12)*(fa + 4*fd + fc);
  double Sright = (h/12)*(fc + 4*fe + fb);
  double S2 = Sleft + Sright;
  if (bottom <= 0 || fabs(S2 - S) <= 15*epsilon) // magic 15 comes from the error analysis
    return S2 + (S2 - S)/15;
  return adaptiveSimpsonsAux(f, a, c, epsilon/2, Sleft,  fa, fc, fd, bottom-1) +
         adaptiveSimpsonsAux(f, c, b, epsilon/2, Sright, fc, fb, fe, bottom-1);
}

//
// Adaptive Simpson's Rule
//
double adaptiveSimpsons(double (*f)(double),     // pointer to function to integrate
                        double a, double b,      // interval [a,b]
                        double epsilon,          // error tolerance
                        int maxRecursionDepth) { // recursion cap
  double c = (a + b)/2, h = b - a;
  double fa = f(a), fb = f(b), fc = f(c);
  double S = (h/6)*(fa + 4*fc + fb);
  return adaptiveSimpsonsAux(f, a, b, epsilon, S, fa, fb, fc, maxRecursionDepth);
}

int main(void) {
  double I = adaptiveSimpsons(sin, 0, 2, 0.001, 100); // compute integral of sin(x)
                                                      // from 0 to 2 and store it
                                                      // in the variable I
  printf("I = %lf\n", I); // print the result
  return 0;
}

Racket

Here is an implementation of the adaptive Simpson method in Racket with a behavioral software contract. The exported function computes the definite integral of a given function f.

;; -----------------------------------------------------------------------------
;; interface, with contract 

;; Simpson's rule for approximating an integral
(define (simpson f L R)
  (* (/ (- R L) 6) (+ (f L) (* 4 (f (mid L R))) (f R))))

(provide/contract
 [adaptive-simpson 
  (->i ((f (-> real? real?)) (L real?) (R (L) (and/c real? (>/c L))))
       (#:epsilon (ε real?))
       (r real?))])


;; -----------------------------------------------------------------------------
;; implementation 

(define (adaptive-simpson f L R #:epsilon [ε .000000001])
  (define f@L (f L))
  (define f@R (f R))
  (define-values (M f@M whole) (simpson-1call-to-f f L f@L R f@R))
  (asr f L f@L R f@R ε whole M f@M))

;; computationally efficient: 2 function calls per step 
(define (asr f L f@L R f@R ε whole M f@M)
  (define-values (leftM  f@leftM  left*)  (simpson-1call-to-f f L f@L M f@M))
  (define-values (rightM f@rightM right*) (simpson-1call-to-f f M f@M R f@R))
  (define delta* (- (+ left* right*) whole))
  (cond
    [(<= (abs delta*) (* 15 ε)) (+ left* right* (/ delta* 15))]
    [else (define epsilon1 (/ ε 2))
          (+ (asr f L f@L M f@M epsilon1 left*  leftM  f@leftM) 
             (asr f M f@M R f@R epsilon1 right* rightM f@rightM))]))

(define (simpson-1call-to-f f L f@L R f@R)
  (define M (mid L R))
  (define f@M (f M))
  (values M f@M (* (/ (abs (- R L)) 6) (+ f@L (* 4 f@M) f@R))))

(define (mid L R) (/ (+ L R) 2.))

The code is an excerpt from a "#lang racket" module that also includes a (require rackunit) line.

Bibliography

  1. ^ G.F. Kuncir (1962), "Algorithm 103: Simpson's rule integrator", Communications of the ACM, 5 (6): 347, doi:10.1145/367766.368179 
  2. ^ For an earlier, non-recursive adaptive integrator more reminiscent of ODE solvers, see S. Henriksson (1961), "Contribution no. 2: Simpson numerical integration with variable length of step", BIT Numerical Mathematics, 1: 290 
  3. ^ J.N. Lyness (1969), "Notes on the adaptive Simpson quadrature routine", Journal of the ACM, 16 (3): 483–495, doi:10.1145/321526.321537 
