Laplace's method


In mathematics, Laplace's method, named after Pierre-Simon Laplace, is a technique used to approximate integrals of the form

${\displaystyle \int _{a}^{b}\!e^{Mf(x)}\,dx}$

where ƒ(x) is some twice-differentiable function, M is a large number, and the integral endpoints a and b could possibly be infinite. This technique was originally presented in Laplace (1774, pp. 366–367).

The idea of Laplace's method

The function ${\displaystyle e^{Mf(x)}}$ is shown in blue for M = 0.5 (top) and M = 3 (bottom). Here, ƒ(x) = sin x/x, with a global maximum at x0 = 0. As M grows larger, the approximation of this function by a Gaussian function (shown in red) improves. This observation underlies Laplace's method.

Assume that the function ƒ(x) has a unique global maximum at x0. Then, the value ƒ(x0) will be larger than the other values of ƒ(x). If we multiply this function by a large number M, the ratio between Mƒ(x0) and Mƒ(x) will stay the same (since Mƒ(x0)/Mƒ(x) = ƒ(x0)/ƒ(x)), but the gap between ${\displaystyle e^{Mf(x_{0})}}$ and ${\displaystyle e^{Mf(x)}}$ grows exponentially in M (see figure).

Thus, significant contributions to the integral of this function will come only from points x in a neighbourhood of x0, which can then be estimated.

General theory of Laplace's method

To state and motivate the method, we need several assumptions. We will assume that x0 is not an endpoint of the interval of integration, that the values ƒ(x) cannot be very close to ƒ(x0) unless x is close to x0, and that the second derivative ${\displaystyle f''(x_{0})<0}$.

We can expand ƒ(x) around x0 by Taylor's theorem,

${\displaystyle f(x)=f(x_{0})+f'(x_{0})(x-x_{0})+{\frac {1}{2}}f''(x_{0})(x-x_{0})^{2}+R}$

where ${\displaystyle R=O\left((x-x_{0})^{3}\right).}$

Since ƒ has a global maximum at x0, and since x0 is not an endpoint, it is a stationary point, so the derivative of ƒ vanishes at x0. Therefore, the function ƒ(x) may be approximated to quadratic order

${\displaystyle f(x)\approx f(x_{0})-{\frac {1}{2}}|f''(x_{0})|(x-x_{0})^{2}}$

for x close to x0 (recall that the second derivative is negative at the global maximum ƒ(x0)). The assumptions made ensure the accuracy of the approximation

${\displaystyle \int _{a}^{b}\!e^{Mf(x)}\,dx\approx e^{Mf(x_{0})}\int _{a}^{b}e^{-M|f''(x_{0})|(x-x_{0})^{2}/2}\,dx}$

(see the figure above). This latter integral is a Gaussian integral if the limits of integration go from −∞ to +∞ (which can be assumed because the exponential decays very fast away from x0), and thus it can be calculated. We find

${\displaystyle \int _{a}^{b}\!e^{Mf(x)}\,dx\approx {\sqrt {\frac {2\pi }{M|f''(x_{0})|}}}e^{Mf(x_{0})}{\text{ as }}M\to \infty .\,}$
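This asymptotic formula is easy to check numerically. The sketch below (an illustration, not part of the original presentation) uses the figure's example ƒ(x) = sin x/x, for which ƒ(x0) = 1 and |ƒ″(x0)| = 1/3 at x0 = 0, and compares a brute-force quadrature of the integral against the Laplace estimate:

```python
import math

# f(x) = sin(x)/x, the example from the figure: global max f(0) = 1, f''(0) = -1/3.
def f(x):
    return math.sin(x) / x if x != 0.0 else 1.0

def laplace_estimate(M, fmax, fpp_abs):
    """Leading-order Laplace approximation: sqrt(2*pi/(M*|f''(x0)|)) * exp(M*f(x0))."""
    return math.sqrt(2 * math.pi / (M * fpp_abs)) * math.exp(M * fmax)

def trapezoid(g, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b))
    for i in range(1, n):
        s += g(a + i * h)
    return s * h

M = 100.0
numeric = trapezoid(lambda x: math.exp(M * f(x)), -2.0, 2.0, 200000)
approx = laplace_estimate(M, fmax=1.0, fpp_abs=1.0 / 3.0)
print(numeric / approx)  # ratio close to 1; the relative error shrinks like O(1/M)
```

Increasing M drives the ratio toward 1, in line with the limit statement below.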

A generalization of this method and extension to arbitrary precision is provided by Fog (2008).

Formal statement and proof

Assume that ${\displaystyle f(x)}$ is a twice continuously differentiable function on ${\displaystyle [a,b]}$ with ${\displaystyle x_{0}\in (a,b)}$ the unique point such that ${\displaystyle f(x_{0})=\max _{[a,b]}f(x)}$. Assume additionally that ${\displaystyle f''(x_{0})<0}$.

Then,

${\displaystyle \lim _{n\to +\infty }{\frac {\int _{a}^{b}e^{nf(x)}\,dx}{e^{nf(x_{0})}{\sqrt {\frac {2\pi }{n(-f''(x_{0}))}}}}}=1}$

The proof relies on four basic observations, from which the relative error of Laplace's method can also be derived.

Other formulations

Laplace's approximation is sometimes written as

${\displaystyle \int _{a}^{b}\!h(x)e^{Mg(x)}\,dx\approx {\sqrt {\frac {2\pi }{M|g''(x_{0})|}}}h(x_{0})e^{Mg(x_{0})}{\text{ as }}M\to \infty }$

where ${\displaystyle h}$ is positive.

Importantly, the accuracy of the approximation depends on the variable of integration, that is, on what stays in ${\displaystyle g(x)}$ and what goes into ${\displaystyle h(x)}$.[1]
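As a quick numerical illustration of this weighted form (with hypothetical choices h(x) = 1/(1 + x²) and g(x) = −x², so x0 = 0, g″(0) = −2 and h(0) = 1):

```python
import math

# Hypothetical choices for the weighted form h(x) * exp(M*g(x)):
# g has its maximum at x0 = 0 with g''(0) = -2, and h(0) = 1.
def h(x):
    return 1.0 / (1.0 + x * x)

def g(x):
    return -x * x

M = 100.0
approx = math.sqrt(2 * math.pi / (M * 2.0)) * h(0.0) * math.exp(M * g(0.0))

# Composite trapezoid rule; the integrand is negligible outside [-3, 3].
a, b, n = -3.0, 3.0, 60000
step = (b - a) / n
integrand = lambda x: h(x) * math.exp(M * g(x))
numeric = (0.5 * (integrand(a) + integrand(b))
           + sum(integrand(a + i * step) for i in range(1, n))) * step

print(numeric / approx)  # ratio close to 1
```

Here only the value h(x0) enters the estimate; the shape of h near x0 contributes to the O(1/M) error term.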

In the multivariate case where ${\displaystyle \mathbf {x} }$ is a ${\displaystyle d}$-dimensional vector and ${\displaystyle f(\mathbf {x} )}$ is a scalar function of ${\displaystyle \mathbf {x} }$, Laplace's approximation is usually written as:

${\displaystyle \int e^{Mf(\mathbf {x} )}\,d\mathbf {x} \approx \left({\frac {2\pi }{M}}\right)^{d/2}|-H(f)(\mathbf {x} _{0})|^{-1/2}e^{Mf(\mathbf {x} _{0})}{\text{ as }}M\to \infty }$

where ${\displaystyle H(f)(\mathbf {x} _{0})}$ is the Hessian matrix of ${\displaystyle f}$ evaluated at ${\displaystyle \mathbf {x} _{0}}$ and where ${\displaystyle |\cdot |}$ denotes matrix determinant. Analogously to the univariate case, the Hessian is required to be negative definite.[2]

Although ${\displaystyle \mathbf {x} }$ denotes a ${\displaystyle d}$-dimensional vector, the term ${\displaystyle d\mathbf {x} }$ here denotes an infinitesimal volume element, i.e. ${\displaystyle d\mathbf {x} :=dx_{1}\,dx_{2}\cdots dx_{d}}$.
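The multivariate formula can also be checked numerically. The sketch below uses a hypothetical two-dimensional test function (a negative-definite quadratic form plus a small quartic perturbation, maximized at the origin) and compares a brute-force Riemann sum against the Hessian-based estimate:

```python
import math

# Hypothetical 2-d test function, maximized at the origin.
def f(x, y):
    return -(x * x + x * y + y * y) - 0.1 * (x ** 4 + y ** 4)

# At (0, 0): f = 0 and the Hessian is [[-2, -1], [-1, -2]], so det(-H) = 3.
M = 50.0
d = 2
approx = (2 * math.pi / M) ** (d / 2) * 3.0 ** -0.5 * math.exp(M * 0.0)

# Brute-force Riemann sum over a box that comfortably contains the peak.
lim, n = 1.5, 300
h = 2 * lim / n
numeric = sum(
    math.exp(M * f(-lim + i * h, -lim + j * h))
    for i in range(n) for j in range(n)
) * h * h

print(numeric / approx)  # ratio close to 1
```

The determinant of the negated Hessian plays the role that |ƒ″(x0)| plays in one dimension: it measures how sharply the integrand peaks in every direction at once.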

Laplace's method extension: Steepest descent

In extensions of Laplace's method, complex analysis, and in particular Cauchy's integral formula, is used to find a contour of steepest descent for an (asymptotically with large M) equivalent integral, expressed as a line integral. In particular, if no point x0 where the derivative of ƒ vanishes exists on the real line, it may be necessary to deform the integration contour to an optimal one, where the above analysis will be possible. Again the main idea is to reduce, at least asymptotically, the calculation of the given integral to that of a simpler integral that can be explicitly evaluated. See the book of Erdelyi (1956) for a simple discussion (where the method is termed steepest descents).

The appropriate formulation for the complex z-plane is

${\displaystyle \int _{a}^{b}\!e^{Mf(z)}\,dz\approx {\sqrt {\frac {2\pi }{-Mf''(z_{0})}}}e^{Mf(z_{0})}{\text{ as }}M\to \infty }$

for a path passing through the saddle point at z0. Note the explicit appearance of a minus sign to indicate the direction of the second derivative: one must not take the modulus. Also note that if the integrand is meromorphic, one may have to add residues corresponding to poles traversed while deforming the contour (see for example section 3 of Okounkov's paper Symmetric functions and random partitions).

Further generalizations

An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann–Hilbert factorization problems.

Given a contour C in the complex sphere, a function ƒ defined on that contour and a special point, say infinity, one seeks a function M holomorphic away from the contour C, with prescribed jump across C, and with a given normalization at infinity. If ƒ and hence M are matrices rather than scalars this is a problem that in general does not admit an explicit solution.

An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour.

The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, "steepest descent contours" solve a min-max problem. In the nonlinear case they turn out to be "S-curves" (defined in a different context in the 1980s by Stahl, Gonchar and Rakhmanov).

The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics.

Complex integrals

For complex integrals of the form:

${\displaystyle {\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }g(s)e^{st}\,ds}$

with t >> 1, we make the substitution t = iu and the change of variable s = c + ix to get the bilateral Laplace transform:

${\displaystyle {\frac {1}{2\pi }}\int _{-\infty }^{\infty }g(c+ix)e^{-ux}e^{icu}\,dx.}$

We then split g(c + ix) into its real and imaginary parts, after which we recover u = t/i. This is useful for inverse Laplace transforms, the Perron formula and complex integration.
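The substitution above can be traced step by step: with ${\displaystyle s=c+ix}$ (so ${\displaystyle ds=i\,dx}$) and ${\displaystyle t=iu}$, the exponential factors as

${\displaystyle e^{st}=e^{(c+ix)iu}=e^{icu}e^{-ux},}$

so that

${\displaystyle {\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }g(s)e^{st}\,ds={\frac {1}{2\pi i}}\int _{-\infty }^{\infty }g(c+ix)\,e^{icu}e^{-ux}\,i\,dx={\frac {1}{2\pi }}\int _{-\infty }^{\infty }g(c+ix)e^{-ux}e^{icu}\,dx.}$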

Example 1: Stirling's approximation

Laplace's method can be used to derive Stirling's approximation

${\displaystyle N!\approx {\sqrt {2\pi N}}N^{N}e^{-N}\,}$

for a large integer N.

From the definition of the Gamma function, we have

${\displaystyle N!=\Gamma (N+1)=\int _{0}^{\infty }e^{-x}x^{N}\,dx.}$

Now we change variables, letting

${\displaystyle x=Nz\,}$

so that

${\displaystyle dx=N\,dz.}$

Plug these values back in to obtain

{\displaystyle {\begin{aligned}N!&=\int _{0}^{\infty }e^{-Nz}\left(Nz\right)^{N}N\,dz\\&=N^{N+1}\int _{0}^{\infty }e^{-Nz}z^{N}\,dz\\&=N^{N+1}\int _{0}^{\infty }e^{-Nz}e^{N\ln z}\,dz\\&=N^{N+1}\int _{0}^{\infty }e^{N(\ln z-z)}\,dz.\end{aligned}}}

This integral has the form necessary for Laplace's method with

${\displaystyle f\left(z\right)=\ln {z}-z}$

which is twice-differentiable:

${\displaystyle f'(z)={\frac {1}{z}}-1,\,}$
${\displaystyle f''(z)=-{\frac {1}{z^{2}}}.\,}$

The maximum of ƒ(z) lies at z0 = 1, and the second derivative of ƒ(z) has the value −1 at this point. Therefore, we obtain

${\displaystyle N!\approx N^{N+1}{\sqrt {\frac {2\pi }{N}}}e^{-N}={\sqrt {2\pi N}}N^{N}e^{-N}.\,}$
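The quality of this approximation is easy to inspect directly; the relative error is known to behave like 1/(12N):

```python
import math

def stirling(n):
    """Leading-order Stirling approximation: sqrt(2*pi*n) * n^n * e^{-n}."""
    return math.sqrt(2 * math.pi * n) * n ** n * math.exp(-n)

for n in (5, 20, 50):
    exact = math.factorial(n)
    print(n, stirling(n) / exact)  # ratios approach 1 as n grows
```

Even at N = 5 the approximation is within a few percent of N!, and the ratio tends to 1 as N grows.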

Example 2: parameter estimation and probabilistic inference

Azevedo-Filho & Shachter (1994) review Laplace's method results (univariate and multivariate) and present a detailed example showing the method used in parameter estimation and probabilistic inference under a Bayesian perspective. Laplace's method is applied to a meta-analysis problem from the medical domain, involving experimental data, and compared to other techniques.

Notes

1. ^ Butler, Ronald W (2007). Saddlepoint approximations and applications. Cambridge University Press. ISBN 978-0-521-87250-8.
2. ^ MacKay, David J. C. (September 2003). Information Theory, Inference and Learning Algorithms. Cambridge: Cambridge University Press. ISBN 9780521642989.

References

• Azevedo-Filho, A.; Shachter, R. (1994), "Laplace's Method Approximations for Probabilistic Inference in Belief Networks with Continuous Variables", in Mantaras, R.; Poole, D. (eds.), Uncertainty in Artificial Intelligence, San Francisco, CA: Morgan Kaufmann.
• Deift, P.; Zhou, X. (1993), "A steepest descent method for oscillatory Riemann–Hilbert problems. Asymptotics for the MKdV equation", Ann. of Math., 137 (2), pp. 295–368, doi:10.2307/2946540.
• Erdelyi, A. (1956), Asymptotic Expansions, Dover.
• Fog, A. (2008), "Calculation Methods for Wallenius' Noncentral Hypergeometric Distribution", Communications in Statistics, Simulation and Computation, 37 (2), pp. 258–273, doi:10.1080/03610910701790269.
• Kamvissis, S.; McLaughlin, K. T.-R.; Miller, P. (2003), "Semiclassical Soliton Ensembles for the Focusing Nonlinear Schrödinger Equation", Annals of Mathematics Studies, Princeton University Press, 154.
• Laplace, P. S. (1774). Memoir on the probability of causes of events. Mémoires de Mathématique et de Physique, Tome Sixième. (English translation by S. M. Stigler 1986. Statist. Sci., 1(19):364–378).
• Wang, Xiang-Sheng; Wong, Roderick (2007). "Discrete analogues of Laplace's approximation". Asymptot. Anal. 54 (3-4): 165–180.