# Leibniz integral rule

In calculus, Leibniz's rule for differentiation under the integral sign, named after Gottfried Leibniz, states that for an integral of the form

${\displaystyle \int _{y_{0}}^{y_{1}}f(x,y)\,\mathrm {d} y}$

the derivative of this integral with respect to x, for x in (x0, x1), is expressible as

${\displaystyle {\mathrm {d} \over \mathrm {d} x}\left(\int _{y_{0}}^{y_{1}}f(x,y)\,\mathrm {d} y\right)=\int _{y_{0}}^{y_{1}}f_{x}(x,y)\,\mathrm {d} y}$

provided that f and its partial derivative fx are both continuous over a region of the form [x0, x1] × [y0, y1].

Thus, under certain conditions, one may interchange the integral and partial differential operators. This important result is particularly useful in the differentiation of integral transforms. One example is the moment generating function in probability theory, a variation of the Laplace transform, which can be differentiated to generate the moments of a random variable. Whether Leibniz's integral rule applies is essentially a question about the interchange of limits.
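As a concrete sketch of that use, consider the moment generating function of an exponential random variable (an illustrative choice, assuming NumPy and SciPy are available): differentiating under the integral sign at s = 0 recovers the mean.

```python
# Illustrative sketch: X ~ Exponential(lam), M(s) = E[exp(sX)].
# Differentiating under the integral sign gives M'(s) = E[X exp(sX)],
# so M'(0) recovers the first moment E[X] = 1/lam.
import numpy as np
from scipy.integrate import quad

lam = 2.0

def density(x):
    return lam * np.exp(-lam * x)

def mgf_deriv(s):
    # derivative taken inside the integral defining the MGF
    val, _ = quad(lambda x: x * np.exp(s * x) * density(x), 0, np.inf)
    return val

mean = mgf_deriv(0.0)
print(abs(mean - 1 / lam) < 1e-8)  # True
```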

## Formal statement

Let f(x, t) be a function such that both f(x, t) and its partial derivative with respect to t are continuous in a region of the (x, t)-plane containing a(t) ≤ x ≤ b(t), and suppose that a(t) and b(t) are continuously differentiable. Then,

${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left(\int _{a(t)}^{b(t)}f(x,t)\,\mathrm {d} x\right)=\int _{a(t)}^{b(t)}{\frac {\partial f}{\partial t}}\,\mathrm {d} x\,+\,f{\big (}b(t),t{\big )}\cdot b'(t)\,-\,f{\big (}a(t),t{\big )}\cdot a'(t)}$

where the partial derivative indicates that inside the integral, only the variation of f(•,t) with t is considered in taking the derivative.
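A minimal numerical sanity check of this formula (a sketch assuming SciPy is available; the integrand f(x, t) = sin(tx) and the limits a(t) = t, b(t) = t² are arbitrary smooth choices):

```python
import numpy as np
from scipy.integrate import quad

def f(x, t):
    return np.sin(t * x)

def f_t(x, t):                      # partial derivative of f with respect to t
    return x * np.cos(t * x)

def a(t): return t                  # a'(t) = 1
def b(t): return t**2               # b'(t) = 2t

def integral(t):
    val, _ = quad(f, a(t), b(t), args=(t,))
    return val

t, h = 1.3, 1e-4
lhs = (integral(t + h) - integral(t - h)) / (2 * h)   # centered difference
interior, _ = quad(f_t, a(t), b(t), args=(t,))
rhs = interior + f(b(t), t) * 2 * t - f(a(t), t) * 1.0
print(abs(lhs - rhs) < 1e-6)  # True
```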

## General form: Differentiation under the integral sign

Theorem. Let f(x, t) be a function such that both f(x, t) and its partial derivative fx(x, t) are continuous in t and x in some region of the (x, t)-plane, including a(x) ≤ t ≤ b(x), x0 ≤ x ≤ x1. Also suppose that the functions a(x) and b(x) are both continuous and both have continuous derivatives for x0 ≤ x ≤ x1. Then for x0 ≤ x ≤ x1:
${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} x}}\left(\int _{a(x)}^{b(x)}f(x,t)\,\mathrm {d} t\right)=f(x,b(x))\cdot b'(x)-f(x,a(x))\cdot a'(x)+\int _{a(x)}^{b(x)}{\frac {\partial }{\partial x}}f(x,t)\;\mathrm {d} t.}$

This formula is the general form of the Leibniz integral rule and can be derived using the fundamental theorem of calculus. The (first) fundamental theorem of calculus is just a particular case of the above formula, for a(x) = a, a constant, b(x) = x and f(x, t) = f(t).

If both upper and lower limits are taken as constants, then the formula takes the form of an operator equation:

ItDx = DxIt,

where Dx is the partial derivative with respect to x and It is the integral operator with respect to t over a fixed interval. That is, it is related to the symmetry of second derivatives, but involving integrals as well as derivatives. This case is also known as the Leibniz integral rule.
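The commutation ItDx = DxIt can be checked symbolically for a concrete integrand (a sketch assuming SymPy is available; the polynomial integrand is an arbitrary smooth choice):

```python
import sympy as sp

x, t = sp.symbols('x t')
f = x**2 * t**3 + x * t                        # arbitrary smooth integrand

lhs = sp.diff(sp.integrate(f, (t, 0, 1)), x)   # D_x applied after I_t
rhs = sp.integrate(sp.diff(f, x), (t, 0, 1))   # I_t applied after D_x
print(sp.expand(lhs - rhs) == 0)  # True
```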

The following three basic theorems on the interchange of limits are essentially equivalent:

• the interchange of a derivative and an integral (differentiation under the integral sign; i.e., Leibniz integral rule)
• the change of order of partial derivatives
• the change of order of integration (integration under the integral sign; i.e., Fubini's theorem)

## Three-dimensional, time-dependent case

Figure 1: A vector field F(r, t) defined throughout space, and a surface Σ bounded by curve ∂Σ moving with velocity v over which the field is integrated.

A Leibniz integral rule for a two-dimensional surface moving in three-dimensional space is:[1]

${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\iint _{\Sigma (t)}\mathbf {F} (\mathbf {r} ,t)\cdot \mathrm {d} \mathbf {A} =\iint _{\Sigma (t)}\left(\mathbf {F} _{t}(\mathbf {r} ,t)+\left[\mathrm {\nabla } \cdot \mathbf {F} (\mathbf {r} ,t)\right]\mathbf {v} \right)\cdot \mathrm {d} \mathbf {A} \,-\,\oint _{\partial \Sigma (t)}\left[\mathbf {v} \times \mathbf {F} (\mathbf {r} ,t)\right]\cdot \mathrm {d} \mathbf {s} }$

where:

• F(r, t) is a vector field at the spatial position r at time t,
• Σ is a moving surface in three-space bounded by the closed curve ∂Σ,
• dA is a vector element of the surface Σ,
• ds is a vector element of the curve ∂Σ,
• v is the velocity of movement of the region Σ,
• ∇⋅ is the vector divergence,
• × is the vector cross product.

The double integrals are surface integrals over the surface Σ, and the line integral is over the bounding curve ∂Σ.

## Higher dimensions

The Leibniz integral rule can be extended to multidimensional integrals. In two and three dimensions, this rule is better known from the field of fluid dynamics as the Reynolds transport theorem:

${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\int _{D(t)}F({\vec {\textbf {x}}},t)\,\mathrm {d} V=\int _{D(t)}{\frac {\partial }{\partial t}}\,F({\vec {\textbf {x}}},t)\,\mathrm {d} V+\int _{\partial D(t)}\,F({\vec {\textbf {x}}},t)\,{\vec {\textbf {v}}}_{b}\cdot \mathrm {d} \mathbf {\Sigma } }$

where ${\displaystyle F({\vec {\textbf {x}}},t)\,}$ is a scalar function, D(t) and ∂D(t) denote a time-varying connected region of R3 and its boundary, respectively, ${\displaystyle {\vec {\textbf {v}}}_{b}\,}$ is the Eulerian velocity of the boundary (see Lagrangian and Eulerian coordinates) and dΣ = n dS is the surface element with unit normal n and area element dS.

The general statement of the Leibniz integral rule requires concepts from differential geometry, specifically differential forms, exterior derivatives, wedge products and interior products. With those tools, the Leibniz integral rule in p-dimensions is:[1]

${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\int _{\Omega (t)}\omega =\int _{\Omega (t)}i_{\vec {\textbf {v}}}(\mathrm {d} _{x}\omega )+\int _{\partial \Omega (t)}i_{\vec {\textbf {v}}}\omega +\int _{\Omega (t)}{\dot {\omega }},\,}$

where Ω(t) is a time-varying domain of integration, ω is a p-form, ${\displaystyle {\vec {\textbf {v}}}\,}$ is the vector field of the velocity, ${\displaystyle {\vec {\textbf {v}}}={\frac {\partial {\vec {\textbf {x}}}}{\partial t}}\,}$, i denotes the interior product, dxω is the exterior derivative of ω with respect to the space variables only and ${\displaystyle {\dot {\omega }}\,}$ is the time-derivative of ω.

## Measure theory statement

Let ${\displaystyle X}$ be an open subset of ${\displaystyle \mathbb {R} }$ , and ${\displaystyle \Omega }$ be a measure space. Suppose ${\displaystyle f:X\times \Omega \rightarrow \mathbb {R} }$ satisfies the following conditions:

(1) ${\displaystyle f(x,\omega )}$ is a Lebesgue-integrable function of ${\displaystyle \omega }$ for each ${\displaystyle x\in X}$
(2) For almost all ${\displaystyle \omega \in \Omega }$ , the derivative ${\displaystyle f_{x}}$ exists for all ${\displaystyle x\in X}$
(3) There is an integrable function ${\displaystyle \theta :\Omega \rightarrow \mathbb {R} }$ such that ${\displaystyle |f_{x}(x,\omega )|\leq \theta (\omega )}$ for all ${\displaystyle x\in X}$ and almost every ${\displaystyle \omega \in \Omega }$

Then for all ${\displaystyle x\in X}$

${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} x}}\int _{\Omega }\,f(x,\omega )\,\mathrm {d} \omega =\int _{\Omega }\,f_{x}(x,\omega )\,\mathrm {d} \omega }$
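A numerical sketch of these hypotheses and the conclusion (assuming SciPy is available): take Ω = [0, ∞) with Lebesgue measure and f(x, ω) = e^(−ω) sin(xω), so that |fx(x, ω)| ≤ ω e^(−ω), which is an integrable dominating function θ.

```python
import numpy as np
from scipy.integrate import quad

def F(x):
    # F(x) = ∫_0^∞ exp(-w) sin(x w) dw, with w playing the role of omega
    val, _ = quad(lambda w: np.exp(-w) * np.sin(x * w), 0, np.inf)
    return val

def F_prime(x):
    # differentiate under the integral sign; |integrand| <= w exp(-w) = theta(w)
    val, _ = quad(lambda w: w * np.exp(-w) * np.cos(x * w), 0, np.inf)
    return val

x, h = 0.7, 1e-4
fd = (F(x + h) - F(x - h)) / (2 * h)          # centered finite difference
print(abs(fd - F_prime(x)) < 1e-6)  # True
```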

## Proofs

### Proof of basic form

Let:

${\displaystyle u(x)=\int _{y_{0}}^{y_{1}}f(x,y)\,\mathrm {d} y\qquad (1)}$

So that, using difference quotients

${\displaystyle u'(x)=\lim _{h\rightarrow 0}{\frac {u(x+h)-u(x)}{h}}\qquad (2)}$

Substitute equation (1) into equation (2), combine the integrals (since the difference of two integrals equals the integral of the difference) and use the fact that 1/h is a constant:

{\displaystyle {\begin{aligned}u'(x)&=\lim _{h\rightarrow 0}{\frac {\int _{y_{0}}^{y_{1}}f(x+h,y)\,\mathrm {d} y-\int _{y_{0}}^{y_{1}}f(x,y)\,\mathrm {d} y}{h}}\\&=\lim _{h\rightarrow 0}{\frac {\int _{y_{0}}^{y_{1}}\left(f(x+h,y)-f(x,y)\right)\,\mathrm {d} y}{h}}\\&=\lim _{h\rightarrow 0}\int _{y_{0}}^{y_{1}}{\frac {f(x+h,y)-f(x,y)}{h}}\,\mathrm {d} y\end{aligned}}}

Provided that the limit can be passed under the integral sign, we obtain

${\displaystyle u'(x)=\int _{y_{0}}^{y_{1}}f_{x}(x,y)\,\mathrm {d} y}$

We claim that the passage of the limit under the integral sign is valid. Indeed, the bounded convergence theorem (a corollary of the dominated convergence theorem) of real analysis states that if a sequence of functions on a set of finite measure is uniformly bounded and converges pointwise, then passage of the limit under the integral is valid. To complete the proof, we show that these hypotheses are satisfied by the family of difference quotients

${\displaystyle f_{n}(y)={\frac {f(x+{\tfrac {1}{n}},y)-f(x,y)}{\tfrac {1}{n}}}.}$

Continuity of fx(x, y) and compactness of the domain together imply that fx(x, y) is uniformly bounded. Uniform boundedness of the difference quotients follows from uniform boundedness of fx(x, y) and the mean value theorem, since for all y and n, there exists z in the interval [x, x + 1/n] such that

${\displaystyle f_{x}(z,y)={\frac {f(x+{\tfrac {1}{n}},y)-f(x,y)}{\tfrac {1}{n}}}.}$

The difference quotients converge pointwise to fx(x, y) since fx(x, y) exists. This completes the proof.

For a simpler proof using Fubini's theorem, see the references.

### Variable limits form

For a function g of a single variable:

${\displaystyle {\mathrm {d} \over \mathrm {d} x}\left(\int _{f_{1}(x)}^{f_{2}(x)}g(t)\,\mathrm {d} t\right)=g[f_{2}(x)]{f_{2}'(x)}-g[f_{1}(x)]{f_{1}'(x)}}$

This follows from the fundamental theorem of calculus together with the chain rule.
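A quick symbolic check of this variable-limits formula (a sketch assuming SymPy is available; g(t) = t³ and the limits sin x, x² are arbitrary choices):

```python
import sympy as sp

x, t = sp.symbols('x t')
g = t**3                                   # arbitrary one-variable integrand

lhs = sp.diff(sp.integrate(g, (t, sp.sin(x), x**2)), x)
# g(f2(x)) f2'(x) - g(f1(x)) f1'(x):
rhs = (x**2)**3 * sp.diff(x**2, x) - sp.sin(x)**3 * sp.diff(sp.sin(x), x)
print(sp.simplify(lhs - rhs) == 0)  # True
```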

### General form with variable limits

Now, set

${\displaystyle \varphi (\alpha )=\int _{a}^{b}f(x,\alpha )\,\mathrm {d} x,}$

where a and b are functions of α that exhibit increments Δa and Δb, respectively, when α is increased by Δα. Then,

{\displaystyle {\begin{aligned}\Delta \varphi &=\varphi (\alpha +\Delta \alpha )-\varphi (\alpha )\\&=\int _{a+\Delta a}^{b+\Delta b}f(x,\alpha +\Delta \alpha )\,\mathrm {d} x-\int _{a}^{b}f(x,\alpha )\,\mathrm {d} x\\&=\int _{a+\Delta a}^{a}f(x,\alpha +\Delta \alpha )\,\mathrm {d} x+\int _{a}^{b}f(x,\alpha +\Delta \alpha )\,\mathrm {d} x+\int _{b}^{b+\Delta b}f(x,\alpha +\Delta \alpha )\,\mathrm {d} x-\int _{a}^{b}f(x,\alpha )\,\mathrm {d} x\\&=-\int _{a}^{a+\Delta a}f(x,\alpha +\Delta \alpha )\,\mathrm {d} x+\int _{a}^{b}[f(x,\alpha +\Delta \alpha )-f(x,\alpha )]\,\mathrm {d} x+\int _{b}^{b+\Delta b}f(x,\alpha +\Delta \alpha )\,\mathrm {d} x\end{aligned}}}

A form of the mean value theorem, ${\displaystyle \int _{a}^{b}f(x)\,\mathrm {d} x=(b-a)f(\xi )}$, where a < ξ < b, may be applied to the first and last integrals of the formula for Δφ above, resulting in

${\displaystyle \Delta \varphi =-\Delta af(\xi _{1},\alpha +\Delta \alpha )+\int _{a}^{b}[f(x,\alpha +\Delta \alpha )-f(x,\alpha )]\,\mathrm {d} x+\Delta bf(\xi _{2},\alpha +\Delta \alpha )}$

Dividing by Δα, letting Δα → 0, noticing that ξ1 → a and ξ2 → b, and using the result

${\displaystyle \lim _{\Delta \alpha \to 0}\int _{a}^{b}{\frac {f(x,\alpha +\Delta \alpha )-f(x,\alpha )}{\Delta \alpha }}\,\mathrm {d} x=\int _{a}^{b}{\frac {\mathrm {\partial } }{\mathrm {\partial } \alpha }}f(x,\alpha )\,\mathrm {d} x}$

yields the general form of the Leibniz integral rule below:

${\displaystyle {\frac {\mathrm {d} \varphi }{\mathrm {d} \alpha }}=\int _{a}^{b}{\frac {\mathrm {\partial } }{\mathrm {\partial } \alpha }}f(x,\alpha )\,\mathrm {d} x+f(b,\alpha ){\frac {\mathrm {d} b}{\mathrm {d} \alpha }}-f(a,\alpha ){\frac {\mathrm {d} a}{\mathrm {d} \alpha }}}$

### Three-dimensional, time-dependent form

At time t the surface Σ in Figure 1 contains a set of points arranged about a centroid C(t), so the function F(r, t) can be written as F(C(t) + I, t), where I = r − C(t) is the position relative to the centroid and is independent of time for a point fixed on the surface. Variables are shifted to a new frame of reference attached to the moving surface, with origin at C(t). For a rigidly translating surface, the limits of integration are then independent of time, so:

${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left(\iint _{\Sigma (t)}\mathrm {d} \mathbf {A} _{\mathbf {r} }\cdot \mathbf {F} (\mathbf {r} ,t)\right)=\iint _{\Sigma }\mathrm {d} \mathbf {A} _{\mathbf {I} }\cdot {\frac {\mathrm {d} }{\mathrm {d} t}}\mathbf {F} (\mathbf {C} (t)+\mathbf {I} ,t)}$

where the limits of integration confining the integral to the region Σ no longer are time dependent so differentiation passes through the integration to act on the integrand only:

${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\mathbf {F} (\mathbf {C} (t)+\mathbf {I} ,t)=\mathbf {F} _{t}(\mathbf {C} (t)+\mathbf {I} ,t)+\mathbf {v\cdot \nabla F} (\mathbf {C} (t)+\mathbf {I} ,t)=\mathbf {F} _{t}(\mathbf {r} ,t)+\mathbf {v} \cdot \nabla \mathbf {F} (\mathbf {r} ,t)}$

with the velocity of motion of the surface defined by:

${\displaystyle \mathbf {v} ={\frac {\mathrm {d} }{\mathrm {d} t}}\mathbf {C} (t)}$

This equation expresses the material derivative of the field, that is, the derivative with respect to a coordinate system attached to the moving surface. Having found the derivative, variables can be switched back to the original frame of reference. We notice that (see the article on curl):

${\displaystyle \mathbf {\nabla \times } \left(\mathbf {v\times F} \right)=(\nabla \cdot \mathbf {F} +\mathbf {F} \cdot \nabla )\mathbf {v} -(\nabla \cdot \mathbf {v} +\mathbf {v} \cdot \nabla )\mathbf {F} }$

and that Stokes theorem allows the surface integral of the curl over Σ to be made a line integral over ∂Σ:

${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left(\iint _{\Sigma (t)}\mathbf {F} (\mathbf {r} ,t)\cdot \mathrm {d} \mathbf {A} \right)=\iint _{\Sigma (t)}{\big (}\mathbf {F} _{t}(\mathbf {r} ,t)+\left(\mathbf {F\cdot \nabla } \right)\mathbf {v} +\left(\mathbf {\nabla \cdot F} \right)\mathbf {v} -(\nabla \cdot \mathbf {v} )\mathbf {F} {\big )}\,\cdot \,\mathrm {d} \mathbf {A} \,-\,\oint _{\partial \Sigma (t)}\left(\mathbf {\mathbf {v} \times F} \right)\mathbf {\cdot } \,\mathrm {d} \mathbf {s} .}$

The sign of the line integral is based on the right-hand rule for the choice of direction of line element ds. To establish this sign, for example, suppose the field F points in the positive z-direction, and the surface Σ is a portion of the xy-plane with perimeter ∂Σ. We adopt the normal to Σ to be in the positive z-direction. Positive traversal of ∂Σ is then counterclockwise (right-hand rule with thumb along z-axis). Then the integral on the left-hand side determines a positive flux of F through Σ. Suppose Σ translates in the positive x-direction at velocity v. An element of the boundary of Σ parallel to the y-axis, say ds, sweeps out an area vt × ds in time t. If we integrate around the boundary ∂Σ in a counterclockwise sense, vt × ds points in the negative z-direction on the left side of ∂Σ (where ds points downward), and in the positive z-direction on the right side of ∂Σ (where ds points upward), which makes sense because Σ is moving to the right, adding area on the right and losing it on the left. On that basis, the flux of F is increasing on the right of ∂Σ and decreasing on the left. However, the dot-product v × F • ds = −F × v • ds = −F • v × ds. Consequently, the sign of the line integral is taken as negative.

If v is a constant,

${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\iint _{\Sigma (t)}\mathbf {F} (\mathbf {r} ,t)\cdot \mathrm {d} \mathbf {A} =\iint _{\Sigma (t)}{\big (}\mathbf {F} _{t}(\mathbf {r} ,t)+\left(\mathbf {\nabla \cdot F} \right)\mathbf {v} {\big )}\cdot \mathrm {d} \mathbf {A} -\oint _{\partial \Sigma (t)}\left(\mathbf {\mathbf {v} \times F} \right)\mathbf {\cdot } \,\mathrm {d} \mathbf {s} }$

which is the quoted result. This proof does not consider the possibility of the surface deforming as it moves.

### Alternative derivation

Lemma. One has:
${\displaystyle {\frac {\partial }{\partial b}}\left(\int _{a}^{b}f(x)\;\mathrm {d} x\right)=f(b),\qquad {\frac {\partial }{\partial a}}\left(\int _{a}^{b}f(x)\;\mathrm {d} x\right)=-f(a).}$

Proof. From proof of the fundamental theorem of calculus,

{\displaystyle {\begin{aligned}{\frac {\partial }{\partial b}}\left(\int _{a}^{b}f(x)\;\mathrm {d} x\right)&=\lim _{\Delta b\to 0}{\frac {1}{\Delta b}}\left[\int _{a}^{b+\Delta b}f(x)\,\mathrm {d} x-\int _{a}^{b}f(x)\,\mathrm {d} x\right]\\&=\lim _{\Delta b\to 0}{\frac {1}{\Delta b}}\int _{b}^{b+\Delta b}f(x)\,\mathrm {d} x\\&=\lim _{\Delta b\to 0}{\frac {1}{\Delta b}}\left[f(b)\Delta b+{\mathcal {O}}\left(\Delta b^{2}\right)\right]\\&=f(b)\\{\frac {\partial }{\partial a}}\left(\int _{a}^{b}f(x)\;\mathrm {d} x\right)&=\lim _{\Delta a\to 0}{\frac {1}{\Delta a}}\left[\int _{a+\Delta a}^{b}f(x)\,\mathrm {d} x-\int _{a}^{b}f(x)\,\mathrm {d} x\right]\\&=\lim _{\Delta a\to 0}{\frac {1}{\Delta a}}\int _{a+\Delta a}^{a}f(x)\,\mathrm {d} x\\&=\lim _{\Delta a\to 0}{\frac {1}{\Delta a}}\left[-f(a)\,\Delta a+{\mathcal {O}}\left(\Delta a^{2}\right)\right]\\&=-f(a).\end{aligned}}}

Suppose a and b are constants, and that f(x) involves a parameter α which is constant in the integration but may vary to form different integrals. Assuming that f(x, α) is a continuous function of x and α in the compact set {(x, α) : α0 ≤ α ≤ α1 and a ≤ x ≤ b}, and that the partial derivative fα(x, α) exists and is continuous, then if one defines:

${\displaystyle \varphi (\alpha )=\int _{a}^{b}f(x,\alpha )\;\mathrm {d} x.}$
${\displaystyle \varphi }$ may be differentiated with respect to α by differentiating under the integral sign; i.e.,
${\displaystyle {\frac {\mathrm {d} \varphi }{\mathrm {d} \alpha }}=\int _{a}^{b}{\frac {\partial }{\partial \alpha }}\,f(x,\alpha )\,\mathrm {d} x.\,}$

Since f is continuous on a compact set, by the Heine–Cantor theorem it is uniformly continuous there. In other words, for any ε > 0 there exists Δα such that for all values of x in [a, b]:

${\displaystyle |f(x,\alpha +\Delta \alpha )-f(x,\alpha )|<\varepsilon .}$

On the other hand:

{\displaystyle {\begin{aligned}|\Delta \varphi |&=|\varphi (\alpha +\Delta \alpha )-\varphi (\alpha )|\\&=\left|\int _{a}^{b}f(x,\alpha +\Delta \alpha )\;\mathrm {d} x-\int _{a}^{b}f(x,\alpha )\;\mathrm {d} x\right|\\&=\left|\int _{a}^{b}\left(f(x,\alpha +\Delta \alpha )-f(x,\alpha )\right)\;\mathrm {d} x\right|\\&\leq \varepsilon (b-a)\end{aligned}}}

Hence φ(α) is a continuous function.

Similarly if ${\displaystyle {\frac {\partial }{\partial \alpha }}\,f(x,\alpha )}$ exists and is continuous, then for all ε > 0 there exists Δα such that:

${\displaystyle \forall x\in [a,b]\quad \left|{\frac {f(x,\alpha +\Delta \alpha )-f(x,\alpha )}{\Delta \alpha }}-{\frac {\partial f}{\partial \alpha }}\right|<\varepsilon .}$

Therefore,

${\displaystyle {\frac {\Delta \varphi }{\Delta \alpha }}=\int _{a}^{b}{\frac {f(x,\alpha +\Delta \alpha )-f(x,\alpha )}{\Delta \alpha }}\;\mathrm {d} x=\int _{a}^{b}{\frac {\partial \,f(x,\alpha )}{\partial \alpha }}\,\mathrm {d} x+R}$

where

${\displaystyle |R|<\int _{a}^{b}\varepsilon \;\mathrm {d} x=\varepsilon (b-a).}$

Now, ε → 0 as Δα → 0, therefore,

${\displaystyle \lim _{{\Delta \alpha }\rightarrow 0}{\frac {\Delta \varphi }{\Delta \alpha }}={\frac {\mathrm {d} \varphi }{\mathrm {d} \alpha }}=\int _{a}^{b}{\frac {\partial }{\partial \alpha }}\,f(x,\alpha )\,\mathrm {d} x.\,}$

This is the formula we set out to prove.

Now, suppose

${\displaystyle \int _{a}^{b}f(x,\alpha )\;\mathrm {d} x=\varphi (\alpha ),}$

where a and b are functions of α which take increments Δa and Δb, respectively, when α is increased by Δα. Then,

{\displaystyle {\begin{aligned}\Delta \varphi &=\varphi (\alpha +\Delta \alpha )-\varphi (\alpha )\\&=\int _{a+\Delta a}^{b+\Delta b}f(x,\alpha +\Delta \alpha )\;\mathrm {d} x\,-\int _{a}^{b}f(x,\alpha )\;\mathrm {d} x\,\\&=\int _{a+\Delta a}^{a}f(x,\alpha +\Delta \alpha )\;\mathrm {d} x+\int _{a}^{b}f(x,\alpha +\Delta \alpha )\;\mathrm {d} x+\int _{b}^{b+\Delta b}f(x,\alpha +\Delta \alpha )\;\mathrm {d} x-\int _{a}^{b}f(x,\alpha )\;\mathrm {d} x\\&=-\int _{a}^{a+\Delta a}\,f(x,\alpha +\Delta \alpha )\;\mathrm {d} x+\int _{a}^{b}[f(x,\alpha +\Delta \alpha )-f(x,\alpha )]\;\mathrm {d} x+\int _{b}^{b+\Delta b}\,f(x,\alpha +\Delta \alpha )\;\mathrm {d} x.\end{aligned}}}

A form of the mean value theorem, ${\displaystyle \int _{a}^{b}f(x)\;\mathrm {d} x=(b-a)f(\xi ),}$ where a < ξ < b, can be applied to the first and last integrals of the formula for Δφ above, resulting in

${\displaystyle \Delta \varphi =-\Delta a\,f(\xi _{1},\alpha +\Delta \alpha )+\int _{a}^{b}[f(x,\alpha +\Delta \alpha )-f(x,\alpha )]\;\mathrm {d} x+\Delta b\,f(\xi _{2},\alpha +\Delta \alpha ).}$

Dividing by Δα, letting Δα → 0, noticing that ξ1 → a and ξ2 → b, and using the above derivation for

${\displaystyle {\frac {\mathrm {d} \varphi }{\mathrm {d} \alpha }}=\int _{a}^{b}{\frac {\partial }{\partial \alpha }}\,f(x,\alpha )\,\mathrm {d} x}$

yields

${\displaystyle {\frac {\mathrm {d} \varphi }{\mathrm {d} \alpha }}=\int _{a}^{b}{\frac {\partial }{\partial \alpha }}\,f(x,\alpha )\,\mathrm {d} x+f(b,\alpha ){\frac {\mathrm {d} b}{\mathrm {d} \alpha }}-f(a,\alpha ){\frac {\mathrm {d} a}{\mathrm {d} \alpha }}.}$

This is the general form of the Leibniz integral rule.

## Examples

### General examples

#### Example 1

${\displaystyle \varphi (\alpha )=\int _{0}^{1}{\frac {\alpha }{x^{2}+\alpha ^{2}}}\;\mathrm {d} x={\begin{cases}0&\alpha =0\\\arctan \left({\tfrac {1}{\alpha }}\right)&\alpha \neq 0\end{cases}}}$

The function under the integral sign is not continuous at the point (x, α) = (0, 0) and the function φ(α) has a discontinuity at α = 0, because φ(α) approaches ±π/2 as α → 0±.

If we now differentiate φ(α) with respect to α under the integral sign, we get

${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} \alpha }}\varphi (\alpha )=\int _{0}^{1}{\frac {\partial }{\partial \alpha }}\left({\frac {\alpha }{x^{2}+\alpha ^{2}}}\right)\;\mathrm {d} x=\int _{0}^{1}{\frac {x^{2}-\alpha ^{2}}{(x^{2}+\alpha ^{2})^{2}}}\mathrm {d} x=-{\frac {x}{x^{2}+\alpha ^{2}}}{\bigg |}_{0}^{1}=-{\frac {1}{1+\alpha ^{2}}},}$

which is, of course, true for all values of α except α = 0.
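This can be confirmed numerically away from α = 0 (a sketch assuming SciPy is available; α = 0.5 is an arbitrary test value):

```python
import numpy as np
from scipy.integrate import quad

def phi(a):
    # phi(alpha) = ∫_0^1 alpha / (x^2 + alpha^2) dx
    val, _ = quad(lambda x: a / (x**2 + a**2), 0, 1)
    return val

a, h = 0.5, 1e-4
fd = (phi(a + h) - phi(a - h)) / (2 * h)       # numerical derivative of phi
print(abs(fd - (-1 / (1 + a**2))) < 1e-6)  # True
```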

#### Example 2

An example with variable limits:

{\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} x}}\int _{\sin x}^{\cos x}\cosh t^{2}\;\mathrm {d} t&=\cosh \left(\cos ^{2}x\right){\frac {\mathrm {d} }{\mathrm {d} x}}(\cos x)-\cosh \left(\sin ^{2}x\right){\frac {\mathrm {d} }{\mathrm {d} x}}(\sin x)+\int _{\sin x}^{\cos x}{\frac {\partial }{\partial x}}\left(\cosh t^{2}\right)\mathrm {d} t\\&=\cosh \left(\cos ^{2}x\right)(-\sin x)-\cosh \left(\sin ^{2}x\right)(\cos x)+0\\&=-\cosh \left(\cos ^{2}x\right)\sin x-\cosh \left(\sin ^{2}x\right)\cos x\end{aligned}}}

### Examples for evaluating a definite integral

#### Example 3

The principle of differentiating under the integral sign may sometimes be used to evaluate a definite integral. Consider:

${\displaystyle \varphi (\alpha )=\int _{0}^{\pi }\,\ln(1-2\alpha \cos(x)+\alpha ^{2})\;\mathrm {d} x\qquad |\alpha |\neq 1.}$

Now,

{\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} \alpha }}\,\varphi (\alpha )&=\int _{0}^{\pi }{\frac {-2\cos(x)+2\alpha }{1-2\alpha \cos(x)+\alpha ^{2}}}\;\mathrm {d} x\,\\[8pt]&={\frac {1}{\alpha }}\int _{0}^{\pi }\,\left(1-{\frac {1-\alpha ^{2}}{1-2\alpha \cos(x)+\alpha ^{2}}}\,\right)\,\mathrm {d} x\\[8pt]&={\frac {\pi }{\alpha }}-{\frac {2}{\alpha }}\left\{\,\arctan \left({\frac {1+\alpha }{1-\alpha }}\cdot \tan \left({\frac {x}{2}}\right)\right)\,\right\}\,{\bigg |}_{0}^{\pi }.\end{aligned}}}

Now as x varies from 0 to π we have:

${\displaystyle {\begin{cases}{\frac {1+\alpha }{1-\alpha }}\tan \left({\frac {x}{2}}\right)\geq 0&|\alpha |<1\\{\frac {1+\alpha }{1-\alpha }}\tan \left({\frac {x}{2}}\right)\leq 0&|\alpha |>1\end{cases}}}$

Hence,

${\displaystyle \arctan \left({\frac {1+\alpha }{1-\alpha }}\cdot \tan \left({\frac {x}{2}}\right)\right)\,{\bigg |}_{0}^{\pi }={\begin{cases}{\frac {\pi }{2}}&|\alpha |<1\\-{\frac {\pi }{2}}&|\alpha |>1\end{cases}}}$

Therefore,

${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} \alpha }}\,\varphi (\alpha )={\begin{cases}0&|\alpha |<1\\{\frac {2\pi }{\alpha }}&|\alpha |>1\end{cases}}}$

Integrating both sides with respect to α, we get:

${\displaystyle \varphi (\alpha )={\begin{cases}C_{1}&|\alpha |<1\\2\pi \ln |\alpha |+C_{2}&|\alpha |>1\end{cases}}}$

C1 = 0 follows from evaluating φ(0):

${\displaystyle \varphi (0)=\int _{0}^{\pi }\ln(1)\;\mathrm {d} x=\int _{0}^{\pi }0\;\mathrm {d} x=0}$

To determine C2 in the same manner, we would need to substitute a value of α greater than 1 into φ(α). This is somewhat inconvenient. Instead, we substitute α = 1/β, where |β| < 1. Then,

{\displaystyle {\begin{aligned}\varphi (\alpha )&=\int _{0}^{\pi }\left(\ln(1-2\beta \cos(x)+\beta ^{2})-2\ln |\beta |\right)\;\mathrm {d} x\ \\[8pt]&=0-2\pi \ln |\beta |\,\\[8pt]&=2\pi \ln |\alpha |\,\end{aligned}}}

Therefore, C2 = 0.

The definition of φ(α) is now complete:

${\displaystyle \varphi (\alpha )={\begin{cases}0&|\alpha |<1\\2\pi \ln |\alpha |&|\alpha |>1\end{cases}}}$

The foregoing discussion, of course, does not apply when α = ±1, since the conditions for differentiability are not met.
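Both branches of this closed form can be spot-checked numerically (a sketch assuming SciPy is available; the test values 0.5 and 3 are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def phi(a):
    # phi(alpha) = ∫_0^pi ln(1 - 2 alpha cos x + alpha^2) dx
    val, _ = quad(lambda x: np.log(1 - 2 * a * np.cos(x) + a**2), 0, np.pi)
    return val

print(abs(phi(0.5)) < 1e-6)                            # True: 0 for |alpha| < 1
print(abs(phi(3.0) - 2 * np.pi * np.log(3.0)) < 1e-6)  # True: 2*pi*ln|alpha| for |alpha| > 1
```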

#### Example 4

${\displaystyle {\textbf {I}}=\int _{0}^{\frac {\pi }{2}}{\frac {1}{\left(a\cos ^{2}(x)+b\sin ^{2}(x)\right)^{2}}}\;\mathrm {d} x,\qquad a,b>0.}$

First we calculate:

{\displaystyle {\begin{aligned}{\textbf {J}}&=\int _{0}^{\frac {\pi }{2}}{\frac {1}{a\cos ^{2}(x)+b\sin ^{2}(x)}}\;\mathrm {d} x\\[6pt]&=\int _{0}^{\frac {\pi }{2}}{\frac {\frac {1}{\cos ^{2}(x)}}{a+b{\frac {\sin ^{2}(x)}{\cos ^{2}(x)}}}}\;\mathrm {d} x\\[6pt]&=\int _{0}^{\frac {\pi }{2}}{\frac {\sec ^{2}(x)}{a+b\tan ^{2}(x)}}\;\mathrm {d} x\\[6pt]&={\frac {1}{b}}\int _{0}^{\frac {\pi }{2}}{\frac {1}{\left({\sqrt {\frac {a}{b}}}\right)^{2}+\tan ^{2}(x)}}\;\mathrm {d} (\tan x)\\[6pt]&={\frac {1}{\sqrt {ab}}}\arctan \left({\sqrt {\frac {b}{a}}}\tan(x)\right){\Bigg |}_{0}^{\frac {\pi }{2}}\\[6pt]&={\frac {\pi }{2{\sqrt {ab}}}}.\end{aligned}}}

The limits of integration being independent of a, we may differentiate J under the integral sign with respect to a:

${\displaystyle {\frac {\partial {\textbf {J}}}{\partial a}}=-\int _{0}^{\frac {\pi }{2}}{\frac {\cos ^{2}x}{\left(a\cos ^{2}x+b\sin ^{2}x\right)^{2}}}\;\mathrm {d} x}$

On the other hand:

${\displaystyle {\frac {\partial {\textbf {J}}}{\partial a}}={\frac {\partial }{\partial a}}\left({\frac {\pi }{2{\sqrt {ab}}}}\right)=-{\frac {\pi }{4{\sqrt {a^{3}b}}}}.}$

Equating these two relations then yields

${\displaystyle \int _{0}^{\frac {\pi }{2}}{\frac {\cos ^{2}(x)}{\left(a\cos ^{2}(x)+b\sin ^{2}(x)\right)^{2}}}\;\mathrm {d} x={\frac {\pi }{4{\sqrt {a^{3}b}}}}.}$

In a similar fashion, pursuing ${\displaystyle {\frac {\partial {\textbf {J}}}{\partial b}}}$ yields

${\displaystyle \int _{0}^{\frac {\pi }{2}}{\frac {\sin ^{2}x}{\left(a\cos ^{2}x+b\sin ^{2}x\right)^{2}}}\;\mathrm {d} x={\frac {\pi }{4{\sqrt {ab^{3}}}}}.}$

Adding the two results then produces

${\displaystyle {\textbf {I}}=\int _{0}^{\frac {\pi }{2}}{\frac {1}{\left(a\cos ^{2}x+b\sin ^{2}x\right)^{2}}}\;\mathrm {d} x={\frac {\pi }{4{\sqrt {ab}}}}\left({\frac {1}{a}}+{\frac {1}{b}}\right).}$

Note that if we define

${\displaystyle {\textbf {I}}_{n}=\int _{0}^{\frac {\pi }{2}}{\frac {1}{\left(a\cos ^{2}x+b\sin ^{2}x\right)^{n}}}\;\mathrm {d} x,}$

it can easily be shown that

${\displaystyle {\frac {\partial {\textbf {I}}_{n-1}}{\partial a}}+{\frac {\partial {\textbf {I}}_{n-1}}{\partial b}}+(n-1){\textbf {I}}_{n}=0.}$

Given I1, this recursion (an integral reduction formula obtained by differentiating under the integral sign) can then be used to compute In for all n > 1.
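The n = 2 instance of this reduction can be verified numerically against the closed form I1 = π/(2√(ab)) derived above (a sketch assuming SciPy is available; a = 2, b = 3 are arbitrary positive values):

```python
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 3.0

def I(n):
    # I_n = ∫_0^{pi/2} (a cos^2 x + b sin^2 x)^(-n) dx
    val, _ = quad(lambda x: (a * np.cos(x)**2 + b * np.sin(x)**2) ** (-n),
                  0, np.pi / 2)
    return val

# closed-form partial derivatives of I_1 = pi / (2 sqrt(a b)):
dI1_da = -np.pi / (4 * np.sqrt(a**3 * b))
dI1_db = -np.pi / (4 * np.sqrt(a * b**3))

# reduction formula with n = 2: dI1/da + dI1/db + (2 - 1) * I_2 = 0
print(abs(dI1_da + dI1_db + I(2)) < 1e-8)  # True
```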

#### Example 5

Here, we consider the integral

${\displaystyle {\textbf {I}}(\alpha )=\int _{0}^{\frac {\pi }{2}}{\frac {\ln \,(1+\cos \alpha \,\cos \,x)}{\cos x}}\;\mathrm {d} x,\qquad 0<\alpha <\pi .}$

Differentiating under the integral with respect to α, we have

{\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} \alpha }}{\textbf {I}}(\alpha )&=\int _{0}^{\frac {\pi }{2}}{\frac {\partial }{\partial \alpha }}\left({\frac {\ln(1+\cos \alpha \cos x)}{\cos x}}\right)\,\mathrm {d} x\\[6pt]&=-\int _{0}^{\frac {\pi }{2}}{\frac {\sin \alpha }{1+\cos \alpha \cos x}}\,\mathrm {d} x\\[6pt]&=-\int _{0}^{\frac {\pi }{2}}{\frac {\sin \alpha }{\left(\cos ^{2}{\frac {x}{2}}+\sin ^{2}{\frac {x}{2}}\right)+\cos \alpha \left(\cos ^{2}{\frac {x}{2}}-\sin ^{2}{\frac {x}{2}}\right)}}\,\mathrm {d} x\\[6pt]&=-{\frac {\sin \alpha }{1-\cos \alpha }}\int _{0}^{\frac {\pi }{2}}{\frac {1}{\cos ^{2}{\frac {x}{2}}}}{\frac {1}{{\frac {1+\cos \alpha }{1-\cos \alpha }}+\tan ^{2}{\frac {x}{2}}}}\,\mathrm {d} x\\[6pt]&=-{\frac {2\sin \alpha }{1-\cos \alpha }}\int _{0}^{\frac {\pi }{2}}{\frac {{\frac {1}{2}}\sec ^{2}{\frac {x}{2}}}{{\frac {2\cos ^{2}{\frac {\alpha }{2}}}{2\sin ^{2}{\frac {\alpha }{2}}}}+\tan ^{2}{\frac {x}{2}}}}\,\mathrm {d} x\\[6pt]&=-{\frac {2\left(2\sin {\frac {\alpha }{2}}\cos {\frac {\alpha }{2}}\right)}{2\sin ^{2}{\frac {\alpha }{2}}}}\int _{0}^{\frac {\pi }{2}}\,{\frac {1}{\cot ^{2}{\frac {\alpha }{2}}+\tan ^{2}{\frac {x}{2}}}}\mathrm {d} \left(\tan {\frac {x}{2}}\right)\\[6pt]&=-2\cot {\frac {\alpha }{2}}\int _{0}^{\frac {\pi }{2}}{\frac {1}{\cot ^{2}{\frac {\alpha }{2}}+\tan ^{2}{\frac {x}{2}}}}\,\mathrm {d} \left(\tan {\frac {x}{2}}\right)\\[6pt]&=-2\arctan \left(\tan {\frac {\alpha }{2}}\tan {\frac {x}{2}}\right){\bigg |}_{0}^{\frac {\pi }{2}}\\&=-\alpha \end{aligned}}}

Therefore:

${\displaystyle {\textbf {I}}(\alpha )=C-{\frac {\alpha ^{2}}{2}}}$

However, by definition, I(π/2) = 0, hence: C = π2/8 and

${\displaystyle {\textbf {I}}(\alpha )={\frac {\pi ^{2}}{8}}-{\frac {\alpha ^{2}}{2}}.}$
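A numerical spot check of this result (a sketch assuming SciPy is available; α = 1 is an arbitrary value in (0, π)):

```python
import numpy as np
from scipy.integrate import quad

def I(alpha):
    # I(alpha) = ∫_0^{pi/2} ln(1 + cos(alpha) cos x) / cos x dx
    val, _ = quad(lambda x: np.log(1 + np.cos(alpha) * np.cos(x)) / np.cos(x),
                  0, np.pi / 2)
    return val

alpha = 1.0
print(abs(I(alpha) - (np.pi**2 / 8 - alpha**2 / 2)) < 1e-6)  # True
```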

#### Example 6

Here, we consider the integral

${\displaystyle \int _{0}^{2\pi }e^{\cos \theta }\cos(\sin \theta )\;\mathrm {d} \theta .}$

We introduce a new variable φ and rewrite the integral as

${\displaystyle f(\varphi )=\int _{0}^{2\pi }e^{\varphi \cos \theta }\cos(\varphi \sin \theta )\;\mathrm {d} \theta .}$

Note that for φ = 1 we recover the original integral. Now we proceed:

{\displaystyle {\begin{aligned}{\frac {\mathrm {d} f}{\mathrm {d} \varphi }}&=\int _{0}^{2\pi }{\frac {\partial }{\partial \varphi }}\left(e^{\varphi \cos \theta }\;\cos(\varphi \sin \theta )\right)\;\mathrm {d} \theta \\&=\int _{0}^{2\pi }e^{\varphi \cos \theta }\left(\cos \theta \cos(\varphi \sin \theta )-\sin \theta \sin(\varphi \sin \theta )\right)\;\mathrm {d} \theta \\&=\int _{0}^{2\pi }{\frac {1}{\varphi }}\;{\frac {\partial }{\partial \theta }}\left(e^{\varphi \cos \theta }\sin(\varphi \sin \theta )\right)\;\mathrm {d} \theta \\&={\frac {1}{\varphi }}\int _{0}^{2\pi }\;\mathrm {d} \left(e^{\varphi \cos \theta }\sin(\varphi \sin \theta )\right)\\&={\frac {1}{\varphi }}\left(e^{\varphi \cos \theta }\;\sin(\varphi \sin \theta )\right)\;{\bigg |}_{0}^{2\pi }=0.\end{aligned}}}

Integrating both sides of ${\displaystyle {\frac {\mathrm {d} f}{\mathrm {d} \varphi }}=0}$ with respect to φ between the limits 0 and 1 yields

${\displaystyle f(1)-f(0)=\int _{f(0)}^{f(1)}\;\mathrm {d} f=\int _{0}^{1}0\;\mathrm {d} \varphi =0}$

Therefore, f(1) = f(0). From the equation for f(φ) we have f(0) = 2π, so the value of f at φ = 1, which is the same as the integral we set out to compute, is also 2π.
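This value is easy to confirm by direct numerical quadrature (a sketch assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

# ∫_0^{2 pi} exp(cos t) cos(sin t) dt should equal 2 pi
val, _ = quad(lambda t: np.exp(np.cos(t)) * np.cos(np.sin(t)), 0, 2 * np.pi)
print(abs(val - 2 * np.pi) < 1e-7)  # True
```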

#### Other problems to solve

There are innumerable other integrals that can be solved "quickly" using the technique of differentiation under the integral sign. For example, consider the following cases where one adds a new variable α:

{\displaystyle {\begin{aligned}\int _{0}^{\infty }\;{\frac {\sin \,x}{x}}\;\mathrm {d} x&\to \int _{0}^{\infty }\;e^{-\alpha \,x}\;{\frac {\sin \,x}{x}}\;\mathrm {d} x,\\\int _{0}^{\frac {\pi }{2}}\;{\frac {x}{\tan \,x}}\;\mathrm {d} x&\to \int _{0}^{\frac {\pi }{2}}\;{\frac {\tan ^{-1}(\alpha \,\tan \,x)}{\tan \,x}}\;\mathrm {d} x,\\\int _{0}^{\infty }\;{\frac {\ln \,(1+x^{2})}{1+x^{2}}}\;\mathrm {d} x&\to \int _{0}^{\infty }\;{\frac {\ln \,(1+\alpha ^{2}\,x^{2})}{1+x^{2}}}\;\mathrm {d} x\\\int _{0}^{1}\;{\frac {x-1}{\ln \,x}}\;\mathrm {d} x&\to \int _{0}^{1}\;{\frac {x^{\alpha }-1}{\ln \,x}}\;\mathrm {d} x.\end{aligned}}}

The first integral, the Dirichlet integral, is absolutely convergent for positive α but only conditionally convergent when α is 0. Therefore, differentiation under the integral sign is easy to justify when α > 0, but proving that the resulting formula remains valid when α is 0 requires some careful work.
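For that first case, differentiating the regulated integral under the sign gives F′(α) = −1/(1 + α²), hence F(α) = π/2 − arctan α for α > 0; a numerical spot check at α = 1 (a sketch assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

def F(alpha):
    # regulated Dirichlet integral: ∫_0^∞ exp(-alpha x) sin(x)/x dx
    val, _ = quad(lambda x: np.exp(-alpha * x) * np.sin(x) / x, 0, np.inf)
    return val

# Differentiating under the integral sign gives F'(alpha) = -1/(1 + alpha^2),
# so F(alpha) = pi/2 - arctan(alpha) for alpha > 0; at alpha = 1 this is pi/4.
print(abs(F(1.0) - (np.pi / 2 - np.arctan(1.0))) < 1e-6)  # True
```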

### Applications to series

Differentiation under the integral sign can also be applied to differentiation under a summation sign, by interpreting the sum as an integral with respect to counting measure. An example of such an application is the fact that power series are differentiable within their radius of convergence.
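A minimal numerical illustration of term-by-term differentiation (a sketch assuming NumPy is available; the geometric series and the point x = 0.4 are arbitrary choices):

```python
import numpy as np

# Term-by-term differentiation of the geometric series sum x^n = 1/(1 - x)
# inside |x| < 1: the derivative series sum n x^(n-1) equals 1/(1 - x)^2.
x = 0.4
n = np.arange(1, 200)          # enough terms at x = 0.4 for full double precision
series = float(np.sum(n * x ** (n - 1)))
print(abs(series - 1 / (1 - x)**2) < 1e-10)  # True
```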

## Popular culture

Differentiation under the integral sign is mentioned in the late physicist Richard Feynman's best-selling memoir Surely You're Joking, Mr. Feynman! (in the chapter "A Different Box of Tools"), where he mentions learning it while in high school from an old text, Advanced Calculus (1926), by Frederick S. Woods (who was a professor of mathematics at the Massachusetts Institute of Technology). The technique was not often taught when Feynman later received his formal education in calculus, and, knowing it, Feynman was able to use it to solve some otherwise difficult integration problems upon his arrival at graduate school at Princeton University. The direct quotation from Surely You're Joking, Mr. Feynman! regarding the method of differentiation under the integral sign is as follows: