# Picard–Lindelöf theorem

In mathematics – specifically, in differential equations – the Picard–Lindelöf theorem, Picard's existence theorem, Cauchy–Lipschitz theorem, or existence and uniqueness theorem gives a set of conditions under which an initial value problem has a unique solution.

The theorem is named after Émile Picard, Ernst Lindelöf, Rudolf Lipschitz and Augustin-Louis Cauchy.

Consider the initial value problem

${\displaystyle y'(t)=f(t,y(t)),\qquad y(t_{0})=y_{0}.}$

Suppose f is uniformly Lipschitz continuous in y (meaning the Lipschitz constant can be taken independent of t) and continuous in t. Then, for some value ε > 0, there exists a unique solution y(t) to the initial value problem on the interval ${\displaystyle [t_{0}-\varepsilon ,t_{0}+\varepsilon ]}$.[1]

## Proof sketch

The proof relies on transforming the differential equation, and applying fixed-point theory. By integrating both sides, any function satisfying the differential equation must also satisfy the integral equation

${\displaystyle y(t)-y(t_{0})=\int _{t_{0}}^{t}f(s,y(s))\,ds.}$

A simple proof of existence of the solution is obtained by successive approximations. In this context, the method is known as Picard iteration.

Set

${\displaystyle \varphi _{0}(t)=y_{0}}$

and

${\displaystyle \varphi _{k+1}(t)=y_{0}+\int _{t_{0}}^{t}f(s,\varphi _{k}(s))\,ds.}$

It can then be shown, by using the Banach fixed point theorem, that the sequence of "Picard iterates" φk is convergent and that the limit is a solution to the problem. An application of Grönwall's lemma to |φ(t) − ψ(t)|, where φ and ψ are two solutions, shows that φ(t) = ψ(t), thus proving the global uniqueness (the local uniqueness is a consequence of the uniqueness of the Banach fixed point).

Picard's method is most often stated without proof or graphing; see Newton's method of successive approximation for a related iterative construction.
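The iteration above can be sketched numerically. The following is a minimal illustration, not a rigorous construction: the integral in each Picard step is approximated with the trapezoid rule on a fixed grid, and the grid size and iteration count are arbitrary choices. Applied to y′ = y, y(0) = 1, the iterates approach the exact solution eᵗ.

```python
# Picard iteration approximated on a grid; each step evaluates the
# integral equation phi_{k+1}(t) = y0 + integral_{t0}^t f(s, phi_k(s)) ds
# with the trapezoid rule (an illustrative sketch, not a rigorous proof).
import math

def picard_iterate(f, t0, y0, t_end, n_points=1001, n_iters=30):
    """Return the grid ts and the n_iters-th Picard iterate on it."""
    h = (t_end - t0) / (n_points - 1)
    ts = [t0 + i * h for i in range(n_points)]
    phi = [y0] * n_points            # phi_0 is the constant function y0
    for _ in range(n_iters):
        vals = [f(t, y) for t, y in zip(ts, phi)]
        new = [y0] * n_points
        acc = 0.0
        for i in range(1, n_points):
            acc += 0.5 * h * (vals[i - 1] + vals[i])   # trapezoid rule
            new[i] = y0 + acc
        phi = new
    return ts, phi

# y' = y, y(0) = 1 has the exact solution e^t; compare at t = 1.
ts, phi = picard_iterate(lambda t, y: y, 0.0, 1.0, 1.0)
print(abs(phi[-1] - math.e))  # small discretization error
```

The convergence here is a discrete shadow of the Banach fixed-point argument: each pass through the loop applies the (approximate) Picard operator once.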

## Example of Picard iteration

Let ${\displaystyle y(t)=\tan(t)}$ be the solution to the equation ${\displaystyle y'(t)=1+y(t)^{2}}$ with initial condition ${\displaystyle y(t_{0})=y_{0}=0,t_{0}=0.}$ Starting with ${\displaystyle \varphi _{0}(t)=0,}$ we iterate

${\displaystyle \varphi _{k+1}(t)=\int _{0}^{t}(1+(\varphi _{k}(s))^{2})\,ds}$

so that ${\displaystyle \varphi _{n}(t)\to y(t)}$:

${\displaystyle \varphi _{1}(t)=\int _{0}^{t}(1+0^{2})\,ds=t}$
${\displaystyle \varphi _{2}(t)=\int _{0}^{t}(1+s^{2})\,ds=t+{\frac {t^{3}}{3}}}$
${\displaystyle \varphi _{3}(t)=\int _{0}^{t}\left(1+\left(s+{\frac {s^{3}}{3}}\right)^{2}\right)\,ds=t+{\frac {t^{3}}{3}}+{\frac {2t^{5}}{15}}+{\frac {t^{7}}{63}}}$

and so on. Evidently, the functions are computing the Taylor series expansion of our known solution ${\displaystyle y=\tan(t).}$ Since ${\displaystyle \tan }$ has poles at ${\displaystyle \pm {\tfrac {\pi }{2}},}$ this converges toward a local solution only for ${\displaystyle |t|<{\tfrac {\pi }{2}},}$ not on all of R.
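The iterates above can be computed exactly, since each one is a polynomial with rational coefficients. A small sketch with exact rational arithmetic (the coefficient-list representation is an implementation choice, not part of the theorem) reproduces φ₁, φ₂, φ₃ as listed above:

```python
from fractions import Fraction

# Exact Picard iteration for y' = 1 + y^2, y(0) = 0. Polynomials are
# represented as coefficient lists [c0, c1, ...] meaning c0 + c1*t + ...

def poly_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_integrate(p):
    """Antiderivative with zero constant term: the integral from 0 to t."""
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

def picard_step(phi):
    # phi_{k+1}(t) = integral_0^t (1 + phi_k(s)^2) ds
    sq = poly_mul(phi, phi)
    integrand = [sq[0] + 1] + sq[1:]
    return poly_integrate(integrand)

phi = [Fraction(0)]              # phi_0 = 0
for _ in range(3):
    phi = picard_step(phi)
# phi_3 has coefficients of t + t^3/3 + 2t^5/15 + t^7/63
print(phi)
```

Each step exactly matches the hand computation: φ₁ = t, φ₂ = t + t³/3, φ₃ = t + t³/3 + 2t⁵/15 + t⁷/63.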

## Example of non-uniqueness

To understand uniqueness of solutions, consider the following examples.[2] A differential equation can possess a stationary point. For example, for the equation dy/dt = ay, the stationary solution is y(t) = 0, which is obtained for the initial condition y(0) = 0. Beginning with another initial condition y(0) = y0 ≠ 0, the solution y(t) tends toward the stationary point when a < 0, but reaches it only in the limit of infinite time, so the uniqueness of solutions (over all finite times) is guaranteed.

However, for an equation in which the stationary solution is reached after a finite time, the uniqueness fails. This happens for example for the equation dy/dt = ay2/3, which has two solutions corresponding to the initial condition y(0) = 0: either y(t) = 0 or

${\displaystyle y(t)={\begin{cases}\left({\tfrac {at}{3}}\right)^{3}&t<0\\\ \ \ \ 0&t\geq 0,\end{cases}}}$

so the previous state of the system is not uniquely determined by its state after t = 0. The uniqueness theorem does not apply because the function f (y) = y2/3 has an infinite slope at y = 0 and therefore is not Lipschitz continuous, violating the hypothesis of the theorem.
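Both claimed solutions can be verified numerically. In the sketch below the parameter a = 3 is chosen purely for convenience, so that the nonzero branch is y(t) = t³ for t < 0; a centered finite difference confirms that the piecewise function satisfies y′ = a·y^{2/3} away from the gluing point.

```python
# Two distinct solutions of y' = a*y**(2/3) with y(0) = 0. With the
# illustrative choice a = 3, the nonzero branch (at/3)**3 becomes t**3.
a = 3.0

def f(y):
    # a * y**(2/3) for real y, using the real cube root squared
    return a * (abs(y) ** (2.0 / 3.0))

def y2(t):
    # the nonzero solution: t**3 for t < 0, glued to 0 for t >= 0
    return t ** 3 if t < 0 else 0.0

# Check y2' = f(y2) at sample points via a centered difference.
h = 1e-6
for t in [-2.0, -1.0, -0.5, 0.5, 1.0]:
    deriv = (y2(t + h) - y2(t - h)) / (2 * h)
    assert abs(deriv - f(y2(t))) < 1e-4
print("both y(t) = 0 and the piecewise y2 solve the same IVP")
```

The constant function y(t) = 0 trivially satisfies the same equation and initial condition, so the initial value problem genuinely has two solutions.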

## Detailed proof

Let

${\displaystyle C_{a,b}={\overline {I_{a}(t_{0})}}\times {\overline {B_{b}(y_{0})}}}$

where:

{\displaystyle {\begin{aligned}{\overline {I_{a}(t_{0})}}&=[t_{0}-a,t_{0}+a]\\{\overline {B_{b}(y_{0})}}&=[y_{0}-b,y_{0}+b].\end{aligned}}}

This is the compact cylinder where f is defined. Let

${\displaystyle M=\sup _{C_{a,b}}\|f\|,}$

that is, the maximum absolute value of f on the cylinder (the steepest slope a solution can attain there). Finally, let L be the Lipschitz constant of f with respect to the second variable.

We will proceed to apply the Banach fixed point theorem using the metric on ${\displaystyle {\mathcal {C}}(I_{a}(t_{0}),B_{b}(y_{0}))}$ induced by the uniform norm

${\displaystyle \|\varphi \|_{\infty }=\sup _{t\in I_{a}}|\varphi (t)|.}$

We define the Picard operator, an operator mapping this space of continuous functions to itself, as follows:

${\displaystyle \Gamma :{\mathcal {C}}(I_{a}(t_{0}),B_{b}(y_{0}))\longrightarrow {\mathcal {C}}(I_{a}(t_{0}),B_{b}(y_{0}))}$

defined by:

${\displaystyle \Gamma \varphi (t)=y_{0}+\int _{t_{0}}^{t}f(s,\varphi (s))\,ds.}$

We must show that this operator maps a complete non-empty metric space X into itself and also is a contraction mapping.

We first show that, given certain restrictions on ${\displaystyle a}$, ${\displaystyle \Gamma }$ takes ${\displaystyle {\overline {B_{b}(y_{0})}}}$ into itself in the space of continuous functions with the uniform norm. Here, ${\displaystyle {\overline {B_{b}(y_{0})}}}$ is a closed ball in the space of continuous (and bounded) functions "centered" at the constant function ${\displaystyle y_{0}}$. Hence we need to show that

${\displaystyle \|\varphi -y_{0}\|_{\infty }\leq b}$

implies

${\displaystyle \left\|\Gamma \varphi -y_{0}\right\|_{\infty }=\left\|\int _{t_{0}}^{t'}f(s,\varphi (s))\,ds\right\|\leq \int _{t_{0}}^{t'}\left\|f(s,\varphi (s))\right\|ds\leq M\left|t'-t_{0}\right|\leq Ma\leq b,}$

where ${\displaystyle t'}$ is some number in ${\displaystyle [t_{0}-a,t_{0}+a]}$ at which the supremum is achieved. The last step holds if we impose the requirement a < b/M.

We now show that this operator is a contraction.

Given two functions ${\displaystyle \varphi _{1},\varphi _{2}\in {\mathcal {C}}(I_{a}(t_{0}),B_{b}(y_{0}))}$, in order to apply the Banach fixed point theorem we want

${\displaystyle \left\|\Gamma \varphi _{1}-\Gamma \varphi _{2}\right\|_{\infty }\leq q\left\|\varphi _{1}-\varphi _{2}\right\|_{\infty },}$

for some q < 1. So let t be such that

${\displaystyle \|\Gamma \varphi _{1}-\Gamma \varphi _{2}\|_{\infty }=\left\|\left(\Gamma \varphi _{1}-\Gamma \varphi _{2}\right)(t)\right\|}$

then using the definition of Γ

{\displaystyle {\begin{aligned}\left\|\left(\Gamma \varphi _{1}-\Gamma \varphi _{2}\right)(t)\right\|&=\left\|\int _{t_{0}}^{t}\left(f(s,\varphi _{1}(s))-f(s,\varphi _{2}(s))\right)ds\right\|\\&\leq \int _{t_{0}}^{t}\left\|f\left(s,\varphi _{1}(s)\right)-f\left(s,\varphi _{2}(s)\right)\right\|ds\\&\leq L\int _{t_{0}}^{t}\left\|\varphi _{1}(s)-\varphi _{2}(s)\right\|ds&&f{\text{ is Lipschitz-continuous}}\\&\leq La\left\|\varphi _{1}-\varphi _{2}\right\|_{\infty }\end{aligned}}}

This is a contraction if ${\displaystyle a<{\tfrac {1}{L}}.}$

We have established that the Picard operator is a contraction on a complete metric space with the metric induced by the uniform norm. This allows us to apply the Banach fixed point theorem to conclude that the operator has a unique fixed point. In particular, there is a unique function

${\displaystyle \varphi \in {\mathcal {C}}(I_{a}(t_{0}),B_{b}(y_{0}))}$

such that Γφ = φ. This function is the unique solution of the initial value problem, valid on the interval Ia where a satisfies the condition

${\displaystyle a<\min \left\{{\tfrac {b}{M}},{\tfrac {1}{L}}\right\}.}$
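The contraction estimate ‖Γφ₁ − Γφ₂‖∞ ≤ La‖φ₁ − φ₂‖∞ can be checked numerically. The sketch below is illustrative only: the choices f(t, y) = sin(y) (Lipschitz constant L = 1), the interval [0, 0.5], and the two test functions are all arbitrary, and the integral is approximated by the trapezoid rule.

```python
# Numerical check of ||Gamma(phi1) - Gamma(phi2)||_inf <= L*a*||phi1 - phi2||_inf
# for f(t, y) = sin(y), which has Lipschitz constant L = 1 in y.
import math

a, L, t0, y0 = 0.5, 1.0, 0.0, 0.0   # illustrative choices
n = 2001
h = a / (n - 1)
ts = [t0 + i * h for i in range(n)]

def gamma(phi):
    """Picard operator (Gamma phi)(t) = y0 + int_{t0}^t sin(phi(s)) ds,
    with the integral approximated by the trapezoid rule."""
    vals = [math.sin(p) for p in phi]
    out, acc = [y0], 0.0
    for i in range(1, n):
        acc += 0.5 * h * (vals[i - 1] + vals[i])
        out.append(y0 + acc)
    return out

phi1 = [math.cos(t) for t in ts]     # two arbitrary continuous functions
phi2 = [t * t for t in ts]
num = max(abs(u - v) for u, v in zip(gamma(phi1), gamma(phi2)))
den = max(abs(u - v) for u, v in zip(phi1, phi2))
print(num / den <= L * a + 1e-9)     # the ratio never exceeds L*a
```

Since La = 0.5 < 1 here, one application of Γ at least halves the uniform distance between any two functions, which is exactly the contraction property the proof uses.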

## Optimization of the solution's interval

Nevertheless, there is a corollary of the Banach fixed point theorem: if an operator ${\displaystyle T^{n}}$ is a contraction for some n in N, then T has a unique fixed point. Before applying this theorem to the Picard operator, recall the following:

Lemma:   ${\displaystyle \left\|\Gamma ^{m}\varphi _{1}-\Gamma ^{m}\varphi _{2}\right\|\leq {\frac {L^{m}\alpha ^{m}}{m!}}\left\|\varphi _{1}-\varphi _{2}\right\|}$

Proof. Induction on m. In fact we prove the stronger pointwise bound ${\displaystyle \left\|\left(\Gamma ^{m}\varphi _{1}-\Gamma ^{m}\varphi _{2}\right)(t)\right\|\leq {\frac {L^{m}|t-t_{0}|^{m}}{m!}}\left\|\varphi _{1}-\varphi _{2}\right\|,}$ which implies the lemma since ${\displaystyle |t-t_{0}|\leq \alpha }$. For the base of the induction (m = 1) we have already seen this, so suppose the bound holds for m − 1. Then we have:

{\displaystyle {\begin{aligned}\left\|\left(\Gamma ^{m}\varphi _{1}-\Gamma ^{m}\varphi _{2}\right)(t)\right\|&=\left\|\left(\Gamma \Gamma ^{m-1}\varphi _{1}-\Gamma \Gamma ^{m-1}\varphi _{2}\right)(t)\right\|\\&\leq \left|\int _{t_{0}}^{t}\left\|f\left(s,\Gamma ^{m-1}\varphi _{1}(s)\right)-f\left(s,\Gamma ^{m-1}\varphi _{2}(s)\right)\right\|ds\right|\\&\leq L\left|\int _{t_{0}}^{t}\left\|\Gamma ^{m-1}\varphi _{1}(s)-\Gamma ^{m-1}\varphi _{2}(s)\right\|ds\right|\\&\leq L\left|\int _{t_{0}}^{t}{\frac {L^{m-1}|s-t_{0}|^{m-1}}{(m-1)!}}\left\|\varphi _{1}-\varphi _{2}\right\|ds\right|\\&={\frac {L^{m}|t-t_{0}|^{m}}{m!}}\left\|\varphi _{1}-\varphi _{2}\right\|.\end{aligned}}}

This inequality assures that for some large m,

${\displaystyle {\frac {L^{m}\alpha ^{m}}{m!}}<1,}$

and hence Γm will be a contraction. So by the previous corollary Γ will have a unique fixed point. Finally, we have been able to optimize the interval of the solution by taking α = min{a, b/M}.

In the end, this result shows the interval of definition of the solution does not depend on the Lipschitz constant of the field, but only on the interval of definition of the field and its maximum absolute value.

## Other existence theorems

The Picard–Lindelöf theorem shows that the solution exists and that it is unique. The Peano existence theorem shows only existence, not uniqueness, but it assumes only that f is continuous in y, instead of Lipschitz continuous. For example, the right-hand side of the equation dy/dt = y1/3 with initial condition y(0) = 0 is continuous but not Lipschitz continuous. Indeed, rather than being unique, this equation has three solutions:[3]

${\displaystyle y(t)=0,\qquad y(t)=\pm \left({\tfrac {2}{3}}t\right)^{\frac {3}{2}}}$.
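The nonzero solutions can be verified directly. A minimal check, using a centered finite difference to approximate the derivative: y(t) = (2t/3)^{3/2} satisfies y′ = y^{1/3} for t > 0, and y(t) = 0 trivially does as well.

```python
# Verify that y(t) = (2t/3)**(3/2) satisfies y' = y**(1/3) for t > 0.
def y(t):
    return (2.0 * t / 3.0) ** 1.5

h = 1e-6
for t in [0.5, 1.0, 2.0]:
    deriv = (y(t + h) - y(t - h)) / (2 * h)   # centered difference
    assert abs(deriv - y(t) ** (1.0 / 3.0)) < 1e-6
print("y(t) = (2t/3)^(3/2) solves y' = y^(1/3) with y(0) = 0")
```

By symmetry, −(2t/3)^{3/2} solves the equation as well, giving the three solutions listed above.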

Even more general is Carathéodory's existence theorem, which proves existence (in a more general sense) under weaker conditions on f. Although these conditions are only sufficient, there also exist necessary and sufficient conditions for the solution of an initial value problem to be unique, such as Okamura's theorem.[4]