
Picard–Lindelöf theorem

From Wikipedia, the free encyclopedia

In mathematics, specifically the study of differential equations, the Picard–Lindelöf theorem gives a set of conditions under which an initial value problem has a unique solution. It is also known as Picard's existence theorem, the Cauchy–Lipschitz theorem, or the existence and uniqueness theorem.

The theorem is named after Émile Picard, Ernst Lindelöf, Rudolf Lipschitz and Augustin-Louis Cauchy.

Theorem


Let $D \subseteq \mathbb{R} \times \mathbb{R}^n$ be a closed rectangle with $(t_0, y_0) \in \operatorname{int} D$, the interior of $D$. Let $f : D \to \mathbb{R}^n$ be a function that is continuous in $t$ and Lipschitz continuous in $y$ (with Lipschitz constant independent of $t$). Then there exists some $\varepsilon > 0$ such that the initial value problem

$$y'(t) = f(t, y(t)), \qquad y(t_0) = y_0$$

has a unique solution $y(t)$ on the interval $[t_0 - \varepsilon, t_0 + \varepsilon]$.[1][2]

Proof sketch


A standard proof relies on transforming the differential equation into an integral equation, then applying the Banach fixed-point theorem to prove the existence of a solution, and then applying Grönwall's lemma to prove the uniqueness of the solution.

Integrating both sides of the differential equation shows that any solution to the differential equation must also satisfy the integral equation

$$y(t) = y_0 + \int_{t_0}^{t} f(s, y(s)) \, ds.$$

Given the hypotheses that $f$ is continuous in $t$ and Lipschitz continuous in $y$, this integral operator is a contraction and so the Banach fixed-point theorem proves that a solution can be obtained by fixed-point iteration of successive approximations. In this context, this fixed-point iteration method is known as Picard iteration.

Set

$$\varphi_0(t) = y_0$$

and

$$\varphi_{k+1}(t) = y_0 + \int_{t_0}^{t} f(s, \varphi_k(s)) \, ds.$$

It follows from the Banach fixed-point theorem that the sequence of "Picard iterates" $(\varphi_k)$ is convergent and that its limit is a solution to the original initial value problem. Next, applying Grönwall's lemma to $|\varphi(t) - \psi(t)|$, where $\varphi$ and $\psi$ are any two solutions, shows that $\varphi(t) = \psi(t)$ for all $t$, thus proving that any two solutions must coincide, and hence proving global uniqueness of the solution on the domain where the theorem's hypotheses hold.
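The successive approximations above are easy to compute symbolically. As a minimal sketch (using SymPy; the code and names are illustrative, not part of the article), the following carries out Picard iteration for the IVP $y' = y$, $y(0) = 1$, whose iterates are the Taylor partial sums of $e^t$:

```python
# Picard iteration, done symbolically with SymPy, for y' = y, y(0) = 1.
# The exact solution is exp(t); each iterate adds one more Taylor term.
import sympy as sp

t, s = sp.symbols("t s")

def picard_iterates(f, y0, n):
    """Return the Picard iterates phi_0, ..., phi_{n-1} for y' = f(t, y), y(0) = y0."""
    iterates = [sp.Integer(y0)]          # phi_0(t) = y0, the constant function
    for _ in range(n - 1):
        # phi_{k+1}(t) = y0 + integral_0^t f(s, phi_k(s)) ds
        nxt = y0 + sp.integrate(f(s, iterates[-1].subs(t, s)), (s, 0, t))
        iterates.append(sp.expand(nxt))
    return iterates

iters = picard_iterates(lambda s_, y: y, 1, 5)
for k, phi in enumerate(iters):
    print(f"phi_{k}(t) =", phi)   # the Taylor partial sums of exp(t)
```

The iteration is the same for any right-hand side $f$; only the lambda changes.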

Example of Picard iteration

[Figure: four Picard iteration steps and their limit]

Let $y(t) = \tan(t)$ be the solution to the equation $y'(t) = 1 + y(t)^2$ with initial condition $y(0) = 0$. Starting with $\varphi_0(t) = 0$, we iterate

$$\varphi_{k+1}(t) = \int_0^t \left( 1 + \varphi_k(s)^2 \right) ds,$$

so that $\varphi_n(t) \to y(t)$:

$$\varphi_1(t) = t$$
$$\varphi_2(t) = t + \frac{t^3}{3}$$
$$\varphi_3(t) = t + \frac{t^3}{3} + \frac{2t^5}{15} + \frac{t^7}{63}$$

and so on. Evidently, the functions are computing the Taylor series expansion of our known solution $y = \tan(t)$. Since $\tan$ has poles at $\pm\tfrac{\pi}{2}$, it is not Lipschitz continuous in the neighborhood of those points, and the iteration converges toward a local solution only for $|t| < \tfrac{\pi}{2}$ that is not valid over all of $\mathbb{R}$.
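These iterates can be reproduced mechanically. A short SymPy sketch (illustrative code, not part of the article) carrying out the iteration for this example:

```python
# Picard iterates for y' = 1 + y^2, y(0) = 0, whose exact solution is tan(t).
import sympy as sp

t, s = sp.symbols("t s")

iterates = [sp.Integer(0)]               # phi_0(t) = 0
for _ in range(3):
    nxt = sp.integrate(1 + iterates[-1].subs(t, s)**2, (s, 0, t))
    iterates.append(sp.expand(nxt))

# The three iterates equal t, then t + t^3/3, then
# t + t^3/3 + 2t^5/15 + t^7/63 (SymPy may print terms in another order).
for phi in iterates[1:]:
    print(phi)
```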

Example of non-uniqueness


To understand uniqueness of solutions, contrast the following two examples of first-order ordinary differential equations for y(t).[3] Both differential equations possess a single stationary point y = 0.

First, consider the homogeneous linear equation dy/dt = ay (a < 0). The stationary solution is y(t) = 0, which is obtained for the initial condition y(0) = 0. Beginning with any other initial condition y(0) = y0 ≠ 0, the solution tends toward the stationary point y = 0, but it only approaches it in the limit of infinite time, so the uniqueness of solutions over all finite times is guaranteed.

By contrast, for an equation in which the stationary point can be reached after a finite time, uniqueness of solutions does not hold. Consider the homogeneous nonlinear equation dy/dt = ay^(2/3), which has at least these two solutions corresponding to the initial condition y(0) = 0: y(t) = 0 and

$$y(t) = \left( \frac{at}{3} \right)^3,$$

so the previous state of the system is not uniquely determined by its state at or after t = 0. The uniqueness theorem does not apply because the derivative of the function f(y) = y^(2/3) is not bounded in the neighborhood of y = 0; therefore f is not Lipschitz continuous there, violating the hypothesis of the theorem.
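Both claimed solutions can be checked symbolically. A small SymPy sketch (illustrative, assuming a, t > 0 so the fractional power is unambiguous):

```python
# Verify that y = 0 and y = (a*t/3)**3 both solve y' = a*y**(2/3), y(0) = 0.
import sympy as sp

a, t = sp.symbols("a t", positive=True)

def residual(y):
    """ODE residual y' - a*y**(2/3); identically zero iff y is a solution."""
    return sp.simplify(sp.diff(y, t) - a * y**sp.Rational(2, 3))

y_zero = sp.Integer(0)
y_cubic = (a * t / 3)**3

print(residual(y_zero), residual(y_cubic))   # both residuals vanish
print(y_cubic.subs(t, 0))                    # same initial condition y(0) = 0
```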

Detailed proof


Let

$$C_{a,b} = \overline{I_a(t_0)} \times \overline{B_b(y_0)},$$

where:

$$\overline{I_a(t_0)} = [t_0 - a, t_0 + a], \qquad \overline{B_b(y_0)} = \{ y : \|y - y_0\| \le b \}.$$

This is the compact cylinder where  f  is defined.

Let L be the Lipschitz constant of f with respect to the second variable.

Let

$$M = \sup_{C_{a,b}} \|f\|;$$

that is, the supremum of (the absolute values of) the slopes of the function.

This maximum exists since the conditions imply that f is a continuous function of the two variables: for fixed y, the map t ↦ f(t, y) is continuous, so for any point (t₁, y₁) and ε > 0 there exists δ > 0 such that ‖f(t, y₁) − f(t₁, y₁)‖ < ε/2 when |t − t₁| < δ. We have

$$\|f(t, y) - f(t_1, y_1)\| \le \|f(t, y) - f(t, y_1)\| + \|f(t, y_1) - f(t_1, y_1)\| \le L \|y - y_1\| + \frac{\varepsilon}{2} < \varepsilon$$

provided |t − t₁| < δ and ‖y − y₁‖ < ε/(2L), which shows that f is continuous at (t₁, y₁).


We will proceed to apply the Banach fixed-point theorem using the metric on $C(I_a(t_0), B_b(y_0))$ induced by the uniform norm

$$\|\varphi\|_\infty = \sup_{t \in I_a} \|\varphi(t)\|.$$

We define an operator between two function spaces of continuous functions, Picard's operator, as follows:

$$\Gamma : C(I_a(t_0), B_b(y_0)) \to C(I_a(t_0), B_b(y_0))$$

defined by:

$$\Gamma \varphi(t) = y_0 + \int_{t_0}^{t} f(s, \varphi(s)) \, ds.$$
We must show that this operator maps a complete non-empty metric space X into itself and also is a contraction mapping.

We first show that, given certain restrictions on $a$, $\Gamma$ takes $\overline{B_b(y_0)}$ into itself in the space of continuous functions with the uniform norm. Here, $\overline{B_b(y_0)}$ is a closed ball in the space of continuous (and bounded) functions "centered" at the constant function $y_0$. Hence we need to show that

$$\|\varphi - y_0\|_\infty \le b$$

implies

$$\|\Gamma \varphi - y_0\|_\infty = \left\| \int_{t_0}^{t'} f(s, \varphi(s)) \, ds \right\| \le \int_{t_0}^{t'} \|f(s, \varphi(s))\| \, ds \le M |t' - t_0| \le M a \le b,$$

where $t'$ is some number in $[t_0 - a, t_0 + a]$ where the maximum is achieved. The last inequality in the chain is true if we impose the requirement $a \le \frac{b}{M}$.

Now let's prove that this operator is a contraction mapping.

Given two functions $\varphi_1, \varphi_2 \in C(I_a(t_0), B_b(y_0))$, in order to apply the Banach fixed-point theorem we require

$$\|\Gamma \varphi_1 - \Gamma \varphi_2\|_\infty \le q \|\varphi_1 - \varphi_2\|_\infty$$

for some $0 \le q < 1$. So let $t$ be such that

$$\|\Gamma \varphi_1 - \Gamma \varphi_2\|_\infty = \|(\Gamma \varphi_1)(t) - (\Gamma \varphi_2)(t)\|.$$

Then using the definition of $\Gamma$,

$$\|(\Gamma \varphi_1)(t) - (\Gamma \varphi_2)(t)\| = \left\| \int_{t_0}^{t} \big( f(s, \varphi_1(s)) - f(s, \varphi_2(s)) \big) \, ds \right\| \le \int_{t_0}^{t} L \|\varphi_1(s) - \varphi_2(s)\| \, ds \le L a \|\varphi_1 - \varphi_2\|_\infty.$$

This is a contraction if $a < \frac{1}{L}$.
We have established that Picard's operator is a contraction on the Banach space with the metric induced by the uniform norm. This allows us to apply the Banach fixed-point theorem to conclude that the operator has a unique fixed point. In particular, there is a unique function

$$\varphi \in C(I_a(t_0), B_b(y_0))$$

such that Γφ = φ. This function is the unique solution of the initial value problem, valid on the interval Ia where a satisfies the condition

$$a < \min\left\{ \frac{b}{M}, \frac{1}{L} \right\}.$$
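To make this condition concrete, consider (as a hypothetical worked example, not from the article) the IVP y' = y², y(0) = 1 on the cylinder |t| ≤ a, |y − 1| ≤ b with b = 1; there M = 4 and L = 4, so the theorem guarantees uniqueness on |t| < 1/4:

```python
# Concrete instance of a < min{b/M, 1/L} for f(t, y) = y^2, (t0, y0) = (0, 1).
b = 1.0
y_max = 1.0 + b              # largest |y| on the cylinder |y - 1| <= b
M = y_max**2                 # sup |f| = sup y^2 = 4
L = 2 * y_max                # Lipschitz constant: sup |df/dy| = sup |2y| = 4
a_bound = min(b / M, 1 / L)  # = 0.25
print(a_bound)
```

The true solution y = 1/(1 − t) actually exists for all t < 1, so the guaranteed interval is conservative.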

Optimization of the solution's interval


We wish to remove the dependence of the interval Ia on L. To this end, there is a corollary of the Banach fixed-point theorem: if an operator $T^n$ is a contraction for some $n$ in $\mathbb{N}$, then $T$ has a unique fixed point. Before applying this theorem to the Picard operator, recall the following:

Lemma — $\|\Gamma^m \varphi_1(t) - \Gamma^m \varphi_2(t)\| \le \dfrac{(L|t - t_0|)^m}{m!} \|\varphi_1 - \varphi_2\|_\infty$ for all $t \in [t_0 - \alpha, t_0 + \alpha]$.

Proof. Induction on m. For the base of the induction (m = 1) we have already seen this, so suppose the inequality holds for m − 1; then we have:

$$\|\Gamma^m \varphi_1(t) - \Gamma^m \varphi_2(t)\| \le \int_{t_0}^{t} L \|\Gamma^{m-1} \varphi_1(s) - \Gamma^{m-1} \varphi_2(s)\| \, ds \le \int_{t_0}^{t} L \frac{(L|s - t_0|)^{m-1}}{(m-1)!} \|\varphi_1 - \varphi_2\|_\infty \, ds \le \frac{(L|t - t_0|)^m}{m!} \|\varphi_1 - \varphi_2\|_\infty.$$

By taking the supremum over $t \in [t_0 - \alpha, t_0 + \alpha]$ we see that

$$\|\Gamma^m \varphi_1 - \Gamma^m \varphi_2\|_\infty \le \frac{(L\alpha)^m}{m!} \|\varphi_1 - \varphi_2\|_\infty.$$

This inequality assures that $\frac{(L\alpha)^m}{m!} < 1$ for some large $m$, and hence $\Gamma^m$ will be a contraction. So by the previous corollary, $\Gamma$ will have a unique fixed point. Finally, we have been able to optimize the interval of the solution by taking $\alpha = \min\{a, \frac{b}{M}\}$.

In the end, this result shows that the interval of definition of the solution does not depend on the Lipschitz constant of the field, but only on the interval of definition of the field and its maximum absolute value.
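The mechanism behind this is that the factor (Lα)^m / m! eventually falls below 1 no matter how large L is. A quick numerical sketch (with illustrative values L = 10, α = 2, so Lα = 20 ≫ 1):

```python
# The contraction factor (L*alpha)^m / m! tends to 0 as m grows, so some
# power of the Picard operator is a contraction even when L*alpha > 1.
import math

L_const, alpha = 10.0, 2.0
factors = [(L_const * alpha)**m / math.factorial(m) for m in range(1, 81)]
m_star = next(m for m, c in enumerate(factors, start=1) if c < 1)
print(m_star, factors[m_star - 1])   # first m whose factor drops below 1
```

The factors first grow (up to m ≈ Lα) and then collapse factorially, which is exactly why iterating the operator removes the dependence on L.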

Other existence theorems


The Picard–Lindelöf theorem shows that the solution exists and that it is unique. The Peano existence theorem shows only existence, not uniqueness, but it assumes only that f is continuous in y, instead of Lipschitz continuous. For example, the right-hand side of the equation dy/dt = y^(1/3) with initial condition y(0) = 0 is continuous but not Lipschitz continuous. Indeed, rather than being unique, this equation has at least three solutions:[4]

$$y(t) = 0, \qquad y(t) = \pm\left( \tfrac{2}{3} t \right)^{3/2} \quad (t \ge 0).$$
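The nontrivial branches can be verified symbolically; a SymPy sketch (illustrative, not from the article) for the positive branch, assuming t ≥ 0:

```python
# Check that y = (2t/3)^(3/2) solves y' = y^(1/3) with y(0) = 0 for t >= 0.
import sympy as sp

t = sp.symbols("t", positive=True)
y_plus = (sp.Rational(2, 3) * t)**sp.Rational(3, 2)

res = sp.simplify(sp.diff(y_plus, t) - y_plus**sp.Rational(1, 3))
print(res)                  # residual is 0, so y_plus solves the ODE
print(y_plus.subs(t, 0))    # initial condition y(0) = 0
```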

Even more general is Carathéodory's existence theorem, which proves existence (in a more general sense) under weaker conditions on f. Although these conditions are only sufficient, there also exist necessary and sufficient conditions for the solution of an initial value problem to be unique, such as Okamura's theorem.[5]

Global existence of solution


The Picard–Lindelöf theorem ensures that solutions to initial value problems exist and are unique within a local interval $[t_0 - \varepsilon, t_0 + \varepsilon]$, possibly dependent on each solution. The behavior of solutions beyond this local interval can vary depending on the properties of f and the domain over which f is defined. For instance, if f is globally Lipschitz, then the local interval of existence of each solution can be extended to the entire real line and all solutions are defined over all of R.

If f is only locally Lipschitz, some solutions may not be defined for certain values of t, even if f is smooth. For instance, the differential equation dy/dt = y^2 with initial condition y(0) = 1 has the solution y(t) = 1/(1 − t), which is not defined at t = 1. Nevertheless, if f is a differentiable function defined over a compact subset of R^n, then the initial value problem has a unique solution defined over the entire R.[6] A similar result exists in differential geometry: if f is a differentiable vector field defined over a domain which is a compact smooth manifold, then all its trajectories (integral curves) exist for all time.[6][7]
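The blow-up in this example is easy to confirm symbolically (a SymPy sketch, not part of the article):

```python
# y(t) = 1/(1 - t) solves y' = y^2, y(0) = 1, but blows up as t -> 1-.
import sympy as sp

t = sp.symbols("t")
y = 1 / (1 - t)

res = sp.simplify(sp.diff(y, t) - y**2)
print(res)                          # 0: y satisfies the ODE
print(y.subs(t, 0))                 # 1: matches the initial condition
print(sp.limit(y, t, 1, dir="-"))   # oo: finite-time blow-up at t = 1
```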


Notes

  1. ^ Coddington & Levinson (1955), Theorem I.3.1
  2. ^ Murray, Francis; Miller, Kenneth. Existence Theorems for Ordinary Differential Equations. p. 50.
  3. ^ Arnold, V. I. (1978). Ordinary Differential Equations. The MIT Press. ISBN 0-262-51018-9.
  4. ^ Coddington & Levinson (1955), p. 7
  5. ^ Agarwal, Ravi P.; Lakshmikantham, V. (1993). Uniqueness and Nonuniqueness Criteria for Ordinary Differential Equations. World Scientific. p. 159. ISBN 981-02-1357-3.
  6. ^ a b Perko, Lawrence Marion (2001). Differential equations and dynamical systems. Texts in applied mathematics (3rd ed.). New York: Springer. p. 189. ISBN 978-1-4613-0003-8.
  7. ^ Lee, John M. (2003), "Smooth Manifolds", Introduction to Smooth Manifolds, Graduate Texts in Mathematics, vol. 218, New York, NY: Springer New York, pp. 1–29, doi:10.1007/978-0-387-21752-9_1, ISBN 978-0-387-95448-6

References

  - Coddington, Earl A.; Levinson, Norman (1955). Theory of Ordinary Differential Equations. New York: McGraw-Hill.