# Pontryagin's maximum principle

Pontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints on the state or input controls. It states that any optimal control, along with the optimal state trajectory, must solve the so-called Hamiltonian system, which is a two-point boundary value problem, together with a maximum condition on the Hamiltonian. These necessary conditions become sufficient under certain convexity conditions on the objective and constraint functions.

The maximum principle was formulated in 1956 by the Russian mathematician Lev Pontryagin and his students, and its initial application was to the maximization of the terminal speed of a rocket. The result was derived using ideas from the classical calculus of variations. After a slight perturbation of the optimal control, one considers the first-order term of a Taylor expansion with respect to the perturbation; sending the perturbation to zero leads to a variational inequality from which the maximum principle follows.

The maximum principle is widely regarded as a milestone in optimal control theory. Its significance lies in the fact that maximizing the Hamiltonian is much easier than solving the original infinite-dimensional control problem: rather than maximizing over a function space, the problem is converted to a pointwise optimization. A similar logic leads to Bellman's principle of optimality, a related approach to optimal control problems which states that the optimal trajectory remains optimal at intermediate points in time. The resulting Hamilton–Jacobi–Bellman equation provides a necessary and sufficient condition for an optimum, and admits a straightforward extension to stochastic optimal control problems, whereas the maximum principle does not.

## Notation

In what follows, we make use of the following notation.

$\Psi _{T}(x(T))={\frac {\partial \Psi (x)}{\partial T}}|_{x=x(T)}$

$\Psi _{x}(x(T))={\begin{bmatrix}{\frac {\partial \Psi (x)}{\partial x_{1}}}|_{x=x(T)}&\cdots &{\frac {\partial \Psi (x)}{\partial x_{n}}}|_{x=x(T)}\end{bmatrix}}$

$H_{x}(x^{*},u^{*},\lambda ^{*},t)={\begin{bmatrix}{\frac {\partial H}{\partial x_{1}}}|_{x=x^{*},u=u^{*},\lambda =\lambda ^{*}}&\cdots &{\frac {\partial H}{\partial x_{n}}}|_{x=x^{*},u=u^{*},\lambda =\lambda ^{*}}\end{bmatrix}}$

$L_{x}(x^{*},u^{*})={\begin{bmatrix}{\frac {\partial L}{\partial x_{1}}}|_{x=x^{*},u=u^{*}}&\cdots &{\frac {\partial L}{\partial x_{n}}}|_{x=x^{*},u=u^{*}}\end{bmatrix}}$

$f_{x}(x^{*},u^{*})={\begin{bmatrix}{\frac {\partial f_{1}}{\partial x_{1}}}|_{x=x^{*},u=u^{*}}&\cdots &{\frac {\partial f_{1}}{\partial x_{n}}}|_{x=x^{*},u=u^{*}}\\\vdots &\ddots &\vdots \\{\frac {\partial f_{n}}{\partial x_{1}}}|_{x=x^{*},u=u^{*}}&\cdots &{\frac {\partial f_{n}}{\partial x_{n}}}|_{x=x^{*},u=u^{*}}\end{bmatrix}}$

## Formal statement of necessary conditions for minimization problem

Here the necessary conditions are shown for minimization of a functional. Take $x$ to be the state of the dynamical system with input $u$ , such that

${\dot {x}}=f(x,u),\quad x(0)=x_{0},\quad u(t)\in {\mathcal {U}},\quad t\in [0,T]$

where ${\mathcal {U}}$ is the set of admissible controls and $T$ is the terminal (i.e., final) time of the system. The control $u\in {\mathcal {U}}$ must be chosen for all $t\in [0,T]$ to minimize the objective functional $J$, which is defined by the application and can be abstracted as

$J=\Psi (x(T))+\int _{0}^{T}L(x(t),u(t))\,dt$

The constraints on the system dynamics can be adjoined to the Lagrangian $L$ by introducing a time-varying Lagrange multiplier vector $\lambda$, whose elements are called the costates of the system. This motivates the construction of the Hamiltonian $H$, defined for all $t\in [0,T]$ by:

$H(x(t),u(t),\lambda (t),t)=\lambda ^{\rm {T}}(t)f(x(t),u(t))+L(x(t),u(t))\,$

where $\lambda ^{\rm {T}}$ is the transpose of $\lambda$.
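As a sketch of how the Hamiltonian is assembled in practice, consider a hypothetical scalar example (not taken from the text above): dynamics $\dot{x}=u$ and running cost $L=\tfrac{1}{2}u^{2}$.

```python
import numpy as np

# Assembling H(x, u, λ) = λᵀ f(x, u) + L(x, u) for a hypothetical
# scalar example: dynamics ẋ = u, running cost L = ½u².
def f(x, u):
    """System dynamics."""
    return u

def L(x, u):
    """Running (Lagrangian) cost."""
    return 0.5 * u ** 2

def H(x, u, lam):
    """Hamiltonian: costate times dynamics, plus running cost."""
    return lam * f(x, u) + L(x, u)

# For this H, ∂H/∂u = λ + u, so a grid search over controls should
# recover the unconstrained minimizer u = -λ.
u_grid = np.linspace(-5.0, 5.0, 10001)
u_min = u_grid[np.argmin(H(0.0, u_grid, 2.0))]   # numerically -2.0 = -λ
```

Working with a callable `H` like this makes the pointwise minimization required by the principle a one-dimensional optimization at each instant, rather than a search over an entire control function.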

In this minimization form the result is often called Pontryagin's minimum principle: the optimal state trajectory $x^{*}$, optimal control $u^{*}$, and corresponding Lagrange multiplier vector $\lambda ^{*}$ must minimize the Hamiltonian $H$, so that

$(1)\qquad H(x^{*}(t),u^{*}(t),\lambda ^{*}(t),t)\leq H(x^{*}(t),u,\lambda ^{*}(t),t)\,$

for all time $t\in [0,T]$ and for all permissible control inputs $u\in {\mathcal {U}}$. It must also be the case that

$(2)\qquad \Psi _{T}(x(T))+H(T)=0\,$

Additionally, the costate equations

$(3)\qquad -{\dot {\lambda }}^{\rm {T}}(t)=H_{x}(x^{*}(t),u^{*}(t),\lambda (t),t)=\lambda ^{\rm {T}}(t)f_{x}(x^{*}(t),u^{*}(t))+L_{x}(x^{*}(t),u^{*}(t))$

must be satisfied. If the final state $x(T)$ is not fixed (i.e., its differential variation is not zero), it must also be that the terminal costates are such that

$(4)\qquad \lambda ^{\rm {T}}(T)=\Psi _{x}(x(T))\,$

The four conditions (1)–(4) are the necessary conditions for an optimal control. Note that (4) only applies when $x(T)$ is free. If it is fixed, then this condition is not necessary for an optimum.
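To make the conditions concrete, the following is a worked sketch on a hypothetical scalar problem (not from the text above): minimize $J=\tfrac{1}{2}x(T)^{2}+\int _{0}^{T}\tfrac{1}{2}u(t)^{2}\,dt$ subject to $\dot{x}=u$, $x(0)=x_{0}$, with fixed terminal time $T$. Conditions (1), (3), and (4) then have a closed-form solution, which the code checks against nearby constant controls.

```python
# Worked example (hypothetical scalar problem):
#   minimize  J = ½ x(T)² + ∫₀ᵀ ½ u(t)² dt   subject to   ẋ = u, x(0) = x0,
# with fixed terminal time T.
#
# Hamiltonian:  H = λu + ½u².
#   Condition (1): ∂H/∂u = λ + u = 0  ⇒  u* = -λ.
#   Condition (3): -λ̇ = H_x = 0       ⇒  λ is constant.
#   Condition (4): λ(T) = Ψ_x(x(T)) = x(T).
# Since ẋ = u* = -λ is constant, x(T) = x0 - λT; combining with λ = x(T)
# gives λ = x0 / (1 + T) and hence u*(t) ≡ -x0 / (1 + T).

def J_constant(u, x0, T):
    """Cost of the constant control u(t) ≡ u, for which x(T) = x0 + u·T."""
    xT = x0 + u * T
    return 0.5 * xT ** 2 + 0.5 * u ** 2 * T

x0, T = 1.0, 1.0
u_star = -x0 / (1.0 + T)          # control predicted by the principle
J_star = J_constant(u_star, x0, T)

# Since u* is constant here, it should beat any other constant control.
```

Because the costate is constant and the optimal control is constant in time, comparing against constant controls is a fair sanity check; for problems with time-varying costates, the two-point boundary value problem in (3)–(4) would instead be solved numerically (e.g., by shooting).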