# Pontryagin's maximum principle


Pontryagin's maximum (or minimum) principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints on the state or input controls. It was formulated in 1956 by the Russian mathematician Lev Pontryagin and his students.[1] It has as a special case the Euler–Lagrange equation of the calculus of variations.

The principle states, informally, that the control Hamiltonian must take an extreme value over the set of all permissible controls. Whether that extreme value is a maximum or a minimum depends both on the problem and on the sign convention used to define the Hamiltonian. The normal convention for the control Hamiltonian leads to a maximum, hence the name "maximum principle"; the sign convention used in this article makes the extreme value a minimum.

If ${\displaystyle {\mathcal {U}}}$ is the set of values of permissible controls then the principle states that the optimal control ${\displaystyle u^{*}}$ must satisfy:

${\displaystyle H(x^{*}(t),u^{*}(t),\lambda ^{*}(t),t)\leq H(x^{*}(t),u,\lambda ^{*}(t),t),\quad \forall u\in {\mathcal {U}},\quad t\in [t_{0},t_{f}]}$

where ${\displaystyle x^{*}\in C^{1}[t_{0},t_{f}]}$ is the optimal state trajectory and ${\displaystyle \lambda ^{*}\in BV[t_{0},t_{f}]}$ is the optimal costate trajectory.[2]

The result was first successfully applied to minimum time problems where the input control is constrained, but it can also be useful in studying state-constrained problems.
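The bang-bang character of minimum-time solutions can be illustrated with a small sketch (a hypothetical example, not from the original sources): for a minimum-time problem the running cost is constant, so the Hamiltonian is affine in the control, and minimizing it over a bounded control set pushes the optimal control to the boundary of that set. Here the double integrator with control set U = [-1, 1] is assumed:

```python
# Illustrative sketch (assumed example): in minimum-time problems the
# Hamiltonian is affine in the control, so minimizing H over a bounded
# control set U = [-1, 1] drives u* to the boundary ("bang-bang" control).
# For the double integrator x1' = x2, x2' = u with running cost L = 1:
#   H = lam1*x2 + lam2*u + 1,  hence  u* = -sign(lam2).

def u_star(lam2):
    # Minimizer of H over U = [-1, 1]; lam2 = 0 is the singular case,
    # where any u in U minimizes H (resolved arbitrarily here).
    return -1.0 if lam2 > 0 else 1.0

def H(x2, u, lam1, lam2):
    return lam1 * x2 + lam2 * u + 1.0

# Check on a grid: the minimizing control in [-1, 1] is always +/-1.
x2, lam1 = 0.5, -0.2
us = [i * 0.05 - 1.0 for i in range(41)]   # grid over U = [-1, 1]
for lam2 in (-1.5, -0.1, 0.3, 2.0):
    best = min(us, key=lambda u: H(x2, u, lam1, lam2))
    print(lam2, abs(best - u_star(lam2)) < 1e-9)  # always True
```

Because the coefficient of u in H is just the costate component lam2, the optimal control switches sign exactly when lam2 crosses zero, which is what produces the characteristic switching structure of minimum-time trajectories.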

Special conditions for the Hamiltonian can also be derived. When the final time ${\displaystyle t_{f}}$ is fixed and the Hamiltonian does not depend explicitly on time ${\displaystyle \left({\tfrac {\partial H}{\partial t}}\equiv 0\right)}$, then:

${\displaystyle H(x^{*}(t),u^{*}(t),\lambda ^{*}(t))\equiv \mathrm {constant} \,}$

and if the final time is free, then:

${\displaystyle H(x^{*}(t),u^{*}(t),\lambda ^{*}(t))\equiv 0.\,}$

More general conditions on the optimal control are given below.
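The fixed-final-time condition (H constant along the optimal trajectory) can be checked numerically on a worked example. The following sketch assumes the scalar problem x' = u with running cost L = x^2 + u^2 on [0, 1], x(0) = 1 and x(1) free, whose optimal pair is known in closed form: x*(t) = cosh(1-t)/cosh(1), lambda*(t) = 2 sinh(1-t)/cosh(1), u*(t) = -lambda*(t)/2. None of these specifics come from the article; they are a standard textbook-style instance:

```python
import math

# Numerical check (assumed example): for the time-invariant problem
#   x' = u,  L = x^2 + u^2  on [0, 1],  x(0) = 1,  x(1) free,
# the optimal pair is known in closed form:
#   x*(t) = cosh(1 - t)/cosh(1),  lambda*(t) = 2*sinh(1 - t)/cosh(1),
# with u*(t) = -lambda*(t)/2.  Since dH/dt = 0 and the final time is
# fixed, H should be constant along this trajectory.

def H(x, u, lam):
    return lam * u + x**2 + u**2

vals = []
for k in range(11):
    t = k / 10.0
    x = math.cosh(1 - t) / math.cosh(1)
    lam = 2 * math.sinh(1 - t) / math.cosh(1)
    u = -lam / 2
    vals.append(H(x, u, lam))

# Spread of H over the trajectory; ~0 up to floating-point error.
print(max(vals) - min(vals))
```

Here H evaluates to 1/cosh(1)^2 at every sampled time, consistent with the conservation property stated above.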

The conditions of Pontryagin's minimum principle are necessary for an optimum: they must hold along an optimal trajectory, but satisfying them does not by itself guarantee optimality. By contrast, the Hamilton–Jacobi–Bellman equation provides a necessary and sufficient condition for an optimum, but that condition must be satisfied over the whole of the state space.

## Maximization and minimization

The principle was first known as Pontryagin's maximum principle, and its proof is historically based on maximizing the Hamiltonian. The initial application of the principle was to the maximization of the terminal speed of a rocket. However, as it has subsequently been used mostly for minimization of a performance index, it is referred to here as the minimum principle. Pontryagin's own book likewise treats the problem of minimizing a performance index.[3]

## Notation

In what follows, we make use of the following notation:

${\displaystyle \Psi _{T}(x(T))={\frac {\partial \Psi (x)}{\partial T}}|_{x=x(T)}\,}$
${\displaystyle \Psi _{x}(x(T))={\begin{bmatrix}{\frac {\partial \Psi (x)}{\partial x_{1}}}|_{x=x(T)}&\cdots &{\frac {\partial \Psi (x)}{\partial x_{n}}}|_{x=x(T)}\end{bmatrix}}}$
${\displaystyle H_{x}(x^{*},u^{*},\lambda ^{*},t)={\begin{bmatrix}{\frac {\partial H}{\partial x_{1}}}|_{x=x^{*},u=u^{*},\lambda =\lambda ^{*}}&\cdots &{\frac {\partial H}{\partial x_{n}}}|_{x=x^{*},u=u^{*},\lambda =\lambda ^{*}}\end{bmatrix}}}$
${\displaystyle L_{x}(x^{*},u^{*})={\begin{bmatrix}{\frac {\partial L}{\partial x_{1}}}|_{x=x^{*},u=u^{*}}&\cdots &{\frac {\partial L}{\partial x_{n}}}|_{x=x^{*},u=u^{*}}\end{bmatrix}}}$
${\displaystyle f_{x}(x^{*},u^{*})={\begin{bmatrix}{\frac {\partial f_{1}}{\partial x_{1}}}|_{x=x^{*},u=u^{*}}&\cdots &{\frac {\partial f_{1}}{\partial x_{n}}}|_{x=x^{*},u=u^{*}}\\\vdots &\ddots &\vdots \\{\frac {\partial f_{n}}{\partial x_{1}}}|_{x=x^{*},u=u^{*}}&\ldots &{\frac {\partial f_{n}}{\partial x_{n}}}|_{x=x^{*},u=u^{*}}\end{bmatrix}}}$

## Formal statement of necessary conditions for minimization problem

Here the necessary conditions are shown for minimization of a functional. Take ${\displaystyle x}$ to be the state of the dynamical system with input ${\displaystyle u}$, such that

${\displaystyle {\dot {x}}=f(x,u),\quad x(0)=x_{0},\quad u(t)\in {\mathcal {U}},\quad t\in [0,T]}$

where ${\displaystyle {\mathcal {U}}}$ is the set of admissible controls and ${\displaystyle T}$ is the terminal (i.e., final) time of the system. The control ${\displaystyle u\in {\mathcal {U}}}$ must be chosen for all ${\displaystyle t\in [0,T]}$ to minimize the objective functional ${\displaystyle J}$ which is defined by the application and can be abstracted as

${\displaystyle J=\Psi (x(T))+\int _{0}^{T}L(x(t),u(t))\,dt}$

The constraints on the system dynamics can be adjoined to the Lagrangian ${\displaystyle L}$ by introducing a time-varying Lagrange multiplier vector ${\displaystyle \lambda }$, whose elements are called the costates of the system. This motivates the construction of the Hamiltonian ${\displaystyle H}$ defined for all ${\displaystyle t\in [0,T]}$ by:

${\displaystyle H(x(t),u(t),\lambda (t),t)=\lambda ^{\rm {T}}(t)f(x(t),u(t))+L(x(t),u(t))\,}$

where ${\displaystyle \lambda ^{\rm {T}}}$ is the transpose of ${\displaystyle \lambda }$.
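For a concrete instance of this construction, consider a scalar linear-quadratic problem. The system, cost, and parameter values below are an assumed illustration, not part of the principle itself; the sketch forms H = lambda*f + L and checks numerically that setting dH/du = 0 recovers the minimizing control:

```python
# Sketch (assumed example): the Hamiltonian for the scalar
# linear-quadratic problem  x' = a*x + b*u,  L = q*x^2 + r*u^2.
a, b, q, r = -1.0, 1.0, 1.0, 0.5

def hamiltonian(x, u, lam):
    # H = lambda^T f(x, u) + L(x, u), per the sign convention above
    return lam * (a * x + b * u) + q * x**2 + r * u**2

# H is quadratic in u here, so dH/du = lam*b + 2*r*u = 0 gives the
# minimizing control directly:
def u_star(lam):
    return -lam * b / (2 * r)

# Check numerically that u_star minimizes H over a fine grid of controls.
x, lam = 0.7, 0.3
us = [i * 0.01 - 2.0 for i in range(401)]
best = min(us, key=lambda u: hamiltonian(x, u, lam))
print(abs(best - u_star(lam)) < 0.01)  # True
```

When H is smooth and the minimizer lies in the interior of the control set, as here, the pointwise minimization in the principle reduces to the stationarity condition dH/du = 0; the inequality form quoted above is what survives when the control is constrained to a boundary.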

Pontryagin's minimum principle states that the optimal state trajectory ${\displaystyle x^{*}}$, optimal control ${\displaystyle u^{*}}$, and corresponding Lagrange multiplier vector ${\displaystyle \lambda ^{*}}$ must minimize the Hamiltonian ${\displaystyle H}$ so that

${\displaystyle (1)\qquad H(x^{*}(t),u^{*}(t),\lambda ^{*}(t),t)\leq H(x^{*}(t),u,\lambda ^{*}(t),t)\,}$

for all time ${\displaystyle t\in [0,T]}$ and for all permissible control inputs ${\displaystyle u\in {\mathcal {U}}}$. It must also be the case that

${\displaystyle (2)\qquad \Psi _{T}(x(T))+H(T)=0\,}$

Additionally, the costate equations

${\displaystyle (3)\qquad -{\dot {\lambda }}^{\rm {T}}(t)=H_{x}(x^{*}(t),u^{*}(t),\lambda ^{*}(t),t)=\lambda ^{*{\rm {T}}}(t)f_{x}(x^{*}(t),u^{*}(t))+L_{x}(x^{*}(t),u^{*}(t))}$

must be satisfied. If the final state ${\displaystyle x(T)}$ is not fixed (i.e., its differential variation is not zero), it must also be that the terminal costates are such that

${\displaystyle (4)\qquad \lambda ^{\rm {T}}(T)=\Psi _{x}(x(T))\,}$

The four conditions (1)–(4) are the necessary conditions for an optimal control. Note that (4) applies only when ${\displaystyle x(T)}$ is free; if ${\displaystyle x(T)}$ is fixed, this condition is not needed for an optimum.
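Taken together, the necessary conditions turn the optimal control problem into a two-point boundary value problem: the state satisfies an initial condition while the costate satisfies a terminal one. The sketch below assumes the scalar problem x' = u, L = x^2 + u^2 on [0, 1], x(0) = 1, x(1) free with Psi = 0 (so condition (4) gives lambda(1) = 0), and solves it by single shooting on the initial costate; the problem and its closed-form answer lambda(0) = 2 tanh(1) are an illustration, not taken from the original sources:

```python
import math

# Numerical sketch (assumed example): minimize J = int_0^1 (x^2 + u^2) dt
# subject to x' = u, x(0) = 1, with x(1) free and Psi = 0, so the
# transversality condition (4) gives lambda(1) = 0.
#
# From H = lam*u + x^2 + u^2, the minimum condition (1) gives u* = -lam/2,
# and the costate equation (3) gives lam' = -2x.

def propagate(lam0, x0=1.0, T=1.0, n=1000):
    """RK4-integrate x' = -lam/2, lam' = -2x; return (x(T), lam(T))."""
    dt = T / n
    x, lam = x0, lam0
    f = lambda x, lam: (-lam / 2.0, -2.0 * x)
    for _ in range(n):
        k1 = f(x, lam)
        k2 = f(x + dt/2*k1[0], lam + dt/2*k1[1])
        k3 = f(x + dt/2*k2[0], lam + dt/2*k2[1])
        k4 = f(x + dt*k3[0], lam + dt*k3[1])
        x   += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        lam += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return x, lam

# Shoot on lambda(0) by bisection until lambda(T) = 0 holds;
# lambda(T) is monotonically increasing in lambda(0) for this system.
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if propagate(mid)[1] < 0:
        lo = mid
    else:
        hi = mid
lam0 = (lo + hi) / 2

# Known closed-form answer for this problem: lambda(0) = 2*tanh(T).
print(round(lam0, 4), round(2 * math.tanh(1.0), 4))
```

Single shooting is the simplest way to exploit the structure imposed by (1)–(4); in practice, collocation-based boundary value solvers are more robust for stiff or longer-horizon problems.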

## Notes

1. ^ See ref. below for first published work.
2. ^ Here C1[t0, tf] denotes the space of continuously differentiable functions on [t0, tf], and BV[t0, tf] the space of functions of bounded variation on [t0, tf].
3. ^ See p.13 of the 1962 book of Pontryagin et al. referenced below.

## References

• Boltyanskii, V. G.; Gamkrelidze, R. V.; Pontryagin, L. S. (1956). К теории оптимальных процессов [Towards a Theory of Optimal Processes]. Dokl. Akad. Nauk SSSR (in Russian). 110 (1): 7–10. MR 0084444.
• Pontryagin, L. S.; Boltyanskii, V. G.; Gamkrelidze, R. V.; Mishchenko, E. F. (1962). The Mathematical Theory of Optimal Processes. English translation. Interscience. ISBN 2-88124-077-1.
• Fuller, A. T. (1963). "Bibliography of Pontryagin's maximum principle". J. Electronics & Control. 15 (5): 513–517.
• Kirk, D. E. (1970). Optimal Control Theory: An Introduction. Prentice Hall. ISBN 0-486-43484-2.
• Sethi, S. P.; Thompson, G. L. (2000). Optimal Control Theory: Applications to Management Science and Economics (2nd ed.). Springer. ISBN 0-387-28092-8.
• Geering, H. P. (2007). Optimal Control with Engineering Applications. Springer. ISBN 978-3-540-69437-3.
• Ross, I. M. (2009). A Primer on Pontryagin's Principle in Optimal Control. Collegiate. ISBN 978-0-9843571-0-9.
• Cassel, Kevin W. (2013). Variational Methods with Applications in Science and Engineering. Cambridge University Press.