Consider a dynamical system governed by $\dot{x}(t) = f(x(t), u(t), t)$, where $x(t) = \left[x_1(t), x_2(t), \ldots, x_n(t)\right]^{\mathsf{T}}$ denotes a vector of state variables, and $u(t) = \left[u_1(t), u_2(t), \ldots, u_r(t)\right]^{\mathsf{T}}$ a vector of control variables. Once initial conditions $x(t_0) = x_0$ and controls $u(t)$ are specified, a solution to the differential equations, called a trajectory $x(t)$, can be found. The problem of optimal control is to choose $u(t)$ (from a compact and convex set $\mathcal{U} \subseteq \mathbb{R}^r$) so that $x(t)$ maximizes or minimizes a certain objective function between an initial time $t = t_0$ and a terminal time $t = t_1$ (where $t_1$ may be infinity). Specifically, the goal is to optimize a performance index $I(x(t), u(t), t)$ at each point in time,

$$\max_{u(t)} J = \int_{t_0}^{t_1} I(x(t), u(t), t) \, \mathrm{d}t$$
subject to the above equations of motion of the state variables. The solution method involves defining an ancillary function known as the Hamiltonian,

$$H(x(t), u(t), \lambda(t), t) \equiv I(x(t), u(t), t) + \lambda^{\mathsf{T}}(t) \, f(x(t), u(t), t),$$
which combines the objective function and the state equations much like a Lagrangian in a static optimization problem, only that the multipliers $\lambda(t)$, referred to as costate variables, are functions of time rather than constants.
The goal is to find an optimal control policy function $u^*(t)$ and, with it, an optimal trajectory of the state variable $x^*(t)$, which by Pontryagin's maximum principle are the arguments that maximize the Hamiltonian,

$$H(x^*(t), u^*(t), \lambda(t), t) \geq H(x^*(t), u(t), \lambda(t), t) \quad \text{for all } u(t) \in \mathcal{U}.$$
The first-order necessary conditions for a maximum are given by

$$\frac{\partial H}{\partial u} = 0,$$

the maximum principle,

$$\frac{\partial H}{\partial \lambda} = \dot{x}(t),$$

which generates the state equations $\dot{x}(t) = f(x(t), u(t), t)$, and

$$\frac{\partial H}{\partial x} = -\dot{\lambda}(t),$$

which generates

$$\dot{\lambda}(t) = -\left[ I_x\big(x(t), u(t), t\big) + \lambda^{\mathsf{T}}(t) \, f_x\big(x(t), u(t), t\big) \right],$$

the latter of which are referred to as the costate equations. Together, the state and costate equations describe the Hamiltonian dynamical system (again analogous to but distinct from the Hamiltonian system in physics), the solution of which involves a two-point boundary value problem, given that there are boundary conditions involving two different points in time: the initial time (for the differential equations of the state variables) and the terminal time (for the differential equations of the costate variables; unless a final function is specified, the boundary conditions are $\lambda(t_1) = 0$, or $\lim_{t \to \infty} \lambda(t) = 0$ for infinite time horizons).
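To make the two-point structure concrete, consider a minimal numerical sketch for a hypothetical linear-quadratic problem (chosen for illustration, not taken from the text above): maximize $\int_0^1 -(x^2 + u^2) \, \mathrm{d}t$ subject to $\dot{x} = u$ and $x(0) = 1$. Here $H = -(x^2 + u^2) + \lambda u$, so the maximum principle gives $u = \lambda / 2$, and the resulting state-costate system can be handed to SciPy's two-point boundary value solver:

```python
import numpy as np
from scipy.integrate import solve_bvp

# State-costate system from H = -(x^2 + u^2) + lam*u with u = lam/2:
#   x'   = lam / 2    (state equation,    dH/dlam)
#   lam' = 2 * x      (costate equation, -dH/dx)
def rhs(t, y):
    x, lam = y
    return np.vstack((lam / 2.0, 2.0 * x))

# Boundary conditions at two different points in time:
# x(0) = 1 at the initial time, lam(1) = 0 at the terminal time.
def bc(ya, yb):
    return np.array([ya[0] - 1.0, yb[1]])

t = np.linspace(0.0, 1.0, 50)
sol = solve_bvp(rhs, bc, t, np.zeros((2, t.size)))
u_star = sol.sol(t)[1] / 2.0   # recover the optimal control from the costate
```

The solver needs both boundary conditions simultaneously precisely because neither the state equation alone (an initial value problem) nor the costate equation alone (a terminal value problem) pins down the trajectory.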
A sufficient condition for a maximum is the concavity of the Hamiltonian evaluated at the solution, i.e.

$$\frac{\partial^2 H}{\partial u^2} \bigg|_{u = u^*(t)} \leq 0,$$

where $u^*(t)$ is the optimal control and $x^*(t)$ is the resulting optimal trajectory for the state variable. Alternatively, by a result due to Olvi L. Mangasarian, the necessary conditions are sufficient if the functions $I(x, u, t)$ and $f(x, u, t)$ are both concave in $x$ and $u$.
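For a quick check with hypothetical functions (chosen for illustration): if $I(x, u, t) = -(x^2 + u^2)$ and $f(x, u, t) = a x + b u$ for constants $a$ and $b$, then $I$ is strictly concave in $(x, u)$ (its Hessian is $\operatorname{diag}(-2, -2)$) and $f$, being linear, is weakly concave, so Mangasarian's result applies and any solution of the first-order necessary conditions is a maximum.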
A constrained optimization problem of the kind stated above suggests a Lagrangian expression,

$$L = \int_{t_0}^{t_1} \left[ I(x(t), u(t), t) + \lambda^{\mathsf{T}}(t) \big( f(x(t), u(t), t) - \dot{x}(t) \big) \right] \mathrm{d}t,$$

where $\lambda(t)$ compares to the Lagrange multiplier of a static optimization problem. Integrating the last term by parts gives

$$-\int_{t_0}^{t_1} \lambda^{\mathsf{T}}(t) \dot{x}(t) \, \mathrm{d}t = -\lambda^{\mathsf{T}}(t_1) x(t_1) + \lambda^{\mathsf{T}}(t_0) x(t_0) + \int_{t_0}^{t_1} \dot{\lambda}^{\mathsf{T}}(t) x(t) \, \mathrm{d}t,$$

which can be substituted back into the Lagrangian expression to give

$$L = \int_{t_0}^{t_1} \left[ I(x(t), u(t), t) + \lambda^{\mathsf{T}}(t) f(x(t), u(t), t) + \dot{\lambda}^{\mathsf{T}}(t) x(t) \right] \mathrm{d}t - \lambda^{\mathsf{T}}(t_1) x(t_1) + \lambda^{\mathsf{T}}(t_0) x(t_0).$$
To derive the first-order conditions for an optimum, assume that the solution has been found and the Lagrangian is maximized. Then any change to $u(t)$ or $x(t)$ or to the endpoint values $x(t_0)$ and $x(t_1)$ must cause the value of the Lagrangian to decline. Specifically, the total derivative of $L$ obeys

$$\mathrm{d}L = \int_{t_0}^{t_1} \left[ \left( I_u + \lambda^{\mathsf{T}} f_u \right) \mathrm{d}u(t) + \left( I_x + \lambda^{\mathsf{T}} f_x + \dot{\lambda}^{\mathsf{T}} \right) \mathrm{d}x(t) \right] \mathrm{d}t - \lambda^{\mathsf{T}}(t_1) \, \mathrm{d}x(t_1) + \lambda^{\mathsf{T}}(t_0) \, \mathrm{d}x(t_0).$$
For this expression to equal zero necessitates the following optimization conditions:

$$\frac{\partial H}{\partial u} = I_u + \lambda^{\mathsf{T}} f_u = 0, \qquad \dot{\lambda}^{\mathsf{T}} = -\left( I_x + \lambda^{\mathsf{T}} f_x \right) = -\frac{\partial H}{\partial x}.$$
If both the initial value $x(t_0)$ and terminal value $x(t_1)$ are fixed, i.e. $\mathrm{d}x(t_0) = \mathrm{d}x(t_1) = 0$, no conditions on $\lambda(t_0)$ and $\lambda(t_1)$ are needed. If the terminal value is free, as is often the case, the additional condition $\lambda(t_1) = 0$ is necessary for optimality. The latter is called a transversality condition for a fixed horizon problem.
It can be seen that the necessary conditions are identical to the ones stated above for the Hamiltonian. Thus the Hamiltonian can be understood as a device to generate the first-order necessary conditions.
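This "device" view can be demonstrated mechanically. The following sketch (with hypothetical quadratic functions chosen only for illustration) uses SymPy to build a Hamiltonian and read the first-order conditions off by differentiation:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)          # state variable
u = sp.Function('u')(t)          # control variable
lam = sp.Function('lambda')(t)   # costate variable

I = -(x**2 + u**2)   # performance index I(x, u, t), hypothetical
f = x + u            # state equation right-hand side f(x, u, t), hypothetical
H = I + lam * f      # Hamiltonian H = I + lambda * f

print(sp.Eq(sp.diff(H, u), 0))                 # maximum principle: dH/du = 0
print(sp.Eq(sp.diff(x, t), sp.diff(H, lam)))   # state equation:    x' = dH/dlam
print(sp.Eq(sp.diff(lam, t), -sp.diff(H, x)))  # costate equation:  lam' = -dH/dx
```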
(Note that the discrete time Hamiltonian at time $t$ involves the costate variable at time $t+1$. This small detail is essential so that when we differentiate with respect to $x(t)$ we get a term involving $\lambda(t+1)$ on the right hand side of the costate equations. Using a wrong convention here can lead to incorrect results, i.e. a costate equation which is not a backwards difference equation.)
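A minimal numerical sketch of this timing convention, for a hypothetical scalar problem with $I(x_t, u_t) = -(x_t^2 + u_t^2)$ and linear dynamics $x_{t+1} = a x_t + b u_t$ (all values chosen only for illustration): the discrete Hamiltonian $H_t = I(x_t, u_t) + \lambda_{t+1}(a x_t + b u_t)$ yields the backward recursion $\lambda_t = \partial H_t / \partial x_t = -2 x_t + a \lambda_{t+1}$.

```python
import numpy as np

a, b, T = 0.9, 0.5, 20
u = np.full(T, 0.1)      # some fixed candidate control path
x = np.zeros(T + 1)
lam = np.zeros(T + 1)

x[0] = 1.0
for t in range(T):       # forward pass: state equation x_{t+1} = a x_t + b u_t
    x[t + 1] = a * x[t] + b * u[t]

lam[T] = 0.0             # boundary condition at the terminal time
for t in range(T - 1, -1, -1):
    # The costate at t depends on the costate at t+1: a backwards
    # difference equation, exactly because H_t pairs x_t with lam_{t+1}.
    lam[t] = -2.0 * x[t] + a * lam[t + 1]
```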
The Hamiltonian of classical mechanics is a function of three variables, related to the Lagrangian of mechanics by

$$H(q, p, t) = p \, \dot{q} - L(q, \dot{q}, t),$$

where $\dot{q}$ is eliminated in favor of the momentum $p = \partial L / \partial \dot{q}$. Hamilton then formulated his equations to describe the dynamics of the system as

$$\frac{\mathrm{d}p}{\mathrm{d}t} = -\frac{\partial H}{\partial q}, \qquad \frac{\mathrm{d}q}{\mathrm{d}t} = +\frac{\partial H}{\partial p}.$$
The Hamiltonian of control theory describes not the dynamics of a system but conditions for extremizing some scalar function thereof (the Lagrangian) with respect to a control variable $u(t)$. As normally defined, it is a function of 4 variables,

$$H(q, u, p, t) = \langle p, \dot{q}(q, u, t) \rangle - L(q, u, t),$$

where $q$ is the state variable and $u$ is the control variable with respect to which we are extremizing.
The associated conditions for a maximum are

$$\frac{\partial H}{\partial u} = 0, \qquad \frac{\mathrm{d}p}{\mathrm{d}t} = -\frac{\partial H}{\partial q}, \qquad \frac{\mathrm{d}q}{\mathrm{d}t} = +\frac{\partial H}{\partial p}.$$
This definition agrees with that given in the article by Sussmann and Willems (see p. 39, equation 14). Sussmann and Willems show how the control Hamiltonian can be used in dynamics, e.g. for the brachistochrone problem, but do not mention the prior work of Carathéodory on this approach.
In economics, the objective function in dynamic optimization problems often depends directly on time only through exponential discounting, such that it takes the form $I(x(t), u(t), t) = e^{-\rho t} \nu(x(t), u(t))$, where $\nu$ is referred to as the instantaneous utility function. This allows a redefinition of the Hamiltonian as

$$\bar{H}(x(t), u(t), \mu(t)) \equiv e^{\rho t} H(x(t), u(t), \lambda(t), t) = \nu(x(t), u(t)) + \mu^{\mathsf{T}}(t) f(x(t), u(t)),$$

which is referred to as the current value Hamiltonian, in contrast to the present value Hamiltonian $H$ defined in the first section. Most notably the costate variables are redefined as $\mu(t) = e^{\rho t} \lambda(t)$, which leads to the modified first-order conditions

$$\frac{\partial \bar{H}}{\partial u} = 0, \qquad \frac{\partial \bar{H}}{\partial x} = -\dot{\mu}(t) + \rho \mu(t),$$

which follow immediately from the product rule. Economically, the $\mu(t)$ represent current-valued shadow prices for the capital goods $x(t)$.
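Spelling out the product-rule step: differentiating $\mu(t) = e^{\rho t} \lambda(t)$ and using the original costate equation $\dot{\lambda}(t) = -\partial H / \partial x$ together with $\partial \bar{H} / \partial x = e^{\rho t} \, \partial H / \partial x$ gives

$$\dot{\mu}(t) = \rho e^{\rho t} \lambda(t) + e^{\rho t} \dot{\lambda}(t) = \rho \mu(t) - e^{\rho t} \frac{\partial H}{\partial x} = \rho \mu(t) - \frac{\partial \bar{H}}{\partial x},$$

which rearranges to the modified costate condition stated above.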
In economics, the Ramsey–Cass–Koopmans model is used to determine optimal savings behavior for an economy. The objective function $J(c)$ is the social welfare function,

$$J(c) = \int_0^\infty e^{-\rho t} u(c(t)) \, \mathrm{d}t,$$

to be maximized by choice of an optimal consumption path $c(t)$. The function $u(c(t))$ indicates the utility the representative agent derives from consuming $c(t)$ at any given point in time. The factor $e^{-\rho t}$ represents discounting. The maximization problem is subject to the following differential equation for capital intensity, describing the time evolution of capital per effective worker:

$$\dot{k}(t) = f(k(t)) - (n + \delta) k(t) - c(t),$$
where $c(t)$ is period-$t$ consumption, $k(t)$ is period-$t$ capital per worker (with $k(0) = k_0 > 0$), $f(k(t))$ is period-$t$ production, $n$ is the population growth rate, $\delta$ is the capital depreciation rate, and the agent discounts future utility at rate $\rho$, with $u' > 0$ and $u'' < 0$.
Here, $k(t)$ is the state variable which evolves according to the above equation, and $c(t)$ is the control variable. The Hamiltonian becomes

$$H(k, c, \lambda, t) = e^{-\rho t} u(c(t)) + \lambda(t) \left[ f(k(t)) - (n + \delta) k(t) - c(t) \right].$$
The optimality conditions are

$$\frac{\partial H}{\partial c} = 0 \quad \Rightarrow \quad e^{-\rho t} u'(c(t)) = \lambda(t),$$

$$\frac{\partial H}{\partial k} = -\dot{\lambda}(t) \quad \Rightarrow \quad \lambda(t) \left[ f'(k(t)) - (n + \delta) \right] = -\dot{\lambda}(t),$$
in addition to the transversality condition $\lim_{t \to \infty} \lambda(t) k(t) = 0$. If we let $u(c) = \log(c)$, then log-differentiating the first optimality condition with respect to $t$ yields

$$-\rho - \frac{\dot{c}(t)}{c(t)} = \frac{\dot{\lambda}(t)}{\lambda(t)}.$$
Inserting this equation into the second optimality condition yields

$$\frac{\dot{c}(t)}{c(t)} = f'(k(t)) - (n + \delta) - \rho,$$
which is known as the Keynes–Ramsey rule; it gives a condition for consumption in every period which, if followed, ensures maximum lifetime utility.
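As a numerical illustration with hypothetical ingredients not specified in the text above (Cobb–Douglas production $f(k) = k^{\alpha}$, log utility, and arbitrary parameter values), the Keynes–Ramsey rule and the capital accumulation equation form a saddle-path system; a crude shooting method on initial consumption approximates the optimal path:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters, for illustration only.
alpha, n, delta, rho = 0.33, 0.01, 0.05, 0.03
f = lambda k: k**alpha                    # production per effective worker
fp = lambda k: alpha * k**(alpha - 1.0)   # marginal product f'(k)

def rhs(t, y):
    k, c = max(y[0], 1e-12), y[1]   # guard against numerically negative capital
    return [f(k) - (n + delta) * k - c,        # capital accumulation
            c * (fp(k) - (n + delta) - rho)]   # Keynes-Ramsey rule

def exhausted(t, y):     # stop early once capital is effectively gone
    return y[0] - 1e-9
exhausted.terminal = True

k_star = (alpha / (n + delta + rho)) ** (1.0 / (1.0 - alpha))  # f'(k*) = n+delta+rho
k0 = 0.5 * k_star

# Bisect on c(0): too little consumption makes k overshoot k*,
# too much makes capital collapse before the horizon.
lo, hi = 1e-6, f(k0)
for _ in range(50):
    c0 = 0.5 * (lo + hi)
    sol = solve_ivp(rhs, (0.0, 150.0), [k0, c0], events=exhausted, max_step=0.5)
    if sol.y[0, -1] > k_star:
        lo = c0
    else:
        hi = c0
print(f"saddle-path c(0) ~ {c0:.4f}, steady state k* ~ {k_star:.4f}")
```

The steady state solves $f'(k^*) = n + \delta + \rho$, which is where the right-hand side of the Keynes–Ramsey rule vanishes.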
^ Mangasarian, O. L. (1966). "Sufficient Conditions for the Optimal Control of Nonlinear Systems". SIAM Journal on Control. 4 (1): 139–152. doi:10.1137/0304013.
^ Kamien, Morton I.; Schwartz, Nancy L. (1991). Dynamic Optimization: The Calculus of Variations and Optimal Control in Economics and Management (Second ed.). Amsterdam: North-Holland. pp. 126–127. ISBN 0-444-01609-0.