# Hamilton–Jacobi equation

In physics, the Hamilton–Jacobi equation, named after William Rowan Hamilton and Carl Gustav Jacob Jacobi, is an alternative formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics.

The Hamilton–Jacobi equation is a formulation of mechanics in which the motion of a particle can be represented as a wave. In this sense, it fulfilled a long-held goal of theoretical physics (dating at least to Johann Bernoulli in the eighteenth century) of finding an analogy between the propagation of light and the motion of a particle. The wave equation followed by mechanical systems is similar to, but not identical with, Schrödinger's equation, as described below; for this reason, the Hamilton–Jacobi equation is considered the "closest approach" of classical mechanics to quantum mechanics.[1][2] The qualitative form of this connection is called Hamilton's optico-mechanical analogy.

In mathematics, the Hamilton–Jacobi equation is a necessary condition describing extremal geometry in generalizations of problems from the calculus of variations. It can be understood as a special case of the Hamilton–Jacobi–Bellman equation from dynamic programming.[3]

## Overview

The Hamilton–Jacobi equation is a first-order, non-linear partial differential equation

${\displaystyle -{\frac {\partial S}{\partial t}}=H\!\!\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }},t\right).}$

for a system of particles at coordinates ${\displaystyle \mathbf {q} }$. The function ${\displaystyle H}$ is the system's Hamiltonian giving the system's energy. The solution of the equation is the action functional, ${\displaystyle S}$,[4] called Hamilton's principal function in older textbooks. The solution can be related to the system Lagrangian ${\displaystyle \ {\mathcal {L}}\ }$ by an indefinite integral of the form used in the principle of least action:[5]: 431

${\displaystyle \ S=\int {\mathcal {L}}\ \operatorname {d} t+~{\mathsf {some\ constant}}~}$
Geometrical surfaces of constant action are perpendicular to system trajectories, creating a wavefront-like view of the system dynamics. This property of the Hamilton–Jacobi equation connects classical mechanics to quantum mechanics.[6]: 175

## Mathematical formulation

### Notation

Boldface variables such as ${\displaystyle \mathbf {q} }$ represent a list of ${\displaystyle N}$ generalized coordinates,

${\displaystyle \mathbf {q} =(q_{1},q_{2},\ldots ,q_{N-1},q_{N})}$

A dot over a variable or list signifies the time derivative (see Newton's notation). For example,

${\displaystyle {\dot {\mathbf {q} }}={\frac {d\mathbf {q} }{dt}}.}$

The dot product notation between two lists of the same number of coordinates is a shorthand for the sum of the products of corresponding components, such as

${\displaystyle \mathbf {p} \cdot \mathbf {q} =\sum _{k=1}^{N}p_{k}q_{k}.}$

### The action functional (a.k.a. Hamilton's principal function)

#### Definition

Let the Hessian matrix ${\textstyle H_{\cal {L}}(\mathbf {q} ,\mathbf {\dot {q}} ,t)=\left\{\partial ^{2}{\cal {L}}/\partial {\dot {q}}^{i}\partial {\dot {q}}^{j}\right\}_{ij}}$ be invertible. The relation

${\displaystyle {\frac {d}{dt}}{\frac {\partial {\cal {L}}}{\partial {\dot {q}}^{i}}}=\sum _{j=1}^{n}\left({\frac {\partial ^{2}{\cal {L}}}{\partial {\dot {q}}^{i}\partial {\dot {q}}^{j}}}{\ddot {q}}^{j}+{\frac {\partial ^{2}{\cal {L}}}{\partial {\dot {q}}^{i}\partial {q}^{j}}}{\dot {q}}^{j}\right)+{\frac {\partial ^{2}{\cal {L}}}{\partial {\dot {q}}^{i}\partial t}},\qquad i=1,\ldots ,n,}$
shows that the Euler–Lagrange equations form an ${\displaystyle n\times n}$ system of second-order ordinary differential equations. Inverting the matrix ${\displaystyle H_{\cal {L}}}$ transforms this system into
${\displaystyle {\ddot {q}}^{i}=F_{i}(\mathbf {q} ,\mathbf {\dot {q}} ,t),\ i=1,\ldots ,n.}$
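To make this inversion concrete, the following sketch (an illustration added here, not part of the original derivation) uses sympy to reduce the Euler–Lagrange equation of a plane pendulum, with Lagrangian L = ½ m l² θ̇² + m g l cos θ, to the explicit second-order form:

```python
import sympy as sp

t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)
theta = sp.Function('theta')

# Plane-pendulum Lagrangian: L = (1/2) m l^2 θ'^2 + m g l cos θ
L = sp.Rational(1, 2) * m * l**2 * theta(t).diff(t)**2 + m * g * l * sp.cos(theta(t))

# Euler–Lagrange equation d/dt(∂L/∂θ') − ∂L/∂θ = 0; here the Hessian
# ∂²L/∂θ'² = m l² is invertible, so the equation can be solved for θ''
EL = sp.diff(L, theta(t).diff(t)).diff(t) - sp.diff(L, theta(t))
F = sp.solve(sp.Eq(EL, 0), theta(t).diff(t, 2))[0]

print(sp.simplify(F))  # -g*sin(theta(t))/l
```

Here the one-dimensional Hessian is simply ${\displaystyle ml^{2}}$, so the inversion is a division; for several coordinates the same step requires the full matrix inverse.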

Let a time instant ${\displaystyle t_{0}}$ and a point ${\displaystyle \mathbf {q} _{0}\in M}$ in the configuration space be fixed. The existence and uniqueness theorems guarantee that, for every ${\displaystyle \mathbf {v} _{0},}$ the initial value problem with the conditions ${\displaystyle \gamma |_{\tau =t_{0}}=\mathbf {q} _{0}}$ and ${\displaystyle {\dot {\gamma }}|_{\tau =t_{0}}=\mathbf {v} _{0}}$ has a locally unique solution ${\displaystyle \gamma =\gamma (\tau ;t_{0},\mathbf {q} _{0},\mathbf {v} _{0}).}$ Additionally, let there be a sufficiently small time interval ${\displaystyle (t_{0},t_{1})}$ such that extremals with different initial velocities ${\displaystyle \mathbf {v} _{0}}$ do not intersect in ${\displaystyle M\times (t_{0},t_{1}).}$ The latter means that, for any ${\displaystyle \mathbf {q} \in M}$ and any ${\displaystyle t\in (t_{0},t_{1}),}$ there can be at most one extremal ${\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})}$ for which ${\displaystyle \gamma |_{\tau =t_{0}}=\mathbf {q} _{0}}$ and ${\displaystyle \gamma |_{\tau =t}=\mathbf {q} .}$ Substituting ${\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})}$ into the action functional results in Hamilton's principal function (HPF)

${\displaystyle S(\mathbf {q} ,t;\mathbf {q} _{0},t_{0})\ {\stackrel {\text{def}}{=}}\int _{t_{0}}^{t}{\mathcal {L}}(\gamma (\tau ;\cdot ),{\dot {\gamma }}(\tau ;\cdot ),\tau )\,d\tau ,}$

where

• ${\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0}),}$
• ${\displaystyle \gamma |_{\tau =t_{0}}=\mathbf {q} _{0},}$
• ${\displaystyle \gamma |_{\tau =t}=\mathbf {q} .}$
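As a numerical illustration of this definition (a sketch added here, not in the original): for a free particle with L = ½ m q̇², the unique extremal joining (q₀, t₀) to (q, t) is uniform motion, and evaluating the action integral along it reproduces the closed-form HPF S = m (q − q₀)² / (2(t − t₀)).

```python
import numpy as np
from scipy.integrate import quad

m = 2.0
q0, t0 = 0.0, 0.0   # fixed starting point and time
q, t = 3.0, 1.5     # endpoint at which the HPF is evaluated

# The extremal of L = (1/2) m q̇² joining the endpoints is uniform motion,
# so the Lagrangian is constant along it
v = (q - q0) / (t - t0)
lagrangian = lambda tau: 0.5 * m * v**2

# HPF = action evaluated on the extremal, versus the known closed form
S_quad, _ = quad(lagrangian, t0, t)
S_closed = m * (q - q0)**2 / (2 * (t - t0))
print(S_quad, S_closed)  # 6.0 6.0
```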

### Formula for the momenta

The momenta are defined as the quantities ${\textstyle p_{i}(\mathbf {q} ,\mathbf {\dot {q}} ,t)=\partial {\cal {L}}/\partial {\dot {q}}^{i}.}$ This section shows that the dependency of ${\displaystyle p_{i}}$ on ${\displaystyle \mathbf {\dot {q}} }$ disappears, once the HPF is known.

Indeed, let a time instant ${\displaystyle t_{0}}$ and a point ${\displaystyle \mathbf {q} _{0}}$ in the configuration space be fixed. For every time instant ${\displaystyle t}$ and a point ${\displaystyle \mathbf {q} ,}$ let ${\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})}$ be the (unique) extremal from the definition of Hamilton's principal function ${\displaystyle S}$. Call ${\displaystyle \mathbf {v} \,{\stackrel {\text{def}}{=}}\,{\dot {\gamma }}(\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})|_{\tau =t}}$ the velocity at ${\displaystyle \tau =t}$. Then

${\displaystyle {\frac {\partial S}{\partial q^{i}}}=\left.{\frac {\partial {\cal {L}}}{\partial {\dot {q}}^{i}}}\right|_{\mathbf {\dot {q}} =\mathbf {v} }\!\!\!\!\!\!\!,\quad i=1,\ldots ,n.}$

Proof

While the proof below assumes the configuration space to be an open subset of ${\displaystyle \mathbb {R} ^{n},}$ the underlying technique applies equally to arbitrary spaces. In the context of this proof, the calligraphic letter ${\displaystyle {\cal {S}}}$ denotes the action functional, and the italic ${\displaystyle S}$ Hamilton's principal function.

Step 1. Let ${\displaystyle \xi =\xi (t)}$ be a path in the configuration space, and ${\displaystyle \delta \xi =\delta \xi (t)}$ a vector field along ${\displaystyle \xi }$. (For each ${\displaystyle t,}$ the vector ${\displaystyle \delta \xi (t)}$ is called a perturbation, infinitesimal variation or virtual displacement of the mechanical system at the point ${\displaystyle \xi (t)}$). Recall that the variation ${\displaystyle \delta {\cal {S}}_{\delta \xi }[\xi ,t_{1},t_{0}]}$ of the action ${\displaystyle {\cal {S}}}$ at the point ${\displaystyle \xi }$ in the direction ${\displaystyle \delta \xi }$ is given by the formula

${\displaystyle \delta {\cal {S}}_{\delta \xi }[\xi ,t_{1},t_{0}]=\int _{t_{0}}^{t_{1}}\left({\frac {\partial {\cal {L}}}{\partial \mathbf {q} }}-{\frac {d}{dt}}{\frac {\partial {\cal {L}}}{\partial \mathbf {\dot {q}} }}\right)\delta \xi \,dt+{\frac {\partial {\cal {L}}}{\partial \mathbf {\dot {q}} }}\,\delta \xi {\Biggl |}_{t_{0}}^{t_{1}},}$
where one should substitute ${\displaystyle q^{i}=\xi ^{i}(t)}$ and ${\displaystyle {\dot {q}}^{i}={\dot {\xi }}^{i}(t)}$ after calculating the partial derivatives on the right-hand side. (This formula follows from the definition of Gateaux derivative via integration by parts).

Assume that ${\displaystyle \xi }$ is an extremal. Since ${\displaystyle \xi }$ now satisfies the Euler–Lagrange equations, the integral term vanishes. If ${\displaystyle \xi }$'s starting point ${\displaystyle \mathbf {q} _{0}}$ is fixed, then, by the same logic that was used to derive the Euler–Lagrange equations, ${\displaystyle \delta \xi (t_{0})=0.}$ Thus,

${\displaystyle \delta {\cal {S}}_{\delta \xi }[\xi ,t;t_{0}]=\left.{\frac {\partial {\cal {L}}}{\partial \mathbf {\dot {q}} }}\right|_{\mathbf {\dot {q}} ={\dot {\xi }}(t)}^{\mathbf {q} =\xi (t)}\,\delta \xi (t).}$

Step 2. Let ${\displaystyle \gamma =\gamma (\tau ;\mathbf {q} ,\mathbf {q} _{0},t,t_{0})}$ be the (unique) extremal from the definition of HPF, ${\displaystyle \delta \gamma =\delta \gamma (\tau )}$ a vector field along ${\displaystyle \gamma ,}$ and ${\displaystyle \gamma _{\varepsilon }=\gamma _{\varepsilon }(\tau ;\mathbf {q} _{\varepsilon },\mathbf {q} _{0},t,t_{0})}$ a variation of ${\displaystyle \gamma }$ "compatible" with ${\displaystyle \delta \gamma .}$ In precise terms, ${\displaystyle \gamma _{\varepsilon }|_{\varepsilon =0}=\gamma ,}$ ${\displaystyle \left.{\tfrac {d\gamma _{\varepsilon }}{d\varepsilon }}\right|_{\varepsilon =0}=\delta \gamma ,}$ ${\displaystyle \gamma _{\varepsilon }|_{\tau =t_{0}}=\gamma |_{\tau =t_{0}}=\mathbf {q} _{0}.}$

By definition of HPF and Gateaux derivative,

${\displaystyle \delta {\cal {S}}_{\delta \gamma }[\gamma ,t]{\overset {\text{def}}{{}={}}}\left.{\frac {d{\cal {S}}[\gamma _{\varepsilon },t]}{d\varepsilon }}\right|_{\varepsilon =0}=\left.{\frac {dS(\gamma _{\varepsilon }(t),t)}{d\varepsilon }}\right|_{\varepsilon =0}={\frac {\partial S}{\mathbf {\partial q} }}\,\delta \gamma (t).}$

Here, we took into account that ${\displaystyle \mathbf {q} =\gamma (t;\mathbf {q} ,\mathbf {q} _{0},t,t_{0})}$ and dropped ${\displaystyle t_{0}}$ for compactness.

Step 3. We now substitute ${\displaystyle \xi =\gamma }$ and ${\displaystyle \delta \xi =\delta \gamma }$ into the expression for ${\displaystyle \delta {\cal {S}}_{\delta \xi }[\xi ,t;t_{0}]}$ from Step 1 and compare the result with the formula derived in Step 2. The fact that, for ${\displaystyle t>t_{0},}$ the vector field ${\displaystyle \delta \gamma }$ was chosen arbitrarily completes the proof.
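The momentum formula can be verified directly on a simple example (an illustrative check added here, not part of the original proof): for a free particle, the HPF is S = m (q − q₀)² / (2(t − t₀)), the connecting extremal has velocity v = (q − q₀)/(t − t₀) at every instant, and ∂S/∂q indeed equals ∂L/∂q̇ = m v.

```python
import sympy as sp

m, q, q0, t, t0 = sp.symbols('m q q0 t t0', real=True)

# Free-particle HPF (L = (1/2) m q̇², assuming t > t0)
S = m * (q - q0)**2 / (2 * (t - t0))

# Velocity of the extremal joining (q0, t0) to (q, t) at τ = t,
# and the corresponding momentum p = ∂L/∂q̇ = m v
v = (q - q0) / (t - t0)
p = m * v

print(sp.simplify(sp.diff(S, q) - p))  # 0, confirming ∂S/∂q = p
```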

### Formula

Given the Hamiltonian ${\displaystyle H(\mathbf {q} ,\mathbf {p} ,t)}$ of a mechanical system, the Hamilton–Jacobi equation is a first-order, non-linear partial differential equation for the Hamilton's principal function ${\displaystyle S}$,[7]

${\displaystyle -{\frac {\partial S}{\partial t}}=H\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }},t\right).}$

Derivation

For an extremal ${\displaystyle \xi =\xi (t;t_{0},\mathbf {q} _{0},\mathbf {v} _{0}),}$ where ${\displaystyle \mathbf {v} _{0}={\dot {\xi }}|_{t=t_{0}}}$ is the initial velocity (see discussion preceding the definition of HPF),

${\displaystyle {\cal {L}}(\xi (t),{\dot {\xi }}(t),t)={\frac {dS(\xi (t),t)}{dt}}=\left[{\frac {\partial S}{\partial \mathbf {q} }}\mathbf {\dot {q}} +{\frac {\partial S}{\partial t}}\right]_{\mathbf {\dot {q}} ={\dot {\xi }}(t)}^{\mathbf {q} =\xi (t)}.}$

From the formula for ${\displaystyle p_{i}=p_{i}(\mathbf {q} ,t)}$ and the coordinate-based definition of the Hamiltonian

${\displaystyle H(\mathbf {q} ,\mathbf {p} ,t)=\mathbf {p} \mathbf {\dot {q}} -{\cal {L}}(\mathbf {q} ,\mathbf {\dot {q}} ,t),}$
with ${\displaystyle \mathbf {\dot {q}} (\mathbf {p} ,\mathbf {q} ,t)}$ satisfying the equation ${\textstyle \mathbf {p} ={\frac {\partial {\cal {L}}(\mathbf {q} ,\mathbf {\dot {q}} ,t)}{\partial \mathbf {\dot {q}} }}}$ (uniquely solvable for ${\displaystyle \mathbf {\dot {q}} }$), obtain
${\displaystyle {\frac {\partial S}{\partial t}}={\cal {L}}(\mathbf {q} ,\mathbf {\dot {q}} ,t)-{\frac {\partial S}{\mathbf {\partial q} }}\mathbf {\dot {q}} =-H\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }},t\right),}$
where ${\displaystyle \mathbf {q} =\xi (t)}$ and ${\displaystyle \mathbf {\dot {q}} ={\dot {\xi }}(t).}$
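As a sanity check of the result (an illustration added here, not in the original derivation), one can verify symbolically that the free-particle HPF S = m (q − q₀)² / (2(t − t₀)) satisfies −∂S/∂t = H(q, ∂S/∂q) with H = p²/(2m):

```python
import sympy as sp

m, q, q0, t, t0 = sp.symbols('m q q0 t t0', real=True)

# Free-particle HPF and Hamiltonian H(q, p) = p² / (2m)
S = m * (q - q0)**2 / (2 * (t - t0))
p = sp.diff(S, q)          # momentum from the HPF
H = p**2 / (2 * m)

# Hamilton–Jacobi equation: ∂S/∂t + H(q, ∂S/∂q, t) = 0
print(sp.simplify(sp.diff(S, t) + H))  # 0
```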

Alternatively, as described below, the Hamilton–Jacobi equation may be derived from Hamiltonian mechanics by treating ${\displaystyle S}$ as the generating function for a canonical transformation of the classical Hamiltonian

${\displaystyle H=H(q_{1},q_{2},\ldots ,q_{N};p_{1},p_{2},\ldots ,p_{N};t).}$

The conjugate momenta correspond to the first derivatives of ${\displaystyle S}$ with respect to the generalized coordinates

${\displaystyle p_{k}={\frac {\partial S}{\partial q_{k}}}.}$

As a solution to the Hamilton–Jacobi equation, the principal function contains ${\displaystyle N+1}$ undetermined constants, the first ${\displaystyle N}$ of them denoted as ${\displaystyle \alpha _{1},\,\alpha _{2},\dots ,\alpha _{N}}$, and the last one coming from the integration of ${\displaystyle {\frac {\partial S}{\partial t}}}$.

The relationship between ${\displaystyle \mathbf {p} }$ and ${\displaystyle \mathbf {q} }$ then describes the orbit in phase space in terms of these constants of motion. Furthermore, the quantities

${\displaystyle \beta _{k}={\frac {\partial S}{\partial \alpha _{k}}},\quad k=1,2,\ldots ,N}$
are also constants of motion, and these equations can be inverted to find ${\displaystyle \mathbf {q} }$ as a function of all the ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ constants and time.[8]
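For example (a sketch added for illustration, not from the original text), for a free particle the complete integral S = αq − α²t/(2m) solves the HJE, and inverting β = ∂S/∂α recovers the uniform-motion trajectory:

```python
import sympy as sp

m, q, t, alpha, beta = sp.symbols('m q t alpha beta', real=True)

# A complete integral of the free-particle HJE, with α the constant momentum
S = alpha * q - alpha**2 * t / (2 * m)

# Check it solves -∂S/∂t = (∂S/∂q)² / (2m)
assert sp.simplify(sp.diff(S, t) + sp.diff(S, q)**2 / (2 * m)) == 0

# β = ∂S/∂α is a second constant of motion; inverting gives q(t)
traj = sp.solve(sp.Eq(beta, sp.diff(S, alpha)), q)[0]
print(traj)  # uniform motion: beta + alpha*t/m
```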

## Comparison with other formulations of mechanics

The Hamilton–Jacobi equation is a single, first-order partial differential equation for the function ${\displaystyle S}$ of the ${\displaystyle N}$ generalized coordinates ${\displaystyle q_{1},\,q_{2},\dots ,q_{N}}$ and the time ${\displaystyle t}$. The generalized momenta do not appear, except as derivatives of ${\displaystyle S}$, the classical action.

For comparison, in the equivalent Euler–Lagrange equations of motion of Lagrangian mechanics, the conjugate momenta also do not appear; however, those equations are a system of ${\displaystyle N}$ generally second-order equations for the time evolution of the generalized coordinates. Similarly, Hamilton's equations of motion are another system of ${\displaystyle 2N}$ first-order equations for the time evolution of the generalized coordinates and their conjugate momenta ${\displaystyle p_{1},\,p_{2},\dots ,p_{N}}$.

Since the HJE is an equivalent expression of an integral minimization problem such as Hamilton's principle, the HJE can be useful in other problems of the calculus of variations and, more generally, in other branches of mathematics and physics, such as dynamical systems, symplectic geometry and quantum chaos. For example, the Hamilton–Jacobi equations can be used to determine the geodesics on a Riemannian manifold, an important variational problem in Riemannian geometry. However, as a computational tool, the partial differential equations are notoriously complicated to solve except when it is possible to separate the independent variables; in that case, the HJE becomes computationally useful.[5]: 444

## Derivation using a canonical transformation

Any canonical transformation involving a type-2 generating function ${\displaystyle G_{2}(\mathbf {q} ,\mathbf {P} ,t)}$ leads to the relations

${\displaystyle \mathbf {p} ={\partial G_{2} \over \partial \mathbf {q} },\quad \mathbf {Q} ={\partial G_{2} \over \partial \mathbf {P} },\quad K(\mathbf {Q} ,\mathbf {P} ,t)=H(\mathbf {q} ,\mathbf {p} ,t)+{\partial G_{2} \over \partial t}}$
and Hamilton's equations in terms of the new variables ${\displaystyle \mathbf {P} ,\,\mathbf {Q} }$ and new Hamiltonian ${\displaystyle K}$ have the same form:
${\displaystyle {\dot {\mathbf {P} }}=-{\partial K \over \partial \mathbf {Q} },\quad {\dot {\mathbf {Q} }}=+{\partial K \over \partial \mathbf {P} }.}$

To derive the HJE, a generating function ${\displaystyle G_{2}(\mathbf {q} ,\mathbf {P} ,t)}$ is chosen in such a way that it makes the new Hamiltonian ${\displaystyle K=0}$. Hence, all its derivatives are also zero, and the transformed Hamilton's equations become trivial

${\displaystyle {\dot {\mathbf {P} }}={\dot {\mathbf {Q} }}=0}$
so the new generalized coordinates and momenta are constants of motion. As they are constants, in this context the new generalized momenta ${\displaystyle \mathbf {P} }$ are usually denoted ${\displaystyle \alpha _{1},\,\alpha _{2},\dots ,\alpha _{N}}$, i.e. ${\displaystyle P_{m}=\alpha _{m}}$ and the new generalized coordinates ${\displaystyle \mathbf {Q} }$ are typically denoted as ${\displaystyle \beta _{1},\,\beta _{2},\dots ,\beta _{N}}$, so ${\displaystyle Q_{m}=\beta _{m}}$.

Setting the generating function equal to Hamilton's principal function, plus an arbitrary constant ${\displaystyle A}$:

${\displaystyle G_{2}(\mathbf {q} ,{\boldsymbol {\alpha }},t)=S(\mathbf {q} ,t)+A,}$
the HJE automatically arises
${\displaystyle \mathbf {p} ={\frac {\partial G_{2}}{\partial \mathbf {q} }}={\frac {\partial S}{\partial \mathbf {q} }}\,\rightarrow \,H(\mathbf {q} ,\mathbf {p} ,t)+{\partial G_{2} \over \partial t}=0\,\rightarrow \,H\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }},t\right)+{\partial S \over \partial t}=0.}$

When solved for ${\displaystyle S(\mathbf {q} ,{\boldsymbol {\alpha }},t)}$, these also give us the useful equations

${\displaystyle \mathbf {Q} ={\boldsymbol {\beta }}={\partial S \over \partial {\boldsymbol {\alpha }}},}$
or written in components for clarity
${\displaystyle Q_{m}=\beta _{m}={\frac {\partial S(\mathbf {q} ,{\boldsymbol {\alpha }},t)}{\partial \alpha _{m}}}.}$

Ideally, these N equations can be inverted to find the original generalized coordinates ${\displaystyle \mathbf {q} }$ as a function of the constants ${\displaystyle {\boldsymbol {\alpha }},\,{\boldsymbol {\beta }},}$ and ${\displaystyle t}$, thus solving the original problem.

## Separation of variables

When the problem allows additive separation of variables, the HJE leads directly to constants of motion. For example, the time t can be separated if the Hamiltonian does not depend on time explicitly. In that case, the time derivative ${\displaystyle {\frac {\partial S}{\partial t}}}$ in the HJE must be a constant, usually denoted (${\displaystyle -E}$), giving the separated solution

${\displaystyle S=W(q_{1},q_{2},\ldots ,q_{N})-Et}$
where the time-independent function ${\displaystyle W(\mathbf {q} )}$ is sometimes called the abbreviated action or Hamilton's characteristic function [5]: 434  and sometimes[9]: 607  written ${\displaystyle S_{0}}$ (see action principle names). The reduced Hamilton–Jacobi equation can then be written
${\displaystyle H\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }}\right)=E.}$
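For instance (an illustrative check added here, not part of the original text), for the one-dimensional harmonic oscillator H = p²/(2m) + ½mω²q², the separated ansatz gives dW/dq = √(2m(E − ½mω²q²)), which satisfies the reduced equation identically:

```python
import sympy as sp

m, omega, E, q = sp.symbols('m omega E q', positive=True)

# 1D harmonic oscillator potential and Hamiltonian H = p²/(2m) + U(q)
U = sp.Rational(1, 2) * m * omega**2 * q**2

# From S = W(q) − E t, the momentum is p = dW/dq = sqrt(2m(E − U))
dW = sp.sqrt(2 * m * (E - U))

# Reduced Hamilton–Jacobi equation: H(q, dW/dq) = E
H = dW**2 / (2 * m) + U
print(sp.simplify(H - E))  # 0
```

The abbreviated action ${\displaystyle W}$ itself then follows by one quadrature of dW/dq over the classically allowed region.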

To illustrate separability for other variables, a certain generalized coordinate ${\displaystyle q_{k}}$ and its derivative ${\displaystyle {\frac {\partial S}{\partial q_{k}}}}$ are assumed to appear together as a single function

${\displaystyle \psi \left(q_{k},{\frac {\partial S}{\partial q_{k}}}\right)}$
in the Hamiltonian
${\displaystyle H=H(q_{1},q_{2},\ldots ,q_{k-1},q_{k+1},\ldots ,q_{N};p_{1},p_{2},\ldots ,p_{k-1},p_{k+1},\ldots ,p_{N};\psi ;t).}$

In that case, the function ${\displaystyle S}$ can be partitioned into two functions, one that depends only on ${\displaystyle q_{k}}$ and another that depends only on the remaining generalized coordinates

${\displaystyle S=S_{k}(q_{k})+S_{\text{rem}}(q_{1},\ldots ,q_{k-1},q_{k+1},\ldots ,q_{N},t).}$

Substitution of these formulae into the Hamilton–Jacobi equation shows that the function ${\displaystyle \psi }$ must be a constant (denoted here as ${\displaystyle \Gamma _{k}}$), yielding a first-order ordinary differential equation for ${\displaystyle S_{k}(q_{k}),}$

${\displaystyle \psi \left(q_{k},{\frac {dS_{k}}{dq_{k}}}\right)=\Gamma _{k}.}$

In fortunate cases, the function ${\displaystyle S}$ can be separated completely into ${\displaystyle N}$ functions ${\displaystyle S_{m}(q_{m}),}$

${\displaystyle S=S_{1}(q_{1})+S_{2}(q_{2})+\cdots +S_{N}(q_{N})-Et.}$

In such a case, the problem devolves to ${\displaystyle N}$ ordinary differential equations.

The separability of S depends both on the Hamiltonian and on the choice of generalized coordinates. For orthogonal coordinates and Hamiltonians that have no time dependence and are quadratic in the generalized momenta, ${\displaystyle S}$ will be completely separable if the potential energy is additively separable in each coordinate, where the potential energy term for each coordinate is multiplied by the coordinate-dependent factor in the corresponding momentum term of the Hamiltonian (the Staeckel conditions). For illustration, several examples in orthogonal coordinates are worked in the next sections.

### Examples in various coordinate systems

#### Spherical coordinates

In spherical coordinates the Hamiltonian of a free particle moving in a conservative potential U can be written

${\displaystyle H={\frac {1}{2m}}\left[p_{r}^{2}+{\frac {p_{\theta }^{2}}{r^{2}}}+{\frac {p_{\phi }^{2}}{r^{2}\sin ^{2}\theta }}\right]+U(r,\theta ,\phi ).}$

The Hamilton–Jacobi equation is completely separable in these coordinates provided that there exist functions ${\displaystyle U_{r}(r),U_{\theta }(\theta ),U_{\phi }(\phi )}$ such that ${\displaystyle U}$ can be written in the analogous form

${\displaystyle U(r,\theta ,\phi )=U_{r}(r)+{\frac {U_{\theta }(\theta )}{r^{2}}}+{\frac {U_{\phi }(\phi )}{r^{2}\sin ^{2}\theta }}.}$

Substitution of the completely separated solution

${\displaystyle S=S_{r}(r)+S_{\theta }(\theta )+S_{\phi }(\phi )-Et}$
into the HJE yields
${\displaystyle {\frac {1}{2m}}\left({\frac {dS_{r}}{dr}}\right)^{2}+U_{r}(r)+{\frac {1}{2mr^{2}}}\left[\left({\frac {dS_{\theta }}{d\theta }}\right)^{2}+2mU_{\theta }(\theta )\right]+{\frac {1}{2mr^{2}\sin ^{2}\theta }}\left[\left({\frac {dS_{\phi }}{d\phi }}\right)^{2}+2mU_{\phi }(\phi )\right]=E.}$

This equation may be solved by successive integrations of ordinary differential equations, beginning with the equation for ${\displaystyle \phi }$

${\displaystyle \left({\frac {dS_{\phi }}{d\phi }}\right)^{2}+2mU_{\phi }(\phi )=\Gamma _{\phi }}$
where ${\displaystyle \Gamma _{\phi }}$ is a constant of the motion that eliminates the ${\displaystyle \phi }$ dependence from the Hamilton–Jacobi equation
${\displaystyle {\frac {1}{2m}}\left({\frac {dS_{r}}{dr}}\right)^{2}+U_{r}(r)+{\frac {1}{2mr^{2}}}\left[\left({\frac {dS_{\theta }}{d\theta }}\right)^{2}+2mU_{\theta }(\theta )+{\frac {\Gamma _{\phi }}{\sin ^{2}\theta }}\right]=E.}$

The next ordinary differential equation involves the ${\displaystyle \theta }$ generalized coordinate

${\displaystyle \left({\frac {dS_{\theta }}{d\theta }}\right)^{2}+2mU_{\theta }(\theta )+{\frac {\Gamma _{\phi }}{\sin ^{2}\theta }}=\Gamma _{\theta }}$
where ${\displaystyle \Gamma _{\theta }}$ is again a constant of the motion that eliminates the ${\displaystyle \theta }$ dependence and reduces the HJE to the final ordinary differential equation
${\displaystyle {\frac {1}{2m}}\left({\frac {dS_{r}}{dr}}\right)^{2}+U_{r}(r)+{\frac {\Gamma _{\theta }}{2mr^{2}}}=E}$
whose integration completes the solution for ${\displaystyle S}$.
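The consistency of this scheme can be checked symbolically (an illustrative sketch added here, not part of the original text): substituting the squared derivatives implied by the three separated equations back into the separated HJE reduces it identically to E.

```python
import sympy as sp

m, r, th, ph, E = sp.symbols('m r theta phi E', positive=True)
Gp, Gt = sp.symbols('Gamma_phi Gamma_theta', real=True)
Ur = sp.Function('U_r')(r)
Ut = sp.Function('U_theta')(th)
Up = sp.Function('U_phi')(ph)

# Squared derivatives (dS/dq)² read off from the three separated ODEs
dSp2 = Gp - 2*m*Up                       # φ equation
dSt2 = Gt - 2*m*Ut - Gp/sp.sin(th)**2    # θ equation
dSr2 = 2*m*(E - Ur) - Gt/r**2            # r equation

# Left-hand side of the separated Hamilton–Jacobi equation
lhs = (dSr2/(2*m) + Ur
       + (dSt2 + 2*m*Ut)/(2*m*r**2)
       + (dSp2 + 2*m*Up)/(2*m*r**2*sp.sin(th)**2))

print(sp.simplify(lhs - E))  # 0
```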

#### Elliptic cylindrical coordinates

The Hamiltonian in elliptic cylindrical coordinates can be written

${\displaystyle H={\frac {p_{\mu }^{2}+p_{\nu }^{2}}{2ma^{2}\left(\sinh ^{2}\mu +\sin ^{2}\nu \right)}}+{\frac {p_{z}^{2}}{2m}}+U(\mu ,\nu ,z)}$
where the foci of the ellipses are located at ${\displaystyle \pm a}$ on the ${\displaystyle x}$-axis. The Hamilton–Jacobi equation is completely separable in these coordinates provided that ${\displaystyle U}$ has an analogous form
${\displaystyle U(\mu ,\nu ,z)={\frac {U_{\mu }(\mu )+U_{\nu }(\nu )}{\sinh ^{2}\mu +\sin ^{2}\nu }}+U_{z}(z)}$
where ${\displaystyle U_{\mu }(\mu )}$, ${\displaystyle U_{\nu }(\nu )}$ and ${\displaystyle U_{z}(z)}$ are arbitrary functions. Substitution of the completely separated solution
${\displaystyle S=S_{\mu }(\mu )+S_{\nu }(\nu )+S_{z}(z)-Et}$
into the HJE yields
${\displaystyle {\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+U_{z}(z)+{\frac {1}{2ma^{2}\left(\sinh ^{2}\mu +\sin ^{2}\nu \right)}}\left[\left({\frac {dS_{\mu }}{d\mu }}\right)^{2}+\left({\frac {dS_{\nu }}{d\nu }}\right)^{2}+2ma^{2}U_{\mu }(\mu )+2ma^{2}U_{\nu }(\nu )\right]=E.}$

Separating the first ordinary differential equation

${\displaystyle {\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+U_{z}(z)=\Gamma _{z}}$
yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator)
${\displaystyle \left({\frac {dS_{\mu }}{d\mu }}\right)^{2}+\left({\frac {dS_{\nu }}{d\nu }}\right)^{2}+2ma^{2}U_{\mu }(\mu )+2ma^{2}U_{\nu }(\nu )=2ma^{2}\left(\sinh ^{2}\mu +\sin ^{2}\nu \right)\left(E-\Gamma _{z}\right)}$
which itself may be separated into two independent ordinary differential equations
${\displaystyle \left({\frac {dS_{\mu }}{d\mu }}\right)^{2}+2ma^{2}U_{\mu }(\mu )+2ma^{2}\left(\Gamma _{z}-E\right)\sinh ^{2}\mu =\Gamma _{\mu }}$
${\displaystyle \left({\frac {dS_{\nu }}{d\nu }}\right)^{2}+2ma^{2}U_{\nu }(\nu )+2ma^{2}\left(\Gamma _{z}-E\right)\sin ^{2}\nu =\Gamma _{\nu }}$
that, when solved, provide a complete solution for ${\displaystyle S}$.

#### Parabolic cylindrical coordinates

The Hamiltonian in parabolic cylindrical coordinates can be written

${\displaystyle H={\frac {p_{\sigma }^{2}+p_{\tau }^{2}}{2m\left(\sigma ^{2}+\tau ^{2}\right)}}+{\frac {p_{z}^{2}}{2m}}+U(\sigma ,\tau ,z).}$

The Hamilton–Jacobi equation is completely separable in these coordinates provided that ${\displaystyle U}$ has an analogous form

${\displaystyle U(\sigma ,\tau ,z)={\frac {U_{\sigma }(\sigma )+U_{\tau }(\tau )}{\sigma ^{2}+\tau ^{2}}}+U_{z}(z)}$
where ${\displaystyle U_{\sigma }(\sigma )}$, ${\displaystyle U_{\tau }(\tau )}$, and ${\displaystyle U_{z}(z)}$ are arbitrary functions. Substitution of the completely separated solution
${\displaystyle S=S_{\sigma }(\sigma )+S_{\tau }(\tau )+S_{z}(z)-Et+{\text{constant}}}$
into the HJE yields
${\displaystyle {\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+U_{z}(z)+{\frac {1}{2m\left(\sigma ^{2}+\tau ^{2}\right)}}\left[\left({\frac {dS_{\sigma }}{d\sigma }}\right)^{2}+\left({\frac {dS_{\tau }}{d\tau }}\right)^{2}+2mU_{\sigma }(\sigma )+2mU_{\tau }(\tau )\right]=E.}$

Separating the first ordinary differential equation

${\displaystyle {\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+U_{z}(z)=\Gamma _{z}}$
yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator)
${\displaystyle \left({\frac {dS_{\sigma }}{d\sigma }}\right)^{2}+\left({\frac {dS_{\tau }}{d\tau }}\right)^{2}+2mU_{\sigma }(\sigma )+2mU_{\tau }(\tau )=2m\left(\sigma ^{2}+\tau ^{2}\right)\left(E-\Gamma _{z}\right)}$
which itself may be separated into two independent ordinary differential equations
${\displaystyle \left({\frac {dS_{\sigma }}{d\sigma }}\right)^{2}+2mU_{\sigma }(\sigma )+2m\sigma ^{2}\left(\Gamma _{z}-E\right)=\Gamma _{\sigma }}$
${\displaystyle \left({\frac {dS_{\tau }}{d\tau }}\right)^{2}+2mU_{\tau }(\tau )+2m\tau ^{2}\left(\Gamma _{z}-E\right)=\Gamma _{\tau }}$
that, when solved, provide a complete solution for ${\displaystyle S}$.

## Waves and particles

### Optical wave fronts and trajectories

The HJE establishes a duality between trajectories and wavefronts.[10] For example, in geometrical optics, light can be considered either as “rays” or waves. The wave front can be defined as the surface ${\textstyle {\cal {C}}_{t}}$ that the light emitted at time ${\textstyle t=0}$ has reached at time ${\textstyle t}$. Light rays and wave fronts are dual: if one is known, the other can be deduced.

More precisely, geometrical optics is a variational problem where the “action” is the travel time ${\textstyle T}$ along a path,

${\displaystyle T={\frac {1}{c}}\int _{A}^{B}n\,ds}$
where ${\textstyle n}$ is the medium's index of refraction and ${\textstyle ds}$ is an infinitesimal arc length. From the above formulation, one can compute the ray paths using the Euler–Lagrange formulation; alternatively, one can compute the wave fronts by solving the Hamilton–Jacobi equation. Knowing one leads to knowing the other.

The above duality is very general and applies to all systems that derive from a variational principle: either compute the trajectories using Euler–Lagrange equations or the wave fronts by using Hamilton–Jacobi equation.

The wave front at time ${\textstyle t}$, for a system initially at ${\textstyle \mathbf {q} _{0}}$ at time ${\textstyle t_{0}}$, is defined as the collection of points ${\textstyle \mathbf {q} }$ such that ${\textstyle S(\mathbf {q} ,t)={\text{const}}}$. If ${\textstyle S(\mathbf {q} ,t)}$ is known, the momentum is immediately deduced.

${\displaystyle \mathbf {p} ={\frac {\partial S}{\partial \mathbf {q} }}.}$

Once ${\textstyle \mathbf {p} }$ is known, tangents to the trajectories ${\textstyle {\dot {\mathbf {q} }}}$ are computed by solving the equation

${\displaystyle {\frac {\partial {\cal {L}}}{\partial {\dot {\mathbf {q} }}}}={\boldsymbol {p}}}$
for ${\textstyle {\dot {\mathbf {q} }}}$, where ${\textstyle {\cal {L}}}$ is the Lagrangian. The trajectories are then recovered from the knowledge of ${\textstyle {\dot {\mathbf {q} }}}$.
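A minimal numerical check of the orthogonality between fronts and trajectories (an illustration added here, not in the original): for a free particle emitted from the origin, S(q, t) = m|q|²/(2t), the momentum field ∇S is radial, so every trajectory crosses the surfaces S = const at right angles.

```python
import numpy as np

m, t = 1.0, 2.0

# HPF of a free particle emitted from the origin at t = 0
S = lambda q: m * np.dot(q, q) / (2 * t)

# Momentum p = ∂S/∂q via central finite differences
def grad_S(q, h=1e-6):
    return np.array([(S(q + h*e) - S(q - h*e)) / (2*h) for e in np.eye(len(q))])

q = np.array([3.0, 4.0])
p = grad_S(q)                       # ≈ m q / t: radial, along the trajectory

# Any tangent to the wave front S = const at q is perpendicular to ∇S
tangent = np.array([-q[1], q[0]])   # rotate q by 90°
print(np.dot(p, tangent))           # ≈ 0
```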

### Relationship to the Schrödinger equation

The isosurfaces of the function ${\displaystyle S(\mathbf {q} ,t)}$ can be determined at any time t. The motion of an ${\displaystyle S}$-isosurface as a function of time is defined by the motions of the particles beginning at the points ${\displaystyle \mathbf {q} }$ on the isosurface. The motion of such an isosurface can be thought of as a wave moving through ${\displaystyle \mathbf {q} }$-space, although it does not obey the wave equation exactly. To show this, let S represent the phase of a wave

${\displaystyle \psi =\psi _{0}e^{iS/\hbar }}$
where ${\displaystyle \hbar }$ is a constant (the Planck constant) introduced to make the exponential argument dimensionless; changes in the amplitude of the wave can be represented by having ${\displaystyle S}$ be a complex number. The Hamilton–Jacobi equation is then rewritten as
${\displaystyle {\frac {\hbar ^{2}}{2m}}\nabla ^{2}\psi -U\psi ={\frac {\hbar }{i}}{\frac {\partial \psi }{\partial t}}}$
which is the Schrödinger equation.

Conversely, starting with the Schrödinger equation and our ansatz for ${\displaystyle \psi }$, it can be deduced that[11]

${\displaystyle {\frac {1}{2m}}\left(\nabla S\right)^{2}+U+{\frac {\partial S}{\partial t}}={\frac {i\hbar }{2m}}\nabla ^{2}S.}$

The classical limit (${\displaystyle \hbar \rightarrow 0}$) of the Schrödinger equation above becomes identical to the following variant of the Hamilton–Jacobi equation,

${\displaystyle {\frac {1}{2m}}\left(\nabla S\right)^{2}+U+{\frac {\partial S}{\partial t}}=0.}$
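This correspondence can be verified symbolically in one dimension (an illustrative sketch added here, not part of the original text): substituting ψ = exp(iS/ħ) into the Schrödinger equation and dividing out ψ reproduces the equation above, including the O(ħ) correction term.

```python
import sympy as sp

x, t, m, hbar = sp.symbols('x t m hbar', positive=True)
S = sp.Function('S')(x, t)
U = sp.Function('U')(x)

# Ansatz ψ = exp(iS/ħ) in the 1D Schrödinger equation
psi = sp.exp(sp.I * S / hbar)
schrod = (sp.I*hbar*sp.diff(psi, t)
          + hbar**2/(2*m)*sp.diff(psi, x, 2) - U*psi)

# Divide out ψ and compare with the quantum Hamilton–Jacobi equation
eq = sp.expand(schrod / psi)
target = (-(sp.diff(S, x)**2/(2*m) + U + sp.diff(S, t))
          + sp.I*hbar/(2*m)*sp.diff(S, x, 2))
print(sp.simplify(eq - target))  # 0
```

Setting ħ → 0 in the right-hand term of `target` recovers the classical Hamilton–Jacobi equation exactly.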

## Applications

### HJE in a gravitational field

Using the energy–momentum relation in the form[12]

${\displaystyle g^{\alpha \beta }P_{\alpha }P_{\beta }-(mc)^{2}=0}$
for a particle of rest mass ${\displaystyle m}$ travelling in curved space, where ${\displaystyle g^{\alpha \beta }}$ are the contravariant coordinates of the metric tensor (i.e., the inverse metric) solved from the Einstein field equations, and ${\displaystyle c}$ is the speed of light, and setting the four-momentum ${\displaystyle P_{\alpha }}$ equal to the four-gradient of the action ${\displaystyle S}$,
${\displaystyle P_{\alpha }=-{\frac {\partial S}{\partial x^{\alpha }}}}$
gives the Hamilton–Jacobi equation in the geometry determined by the metric ${\displaystyle g}$:
${\displaystyle g^{\alpha \beta }{\frac {\partial S}{\partial x^{\alpha }}}{\frac {\partial S}{\partial x^{\beta }}}-(mc)^{2}=0,}$
that is, the Hamilton–Jacobi equation in a gravitational field.
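As a consistency check (a standard special case, not taken from the cited source), in flat Minkowski spacetime with metric signature ${\displaystyle (+,-,-,-)}$ and ${\displaystyle x^{0}=ct}$, the inverse metric is diagonal and the equation reduces to the relativistic Hamilton–Jacobi equation of special relativity,

${\displaystyle {\frac {1}{c^{2}}}\left({\frac {\partial S}{\partial t}}\right)^{2}-\left(\nabla S\right)^{2}=(mc)^{2},}$

which, for a free particle with ${\displaystyle S=-Et+\mathbf {p} \cdot \mathbf {q} }$, reproduces the energy–momentum relation ${\displaystyle E^{2}=p^{2}c^{2}+m^{2}c^{4}}$.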

### HJE in electromagnetic fields

For a particle of rest mass ${\displaystyle m}$ and electric charge ${\displaystyle e}$ moving in an electromagnetic field with four-potential ${\displaystyle A_{i}=(\phi ,\mathrm {A} )}$ in vacuum, the Hamilton–Jacobi equation in the geometry determined by the metric tensor ${\displaystyle g^{ik}=g_{ik}}$ has the form

${\displaystyle g^{ik}\left({\frac {\partial S}{\partial x^{i}}}+{\frac {e}{c}}A_{i}\right)\left({\frac {\partial S}{\partial x^{k}}}+{\frac {e}{c}}A_{k}\right)=m^{2}c^{2}}$
and can be solved for Hamilton's principal function ${\displaystyle S}$ to obtain the particle trajectory and momentum:[13]
${\displaystyle x=-{\frac {e}{c\gamma }}\int A_{x}\,d\xi ,}$
${\displaystyle y=-{\frac {e}{c\gamma }}\int A_{y}\,d\xi ,}$
${\displaystyle z=-{\frac {e^{2}}{2c^{2}\gamma ^{2}}}\int (\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}})\,d\xi ,}$
${\displaystyle \xi =ct-{\frac {e^{2}}{2\gamma ^{2}c^{2}}}\int (\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}})\,d\xi ,}$
${\displaystyle p_{x}=-{\frac {e}{c}}A_{x},\quad p_{y}=-{\frac {e}{c}}A_{y},}$
${\displaystyle p_{z}={\frac {e^{2}}{2\gamma c}}(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}),}$
${\displaystyle {\mathcal {E}}=c\gamma +{\frac {e^{2}}{2\gamma c}}(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}),}$
where ${\displaystyle \xi =ct-z}$ and ${\displaystyle \gamma ^{2}=m^{2}c^{2}+{\frac {e^{2}}{c^{2}}}{\overline {\mathrm {A} ^{2}}}}$, with ${\displaystyle {\overline {\mathrm {A} ^{2}}}}$ the cycle average of the squared vector potential.

#### A circularly polarized wave

In the case of circular polarization,

${\displaystyle E_{x}=E_{0}\sin \omega \xi _{1},\quad E_{y}=E_{0}\cos \omega \xi _{1},}$
${\displaystyle A_{x}={\frac {cE_{0}}{\omega }}\cos \omega \xi _{1},\quad A_{y}=-{\frac {cE_{0}}{\omega }}\sin \omega \xi _{1}.}$

Hence

${\displaystyle x=-{\frac {ecE_{0}}{\gamma \omega ^{2}}}\sin \omega \xi _{1},}$
${\displaystyle y=-{\frac {ecE_{0}}{\gamma \omega ^{2}}}\cos \omega \xi _{1},}$
${\displaystyle p_{x}=-{\frac {eE_{0}}{\omega }}\cos \omega \xi _{1},}$
${\displaystyle p_{y}={\frac {eE_{0}}{\omega }}\sin \omega \xi _{1},}$
where ${\displaystyle \xi _{1}=\xi /c}$, implying that the particle moves along a circular trajectory of constant radius ${\displaystyle ecE_{0}/\gamma \omega ^{2}}$, with momentum of constant magnitude ${\displaystyle eE_{0}/\omega }$ directed along the magnetic field vector.
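As a quick numerical sanity check of these relations (the values assigned to ${\displaystyle e,c,E_{0},\omega ,\gamma }$ below are arbitrary illustrative numbers, not physical constants), one can confirm that the trajectory has constant radius ${\displaystyle ecE_{0}/\gamma \omega ^{2}}$ and the transverse momentum constant magnitude ${\displaystyle eE_{0}/\omega }$:

```python
import numpy as np

# Arbitrary illustrative values (dimensionless stand-ins for the symbols)
e, c, E0, omega, gamma = 1.0, 1.0, 2.0, 3.0, 1.5

xi1 = np.linspace(0.0, 2 * np.pi / omega, 1000)
x = -(e * c * E0 / (gamma * omega**2)) * np.sin(omega * xi1)
y = -(e * c * E0 / (gamma * omega**2)) * np.cos(omega * xi1)
px = -(e * E0 / omega) * np.cos(omega * xi1)
py = (e * E0 / omega) * np.sin(omega * xi1)

r = np.hypot(x, y)    # distance from the axis at each instant
p = np.hypot(px, py)  # magnitude of the transverse momentum
assert np.allclose(r, e * c * E0 / (gamma * omega**2))
assert np.allclose(p, e * E0 / omega)
```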

#### A monochromatic linearly polarized plane wave

For a plane, monochromatic, linearly polarized wave with electric field ${\displaystyle E}$ directed along the ${\displaystyle y}$ axis,

${\displaystyle E_{y}=E_{0}\cos \omega \xi _{1},}$
${\displaystyle A_{y}=-{\frac {cE_{0}}{\omega }}\sin \omega \xi _{1},}$
hence
${\displaystyle x={\text{const}},}$
${\displaystyle y_{0}=-{\frac {ecE_{0}}{\gamma \omega ^{2}}},}$
${\displaystyle y=y_{0}\cos \omega \xi _{1},\quad z=C_{z}y_{0}\sin 2\omega \xi _{1},}$
${\displaystyle C_{z}={\frac {eE_{0}}{8\gamma \omega }},\quad \gamma ^{2}=m^{2}c^{2}+{\frac {e^{2}E_{0}^{2}}{2\omega ^{2}}},}$
${\displaystyle p_{x}=0,}$
${\displaystyle p_{y,0}={\frac {eE_{0}}{\omega }},}$
${\displaystyle p_{y}=p_{y,0}\sin \omega \xi _{1},}$
${\displaystyle p_{z}=-2C_{z}p_{y,0}\cos 2\omega \xi _{1}}$
implying that the particle moves along a figure-8 trajectory with its long axis oriented along the electric field vector ${\displaystyle E}$.
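The figure-8 shape follows from the 1:2 frequency ratio between the ${\displaystyle y}$ and ${\displaystyle z}$ oscillations. A short numerical sketch (the values of ${\displaystyle y_{0}}$, ${\displaystyle C_{z}}$ and ${\displaystyle \omega }$ are arbitrary illustrative numbers) verifies the defining symmetry: half a period of ${\displaystyle y}$ later, the particle returns to the same ${\displaystyle z}$ with ${\displaystyle y}$ reversed, tracing the second lobe of the figure-8:

```python
import numpy as np

# Arbitrary illustrative amplitudes and frequency (not physical values)
y0, Cz, omega = 1.0, 0.25, 2.0

# One full period of y, sampled so that index 1000 is exactly half a period
xi1 = np.linspace(0.0, 2 * np.pi / omega, 2001)
y = y0 * np.cos(omega * xi1)
z = Cz * y0 * np.sin(2 * omega * xi1)

# z oscillates twice as fast, so it repeats after half a period of y,
# while y flips sign: the curve is symmetric under y -> -y
half = 1000
assert np.allclose(z[:half], z[half:2 * half])
assert np.allclose(y[:half], -y[half:2 * half])
```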

#### An electromagnetic wave with a solenoidal magnetic field

For the electromagnetic wave with axial (solenoidal) magnetic field:[14]

${\displaystyle E=E_{\phi }={\frac {\omega \rho _{0}}{c}}B_{0}\cos \omega \xi _{1},}$
${\displaystyle A_{\phi }=-\rho _{0}B_{0}\sin \omega \xi _{1}=-{\frac {L_{s}}{\pi \rho _{0}N_{s}}}I_{0}\sin \omega \xi _{1},}$
hence
${\displaystyle x={\text{const}},}$
${\displaystyle y_{0}=-{\frac {e\rho _{0}B_{0}}{\gamma \omega }},}$
${\displaystyle y=y_{0}\cos \omega \xi _{1},}$
${\displaystyle z=C_{z}y_{0}\sin 2\omega \xi _{1},}$
${\displaystyle C_{z}={\frac {e\rho _{0}B_{0}}{8c\gamma }},}$
${\displaystyle \gamma ^{2}=m^{2}c^{2}+{\frac {e^{2}\rho _{0}^{2}B_{0}^{2}}{2c^{2}}},}$
${\displaystyle p_{x}=0,}$
${\displaystyle p_{y,0}={\frac {e\rho _{0}B_{0}}{c}},}$
${\displaystyle p_{y}=p_{y,0}\sin \omega \xi _{1},}$
${\displaystyle p_{z}=-2C_{z}p_{y,0}\cos 2\omega \xi _{1},}$
where ${\displaystyle B_{0}}$ is the magnetic field magnitude in a solenoid with effective radius ${\displaystyle \rho _{0}}$, inductance ${\displaystyle L_{s}}$, number of windings ${\displaystyle N_{s}}$, and current magnitude ${\displaystyle I_{0}}$ through the solenoid windings. The particle moves along a figure-8 trajectory in the ${\displaystyle yz}$ plane, which is perpendicular to the solenoid axis and may be set at an arbitrary azimuth angle ${\displaystyle \varphi }$ owing to the axial symmetry of the solenoidal magnetic field.

## References

1. ^ Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). Reading, MA: Addison-Wesley. pp. 484–492. ISBN 978-0-201-02918-5. (particularly the discussion beginning in the last paragraph of page 491)
2. ^ Sakurai, J. J. (1994). Modern Quantum Mechanics (rev. ed.). Reading, MA: Addison-Wesley. pp. 103–107. ISBN 0-201-53929-2.
3. ^ Kálmán, Rudolf E. (1963). "The Theory of Optimal Control and the Calculus of Variations". In Bellman, Richard (ed.). Mathematical Optimization Techniques. Berkeley: University of California Press. pp. 309–331. OCLC 1033974.
4. ^ Hand, L.N.; Finch, J.D. (2008). Analytical Mechanics. Cambridge University Press. ISBN 978-0-521-57572-0.
5. ^ a b c Goldstein, Herbert; Poole, Charles P.; Safko, John L. (2008). Classical Mechanics (3rd ed.). San Francisco: Addison Wesley. ISBN 978-0-201-65702-9.
6. ^ Coopersmith, Jennifer (2017). The lazy universe : an introduction to the principle of least action. Oxford, UK / New York, NY: Oxford University Press. ISBN 978-0-19-874304-0.
7. ^ Hand, L. N.; Finch, J. D. (2008). Analytical Mechanics. Cambridge University Press. ISBN 978-0-521-57572-0.
8. ^ Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). Reading, MA: Addison-Wesley. p. 440. ISBN 978-0-201-02918-5.
9. ^ Hanc, Jozef; Taylor, Edwin F.; Tuleja, Slavomir (2005-07-01). "Variational mechanics in one and two dimensions". American Journal of Physics. 73 (7): 603–610. Bibcode:2005AmJPh..73..603H. doi:10.1119/1.1848516. ISSN 0002-9505.
10. ^ Houchmandzadeh, Bahram (2020). "The Hamilton-Jacobi equation: an alternative approach". American Journal of Physics. 88 (5): 353. arXiv:1910.09414. Bibcode:2020AmJPh..88..353H. doi:10.1119/10.0000781. S2CID 204800598.
11. ^ Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). Reading, MA: Addison-Wesley. pp. 490–491. ISBN 978-0-201-02918-5.
12. ^ Wheeler, John; Misner, Charles; Thorne, Kip (1973). Gravitation. W.H. Freeman & Co. pp. 649, 1188. ISBN 978-0-7167-0344-0.
13. ^ Landau, L.; Lifshitz, E. (1959). The Classical Theory of Fields. Reading, Massachusetts: Addison-Wesley. OCLC 17966515.
14. ^ E. V. Shun'ko; D. E. Stevenson; V. S. Belkin (2014). "Inductively Coupling Plasma Reactor With Plasma Electron Energy Controllable in the Range from ~6 to ~100 eV". IEEE Transactions on Plasma Science. 42, part II (3): 774–785. Bibcode:2014ITPS...42..774S. doi:10.1109/TPS.2014.2299954. S2CID 34765246.