# Verlet integration

Verlet integration (French pronunciation: [vɛʁˈlɛ]) is a numerical method used to integrate Newton's equations of motion.[1] It is frequently used to calculate trajectories of particles in molecular dynamics simulations and computer graphics. The algorithm was first used in 1791 by Delambre and has been rediscovered many times since then: Carl Størmer used it in 1907 to study the trajectories of electrically charged particles in a magnetic field (hence it is also called Störmer's method),[2] Cowell and Crommelin used it in 1909 to compute the orbit of Halley's Comet, and the French physicist Loup Verlet popularized it for molecular dynamics in 1967. The Verlet integrator provides good numerical stability, as well as other properties that are important in physical systems such as time reversibility and preservation of the symplectic form on phase space, at no significant additional computational cost over the simple Euler method.

## Basic Störmer–Verlet

For a differential equation of second order of the type $\ddot{\vec x}(t)=A(\vec x(t))$ with initial conditions $\vec x(t_0)=\vec x_0$ and $\dot{\vec x}(t_0)=\vec v_0$, an approximate numerical solution $\vec x_n\approx \vec x(t_n)$ at the times $t_n=t_0+n\,\Delta t$ with step size $\Delta t>0$ can be obtained by the following method:

• set $\vec x_1=\vec x_0+\vec v_0\,\Delta t+\frac12 A(\vec x_0)\,\Delta t^2$
• for $n=1,2,\ldots$ iterate
$\vec x_{n+1}=2 \vec x_n- \vec x_{n-1}+ A(\vec x_n)\,\Delta t^2.$
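As a concrete illustration, here is a minimal Python sketch of this two-step recurrence; the function name and the harmonic-oscillator test case are illustrative, not part of the method's definition:

```python
import math

def verlet(a, x0, v0, dt, n_steps):
    """Basic Stormer-Verlet: returns the positions x_0, ..., x_{n_steps}."""
    # seed x_1 with the degree-two Taylor polynomial, as in the first bullet
    xs = [x0, x0 + v0 * dt + 0.5 * a(x0) * dt**2]
    for _ in range(n_steps - 1):
        # x_{n+1} = 2 x_n - x_{n-1} + A(x_n) dt^2
        xs.append(2 * xs[-1] - xs[-2] + a(xs[-1]) * dt**2)
    return xs

# harmonic oscillator x'' = -x with x(0) = 1, v(0) = 0; exact solution cos(t)
xs = verlet(lambda x: -x, 1.0, 0.0, 0.01, 1000)
print(abs(xs[-1] - math.cos(10.0)))  # small global error at t = 10
```

Note that only the two most recent positions are needed at any time, so the whole list need not be stored in a real simulation.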

### Equations of motion

Newton's equation of motion for conservative physical systems is

$M\ddot {\vec x}(t)=F(\vec x(t))=-\nabla V(\vec x(t))$

or individually

$m_k\ddot {\vec x}_k(t)=F_k(\vec x(t))=-\nabla_{\vec x_k} V(\vec x(t))$

where

• t is the time,
• $\vec x(t)=(\vec x_1(t),\ldots,\vec x_N(t))$ is the ensemble of position vectors of the N objects,
• V is the scalar potential function,
• F is the negative gradient of the potential giving the ensemble of forces on the particles,
• M is the mass matrix, typically diagonal with blocks of mass $m_k$ for each particle.

This equation, for various choices of the potential function V, can be used to describe the evolution of diverse physical systems, from the motion of interacting molecules to the orbit of the planets.

After a transformation to bring the mass to the right-hand side, and disregarding the structure of multiple particles, the equation may be simplified to

$\ddot {\vec x}(t)=A(\vec x(t))$

with some suitable vector valued function A representing the position dependent acceleration. Typically, an initial position $\vec x(0)=\vec x_0$ and an initial velocity $\vec v(0)=\dot {\vec x}(0)=\vec v_0$ are also given.

### Verlet integration (without velocities)

To discretize and numerically solve this initial value problem, a time step $\Delta t>0$ is chosen and the sampling point sequence $t_n=n\Delta t$ considered. The task is to construct a sequence of points $\vec x_n$ that closely follow the points $\vec x(t_n)$ on the trajectory of the exact solution.

Where Euler's method uses the forward difference approximation to the first derivative in differential equations of order one, Verlet integration can be seen as using the central difference approximation to the second derivative:

$\frac{\Delta^2\vec x_n}{\Delta t^2} =\frac{\frac{\vec x_{n+1}-\vec x_n}{\Delta t}-\frac{\vec x_n-\vec x_{n-1}}{\Delta t}}{\Delta t} =\frac{\vec x_{n+1}-2\vec x_n+\vec x_{n-1}}{\Delta t^2}=\vec a_n=A(\vec x_n)$

Verlet integration in the form used as the Störmer method[3] uses this equation to obtain the next position vector from the previous two without using the velocity as

$\vec x_{n+1}=2\vec x_n-\vec x_{n-1}+\vec a_n\,\Delta t^2, \qquad \vec a_n=A(\vec x_n).$

### Discretization error

The time symmetry inherent in the method reduces the level of local errors introduced into the integration by the discretization by removing all odd degree terms, here the terms in $\Delta t$ of degree three. The local error is quantified by inserting the exact values $\vec x(t_{n-1}),\vec x(t_n),\vec x(t_{n+1})$ into the iteration and computing the Taylor expansions at time $t=t_n$ of the position vector $\vec{x}(t\pm\Delta t)$ in different time directions.

\begin{align} \vec{x}(t + \Delta t) &= \vec{x}(t) + \vec{v}(t)\Delta t + \frac{\vec{a}(t) \Delta t^2}{2} + \frac{\vec{b}(t) \Delta t^3}{6} + \mathcal{O}(\Delta t^4)\\[0.7em] \vec{x}(t - \Delta t) &= \vec{x}(t) - \vec{v}(t)\Delta t + \frac{\vec{a}(t) \Delta t^2}{2} - \frac{\vec{b}(t) \Delta t^3}{6} + \mathcal{O}(\Delta t^4),\, \end{align}

where $\vec{x}$ is the position, $\vec{v}=\dot{\vec x}$ the velocity, $\vec{a}=\ddot{\vec x}$ the acceleration, and $\vec{b}=\dddot{\vec x}$ the jerk (third derivative of the position with respect to the time $t$). Adding these two expansions gives

$\vec{x}(t + \Delta t) = 2\vec{x}(t) - \vec{x}(t - \Delta t) + \vec{a}(t) \Delta t^2 + \mathcal{O}(\Delta t^4).\,$

We can see that the first- and third-order terms from the Taylor expansion cancel out, thus making the Verlet integrator an order more accurate than integration by simple Taylor expansion alone.

Caution should be applied to the fact that the acceleration here is computed from the exact solution, $\vec a(t)=A(\vec x(t))$, whereas in the iteration it is computed at the central iteration point, $\vec a_n=A(\vec x_n)$. In computing the global error, that is the distance between exact solution and approximation sequence, those two terms do not cancel exactly, influencing the order of the global error.

### A simple example

To gain insight into the relation of local and global errors, it is helpful to examine simple examples where the exact as well as the approximate solution can be expressed in explicit formulas. The standard example for this task is the exponential function.

Consider the linear differential equation $\ddot x(t)=w^2x(t)$ with a constant w. Its exact basis solutions are $e^{wt}$ and $e^{-wt}$.

The Störmer method applied to this differential equation leads to a linear recurrence relation

\begin{align} x_{n+1}-2x_n+x_{n-1}&=h^2w^2x_n\\ \iff \quad x_{n+1}-2(1+\tfrac12(wh)^2)x_n+x_{n-1}&=0. \end{align}

It can be solved by finding the roots of its characteristic polynomial $q^2-2(1+\tfrac12(wh)^2)q+1=0$. These are

$q_\pm=1+\tfrac12(wh)^2\pm wh\sqrt{1+\tfrac14(wh)^2}$.

The basis solutions of the linear recurrence are $x_n=q_+^{\;n}$ and $x_n=q_-^{\;n}$. To compare them with the exact solutions, Taylor expansions are computed.

\begin{align} q_+ &=1+\tfrac12(wh)^2+wh\left(1+\tfrac18(wh)^2-\tfrac{1}{128}(wh)^4+\mathcal O(h^6)\right)\\[.3em] &=1+(wh)+\tfrac12(wh)^2+\tfrac18(wh)^3-\tfrac{1}{128}(wh)^5+ \mathcal O(h^7). \end{align}

The quotient of this series with the one of the exponential $e^{wh}$ starts with $1-\tfrac1{24}(wh)^3+\mathcal O(h^5)$, so

\begin{align} q_+&=(1-\tfrac1{24}(wh)^3+\mathcal O(h^5))e^{wh}\\[.3em] &=e^{-\frac{1}{24}(wh)^3+\mathcal O(h^5)}\,e^{wh}. \end{align}

From there it follows that for the first basis solution the error can be computed as

\begin{align} x_n=q_+^{\;n}&=e^{-\frac{1}{24}(wh)^2\,wt_n+\mathcal O(h^4)}\,e^{wt_n}\\[.3em] &=e^{wt_n}\left(1-\tfrac{1}{24}(wh)^2\,wt_n+\mathcal O(h^4)\right)\\[.3em] &=e^{wt_n}+\mathcal O(h^2t_ne^{wt_n}). \end{align}

That is, although the local discretization error is of order 4, due to the second order of the differential equation the global error is of order 2, with a constant that grows exponentially in time.
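This second-order global behaviour can also be observed numerically. The following Python sketch is illustrative: it uses the bounded counterpart $\ddot x = -x$ (exact solution $\cos t$) instead of the growing exponential, halves the step size, and checks that the global error at a fixed time falls by a factor of about four:

```python
import math

def stoermer_final(h, T):
    """Stormer iteration for x'' = -x, x(0) = 1, x'(0) = 0, run to time T."""
    n = round(T / h)
    x_prev, x = 1.0, 1.0 - 0.5 * h**2   # degree-two Taylor start for x_1
    for _ in range(n - 1):
        x_prev, x = x, 2 * x - x_prev - x * h**2
    return x

T = 1.0
e1 = abs(stoermer_final(0.01, T) - math.cos(T))
e2 = abs(stoermer_final(0.005, T) - math.cos(T))
print(e1 / e2)  # close to 4: halving h roughly quarters the global error
```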

### Starting the iteration

Note that at the start of the Verlet iteration at step $n=1$, time $t = t_1=\Delta t$, computing $\vec x_2$, one already needs the position vector $\vec x_1$ at time $t=t_1$. At first sight this could pose a problem, because the initial conditions are known only at the initial time $t_0=0$. However, from these the acceleration $\vec a_0=A(\vec x_0)$ is known, and a suitable approximation for the position at the first time step can be obtained using the Taylor polynomial of degree two:

$\vec x_1 = \vec{x}_0 + \vec{v}_0\,\Delta t + \tfrac12 \vec a_0\,\Delta t^2 = \vec{x}(\Delta t) + \mathcal{O}(\Delta t^3).$

The error on the first time step is then of order $\mathcal O(\Delta t^3)$. This is not considered a problem, because in a simulation over a large number of time steps the error on the first step is only a negligibly small fraction of the total error, which at time $t_n$ is of order $\mathcal O(e^{Lt_n}\Delta t^2)$, both for the distance of the position vectors $\vec x_n$ to $\vec x(t_n)$ and for the distance of the divided differences $\tfrac{\vec x_{n+1}-\vec x_n}{\Delta t}$ to $\tfrac{\vec x(t_{n+1})-\vec x(t_n)}{\Delta t}$. Moreover, to obtain this second-order global error, the initial error needs to be of at least third order.

### Non-constant time differences

A disadvantage of the Störmer–Verlet method is that if the time-step ($\Delta t$) changes, the method does not approximate the solution to the differential equation. This is particularly an issue for game designers, who may be integrating at a variable framerate.

This can be corrected using the formula:[4]

$\vec x_{i+1} = \vec x_i + (\vec x_i - \vec x_{i-1}) \frac{\Delta t_i}{\Delta t_{i-1}} + \vec a_i\,\Delta t_i^2$

A more exact derivation uses the Taylor series (to second order) at $t_i$ for times $t_{i+1}=t_i+\Delta t_i$ and $t_{i-1}=t_i-\Delta t_{i-1}$ to obtain after elimination of $\vec v_i$

$\frac{\vec x_{i+1} - \vec x_i}{\Delta t_i} +\frac{\vec x_{i-1} - \vec x_i}{\Delta t_{i-1}} =\vec a_i\,\frac{\Delta t_{i}+\Delta t_{i-1}}2$

so that the iteration formula becomes

$\vec x_{i+1} = \vec x_i + (\vec x_i - \vec x_{i-1}) \frac{\Delta t_i }{ \Delta t_{i-1}} + \vec a_i\, \frac{\Delta t_{i}+\Delta t_{i-1}}2\,\Delta t_i.$
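A short Python sketch of this variable-step iteration, tested on the illustrative oscillator $\ddot x=-x$ (exact solution $\cos t$); the function name and the alternating step pattern are assumptions for the demo:

```python
import math

def variable_verlet(accel, x0, v0, steps):
    """Integrate x'' = accel(x) over a list of (possibly unequal) step sizes."""
    dt_prev = steps[0]
    x_prev = x0
    x = x0 + v0 * dt_prev + 0.5 * accel(x0) * dt_prev**2  # Taylor start
    for dt in steps[1:]:
        # x_{i+1} = x_i + (x_i - x_{i-1}) dt/dt_prev + a_i (dt + dt_prev)/2 * dt
        x_prev, x = x, (x + (x - x_prev) * (dt / dt_prev)
                        + accel(x) * (dt + dt_prev) / 2 * dt)
        dt_prev = dt
    return x

# alternating step sizes on x'' = -x; the exact answer is cos(sum(steps))
steps = [0.01, 0.005] * 200
x_end = variable_verlet(lambda x: -x, 1.0, 0.0, steps)
print(abs(x_end - math.cos(sum(steps))))  # small second-order error
```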

### Computing velocities – Störmer–Verlet method

The velocities are not explicitly given in the basic Störmer equation, but often they are necessary for the calculation of certain physical quantities like the kinetic energy. This can create technical challenges in molecular dynamics simulations, because kinetic energy and instantaneous temperatures at time $t$ cannot be calculated for a system until the positions are known at time $t + \Delta t$. This deficiency can be dealt with either by using the velocity Verlet algorithm or by estimating the velocity using the position terms and the mean value theorem:

$\vec{v}(t) = \frac{\vec{x}(t + \Delta t) - \vec{x}(t - \Delta t)}{2\Delta t} + \mathcal{O}(\Delta t^2).$

Note that this velocity term is a step behind the position term, since this is for the velocity at time $t$, not $t + \Delta t$, meaning that $\vec v_n=\tfrac{\vec x_{n+1}-\vec x_{n-1}}{2\Delta t}$ is an order two approximation to $\vec{v}(t_n)$. With the same argument, but halving the time step, $\vec v_{n+1/2}=\tfrac{\vec x_{n+1}-\vec x_n}{\Delta t}$ is an order two approximation to $\vec{v}(t_{n+1/2})$, with $t_{n+1/2}=t_n+\tfrac12\Delta t$.
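A small Python illustration of this lag: the central-difference velocity $\vec v_n$ can only be formed once $\vec x_{n+1}$ is known. The oscillator test case is illustrative:

```python
import math

# positions from basic Verlet for x'' = -x (exact solution cos t)
dt, n = 0.01, 500
xs = [1.0, 1.0 - 0.5 * dt**2]
for _ in range(n - 1):
    xs.append(2 * xs[-1] - xs[-2] - xs[-1] * dt**2)

# central-difference velocities: v_k needs x_{k+1}, so v runs one step behind
vs = [(xs[k + 1] - xs[k - 1]) / (2 * dt) for k in range(1, n)]

# v_250 approximates the exact velocity -sin(2.5) to second order
print(abs(vs[249] + math.sin(250 * dt)))
```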

One can shorten the interval to approximate the velocity at time $t + \Delta t$ at the cost of accuracy:

$\vec{v}(t + \Delta t) = \frac{\vec{x}(t + \Delta t) - \vec{x}(t)}{\Delta t} + \mathcal{O}(\Delta t).$

## Velocity Verlet

A related, and more commonly used, algorithm is the velocity Verlet algorithm,[5] similar to the leapfrog method, except that the velocity and position are calculated at the same value of the time variable (leapfrog does not, as the name suggests). It uses a similar approach, but explicitly incorporates the velocity, solving the first-time-step problem of the basic Verlet algorithm:

$\vec{x}(t + \Delta t) = \vec{x}(t) + \vec{v}(t)\, \Delta t + \frac{1}{2} \,\vec{a}(t) \Delta t^2 \,$
$\vec{v}(t + \Delta t) = \vec{v}(t) + \frac{\vec{a}(t) + \vec{a}(t + \Delta t)}{2} \Delta t \,$

It can be shown that the error of velocity Verlet is of the same order as that of basic Verlet. Note that the velocity algorithm is not necessarily more memory-consuming, because it is not necessary to keep track of the velocity at every time step during the simulation. The standard implementation scheme of this algorithm is:

1. Calculate: $\vec{v}\left(t + \tfrac12\,\Delta t\right) = \vec{v}(t) + \tfrac12\,\vec{a}(t)\,\Delta t\,$
2. Calculate: $\vec{x}(t + \Delta t) = \vec{x}(t) + \vec{v}\left(t + \tfrac12\,\Delta t\right)\, \Delta t\,$
3. Derive $\vec{a}(t + \Delta t)$ from the interaction potential using $\vec{x}(t + \Delta t)$
4. Calculate: $\vec{v}(t + \Delta t) = \vec{v}\left(t + \tfrac12\,\Delta t\right) + \tfrac12\,\vec{a}(t + \Delta t)\Delta t,$
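The four steps above can be sketched directly in Python; the harmonic-oscillator demo at the end is illustrative:

```python
import math

def velocity_verlet(accel, x0, v0, dt, n):
    """Kick-drift-kick scheme; only one acceleration evaluation per step."""
    x, v = x0, v0
    a = accel(x)
    for _ in range(n):
        v_half = v + 0.5 * a * dt    # 1. half kick
        x = x + v_half * dt          # 2. drift
        a = accel(x)                 # 3. new acceleration from new position
        v = v_half + 0.5 * a * dt    # 4. half kick
    return x, v

# harmonic oscillator x'' = -x: exact solution (cos t, -sin t)
x, v = velocity_verlet(lambda x: -x, 1.0, 0.0, 0.01, 1000)
print(abs(x - math.cos(10.0)), abs(v + math.sin(10.0)))
```

Note that step 3 is the only force evaluation per step, since $\vec a(t+\Delta t)$ is reused as $\vec a(t)$ on the next iteration.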

Eliminating the half-step velocity, this algorithm may be shortened to

1. Calculate: $\vec{x}(t + \Delta t) = \vec{x}(t) + \vec{v}(t)\, \Delta t+\tfrac12 \,\vec{a}(t)\,\Delta t^2$
2. Derive $\vec{a}(t + \Delta t)$ from the interaction potential using $\vec{x}(t + \Delta t)$
3. Calculate: $\vec{v}(t + \Delta t) = \vec{v}(t) + \tfrac12\,\left(\vec{a}(t)+\vec{a}(t + \Delta t)\right)\Delta t\,$

Note, however, that this algorithm assumes that acceleration $\vec{a}(t + \Delta t)$ only depends on position $\vec{x}(t + \Delta t)$, and does not depend on velocity $\vec{v}(t + \Delta t)$.

One might note that the long-term results of velocity Verlet, and similarly of leapfrog, are one order better than those of the semi-implicit Euler method. The algorithms are almost identical up to a shift by half a time step in the velocity. This is easily proven by rotating the above loop to start at step 3 and then noticing that the acceleration term in step 1 could be eliminated by combining steps 2 and 4. The only difference is that the midpoint velocity in velocity Verlet is considered the final velocity in the semi-implicit Euler method.

The global error of all Euler methods is of order one, whereas the global error of this method is, similarly to the midpoint method, of order two. Additionally, if the acceleration indeed results from the forces in a conservative mechanical or Hamiltonian system, the energy of the approximation essentially oscillates around the constant energy of the exactly solved system, with a global error bound again of order one for semi-implicit Euler and order two for Verlet-leapfrog. The same goes for all other conserved quantities of the system, like linear or angular momentum, which are always preserved or nearly preserved in a symplectic integrator.[6]

The Velocity Verlet method is a special case of the Newmark-beta method with $\beta=0$ and $\gamma = 1/2$.

## Error terms

The local error in position of the Verlet integrator is $O(\Delta t^4)$ as described above, and the local error in velocity is $O(\Delta t^2)$.

The global error in position, in contrast, is $O(\Delta t^2)$ and the global error in velocity is $O(\Delta t^2)$. These can be derived by noting the following:

$\mathrm{error}\bigl(x(t_0 + \Delta t)\bigr) = O(\Delta t^4)$

and

$x(t_0 + 2\Delta t) = 2x(t_0 + \Delta t) - x(t_0) + \Delta t^2 x''(t_0 + \Delta t) + O(\Delta t^4) \,$

Therefore:

$\mathrm{error}\bigl(x(t_0 + 2\Delta t)\bigr) = 2\mathrm{error}\bigl(x(t_0 + \Delta t)\bigr) + O(\Delta t^4) = 3\,O(\Delta t^4)$

Similarly:

$\mathrm{error}\bigl(x(t_0 + 3\Delta t)\bigr) = 6\,O(\Delta t^4)$
$\mathrm{error}\bigl(x(t_0 + 4\Delta t)\bigr) = 10\,O(\Delta t^4)$
$\mathrm{error}\bigl(x(t_0 + 5\Delta t)\bigr) = 15\,O(\Delta t^4)$

This can be generalized (it can be shown by induction, but is given here without proof) to:

$\mathrm{error}\bigl(x(t_0 + n\Delta t)\bigr) = \frac{n(n+1)}{2}\,O(\Delta t^4)$

If we consider the global error in position between $x(t_0)$ and $x(t_0+T)$, where $T = n\Delta t$, it is clear that:

$\mathrm{error}\bigl(x(t_0 + T)\bigr) = \left(\frac{T^2}{2\Delta t^2} + \frac{T}{2\Delta t}\right) O(\Delta t^4)$

And therefore, the global (cumulative) error over a constant interval of time is given by:

$\mathrm{error}\bigl(x(t_0 + T)\bigr) = O(\Delta t^2).$

Because the velocity is determined in a non-cumulative way from the positions in the Verlet integrator, the global error in velocity is also $O(\Delta t^2)$.

In molecular dynamics simulations, the global error is typically far more important than the local error, and the Verlet integrator is therefore known as a second-order integrator.

## Constraints

Main article: Constraint algorithm

The most notable benefit of using Verlet integration rather than Euler integration is that constraints between particles become very easy to implement. A constraint is a connection between multiple points that limits them in some way, perhaps setting them at a specific distance, keeping them apart, or making sure they are closer than a specific distance. Often physics systems use springs between the points in order to keep them in the locations they are supposed to be. However, springs of infinite stiffness between two points usually give the best results when coupled with the Verlet algorithm. Here is how:

$d_1=x_2^{(t)}-x_1^{(t)}\,$
$d_2=\|d_1\|\,$
$d_3=\frac{d_2-r}{d_2}\,$
$x_1^{(t+\Delta t)}=\tilde{x}_1^{(t+\Delta t)}+\frac{1}{2}d_1d_3\,$
$x_2^{(t+\Delta t)}=\tilde{x}_2^{(t+\Delta t)}-\frac{1}{2}d_1d_3\,$

The $x_i^{(t)}$ variables are the positions of the points $i$ at time $t$, the $\tilde{x}_i^{(t)}$ are the unconstrained positions (i.e. the point positions before applying the constraints) at time $t$, the $d$ variables are temporaries (they are introduced because the results of their expressions are needed multiple times), and $r$ is the distance that is supposed to separate the two points. Since $d_2=\|d_1\|$ is the Euclidean norm of the difference vector, these equations apply unchanged in one, two, or three dimensions. This is where Verlet makes constraints simple: instead of, say, applying a velocity to the points that would eventually satisfy the constraint, one can simply position the point where it should be and the Verlet integrator takes care of the rest.
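Assuming 2-D points stored as coordinate pairs, the projection above can be sketched in Python (the function name is illustrative):

```python
import math

def satisfy_distance(p1, p2, r):
    """Move two 2-D points symmetrically so they sit exactly r apart."""
    d1 = (p2[0] - p1[0], p2[1] - p1[1])   # difference vector
    d2 = math.hypot(*d1)                  # current distance (assumed > 0)
    d3 = (d2 - r) / d2                    # relative excess length
    p1 = (p1[0] + 0.5 * d3 * d1[0], p1[1] + 0.5 * d3 * d1[1])
    p2 = (p2[0] - 0.5 * d3 * d1[0], p2[1] - 0.5 * d3 * d1[1])
    return p1, p2

a, b = satisfy_distance((0.0, 0.0), (2.0, 0.0), 1.0)
print(a, b)  # (0.5, 0.0) (1.5, 0.0): each point absorbs half the excess
```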

Problems arise, however, when multiple constraints position the same vertex. One way to solve this is to loop through all the vertices in a simulation in a crisscross manner, so that at every vertex the constraint relaxation of the last vertex is already used to speed up the spread of the information. One can either use fine time steps for the simulation, use a fixed number of constraint-solving steps per time step, or solve constraints until they are met to within a specific deviation.

When approximating the constraints locally to first order, this is the same as the Gauss–Seidel method. For small matrices it is known that LU decomposition is faster. Large systems can be divided into clusters (for example, each ragdoll = one cluster). Inside clusters the LU method is used, between clusters the Gauss–Seidel method is used. The matrix code can be reused: the dependency of the forces on the positions can be approximated locally to first order, and the Verlet integration can be made more implicit.

For big matrices, sophisticated solvers for sparse matrices exist, and any self-made Verlet integration has to compete with these. The usage of (clusters of) matrices is not generally more precise or stable, but addresses the specific problem that a force on one vertex of a sheet of cloth should reach every other vertex in a low number of time steps even if a fine grid is used for the cloth, rather than propagating across it as a sound wave.

Another way to solve holonomic constraints is to use constraint algorithms.

## Collision reactions

One way of reacting to collisions is to use a penalty-based system, which applies a set force to a point upon contact. The problem with this is that it is very difficult to choose the imparted force: too strong, and objects become unstable; too weak, and the objects will penetrate each other. Another way is to use projection collision reactions, which take the offending point and attempt to move it out of the other object by the shortest distance possible.

In the latter case, the Verlet integration automatically handles the velocity imparted by the collision; however, this is not guaranteed to happen in a way consistent with collision physics (that is, changes in momentum are not guaranteed to be realistic). Instead of implicitly changing the velocity term, one would need to explicitly control the final velocities of the colliding objects (by changing the recorded position from the previous time step).
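A minimal 1-D sketch of this idea, with an illustrative coefficient of restitution (the variable names are hypothetical):

```python
# In position-only Verlet the velocity is implicit in (x - x_prev) / dt,
# so a post-collision velocity is imposed by rewriting x_prev.
dt = 0.1
x, x_prev = 0.0, -0.05                 # implicit velocity: 0.5
v = (x - x_prev) / dt
restitution = 0.8                      # illustrative restitution coefficient
x_prev = x - (-restitution * v) * dt   # reflect and damp the velocity
print((x - x_prev) / dt)               # implicit velocity is now about -0.4
```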

The two simplest methods for deciding on a new velocity are perfectly elastic collisions and inelastic collisions. A slightly more complicated strategy that offers more control would involve using the coefficient of restitution.