Adaptive stepsize is a technique in numerical analysis used for many problems, but mainly for integration. It can be used both for ordinary integration (i.e. quadrature) and for solving ordinary differential equations. This article focuses on the latter. For an explanation of adaptive stepsize in ordinary integration, see for example Romberg's method.

As usual, an initial value problem is stated:

$y'(t) = f(t,y(t)), \qquad y(a)=y_a$

Here, $y$ and $f$ may be vectors, as they are when dealing with a system of coupled differential equations. In the rest of the article this fact is left implicit.

Suppose we are interested in obtaining a solution at the point $t=b$, given a function $f(t,y)$, an initial time point $t=a$, and an initial value $y_a=y(a)$. A numerical solution will generally have an error, so we write $y_b + \epsilon = y(b)$, where $y_b$ is the numerical approximation and $\epsilon$ is its error.

For simplicity, the following example uses the simplest integration method, the Euler method. Note that the Euler method is almost exclusively useful for educational purposes; in practice, higher-order (Runge-Kutta) methods are used due to their superior convergence and stability properties.

Recall that the Euler method is derived from Taylor's theorem with the Lagrange form of the remainder, together with the fact that $y'(t)=f(t,y)$:

$y(t+h)=y(t)+hf(t,y(t))+\frac{h^2}{2}f'(\eta,y(\eta)), \qquad t \le \eta \le t+h$

This leads to the Euler method:

$y_{n+1}^{(0)}=y_n+hf(t_n,y_n)$
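As a concrete illustration, a single Euler step can be written in a few lines of Python (the function name is purely illustrative):

```python
def euler_step(f, t, y, h):
    """One explicit Euler step: y_{n+1} = y_n + h*f(t_n, y_n)."""
    return y + h * f(t, y)

# Example: y' = y with y(0) = 1, one step of size h = 0.1
y1 = euler_step(lambda t, y: y, 0.0, 1.0, 0.1)  # 1.0 + 0.1*1.0 = 1.1
```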

and its local truncation error:

$\tau_{n+1}^{(0)}=ch^2$
$y_{n+1}^{(0)} + \tau_{n+1}^{(0)}=y(t+h)$

We mark this solution and its error with a $(0)$. Since $c$ is not known to us in the general case (it depends on the derivatives of $f$), a second solution must be computed with a smaller stepsize, for example half the original one, in order to say something useful about the error. Note that Euler's method is now applied twice, so in the worst case the local errors of the two half steps add. The new, presumably more accurate solution is marked with a $(1)$.

$y_{n+\frac{1}{2}}=y_n+\frac{h}{2}f(t_n,y_n)$
$y_{n+1}^{(1)}=y_{n+\frac{1}{2}}+\frac{h}{2}f(t_{n+\frac{1}{2}},y_{n+\frac{1}{2}})$
$\tau_{n+1}^{(1)}=c\left(\frac{h}{2}\right)^2+c\left(\frac{h}{2}\right)^2=2c\left(\frac{h}{2}\right)^2=\frac{1}{2}ch^2=\frac{1}{2}\tau_{n+1}^{(0)}$
$y_{n+1}^{(1)} + \tau_{n+1}^{(1)}=y(t+h)$
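The step-doubling computation above can be sketched in Python (names are illustrative, not taken from any particular library):

```python
def euler_double_step(f, t, y, h):
    """One full Euler step and two half steps over [t, t+h].

    Returns (y_coarse, y_fine, err), where err = y_fine - y_coarse
    estimates the local truncation error of the fine solution.
    """
    y_coarse = y + h * f(t, y)                        # y^{(0)}
    y_half = y + (h / 2) * f(t, y)                    # y_{n+1/2}
    y_fine = y_half + (h / 2) * f(t + h / 2, y_half)  # y^{(1)}
    return y_coarse, y_fine, y_fine - y_coarse
```

For the test equation $y'=y$ with $y(0)=1$ and $h=0.1$, this gives $y^{(0)}=1.1$, $y^{(1)}=1.1025$ and an error estimate of $0.0025$, close to the true error $e^{0.1}-1.1025 \approx 0.0027$.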

Here, we assume that the error factor $c$ is constant over the interval $[t, t+h]$; in reality its rate of change is proportional to $y^{(3)}(t)$. Subtracting the two solutions gives the error estimate:

$y_{n+1}^{(1)}-y_{n+1}^{(0)} = \tau_{n+1}^{(1)}$

This local error estimate is third order accurate.

The local error estimate can be used to decide how the stepsize $h$ should be modified to achieve the desired accuracy. For example, if a local tolerance of $tol$ is allowed, we could let $h$ evolve like:

$h \rightarrow 0.9 \times h \times \left(\frac{tol}{|\tau_{n+1}^{(1)}|}\right)^{1/2}$

The exponent $1/2$ accounts for the local error being of order $h^2$, and the $0.9$ is a safety factor to ensure success on the next try; this should, in principle, give an error of about $0.81 \times tol$ on the next try. If $|\tau_{n+1}^{(1)}| < tol$, we consider the step successful, and the error estimate is used to improve the solution:
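A sketch of the stepsize update in Python, using the square root of the tolerance ratio to account for the $O(h^2)$ scaling of the local error (the function name is illustrative):

```python
import math

def new_stepsize(h, err, tol, safety=0.9):
    """Propose the next stepsize from the local error estimate.

    Solving c*h_new**2 = tol, with c estimated as |err|/h**2, gives
    h_new = h*sqrt(tol/|err|); the safety factor guards against the
    estimate being slightly optimistic.
    """
    return safety * h * math.sqrt(tol / abs(err))
```

For example, with $h=0.1$, an error estimate of $0.0025$ and a tolerance of $10^{-4}$, the proposed stepsize is $0.9 \times 0.1 \times 0.2 = 0.018$.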

$y_{n+1}^{(2)} = y_{n+1}^{(1)} + \tau_{n+1}^{(1)}$

This solution is actually third order accurate in the local scope (second order in the global scope), but since there is no error estimate for it, it does not help in reducing the number of steps. This technique is called Richardson extrapolation.
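A small worked example, again for the test equation $y'=y$ (chosen purely for illustration), shows the effect of the extrapolation:

```python
import math

# One step of size h = 0.1 for y' = y, y(0) = 1; exact value is exp(0.1)
h = 0.1
y_coarse = 1.0 + h * 1.0                 # y^{(0)} = 1.1
y_half = 1.0 + (h / 2) * 1.0             # y_{n+1/2} = 1.05
y_fine = y_half + (h / 2) * y_half       # y^{(1)} = 1.1025
y_extrap = y_fine + (y_fine - y_coarse)  # y^{(2)} = 1.105

exact = math.exp(0.1)                    # ~1.10517
# |y_extrap - exact| ~ 1.7e-4 is much smaller than |y_fine - exact| ~ 2.7e-3
```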

Beginning with an initial stepsize of $h=b-a$, this theory allows a controlled integration of the ODE from point $a$ to $b$, using a near-optimal number of steps for the given local error tolerance.
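Putting the pieces together, an adaptive Euler integrator might look like the following minimal sketch (an illustration only, not a production code; real implementations also clamp the stepsize growth and enforce a minimum stepsize):

```python
import math

def integrate_adaptive(f, a, b, ya, tol):
    """Integrate y' = f(t, y) from a to b using Euler step doubling.

    The stepsize is adapted from the local error estimate, and
    accepted steps are improved by Richardson extrapolation.
    """
    t, y, h = a, ya, b - a
    while t < b:
        h = min(h, b - t)                      # do not step past b
        y_coarse = y + h * f(t, y)
        y_half = y + (h / 2) * f(t, y)
        y_fine = y_half + (h / 2) * f(t + h / 2, y_half)
        err = y_fine - y_coarse                # estimates tau^{(1)}
        if abs(err) < tol:                     # accept the step ...
            t += h
            y = y_fine + err                   # ... and extrapolate
        h = 0.9 * h * math.sqrt(tol / max(abs(err), 1e-16))
    return y
```

For example, integrating $y'=y$ from $0$ to $1$ with $y(0)=1$ should return a value close to $e \approx 2.71828$.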

Similar schemes can be developed for higher order methods, such as the 4th-order Runge-Kutta method. A global error tolerance can also be achieved by scaling the local error to global scope; however, this may result in a prohibitively small stepsize, especially with this Euler-based approach.

If you are interested in adaptive stepsize methods that use a so-called 'embedded' error estimate, see the Fehlberg, Cash-Karp and Dormand-Prince methods. These are considered to be more computationally efficient, but have lower accuracy in their error estimates.