# Kolmogorov equations

In probability theory, Kolmogorov equations, including Kolmogorov forward equations and Kolmogorov backward equations, characterize continuous-time Markov processes. In particular, they describe how the probability that a continuous-time Markov process is in a certain state changes over time.

## Diffusion processes vs. jump processes

Writing in 1931, Andrei Kolmogorov started from the theory of discrete time Markov processes, which are described by the Chapman–Kolmogorov equation, and sought to derive a theory of continuous time Markov processes by extending this equation. He found that there are two kinds of continuous time Markov processes, depending on the assumed behavior over small intervals of time:

If one assumes that "in a small time interval there is an overwhelming probability that the state will remain unchanged; however, if it changes, the change may be radical",[1] then one is led to what are called jump processes.

The other case leads to processes such as those "represented by diffusion and by Brownian motion; there it is certain that some change will occur in any time interval, however small; only, here it is certain that the changes during small time intervals will be also small".[1]

For each of these two kinds of processes, Kolmogorov derived a forward and a backward system of equations (four in all).

## History

The equations are named after Andrei Kolmogorov since they were highlighted in his 1931 foundational work.[2]

William Feller, in 1949, used the names "forward equation" and "backward equation" for his more general version of Kolmogorov's pair, covering both jump and diffusion processes.[1] Much later, in 1957, he referred to the equations for the jump process as "Kolmogorov forward equations" and "Kolmogorov backward equations".[3]

Other authors, such as Motoo Kimura,[4] referred to the diffusion (Fokker–Planck) equation as the Kolmogorov forward equation, a name that has persisted.

## Continuous-time Markov chains

The original derivation of the equations by Kolmogorov starts from the Chapman–Kolmogorov equation (which Kolmogorov called the fundamental equation) for time-continuous and differentiable Markov processes on a finite, discrete state space.[2] In this formulation, it is assumed that the probabilities ${\displaystyle P(x,s;y,t)}$ are continuous and differentiable functions of ${\displaystyle t>s}$, where ${\displaystyle x,y\in \Omega }$ (the state space) and ${\displaystyle t>s,t,s\in \mathbb {R} _{\geq 0}}$ are the final and initial times, respectively. Adequate limit properties for the derivatives are also assumed. Feller derives the equations under slightly different conditions, starting with the concept of a purely discontinuous Markov process and then formulating them for more general state spaces.[5] Feller proves the existence of solutions of probabilistic character to the Kolmogorov forward and backward equations under natural conditions.[5]

For the case of a countable state space we put ${\displaystyle i,j}$ in place of ${\displaystyle x,y}$. The Kolmogorov forward equations read

${\displaystyle {\frac {\partial P_{ij}}{\partial t}}(s;t)=\sum _{k}P_{ik}(s;t)A_{kj}(t)}$,

where ${\displaystyle A(t)}$ is the transition rate matrix (also known as the generator matrix),

while the Kolmogorov backward equations are

${\displaystyle {\frac {\partial P_{ij}}{\partial s}}(s;t)=-\sum _{k}P_{kj}(s;t)A_{ik}(s)}$.

The functions ${\displaystyle P_{ij}(s;t)}$ are continuous and differentiable in both time arguments. They represent the probability that the system, having been in state ${\displaystyle i}$ at time ${\displaystyle s}$, is found in state ${\displaystyle j}$ at a later time ${\displaystyle t>s}$. The continuous quantities ${\displaystyle A_{ij}(t)}$ satisfy

${\displaystyle A_{ij}(t)=\left[{\frac {\partial P_{ij}}{\partial u}}(t;u)\right]_{u=t},\quad A_{jk}(t)\geq 0,\ j\neq k,\quad \sum _{k}A_{jk}(t)=0.}$
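For a time-homogeneous chain, where the generator ${\displaystyle A}$ is constant, both systems are solved by the matrix exponential ${\displaystyle P(s;t)=e^{(t-s)A}}$. The following sketch (plain Python; the two-state generator and step sizes are illustrative choices, not taken from the text) checks the forward relation ∂P/∂t = PA and the backward relation ∂P/∂s = −AP by central finite differences.

```python
def mat_mul(X, Y):
    """Product of two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, t, terms=60):
    """Taylor series for exp(t*A); adequate for small matrices and moderate t."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = mat_mul(term, [[t * a / k for a in row] for row in A])
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# Generator of a two-state chain: off-diagonals >= 0, rows sum to zero.
A = [[-2.0, 2.0],
     [1.0, -1.0]]

def P(s, t):
    # Time-homogeneous case: P(s;t) depends only on t - s.
    return mat_exp(A, t - s)

s, t, h = 0.3, 1.0, 1e-5
dPdt = [[(P(s, t + h)[i][j] - P(s, t - h)[i][j]) / (2 * h)
         for j in range(2)] for i in range(2)]
dPds = [[(P(s + h, t)[i][j] - P(s - h, t)[i][j]) / (2 * h)
         for j in range(2)] for i in range(2)]
PA = mat_mul(P(s, t), A)
AP = mat_mul(A, P(s, t))

forward_err = max(abs(dPdt[i][j] - PA[i][j]) for i in range(2) for j in range(2))
backward_err = max(abs(dPds[i][j] + AP[i][j]) for i in range(2) for j in range(2))
print(forward_err, backward_err)  # both residuals should be very small
```

The forward residual compares ∂P/∂t against PA; the backward residual compares ∂P/∂s against −AP, matching the sign in the backward equation above.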

### Relation with the generating function

Still in the discrete state case, letting ${\displaystyle s=0}$ and assuming that the system is initially in state ${\displaystyle i}$, the Kolmogorov forward equations describe an initial-value problem for the probabilities of the process, given the quantities ${\displaystyle A_{jk}(t)}$. Writing ${\displaystyle p_{k}(t)=P_{ik}(0;t)}$, so that ${\displaystyle \sum _{k}p_{k}(t)=1}$, the equations become

${\displaystyle {\frac {dp_{k}}{dt}}(t)=\sum _{j}A_{jk}(t)p_{j}(t);\quad p_{k}(0)=\delta _{ik},\qquad k=0,1,\dots .}$
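As a concrete illustration, this initial-value problem can be integrated numerically. The sketch below (plain Python; the three-state generator and step size are illustrative assumptions) applies forward Euler to ${\displaystyle dp_{k}/dt=\sum _{j}A_{jk}p_{j}}$ and checks that the probabilities remain normalized, which follows from the zero row-sum condition on ${\displaystyle A}$.

```python
# Forward-Euler integration of the Kolmogorov forward equations
# dp_k/dt = sum_j A[j][k] * p_j for a three-state chain.
# The generator is an illustrative choice: off-diagonal entries are
# non-negative and every row sums to zero.
A = [[-1.0,  1.0,  0.0],
     [ 0.5, -1.5,  1.0],
     [ 0.0,  2.0, -2.0]]

p = [1.0, 0.0, 0.0]      # initial condition p_k(0) = delta_{ik}, here i = 0
dt, steps = 1e-4, 20000  # integrate up to t = 2

for _ in range(steps):
    p = [p[k] + dt * sum(A[j][k] * p[j] for j in range(3))
         for k in range(3)]

print(p, sum(p))  # the probabilities stay normalized: sum(p) stays at 1
```

Because each row of the generator sums to zero, the Euler update conserves ${\displaystyle \sum _{k}p_{k}}$ exactly, up to floating-point rounding.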

For the case of a pure death process with constant rates, the only nonzero off-diagonal coefficients are ${\displaystyle A_{j,j-1}=\mu j,\ j\geq 1}$, so that ${\displaystyle A_{jj}=-\mu j}$ by the zero row-sum condition. Introducing the generating function

${\displaystyle \Psi (x,t)=\sum _{k}x^{k}p_{k}(t),}$

the system of equations can in this case be recast as a partial differential equation for ${\displaystyle \Psi (x,t)}$ with initial condition ${\displaystyle \Psi (x,0)=x^{i}}$. After some manipulations, the system of equations reads:[6]

${\displaystyle {\frac {\partial \Psi }{\partial t}}(x,t)=\mu (1-x){\frac {\partial {\Psi }}{\partial x}}(x,t);\qquad \Psi (x,0)=x^{i},\quad \Psi (1,t)=1.}$
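By the method of characteristics, this first-order PDE has the solution ${\displaystyle \Psi (x,t)=(1+(x-1)e^{-\mu t})^{i}}$, whose coefficients are binomial probabilities with per-individual survival probability ${\displaystyle e^{-\mu t}}$ (a standard result, stated here for illustration rather than taken from the text above). The sketch below (plain Python, with illustrative parameter values) verifies the PDE and both side conditions numerically.

```python
import math

mu, i = 0.7, 5  # illustrative death rate and initial population size

def Psi(x, t):
    # Candidate solution of the PDE, from the method of characteristics.
    return (1.0 + (x - 1.0) * math.exp(-mu * t)) ** i

x, t, h = 0.4, 0.9, 1e-5
dPsi_dt = (Psi(x, t + h) - Psi(x, t - h)) / (2 * h)
dPsi_dx = (Psi(x + h, t) - Psi(x - h, t)) / (2 * h)

# Residual of dPsi/dt = mu * (1 - x) * dPsi/dx; should be ~0.
residual = dPsi_dt - mu * (1 - x) * dPsi_dx
print(residual)
print(Psi(x, 0.0), x ** i)  # initial condition Psi(x,0) = x^i
print(Psi(1.0, t))          # boundary condition Psi(1,t) = 1
```

Expanding ${\displaystyle \Psi }$ in powers of ${\displaystyle x}$ recovers ${\displaystyle p_{k}(t)={\binom {i}{k}}e^{-k\mu t}(1-e^{-\mu t})^{i-k}}$, so each of the ${\displaystyle i}$ individuals independently survives to time ${\displaystyle t}$ with probability ${\displaystyle e^{-\mu t}}$.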

## An example from biology

One example from biology is given below:[7]

${\displaystyle p_{n}'(t)=(n-1)\beta p_{n-1}(t)-n\beta p_{n}(t)}$

This equation models population growth with births (a pure birth process). Here ${\displaystyle n}$ is the population size, counted from the initial population, ${\displaystyle \beta }$ is the per-capita birth rate, and ${\displaystyle p_{n}(t)=\Pr(N(t)=n)}$ is the probability that the population has size ${\displaystyle n}$ at time ${\displaystyle t}$.

The analytical solution is:[7]

${\displaystyle p_{n}(t)=(n-1)\beta e^{-n\beta t}\int _{0}^{t}\!p_{n-1}(s)\,e^{n\beta s}\mathrm {d} s}$

This expresses each probability ${\displaystyle p_{n}(t)}$ in terms of the preceding one, ${\displaystyle p_{n-1}(t)}$.
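Assuming the process starts from a single individual, ${\displaystyle N(0)=1}$, the equations give ${\displaystyle p_{1}(t)=e^{-\beta t}}$, and the integral formula then yields ${\displaystyle p_{2}(t)=e^{-\beta t}(1-e^{-\beta t})}$. The sketch below (plain Python; the parameter values are illustrative) evaluates the integral with the trapezoidal rule and compares it against this closed form.

```python
import math

beta = 0.5  # illustrative per-capita birth rate

def p1(t):
    # With a single initial individual, p_1' = -beta * p_1, so p_1 = exp(-beta*t).
    return math.exp(-beta * t)

def p2_numeric(t, steps=10000):
    # p_2(t) = (2-1)*beta*exp(-2*beta*t) * integral_0^t p_1(s)*exp(2*beta*s) ds,
    # with the integral evaluated by the trapezoidal rule.
    ds = t / steps
    integral = 0.0
    for k in range(steps):
        s0, s1 = k * ds, (k + 1) * ds
        f0 = p1(s0) * math.exp(2 * beta * s0)
        f1 = p1(s1) * math.exp(2 * beta * s1)
        integral += 0.5 * (f0 + f1) * ds
    return beta * math.exp(-2 * beta * t) * integral

def p2_exact(t):
    # Closed form obtained by doing the integral analytically.
    return math.exp(-beta * t) * (1.0 - math.exp(-beta * t))

t = 1.3
print(p2_numeric(t), p2_exact(t))  # the two values agree closely
```

Iterating the same quadrature for ${\displaystyle n=3,4,\dots }$ builds up the whole distribution, which for ${\displaystyle N(0)=1}$ is geometric in ${\displaystyle n}$.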

## References

1. Feller, W. (1949). "On the Theory of Stochastic Processes, with Particular Reference to Applications". Proceedings of the (First) Berkeley Symposium on Mathematical Statistics and Probability. Vol. 1. University of California Press. pp. 403–432.
2. Kolmogorov, Andrei (1931). "Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung" [On Analytical Methods in the Theory of Probability]. Mathematische Annalen (in German). 104: 415–458. doi:10.1007/BF01457949.
3. Feller, William (1957). "On Boundaries and Lateral Conditions for the Kolmogorov Differential Equations". Annals of Mathematics. 65 (3): 527–570. doi:10.2307/1970064. JSTOR 1970064.
4. Kimura, Motoo (1957). "Some Problems of Stochastic Processes in Genetics". Annals of Mathematical Statistics. 28 (4): 882–901. doi:10.1214/aoms/1177706791. JSTOR 2237051.
5. Feller, Willy (1940). "On the Integro-Differential Equations of Purely Discontinuous Markoff Processes". Transactions of the American Mathematical Society. 48 (3): 488–515. JSTOR 1990095.
6. Bailey, Norman T. J. (1990). The Elements of Stochastic Processes with Applications to the Natural Sciences. Wiley. ISBN 0-471-52368-2. (p. 90)
7. Logan, J. David; Wolesensky, William R. (2009). Mathematical Methods in Biology. Pure and Applied Mathematics. John Wiley & Sons. pp. 325–327. ISBN 978-0-470-52587-6.