# Continuous-time Markov chain

A continuous-time Markov chain (CTMC) is a continuous-time stochastic process in which, for each state, the process remains in that state for an amount of time given by an exponential random variable and then moves to a different state as specified by the probabilities of a stochastic matrix. An equivalent formulation describes the process as changing state according to the least value of a set of exponential random variables, one for each possible state it can move to, with the parameters determined by the current state.

An example of a CTMC with three states ${\displaystyle \{0,1,2\}}$ is as follows: the process makes a transition after the amount of time specified by the holding time—an exponential random variable ${\displaystyle E_{i}}$, where i is its current state. Each random variable is independent and such that ${\displaystyle E_{0}\sim {\text{Exp}}(6)}$, ${\displaystyle E_{1}\sim {\text{Exp}}(12)}$ and ${\displaystyle E_{2}\sim {\text{Exp}}(18)}$. When a transition is to be made, the process moves according to the jump chain, a discrete-time Markov chain with stochastic matrix:

${\displaystyle {\begin{bmatrix}0&{\frac {1}{2}}&{\frac {1}{2}}\\{\frac {1}{3}}&0&{\frac {2}{3}}\\{\frac {5}{6}}&{\frac {1}{6}}&0\end{bmatrix}}.}$

Equivalently, by the theory of competing exponentials, this CTMC changes state from state i according to the minimum of two independent random variables ${\displaystyle E_{i,j}\sim {\text{Exp}}(q_{i,j})}$, one for each state ${\displaystyle j\neq i}$ it can move to, where the parameters are given by the Q-matrix ${\displaystyle Q=(q_{i,j})}$

${\displaystyle {\begin{bmatrix}-6&3&3\\4&-12&8\\15&3&-18\end{bmatrix}}.}$

Each off-diagonal value can be computed as the product of the original state's holding-time rate parameter and the jump-chain probability of moving to the given state. The diagonal values are chosen so that each row sums to 0.

A CTMC satisfies the Markov property, that its behavior depends only on its current state and not on its past behavior, due to the memorylessness of the exponential distribution and the Markov property of the jump chain.
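The jump-chain/holding-time description above translates directly into a simulation. Below is a minimal pure-Python sketch for the three-state example (rates 6, 12, 18 and the jump matrix given above); the function name `simulate` is illustrative, not from any library.

```python
import random

# Minimal simulation of the three-state CTMC above: exponential holding
# times with rates 6, 12, 18 and transitions drawn from the jump chain.
rates = [6.0, 12.0, 18.0]
jump = [
    [0.0, 1/2, 1/2],
    [1/3, 0.0, 2/3],
    [5/6, 1/6, 0.0],
]

def simulate(start, t_end, rng=random):
    """Return the (time, state) pairs visited up to time t_end."""
    t, state = 0.0, start
    path = [(t, state)]
    while True:
        t += rng.expovariate(rates[state])   # holding time ~ Exp(rate)
        if t >= t_end:
            return path
        state = rng.choices(range(3), weights=jump[state])[0]
        path.append((t, state))
```

Because each jump-chain row gives weight 0 to the current state, every transition necessarily moves to a different state.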

## Definition

A continuous-time Markov chain ${\displaystyle (X_{t})_{t\geq 0}}$ is defined by:[1]

• a finite or countable state space S;
• a transition rate matrix Q with dimensions equal to that of S; and
• an initial state ${\displaystyle k}$ such that ${\displaystyle X_{0}=k}$, or a probability distribution for this first state.

For i ≠ j, the elements qij are non-negative and describe the rate at which the process transitions from state i to state j. The elements qii could be chosen to be zero, but for mathematical convenience a common convention is to choose them such that each row of ${\displaystyle Q}$ sums to zero, that is:

${\displaystyle q_{ii}=-\sum _{k\neq i}q_{ik}.}$

Note how this differs from the definition of transition matrix for discrete Markov chains, where the row sums are all equal to one.

There are three other definitions of the process, equivalent to the one above.[2]

### Transition probability definition

Another common way to define continuous-time Markov chains is to, instead of the transition rate matrix ${\displaystyle Q}$, use the following:[1]

• ${\displaystyle v_{i}}$, for ${\displaystyle i\in S}$, representing the rate parameter of the exponential distribution governing how long the system stays in state ${\displaystyle i}$ once it enters it; and
• ${\displaystyle m_{ij}}$, for ${\displaystyle i,j\in S}$, representing the probability that the system goes to state ${\displaystyle j}$, given that it is currently leaving state ${\displaystyle i}$.

Naturally, ${\displaystyle m_{ii}}$ must be zero for all ${\displaystyle i}$.

The values ${\displaystyle v_{i}}$ and ${\displaystyle m_{ij}}$ are closely related to the transition rate matrix ${\displaystyle Q}$, by the formulas:

${\displaystyle v_{i}=\sum _{k\neq i}q_{ik}=-q_{ii},{\text{ for all }}i,}$
${\displaystyle m_{ij}={\frac {q_{ij}}{\sum _{k\neq i}q_{ik}}},{\text{ for all }}i\neq j.}$
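For the example Q-matrix from the introduction, these formulas can be checked with a few lines of Python (an illustrative sketch; the variable names are ad hoc):

```python
# Recovering the holding rates v_i and jump probabilities m_ij from the
# example Q-matrix of the introduction.
Q = [
    [-6,   3,   3],
    [ 4, -12,   8],
    [15,   3, -18],
]

v = [-Q[i][i] for i in range(3)]   # v_i = -q_ii, the exit rate of state i
m = [[Q[i][j] / v[i] if i != j else 0.0 for j in range(3)]
     for i in range(3)]            # m_ij = q_ij / v_i for i != j
```

This recovers the holding rates 6, 12, 18 and the jump-chain rows given earlier.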

Consider an ordered sequence of time instants ${\displaystyle t_{0}<t_{1}<\cdots <t_{n}<t_{n+1}}$ and the states recorded at these times ${\displaystyle i_{0},i_{1},\dots ,i_{n}}$; then it holds that:

${\displaystyle \Pr(X_{t_{n+1}}=i_{n+1}\mid X_{t_{0}}=i_{0},X_{t_{1}}=i_{1},\ldots ,X_{t_{n}}=i_{n})=\Pr(X_{t_{n+1}}=i_{n+1}\mid X_{t_{n}}=i_{n})=p_{i_{n}i_{n+1}}(t_{n+1}-t_{n})}$

where ${\displaystyle p_{ij}}$ is the solution of the forward equation (a first-order differential equation):

${\displaystyle P'(t)=P(t)Q}$

with initial condition P(0) being the identity matrix.

### Infinitesimal definition

The continuous-time Markov chain is characterized by its transition rates, the derivatives with respect to time of the transition probabilities between states i and j.

Let ${\displaystyle X_{t}}$ be the random variable describing the state of the process at time t, and assume the process is in a state i at time t. By definition of the continuous-time Markov chain, ${\displaystyle X_{t+h}=j}$ is independent of values prior to instant ${\displaystyle t}$; that is, it is independent of ${\displaystyle \left(X_{s}:s<t\right)}$. With that in mind, for all ${\displaystyle i,j}$, for all ${\displaystyle t}$ and for small values of ${\displaystyle h}$, the following holds:

${\displaystyle \Pr(X(t+h)=j\mid X(t)=i)=\delta _{ij}+q_{ij}h+o(h)}$,

where ${\displaystyle \delta _{ij}}$ is the Kronecker delta and the little-o notation has been employed.

The above equation shows that ${\displaystyle q_{ij}}$ can be seen as measuring how quickly the transition from ${\displaystyle i}$ to ${\displaystyle j}$ happens for ${\displaystyle i\neq j}$, and how quickly the transition away from ${\displaystyle i}$ happens for ${\displaystyle i=j}$.
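This characterization can be checked numerically. The sketch below approximates P(h) = exp(hQ) by a truncated Taylor series for the example generator from the introduction and compares it with I + Qh; the helper names `matmul` and `expm` are illustrative.

```python
# Approximating P(h) = exp(hQ) with a truncated Taylor series and
# checking that P(h) is close to I + Qh for a small h.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=20):
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]          # running term A^k / k!
    for k in range(1, terms):
        term = [[x / k for x in row] for row in matmul(term, A)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

Q = [[-6, 3, 3], [4, -12, 8], [15, 3, -18]]
h = 1e-4
Ph = expm([[q * h for q in row] for row in Q])  # entrywise close to I + Qh
```

The discrepancy between P(h) and I + Qh is of order h², i.e. o(h), as the equation above asserts.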

### Jump chain/holding time definition

Define a discrete-time Markov chain ${\displaystyle Y_{n}}$ to describe the ${\displaystyle n}$th jump of the process and variables ${\displaystyle S_{1},S_{2},S_{3},\dots }$ to describe holding times in each of the states, where ${\displaystyle S_{i}}$ follows the exponential distribution with rate parameter ${\displaystyle -q_{Y_{i}Y_{i}}}$.

## Properties

### Communicating classes

Communicating classes, transience, recurrence and positive and null recurrence are defined identically as for discrete-time Markov chains.

### Transient behaviour

Write P(t) for the matrix with entries pij = P(Xt = j | X0 = i). Then the matrix P(t) satisfies the forward equation, a first-order differential equation

${\displaystyle P'(t)=P(t)Q}$

where the prime denotes differentiation with respect to t. The solution to this equation is given by a matrix exponential

${\displaystyle P(t)=e^{tQ}}$

Consider a simple case: a CTMC on the state space {1,2}. The general Q-matrix for such a process is the following 2 × 2 matrix with α,β > 0:

${\displaystyle Q={\begin{pmatrix}-\alpha &\alpha \\\beta &-\beta \end{pmatrix}}.}$

The forward equation can be solved explicitly in this case to give

${\displaystyle P(t)={\begin{pmatrix}{\frac {\beta }{\alpha +\beta }}+{\frac {\alpha }{\alpha +\beta }}e^{-(\alpha +\beta )t}&{\frac {\alpha }{\alpha +\beta }}-{\frac {\alpha }{\alpha +\beta }}e^{-(\alpha +\beta )t}\\{\frac {\beta }{\alpha +\beta }}-{\frac {\beta }{\alpha +\beta }}e^{-(\alpha +\beta )t}&{\frac {\alpha }{\alpha +\beta }}+{\frac {\beta }{\alpha +\beta }}e^{-(\alpha +\beta )t}\end{pmatrix}}}$

However, direct solutions are complicated to compute for larger matrices. Instead, one uses the fact that Q is the generator for a semigroup of matrices:

${\displaystyle P(t+s)=e^{(t+s)Q}=e^{tQ}e^{sQ}=P(t)P(s).}$
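The semigroup identity can be verified against the closed-form two-state solution above; the sketch below uses illustrative rates α = 2, β = 3.

```python
import math

# Checking the semigroup identity P(t+s) = P(t)P(s) against the
# closed-form two-state solution, with illustrative alpha = 2, beta = 3.
def P(t, a, b):
    s, e = a + b, math.exp(-(a + b) * t)
    return [[b/s + a/s*e, a/s - a/s*e],
            [b/s - b/s*e, a/s + b/s*e]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 2.0, 3.0
lhs = P(0.4 + 0.7, a, b)                   # P(t + s)
rhs = matmul(P(0.4, a, b), P(0.7, a, b))   # P(t) P(s)
```

Both sides agree entrywise up to floating-point rounding, and every row of P(t) sums to 1, as a stochastic matrix must.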

### Stationary distribution

The stationary distribution for an irreducible recurrent CTMC is the probability distribution to which the process converges for large values of t. Observe that for the two-state process considered earlier with P(t) given by

${\displaystyle P(t)={\begin{pmatrix}{\frac {\beta }{\alpha +\beta }}+{\frac {\alpha }{\alpha +\beta }}e^{-(\alpha +\beta )t}&{\frac {\alpha }{\alpha +\beta }}-{\frac {\alpha }{\alpha +\beta }}e^{-(\alpha +\beta )t}\\{\frac {\beta }{\alpha +\beta }}-{\frac {\beta }{\alpha +\beta }}e^{-(\alpha +\beta )t}&{\frac {\alpha }{\alpha +\beta }}+{\frac {\beta }{\alpha +\beta }}e^{-(\alpha +\beta )t}\end{pmatrix}}}$

as t → ∞ the distribution tends to

${\displaystyle P_{\pi }={\begin{pmatrix}{\frac {\beta }{\alpha +\beta }}&{\frac {\alpha }{\alpha +\beta }}\\{\frac {\beta }{\alpha +\beta }}&{\frac {\alpha }{\alpha +\beta }}\end{pmatrix}}}$

Observe that each row of ${\displaystyle P_{\pi }}$ is the same distribution, since the limit does not depend on the starting state. The row vector π may be found by solving[3]

${\displaystyle \pi Q=0.}$

${\displaystyle \sum _{i\in S}\pi _{i}=1.}$

#### Example 1

Directed graph representation of a continuous-time Markov chain describing the state of financial markets (note: numbers are made-up).

The image to the right describes a continuous-time Markov chain with state-space {Bull market, Bear market, Stagnant market} and transition rate matrix

${\displaystyle Q={\begin{pmatrix}-0.025&0.02&0.005\\0.3&-0.5&0.2\\0.02&0.4&-0.42\end{pmatrix}}.}$

The stationary distribution of this chain can be found by solving ${\displaystyle \pi Q=0}$, subject to the constraint that elements must sum to 1 to obtain

${\displaystyle \pi ={\begin{pmatrix}0.885&0.071&0.044\end{pmatrix}}.}$
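A quick sanity check, in pure Python, that this (rounded) vector indeed satisfies πQ ≈ 0 and sums to one:

```python
# Sanity check: the stated stationary vector of the market chain
# satisfies pi Q ~ 0 (up to the three-decimal rounding of pi) and
# sums to one.
Q = [[-0.025, 0.02, 0.005],
     [0.3, -0.5, 0.2],
     [0.02, 0.4, -0.42]]
pi = [0.885, 0.071, 0.044]

residual = [sum(pi[i] * Q[i][j] for i in range(3)) for j in range(3)]
```

The residual entries are on the order of 10⁻⁴, consistent with the published values being rounded to three decimals.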

#### Example 2

Transition graph with transition probabilities, exemplary for the states 1, 5, 6 and 8. There is a bidirectional secret passage between states 2 and 8.

The image to the right describes a continuous-time Markov chain modeling Pac-Man with state-space {1,2,3,4,5,6,7,8,9}. The player controls Pac-Man through a maze, eating pac-dots, while being hunted by ghosts. For convenience, the maze is a small 3 × 3 grid and the ghosts move randomly in horizontal and vertical directions. A secret passageway between states 2 and 8 can be used in both directions. Entries with rate zero are removed in the following transition rate matrix:

${\displaystyle Q={\begin{pmatrix}-1&{\frac {1}{2}}&&{\frac {1}{2}}\\{\frac {1}{4}}&-1&{\frac {1}{4}}&&{\frac {1}{4}}&&&{\frac {1}{4}}\\&{\frac {1}{2}}&-1&&&{\frac {1}{2}}\\{\frac {1}{3}}&&&-1&{\frac {1}{3}}&&{\frac {1}{3}}\\&{\frac {1}{4}}&&{\frac {1}{4}}&-1&{\frac {1}{4}}&&{\frac {1}{4}}\\&&{\frac {1}{3}}&&{\frac {1}{3}}&-1&&&{\frac {1}{3}}\\&&&{\frac {1}{2}}&&&-1&{\frac {1}{2}}\\&{\frac {1}{4}}&&&{\frac {1}{4}}&&{\frac {1}{4}}&-1&{\frac {1}{4}}\\&&&&&{\frac {1}{2}}&&{\frac {1}{2}}&-1\end{pmatrix}}}$

This Markov chain is irreducible, because the ghosts can fly from every state to every state in a finite amount of time. Due to the secret passageway, the Markov chain is also aperiodic, because the ghosts can move from any state to any state both in an even and in an odd number of state transitions. Therefore, a unique stationary distribution exists and can be found by solving ${\displaystyle \pi Q=0}$ subject to the constraint that elements must sum to 1. The solution of this linear system is ${\displaystyle \pi =(7.7,15.4,7.7,11.5,15.4,11.5,7.7,15.4,7.7)\%.}$ The central state and the border states 2 and 8 of the adjacent secret passageway are visited most, and the corner states are visited least.
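Since every state here has total exit rate 1, the stationary distribution coincides with that of the jump chain, which for this random walk is proportional to vertex degree. The sketch below rebuilds the generator's structure from the 3 × 3 grid plus the passage and verifies the balance equations exactly with rationals (variable names are illustrative):

```python
from fractions import Fraction

# Rebuild the Pac-Man adjacency from the 3x3 grid plus the secret
# passage, and check exactly that pi proportional to vertex degree
# satisfies the balance equations (states numbered 1..9 row by row).
neighbors = {s: set() for s in range(1, 10)}
for s in range(1, 10):
    r, c = divmod(s - 1, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < 3 and 0 <= cc < 3:
            neighbors[s].add(3 * rr + cc + 1)
neighbors[2].add(8)
neighbors[8].add(2)                      # the secret passage

deg = {s: len(ns) for s, ns in neighbors.items()}
total = sum(deg.values())
pi = {s: Fraction(deg[s], total) for s in range(1, 10)}

def balance(j):
    # pi Q = 0 at state j: inflow (rate 1/deg(i) from each neighbor i)
    # must equal outflow (total exit rate 1 from state j).
    inflow = sum(pi[i] * Fraction(1, deg[i]) for i in neighbors[j])
    return inflow - pi[j]
```

The exact values 2/26, 3/26 and 4/26 round to the percentages 7.7, 11.5 and 15.4 quoted above.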

### Time reversal

For a CTMC Xt, the time-reversed process is defined to be ${\displaystyle {\hat {X}}_{t}=X_{T-t}}$. By Kelly's lemma this process has the same stationary distribution as the forward process.

A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions.

### Embedded Markov chain

One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov chain, Q, is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by sij, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by

${\displaystyle s_{ij}={\begin{cases}{\frac {q_{ij}}{\sum _{k\neq i}q_{ik}}}&{\text{if }}i\neq j\\0&{\text{otherwise}}.\end{cases}}}$

From this, S may be written as

${\displaystyle S=I-\left(\operatorname {diag} (Q)\right)^{-1}Q}$

where I is the identity matrix and diag(Q) is the diagonal matrix formed by selecting the main diagonal from the matrix Q and setting all other elements to zero.

To find the stationary probability distribution vector, we must next find ${\displaystyle \varphi }$ such that

${\displaystyle \varphi S=\varphi ,}$

with ${\displaystyle \varphi }$ being a row vector, such that all elements in ${\displaystyle \varphi }$ are greater than 0 and ${\displaystyle \|\varphi \|_{1}=1}$. From this, π may be found as

${\displaystyle \pi ={-\varphi (\operatorname {diag} (Q))^{-1} \over \left\|\varphi (\operatorname {diag} (Q))^{-1}\right\|_{1}}.}$

(S may be periodic, even if Q is not. Once π is found, it must be normalized to a unit vector.)
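Applied to the market chain of Example 1, the embedded-chain route looks as follows (a pure-Python sketch; φ is found here by power iteration rather than a linear solve, and the variable names are ad hoc):

```python
# Embedded-chain route to pi for the market chain of Example 1:
# build S from Q, find phi with phi S = phi by power iteration,
# then unnormalize by the holding rates to recover pi.
Q = [[-0.025, 0.02, 0.005],
     [0.3, -0.5, 0.2],
     [0.02, 0.4, -0.42]]
n = 3
v = [-Q[i][i] for i in range(n)]                 # exit rates -q_ii
S = [[Q[i][j] / v[i] if i != j else 0.0 for j in range(n)]
     for i in range(n)]

phi = [1.0 / n] * n
for _ in range(10000):                           # iterate phi <- phi S
    phi = [sum(phi[i] * S[i][j] for i in range(n)) for j in range(n)]
    z = sum(phi)
    phi = [x / z for x in phi]

w = [phi[i] / v[i] for i in range(n)]            # pi prop. to -phi diag(Q)^{-1}
pi = [x / sum(w) for x in w]
```

The result matches the stationary vector (0.885, 0.071, 0.044) found directly from πQ = 0 in Example 1.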

Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton.
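A δ-skeleton can be extracted from a simulated trajectory by recording the state at multiples of δ. The sketch below does this for the three-state chain from the introduction (the function name `skeleton` is illustrative):

```python
import random

# Extracting a delta-skeleton: simulate the three-state CTMC from the
# introduction and record its state at times 0, delta, 2*delta, ...
rates = [6.0, 12.0, 18.0]
jump = [[0.0, 1/2, 1/2], [1/3, 0.0, 2/3], [5/6, 1/6, 0.0]]

def skeleton(start, delta, steps, rng=random):
    """Return the first `steps` states of the delta-skeleton."""
    t, state, out = 0.0, start, []
    next_obs = 0.0
    while len(out) < steps:
        hold = rng.expovariate(rates[state])
        while next_obs <= t + hold and len(out) < steps:
            out.append(state)            # state occupied at time next_obs
            next_obs += delta
        t += hold
        state = rng.choices(range(3), weights=jump[state])[0]
    return out
```

The resulting sequence is itself a discrete-time Markov chain with transition matrix P(δ) = e^{δQ}.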