Continuous-time Markov chain

In probability theory, a continuous-time Markov process is a stochastic process { X(t) : t ≥ 0 } that satisfies the Markov property and takes values from a set called the state space. The Markov property states that for any times s > t > 0, the conditional probability distribution of the process at time s given the whole history of the process up to and including time t depends only on the state of the process at time t. In effect, the state of the process at time s is conditionally independent of the history of the process before time t, given the state of the process at time t.

Mathematical definitions

Intuitively, one can define a time-homogeneous Markov process as follows. Let X(t) be the random variable describing the state of the process at time t. Now prescribe that, given that the process starts in a state i at time t, it has made the transition to some other state j (j ≠ i) at time t + h with probability given by[1]

:<math>q_{ij}h + o(h),</math>

and, for j = i, the probability is given by

:<math>1 + q_{ii}h + o(h),</math>

where o(h) represents a quantity that goes to zero faster than h goes to zero (see the article on order notation). Hence, over a sufficiently small interval of time, the probability of a particular transition (between different states) is roughly proportional to the duration of that interval.
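
These asymptotic relations can be checked numerically. Collecting the rates qij into a matrix Q (introduced formally below), the matrix of transition probabilities over an interval h is e^{Qh}, and the relations above say that it equals I + Qh up to o(h). A minimal Python sketch, with an assumed two-state example (the rates and step sizes are purely illustrative):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

# Assumed two-state rate matrix: q_01 = 3, q_10 = 2 (rows sum to zero).
Q = np.array([[-3.0,  3.0],
              [ 2.0, -2.0]])

for h in (1e-2, 1e-3, 1e-4):
    P_h = expm(Q * h)              # transition probabilities over time h
    error = np.abs(P_h - (np.eye(2) + Q * h)).max()
    print(h, error / h)            # error/h -> 0, i.e. the error is o(h)
</syntaxhighlight>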

Continuous-time Markov processes are most easily defined by specifying the transition rates qij, and these are typically given as the ij-th elements of the transition rate matrix Q. For the probabilities to be conserved, i.e., to add up to one, the off-diagonal elements of Q must be non-negative and the diagonal elements must satisfy

:<math>q_{ii} = -\sum_{j\neq i} q_{ij}.</math>

With this notation, and letting pt denote the row vector whose j-th element is Pr(X(t) = j), the evolution of a continuous-time Markov process is given by the first-order differential equation

:<math>\frac{\partial}{\partial t} p_t = p_t Q.</math>
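
For a finite state space this differential equation has the closed-form solution pt = p0 e^{Qt}. A short Python sketch of this solution (the three-state rate matrix and initial distribution are illustrative assumptions):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

# Illustrative three-state rate matrix; each row sums to zero.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 4.0, -5.0,  1.0],
              [ 1.0,  1.0, -2.0]])

p0 = np.array([1.0, 0.0, 0.0])     # start in state 0 with certainty

# Solve dp/dt = pQ by the matrix exponential: p_t = p_0 e^{Qt}.
for t in (0.1, 1.0, 10.0):
    p_t = p0 @ expm(Q * t)
    print(t, p_t, p_t.sum())       # each p_t still sums to one
</syntaxhighlight>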

The probability that no transition happens in some time r is

:<math>\Pr(X(s) = i \text{ for all } s \in (t, t+r] \mid X(t) = i) = e^{q_{ii} r}.</math>

That is, the probability distribution of the waiting time until the first transition is an exponential distribution with rate parameter −qii, and continuous-time Markov processes are thus memoryless processes.
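
The memoryless holding times give the standard way to simulate a trajectory: remain in state i for an exponentially distributed time with rate −qii, then jump to state j ≠ i with probability qij/(−qii). A sketch of this simulation (often associated with the Gillespie algorithm; the rate matrix is the same illustrative example as above):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

Q = np.array([[-2.0,  1.0,  1.0],  # illustrative rate matrix
              [ 4.0, -5.0,  1.0],
              [ 1.0,  1.0, -2.0]])

def simulate(Q, state, t_end):
    """One trajectory: exponential holding times, then jumps."""
    t, path = 0.0, [(0.0, state)]
    while True:
        exit_rate = -Q[state, state]           # total rate of leaving `state`
        t += rng.exponential(1.0 / exit_rate)  # memoryless holding time
        if t >= t_end:
            return path
        jump_probs = Q[state].clip(min=0.0) / exit_rate  # q_ij / (-q_ii)
        state = rng.choice(len(Q), p=jump_probs)
        path.append((t, state))

print(simulate(Q, state=0, t_end=5.0))
</syntaxhighlight>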


A time-dependent (time-heterogeneous) Markov process is a Markov process as above, but with the transition rates being functions of time, denoted qij(t).

Embedded Markov chain

One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov process with transition rate matrix Q is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by sij, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by

:<math>s_{ij} = \begin{cases} \dfrac{q_{ij}}{\sum_{k \neq i} q_{ik}} & \text{if } i \neq j, \\ 0 & \text{otherwise}. \end{cases}</math>

From this, S may be written as

:<math>S = I - D_Q^{-1} Q,</math>

where DQ = diag{Q} is the diagonal matrix formed from the main diagonal of Q.

To find the stationary probability distribution vector, we must next find φ such that

:<math>\varphi (S - I) = 0,</math>

with φ being a row vector, such that all elements in φ are greater than 0 and ||φ||1 = 1, and the 0 on the right side also being a row vector of 0's. From this, π may be found as

:<math>\pi = \frac{-\varphi D_Q^{-1}}{\left\| \varphi D_Q^{-1} \right\|_1}.</math>

Note that S may be periodic, even if Q is not. Once π is found, it must be normalized to a unit vector.
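
A sketch of the whole procedure in Python, continuing the illustrative rate matrix from above (here φ is obtained as the left eigenvector of S for eigenvalue 1, one standard way of solving φ(S − I) = 0):

<syntaxhighlight lang="python">
import numpy as np

Q = np.array([[-2.0,  1.0,  1.0],  # illustrative rate matrix
              [ 4.0, -5.0,  1.0],
              [ 1.0,  1.0, -2.0]])

D_inv = np.diag(1.0 / np.diag(Q))  # D_Q^{-1}
S = np.eye(len(Q)) - D_inv @ Q     # EMC transition matrix (zero diagonal)

# phi (S - I) = 0 means phi is a left eigenvector of S with eigenvalue 1.
w, v = np.linalg.eig(S.T)
phi = np.abs(np.real(v[:, np.argmin(np.abs(w - 1.0))]))
phi /= phi.sum()                   # normalize so ||phi||_1 = 1

pi = -phi @ D_inv                  # pi = -phi D_Q^{-1}, then normalize
pi /= pi.sum()
print(pi, pi @ Q)                  # pi @ Q is ~0: pi is stationary
</syntaxhighlight>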

Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton.
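
The one-step transition matrix of the δ-skeleton is the matrix exponential e^{Qδ}. A brief sketch (δ and Q are again illustrative):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0,  1.0,  1.0],  # illustrative rate matrix
              [ 4.0, -5.0,  1.0],
              [ 1.0,  1.0, -2.0]])

delta = 0.5
P = expm(Q * delta)      # transition matrix of the delta-skeleton
print(P, P.sum(axis=1))  # each row is a probability distribution
</syntaxhighlight>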

References

  1. Parzen, E. (1962). Stochastic Processes. Holden-Day. ISBN 0-8162-6664-6.
  • Stewart, William J. (1994). Introduction to the Numerical Solution of Markov Chains. Princeton University Press. pp. 17–23. ISBN 0-691-03699-3.