Uniformization (probability theory)

In probability theory, the uniformization method (also known as Jensen's method[1] or the randomization method[2]) computes transient solutions of finite-state continuous-time Markov chains by approximating the process with a discrete-time Markov chain.[2] The original chain is scaled by the fastest transition rate γ, so that transitions occur at the same rate in every state, hence the name uniformization. The method is simple to program and efficiently calculates an approximation to the transient distribution at a single point in time (near zero).[1] It was first introduced by Winfried Grassmann in 1977.[3][4][5]

Method description

For a continuous-time Markov chain with transition rate matrix Q, the uniformized discrete-time Markov chain has probability transition matrix P:=(p_{ij})_{i,j}, which is defined by[1][6][7]

p_{ij} = \begin{cases} q_{ij}/\gamma &\text{ if } i \neq j \\ 1 - \sum_{k \neq i} q_{ik}/\gamma &\text{ if } i=j \end{cases}

with γ, the uniform rate parameter, chosen such that

\gamma \geq \max_i |q_{ii}|.

In matrix notation:

P=I+\frac{1}{\gamma}Q.
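As a concrete sketch, the construction of P can be carried out in a few lines; the 2-state generator Q below is a hypothetical example chosen only for illustration:

```python
import numpy as np

# Hypothetical 2-state generator matrix Q (illustrative rates only).
Q = np.array([[-3.0,  3.0],
              [ 1.0, -1.0]])

gamma = max(abs(np.diag(Q)))        # uniform rate: gamma >= max_i |q_ii|
P = np.eye(Q.shape[0]) + Q / gamma  # P = I + Q/gamma

# Each row of P sums to 1 and all entries are non-negative,
# so P is a valid stochastic matrix.
```

Any γ at least as large as max_i |q_ii| is admissible; choosing the maximum itself keeps the diagonal entries of P non-negative while keeping γt, and hence the number of significant terms in the series below, as small as possible.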

For a starting distribution π(0), the distribution at time t, π(t), is computed by[1]

\pi(t) = \sum_{n=0}^\infty \pi(0) P^n \frac{(\gamma t)^n}{n!}e^{-\gamma t}.

This representation shows that a continuous-time Markov chain can be described by a discrete-time Markov chain with transition matrix P as defined above, where jumps occur according to a Poisson process with intensity γ, so that the number of jumps up to time t is Poisson-distributed with mean γt.
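This interpretation also yields a simple simulation scheme: draw a Poisson(γt) number of jumps, then take that many steps of the discrete-time chain P. A minimal sketch, assuming a small hypothetical 2-state generator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state generator matrix (illustrative rates only).
Q = np.array([[-3.0,  3.0],
              [ 1.0, -1.0]])
gamma = max(abs(np.diag(Q)))
P = np.eye(2) + Q / gamma  # uniformized transition matrix

def sample_state(start, t):
    """State of the CTMC at time t: Poisson(gamma*t) jumps of the DTMC P."""
    state = start
    for _ in range(rng.poisson(gamma * t)):
        state = rng.choice(2, p=P[state])
    return state

# Empirical distribution at t = 0.5, starting in state 0.
samples = [sample_state(0, 0.5) for _ in range(20000)]
freq0 = samples.count(0) / len(samples)
```

For this particular two-state chain the probability of occupying state 0 at t = 0.5 has the closed form 1/4 + (3/4)e^{-2} ≈ 0.3515, which the empirical frequency approximates.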

In practice this series is truncated after finitely many terms, with the truncation point chosen so that the neglected Poisson tail mass, which bounds the truncation error, falls below a prescribed tolerance.
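A minimal truncated-series implementation might look as follows; the stopping rule and the example generator Q are illustrative choices, not part of the original formulation:

```python
import numpy as np

def uniformized_transient(Q, pi0, t, tol=1e-12):
    """Approximate pi(t) = pi(0) exp(Qt) by the truncated uniformization series."""
    gamma = max(abs(np.diag(Q)))
    P = np.eye(Q.shape[0]) + Q / gamma   # uniformized DTMC
    term = np.asarray(pi0, dtype=float)  # pi(0) P^n, starting at n = 0
    weight = np.exp(-gamma * t)          # Poisson(gamma*t) mass at n = 0
    result = weight * term
    accumulated = weight
    n = 0
    while 1.0 - accumulated > tol:       # remaining tail mass bounds the error
        n += 1
        term = term @ P                  # update pi(0) P^n by one more step
        weight *= gamma * t / n          # Poisson mass at n, computed recursively
        result += weight * term
        accumulated += weight
    return result

# Hypothetical 2-state generator matrix (illustrative rates only).
Q = np.array([[-3.0,  3.0],
              [ 1.0, -1.0]])
pi_half = uniformized_transient(Q, [1.0, 0.0], 0.5)
# For this chain pi(0.5) has a closed form: the first entry is 1/4 + (3/4)e^{-2}
```

Computing the Poisson weights recursively avoids overflow from explicit factorials, and the accumulated weight gives a built-in error bound: the terms omitted after truncation contribute at most the remaining tail mass to the result (in total variation).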

Implementation

Pseudocode for the algorithm is included in Appendix A of Reibman and Trivedi's 1988 paper.[8] Using a parallel version of the algorithm, chains with state spaces larger than 10⁷ states have been analysed.[9]

Limitations

Reibman and Trivedi state that "uniformization is the method of choice for typical problems," though they note that for stiff problems some tailored algorithms are likely to perform better.[8]

Notes

  1. ^ a b c d Stewart, William J. (2009). Probability, Markov chains, queues, and simulation: the mathematical basis of performance modeling. Princeton University Press. p. 361. ISBN 0-691-14062-6. 
  2. ^ a b Ibe, Oliver C. (2009). Markov processes for stochastic modeling. Academic Press. p. 98. ISBN 0-12-374451-2. 
  3. ^ Gross, D.; Miller, D. R. (1984). "The Randomization Technique as a Modeling Tool and Solution Procedure for Transient Markov Processes". Operations Research 32 (2): 343–361. doi:10.1287/opre.32.2.343.  edit
  4. ^ Grassmann, W. K. (1977). "Transient solutions in markovian queueing systems". Computers & Operations Research 4: 47–00. doi:10.1016/0305-0548(77)90007-7.  edit
  5. ^ Grassmann, W. K. (1977). "Transient solutions in Markovian queues". European Journal of Operational Research 1 (6): 396–242. doi:10.1016/0377-2217(77)90049-2.  edit
  6. ^ Cassandras, Christos G.; Lafortune, Stéphane (2008). Introduction to discrete event systems. Springer. ISBN 0-387-33332-0. 
  7. ^ Ross, Sheldon M. (2007). Introduction to probability models. Academic Press. ISBN 0-12-598062-0. 
  8. ^ a b Reibman, A.; Trivedi, K. (1988). "Numerical transient analysis of markov models". Computers & Operations Research 15: 19. doi:10.1016/0305-0548(88)90026-3.
  9. ^ Dingle, N.; Harrison, P. G.; Knottenbelt, W. J. (2004). "Uniformization and hypergraph partitioning for the distributed computation of response time densities in very large Markov models". Journal of Parallel and Distributed Computing 64 (8): 908–920. doi:10.1016/j.jpdc.2004.03.017.