Markov chain approximation method

From Wikipedia, the free encyclopedia

In numerical methods for stochastic differential equations, the Markov chain approximation method (MCAM) is one of several numerical schemes used in stochastic control theory. Simple adaptations of deterministic schemes, such as the Runge–Kutta method, do not carry over to stochastic models.

The MCAM is a powerful and widely applicable set of ideas for numerical and other approximation problems in stochastic processes.[1][2] It provides a counterpart to methods from deterministic control theory, such as those of optimal control theory.[3]

The basic idea of the MCAM is to approximate the original controlled process by a chosen controlled Markov chain on a finite state space. If necessary, the cost function is also approximated by one that matches the Markov chain chosen to approximate the original stochastic process.
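The construction above can be sketched for a one-dimensional controlled diffusion. The snippet below is an illustrative example only: the drift, running cost, discount rate, grid, and control set are all hypothetical choices, and the transition probabilities follow the standard locally consistent finite-difference construction (the chain's one-step mean and variance match the diffusion's drift and variance up to higher-order terms). The optimal cost on the chain is then computed by value iteration.

```python
import numpy as np

# Sketch of the MCAM for a hypothetical 1-D controlled diffusion
#   dx = b(x, u) dt + sigma dW,
# with discounted running cost k(x, u) and reflection at the grid edges.
h = 0.1                              # grid spacing
xs = np.arange(-2.0, 2.0 + h / 2, h) # finite state space
us = np.array([-1.0, 0.0, 1.0])      # finite control set
sigma = 0.5                          # constant diffusion coefficient
beta = 1.0                           # discount rate

def b(x, u):
    return u - x                     # hypothetical controlled drift

def k(x, u):
    return x**2 + 0.1 * u**2         # hypothetical running cost

def chain(x, u):
    """Locally consistent transition probabilities and time step:
    the chain moves to x+h or x-h, matching mean b*dt and variance
    sigma^2*dt of the diffusion up to o(dt)."""
    Q = sigma**2 + h * abs(b(x, u))              # normalizer
    p_up = (sigma**2 / 2 + h * max(b(x, u), 0.0)) / Q
    p_dn = (sigma**2 / 2 + h * max(-b(x, u), 0.0)) / Q
    dt = h**2 / Q                                # interpolation interval
    return p_up, p_dn, dt

# Value iteration for the discounted cost on the approximating chain.
V = np.zeros(len(xs))
for _ in range(2000):
    V_new = np.empty_like(V)
    for i, x in enumerate(xs):
        iu = min(i + 1, len(xs) - 1)             # reflect at boundary
        idn = max(i - 1, 0)
        best = np.inf
        for u in us:
            p_up, p_dn, dt = chain(x, u)
            cost = k(x, u) * dt + np.exp(-beta * dt) * (
                p_up * V[iu] + p_dn * V[idn])
            best = min(best, cost)
        V_new[i] = best
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
```

As the grid spacing `h` shrinks, the value function of the chain converges (under suitable conditions) to that of the original controlled diffusion; this convergence, rather than a pathwise approximation of trajectories, is what the method delivers.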

References

  1. ^ Harold J. Kushner and Paul G. Dupuis, Numerical Methods for Stochastic Control Problems in Continuous Time, Applications of Mathematics 24, Springer-Verlag, 1992.
  2. ^ P. E. Kloeden and Eckhard Platen, Numerical Solution of Stochastic Differential Equations, Applications of Mathematics 23, Springer, 1992.
  3. ^ F. B. Hanson, "Markov Chain Approximation", in C. T. Leondes, ed., Stochastic Digital Control System Techniques, Academic Press, 1996, ISBN 978-0120127764.