
Markov reward model

From Wikipedia, the free encyclopedia

Revision as of 16:54, 23 October 2013

In probability theory, a Markov reward process is a stochastic process that extends either a discrete-time Markov chain or a continuous-time Markov chain by associating a reward rate with each state. An additional variable records the reward accumulated up to the current time.[1]
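As a rough illustration of the definition above, the following sketch simulates a discrete-time Markov reward process: the chain moves between states according to a transition matrix, and a running total accrues each state's reward. The two-state example and all names (`simulate_mrp`, the `'up'`/`'down'` states, the reward values) are hypothetical, not taken from the cited reference.

```python
import random

def simulate_mrp(P, rewards, start, steps, rng=None):
    """Simulate a discrete-time Markov reward process.

    P: dict mapping state -> list of (next_state, probability) pairs
    rewards: dict mapping state -> reward earned per step spent in that state
    Returns (trajectory, accumulated_reward).
    """
    rng = rng or random.Random(0)
    state = start
    trajectory = [state]
    total = 0.0
    for _ in range(steps):
        total += rewards[state]  # accrue the current state's reward rate
        # Sample the next state from the current state's transition distribution.
        r = rng.random()
        cum = 0.0
        for nxt, p in P[state]:
            cum += p
            if r < cum:
                state = nxt
                break
        trajectory.append(state)
    return trajectory, total

# Hypothetical two-state chain: 'up' earns 1 per step, 'down' earns 0.
P = {'up': [('up', 0.9), ('down', 0.1)],
     'down': [('up', 0.5), ('down', 0.5)]}
rewards = {'up': 1.0, 'down': 0.0}
traj, total = simulate_mrp(P, rewards, 'up', 100)
```

In the continuous-time variant, the accumulated reward would instead be an integral of the reward rate over the time spent in each state; the discrete sum above is the simplest analogue.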

References

  1. ^ doi:10.1007/978-1-4615-1387-2_2