Markov strategy

From Wikipedia, the free encyclopedia

In game theory, a Markov strategy is one that depends only on state variables that summarize the history of the game in some way.[1] For instance, a state variable can be the most recent play in a repeated game, or it can be any summary of a recent sequence of play.
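One common formalization (the notation here is illustrative and is not taken from the cited source) writes S for the set of states and Δ(A_i) for the set of mixed actions of player i. A Markov strategy for player i is then a map

    \sigma_i \colon S \to \Delta(A_i),

whereas a general behavior strategy \sigma_i \colon H \to \Delta(A_i) may condition on the entire history h \in H. In the repeated prisoner's dilemma, for example, tit-for-tat is a Markov strategy once the state is taken to be the opponent's most recent action, since the strategy conditions on nothing else.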

A profile of Markov strategies is a Markov perfect equilibrium if it is a Nash equilibrium in every state of the game.
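In the same illustrative notation, a profile \sigma^* = (\sigma_1^*, \ldots, \sigma_n^*) of Markov strategies is a Markov perfect equilibrium if, for every state s and every player i,

    \sigma_i^* \in \arg\max_{\sigma_i} V_i(\sigma_i, \sigma_{-i}^* \mid s),

where V_i(\,\cdot \mid s) denotes player i's expected continuation payoff when play starts from state s; that is, no player can gain in any state by deviating unilaterally.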

References

  1. ^ Fudenberg, Drew; Tirole, Jean (1991). Game Theory. Cambridge, MA: The MIT Press. pp. 501–540. ISBN 0-262-06141-4.