Stochastic game

From Wikipedia, the free encyclopedia

In game theory, a stochastic game, introduced by Lloyd Shapley in the early 1950s, is a dynamic game with probabilistic transitions played by one or more players. The game is played in a sequence of stages. At the beginning of each stage the game is in some state. The players select actions and each player receives a payoff that depends on the current state and the chosen actions. The game then moves to a new random state whose distribution depends on the previous state and the actions chosen by the players. The procedure is repeated at the new state and play continues for a finite or infinite number of stages. The total payoff to a player is often taken to be the discounted sum of the stage payoffs or the limit inferior of the averages of the stage payoffs.

Stochastic games generalize both Markov decision processes and repeated games.

Theory

The ingredients of a stochastic game are: a finite set of players I; a state space M (either a finite set or a measurable space (M,{\mathcal A})); for each player i\in I, an action set S^i (either a finite set or a measurable space (S^i,{\mathcal S}^i)); a transition probability P from M\times S to M, where S=\times_{i\in I}S^i is the set of action profiles and P(A \mid m, s) is the probability that the next state is in A given the current state m and the current action profile s; and a payoff function g from M\times S to R^I, whose i-th coordinate g^i is the payoff to player i as a function of the state m and the action profile s.

The game starts at some initial state m_1. At stage t, players first observe m_t, then simultaneously choose actions s^i_t\in S^i, then observe the action profile s_t=(s^i_t)_i, and then nature selects m_{t+1} according to the probability P(\cdot\mid m_t,s_t). A play of the stochastic game, m_1,s_1,\ldots,m_t,s_t,\ldots, defines a stream of payoffs g_1,g_2,\ldots, where g_t=g(m_t,s_t).
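The stage dynamics above can be sketched in a few lines of Python. The dictionary encoding of P and g and the use of stationary pure strategies are illustrative assumptions, not part of the formal model:

```python
import random

def play(m1, P, g, policies, T, rng=random):
    """Simulate T stages of a stochastic game from initial state m1.

    P[m][s] is a dict mapping next states to probabilities,
    g[m][s] is the payoff profile g(m, s), and policies[i] maps a
    state to player i's action (stationary pure strategies, for
    simplicity; the general model allows history-dependent mixing).
    """
    m, payoffs = m1, []
    for _ in range(T):
        s = tuple(pi(m) for pi in policies)   # players choose simultaneously
        payoffs.append(g[m][s])               # stage payoff g_t = g(m_t, s_t)
        nxt = list(P[m][s])                   # nature draws m_{t+1} ~ P(. | m_t, s_t)
        m = rng.choices(nxt, [P[m][s][k] for k in nxt])[0]
    return payoffs
```

The returned list is the stream of payoffs g_1, g_2, ..., g_T induced by the play.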

The discounted game \Gamma_\lambda with discount factor \lambda (0<\lambda \leq 1) is the game where the payoff to player i is \lambda \sum_{t=1}^{\infty}(1-\lambda)^{t-1}g^i_t. The n-stage game is the game where the payoff to player i is \bar{g}^i_n:=\frac1n\sum_{t=1}^ng^i_t.
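Both evaluations can be computed directly from a (finite prefix of a) payoff stream; a minimal sketch:

```python
def discounted_payoff(stream, lam):
    """lam * sum_{t>=1} (1-lam)^(t-1) * g_t, truncated to the stages available."""
    return lam * sum((1 - lam) ** t * g_t for t, g_t in enumerate(stream))

def n_stage_payoff(stream, n):
    """Average of the first n stage payoffs, (1/n) * sum_{t=1}^n g_t."""
    return sum(stream[:n]) / n
```

A quick sanity check: the weights \lambda(1-\lambda)^{t-1} sum to 1, so a constant stage payoff c has discounted value c regardless of \lambda.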

A two-person zero-sum stochastic game \Gamma_n, respectively \Gamma_{\lambda}, with finitely many states and actions has a value v_n(m_1), respectively v_{\lambda}(m_1). Truman Bewley and Elon Kohlberg (1976) proved that v_n(m_1) converges to a limit as n goes to infinity and that v_{\lambda}(m_1) converges to the same limit as \lambda goes to 0.
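With finitely many states and actions, v_{\lambda} can be approximated by iterating Shapley's operator: in each state, solve the matrix game whose entries combine the stage payoff with the discounted continuation values. The sketch below uses SciPy's linear-programming solver for the inner matrix games; the array encoding of g and P is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of the zero-sum matrix game A (row player maximizes)."""
    m, n = A.shape
    # Variables: row mixed strategy x plus the value v; maximize v.
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # linprog minimizes, so minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])      # v <= sum_i x_i A[i, j] for every column j
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # x sums to 1
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[-1]

def discounted_values(g, P, lam, iters=200):
    """Iterate v(m) <- val[ lam*g(m,.,.) + (1-lam) * sum_m' P(m'|m,.,.) v(m') ].

    g[m] is player 1's payoff matrix in state m; P[m][a][b] is the
    distribution over next states. The operator is a (1-lam)-contraction
    in the sup norm, so the iterates converge to v_lambda.
    """
    v = np.zeros(len(g))
    for _ in range(iters):
        v = np.array([matrix_game_value(
                lam * g[m] + (1 - lam) * np.einsum('abk,k->ab', P[m], v))
            for m in range(len(g))])
    return v
```

For instance, a single-state game is just a repeated matrix game, and the iteration returns that matrix game's value for every \lambda.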

The "undiscounted" game \Gamma_\infty is the game where the payoff to player i is the "limit" of the averages of the stage payoffs. Some precautions are needed in defining the value of a two-person zero-sum \Gamma_{\infty} and in defining the equilibrium payoffs of a non-zero-sum \Gamma_{\infty}. The uniform value v_{\infty} of a two-person zero-sum stochastic game \Gamma_\infty exists if for every \varepsilon>0 there are a positive integer N, a strategy \sigma_{\varepsilon} of player 1, and a strategy \tau_{\varepsilon} of player 2 such that, for every \sigma and \tau and every n\geq N, the expectation of \bar{g}^1_n with respect to the probability on plays defined by \sigma_{\varepsilon} and \tau is at least v_{\infty}-\varepsilon, and the expectation of \bar{g}^1_n with respect to the probability on plays defined by \sigma and \tau_{\varepsilon} is at most v_{\infty}+\varepsilon. Jean-François Mertens and Abraham Neyman (1981) proved that every two-person zero-sum stochastic game with finitely many states and actions has a uniform value.

If there is a finite number of players and the action sets and the set of states are finite, then a stochastic game with a finite number of stages always has a Nash equilibrium. The same is true for a game with infinitely many stages if the total payoff is the discounted sum. Nicolas Vieille has shown that all two-person stochastic games with finite state and action spaces have approximate Nash equilibria when the total payoff is the limit inferior of the averages of the stage payoffs. Whether such equilibria exist when there are more than two players is a challenging open question.

A Markov perfect equilibrium is a refinement of the concept of subgame perfect Nash equilibrium to stochastic games.

Applications

Stochastic games have applications in economics, evolutionary biology and computer networks.[1] They are generalizations of repeated games which correspond to the special case where there is only one state.

Reference books

The most complete reference is the book of articles edited by Neyman and Sorin. The more elementary book of Filar and Vrieze provides a unified rigorous treatment of the theories of Markov Decision Processes and two-person stochastic games. They coin the term Competitive MDPs to encompass both one- and two-player stochastic games.

Notes

  1. ^ Constrained Stochastic Games in Wireless Networks, by E. Altman, K. Avratchenkov, N. Bonneau, M. Debbah, R. El-Azouzi, D. S. Menasche

Further reading