Continuous game

A continuous game is a mathematical concept, used in game theory, that generalizes the idea of a discrete game, in which players choose from a finite set of pure strategies. The continuous-game concept allows players to choose from more general sets of pure strategies, which may be uncountably infinite.

In general, a game with uncountably infinite strategy sets will not necessarily have a Nash equilibrium solution. If, however, the strategy sets are required to be compact and the utility functions continuous, then the existence of a Nash equilibrium is guaranteed; this follows from Glicksberg's generalization of the Kakutani fixed point theorem. The class of continuous games is for this reason usually defined and studied as a subset of the larger class of infinite games (i.e. games with infinite strategy sets) in which the strategy sets are compact and the utility functions continuous.

Formal definition

Define the n-player continuous game ${\displaystyle G=(P,\mathbf {C} ,\mathbf {U} )}$ where

${\displaystyle P=\{1,2,3,\ldots ,n\}}$ is the set of ${\displaystyle n\,}$ players,
${\displaystyle \mathbf {C} =(C_{1},C_{2},\ldots ,C_{n})}$ where each ${\displaystyle C_{i}\,}$ is a compact metric space corresponding to the ${\displaystyle i\,}$ th player's set of pure strategies,
${\displaystyle \mathbf {U} =(u_{1},u_{2},\ldots ,u_{n})}$ where ${\displaystyle u_{i}:\mathbf {C} \to \mathbb {R} }$ is the utility function of player ${\displaystyle i\,}$
We define ${\displaystyle \Delta _{i}\,}$ to be the set of Borel probability measures on ${\displaystyle C_{i}\,}$, giving us the mixed strategy space of player i.
Define the strategy profile ${\displaystyle {\boldsymbol {\sigma }}=(\sigma _{1},\sigma _{2},\ldots ,\sigma _{n})}$ where ${\displaystyle \sigma _{i}\in \Delta _{i}\,}$

Let ${\displaystyle {\boldsymbol {\sigma }}_{-i}}$ be a strategy profile of all players except for player ${\displaystyle i}$. As with discrete games, we can define a best response correspondence for player ${\displaystyle i\,}$, ${\displaystyle b_{i}\ }$. ${\displaystyle b_{i}\,}$ is a relation from the set of all probability distributions over opponent player profiles to a set of player ${\displaystyle i}$'s strategies, such that each element of

${\displaystyle b_{i}(\sigma _{-i})\,}$

is a best response to ${\displaystyle \sigma _{-i}}$. Define

${\displaystyle \mathbf {b} ({\boldsymbol {\sigma }})=b_{1}(\sigma _{-1})\times b_{2}(\sigma _{-2})\times \cdots \times b_{n}(\sigma _{-n})}$.

A strategy profile ${\displaystyle {\boldsymbol {\sigma }}^{*}}$ is a Nash equilibrium if and only if ${\displaystyle {\boldsymbol {\sigma }}^{*}\in \mathbf {b} ({\boldsymbol {\sigma }}^{*})}$. The existence of a Nash equilibrium for any continuous game with continuous utility functions can be proved using Irving Glicksberg's generalization of the Kakutani fixed point theorem.[1] In general, a solution may fail to exist if the strategy spaces ${\displaystyle C_{i}\,}$ are not compact, or if the utility functions are not continuous.
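Although the existence proof is non-constructive, an equilibrium of a continuous zero-sum game can be approximated by discretizing the compact strategy sets and running fictitious play on the resulting finite matrix game, which is known to converge for two-player zero-sum games. The sketch below is illustrative (the function name and parameters are not from the source); it brackets the value of the game with payoff ${\displaystyle H(x,y)=(x-y)^{2}}$ on ${\displaystyle [0,1]^{2}}$, whose value is 1/4.

```python
def fictitious_play(H, n=41, iters=3000):
    """Bracket the value of a zero-sum game on [0,1]^2 by discretizing
    the strategy sets and running fictitious play on the finite game."""
    grid = [i / (n - 1) for i in range(n)]
    A = [[H(x, y) for y in grid] for x in grid]  # payoff to the maximizer X
    px = [0.0] * n  # cumulative payoff of each pure x against Y's past play
    py = [0.0] * n  # cumulative payoff (to X) of each pure y against X's past play
    ix = iy = 0     # arbitrary initial pure strategies
    for _ in range(iters):
        for i in range(n):
            px[i] += A[i][iy]
        for j in range(n):
            py[j] += A[ix][j]
        ix = max(range(n), key=px.__getitem__)  # X best-responds to Y's history
        iy = min(range(n), key=py.__getitem__)  # Y best-responds to X's history
    # X's empirical mixture guarantees at least lo; Y's holds X to at most hi.
    lo, hi = min(py) / iters, max(px) / iters
    return lo, hi

lo, hi = fictitious_play(lambda x, y: (x - y) ** 2)
```

The returned pair brackets the true value, and the bracket tightens as the grid is refined and the number of iterations grows.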

Separable games

A separable game is a continuous game where, for any i, the utility function ${\displaystyle u_{i}:\mathbf {C} \to \mathbb {R} }$ can be expressed in the sum-of-products form:

${\displaystyle u_{i}(\mathbf {s} )=\sum _{k_{1}=1}^{m_{1}}\ldots \sum _{k_{n}=1}^{m_{n}}a_{i\,,\,k_{1}\ldots k_{n}}f_{1,k_{1}}(s_{1})\ldots f_{n,k_{n}}(s_{n})}$, where ${\displaystyle \mathbf {s} \in \mathbf {C} }$, ${\displaystyle s_{i}\in C_{i}}$, ${\displaystyle a_{i\,,\,k_{1}\ldots k_{n}}\in \mathbb {R} }$, and the functions ${\displaystyle f_{i\,,\,k}:C_{i}\to \mathbb {R} }$ are continuous.

A polynomial game is a separable game where each ${\displaystyle C_{i}\,}$ is a compact interval on ${\displaystyle \mathbb {R} \,}$ and each utility function can be written as a multivariate polynomial.
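For instance, the polynomial payoff ${\displaystyle u(x,y)=(x-y)^{2}}$ is separable with basis functions ${\displaystyle f_{i,k}(s)=s^{k}}$. A minimal check (illustrative, not from the source) that the sum-of-products form reproduces direct evaluation:

```python
# u(x, y) = (x - y)^2 in sum-of-products form with f_{i,k}(s) = s^k:
# u = 1*x^2*y^0 - 2*x^1*y^1 + 1*x^0*y^2
a = {(2, 0): 1.0, (1, 1): -2.0, (0, 2): 1.0}  # coefficient tensor a_{k1 k2}

def u_separable(x, y):
    return sum(c * x ** k1 * y ** k2 for (k1, k2), c in a.items())

assert abs(u_separable(0.3, 0.8) - (0.3 - 0.8) ** 2) < 1e-9
```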

In general, mixed Nash equilibria of separable games are easier to compute than those of non-separable games, as implied by the following theorem:

For any separable game there exists at least one Nash equilibrium where player i mixes at most ${\displaystyle m_{i}+1\,}$ pure strategies.[2]

Whereas an equilibrium strategy for a non-separable game may require an uncountably infinite support, a separable game is guaranteed to have at least one Nash equilibrium with finitely supported mixed strategies.

Examples

Separable games

A polynomial game

Consider a zero-sum 2-player game between players X and Y, with ${\displaystyle C_{X}=C_{Y}=\left[0,1\right]}$. Denote elements of ${\displaystyle C_{X}\,}$ and ${\displaystyle C_{Y}\,}$ as ${\displaystyle x\,}$ and ${\displaystyle y\,}$ respectively. Define the utility functions ${\displaystyle H(x,y)=u_{x}(x,y)=-u_{y}(x,y)\,}$ where

${\displaystyle H(x,y)=(x-y)^{2}\,}$.

The pure strategy best response relations are:

${\displaystyle b_{X}(y)={\begin{cases}1,&{\mbox{if }}y\in \left[0,1/2\right)\\0{\text{ or }}1,&{\mbox{if }}y=1/2\\0,&{\mbox{if }}y\in \left(1/2,1\right]\end{cases}}}$
${\displaystyle b_{Y}(x)=x\,}$

${\displaystyle b_{X}(y)\,}$ and ${\displaystyle b_{Y}(x)\,}$ do not intersect, so there is no pure strategy Nash equilibrium. However, by the existence theorem there must be a mixed strategy equilibrium. To find it, express the expected value ${\displaystyle v=\mathbb {E} [H(x,y)]}$ as a linear combination of the first and second moments of the probability distributions of X and Y:

${\displaystyle v=\mu _{X2}-2\mu _{X1}\mu _{Y1}+\mu _{Y2}\,}$

(where ${\displaystyle \mu _{XN}=\mathbb {E} [x^{N}]}$ and similarly for Y).

The constraints on ${\displaystyle \mu _{X1}\,}$ and ${\displaystyle \mu _{X2}}$ (with similar constraints for ${\displaystyle \mu _{Y1}\,}$ and ${\displaystyle \mu _{Y2}}$) are given by Hausdorff as:

${\displaystyle {\begin{aligned}\mu _{X1}\geq \mu _{X2}\\\mu _{X1}^{2}\leq \mu _{X2}\end{aligned}}\qquad {\begin{aligned}\mu _{Y1}\geq \mu _{Y2}\\\mu _{Y1}^{2}\leq \mu _{Y2}\end{aligned}}}$
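These constraints hold for every probability distribution on [0, 1]: since ${\displaystyle x^{2}\leq x}$ there, ${\displaystyle \mu _{1}\geq \mu _{2}}$, and ${\displaystyle \mu _{1}^{2}\leq \mu _{2}}$ is just nonnegativity of the variance. A quick empirical sanity check (illustrative sketch):

```python
import random

random.seed(0)
xs = [random.random() for _ in range(100_000)]  # a distribution on [0, 1]
mu1 = sum(xs) / len(xs)                          # first moment
mu2 = sum(x * x for x in xs) / len(xs)           # second moment

assert mu1 >= mu2          # because x^2 <= x on [0, 1]
assert mu1 ** 2 <= mu2     # because the variance is nonnegative
```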

Each pair of constraints defines a compact convex subset in the plane. Since ${\displaystyle v\,}$ is linear in these moments, any extremum with respect to a player's first two moments will lie on the boundary of this subset. Player i's equilibrium strategy will therefore lie on

${\displaystyle \mu _{i1}=\mu _{i2}{\text{ or }}\mu _{i1}^{2}=\mu _{i2}}$

Note that the first equation only permits mixtures of 0 and 1, whereas the second equation only permits pure strategies. Moreover, if the best response for player i at a certain point lies on ${\displaystyle \mu _{i1}=\mu _{i2}\,}$, it will lie on the whole line, so that both 0 and 1 are best responses. ${\displaystyle b_{Y}(\mu _{X1},\mu _{X2})\,}$ simply gives the pure strategy ${\displaystyle y=\mu _{X1}\,}$, so ${\displaystyle b_{Y}\,}$ will never give both 0 and 1. However, ${\displaystyle b_{X}\,}$ gives both 0 and 1 when y = 1/2. A Nash equilibrium exists when:

${\displaystyle (\mu _{X1}^{*},\mu _{X2}^{*},\mu _{Y1}^{*},\mu _{Y2}^{*})=(1/2,1/2,1/2,1/4)\,}$

This determines the unique equilibrium: Player X plays 0 and 1 each with probability 1/2, and Player Y plays the pure strategy 1/2. The value of the game is 1/4.
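This equilibrium is easy to verify directly: neither player has a profitable pure deviation, and the expected payoff is 1/4 (an illustrative check, not from the source):

```python
H = lambda x, y: (x - y) ** 2
grid = [i / 1000 for i in range(1001)]

# X plays 0 and 1 with probability 1/2 each; Y plays the pure strategy 1/2.
value = 0.5 * H(0.0, 0.5) + 0.5 * H(1.0, 0.5)

# X (the maximizer) cannot beat the value against y = 1/2, and
# Y (the minimizer) cannot go below it against X's mixture.
x_best = max(H(x, 0.5) for x in grid)
y_best = min(0.5 * H(0.0, y) + 0.5 * H(1.0, y) for y in grid)
```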

Non-separable games

A rational pay-off function

Consider a zero-sum 2-player game between players X and Y, with ${\displaystyle C_{X}=C_{Y}=\left[0,1\right]}$. Denote elements of ${\displaystyle C_{X}\,}$ and ${\displaystyle C_{Y}\,}$ as ${\displaystyle x\,}$ and ${\displaystyle y\,}$ respectively. Define the utility functions ${\displaystyle H(x,y)=u_{x}(x,y)=-u_{y}(x,y)\,}$ where

${\displaystyle H(x,y)={\frac {(1+x)(1+y)(1-xy)}{(1+xy)^{2}}}.}$

This game has no pure strategy Nash equilibrium. It can be shown[3] that a unique mixed strategy Nash equilibrium exists with the following pair of probability density functions:

${\displaystyle f^{*}(x)={\frac {2}{\pi {\sqrt {x}}(1+x)}}\qquad g^{*}(y)={\frac {2}{\pi {\sqrt {y}}(1+y)}}.}$

The value of the game is ${\displaystyle 4/\pi }$.
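The stated value can be checked numerically. Substituting ${\displaystyle x=t^{2}}$, ${\displaystyle y=s^{2}}$ removes the ${\displaystyle 1/{\sqrt {x}}}$ singularities in the densities, after which a simple midpoint rule suffices (an illustrative sketch):

```python
import math

def H(x, y):
    return (1 + x) * (1 + y) * (1 - x * y) / (1 + x * y) ** 2

def f_star(x):  # the equilibrium density, identical for both players
    return 2 / (math.pi * math.sqrt(x) * (1 + x))

# E[H] under (f*, g*) after substituting x = t^2, y = s^2
# (dx = 2t dt cancels the 1/sqrt(x) singularity), via the midpoint rule.
n = 400
value = 0.0
for i in range(n):
    t = (i + 0.5) / n
    for j in range(n):
        s = (j + 0.5) / n
        value += (H(t * t, s * s)
                  * f_star(t * t) * 2 * t
                  * f_star(s * s) * 2 * s) / n ** 2
```

The computed expectation agrees with ${\displaystyle 4/\pi \approx 1.2732}$ to within the quadrature error.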

Requiring a Cantor distribution

Consider a zero-sum 2-player game between players X and Y, with ${\displaystyle C_{X}=C_{Y}=\left[0,1\right]}$. Denote elements of ${\displaystyle C_{X}\,}$ and ${\displaystyle C_{Y}\,}$ as ${\displaystyle x\,}$ and ${\displaystyle y\,}$ respectively. Define the utility functions ${\displaystyle H(x,y)=u_{x}(x,y)=-u_{y}(x,y)\,}$ where

${\displaystyle H(x,y)=\sum _{n=0}^{\infty }{\frac {1}{2^{n}}}\left(2x^{n}-\left(\left(1-{\frac {x}{3}}\right)^{n}-\left({\frac {x}{3}}\right)^{n}\right)\right)\left(2y^{n}-\left(\left(1-{\frac {y}{3}}\right)^{n}-\left({\frac {y}{3}}\right)^{n}\right)\right)}$.

This game has a unique mixed strategy equilibrium in which each player plays a mixed strategy whose cumulative distribution function is the Cantor singular function.[4]
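The Cantor singular function itself is easy to evaluate from the ternary expansion of its argument; a minimal sketch (function name illustrative, not from the source):

```python
def cantor_cdf(x, digits=40):
    """Evaluate the Cantor singular function: read the ternary digits
    of x, mapping digit 0 -> bit 0 and digit 2 -> bit 1, and stop at
    the first digit equal to 1 (x lies in a removed middle third)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    result, scale = 0.0, 0.5
    for _ in range(digits):
        x *= 3
        d = int(x)
        x -= d
        if d == 1:
            return result + scale
        result += scale * (d // 2)
        scale /= 2
    return result
```

For example, `cantor_cdf(0.5)` returns 1/2, and `cantor_cdf(0.25)` is approximately 1/3, matching the known values of the Cantor function.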