# Mean field game theory

Mean field game theory is the study of strategic decision making in very large populations of small interacting agents. This class of problems was considered in the economics literature by Boyan Jovanovic and Robert W. Rosenthal,[1] in the engineering literature by Peter E. Caines and his co-workers,[2][3] and independently, around the same time, by mathematicians Jean-Michel Lasry and Pierre-Louis Lions.[4][5][6][7]

Use of the term 'mean field' is inspired by mean field theory in physics, which considers the behaviour of systems of large numbers of particles, where each individual particle has a negligible impact on the system as a whole.

In continuous time, a mean field game is typically composed of a Hamilton-Jacobi-Bellman equation, which describes the optimal control problem of an individual agent, coupled with a Fokker-Planck (Kolmogorov forward) equation, which describes the dynamics of the aggregate distribution of agents. Under fairly general assumptions it can be proved that a class of mean field games is the limit as ${\displaystyle N\rightarrow \infty }$ of an N-player Nash equilibrium.[8]
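In one standard formulation (the notation here follows the usual MFG literature and is not taken from the cited works), with value function $u$, population density $m$, Hamiltonian $H$, noise intensity $\nu$, running cost $f$, and terminal cost $G$, the coupled system reads:

$$
\begin{cases}
-\partial_t u - \nu \Delta u + H(x, \nabla u) = f(x, m), & u(x,T) = G(x, m(T)),\\
\;\;\,\partial_t m - \nu \Delta m - \operatorname{div}\!\big(m\, \nabla_p H(x, \nabla u)\big) = 0, & m(x,0) = m_0(x).
\end{cases}
$$

The first equation is solved backward in time (each agent optimizes against the anticipated population flow), while the second is solved forward in time (the population evolves under the resulting optimal controls); an equilibrium is a fixed point of this forward-backward coupling.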

## Mean-field-type control theory

A concept related to mean-field games is "mean-field-type control". In this case a social planner controls the distribution of states and chooses a control strategy for the whole population. The solution to a mean-field-type control problem can typically be expressed as a Hamilton-Jacobi-Bellman equation coupled with a Kolmogorov equation. Mean-field-type game theory is the multi-agent generalization of mean-field-type control.
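The forward-backward coupling described above can be illustrated with a toy finite-state, discrete-time analogue of a mean field game: a crowd-aversion game on a ring, solved by damped fixed-point iteration on the population flow. Everything in this sketch (the ring model, the cost parameters, the iteration scheme) is an illustrative assumption, not a construction from the cited literature.

```python
import numpy as np

# Toy finite-state mean field game on a ring of S states, horizon T.
# Each agent picks an action a in {-1, 0, +1} (move left / stay / right).
# Stage cost: a movement penalty plus a congestion penalty proportional
# to the population mass m_t(x) at the agent's state (crowd aversion).
# Fixed-point iteration: (1) solve the Bellman equation backward given
# the population flow m, (2) push the distribution forward under the
# resulting optimal policy, (3) damp and repeat. All parameter values
# below are illustrative.

S, T = 10, 20          # states on the ring, time horizon
move_cost = 0.1        # cost of moving one step
congestion = 1.0       # weight of the crowd-aversion cost
actions = [-1, 0, 1]

def solve_mfg(n_iter=200, damping=0.5):
    # initial guess for the flow: all mass at state 0 at every time
    m = np.zeros((T + 1, S))
    m[:, 0] = 1.0
    for _ in range(n_iter):
        # --- backward Bellman pass, taking the flow m as given ---
        V = np.zeros((T + 1, S))               # terminal cost = 0
        policy = np.zeros((T, S), dtype=int)   # index into `actions`
        for t in range(T - 1, -1, -1):
            for x in range(S):
                costs = [move_cost * abs(a)
                         + congestion * m[t, x]
                         + V[t + 1, (x + a) % S]
                         for a in actions]
                policy[t, x] = int(np.argmin(costs))
                V[t, x] = min(costs)
        # --- forward Kolmogorov pass under the optimal policy ---
        m_new = np.zeros((T + 1, S))
        m_new[0] = m[0]
        for t in range(T):
            for x in range(S):
                a = actions[policy[t, x]]
                m_new[t + 1, (x + a) % S] += m_new[t, x]
        # --- damped fixed-point update on the flow ---
        m = damping * m_new + (1 - damping) * m
    return m, V

m, V = solve_mfg()
# mass is conserved at every time step, so each row is a distribution
print(np.allclose(m.sum(axis=1), 1.0))  # True
```

At a fixed point, no individual agent can lower its cost by deviating from the policy generated by the flow it was computed against, which is the finite-state analogue of the Nash equilibrium characterized by the coupled PDE system.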