Gibbs measure

In mathematics, the Gibbs measure, named after Josiah Willard Gibbs, is a probability measure frequently seen in many problems of probability theory and statistical mechanics. It is the measure associated with the canonical ensemble. When the energy function can be written as a sum of local terms, the Gibbs measure has the Markov property (a certain kind of statistical independence); conversely, by the Hammersley–Clifford theorem, any strictly positive distribution with the Markov property can be represented as a Gibbs measure for a suitable energy function. This leads to its widespread appearance in many problems outside of physics, such as Hopfield networks, Markov networks, and Markov logic networks. In addition, the Gibbs measure is the unique measure that maximizes the entropy for a given expected energy; thus, the Gibbs measure underlies maximum entropy methods and the algorithms derived therefrom.

The measure gives the probability of the system X being in state x (equivalently, of the random variable X having value x) as

P(X=x) = \frac{1}{Z(\beta)} \exp ( - \beta E(x)).

Here, E(x) is a function from the space of states to the real numbers; in physics applications, E(x) is interpreted as the energy of the configuration x. The parameter β is a free parameter; in physics, it is the inverse temperature. The normalizing constant Z(β) is the partition function.
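
For intuition, here is a minimal numerical sketch of this formula on a finite state space, written in Python; the energy values and the choices of β below are purely illustrative and are not part of the definition.

    import numpy as np

    def gibbs_distribution(energies, beta):
        """Return P(X = x) = exp(-beta * E(x)) / Z(beta) for each state x."""
        weights = np.exp(-beta * np.asarray(energies, dtype=float))
        Z = weights.sum()                  # partition function Z(beta)
        return weights / Z

    energies = [0.0, 1.0, 2.0, 5.0]        # hypothetical E(x) for four states
    for beta in (0.1, 1.0, 10.0):
        print(beta, gibbs_distribution(energies, beta))

As β approaches 0 the resulting distribution approaches the uniform distribution on the state space; as β grows, it concentrates on the states of minimum energy.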

Markov property

An example of the Markov property of the Gibbs measure can be seen in the Ising model. Here, the probability of a given spin σ_k being in state s is, in principle, dependent on all other spins in the model; thus one writes

P(\sigma_k = s|\sigma_j,\, j\ne k)

for this probability. However, the interactions in the Ising model are nearest-neighbor interactions, and thus, one actually has

P(\sigma_k = s|\sigma_j,\, j\ne k) = P(\sigma_k = s|\sigma_j,\, j\in N_k)

where N_k is the set of nearest neighbors of site k. That is, the probability at site k depends only on the spins at its nearest neighbors. This last equation is in the form of a Markov-type statistical independence. Measures with this property are sometimes called Markov random fields. More strongly, the converse is also true: any positive probability distribution (non-zero everywhere) having the Markov property can be represented as a Gibbs measure with an appropriate energy function;[1] this is the Hammersley–Clifford theorem.
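
The following is a minimal sketch, in Python, of this conditional probability for the nearest-neighbor Ising Hamiltonian used later in the article; the values of J, h, β and of the neighboring spins are illustrative assumptions.

    import math

    def spin_conditional(neighbor_spins, J=1.0, h=0.0, beta=1.0):
        """P(sigma_k = +1 | sigma_j, j in N_k) for the Ising model."""
        local_field = J * sum(neighbor_spins) + h       # only the spins in N_k enter
        return 1.0 / (1.0 + math.exp(-2.0 * beta * local_field))

    # On Z^2 a site has four nearest neighbors; all other spins are irrelevant.
    print(spin_conditional([+1, +1, -1, +1]))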

Gibbs measure on lattices

What follows is a formal definition for the special case of a random field on a countable lattice. The idea of a Gibbs measure is, however, much more general than this.

The definition of a Gibbs random field on a lattice requires some terminology:

  • The lattice: A countable set \mathbb{L}.
  • The single-spin space: A probability space (S,\mathcal{S},\lambda).
  • The configuration space: (\Omega, \mathcal{F}), where \Omega = S^{\mathbb{L}} and \mathcal{F} = \mathcal{S}^{\mathbb{L}}.
  • Given a configuration ω ∈ Ω and a subset \Lambda \subset \mathbb{L}, the restriction of ω to Λ is \omega_\Lambda = (\omega(t))_{t\in\Lambda}. If \Lambda_1\cap\Lambda_2=\emptyset and \Lambda_1\cup\Lambda_2=\mathbb{L}, then the configuration \omega_{\Lambda_1}\omega_{\Lambda_2} is the configuration whose restrictions to \Lambda_1 and \Lambda_2 are \omega_{\Lambda_1} and \omega_{\Lambda_2}, respectively. These will be used to define cylinder sets, below.
  • The set \mathcal{L} of all finite subsets of \mathbb{L}.
  • For each subset \Lambda\subset\mathbb{L}, \mathcal{F}_\Lambda is the σ-algebra generated by the family of functions (\sigma(t))_{t\in\Lambda}, where \sigma(t)(\omega)=\omega(t). This σ-algebra is just the algebra of cylinder sets on the lattice.
  • The potential: A family \Phi=(\Phi_A)_{A\in\mathcal{L}} of functions ΦA : Ω → R such that
    1. For each A\in\mathcal{L}, \Phi_A is \mathcal{F}_A-measurable.
    2. For all \Lambda\in\mathcal{L} and ω ∈ Ω, the following series exists:
H_\Lambda^\Phi(\omega) = \sum_{A\in\mathcal{L}, A\cap\Lambda\neq\emptyset} \Phi_A(\omega).
  • The Hamiltonian in \Lambda\in\mathcal{L} with boundary conditions \bar\omega, for the potential Φ, is defined by
H_\Lambda^\Phi(\omega | \bar\omega) = H_\Lambda^\Phi \left(\omega_\Lambda\bar\omega_{\Lambda^c} \right )
where \Lambda^c = \mathbb{L}\setminus\Lambda.
  • The partition function in \Lambda\in\mathcal{L} with boundary conditions \bar\omega and inverse temperature β > 0 (for the potential Φ and λ) is defined by
Z_\Lambda^\Phi(\bar\omega) = \int \lambda^\Lambda(\mathrm{d}\omega) \exp(-\beta H_\Lambda^\Phi(\omega | \bar\omega)),
where
\lambda^\Lambda(\mathrm{d}\omega) = \prod_{t\in\Lambda}\lambda(\mathrm{d}\omega(t))
is the product measure on Λ.
A potential Φ is λ-admissible if Z_\Lambda^\Phi(\bar\omega) is finite for all \Lambda\in\mathcal{L}, \bar\omega\in\Omega and β > 0.
A probability measure μ on (\Omega,\mathcal{F}) is a Gibbs measure for a λ-admissible potential Φ if it satisfies the Dobrushin–Lanford–Ruelle (DLR) equations
\int \mu(\mathrm{d}\bar\omega)Z_\Lambda^\Phi(\bar\omega)^{-1} \int\lambda^\Lambda(\mathrm{d}\omega) \exp(-\beta H_\Lambda^\Phi(\omega | \bar\omega)) 1_A(\omega_\Lambda\bar\omega_{\Lambda^c}) = \mu(A),
for all A\in\mathcal{F} and \Lambda\in\mathcal{L}.
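
To make these objects concrete, here is a minimal sketch, in Python, of the normalized kernel exp(-β H_Λ^Φ(ω | ω̄)) / Z_Λ^Φ(ω̄) appearing inside the DLR equations, for a one-dimensional nearest-neighbor Ising chain; the volume Λ, the boundary condition and the parameters J, h, β are illustrative assumptions.

    import itertools, math

    J, h, beta = 1.0, 0.0, 0.5
    volume = [2, 3, 4]                     # a finite Lambda inside Z
    boundary = {1: +1, 5: -1}              # \bar\omega restricted to Lambda^c

    def hamiltonian(omega_in_volume):
        """H_Lambda(omega | boundary): sum of Phi_A over finite A meeting Lambda."""
        full = {**boundary, **omega_in_volume}                # omega_Lambda \bar\omega_{Lambda^c}
        energy = 0.0
        for t in volume:
            energy += -h * full[t]                            # single-site terms
            for s in (t - 1, t + 1):                          # nearest-neighbor pairs
                if s in full and (s not in volume or s > t):  # count each pair once
                    energy += -J * full[t] * full[s]
        return energy

    configs = list(itertools.product((-1, +1), repeat=len(volume)))
    weights = [math.exp(-beta * hamiltonian(dict(zip(volume, c)))) for c in configs]
    Z = sum(weights)                       # Z_Lambda(boundary)
    for c, w in zip(configs, weights):
        print(c, w / Z)                    # conditional law of omega_Lambda given the boundary

A Gibbs measure is then one whose conditional distributions inside every finite volume, given the configuration outside, coincide with this kernel.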

An example

To help understand the above definitions, here are the corresponding quantities in the important example of the Ising model with nearest-neighbor interactions (coupling constant J) and a magnetic field (h), on \mathbf{Z}^d:

  • The lattice is simply \mathbb{L} = \mathbf{Z}^d.
  • The single-spin space is S = {−1, 1}.
  • The potential is given by
\Phi_A(\omega) = \begin{cases}
-J\,\omega(t_1)\omega(t_2) & \mathrm{if\ } A=\{t_1,t_2\} \mathrm{\ with\ } \|t_2-t_1\|_1 = 1 \\
-h\,\omega(t) & \mathrm{if\ } A=\{t\}\\
0 & \mathrm{otherwise}
\end{cases}
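
As an illustration, here is a minimal sketch, in Python, of this potential; the values of J and h and the two-site test configuration on Z^2 are illustrative, with sites represented as integer tuples.

    def phi(A, omega, J=1.0, h=0.5):
        """Phi_A(omega) for the nearest-neighbor Ising potential with field h."""
        sites = sorted(A)
        if len(sites) == 1:                # A = {t}: single-site (field) term
            return -h * omega[sites[0]]
        if len(sites) == 2:                # A = {t1, t2} with ||t2 - t1||_1 = 1
            t1, t2 = sites
            if sum(abs(a - b) for a, b in zip(t1, t2)) == 1:
                return -J * omega[t1] * omega[t2]
        return 0.0                         # every other A contributes nothing

    omega = {(0, 0): +1, (0, 1): -1}       # spins at two sites of Z^2
    print(phi([(0, 0)], omega))            # -h * omega((0,0))
    print(phi([(0, 0), (0, 1)], omega))    # -J * omega((0,0)) * omega((0,1))
    print(phi([(0, 0), (5, 5)], omega))    # 0: not a nearest-neighbor pair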

References

  1. Kindermann, Ross; Snell, J. Laurie (1980). Markov Random Fields and Their Applications. American Mathematical Society. ISBN 0-8218-5001-6.

Further reading

  • Georgii, H.-O. (2011) [1988]. Gibbs Measures and Phase Transitions (2nd ed.). Berlin: de Gruyter. ISBN 978-3-11-025029-9.