Ergodicity

From Wikipedia, the free encyclopedia
For other uses, see Ergodic (disambiguation).

In mathematics, the term ergodic is used to describe a dynamical system which, broadly speaking, has the same behavior averaged over time as averaged over the space of all the system's states (phase space). In physics the term is used to imply that a system satisfies the ergodic hypothesis of thermodynamics.

In statistics, the term describes a random process for which the time average of one sequence of events is the same as the ensemble average. In other words, for a Markov chain, as the number of steps n increases, the probability distribution at step n converges to a limiting distribution that is independent of the distribution at the initial step 0 (Feller, 1971, p. 271).[1]

Etymology

The term "ergodic" was derived from the Greek words έργον (ergon: "work") and οδός (odos: "path" or "way"). It was chosen by Boltzmann while he was working on a problem in statistical mechanics.[2]

Formal definition

Let $(X, \Sigma, \mu)$ be a probability space, and let $T : X \to X$ be a measure-preserving transformation. We say that $T$ is ergodic with respect to $\mu$ (or alternatively that $\mu$ is ergodic with respect to $T$) if one of the following equivalent statements is true:[3]

  • for every $E \in \Sigma$ with $T^{-1}(E) = E$, either $\mu(E) = 0$ or $\mu(E) = 1$;
  • for every $E \in \Sigma$ with $\mu(T^{-1}(E) \,\triangle\, E) = 0$, we have $\mu(E) = 0$ or $\mu(E) = 1$ (where $\triangle$ denotes the symmetric difference);
  • for every $E \in \Sigma$ of positive measure, $\mu\bigl(\bigcup_{n=1}^{\infty} T^{-n}E\bigr) = 1$;
  • for any two sets $E, H \in \Sigma$ of positive measure, there exists $n > 0$ such that $\mu((T^{-n}E) \cap H) > 0$;
  • every measurable function $f : X \to \mathbb{R}$ with $f \circ T = f$ is almost surely constant.
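The last characterization has a well-known numerical counterpart: by the Birkhoff ergodic theorem, for an ergodic transformation the time average of an observable along almost every orbit converges to its space average. A minimal sketch (in Python, not from the article) using the irrational rotation $T(x) = x + \alpha \pmod 1$ on $[0, 1)$ with Lebesgue measure, a standard example of an ergodic map:

```python
import math

# Irrational rotation T(x) = x + alpha (mod 1) on [0, 1) with Lebesgue
# measure is a classic ergodic transformation.  By the Birkhoff ergodic
# theorem, the time average of an observable f along an orbit converges
# to the space average of f for almost every starting point.
alpha = math.sqrt(2) % 1.0                      # irrational rotation angle
f = lambda x: math.cos(2 * math.pi * x) ** 2    # observable; its integral over [0,1) is 1/2

x, n, total = 0.1, 100_000, 0.0
for _ in range(n):
    total += f(x)
    x = (x + alpha) % 1.0

time_avg = total / n
space_avg = 0.5          # exact value of the integral of cos^2(2*pi*x) over [0, 1)
print(abs(time_avg - space_avg))   # small, and shrinks as n grows
```

The starting point 0.1 and the observable $\cos^2(2\pi x)$ are arbitrary illustrative choices; any other starting point and any integrable observable would exhibit the same convergence.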

Measurable flows

These definitions have natural analogues for the case of measurable flows and, more generally, measure-preserving semigroup actions. Let $\{T^t\}$ be a measurable flow on $(X, \Sigma, \mu)$. An element $A$ of $\Sigma$ is invariant mod 0 under $\{T^t\}$ if

$\mu(T^t(A) \,\triangle\, A) = 0$

for each $t \in \mathbb{R}$. Measurable sets invariant mod 0 under a flow or a semigroup action form the invariant subalgebra of $\Sigma$, and the corresponding measure-preserving dynamical system is ergodic if the invariant subalgebra is the trivial $\sigma$-algebra consisting of the sets of measure 0 and their complements in $X$.

Markov chains

In a Markov chain, a state i is said to be ergodic if it is aperiodic and positive recurrent (a state is recurrent if, starting from it, the chain returns to it with probability 1, and positive recurrent if the expected return time is also finite; a state that can never be left once entered is called "absorbing"). If all states in a Markov chain are ergodic, then the chain is said to be ergodic. A sufficient condition is that there is a strictly positive probability of passing from any state to any other state in one step (Markov's theorem).
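The one-step condition above can be checked numerically: for a transition matrix with strictly positive entries, the state distribution converges to a unique stationary distribution regardless of the initial distribution. A small sketch (in Python, not from the article; the matrix below is an arbitrary illustrative example):

```python
# A transition matrix whose entries are all strictly positive; each row
# sums to 1.  Such a chain is ergodic, so iterating the chain from any
# initial distribution converges to the same stationary distribution.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

def step(dist, P):
    """One step of the chain: dist_{n+1}[j] = sum_i dist_n[i] * P[i][j]."""
    k = len(P)
    return [sum(dist[i] * P[i][j] for i in range(k)) for j in range(k)]

def limit(dist, P, n=200):
    """Iterate the chain n steps from the given initial distribution."""
    for _ in range(n):
        dist = step(dist, P)
    return dist

a = limit([1.0, 0.0, 0.0], P)   # start concentrated on state 0
b = limit([0.0, 0.0, 1.0], P)   # start concentrated on state 2
print(max(abs(x - y) for x, y in zip(a, b)))   # effectively 0: the limits agree
```

Both runs converge to the same stationary distribution, illustrating the independence from the initial step described in the statistics paragraph above.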

Examples in electronics

An ergodic process is one whose ensemble average equals its time average. Every resistor exhibits thermal noise, whose magnitude depends on the temperature. Take N resistors (with N very large) and record the noise voltage across each one over a long period; each resistor yields one waveform, and the N waveforms together form an ensemble. Averaging a single waveform over its duration gives the time average for that resistor. Fixing a particular instant of time and averaging the voltages across all N waveforms gives the ensemble average at that instant. If the ensemble average and the time average are the same, the process is ergodic.
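The thought experiment above can be simulated. A toy sketch (in Python, not from the article), under the simplifying assumption that the thermal noise of each resistor is zero-mean Gaussian, stationary, and independent across resistors:

```python
import random

# Toy model of the resistor thought experiment: N independent resistors,
# each producing zero-mean Gaussian "thermal noise" sampled at T instants.
random.seed(0)
N, T = 500, 2000
ensemble = [[random.gauss(0.0, 1.0) for _ in range(T)] for _ in range(N)]

# Time average: average one resistor's waveform over all its samples.
time_avg = sum(ensemble[0]) / T

# Ensemble average: average all resistors' voltages at one fixed instant.
ensemble_avg = sum(wave[0] for wave in ensemble) / N

print(time_avg, ensemble_avg)   # both close to the true mean, 0
```

For this stationary model both averages estimate the same underlying mean, which is the behavior the ergodic hypothesis asserts; a process whose statistics drift over time would show a discrepancy between the two.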

Ergodic decomposition

Conceptually, ergodicity of a dynamical system is a certain irreducibility property, akin to the notions of irreducible representation in algebra and prime number in arithmetic. A general measure-preserving transformation or flow on a Lebesgue space admits a canonical decomposition into its ergodic components, each of which is ergodic.

Notes

  1. ^ Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Vol. 2. Wiley.
  2. ^ Walters 1982, §0.1, p. 2
  3. ^ Walters 1982, §1.5, p. 27

References

  • Walters, Peter (1982). An Introduction to Ergodic Theory. Springer. ISBN 0-387-95152-0.
  • Brin, Michael; Stuck, Garrett (2002). Introduction to Dynamical Systems. Cambridge University Press. ISBN 0-521-80841-3.
  • Birkhoff, G. D. (1931). "Proof of the ergodic theorem". Proceedings of the National Academy of Sciences of the United States of America, 17 (12), 656.
  • Alaoglu, L.; Birkhoff, G. (1940). "General ergodic theorems". The Annals of Mathematics, 41 (2), 293–309.
