Harris chain

In the mathematical study of stochastic processes, a Harris chain is a Markov chain that returns to a particular part of its state space an unbounded number of times.[1] Harris chains are regenerative processes and are named after the American mathematician Theodore Harris.

Definition

A Markov chain {Xn} on state space Ω with stochastic kernel K is a Harris chain[2] if there exist A, B ⊆ Ω, ε > 0, and a probability measure ρ with ρ(B) = 1 such that

  1. If τA := inf {n ≥ 0 : Xn ∈ A}, then P(τA < ∞ | X0 = x) > 0 for all x ∈ Ω.
  2. If x ∈ A and C ⊆ B, then K(x, C) ≥ ερ(C).

In essence, this technical definition can be rephrased as follows: given two points x1 and x2 in A, there is at least an ε chance that they can be moved together to the same point at the next time step.

Another way to say it is the following: suppose that x and y are both in A. Then at the next time step a Bernoulli random variable with parameter ε is drawn first. If it comes up one, both points move to a single point chosen according to ρ. If it comes up zero, the points move independently, with x moving according to P(Xn+1 ∈ C | Xn = x) = (K(x, C) − ερ(C))/(1 − ε) and y moving according to P(Yn+1 ∈ C | Yn = y) = (K(y, C) − ερ(C))/(1 − ε).
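
One step of this split transition can be sketched in Python. This is only an illustration: the sampler functions sample_rho and sample_residual are assumed helpers for drawing from ρ and from the normalized residual kernel, and are not part of the definition itself.

import random

def coupled_step(x, y, eps, sample_rho, sample_residual):
    # One transition of the split chain for two points x, y in A.
    # sample_rho():       draws a point from the measure rho
    # sample_residual(z): draws a point from (K(z, .) - eps*rho(.)) / (1 - eps)
    # Both samplers are assumed to be supplied externally.
    if random.random() < eps:
        z = sample_rho()          # the Bernoulli came up one:
        return z, z               # both points coalesce at the same rho-sample
    else:
        # the Bernoulli came up zero: move independently via the residual kernel
        return sample_residual(x), sample_residual(y)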

Examples

Example 1: Countable state space

Given a countable set S and a pair (A′, B′) satisfying (1) and (2) in the above definition, we can without loss of generality take B′ to be a single point b. Upon setting A = {b}, pick c such that K(b, c) > 0 and set B = {c}. Then, (1) and (2) hold with A and B as singletons.
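
As a concrete (purely hypothetical) illustration with a finite state space, taking ε = K(b, c) and ρ the point mass at c reduces condition (2) to K(b, {c}) ≥ ε, which holds by construction. A minimal numerical check with an assumed 3-state transition matrix:

import numpy as np

# Assumed 3-state transition matrix (rows sum to 1); purely illustrative.
K = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

b, c = 0, 1                 # choose b, then c with K(b, c) > 0
eps = K[b, c]               # epsilon = K(b, c)
rho = np.zeros(3)
rho[c] = 1.0                # rho = point mass at c, so rho(B) = 1 with B = {c}

# Condition (2): for x in A = {b} and C a subset of B = {c},
# K(x, C) >= eps * rho(C); the only nontrivial C is {c} itself.
assert K[b, c] >= eps * rho[c]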

Example 2: Chains with continuous densities

Let {Xn}, Xn ∈ Rd, be a Markov chain with a kernel that is absolutely continuous with respect to Lebesgue measure:

K(x, dy) = K(x, y) dy

such that K(x, y) is a continuous function.

Pick (x0, y0) such that K(x0, y0) > 0, and let A and B be open sets containing x0 and y0 respectively that are sufficiently small so that K(x, y) ≥ ε > 0 on A × B. Letting ρ(C) = |B ∩ C|/|B|, where |B| is the Lebesgue measure of B, we have that (2) in the above definition holds with ε|B| in place of ε. If (1) holds as well, then {Xn} is a Harris chain.
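
A numerical sketch of this construction follows. The Gaussian autoregressive kernel, the sets A and B, and the grid search are all assumed choices made for illustration, not part of the article:

import numpy as np
from scipy.stats import norm

# Assumed kernel: X_{n+1} = 0.5 * X_n + standard normal noise,
# so K(x, y) is the density of N(0.5 x, 1) at y, continuous and positive.
def density(x, y):
    return norm.pdf(y, loc=0.5 * x, scale=1.0)

# Small open sets A and B around x0 = 0 and y0 = 0.
A = (-0.1, 0.1)
B = (-0.1, 0.1)

# Crude grid approximation of a positive lower bound for K(x, y) on A x B.
xs = np.linspace(A[0], A[1], 50)
ys = np.linspace(B[0], B[1], 50)
eps = min(density(x, y) for x in xs for y in ys)
print("approximate density lower bound on A x B:", eps)

# With rho(C) = |B ∩ C| / |B| (the uniform distribution on B),
# condition (2) holds with eps * |B| in place of eps.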

Reducibility and periodicity

In the following, R := inf {n ≥ 1 : Xn ∈ A}; i.e. R is the first time after time 0 that the process enters region A.

Definition: If, for every initial distribution L(X0), P(R < ∞ | X0 ∈ A) = 1, then the Harris chain is called recurrent.

Definition: A recurrent Harris chain Xn is aperiodic if there exists N such that, for all n ≥ N and every initial distribution L(X0), P(Xn ∈ A | X0 ∈ A) > 0.

Theorem: Let Xn be an aperiodic recurrent Harris chain with stationary distribution π. If P(R < ∞ | X0 = x) = 1, then as n → ∞, distTV(L(Xn | X0 = x), π) → 0.
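
The convergence in total variation can be illustrated numerically. In the sketch below, the Gaussian autoregressive chain, the starting point, and the histogram-based distance estimate are all assumptions made for illustration; the printed estimates shrink toward the Monte Carlo noise floor as n grows:

import numpy as np

rng = np.random.default_rng(0)

a = 0.5            # assumed AR(1) coefficient: X_{n+1} = a * X_n + N(0, 1) noise
x0 = 10.0          # fixed starting point x
n_chains = 100_000 # independent copies used to approximate L(Xn | X0 = x0)

# The stationary distribution of this chain is N(0, 1 / (1 - a**2)).
stat_sd = np.sqrt(1.0 / (1.0 - a**2))

x = np.full(n_chains, x0)
bins = np.linspace(-8.0, 8.0, 81)

for n in range(1, 31):
    x = a * x + rng.standard_normal(n_chains)
    if n % 5 == 0:
        # Histogram-based (approximate) total variation distance to stationarity.
        emp, _ = np.histogram(x, bins=bins)
        ref, _ = np.histogram(rng.standard_normal(n_chains) * stat_sd, bins=bins)
        tv = 0.5 * np.abs(emp / n_chains - ref / n_chains).sum()
        print(f"n = {n:2d}, estimated TV distance ~ {tv:.3f}")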

References

  1. ^ doi:10.1007/0-387-21525-5_7.
  2. ^ R. Durrett. Probability: Theory and Examples. Thomson, 2005. ISBN 0-534-42441-4.