Harris chain
Revision as of 14:30, 11 June 2013
In the mathematical study of stochastic processes, a Harris chain is a Markov chain where the chain returns to a particular part of the state space an unbounded number of times.[1] Harris chains are regenerative processes and are named after Theodore Harris.
Definition
A Markov chain {Xn} on state space Ω with stochastic kernel K is a Harris chain[2] if there exist A, B ⊆ Ω, ε > 0, and a probability measure ρ with ρ(B) = 1 such that
- If τA := inf {n ≥ 0 : Xn ∈ A}, then P(τA < ∞ | X0 = x) > 0 for all x ∈ Ω.
- If x ∈ A and C ⊆ B, then K(x, C) ≥ ερ(C).
In essence, this technical definition can be rephrased as follows: given two points x1 and x2 in A, there is at least an ε chance that the two chains started there can be moved together to the same point at the next time step.
Another way to say it is the following: suppose that x and y are both in A. At the next time step, a single Bernoulli coin with parameter ε is flipped. If it comes up 1, both points move to the same location, chosen according to ρ. If it comes up 0, the points move independently, with x moving according to the normalized residual kernel P(Xn+1 ∈ C | Xn = x) = (K(x, C) − ερ(C))/(1 − ε) and y moving according to P(Yn+1 ∈ C | Yn = y) = (K(y, C) − ερ(C))/(1 − ε). (Condition (2) guarantees that these residuals are nonnegative, and dividing by 1 − ε makes them probability measures.)
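This splitting construction can be illustrated numerically. The sketch below uses a hypothetical 3-state transition matrix and takes A to be the whole state space (both are assumptions for illustration, not from the article); with probability ε the two copies jump together to a point drawn from ρ, and otherwise each moves by its normalized residual kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state chain; rows of K are the distributions K(x, .).
K = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Minorization K(x, .) >= eps * rho(.) for every x (A = whole space):
rho_raw = K.min(axis=0)   # column-wise minima of the kernel
eps = rho_raw.sum()       # minorization constant
rho = rho_raw / eps       # the probability measure rho

def coupled_step(x, y):
    """One step of the split-chain coupling described above."""
    if rng.random() < eps:                # Bernoulli(eps) coin
        z = rng.choice(3, p=rho)          # both chains move together
        return z, z
    # otherwise move independently via the normalized residual kernels
    res_x = (K[x] - eps * rho) / (1 - eps)
    res_y = (K[y] - eps * rho) / (1 - eps)
    return rng.choice(3, p=res_x), rng.choice(3, p=res_y)

x, y, n = 0, 2, 0
while x != y:                             # run until the chains couple
    x, y = coupled_step(x, y)
    n += 1
print("chains coupled after", n, "step(s)")
```

Once the coin comes up 1, the two copies coincide forever after; this is the coupling that underlies the regenerative structure of Harris chains.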
Examples
Example 1: Countable state space
Given a countable set S and a pair (A′, B′ ) satisfying (1) and (2) in the above definition, we can without loss of generality work with singletons. Since ρ is a probability measure on the countable set B′, there is a point b ∈ B′ with ρ({b}) > 0, and by (1) and (2) the chain reaches b with positive probability from every starting state. Upon setting A = {b}, pick c such that K(b, c) > 0 and set B = {c}, ε = K(b, c), and ρ = δc, the point mass at c. Then (1) and (2) hold with A and B as singletons.
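For a concrete check, the singleton version of condition (2) can be verified directly on a small kernel. The 3-state matrix below is a hypothetical example, not from the article; with A = {b}, B = {c}, ρ = δc and ε = K(b, c), condition (2) reduces to K(b, C) ≥ ερ(C) for the two subsets C of B.

```python
import numpy as np

# Hypothetical countable-state kernel, truncated to 3 states.
K = np.array([[0.1, 0.9, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.8, 0.2]])

b, c = 0, 1          # A = {b}, B = {c}
eps = K[b, c]        # eps = K(b, c)

# rho is the point mass at c, so rho(C) = 1 if c is in C, else 0.
for C in ([], [c]):                    # all subsets of B = {c}
    rho_C = 1.0 if c in C else 0.0
    K_C = K[b, C].sum()                # K(b, C)
    assert K_C >= eps * rho_C          # condition (2)
print("condition (2) holds with eps =", eps)
```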
Example 2: Chains with continuous densities
Let {Xn}, Xn ∈ Rd be a Markov chain with a kernel that is absolutely continuous with respect to Lebesgue measure:
- K(x, dy) = K(x, y) dy
such that K(x, y) is a continuous function.
Pick (x0, y0) such that K(x0, y0) > 0, and let A and B be open sets containing x0 and y0 respectively that are sufficiently small so that K(x, y) ≥ δ > 0 on A × B. Letting ρ(C) = |B ∩ C|/|B|, where |B| is the Lebesgue measure of B, we have for x ∈ A and C ⊆ B that K(x, C) ≥ δ|C| = δ|B|ρ(C), so (2) in the above definition holds with ε = δ|B|. If (1) holds as well, then {Xn} is a Harris chain.
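As a numerical sketch, take the hypothetical autoregressive kernel Xn+1 = Xn/2 + N(0, 1), so K(x, y) is the normal density with mean x/2, which is continuous and everywhere positive (this kernel and the choice of sets are assumptions for illustration). A minorization constant can be estimated as the infimum of K on A × B times the Lebesgue measure of B:

```python
import numpy as np

# Hypothetical kernel: X_{n+1} = X_n / 2 + N(0, 1), so
# K(x, y) = exp(-(y - x/2)**2 / 2) / sqrt(2*pi).
def K(x, y):
    return np.exp(-(y - x / 2) ** 2 / 2) / np.sqrt(2 * np.pi)

# Take A = B = (-1, 1) around x0 = y0 = 0, so |B| = 2.
grid = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(grid, grid)
delta = K(X, Y).min()       # approximate inf of K on A x B
eps = delta * 2.0           # eps = delta * |B|, with rho uniform on B
print("minorization constant eps ≈", round(eps, 4))
```

Because the density is continuous and positive at (x0, y0), the infimum over small enough boxes stays bounded away from zero, which is exactly what the argument above uses.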
Reducibility and periodicity
In the following, R := inf {n ≥ 1 : Xn ∈ A}; i.e. R is the first time after time 0 that the process enters region A.
Definition: If P(R < ∞ | X0 ∈ A) = 1 for every initial distribution L(X0), then the Harris chain is called recurrent.
Definition: A recurrent Harris chain Xn is aperiodic if there exists N such that for all n ≥ N and every initial distribution L(X0), P(Xn ∈ A | X0 ∈ A) > 0.
Theorem: Let Xn be an aperiodic recurrent Harris chain with stationary distribution π. If P(R < ∞ | X0 = x) = 1, then as n → ∞, the total variation distance distTV(L(Xn | X0 = x), π) → 0.
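For a finite chain the theorem can be observed directly. The sketch below uses a hypothetical aperiodic 3-state kernel (an assumption for illustration): it computes the stationary distribution π as the left eigenvector of K for eigenvalue 1 and tracks the total variation distance of L(Xn | X0 = 0) from π.

```python
import numpy as np

# Hypothetical aperiodic recurrent finite chain.
K = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution: left eigenvector of K for eigenvalue 1.
w, v = np.linalg.eig(K.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()                         # normalize to a probability vector

mu = np.array([1.0, 0.0, 0.0])         # L(X_0): start at state 0
for n in range(1, 21):
    mu = mu @ K                        # L(X_n | X_0 = 0)
    tv = 0.5 * np.abs(mu - pi).sum()   # total variation distance
print("TV distance after 20 steps:", tv)
```

The distance decays geometrically here (at the rate of the second-largest eigenvalue modulus), consistent with the convergence the theorem asserts.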
References
- ^ doi:10.1007/0-387-21525-5_7.
- ^ R. Durrett. Probability: Theory and Examples. Thomson, 2005. ISBN 0-534-42441-4.