Markov chains on a measurable state space

From Wikipedia, the free encyclopedia

Revision as of 13:06, 4 November 2015

A Markov chain on a measurable state space is a time-homogeneous Markov chain in discrete time whose state space is a general measurable space.

History

The definition of Markov chains has evolved during the 20th century. In 1953 the term Markov chain was used for stochastic processes with a discrete or continuous index set, living on a countable or finite state space; see Doob[1] or Chung[2]. Since the late 20th century it has become more popular to consider a Markov chain as a stochastic process with a discrete index set, living on a measurable state space[3][4][5].

Definition

Denote with <math>(E, \Sigma)</math> a measurable space and with <math>p</math> a Markov kernel with source and target <math>(E, \Sigma)</math>. A stochastic process <math>(X_n)_{n \in \mathbb{N}}</math> on <math>(\Omega, \mathcal{F}, \mathbb{P})</math> is called a time-homogeneous Markov chain with Markov kernel <math>p</math> and start distribution <math>\mu</math> if

:<math>\mathbb{P}[X_0 \in A_0, \, X_1 \in A_1, \dots, X_n \in A_n] = \int_{A_0} \dots \int_{A_{n-1}} p(y_{n-1}, A_n) \, p(y_{n-2}, dy_{n-1}) \dots p(y_0, dy_1) \, \mu(dy_0)</math>

is satisfied for any <math>n \in \mathbb{N}</math> and <math>A_0, \dots, A_n \in \Sigma</math>. One can construct for any Markov kernel and any probability measure an associated Markov chain[4].
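On a finite state space the defining product formula becomes a finite sum, which makes it easy to check numerically. The following is a minimal sketch under that assumption; the two-state kernel matrix and start distribution are illustrative choices, not taken from the article.

```python
import numpy as np

# Finite-state sketch (illustrative assumption): on E = {0, 1} a Markov
# kernel is a row-stochastic matrix with p[x][y] = p(x, {y}), and the
# start distribution mu is a probability vector.
p = np.array([[0.7, 0.3],
              [0.4, 0.6]])
mu = np.array([0.5, 0.5])

def path_probability(A0, A1, A2):
    """P[X_0 in A0, X_1 in A1, X_2 in A2] via the iterated kernel
    integral, which on a finite state space is a triple sum."""
    return sum(mu[y0] * p[y0, y1] * p[y1, y2]
               for y0 in A0 for y1 in A1 for y2 in A2)
```

Taking every set equal to the whole space must give total probability 1, which is a quick sanity check on the formula.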

Remark about Markov kernel integration

For any measure <math>\mu \colon \Sigma \to [0, \infty]</math> we denote for a <math>\mu</math>-integrable function <math>f</math> the Lebesgue integral as <math>\int_E f(x) \, \mu(dx)</math>. For the measure <math>\nu_x \colon \Sigma \to [0, \infty]</math> defined by <math>\nu_x(A) := p(x, A)</math> we used the following notation:

:<math>\int_E f(y) \, p(x, dy) := \int_E f(y) \, \nu_x(dy).</math>

Basic properties

Starting in a single point

If <math>\mu</math> is a Dirac measure in <math>x</math>, we denote for a Markov kernel <math>p</math> with starting distribution <math>\mu</math> the associated Markov chain as <math>(X_n)_{n \in \mathbb{N}}</math> on <math>(\Omega, \mathcal{F}, \mathbb{P}_x)</math> and the expectation value

:<math>\mathbb{E}_x[X] = \int_\Omega X(\omega) \, \mathbb{P}_x(d\omega)</math>

for a <math>\mathbb{P}_x</math>-integrable function <math>X</math>. By definition, we have then <math>\mathbb{P}_x[X_0 = x] = 1</math>.

We have for any measurable function <math>f \colon E \to [0, \infty]</math> the following relation[4]:

:<math>\int_E f(y) \, p(x, dy) = \mathbb{E}_x[f(X_1)].</math>
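The relation between the kernel integral and the expectation under the Dirac start can be illustrated on a finite state space: one side is a finite sum, the other a Monte Carlo average over simulated first steps. The matrix, the function <math>f</math>, and the sample size below are illustrative assumptions.

```python
import numpy as np

# Finite-state sketch (illustrative assumption): check
#   integral of f(y) p(x, dy)  ==  E_x[ f(X_1) ]
# by direct summation on the left and Monte Carlo sampling on the right.
rng = np.random.default_rng(0)
p = np.array([[0.7, 0.3],
              [0.4, 0.6]])
f = np.array([1.0, 5.0])   # a measurable function f: E -> R, as a vector
x = 0                      # the chain starts in the Dirac measure at x

# Left-hand side: the kernel integral reduces to a finite sum.
lhs = float(p[x] @ f)

# Right-hand side: simulate X_1 under P_x many times and average f(X_1).
samples = rng.choice(len(f), size=200_000, p=p[x])
rhs = float(f[samples].mean())
```

With 200,000 samples the Monte Carlo average agrees with the exact value 0.7·1 + 0.3·5 = 2.2 up to sampling error.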

Family of Markov kernels

For a Markov kernel <math>p</math> with starting distribution <math>\mu</math> one can introduce a family of Markov kernels <math>(p_n)_{n \in \mathbb{N}}</math> by

:<math>p_{n+1}(x, A) := \int_E p_n(y, A) \, p(x, dy)</math>

for <math>n \in \mathbb{N}</math> and <math>p_1 := p</math>. For the associated Markov chain <math>(X_n)_{n \in \mathbb{N}}</math> according to <math>p</math> and <math>\mu</math> one obtains

:<math>\mathbb{P}[X_0 \in A, \, X_n \in B] = \int_A p_n(x, B) \, \mu(dx).</math>
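On a finite state space the recursion for the family <math>(p_n)</math> turns into matrix multiplication, so the <math>n</math>-step kernel is simply the <math>n</math>-th matrix power. A sketch under that assumption (the matrix and start vector are illustrative):

```python
import numpy as np

# Finite-state sketch (illustrative assumption): the n-step kernels of
#   p_{n+1}(x, A) = integral of p_n(y, A) p(x, dy),  p_1 = p,
# are the matrix powers p^n, and
#   P[X_0 in A, X_n in B] = sum over x in A of p_n(x, B) mu(x).
p = np.array([[0.7, 0.3],
              [0.4, 0.6]])
mu = np.array([0.5, 0.5])

def p_n(n):
    """n-step Markov kernel via the recursion p_{n+1} = p @ p_n."""
    result = p
    for _ in range(n - 1):
        result = p @ result
    return result

def joint(A, B, n):
    """P[X_0 in A, X_n in B] as a finite sum over A x B."""
    return sum(mu[x] * p_n(n)[x, y] for x in A for y in B)
```

Comparing `p_n(n)` against `np.linalg.matrix_power(p, n)` confirms that the recursion reproduces the matrix powers.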

Construction of Markov kernels

For a given Markov kernel <math>p</math> and probability measure <math>\mu</math> one finds a time-homogeneous Markov chain <math>(X_n)_{n \in \mathbb{N}}</math> on <math>(\Omega, \mathcal{F}, \mathbb{P})</math> with

:<math>\mathbb{P}[X_0 \in A, \, X_1 \in B] = \int_A p(x, B) \, \mu(dx).</math>

If only the time-homogeneous Markov chain <math>(X_n)_{n \in \mathbb{N}}</math> on <math>(\Omega, \mathcal{F}, \mathbb{P})</math> is given, then one can obtain the Markov kernel <math>p</math> only <math>\mu</math>-[[almost everywhere|almost everywhere]][6]. This means that one can rebuild the Markov kernel only up to the start distribution. For example, if <math>\mu</math> is a Dirac measure in <math>x</math>, then we can rebuild from <math>(X_n)_{n \in \mathbb{N}}</math> on <math>(\Omega, \mathcal{F}, \mathbb{P}_x)</math> the Markov kernel <math>p</math> only at the point <math>x</math>.
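The recovery of the kernel from the chain can be illustrated empirically: sampling many first steps <math>(X_0, X_1)</math> under the Dirac start <math>\mathbb{P}_x</math> estimates <math>p(x, \cdot)</math>, but yields no information about the kernel at other points. The matrix and sample size below are illustrative assumptions.

```python
import numpy as np

# Finite-state sketch (illustrative assumption): rebuilding the kernel
# from the chain. Starting from the Dirac measure at x = 0, the law of
# (X_0, X_1) only determines p(x, .) at x = 0; the row p[1] stays
# invisible to this one-step data.
rng = np.random.default_rng(1)
p = np.array([[0.7, 0.3],
              [0.4, 0.6]])
x = 0

# Draw many independent copies of X_1 under P_x and estimate p(x, {y})
# by the empirical transition frequencies.
first_steps = rng.choice(2, size=100_000, p=p[x])
estimate = np.array([(first_steps == y).mean() for y in range(2)])
```

The estimate approximates the row `p[0]` up to sampling error, in line with the statement that the kernel is recovered only <math>\mu</math>-almost everywhere.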

One can show that one can embed any two random variables <math>X, Y \colon \Omega \to E</math> into the beginning of a Markov chain[6], i.e. there exists a Markov chain <math>(X_n)_{n \in \mathbb{N}}</math> on <math>(\Omega, \mathcal{F}, \mathbb{P}_\mu)</math> with

:<math>\mathbb{P}[X \in A, \, Y \in B] = \mathbb{P}_\mu[X_0 \in A, \, X_1 \in B].</math>
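In the finite case the embedding is explicit: take <math>\mu</math> as the law of <math>X</math> and the kernel as the conditional law of <math>Y</math> given <math>X</math>. A sketch under that assumption (the joint distribution below is an illustrative choice):

```python
import numpy as np

# Finite-state sketch (illustrative assumption): embedding a pair (X, Y)
# into the beginning of a chain. Given a joint table J[x, y] =
# P[X = x, Y = y], let mu be the law of X and let the kernel be the
# conditional law p(x, {y}) = P[Y = y | X = x]; then
# P[X in A, Y in B] = P_mu[X_0 in A, X_1 in B] by construction.
J = np.array([[0.1, 0.2],
              [0.3, 0.4]])
mu = J.sum(axis=1)        # law of X (assumed to charge every state)
p = J / mu[:, None]       # row-normalised conditional law of Y given X

def lhs(A, B):
    """P[X in A, Y in B] read off the joint table."""
    return sum(J[x, y] for x in A for y in B)

def rhs(A, B):
    """P_mu[X_0 in A, X_1 in B] computed from mu and the kernel."""
    return sum(mu[x] * p[x, y] for x in A for y in B)
```

The two sides agree for every choice of sets, which is exactly the embedding identity above.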

References

  1. ^ Joseph L. Doob: Stochastic Processes. New York: John Wiley & Sons, 1953.
  2. ^ Kai L. Chung: Markov Chains with Stationary Transition Probabilities. Second edition. Berlin: Springer-Verlag, 1974.
  3. ^ Sean Meyn and Richard L. Tweedie: Markov Chains and Stochastic Stability. 2nd edition, 2009.
  4. ^ a b c Daniel Revuz: Markov Chains. 2nd edition, 1984.
  5. ^ Rick Durrett: Probability: Theory and Examples. Fourth edition, 2005.
  6. ^ a b Adam Nielsen: Von Femtosekunden zu Minuten. Masterarbeit, Freie Universität Berlin, 2012.