Doob's martingale convergence theorems
In mathematics – specifically, in the theory of stochastic processes – Doob's martingale convergence theorems are a collection of results on the long-time limits of supermartingales, named after the American mathematician Joseph L. Doob.[1]
Statement of the theorems
In the following, (Ω, F, F∗, P), F∗ = (Ft)t ≥ 0, will be a filtered probability space and N : [0, +∞) × Ω → R will be a right-continuous supermartingale with respect to the filtration F∗; in other words, for all 0 ≤ s ≤ t < +∞,
E[Nt | Fs] ≤ Ns.
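As an illustration (an example added here, not part of the original statement): if B is a standard Brownian motion and c ≥ 0 is a constant, then Brownian motion with non-positive drift is a right-continuous supermartingale, since

\[
N_t = B_t - ct, \qquad
\mathbb{E}[N_t \mid \mathcal{F}_s] = B_s - ct = N_s - c(t - s) \le N_s
\quad \text{for all } 0 \le s \le t.
\]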
Doob's first martingale convergence theorem
Doob's first martingale convergence theorem provides a sufficient condition for the random variables Nt to have a limit as t → +∞ in a pointwise sense, i.e. for each ω in the sample space Ω individually.
For t ≥ 0, let Nt− = max(−Nt, 0) and suppose that
supt>0 E[Nt−] < +∞.
Then the pointwise limit
N(ω) = limt→+∞ Nt(ω)
exists and is finite for P-almost all ω ∈ Ω.[2]
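In particular, one standard special case (stated here for illustration; it is a routine consequence of the hypothesis rather than a statement from the original text) is that a non-negative supermartingale always satisfies the assumption, because its negative part vanishes:

\[
N_t \ge 0 \;\Longrightarrow\; N_t^- = 0 \;\Longrightarrow\; \sup_{t>0} \mathbb{E}[N_t^-] = 0 < +\infty,
\]

so every non-negative right-continuous supermartingale converges almost surely to a finite limit.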
Doob's second martingale convergence theorem
It is important to note that the convergence in Doob's first martingale convergence theorem is pointwise, not uniform, and is unrelated to convergence in mean square, or indeed in any Lp space. In order to obtain convergence in L1 (i.e., convergence in mean), one requires uniform integrability of the random variables Nt. By Chebyshev's inequality, convergence in L1 implies convergence in probability and convergence in distribution.
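A classical counterexample (included here for illustration; the process below is a standard textbook construction, not taken from the original article) shows that almost-sure convergence alone does not give convergence in L1. Starting from independent fair coin flips, define the "doubling" martingale

\[
M_0 = 1, \qquad
M_{n+1} =
\begin{cases}
2 M_n & \text{with probability } \tfrac{1}{2},\\[2pt]
0 & \text{with probability } \tfrac{1}{2},
\end{cases}
\qquad
\mathbb{E}[M_n] = 1 \text{ for every } n.
\]

Almost surely a zero eventually occurs, so Mn → 0 pointwise, yet E[ |Mn − 0| ] = 1 for all n; the family (Mn) is not uniformly integrable.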
The following are equivalent:
- (Nt)t > 0 is uniformly integrable, i.e.
  limC→+∞ supt>0 E[ |Nt| 1{|Nt| > C} ] = 0;
- there exists an integrable random variable N ∈ L1(Ω, P; R) such that Nt → N as t → +∞ both P-almost surely and in L1(Ω, P; R), i.e.
  E[ |Nt − N| ] → 0 as t → +∞.
Corollary: convergence theorem for continuous martingales
Let M : [0, +∞) × Ω → R be a continuous martingale such that
supt>0 E[ |Mt|p ] < +∞
for some p > 1. Then there exists a random variable M ∈ Lp(Ω, P; R) such that Mt → M as t → +∞ both P-almost surely and in Lp(Ω, P; R).
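The assumption p > 1 cannot be relaxed to boundedness in L1. A standard example (added here as an illustration, with B a standard Brownian motion) is the exponential martingale

\[
M_t = \exp\!\bigl(B_t - \tfrac{t}{2}\bigr), \qquad
\mathbb{E}[M_t] = 1, \qquad
\mathbb{E}[M_t^{\,p}] = e^{p(p-1)t/2} \longrightarrow \infty \quad (p > 1),
\]

which converges to 0 almost surely (because Bt − t/2 → −∞), but not in L1, since E[Mt] = 1 for all t.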
Discrete-time results
Similar results can be obtained for discrete-time supermartingales and submartingales, the obvious difference being that no continuity assumptions are required. For example, the result above becomes
Let M : N × Ω → R be a discrete-time martingale such that
supk E[ |Mk|p ] < +∞
for some p > 1. Then there exists a random variable M ∈ Lp(Ω, P; R) such that Mk → M as k → +∞ both P-almost surely and in Lp(Ω, P; R).
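As a concrete illustration (an example added here, not from the original text), the randomly signed harmonic series satisfies the hypothesis with p = 2:

\[
M_k = \sum_{i=1}^{k} \frac{\xi_i}{i}, \qquad
\xi_1, \xi_2, \ldots \text{ i.i.d. with } \mathbb{P}(\xi_i = \pm 1) = \tfrac{1}{2}, \qquad
\sup_k \mathbb{E}[M_k^2] = \sum_{i=1}^{\infty} \frac{1}{i^2} = \frac{\pi^2}{6} < \infty,
\]

so Mk converges almost surely and in L2 to a finite random variable.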
Convergence of conditional expectations: Lévy's zero–one law
Doob's martingale convergence theorems imply that conditional expectations also have a convergence property.
Let (Ω, F, P) be a probability space and let X be a random variable in L1. Let F∗ = (Fk)k∈N be any filtration of F, and define F∞ to be the minimal σ-algebra generated by (Fk)k∈N. Then
E[X | Fk] → E[X | F∞] as k → +∞
both P-almost surely and in L1.
This result is usually called Lévy's zero–one law or Lévy's upward theorem. The reason for the name is that if A is an event in F∞, then the theorem says that P(A | Fk) → 1A almost surely, i.e., the limit of the probabilities is 0 or 1. In plain language, if we are learning gradually all the information that determines the outcome of an event, then we will become gradually certain what the outcome will be. This sounds almost like a tautology, but the result is still non-trivial. For instance, it easily implies Kolmogorov's zero–one law, since it says that for any tail event A, we must have 1A = P(A) almost surely, hence P(A) ∈ {0, 1}.
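The deduction of Kolmogorov's zero–one law can be displayed in one line (a sketch under the standard assumptions of that law: the Fk are generated by independent random variables and A is a tail event, so that A ∈ F∞ and A is independent of each Fk):

\[
P(A) = P(A \mid \mathcal{F}_k) \xrightarrow[k \to \infty]{} \mathbf{1}_A
\quad \text{almost surely}, \qquad \text{hence } P(A) \in \{0, 1\}.
\]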
Similarly, we have Lévy's downward theorem:
Let (Ω, F, P) be a probability space and let X be a random variable in L1. Let (Fk)k∈N be any decreasing sequence of sub-σ-algebras of F, and define F∞ to be their intersection, F∞ = ∩k∈N Fk. Then
E[X | Fk] → E[X | F∞] as k → +∞
both P-almost surely and in L1.
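A well-known application (sketched here for illustration; the choice of filtration is the one used in standard textbook proofs, not something specified in the original article) is a proof of the strong law of large numbers. For i.i.d. integrable X1, X2, … with partial sums Sk, take Fk = σ(Sk, Xk+1, Xk+2, …); by symmetry, E[X1 | Fk] = Sk / k, so the downward theorem yields

\[
\frac{S_k}{k} = \mathbb{E}[X_1 \mid \mathcal{F}_k]
\xrightarrow[k \to \infty]{} \mathbb{E}[X_1 \mid \mathcal{F}_\infty]
\quad \text{almost surely and in } L^1,
\]

and the limit equals E[X1] because F∞ is contained in the exchangeable σ-algebra, which is trivial by the Hewitt–Savage zero–one law.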
Doob's upcrossing inequality
The following result, called Doob's upcrossing inequality or, sometimes, Doob's upcrossing lemma, is used in proving Doob's martingale convergence theorems.[2]
Hypothesis.
Let n be a natural number. Let (Xk), 1 ≤ k ≤ n, be a martingale with respect to a filtration (Fk), 1 ≤ k ≤ n. Let a and b be two real numbers with a < b.
Define the random variables Um[a, b], 1 ≤ m ≤ n, as follows: Um[a, b] = u if and only if u is the largest integer such that there exist integers s1 < t1 < s2 < t2 < … < su < tu satisfying 1 ≤ s1 and tu ≤ m and such that, for each pair (si, ti), the inequalities Xsi ≤ a and Xti ≥ b are satisfied. Each Um[a, b] is called the number of upcrossings of the interval [a, b] by the martingale (Xk), 1 ≤ k ≤ m.
Conclusion.
(b − a) E[ Un[a, b] ] ≤ E[ (Xn − a)− ],
where (Xn − a)− = max(a − Xn, 0) denotes the negative part of Xn − a.
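To indicate how the lemma feeds into the convergence theorems (a proof sketch added here, not part of the original statement): if a supermartingale (Xn) satisfies supn E[Xn−] < ∞, then for any rationals a < b the expected total number of upcrossings of [a, b] is finite,

\[
\mathbb{E}\bigl[U_\infty[a, b]\bigr] \le \frac{\sup_n \mathbb{E}[(X_n - a)^-]}{b - a} < \infty,
\]

so U∞[a, b] < ∞ almost surely for every rational pair a < b. Off a single null set, lim inf Xn and lim sup Xn therefore cannot straddle any rational interval, so the limit of Xn exists, and Fatou's lemma shows it is finite almost surely.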
See also
- Backwards martingale convergence theorem[5]
References
- ^ Doob, J. L. (1953). Stochastic Processes. New York: Wiley.
- ^ a b "Martingale Convergence Theorem" (PDF). Massachusetts Institute of Tecnnology, 6.265/15.070J Lecture 11-Additional Material, Advanced Stochastic Processes, Fall 2013, 10/9/2013.
- ^ Bobrowski, Adam (2005). Functional Analysis for Probability and Stochastic Processes: An Introduction. Cambridge University Press. pp. 113–114. ISBN 9781139443883.
- ^ Gushchin, A. A. (2014). "On pathwise counterparts of Doob's maximal inequalities". Proceedings of the Steklov Institute of Mathematics. 287 (287): 118–121. arXiv:1410.8264. doi:10.1134/S0081543814080070.
- ^ Doob, Joseph L. (2012). Measure theory. Graduate Texts in Mathematics, Vol. 143. Springer. p. 197. ISBN 9781461208778.
- Øksendal, Bernt K. (2003). Stochastic Differential Equations: An Introduction with Applications (Sixth ed.). Berlin: Springer. ISBN 3-540-04758-1. (See Appendix C)
- Durrett, Rick (1996). Probability: Theory and Examples (Second ed.). Duxbury Press. ISBN 978-0-534-24318-0; Durrett, Rick (2010). Probability: Theory and Examples (Fourth ed.). ISBN 9781139491136.