Doob decomposition theorem

From Wikipedia, the free encyclopedia

In the theory of stochastic processes in discrete time, a part of the mathematical theory of probability, the Doob decomposition theorem gives a unique decomposition of every adapted and integrable stochastic process as the sum of a martingale and a predictable process (or "drift") starting at zero. The theorem was proved by and is named for Joseph L. Doob.[1]

The analogous theorem in the continuous-time case is the Doob–Meyer decomposition theorem.

Statement of the theorem

Let (Ω, F, ℙ) be a probability space, I = {0, 1, 2, . . . , N} with N ∈ ℕ or I = ℕ0 a finite or an infinite index set, (Fn)n∈I a filtration of F, and X = (Xn)n∈I an adapted stochastic process with E[|Xn|] < ∞ for all n ∈ I. Then there exist a martingale M = (Mn)n∈I and an integrable predictable process A = (An)n∈I starting with A0 = 0 such that Xn = Mn + An for every n ∈ I. Here predictable means that An is Fn−1-measurable for every n ∈ I \ {0}. This decomposition is almost surely unique.[2][3][4]


Corollary

A real-valued stochastic process X is a submartingale if and only if it has a Doob decomposition into a martingale M and an integrable predictable process A that is almost surely increasing.[5] It is a supermartingale if and only if A is almost surely decreasing.


Remark

The theorem holds word for word also for stochastic processes X taking values in the d-dimensional Euclidean space ℝd or the complex vector space ℂd. This follows from the one-dimensional version by considering the components individually.


Proof of the theorem


Using conditional expectations, define the processes A and M, for every n ∈ I, explicitly by

A_n=\sum_{k=1}^{n}\bigl(\mathbb{E}[X_k\,|\,\mathcal{F}_{k-1}]-X_{k-1}\bigr),    (1)

M_n=X_0+\sum_{k=1}^{n}\bigl(X_k-\mathbb{E}[X_k\,|\,\mathcal{F}_{k-1}]\bigr),    (2)
where the sums for n = 0 are empty and defined as zero. Here A adds up the expected increments of X, and M adds up the surprises, i.e., the part of every Xk that is not known one time step before. Due to these definitions, An+1 (if n + 1 ∈ I) and Mn are Fn-measurable because the process X is adapted, E[|An|] < ∞ and E[|Mn|] < ∞ because the process X is integrable, and the decomposition Xn = Mn + An is valid for every n ∈ I. The martingale property

\mathbb{E}[M_n-M_{n-1}\,|\,\mathcal{F}_{n-1}]=0    a.s.

also follows from the above definition (2), for every n ∈ I \ {0}.
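
Writing out this verification explicitly: by definition (2), the increment is Mn − Mn−1 = Xn − E[Xn | Fn−1], and since E[Xn | Fn−1] is Fn−1-measurable, it can be pulled out of the outer conditional expectation, giving

\mathbb{E}[M_n-M_{n-1}\,|\,\mathcal{F}_{n-1}]=\mathbb{E}\bigl[X_n-\mathbb{E}[X_n\,|\,\mathcal{F}_{n-1}]\,\big|\,\mathcal{F}_{n-1}\bigr]=\mathbb{E}[X_n\,|\,\mathcal{F}_{n-1}]-\mathbb{E}[X_n\,|\,\mathcal{F}_{n-1}]=0    a.s.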


To prove uniqueness, let X = M' + A' be another decomposition with the same properties. Then the process Y := M − M' = A' − A is a martingale, implying that

\mathbb{E}[Y_n\,|\,\mathcal{F}_{n-1}]=Y_{n-1}    a.s.,

and also predictable, implying that

\mathbb{E}[Y_n\,|\,\mathcal{F}_{n-1}]= Y_n    a.s.

for every n ∈ I \ {0}. Since Y0 = A'0 − A0 = 0 by the convention about the starting point of the predictable processes, this implies iteratively that Yn = 0 almost surely for all n ∈ I, hence the decomposition is almost surely unique.

Proof of the corollary

If X is a submartingale, then

\mathbb{E}[X_k\,|\,\mathcal{F}_{k-1}]\ge X_{k-1}    a.s.

for all k ∈ I \ {0}, which is equivalent to saying that every term in definition (1) of A is almost surely non-negative, hence A is almost surely increasing. The equivalence for supermartingales is proved similarly.


Example

Let X = (Xn)n∈ℕ0 be a sequence of independent, integrable, real-valued random variables. It is adapted to the filtration generated by the sequence, i.e. Fn = σ(X0, . . . , Xn) for all n ∈ ℕ0. By (1) and (2), the Doob decomposition is given by

A_n=\sum_{k=1}^{n}\bigl(\mathbb{E}[X_k]-X_{k-1}\bigr),\quad n\in\mathbb{N}_0,


M_n=X_0+\sum_{k=1}^{n}\bigl(X_k-\mathbb{E}[X_k]\bigr),\quad n\in\mathbb{N}_0.

If the random variables of the original sequence X have mean zero, this simplifies to

A_n=-\sum_{k=0}^{n-1}X_k    and    M_n=\sum_{k=0}^{n}X_k,\quad n\in\mathbb{N}_0,

hence both processes are (possibly time-inhomogeneous) random walks. If the sequence X = (Xn)n∈ℕ0 consists of symmetric random variables taking the values +1 and −1, then X is bounded, but the martingale M and the predictable process A are unbounded simple random walks (and not uniformly integrable), and Doob's optional stopping theorem might not be applicable to the martingale M unless the stopping time has a finite expectation.
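
As a quick sanity check of the mean-zero formulas above, here is a minimal Python sketch (all names are illustrative); it draws a symmetric ±1 sequence and verifies the decomposition Xn = Mn + An pathwise:

```python
import random

random.seed(42)  # reproducible illustration
N = 10
# Symmetric +/-1 random variables X_0, ..., X_N (independent, mean zero).
X = [random.choice([-1, 1]) for _ in range(N + 1)]

# Mean-zero closed forms: A_n = -(X_0 + ... + X_{n-1}),  M_n = X_0 + ... + X_n.
A = [-sum(X[:n]) for n in range(N + 1)]
M = [sum(X[:n + 1]) for n in range(N + 1)]

assert A[0] == 0                                       # predictable part starts at zero
assert all(X[n] == M[n] + A[n] for n in range(N + 1))  # Doob decomposition pathwise
```

The same check works for any integrable mean-zero sequence; only the sampling line changes.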


Application

In mathematical finance, the Doob decomposition theorem can be used to determine the largest optimal exercise time of an American option.[6][7] Let X = (X0, X1, . . . , XN) denote the non-negative, discounted payoffs of an American option in an N-period financial market model, adapted to a filtration (F0, F1, . . . , FN), and let ℚ denote an equivalent martingale measure. Let U = (U0, U1, . . . , UN) denote the Snell envelope of X with respect to ℚ. The Snell envelope is the smallest ℚ-supermartingale dominating X[8] and in a complete financial market it represents the minimal amount of capital necessary to hedge the American option up to maturity.[9] Let U = M + A denote the Doob decomposition with respect to ℚ of the Snell envelope U into a martingale M = (M0, M1, . . . , MN) and a decreasing predictable process A = (A0, A1, . . . , AN) with A0 = 0. Then the largest stopping time to exercise the American option in an optimal way[10][11] is

\tau_{\text{max}}:=\begin{cases}N&\text{if }A_N=0,\\\min\{n\in\{0,\dots,N-1\}\mid A_{n+1}<0\}&\text{if } A_N<0.\end{cases}

Since A is predictable, the event {τmax = n} = {An = 0, An+1 < 0} is in Fn for every n ∈ {0, 1, . . . , N − 1}, hence τmax is indeed a stopping time. It gives the last moment before the discounted value of the American option drops in expectation; up to time τmax the discounted value process U is a martingale with respect to ℚ.
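
To make the construction concrete, the following Python sketch computes the Snell envelope and τmax in a toy three-period binomial model; the market parameters, the American-put payoff, and all names are illustrative assumptions, not taken from the references above:

```python
# Toy binomial model (parameters are illustrative assumptions).
N = 3                      # number of periods
u, d, r = 1.2, 0.8, 0.1    # up/down factors and interest rate
q = (1 + r - d) / (u - d)  # risk-neutral up-probability under the measure Q
K, S0 = 100.0, 100.0       # strike and initial stock price

def S(n, j):               # stock price after j up-moves in n steps
    return S0 * u**j * d**(n - j)

def X(n, j):               # discounted American-put payoff
    return max(K - S(n, j), 0.0) / (1 + r)**n

# Snell envelope by backward induction: U_N = X_N, U_n = max(X_n, E_Q[U_{n+1} | F_n]).
U = {(N, j): X(N, j) for j in range(N + 1)}
for n in range(N - 1, -1, -1):
    for j in range(n + 1):
        cont = q * U[(n + 1, j + 1)] + (1 - q) * U[(n + 1, j)]
        U[(n, j)] = max(X(n, j), cont)

def tau_max(path):
    """Largest optimal exercise time along a path (True = up-move, False = down-move),
    via the Doob decomposition U = M + A: A_{n+1} - A_n = E_Q[U_{n+1} | F_n] - U_n <= 0."""
    n, j, A = 0, 0, 0.0
    for step in path:
        cont = q * U[(n + 1, j + 1)] + (1 - q) * U[(n + 1, j)]
        A_next = A + (cont - U[(n, j)])
        if A_next < 0:     # A_{n+1} < 0: the envelope drops in expectation, exercise now
            return n
        n, j, A = n + 1, j + (1 if step else 0), A_next
    return N               # A_N = 0 on this path

print(tau_max([False, False, False]))  # early exercise on the all-down path
print(tau_max([True, True, True]))     # hold to maturity on the all-up path
```

With these parameters, the drift A turns negative after one step on the all-down path, so the option should be exercised at time 1 there, while on the all-up path A stays at zero and τmax = N.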


Generalisation

The Doob decomposition theorem can be generalized from probability spaces to σ-finite measure spaces.[12]


Notes

  1. ^ Doob (1953), see (Doob 1990, pp. 296–298)
  2. ^ Durrett (2005)
  3. ^ (Föllmer & Schied 2011, Proposition 6.1)
  4. ^ (Williams 1991, Section 12.11, part (a) of the Theorem)
  5. ^ (Williams 1991, Section 12.11, part (b) of the Theorem)
  6. ^ (Lamberton & Lapeyre 2008, Chapter 2: Optimal stopping problem and American options)
  7. ^ (Föllmer & Schied 2011, Chapter 6: American contingent claims)
  8. ^ (Föllmer & Schied 2011, Proposition 6.10)
  9. ^ (Föllmer & Schied 2011, Theorem 6.11)
  10. ^ (Lamberton & Lapeyre 2008, Proposition 2.3.2)
  11. ^ (Föllmer & Schied 2011, Theorem 6.21)
  12. ^ (Schilling 2005, Problem 23.11)