# Doob martingale

In the mathematical theory of probability, a Doob martingale (named after Joseph L. Doob; also known as a Lévy martingale) is a stochastic process that approximates a given random variable and has the martingale property with respect to a given filtration. It may be thought of as the evolving sequence of best approximations to the random variable based on the information accumulated up to a given time.

When analyzing sums, random walks, or other additive functions of independent random variables, one can often apply the central limit theorem, the law of large numbers, Chernoff's inequality, Chebyshev's inequality, or similar tools. When analyzing similar objects where the differences are not independent, the main tools are martingales and Azuma's inequality.

## Definition

Let $Y$ be any random variable with $\mathbb {E} [|Y|]<\infty$. Suppose $\left\{{\mathcal {F}}_{0},{\mathcal {F}}_{1},\dots \right\}$ is a filtration, i.e. ${\mathcal {F}}_{s}\subset {\mathcal {F}}_{t}$ whenever $s\leq t$. Define

$Z_{t}=\mathbb {E} [Y\mid {\mathcal {F}}_{t}],$ then $\left\{Z_{0},Z_{1},\dots \right\}$ is a martingale, called the Doob martingale of $Y$, with respect to the filtration $\left\{{\mathcal {F}}_{0},{\mathcal {F}}_{1},\dots \right\}$.

To see this, note that

• $\mathbb {E} [|Z_{t}|]=\mathbb {E} [|\mathbb {E} [Y\mid {\mathcal {F}}_{t}]|]\leq \mathbb {E} [\mathbb {E} [|Y|\mid {\mathcal {F}}_{t}]]=\mathbb {E} [|Y|]<\infty$, by the conditional Jensen inequality and the tower property;
• $\mathbb {E} [Z_{t}\mid {\mathcal {F}}_{t-1}]=\mathbb {E} [\mathbb {E} [Y\mid {\mathcal {F}}_{t}]\mid {\mathcal {F}}_{t-1}]=\mathbb {E} [Y\mid {\mathcal {F}}_{t-1}]=Z_{t-1}$, by the tower property, as ${\mathcal {F}}_{t-1}\subset {\mathcal {F}}_{t}$.
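As a concrete illustration (a sketch, not part of the original text): suppose $Y$ is the sum of $n$ independent fair coin flips $X_{1},\dots ,X_{n}$. Conditioning on the first $t$ flips, each remaining flip contributes $1/2$ in expectation, so $Z_{t}=S_{t}+(n-t)/2$ in closed form, where $S_{t}$ is the partial sum. The hypothetical helper below computes this path and exhibits $Z_{0}=\mathbb {E} [Y]$ and $Z_{n}=Y$:

```python
import random

def doob_martingale_path(flips, p=0.5):
    """Z_t = E[Y | X_1..X_t] for Y = sum of n Bernoulli(p) flips.

    Given the first t flips, the remaining n - t flips each contribute
    p in expectation, so Z_t = S_t + (n - t) * p.
    """
    n = len(flips)
    path = []
    s = 0  # running partial sum S_t
    for t in range(n + 1):
        if t > 0:
            s += flips[t - 1]
        path.append(s + (n - t) * p)
    return path

random.seed(0)
flips = [random.randint(0, 1) for _ in range(10)]
Z = doob_martingale_path(flips)
# Z[0] is the unconditional mean n*p; Z[-1] equals Y = sum(flips).
```

The path interpolates between the a-priori estimate $\mathbb {E} [Y]$ at $t=0$ and the fully revealed value $Y$ at $t=n$, which is exactly the "evolving best approximation" picture above.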

In particular, for any sequence of random variables $\left\{X_{1},X_{2},\dots ,X_{n}\right\}$ on probability space $(\Omega ,{\mathcal {F}},{\text{P}})$ and function $f$ such that $\mathbb {E} [|f(X_{1},X_{2},\dots ,X_{n})|]<\infty$ , one could choose

$Y:=f(X_{1},X_{2},\dots ,X_{n})$ and filtration $\left\{{\mathcal {F}}_{0},{\mathcal {F}}_{1},\dots \right\}$ such that

${\begin{aligned}{\mathcal {F}}_{0}&:=\left\{\varnothing ,\Omega \right\},\\{\mathcal {F}}_{t}&:=\sigma (X_{1},X_{2},\dots ,X_{t}),\quad \forall t\geq 1,\end{aligned}}$ i.e. ${\mathcal {F}}_{t}$ is the $\sigma$-algebra generated by $X_{1},X_{2},\dots ,X_{t}$. Then, by the definition of a Doob martingale, the process $\left\{Z_{0},Z_{1},\dots \right\}$ where

${\begin{aligned}Z_{0}&:=\mathbb {E} [f(X_{1},X_{2},\dots ,X_{n})\mid {\mathcal {F}}_{0}]=\mathbb {E} [f(X_{1},X_{2},\dots ,X_{n})],\\Z_{t}&:=\mathbb {E} [f(X_{1},X_{2},\dots ,X_{n})\mid {\mathcal {F}}_{t}]=\mathbb {E} [f(X_{1},X_{2},\dots ,X_{n})\mid X_{1},X_{2},\dots ,X_{t}],\quad \forall t\geq 1,\end{aligned}}$ forms a Doob martingale. Note that $Z_{n}=f(X_{1},X_{2},\dots ,X_{n})$. This martingale can be used to prove McDiarmid's inequality.
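For small discrete examples this construction can be checked exhaustively. The sketch below (an illustration under assumed uniform binary inputs; `doob_Z` and the choice of $f$ are hypothetical) computes $Z_{t}$ by averaging $f$ over all completions of the observed prefix, so that $Z_{0}=\mathbb {E} [f]$ and $Z_{n}=f(X_{1},\dots ,X_{n})$:

```python
from itertools import product

def doob_Z(f, prefix, n, values=(0, 1)):
    """Z_t = E[f(X_1..X_n) | X_1..X_t = prefix] for i.i.d. uniform X_i.

    Computes the conditional expectation exactly by averaging f over
    every possible completion of the observed prefix.
    """
    t = len(prefix)
    completions = list(product(values, repeat=n - t))
    total = sum(f(list(prefix) + list(rest)) for rest in completions)
    return total / len(completions)

n = 4
def f(xs):
    return max(xs) * sum(xs)  # any function with E[|f|] < infinity works

x = [1, 0, 1, 1]  # one realisation of X_1, ..., X_n
Z = [doob_Z(f, x[:t], n) for t in range(n + 1)]
# Z[0] = E[f]; Z[n] = f(x); and averaging Z_t over the next coordinate
# recovers Z_{t-1}, i.e. E[Z_t | F_{t-1}] = Z_{t-1}.
```

Averaging the two possible one-step continuations of any prefix reproduces the previous value exactly, which is the martingale property verified above.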

## McDiarmid's inequality

The Doob martingale was introduced by Joseph L. Doob in 1940. It can be used to establish concentration inequalities such as McDiarmid's inequality, which applies to functions that satisfy a bounded differences property (defined below) when they are evaluated on independent random arguments.

A function $f:{\mathcal {X}}_{1}\times {\mathcal {X}}_{2}\times \cdots \times {\mathcal {X}}_{n}\rightarrow \mathbb {R}$ satisfies the bounded differences property if substituting the value of the $i$ th coordinate $x_{i}$ changes the value of $f$ by at most $c_{i}$ . More formally, if there are constants $c_{1},c_{2},\dots ,c_{n}$ such that for all $i\in [n]$ , and all $x_{1}\in {\mathcal {X}}_{1},\,x_{2}\in {\mathcal {X}}_{2},\,\ldots ,\,x_{n}\in {\mathcal {X}}_{n}$ ,

$\sup _{x_{i}'\in {\mathcal {X}}_{i}}\left|f(x_{1},\dots ,x_{i-1},x_{i},x_{i+1},\ldots ,x_{n})-f(x_{1},\dots ,x_{i-1},x_{i}',x_{i+1},\ldots ,x_{n})\right|\leq c_{i}.$

McDiarmid's Inequality — Let $f:{\mathcal {X}}_{1}\times {\mathcal {X}}_{2}\times \cdots \times {\mathcal {X}}_{n}\rightarrow \mathbb {R}$ satisfy the bounded differences property with bounds $c_{1},c_{2},\dots ,c_{n}$.

Consider independent random variables $X_{1},X_{2},\dots ,X_{n}$ where $X_{i}\in {\mathcal {X}}_{i}$ for all $i$ . Then, for any $\varepsilon >0$ ,

${\text{P}}\left(f(X_{1},X_{2},\ldots ,X_{n})-\mathbb {E} [f(X_{1},X_{2},\ldots ,X_{n})]\geq \varepsilon \right)\leq \exp \left(-{\frac {2\varepsilon ^{2}}{\sum _{i=1}^{n}c_{i}^{2}}}\right),$

${\text{P}}(f(X_{1},X_{2},\ldots ,X_{n})-\mathbb {E} [f(X_{1},X_{2},\ldots ,X_{n})]\leq -\varepsilon )\leq \exp \left(-{\frac {2\varepsilon ^{2}}{\sum _{i=1}^{n}c_{i}^{2}}}\right),$

and as an immediate consequence,

${\text{P}}(|f(X_{1},X_{2},\ldots ,X_{n})-\mathbb {E} [f(X_{1},X_{2},\ldots ,X_{n})]|\geq \varepsilon )\leq 2\exp \left(-{\frac {2\varepsilon ^{2}}{\sum _{i=1}^{n}c_{i}^{2}}}\right).$ 
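To see the bound in action, here is a small Monte Carlo sketch (a hypothetical example, not from the original text): take $f$ to be the empirical mean of $n$ independent fair coin flips, so each $c_{i}=1/n$, $\sum _{i}c_{i}^{2}=1/n$, and the two-sided bound becomes $2\exp(-2n\varepsilon ^{2})$.

```python
import math
import random

def mcdiarmid_bound(eps, c):
    """Two-sided McDiarmid bound: 2 * exp(-2 eps^2 / sum_i c_i^2)."""
    return 2.0 * math.exp(-2.0 * eps ** 2 / sum(ci ** 2 for ci in c))

random.seed(1)
n, trials, eps = 100, 10_000, 0.1
c = [1.0 / n] * n  # changing one flip moves the empirical mean by 1/n
mean_f = 0.5       # E[f] for f = empirical mean of fair coin flips
hits = sum(
    abs(sum(random.randint(0, 1) for _ in range(n)) / n - mean_f) >= eps
    for _ in range(trials)
)
empirical = hits / trials        # observed two-sided tail probability
bound = mcdiarmid_bound(eps, c)  # 2 * exp(-2), roughly 0.271
```

The observed tail frequency stays below the bound; for this particular $f$ the bound is loose, as McDiarmid's inequality only uses the worst-case coordinate sensitivities $c_{i}$.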