# Random measure

In probability theory, a random measure is a measure-valued random element.[1][2] Random measures are used, for example, in the theory of random processes, where they give rise to many important point processes such as Poisson point processes and Cox processes.

## Definition

Random measures can be defined as transition kernels or as random elements. Both definitions are equivalent. For the definitions, let ${\displaystyle E}$ be a separable complete metric space and let ${\displaystyle {\mathcal {E}}}$ be its Borel ${\displaystyle \sigma }$-algebra. (The most common example of a separable complete metric space is ${\displaystyle \mathbb {R} ^{n}}$.)

### As a transition kernel

A random measure ${\displaystyle \zeta }$ is an (almost surely) locally finite transition kernel from an (abstract) probability space ${\displaystyle (\Omega ,{\mathcal {A}},P)}$ to ${\displaystyle (E,{\mathcal {E}})}$.[3]

Being a transition kernel means that

• For any fixed ${\displaystyle B\in {\mathcal {E}}}$, the mapping
${\displaystyle \omega \mapsto \zeta (\omega ,B)}$
is measurable from ${\displaystyle (\Omega ,{\mathcal {A}})}$ to ${\displaystyle (\mathbb {R} ,{\mathcal {B}}(\mathbb {R} ))}$
• For every fixed ${\displaystyle \omega \in \Omega }$, the mapping
${\displaystyle B\mapsto \zeta (\omega ,B)\quad (B\in {\mathcal {E}})}$
is a measure on ${\displaystyle (E,{\mathcal {E}})}$

Being locally finite means that the measures

${\displaystyle B\mapsto \zeta (\omega ,B)}$

satisfy ${\displaystyle \zeta (\omega ,{\tilde {B}})<\infty }$ for all bounded measurable sets ${\displaystyle {\tilde {B}}\in {\mathcal {E}}}$ and for all ${\displaystyle \omega \in \Omega }$ except some ${\displaystyle P}$-null set.
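The two defining properties can be illustrated with a toy example: a randomly scaled Lebesgue measure ${\displaystyle \zeta (\omega ,B)=X(\omega )\cdot \mathrm {Leb} (B)}$ on the real line. The sketch below is illustrative only; the sample point ${\displaystyle \omega }$ is modelled as a seed, and all names are assumptions of the sketch, not part of the formal definition.

```python
import random

def zeta(omega, interval):
    """Toy random measure zeta(omega, B) = X(omega) * Leb(B) on the real
    line, where X is an exponential random variable and B = [a, b].
    The sample point omega is modelled as a seed."""
    rng = random.Random(omega)
    scale = rng.expovariate(1.0)     # the random factor X(omega) >= 0
    a, b = interval
    return scale * max(b - a, 0.0)   # X(omega) * Leb([a, b])

# For fixed B, omega -> zeta(omega, B) is a random variable;
# for fixed omega, B -> zeta(omega, B) is a locally finite measure
# (additive over disjoint intervals, finite on bounded sets).
m_unit = zeta(42, (0.0, 1.0))
m_double = zeta(42, (0.0, 2.0))
```

Note that for a fixed seed the two calls draw the same scale, so the measure property (additivity over disjoint intervals) can be checked directly.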

### As a random element

Define

${\displaystyle {\tilde {\mathcal {M}}}:=\{\mu \mid \mu {\text{ is a measure on }}(E,{\mathcal {E}})\}}$

and the subset of locally finite measures by

${\displaystyle {\mathcal {M}}:=\{\mu \in {\tilde {\mathcal {M}}}\mid \mu ({\tilde {B}})<\infty {\text{ for all bounded measurable }}{\tilde {B}}\in {\mathcal {E}}\}}$

For all bounded measurable ${\displaystyle {\tilde {B}}}$, define the mappings

${\displaystyle I_{\tilde {B}}\colon \mu \mapsto \mu ({\tilde {B}})}$

from ${\displaystyle {\tilde {\mathcal {M}}}}$ to ${\displaystyle \mathbb {R} }$. Let ${\displaystyle {\tilde {\mathbb {M} }}}$ be the ${\displaystyle \sigma }$-algebra induced by the mappings ${\displaystyle I_{\tilde {B}}}$ on ${\displaystyle {\tilde {\mathcal {M}}}}$ and ${\displaystyle \mathbb {M} }$ the ${\displaystyle \sigma }$-algebra induced by the mappings ${\displaystyle I_{\tilde {B}}}$ on ${\displaystyle {\mathcal {M}}}$. Note that ${\displaystyle {\tilde {\mathbb {M} }}|_{\mathcal {M}}=\mathbb {M} }$.

A random measure is a random element from ${\displaystyle (\Omega ,{\mathcal {A}},P)}$ to ${\displaystyle ({\tilde {\mathcal {M}}},{\tilde {\mathbb {M} }})}$ that almost surely takes values in ${\displaystyle ({\mathcal {M}},\mathbb {M} )}$.[3][4][5]

## Basic properties

### Measurability of integrals

For a random measure ${\displaystyle \zeta }$, the integrals

${\displaystyle \int f(x)\zeta (\mathrm {d} x)}$

and ${\displaystyle \zeta (A):=\int \mathbf {1} _{A}(x)\zeta (\mathrm {d} x)}$

for positive ${\displaystyle {\mathcal {E}}}$-measurable ${\displaystyle f}$ are measurable, so they are random variables.
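As a sketch, when the random measure consists of point masses (as with the random counting measures discussed below), the integral reduces to a finite sum over the atoms. The function names here are illustrative.

```python
import math
import random

def integral(f, atoms):
    """Integral of f against mu = sum_n delta_{X_n}: the sum of f(X_n)."""
    return sum(f(x) for x in atoms)

rng = random.Random(0)
atoms = [rng.uniform(0.0, math.pi) for _ in range(5)]  # random locations X_n
value = integral(math.sin, atoms)
# `value` is a measurable function of the X_n, hence itself a random variable.
```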

### Uniqueness

The distribution of a random measure is uniquely determined by the distributions of

${\displaystyle \int f(x)\zeta (\mathrm {d} x)}$

for all continuous functions ${\displaystyle f}$ with compact support on ${\displaystyle E}$. For a fixed semiring ${\displaystyle {\mathcal {I}}\subset {\mathcal {E}}}$ that generates ${\displaystyle {\mathcal {E}}}$ in the sense that ${\displaystyle \sigma ({\mathcal {I}})={\mathcal {E}}}$, the distribution of a random measure is also uniquely determined by the integral over all positive simple ${\displaystyle {\mathcal {I}}}$-measurable functions ${\displaystyle f}$.[6]

### Decomposition

In general, a measure may be decomposed as

${\displaystyle \mu =\mu _{d}+\mu _{a}=\mu _{d}+\sum _{n=1}^{N}\kappa _{n}\delta _{X_{n}},}$

Here ${\displaystyle \mu _{d}}$ is a diffuse measure without atoms, while ${\displaystyle \mu _{a}}$ is a purely atomic measure.
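A minimal numerical sketch of this decomposition, representing the diffuse part by a constant density on an interval and the atomic part by a finite list of weighted atoms ${\displaystyle (X_{n},\kappa _{n})}$. The representation and function names are assumptions of the sketch.

```python
def measure_of_interval(density, atoms, a, b):
    """mu([a, b]) = mu_d([a, b]) + mu_a([a, b]), where mu_d has constant
    density `density` and mu_a = sum_n kappa_n * delta_{X_n} is given as
    a list of (location, weight) pairs."""
    diffuse = density * max(b - a, 0.0)                # mu_d([a, b])
    atomic = sum(w for x, w in atoms if a <= x <= b)   # mu_a([a, b])
    return diffuse + atomic

atoms = [(0.25, 2.0), (0.5, 1.0)]                 # kappa_n at locations X_n
total = measure_of_interval(1.0, atoms, 0.0, 1.0)  # diffuse 1.0 + atomic 3.0
```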

## Random counting measure

A random measure of the form:

${\displaystyle \mu =\sum _{n=1}^{N}\delta _{X_{n}},}$

where ${\displaystyle \delta }$ is the Dirac measure and the ${\displaystyle X_{n}}$ are random variables, is called a point process[1][2] or random counting measure. This random measure describes a set of ${\displaystyle N}$ particles whose locations are given by the (generally vector-valued) random variables ${\displaystyle X_{n}}$. The diffuse component ${\displaystyle \mu _{d}}$ is null for a counting measure.
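As a sketch, a homogeneous Poisson point process on an interval can be simulated by drawing ${\displaystyle N}$ from a Poisson distribution and placing the atoms uniformly. The Poisson draw below uses Knuth's multiplication algorithm (adequate for small means); all function names are illustrative.

```python
import math
import random

def sample_poisson(rng, mean):
    """Knuth's algorithm for a Poisson(mean) draw (fine for small means)."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def poisson_point_process(rate, a, b, seed=0):
    """Atom locations X_1, ..., X_N of a homogeneous Poisson random
    counting measure mu = sum_n delta_{X_n} on [a, b]."""
    rng = random.Random(seed)
    n = sample_poisson(rng, rate * (b - a))        # N ~ Poisson(rate * length)
    return [rng.uniform(a, b) for _ in range(n)]   # uniform given N

pts = poisson_point_process(rate=3.0, a=0.0, b=2.0, seed=1)
# mu(B) is then simply the number of atoms falling in B.
count_in = lambda points, a, b: sum(1 for x in points if a <= x <= b)
```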

In the formal notation above, a random counting measure is a map from a probability space to the measurable space (${\displaystyle N_{X}}$, ${\displaystyle {\mathfrak {B}}(N_{X})}$). Here ${\displaystyle N_{X}}$ is the space of all boundedly finite integer-valued measures ${\displaystyle N\in M_{X}}$ (called counting measures).

The definitions of expectation measure, Laplace functional, moment measures and stationarity for random measures follow those of point processes. Random measures are useful in the description and analysis of Monte Carlo methods, such as Monte Carlo numerical quadrature and particle filters.[7]
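For example, plain Monte Carlo quadrature integrates ${\displaystyle f}$ against the normalized empirical measure ${\displaystyle {\tfrac {1}{n}}\sum _{k=1}^{n}\delta _{X_{k}}}$ of i.i.d. uniform samples, which converges to the Lebesgue integral of ${\displaystyle f}$. A minimal sketch with illustrative names:

```python
import random

def mc_quadrature(f, n, seed=0):
    """Monte Carlo quadrature on [0, 1]: the integral of f against the
    normalized empirical random measure (1/n) sum_k delta_{X_k}, with
    X_k i.i.d. uniform on [0, 1]."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

est = mc_quadrature(lambda x: x * x, 100_000, seed=42)
# est approximates the true integral 1/3 up to O(n^{-1/2}) error.
```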