# Poisson point process


In probability, statistics and related fields, a Poisson point process or Poisson process (also called a Poisson random measure, Poisson random point field or Poisson point field) is a type of random mathematical object that consists of points randomly located on a mathematical space.[1] The process has convenient mathematical properties,[2] which has led to it being frequently defined in Euclidean space and used as a mathematical model for seemingly random processes in numerous disciplines such as astronomy,[3] biology,[4] ecology,[5] geology,[6] physics,[7] image processing,[8] and telecommunications.[9][10]

The Poisson point process is often defined on the real line. For example, in queueing theory [11] it is used to model random events, such as the arrival of customers at a store or phone calls at an exchange, distributed in time. In the plane, the point process, also known as a spatial Poisson process,[12] may represent scattered objects such as transmitters in a wireless network,[9][13][14][15] particles colliding into a detector, or trees in a forest.[1] In this setting, the process is often used in mathematical models and in the related fields of spatial point processes,[12][16] stochastic geometry,[1] spatial statistics [12][17] and continuum percolation theory.[18] In more abstract spaces, the Poisson point process serves as an object of mathematical study in its own right.[2]

In all settings, the Poisson point process has the property that each point is stochastically independent of all the other points in the process, which is why it is sometimes called a purely or completely random process.[16] Despite its wide use as a stochastic model of phenomena representable as points, the inherent nature of the process implies that it does not adequately describe phenomena in which there is sufficiently strong interaction between the points. This has sometimes led to the overuse of the point process in mathematical models,[1][2][13] and has inspired other point processes, some of which are constructed via the Poisson point process, that seek to capture this interaction.[1]

The process is named after the French mathematician Siméon Denis Poisson, despite Poisson never having studied the process.[16][19][20] The name arises from the fact that if a collection of random points in some space forms a Poisson process, then the number of points in a region of finite size follows a Poisson distribution. The process was discovered independently in several different settings.

The process is defined with a single non-negative mathematical object, which, depending on the context, may be a constant, an integrable function or, in more general settings, a Radon measure.[1][16] If this object is a constant, then the resulting process is called a homogeneous [2] or stationary [1] Poisson point process. Otherwise, the parameter depends on the location in the underlying space, which leads to the inhomogeneous or nonhomogeneous Poisson point process.[16] The word point is often omitted, but there are other Poisson processes of objects, which, instead of points, consist of more complicated mathematical objects such as lines and polygons, and such processes can be based on the Poisson point process.[2]

## History

### Poisson distribution

Despite its name, the Poisson point process was neither discovered nor studied by the French mathematician Siméon Denis Poisson; the name is cited as an example of Stigler's law.[19][20] The name stems from its inherent relation to the Poisson distribution, derived by Poisson as a limiting case of the binomial distribution.[21] This distribution describes the probability of the sum of ${\displaystyle \textstyle n}$ Bernoulli trials with probability ${\displaystyle \textstyle p}$, often likened to the number of heads (or tails) after ${\displaystyle \textstyle n}$ biased flips of a coin with the probability of a head (or tail) occurring being ${\displaystyle \textstyle p}$. For some positive constant ${\displaystyle \textstyle \Lambda >0}$, as ${\displaystyle \textstyle n}$ increases towards infinity and ${\displaystyle \textstyle p}$ decreases towards zero such that the product ${\displaystyle \textstyle np=\Lambda }$ is fixed, the Poisson distribution more closely approximates that of the binomial.[22]

Poisson derived the Poisson distribution, published in 1841, by examining the binomial distribution in the limit of ${\displaystyle \textstyle p}$ (to zero) and ${\displaystyle \textstyle n}$ (to infinity). It appears only once in all of Poisson's work,[23] and the result was not well known during his time,[16] even though over the following years a number of people would use the distribution without citing Poisson, including Philipp Ludwig von Seidel and Ernst Abbe.[16][19] The distribution would be studied years after Poisson, at the end of the 19th century and in a different setting, by Ladislaus Bortkiewicz, who did cite Poisson and used the distribution with real data to study the number of deaths from horse kicks in the Prussian army.[21][24]

### Discovery

There are a number of claims for early uses or discoveries of the Poisson point process.[19][20] It has been proposed that the earliest use of the Poisson point process was by John Michell in 1767, a decade before Poisson was born. Michell was interested in the probability of a star being within a certain region of another star under the assumption that the stars were "scattered by mere chance", and studied an example consisting of the six brightest stars in the Pleiades, without deriving the Poisson distribution. This work inspired Simon Newcomb to study the problem and to calculate the Poisson distribution as an approximation for the binomial distribution.[20]

At the beginning of the 20th century the Poisson point process would arise independently during the same period in three different situations.[19][22] In 1909 the Danish mathematician and engineer A.K. Erlang derived the Poisson distribution when developing a mathematical model for the number of incoming phone calls in a finite time interval. Erlang, not at the time aware of Poisson's earlier work, assumed that the numbers of phone calls arriving in different intervals of time were independent of each other, and then found the limiting case, which is effectively recasting the Poisson distribution as a limit of the binomial distribution.[19] In 1910 physicists Ernest Rutherford and Hans Geiger, after conducting an experiment in counting the number of alpha particles, published their results in which English mathematician Harry Bateman derived the Poisson probabilities as a solution to a family of differential equations, although Bateman acknowledged that the solutions had been previously solved by others.[16] This experimental work by Rutherford and Geiger partly inspired physicist Norman Campbell who in 1909 and 1910 published two key papers on thermionic noise, also known as shot noise, in vacuum tubes,[16][19] where it is believed he independently discovered and used the Poisson process.[22] In Campbell's work, he also outlined a form of Campbell's theorem,[19] a key result in the theory of point processes,[2][12][16] but Campbell credited the proof to the mathematician G. H. Hardy.[19] The three above discoveries and applications of the Poisson point process have motivated some to say that 1909 should be considered the discovery year of the Poisson point process.[19][22]

### Early applications

The years after 1909 led to a number of studies and applications of the Poisson point process; its early history is complex, however, owing to the process's varied applications in numerous fields by biologists, ecologists, engineers and others working in the physical sciences. The early results were published in different languages and in different settings, with no standard terminology and notation used.[19] For example, in 1922 Swedish chemist and Nobel Laureate Theodor Svedberg proposed a model in which a spatial Poisson point process is the underlying process in order to study how plants are distributed in plant communities.[25] A number of mathematicians started studying the process in the early 1930s, and important contributions were made by Andrey Kolmogorov, William Feller and Aleksandr Khinchin,[19] among others.[26] As an application, Kolmogorov used a spatial Poisson point process to model the formation of crystals in metals.[1] In the field of teletraffic engineering, where a lot of the early researchers were Danes, such as Erlang, and Swedes, mathematicians and statisticians studied and used Poisson and other point processes.[27]

### History of terms

The Swede Conny Palm in his 1943 dissertation studied the Poisson and other point processes in the one-dimensional setting by examining them in terms of the statistical or stochastic dependence between the points in time.[16][27] His work contains the first known recorded use of the term point process, as Punktprozesse in German.[16][20]

It is believed [19] that William Feller, who also made the term random variable popular over the competing term chance variable through a coin flip with Joseph Doob, was the first in print to refer to it as the Poisson process, in a 1940 paper. Although the Swedish statistician Ove Lundberg used the term Poisson process in his 1940 PhD dissertation,[20] in which Feller was acknowledged as an influence,[28] it has been claimed that Feller coined the term before 1940.[22] It has been remarked that both Feller and Lundberg used the term as though it were well-known, implying it was already in spoken use.[20] Feller worked from 1936 to 1939 alongside the Swedish mathematician and statistician Harald Cramér at Stockholm University, where Lundberg was a PhD student under Cramér. Cramér did not use the term Poisson process in a book of his finished in 1936, but did in subsequent editions, which has led to the speculation that the term Poisson process was coined sometime between 1936 and 1939 at Stockholm University.[20]

## Overview of definitions

The Poisson point process is one of the most studied point processes, in both the field of probability and in more applied disciplines concerning random phenomena,[16] due to its convenient properties as a mathematical model as well as being mathematically interesting.[2] Depending on the setting, the process has several equivalent definitions [29] as well as definitions of varying generality owing to its many applications and characterizations.[16] It may be defined, studied and used in one dimension (on the real line) where it can be interpreted as a counting process or part of a queueing model;[29][30] in higher dimensions such as the plane where it plays a role in stochastic geometry and spatial statistics;[1][31] or on more abstract mathematical spaces.[32] Consequently, the notation, terminology and level of mathematical rigour used to define and study the Poisson point process and point processes in general vary according to the context.[1][16] Despite its different forms and varying generality, the Poisson point process has two key properties.

### First key property: Poisson distributed number of points

The Poisson point process is related to the Poisson distribution, which implies that the probability of a Poisson random variable ${\displaystyle \textstyle N}$ being equal to ${\displaystyle \textstyle n}$ is given by:

${\displaystyle P\{N=n\}={\frac {\Lambda ^{n}}{n!}}e^{-\Lambda }}$

where ${\displaystyle \textstyle n!}$ denotes ${\displaystyle \textstyle n}$ factorial and ${\displaystyle \textstyle \Lambda }$ is the single Poisson parameter that is used to define the Poisson distribution. If a Poisson point process is defined on some underlying mathematical space, called a state space [2][33] or carrier space,[34][35] then the number of points in a bounded region of the space will be a Poisson random variable with some parameter whose form will depend on the setting.[2]
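As a minimal numerical sketch (Python, with an illustrative parameter Λ = 3 chosen only for the example), the probability mass function above can be evaluated directly:

```python
from math import exp, factorial

def poisson_pmf(n: int, lam: float) -> float:
    """P{N = n} = lam**n / n! * exp(-lam) for a Poisson random variable N."""
    return lam ** n / factorial(n) * exp(-lam)

lam = 3.0
# The probabilities over n = 0, 1, 2, ... sum to one
total = sum(poisson_pmf(n, lam) for n in range(100))
```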

### Second key property: complete independence

The other key property is that for a collection of disjoint and bounded subregions of the underlying space, the number of points in each bounded subregion will be completely independent of all the others. This property is known under several names such as complete randomness, complete independence,[16] or independent scattering [1][17][33] and is common to all Poisson point processes. In other words, there is a lack of interaction between different regions and the points in general,[36] which motivates the Poisson process being sometimes called a purely or completely random process.[16]

### Different definitions

The Poisson point process is often defined on the real line in the homogeneous setting, and then extended to more general settings with more mathematical rigour.[16][36] For all the instances of the Poisson point process, the two key properties[a] of the Poisson distribution and complete independence play an important role.[1]

## Homogeneous Poisson point process

If a Poisson point process has a constant parameter, say, ${\displaystyle \textstyle \lambda }$, then it is called a homogeneous or stationary Poisson point process. The parameter, called rate or intensity, is related to the expected (or average) number of Poisson points existing in some bounded region.[2][17] In fact, the parameter ${\displaystyle \textstyle \lambda }$ can be interpreted as the average number of points per some unit of extent such as length, area, volume, or time, depending on the underlying mathematical space, hence it is sometimes called the mean density;[16] see Terminology; the extent is sometimes called the exposure.[37][38]

### Defined on the real line

Consider two real numbers ${\displaystyle \textstyle a}$ and ${\displaystyle \textstyle b}$, where ${\displaystyle \textstyle a\leq b}$, and which may represent points in time. Denote by ${\displaystyle \textstyle N(a,b]}$ the random number of points of a homogeneous Poisson point process existing with values greater than ${\displaystyle \textstyle a}$ but less than or equal to ${\displaystyle \textstyle b}$, or in other words, the number of points of the process in the interval ${\displaystyle \textstyle (a,b]}$. If the points form or belong to a homogeneous Poisson process with parameter ${\displaystyle \textstyle \lambda >0}$, then the probability of ${\displaystyle \textstyle n}$ points existing in the above interval ${\displaystyle \textstyle (a,b]}$ is given by:

${\displaystyle P\{N(a,b]=n\}={\frac {[\lambda (b-a)]^{n}}{n!}}e^{-\lambda (b-a)}.}$

In other words, ${\displaystyle \textstyle N(a,b]}$ is a Poisson random variable with mean ${\displaystyle \textstyle \lambda (b-a)}$. Furthermore, the numbers of points in any two disjoint intervals, say, ${\displaystyle \textstyle (a_{1},b_{1}]}$ and ${\displaystyle \textstyle (a_{2},b_{2}]}$, are independent of each other, and this extends to any finite number of disjoint intervals.[16] In the queueing theory context, one can consider a point existing (in an interval) as an event, but this is different from the word event in the probability theory sense.[b] It follows that ${\displaystyle \textstyle \lambda }$ is the expected number of arrivals that occur per unit of time, and it is sometimes called the rate parameter.[30]

For a more formal definition of a stochastic process, such as a point process, one can use the Kolmogorov theorem, which in this context gives the joint probability of some number of points existing in each disjoint finite interval. More specifically, let ${\displaystyle \textstyle N(a_{i},b_{i}]}$ denote the number of points of (a point process) happening in the half-open interval ${\displaystyle \textstyle (a_{i},b_{i}]}$, where the real numbers ${\displaystyle \textstyle a_{i}<b_{i}\leq a_{i+1}}$. Then for some positive integer ${\displaystyle \textstyle k}$, the homogeneous Poisson point process on the real line with parameter ${\displaystyle \textstyle \lambda >0}$ is defined with the finite-dimensional distribution:[16]

${\displaystyle P\{N(a_{i},b_{i}]=n_{i},i=1,\dots ,k\}=\prod _{i=1}^{k}{\frac {[\lambda (b_{i}-a_{i})]^{n_{i}}}{n_{i}!}}e^{-\lambda (b_{i}-a_{i})},}$
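The distribution above can be checked by simulation. The sketch below (Python; the interval endpoints, the intensity and the inversion-based sampler are illustrative choices, since the standard library has no Poisson sampler) draws many realisations of ${\displaystyle \textstyle N(a,b]}$ and compares the empirical mean with ${\displaystyle \textstyle \lambda (b-a)}$:

```python
import random
from math import exp

lam = 2.0          # intensity: expected points per unit length
a, b = 1.0, 4.0    # the interval (a, b], so the mean count is lam*(b-a) = 6

def sample_poisson(mean: float, rng: random.Random) -> int:
    """Draw a Poisson variate by inverting the CDF (summing pmf terms)."""
    u, n, p = rng.random(), 0, exp(-mean)
    cdf = p
    while u > cdf:
        n += 1
        p *= mean / n
        cdf += p
    return n

rng = random.Random(0)
counts = [sample_poisson(lam * (b - a), rng) for _ in range(100_000)]
empirical_mean = sum(counts) / len(counts)   # should be close to lam*(b-a) = 6
```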

#### Key properties

The above definition has two important features pertaining to Poisson point processes in general:

• the number of points in each finite interval has a Poisson distribution;
• the number of points in disjoint intervals are independent random variables.

Furthermore, it has a third feature related to just the homogeneous Poisson process:

• the distribution of the number of points in each interval ${\displaystyle \textstyle (a+t,b+t]}$ depends only on the interval's length ${\displaystyle \textstyle b-a}$.

In other words, for any finite ${\displaystyle \textstyle t>0}$, the distribution of the random variable ${\displaystyle \textstyle N(a+t,b+t]}$ does not depend on ${\displaystyle \textstyle t}$,[16] and, hence, the process is a stationary process, which is why it is sometimes called the stationary Poisson process.

#### Law of large numbers

The quantity ${\displaystyle \textstyle \lambda (b_{i}-a_{i})}$ can be interpreted as the expected or average number of points occurring in the interval ${\displaystyle \textstyle (a_{i},b_{i}]}$, namely:

${\displaystyle E\{N(a_{i},b_{i}]\}=\lambda (b_{i}-a_{i}),}$

where ${\displaystyle \textstyle E}$ denotes the expectation operator. In other words, the parameter ${\displaystyle \textstyle \lambda }$ of the Poisson process coincides with the density of points. Furthermore, the homogeneous Poisson point process adheres to its own form of the (strong) law of large numbers.[2] More specifically, with probability one:

${\displaystyle \lim _{t\rightarrow \infty }{\frac {N(t)}{t}}=\lambda ,}$

where ${\displaystyle \textstyle \lim }$ denotes the limit of a function.
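A small simulation illustrates this law of large numbers. The sketch below (Python, with an illustrative rate λ = 3) generates the process from exponential gaps between points, anticipating the memoryless property described in the next subsection, and watches ${\displaystyle \textstyle N(t)/t}$ approach ${\displaystyle \textstyle \lambda }$:

```python
import random

lam = 3.0                  # rate of the homogeneous process
rng = random.Random(1)

def N_of_t(t_max: float) -> int:
    """Count points in (0, t_max], generated from exponential gaps
    with rate lam (mean 1/lam) between successive points."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(lam)
        if t > t_max:
            return n
        n += 1

# N(t)/t should approach lam = 3.0 as t grows
ratios = [N_of_t(t) / t for t in (10.0, 100.0, 10_000.0)]
```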

#### Memoryless property

The distance between two consecutive points of a homogeneous Poisson point process on the real line is an exponential random variable with parameter ${\displaystyle \textstyle \lambda }$ (or equivalently, mean ${\displaystyle \textstyle 1/\lambda }$). This implies that the points have the memoryless property: the existence of one point in a finite interval does not affect the probability (distribution) of other points existing. This property is directly related to the complete independence of the Poisson process; however, it has no natural equivalent when the Poisson process is defined in higher dimensions.[2]

#### Orderliness and simplicity

A stochastic process with stationary increments is sometimes said to be orderly,[39] ordinary [32] or regular [30] if

${\displaystyle P\{N(t,t+\delta ]>1\}=o(\delta ),}$

where little-o notation is used. A point process is called a simple point process when the probability of any two of its points coinciding in the same position (on the underlying state space) is zero. For point processes in general on the real line, the (probability distribution) property of orderliness implies that the process is simple [39] or has the (sample path) property of simplicity,[32] which is the case for the homogeneous Poisson point process.

#### Relationship to other processes

On the real line, the Poisson point process is a type of continuous-time Markov process known as a birth-death process (with just births and zero deaths) and is called a pure [30] or simple birth process.[40] More complicated processes with the Markov property, such as Markov arrival processes, have been defined where the Poisson process is a special case.[29][41]

#### Counting process interpretation

The homogeneous Poisson point process, when considered on the positive half-line, is sometimes defined as a counting process, which can be denoted as ${\displaystyle \textstyle \{N(t),t\geq 0\}}$.[29][30] A counting process represents the total number of occurrences or events that have happened up to and including time ${\displaystyle \textstyle t}$. A counting process is a Poisson counting process with rate ${\displaystyle \textstyle \lambda >0}$ if it has the following three properties:

• ${\displaystyle \textstyle N(0)=0}$;
• has independent increments; and
• the number of events (or points) in any interval of length ${\displaystyle \textstyle t}$ is a Poisson random variable with parameter (or mean) ${\displaystyle \textstyle \lambda t}$.

The last property implies

${\displaystyle E[N(t)]=\lambda t.}$

The Poisson counting process can also be defined by stating that the time differences between events of the counting process are exponential variables with mean ${\displaystyle \textstyle 1/\lambda }$.[29] The time differences between the events or arrivals are known as interarrival [30][36] or interoccurrence times.[29] These two definitions of the Poisson counting process agree with the previous definition of the Poisson point process.
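The agreement between the two definitions can be illustrated numerically. The following sketch (Python; the rate λ = 2 is an arbitrary choice) builds the counting process from exponential interarrival times and checks that the fraction of realisations with no event in (0, 1] is close to the Poisson probability ${\displaystyle \textstyle e^{-\lambda }}$:

```python
import random
from math import exp

lam = 2.0
rng = random.Random(42)

def count_in_unit_interval() -> int:
    """One realisation of the counting process on (0, 1], generated
    from exponential interarrival times with mean 1/lam; returns N(1)."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(lam)
        if t > 1.0:
            return n
        n += 1

trials = 100_000
frac_empty = sum(count_in_unit_interval() == 0 for _ in range(trials)) / trials
# P{N(1) = 0} = exp(-lam), about 0.135 for lam = 2
```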

#### Martingale characterization

On the real line, the homogeneous Poisson point process has a connection to the theory of martingales via the following characterization: a point process is the homogeneous Poisson point process if and only if

${\displaystyle N(-\infty ,t]-\lambda t,}$

is a martingale.[42]

#### Restricted to the half-line

If the homogeneous Poisson point process is considered just on the half-line ${\displaystyle \textstyle [0,\infty )}$, which is often the case when ${\displaystyle \textstyle t}$ represents time, as it does for the previous counting process,[29][30] then the resulting process is not truly invariant under translation.[2] In that case the process is no longer stationary, according to some definitions of stationarity.[1][2][39]

#### Applications

There have been many applications of the homogeneous Poisson point process on the real line in an attempt to model seemingly random and independent events occurring. It has a fundamental role in queueing theory, the probability field concerned with developing suitable stochastic models to represent the random arrival and departure of certain phenomena.[11][29][30][36] For example, customers arriving and being served or phone calls arriving at a phone exchange can both be studied with techniques from queueing theory. The original paper proposing the online payment system known as Bitcoin featured a mathematical model based on a homogeneous Poisson point process.[43]

#### Generalizations

The Poisson counting process or, more generally, the homogeneous Poisson point process on the real line is considered one of the simplest stochastic processes for counting random numbers of points.[39][44] The process can be generalized in a number of ways. One possible generalization is to extend the distribution of interarrival times from the exponential distribution to other distributions, which introduces the stochastic process known as a renewal process. Another generalization is to define it on higher dimensional spaces such as the plane.[16]

### Spatial Poisson point process

A spatial Poisson process is a Poisson point process defined in the plane ${\displaystyle \textstyle {\textbf {R}}^{2}}$.[42][45] For its definition, consider a bounded, open or closed (or more precisely, Borel measurable) region ${\displaystyle \textstyle B}$ of the plane. Denote by ${\displaystyle \textstyle N(B)}$ the (random) number of points of ${\displaystyle \textstyle N}$ existing in this region ${\displaystyle \textstyle B\subset {\textbf {R}}^{2}}$. If the points belong to a homogeneous Poisson process with parameter ${\displaystyle \textstyle \lambda >0}$, then the probability of ${\displaystyle \textstyle n}$ points existing in ${\displaystyle \textstyle B}$ is given by:

${\displaystyle P\{N(B)=n\}={\frac {(\lambda |B|)^{n}}{n!}}e^{-\lambda |B|}}$

where ${\displaystyle \textstyle |B|}$ denotes the area of ${\displaystyle \textstyle B}$.

More formally, for some finite integer ${\displaystyle \textstyle k\geq 1}$, consider a collection of disjoint, bounded Borel (measurable) sets ${\displaystyle \textstyle B_{1},\dots ,B_{k}}$. Let ${\displaystyle \textstyle N(B_{i})}$ denote the number of points of ${\displaystyle \textstyle N}$ existing in ${\displaystyle \textstyle B_{i}}$. Then the homogeneous Poisson point process with parameter ${\displaystyle \textstyle \lambda >0}$ has the finite-dimensional distribution [16]

${\displaystyle P\{N(B_{i})=n_{i},i=1,\dots ,k\}=\prod _{i=1}^{k}{\frac {(\lambda |B_{i}|)^{n_{i}}}{n_{i}!}}e^{-\lambda |B_{i}|}.}$
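A realisation of the spatial process on a rectangle can be simulated with the standard recipe suggested by the formula above: draw a Poisson number of points with mean ${\displaystyle \textstyle \lambda |B|}$, then place them independently and uniformly in ${\displaystyle \textstyle B}$ (uniform placement is the property described in the later section on uniformly distributed points). In the Python sketch below, the rectangle dimensions, the intensity and the inversion-based Poisson sampler are illustrative choices:

```python
import random
from math import exp

lam = 5.0                    # expected points per unit area
width, height = 2.0, 3.0     # the rectangle B, so |B| = 6 and E[N(B)] = 30
rng = random.Random(7)

def sample_poisson(mean: float, rng: random.Random) -> int:
    """Draw a Poisson variate by inverting the CDF (summing pmf terms)."""
    u, n, p = rng.random(), 0, exp(-mean)
    cdf = p
    while u > cdf:
        n += 1
        p *= mean / n
        cdf += p
    return n

# One realisation: a Poisson number of points, each placed uniformly in B
n_points = sample_poisson(lam * width * height, rng)
points = [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n_points)]
```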

#### Applications

According to one statistical study, the positions of cellular or mobile phone base stations in the Australian city of Sydney resemble a Poisson point process, while in many other cities around the world they do not and other point processes are required.[46]

The spatial Poisson point process features prominently in spatial statistics, stochastic geometry, and continuum percolation theory. The process has been applied in various physical sciences, for example, in a model developed for the detection of alpha particles.[1] In recent years, it has been frequently used to model the seemingly disordered spatial configurations of certain wireless communication networks.[13][14][15] For example, models for cellular or mobile phone networks have been developed in which it is assumed that the phone network transmitters, known as base stations, are positioned according to a homogeneous Poisson point process.

### Defined in higher dimensions

The previous homogeneous Poisson point process immediately extends to higher dimensions by replacing the notion of area with (high dimensional) volume. For some bounded region ${\displaystyle \textstyle B}$ of Euclidean space ${\displaystyle \textstyle {\textbf {R}}^{d}}$, if the points form a homogeneous Poisson process with parameter ${\displaystyle \textstyle \lambda >0}$, then the probability of ${\displaystyle \textstyle n}$ points existing in ${\displaystyle \textstyle B\subset {\textbf {R}}^{d}}$ is given by:

${\displaystyle P\{N(B)=n\}={\frac {(\lambda |B|)^{n}}{n!}}e^{-\lambda |B|}}$

where ${\displaystyle \textstyle |B|}$ now denotes the ${\displaystyle \textstyle d}$-dimensional volume of ${\displaystyle \textstyle B}$. Furthermore, for a collection of disjoint, bounded Borel sets ${\displaystyle \textstyle B_{1},\dots ,B_{k}\subset {\textbf {R}}^{d}}$, let ${\displaystyle \textstyle N(B_{i})}$ denote the number of points of ${\displaystyle \textstyle N}$ existing in ${\displaystyle \textstyle B_{i}}$. Then the corresponding homogeneous Poisson point process with parameter ${\displaystyle \textstyle \lambda >0}$ has the finite-dimensional distribution [16]

${\displaystyle P\{N(B_{i})=n_{i},i=1,\dots ,k\}=\prod _{i=1}^{k}{\frac {(\lambda |B_{i}|)^{n_{i}}}{n_{i}!}}e^{-\lambda |B_{i}|}.}$

The homogeneous Poisson point process does not depend, through its parameter ${\displaystyle \textstyle \lambda }$, on the position in the underlying state space, which implies it is both a stationary process (invariant to translation) and an isotropic (invariant to rotation) stochastic process.[1] Similarly to the one-dimensional case, if the homogeneous point process is restricted to some bounded subset of ${\displaystyle \textstyle {\textbf {R}}^{d}}$, then, depending on some definitions of stationarity, the process is no longer stationary.[1][39]

### Points are uniformly distributed

If the homogeneous point process is defined on the real line as a mathematical model for occurrences of some phenomenon, then it has the characteristic that the positions of these occurrences or events on the real line (often interpreted as time) will be uniformly distributed. More specifically, if an event occurs (according to this process) in an interval ${\displaystyle \textstyle (a,b]}$ where ${\displaystyle \textstyle a\leq b}$, then its location will be a uniform random variable defined on that interval.[16] Furthermore, the homogeneous point process is sometimes called the uniform Poisson point process (see Terminology). This uniformity property extends to higher dimensions in Cartesian coordinates, but it does not hold in other coordinate systems (for example, polar or spherical).

## Inhomogeneous Poisson point process

The inhomogeneous or nonhomogeneous Poisson point process (see Terminology) is a Poisson point process with a Poisson parameter set as some location-dependent function in the underlying space on which the Poisson process is defined. For Euclidean space ${\displaystyle \textstyle R^{d}}$, this is achieved by introducing a locally integrable positive function ${\displaystyle \textstyle \lambda (x)}$, where ${\displaystyle \textstyle x}$ is a ${\displaystyle \textstyle d}$-dimensional point located in ${\displaystyle \textstyle R^{d}}$, such that for any bounded region ${\displaystyle \textstyle B}$ the (${\displaystyle \textstyle d}$-dimensional) volume integral of ${\displaystyle \textstyle \lambda (x)}$ over region ${\displaystyle \textstyle B}$ is finite. In other words, if this integral, denoted by ${\displaystyle \textstyle \Lambda (B)}$, is:[17]

${\displaystyle \Lambda (B)=\int _{B}\lambda (x)dx<\infty ,}$

where ${\displaystyle \textstyle dx}$ is a (${\displaystyle \textstyle d}$-dimensional) volume element,[c] then for any collection of disjoint bounded Borel measurable sets ${\displaystyle \textstyle B_{1},\dots ,B_{k}}$, an inhomogeneous Poisson process with (intensity) function ${\displaystyle \textstyle \lambda (x)}$ has the finite-dimensional distribution:[16]

${\displaystyle P\{N(B_{i})=n_{i},i=1,\dots ,k\}=\prod _{i=1}^{k}{\frac {(\Lambda (B_{i}))^{n_{i}}}{n_{i}!}}e^{-\Lambda (B_{i})}.}$

Furthermore, ${\displaystyle \textstyle \Lambda (B)}$ has the interpretation of being the expected number of points of the Poisson process located in the bounded region ${\displaystyle \textstyle B}$, namely

${\displaystyle \Lambda (B)=E[N(B)].}$

### Defined on the real line

On the real line, the inhomogeneous or non-homogeneous Poisson point process has mean measure given by a one-dimensional integral. For two real numbers ${\displaystyle \textstyle a}$ and ${\displaystyle \textstyle b}$, where ${\displaystyle \textstyle a\leq b}$, denote by ${\displaystyle \textstyle N(a,b]}$ the number of points of an inhomogeneous Poisson process with intensity function ${\displaystyle \textstyle \lambda (t)}$ with values greater than ${\displaystyle \textstyle a}$ but less than or equal to ${\displaystyle \textstyle b}$. The probability of ${\displaystyle \textstyle n}$ points existing in the above interval ${\displaystyle \textstyle (a,b]}$ is given by:

${\displaystyle P\{N(a,b]=n\}={\frac {[\Lambda (a,b)]^{n}}{n!}}e^{-\Lambda (a,b)}.}$

where the mean or intensity measure is:

${\displaystyle \Lambda (a,b)=\int _{a}^{b}\lambda (t)dt,}$

which means that the random variable ${\displaystyle \textstyle N(a,b]}$ is a Poisson random variable with mean ${\displaystyle \textstyle E\{N(a,b]\}=\Lambda (a,b)}$.

A useful feature of the one-dimensional setting is that an inhomogeneous Poisson point process can be transformed into a homogeneous one by a monotone transformation or mapping, which is achieved with the inverse of ${\displaystyle \textstyle \Lambda }$.[2][33]
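This transformation also runs in reverse and gives a simulation recipe: generate a homogeneous unit-rate process and map its points through the inverse of ${\displaystyle \textstyle \Lambda }$. The Python sketch below uses the illustrative intensity ${\displaystyle \textstyle \lambda (t)=2t}$, for which ${\displaystyle \textstyle \Lambda (0,t]=t^{2}}$ and the inverse mapping is a square root:

```python
import random
from math import sqrt

T = 3.0                      # observe the process on (0, T]
rng = random.Random(3)

def inhomogeneous_arrivals() -> list:
    """Arrival times on (0, T] for intensity lambda(t) = 2t, obtained by
    mapping unit-rate homogeneous arrivals s through sqrt (the inverse
    of Lambda(0, t] = t**2)."""
    arrivals, s = [], 0.0
    while True:
        s += rng.expovariate(1.0)    # unit-rate homogeneous gap
        if s > T ** 2:               # Lambda(0, T] = T**2 = 9
            return arrivals
        arrivals.append(sqrt(s))

times = inhomogeneous_arrivals()
# The expected number of points is Lambda(0, T] = 9
```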

#### Counting process interpretation

The inhomogeneous Poisson point process, when considered on the positive half-line, is also sometimes defined as a counting process. With this interpretation, the process, which is sometimes written as ${\displaystyle \textstyle \{N(t),t\geq 0\}}$, represents the total number of occurrences or events that have happened up to and including time ${\displaystyle \textstyle t}$. A counting process is said to be an inhomogeneous Poisson counting process if it has the four properties:[29][30]

• ${\displaystyle \textstyle N(0)=0}$;
• has independent increments;
• ${\displaystyle \textstyle P\{N(t+h)-N(t)=1\}=\lambda (t)h+o(h)}$; and
• ${\displaystyle \textstyle P\{N(t+h)-N(t)\geq 2\}=o(h)}$,

where ${\displaystyle \textstyle o(h)}$ is asymptotic or little-o notation for ${\displaystyle \textstyle o(h)/h\rightarrow 0}$ as ${\displaystyle \textstyle h\rightarrow 0}$. In the case of point processes with refractoriness (e.g., neural spike trains) a stronger version of property 4 applies:[47] ${\displaystyle P(N(t+h)-N(t)\geq 2)=o(h^{2})}$.

The above properties imply that ${\displaystyle \textstyle N(t+h)-N(t)}$ is a Poisson random variable with the parameter (or mean)

${\displaystyle E[N(t+h)-N(t)]=\int _{t}^{t+h}\lambda (s)ds,}$

which implies

${\displaystyle E[N(h)]=\int _{0}^{h}\lambda (s)ds.}$

### Spatial Poisson point process

An inhomogeneous Poisson process, just like a homogeneous Poisson process, defined in the plane ${\displaystyle \textstyle {\textbf {R}}^{2}}$ is called a spatial Poisson point process.[12] Calculating its intensity measure requires performing an area integral of its intensity function over some region. For example, its intensity function (as a function of Cartesian coordinates ${\displaystyle \textstyle x}$ and ${\displaystyle \textstyle y}$) may be

${\displaystyle \lambda (x,y)=e^{-(x^{2}+y^{2})},}$

hence it has an intensity measure given by the area integral

${\displaystyle \Lambda (B)=\int _{B}e^{-(x^{2}+y^{2})}dxdy,}$

where ${\displaystyle \textstyle B}$ is some bounded region in the plane ${\displaystyle \textstyle R^{2}}$. The previous intensity function can be re-written, via a change of coordinates, in polar coordinates as

${\displaystyle \lambda (r,\theta )=e^{-r^{2}},}$

which reveals that the intensity function in this example is independent of the angular coordinate ${\displaystyle \textstyle \theta }$, or, in other words, it is isotropic or rotationally invariant. The intensity measure is then given by the area integral

${\displaystyle \Lambda (B')=\int _{B'}e^{-r^{2}}r\,dr\,d\theta ,}$

where ${\displaystyle \textstyle B'}$ is some bounded region in the plane ${\displaystyle \textstyle R^{2}}$.
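When ${\displaystyle \textstyle B'}$ is a disk of radius ${\displaystyle \textstyle R}$ centered at the origin, the area integral above has the closed form ${\displaystyle \textstyle \Lambda (B')=\pi (1-e^{-R^{2}})}$, which a direct numerical integration of ${\displaystyle \textstyle e^{-r^{2}}r}$ confirms (the radius value below is an arbitrary illustration):

```python
import math

def disk_intensity_measure(R, steps=2000):
    """Numerically integrate e^{-r^2} r dr dtheta over a disk of radius R."""
    h = R / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * h          # midpoint of the i-th radial slice
        total += math.exp(-r * r) * r * h
    return 2.0 * math.pi * total   # the angular integral contributes 2*pi

R = 1.5
closed_form = math.pi * (1.0 - math.exp(-R * R))
assert math.isclose(disk_intensity_measure(R), closed_form, rel_tol=1e-6)
```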

### In higher dimensions

In the plane, ${\displaystyle \textstyle \Lambda (B)}$ corresponds to an area integral while in ${\displaystyle \textstyle {\textbf {R}}^{d}}$ the integral becomes a (${\displaystyle \textstyle d}$-dimensional) volume integral.

### Applications

The real line, as mentioned earlier, is often interpreted as time and in this setting the inhomogeneous process is used in the fields of counting processes and in queueing theory.[29][30] Examples of phenomena which have been represented by or appear as an inhomogeneous Poisson point process include:

• Goals being scored in a soccer game.[48]
• Defects in a circuit board.[49]

In the plane, the Poisson point process is of fundamental importance in the related disciplines of stochastic geometry [1][31] and spatial statistics.[12][17] This point process is not stationary because its distribution depends on the location in the underlying space or state space. Hence, it can be used to model phenomena with a density that varies over some region. In other words, the phenomena can be represented as points that have a location-dependent density. Uses for this process as a mathematical model are diverse and have appeared across various disciplines including the study of salmon and sea lice in the oceans,[50] forestry,[5] and search problems.[51]

### Interpretation of the intensity function

The Poisson intensity function ${\displaystyle \textstyle \lambda (x)}$ has an interpretation, considered intuitive,[1] with the volume element ${\displaystyle \textstyle dx}$ in the infinitesimal sense: ${\displaystyle \textstyle \lambda (x)dx}$ is the infinitesimal probability of a point of a Poisson point process existing in a region of space with volume ${\displaystyle \textstyle dx}$ located at ${\displaystyle \textstyle x}$.[1]

For example, given a homogeneous Poisson point process on the real line, the probability of finding a single point of the process in a small interval of width ${\displaystyle \textstyle \delta }$ is approximately ${\displaystyle \textstyle \lambda \delta }$. In fact, such intuition is how the Poisson point process is sometimes introduced and its distribution derived.[2][36][39]

### Simple point process

If a Poisson point process has an intensity measure that is locally finite and diffuse (or non-atomic), then it is a simple point process. For a simple point process, the probability of a point existing at a single point or location in the underlying (state) space is either zero or one. This implies that, with probability one, no two (or more) points of a Poisson point process coincide in location in the underlying space.[1][14]

## Simulation

Simulating a Poisson point process on a computer is usually done in a bounded region of space, known as a simulation window, and requires two steps: appropriately creating a random number of points and then suitably placing the points in a random manner. Both steps depend on the specific Poisson point process that is being simulated.[1][33]

### Step 1: Number of points

The number of points ${\displaystyle \textstyle N}$ in the window, denoted here by ${\displaystyle \textstyle W}$, needs to be simulated, which is done by using a (pseudo)-random number generating function capable of simulating Poisson random variables.

#### Homogeneous case

For the homogeneous case with the constant ${\displaystyle \textstyle \lambda }$, the mean of the Poisson random variable ${\displaystyle \textstyle N}$ is set to ${\displaystyle \textstyle \lambda |W|}$ where ${\displaystyle \textstyle |W|}$ is the length, area or (${\displaystyle \textstyle d}$-dimensional) volume of ${\displaystyle \textstyle W}$.

#### Inhomogeneous case

For the inhomogeneous case, ${\displaystyle \textstyle \lambda |W|}$ is replaced with the (${\displaystyle \textstyle d}$-dimensional) volume integral

${\displaystyle \Lambda (W)=\int _{W}\lambda (x)dx.}$

### Step 2: Positioning of points

The second stage requires randomly placing the ${\displaystyle \textstyle N}$ points in the window ${\displaystyle \textstyle W}$.

#### Homogeneous case

For the homogeneous case in one dimension, all points are uniformly and independently placed in the window or interval ${\displaystyle \textstyle W}$. For higher dimensions in a Cartesian coordinate system, each coordinate is uniformly and independently placed in the window ${\displaystyle \textstyle W}$. If the window is not a subspace of Cartesian space (for example, inside a unit sphere or on the surface of a unit sphere), then the points will not be uniformly placed in ${\displaystyle \textstyle W}$, and a suitable change of coordinates (from Cartesian) is needed.[1]
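The two steps can be sketched together for the homogeneous case in the plane (a minimal illustration; the Poisson sampler below uses Knuth's multiplication method, which is adequate for moderate means):

```python
import math
import random

random.seed(42)

def sample_poisson(mean):
    """Knuth's method for a Poisson random variate (fine for moderate means)."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_homogeneous_poisson(lam, width, height):
    """Simulate a homogeneous Poisson point process on [0, width] x [0, height]."""
    # Step 1: the number of points N is Poisson with mean lam * |W|.
    n = sample_poisson(lam * width * height)
    # Step 2: place the N points uniformly and independently in the window.
    return [(random.uniform(0, width), random.uniform(0, height)) for _ in range(n)]

points = simulate_homogeneous_poisson(lam=5.0, width=2.0, height=1.0)
assert all(0 <= x <= 2.0 and 0 <= y <= 1.0 for x, y in points)
```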

#### Inhomogeneous case

For the inhomogeneous case, a couple of different methods can be used depending on the nature of the intensity function ${\displaystyle \textstyle \lambda (x)}$.[1] If the intensity function is sufficiently simple, then independent and random non-uniform (Cartesian or other) coordinates of the points can be generated. For example, simulating a Poisson point process on a circular window with an isotropic intensity function (in polar coordinates ${\displaystyle \textstyle r}$ and ${\displaystyle \textstyle \theta }$), meaning it is rotationally invariant or independent of ${\displaystyle \textstyle \theta }$ but dependent on ${\displaystyle \textstyle r}$, can be done by a change of variable in ${\displaystyle \textstyle r}$.[1]

For more complicated intensity functions, one can use an acceptance-rejection method, which consists of using (or 'accepting') only certain random points and not using (or 'rejecting') the other points, based on the ratio [33]

${\displaystyle {\frac {\lambda (x_{i})}{\Lambda (W)}}={\frac {\lambda (x_{i})}{\int _{W}\lambda (x)dx}},}$

where ${\displaystyle \textstyle x_{i}}$ is the point under consideration for acceptance or rejection.
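As an illustration, a common practical variant of acceptance-rejection (often called thinning) accepts a candidate point ${\displaystyle \textstyle x}$ with probability ${\displaystyle \textstyle \lambda (x)/\lambda _{\max }}$, where ${\displaystyle \textstyle \lambda _{\max }}$ is an upper bound on the intensity over the window; the normalizing quantity differs from the ratio above, but the accept/reject idea is the same. A hypothetical sketch using the earlier isotropic intensity ${\displaystyle \textstyle \lambda (x,y)=e^{-(x^{2}+y^{2})}}$, which is bounded by 1:

```python
import math
import random

random.seed(7)

def simulate_inhomogeneous_poisson(lam, lam_max, width, height):
    """Thinning sketch: simulate a homogeneous process with intensity lam_max,
    then accept each candidate x with probability lam(x)/lam_max."""
    mean = lam_max * width * height
    # Knuth's method for a Poisson variate (adequate for moderate means).
    L, n, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            break
        n += 1
    points = []
    for _ in range(n):
        x, y = random.uniform(0, width), random.uniform(0, height)
        if random.random() < lam(x, y) / lam_max:
            points.append((x, y))
    return points

pts = simulate_inhomogeneous_poisson(lambda x, y: math.exp(-(x * x + y * y)),
                                     lam_max=1.0, width=2.0, height=2.0)
assert all(0 <= x <= 2 and 0 <= y <= 2 for x, y in pts)
```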

## General Poisson point process

The Poisson point process can be further generalized to what is sometimes known as the general Poisson point process[1][12][15] by using a Radon measure ${\displaystyle \textstyle \Lambda }$, which is a locally finite measure. The Radon measure can be atomic, that is, it can have atoms at points of the underlying state space, although some researchers instead assume that the Radon measure ${\displaystyle \textstyle \Lambda }$ is diffuse or non-atomic.[1] If the measure ${\displaystyle \textstyle \Lambda }$ has an atom at ${\displaystyle \textstyle x}$, then the number of points at ${\displaystyle \textstyle x}$ is a Poisson random variable with mean ${\displaystyle \textstyle \Lambda (\{x\})}$.[15]

Suppose that the underlying space of the Poisson point process is ${\displaystyle \textstyle {\textbf {R}}^{d}}$ (the space can be more general), that ${\displaystyle \textstyle \Lambda (\{x\})=0}$ for any single point ${\displaystyle \textstyle x}$ in ${\displaystyle \textstyle {\textbf {R}}^{d}}$, and that ${\displaystyle \textstyle \Lambda (B)}$ is finite for any bounded subset ${\displaystyle \textstyle B}$ of ${\displaystyle \textstyle {\textbf {R}}^{d}}$.[17] Then a point process ${\displaystyle \textstyle {N}}$ is a general Poisson point process with intensity ${\displaystyle \textstyle \Lambda }$ if it has the two following properties:[1]

• the number of points in a bounded Borel set ${\displaystyle \textstyle B}$ is a Poisson random variable with mean ${\displaystyle \textstyle \Lambda (B)}$. In other words, denote the total number of points located in ${\displaystyle \textstyle B}$ by ${\displaystyle \textstyle {N}(B)}$, then the probability that the random variable ${\displaystyle \textstyle {N}(B)}$ is equal to ${\displaystyle \textstyle n}$ is given by:
${\displaystyle P\{{N}(B)=n\}={\frac {(\Lambda (B))^{n}}{n!}}e^{-\Lambda (B)}}$
• the number of points in ${\displaystyle \textstyle n}$ disjoint Borel sets forms ${\displaystyle \textstyle n}$ independent random variables.

The Radon measure ${\displaystyle \textstyle \Lambda }$ maintains its previous interpretation of being the expected number of points of ${\displaystyle \textstyle {N}}$ located in the bounded region ${\displaystyle \textstyle B}$, namely

${\displaystyle \Lambda (B)=E[{N}(B)].}$

Furthermore, if ${\displaystyle \textstyle \Lambda }$ is absolutely continuous such that it has a density (or more precisely, a Radon–Nikodym density or derivative) with respect to the Lebesgue measure, then for all Borel sets ${\displaystyle \textstyle B}$ it can be written as:

${\displaystyle \Lambda (B)=\int _{B}\lambda (x)dx,}$

where the density ${\displaystyle \textstyle \lambda (x)}$ is known, among other terms, as the intensity function.

## Terminology

In addition to the word point often being omitted, the terminology of the Poisson point process and point process theory varies, which has been criticized.[20] The homogeneous Poisson (point) process is also called a stationary Poisson (point) process,[16] sometimes the uniform Poisson (point) process,[2] and in the past it was, by William Feller and others, referred to as a Poisson ensemble of points.[36][52] The term point process has been criticized and some authors prefer the term random point field,[1] hence the terms Poisson random point field or Poisson point field are also used.[53] A point process is considered, and sometimes called, a random counting measure,[54] hence the Poisson point process is also referred to as a Poisson random measure,[55] a term used in the study of Lévy processes,[55][56] but some choose to use the two terms for slightly different random objects.[57]

The inhomogeneous Poisson point process, as well as being called nonhomogeneous [16] or non-homogeneous,[39] is sometimes referred to as the non-stationary,[29] heterogeneous [45][58][59] or spatially dependent Poisson (point) process.[50][60]

The measure ${\displaystyle \textstyle \Lambda }$ is sometimes called the parameter measure [16] or intensity measure [1] or mean measure.[2] If ${\displaystyle \textstyle \Lambda }$ has a derivative or density, denoted by ${\displaystyle \textstyle \lambda (x)}$, it may be called the intensity function of the general Poisson point process [1] or simply the rate or intensity,[2] since there are no standard terms.[2] For the homogeneous Poisson point process, the intensity is simply a constant ${\displaystyle \textstyle \lambda >0}$, which can be referred to as the mean rate or mean density [16] or rate parameter.[30] For ${\displaystyle \textstyle \lambda =1}$, the corresponding process is sometimes referred to as the standard Poisson (point) process.[17][42][61]

The underlying mathematical space on which the point process, Poisson or other, is defined is known as a state space [2] or carrier space.[34][35]

## Notation

The notation of the Poisson point process depends on its setting and the field it is being applied in. For example, on the real line, the Poisson process, whether homogeneous or inhomogeneous, is sometimes interpreted as a counting process, and the notation ${\displaystyle \textstyle \{N(t),t\geq 0\}}$ is used to represent the Poisson process.[29][30]

Another reason for varying notation is due to the theory of point processes, which has a couple of mathematical interpretations. For example, a simple Poisson point process may be considered as a random set, which suggests the notation ${\displaystyle \textstyle x\in {N}}$, implying that ${\displaystyle \textstyle x}$ is a random point belonging to or being an element of the Poisson point process ${\displaystyle \textstyle {N}}$. Another, more general, interpretation is to consider a Poisson or any other point process as a random counting measure, so one can write the number of points of a Poisson point process ${\displaystyle \textstyle {N}}$ being found or located in some (Borel measurable) region ${\displaystyle \textstyle B}$ as ${\displaystyle \textstyle {N}(B)}$, which is a random variable. These different interpretations result in notation being used from mathematical fields such as measure theory and set theory.[1]

For general point processes, sometimes a subscript on the point symbol, for example ${\displaystyle \textstyle x}$, is included so one writes (with set notation) ${\displaystyle \textstyle x_{i}\in {N}}$ instead of ${\displaystyle \textstyle x\in {N}}$, and ${\displaystyle \textstyle x}$ can be used for the dummy variable in integral expressions such as Campbell's theorem, instead of denoting random points.[14] Sometimes an uppercase letter denotes the point process, while a lowercase letter denotes a point of the process, so, for example, the point ${\displaystyle \textstyle x}$ or ${\displaystyle \textstyle x_{i}}$ belongs to or is a point of the point process ${\displaystyle \textstyle X}$, which can be written with set notation as ${\displaystyle \textstyle x\in X}$ or ${\displaystyle \textstyle x_{i}\in X}$.[17]

Furthermore, the set theory and integral or measure theory notation can be used interchangeably. For example, for a point process ${\displaystyle \textstyle N}$ defined on the Euclidean state space ${\displaystyle \textstyle {{\textbf {R}}^{d}}}$ and a (measurable) function ${\displaystyle \textstyle f}$ on ${\displaystyle \textstyle {\textbf {R}}^{d}}$ , the expression

${\displaystyle \int _{{\textbf {R}}^{d}}f(x){N}(dx)=\sum \limits _{x_{i}\in N}f(x_{i}),}$

demonstrates two different ways to write a summation over a point process. More specifically, the integral notation on the left-hand side is interpreting the point process as a random counting measure while the sum on the right-hand side suggests a random set interpretation.[1]

## Functionals and moment measures

In probability theory, operations are applied to random variables for different purposes. Sometimes these operations are regular expectations that produce the average or variance of a random variable. Others, such as characteristic functions (or Laplace transforms) of a random variable, can be used to uniquely identify or characterize random variables and prove results like the central limit theorem.[62] In the theory of point processes there exist analogous mathematical tools, which usually take the form of measures and functionals instead of moments and functions respectively. For measures, often their densities (or Radon–Nikodym derivatives), if they exist, are also expressed with respect to the Lebesgue measure.[1][16]

### Laplace functionals

For a Poisson point process ${\displaystyle \textstyle {N}}$ with intensity measure ${\displaystyle \textstyle \Lambda }$, the Laplace functional is given by:[14]

${\displaystyle L_{N}(f)=e^{-\int _{{\textbf {R}}^{d}}(1-e^{-f(x)})\Lambda (dx)},}$

which for the homogeneous case is:

${\displaystyle L_{N}(f)=e^{-\lambda \int _{{\textbf {R}}^{d}}(1-e^{-f(x)})dx}.}$

One version of Campbell's theorem involves the Laplace functional of the Poisson point process.

### Probability generating functionals

The probability generating function of a non-negative integer-valued random variable leads to the probability generating functional being defined analogously with respect to any non-negative bounded function ${\displaystyle \textstyle v}$ on ${\displaystyle \textstyle {\textbf {R}}^{d}}$ such that ${\displaystyle \textstyle 0\leq v(x)\leq 1}$. For a point process ${\displaystyle \textstyle {N}}$ the probability generating functional is defined as:[1]

${\displaystyle G(v)=E\left[\prod _{x\in {N}}v(x)\right]}$

where the product is performed over all the points in ${\displaystyle \textstyle {N}}$. If the intensity measure ${\displaystyle \textstyle \Lambda }$ of ${\displaystyle \textstyle {N}}$ is locally finite, then ${\displaystyle \textstyle G}$ is well-defined for any such measurable function ${\displaystyle \textstyle v}$ on ${\displaystyle \textstyle {\textbf {R}}^{d}}$. For a Poisson point process with intensity measure ${\displaystyle \textstyle \Lambda }$ the generating functional is given by:

${\displaystyle G(v)=e^{-\int _{{\textbf {R}}^{d}}[1-v(x)]\Lambda (dx)},}$

which in the homogeneous case reduces to

${\displaystyle G(v)=e^{-\lambda \int _{{\textbf {R}}^{d}}[1-v(x)]dx}.}$
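As a consistency check with hypothetical constants: taking ${\displaystyle \textstyle v}$ equal to a constant ${\displaystyle \textstyle c}$ on a bounded set ${\displaystyle \textstyle B}$ and 1 elsewhere reduces the generating functional to ${\displaystyle \textstyle e^{-\lambda (1-c)|B|}}$, which is exactly the ordinary probability generating function ${\displaystyle \textstyle E[c^{N(B)}]}$ of the Poisson count ${\displaystyle \textstyle N(B)}$:

```python
import math

lam, area, c = 2.0, 1.5, 0.4      # hypothetical constants for illustration
mean = lam * area                  # N(B) is Poisson with this mean

# Generating functional evaluated at v = c on B, v = 1 elsewhere:
G = math.exp(-lam * (1.0 - c) * area)

# Ordinary probability generating function of N(B), summed as a series:
pgf = sum(c ** n * mean ** n * math.exp(-mean) / math.factorial(n)
          for n in range(60))
assert math.isclose(G, pgf, rel_tol=1e-12)
```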

### Moment measure

For a general Poisson point process with intensity measure ${\displaystyle \textstyle \Lambda }$ the first moment measure is its intensity measure:[14]

${\displaystyle M^{1}(B)=\Lambda (B),}$

which for a homogeneous Poisson point process with constant intensity ${\displaystyle \textstyle \lambda }$ means:

${\displaystyle M^{1}(B)=\lambda |B|,}$

where ${\displaystyle \textstyle |B|}$ is the length, area or volume (or more generally, the Lebesgue measure) of ${\displaystyle \textstyle B}$.

For the Poisson case with measure ${\displaystyle \textstyle \Lambda }$ the second moment measure is:[12]

${\displaystyle M^{2}(B)=\Lambda (B)+\Lambda (B)^{2},}$

which in the homogeneous case reduces to

${\displaystyle M^{2}(B)=\lambda |B|+(\lambda |B|)^{2}.}$
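A small worked example with hypothetical values: the two moment measures recover the familiar fact that the variance of the Poisson count ${\displaystyle \textstyle N(B)}$ equals its mean.

```python
lam, measure = 2.0, 3.0                     # hypothetical intensity and |B|
m1 = lam * measure                          # first moment measure: E[N(B)] = 6
m2 = lam * measure + (lam * measure) ** 2   # second moment measure: 6 + 36 = 42
variance = m2 - m1 ** 2                     # Var[N(B)] = M^2(B) - M^1(B)^2
assert variance == m1                       # Poisson variance equals the mean
```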

### Factorial moment measure

For a general Poisson point process with intensity measure ${\displaystyle \textstyle \Lambda }$ the ${\displaystyle \textstyle n}$-th factorial moment measure is given by the expression:[1]

${\displaystyle M^{(n)}(B_{1}\times \cdots \times B_{n})=\prod _{i=1}^{n}[\Lambda (B_{i})],}$

where ${\displaystyle \textstyle \Lambda }$ is the intensity measure or first moment measure of ${\displaystyle \textstyle {N}}$, which for some Borel set ${\displaystyle \textstyle B}$ is given by:

${\displaystyle \Lambda (B)=M^{1}(B)=E[{N}(B)].}$

For a homogeneous Poisson point process the ${\displaystyle \textstyle n}$-th factorial moment measure is simply:[14]

${\displaystyle M^{(n)}(B_{1}\times \cdots \times B_{n})=\lambda ^{n}\prod _{i=1}^{n}|B_{i}|,}$

where ${\displaystyle \textstyle |B_{i}|}$ is the length, area, or volume (or more generally, the Lebesgue measure) of ${\displaystyle \textstyle B_{i}}$. Furthermore, the ${\displaystyle \textstyle n}$-th factorial moment density is:[1]

${\displaystyle \mu ^{(n)}(x_{1},\dots ,x_{n})=\lambda ^{n}.}$

## Avoidance function

The avoidance function [2][16][32] or void probability [1] ${\displaystyle \textstyle v}$ of a point process ${\displaystyle \textstyle {N}}$ is defined in relation to some set ${\displaystyle \textstyle B}$, which is a subset of the underlying space ${\displaystyle \textstyle {\textbf {R}}^{d}}$, as the probability of no points of ${\displaystyle \textstyle {N}}$ existing in ${\displaystyle \textstyle B}$. More precisely,[1] for a test set ${\displaystyle \textstyle B}$, the avoidance function is given by:

${\displaystyle v(B)=P({N}(B)=0).}$

For a general Poisson point process ${\displaystyle \textstyle {N}}$ with intensity measure ${\displaystyle \textstyle \Lambda }$, its avoidance function is given by:

${\displaystyle v(B)=e^{-\Lambda (B)}.}$

### Rényi's theorem

It can be shown that simple point processes are completely characterized by their void probabilities. In other words, complete information about a simple point process is captured entirely in its void probabilities. The case for the Poisson process is sometimes known as Rényi's theorem,[2][22] which is named after Alfréd Rényi who discovered the result for the case of a homogeneous point process in one dimension.[2]

In one form,[2] Rényi's theorem says that for a diffuse (or non-atomic) Radon measure ${\displaystyle \textstyle \Lambda }$ on ${\displaystyle \textstyle {\textbf {R}}^{d}}$ and every set ${\displaystyle \textstyle A}$ that is a finite union of rectangles (so not every Borel set[d]), if ${\displaystyle \textstyle {N}}$ is a countable subset of ${\displaystyle \textstyle {\textbf {R}}^{d}}$ such that:

${\displaystyle P({N}(A)=0)=v(A)=e^{-\Lambda (A)}}$

then ${\displaystyle \textstyle {N}}$ is a Poisson point process with intensity measure ${\displaystyle \textstyle \Lambda }$.

## Point process operations

Mathematical operations can be performed on point processes in order to develop suitable mathematical models. One example of an operation is known as thinning which entails deleting or removing the points of some point process according to a rule, hence creating a new process with the remaining points (the deleted points also form a point process). Another example of a point process operation is superimposing (or combining) point processes into one point process.

One of the reasons why the Poisson point process is often used as a model is that, under suitable conditions, when performed on a Poisson point process these operations often produce another (usually different) Poisson point process, demonstrating an aspect of mathematical closure.[2] The operations can also be used to create new point processes, which are then also used as mathematical models for the random placement of certain objects.[1][14]

### Thinning

For the Poisson process, the independent ${\displaystyle \textstyle p(x)}$-thinning operation results in another Poisson point process. More specifically, a ${\displaystyle \textstyle p(x)}$-thinning operation applied to a Poisson point process with intensity measure ${\displaystyle \textstyle \Lambda }$ gives a point process of removed points that is also a Poisson point process ${\displaystyle \textstyle {N}_{p}}$ with intensity measure ${\displaystyle \textstyle \Lambda _{p}}$, which for a bounded Borel set ${\displaystyle \textstyle B}$ is given by:

${\displaystyle \Lambda _{p}(B)=\int _{B}p(x)\Lambda (dx)}$

Furthermore, after randomly thinning a Poisson point process, the kept or remaining points also form a Poisson point process, which has the intensity measure

${\displaystyle \Lambda _{p}(B)=\int _{B}(1-p(x))\Lambda (dx).}$

The two separate Poisson point processes formed respectively from the removed and kept points are stochastically independent of each other.[1][2] In other words, if a region is known to contain ${\displaystyle \textstyle n}$ kept points (from the original Poisson point process), then this will have no influence on the random number of removed points in the same region. This ability to randomly create two independent Poisson point processes from one is sometimes known as splitting [63][64] the Poisson point process.
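A minimal sketch of ${\displaystyle \textstyle p}$-thinning and splitting, using a location-independent removal probability and a fixed-count stand-in for a realization on the unit square (in a full simulation the count would itself be Poisson):

```python
import random

random.seed(3)

# Stand-in realization of a Poisson point process on the unit square
# (fixed count of 20 points purely for illustration).
points = [(random.random(), random.random()) for _ in range(20)]

p = 0.3   # hypothetical, location-independent removal probability
removed, kept = [], []
for pt in points:
    # Each point is removed independently with probability p.
    (removed if random.random() < p else kept).append(pt)

# Splitting: the removed and kept points partition the original process,
# and each part is itself a Poisson point process, independent of the other.
assert len(kept) + len(removed) == len(points)
assert set(kept).isdisjoint(removed)
```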

### Superposition

If there is a countable collection of point processes ${\displaystyle \textstyle {N}_{1},{N}_{2}\dots }$, then their superposition, or, in set theory language, their union

${\displaystyle {N}=\bigcup _{i=1}^{\infty }{N}_{i},}$

also forms a point process. In other words, any points located in any of the point processes ${\displaystyle \textstyle {N}_{1},{N}_{2}\dots }$ will also be located in the superposition of these point processes ${\displaystyle \textstyle {N}}$.

#### Superposition theorem

The Superposition theorem of the Poisson point process, which stems directly from the complete independence property, says [2][22] that the superposition of independent Poisson point processes ${\displaystyle \textstyle {N}_{1},{N}_{2}\dots }$ with mean measures ${\displaystyle \textstyle \Lambda _{1},\Lambda _{2},\dots }$ will also be a Poisson point process with mean measure

${\displaystyle \Lambda =\sum \limits _{i=1}^{\infty }\Lambda _{i}.}$

In other words, the union of two (or countably more) Poisson processes is another Poisson process. If a point ${\displaystyle \textstyle x}$ is sampled from the union of ${\displaystyle \textstyle n}$ Poisson processes, then the probability that the point ${\displaystyle \textstyle x}$ belongs to the ${\displaystyle \textstyle j}$th Poisson process ${\displaystyle \textstyle {N}_{j}}$ is given by:

${\displaystyle P(x\in {N}_{j})={\frac {\Lambda _{j}}{\sum _{i=1}^{n}\Lambda _{i}}}.}$

#### Homogeneous case

In the homogeneous case with constant ${\displaystyle \textstyle \lambda _{1},\lambda _{2}\dots }$, the two previous expressions reduce to

${\displaystyle \lambda =\sum \limits _{i=1}^{\infty }\lambda _{i},}$

and

${\displaystyle P(x\in {N}_{j})={\frac {\lambda _{j}}{\sum _{i=1}^{n}\lambda _{i}}}.}$
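The superposition theorem can be checked at the level of counts: convolving the probability mass functions of two independent Poisson counts with hypothetical means reproduces a single Poisson probability mass function whose mean is their sum.

```python
import math

def poisson_pmf(n, mean):
    """Poisson probability mass function."""
    return mean ** n * math.exp(-mean) / math.factorial(n)

m1, m2 = 1.5, 2.5   # hypothetical mean measures Lambda_1(B), Lambda_2(B)

# The count of the superposed process on B is the sum of the two counts;
# its distribution should be Poisson with mean m1 + m2.
for n in range(10):
    conv = sum(poisson_pmf(k, m1) * poisson_pmf(n - k, m2) for k in range(n + 1))
    assert math.isclose(conv, poisson_pmf(n, m1 + m2), rel_tol=1e-12)
```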

### Clustering

The operation clustering is performed when each point ${\displaystyle \textstyle x}$ of some point process ${\displaystyle \textstyle {N}}$ is replaced by another (possibly different) point process. If the original process ${\displaystyle \textstyle {N}}$ is a Poisson point process, then the resulting process ${\displaystyle \textstyle {N}_{c}}$ is called a Poisson cluster point process.

### Random displacement

A mathematical model may require randomly moving points of a point process to other locations on the underlying mathematical space, which gives rise to a point process operation known as displacement [2] or translation.[32] The Poisson point process has been used to model, for example, the movement of plants between generations, owing to the displacement theorem,[2] which loosely says that the random independent displacement of points of a Poisson point process (on the same underlying space) forms another Poisson point process.

#### Displacement theorem

One version of the displacement theorem [2] entails first considering a Poisson point process ${\displaystyle \textstyle {N}}$ on ${\displaystyle \textstyle {\textbf {R}}^{d}}$ with intensity function ${\displaystyle \textstyle \lambda (x)}$. It is then assumed the points of ${\displaystyle \textstyle {N}}$ are randomly displaced somewhere else in ${\displaystyle \textstyle {\textbf {R}}^{d}}$ so that each point's displacement is independent and that the displacement of a point formerly at ${\displaystyle \textstyle x}$ is a random vector with a probability density ${\displaystyle \textstyle \rho (x,\cdot )}$.[e] Then the new point process ${\displaystyle \textstyle {N}_{D}}$ is also a Poisson point process with intensity function

${\displaystyle \lambda _{D}(y)=\int _{{\textbf {R}}^{d}}\lambda (x)\rho (x,y)dx,}$

which for the homogeneous case with a constant ${\displaystyle \textstyle \lambda >0}$ means

${\displaystyle \lambda _{D}(y)=\lambda .}$

In other words, randomly and independently displacing the points of a homogeneous Poisson point process yields another Poisson point process with the same constant intensity.

The displacement theorem can be extended such that the Poisson points are randomly displaced from one Euclidean space ${\displaystyle \textstyle {\textbf {R}}^{d}}$ to another Euclidean space ${\displaystyle \textstyle {\textbf {R}}^{d'}}$, where ${\displaystyle \textstyle d'\geq 1}$ is not necessarily equal to ${\displaystyle \textstyle d}$.[14]
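A minimal sketch of random displacement (the fixed point count and Gaussian displacement kernel are illustrative assumptions):

```python
import random

random.seed(5)

# Stand-in realization of a homogeneous Poisson process on the unit square
# (fixed count of 15 points for brevity; the count would itself be Poisson).
points = [(random.random(), random.random()) for _ in range(15)]

sigma = 0.1   # hypothetical displacement scale
# Displace each point independently by a Gaussian random vector; by the
# displacement theorem the result is again a Poisson point process.
displaced = [(x + random.gauss(0.0, sigma), y + random.gauss(0.0, sigma))
             for (x, y) in points]

# Displacement moves points; it never adds or removes them.
assert len(displaced) == len(points)
```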

### Mapping

Another property that is considered useful is the ability to map a Poisson point process from one underlying space to another space.[2]

#### Mapping theorem

If the mapping (or transformation) adheres to some conditions, then the resulting mapped (or transformed) collection of points also forms a Poisson point process, and this result is sometimes referred to as the Mapping theorem.[2][22][33] The theorem involves some Poisson point process with mean measure ${\displaystyle \textstyle \Lambda }$ on some underlying space. If the locations of the points are mapped (that is, the point process is transformed) according to some function to another underlying space, then the resulting point process is also a Poisson point process but with a different mean measure ${\displaystyle \textstyle \Lambda '}$.

More specifically, one can consider a (Borel measurable) function ${\displaystyle \textstyle f}$ that maps a point process ${\displaystyle \textstyle {N}}$ with intensity measure ${\displaystyle \textstyle \Lambda }$ from one space ${\displaystyle \textstyle S}$, to another space ${\displaystyle \textstyle T}$ in such a manner so that the new point process ${\displaystyle \textstyle {N}'}$ has the intensity measure:

${\displaystyle \Lambda '(B)=\Lambda (f^{-1}(B))}$

with no atoms, where ${\displaystyle \textstyle B}$ is a Borel set and ${\displaystyle \textstyle f^{-1}}$ denotes the inverse of the function ${\displaystyle \textstyle f}$. If ${\displaystyle \textstyle {N}}$ is a Poisson point process, then the new process ${\displaystyle \textstyle {N}'}$ is also a Poisson point process with the intensity measure ${\displaystyle \textstyle \Lambda '}$.
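The measure relation can be illustrated numerically: for a monotone map ${\displaystyle \textstyle f}$, counts transform through preimages, so the number of mapped points in ${\displaystyle \textstyle B}$ equals the number of original points in ${\displaystyle \textstyle f^{-1}(B)}$. A hypothetical sketch with ${\displaystyle \textstyle f(x)={\sqrt {x}}}$:

```python
import random

random.seed(9)

# Stand-in realization of a point process on [0, 4] (fixed count for brevity).
points = [random.uniform(0.0, 4.0) for _ in range(30)]

f = lambda x: x ** 0.5              # hypothetical monotone mapping f(x) = sqrt(x)
mapped = [f(x) for x in points]

# Counts transform through preimages: N'(B) = N(f^{-1}(B)).
B = (1.0, 1.5)                      # interval in the target space
preimage = (B[0] ** 2, B[1] ** 2)   # f^{-1}(B) = (1, 2.25)
n_mapped = sum(B[0] < y <= B[1] for y in mapped)
n_preimage = sum(preimage[0] < x <= preimage[1] for x in points)
assert n_mapped == n_preimage
```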

## Approximations with Poisson point processes

The tractability of the Poisson process means that it is sometimes convenient to approximate a non-Poisson point process with a Poisson one. The overall aim is to approximate both the number of points of some point process and the location of each point by a Poisson point process.[66] There are a number of methods that can be used to justify, informally or rigorously, approximating the occurrence of random events or phenomena with suitable Poisson point processes. The more rigorous methods involve deriving upper bounds on the probability metrics between the Poisson and non-Poisson point processes, while other methods can be justified by less formal heuristics.[67]

### Clumping heuristic

One method for approximating random events or phenomena with Poisson processes is called the clumping heuristic.[68] The general heuristic or principle involves using the Poisson point process (or Poisson distribution) to approximate events, which are considered rare or unlikely, of some stochastic process. In some cases these rare events are close to independent, hence a Poisson point process can be used. When the events are not independent but tend to occur in clusters or clumps, then if these clumps are suitably defined so that they are approximately independent of each other, the number of clumps occurring will be close to a Poisson random variable [67] and the locations of the clumps will be close to a Poisson process.[68]

### Stein's method

Stein's method, a rigorous mathematical technique originally developed for approximating random variables such as Gaussian and Poisson variables, has also been developed and applied to point processes. Stein's method can be used to derive upper bounds on probability metrics, which quantify how much two random mathematical objects differ stochastically, between the Poisson and other point processes.[66][69] Upper bounds on probability metrics such as total variation and Wasserstein distance have been derived.[66]

Researchers have applied Stein's method to Poisson point processes in a number of ways,[66] such as using Palm calculus.[35] Techniques based on Stein's method have been developed to factor into the upper bounds the effects of certain point process operations such as thinning and superposition.[70][71] Stein's method has also been used to derive upper bounds on metrics of Poisson and other processes such as the Cox point process, which is a Poisson process with a random intensity measure.[66]

## Convergence to a Poisson point process

In general, when an operation is applied to a general point process the resulting process is usually not a Poisson point process. For example, if a point process other than a Poisson one has its points randomly and independently displaced, the resulting process is not necessarily a Poisson point process. However, under certain mathematical conditions on both the original point process and the random displacement, it has been shown via limit theorems that if the points of a point process are repeatedly displaced in a random and independent manner, then the finite-dimensional distributions of the point process will converge (weakly) to those of a Poisson point process.[32]

Similar convergence results have been developed for thinning and superposition operations,[32] which show that such repeated operations on point processes can, under certain conditions, result in the process converging to a Poisson point process, provided a suitable rescaling of the intensity measure (otherwise the values of the intensity measure of the resulting point processes would approach zero or infinity). Such convergence work is directly related to the results known as the Palm–Khinchin[f] equations, which have their origins in the work of Conny Palm and Aleksandr Khinchin,[32] and helps explain why the Poisson process can often be used as a mathematical model of various random phenomena.
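The superposition result can be sketched numerically. In this illustrative example (the Bernoulli construction and all parameter values are choices for demonstration, not from the cited sources), many independent sparse point processes on the unit interval are superposed, where each copy contributes a single uniformly located point with small probability p; the count of superposed points in a subinterval then approaches a Poisson distribution as the number of copies grows with the total intensity held fixed.

```python
import numpy as np

rng = np.random.default_rng(3)

# Superpose n_copies independent sparse (Bernoulli) point processes on
# [0, 1]: each copy contributes a uniformly placed point with small
# probability p, so each copy lands a point in B = [0, 0.5]
# independently with probability p * |B|.
n_copies, p, n_trials = 1_000, 0.01, 100_000
counts = rng.binomial(n_copies, p * 0.5, size=n_trials)

# Expected count in B is n_copies * p * 0.5 = 5; for a Poisson limit
# the variance should be close to the mean.
print(counts.mean(), counts.var())
```

The near-equality of mean and variance reflects the classical "law of small numbers": a Binomial(n, q) count with n large and nq moderate is close to Poisson(nq).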

## Generalizations of Poisson point processes

The Poisson point process can be generalized by, for example, changing its intensity measure or defining it on more general mathematical spaces. These generalizations can be studied mathematically as well as used to model or represent physical phenomena.

### Poisson point processes on more general spaces

For mathematical models the Poisson point process is often defined in Euclidean space, but it has been generalized to more abstract spaces and plays a fundamental role in the study of random measures,[2] which requires an understanding of mathematical fields such as probability theory, measure theory, topology and functional analysis.[32]

In general, the concept of distance is of practical interest for applications, while topological structure is needed for Palm distributions, hence point processes are often defined on mathematical spaces equipped with metrics.[72] Moreover, the convergence of sequences requires the space to be complete, which has inspired point processes to be studied on specific complete metric spaces.[16][72] Furthermore, every realization of a point process can in general be regarded as a counting measure, which has motivated point processes being considered as random measures.[61] Using the techniques of random measures, the Poisson and other point processes have been defined and studied on locally compact second countable Hausdorff spaces.[73]

### Cox point process

A Poisson point process can be generalized by letting its intensity measure ${\displaystyle \textstyle \Lambda }$ itself be random and independent of the underlying Poisson process, which gives rise to the Cox process or doubly stochastic Poisson process, introduced by David Cox in 1955 under the latter name.[2] The intensity measure may be a realization of a random variable or a random field. For example, if the logarithm of the intensity measure is a Gaussian random field, then the resulting process is known as a log Gaussian Cox process.[74] More generally, the intensity measure is a realization of a non-negative locally finite random measure. Cox point processes exhibit a clustering of points, which can be shown mathematically to be stronger than that of Poisson point processes. The generality and tractability of Cox processes have resulted in their use as models in fields such as spatial statistics.[75]
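A minimal numerical sketch of a Cox process (the exponential mixing distribution and all parameter values here are illustrative assumptions, not from the sources): the intensity is drawn at random for each realization, and the point count is Poisson given that intensity. A log Gaussian Cox process would instead draw the log-intensity from a Gaussian random field.

```python
import numpy as np

rng = np.random.default_rng(0)

# Doubly stochastic sampling: first draw a random intensity for each
# realization, then draw a Poisson count conditional on it.
mean_intensity, area, n_samples = 5.0, 1.0, 100_000
lam = rng.exponential(mean_intensity, size=n_samples)  # random intensity
counts = rng.poisson(lam * area)                       # Poisson given intensity

# Cox processes are over-dispersed relative to a Poisson process:
# the variance of the counts exceeds their mean, reflecting the
# extra clustering of points.
print(counts.mean(), counts.var())
```

With this exponential mixing the counts are geometric-like and the variance (about 30 here) far exceeds the mean (about 5), whereas a Poisson process with the same mean would have variance equal to 5.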

### Marked Poisson point process

Each random point of a given point process can have a random mathematical object, known as a mark, assigned to it. These marks can be as diverse as integers, real numbers, lines, geometrical objects or other point processes.[17] The pair consisting of a point of the point process and its corresponding mark is called a marked point, and all the marked points form a marked point process.[12] It is often assumed that the random marks are independent of each other and identically distributed, which makes the process easier to work with, yet the mark of a point can still depend on the location of its corresponding point in the underlying (state) space.[2] If the underlying point process is a Poisson point process, then one obtains a marked Poisson point process.
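A marked Poisson point process can be sketched in a few lines. In this illustrative example the underlying process is a homogeneous Poisson process on the unit square, and the mark distribution (a uniform "size" in [0, 1]) is an assumption chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Homogeneous Poisson process on [0, 1]^2: the total count is Poisson,
# and given the count the locations are independent and uniform.
intensity = 50.0                              # expected points per unit area
n = rng.poisson(intensity)                    # N([0, 1]^2)
points = rng.uniform(0.0, 1.0, size=(n, 2))   # locations given n

# Attach an iid mark to each point, independent of the locations.
marks = rng.uniform(0.0, 1.0, size=n)

# Each marked point is a (location, mark) pair; together they form
# the marked point process.
marked_points = list(zip(points, marks))
print(len(marked_points))
```

Because the marks are iid and independent of the locations, this construction also illustrates the setting of the marking theorem below: the marked points form a Poisson process on the product space [0, 1]^2 × [0, 1].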

#### Marking theorem

If a general point process is defined on some mathematical space and the random marks are defined on another mathematical space, then the marked point process is defined on the Cartesian product of these two spaces. For a marked Poisson point process with independent and identically distributed marks, the Marking theorem [2][33] states that this marked point process is also a (non-marked) Poisson point process defined on the aforementioned Cartesian product of the mathematical spaces, which is not true for general point processes.

### Compound Poisson point process

The compound Poisson point process is formed by adding random values or weights to each point of a Poisson point process defined on some underlying state space, so the process is constructed from a marked Poisson point process, where the marks form a collection of independent and identically distributed non-negative random variables.[16] In other words, for each point of the original Poisson process there is an independent and identically distributed non-negative random variable, and the compound Poisson process is then formed from the sum of all the random variables corresponding to points of the Poisson process located in some region of the underlying mathematical space.

Consider a marked Poisson point process formed from a Poisson point process ${\displaystyle \textstyle N}$ (defined on, for example, ${\displaystyle \textstyle {\textbf {R}}^{d}}$) and a collection of independent and identically distributed non-negative marks ${\displaystyle \textstyle \{M_{i}\}}$ such that for each point ${\displaystyle \textstyle x_{i}}$ of the Poisson process ${\displaystyle \textstyle N}$ there is a non-negative random variable ${\displaystyle \textstyle M_{i}}$. The resulting compound Poisson process is then:

${\displaystyle C(B)=\sum _{i=1}^{N(B)}M_{i},}$

where ${\displaystyle \textstyle B\subset {\textbf {R}}^{d}}$ is a Borel measurable set. If the collection of random variables or marks ${\displaystyle \textstyle \{M_{i}\}}$ are non-negative integer-valued random variables, then the resulting process is called a compound Poisson counting process.[16][68]

For general random variables, if the compound Poisson point process is formed from a homogeneous Poisson point process defined on the real line, often representing time, then the resulting compound Poisson process is an example of a Lévy process.
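The random sum above can be simulated directly: draw the Poisson number of points in a region, then add up that many iid marks. In this sketch the exponential mark distribution and all parameter values are illustrative choices, not from the sources.

```python
import numpy as np

rng = np.random.default_rng(2)

def compound_poisson(intensity, region_measure, mark_sampler, n_trials):
    """Sample C(B) = sum_{i=1}^{N(B)} M_i: draw the Poisson number of
    points N(B) in the region B, then sum that many iid marks."""
    counts = rng.poisson(intensity * region_measure, size=n_trials)
    return np.array([mark_sampler(n).sum() for n in counts])

# Marks M_i: iid non-negative random variables; an exponential
# distribution with mean 2 is an illustrative choice.
samples = compound_poisson(
    intensity=10.0, region_measure=1.0,
    mark_sampler=lambda n: rng.exponential(2.0, size=n), n_trials=20_000,
)

# By Wald's identity, E[C(B)] = intensity * |B| * E[M] = 10 * 2 = 20.
print(samples.mean())
```

Setting the mark sampler to draw non-negative integers instead would give a compound Poisson counting process, as described above.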

## Notes

1. ^ These two properties are not logically independent because complete independence requires the Poisson distribution, but not necessarily the converse. It has also been a subject of research whether a Poisson point process can be defined with only one of these properties; see Section 2.4.1, page 35, of Stoyan, Kendall, Mecke [1] or Section 1.3 of Kingman.[2]
2. ^ For example, it is possible for an event not happening in the queueing theory sense to be an event in the probability theory sense.
3. ^ Instead of ${\displaystyle \textstyle \lambda (x)}$ and ${\displaystyle \textstyle dx}$, one could write, for example, in (two-dimensional) polar coordinates ${\displaystyle \textstyle \lambda (r,\theta )}$ and ${\displaystyle \textstyle rdrd\theta }$ , where ${\displaystyle \textstyle r}$ and ${\displaystyle \textstyle \theta }$ denote the radial and angular coordinates respectively, and so ${\displaystyle \textstyle dx}$ would be an area element in this example.
4. ^ This set ${\displaystyle \textstyle A}$ is formed by a finite number of unions, whereas a Borel set is formed by a countable number of set operations.[16]
5. ^ Kingman [2] calls this a probability density, but in other resources this is called a probability kernel,[14] which is an object used in other areas of probability such as Markov chains.[65]
6. ^ Also spelt Palm–Khintchine in, for example, Point Processes by Cox and Isham [39]

## References

### General

#### Books

• Cox, D. R.; Isham, V. I. (1980). Point Processes. Chapman & Hall. ISBN 0-412-21910-7.
• Daley, Daryl J.; Vere-Jones, David (2003). An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods. Springer. ISBN 1475781091.
• Daley, Daryl J.; Vere-Jones, David (2007). An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure. Springer. ISBN 0387213376.
• Kingman, John Frank (1992). Poisson processes. Claredon Press. ISBN 978-0198536932.
• Moller, Jesper; Waagepetersen, Rasmus P. (2003). Statistical Inference and Simulation for Spatial Point Processes. CRC Press. ISBN 1584882654.
• Ross, S. M. (1996). Stochastic Processes. Wiley. ISBN 978-0-471-12062-9.
• Snyder, D. L.; Miller, M. I. (1991). Random Point Processes in Time and Space. Springer-Verlag. ISBN 0-387-97577-2.
• Stoyan, Dietrich; Kendall, Wilfred S.; Mecke, Joseph (1995). Stochastic geometry and its applications. Wiley. ISBN 0471950998.
• Streit, Roy L. (2010). Poisson Point Processes: Imaging, Tracking, and Sensing. Springer Science & Business Media. ISBN 1441969225.
• Tijms, Henk C. (2003). A First Course in Stochastic Models. Wiley. ISBN 0471498807.

#### Articles

• Stirzaker, David (2000). "Advice to hedgehogs, or, constants can vary". The Mathematical Gazette.
• Guttorp, Peter; Thorarinsdottir, Thordis L. (2012). "What happened to discrete chaos, the Quenouille process, and the sharp Markov property? Some history of stochastic point processes". International Statistical Review.

### Specific

1. D. Stoyan, W. S. Kendall, and J. Mecke. Stochastic geometry and its applications, volume 2. Wiley, 1995.
2. J. F. C. Kingman. Poisson processes, volume 3. Oxford university press, 1992.
3. ^ G. J. Babu and E. D. Feigelson. Spatial point processes in astronomy. Journal of statistical planning and inference, 50(3):311-326, 1996.
4. ^ H. G. Othmer, S. R. Dunbar, and W. Alt. Models of dispersal in biological systems. Journal of mathematical biology, 26(3):263-298, 1988.
5. ^ a b H. Thompson. Spatial point processes, with applications to ecology. Biometrika, 42(1/2):102-115, 1955.
6. ^ C. B. Connor and B. E. Hill. Three nonhomogeneous Poisson models for the probability of basaltic volcanism: application to the Yucca Mountain region, Nevada. Journal of Geophysical Research: Solid Earth (1978-2012), 100(B6):10107-10125, 1995.
7. ^ J. D. Scargle. Studies in astronomical time series analysis. V. Bayesian blocks, a new method to analyze structure in photon counting data. The Astrophysical Journal, 504(1):405, 1998.
8. ^ M. Bertero, P. Boccacci, G. Desidera, and G. Vicidomini. Image deblurring with Poisson data: from cells to galaxies. Inverse Problems, 25(12):123006, 2009.
9. ^ a b F. Baccelli and B. Błaszczyszyn. Stochastic Geometry and Wireless Networks, Volume II - Applications, volume 4, No 1-2 of Foundations and Trends in Networking. NoW Publishers, 2009.
10. ^ M. Haenggi, J. Andrews, F. Baccelli, O. Dousse, and M. Franceschetti. Stochastic geometry and random graphs for the analysis and design of wireless networks. IEEE JSAC, 27(7):1029-1046, September 2009.
11. ^ a b L. Kleinrock. Queueing Systems, Volume 1: Theory. Wiley-Interscience, 1975.
12. A. Baddeley, I. Bárány, and R. Schneider. Spatial point processes and their applications. Stochastic Geometry: Lectures given at the CIME Summer School held in Martina Franca, Italy, September 13–18, 2004, pages 1-75, 2007.
13. ^ a b c J. G. Andrews, R. K. Ganti, M. Haenggi, N. Jindal, and S. Weber. A primer on spatial modeling and analysis in wireless networks. Communications Magazine, IEEE, 48(11):156-163, 2010.
14. F. Baccelli and B. Błaszczyszyn. Stochastic Geometry and Wireless Networks, Volume I - Theory, volume 3, No 3-4 of Foundations and Trends in Networking. NoW Publishers, 2009.
15. ^ a b c d M. Haenggi. Stochastic geometry for wireless networks. Cambridge University Press, 2012.
16. D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes: Volume I: Elementary Theory and Methods, Springer, New York, second edition, 2003.
17. J. Møller and R. P. Waagepetersen. Statistical inference and simulation for spatial point processes. CRC Press, 2003.
18. ^ R. Meester and R. Roy. Continuum percolation, volume 119 of Cambridge Tracts in Mathematics, 1996.
19. D. Stirzaker. Advice to hedgehogs, or, constants can vary. The Mathematical Gazette, 84(500):197-210, 2000.
20. P. Guttorp and T. L. Thorarinsdottir. What happened to discrete chaos, the Quenouille process, and the sharp Markov property? Some history of stochastic point processes. International Statistical Review, 80(2):253-268, 2012.
21. ^ a b I. Good. Some statistical applications of Poisson's work. Statistical Science, pages 157-170, 1986.
22. G. Grimmett and D. Stirzaker. Probability and random processes. Oxford university press, 2001.
23. ^ S. M. Stigler. Poisson on the Poisson distribution. Statistics & Probability Letters, 1(1):33-35, 1982.
24. ^ M. Quine and E. Seneta. Bortkiewicz's data and the law of small numbers. International Statistical Review/Revue Internationale de Statistique, pages 173-181, 1987.
25. ^ J. Illian, A. Penttinen, H. Stoyan, and D. Stoyan. Statistical analysis and modelling of spatial point patterns, volume 70. John Wiley & Sons, 2008.
26. ^ J. Kingman. The first Erlang century—and the next. Queueing Systems, 63(1-4):3-12, 2009.
27. ^ a b R. B. Haugen. The life and work of Conny Palm. some personal comments and experiences. In VTT SYMPOSIUM, volume 154, pages 207-207. VALTION TEKNILLINEN TUTKIMUSKESKUS, 1995.
28. ^ J. Grandell. Mixed Poisson processes, volume 77. CRC Press, 1997.
29. H. C. Tijms. A first course in stochastic models. Wiley. com, 2003.
30. S. Ross. Stochastic processes. Wiley series in probability and statistics: Probability and statistics. Wiley, 1996.
31. ^ a b A. Baddeley. A crash course in stochastic geometry. Stochastic Geometry: Likelihood and Computation Eds OE Barndorff-Nielsen, WS Kendall, HNN van Lieshout (London: Chapman and Hall) pp, pages 1-35, 1999.
32. D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure, Springer, New York, second edition, 2008.
33. R. L. Streit. Poisson Point Processes: Imaging, Tracking, and Sensing. Springer Science & Business Media, 2010.
34. ^ a b E. F. Harding and R. Davidson. Stochastic geometry: a tribute to the memory of Rollo Davidson. Wiley, 1974.
35. ^ a b c L. H. Chen and A. Xia. Stein's method, Palm theory and Poisson process approximation. Annals of probability, pages 2545-2569, 2004.
36. W. Feller. An introduction to probability theory and its applications, vol. II. Wiley, 1974.
37. ^ Some Poisson models, Vose Software, retrieved 2016-01-18
38. ^ Helske, Jouni (2015-06-25), KFAS: Exponential family state space models in R (PDF), Comprehensive R Archive Network, retrieved 2016-01-18
39. D. R. Cox and V. Isham. Point processes, volume 12. CRC Press, 1980.
40. ^ A. Papoulis and S. U. Pillai. Probability, random variables, and stochastic processes. Tata McGraw-Hill Education, 2002.
41. ^ V. Ramaswami. Poisson process and its generalizations. Wiley Encyclopedia of Operations Research and Management Science, 2010.
42. ^ a b c E. Merzbach and D. Nualart. A characterization of the spatial Poisson process and changing time. The Annals of Probability, 14(4):1380-1390, 1986.
43. ^ S. Nakamoto. Bitcoin: A peer-to-peer electronic cash system. Consulted, 1(2012):28, 2008.
44. ^ D. Snyder and M. Miller. Random point processes in time and space, 2nd edition. Springer-Verlag, New York, NY, 1991.
45. ^ a b A. B. Lawson. A deviance residual for heterogeneous spatial Poisson processes. Biometrics, pages 889-897, 1993.
46. ^ C.-H. Lee, C.-Y. Shih, and Y.-S. Chen. Stochastic geometry based models for modeling cellular networks in urban areas. Wireless Networks, pages 1–10, 2012.
47. ^ L. Citi; D. Ba; E.N. Brown & R. Barbieri (2014). "Likelihood methods for point processes with refractoriness". Neural Computation. doi:10.1162/NECO_a_00548.
48. ^ A. Heuer, C. Mueller, and O. Rubner. Soccer: Is scoring goals a predictable Poissonian process? EPL (Europhysics Letters), 89(3):38007, 2010.
49. ^ J. Y. Hwang, W. Kuo, and C. Ha. Modeling of integrated circuit yield using a spatial nonhomogeneous Poisson process. Semiconductor Manufacturing, IEEE Transactions on, 24(3):377-384, 2011.
50. ^ a b M. Krkošek, M. A. Lewis, and J. P. Volpe. Transmission dynamics of parasitic sea lice from farm to wild salmon. Proceedings of the Royal Society B: Biological Sciences, 272(1564):689-696, 2005.
51. ^ P. A. Lewis and G. S. Shedler. Simulation of nonhomogeneous Poisson processes by thinning. Naval Research Logistics Quarterly, 26(3):403-413, 1979.
52. ^ F. Roberts. Nearest neighbours in a Poisson ensemble. Biometrika, 56(2):401-406, 1969.
53. ^ G. Mikhailov and T. Averina. Statistical modeling of inhomogeneous random functions on the basis of Poisson point fields. In Doklady Mathematics, volume 82, pages 701-704. Springer, 2010.
54. ^ I. Molchanov. Theory of random sets. Springer Science & Business Media, 2006.
55. ^ a b K. Sato. Lévy processes and infinite divisibility, 1999.
56. ^ V. Mandrekar and B. Rüdiger. Stochastic Integration in Banach Spaces. Springer, 2015.
57. ^ D. Applebaum. Lévy processes and stochastic calculus. Cambridge university press, 2009.
58. ^ M. Baudin. Multidimensional point processes and random closed sets. Journal of applied probability, pages 173-178, 1984.
59. ^ G. Shen, M. Yu, X.-S. Hu, X. Mi, H. Ren, I.-F. Sun, and K. Ma. Species-area relationships explained by the joint effects of dispersal limitation and habitat heterogeneity. Ecology, 90(11):3033-3041, 2009.
60. ^ H.-B. Pan, W. Zhang, and M.-Y. Cong. Detection algorithm for space dim moving object. In Fundamental Problems of Optoelectronics and Microelectronics III, pages 65951H-65951H. International Society for Optics and Photonics, 2007.
61. ^ a b J. Grandell. Point processes and random measures. Advances in Applied Probability, pages 502-526, 1977.
62. ^ A. Karr. Probability. Springer Texts in Statistics Series. Springer-Verlag, 1993.
63. ^ D. Bertsekas and J. Tsitsiklis. Introduction to probability, ser. Athena Scientific optimization and computation series. Athena Scientific, 2008.
64. ^ J. F. Hayes. Modeling and analysis of computer communications networks. Perseus Publishing, 1984.
65. ^ O. Kallenberg. Foundations of modern probability. springer, 2002.
66. L. H. Chen, A. Röllin, et al. Approximating dependent rare events. Bernoulli, 19(4):1243-1267, 2013.
67. ^ a b R. Arratia, S. Tavaré, et al. Review: D. Aldous, Probability Approximations via the Poisson Clumping Heuristic; A. D. Barbour, L. Holst, S. Janson, Poisson Approximation. The Annals of Probability, 21(4):2269-2279, 1993.
68. ^ a b c D. Aldous. Poisson Clumping Heuristic. Wiley Online Library, 1989.
69. ^ A. D. Barbour and T. C. Brown. Stein's method and point process approximation. Stochastic Processes and their Applications, 43(1):9-31, 1992.
70. ^ D. Schuhmacher. Distance estimates for dependent superpositions of point processes. Stochastic processes and their applications, 115(11):1819-1837, 2005.
71. ^ D. Schuhmacher. Distance estimates for Poisson process approximations of dependent thinnings. Electronic Journal of Probability, 10:165-201, 2005.
72. ^ a b A. E. Gelfand, P. Diggle, P. Guttorp, and M. Fuentes. Handbook of spatial statistics, Chapter 9. CRC press, 2010.
73. ^ O. Kallenberg. Random measures. Academic Pr, 1983.
74. ^ J. Møller, A. R. Syversveen, and R. P. Waagepetersen. Log Gaussian Cox Processes. Scandinavian journal of statistics, 25(3):451-482, 1998.
75. ^ J. Møller and R. P. Waagepetersen. Modern statistics for spatial point processes. Scandinavian Journal of Statistics, 34(4):643-684, 2007.