# Stochastic geometry models of wireless networks

In mathematics and telecommunications, stochastic geometry models of wireless networks refer to mathematical models based on stochastic geometry that are designed to represent aspects of wireless networks. The related research consists of analyzing these models with the aim of better understanding wireless communication networks in order to predict and control various network performance metrics. The models require using techniques from stochastic geometry and related fields including point processes, spatial statistics, geometric probability, percolation theory, as well as methods from more general mathematical disciplines such as geometry, probability theory, stochastic processes, queueing theory, information theory, and Fourier analysis.[1][2][3][4]

In the early 1960s a stochastic geometry model [5] was developed to study wireless networks. This model is considered to be pioneering and the origin of continuum percolation.[6] Network models based on geometric probability were later proposed and used in the late 1970s [7] and continued throughout the 1980s [8][9] for examining packet radio networks. Later their use increased significantly for studying a number of wireless network technologies including mobile ad hoc networks, sensor networks, vehicular ad hoc networks, cognitive radio networks and several types of cellular networks, such as heterogeneous cellular networks.[10][11][12] Key performance and quality of service quantities are often based on concepts from information theory such as the signal-to-interference-plus-noise ratio, which forms the mathematical basis for defining network connectivity and coverage.[4][11]

The principal idea underlying the research of these stochastic geometry models, also known as random spatial models,[10] is that it is best to assume that the locations of nodes or the network structure and the aforementioned quantities are random in nature due to the size and unpredictability of users in wireless networks. The use of stochastic geometry can then allow for the derivation of closed-form or semi-closed-form expressions for these quantities without resorting to simulation methods or (possibly intractable or inaccurate) deterministic models.[10]

## Overview

The discipline of stochastic geometry entails the mathematical study of random objects defined on some (often Euclidean) space. In the context of wireless networks, the random objects are usually simple points (which may represent the locations of network nodes such as receivers and transmitters) or shapes (for example, the coverage area of a transmitter) and the Euclidean space is either 3-dimensional, or more often, the (2-dimensional) plane, which represents a geographical region. In wireless networks (for example, cellular networks) the underlying geometry (the relative locations of nodes) plays a fundamental role due to the interference of other transmitters, whereas in wired networks (for example, the Internet) the underlying geometry is less important.

### Channels in a wireless network

Three channel types or connection situations in wireless networks.

A wireless network can be seen as a collection of (information theoretic) channels sharing space and some common frequency band. Each channel consists of a set of transmitters trying to send data to a set of receivers. The simplest channel is the point-to-point channel which involves a single transmitter aiming at sending data to a single receiver. The broadcast channel, in information theory terminology,[13] is the one-to-many situation with a single transmitter aiming at sending different data to different receivers and it arises in, for example, the downlink of a cellular network.[14] The multiple access channel is the converse, with several transmitters aiming at sending different data to a single receiver.[13] This many-to-one situation arises in, for example, the uplink of cellular networks.[14] Other channel types exist such as the many-to-many situation. These (information theoretic) channels are also referred to as network links, many of which will be simultaneously active at any given time.

### Geometrical objects of interest in wireless networks

There are a number of examples of geometric objects that can be of interest in wireless networks. For example, consider a collection of points in the Euclidean plane. For each point, place in the plane a disk with its center located at the point. The disks are allowed to overlap with each other and the radius of each disk is random and (stochastically) independent of all the other radii. The mathematical object consisting of the union of all these disks is known as a Boolean (random disk) model [4][15][16] and may represent, for example, the sensing region of a sensor network. If the radii are not random but equal to some common positive constant, then the resulting model is known as the Gilbert disk (Boolean) model.[17]

A Boolean model as a coverage model in a wireless network.
Simulation of four Poisson–Boolean (constant-radius or Gilbert disk) models as the density increases, with the largest clusters in red.
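As an illustration, the Boolean (random disk) model can be simulated directly: scatter a Poisson number of points uniformly in a window, attach an independent random radius to each, and test whether a given location falls inside at least one disk. The sketch below uses only the standard library; the density, window size, and exponentially distributed radii are illustrative choices, not taken from any particular study.

```python
import math
import random

random.seed(1)

# Illustrative parameters (not from any particular study).
density = 0.5       # nodes per unit area
side = 10.0         # the window is a side x side square
mean_radius = 1.0   # mean of the exponentially distributed disk radii

def poisson_sample(mean):
    """Sample a Poisson random variable (Knuth's method, fine for modest means)."""
    threshold = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# A Poisson number of nodes, placed uniformly, each with an independent radius.
n = poisson_sample(density * side * side)
nodes = [(random.uniform(0, side), random.uniform(0, side)) for _ in range(n)]
radii = [random.expovariate(1.0 / mean_radius) for _ in range(n)]

def covered(px, py):
    """A location is covered if it lies inside at least one disk."""
    return any(math.hypot(px - x, py - y) <= r
               for (x, y), r in zip(nodes, radii))

print(covered(side / 2, side / 2))
```

Replacing the random radii by a common constant turns the same construction into the Gilbert disk model.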

Instead of placing disks on the plane, one may assign a disjoint (or non-overlapping) subregion to each node. Then the plane is partitioned into a collection of disjoint subregions. For example, each subregion may consist of the collection of all the locations of this plane that are closer to some point of the underlying point pattern than any other point of the point pattern. This mathematical structure is known as a Voronoi tessellation and may represent, for example, the association cells in a cellular network where users associate with the closest base station.
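Nearest-point association can be sketched in a few lines: assigning every user to its closest station implicitly partitions the plane into the Voronoi cells of the stations. The base-station coordinates below are hypothetical, for illustration only.

```python
import math

# Hypothetical base-station locations, for illustration only.
stations = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]

def serving_station(ux, uy):
    """Index of the nearest base station; the user at (ux, uy) lies in
    that station's Voronoi cell under closest-station association."""
    return min(range(len(stations)),
               key=lambda i: math.hypot(ux - stations[i][0],
                                        uy - stations[i][1]))

print(serving_station(0.5, 0.2))  # nearest to the station at the origin
```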

Instead of placing a disk or a Voronoi cell on a point, one could place a cell defined from the information theoretic channels described above. For instance, the point-to-point channel cell of a point was defined [18] as the collection of all the locations of the plane where a receiver could sustain a point-to-point channel with a certain quality from a transmitter located at this point. Given that the other point is also an active transmitter, this is a point-to-point channel in its own right.

In each case, the fact that the underlying point pattern is random (for example, a point process) or deterministic (for example, a lattice of points) or some combination of both, will influence the nature of the Boolean model, the Voronoi tessellation, and other geometrical structures such as the point-to-point channel cells constructed from it.

## Key performance quantities

In wired communication, the field of information theory (in particular, the Shannon-Hartley theorem) motivates the need for studying the signal-to-noise ratio (SNR). In wireless communication, when a collection of channels is active at the same time, the interference from the other channels is considered as noise, which motivates the need for the quantity known as the signal-to-interference-plus-noise ratio (SINR). For example, if we have a collection of point-to-point channels, the SINR of the channel of a particular transmitter–receiver pair is defined as:

${\displaystyle \mathrm {SINR} ={\frac {S}{I+N}}}$

where S is the power, at the receiver, of the incoming signal from said transmitter, I is the combined power of all the other (interfering) transmitters in the network, and N is the power of some thermal noise term. The SINR reduces to the SNR when there is no interference (i.e. I = 0). In networks where the noise is negligible, also known as "interference limited" networks, one sets N = 0, which gives the signal-to-interference ratio (SIR).
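The definition translates directly into code. The helper below is a minimal sketch in which all powers are expressed in the same linear units; the numeric values are illustrative.

```python
def sinr(signal_power, interfering_powers, noise_power):
    """SINR = S / (I + N), where I is the total interfering power.
    With no interferers this reduces to the SNR; with noise_power = 0
    it reduces to the SIR of an interference-limited network."""
    return signal_power / (sum(interfering_powers) + noise_power)

print(sinr(1.0, [0.1, 0.1], 0.05))  # 1.0 / 0.25 = 4.0
```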

### Coverage

A common goal of stochastic geometry wireless network models is to derive expressions for the SINR or for functions of the SINR which determine coverage (or outage) and connectivity. For example, the concept of the outage probability pout, which is informally the probability of not being able to successfully send a signal on a channel, is made more precise in the point-to-point case by defining it as the probability that the SINR of a channel is less than or equal to some network-dependent threshold.[19] The coverage probability pc is then the probability that the SINR is larger than the SINR threshold. In short, given an SINR threshold t, the outage and coverage probabilities are given by

${\displaystyle p_{\mathrm {out} }=P(\mathrm {SINR} \leq t)}$

and

${\displaystyle p_{\mathrm {c} }=P(\mathrm {SINR} >t)=1-p_{\mathrm {out} }}$.
SINR cells of a wireless network model expand as the transmitter powers increase.
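When closed-form expressions are unavailable, both probabilities can be estimated by Monte Carlo simulation. The sketch below assumes unit-mean exponentially distributed signal and interference powers (as under Rayleigh fading); the threshold, interferer count, and noise level are illustrative choices.

```python
import random

random.seed(0)

def coverage_probability(t, n_interferers, noise, trials=20000):
    """Monte Carlo estimate of p_c = P(SINR > t); the outage probability
    is p_out = 1 - p_c. Signal and interference powers are drawn as
    unit-mean exponentials, as under Rayleigh fading (an assumption of
    this sketch); all parameter values are illustrative."""
    hits = 0
    for _ in range(trials):
        s = random.expovariate(1.0)
        i = sum(random.expovariate(1.0) for _ in range(n_interferers))
        if s / (i + noise) > t:
            hits += 1
    return hits / trials

p_c = coverage_probability(t=1.0, n_interferers=2, noise=0.1)
print(p_c, 1.0 - p_c)  # coverage and outage estimates
```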

### Channel capacity

One aim of the stochastic geometry models is to derive the probability laws of the Shannon channel capacity or rate of a typical channel when taking into account the interference created by all other channels.

In the point-to-point channel case, the interference created by other transmitters is considered as noise, and when this noise is Gaussian, the law of the typical Shannon channel capacity is then determined by that of the SINR through Shannon's formula (in bits per second):

${\displaystyle C=B\log _{2}(1+\mathrm {SINR} )}$

where B is the bandwidth of the channel in hertz. In other words, there is a direct relationship between the coverage or outage probability and the Shannon channel capacity. The problem of determining the probability distribution of C under such a random setting has been studied in several types of wireless network architectures or types.
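Shannon's formula is a direct transcription; the bandwidth and SINR values below are illustrative.

```python
import math

def shannon_capacity(bandwidth_hz, sinr):
    """C = B * log2(1 + SINR), in bits per second."""
    return bandwidth_hz * math.log2(1.0 + sinr)

print(shannon_capacity(1e6, 3.0))  # a 1 MHz channel at SINR 3 carries 2 Mbit/s
```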

## Early history

In general, the use of methods from the theories of probability and stochastic processes in communication systems has a long and interwoven history stretching back over a century to the pioneering teletraffic work of Agner Erlang.[20] In the setting of stochastic geometry models, Edgar Gilbert [5] in the 1960s proposed a mathematical model for wireless networks, now known as a Gilbert disk model,[17] that gave rise to the field of continuum percolation theory, which in turn is a generalization of discrete percolation.[6] Starting in the late 1970s, Leonard Kleinrock and others used wireless network models based on Poisson processes to study packet radio networks.[7][8][9] This work continued into the 1990s, when it crossed paths with the work on shot noise.

### Shot noise

The general theory and techniques of stochastic geometry and, in particular, point processes have often been motivated by the understanding of a type of noise that arises in electronic systems known as shot noise. Indeed, given some mathematical function of a point process, a standard method for finding the average (or expectation) of the sum of these functions is Campbell's formula [4][21] or theorem,[22] which has its origins in the pioneering work by Norman R. Campbell on shot noise over a century ago.[23][24] Much later in the 1960s Gilbert alongside Henry Pollak studied the shot noise process [25] formed from a sum of response functions of a Poisson process and identically distributed random variables. The shot noise process inspired more formal mathematical work in the field of point processes,[26][27] often involving the use of characteristic functions, and would later be used for models of signal interference from other nodes in the network.

### Network interference as shot noise

Around the early 1990s, shot noise based on a Poisson process and a power-law response function was studied and observed to have a stable distribution.[28] Independently, researchers [19][29] successfully developed Fourier and Laplace transform techniques for the interference experienced by a user in a wireless network in which the locations of the (interfering) nodes or transmitters are positioned according to a Poisson process. It was independently shown again that Poisson shot noise, now as a model for interference, has a stable distribution [29] by the use of characteristic functions or, equivalently, Laplace transforms, which are often easier to work with than the corresponding probability distributions.[1][2][30]

Moreover, the assumption of the received (i.e. useful) signal power being exponentially distributed (for example, due to Rayleigh fading) and of Poisson shot noise (for which the Laplace transform is known) allows for an explicit closed-form expression for the coverage probability based on the SINR.[19][31] This observation helps to explain why the Rayleigh fading assumption is frequently made when constructing stochastic geometry models.[1][2][4]

### SINR coverage and connectivity models

Later in the early 2000s researchers started examining the properties of the regions under SINR coverage in the framework of stochastic geometry and, in particular, coverage processes.[18] Connectivity in terms of the SINR was studied using techniques from continuum percolation theory. More specifically, the early results of Gilbert were generalized to the setting of the SINR case.[32][33]

## Model fundamentals

A wireless network consists of nodes (each of which is a transmitter, receiver or both, depending on the system) that produce, relay or consume data within the network. For example, base stations and users in a cellular phone network or sensor nodes in a sensor network. Before developing stochastic geometry wireless models, models are required for mathematically representing the signal propagation and the node positioning. The propagation model captures how signals propagate from transmitters to receivers. The node location or positioning model (idealizes and) represents the positions of the nodes as a point process. The choice of these models depends on the nature of the wireless network and its environment. The network type depends on such factors as the specific architecture (for instance cellular) and the channel or medium access control (MAC) protocol, which controls the channels and, hence, the communicating structures of the network. In particular, to prevent the collision of transmissions in the network, the MAC protocol dictates, based on certain rules, when transmitter-receiver pairs can access the network both in time and space, which also affects the active node positioning model.

### Propagation model

Suitable and manageable models are needed for the propagation of electromagnetic signals (or waves) through various media, such as air, taking into account multipath propagation (due to reflection, refraction, diffraction and dispersion) caused by signals colliding with obstacles such as buildings. The propagation model is a building block of the stochastic geometry wireless network model. A common approach is to consider propagation models with two separate parts consisting of the random and deterministic (or non-random) components of signal propagation.

The deterministic component is usually represented by some path-loss or attenuation function that uses the distance propagated by the signal (from its source) for modeling the power decay of electromagnetic signals. The distance-dependent path-loss function may be a simple power-law function (for example, the Hata model), a fast-decaying exponential function, some combination of both, or another decreasing function. Owing to its tractability, models have often incorporated the power-law function

${\displaystyle \ell (|x-y|)=|x-y|^{-\alpha }}$,

where the path-loss exponent α > 2, and |x − y| denotes the distance between point y and the signal source at point x.
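A minimal sketch of a power-law attenuation function, written in the decreasing convention ℓ(r) = r^(−α) so that the received power falls with distance; the default exponent is only an illustrative choice.

```python
def path_loss(distance, alpha=4.0):
    """Power-law attenuation l(r) = r ** (-alpha); the received power is
    the transmitted power scaled by this factor. The default alpha = 4.0
    is an illustrative value, subject to alpha > 2 in the plane."""
    return distance ** (-alpha)

# Doubling the distance divides the received power by 2 ** alpha.
print(path_loss(2.0) / path_loss(4.0))  # 16.0 when alpha = 4
```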

The random component seeks to capture certain types of signal fading associated with absorption and reflections by obstacles. The fading models in use include Rayleigh (implying exponential random variables for the power), log-normal, Rice, and Nakagami distributions.
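Under the Rayleigh fading assumption the received power is exponentially distributed, which is easy to check empirically; the unit mean power below is an illustrative parameter.

```python
import random

random.seed(2)

def rayleigh_faded_power(mean_power=1.0):
    """Received power under Rayleigh fading: exponentially distributed
    with the given mean (an assumed, illustrative parameter)."""
    return random.expovariate(1.0 / mean_power)

samples = [rayleigh_faded_power() for _ in range(100000)]
print(sum(samples) / len(samples))  # close to the mean power of 1.0
```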

Both the deterministic and random components of signal propagation are usually considered detrimental to the overall performance of a wireless network.

### Node positioning model

An important task in stochastic geometry network models is choosing a mathematical model for the location of the network nodes. The standard assumption is that the nodes are represented by (idealized) points in some space (often Euclidean Rn, and even more often in the plane R2), which means they form a stochastic or random structure known as a (spatial) point process.[10]

Locations of cellular or mobile phone base stations resemble a Poisson point process in the Australian city of Sydney.[34]

#### Poisson process

A number of point processes have been suggested to model the positioning of wireless network nodes. Among these, the most frequently used is the Poisson process, which gives a Poisson network model.[10] The Poisson process in general is commonly used as a mathematical model across numerous disciplines due to its highly tractable and well-studied nature.[15][22] It is often assumed that the Poisson process is homogeneous (implying it is a stationary process) with some constant node density λ. For a Poisson process in the plane, this implies that the probability of having n points or nodes in a bounded region B is given by

${\displaystyle P(n)={\frac {(\lambda |B|)^{n}}{n!}}e^{-\lambda |B|},}$

where |B| is the area of B and n! denotes n factorial. The above equation quickly extends to the R3 case by replacing the area term with a volume term.
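The formula can be checked against simulated Poisson counts; the density and area below are illustrative values.

```python
import math
import random

random.seed(3)

lam, area = 0.2, 10.0   # illustrative density and region area
mean = lam * area       # expected number of points in the region

def p_n(n):
    """P(n points in B) = (lam |B|)**n / n! * exp(-lam |B|)."""
    return mean ** n / math.factorial(n) * math.exp(-mean)

def poisson_sample(m):
    """Sample a Poisson count (Knuth's method)."""
    threshold, k, p = math.exp(-m), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

counts = [poisson_sample(mean) for _ in range(50000)]
freq2 = counts.count(2) / len(counts)
print(p_n(2), freq2)  # theoretical vs. empirical probability of n = 2
```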

The mathematical tractability or ease of working with Poisson models is mostly due to the 'complete independence' property, which says that the numbers of points in two (or more) disjoint (or non-overlapping) bounded regions are independent Poisson random variables. This important property characterizes the Poisson process and is often used as its definition.[22]

The complete independence or "randomness"[35] property of Poisson processes leads to some useful characteristics and results of point process operations such as the superposition property: the superposition of ${\displaystyle n}$ Poisson processes with densities λ1 to λn is another Poisson process with density

${\displaystyle \lambda =\sum _{i=1}^{n}\lambda _{i}.}$
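The superposition property is easy to verify numerically for point counts: summing two independent Poisson counts with means λ1 and λ2 gives a Poisson count with mean λ1 + λ2, and for a Poisson distribution the mean and variance coincide. The densities below are illustrative.

```python
import math
import random

random.seed(7)

def poisson_sample(m):
    """Sample a Poisson count (Knuth's method)."""
    threshold, k, p = math.exp(-m), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Superposing two independent Poisson processes with densities l1 and l2
# yields point counts that are Poisson with density l1 + l2.
l1, l2, trials = 1.5, 2.5, 50000
combined = [poisson_sample(l1) + poisson_sample(l2) for _ in range(trials)]
mean = sum(combined) / trials
var = sum((c - mean) ** 2 for c in combined) / trials
print(mean, var)  # both close to l1 + l2 = 4.0
```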

Furthermore, randomly thinning a Poisson process (with density λ), where each point is independently removed with some probability p (or kept with probability 1 − p), forms another Poisson process of removed points (with density pλ), while the kept points also form a Poisson process (with density (1 − p)λ) that is independent of the Poisson process of removed points.[15][22]
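The thinning property can likewise be checked on point counts: removing each point independently with probability p splits a Poisson count with mean λ into Poisson counts with means pλ (removed) and (1 − p)λ (kept). The parameter values are illustrative.

```python
import math
import random

random.seed(4)

def poisson_sample(m):
    """Sample a Poisson count (Knuth's method)."""
    threshold, k, p = math.exp(-m), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

lam, p_remove, trials = 100.0, 0.3, 200   # illustrative values
kept_counts, removed_counts = [], []
for _ in range(trials):
    n = poisson_sample(lam)
    removed = sum(1 for _ in range(n) if random.random() < p_remove)
    removed_counts.append(removed)
    kept_counts.append(n - removed)

# Removed points: density p * lam; kept points: density (1 - p) * lam.
print(sum(removed_counts) / trials, sum(kept_counts) / trials)  # ~30, ~70
```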

These properties and the definition of the homogeneous Poisson process extend to the case of the inhomogeneous (or non-homogeneous) Poisson process, which is a non-stationary stochastic process with a location-dependent density λ(x), where x is a point (usually in the plane, R2). For more information, see the articles on the Poisson process.

#### Other point processes

Despite its simplifying nature, the independence property of the Poisson process has been criticized for not realistically representing the configuration of deployed networks.[34] For example, it does not capture node "repulsion", where two (or more) nodes in a wireless network are normally not placed (arbitrarily) close to each other (for example, base stations in a cellular network). In addition, MAC protocols often induce correlations or non-Poisson configurations into the geometry of the simultaneously active transmitter pattern. Strong correlations also arise in the case of cognitive radio networks, where secondary transmitters are only allowed to transmit if they are far from primary receivers. To answer these and other criticisms, a number of point processes have been suggested to represent the positioning of nodes, including the binomial process, cluster processes, Matérn hard-core processes,[2][4][36][37] and Strauss and Ginibre processes.[10][38][39] For example, Matérn hard-core processes are constructed by dependently thinning a Poisson point process. The dependent thinning is done in such a way that for any point in the resulting hard-core process, there are no other points within a certain set radius of it, thus creating a "hard-core" around each point in the process.[4][15] On the other hand, soft-core processes have point repulsion that ranges somewhere between that of hard-core processes and Poisson processes (which have no repulsion). More specifically, the probability of a point existing near another point in a soft-core point process decreases in some way as it approaches the other point, thus creating a "soft-core" around each point where other points can exist, but are less likely to.
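Dependent thinning of the Matérn type II kind can be sketched as follows, under illustrative parameters: each point of a random pattern receives an independent uniform mark, and a point is kept only if its mark is smaller than that of every other point within the hard-core radius, so no two retained points are closer than that radius.

```python
import math
import random

random.seed(5)

side, n, r = 10.0, 80, 1.0   # window side, point count, hard-core radius
points = [(random.uniform(0, side), random.uniform(0, side)) for _ in range(n)]
marks = [random.random() for _ in range(n)]  # independent uniform marks

def kept(i):
    """Matern type II rule: keep point i only if its mark is the smallest
    among all points within distance r of it."""
    xi, yi = points[i]
    return all(marks[i] < marks[j] for j in range(n)
               if j != i and math.hypot(xi - points[j][0],
                                        yi - points[j][1]) < r)

hard_core = [points[i] for i in range(n) if kept(i)]

# By construction, no two retained points are closer than r.
min_gap = min(math.hypot(a[0] - b[0], a[1] - b[1])
              for idx, a in enumerate(hard_core) for b in hard_core[idx + 1:])
print(len(hard_core), min_gap >= r)
```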

Although models based on these and other point processes come closer to resembling reality in some situations, for example in the configuration of cellular base stations,[34][40] they often suffer from a loss of tractability while the Poisson process greatly simplifies the mathematics and techniques, explaining its continued use for developing stochastic geometry models of wireless networks.[10] Also, it has been shown that the SIR distribution of non-Poisson cellular networks can be closely approximated by applying a horizontal shift to the SIR distribution of a Poisson network.[41]

## Classification of models

The type of network model is a combination of factors such as the network architectural organization (cellular, ad hoc, cognitive radio), the medium access control (MAC) protocol being used, the application running on it, and whether the network is mobile or static.

### Models based on specific network architectures

Around the beginning of the 21st century a number of new network technologies arose, including mobile ad hoc networks and sensor networks. Stochastic geometry and percolation techniques have been used to develop models for these networks.[2][42] Increases in user traffic have resulted in stochastic geometry being applied to cellular networks.[43]

#### Mobile ad hoc network models

The Poisson bipolar network model is a type of stochastic geometry model based on the Poisson process and is an early example of a model for mobile ad hoc networks (MANETs),[2][31][44] which are self-organizing wireless communication networks of mobile devices that rely on no infrastructure (base stations or access points). In MANET models, the transmitters form a random point process and each transmitter has its receiver located at some random distance and orientation. The channels form a collection of transmitter-receiver pairs or "bipoles"; the signal of a channel is the one transmitted over the associated bipole, whereas the interference is that created by all transmitters other than that of the bipole. The approach of considering transmitter-receiver bipoles led to the development and analysis of the Poisson bipolar network model. The medium access probability that maximizes the mean number of successful transmissions per unit area was, in particular, derived in.[31]

#### Sensor network models

A wireless sensor network consists of a spatially distributed collection of autonomous sensor nodes. Each node is designed to monitor physical or environmental conditions, such as temperature, sound, pressure, etc. and to cooperatively relay the collected data through the network to a main location. In unstructured sensor networks,[45] the deployment of nodes may be done in a random manner. A chief performance criterion of all sensor networks is the ability of the network to gather data, which motivates the need to quantify the coverage or sensing area of the network. It is also important to gauge the connectivity of the network or its capability of relaying the collected data back to the main location.

The random nature of unstructured sensor networks has motivated the use of stochastic geometry methods. For example, the tools of continuum percolation theory and coverage processes have been used to study coverage and connectivity.[42][46] One model used to study these networks, and wireless networks in general, is the Poisson–Boolean model, which is a type of coverage process from continuum percolation theory.

One of the main limitations of sensor networks is energy consumption, where usually each node has a battery and, perhaps, an embedded form of energy harvesting. To reduce energy consumption in sensor networks, various sleep schemes have been suggested that entail having a sub-collection of nodes go into a low energy-consuming sleep mode. These sleep schemes obviously affect the coverage and connectivity of sensor networks. Rudimentary power-saving models have been proposed, such as the simple uncoordinated or decentralized "blinking" model where (at each time interval) each node independently powers down (or up) with some fixed probability. Using the tools of percolation theory, a new type of model, referred to as a blinking Boolean–Poisson model, was proposed to analyze the latency and connectivity performance of sensor networks with such sleep schemes.[42]

#### Cellular network models

A cellular network is a radio network distributed over some region with subdivisions called cells, each served by at least one fixed-location transceiver, known as a cell base station. In cellular networks, each cell uses a different set of frequencies from neighboring cells, to mitigate interference and provide higher bandwidth within each cell. The operators of cellular networks need to know certain performance or quality of service (QoS) metrics in order to dimension the networks, which means adjusting the density of the deployed base stations to meet the demand of user traffic for a required QoS level.

In cellular networks, the channel from the users (or phones) to the base station(s) is known as the uplink channel. Conversely, the downlink channel is from the base station(s) to the users. The downlink channel is the most studied with stochastic geometry models, while models for the uplink case, which is a more difficult problem, are starting to be developed.[47]

In the downlink case, the transmitters and the receivers can be considered as two separate point processes. In the simplest case, there is one point-to-point channel per receiver (i.e. the user), and for a given receiver, this channel is from the closest transmitter (i.e. the base station) to the receiver. Another option consists in selecting the transmitter with the best signal power to the receiver. In any case, there may be several channels with the same transmitter.

A first approach for analyzing cellular networks is to consider the typical user, who can be assumed to be located anywhere on the plane. Under the assumption of point process ergodicity (satisfied when using homogeneous Poisson processes), the results for the typical user correspond to user averages. The coverage probability of the typical user is then interpreted as the proportion of network users who can connect to the cellular network.

Building off previous work done on an Aloha model,[44] the coverage probability for the typical user was derived for a Poisson network.[43][48] The Poisson model of a cellular network proves to be more tractable than a hexagonal model.[43] This advantage may be qualified by the fact that a detailed and precise derivation of the channel attenuation probability distribution function between a random node and a reference base station for a hexagonal model was explicitly given in,[49] and this result can be used to tractably derive the outage probability. Furthermore, in the presence of sufficiently large log-normal shadow fading (or shadowing) and a singular power-law attenuation function, it was observed by simulation [50] for hexagonal networks, and later mathematically proved [51] for general stationary (including hexagonal) networks, that quantities like the SINR and SIR of the typical user behave stochastically as though the underlying network were Poisson. In other words, given a power-law attenuation function, using a Poisson cellular network model with constant shadowing is equivalent (in terms of SIR, SINR, etc.) to assuming large log-normal shadowing in the mathematical model with the base stations positioned according to either a deterministic or random configuration with a constant density.[51]
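The typical-user analysis can be illustrated with a small Monte Carlo sketch: base stations form a Poisson process in a window centred on the typical user, who is served by the nearest station under Rayleigh fading and power-law path loss, with interference from all other stations. Noise is neglected (so this is an SIR computation), all parameter values are illustrative, and the finite window slightly truncates the interference field.

```python
import math
import random

random.seed(6)

lam, alpha, t = 1.0, 4.0, 1.0    # illustrative density, exponent, threshold
window, trials = 10.0, 4000

def poisson_sample(m):
    """Sample a Poisson count (Knuth's method)."""
    threshold, k, p = math.exp(-m), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def sir_of_typical_user():
    """SIR at the origin: served by the nearest base station, with
    independent unit-mean exponential (Rayleigh) fading on every link."""
    n = poisson_sample(lam * window * window)
    stations = [(random.uniform(-window / 2, window / 2),
                 random.uniform(-window / 2, window / 2)) for _ in range(n)]
    dists = sorted(math.hypot(x, y) for x, y in stations)
    signal = random.expovariate(1.0) * dists[0] ** (-alpha)
    interference = sum(random.expovariate(1.0) * d ** (-alpha)
                       for d in dists[1:])
    return signal / interference

p_c = sum(sir_of_typical_user() > t for _ in range(trials)) / trials
print(p_c)  # coverage probability of the typical user at threshold t
```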

#### Heterogeneous cellular network models

In the context of cellular networks, a heterogeneous network (sometimes known as a HetNet) is a network that uses several types of base stations, such as macro-base stations, pico-base stations, and femto-base stations, in order to provide better coverage and bit rates. This is in particular used to cope with the difficulty of covering open outdoor environments, office buildings, homes, and underground areas with macro-base stations alone. Recent Poisson-based models have been developed to derive the coverage probability of such networks in the downlink case.[52][53][54] The general approach is to have a number of layers or "tiers" of networks which are then combined or superimposed onto each other into one heterogeneous or multi-tier network. If each tier is a Poisson network, then the combined network is also a Poisson network, owing to the superposition characteristic of Poisson processes.[22] The Laplace transform for this superimposed Poisson model is then calculated, leading to the coverage probability in (the downlink channel of) a cellular network with multiple tiers when a user is connected to the instantaneously strongest base station[52] and when a user is connected to the strongest base station on average (not including small-scale fading).[53]

#### Cellular network models with multiple users

In recent years the modeling approach of considering a "typical user" in cellular (or other) networks has been used considerably. This is, however, just a first approach which allows one to characterize only the spectral efficiency (or information rate) of the network. In other words, this approach captures the best possible service that can be given to a single user who does not need to share wireless network resources with other users.

Models beyond the typical user approach have been proposed with the aim of analyzing the QoS metrics of a population of users, and not just a single user. Broadly speaking, these models can be classified into four types: static, semi-static, semi-dynamic and (fully) dynamic.[55] More specifically:

• Static models have a given number of active users with fixed positions.
• Semi-static models consider the networks at certain times by representing instances or "snapshots" of active users as realizations of spatial (usually Poisson) processes.[56][57][58][59][60]
• Semi-dynamic models have the phone calls of users occur at a random location and last for some random duration. Furthermore, it is assumed that each user is motionless during its call.[55][58][61] In this model, spatial birth-and-death processes,[62][63] which are, in a way, spatial extensions of (time-only) queueing models (for example, Erlang loss systems and processor sharing models), are used to evaluate time averages of the user QoS metrics. Queueing models have been successfully used to dimension (or to suitably adjust the parameters of) circuit-switched and other communication networks. Adapting these models to the task of dimensioning the radio part of wireless cellular networks requires appropriate space-time averaging over the network geometry and the temporal evolution of the user (phone call) arrival process.[64]
• Dynamic models are more complicated and have the same assumptions as the semi-dynamic model, but users may move during their calls.[65][66][67][68]

The ultimate goal when constructing these models consists of relating the following three key network parameters: user traffic demand per surface unit, network density and user QoS metric(s). These relations form part of the network dimensioning tools, which allow the network operators to appropriately vary the density of the base stations to meet the traffic demands for a required performance level.

### Models based on MAC protocols

The MAC protocol controls when transmitters can access the wireless medium. The aim is to reduce or prevent collisions by limiting the power of interference experienced by an active receiver. The MAC protocol determines the pattern of simultaneously active channels, given the underlying pattern of available channels. Different MAC protocols hence perform different thinning operations on the available channels, which results in different stochastic geometry models being needed.

#### Aloha MAC models

A slotted Aloha wireless network employs the Aloha MAC protocol, under which channels access the medium independently in each time interval with some probability p.[2] If the underlying channels (that is, their transmitters in the point-to-point case) are positioned according to a Poisson process with density λ, then the nodes accessing the network form an independent thinning of that process and hence also a Poisson process, with density pλ, which allows the use of the Poisson model. Aloha is not only one of the simplest and most classic MAC protocols, but it was also shown to achieve Nash equilibria when interpreted as a power control scheme.[69]
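
The independent thinning underlying slotted Aloha is straightforward to sketch numerically. The following simulation is illustrative (the parameter values are arbitrary, not from the cited sources): keeping each Poisson point with probability p yields a point process whose empirical density is close to pλ.

```python
import numpy as np

rng = np.random.default_rng(0)

def aloha_thinning(lam, p, side, rng):
    """Transmitters form a Poisson process of density `lam` on a
    `side` x `side` square; each one independently accesses the
    medium with probability `p` (slotted Aloha).  By the thinning
    theorem, the retained points are again Poisson, with density
    p * lam."""
    n = rng.poisson(lam * side**2)           # Poisson number of points
    pts = rng.uniform(0.0, side, size=(n, 2))  # uniform locations
    return pts[rng.random(n) < p]            # independent retention

# Empirical density of active nodes should be close to p * lam = 0.3.
active = aloha_thinning(lam=1.0, p=0.3, side=50.0, rng=rng)
print(len(active) / 50.0**2)
```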

Several early stochastic models of wireless networks were based on Poisson point processes with the aim of studying the performance of slotted Aloha.[7][70][71] Under Rayleigh fading and a power-law path-loss function, outage (or, equivalently, coverage) probability expressions were derived by treating the interference term as shot noise and using Laplace transform methods,[19][72] an approach that was later extended to a general path-loss function,[31][44][73] and then further to the pure or non-slotted Aloha case.[74]
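
The resulting closed-form coverage expression can be checked against a simple Monte Carlo sketch. The setup below is the standard Poisson bipolar model with Rayleigh fading, path-loss exponent 4 and negligible noise; all numerical values are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tagged receiver at the origin, own transmitter at distance r;
# interferers form a Poisson process of density lam; all links have
# unit-mean exponential (Rayleigh) fading and path loss x^(-alpha).
lam, r, alpha, theta = 0.1, 1.0, 4.0, 1.0
R = 30.0        # simulation radius (far interferers negligible for alpha=4)
trials = 20000

success = 0
for _ in range(trials):
    n = rng.poisson(lam * np.pi * R**2)
    d = R * np.sqrt(rng.random(n))            # interferer distances
    interference = np.sum(rng.exponential(size=n) * d**(-alpha))
    signal = rng.exponential() * r**(-alpha)
    success += signal > theta * interference  # SIR above threshold?
print(success / trials)

# Closed form for Rayleigh fading without noise:
#   exp(-lam * pi * r^2 * theta^(2/alpha) * Gamma(1+2/alpha) * Gamma(1-2/alpha)),
# which for alpha = 4 reduces to exp(-lam * (pi^2/2) * r^2 * sqrt(theta)).
closed_form = np.exp(-lam * np.pi**2 / 2 * r**2 * np.sqrt(theta))
print(closed_form)
```

The two printed values agree to within Monte Carlo error, illustrating why the Laplace-transform (shot-noise) approach yields tractable coverage formulas in this model.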

#### Carrier sense multiple access MAC models

The carrier sense multiple access (CSMA) MAC protocol controls the network in such a way that channels close to each other never access the medium simultaneously. When applied to a Poisson point process, this was shown to lead naturally to a Matérn-like hard-core (or soft-core, in the case of fading) point process that exhibits the desired "repulsion".[2][36] The probability that a channel is scheduled is known in closed form, as is the so-called pair correlation function of the point process of scheduled nodes.[2]
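
The Matérn type II hard-core thinning can be sketched as follows. This is an illustrative simulation on a periodic square (a torus, to avoid edge effects); the parameter values are arbitrary, and the uniform marks play the role of CSMA back-off timers.

```python
import numpy as np

rng = np.random.default_rng(2)

def matern_type_ii(lam, rmin, side, rng):
    """Matérn type II hard-core thinning of a Poisson process on a
    periodic square of side `side`: each point receives an independent
    uniform mark ("back-off timer"), and a point is retained only if
    no other point within distance `rmin` carries a smaller mark.
    This mimics CSMA: a node transmits only if it is the first to
    seize the channel within its contention neighbourhood."""
    n = rng.poisson(lam * side**2)
    pts = rng.uniform(0.0, side, size=(n, 2))
    marks = rng.random(n)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        diff = np.abs(pts - pts[i])
        diff = np.minimum(diff, side - diff)        # torus distance
        close = np.sum(diff**2, axis=1) < rmin**2
        if np.any(close & (marks < marks[i])):
            keep[i] = False                          # a neighbour won
    return pts[keep]

# The per-point retention probability has the closed form
#   (1 - exp(-lam * pi * rmin^2)) / (lam * pi * rmin^2),
# so the empirical density of scheduled nodes approaches lam times that.
kept = matern_type_ii(lam=1.0, rmin=1.0, side=50.0, rng=rng)
print(len(kept) / 50.0**2)
```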

#### Code division multiple access MAC models

In a network with a code division multiple access (CDMA) MAC protocol, each transmitter modulates its signal by a code that is orthogonal to those of the other signals and known to its receiver. This mitigates the interference from other transmitters and can be represented in a mathematical model by multiplying the interference by an orthogonality factor. Stochastic geometry models based on this type of representation were developed to analyze the coverage areas of transmitters positioned according to a Poisson process.[18]
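
The orthogonality-factor representation amounts to scaling the interference term in the SINR. The following is a minimal sketch: the factor `beta` and the power values are illustrative assumptions, not taken from the cited work.

```python
import numpy as np

def sinr(signal_power, interference_powers, noise, beta):
    """SINR with a CDMA orthogonality factor `beta` in [0, 1]:
    interference from other transmitters is scaled by beta
    (beta = 0 models perfect code orthogonality, beta = 1 models
    no interference reduction at all)."""
    return signal_power / (noise + beta * np.sum(interference_powers))

# Same geometry (fixed received powers), two orthogonality factors.
s, I, w = 1.0, np.array([0.3, 0.2, 0.1]), 0.01
print(sinr(s, I, w, beta=1.0))   # no interference reduction
print(sinr(s, I, w, beta=0.25))  # strong code orthogonality
```

In a stochastic geometry model, the same scaling is applied to the shot-noise interference generated by the Poisson process of transmitters, enlarging the coverage cells as `beta` decreases.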

#### Network information theoretic models

In the previous MAC-based models, point-to-point channels were assumed and the interference was treated as noise. In recent years, models have been developed to study more elaborate channels arising from the discipline of network information theory.[75] More specifically, a model was developed for one of the simplest settings: a collection of transmitter–receiver pairs represented as a Poisson point process.[76] In this model, the effects of an interference-reduction scheme involving "point-to-point codes" were examined. These codes, consisting of randomly and independently generated codewords, determine when transmitter–receiver pairs may exchange information, thus acting as a MAC protocol. Furthermore, in this model a collection or "party" of channels was defined for each such pair. This party is a multiple access channel,[75] namely the many-to-one situation for channels: the receiver of the party is the same as that of the pair, and the transmitter of the pair belongs, together with other transmitters, to the set of transmitters of the party. Using stochastic geometry, the probability of coverage was derived, as well as the geometric properties of the coverage cells.[76] It was also shown[75] that, when point-to-point codes and simultaneous decoding are used, the statistical gain over a Poisson configuration is arbitrarily large compared with the scenario in which interference is treated as noise.

### Other network models

Stochastic geometry wireless models have been proposed for several network types including cognitive radio networks,[77][78] relay networks,[79] and vehicular ad hoc networks.