# Exponential family random graph models

Exponential Random Graph Models (ERGMs) are a family of statistical models for analyzing data from social and other networks. Examples of networks examined using ERGM include knowledge networks, organizational networks, colleague networks, social media networks, networks of scientific development, and others.

## Background

Many metrics exist to describe the structural features of an observed network, such as density, centrality, or assortativity. However, these metrics describe the observed network, which is only one instance of a large number of possible alternative networks. This set of alternative networks may have similar or dissimilar structural features. To support statistical inference on the processes influencing the formation of network structure, a statistical model should consider the set of all possible alternative networks weighted on their similarity to an observed network. However, because network data are inherently relational, they violate the assumptions of independence and identical distribution made by standard statistical models like linear regression. Alternative statistical models should reflect the uncertainty associated with a given observation, permit inference about the relative frequency of network substructures of theoretical interest, disambiguate the influence of confounding processes, efficiently represent complex structures, and link local-level processes to global-level properties. Degree-preserving randomization, for example, is a specific way in which an observed network could be considered in terms of multiple alternative networks.

## Definition

The exponential family is a broad family of statistical models covering many types of data, not just networks. An ERGM is a member of this family that describes networks.

Formally, a random graph $Y\in {\mathcal {Y}}$ consists of a set of $n$ nodes and $m$ dyads (edges) $\{Y_{ij}:i=1,\dots ,n;j=1,\dots ,n\}$, where $Y_{ij}=1$ if the nodes $(i,j)$ are connected and $Y_{ij}=0$ otherwise.
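As a concrete illustration (a Python sketch with an arbitrary example graph, not part of the definition above), the dyad variables $Y_{ij}$ of a small undirected graph can be represented as indicator values over the pairs of nodes:

```python
import itertools

n = 4  # illustrative number of nodes (assumption for this sketch)

# Store only the edges that are present; every other dyad has Y_ij = 0.
edges = {(0, 1), (1, 2), (2, 3)}

def Y(i, j):
    """Dyad indicator: 1 if nodes i and j are connected, else 0."""
    return 1 if (i, j) in edges or (j, i) in edges else 0

# For an undirected graph it suffices to list dyads {i, j} with i < j.
dyads = list(itertools.combinations(range(n), 2))
indicators = [Y(i, j) for i, j in dyads]
print(indicators)  # [1, 0, 0, 1, 0, 1]
```

Here the six dyads of a 4-node graph are enumerated in lexicographic order; the full vector of indicators is one point in the space $\mathcal{Y}$ of possible networks.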

The basic assumption of these models is that the structure in an observed graph $y$ can be explained by a given vector of sufficient statistics $s(y)$, which are a function of the observed network and, in some cases, nodal attributes. This way, it is possible to describe any kind of dependence between the dyadic variables:
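As one hypothetical example of such a statistic vector (the choice of statistics is up to the analyst; edge count and triangle count are only an illustration), $s(y)$ could be computed as:

```python
import itertools

def s(edges, n):
    """Illustrative sufficient-statistic vector for an undirected graph
    on n nodes: (edge count, triangle count)."""
    edge_set = {frozenset(e) for e in edges}
    n_edges = len(edge_set)
    n_triangles = sum(
        1 for i, j, k in itertools.combinations(range(n), 3)
        if {frozenset((i, j)), frozenset((j, k)), frozenset((i, k))} <= edge_set
    )
    return (n_edges, n_triangles)

print(s([(0, 1), (1, 2), (0, 2), (2, 3)], n=4))  # (4, 1)
```

Each component of $s(y)$ gets its own entry in the parameter vector $\theta$, so a positive triangle parameter, say, would make triangle-rich networks more probable.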

$P(Y=y|\theta )={\frac {\exp(\theta ^{T}s(y))}{c(\theta )}},\quad \forall y\in {\mathcal {Y}}$ where $\theta$ is a vector of model parameters associated with $s(y)$ and $c(\theta )=\sum _{y'\in {\mathcal {Y}}}\exp(\theta ^{T}s(y'))$ is a normalising constant.
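Because $c(\theta)$ sums over every graph in $\mathcal{Y}$, it can only be evaluated exactly for very small $n$. A brute-force sketch of the distribution, assuming (purely for illustration) that the edge count is the single sufficient statistic and $\theta$ is an arbitrary scalar:

```python
import itertools
import math

n = 3
dyads = list(itertools.combinations(range(n), 2))
theta = -0.5  # illustrative edge parameter (assumption)

def s(edge_set):
    """Sufficient statistic: here just the edge count."""
    return len(edge_set)

# Enumerate all 2^(n(n-1)/2) undirected graphs as subsets of the dyads.
all_graphs = [
    {d for d, b in zip(dyads, bits) if b}
    for bits in itertools.product([0, 1], repeat=len(dyads))
]

# Normalising constant c(theta).
c = sum(math.exp(theta * s(g)) for g in all_graphs)

def prob(g):
    """P(Y = g | theta) under the ERGM."""
    return math.exp(theta * s(g)) / c

# The probabilities form a distribution over the set of graphs.
total = sum(prob(g) for g in all_graphs)
print(len(all_graphs), round(total, 12))  # 8 1.0
```

For realistic network sizes the sum defining $c(\theta)$ is intractable, which is why ERGM estimation typically relies on Markov chain Monte Carlo approximations rather than exact enumeration.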

These models represent a probability distribution over the set of possible networks on $n$ nodes. However, the size of this set for an undirected network (simple graph) on $n$ nodes is $2^{n(n-1)/2}$. Because the number of possible networks vastly outnumbers the number of parameters that can constrain the model, the ideal probability distribution is the one which maximizes the Gibbs entropy.
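To see how quickly this space grows, the count $2^{n(n-1)/2}$ can be tabulated for a few values of $n$:

```python
def num_graphs(n):
    """Number of simple undirected labelled graphs on n nodes:
    one binary choice per dyad, of which there are n(n-1)/2."""
    return 2 ** (n * (n - 1) // 2)

for n in (3, 5, 10):
    print(n, num_graphs(n))
# 3 -> 8, 5 -> 1024, 10 -> 2**45 = 35184372088832
```

Already at $n = 10$ there are about $3.5 \times 10^{13}$ possible networks, far more than any feasible number of model parameters.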