# Generalized linear mixed model

In statistics, a generalized linear mixed model (GLMM) is an extension of the generalized linear model (GLM) in which the linear predictor contains random effects in addition to the usual fixed effects. Equivalently, GLMMs can be viewed as extending linear mixed models to non-normal response data, an idea they inherit from GLMs.

GLMMs provide a broad range of models for the analysis of grouped data, since the differences between groups can be modelled as a random effect. These models are useful in the analysis of many kinds of data, including longitudinal data.

## Model

GLMMs are generally defined such that, conditioned on the random effects $u$ , the dependent variable $y$ is distributed according to the exponential family with its expectation related to the linear predictor ${\textstyle X\beta +Zu}$ via a link function ${\textstyle g}$ :

$g(E[y\vert u])=X\beta +Zu$ .

Here ${\textstyle X}$ and ${\textstyle \beta }$ are the fixed-effects design matrix and the fixed-effects coefficient vector, respectively; ${\textstyle Z}$ and ${\textstyle u}$ are the random-effects design matrix and the random-effects vector, respectively. This brief definition presupposes the definitions of a generalized linear model and of a mixed model.
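As a concrete illustration (an assumed special case, not part of the general definition above), consider a random-intercept logistic GLMM for binary responses $y_{ij}$ on unit $i$ in group $j$, with one random intercept $u_j$ per group:

```latex
% Illustrative special case: random-intercept logistic GLMM.
% Conditional distribution: y_ij | u_j ~ Bernoulli(p_ij), with logit link g.
\operatorname{logit}\bigl(\Pr(y_{ij} = 1 \mid u_j)\bigr)
  = x_{ij}^{\top}\beta + u_j,
\qquad u_j \sim \mathcal{N}(0, \sigma_u^{2}).
```

In this case the row $x_{ij}^{\top}$ of $X$ carries the fixed effects, while $Z$ reduces to group-membership indicators that select the appropriate $u_j$.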

Generalized linear mixed models are a special case of hierarchical generalized linear models in which the random effects are normally distributed.

The likelihood, with the random effects integrated out,

$\ln p(y)=\ln \int p(y\vert u)\,p(u)\,du$

has no closed form in general, and integrating over the random effects is usually extremely computationally intensive. In addition to numerically approximating this integral (e.g. via Gauss–Hermite quadrature), methods motivated by the Laplace approximation have been proposed. For example, the penalized quasi-likelihood method, which essentially involves repeatedly fitting (i.e. doubly iterative) a weighted normal mixed model with a working variate, is implemented by various commercial and open-source statistical programs.
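A minimal sketch of the Laplace idea for a single group, assuming a random-intercept logistic model with made-up data and parameter values: the integrand $p(y\vert u)\,p(u)$ is maximized over $u$, and the integral is approximated by a Gaussian centered at that mode.

```python
import numpy as np
from scipy import optimize

# Illustrative sketch (assumed model and simulated data, not from the text):
# Laplace approximation of the per-group integral  ∫ p(y | u) p(u) du
# for one group of a random-intercept logistic GLMM.
rng = np.random.default_rng(0)
x = rng.normal(size=20)                    # one covariate, 20 observations
beta0, beta1, sigma_u = -0.5, 1.0, 0.8     # assumed "true" parameters
u_true = rng.normal(0, sigma_u)
p = 1 / (1 + np.exp(-(beta0 + beta1 * x + u_true)))
y = rng.binomial(1, p)

def neg_h(u):
    """-log[ p(y | u) p(u) ] as a function of the random intercept u."""
    eta = beta0 + beta1 * x + u
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))   # Bernoulli log-likelihood
    logprior = -0.5 * (u / sigma_u) ** 2 - 0.5 * np.log(2 * np.pi * sigma_u ** 2)
    return -(loglik + logprior)

# 1. Find the mode u_hat of the integrand (neg_h is convex in u).
u_hat = optimize.minimize_scalar(neg_h).x

# 2. Curvature of -h at the mode, via central finite differences.
eps = 1e-5
curv = (neg_h(u_hat + eps) - 2 * neg_h(u_hat) + neg_h(u_hat - eps)) / eps ** 2

# 3. Laplace approximation: log ∫ e^{h(u)} du ≈ h(u_hat) + 0.5 log(2π / -h''(u_hat))
log_marginal = -neg_h(u_hat) + 0.5 * np.log(2 * np.pi / curv)
```

The penalized quasi-likelihood method builds on the same mode-finding step, but re-expresses it as iteratively fitting a weighted normal mixed model.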

## Fitting a model

Fitting GLMMs via maximum likelihood (as when selecting models with the Akaike information criterion) involves integrating over the random effects. In general, those integrals cannot be expressed in analytical form. Various approximate methods have been developed, but none has good properties for all possible models and data sets (e.g. ungrouped binary data are particularly problematic). For this reason, methods involving numerical quadrature or Markov chain Monte Carlo have increased in use, as increasing computing power and advances in methods have made them more practical.
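As a sketch of the quadrature route (all data simulated and parameter values assumed for illustration), the marginal log-likelihood of a random-intercept logistic GLMM can be approximated per group with Gauss–Hermite quadrature and then maximized numerically:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated grouped binary data (assumed true values: beta = (0.5, 1.5), sigma_u = 1).
rng = np.random.default_rng(1)
n_groups, n_per = 30, 15
group = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)
u = rng.normal(0, 1.0, size=n_groups)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.5 * x + u[group]))))

# Nodes/weights for ∫ e^{-t^2} f(t) dt; the substitution u = sqrt(2)*sigma*t
# converts the N(0, sigma^2) integral into this form.
nodes, weights = np.polynomial.hermite.hermgauss(25)

def neg_loglik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)          # keeps sigma > 0 during optimization
    total = 0.0
    for g in range(n_groups):
        xg, yg = x[group == g], y[group == g]
        eta = b0 + b1 * xg[:, None] + np.sqrt(2) * sigma * nodes[None, :]
        # Bernoulli log-likelihood of the group at each quadrature node
        logp = np.sum(yg[:, None] * eta - np.logaddexp(0.0, eta), axis=0)
        total += np.log(np.dot(weights, np.exp(logp)) / np.sqrt(np.pi))
    return -total

fit = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
b0_hat, b1_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
```

With grouped data like this, the integral factors into one low-dimensional integral per group, which is what makes quadrature feasible; ungrouped binary data offer no such factorization, which is one reason they are problematic.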

The Akaike information criterion (AIC) is a common criterion for model selection. Estimates of AIC for GLMMs based on certain exponential family distributions have recently been obtained.

## Software

• Several contributed packages in R provide GLMM functionality, including lme4 and glmm.
• GLMMs can be fitted using SAS and SPSS.
• MATLAB provides the function fitglme for fitting GLMMs.
• The Python package statsmodels provides binomial and Poisson GLMM implementations.
• The Julia package MixedModels.jl provides the function GeneralizedLinearMixedModel, which fits a GLMM to the provided data.
• The R package DHARMa provides residual diagnostics for hierarchical (multi-level/mixed) regression models.
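As one concrete interface, a sketch of fitting a binomial GLMM in statsmodels on simulated data (the class and method names come from statsmodels' Bayesian mixed-GLM module, which fits by variational Bayes or MAP rather than maximum likelihood; the data and formula are made up for illustration):

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical toy data: binary outcome y, covariate x, grouping factor g.
rng = np.random.default_rng(2)
n_groups, n_per = 20, 25
g = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)
u = rng.normal(0, 1.0, size=n_groups)
p = 1 / (1 + np.exp(-(0.3 + 1.0 * x + u[g])))
df = pd.DataFrame({"y": rng.binomial(1, p), "x": x, "g": g})

# Random intercept per level of g, specified as a variance component.
model = BinomialBayesMixedGLM.from_formula(
    "y ~ x", vc_formulas={"g": "0 + C(g)"}, data=df
)
result = model.fit_vb()   # variational Bayes; fit_map() is the MAP alternative
print(result.summary())
```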