# Latent Dirichlet allocation

In natural language processing, latent Dirichlet allocation (LDA) is a generative statistical model that explains sets of observations by unobserved groups, each of which accounts for why some parts of the data are similar. For example, if the observations are words collected into documents, the model posits that each document is a mixture of a small number of topics and that each word's presence is attributable to one of the document's topics. LDA is an example of a topic model and belongs to the machine learning toolbox and, in a wider sense, to the artificial intelligence toolbox.

## History

In the context of population genetics, LDA was proposed by J. K. Pritchard, M. Stephens and P. Donnelly in 2000.[1][2]

LDA was applied in machine learning by David Blei, Andrew Ng and Michael I. Jordan in 2003.[3]

## Overview

### Evolutionary biology and bio-medicine

In evolutionary biology and biomedicine, the model is used to detect the presence of structured genetic variation in a group of individuals. The model assumes that the alleles carried by the individuals under study originate in various extant or past populations. The model and various inference algorithms allow scientists to estimate the allele frequencies in those source populations and the origin of the alleles carried by the individuals under study. The source populations can be interpreted ex post in terms of various evolutionary scenarios. In association studies, detecting the presence of genetic structure is considered a necessary preliminary step to avoid confounding.

### Engineering

One example of LDA in engineering is the automatic classification of documents and estimation of their relevance to various topics.

In LDA, each document may be viewed as a mixture of various topics, where each document is considered to have a set of topics assigned to it via LDA. This is identical to probabilistic latent semantic analysis (pLSA), except that in LDA the topic distribution is assumed to have a sparse Dirichlet prior. The sparse Dirichlet priors encode the intuition that documents cover only a small set of topics and that topics use only a small set of words frequently. LDA has an advantage over pLSA with respect to overfitting, especially as the corpus size increases. In practice, this results in better disambiguation of words and a more precise assignment of documents to topics. LDA is a generalization of the pLSA model, which is equivalent to LDA under a uniform Dirichlet prior distribution.[4]
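
The effect of the sparse prior can be seen directly by sampling from a Dirichlet distribution. The following is a minimal sketch (assuming NumPy; the dimension and concentration values are illustrative, not taken from any of the cited papers): with a concentration parameter below 1, most of the probability mass lands on a few components, matching the "few topics per document" intuition.

```python
import numpy as np

rng = np.random.default_rng(0)

sparse_draw = rng.dirichlet(alpha=[0.1] * 10)  # alpha < 1: sparse mixture
dense_draw = rng.dirichlet(alpha=[10.0] * 10)  # alpha > 1: near-uniform mixture

print(np.round(sparse_draw, 3))  # a few large entries, most near zero
print(np.round(dense_draw, 3))   # all entries close to 0.1
```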

For example, an LDA model might have topics that can be classified as CAT_related and DOG_related. A topic has probabilities of generating various words, such as milk, meow, and kitten, which can be classified and interpreted by the viewer as "CAT_related". Naturally, the word cat itself will have high probability given this topic. The DOG_related topic likewise has probabilities of generating each word: puppy, bark, and bone might have high probability. Words without special relevance, such as "the" (see function word), will have roughly even probability between classes (or can be placed into a separate category). A topic is neither semantically nor epistemologically strongly defined. It is identified on the basis of automatic detection of the likelihood of term co-occurrence. A lexical word may occur in several topics with different probabilities, but with a different typical set of neighboring words in each topic.

Each document is assumed to be characterized by a particular set of topics. This is similar to the standard bag of words model assumption, and makes the individual words exchangeable.

## Model

Plate notation representing the LDA model.

With plate notation, which is often used to represent probabilistic graphical models (PGMs), the dependencies among the many variables can be captured concisely. The boxes are "plates" representing replicates, which are repeated entities. The outer plate represents documents, while the inner plate represents the repeated word positions in a given document; each position is associated with a choice of topic and word. The variable names are defined as follows:

- M denotes the number of documents
- N is the number of words in a given document (document i has ${\displaystyle N_{i}}$ words)
- α is the parameter of the Dirichlet prior on the per-document topic distributions
- β is the parameter of the Dirichlet prior on the per-topic word distribution
- ${\displaystyle \theta _{i}}$ is the topic distribution for document i
- ${\displaystyle \varphi _{k}}$ is the word distribution for topic k
- ${\displaystyle z_{ij}}$ is the topic for the j-th word in document i
- ${\displaystyle w_{ij}}$ is the specific word.

Plate notation for LDA with Dirichlet-distributed topic-word distributions

The fact that W is grayed out means that words ${\displaystyle w_{ij}}$ are the only observable variables, and the other variables are latent variables. As proposed in the original paper,[3] a sparse Dirichlet prior can be used to model the topic-word distribution, following the intuition that the probability distribution over words in a topic is skewed, so that only a small set of words have high probability. The resulting model is the most widely applied variant of LDA today. The plate notation for this model is shown on the right, where ${\displaystyle K}$ denotes the number of topics and ${\displaystyle \varphi _{1},\dots ,\varphi _{K}}$ are ${\displaystyle V}$-dimensional vectors storing the parameters of the Dirichlet-distributed topic-word distributions (${\displaystyle V}$ is the number of words in the vocabulary).

It is helpful to think of the entities represented by ${\displaystyle \theta }$ and ${\displaystyle \varphi }$ as matrices created by decomposing the original document-word matrix that represents the corpus of documents being modeled. In this view, ${\displaystyle \theta }$ consists of rows defined by documents and columns defined by topics, while ${\displaystyle \varphi }$ consists of rows defined by topics and columns defined by words. Thus, ${\displaystyle \varphi _{1},\dots ,\varphi _{K}}$ refers to a set of rows, or vectors, each of which is a distribution over words, and ${\displaystyle \theta _{1},\dots ,\theta _{M}}$ refers to a set of rows, each of which is a distribution over topics.
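
This matrix view can be made concrete with a small sketch (assuming NumPy; the shapes and hyperparameters are illustrative assumptions): multiplying ${\displaystyle \theta }$ (documents × topics) by ${\displaystyle \varphi }$ (topics × words) yields each document's distribution over words.

```python
import numpy as np

M, K, V = 3, 2, 5  # documents, topics, vocabulary size
rng = np.random.default_rng(1)

theta = rng.dirichlet([0.5] * K, size=M)  # M x K: per-document topic mixtures
phi = rng.dirichlet([0.5] * V, size=K)    # K x V: per-topic word distributions

doc_word = theta @ phi  # M x V: each row is that document's word distribution
assert np.allclose(doc_word.sum(axis=1), 1.0)
```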

### Generative process

To actually infer the topics in a corpus, we imagine a generative process whereby the documents are created, so that we may infer, or reverse engineer, it. We imagine the generative process as follows. Documents are represented as random mixtures over latent topics, where each topic is characterized by a distribution over all the words. LDA assumes the following generative process for a corpus ${\displaystyle D}$ consisting of ${\displaystyle M}$ documents each of length ${\displaystyle N_{i}}$:

1. Choose ${\displaystyle \theta _{i}\sim \operatorname {Dir} (\alpha )}$, where ${\displaystyle i\in \{1,\dots ,M\}}$ and ${\displaystyle \mathrm {Dir} (\alpha )}$ is a Dirichlet distribution with a symmetric parameter ${\displaystyle \alpha }$ which typically is sparse (${\displaystyle \alpha <1}$)

2. Choose ${\displaystyle \varphi _{k}\sim \operatorname {Dir} (\beta )}$, where ${\displaystyle k\in \{1,\dots ,K\}}$ and ${\displaystyle \beta }$ typically is sparse

3. For each of the word positions ${\displaystyle i,j}$, where ${\displaystyle i\in \{1,\dots ,M\}}$, and ${\displaystyle j\in \{1,\dots ,N_{i}\}}$

(a) Choose a topic ${\displaystyle z_{i,j}\sim \operatorname {Multinomial} (\theta _{i}).}$
(b) Choose a word ${\displaystyle w_{i,j}\sim \operatorname {Multinomial} (\varphi _{z_{i,j}}).}$

(Note that multinomial distribution here refers to the multinomial with only one trial, which is also known as the categorical distribution.)

The lengths ${\displaystyle N_{i}}$ are treated as independent of all the other data generating variables (${\displaystyle w}$ and ${\displaystyle z}$). The subscript is often dropped, as in the plate diagrams shown here.
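
The generative process translates almost line for line into code. Below is a minimal sketch (assuming NumPy; the corpus sizes, hyperparameter values, and the Poisson choice for document lengths are illustrative assumptions, not part of the model):

```python
import numpy as np

rng = np.random.default_rng(2)
K, V, M = 4, 20, 5                     # topics, vocabulary size, documents
alpha, beta = 0.1, 0.01                # symmetric, sparse Dirichlet parameters
N = rng.poisson(lam=15, size=M) + 1    # document lengths, treated as exogenous

theta = rng.dirichlet([alpha] * K, size=M)  # step 1: topic mixture per document
phi = rng.dirichlet([beta] * V, size=K)     # step 2: word distribution per topic

docs = []
for i in range(M):
    z = rng.choice(K, size=N[i], p=theta[i])            # step 3a: topic per position
    w = np.array([rng.choice(V, p=phi[k]) for k in z])  # step 3b: word per position
    docs.append(w)
```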

### Definition

A formal description of LDA is as follows:

Definition of variables in the model:

| Variable | Type | Meaning |
| --- | --- | --- |
| ${\displaystyle K}$ | integer | number of topics (e.g. 50) |
| ${\displaystyle V}$ | integer | number of words in the vocabulary (e.g. 50,000 or 1,000,000) |
| ${\displaystyle M}$ | integer | number of documents |
| ${\displaystyle N_{d=1\dots M}}$ | integer | number of words in document d |
| ${\displaystyle N}$ | integer | total number of words in all documents; sum of all ${\displaystyle N_{d}}$ values, i.e. ${\displaystyle N=\sum _{d=1}^{M}N_{d}}$ |
| ${\displaystyle \alpha _{k=1\dots K}}$ | positive real | prior weight of topic k in a document; usually the same for all topics; normally a number less than 1, e.g. 0.1, to prefer sparse topic distributions, i.e. few topics per document |
| ${\displaystyle {\boldsymbol {\alpha }}}$ | K-dimensional vector of positive reals | collection of all ${\displaystyle \alpha _{k}}$ values, viewed as a single vector |
| ${\displaystyle \beta _{w=1\dots V}}$ | positive real | prior weight of word w in a topic; usually the same for all words; normally a number much less than 1, e.g. 0.001, to strongly prefer sparse word distributions, i.e. few words per topic |
| ${\displaystyle {\boldsymbol {\beta }}}$ | V-dimensional vector of positive reals | collection of all ${\displaystyle \beta _{w}}$ values, viewed as a single vector |
| ${\displaystyle \varphi _{k=1\dots K,w=1\dots V}}$ | probability (real number between 0 and 1) | probability of word w occurring in topic k |
| ${\displaystyle {\boldsymbol {\varphi }}_{k=1\dots K}}$ | V-dimensional vector of probabilities, which must sum to 1 | distribution of words in topic k |
| ${\displaystyle \theta _{d=1\dots M,k=1\dots K}}$ | probability (real number between 0 and 1) | probability of topic k occurring in document d |
| ${\displaystyle {\boldsymbol {\theta }}_{d=1\dots M}}$ | K-dimensional vector of probabilities, which must sum to 1 | distribution of topics in document d |
| ${\displaystyle z_{d=1\dots M,w=1\dots N_{d}}}$ | integer between 1 and K | identity of topic of word w in document d |
| ${\displaystyle \mathbf {Z} }$ | N-dimensional vector of integers between 1 and K | identity of topic of all words in all documents |
| ${\displaystyle w_{d=1\dots M,w=1\dots N_{d}}}$ | integer between 1 and V | identity of word w in document d |
| ${\displaystyle \mathbf {W} }$ | N-dimensional vector of integers between 1 and V | identity of all words in all documents |

We can then mathematically describe the random variables as follows:

{\displaystyle {\begin{aligned}{\boldsymbol {\varphi }}_{k=1\dots K}&\sim \operatorname {Dirichlet} _{V}({\boldsymbol {\beta }})\\{\boldsymbol {\theta }}_{d=1\dots M}&\sim \operatorname {Dirichlet} _{K}({\boldsymbol {\alpha }})\\z_{d=1\dots M,w=1\dots N_{d}}&\sim \operatorname {Categorical} _{K}({\boldsymbol {\theta }}_{d})\\w_{d=1\dots M,w=1\dots N_{d}}&\sim \operatorname {Categorical} _{V}({\boldsymbol {\varphi }}_{z_{dw}})\end{aligned}}}

## Inference

Learning the various distributions (the set of topics, their associated word probabilities, the topic of each word, and the particular topic mixture of each document) is a problem of statistical inference.

### Monte Carlo simulation

The original paper by Pritchard et al.[1] approximated the posterior distribution by Monte Carlo simulation. Alternative inference techniques include Gibbs sampling.[5]

### Variational Bayes

The original machine-learning paper[3] used a variational Bayes approximation of the posterior distribution.
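
In practice, variational-Bayes inference is available off the shelf; for example, scikit-learn's LatentDirichletAllocation implements online variational Bayes. A minimal sketch, with an illustrative toy corpus (an assumption for demonstration, not data from the cited papers):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "the cat drank milk and the kitten meowed",
    "the puppy chewed a bone and barked",
    "a kitten and a puppy played together",
]

counts = CountVectorizer().fit_transform(corpus)  # document-word count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(counts)  # rows approximate the theta_d mixtures
topic_word = lda.components_           # unnormalized analogue of the phi_k rows
```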

### Likelihood maximization

Direct optimization of the likelihood with a block relaxation algorithm proves to be a fast alternative to MCMC.[6]

### Unknown number of populations/topics

In practice, the optimal number of populations or topics is not known beforehand. It can be estimated by approximation of the posterior distribution with reversible-jump Markov chain Monte Carlo.[7]
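
Reversible-jump MCMC is beyond a short sketch, but a common practical proxy for choosing the number of topics (a different technique than the RJ-MCMC of the cited work) is to compare held-out perplexity across candidate values of ${\displaystyle K}$. A minimal sketch using scikit-learn; the candidate values and the helper name are illustrative assumptions:

```python
from sklearn.decomposition import LatentDirichletAllocation

def choose_k(train_counts, heldout_counts, candidates=(5, 10, 20, 40)):
    """Fit one model per candidate K and score it on held-out documents."""
    scores = {}
    for k in candidates:
        lda = LatentDirichletAllocation(n_components=k, random_state=0)
        lda.fit(train_counts)
        scores[k] = lda.perplexity(heldout_counts)  # lower is better
    return min(scores, key=scores.get), scores
```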

### Alternative approaches

Alternative approaches include expectation propagation.[8]

Recent research has focused on speeding up the inference of latent Dirichlet allocation to support the capture of a massive number of topics in a large number of documents. The update equation of the collapsed Gibbs sampler (derived in the section on computational details below) has a natural sparsity within it that can be taken advantage of. Intuitively, since each document only contains a subset of topics ${\displaystyle K_{d}}$, and a word also only appears in a subset of topics ${\displaystyle K_{w}}$, the collapsed Gibbs update can be rewritten to exploit this sparsity:[9]

${\displaystyle p(Z_{d,n}=k)\propto {\frac {\alpha \beta }{C_{k}^{\neg n}+V\beta }}+{\frac {C_{k}^{d}\beta }{C_{k}^{\neg n}+V\beta }}+{\frac {C_{k}^{w}(\alpha +C_{k}^{d})}{C_{k}^{\neg n}+V\beta }}}$

In this equation, we have three terms, out of which two are sparse, and the other is small. We call these terms ${\displaystyle a,b}$ and ${\displaystyle c}$ respectively. Now, if we normalize each term by summing over all the topics, we get:

${\displaystyle A=\sum _{k=1}^{K}{\frac {\alpha \beta }{C_{k}^{\neg n}+V\beta }}}$
${\displaystyle B=\sum _{k=1}^{K}{\frac {C_{k}^{d}\beta }{C_{k}^{\neg n}+V\beta }}}$
${\displaystyle C=\sum _{k=1}^{K}{\frac {C_{k}^{w}(\alpha +C_{k}^{d})}{C_{k}^{\neg n}+V\beta }}}$

Here, we can see that ${\displaystyle B}$ is a summation over the topics that appear in document ${\displaystyle d}$, and ${\displaystyle C}$ is likewise a sparse summation over the topics that a word ${\displaystyle w}$ is assigned to across the whole corpus. ${\displaystyle A}$, on the other hand, is dense, but because of the small values of ${\displaystyle \alpha }$ and ${\displaystyle \beta }$, its value is very small compared to the other two terms.

Now, while sampling a topic, if we sample a random variable uniformly as ${\displaystyle s\sim U(0,A+B+C)}$, we can check which bucket our sample lands in. Since ${\displaystyle A}$ is small, we are very unlikely to fall into this bucket; however, if we do fall into this bucket, sampling a topic takes ${\displaystyle O(K)}$ time (same as the original collapsed Gibbs sampler). However, if we fall into the other two buckets, we only need to check a subset of topics if we keep a record of the sparse topics. A topic can be sampled from the ${\displaystyle B}$ bucket in ${\displaystyle O(K_{d})}$ time, and a topic can be sampled from the ${\displaystyle C}$ bucket in ${\displaystyle O(K_{w})}$ time, where ${\displaystyle K_{d}}$ and ${\displaystyle K_{w}}$ denote the number of topics assigned to the current document and current word type respectively.

Notice that after sampling each topic, updating these buckets requires only basic ${\displaystyle O(1)}$ arithmetic operations.
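
A minimal sketch of one such bucketed sampling step follows (assuming NumPy; the count arrays are assumed to already exclude the token being resampled, and for clarity the sparse buckets are stored densely here — a real implementation iterates only over their nonzero entries to obtain the ${\displaystyle O(K_{d})}$ and ${\displaystyle O(K_{w})}$ costs):

```python
import numpy as np

def sample_topic_bucketed(n_k, n_dk, n_kw, alpha, beta, V, rng):
    """One sparsity-aware sampling step. n_k: tokens per topic; n_dk: topic
    counts in the current document; n_kw: counts of the current word per
    topic -- all excluding the token being resampled."""
    denom = n_k + V * beta                # shared per-topic denominator
    a = alpha * beta / denom              # dense smoothing bucket (tiny mass)
    b = n_dk * beta / denom               # nonzero only for the document's topics
    c = n_kw * (alpha + n_dk) / denom     # nonzero only for the word's topics
    A, B, C = a.sum(), b.sum(), c.sum()
    u = rng.uniform(0.0, A + B + C)
    # Check the heaviest bucket first; most draws never touch the dense one.
    if u < C:
        return int(np.searchsorted(np.cumsum(c), u))
    u -= C
    if u < B:
        return int(np.searchsorted(np.cumsum(b), u))
    u -= B
    return int(np.searchsorted(np.cumsum(a), u))
```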

### Aspects of computational details

Following is the derivation of the equations for collapsed Gibbs sampling, which means ${\displaystyle \varphi }$s and ${\displaystyle \theta }$s will be integrated out. For simplicity, in this derivation the documents are all assumed to have the same length ${\displaystyle N}$. The derivation is equally valid if the document lengths vary.

According to the model, the total probability of the model is:

${\displaystyle P({\boldsymbol {W}},{\boldsymbol {Z}},{\boldsymbol {\theta }},{\boldsymbol {\varphi }};\alpha ,\beta )=\prod _{i=1}^{K}P(\varphi _{i};\beta )\prod _{j=1}^{M}P(\theta _{j};\alpha )\prod _{t=1}^{N}P(Z_{j,t}\mid \theta _{j})P(W_{j,t}\mid \varphi _{Z_{j,t}}),}$

where the bold-font variables denote the vector version of the variables. First, ${\displaystyle {\boldsymbol {\varphi }}}$ and ${\displaystyle {\boldsymbol {\theta }}}$ need to be integrated out.

{\displaystyle {\begin{aligned}&P({\boldsymbol {Z}},{\boldsymbol {W}};\alpha ,\beta )=\int _{\boldsymbol {\theta }}\int _{\boldsymbol {\varphi }}P({\boldsymbol {W}},{\boldsymbol {Z}},{\boldsymbol {\theta }},{\boldsymbol {\varphi }};\alpha ,\beta )\,d{\boldsymbol {\varphi }}\,d{\boldsymbol {\theta }}\\={}&\int _{\boldsymbol {\varphi }}\prod _{i=1}^{K}P(\varphi _{i};\beta )\prod _{j=1}^{M}\prod _{t=1}^{N}P(W_{j,t}\mid \varphi _{Z_{j,t}})\,d{\boldsymbol {\varphi }}\int _{\boldsymbol {\theta }}\prod _{j=1}^{M}P(\theta _{j};\alpha )\prod _{t=1}^{N}P(Z_{j,t}\mid \theta _{j})\,d{\boldsymbol {\theta }}.\end{aligned}}}

All the ${\displaystyle \theta }$s are independent of each other, and the same holds for all the ${\displaystyle \varphi }$s. So we can treat each ${\displaystyle \theta }$ and each ${\displaystyle \varphi }$ separately. We now focus only on the ${\displaystyle \theta }$ part.

${\displaystyle \int _{\boldsymbol {\theta }}\prod _{j=1}^{M}P(\theta _{j};\alpha )\prod _{t=1}^{N}P(Z_{j,t}\mid \theta _{j})\,d{\boldsymbol {\theta }}=\prod _{j=1}^{M}\int _{\theta _{j}}P(\theta _{j};\alpha )\prod _{t=1}^{N}P(Z_{j,t}\mid \theta _{j})\,d\theta _{j}.}$

We can further focus on only one ${\displaystyle \theta }$ as the following:

${\displaystyle \int _{\theta _{j}}P(\theta _{j};\alpha )\prod _{t=1}^{N}P(Z_{j,t}\mid \theta _{j})\,d\theta _{j}.}$

This is the hidden part of the model for the ${\displaystyle j^{th}}$ document. Now we replace the probabilities in the above equation by the true distribution expressions to write out the explicit equation.

${\displaystyle \int _{\theta _{j}}P(\theta _{j};\alpha )\prod _{t=1}^{N}P(Z_{j,t}\mid \theta _{j})\,d\theta _{j}=\int _{\theta _{j}}{\frac {\Gamma \left(\sum _{i=1}^{K}\alpha _{i}\right)}{\prod _{i=1}^{K}\Gamma (\alpha _{i})}}\prod _{i=1}^{K}\theta _{j,i}^{\alpha _{i}-1}\prod _{t=1}^{N}P(Z_{j,t}\mid \theta _{j})\,d\theta _{j}.}$

Let ${\displaystyle n_{j,r}^{i}}$ be the number of word tokens in the ${\displaystyle j^{th}}$ document with the same word symbol (the ${\displaystyle r^{th}}$ word in the vocabulary) assigned to the ${\displaystyle i^{th}}$ topic. So, ${\displaystyle n_{j,r}^{i}}$ is three-dimensional. If any of the three dimensions is not limited to a specific value, we use a parenthesized dot ${\displaystyle (\cdot )}$ to denote it. For example, ${\displaystyle n_{j,(\cdot )}^{i}}$ denotes the number of word tokens in the ${\displaystyle j^{th}}$ document assigned to the ${\displaystyle i^{th}}$ topic. Thus, the rightmost part of the above equation can be rewritten as:

${\displaystyle \prod _{t=1}^{N}P(Z_{j,t}\mid \theta _{j})=\prod _{i=1}^{K}\theta _{j,i}^{n_{j,(\cdot )}^{i}}.}$

So the ${\displaystyle \theta _{j}}$ integration formula can be changed to:

${\displaystyle \int _{\theta _{j}}{\frac {\Gamma \left(\sum _{i=1}^{K}\alpha _{i}\right)}{\prod _{i=1}^{K}\Gamma (\alpha _{i})}}\prod _{i=1}^{K}\theta _{j,i}^{\alpha _{i}-1}\prod _{i=1}^{K}\theta _{j,i}^{n_{j,(\cdot )}^{i}}\,d\theta _{j}=\int _{\theta _{j}}{\frac {\Gamma \left(\sum _{i=1}^{K}\alpha _{i}\right)}{\prod _{i=1}^{K}\Gamma (\alpha _{i})}}\prod _{i=1}^{K}\theta _{j,i}^{n_{j,(\cdot )}^{i}+\alpha _{i}-1}\,d\theta _{j}.}$

Clearly, the equation inside the integration has the same form as the Dirichlet distribution. By the normalization property of the Dirichlet distribution,

${\displaystyle \int _{\theta _{j}}{\frac {\Gamma \left(\sum _{i=1}^{K}n_{j,(\cdot )}^{i}+\alpha _{i}\right)}{\prod _{i=1}^{K}\Gamma (n_{j,(\cdot )}^{i}+\alpha _{i})}}\prod _{i=1}^{K}\theta _{j,i}^{n_{j,(\cdot )}^{i}+\alpha _{i}-1}\,d\theta _{j}=1.}$

Thus,

{\displaystyle {\begin{aligned}&\int _{\theta _{j}}P(\theta _{j};\alpha )\prod _{t=1}^{N}P(Z_{j,t}\mid \theta _{j})\,d\theta _{j}=\int _{\theta _{j}}{\frac {\Gamma \left(\sum _{i=1}^{K}\alpha _{i}\right)}{\prod _{i=1}^{K}\Gamma (\alpha _{i})}}\prod _{i=1}^{K}\theta _{j,i}^{n_{j,(\cdot )}^{i}+\alpha _{i}-1}\,d\theta _{j}\\[8pt]={}&{\frac {\Gamma \left(\sum _{i=1}^{K}\alpha _{i}\right)}{\prod _{i=1}^{K}\Gamma (\alpha _{i})}}{\frac {\prod _{i=1}^{K}\Gamma (n_{j,(\cdot )}^{i}+\alpha _{i})}{\Gamma \left(\sum _{i=1}^{K}n_{j,(\cdot )}^{i}+\alpha _{i}\right)}}\int _{\theta _{j}}{\frac {\Gamma \left(\sum _{i=1}^{K}n_{j,(\cdot )}^{i}+\alpha _{i}\right)}{\prod _{i=1}^{K}\Gamma (n_{j,(\cdot )}^{i}+\alpha _{i})}}\prod _{i=1}^{K}\theta _{j,i}^{n_{j,(\cdot )}^{i}+\alpha _{i}-1}\,d\theta _{j}\\[8pt]={}&{\frac {\Gamma \left(\sum _{i=1}^{K}\alpha _{i}\right)}{\prod _{i=1}^{K}\Gamma (\alpha _{i})}}{\frac {\prod _{i=1}^{K}\Gamma (n_{j,(\cdot )}^{i}+\alpha _{i})}{\Gamma \left(\sum _{i=1}^{K}n_{j,(\cdot )}^{i}+\alpha _{i}\right)}}.\end{aligned}}}

Now we turn our attention to the ${\displaystyle {\boldsymbol {\varphi }}}$ part. Actually, the derivation of the ${\displaystyle {\boldsymbol {\varphi }}}$ part is very similar to the ${\displaystyle {\boldsymbol {\theta }}}$ part. Here we only list the steps of the derivation:

{\displaystyle {\begin{aligned}&\int _{\boldsymbol {\varphi }}\prod _{i=1}^{K}P(\varphi _{i};\beta )\prod _{j=1}^{M}\prod _{t=1}^{N}P(W_{j,t}\mid \varphi _{Z_{j,t}})\,d{\boldsymbol {\varphi }}\\[8pt]={}&\prod _{i=1}^{K}\int _{\varphi _{i}}P(\varphi _{i};\beta )\prod _{j=1}^{M}\prod _{t=1}^{N}P(W_{j,t}\mid \varphi _{Z_{j,t}})\,d\varphi _{i}\\[8pt]={}&\prod _{i=1}^{K}\int _{\varphi _{i}}{\frac {\Gamma \left(\sum _{r=1}^{V}\beta _{r}\right)}{\prod _{r=1}^{V}\Gamma (\beta _{r})}}\prod _{r=1}^{V}\varphi _{i,r}^{\beta _{r}-1}\prod _{r=1}^{V}\varphi _{i,r}^{n_{(\cdot ),r}^{i}}\,d\varphi _{i}\\[8pt]={}&\prod _{i=1}^{K}\int _{\varphi _{i}}{\frac {\Gamma \left(\sum _{r=1}^{V}\beta _{r}\right)}{\prod _{r=1}^{V}\Gamma (\beta _{r})}}\prod _{r=1}^{V}\varphi _{i,r}^{n_{(\cdot ),r}^{i}+\beta _{r}-1}\,d\varphi _{i}\\[8pt]={}&\prod _{i=1}^{K}{\frac {\Gamma \left(\sum _{r=1}^{V}\beta _{r}\right)}{\prod _{r=1}^{V}\Gamma (\beta _{r})}}{\frac {\prod _{r=1}^{V}\Gamma (n_{(\cdot ),r}^{i}+\beta _{r})}{\Gamma \left(\sum _{r=1}^{V}n_{(\cdot ),r}^{i}+\beta _{r}\right)}}.\end{aligned}}}

For clarity, here we write down the final equation with both ${\displaystyle {\boldsymbol {\varphi }}}$ and ${\displaystyle {\boldsymbol {\theta }}}$ integrated out:

${\displaystyle P({\boldsymbol {Z}},{\boldsymbol {W}};\alpha ,\beta )=\prod _{j=1}^{M}{\frac {\Gamma \left(\sum _{i=1}^{K}\alpha _{i}\right)}{\prod _{i=1}^{K}\Gamma (\alpha _{i})}}{\frac {\prod _{i=1}^{K}\Gamma (n_{j,(\cdot )}^{i}+\alpha _{i})}{\Gamma \left(\sum _{i=1}^{K}n_{j,(\cdot )}^{i}+\alpha _{i}\right)}}\times \prod _{i=1}^{K}{\frac {\Gamma \left(\sum _{r=1}^{V}\beta _{r}\right)}{\prod _{r=1}^{V}\Gamma (\beta _{r})}}{\frac {\prod _{r=1}^{V}\Gamma (n_{(\cdot ),r}^{i}+\beta _{r})}{\Gamma \left(\sum _{r=1}^{V}n_{(\cdot ),r}^{i}+\beta _{r}\right)}}.}$
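
This collapsed joint probability can be evaluated numerically in log space via the log-gamma function. A minimal sketch (assuming NumPy and SciPy, with symmetric ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ for brevity; the count-matrix layout is an illustrative assumption):

```python
import numpy as np
from scipy.special import gammaln

def log_joint(n_dk, n_kv, alpha, beta):
    """log P(Z, W; alpha, beta) with theta and phi integrated out
    (symmetric priors). n_dk: M x K document-topic counts;
    n_kv: K x V topic-word counts."""
    M, K = n_dk.shape
    _, V = n_kv.shape
    # One Dirichlet-multinomial factor per document (the alpha product)...
    doc = (gammaln(K * alpha) - K * gammaln(alpha)
           + gammaln(n_dk + alpha).sum(axis=1)
           - gammaln(n_dk.sum(axis=1) + K * alpha)).sum()
    # ...and one per topic (the beta product).
    topic = (gammaln(V * beta) - V * gammaln(beta)
             + gammaln(n_kv + beta).sum(axis=1)
             - gammaln(n_kv.sum(axis=1) + V * beta)).sum()
    return doc + topic
```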

The goal of Gibbs sampling here is to approximate the distribution of ${\displaystyle P({\boldsymbol {Z}}\mid {\boldsymbol {W}};\alpha ,\beta )}$. Since ${\displaystyle P({\boldsymbol {W}};\alpha ,\beta )}$ is invariant with respect to ${\displaystyle {\boldsymbol {Z}}}$, Gibbs sampling equations can be derived from ${\displaystyle P({\boldsymbol {Z}},{\boldsymbol {W}};\alpha ,\beta )}$ directly. The key point is to derive the following conditional probability:

${\displaystyle P(Z_{(m,n)}\mid {\boldsymbol {Z_{-(m,n)}}},{\boldsymbol {W}};\alpha ,\beta )={\frac {P(Z_{(m,n)},{\boldsymbol {Z_{-(m,n)}}},{\boldsymbol {W}};\alpha ,\beta )}{P({\boldsymbol {Z_{-(m,n)}}},{\boldsymbol {W}};\alpha ,\beta )}},}$

where ${\displaystyle Z_{(m,n)}}$ denotes the ${\displaystyle Z}$ hidden variable of the ${\displaystyle n^{th}}$ word token in the ${\displaystyle m^{th}}$ document, and we further assume that its word symbol is the ${\displaystyle v^{th}}$ word in the vocabulary. ${\displaystyle {\boldsymbol {Z_{-(m,n)}}}}$ denotes all the ${\displaystyle Z}$s but ${\displaystyle Z_{(m,n)}}$. Note that Gibbs sampling only needs a value to be sampled for ${\displaystyle Z_{(m,n)}}$; according to the above probability, we do not need the exact value of

${\displaystyle P\left(Z_{m,n}\mid {\boldsymbol {Z_{-(m,n)}}},{\boldsymbol {W}};\alpha ,\beta \right)}$

but only the ratios among the probabilities of the values that ${\displaystyle Z_{(m,n)}}$ can take. So, the above equation can be simplified as:

{\displaystyle {\begin{aligned}P(&Z_{(m,n)}=v\mid {\boldsymbol {Z_{-(m,n)}}},{\boldsymbol {W}};\alpha ,\beta )\\[8pt]&\propto P(Z_{(m,n)}=v,{\boldsymbol {Z_{-(m,n)}}},{\boldsymbol {W}};\alpha ,\beta )\\[8pt]&=\left({\frac {\Gamma \left(\sum _{i=1}^{K}\alpha _{i}\right)}{\prod _{i=1}^{K}\Gamma (\alpha _{i})}}\right)^{M}\prod _{j\neq m}{\frac {\prod _{i=1}^{K}\Gamma \left(n_{j,(\cdot )}^{i}+\alpha _{i}\right)}{\Gamma \left(\sum _{i=1}^{K}n_{j,(\cdot )}^{i}+\alpha _{i}\right)}}\left({\frac {\Gamma \left(\sum _{r=1}^{V}\beta _{r}\right)}{\prod _{r=1}^{V}\Gamma (\beta _{r})}}\right)^{K}\prod _{i=1}^{K}\prod _{r\neq v}\Gamma \left(n_{(\cdot ),r}^{i}+\beta _{r}\right){\frac {\prod _{i=1}^{K}\Gamma \left(n_{m,(\cdot )}^{i}+\alpha _{i}\right)}{\Gamma \left(\sum _{i=1}^{K}n_{m,(\cdot )}^{i}+\alpha _{i}\right)}}\prod _{i=1}^{K}{\frac {\Gamma \left(n_{(\cdot ),v}^{i}+\beta _{v}\right)}{\Gamma \left(\sum _{r=1}^{V}n_{(\cdot ),r}^{i}+\beta _{r}\right)}}\\[8pt]&\propto {\frac {\prod _{i=1}^{K}\Gamma \left(n_{m,(\cdot )}^{i}+\alpha _{i}\right)}{\Gamma \left(\sum _{i=1}^{K}n_{m,(\cdot )}^{i}+\alpha _{i}\right)}}\prod _{i=1}^{K}{\frac {\Gamma \left(n_{(\cdot ),v}^{i}+\beta _{v}\right)}{\Gamma \left(\sum _{r=1}^{V}n_{(\cdot ),r}^{i}+\beta _{r}\right)}}\\[8pt]&\propto \prod _{i=1}^{K}\Gamma \left(n_{m,(\cdot )}^{i}+\alpha _{i}\right)\prod _{i=1}^{K}{\frac {\Gamma \left(n_{(\cdot ),v}^{i}+\beta _{v}\right)}{\Gamma \left(\sum _{r=1}^{V}n_{(\cdot ),r}^{i}+\beta _{r}\right)}}.\end{aligned}}}

Finally, let ${\displaystyle n_{j,r}^{i,-(m,n)}}$ denote the same count as ${\displaystyle n_{j,r}^{i}}$ but with ${\displaystyle Z_{(m,n)}}$ excluded. The above equation can be further simplified by using the property of the gamma function, ${\displaystyle \Gamma (x+1)=x\Gamma (x)}$. We first split the summation and then merge it back to obtain a ${\displaystyle k}$-independent summation, which can be dropped:

{\displaystyle {\begin{aligned}&\propto \prod _{i\neq k}\Gamma \left(n_{m,(\cdot )}^{i,-(m,n)}+\alpha _{i}\right)\prod _{i\neq k}{\frac {\Gamma \left(n_{(\cdot ),v}^{i,-(m,n)}+\beta _{v}\right)}{\Gamma \left(\sum _{r=1}^{V}n_{(\cdot ),r}^{i,-(m,n)}+\beta _{r}\right)}}\Gamma \left(n_{m,(\cdot )}^{k,-(m,n)}+\alpha _{k}+1\right){\frac {\Gamma \left(n_{(\cdot ),v}^{k,-(m,n)}+\beta _{v}+1\right)}{\Gamma \left(\sum _{r=1}^{V}n_{(\cdot ),r}^{k,-(m,n)}+\beta _{r}+1\right)}}\\[8pt]&=\prod _{i\neq k}\Gamma \left(n_{m,(\cdot )}^{i,-(m,n)}+\alpha _{i}\right)\prod _{i\neq k}{\frac {\Gamma \left(n_{(\cdot ),v}^{i,-(m,n)}+\beta _{v}\right)}{\Gamma \left(\sum _{r=1}^{V}n_{(\cdot ),r}^{i,-(m,n)}+\beta _{r}\right)}}\Gamma \left(n_{m,(\cdot )}^{k,-(m,n)}+\alpha _{k}\right){\frac {\Gamma \left(n_{(\cdot ),v}^{k,-(m,n)}+\beta _{v}\right)}{\Gamma \left(\sum _{r=1}^{V}n_{(\cdot ),r}^{k,-(m,n)}+\beta _{r}\right)}}\left(n_{m,(\cdot )}^{k,-(m,n)}+\alpha _{k}\right){\frac {n_{(\cdot ),v}^{k,-(m,n)}+\beta _{v}}{\sum _{r=1}^{V}n_{(\cdot ),r}^{k,-(m,n)}+\beta _{r}}}\\[8pt]&=\prod _{i}\Gamma \left(n_{m,(\cdot )}^{i,-(m,n)}+\alpha _{i}\right)\prod _{i}{\frac {\Gamma \left(n_{(\cdot ),v}^{i,-(m,n)}+\beta _{v}\right)}{\Gamma \left(\sum _{r=1}^{V}n_{(\cdot ),r}^{i,-(m,n)}+\beta _{r}\right)}}\left(n_{m,(\cdot )}^{k,-(m,n)}+\alpha _{k}\right){\frac {n_{(\cdot ),v}^{k,-(m,n)}+\beta _{v}}{\sum _{r=1}^{V}n_{(\cdot ),r}^{k,-(m,n)}+\beta _{r}}}\\[8pt]&\propto \left(n_{m,(\cdot )}^{k,-(m,n)}+\alpha _{k}\right){\frac {n_{(\cdot ),v}^{k,-(m,n)}+\beta _{v}}{\sum _{r=1}^{V}n_{(\cdot ),r}^{k,-(m,n)}+\beta _{r}}}\end{aligned}}}

Note that the same formula is derived in the article on the Dirichlet-multinomial distribution, as part of a more general discussion of integrating Dirichlet distribution priors out of a Bayesian network.
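
The final proportionality above is all a sampler needs. Below is a minimal sketch of collapsed Gibbs sampling built directly on it (assuming NumPy; the corpus format — a list of integer word-ID arrays — and the hyperparameter values are illustrative assumptions):

```python
import numpy as np

def collapsed_gibbs(docs, K, V, alpha=0.1, beta=0.01, iters=200, seed=0):
    """docs: list of 1-D integer arrays of word IDs in [0, V)."""
    rng = np.random.default_rng(seed)
    M = len(docs)
    n_dk = np.zeros((M, K))            # document-topic counts
    n_kv = np.zeros((K, V))            # topic-word counts
    n_k = np.zeros(K)                  # total tokens per topic
    z = [rng.integers(K, size=len(doc)) for doc in docs]  # random init
    for d, doc in enumerate(docs):
        for w, k in zip(doc, z[d]):
            n_dk[d, k] += 1; n_kv[k, w] += 1; n_k[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for t, w in enumerate(doc):
                k = z[d][t]
                n_dk[d, k] -= 1; n_kv[k, w] -= 1; n_k[k] -= 1  # exclude token
                # The derived update: (n_dk + alpha) * (n_kv + beta) / (n_k + V*beta)
                p = (n_dk[d] + alpha) * (n_kv[:, w] + beta) / (n_k + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][t] = k
                n_dk[d, k] += 1; n_kv[k, w] += 1; n_k[k] += 1  # restore counts
    return z, n_dk, n_kv
```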

## Related problems

### Related models

Topic modeling is a classic solution to the problem of information retrieval using linked data and semantic web technology.[10] Related models and techniques include, among others, latent semantic indexing, independent component analysis, probabilistic latent semantic indexing, non-negative matrix factorization, and the Gamma-Poisson distribution.

The LDA model is highly modular and can therefore be easily extended. The main field of interest is modeling relations between topics. This is achieved by using another distribution on the simplex instead of the Dirichlet. The Correlated Topic Model[11] follows this approach, inducing a correlation structure between topics by using the logistic normal distribution instead of the Dirichlet. Another extension is the hierarchical LDA (hLDA),[12] where topics are joined together in a hierarchy by using the nested Chinese restaurant process, whose structure is learnt from data. LDA can also be extended to a corpus in which a document includes two types of information (e.g., words and names), as in the LDA-dual model.[13] Nonparametric extensions of LDA include the hierarchical Dirichlet process mixture model, which allows the number of topics to be unbounded and learnt from data.

As noted earlier, pLSA is similar to LDA. The LDA model is essentially the Bayesian version of the pLSA model. The Bayesian formulation tends to perform better on small datasets because Bayesian methods can avoid overfitting the data. For very large datasets, the results of the two models tend to converge. One difference is that pLSA uses a variable ${\displaystyle d}$ to represent a document in the training set. So in pLSA, when presented with a document the model hasn't seen before, we fix ${\displaystyle \Pr(w\mid z)}$—the probability of words under topics—to be that learned from the training set and use the same EM algorithm to infer ${\displaystyle \Pr(z\mid d)}$—the topic distribution under ${\displaystyle d}$. Blei argues that this step is cheating because you are essentially refitting the model to the new data.
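
The contrast can be seen with a fitted model: LDA assigns a previously unseen document a posterior topic mixture without refitting, which pLSA's formulation does not directly provide. A minimal sketch (scikit-learn; the toy corpus is an illustrative assumption):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

train = ["the cat drank milk and the kitten meowed",
         "the puppy chewed a bone and barked"]
vec = CountVectorizer().fit(train)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(vec.transform(train))

# A document the model has never seen gets a posterior topic mixture
# through the generative model itself -- no refitting of Pr(w | z):
new_theta = lda.transform(vec.transform(["a new story about a kitten"]))
```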

### Spatial models

In evolutionary biology, it is often natural to assume that the geographic locations of the observed individuals carry some information about their ancestry. This is the rationale behind various models for geo-referenced genetic data.[7][14]

Variations on LDA have been used to automatically put natural images into categories, such as "bedroom" or "forest", by treating an image as a document, and small patches of the image as words;[15] one of the variations is called Spatial Latent Dirichlet Allocation.[16]

## References

1. ^ a b Pritchard, J. K.; Stephens, M.; Donnelly, P. (June 2000). "Inference of population structure using multilocus genotype data". Genetics. 155 (2): 945–959. ISSN 0016-6731. PMC 1461096. PMID 10835412.
2. ^ Falush, D.; Stephens, M.; Pritchard, J. K. (2003). "Inference of population structure using multilocus genotype data: linked loci and correlated allele frequencies". Genetics. 164 (4): 1567–1587. PMC 1462648. PMID 12930761.
3. ^ a b c Blei, David M.; Ng, Andrew Y.; Jordan, Michael I. (January 2003). Lafferty, John (ed.). "Latent Dirichlet Allocation". Journal of Machine Learning Research. 3 (4–5): 993–1022. doi:10.1162/jmlr.2003.3.4-5.993. Archived from the original on 2012-05-01. Retrieved 2006-12-19.
4. ^ Girolami, Mark; Kaban, A. (2003). On an Equivalence between PLSI and LDA. Proceedings of SIGIR 2003. New York: Association for Computing Machinery. ISBN 1-58113-646-3.
5. ^ Griffiths, Thomas L.; Steyvers, Mark (April 6, 2004). "Finding scientific topics". Proceedings of the National Academy of Sciences. 101 (Suppl. 1): 5228–5235. Bibcode:2004PNAS..101.5228G. doi:10.1073/pnas.0307752101. PMC 387300. PMID 14872004.
6. ^ Alexander, David H.; Novembre, John; Lange, Kenneth (2009). "Fast model-based estimation of ancestry in unrelated individuals". Genome Research. 19 (9): 1655–1664. doi:10.1101/gr.094052.109. PMC 2752134. PMID 19648217.
7. ^ a b Guillot, G.; Estoup, A.; Mortier, F.; Cosson, J. (2005). "A spatial statistical model for landscape genetics". Genetics. 170 (3): 1261–1280. doi:10.1534/genetics.104.033803. PMC 1451194. PMID 15520263.
8. ^ Minka, Thomas; Lafferty, John (2002). Expectation-propagation for the generative aspect model (PDF). Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence. San Francisco, CA: Morgan Kaufmann. ISBN 1-55860-897-4.
9. ^ Yao, Limin; Mimno, David; McCallum, Andrew (2009). Efficient methods for topic model inference on streaming document collections. 15th ACM SIGKDD international conference on Knowledge discovery and data mining.
10. ^ Lamba, Manika; Madhusudhan, Margam (2019). "Mapping of topics in DESIDOC Journal of Library and Information Technology, India: a study". Scientometrics. 120 (2): 477–505. doi:10.1007/s11192-019-03137-5. S2CID 174802673.
11. ^ Blei, David M.; Lafferty, John D. (2006). "Correlated topic models" (PDF). Advances in Neural Information Processing Systems. 18.
12. ^ Blei, David M.; Jordan, Michael I.; Griffiths, Thomas L.; Tenenbaum, Joshua B (2004). Hierarchical Topic Models and the Nested Chinese Restaurant Process (PDF). Advances in Neural Information Processing Systems 16: Proceedings of the 2003 Conference. MIT Press. ISBN 0-262-20152-6.
13. ^ Shu, Liangcai; Long, Bo; Meng, Weiyi (2009). A Latent Topic Model for Complete Entity Resolution (PDF). 25th IEEE International Conference on Data Engineering (ICDE 2009).
14. ^ Guillot, G.; Leblois, R.; Coulon, A.; Frantz, A. (2009). "Statistical methods in spatial genetics". Molecular Ecology. 18 (23): 4734–4756. doi:10.1111/j.1365-294X.2009.04410.x. PMID 19878454.
15. ^ Li, Fei-Fei; Perona, Pietro. "A Bayesian Hierarchical Model for Learning Natural Scene Categories". Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). 2: 524–531.
16. ^ Wang, Xiaogang; Grimson, Eric (2007). "Spatial Latent Dirichlet Allocation" (PDF). Proceedings of Neural Information Processing Systems Conference (NIPS).