In probability and statistics, the Dirichlet-multinomial distribution is a probability distribution for a multivariate discrete random variable. It is also called the Dirichlet compound multinomial distribution (DCM) or multivariate Pólya distribution (after George Pólya). It is a compound probability distribution, where a probability vector p is drawn from a Dirichlet distribution with parameter vector $\boldsymbol\alpha$, and a set of discrete samples is drawn from the categorical distribution with probability vector p. The compounding corresponds to a Pólya urn scheme. In document classification, for example, the distribution is used to represent the distributions of word counts for different document types.
Probability mass function
Conceptually, we are doing N independent draws from a categorical distribution with K categories. Let us represent the independent draws as random categorical variables $z_n$ for $n = 1 \dots N$. Let us denote the number of times a particular category $k$ has been seen (for $k = 1 \dots K$) among all the categorical variables as $n_k$. Note that $\sum_k n_k = N$. Then, we have two separate views onto this problem:
- A set of $N$ categorical variables $z_1, \dots, z_N$.
- A single vector-valued variable $\mathbf{x} = (n_1, \dots, n_K)$, distributed according to a multinomial distribution.
The former case is a set of random variables specifying each individual outcome, while the latter is a variable specifying the number of outcomes of each of the K categories. The distinction is important, as the two cases have correspondingly different probability distributions.
The parameter of the categorical distribution is $\mathbf{p} = (p_1, p_2, \dots, p_K)$, where $p_k$ is the probability of drawing value $k$; $\mathbf{p}$ is likewise the parameter of the multinomial distribution $\Pr(\mathbf{x} \mid \mathbf{p})$. Rather than specifying $\mathbf{p}$ directly, we give it a conjugate prior distribution, and hence it is drawn from a Dirichlet distribution with parameter vector $\boldsymbol\alpha = (\alpha_1, \alpha_2, \dots, \alpha_K)$.
By integrating out $\mathbf{p}$, we obtain a compound distribution. However, the form of the distribution is different depending on which view we take.
For a set of individual outcomes
Joint distribution
For a set of categorical variables $\mathbb{Z} = \{z_1, \dots, z_N\}$, the joint distribution is obtained by integrating out $\mathbf{p}$:

$$\Pr(\mathbb{Z} \mid \boldsymbol\alpha) = \int_{\mathbf{p}} \Pr(\mathbb{Z} \mid \mathbf{p}) \Pr(\mathbf{p} \mid \boldsymbol\alpha) \, d\mathbf{p}$$

which results in the following explicit formula:

$$\Pr(\mathbb{Z} \mid \boldsymbol\alpha) = \frac{\Gamma(A)}{\Gamma(N + A)} \prod_{k=1}^K \frac{\Gamma(n_k + \alpha_k)}{\Gamma(\alpha_k)}$$

where $\Gamma$ is the gamma function, with

$$A = \sum_k \alpha_k \quad\text{and}\quad N = \sum_k n_k.$$
Note that, although the variables $z_1, \dots, z_N$ do not appear explicitly in the above formula, they enter in through the $n_k$ values.
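To make the formula concrete, here is a minimal Python sketch (the function name and interface are ours, not from any particular library) that evaluates the log of the joint probability using the log-gamma function:

```python
import numpy as np
from scipy.special import gammaln  # log Gamma, for numerical stability

def dirichlet_multinomial_logpmf(z, alpha):
    """log Pr(Z | alpha) for a sequence of outcomes z (values in 0..K-1),
    with the Dirichlet-distributed probability vector p integrated out."""
    alpha = np.asarray(alpha, dtype=float)
    counts = np.bincount(z, minlength=len(alpha))  # the n_k values
    A, N = alpha.sum(), counts.sum()
    return (gammaln(A) - gammaln(N + A)
            + np.sum(gammaln(counts + alpha) - gammaln(alpha)))
```

For example, with $K = 3$, uniform $\boldsymbol\alpha = (1, 1, 1)$ and the outcome sequence (0, 0, 1), this returns log(1/30), matching the sequential Pólya-urn computation (1/3)(2/4)(1/5).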
Conditional distribution
Another useful formula, particularly in the context of Gibbs sampling, asks what the conditional density of a given variable $z_n$ is, conditioned on all the other variables (which we will denote $\mathbb{Z}^{(-n)}$). It turns out to have an extremely simple form:

$$\Pr(z_n = k \mid \mathbb{Z}^{(-n)}, \boldsymbol\alpha) \propto n_k^{(-n)} + \alpha_k$$

where $n_k^{(-n)}$ specifies the number of counts of category $k$ seen in all variables other than $z_n$.
It may be useful to show how to derive this formula. In general, conditional distributions are proportional to the corresponding joint distributions, so we simply start with the above formula for the joint distribution of all the $z_1, \dots, z_N$ values and then eliminate any factors not dependent on the particular $z_n$ in question. To do this, we make use of the notation $n_k^{(-n)}$ defined above, and note that when $z_n = k$,

$$n_{k'} = \begin{cases} n_{k'}^{(-n)}, & k' \neq k \\ n_{k'}^{(-n)} + 1, & k' = k. \end{cases}$$

We also use the fact that

$$\Gamma(n + 1) = n \, \Gamma(n).$$

Then:

$$\begin{aligned}
\Pr(z_n = k \mid \mathbb{Z}^{(-n)}, \boldsymbol\alpha) &\propto \Pr(z_n = k, \mathbb{Z}^{(-n)} \mid \boldsymbol\alpha) \\
&= \frac{\Gamma(A)}{\Gamma(N + A)} \prod_{k'=1}^K \frac{\Gamma(n_{k'} + \alpha_{k'})}{\Gamma(\alpha_{k'})} \\
&\propto \frac{\Gamma(n_k^{(-n)} + 1 + \alpha_k)}{\Gamma(n_k^{(-n)} + \alpha_k)} = n_k^{(-n)} + \alpha_k
\end{aligned}$$
In general, it is not necessary to worry about the normalizing constant at the time of deriving the equations for conditional distributions. The normalizing constant will be determined as part of the algorithm for sampling from the distribution (see Categorical distribution#Sampling). However, when the conditional distribution is written in the simple form above, it turns out that the normalizing constant assumes a simple form:

$$\Pr(z_n = k \mid \mathbb{Z}^{(-n)}, \boldsymbol\alpha) = \frac{n_k^{(-n)} + \alpha_k}{N + A - 1}$$
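As a sketch of how this is used in practice (again with invented names), the fully normalized conditional can be computed directly from the counts:

```python
import numpy as np

def conditional_probs(z, n, alpha):
    """Pr(z_n = k | Z^{(-n)}, alpha) = (n_k^{(-n)} + alpha_k) / (N + A - 1),
    for all K categories at once; z is an integer array of outcomes."""
    alpha = np.asarray(alpha, dtype=float)
    counts = np.bincount(z, minlength=len(alpha)).astype(float)
    counts[z[n]] -= 1.0                 # leave out z_n itself: n_k^{(-n)}
    return (counts + alpha) / (len(z) + alpha.sum() - 1.0)
```

The denominator follows because the K unnormalized terms sum to (N - 1) + A.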
This formula is closely related to the Chinese restaurant process, which results from taking the limit as $K \to \infty$.
In a Bayesian network
In a larger Bayesian network in which categorical (or so-called "multinomial") distributions occur with Dirichlet distribution priors as part of a larger network, all Dirichlet priors can be collapsed provided that the only nodes depending on them are categorical distributions. The collapsing happens for each Dirichlet-distribution node separately from the others, and occurs regardless of any other nodes that may depend on the categorical distributions. It also occurs regardless of whether the categorical distributions depend on nodes in addition to the Dirichlet priors (although in such a case, those other nodes must remain as additional conditioning factors). Essentially, all of the categorical distributions depending on a given Dirichlet-distribution node become connected into a single Dirichlet-multinomial joint distribution defined by the above formula. The joint distribution as defined this way will depend on the parent(s) of the integrated-out Dirichlet prior nodes, as well as any parent(s) of the categorical nodes other than the Dirichlet prior nodes themselves.
In the following sections, we discuss different configurations commonly found in Bayesian networks. We repeat the probability density from above, and define it using the symbol $\operatorname{DirMult}(\mathbb{Z} \mid \boldsymbol\alpha)$:

$$\operatorname{DirMult}(\mathbb{Z} \mid \boldsymbol\alpha) = \frac{\Gamma(A)}{\Gamma(N + A)} \prod_{k=1}^K \frac{\Gamma(n_k + \alpha_k)}{\Gamma(\alpha_k)}$$
Multiple Dirichlet priors with the same hyperprior
Imagine we have a hierarchical model as follows:

$$\begin{array}{lcl}
\boldsymbol\alpha &\sim& \text{some distribution} \\
\boldsymbol\theta_{d=1 \dots M} &\sim& \operatorname{Dirichlet}_K(\boldsymbol\alpha) \\
z_{d=1 \dots M,\, n=1 \dots N_d} &\sim& \operatorname{Categorical}_K(\boldsymbol\theta_d)
\end{array}$$
In cases like this, we have multiple Dirichlet priors, each of which generates some number of categorical observations (possibly a different number for each prior). The fact that they are all dependent on the same hyperprior, even if this is a random variable as above, makes no difference. The effect of integrating out a Dirichlet prior links the categorical variables attached to that prior, whose joint distribution simply inherits any conditioning factors of the Dirichlet prior. The fact that multiple priors may share a hyperprior makes no difference:

$$\Pr(\mathbb{Z} \mid \boldsymbol\alpha) = \prod_{d=1}^M \operatorname{DirMult}(\mathbb{Z}_d \mid \boldsymbol\alpha)$$

where $\mathbb{Z}_d$ is simply the collection of categorical variables dependent on prior $d$.
Accordingly, the conditional probability distribution can be written as follows:

$$\Pr(z_{dn} = k \mid \mathbb{Z}^{(-dn)}, \boldsymbol\alpha) \propto n_k^{(-dn),d} + \alpha_k$$

where $n_k^{(-dn),d}$ specifically means the number of variables among the set $\mathbb{Z}_d$, excluding $z_{dn}$ itself, that have the value $k$.
Note in particular that we need to count only the variables having the value $k$ that are tied to the variable in question through sharing the same prior. We do not want to count any other variables also having the value $k$.
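A sketch of this grouped counting (hypothetical names; z and d are integer NumPy arrays, with d[i] giving the prior that variable i is attached to):

```python
import numpy as np

def grouped_conditional_probs(z, d, n, alpha):
    """Pr(z_dn = k | ...) proportional to n_k^{(-dn),d} + alpha_k:
    counts are restricted to variables sharing a prior with variable n."""
    alpha = np.asarray(alpha, dtype=float)
    in_group = (d == d[n])                  # variables attached to prior d[n]
    counts = np.bincount(z[in_group], minlength=len(alpha)).astype(float)
    counts[z[n]] -= 1.0                     # exclude z_dn itself
    unnorm = counts + alpha
    return unnorm / unnorm.sum()
```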
Multiple Dirichlet priors with the same hyperprior, with dependent children
Now imagine a slightly more complicated hierarchical model as follows:

$$\begin{array}{lcl}
\boldsymbol\alpha &\sim& \text{some distribution} \\
\boldsymbol\theta_{d=1 \dots M} &\sim& \operatorname{Dirichlet}_K(\boldsymbol\alpha) \\
z_{d=1 \dots M,\, n=1 \dots N_d} &\sim& \operatorname{Categorical}_K(\boldsymbol\theta_d) \\
w_{d=1 \dots M,\, n=1 \dots N_d} &\sim& F(w \mid z_{dn})
\end{array}$$

This model is the same as above, but in addition, each of the categorical variables has a child variable $w_{dn}$ dependent on it. This is typical of a mixture model.
Again, in the joint distribution, only the categorical variables dependent on the same prior are linked into a single Dirichlet-multinomial:

$$\Pr(\mathbb{Z} \mid \boldsymbol\alpha) = \prod_{d=1}^M \operatorname{DirMult}(\mathbb{Z}_d \mid \boldsymbol\alpha)$$
The conditional distribution of the categorical variables, dependent only on their parents and ancestors, would have the identical form as above in the simpler case. However, in Gibbs sampling it is necessary to determine the conditional distribution of a given node $z_{dn}$ conditioned not only on $\mathbb{Z}^{(-dn)}$ and ancestors such as $\boldsymbol\alpha$ but on all the other parameters, such as the child variables $\mathbb{W}$.
Note however that we derived the simplified expression for the conditional distribution above simply by rewriting the expression for the joint probability and removing constant factors. Hence, the same simplification would apply in a larger joint probability expression such as the one in this model, composed of Dirichlet-multinomial densities plus factors for many other random variables dependent on the values of the categorical variables.
This yields the following:

$$\Pr(z_{dn} = k \mid w_{dn}, \mathbb{Z}^{(-dn)}, \mathbb{W}^{(-dn)}, \boldsymbol\alpha) \propto \left(n_k^{(-dn),d} + \alpha_k\right) F(w_{dn} \mid z_{dn} = k)$$

Here the probability density of $F$ appears directly. To do random sampling over $z_{dn}$, we would compute the unnormalized probabilities for all $K$ possibilities for $z_{dn}$ using the above formula, then normalize them and proceed as normal using the algorithm described in the categorical distribution article.
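A sketch of this sampling step (names are ours; loglik stands in for whatever child density F the model specifies, e.g. a Gaussian in a Gaussian mixture):

```python
import numpy as np

def resample_z(n, z, d, w, alpha, loglik):
    """One Gibbs update of z_n in the mixture model above:
    Pr(z_n = k | ...) proportional to (n_k^{(-n),d} + alpha_k) * F(w_n | k).
    loglik(w, k) is the log child density, supplied by the caller."""
    alpha = np.asarray(alpha, dtype=float)
    K = len(alpha)
    in_group = (d == d[n])
    counts = np.bincount(z[in_group], minlength=K).astype(float)
    counts[z[n]] -= 1.0                        # exclude the node itself
    logp = np.log(counts + alpha) + np.array([loglik(w[n], k) for k in range(K)])
    p = np.exp(logp - logp.max())              # exponentiate stably
    z[n] = np.random.choice(K, p=p / p.sum())  # then normalize and sample
```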
NOTE: Correctly speaking, the additional factor $F(w_{dn} \mid z_{dn})$ that appears in the conditional distribution is derived not from the model specification but directly from the joint distribution. This distinction is important when considering models where a given node with a Dirichlet-prior parent has multiple dependent children, particularly when those children are dependent on each other (e.g. if they share a parent that is collapsed out). This is discussed more below.
Multiple Dirichlet priors with shifting prior membership
Now imagine we have a hierarchical model as follows:

$$\begin{array}{lcl}
\boldsymbol\theta &\sim& \text{some distribution} \\
z_{n=1 \dots N} &\sim& \operatorname{Categorical}_K(\boldsymbol\theta) \\
\boldsymbol\phi_{k=1 \dots K} &\sim& \operatorname{Dirichlet}_V(\boldsymbol\beta) \\
w_{n=1 \dots N} &\sim& \operatorname{Categorical}_V(\boldsymbol\phi_{z_n})
\end{array}$$

Here we have a tricky situation where we have multiple Dirichlet priors as before and a set of dependent categorical variables, but the relationship between the priors and dependent variables isn't fixed, unlike before. Instead, the choice of which prior to use is dependent on another random categorical variable. This occurs, for example, in topic models, and indeed the names of the variables above are meant to correspond to those in latent Dirichlet allocation. In this case, the set $\mathbb{W}$ is a set of words, each of which is drawn from one of $K$ possible topics, where each topic is a Dirichlet prior over a vocabulary of $V$ possible words, specifying the frequency of different words in the topic. However, the topic membership of a given word isn't fixed; rather, it's determined from a set of latent variables $\mathbb{Z}$. There is one latent variable per word, a $K$-dimensional categorical variable $z_n$ specifying the topic the word belongs to.
In this case, all variables dependent on a given prior are tied together (i.e. correlated) in a group, as before — specifically, all words belonging to a given topic are linked. In this case, however, the group membership shifts, in that the words are not fixed to a given topic but the topic depends on the value of a latent variable associated with the word. However, note that the definition of the Dirichlet-multinomial density doesn't actually depend on the number of categorical variables in a group (i.e. the number of words in the document generated from a given topic), but only on the counts of how many variables in the group have a given value (i.e. among all the word tokens generated from a given topic, how many of them are a given word). Hence, we can still write an explicit formula for the joint distribution:

$$\Pr(\mathbb{W} \mid \mathbb{Z}, \boldsymbol\beta) = \prod_{k=1}^K \operatorname{DirMult}(\mathbb{W}_k \mid \boldsymbol\beta) = \prod_{k=1}^K \left[ \frac{\Gamma(B)}{\Gamma(N_k + B)} \prod_{v=1}^V \frac{\Gamma(n_v^k + \beta_v)}{\Gamma(\beta_v)} \right]$$

where $B = \sum_v \beta_v$ and $N_k$ is the number of word tokens assigned to topic $k$. Here we use the notation $n_v^k$ to denote the number of word tokens whose value is word symbol $v$ and which belong to topic $k$.
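A sketch of evaluating this joint distribution from the topic-restricted counts (invented names; z and w are parallel integer arrays of topic assignments and word ids):

```python
import numpy as np
from scipy.special import gammaln

def words_joint_logprob(z, w, beta, K):
    """log Pr(W | Z, beta): one Dirichlet-multinomial factor per topic,
    depending only on the counts n_v^k of word v among topic-k tokens."""
    beta = np.asarray(beta, dtype=float)
    V, B = len(beta), beta.sum()
    total = 0.0
    for k in range(K):
        n_vk = np.bincount(w[z == k], minlength=V)   # counts for topic k
        total += (gammaln(B) - gammaln(n_vk.sum() + B)
                  + np.sum(gammaln(n_vk + beta) - gammaln(beta)))
    return total
```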
The conditional distribution still has the same form:

$$\Pr(w_n = v \mid \mathbb{W}^{(-n)}, \mathbb{Z}, \boldsymbol\beta) \propto n_v^{k,(-n)} + \beta_v \qquad \text{where } k = z_n$$

Here again, only the categorical variables for words belonging to a given topic are linked (even though this linking will depend on the assignments of the latent variables), and hence the word counts need to be over only the words generated by a given topic. Hence the symbol $n_v^{k,(-n)}$, which is the count of word tokens having the word symbol $v$, but only among those generated by topic $k$, and excluding the word $w_n$ itself, whose distribution is being described.
(Note that the reason why excluding the word itself is necessary, and why it even makes sense at all, is that in a Gibbs sampling context, we repeatedly resample the values of each random variable, after having run through and sampled all previous variables. Hence the variable will already have a value, and we need to exclude this existing value from the various counts that we make use of.)
A combined example: LDA topic models
The model is as follows:

$$\begin{array}{lcl}
\boldsymbol\alpha &\sim& \text{a Dirichlet hyperprior, either a constant or a random variable} \\
\boldsymbol\beta &\sim& \text{a Dirichlet hyperprior, either a constant or a random variable} \\
\boldsymbol\theta_{d=1 \dots M} &\sim& \operatorname{Dirichlet}_K(\boldsymbol\alpha) \\
\boldsymbol\phi_{k=1 \dots K} &\sim& \operatorname{Dirichlet}_V(\boldsymbol\beta) \\
z_{d=1 \dots M,\, n=1 \dots N_d} &\sim& \operatorname{Categorical}_K(\boldsymbol\theta_d) \\
w_{d=1 \dots M,\, n=1 \dots N_d} &\sim& \operatorname{Categorical}_V(\boldsymbol\phi_{z_{dn}})
\end{array}$$
Essentially we combine the previous three scenarios: We have categorical variables dependent on multiple priors sharing a hyperprior; we have categorical variables with dependent children (the latent variable topic identities); and we have categorical variables with shifting membership in multiple priors sharing a hyperprior. Note also that in the standard LDA model, the words are completely observed, and hence we never need to resample them. (However, Gibbs sampling would equally be possible if only some or none of the words were observed. In such a case, we would want to initialize the distribution over the words in some reasonable fashion — e.g. from the output of some process that generates sentences, such as a machine translation model — in order for the resulting posterior latent variable distributions to make any sense.)
Using the above formulas, we can write down the conditional probabilities directly:

$$\begin{array}{lcl}
\Pr(w_{dn} = v \mid \mathbb{W}^{(-dn)}, \mathbb{Z}, \boldsymbol\beta) &\propto& n_v^{k,(-dn)} + \beta_v \qquad \text{where } k = z_{dn} \\
\Pr(z_{dn} = k \mid \mathbb{Z}^{(-dn)}, w_{dn} = v, \mathbb{W}^{(-dn)}, \boldsymbol\alpha, \boldsymbol\beta) &\propto& \left(n_k^{(-dn),d} + \alpha_k\right) \Pr(w_{dn} = v \mid \mathbb{W}^{(-dn)}, z_{dn} = k, \mathbb{Z}^{(-dn)}, \boldsymbol\beta)
\end{array}$$

Here we have defined the counts more explicitly to clearly separate counts of words and counts of topics:

$$\begin{array}{lcl}
n_v^{k,(-dn)} &=& \text{number of tokens of word } v \text{ generated by topic } k \text{, excluding the token at position } (d,n) \\
n_k^{(-dn),d} &=& \text{number of tokens in document } d \text{ with topic } k \text{, excluding the token at position } (d,n)
\end{array}$$
Note that, as in the scenario above with categorical variables with dependent children, the conditional probability of those dependent children appears in the definition of the parent's conditional probability. In this case, each latent variable $z_{dn}$ has only a single dependent child word $w_{dn}$, so only one such term appears. (If there were multiple dependent children, all would have to appear in the parent's conditional probability, regardless of whether the dependent children of a given parent also have other parents. In a case where a child has multiple parents, the conditional probability for that child appears in the conditional probability definition of each of its parents.)
Note, critically, however, that the definition above specifies only the unnormalized conditional probability of the words, while the topic conditional probability requires the actual (i.e. normalized) probability. Hence we have to normalize by summing over all word symbols:

$$\Pr(z_{dn} = k \mid \mathbb{Z}^{(-dn)}, \mathbb{W}, \boldsymbol\alpha, \boldsymbol\beta) \propto \left(n_k^{(-dn),d} + \alpha_k\right) \frac{n_v^{k,(-dn)} + \beta_v}{\sum_{v'=1}^V \left(n_{v'}^{k,(-dn)} + \beta_{v'}\right)} \qquad \text{where } v = w_{dn}$$
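Together, these conditionals give the standard collapsed Gibbs sampler for LDA. A compact sketch (the array names and bookkeeping scheme are ours):

```python
import numpy as np

def lda_gibbs_sweep(docs, z, ndk, nkv, nk, alpha, beta):
    """One full sweep of collapsed Gibbs sampling for LDA.
    docs: list of arrays of word ids, one array per document
    z:    matching list of arrays of topic assignments
    ndk:  (M, K) topic counts per document; nkv: (K, V) word counts per topic
    nk:   (K,) total token count per topic (row sums of nkv)"""
    K, B = ndk.shape[1], beta.sum()
    for d, words in enumerate(docs):
        for i, v in enumerate(words):
            k = z[d][i]
            ndk[d, k] -= 1; nkv[k, v] -= 1; nk[k] -= 1  # the "(-dn)" counts
            # (n_k^{(-dn),d} + alpha_k) * (n_v^{k,(-dn)} + beta_v) / (N_k + B)
            p = (ndk[d] + alpha) * (nkv[:, v] + beta[v]) / (nk + B)
            k = np.random.choice(K, p=p / p.sum())
            z[d][i] = k
            ndk[d, k] += 1; nkv[k, v] += 1; nk[k] += 1  # put the token back
```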
It's also worth making another point in detail, which concerns the second factor above in the conditional probability. Remember that the conditional distribution in general is derived from the joint distribution, and simplified by removing terms not dependent on the domain of the conditional (the part on the left side of the vertical bar). When a node $z$ has dependent children, there will be one or more factors in the joint distribution that are dependent on $z$. Usually there is one factor for each dependent node, and it has the same density function as the distribution appearing in the mathematical definition. However, if a dependent node has another parent as well (a co-parent), and that co-parent is collapsed out, then the node will become dependent on all other nodes sharing that co-parent, and in place of multiple terms for each such node, the joint distribution will have only one joint term. We have exactly that situation here. Even though $z_{dn}$ has only one child $w_{dn}$, that child has a Dirichlet co-parent $\boldsymbol\phi_k$ that we have collapsed out, which induces a Dirichlet-multinomial over the entire set of nodes $\mathbb{W}_k$.
It happens in this case that this issue does not cause major problems, precisely because of the one-to-one relationship between $z_{dn}$ and $w_{dn}$. We can rewrite the joint distribution as follows:

$$\begin{aligned}
\Pr(w_{dn} \mid \mathbb{W}^{(-dn)}, z_{dn}, \mathbb{Z}^{(-dn)}, \boldsymbol\beta) &= \frac{\Pr(w_{dn}, \mathbb{W}^{(-dn)} \mid z_{dn}, \mathbb{Z}^{(-dn)}, \boldsymbol\beta)}{\Pr(\mathbb{W}^{(-dn)} \mid z_{dn}, \mathbb{Z}^{(-dn)}, \boldsymbol\beta)} \\
&= \frac{\Pr(w_{dn}, \mathbb{W}^{(-dn)} \mid z_{dn}, \mathbb{Z}^{(-dn)}, \boldsymbol\beta)}{\Pr(\mathbb{W}^{(-dn)} \mid \mathbb{Z}^{(-dn)}, \boldsymbol\beta)} \\
&\propto \Pr(w_{dn}, \mathbb{W}^{(-dn)} \mid z_{dn}, \mathbb{Z}^{(-dn)}, \boldsymbol\beta)
\end{aligned}$$

where we note that in the set $\mathbb{W}^{(-dn)}$ (i.e. the set of nodes excluding $w_{dn}$), none of the nodes have $z_{dn}$ as a parent. Hence it can be eliminated as a conditioning factor (line 2), meaning that the entire factor can be eliminated from the conditional distribution (line 3).
A second example: Naive Bayes document clustering
Here is another model, with a different set of issues. This is an implementation of an unsupervised Naive Bayes model for document clustering. That is, we would like to classify documents into multiple categories (e.g. "spam" or "non-spam", or "scientific journal article", "newspaper article about finance", "newspaper article about politics", "love letter") based on textual content. However, we don't already know the correct category of any documents; instead, we want to cluster them based on mutual similarities. (For example, a set of scientific articles will tend to be similar to each other in word use but very different from a set of love letters.) This is a type of unsupervised learning. (The same technique can be used for doing semi-supervised learning, i.e. where we know the correct category of some fraction of the documents and would like to use this knowledge to help in clustering the remaining documents.)
The model is as follows:

$$\begin{array}{lcl}
\boldsymbol\theta &\sim& \operatorname{Dirichlet}_K(\boldsymbol\alpha) \\
\boldsymbol\phi_{k=1 \dots K} &\sim& \operatorname{Dirichlet}_V(\boldsymbol\beta) \\
z_{d=1 \dots M} &\sim& \operatorname{Categorical}_K(\boldsymbol\theta) \\
w_{d=1 \dots M,\, n=1 \dots N_d} &\sim& \operatorname{Categorical}_V(\boldsymbol\phi_{z_d})
\end{array}$$
In many ways, this model is very similar to the LDA topic model described above, but it assumes one topic per document rather than one topic per word with each document consisting of a mixture of topics. This can be seen clearly in the above model, which is identical to the LDA model except that there is only one latent variable per document instead of one per word. Once again, we assume that we are collapsing all of the Dirichlet priors.
The conditional probability for a given word is almost identical to the LDA case. Once again, all words generated by the same Dirichlet prior are interdependent. In this case, this means the words of all documents having a given label — again, this can vary depending on the label assignments, but all we care about is the total counts. Hence:

$$\Pr(w_{dn} = v \mid \mathbb{W}^{(-dn)}, \mathbb{Z}, \boldsymbol\beta) \propto n_v^{k,(-dn)} + \beta_v \qquad \text{where } k = z_d$$
However, there is a critical difference in the conditional distribution of the latent variables for the label assignments, which is that a given label variable $z_d$ has multiple child nodes instead of just one — in particular, the nodes $w_{d1}, \dots, w_{dN_d}$ for all the words in the label's document. This relates closely to the discussion above about the factor $F(w_{dn} \mid z_{dn})$ that stems from the joint distribution. In this case, the joint distribution needs to be taken over all words in all documents containing a label assignment equal to the value of $z_d$, and has the value of a Dirichlet-multinomial distribution. Furthermore, we cannot reduce this joint distribution down to a conditional distribution over a single word. Rather, we can reduce it down only to a smaller joint conditional distribution over the words in the document for the label in question, and hence we cannot simplify it using the trick above that yields a simple sum of expected count and prior. Although it is in fact possible to rewrite it as a product of such individual sums, the number of factors is very large, and is not clearly more efficient than directly computing the Dirichlet-multinomial distribution probability.
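A sketch of this label update (names are ours; we also assume, as one common choice not fixed by the text above, that the label distribution itself has a collapsed Dirichlet prior with parameter gamma, contributing the factor m_k + gamma_k):

```python
import numpy as np
from scipy.special import gammaln

def label_log_conditional(doc_counts, nkv, nk, m_k, gamma, beta):
    """Unnormalized log Pr(z_d = k | ...) for every label k at once.
    doc_counts: (V,) word counts of document d
    nkv: (K, V) word counts per label, already excluding document d
    nk:  (K,)  total tokens per label, already excluding document d
    m_k: (K,)  number of documents per label, already excluding document d"""
    B, Nd = beta.sum(), doc_counts.sum()
    # Dirichlet-multinomial probability of the whole document under label k
    log_words = (gammaln(nk + B) - gammaln(nk + Nd + B)
                 + np.sum(gammaln(nkv + doc_counts + beta)
                          - gammaln(nkv + beta), axis=1))
    return np.log(m_k + gamma) + log_words
```

As the text notes, the whole document's Dirichlet-multinomial factor must be evaluated per candidate label; it does not collapse to a single count-plus-prior term.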
For a multinomial distribution over category counts
For a multinomial distribution over category counts $\mathbf{x} = (n_1, \dots, n_K)$, the marginal distribution is again obtained by integrating out $\mathbf{p}$:

$$\Pr(\mathbf{x} \mid \boldsymbol\alpha) = \int_{\mathbf{p}} \Pr(\mathbf{x} \mid \mathbf{p}) \Pr(\mathbf{p} \mid \boldsymbol\alpha) \, d\mathbf{p}$$

which results in the following explicit formula:

$$\Pr(\mathbf{x} \mid \boldsymbol\alpha) = \frac{N!}{\prod_k n_k!} \, \frac{\Gamma(A)}{\Gamma(N + A)} \prod_{k=1}^K \frac{\Gamma(n_k + \alpha_k)}{\Gamma(\alpha_k)}$$

where $A$ is defined as the sum $A = \sum_k \alpha_k$. Note that this differs crucially from the above formula in having an extra term at the front that looks like the factor at the front of a multinomial distribution. Another form for this same compound distribution, written more compactly in terms of the beta function, $\mathrm{B}$, is as follows:

$$\Pr(\mathbf{x} \mid \boldsymbol\alpha) = \frac{N \, \mathrm{B}(A, N)}{\prod_{k : n_k > 0} n_k \, \mathrm{B}(\alpha_k, n_k)}$$
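A sketch of this count-vector form (our function name; recent versions of SciPy also provide scipy.stats.dirichlet_multinomial, which can serve as a cross-check):

```python
import numpy as np
from scipy.special import gammaln

def dirmult_counts_logpmf(x, alpha):
    """log Pr(x | alpha) for a count vector x = (n_1, ..., n_K): the
    individual-outcome formula times the multinomial coefficient."""
    x = np.asarray(x, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    A, N = alpha.sum(), x.sum()
    log_multinom = gammaln(N + 1) - np.sum(gammaln(x + 1))  # log N!/prod n_k!
    return (log_multinom + gammaln(A) - gammaln(N + A)
            + np.sum(gammaln(x + alpha) - gammaln(alpha)))
```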
Related distributions
The one-dimensional version of the multivariate Pólya distribution is known as the beta-binomial distribution.
See also
- Beta-binomial distribution
- Chinese restaurant process
- Dirichlet process
- Generalized Dirichlet distribution