# Chinese restaurant process

In probability theory, the Chinese restaurant process is a discrete-time stochastic process, analogous to seating customers at tables in a restaurant. Imagine a restaurant with an infinite number of circular tables, each with infinite capacity. Customer 1 sits at the first table. The next customer either joins customer 1 at the same table or sits at a new table. Each subsequent customer chooses either an occupied table, with probability proportional to the number of customers already seated there (so they are more likely to sit at a table with many customers than few), or an unoccupied table. At time n, the n customers have been partitioned among m ≤ n tables (the blocks of the partition). The results of this process are exchangeable, meaning the order in which the customers sit does not affect the probability of the final distribution. This property greatly simplifies a number of problems in population genetics, linguistic analysis, and image recognition.

David J. Aldous attributes the restaurant analogy to Jim Pitman and Lester Dubins in his 1985 Saint-Flour lecture notes.[1]

## Formal definition

For any positive integer ${\displaystyle n}$, let ${\displaystyle {\mathcal {P}}_{n}}$ denote the set of all partitions of the set ${\displaystyle \{1,2,3,...,n\}\triangleq [n]}$. The Chinese restaurant process takes values in the infinite Cartesian product ${\displaystyle \prod _{n\geq 1}{\mathcal {P}}_{n}}$.

The value of the process at time ${\displaystyle n}$ is a partition ${\displaystyle B_{n}}$ of the set ${\displaystyle [n]}$, whose probability distribution is determined as follows. At time ${\displaystyle n=1}$, the trivial partition ${\displaystyle B_{1}=\{\{1\}\}}$ is obtained (with probability one). At time ${\displaystyle n+1}$ the element "${\displaystyle n+1}$" is either:

1. added to one of the blocks of the partition ${\displaystyle B_{n}}$, where each block is chosen with probability ${\displaystyle |b|/(n+1)}$ where ${\displaystyle |b|}$ is the size of the block (i.e. number of elements), or
2. added to the partition ${\displaystyle B_{n}}$ as a new singleton block, with probability ${\displaystyle 1/(n+1)}$.
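The two rules above can be simulated directly. The following is a minimal Python sketch (the function name `crp_partition` and the `seed` parameter are ours, for illustration only):

```python
import random

def crp_partition(n, seed=None):
    """Simulate the (one-parameter, theta = 1) Chinese restaurant process:
    with i customers already seated, customer i+1 joins an existing block b
    with probability |b|/(i+1), or opens a new block with probability 1/(i+1)."""
    rng = random.Random(seed)
    blocks = [[1]]  # customer 1 always sits at the first table
    for i in range(1, n):  # customers 2..n arrive; i customers are seated
        r = rng.uniform(0, i + 1)
        cum = 0.0
        for b in blocks:
            cum += len(b)
            if r < cum:
                b.append(i + 1)
                break
        else:  # remaining mass 1/(i+1): open a new table
            blocks.append([i + 1])
    return blocks

partition = crp_partition(10, seed=0)
```

The returned list of blocks is always a partition of {1, …, n}, whatever the random choices.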

The random partition so generated has some special properties. It is exchangeable in the sense that relabeling ${\displaystyle \{1,...,n\}}$ does not change the distribution of the partition, and it is consistent in the sense that the law of the partition of ${\displaystyle [n-1]}$ obtained by removing the element ${\displaystyle n}$ from the random partition ${\displaystyle B_{n}}$ is the same as the law of the random partition ${\displaystyle B_{n-1}}$.

The probability assigned to any particular partition (ignoring the order in which customers sit around any particular table) is

${\displaystyle \Pr(B_{n}=B)={\frac {\prod _{b\in B}(|b|-1)!}{n!}},\qquad B\in {\mathcal {P}}_{n}}$

where ${\displaystyle b}$ is a block in the partition ${\displaystyle B}$ and ${\displaystyle |b|}$ is the size of ${\displaystyle b}$.
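For small ${\displaystyle n}$ this formula can be checked by exhaustively enumerating every sequence of seating choices with exact rational arithmetic. A sketch (function names are ours; partitions are encoded as frozensets of frozensets purely for hashability):

```python
from fractions import Fraction
from math import factorial

def crp_distribution(n):
    """Exact distribution of B_n under the theta = 1 process, computed by
    recursively expanding every seating choice with its probability."""
    dist = {}
    def expand(i, blocks, prob):
        if i > n:
            key = frozenset(frozenset(b) for b in blocks)
            dist[key] = dist.get(key, Fraction(0)) + prob
            return
        for j in range(len(blocks)):  # element i joins existing block j
            new_blocks = [list(b) for b in blocks]
            new_blocks[j].append(i)
            expand(i + 1, new_blocks, prob * Fraction(len(blocks[j]), i))
        expand(i + 1, blocks + [[i]], prob * Fraction(1, i))  # new block
    expand(2, [[1]], Fraction(1))
    return dist

def partition_prob(blocks, n):
    """Closed form Pr(B_n = B) = (prod over blocks of (|b|-1)!) / n!."""
    num = 1
    for b in blocks:
        num *= factorial(len(b) - 1)
    return Fraction(num, factorial(n))
```

For ${\displaystyle n=3}$ the enumeration produces the five partitions of {1, 2, 3}, with probabilities matching the closed form and summing to 1.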

The definition can be generalized by introducing a parameter ${\displaystyle \theta >0}$ which modifies the probability of the new customer sitting at a new table to ${\displaystyle {\frac {\theta }{n+\theta }}}$ and correspondingly modifies the probability of them sitting at a table of size ${\displaystyle |b|}$ to ${\displaystyle {\frac {|b|}{n+\theta }}}$. The vanilla process introduced above can be recovered by setting ${\displaystyle \theta =1}$. Intuitively, ${\displaystyle \theta }$ can be interpreted as the effective number of customers sitting at the first empty table.

## Distribution

| Chinese restaurant table distribution | |
| --- | --- |
| Parameters | ${\displaystyle \theta >0}$, ${\displaystyle m\in \{0,1,2,\ldots \}}$ |
| Support | ${\displaystyle \ell \in \{0,1,2,\ldots ,m\}}$ |
| PMF | ${\displaystyle {\frac {\Gamma (\theta )}{\Gamma (m+\theta )}}\vert s(m,\ell )\vert \theta ^{\ell }}$ |
| Mean | ${\displaystyle \theta (\psi (\theta +m)-\psi (\theta ))}$ (see digamma function) |

The Chinese restaurant table distribution (CRT) is the probability distribution on the number of tables in the Chinese restaurant process.[2] It can be understood as the sum of ${\displaystyle m}$ independent Bernoulli random variables, each with a different bias:

{\displaystyle {\begin{aligned}L&=\sum _{n=1}^{m}b_{n}\\[4pt]b_{n}&\sim \operatorname {Bernoulli} \left({\frac {\theta }{n-1+\theta }}\right)\end{aligned}}}
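This Bernoulli-sum representation translates directly into a sampler. A sketch in Python (function name and `seed` parameter are ours):

```python
import random

def sample_num_tables(m, theta, seed=None):
    """Sample the number of occupied tables after m customers as a sum of
    independent Bernoulli(theta / (n - 1 + theta)) indicators: the n-th
    customer opens a new table exactly when its indicator equals 1."""
    rng = random.Random(seed)
    return sum(rng.random() < theta / (n - 1 + theta) for n in range(1, m + 1))
```

Note that the first indicator has success probability ${\displaystyle \theta /\theta =1}$, reflecting that customer 1 always opens a table, so the sample is always between 1 and ${\displaystyle m}$.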

The probability mass function of ${\displaystyle L}$ is given by [3]

${\displaystyle f(\ell )={\frac {\Gamma (\theta )}{\Gamma (m+\theta )}}|s(m,\ell )|\theta ^{\ell }}$

where ${\displaystyle s}$ denotes Stirling numbers of the first kind.
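The unsigned Stirling numbers satisfy the recurrence ${\displaystyle |s(i,\ell )|=|s(i-1,\ell -1)|+(i-1)|s(i-1,\ell )|}$, which gives a direct way to evaluate the probability mass function. A sketch (function names are ours):

```python
from math import gamma

def stirling_unsigned(m):
    """Table of unsigned Stirling numbers of the first kind |s(i, l)|,
    filled via |s(i, l)| = |s(i-1, l-1)| + (i-1) |s(i-1, l)|."""
    s = [[0] * (m + 1) for _ in range(m + 1)]
    s[0][0] = 1
    for i in range(1, m + 1):
        for l in range(1, i + 1):
            s[i][l] = s[i - 1][l - 1] + (i - 1) * s[i - 1][l]
    return s

def crt_pmf(l, m, theta):
    """f(l) = Gamma(theta)/Gamma(m + theta) * |s(m, l)| * theta**l."""
    return gamma(theta) / gamma(m + theta) * stirling_unsigned(m)[m][l] * theta**l
```

Since ${\displaystyle \sum _{\ell }|s(m,\ell )|\theta ^{\ell }=\theta (\theta +1)\cdots (\theta +m-1)=\Gamma (m+\theta )/\Gamma (\theta )}$, the pmf sums to 1, which serves as a quick numerical sanity check.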

## Generalization

This construction can be generalized to a model with two parameters, ${\displaystyle \theta }$ and ${\displaystyle \alpha }$,[4][5] commonly called the strength (or concentration) and discount parameters respectively. At time ${\displaystyle n+1}$, the next customer to arrive finds ${\displaystyle |B|}$ occupied tables and decides to sit at an empty table with probability

${\displaystyle {\frac {\theta +|B|\alpha }{n+\theta }},}$

or at an occupied table ${\displaystyle b}$ of size ${\displaystyle |b|}$ with probability

${\displaystyle {\frac {|b|-\alpha }{n+\theta }}.}$

For the construction to define a valid probability measure it is necessary to suppose that either ${\displaystyle \alpha <0}$ and ${\displaystyle \theta =-L\alpha }$ for some ${\displaystyle L\in \{1,2,\ldots \}}$; or that ${\displaystyle 0\leq \alpha <1}$ and ${\displaystyle \theta >-\alpha }$.
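The two-parameter seating rule can be simulated in the same way as the one-parameter process. A sketch assuming the regime ${\displaystyle 0\leq \alpha <1}$, ${\displaystyle \theta >-\alpha }$ (function name is ours):

```python
import random

def crp_two_param(n, theta, alpha, seed=None):
    """Simulate the two-parameter (theta, alpha) Chinese restaurant process.
    With k tables occupied and i customers seated, the next customer opens a
    new table with probability (theta + k*alpha)/(i + theta) and joins a
    table of size s with probability (s - alpha)/(i + theta)."""
    rng = random.Random(seed)
    sizes = []  # occupied-table sizes
    for i in range(n):
        k = len(sizes)
        r = rng.uniform(0, i + theta)
        if r < theta + k * alpha:
            sizes.append(1)  # new table
        else:
            r -= theta + k * alpha
            for j, s in enumerate(sizes):
                r -= s - alpha
                if r < 0:
                    sizes[j] += 1
                    break
    return sizes
```

The two branch masses total ${\displaystyle \theta +k\alpha +\textstyle \sum _{b}(|b|-\alpha )=\theta +i}$, so the probabilities are properly normalized at every step.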

Under this model the probability assigned to any particular partition ${\displaystyle B}$ of ${\displaystyle [n]}$, in terms of the Pochhammer k-symbol, is

${\displaystyle \Pr(B_{n}=B)={\frac {(\theta +\alpha )_{|B|-1,\alpha }}{(\theta +1)_{n-1,1}}}\prod _{b\in B}(1-\alpha )_{|b|-1,1}}$

where, by convention, ${\displaystyle (a)_{0,c}=1}$, and for ${\displaystyle b>0}$

${\displaystyle (a)_{b,c}=\prod _{i=0}^{b-1}(a+ic)={\begin{cases}a^{b}&{\text{if }}c=0,\\\\{\dfrac {c^{b}\,\Gamma (a/c+b)}{\Gamma (a/c)}}&{\text{otherwise}}.\end{cases}}}$
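The Pochhammer k-symbol is a one-liner in code; the empty product convention ${\displaystyle (a)_{0,c}=1}$ falls out automatically. A sketch (function name is ours):

```python
from math import prod

def pochhammer_k(a, b, c):
    """(a)_{b,c} = prod_{i=0}^{b-1} (a + i*c); the empty product (b = 0)
    gives (a)_{0,c} = 1, matching the convention in the text."""
    return prod(a + i * c for i in range(b))
```

For example, ${\displaystyle (1.5)_{3,0.5}=1.5\cdot 2.0\cdot 2.5=7.5}$, which agrees with the Gamma-function branch ${\displaystyle 0.5^{3}\,\Gamma (6)/\Gamma (3)=0.125\cdot 120/2=7.5}$.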

Thus, for the case when ${\displaystyle \theta >0}$ the partition probability can be expressed in terms of the Gamma function as

${\displaystyle \Pr(B_{n}=B)={\frac {\Gamma (\theta )}{\Gamma (\theta +n)}}{\dfrac {\alpha ^{|B|}\,\Gamma (\theta /\alpha +|B|)}{\Gamma (\theta /\alpha )}}\prod _{b\in B}{\dfrac {\Gamma (|b|-\alpha )}{\Gamma (1-\alpha )}}.}$
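The Pochhammer-symbol form and the Gamma-function form of the partition probability can be checked against each other numerically. A sketch for the regime ${\displaystyle \theta >0}$, ${\displaystyle 0<\alpha <1}$, taking a partition as its list of block sizes (function names are ours):

```python
from math import gamma, prod

def poch(a, b, c):
    """Pochhammer k-symbol (a)_{b,c} = prod_{i=0}^{b-1} (a + i*c)."""
    return prod(a + i * c for i in range(b))

def pr_partition_poch(sizes, theta, alpha):
    """Partition probability via the Pochhammer k-symbol form."""
    n, k = sum(sizes), len(sizes)
    return (poch(theta + alpha, k - 1, alpha) / poch(theta + 1, n - 1, 1)
            * prod(poch(1 - alpha, s - 1, 1) for s in sizes))

def pr_partition_gamma(sizes, theta, alpha):
    """Partition probability via the Gamma-function form (needs alpha > 0)."""
    n, k = sum(sizes), len(sizes)
    return (gamma(theta) / gamma(theta + n)
            * alpha**k * gamma(theta / alpha + k) / gamma(theta / alpha)
            * prod(gamma(s - alpha) / gamma(1 - alpha) for s in sizes))
```

The two functions agree to floating-point precision for any valid block-size list, reflecting the identity ${\displaystyle (a)_{b,c}=c^{b}\,\Gamma (a/c+b)/\Gamma (a/c)}$.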

In the one-parameter case, where ${\displaystyle \alpha }$ is zero, this simplifies to

${\displaystyle \Pr(B_{n}=B)={\frac {\Gamma (\theta )\,\theta ^{|B|}}{\Gamma (\theta +n)}}\prod _{b\in B}\Gamma (|b|).}$

Or, when ${\displaystyle \theta }$ is zero,

${\displaystyle \Pr(B_{n}=B)={\frac {\alpha ^{|B|-1}\,\Gamma (|B|)}{\Gamma (n)}}\prod _{b\in B}{\frac {\Gamma (|b|-\alpha )}{\Gamma (1-\alpha )}}.}$

As before, the probability assigned to any particular partition depends only on the block sizes, so as before the random partition is exchangeable in the sense described above. The consistency property still holds, as before, by construction.

If ${\displaystyle \alpha =0}$, the probability distribution of the random partition of the integer ${\displaystyle n}$ thus generated is the Ewens distribution with parameter ${\displaystyle \theta }$, used in population genetics and the unified neutral theory of biodiversity.

*Animation of a Chinese restaurant process with parameters ${\displaystyle \theta =0.5,\ \alpha =0}$. Tables are hidden once their customers can no longer be displayed; however, every table has infinitely many seats. (Recording of an interactive animation.[6])*

### Derivation

Here is one way to derive this partition probability. Let ${\displaystyle C_{i}}$ be the random block into which the number ${\displaystyle i}$ is added, for ${\displaystyle i=1,2,3,...}$. Then

${\displaystyle \Pr(C_{i}=c\mid C_{1},\ldots ,C_{i-1})={\begin{cases}{\dfrac {\theta +|B|\alpha }{\theta +i-1}}&{\text{if }}c{\text{ is a new block}},\\\\{\dfrac {|b|-\alpha }{\theta +i-1}}&{\text{if }}c=b{\text{ for an existing block }}b,\end{cases}}}$

where ${\displaystyle |B|}$ denotes the number of blocks present just before element ${\displaystyle i}$ is added.

The probability that ${\displaystyle B_{n}}$ is any particular partition of the set ${\displaystyle \{1,...,n\}}$ is the product of these probabilities as ${\displaystyle i}$ runs from ${\displaystyle 1}$ to ${\displaystyle n}$. Now consider the size of block ${\displaystyle b}$: it increases by one each time an element is added to it, so when the last element of block ${\displaystyle b}$ joins, the block already has ${\displaystyle |b|-1}$ elements. For example, consider this sequence of choices: (generate a new block ${\displaystyle b}$)(join ${\displaystyle b}$)(join ${\displaystyle b}$)(join ${\displaystyle b}$). In the end, block ${\displaystyle b}$ has 4 elements, and the numerators these four choices contribute to the product are ${\displaystyle (\theta +k\alpha )(1-\alpha )(2-\alpha )(3-\alpha )}$, where ${\displaystyle k}$ is the number of blocks present when ${\displaystyle b}$ was created (in the one-parameter case ${\displaystyle \alpha =0}$ with the first block, this is ${\displaystyle \theta \cdot 1\cdot 2\cdot 3}$). Multiplying these contributions over all blocks and dividing by the common denominators ${\displaystyle \theta ,\theta +1,\ldots ,\theta +n-1}$ yields ${\displaystyle \Pr(B_{n}=B)}$ as above.

### Expected number of tables

For the one-parameter case, with ${\displaystyle \alpha =0}$ and ${\displaystyle 0<\theta <\infty }$, the number of tables is distributed according to the Chinese restaurant table distribution. The expected value of this random variable, given that there are ${\displaystyle n}$ seated customers, is[7]

{\displaystyle {\begin{aligned}\sum _{k=1}^{n}{\frac {\theta }{\theta +k-1}}=\theta \cdot (\Psi (\theta +n)-\Psi (\theta ))\end{aligned}}}

where ${\displaystyle \Psi (\theta )}$ is the digamma function. In the general case (${\displaystyle \alpha >0}$) the expected number of occupied tables is[5]

{\displaystyle {\begin{aligned}{\frac {\Gamma (\theta +n+\alpha )\Gamma (\theta +1)}{\alpha \Gamma (\theta +n)\Gamma (\theta +\alpha )}}-{\frac {\theta }{\alpha }},\end{aligned}}}

however, note that the ${\displaystyle \Gamma (\cdot )}$ function here is not the standard gamma function.[5]
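In the one-parameter case, ${\displaystyle \Psi (\theta +n)-\Psi (\theta )=\sum _{k=0}^{n-1}1/(\theta +k)}$, so the expectation is a finite sum that can be checked against the mean computed directly from the Chinese restaurant table pmf. A sketch (function names are ours):

```python
from math import gamma

def expected_tables(n, theta):
    """Closed-form mean, sum_{k=1}^{n} theta/(theta + k - 1), which equals
    theta * (digamma(theta + n) - digamma(theta))."""
    return sum(theta / (theta + k - 1) for k in range(1, n + 1))

def crt_mean(m, theta):
    """Mean computed directly from the CRT pmf using unsigned Stirling
    numbers of the first kind (recurrence as in the Distribution section)."""
    s = [[0] * (m + 1) for _ in range(m + 1)]
    s[0][0] = 1
    for i in range(1, m + 1):
        for l in range(1, i + 1):
            s[i][l] = s[i - 1][l - 1] + (i - 1) * s[i - 1][l]
    norm = gamma(theta) / gamma(m + theta)
    return sum(l * norm * s[m][l] * theta**l for l in range(1, m + 1))
```

The two computations agree to floating-point precision for moderate ${\displaystyle n}$.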

### The Indian buffet process

It is possible to adapt the model such that each data point is no longer uniquely associated with a class (i.e., we are no longer constructing a partition), but may be associated with any combination of the classes. This strains the restaurant-tables analogy and so is instead likened to a process in which a series of diners samples from some subset of an infinite selection of dishes on offer at a buffet. The probability that a particular diner samples a particular dish is proportional to the popularity of the dish among diners so far, and in addition the diner may sample from the untested dishes. This has been named the Indian buffet process and can be used to infer latent features in data.[8]
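A common formulation of the buffet analogy has diner ${\displaystyle i}$ try each previously tasted dish ${\displaystyle k}$ with probability ${\displaystyle m_{k}/i}$ (where ${\displaystyle m_{k}}$ counts earlier diners who took dish ${\displaystyle k}$), then sample a Poisson(${\displaystyle \alpha /i}$) number of new dishes. A sketch under that formulation (function names are ours; the Poisson sampler uses Knuth's multiplication method since the standard library lacks one):

```python
import random
from math import exp

def _poisson(rng, lam):
    """Sample Poisson(lam) by Knuth's multiplication method."""
    L = exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def indian_buffet(n, alpha, seed=None):
    """Dish choices for n diners under the Indian buffet process: diner i
    retries each tasted dish k with probability m_k / i, then samples
    Poisson(alpha / i) brand-new dishes."""
    rng = random.Random(seed)
    counts = []            # counts[k] = diners who have taken dish k
    history = []
    for i in range(1, n + 1):
        chosen = [k for k, m in enumerate(counts) if rng.random() < m / i]
        for k in chosen:
            counts[k] += 1
        for _ in range(_poisson(rng, alpha / i)):
            counts.append(1)             # a new dish, tasted by this diner
            chosen.append(len(counts) - 1)
        history.append(chosen)
    return history
```

Unlike the restaurant process, a diner's dish set may be empty or may overlap arbitrarily with other diners' sets, so the result is a binary feature matrix rather than a partition.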

## Applications

The Chinese restaurant process is closely connected to Dirichlet processes and Pólya's urn scheme, and is therefore useful in applications of Bayesian statistics, including nonparametric Bayesian methods. The generalized Chinese restaurant process is closely related to the Pitman–Yor process. These processes have been used in many applications, including modeling text, clustering biological microarray data,[9] biodiversity modelling, and image reconstruction.[10][11]

## References

1. ^ Aldous, D. J. (1985). "Exchangeability and related topics". École d'Été de Probabilités de Saint-Flour XIII — 1983. Lecture Notes in Mathematics. Vol. 1117. pp. 1–198. doi:10.1007/BFb0099421. ISBN 978-3-540-15203-3.
2. ^ Zhou, Mingyuan; Carin, Lawrence (2012). "Negative Binomial Process Count and Mixture Modeling". IEEE Transactions on Pattern Analysis and Machine Intelligence. 37 (2): 307–20. arXiv:1209.3442. Bibcode:2012arXiv1209.3442Z. doi:10.1109/TPAMI.2013.211. PMID 26353243. S2CID 1937045.
3. ^ Antoniak, Charles E (1974). "Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems". The Annals of Statistics. 2 (6): 1152–1174. doi:10.1214/aos/1176342871.
4. ^ Pitman, Jim (1995). "Exchangeable and Partially Exchangeable Random Partitions". Probability Theory and Related Fields. 102 (2): 145–158. doi:10.1007/BF01213386. MR 1337249. S2CID 16849229.
5. ^ a b c Pitman, Jim (2006). Combinatorial Stochastic Processes. Vol. 1875. Berlin: Springer-Verlag. ISBN 9783540309901. Archived from the original on 2012-09-25. Retrieved 2011-05-11.
6. ^
7. ^ Xinhua Zhang, "A Very Gentle Note on the Construction of Dirichlet Process", September 2008, The Australian National University, Canberra. Online: http://users.cecs.anu.edu.au/~xzhang/pubDoc/notes/dirichlet_process.pdf Archived April 11, 2011, at the Wayback Machine
8. ^ Griffiths, T.L. and Ghahramani, Z. (2005) Infinite Latent Feature Models and the Indian Buffet Process. Gatsby Unit Technical Report GCNU-TR-2005-001.
9. ^ Qin, Zhaohui S (2006). "Clustering microarray gene expression data using weighted Chinese restaurant process". Bioinformatics. 22 (16): 1988–1997. doi:10.1093/bioinformatics/btl284. PMID 16766561.
10. ^ White, J. T.; Ghosal, S. (2011). "Bayesian smoothing of photon‐limited images with applications in astronomy" (PDF). Journal of the Royal Statistical Society, Series B (Statistical Methodology). 73 (4): 579–599. CiteSeerX 10.1.1.308.7922. doi:10.1111/j.1467-9868.2011.00776.x.
11. ^ Li, M.; Ghosal, S. (2014). "Bayesian multiscale smoothing of Gaussian noised images". Bayesian Analysis. 9 (3): 733–758. doi:10.1214/14-ba871.