# Hidden Markov random field


A hidden Markov random field is a generalization of a hidden Markov model. Instead of having an underlying Markov chain, hidden Markov random fields have an underlying Markov random field.

Suppose that we observe a random variable ${\displaystyle Y_{i}}$, where ${\displaystyle i\in S}$. Hidden Markov random fields assume that the probabilistic nature of ${\displaystyle Y_{i}}$ is determined by the unobservable Markov random field ${\displaystyle X_{i}}$, ${\displaystyle i\in S}$. That is, given its neighbors ${\displaystyle N_{i}}$, ${\displaystyle X_{i}}$ is independent of all other ${\displaystyle X_{j}}$ (Markov property). The main difference from a hidden Markov model is that the neighborhood is not defined along one dimension but within a network, i.e. ${\displaystyle X_{i}}$ is allowed to have more than the two neighbors it would have in a Markov chain. The model is formulated such that, given the ${\displaystyle X_{i}}$, the ${\displaystyle Y_{i}}$ are independent (conditional independence of the observable variables given the Markov random field).
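As an illustration, the following is a minimal sketch of inference in a hidden Markov random field on a 2D grid, assuming a Potts-style prior on the hidden labels (neighboring sites prefer equal labels, with a strength `beta`) and Gaussian emissions for the observations given the labels. The particular choices here (torus neighborhoods, iterated conditional modes for an approximate MAP labeling, the names `beta`, `mus`, `sigma`) are illustrative assumptions, not part of the general definition above.

```python
import numpy as np

rng = np.random.default_rng(0)

def icm_hmrf(y, mus, sigma, beta, iters=10):
    """Iterated conditional modes for an approximate MAP labeling of a
    hidden Markov random field on a 2D grid.

    Sketch assumptions (not from the general model): 4-neighborhoods on
    a torus, p(y_i | x_i = k) = Normal(mus[k], sigma), and a Potts-style
    prior rewarding agreement between neighboring labels with weight beta.
    """
    K = len(mus)
    # Log-emission term log p(y_i | x_i = k), shape (H, W, K).
    log_em = -0.5 * ((y[..., None] - np.asarray(mus)) / sigma) ** 2
    x = log_em.argmax(axis=-1)  # initialize with pixel-wise ML labels
    for _ in range(iters):
        # For each site, count 4-neighbors currently in each state k.
        counts = np.zeros(log_em.shape)
        for shift in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nb = np.roll(x, shift, axis=(0, 1))
            counts += (nb[..., None] == np.arange(K))
        # Local conditional: emission evidence + neighbor agreement.
        x = (log_em + beta * counts).argmax(axis=-1)
    return x

# Synthetic example: two regions with different emission means.
H, W = 20, 20
truth = np.zeros((H, W), dtype=int)
truth[:, 10:] = 1
y = np.where(truth == 1, 2.0, 0.0) + 0.5 * rng.standard_normal((H, W))

labels = icm_hmrf(y, mus=[0.0, 2.0], sigma=0.5, beta=1.0)
accuracy = (labels == truth).mean()
```

Because the prior couples each label to its neighbors, the recovered labeling is spatially smoother than the pixel-wise maximum-likelihood initialization, which is exactly the effect of replacing the Markov chain of an HMM with a Markov random field.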

In the vast majority of the related literature, the number of possible latent states is considered a user-defined constant. However, ideas from nonparametric Bayesian statistics, which allow for data-driven inference of the number of states, have also recently been investigated with success, e.g.[1]

## References

1. ^ Sotirios P. Chatzis, Gabriel Tsechpenakis, "The Infinite Hidden Markov Random Field Model," IEEE Transactions on Neural Networks, vol. 21, no. 6, pp. 1004–1014, June 2010.