# Markov blanket

In a Bayesian network, the Markov blanket of node A includes its parents, children and the other parents of all of its children.

In statistics and machine learning, the Markov blanket for a node in a graphical model contains all the variables that shield the node from the rest of the network. This means that the Markov blanket of a node is the only knowledge needed to predict the behavior of that node and its children. The term was coined by Judea Pearl in 1988.[1]

In a Bayesian network, the values of the parents and children of a node evidently give information about that node. However, its children's parents also have to be included, because they can be used to explain away the node in question. In a Markov random field, the Markov blanket for a node is simply its adjacent (or neighboring) nodes.
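The explaining-away point can be checked numerically. Below is a minimal sketch on the v-structure $A \to C \leftarrow Q$: the co-parent $Q$ is marginally independent of $A$, but becomes dependent once the shared child $C$ is observed, which is why co-parents must be included in the blanket. All probability values here are illustrative choices, not from the article.

```python
# "Explaining away" on the v-structure A -> C <- Q (binary variables).
# The CPT values below are arbitrary illustrative assumptions.
from itertools import product

p_a1, p_q1 = 0.5, 0.5
p_c1 = {(0, 0): 0.1, (0, 1): 0.8, (1, 0): 0.8, (1, 1): 0.9}  # P(C=1 | A, Q)

def joint(a, q, c):
    """P(A=a, Q=q, C=c) via the factorization P(A) P(Q) P(C | A, Q)."""
    def bern(p, x):
        return p if x == 1 else 1 - p
    return bern(p_a1, a) * bern(p_q1, q) * bern(p_c1[(a, q)], c)

def pr_a1(q=None, c=None):
    """P(A=1 | observed q and/or c) by brute-force summation of the joint."""
    num = den = 0.0
    for a, qq, cc in product((0, 1), repeat=3):
        if (q is not None and qq != q) or (c is not None and cc != c):
            continue
        p = joint(a, qq, cc)
        den += p
        if a == 1:
            num += p
    return num / den

print(pr_a1(q=1), pr_a1())          # equal: A is independent of Q marginally
print(pr_a1(q=1, c=1), pr_a1(c=1))  # differ: A and Q are dependent given C
```

Observing $C$ opens the path between $A$ and $Q$, so any choice of non-degenerate CPT exhibits the same qualitative behavior.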

The Markov blanket for a node $A$ in a Bayesian network, denoted here by $\operatorname{MB}(A)$, is the set of nodes composed of $A$'s parents, $A$'s children, and $A$'s children's other parents.
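This three-part definition translates directly into code. The following is a minimal sketch that computes $\operatorname{MB}(A)$ for a DAG represented as a parent map; the function name, representation, and example graph are illustrative assumptions.

```python
# Minimal sketch: Markov blanket of a node in a DAG given as a
# parent map {node: set of parent nodes}. Names are illustrative.
def markov_blanket(node, parents):
    children = {v for v, ps in parents.items() if node in ps}
    co_parents = {p for ch in children for p in parents[ch]}
    # parents U children U children's other parents, excluding the node itself
    return (parents.get(node, set()) | children | co_parents) - {node}

# Example DAG: A has parent P and child C; C has other parent Q; R is unrelated.
dag = {"A": {"P"}, "C": {"A", "Q"}, "P": set(), "Q": set(), "R": set()}
print(sorted(markov_blanket("A", dag)))  # ['C', 'P', 'Q']
```

Note that the unrelated node `R` is correctly excluded: only the parents, children, and co-parents shield `A` from the rest of the network.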

Every set of nodes in the network is conditionally independent of $A$ when conditioned on the set $\operatorname{MB}(A)$, that is, on the Markov blanket of $A$: given the nodes in $\operatorname{MB}(A)$, the node $A$ is conditionally independent of every other node in the graph. Formally, this property can be written, for distinct nodes $A$ and $B$, as

$$\Pr(A \mid \operatorname{MB}(A), B) = \Pr(A \mid \operatorname{MB}(A)).$$
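This identity can be verified by brute-force enumeration on a small network. The sketch below uses the chain $D \to B \to A \to C$, where $\operatorname{MB}(A) = \{B, C\}$ and $D$ lies outside the blanket; the CPT values are arbitrary illustrative assumptions.

```python
# Brute-force check of Pr(A | MB(A), B) = Pr(A | MB(A)) on the chain
# D -> B -> A -> C (binary variables). MB(A) = {B, C}; D is outside it.
# All probability values are illustrative assumptions.
from itertools import product

p_d1 = 0.3
p_b1 = {0: 0.2, 1: 0.7}   # P(B=1 | D=d)
p_a1 = {0: 0.4, 1: 0.9}   # P(A=1 | B=b)
p_c1 = {0: 0.1, 1: 0.6}   # P(C=1 | A=a)

def joint(d, b, a, c):
    """P(D=d, B=b, A=a, C=c) by the chain-rule factorization."""
    def bern(p, x):
        return p if x == 1 else 1 - p
    return (bern(p_d1, d) * bern(p_b1[d], b)
            * bern(p_a1[b], a) * bern(p_c1[a], c))

def pr_a_given(b, c, d=None):
    """P(A=1 | B=b, C=c [, D=d]) by summing out the unobserved variables."""
    num = den = 0.0
    for dd, aa in product((0, 1), repeat=2):
        if d is not None and dd != d:
            continue
        p = joint(dd, b, aa, c)
        den += p
        if aa == 1:
            num += p
    return num / den

# Conditioning on the out-of-blanket node D never changes P(A | B, C):
for b, c, d in product((0, 1), repeat=3):
    assert abs(pr_a_given(b, c, d) - pr_a_given(b, c)) < 1e-12
```

The assertion holds for every assignment because the blanket node $B$ blocks the only path between $D$ and $A$.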