A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
Formally, Bayesian networks are DAGs whose nodes represent random variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses. Edges represent conditional dependencies; nodes that are not connected (no path connects one variable to the other in the Bayesian network) represent variables that are conditionally independent of each other. Each node is associated with a probability function that takes, as input, a particular set of values for the node's parent variables and gives, as output, the probability (or probability distribution, if applicable) of the variable represented by the node. For example, if $m$ parent nodes represent $m$ Boolean variables, then the probability function could be represented by a table of $2^m$ entries, one entry for each of the $2^m$ possible parent combinations. Similar ideas may be applied to undirected, and possibly cyclic, graphs such as Markov networks.
Efficient algorithms exist that perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
- 1 Example
- 2 Inference and learning
- 3 Statistical introduction
- 4 Definitions and concepts
- 5 Inference complexity and approximation algorithms
- 6 Applications
- 7 History
- 8 See also
- 9 Notes
- 10 References
- 11 Further reading
- 12 External links
Example
Suppose that there are two events which could cause grass to be wet: either the sprinkler is on or it is raining. Also, suppose that the rain has a direct effect on the use of the sprinkler (namely that when it rains, the sprinkler is usually not turned on). Then the situation can be modeled with a Bayesian network in which Rain is a parent of both Sprinkler and GrassWet, and Sprinkler is a parent of GrassWet. All three variables have two possible values, T (for true) and F (for false).
The joint probability function is:

$$\Pr(G, S, R) = \Pr(G \mid S, R) \, \Pr(S \mid R) \, \Pr(R)$$

where the names of the variables have been abbreviated to G = Grass wet (yes/no), S = Sprinkler turned on (yes/no), and R = Raining (yes/no).
The model can answer questions such as "What is the probability that it is raining, given that the grass is wet?" by using the conditional probability formula and summing over all nuisance variables:

$$\Pr(R{=}T \mid G{=}T) = \frac{\Pr(G{=}T, R{=}T)}{\Pr(G{=}T)} = \frac{\sum_{S \in \{T,F\}} \Pr(G{=}T, S, R{=}T)}{\sum_{S, R \in \{T,F\}} \Pr(G{=}T, S, R)}$$

Using the expansion for the joint probability function and the conditional probabilities from the conditional probability tables (CPTs) of the model (originally given in a diagram: $\Pr(R{=}T) = 0.2$; $\Pr(S{=}T \mid R{=}F) = 0.4$ and $\Pr(S{=}T \mid R{=}T) = 0.01$; $\Pr(G{=}T \mid S, R) = 0.99, 0.9, 0.8, 0$ for $(S,R) = (T,T), (T,F), (F,T), (F,F)$ respectively), one can evaluate each term in the sums in the numerator and denominator. For example,

$$\Pr(G{=}T, S{=}T, R{=}T) = \Pr(G{=}T \mid S{=}T, R{=}T)\,\Pr(S{=}T \mid R{=}T)\,\Pr(R{=}T) = 0.99 \times 0.01 \times 0.2 = 0.00198$$

Then the numerical results (subscripted by the associated variable values) are

$$\Pr(R{=}T \mid G{=}T) = \frac{0.00198_{TTT} + 0.1584_{TFT}}{0.00198_{TTT} + 0.288_{TTF} + 0.1584_{TFT} + 0.0_{TFF}} = \frac{891}{2491} \approx 35.77\%$$
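The arithmetic above can be reproduced by brute-force enumeration of the joint distribution. The following is a minimal Python sketch (not from the original article); the CPT values are the standard ones quoted above.

```python
# Exact inference by enumeration for the sprinkler network.
from itertools import product

P_R = {True: 0.2, False: 0.8}                      # Pr(R)
P_S_given_R = {True: 0.01, False: 0.4}             # Pr(S=T | R)
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}  # Pr(G=T | S, R)

def joint(g, s, r):
    """Pr(G=g, S=s, R=r) = Pr(G|S,R) Pr(S|R) Pr(R)."""
    pg = P_G_given_SR[(s, r)] if g else 1 - P_G_given_SR[(s, r)]
    ps = P_S_given_R[r] if s else 1 - P_S_given_R[r]
    return pg * ps * P_R[r]

# Pr(R=T | G=T): sum the joint over the nuisance variable S.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(True, s, r) for s, r in product((True, False), repeat=2))
print(num / den)  # ~0.3577, i.e. 891/2491
```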
If, on the other hand, we wish to answer an interventional question, "What is the probability that it would rain, given that we wet the grass?", the answer would be governed by the post-intervention joint distribution function

$$\Pr(S, R \mid \text{do}(G{=}T)) = \Pr(S \mid R)\,\Pr(R)$$

obtained by removing the factor $\Pr(G \mid S, R)$ from the pre-intervention distribution. As expected, the probability of rain is unaffected by the action: $\Pr(R \mid \text{do}(G{=}T)) = \Pr(R)$.
If, moreover, we wish to predict the impact of turning the sprinkler on, we have

$$\Pr(G, R \mid \text{do}(S{=}T)) = \Pr(G \mid S{=}T, R)\,\Pr(R)$$

with the term $\Pr(S{=}T \mid R)$ removed, showing that the action has an effect on the grass but not on the rain.
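The truncated factorization used here can also be checked numerically. A small sketch, assuming the same CPT values as in the enumeration example:

```python
# Graph surgery for do(S=T): delete the factor Pr(S | R) and clamp S to True.
P_R = {True: 0.2, False: 0.8}
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}   # Pr(G=T | S, R)

def joint_do_S_true(g, r):
    """Pr(G=g, R=r | do(S=T)) = Pr(G=g | S=T, R=r) Pr(R=r)."""
    pg = P_G_given_SR[(True, r)]
    return (pg if g else 1 - pg) * P_R[r]

# Rain is unaffected by the intervention:
print(sum(joint_do_S_true(g, True) for g in (True, False)))   # 0.2 = Pr(R=T)
# The grass is affected: Pr(G=T | do(S=T)) = 0.9*0.8 + 0.99*0.2 = 0.918
print(sum(joint_do_S_true(True, r) for r in (True, False)))
```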
These predictions may not be feasible when some of the variables are unobserved, as in most policy evaluation problems. The effect of the action can still be predicted, however, whenever a criterion called "back-door" is satisfied. It states that, if a set Z of nodes can be observed that d-separates (or blocks) all back-door paths from X to Y, then

$$\Pr(Y \mid \text{do}(X)) = \sum_{z} \Pr(Y \mid X, Z{=}z)\,\Pr(Z{=}z)$$

A back-door path is one that ends with an arrow into X. Sets that satisfy the back-door criterion are called "sufficient" or "admissible." For example, the set Z = R is admissible for predicting the effect of S = T on G, because R d-separates the (only) back-door path S ← R → G. However, if S is not observed, no other set d-separates this path and the effect of turning the sprinkler on (S = T) on the grass (G) cannot be predicted from passive observations. We then say that P(G | do(S = T)) is not "identified." This reflects the fact that, lacking interventional data, we cannot determine whether the observed dependence between S and G is due to a causal connection or is spurious (apparent dependence arising from a common cause, R; see Simpson's paradox).
To determine whether a causal relation is identified from an arbitrary Bayesian network with unobserved variables, one can use the three rules of "do-calculus" and test whether all do terms can be removed from the expression of that relation, thus confirming that the desired quantity is estimable from frequency data.
Using a Bayesian network can save considerable amounts of memory if the dependencies in the joint distribution are sparse. For example, a naive way of storing the conditional probabilities of 10 two-valued variables as a table requires storage space for $2^{10} = 1024$ values. If no variable's local distribution depends on more than three parent variables, the Bayesian network representation needs to store at most $10 \cdot 2^3 = 80$ values.
One advantage of Bayesian networks is that it is intuitively easier for a human to understand (a sparse set of) direct dependencies and local distributions than complete joint distributions.
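A quick sanity check of this storage arithmetic, for n binary variables with at most k parents each (one Pr(X=T | parents) entry per parent configuration):

```python
# Full joint table vs. factored representation (illustrative arithmetic only).
n, k = 10, 3
print(2 ** n)      # 1024 entries for the full joint table
print(n * 2 ** k)  # 80 entries when each node has at most k = 3 parents
```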
Inference and learning
There are three main inference tasks for Bayesian networks.
Inferring unobserved variables
Because a Bayesian network is a complete model for its variables and their relationships, it can be used to answer probabilistic queries about them. For example, the network can be used to compute updated beliefs about the state of a subset of variables when other variables (the evidence variables) are observed. This process of computing the posterior distribution of variables given evidence is called probabilistic inference. The posterior gives a universal sufficient statistic for detection applications, when one wants to choose values for the variable subset which minimize some expected loss function, for instance the probability of decision error. A Bayesian network can thus be considered a mechanism for automatically applying Bayes' theorem to complex problems.
The most common exact inference methods are: variable elimination, which eliminates (by integration or summation) the non-observed, non-query variables one by one by distributing the sum over the product; clique tree propagation, which caches the computation so that many variables can be queried at one time and new evidence can be propagated quickly; and recursive conditioning and AND/OR search, which allow for a space-time tradeoff and match the efficiency of variable elimination when enough space is used. All of these methods have complexity that is exponential in the network's treewidth. The most common approximate inference algorithms are importance sampling, stochastic simulation with Markov chain Monte Carlo (MCMC), mini-bucket elimination, loopy belief propagation, generalized belief propagation, and variational methods.
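To make the first of these concrete, here is a compact, illustrative sketch of variable elimination on the sprinkler network from the example; real implementations choose elimination orders heuristically and cache factors.

```python
# Variable elimination: a factor maps assignments of its variables to numbers;
# to eliminate a variable, multiply the factors that mention it and sum it out.
from itertools import product

class Factor:
    def __init__(self, vars_, table):
        self.vars = vars_          # tuple of variable names, e.g. ("S", "R")
        self.table = table         # {assignment tuple: probability}

    def __mul__(self, other):
        vars_ = self.vars + tuple(v for v in other.vars if v not in self.vars)
        table = {}
        for assign in product((True, False), repeat=len(vars_)):
            env = dict(zip(vars_, assign))
            table[assign] = (self.table[tuple(env[v] for v in self.vars)] *
                             other.table[tuple(env[v] for v in other.vars)])
        return Factor(vars_, table)

    def sum_out(self, var):
        i = self.vars.index(var)
        table = {}
        for assign, p in self.table.items():
            key = assign[:i] + assign[i + 1:]
            table[key] = table.get(key, 0.0) + p
        return Factor(self.vars[:i] + self.vars[i + 1:], table)

# The three CPTs of the sprinkler network, written as factors.
fR = Factor(("R",), {(True,): 0.2, (False,): 0.8})
fS = Factor(("S", "R"), {(True, True): 0.01, (False, True): 0.99,
                         (True, False): 0.4, (False, False): 0.6})
pG = {(True, True): 0.99, (True, False): 0.9,
      (False, True): 0.8, (False, False): 0.0}       # Pr(G=T | S, R)
fG = Factor(("G", "S", "R"),
            {(g, s, r): (pG[(s, r)] if g else 1 - pG[(s, r)])
             for g, s, r in product((True, False), repeat=3)})

# Query Pr(R | G=T): eliminate S by multiplying only the factors mentioning S.
f = fR * (fS * fG).sum_out("S")                   # factor over (R, G)
r_idx, g_idx = f.vars.index("R"), f.vars.index("G")
post = {a[r_idx]: p for a, p in f.table.items() if a[g_idx]}
print(post[True] / (post[True] + post[False]))    # ~0.3577, as in the example
```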
Parameter learning
In order to fully specify the Bayesian network and thus fully represent the joint probability distribution, it is necessary to specify for each node X the probability distribution for X conditional upon X's parents. The distribution of X conditional upon its parents may have any form. It is common to work with discrete or Gaussian distributions since that simplifies calculations. Sometimes only constraints on a distribution are known; one can then use the principle of maximum entropy to determine a single distribution, the one with the greatest entropy given the constraints. (Analogously, in the specific context of a dynamic Bayesian network, one commonly specifies the conditional distribution for the hidden state's temporal evolution to maximize the entropy rate of the implied stochastic process.)
Often these conditional distributions include parameters which are unknown and must be estimated from data, sometimes using the maximum likelihood approach. Direct maximization of the likelihood (or of the posterior probability) is often complex when there are unobserved variables. A classical approach to this problem is the expectation-maximization algorithm which alternates computing expected values of the unobserved variables conditional on observed data, with maximizing the complete likelihood (or posterior) assuming that previously computed expected values are correct. Under mild regularity conditions this process converges on maximum likelihood (or maximum posterior) values for parameters.
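As an illustration, the following sketch runs EM on a two-node network R -> G (both Boolean) when R is missing in some records; the records below are invented for the example.

```python
# EM for parameter learning with a partially observed variable R.
data = [(True, True), (False, False), (None, True), (None, True),
        (False, True), (None, False), (True, True), (None, False)]
# Each record is (R, G); None marks a missing value of R.

p_r = 0.5                         # current estimate of Pr(R=T)
p_g = {True: 0.5, False: 0.5}     # current estimate of Pr(G=T | R)

for _ in range(100):
    # E-step: expected sufficient statistics under the current parameters.
    cnt_r = {True: 0.0, False: 0.0}     # expected count of R=r
    cnt_rg = {True: 0.0, False: 0.0}    # expected count of (R=r, G=T)
    for r, g in data:
        if r is None:   # posterior responsibility Pr(R=T | G=g) by Bayes' theorem
            lt = (p_g[True] if g else 1 - p_g[True]) * p_r
            lf = (p_g[False] if g else 1 - p_g[False]) * (1 - p_r)
            w = lt / (lt + lf)
        else:
            w = 1.0 if r else 0.0
        cnt_r[True] += w
        cnt_r[False] += 1 - w
        if g:
            cnt_rg[True] += w
            cnt_rg[False] += 1 - w
    # M-step: maximize the expected complete-data likelihood.
    p_r = cnt_r[True] / len(data)
    p_g = {r: cnt_rg[r] / cnt_r[r] for r in (True, False)}

print(p_r, p_g)   # parameters after EM has converged
```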
A more fully Bayesian approach to parameters is to treat parameters as additional unobserved variables and to compute a full posterior distribution over all nodes conditional upon observed data, then to integrate out the parameters. This approach can be expensive and lead to large dimension models, so in practice classical parameter-setting approaches are more common.
In the simplest case, a Bayesian network is specified by an expert and is then used to perform inference. In other applications the task of defining the network is too complex for humans. In this case the network structure and the parameters of the local distributions must be learned from data.
Structure learning
Automatically learning the graph structure of a Bayesian network (BN) is a challenge pursued within machine learning. The basic idea goes back to a recovery algorithm developed by Rebane and Pearl (1987) and rests on the distinction between the three possible types of adjacent triplets allowed in a directed acyclic graph (DAG):
- Type 1: the chain $X \rightarrow Y \rightarrow Z$
- Type 2: the fork $X \leftarrow Y \rightarrow Z$
- Type 3: the collider $X \rightarrow Y \leftarrow Z$
Type 1 and type 2 represent the same dependencies ($X$ and $Z$ are independent given $Y$) and are, therefore, indistinguishable. Type 3, however, can be uniquely identified, since $X$ and $Z$ are marginally independent and all other pairs are dependent. Thus, while the skeletons (the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable. The same distinction applies when $X$ and $Z$ have common parents, except that one must first condition on those parents. Algorithms have been developed to systematically determine the skeleton of the underlying graph and, then, orient all arrows whose directionality is dictated by the observed conditional independencies.
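A small simulation (not from the source) makes the distinction concrete in the linear-Gaussian case: a chain makes the endpoints marginally dependent but conditionally independent given the middle node, while a collider does the opposite.

```python
# Chain vs. collider: marginal and partial correlations of the endpoints.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

# Chain X -> Y -> Z: dependent marginally, independent given Y.
x = rng.normal(size=n); y = x + rng.normal(size=n); z = y + rng.normal(size=n)
print(np.corrcoef(x, z)[0, 1], partial_corr(x, z, y))   # ~0.58, ~0.0

# Collider X -> Y <- Z: independent marginally, dependent given Y.
x = rng.normal(size=n); z = rng.normal(size=n); y = x + z + rng.normal(size=n)
print(np.corrcoef(x, z)[0, 1], partial_corr(x, z, y))   # ~0.0, ~-0.5
```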
An alternative method of structural learning uses optimization-based search. It requires a scoring function and a search strategy. A common scoring function is the posterior probability of the structure given the training data, such as the BIC or the BDeu. The time requirement of an exhaustive search returning a structure that maximizes the score is superexponential in the number of variables. A local search strategy makes incremental changes aimed at improving the score of the structure. A global search algorithm like Markov chain Monte Carlo can avoid getting trapped in local minima. Friedman et al. discuss using mutual information between variables and finding a structure that maximizes this; they do so by restricting the parent candidate set to k nodes and exhaustively searching therein.
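The following is a hedged sketch of such a score-based local search: greedy hill climbing over single-edge changes on binary data, scored by BIC. Names and data are illustrative; practical systems add edge reversals, score caching, and random restarts.

```python
# Greedy hill climbing over DAGs with a BIC score (illustrative sketch).
import numpy as np
from itertools import product, permutations

def bic(data, parents):
    """BIC score of a DAG given as {node: set of parents} over binary columns."""
    n, score = len(data), 0.0
    for v, ps in parents.items():
        ps = sorted(ps)
        for assign in product((0, 1), repeat=len(ps)):
            rows = data[np.all(data[:, ps] == assign, axis=1)] if ps else data
            n_v = len(rows)
            if n_v == 0:
                continue
            k = rows[:, v].sum()
            for c in (k, n_v - k):                  # log-likelihood terms
                if c > 0:
                    score += c * np.log(c / n_v)
        score -= 0.5 * np.log(n) * 2 ** len(ps)     # one free parameter per parent config
    return score

def acyclic(parents):
    seen, done = set(), set()
    def dfs(v):
        if v in done: return True
        if v in seen: return False                  # back edge => cycle
        seen.add(v)
        ok = all(dfs(p) for p in parents[v])
        done.add(v)
        return ok
    return all(dfs(v) for v in parents)

def hill_climb(data):
    d = data.shape[1]
    parents = {v: set() for v in range(d)}
    best, improved = bic(data, parents), True
    while improved:
        improved = False
        for u, v in permutations(range(d), 2):      # toggle the edge u -> v
            cand = {w: set(ps) for w, ps in parents.items()}
            cand[v].discard(u) if u in cand[v] else cand[v].add(u)
            if acyclic(cand) and bic(data, cand) > best:
                parents, best, improved = cand, bic(data, cand), True
    return parents

# Synthetic data from the chain 0 -> 1 -> 2 (values are illustrative).
rng = np.random.default_rng(1)
x0 = rng.random(5000) < 0.5
x1 = np.where(x0, rng.random(5000) < 0.9, rng.random(5000) < 0.1)
x2 = np.where(x1, rng.random(5000) < 0.8, rng.random(5000) < 0.2)
data = np.column_stack([x0, x1, x2]).astype(int)
print(hill_climb(data))  # expect edges 0-1 and 1-2; directions may vary
```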
A particularly fast method for exact BN learning is to cast the problem as an optimization problem and solve it using integer programming. Acyclicity constraints are added to the integer program (IP) during solving in the form of cutting planes. Such methods can handle problems with up to 100 variables.
In order to deal with problems with thousands of variables, a different approach is necessary. One is to first sample an ordering, and then find the optimal BN structure with respect to that ordering. This implies working on the search space of the possible orderings, which is convenient as it is smaller than the space of network structures. Multiple orderings are then sampled and evaluated. This method has been shown to be the best available in the literature when the number of variables is huge.
Another method consists of focusing on the sub-class of decomposable models, for which the MLE has a closed form. It is then possible to discover a consistent structure for hundreds of variables.
As previously noted, learning Bayesian networks with bounded treewidth is necessary to allow exact, tractable inference, since the worst-case inference complexity is exponential in the treewidth k (under the exponential time hypothesis). Yet, treewidth being a global property of the graph, bounding it considerably increases the difficulty of the learning process. In this context it is possible to use the concept of K-tree for effective learning.
Statistical introduction
Given data $x$ and parameter $\theta$, a simple Bayesian analysis starts with a prior probability (prior) $p(\theta)$ and likelihood $p(x \mid \theta)$ to compute a posterior probability $p(\theta \mid x) \propto p(x \mid \theta)\,p(\theta)$.
Often the prior on $\theta$ depends in turn on other parameters $\varphi$ that are not mentioned in the likelihood. So, the prior $p(\theta)$ must be replaced by a likelihood $p(\theta \mid \varphi)$, and a prior $p(\varphi)$ on the newly introduced parameters $\varphi$ is required, resulting in a posterior probability

$$p(\theta, \varphi \mid x) \propto p(x \mid \theta)\, p(\theta \mid \varphi)\, p(\varphi)$$

This is the simplest example of a hierarchical Bayes model.
The process may be repeated; for example, the parameters $\varphi$ may depend in turn on additional parameters $\psi$, which require their own prior. Eventually the process must terminate, with priors that do not depend on any other unmentioned parameters.
Suppose we have measured the quantities $x_1, \dots, x_n$, each with normally distributed errors of known standard deviation $\sigma$:

$$x_i \sim N(\theta_i, \sigma^2)$$

Suppose we are interested in estimating the $\theta_i$. An approach would be to estimate the $\theta_i$ using a maximum likelihood approach; since the observations are independent, the likelihood factorizes and the maximum likelihood estimate is simply

$$\widehat{\theta}_i = x_i$$

However, if the quantities are related, so that for example we may think that the individual $\theta_i$ have themselves been drawn from an underlying distribution, then this relationship destroys the independence and suggests a more complex model, e.g.,

$$x_i \sim N(\theta_i, \sigma^2), \qquad \theta_i \sim N(\varphi, \tau^2)$$

with improper priors $\varphi \sim \text{flat}$, $\tau \sim \text{flat} \in (0, \infty)$. When $n \ge 3$, this is an identified model (i.e. there exists a unique solution for the model's parameters), and the posterior distributions of the individual $\theta_i$ will tend to move, or shrink, away from the maximum likelihood estimates towards their common mean. This shrinkage is a typical behavior in hierarchical Bayes models.
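The shrinkage behaviour can be seen in a few lines. The sketch below uses an empirical-Bayes shortcut (moment estimates of the hyperparameters $\varphi$ and $\tau^2$, then posterior means with $\tau^2$ treated as known); the data and sigma are illustrative.

```python
# Shrinkage of per-unit estimates toward the common mean in a hierarchical model.
import numpy as np

sigma = 1.0
x = np.array([2.1, -0.3, 0.8, 1.5, -1.2, 0.4])    # one observation per theta_i

phi = x.mean()                                     # estimate of the common mean
tau2 = max(x.var(ddof=1) - sigma**2, 1e-6)         # moment estimate of tau^2

# With x_i ~ N(theta_i, sigma^2) and theta_i ~ N(phi, tau^2), the posterior
# mean of theta_i is a precision-weighted average of x_i and phi:
w = tau2 / (tau2 + sigma**2)
theta_post = w * x + (1 - w) * phi

print(theta_post)   # each estimate is pulled from x_i toward the common mean
```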
Restrictions on priors
Some care is needed when choosing priors in a hierarchical model, particularly on scale variables at higher levels of the hierarchy such as the variable $\tau$ in the example. The usual priors such as the Jeffreys prior often do not work, because the posterior distribution will be improper (not normalizable), and estimates made by minimizing the expected loss will be inadmissible.
Definitions and concepts
There are several equivalent definitions of a Bayesian network. For all the following, let G = (V,E) be a directed acyclic graph (or DAG), and let X = (Xv)v ∈ V be a set of random variables indexed by V.
Factorization definition
X is a Bayesian network with respect to G if its joint probability density function (with respect to a product measure) can be written as a product of the individual density functions, conditional on their parent variables:

$$p(x) = \prod_{v \in V} p\left(x_v \mid x_{\operatorname{pa}(v)}\right)$$

where pa(v) is the set of parents of v (i.e. those vertices pointing directly to v via a single edge).
For any set of random variables, the probability of any member of a joint distribution can be calculated from conditional probabilities using the chain rule (given a topological ordering of X) as follows:

$$P(X_1{=}x_1, \dots, X_n{=}x_n) = \prod_{v=1}^{n} P\left(X_v{=}x_v \mid X_{v+1}{=}x_{v+1}, \dots, X_n{=}x_n\right)$$

Compare this with the definition above, which can be written as:

$$P(X_1{=}x_1, \dots, X_n{=}x_n) = \prod_{v=1}^{n} P\left(X_v{=}x_v \mid X_j{=}x_j \text{ for each } X_j \text{ that is a parent of } X_v\right)$$

The difference between the two expressions is the conditional independence of the variables from any of their non-descendants, given the values of their parent variables.
Local Markov property
X is a Bayesian network with respect to G if it satisfies the local Markov property: each variable is conditionally independent of its non-descendants given its parent variables:

$$X_v \perp\!\!\!\perp X_{V \setminus \operatorname{de}(v)} \mid X_{\operatorname{pa}(v)} \quad \text{for all } v \in V$$

where de(v) is the set of descendants and V \ de(v) is the set of non-descendants of v.
This can also be expressed in terms similar to the first definition, as

$$P(X_v{=}x_v \mid X_i{=}x_i \text{ for each } X_i \text{ that is not a descendant of } X_v) = P(X_v{=}x_v \mid X_j{=}x_j \text{ for each } X_j \text{ that is a parent of } X_v)$$
Note that the set of parents is a subset of the set of non-descendants because the graph is acyclic.
Developing Bayesian networks
To develop a Bayesian network, we often first develop a DAG G such that we believe X satisfies the local Markov property with respect to G. Sometimes this is done by creating a causal DAG. We then ascertain the conditional probability distributions of each variable given its parents in G. In many cases, in particular in the case where the variables are discrete, if we define the joint distribution of X to be the product of these conditional distributions, then X is a Bayesian network with respect to G.
Markov blanket
The Markov blanket of a node is the set of nodes consisting of its parents, its children, and any other parents of its children. The Markov blanket renders the node independent of the rest of the network; the joint distribution of the variables in the Markov blanket of a node is sufficient knowledge for calculating the distribution of the node. X is a Bayesian network with respect to G if every node is conditionally independent of all other nodes in the network, given its Markov blanket.
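Computing a Markov blanket from the graph is straightforward. An illustrative helper (not from the source), with the DAG given as a map from each node to its parent set:

```python
# Markov blanket = parents + children + co-parents of the children.
def markov_blanket(parents, v):
    children = {c for c, ps in parents.items() if v in ps}
    co_parents = {p for c in children for p in parents[c]}
    return (parents[v] | children | co_parents) - {v}

# Sprinkler network: R -> S, R -> G, S -> G.
dag = {"R": set(), "S": {"R"}, "G": {"S", "R"}}
print(markov_blanket(dag, "S"))  # {'R', 'G'}: parent R, child G, co-parent R
```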
d-separation
This definition can be made more general by defining the "d"-separation of two nodes, where d stands for directional. Let P be a trail from node u to v, i.e. a loop-free, undirected path between the two nodes (the direction of edges is ignored when constructing the path, and edges along it may point in either direction). Then P is said to be d-separated by a set of nodes Z if any of the following conditions holds:
- P contains a directed chain, $u \cdots \leftarrow m \leftarrow \cdots v$ or $u \cdots \rightarrow m \rightarrow \cdots v$, such that the middle node m is in Z,
- P contains a fork, $u \cdots \leftarrow m \rightarrow \cdots v$, such that the middle node m is in Z, or
- P contains an inverted fork (or collider), $u \cdots \rightarrow m \leftarrow \cdots v$, such that the middle node m is not in Z and no descendant of m is in Z.
Thus u and v are said to be d-separated by Z if all trails between them are d-separated. If u and v are not d-separated, they are called d-connected.
X is a Bayesian network with respect to G if, for any two nodes u, v:

$$X_u \perp\!\!\!\perp X_v \mid X_Z$$

where Z is a set which d-separates u and v. (The Markov blanket is the minimal set of nodes which d-separates node v from all other nodes.)
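d-separation can also be tested algorithmically. The sketch below uses the standard moralized-ancestral-graph reduction (an assumed approach, not described in this article): u and v are d-separated by Z iff they are disconnected after restricting to the ancestors of {u, v} ∪ Z, moralizing, and deleting Z.

```python
# d-separation via the moralized ancestral graph (illustrative sketch).
from itertools import combinations

def d_separated(parents, u, v, Z):
    # 1. Restrict to ancestors of {u, v} union Z (including those nodes).
    anc, stack = set(), [u, v, *Z]
    while stack:
        n = stack.pop()
        if n not in anc:
            anc.add(n)
            stack.extend(parents[n])
    # 2. Moralize: connect co-parents of each node, then drop edge directions.
    adj = {n: set() for n in anc}
    for c in anc:
        ps = [p for p in parents[c] if p in anc]
        for p in ps:
            adj[c].add(p); adj[p].add(c)
        for p, q in combinations(ps, 2):
            adj[p].add(q); adj[q].add(p)
    # 3. Delete Z and test whether u can still reach v.
    seen, stack = set(Z), [u]
    while stack:
        n = stack.pop()
        if n == v:
            return False        # still connected => d-connected
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n])
    return True

chain = {"X": set(), "Y": {"X"}, "Z": {"Y"}}          # X -> Y -> Z
print(d_separated(chain, "X", "Z", set()))    # False: the chain d-connects them
print(d_separated(chain, "X", "Z", {"Y"}))    # True: conditioning on Y blocks it

collider = {"X": set(), "Z": set(), "Y": {"X", "Z"}}  # X -> Y <- Z
print(d_separated(collider, "X", "Z", set()))  # True: the collider blocks the path
print(d_separated(collider, "X", "Z", {"Y"}))  # False: conditioning on Y opens it
```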
Hierarchical models
The term hierarchical model is sometimes considered a particular type of Bayesian network, but has no formal definition. Sometimes the term is reserved for models with three or more levels of random variables; other times, it is reserved for models with latent variables. In general, however, any moderately complex Bayesian network is usually termed "hierarchical".
Causal networks
Although Bayesian networks are often used to represent causal relationships, this need not be the case: a directed edge from u to v does not require that X_v be causally dependent on X_u. This is demonstrated by the fact that Bayesian networks on the two graphs

$$a \rightarrow b \qquad \text{and} \qquad a \leftarrow b$$

are equivalent: that is, they impose exactly the same conditional independence requirements.
A causal network is a Bayesian network with an explicit requirement that the relationships be causal. The additional semantics of the causal networks specify that if a node X is actively caused to be in a given state x (an action written as do(X = x)), then the probability density function changes to the one of the network obtained by cutting the links from the parents of X to X, and setting X to the caused value x. Using these semantics, one can predict the impact of external interventions from data obtained prior to intervention.
Inference complexity and approximation algorithms
In 1990, while working at Stanford University on large bioinformatic applications, Greg Cooper proved that exact inference in Bayesian networks is NP-hard. This result prompted a surge in research on approximation algorithms, with the aim of developing a tractable approximation to probabilistic inference. In 1993, Paul Dagum and Michael Luby proved two surprising results on the complexity of approximating probabilistic inference in Bayesian networks. First, they proved that no tractable deterministic algorithm can approximate probabilistic inference to within an absolute error ε < 1/2. Second, they proved that no tractable randomized algorithm can approximate probabilistic inference to within an absolute error ε < 1/2 with confidence probability greater than 1/2.
At about the same time, Dan Roth proved that exact inference in Bayesian networks is in fact #P-complete (and thus as hard as counting the number of satisfying assignments of a CNF formula) and that approximate inference, even for Bayesian networks with restricted architecture, is NP-hard.
In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered either by topological structural constraints, such as naïve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm was the first provably fast approximation algorithm for probabilistic inference in Bayesian networks with guarantees on the approximation error. It required the minor restriction that the conditional probabilities of the Bayesian network be bounded away from zero and one by 1/p(n), where p(n) is any polynomial in the number of nodes n.
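For contrast with the exact methods, here is a sketch of likelihood weighting, a simple importance-sampling scheme of the kind mentioned earlier; it illustrates the class of randomized approximations discussed here, not the bounded variance algorithm itself.

```python
# Likelihood weighting on the sprinkler network: evidence nodes are clamped
# and weighted by their CPT entries; other nodes are sampled in topological
# order (R, then S, then G).
import random

random.seed(0)
P_R = {True: 0.2, False: 0.8}                       # Pr(R)
P_S_given_R = {True: 0.01, False: 0.4}              # Pr(S=T | R)
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}   # Pr(G=T | S, R)

def weighted_sample(evidence):
    w = 1.0
    if "R" in evidence:
        r = evidence["R"]; w *= P_R[r]
    else:
        r = random.random() < P_R[True]
    p_s = P_S_given_R[r]
    if "S" in evidence:
        s = evidence["S"]; w *= p_s if s else 1 - p_s
    else:
        s = random.random() < p_s
    p_g = P_G_given_SR[(s, r)]
    if "G" in evidence:
        g = evidence["G"]; w *= p_g if g else 1 - p_g
    else:
        g = random.random() < p_g
    return {"R": r, "S": s, "G": g}, w

# Estimate Pr(R=T | G=T) from weighted samples; exact answer is ~0.3577.
num = den = 0.0
for _ in range(100_000):
    sample, w = weighted_sample({"G": True})
    den += w
    num += w if sample["R"] else 0.0
print(num / den)
```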
Applications
Bayesian networks are used for modelling beliefs in computational biology and bioinformatics (gene regulatory networks, protein structure, gene expression analysis, learning epistasis from GWAS data sets), medicine, biomonitoring, document classification, information retrieval, semantic search, image processing, data fusion, decision support systems, engineering, sports betting, gaming, law, study design and risk analysis. There are texts applying Bayesian networks to bioinformatics and to financial and marketing informatics. The publications by Anderson and Vastag (2004), Lauría and Duchessi (2006), Gupta and Kim (2008), Lee et al. (2011), and Cardenas, Voordijk, and Dewulf (2017) have shown applications of Bayesian networks to very disparate fields.
Software
- libDAI A free and open source C++ library for Discrete Approximate Inference in graphical models. libDAI supports such inference methods as exact inference by brute force enumeration, exact inference by junction-tree methods, Mean Field, Loopy Belief Propagation, Gibbs sampling, Conditioned Belief Propagation and some others.
- Mocapy++ A Dynamic Bayesian Network toolkit, implemented in C++. It supports discrete, multinomial, Gaussian, Kent, Von Mises and Poisson nodes. Inference and learning are done by Gibbs sampling/Stochastic-EM.
- WinBUGS One of the first computational implementations of MCMC samplers. No longer maintained and not recommended for active use.
- OpenBUGS (website), further (open source) development of WinBUGS.
- Just another Gibbs sampler (JAGS) (website) Another open source alternative to WinBUGS. Uses Gibbs sampling.
- Stan (software) (website) — Stan is an open-source package for obtaining Bayesian inference using the No-U-Turn sampler, a variant of Hamiltonian Monte Carlo. It is somewhat like BUGS, but with a different language for expressing models and a different sampler for sampling from their posteriors. RStan is the R interface to Stan. It is maintained by Andrew Gelman and colleagues.
- Direct Graphical Models (DGM) is an open source C++ library, implementing various tasks in probabilistic graphical models with pairwise dependencies.
- OpenMarkov, open source software and API implemented in Java
- Graphical Models Toolkit (GMTK) — GMTK is an open source, publicly available toolkit for rapidly prototyping statistical models using dynamic graphical models (DGMs) and dynamic Bayesian networks (DBNs). GMTK can be used for applications and research in speech and language processing, bioinformatics, activity recognition, and any time series application.
- PyMC — PyMC is a Python module that implements Bayesian statistical models and fitting algorithms, including Markov chain Monte Carlo. Its flexibility and extensibility make it applicable to a large suite of problems. Along with core sampling functionality, PyMC includes methods for summarizing output, plotting, goodness-of-fit and convergence diagnostics.
- GeNIe & SMILE by BayesFusion — SMILE is a library for BNs, DBNs, IDs, equation-based models, and learning. SMILE is available for multiple platforms, is written in C++, and has wrappers for several programming environments through Java and .NET interfaces. GeNIe is a graphical front end for SMILE, running natively on Windows, and on Linux and Mac under Wine.
- SamIam (website), a Java-based system with GUI and Java API
- Bayes Server - User Interface and API for Bayesian networks, includes support for time series and sequences
- Blip - Blip is a web interface that offers structural learning of Bayesian networks directly from discrete data. It can handle datasets with thousands of variables, and offers both unbounded and treewidth-bounded structure learning.
- Belief and Decision Networks on AIspace
- BayesiaLab by Bayesia
- Netica by Norsys
- dVelox by Apara Software
- System Modeler by Inatas AB
- UnBBayes by GIA-UnB (Artificial Intelligence Group, University of Brasília)
- Face2gene using the Facial Dysmorphology Novel Analysis (FDNA) technology
- Uninet — Continuous Bayesian networks modelling continuous variables, with a wide range of parametric and non-parametric marginal distributions, and dependence with copula. Hybrid discrete continuous models are also supported. Free for non-commercial use. Developed by LightTwist Software.
- Tetrad, an open-source project written in Java and developed by the Department of Philosophy at Carnegie Mellon University, that deals with causal models and statistical data.
History
The term Bayesian network was coined by Judea Pearl in 1985 to emphasize three aspects:
- The often subjective nature of the input information.
- The reliance on Bayes' conditioning as the basis for updating information.
- The distinction between causal and evidential modes of reasoning, which underscores Thomas Bayes' posthumously published paper of 1763.
In the late 1980s Judea Pearl's text Probabilistic Reasoning in Intelligent Systems and Richard E. Neapolitan's text Probabilistic Reasoning in Expert Systems summarized the properties of Bayesian networks and established Bayesian networks as a field of study.
Informal variants of such networks were first used by legal scholar John Henry Wigmore, in the form of Wigmore charts, to analyse trial evidence in 1913. Another variant, called path diagrams, was developed by the geneticist Sewall Wright and used in social and behavioral sciences (mostly with linear parametric models).
See also
- Bayesian programming
- Causal inference
- Causal loop diagram
- Chow–Liu tree
- Computational intelligence
- Computational phylogenetics
- Deep belief network
- Dempster–Shafer theory – a generalization of Bayes' theorem
- Expectation–maximization algorithm
- Factor graph
- Hierarchical temporal memory
- Kalman filter
- Memory-prediction framework
- Mixture distribution
- Mixture model
- Naive Bayes classifier
- Sensor fusion
- Sequence alignment
- Structural equation modeling
- Subjective logic
- Variable-order Bayesian network
References
- Pearl, Judea (2000). Causality: Models, Reasoning, and Inference. Cambridge University Press. ISBN 0-521-77362-8. OCLC 42291253.
- "The Back-Door Criterion" (PDF). Retrieved 2014-09-18.
- "d-Separation without Tears" (PDF). Retrieved 2014-09-18.
- Pearl, J. (1994). "A Probabilistic Calculus of Actions". In Lopez de Mantaras, R.; Poole, D. UAI'94 Proceedings of the Tenth International Conference on Uncertainty in Artificial Intelligence. San Mateo, CA: Morgan Kaufmann. pp. 454–462. ISBN 1-55860-332-8.
- I. Shpitser, J. Pearl, "Identification of Conditional Interventional Distributions" In R. Dechter and T.S. Richardson (Eds.), Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence, 437–444, Corvallis, OR: AUAI Press, 2006.
- Rebane, G. and Pearl, J., "The Recovery of Causal Poly-trees from Statistical Data," Proceedings, 3rd Workshop on Uncertainty in AI, (Seattle, WA) pages 222–228, 1987
- Spirtes, P.; Glymour, C. (1991). "An algorithm for fast recovery of sparse causal graphs" (PDF). Social Science Computer Review. 9 (1): 62–72. doi:10.1177/089443939100900106.
- Spirtes, Peter; Glymour, Clark N.; Scheines, Richard (1993). Causation, Prediction, and Search (1st ed.). Springer-Verlag. ISBN 978-0-387-97979-3.
- Verma, Thomas; Pearl, Judea (1991). "Equivalence and synthesis of causal models". In Bonissone, P.; Henrion, M.; Kanal, L.N.; Lemmer, J.F. UAI '90 Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence. Elsevier. pp. 255–270. ISBN 0-444-89264-8.
- Friedman, Nir; Geiger, Dan; Goldszmidt, Moises (November 1997). "Bayesian Network Classifiers". Machine Learning. 29 (2-3): 131–163. doi:10.1023/A:1007465528199. Retrieved 24 February 2015.
- Friedman, Nir; Linial, Michal; Nachman, Iftach; Pe'er, Dana (August 2000). "Using Bayesian Networks to Analyze Expression Data". Journal of Computational Biology. 7 (3-4): 601–620. PMID 11108481. doi:10.1089/106652700750050961. Retrieved 24 February 2015.
- Cussens, James (2011). "Bayesian network learning with cutting planes" (PDF). Proceedings of the 27th Conference Annual Conference on Uncertainty in Artificial Intelligence: 153–160.
- M. Scanagatta, C. P. de Campos, G. Corani, and M. Zaffalon. Learning Bayesian Networks with Thousands of Variables. In NIPS-15: Advances in Neural Information Processing Systems 28, pages 1855–1863, 2015.
- Petitjean, F.; Webb, G.I.; Nicholson, A.E. (2013). Scaling log-linear analysis to high-dimensional data (PDF). International Conference on Data Mining. Dallas, TX, USA: IEEE.
- M. Scanagatta, G. Corani, C. P. de Campos, and M. Zaffalon. Learning Treewidth-Bounded Bayesian Networks with Thousands of Variables. In NIPS-16: Advances in Neural Information Processing Systems 29, 2016.
- Russell & Norvig 2003, p. 496.
- Russell & Norvig 2003, p. 499.
- Neapolitan, Richard E. (2004). Learning Bayesian networks. Prentice Hall. ISBN 978-0-13-012534-7.
- Geiger, Dan; Verma, Thomas; Pearl, Judea (1990). "Identifying independence in Bayesian Networks" (PDF). Networks. 20: 507–534.
- Richard Scheines, D-separation
- Gregory F. Cooper (1990). "The Computational Complexity of Probabilistic Inference Using Bayesian Belief Networks" (PDF). Artificial Intelligence. 42: 393–405. doi:10.1016/0004-3702(90)90060-d.
- Paul Dagum; Michael Luby (1993). "Approximating probabilistic inference in Bayesian belief networks is NP-hard". Artificial Intelligence. 60 (1): 141–153. doi:10.1016/0004-3702(93)90036-b.
- D. Roth, On the hardness of approximate reasoning, IJCAI (1993)
- D. Roth, On the hardness of approximate reasoning, Artificial Intelligence (1996)
- Paul Dagum; Michael Luby (1997). "An optimal approximation algorithm for Bayesian inference". Artificial Intelligence. 93 (1-2): 1–27. doi:10.1016/s0004-3702(97)00013-1.
- Friedman, N.; Linial, M.; Nachman, I.; Pe'er, D. (2000). "Using Bayesian Networks to Analyze Expression Data". Journal of Computational Biology. 7 (3–4): 601–620. PMID 11108481. doi:10.1089/106652700750050961.
- Jiang, X.; Neapolitan, R.E.; Barmada, M.M.; Visweswaran, S. (2011). "Learning Genetic Epistasis using Bayesian Network Scoring Criteria". BMC Bioinformatics. 12: 89. PMID 21453508. doi:10.1186/1471-2105-12-89.
- J. Uebersax (2004). Genetic Counseling and Cancer Risk Modeling: An Application of Bayes Nets. Marbella, Spain: Ravenpack International.
- Jiang X, Cooper GF (July–August 2010). "A Bayesian spatio-temporal method for disease outbreak detection". J Am Med Inform Assoc. 17 (4): 462–71. PMID 20595315. doi:10.1136/jamia.2009.000356.
- Luis M. de Campos; Juan M. Fernández-Luna; Juan F. Huete (2004). "Bayesian networks and information retrieval: an introduction to the special issue". Information Processing & Management. Elsevier. 40 (5): 727–733. doi:10.1016/j.ipm.2004.03.001.
- Christos L. Koumenides and Nigel R. Shadbolt. 2012. Combining link and content-based information in a Bayesian inference model for entity search. In Proceedings of the 1st Joint International Workshop on Entity-Oriented and Semantic Search (JIWES '12). ACM, New York, NY, USA, Article 3, 6 pages. doi:10.1145/2379307.2379310
- F.J. Díez; J. Mira; E. Iturralde; S. Zubillaga (1997). "DIAVAL, a Bayesian expert system for echocardiography". Artificial Intelligence in Medicine. 10 (1): 59–73. PMID 9177816. doi:10.1016/s0933-3657(97)00384-9.
- Constantinou, Anthony; Fenton, N.; Neil, M. (2012). "pi-football: A Bayesian network model for forecasting Association Football match outcomes". Knowledge-Based Systems. 36: 322–339. doi:10.1016/j.knosys.2012.07.008.
- Constantinou, Anthony; Fenton, N.; Neil, M. (2013). "Profiting from an inefficient Association Football gambling market: Prediction, Risk and Uncertainty using Bayesian networks.". Knowledge-Based Systems. 50: 60–86. doi:10.1016/j.knosys.2013.05.008.
- G. A. Davis (2003). "Bayesian reconstruction of traffic accidents". Law, Probability and Risk. 2 (2): 69–89. doi:10.1093/lpr/2.2.69.
- J. B. Kadane & D. A. Schum (1996). A Probabilistic Analysis of the Sacco and Vanzetti Evidence. New York: Wiley. ISBN 0-471-14182-8.
- O. Pourret, P. Naim & B. Marcot (2008). Bayesian Networks: A Practical Guide to Applications. Chichester, UK: Wiley. ISBN 978-0-470-06030-8.
- Karvanen, Juha (2014). "Study design in causal models". Scandinavian Journal of Statistics. 42: 361–377. doi:10.1111/sjos.12110.
- Trucco, P.; Cagno, E.; Ruggeri, F.; Grande, O. (2008). "A Bayesian Belief Network modelling of organisational factors in risk analysis: A case study in maritime transportation". Reliability Engineering & System Safety. 93 (6): 845–856. doi:10.1016/j.ress.2007.03.035.
- Cárdenas, IC; Al-Jibouri, SHS; Halman, JIM; van Tol, FA (2013). "Capturing and Integrating Knowledge for Managing Risks in Tunnel Works". Risk Analysis. 33 (1): 92–108. doi:10.1111/j.1539-6924.2012.01829.x.
- Neapolitan, Richard (2009). Probabilistic Methods for Bioinformatics. Burlington, MA: Morgan Kaufmann. p. 406. ISBN 9780123704764.
- Grau, J.; Ben-Gal, I.; Posch, S.; Grosse, I. (2006). "VOMBAT: Prediction of Transcription Factor Binding Sites using Variable Order Bayesian Trees" (PDF). Nucleic Acids Research. 34 (Web Server issue): W529–W533.
- Neapolitan, Richard & Xia Jiang (2007). Probabilistic Methods for Financial and Marketing Informatics. Burlington, MA: Morgan Kaufmann. p. 432. ISBN 0123704774.
- Shmilovici, A.; Kahiri, Y.; Ben-Gal, I.; Hauser, S. (2009). "Measuring the Efficiency of the Intraday Forex Market with a Universal Data Compression Algorithm" (PDF). Computational Economics. 33 (2): 131–154.
- Anderson, RD; Vastag, G (2004). "Causal modeling alternatives in operations research: Overview and application". European Journal of Operational Research. 156 (1): 92–109. doi:10.1016/S0377-2217(02)00904-9.
- Lauría, EJ; Duchessi, PJ (2006). "A Bayesian belief network for IT implementation decision support". Decision Support Systems. 42 (3): 1573–1588. doi:10.1016/j.dss.2006.01.003.
- Gupta, S; Kim, HW (2008). "Linking structural equation modeling to Bayesian networks: Decision support for customer retention in virtual communities". European Journal of Operational Research. 190 (3): 818–833. doi:10.1016/j.ejor.2007.05.054.
- Lee, KC; Lee, DS; Seo, YW; Jo, NY (2011). "Antecedents of team creativity and the mediating effect of knowledge sharing: bayesian network approach to PLS modeling as an ancillary role". Intelligent Information and Database Systems: 545–555. doi:10.1007/978-3-642-20042-7_55.
- Cardenas, IC; Voordijk, H; Dewulf, G (2017). "Beyond theory: Towards a probabilistic causation model to support project governance in infrastructure projects". International Journal of Project Management. doi:10.1016/j.ijproman.2017.01.002.
- Pearl, J. (1985). Bayesian Networks: A Model of Self-Activated Memory for Evidential Reasoning (UCLA Technical Report CSD-850017). Proceedings of the 7th Conference of the Cognitive Science Society, University of California, Irvine, CA. pp. 329–334. Retrieved 2009-05-01.
- Bayes, T.; Price, Mr. (1763). "An Essay towards solving a Problem in the Doctrine of Chances". Philosophical Transactions of the Royal Society. 53: 370–418. doi:10.1098/rstl.1763.0053.
- Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems. San Francisco, CA: Morgan Kaufmann. ISBN 1558604790.
- Neapolitan, Richard E. (1989). Probabilistic reasoning in expert systems: theory and algorithms. Wiley. ISBN 978-0-471-61840-9.
- Wright, S. (1921). "Correlation and Causation" (PDF). Journal of Agricultural Research. 20 (7): 557–585.
Further reading
- Ben-Gal, Irad (2007). "Bayesian Networks". In Ruggeri, Fabrizio; Kennett, Ron S.; Faltin, Frederick W. Encyclopedia of Statistics in Quality and Reliability (PDF). John Wiley & Sons. ISBN 978-0-470-01861-3. doi:10.1002/9780470061572.eqr089.
- Bertsch McGrayne, Sharon. The Theory That Would Not Die. New Haven: Yale University Press.
- Borgelt, Christian; Kruse, Rudolf (March 2002). Graphical Models: Methods for Data Analysis and Mining. Chichester, UK: Wiley. ISBN 0-470-84337-3.
- Borsuk, Mark Edward (2008). "Ecological informatics: Bayesian networks". In Jørgensen, Sven Erik; Fath, Brian. Encyclopedia of Ecology. Elsevier. ISBN 978-0-444-52033-3.
- Castillo, Enrique; Gutiérrez, José Manuel; Hadi, Ali S. (1997). "Learning Bayesian Networks". Expert Systems and Probabilistic Network Models. Monographs in computer science. New York: Springer-Verlag. pp. 481–528. ISBN 0-387-94858-9.
- Comley, Joshua W.; Dowe, David L. (June 2003). "General Bayesian networks and asymmetric languages". Proceedings of the 2nd Hawaii International Conference on Statistics and Related Fields. Hawaii.
- Comley, Joshua W.; Dowe, David L. (2005). "Minimum Message Length and Generalized Bayesian Nets with Asymmetric Languages". In Grünwald, Peter D.; Myung, In Jae; Pitt, Mark A. Advances in Minimum Description Length: Theory and Applications. Neural information processing series. Cambridge, Massachusetts: Bradford Books (MIT Press) (published April 2005). pp. 265–294. ISBN 0-262-07262-9. (This paper puts decision trees in internal nodes of Bayes networks using Minimum Message Length (MML). Camera-ready final version submitted on 15 October 2003. An earlier version is Comley and Dowe (2003).)
- Darwiche, Adnan (2009). Modeling and Reasoning with Bayesian Networks. Cambridge University Press. ISBN 978-0521884389.
- Dowe, David L. (2010). MML, hybrid Bayesian network graphical models, statistical consistency, invariance and uniqueness, in Handbook of Philosophy of Science (Volume 7: Handbook of Philosophy of Statistics), Elsevier,  ISBN 978-0-444-51862-0, pp 901–982.
- Fenton, Norman; Neil, Martin E. (November 2007). Managing Risk in the Modern World: Applications of Bayesian Networks – A Knowledge Transfer Report from the London Mathematical Society and the Knowledge Transfer Network for Industrial Mathematics. London (England): London Mathematical Society.
- Fenton, Norman; Neil, Martin E. (July 23, 2004). "Combining evidence in risk analysis using Bayesian Networks" (PDF). Safety Critical Systems Club Newsletter. 13 (4). Newcastle upon Tyne, England. pp. 8–13. Archived from the original (PDF) on 2007-09-27.
- Andrew Gelman; John B Carlin; Hal S Stern; Donald B Rubin (2003). "Part II: Fundamentals of Bayesian Data Analysis: Ch.5 Hierarchical models". Bayesian Data Analysis. CRC Press. pp. 120–. ISBN 978-1-58488-388-3.
- Heckerman, David (March 1, 1995). "Tutorial on Learning with Bayesian Networks". In Jordan, Michael Irwin. Learning in Graphical Models. Adaptive Computation and Machine Learning. Cambridge, Massachusetts: MIT Press (published 1998). pp. 301–354. ISBN 0-262-60032-3.
- Also appears as Heckerman, David (March 1997). "Bayesian Networks for Data Mining". Data Mining and Knowledge Discovery. 1 (1): 79–119. doi:10.1023/A:1009730122752.
- An earlier version appears as Technical Report MSR-TR-95-06, Microsoft Research, March 1, 1995. The paper is about both parameter and structure learning in Bayesian networks.
- Jensen, Finn V; Nielsen, Thomas D. (June 6, 2007). Bayesian Networks and Decision Graphs. Information Science and Statistics series (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-68281-5.
- Karimi, Kamran; Hamilton, Howard J. (2000). "Finding temporal relations: Causal bayesian networks vs. C4. 5" (PDF). Twelfth International Symposium on Methodologies for Intelligent Systems.
- Korb, Kevin B.; Nicholson, Ann E. (December 2010). Bayesian Artificial Intelligence. CRC Computer Science & Data Analysis (2nd ed.). Chapman & Hall (CRC Press). ISBN 1-58488-387-1.
- Lunn, David; Spiegelhalter, David; Thomas, Andrew; Best, Nicky; et al. (November 2009). "The BUGS project: Evolution, critique and future directions". Statistics in Medicine. 28 (25): 3049–3067. PMID 19630097. doi:10.1002/sim.3680.
- Neil, Martin; Fenton, Norman E.; Tailor, Manesh (August 2005). Greenberg, Michael R., ed. "Using Bayesian Networks to Model Expected and Unexpected Operational Losses" (pdf). Risk Analysis. 25 (4): 963–972. PMID 16268944. doi:10.1111/j.1539-6924.2005.00641.x.
- Pearl, Judea (September 1986). "Fusion, propagation, and structuring in belief networks". Artificial Intelligence. 29 (3): 241–288. doi:10.1016/0004-3702(86)90072-X.
- Pearl, Judea (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Representation and Reasoning Series (2nd printing ed.). San Francisco, California: Morgan Kaufmann. ISBN 0-934613-73-7.
- Pearl, Judea; Russell, Stuart (November 2002). "Bayesian Networks". In Arbib, Michael A. Handbook of Brain Theory and Neural Networks. Cambridge, Massachusetts: Bradford Books (MIT Press). pp. 157–160. ISBN 0-262-01197-2.
- Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2.
- Zhang, Nevin Lianwen; Poole, David (May 1994). "A simple approach to Bayesian network computations". Proceedings of the Tenth Biennial Canadian Artificial Intelligence Conference (AI-94). Banff, Alberta: 171–178. This paper presents variable elimination for belief networks.
External links
- Bayesian Networks and BayesiaLab - A practical introduction for researchers by Stefan Conrady and Lionel Jouffe
- Bayesian networks without tears by Eugene Charniak. AI Magazine, Winter 1991.
- Computational Intelligence: A Methodological Introduction by Kruse, Borgelt, Klawonn, Moewes, Steinbrecher, Held, 2013, Springer, ISBN 9781447150121
- Graphical Models - Representations for Learning, Reasoning and Data Mining, 2nd Edition, by Borgelt, Steinbrecher, Kruse, 2009, J. Wiley & Sons, ISBN 9780470749562
- A tutorial on learning with Bayesian Networks
- An Introduction to Bayesian Networks and their Contemporary Applications
- On-line Tutorial on Bayesian nets and probability
- Web app to create Bayesian nets and run them with a Monte Carlo method
- Continuous Time Bayesian Networks
- Bayesian Networks: Explanation and Analogy
- A live tutorial on learning Bayesian networks
- A hierarchical Bayes Model for handling sample heterogeneity in classification problems, provides a classification model taking into consideration the uncertainty associated with measuring replicate samples.
- Hierarchical Naive Bayes Model for handling sample uncertainty, shows how to perform classification and learning with continuous and discrete variables with replicated measurements.