# Bayesian inference in phylogeny


Bayesian inference of phylogeny combines a likelihood function based on a model of evolution with prior probabilities to produce a quantity called the posterior probability of trees, and selects the most probable phylogenetic tree for the given data. The Bayesian approach has become popular due to advances in computing speed and the integration of Markov chain Monte Carlo (MCMC) algorithms. Bayesian inference has a number of applications in molecular phylogenetics and systematics.

## Background and bases of Bayesian inference of phylogeny

*Figure: Bayes' theorem.*
*Figure: a metaphor illustrating the steps of an MCMC method.*

Bayesian inference refers to a probabilistic method developed by the Reverend Thomas Bayes and based on Bayes' theorem. Published posthumously in 1763, it was the first expression of inverse probability and the basis of Bayesian inference. Independently, and unaware of Bayes' work, Pierre-Simon Laplace developed Bayes' theorem in 1774.[1]

Bayesian inference was widely used through the 1800s, until a shift to frequentist inference in the early 1900s, driven largely by computational limitations. Based on Bayes' theorem, the Bayesian approach combines the prior probability of a tree P(A) with the likelihood of the data P(B|A) to produce a posterior probability distribution on trees P(A|B). The posterior probability of a tree indicates the probability that the tree is correct, and the tree with the highest posterior probability is the one chosen as the best representation of the phylogeny. It was the introduction of Markov chain Monte Carlo (MCMC) methods by Nicholas Metropolis in 1953 that revolutionized Bayesian inference, and by the 1990s it had become widely used among phylogeneticists. Some of its advantages over traditional maximum parsimony and maximum likelihood methods are the ability to account for phylogenetic uncertainty, the use of prior information, and the incorporation of complex models of evolution whose computational cost limited traditional analyses. Although it sidesteps complex analytical operations, the posterior probability still involves a summation over all trees and, for each tree, integration over all possible combinations of substitution-model parameter values and branch lengths.

MCMC methods can be described in three steps: first, a new state for the Markov chain is proposed using a stochastic mechanism. Second, the acceptance probability of this new state is calculated. Third, a uniform random number on (0,1) is drawn; if it is less than the acceptance probability, the new state is accepted and the state of the chain is updated. This process is run for thousands or millions of iterations. The proportion of the time a single tree is visited during the course of the chain is a valid approximation of its posterior probability.
Some of the most common algorithms used in MCMC methods include the Metropolis-Hastings algorithm, Metropolis-coupled MCMC (MC³) and the LOCAL algorithm of Larget and Simon.

### Metropolis-Hastings algorithm

One of the most commonly used MCMC methods is the Metropolis-Hastings algorithm,[2] a modified version of the original Metropolis algorithm.[3] It is a widely used method for sampling randomly from complicated, multi-dimensional probability distributions. The Metropolis algorithm proceeds as follows:[4]

1. A tree, Ti, is chosen as a starting point.
2. A neighbour tree, Tj, is selected from the collection of trees.
3. The ratio, R, of the probabilities (or probability density functions) of the new tree (Tj) and the old tree (Ti) is computed: R = f(Tj)/f(Ti).
4. If R ≥ 1, the new tree (Tj) is accepted as the current tree.
5. If R < 1, a uniform random number between 0 and 1 is drawn. If it is less than R, the new tree (Tj) is accepted as the current tree.
6. If the random number is greater than R, the new tree (Tj) is rejected and the old one (Ti) is kept as the current tree.
7. The process is repeated from step 2 n times.

The algorithm keeps running until it reaches an equilibrium distribution. It also assumes that the probability of proposing a new tree Tj when the chain is at the old tree Ti equals the probability of proposing Ti when the chain is at Tj; when this is not the case, Hastings corrections are applied. The aim of the Metropolis-Hastings algorithm is to produce a collection of states with a determined distribution until the Markov process reaches a stationary distribution. The algorithm has two components: (1) a potential transition from one state to another (i → j) using a transition probability function qij, and (2) movement of the chain to state j with probability αij, remaining at i with probability 1 − αij.[5]
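The steps above can be sketched in a few lines of Python. Tree proposal and scoring are abstracted away: `f` is any unnormalized posterior and `neighbors` any symmetric proposal, so a toy five-state ring stands in for tree space here (the function names and the toy target are illustrative, not from the original).

```python
import random

def metropolis(f, neighbors, x0, n_iter=50000, seed=0):
    """Metropolis sampler. f is an unnormalized target (e.g. a tree's
    posterior probability up to a constant); neighbors(x) returns the
    states reachable from x. The proposal must be symmetric, otherwise
    a Hastings correction to R would be required."""
    rng = random.Random(seed)
    x, visits = x0, {}
    for _ in range(n_iter):
        y = rng.choice(neighbors(x))        # step 2: pick a neighbouring state
        r = f(y) / f(x)                     # step 3: R = f(Tj) / f(Ti)
        if r >= 1 or rng.random() < r:      # steps 4-5: accept
            x = y                           # step 6 (reject) keeps x unchanged
        visits[x] = visits.get(x, 0) + 1    # visit frequency approximates the posterior
    return visits

# toy state space standing in for tree space: five states on a ring
weights = {0: 1.0, 1: 2.0, 2: 4.0, 3: 2.0, 4: 1.0}
counts = metropolis(lambda s: weights[s],
                    lambda s: [(s - 1) % 5, (s + 1) % 5], x0=0)
```

After many iterations the visit frequencies approach the normalized weights; for instance, state 2 should be visited roughly 4/10 of the time, mirroring how the proportion of time a tree is visited approximates its posterior probability.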

### Metropolis-coupled MCMC

The Metropolis-coupled MCMC algorithm (MC³)[6] was proposed to address a practical concern: when the target distribution has multiple local peaks separated by low valleys, as is known to occur in tree space, the Markov chain has difficulty moving between peaks. This is the case during heuristic tree search under maximum parsimony (MP), maximum likelihood (ML), and minimum evolution (ME) criteria, and the same can be expected for stochastic tree search using MCMC. The result is that samples do not correctly approximate the posterior density. MC³ improves the mixing of Markov chains in the presence of multiple local peaks in the posterior density. It runs multiple ($m$) chains in parallel, each for $n$ iterations and with different stationary distributions $\pi_j(\cdot)$, $j = 1, 2, \ldots, m$, where the first, $\pi_1 = \pi$, is the target density, while $\pi_j$, $j = 2, 3, \ldots, m$, are chosen to improve mixing. For example, one can choose incremental heating of the form:

$\pi_j(\theta) = \pi(\theta)^{1/[1+\lambda(j-1)]}, \ \ \lambda > 0,$

so that the first chain is the cold chain with the correct target density, while chains $2, 3, \ldots, m$ are heated chains. Note that raising the density $\pi(\cdot)$ to the power $1/T$ with $T>1$ has the effect of flattening out the distribution, similar to heating a metal. In such a distribution, it is easier to traverse between peaks (separated by valleys) than in the original distribution. After each iteration, a swap of states between two randomly chosen chains is proposed through a Metropolis-type step. Let $\theta^{(j)}$ be the current state in chain $j$, $j = 1, 2, \ldots, m$. A swap between the states of chains $i$ and $j$ is accepted with probability:

$\alpha = \frac{\pi_i(\theta^{(j)})\pi_j(\theta^{(i)})}{\pi_i(\theta^{(i)})\pi_j(\theta^{(j)})}$

At the end of the run, output from only the cold chain is used, while the samples from the hot chains are discarded. Heuristically, the hot chains will visit the local peaks rather easily, and swapping states between chains will let the cold chain occasionally jump valleys, leading to better mixing. However, if $\pi_i(\theta)/\pi_j(\theta)$ is unstable, proposed swaps will seldom be accepted. This is the reason for using several chains which differ only incrementally.

An obvious disadvantage of the algorithm is that $m$ chains are run and only one chain is used for inference. For this reason, $\mathrm{MC}^3$ is ideally suited for implementation on parallel machines, since each chain will in general require the same amount of computation per iteration.
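A minimal MC³ sketch in Python follows, using an illustrative one-dimensional bimodal target in place of a multi-peaked tree posterior (all names are hypothetical). With $\pi_j = \pi^{\beta_j}$, the swap acceptance ratio above reduces on the log scale to $(\beta_i - \beta_j)\,(\log\pi(\theta^{(j)}) - \log\pi(\theta^{(i)}))$.

```python
import math, random

def mc3(log_pi, n_chains=4, lam=1.0, n_iter=20000, step=0.8, seed=1):
    """Metropolis-coupled MCMC: chain j targets pi^beta_j with incremental
    heating beta_j = 1/(1 + lam*(j-1)); chain 1 (beta = 1) is the cold
    chain and is the only one whose samples are kept."""
    rng = random.Random(seed)
    betas = [1.0 / (1.0 + lam * j) for j in range(n_chains)]
    theta = [0.0] * n_chains
    cold = []
    for _ in range(n_iter):
        # ordinary Metropolis update within every chain
        for j in range(n_chains):
            prop = theta[j] + rng.gauss(0.0, step)
            if math.log(rng.random()) < betas[j] * (log_pi(prop) - log_pi(theta[j])):
                theta[j] = prop
        # Metropolis-type swap between two randomly chosen chains
        i, j = rng.sample(range(n_chains), 2)
        log_alpha = (betas[i] - betas[j]) * (log_pi(theta[j]) - log_pi(theta[i]))
        if math.log(rng.random()) < log_alpha:
            theta[i], theta[j] = theta[j], theta[i]
        cold.append(theta[0])
    return cold

# bimodal target with peaks at -2 and +2, separated by a deep valley;
# the cold chain alone would rarely cross, but swaps from heated chains let it
log_pi = lambda x: math.log(math.exp(-(x + 2.0) ** 2) + math.exp(-(x - 2.0) ** 2))
samples = mc3(log_pi)
```

The heated chains see a flattened version of the valley (its log-depth scaled by $\beta_j$), so they cross freely, and accepted swaps hand those crossings down to the cold chain.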

### The LOCAL algorithm of Larget and Simon

The LOCAL algorithm[7] offers a computational advantage over previous methods and demonstrates that a Bayesian approach can assess phylogenetic uncertainty in a computationally practical way for larger trees. The LOCAL algorithm is an improvement on the GLOBAL algorithm presented in Mau, Newton and Larget (1999),[8] in which all branch lengths are changed in every cycle. The LOCAL algorithm modifies the tree by selecting an internal branch of the tree at random. The nodes at the ends of this branch are each connected to two other branches, and one of each pair is chosen at random. Imagine taking these three selected edges and stringing them like a clothesline from left to right, where the direction (left/right) is also selected at random. The two endpoints of the first branch selected will each have a sub-tree hanging like a piece of clothing strung to the line. The algorithm proceeds by multiplying the three selected branches by a common random amount, akin to stretching or shrinking the clothesline. Finally, the leftmost of the two hanging sub-trees is disconnected and reattached to the clothesline at a location selected uniformly at random. This is the candidate tree.

Suppose we began by selecting the internal branch with length $t_8$ that separates taxa $A$ and $B$ from the rest. Suppose also that we have (randomly) selected branches with lengths $t_1$ and $t_9$ from each side, and that we oriented these branches. Let $m = t_1+t_8+t_9$ be the current length of the clothesline. We select the new length to be $m^{\star} = m\exp(\lambda(U_1-0.5))$, where $U_1$ is a uniform random variable on $(0,1)$. Then, for the LOCAL algorithm, the acceptance probability can be computed to be:

$\frac{h(y)}{h(x)} \times \frac{{m^{\star}}^3}{m^3}$

#### Assessing convergence

Suppose we want to estimate a branch length of a 2-taxon tree under the JC model, in which $n_1$ sites are unvaried and $n_2$ are variable. Assume an exponential prior distribution with rate $\lambda$, whose density is $p(t) = \lambda e^{-\lambda t}$. The probabilities of the possible site patterns are:

$\frac{1}{4}\left(\frac{1}{4}+\frac{3}{4}e^{-4t/3}\right)$

for unvaried sites, and

$\frac{1}{4}\left(\frac{1}{4}-\frac{1}{4}e^{-4t/3}\right)$

for variable sites.

Thus the unnormalized posterior distribution is:

$h(t) = \left(\frac{1}{4}\right)^{n_1+n_2}\left(\frac{1}{4}+\frac{3}{4}e^{-4t/3}\right)^{n_1}\left(\frac{1}{4}-\frac{1}{4}e^{-4t/3}\right)^{n_2}\,\lambda e^{-\lambda t}$

The branch length is updated by choosing a new value uniformly at random from a window of half-width $w$ centered at the current value:

$t^\star = |t+U|$

where $U$ is uniformly distributed between $-w$ and $w$. The acceptance probability is:

$h(t^\star)/h(t)$

Example: $n_1 = 70$, $n_2 = 30$. We will compare results for two values of $w$: $w = 0.1$ and $w = 0.5$. In each case, we begin with an initial length of $5$ and update the length $2000$ times.
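This worked example translates directly into code. The Python sketch below (function names are ours) works with the log of $h(t)$ to avoid underflow of the $(1/4)^{n_1+n_2}$ factor, uses the reflecting proposal $t^\star = |t+U|$, and runs the chain as specified; $\lambda = 1$ is an assumed prior rate, since the text leaves it unspecified.

```python
import math, random

def log_h(t, n1=70, n2=30, lam=1.0):
    """Log of the unnormalized posterior h(t) for the 2-taxon JC example.
    The constant factor (1/4)^(n1+n2) is dropped; lam = 1 is an assumed
    rate for the exponential prior."""
    if t <= 0.0:
        return float("-inf")
    e = math.exp(-4.0 * t / 3.0)
    return (n1 * math.log(0.25 + 0.75 * e)      # unvaried sites
            + n2 * math.log(0.25 - 0.25 * e)    # variable sites
            + math.log(lam) - lam * t)          # exponential prior density

def run_chain(w, n_iter=2000, t0=5.0, seed=0):
    """Sliding-window update t* = |t + U|, U ~ Uniform(-w, w)."""
    rng = random.Random(seed)
    t, samples = t0, []
    for _ in range(n_iter):
        t_star = abs(t + rng.uniform(-w, w))
        # accept with probability min(1, h(t*)/h(t))
        if math.log(rng.random()) < log_h(t_star) - log_h(t):
            t = t_star
        samples.append(t)
    return samples

narrow = run_chain(w=0.1)   # mixes slowly from the poor start t0 = 5
wide = run_chain(w=0.5)     # reaches the high-density region much faster
```

With $n_1 = 70$ and $n_2 = 30$, the maximum-likelihood branch length is $\hat{t} = -\tfrac{3}{4}\ln(1 - \tfrac{4}{3}(0.3)) \approx 0.38$; after burn-in both chains should fluctuate around a similar value, the wider window simply getting there in fewer updates.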

## Brief introduction to Maximum Parsimony and Maximum Likelihood

*Figure: tiger phylogenetic relationships, with bootstrap values shown on branches (source: Wikimedia Commons).*
*Figure: example of long-branch attraction; the longer branches (A and C) appear to be more closely related than they are.*

There is a diversity of approaches to reconstructing phylogenetic trees, each offering advantages and disadvantages, and there is no straightforward answer to “what is the best method?”. Maximum parsimony (MP) and maximum likelihood (ML) are traditional methods widely used for the estimation of phylogenies, and both use character information directly, as Bayesian methods do.

Maximum parsimony recovers one or more optimal trees based on a matrix of discrete characters for a certain group of taxa, and it does not require a model of evolutionary change. MP gives the simplest explanation for a given set of data, reconstructing the phylogenetic tree that includes as few changes across the sequences as possible, i.e. the one that requires the fewest evolutionary steps to explain the relationships between taxa. The support of the tree branches is represented by bootstrap percentages. For the same reason it has been widely used, its simplicity, MP has also received criticism and has been pushed into the background by ML and Bayesian methods. MP presents several problems and limitations. As shown by Felsenstein (1978), MP can be statistically inconsistent,[9] meaning that as more and more data (e.g. sequence length) are accumulated, results can converge on an incorrect tree and lead to long-branch attraction, a phylogenetic phenomenon in which taxa with long branches (numerous character-state changes) appear more closely related in the phylogeny than they really are.

Like maximum parsimony, maximum likelihood evaluates alternative trees. However, it considers the probability of each tree explaining the given data based on a model of evolution, and the tree with the highest probability of explaining the data is chosen over the others.[10] In other words, it compares how well different trees predict the observed data. The introduction of a model of evolution in ML analyses presents an advantage over MP, as the probabilities of nucleotide substitutions and the rates of these substitutions are taken into account, explaining the phylogenetic relationships of taxa in a more realistic way. An important consideration in this method is branch length, which parsimony ignores: changes are more likely to happen along long branches than along short ones. This approach may eliminate the long-branch-attraction problem and explains the greater consistency of ML over MP. Although considered by many the best approach to inferring phylogenies from a theoretical point of view, ML is computationally intensive, and it is almost impossible to explore all trees, as there are too many. Bayesian inference also incorporates a model of evolution; its main advantages over MP and ML are that it is computationally more efficient than traditional methods, it quantifies and addresses sources of uncertainty, and it is able to incorporate complex models of evolution.

## Pitfalls and controversies

• Bootstrap values vs. posterior probabilities. It has been observed that bootstrap support values, calculated under parsimony or maximum likelihood, tend to be lower than the posterior probabilities obtained by Bayesian inference.[11] This leads to a number of questions: Do posterior probabilities lead to overconfidence in the results? Are bootstrap values more robust than posterior probabilities?
• Controversy over using prior probabilities. Using prior probabilities in Bayesian analysis has been seen by many as an advantage, as it allows a hypothesis to incorporate a more realistic view of the real world. However, some biologists argue that Bayesian posterior probabilities become subjective once these priors are incorporated.
• Model choice. The results of a Bayesian analysis of a phylogeny depend directly on the model of evolution chosen, so it is important to choose a model that fits the observed data; otherwise, inferences about the phylogeny will be erroneous. Many scientists have raised questions about the interpretation of Bayesian inference when the model is unknown or incorrect. For example, an oversimplified model might give higher posterior probabilities,[12][13] or a simple evolutionary model may be associated with less uncertainty than bootstrap values suggest.[14]

## MrBayes software for Bayesian inference of phylogeny

MrBayes is a free software program that performs Bayesian inference of phylogeny, originally written by John P. Huelsenbeck and Fredrik Ronquist in 2001.[15] As Bayesian methods increased in popularity, MrBayes became one of the programs of choice for many molecular phylogeneticists. It is offered for Macintosh, Windows, and UNIX operating systems and has a command-line interface. The program uses the standard MCMC algorithm as well as the Metropolis-coupled MCMC variant. MrBayes reads aligned matrices of sequences (DNA or amino acids) in the standard NEXUS format.[16]

MrBayes uses MCMC to approximate the posterior probabilities of trees.[17] The user can change the assumptions of the substitution model, the priors and the details of the MC³ analysis. It also allows the user to remove and add taxa and characters to the analysis. The program uses the most standard model of DNA substitution, the 4×4 model, also called JC69, which assumes that changes across nucleotides occur with equal probability.[18] It also implements a number of 20×20 models of amino-acid substitution, and codon models of DNA substitution. It offers different methods for relaxing the assumption of equal substitution rates across nucleotide sites.[19] MrBayes is also able to infer ancestral states while accommodating uncertainty in the phylogenetic tree and model parameters.
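As an illustration, a typical MrBayes command block appended to a NEXUS alignment looks roughly like the following; the parameter values are invented for the example, and the full option set should be checked against the MrBayes manual.

```nexus
begin mrbayes;
    lset nst=6 rates=gamma;           [GTR substitution model with gamma-distributed rate variation]
    mcmc ngen=1000000 samplefreq=100
         printfreq=1000 nchains=4;    [MC3 run: one cold chain plus three heated chains]
    sump;                             [summarize sampled parameter values]
    sumt;                             [summarize sampled trees; consensus with posterior probabilities]
end;
```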

MrBayes 3[20] was a completely reorganized and restructured version of the original MrBayes. The main novelty was the ability of the software to accommodate heterogeneous data sets. This new framework allows the user to mix models and take advantage of the efficiency of Bayesian MCMC analysis when dealing with different types of data (e.g. protein, nucleotide, and morphological). It uses Metropolis-coupled MCMC by default.

MrBayes 3.2, a new version of MrBayes, was released in 2012.[21] The new version allows users to run multiple analyses in parallel. It also provides faster likelihood calculations and can delegate these calculations to graphics processing units (GPUs). Version 3.2 provides wider output options, compatible with FigTree and other tree viewers.

## List of phylogenetics software for Bayesian Inference of Phylogeny

This table includes some of the most common phylogenetic software used for inferring phylogenies under a Bayesian framework. Some of them do not use exclusively Bayesian methods.

| Name | Description | Method | Author | Website link |
|---|---|---|---|---|
| Armadillo Workflow Platform | Workflow platform dedicated to phylogenetic and general bioinformatic analysis | Inference of phylogenetic trees using distance, maximum likelihood, maximum parsimony, Bayesian methods and related workflows | E. Lord, M. Leclercq, A. Boc, A.B. Diallo and V. Makarenkov [22] | http://www.bioinfo.uqam.ca/armadillo |
| Bali-Phy | Simultaneous Bayesian inference of alignment and phylogeny | Bayesian inference, alignment as well as tree search | M.A. Suchard, B.D. Redelings [23] | http://www.bali-phy.org |
| BATWING | Bayesian Analysis of Trees With Internal Node Generation | Bayesian inference, demographic history, population splits | I.J. Wilson, D. Weale, D. Balding [24] | http://www.maths.abdn.ac.uk/˜ijw |
| Bayes Phylogenies | Bayesian inference of trees using Markov chain Monte Carlo methods | Bayesian inference, multiple models, mixture model (auto-partitioning) | M. Pagel, A. Meade [25] | http://www.evolution.rdg.ac.uk/BayesPhy.html |
| BEAST | Bayesian Evolutionary Analysis Sampling Trees | Bayesian inference, relaxed molecular clock, demographic history | A.J. Drummond, A. Rambaut & M.A. Suchard [26] | http://beast.bio.ed.ac.uk |
| BUCKy | Bayesian concordance of gene trees | Bayesian concordance using modified greedy consensus of unrooted quartets | C. Ané, B. Larget, D.A. Baum, S.D. Smith, A. Rokas and B. Larget, S.K. Kotha, C.N. Dewey, C. Ané [27] | http://www.stat.wisc.edu/~ane/bucky/ |
| Geneious (MrBayes plugin) | Geneious provides genome and proteome research tools | Neighbor-joining, UPGMA, MrBayes plugin, PHYML plugin, RAxML plugin, FastTree plugin, GARLi plugin, PAUP* plugin | A.J. Drummond, M. Suchard, V. Lefort et al. | http://www.geneious.com |
| TOPALi | Phylogenetic inference | Phylogenetic model selection, Bayesian analysis and maximum likelihood phylogenetic tree estimation, detection of sites under positive selection, and recombination breakpoint location analysis | I. Milne, D. Lindner, et al. [28] | http://www.topali.org |

## Applications of Bayesian Inference of Phylogeny

Bayesian inference has been used extensively by molecular phylogeneticists for a wide number of applications. Some of these include:

*Figure: chronogram obtained from a molecular clock analysis using BEAST; the pie chart at each node indicates the possible ancestral distributions inferred from Bayesian binary MCMC (BBM) analysis.*
• Inference of phylogenies.[29][30]
• Inference and evaluation of uncertainty of phylogenies.[31]
• Inference of ancestral character state evolution.[32][33]
• Inference of ancestral areas.[34]
• Molecular dating analysis.[35][36]
• Modeling the dynamics of species diversification and extinction.[37]
• Elucidating patterns in pathogen dispersal.[38]