# Viterbi algorithm

The Viterbi algorithm is a dynamic programming algorithm for finding the most likely sequence of hidden states – called the Viterbi path – that results in a sequence of observed events, especially in the context of Markov information sources and hidden Markov models.

The terms Viterbi path and Viterbi algorithm are also applied to related dynamic programming algorithms that discover the single most likely explanation for an observation. For example, in statistical parsing a dynamic programming algorithm can be used to discover the single most likely context-free derivation (parse) of a string, which is sometimes called the Viterbi parse.

The Viterbi algorithm was proposed by Andrew Viterbi in 1967 as a decoding algorithm for convolutional codes over noisy digital communication links.[1] The algorithm has found universal application in decoding the convolutional codes used in both CDMA and GSM digital cellular, dial-up modems, satellite, deep-space communications, and 802.11 wireless LANs. It is now also commonly used in speech recognition, speech synthesis, keyword spotting, computational linguistics, and bioinformatics. For example, in speech-to-text (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal.

## Example

Consider a primitive clinic in a village. Villagers have the convenient property that they are either healthy or have a fever, and they can only tell which by asking the doctor at the clinic. The wise doctor makes a diagnosis of fever by asking patients how they feel; villagers only report that they feel normal, dizzy, or cold.

Suppose a patient comes to the clinic each day and tells the doctor how she feels. The doctor believes that the health condition of this patient operates as a discrete Markov chain. There are two states, "Healthy" and "Fever", but the doctor cannot observe them directly, that is, they are hidden from him. On each day, there is a certain chance that the patient will tell the doctor she has one of the following feelings, depending on her health condition: "normal", "cold", or "dizzy". Those are the observations. The entire system is that of a hidden Markov model (HMM).

The doctor knows the villagers' general health condition, and what symptoms patients tend to complain of with or without a fever. In other words, the parameters of the HMM are known. They can be represented as follows in the Python programming language:

states = ('Healthy', 'Fever')

observations = ('normal', 'cold', 'dizzy')

start_probability = {'Healthy': 0.6, 'Fever': 0.4}

transition_probability = {
    'Healthy': {'Healthy': 0.7, 'Fever': 0.3},
    'Fever':   {'Healthy': 0.4, 'Fever': 0.6}
}

emission_probability = {
    'Healthy': {'normal': 0.5, 'cold': 0.4, 'dizzy': 0.1},
    'Fever':   {'normal': 0.1, 'cold': 0.3, 'dizzy': 0.6}
}


In this piece of code, start_probability represents the doctor's belief about which state the HMM is in when the patient first visits (all he knows is that the patient tends to be healthy). The particular probability distribution used here is not the equilibrium one, which is (given the transition probabilities) approximately {'Healthy': 0.57, 'Fever': 0.43}. The transition_probability represents how the health condition changes in the underlying Markov chain. In this example, there is only a 30% chance that tomorrow the patient will have a fever if he is healthy today. The emission_probability represents how the patient is likely to feel on each day. If he is healthy, there is a 50% chance that he feels normal; if he has a fever, there is a 60% chance that he feels dizzy.
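The equilibrium figure quoted above can be checked numerically. The following short sketch (an illustration, not part of the original example) simply applies the transition probabilities repeatedly until the state distribution stops changing:

def stationary_distribution(trans_p, start, steps=100):
    # Repeatedly push the current distribution through the transition
    # probabilities; for this two-state chain it converges quickly.
    dist = dict(start)
    for _ in range(steps):
        dist = {s: sum(dist[r] * trans_p[r][s] for r in dist) for s in trans_p}
    return dist

# prints approximately {'Healthy': 0.571..., 'Fever': 0.428...}
print(stationary_distribution(transition_probability, start_probability))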

The patient visits three days in a row and the doctor discovers that on the first day she feels normal, on the second day she feels cold, and on the third day she feels dizzy. The doctor has a question: what is the most likely sequence of health conditions that would explain these observations? This is answered by the Viterbi algorithm.

def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{}]
    path = {}

    # Initialize base cases (t == 0)
    for y in states:
        V[0][y] = start_p[y] * emit_p[y][obs[0]]
        path[y] = [y]

    # alternative Python 2.7+ initialization syntax
    # V = [{y: (start_p[y] * emit_p[y][obs[0]]) for y in states}]
    # path = {y: [y] for y in states}

    # Run Viterbi for t > 0
    for t in range(1, len(obs)):
        V.append({})
        newpath = {}

        for y in states:
            (prob, state) = max((V[t-1][y0] * trans_p[y0][y] * emit_p[y][obs[t]], y0) for y0 in states)
            V[t][y] = prob
            newpath[y] = path[state] + [y]

        # Don't need to remember the old paths
        path = newpath

    n = 0           # if only one element is observed, the max is sought in the initialization values
    if len(obs) != 1:
        n = t
    print_dptable(V)
    (prob, state) = max((V[n][y], y) for y in states)
    return (prob, path[state])

# Don't study this; it just prints a table of the steps.
def print_dptable(V):
    s = "    " + " ".join(("%7d" % i) for i in range(len(V))) + "\n"
    for y in V[0]:
        s += "%.5s: " % y
        s += " ".join("%.7s" % ("%f" % v[y]) for v in V)
        s += "\n"
    print(s)


The function viterbi takes the following arguments: obs is the sequence of observations, e.g. ['normal', 'cold', 'dizzy']; states is the set of hidden states; start_p is the start probability; trans_p are the transition probabilities; and emit_p are the emission probabilities. For simplicity of code, we assume that the observation sequence obs is non-empty and that trans_p[i][j] and emit_p[i][j] are defined for all states i, j.

In the running example, the Viterbi algorithm is used as follows:

def example():
    return viterbi(observations,
                   states,
                   start_probability,
                   transition_probability,
                   emission_probability)

print(example())


This reveals that the observations ['normal', 'cold', 'dizzy'] were most likely generated by the states ['Healthy', 'Healthy', 'Fever']. In other words, given the observed activities, the patient was most likely to have been healthy on the first day when she felt normal and on the second day when she felt cold, and to have contracted a fever on the third day.

The operation of Viterbi's algorithm can be visualized by means of a trellis diagram, with one node for each (day, state) pair; the Viterbi path is essentially the shortest path through this trellis.
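For the clinic example, the trellis values can be worked out by hand from the parameters above (each entry is the probability of the best path ending in that state on that day):

    Day 1 (normal): V[Healthy] = 0.6 · 0.5 = 0.3                          V[Fever] = 0.4 · 0.1 = 0.04
    Day 2 (cold):   V[Healthy] = max(0.3·0.7, 0.04·0.4) · 0.4 = 0.084     V[Fever] = max(0.3·0.3, 0.04·0.6) · 0.3 = 0.027
    Day 3 (dizzy):  V[Healthy] = max(0.084·0.7, 0.027·0.4) · 0.1 = 0.00588  V[Fever] = max(0.084·0.3, 0.027·0.6) · 0.6 = 0.01512

The largest value on day 3 is 0.01512, reached in state "Fever", and following the remembered paths backwards gives ['Healthy', 'Healthy', 'Fever']; this is the tuple (0.01512, ['Healthy', 'Healthy', 'Fever']) returned by example() above, after the table of intermediate values is printed.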

When implementing Viterbi's algorithm, it should be noted that many languages use floating-point arithmetic; since the probabilities involved can become very small, this may lead to numerical underflow. A common technique to avoid this is to take the logarithm of each probability and work with these log-probabilities throughout the computation, the same technique used in the Logarithmic Number System. Once the algorithm has terminated, an accurate value can be obtained by performing the appropriate exponentiation.
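As an illustration of this technique, the dictionary-based implementation above can be rewritten to work entirely in log space. The sketch below is not part of the original example and assumes that every probability in the model is strictly positive, so that the logarithm is defined:

from math import log

def viterbi_log(obs, states, start_p, trans_p, emit_p):
    # Log-space variant of viterbi() above: products of probabilities become
    # sums of log-probabilities, which avoids floating-point underflow on
    # long observation sequences.
    V = [{y: log(start_p[y]) + log(emit_p[y][obs[0]]) for y in states}]
    path = {y: [y] for y in states}
    for t in range(1, len(obs)):
        V.append({})
        newpath = {}
        for y in states:
            logprob, state = max(
                (V[t - 1][y0] + log(trans_p[y0][y]) + log(emit_p[y][obs[t]]), y0)
                for y0 in states
            )
            V[t][y] = logprob
            newpath[y] = path[state] + [y]
        path = newpath
    logprob, state = max((V[-1][y], y) for y in states)
    # exp(logprob) recovers the actual probability of the Viterbi path
    return logprob, path[state]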

## Extensions

A generalization of the Viterbi algorithm, termed the max-sum algorithm (or max-product algorithm) can be used to find the most likely assignment of all or some subset of latent variables in a large number of graphical models, e.g. Bayesian networks, Markov random fields and conditional random fields. The latent variables need in general to be connected in a way somewhat similar to an HMM, with a limited number of connections between variables and some type of linear structure among the variables. The general algorithm involves message passing and is substantially similar to the belief propagation algorithm (which is the generalization of the forward-backward algorithm).

With the algorithm called iterative Viterbi decoding, one can find the subsequence of an observation that best matches (on average) a given HMM. This algorithm was proposed by Qi Wang et al.[2] to deal with turbo codes. Iterative Viterbi decoding works by iteratively invoking a modified Viterbi algorithm, re-estimating the score for a filler until convergence.

An alternative algorithm, the Lazy Viterbi algorithm, has also been proposed.[3] For many codes of practical interest, under reasonable noise conditions, the lazy decoder is much faster than the original Viterbi decoder.[4] It works by not expanding any node until it actually needs to, and usually does far less work (in software) than the ordinary Viterbi algorithm for the same result; however, it is not as easy to parallelize in hardware.

Moreover, the Viterbi algorithm has been extended to operate with a deterministic finite automaton in order to improve speed for use in stochastic letter-to-phoneme conversion.[5]

## Algorithm

Suppose we are given a Hidden Markov Model (HMM) with state space $S$, initial probabilities $\pi_i$ of being in state $i$ and transition probabilities $a_{i,j}$ of transitioning from state $i$ to state $j$. Say we observe outputs $y_1,\dots, y_T$. The most likely state sequence $x_1,\dots,x_T$ that produces the observations is given by the recurrence relations:[6]

$\begin{array}{rcl} V_{1,k} &=& \mathrm{P}\big( y_1 \ | \ k \big) \cdot \pi_k \\ V_{t,k} &=& \mathrm{P}\big( y_t \ | \ k \big) \cdot \max_{x \in S} \left( a_{x,k} \cdot V_{t-1,x}\right) \end{array}$

Here $V_{t,k}$ is the probability of the most probable state sequence responsible for the first $t$ observations that has $k$ as its final state. The Viterbi path can be retrieved by saving back pointers that remember which state $x$ was used in the second equation. Let $\mathrm{Ptr}(k,t)$ be the function that returns the value of $x$ used to compute $V_{t,k}$ if $t > 1$, or $k$ if $t=1$. Then:

$\begin{array}{rcl} x_T &=& \arg\max_{x \in S} (V_{T,x}) \\ x_{t-1} &=& \mathrm{Ptr}(x_t,t) \end{array}$

Here we are using the standard definition of arg max.
The complexity of this algorithm is $O(T\times\left|{S}\right|^2)$: each of the $T$ time steps updates $\left|{S}\right|$ states, and each update maximizes over the $\left|{S}\right|$ possible predecessor states.

## Pseudocode

Given the observation space $O=\{o_1,o_2,\dots,o_N\}$, the state space $S=\{s_1,s_2,\dots,s_K\}$, a sequence of observations $Y=\{y_1,y_2,\ldots, y_T\}$, a transition matrix $A$ of size $K\times K$ such that $A_{ij}$ stores the transition probability of transiting from state $s_i$ to state $s_j$, an emission matrix $B$ of size $K\times N$ such that $B_{ij}$ stores the probability of observing $o_j$ from state $s_i$, and an array of initial probabilities $\pi$ of size $K$ such that $\pi_i$ stores the probability that $x_1 = s_i$, we say a path $X=\{x_1,x_2,\ldots,x_T\}$ is a sequence of states that generates the observations $Y=\{y_1,y_2,\ldots, y_T\}$.

In this dynamic programming problem, we construct two 2-dimensional tables $T_1, T_2$ of size $K\times T$. Each element $T_1[i,j]$ stores the probability of the most likely path so far $\hat{X}=\{\hat{x}_1,\hat{x}_2,\ldots,\hat{x}_j\}$ with $\hat{x}_j=s_i$ that generates $Y=\{y_1,y_2,\ldots, y_j\}$. Each element $T_2[i,j]$ stores $\hat{x}_{j-1}$ of that most likely path, for every $j$ with $2\leq j \leq T$.

We fill the entries of the two tables $T_1[i,j], T_2[i,j]$ in increasing order of $K\cdot j+i$:

$T_1[i,j]=\max_{k}{(T_1[k,j-1]\cdot A_{ki}\cdot B_{iy_j})}$, and
$T_2[i,j]=\arg\max_{k}{(T_1[k,j-1]\cdot A_{ki}\cdot B_{iy_j})}$

Note that $B_{iy_j}$ does not need to appear in the latter expression, as it does not depend on $k$ and therefore does not affect the argmax.

INPUT:  The observation space $O=\{o_1,o_2,\dots,o_N\}$,
        the state space $S=\{s_1,s_2,\dots,s_K\}$,
        a sequence of observations $Y=\{y_1,y_2,\ldots,y_T\}$ such that $y_t=i$ if the
        observation at time $t$ is $o_i$,
        the transition matrix $A$ of size $K\times K$ such that $A_{ij}$ stores the transition
        probability of transiting from state $s_i$ to state $s_j$,
        the emission matrix $B$ of size $K\times N$ such that $B_{ij}$ stores the probability of
        observing $o_j$ from state $s_i$,
        an array of initial probabilities $\pi$ of size $K$ such that $\pi_i$ stores the probability
        that $x_1 = s_i$
OUTPUT: The most likely hidden state sequence $X=\{x_1,x_2,\ldots,x_T\}$

function VITERBI(O, S, $\pi$, Y, A, B) : X
    for each state $s_i$ do
        $T_1[i,1] \gets \pi_i \cdot B_{i y_1}$
        $T_2[i,1] \gets 0$
    end for
    for $i \gets 2, 3, \ldots, T$ do
        for each state $s_j$ do
            $T_1[j,i] \gets \max_{k}{(T_1[k,i-1]\cdot A_{kj}\cdot B_{jy_i})}$
            $T_2[j,i] \gets \arg\max_{k}{(T_1[k,i-1]\cdot A_{kj}\cdot B_{jy_i})}$
        end for
    end for
    $z_T \gets \arg\max_{k}{(T_1[k,T])}$
    $x_T \gets s_{z_T}$
    for $i \gets T, T-1, \ldots, 2$ do
        $z_{i-1} \gets T_2[z_i, i]$
        $x_{i-1} \gets s_{z_{i-1}}$
    end for
    return X
end function
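
For comparison with the dictionary-based implementation in the example above, this pseudocode translates fairly directly into Python. The following sketch is an illustration (not part of the original article): states and observations are represented by integer indices, $A$ and $B$ are nested lists, and the two tables $T_1, T_2$ are built exactly as described.

def viterbi_tables(Y, A, B, pi):
    # Y: observation indices y_1..y_T; A: K x K transition matrix;
    # B: K x N emission matrix; pi: length-K array of initial probabilities.
    K, T = len(pi), len(Y)
    T1 = [[0.0] * T for _ in range(K)]  # T1[i][t]: probability of the best path ending in state i at time t
    T2 = [[0] * T for _ in range(K)]    # T2[i][t]: predecessor of state i on that best path
    for i in range(K):
        T1[i][0] = pi[i] * B[i][Y[0]]
    for t in range(1, T):
        for j in range(K):
            k = max(range(K), key=lambda s: T1[s][t - 1] * A[s][j])
            T1[j][t] = T1[k][t - 1] * A[k][j] * B[j][Y[t]]
            T2[j][t] = k
    # Backtrack from the most probable final state.
    x = [0] * T
    x[T - 1] = max(range(K), key=lambda s: T1[s][T - 1])
    for t in range(T - 1, 0, -1):
        x[t - 1] = T2[x[t]][t]
    return x

With the clinic example encoded as Healthy = 0, Fever = 1 and normal = 0, cold = 1, dizzy = 2, the call viterbi_tables([0, 1, 2], [[0.7, 0.3], [0.4, 0.6]], [[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]], [0.6, 0.4]) returns [0, 0, 1], i.e. Healthy, Healthy, Fever, matching the result of the earlier example.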