Talk:Viterbi algorithm
Mathematical description
This page should have a description of the algorithm using mathematical formulas as well. I have already found two different descriptions of Viterbi and would like Wikipedia to clear things up.
Also Python may be a nice language, but using variable names like "p" or "v_prob" doesn't really explain the algorithm!
- Strongly agree! I came to this page and found you have already stated what I wanted to say.--Puekai 09:29, 6 March 2007 (UTC)
Yes I can describe the algorithm in simplified terms
I'm having a hard time figuring out where to put the description. There is nothing here that is incorrect, but it is a little confusing because of the usage of "hidden" and "events" without explanation. The algorithm itself is quite simple, but I have a little problem with how it is described. I don't want to excise a lot of basically correct text, only to clarify in a simpler manner. I was thinking of adding a section just under the table of contents as an introduction, but being a newbie, I would need some lessons on how to do this.
I can also describe Trellis modulation. --User:jlpayton 9-sept-2005
Can someone show the algorithm here? — Kieff | Talk 07:33, Oct 15, 2004 (UTC)
- I'll post an example illustrating the forward algorithm and the Viterbi algorithm (those are really the same algorithm). --MarkSweep 21:33, 15 Oct 2004 (UTC)
Algorithm Please
Python may be a formally defined language, but it isn't a clear language. This page should clearly provide an example of the trellis, dynamic programming, the recursion, and the loop invariant. Not this dynamically typed tripe. contributed by 24.69.52.124 a.k.a. S01060060676712c2.gv.shawcable.net
- On the contrary, Python is a very clear and concise language. But let's not get into a holy war about programming language preferences. You can cut and paste the Python code and run it directly. That should give you a good idea of what's going on. Thank you for your suggestion! When you feel an article needs improvement, please feel free to make whatever changes you feel are needed. Wikipedia is a wiki, so anyone can edit almost any article by simply following the Edit this page link at the top. You don't even need to log in! (Although there are some reasons why you might like to…) The Wikipedia community encourages you to be bold. Don't worry too much about making honest mistakes—they're likely to be found and corrected quickly. If you're not sure how editing works, check out how to edit a page, or use the sandbox to try out your editing skills. New contributors are always welcome. --MarkSweep 19:15, 21 Apr 2005 (UTC)
- Sorry, I must admit that I agree with the original poster. I know the Viterbi algorithm, but was completely lost at the given example and the code. My suggestion would be: drop the "rainy" and "sunny" story (it's utterly confusing), concentrate on some easy system instead (say, something with a transfer function of 1 + 0.5z^-1), and show a graphical description of the trellis and survivor paths. An encyclopedia shouldn't require people to cut-and-paste code to understand what's going on. 80.202.213.120
- Is this a joke? I found the rainy and sunny story to be very easy to follow and a great example. Although the page probably could do with a proper mathematical definition to complement the easy explanation I found this to be very refreshing compared to other comp.sci./math articles around here. —Preceding unsigned comment added by 84.166.155.142 (talk) 21:22, 23 November 2007 (UTC)
- +1 agree completely -- easy to follow and a great example. - Francis Tyers · 08:47, 6 October 2008 (UTC)
- It is definitely not a joke. -Sesse (talk) 02:57, 29 January 2008 (UTC)
Trellis modulation and the Viterbi algorithm
What exactly is the relation between Trellis modulation and the Viterbi algorithm ? --DavidCary 30 June 2005 19:32 (UTC)
- I can't remember this clearly, but you should investigate factor graphs and see how the Viterbi algorithm can be done with a factor graph --ReptileLawyer 18:13, 24 April 2006 (UTC)
Complexity?
Anyone know what the complexity of Viterbi is? I skimmed over it and it looks like it's O(s*n^2), where s is the size of the sequence it's given, and n is the number of possible states. Any thoughts? --aciel 20:41, 22 November 2005 (UTC)
- That's correct. Depending on the data structures used to represent the underlying HMM, the Viterbi/Forward algorithm can be made to run in O(s*n*d) time, where s is the length of the sequence, n is the number of hidden states, and d is the average out-degree of each state. This is because you only need to explore pairs of states that are actually connected by an edge. In a fully connected HMM, the running time is O(s*n^2). --MarkSweep (call me collect) 22:11, 23 November 2005 (UTC)
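To make the d term above concrete, here is a hedged sketch (my own illustration, not the article's code; the function name and adjacency-list representation are mine, and it uses the common node-emission convention) of a Viterbi pass that walks only the transitions that actually exist:

```python
def viterbi_sparse(obs, states, start_p, out_edges, emit_p):
    """Viterbi over an HMM given as adjacency lists.

    out_edges[s] lists (next_state, transition_prob) pairs, so each of the
    s observation steps touches about n*d edges instead of n*n state pairs,
    giving O(s*n*d) time overall (O(s*n^2) when fully connected).
    """
    score = {s: start_p[s] * emit_p[s][obs[0]] for s in states}  # best prob ending in s
    back = {s: [s] for s in states}                              # the path achieving it
    for o in obs[1:]:
        new_score = {s: 0.0 for s in states}
        new_back = {}
        for src in states:
            for nxt, tp in out_edges[src]:  # only the edges that exist
                cand = score[src] * tp * emit_p[nxt][o]
                if cand > new_score[nxt]:
                    new_score[nxt] = cand
                    new_back[nxt] = back[src] + [nxt]
        score, back = new_score, new_back
    best = max(states, key=lambda s: score[s])
    return back[best], score[best]

# The article's rainy/sunny example, fully connected (so d = 2):
path, p = viterbi_sparse(
    ['walk', 'shop', 'clean'],
    ['Rainy', 'Sunny'],
    {'Rainy': 0.6, 'Sunny': 0.4},
    {'Rainy': [('Rainy', 0.7), ('Sunny', 0.3)],
     'Sunny': [('Rainy', 0.4), ('Sunny', 0.6)]},
    {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
     'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}},
)
print(path, p)
```

For a sparse transition structure, only the non-zero edges would be listed in out_edges, which is where the O(s*n*d) bound comes from.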
Is the python correct?
No, the algorithm is not correct.
The probability of the output x given the optimal Viterbi path pi should consist of
P(x|pi) = 1 starting probability * n emission probabilities * (n-1) transition probabilities
The displayed algorithm instead computes
P(x|pi) = 1 starting probability * n emission probabilities * n transition probabilities.
There is an obvious off by 1 error.
—Preceding unsigned comment added by 199.94.29.210 (talk) 20:00, 8 May 2009 (UTC)
The python contains the line p = ep[state][output] * tp[state][next_state]
which in my understanding should be p = ep[next_state][output] * tp[state][next_state]
I assume the first state returned by the example is the initial state
and the second state is the state that matches the first observation
The example returns ['Sunny', 'Rainy', 'Rainy', 'Rainy'] and after modification it returns ['Sunny', 'Sunny', 'Rainy', 'Rainy']
which would seem better given the example (observation 1 = walk).
Am I mistaken? If so, a little help on the 4 returned states is welcome.
- The code is correct as is. It's written the way it is to simplify the main loop and to allow an empty sequence of observations. For example, here's what happens when you run the code on an empty observation sequence:
>>> forward_viterbi([], ['q','r'], {'q':0.8, 'r':0.2}, {}, {})
(1.0, ['q'], 0.80000000000000004)
>>>
- This return value is the triple (1.0, ['q'], 0.8), which indicates the following: the probability of the observation sequence conditional on its length is 1.0 (that's obviously correct, since there is exactly one sequence with length zero); the Viterbi path consists of a single edge, leading from the (unnamed) start state to the state 'q'; and the probability of the Viterbi path is 0.8. The Viterbi path as returned by this code contains one more state than there are observations, namely the state that is reached after leaving the state corresponding to the last observation (in this case, the state reached from the start state with probability 0.8). That's because emissions are associated with outgoing edges, rather than incoming edges (you may have seen a presentation of this algorithm that assumes the latter). Both versions of HMMs exist, in addition to a third one where emissions are properly associated with edges, so there are at least three common variants of the forward algorithm and the Viterbi algorithm. --MarkSweep (call me collect) 17:23, 15 December 2005 (UTC)
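For readers following along, here is a hedged reconstruction (my own sketch, not MarkSweep's exact code) of a forward_viterbi under this outgoing-edge convention. It shows both the empty-sequence behaviour described above and the extra trailing state in the path:

```python
def forward_viterbi(obs, states, start_p, trans_p, emit_p):
    # T[s] = (forward prob, Viterbi path, Viterbi prob) for paths ending in s
    T = {s: (start_p[s], [s], start_p[s]) for s in states}
    for output in obs:
        U = {}
        for nxt in states:
            total, argmax, valmax = 0.0, None, 0.0
            for src in states:
                prob, v_path, v_prob = T[src]
                # emissions sit on the *outgoing* edge of the source state
                p = emit_p[src][output] * trans_p[src][nxt]
                total += prob * p
                if v_prob * p > valmax:
                    argmax, valmax = v_path + [nxt], v_prob * p
            U[nxt] = (total, argmax, valmax)
        T = U
    # final sum/max over the states reached after the last observation
    total = sum(prob for prob, _, _ in T.values())
    _, argmax, valmax = max(T.values(), key=lambda t: t[2])
    return total, argmax, valmax

print(forward_viterbi([], ['q', 'r'], {'q': 0.8, 'r': 0.2}, {}, {}))
# -> (1.0, ['q'], 0.8)
```

Because every observation consumes one outgoing edge, three observations yield a four-state path, which is consistent with the ['Sunny', 'Rainy', 'Rainy', 'Rainy'] output discussed above.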
- I (a different user) still don't understand. I haven't tested the change on the Python, but on my haskell version, both ways seem to work for the no-observation case. --196.210.100.237 (talk) 20:27, 11 February 2008 (UTC)
- Running the algorithm by hand seems to indicate that the proposed correction to the algorithm is correct....I don't think this algorithm is correct.--151.201.108.130 (talk) 18:40, 6 May 2008 (UTC)
- Really? I've never seen emissions associated with edges at all, only with nodes. That is certainly the case in the Hidden Markov model page here and in, e.g. the Rabiner reference used on that page, or in the Viterbi paper itself. If the Viterbi path is the path of states corresponding to the sequence of emissions, then there should be one state for each emission, not an extra state at the end.24.91.117.221 (talk) 14:32, 7 July 2008 (UTC)
It seems that the correction above has been incorporated into the main page, but the output after modification ['Sunny', 'Sunny', 'Rainy', 'Rainy'] has not. It seems that either version of the algorithm should work.

In the original version (using the emission probability of the source state), ['Sunny', 'Sunny'] would be the first 2 states in the Viterbi path for the sunny state after one iteration of the "for output in obs" loop. The first three states of the vp for the rainy state would be ['Sunny', 'Rainy', 'Rainy'] after iteration 2, as the greater correlation between shopping and rain, combined with the higher transition probability of rain to rain vs. sun to sun, would move this triple ahead of ['Sunny', 'Sunny', 'Sunny'] (note only the first two observations are known). ['Sunny', 'Rainy', 'Rainy', 'Rainy'] should be the correct output.

In the second case (the emission probability of the next state is used), the first iteration's Viterbi path for the sunny state would be ['Sunny', 'Sunny']. The second iteration's vp for sunny would then be ['Sunny', 'Sunny', 'Sunny'], and the final vp would be ['Sunny', 'Sunny', 'Rainy', 'Rainy'].

However, I am not sure why it is necessary to keep these extra start or end states, as it seems that the length of the hidden state sequence should be the same as that of the observation sequence. Perhaps by using the revised (second) algorithm, we could obtain ['Sunny', 'Rainy', 'Rainy']. If we initialize prob to 1, v_path to [] and v_prob to 1 in T, and, if we are on the first "output" in the obs array, ignore trans_p in the p computation, it seems that we should obtain the correct three states.

(update) Correction to above: it seems that:
1. The above suggestion to ignore trans_p in the case of the first observation works only in the case where emit_p is for the next_state.
2. The T array should be initialized to (start_probability, [], start_probability), so that the initial probabilities are a factor.
3. The algorithm would basically be:

    # T initialized to (start_prob, [], start_prob) for rainy and sunny above
    count = 0
    for output in obs:
        U = {}
        for next_state in states:
            total = 0
            argmax = None
            valmax = 0
            if count == 0:
                (prob, v_path, v_prob) = T[next_state]
                p = emit_p[next_state][output]
                prob *= p
                v_prob *= p
                argmax = v_path + [next_state]
                valmax = v_prob
            else:
                for source_state in states:
                    ...
                    # note that
                    # p = emit_p[next_state][output] * trans_p[source_state][next_state]
            U[next_state] = (total, argmax, valmax)
        T = U
        count += 1
    # apply sum/max to final states as before

4. Note that a similar modification could occur if emit_p is calculated for the source_state, but in this case the special condition would be when the count is at size(obs)-1 (i.e. the last observation). 76.241.101.61 (talk) 18:42, 27 September 2008 (UTC) (75.10.146.25 (talk) 04:28, 28 August 2008 (UTC))
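The revision sketched above (emission taken from next_state, transition skipped for the first observation, T seeded with the start probabilities) collapses to the textbook node-emission Viterbi, where the hidden path has exactly one state per observation. A minimal sketch under that assumption (the function name is mine, not from the article):

```python
def viterbi_node_emission(obs, states, start_p, trans_p, emit_p):
    # seed with start probability times the first emission: no transition yet
    T = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        U = {}
        for nxt in states:
            # pick the best predecessor for nxt, then emit o from nxt itself
            prob, path = max((T[src][0] * trans_p[src][nxt], T[src][1])
                             for src in states)
            U[nxt] = (prob * emit_p[nxt][o], path + [nxt])
        T = U
    return max(T.values())  # (probability, path) of the best final state

prob, path = viterbi_node_emission(
    ['walk', 'shop', 'clean'],
    ['Rainy', 'Sunny'],
    {'Rainy': 0.6, 'Sunny': 0.4},
    {'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
     'Sunny': {'Rainy': 0.4, 'Sunny': 0.6}},
    {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
     'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}},
)
print(path, prob)  # three hidden states for three observations
```

On the article's data this yields the three-state path the poster anticipated, with no extra start or end state to explain away.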
SOVA
I intend to add a section on the soft-output version of the VA (SOVA)....give me some time and let the link stay!!!
Pizzadeliveryboy 02:47, 19 January 2006 (UTC)
Algorithm seems wrong
I tried to apply the forward algorithm to the example given in the article, but I do not get the same result. Briefly, here is the forward algorithm I applied (from the Rabiner paper). Take an HMM with transition matrix A, emission matrix B, and an observation sequence O = (o_0, ..., o_T), and let alpha_t(i) be the probability of observing o_0, ..., o_t and having i as the state of the HMM at time t. The algorithm is:
1. Initialization: alpha_0(i) = pi_i * B_i(o_0), with pi_i the start probability of state i
2. Induction: alpha_t(j) = [sum over states i of alpha_{t-1}(i) * A_{ij}] * B_j(o_t)
3. Termination: take the max over states i of alpha_T(i)
Applying this algo with this code in python:
def compute_alpha(pi, A, B, O, alpha, t):
if t == 0:
for state in pi:
alpha[0][state] = pi[state] * B[state][O[0]]
else:
compute_alpha(pi, A, B, O, alpha, t - 1)
for state_j in pi:
temp = 0
for state_i in pi:
temp += alpha[t - 1][state_i] * A[state_i][state_j]
alpha[t][state_j] = temp * B[state_j][O[t]]
def evaluate(pi, A, B, O):
    """Evaluate an observation sequence against an HMM."""
    alpha = dict()
    for t in range(len(O)):
        alpha[t] = dict()
    T = len(O) - 1
    compute_alpha(pi, A, B, O, alpha, T)
    best = 0
    for state in alpha[T]:
        if best <= alpha[T][state]:
            best = alpha[T][state]
    print(alpha)
    return best
I got 0.02904 as the probability for the observation sequence ['walk', 'shop', 'clean']
given in the example. So which algorithm is wrong? I can give you the calculation steps; they are quite simple for such a small example. JeDi 05:14, 27 July 2006 (UTC)
- Your algorithm failed to select the survivor path for each new symbol. I suggest using
def compute_alpha(pi, A, B, O, alpha, t):
if t == 0:
for state in pi:
alpha[0][state] = pi[state] * B[state][O[0]]
else:
compute_alpha(pi, A, B, O, alpha, t - 1)
for state_j in pi:
    maxval = 0
    for state_i in pi:
        if alpha[t - 1][state_i] * A[state_i][state_j] > maxval:
            maxval = alpha[t - 1][state_i] * A[state_i][state_j]
    alpha[t][state_j] = maxval * B[state_j][O[t]]
- Using this modified code suggests that the Viterbi path is Sunny, Rainy, Rainy, with probability 0.01344. This is the same value I get when doing all of the computations by hand (following Viterbi's paper, not your math or the article here). It corresponds to the product 0.4*0.6*0.4*0.4*0.7*0.5. This is the product of the probabilities of beginning in Sunny, emitting walk, transitioning to Rainy, emitting shop, transitioning to Rainy, finally emitting clean. If this is correct, the article will need to be changed. (Though your algorithm is misleading in that it prints, on each step, the currently most likely state, which will not necessarily end up on the final survivor path.)24.91.117.221 (talk) 14:46, 7 July 2008 (UTC)
A concrete example is partially wrong
From the transition matrix one can derive the stationary state probabilities: for the given transition matrix, the probability of a rainy day is 4/7 (about 0.571), while the probability of a sunny day is 3/7 (about 0.429). It appears that the values in the example are rounded. 08:20, 28 December 2005 (UTC)
logs, comments
I realize it would make the code more complicated, but the implementation might be misleading because you really want to use logs of the probabilities to avoid underflows. I think the names of variables could be improved.
I think the output should have the same number of states as the number of observations. The observations are usually associated with states, not transitions between states.
- The code (as is), works fine for the given example. However, for anything but trivial examples it is likely to fail. By using logs, this can be avoided (just a hint really).
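To make the hint above concrete, here is a log-space sketch (my own illustration, using the node-emission convention for brevity; not the article's code). Every product of probabilities becomes a sum of logs, so a long sequence produces a large negative score instead of underflowing to 0.0:

```python
import math

def viterbi_log(obs, states, start_p, trans_p, emit_p):
    # log-space scores: products become sums, so nothing underflows
    score = {s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
             for s in states}
    path = {s: [s] for s in states}
    for o in obs[1:]:
        new_score, new_path = {}, {}
        for nxt in states:
            # best predecessor under the log-score
            src = max(states, key=lambda s: score[s] + math.log(trans_p[s][nxt]))
            new_score[nxt] = (score[src] + math.log(trans_p[src][nxt])
                              + math.log(emit_p[nxt][o]))
            new_path[nxt] = path[src] + [nxt]
        score, path = new_score, new_path
    best = max(states, key=lambda s: score[s])
    return path[best], score[best]  # score is a log-probability
```

math.exp of the returned score recovers the probability for short sequences; for long ones the log-score is the only representable form. Note that zero probabilities need special handling (e.g. mapping them to float('-inf')), since math.log(0) raises an error.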
Markov vs. probabilities
How does the Markov assumption of non-dependence on history exist together with the state transition probabilities being essential for the algorithm? Can that aspect be made clearer? 130.126.180.235 04:01, 30 May 2007 (UTC)BillBusen
The Markov assumption simply means non-dependence on history BEYOND one state back, so state transition probabilities which encompass a one-state history are permissible. I hope that clears up any confusion. --Tekhnofiend (talk) 17:43, 27 April 2008 (UTC)
concrete example, trellis diagrams
I'm somewhat disappointed by the section called "concrete example".
Sorely missing from this article are pictures of trellis diagrams, an in-depth comparison of the space/time performance with the forward algorithm, and in particular with the forward-backward algorithm. It's critical (I think) to note that the Viterbi algorithm drops many of the links that the other algorithms keep, in exchange for an O(N) run time (as opposed to O(N^2) or whatever), and only a minor loss of entropy.
Certainly, I was utterly unable to understand Viterbi until I saw it laid out in comparison to the other algos, at which point things clicked together. linas (talk) 21:06, 10 June 2008 (UTC)
numerical example
I thought this might be useful. Using the data given in the article, here is a suggested numerical example of the algorithm. I'm rather new to the algorithm, so I'm posting it here to get comments before it goes in the main article. It is possible that I have misunderstood the algorithm and that this is incorrect. It is also possible that the algorithm as now used and as described in the article is different than that of the Viterbi paper on error bounds for convolutional codes.
What is the most probable path (and its probability) corresponding to the sequence of observations ['walk', 'shop', 'clean']? We first find the state most likely to have emitted 'walk', taking into account the probability of the system being in that state at all. In this case, the Rainy state is initially occupied with probability 0.6 and Rainy emits 'walk' with probability 0.1, so the probability of the actual sequence beginning with Rainy is 0.6*0.1 = 0.06. Similarly, the probability of the sequence beginning with Sunny is 0.4*0.6 = 0.24, because there is a probability of 0.4 of beginning in Sunny and Sunny emits 'walk' with probability 0.6. As there is only one path ending in Sunny and one path ending in Rainy, each becomes the survivor path (one state long) of this first step.

So it is expected the system is in the state Rainy with probability 0.06. For each state that can follow Rainy, what is the probability of that state emitting the next observation in the sequence, 'shop'? It is the probability that the system was in Rainy at all, 0.06, times the probability of Rainy transitioning to Rainy, 0.7, times the probability of Rainy emitting 'shop', 0.4. So the probability that the first two states in the sequence are Rainy and Rainy is 0.06*0.7*0.4 = 0.0168. In the same way, we find the probability of the first two states being Sunny then Rainy is 0.0384. These are the only two paths of two states that can end in Rainy. Only the more likely one becomes the survivor path to be used in the next step; in this case, it is the path Sunny then Rainy. Similarly, we look at the two paths that end in Sunny, one beginning with Rainy and one with Sunny. Multiplying the appropriate probabilities shows that Sunny then Sunny is the most likely path that ends with Sunny, with probability 0.0432.
Specifically, the calculations are
Rainy->Rainy: 0.06*0.7*0.4 = 0.0168
Sunny->Rainy: 0.24*0.4*0.4 = 0.0384 "survivor"
Rainy->Sunny: 0.06*0.3*0.3 = 0.0054
Sunny->Sunny: 0.24*0.6*0.3 = 0.0432 "survivor"
That step is repeated on the two survivor paths: Sunny then Rainy and Sunny then Sunny. The four new calculations are
Sunny->Rainy->Rainy: 0.0384*0.7*0.5 = 0.01344 "survivor"
Sunny->Sunny->Rainy: 0.0432*0.4*0.5 = 0.00864
Sunny->Rainy->Sunny: 0.0384*0.3*0.1 = 0.001152
Sunny->Sunny->Sunny: 0.0432*0.6*0.1 = 0.002592 "survivor"
Of these, the most probable path that would emit the specified observations is Sunny, Rainy, Rainy, with probability 0.01344. —Preceding unsigned comment added by 24.91.117.221 (talk) 15:13, 7 July 2008 (UTC)
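The hand computation above can be checked mechanically. This hedged sketch (my code, not from the article) prints the survivor-path probabilities after each observation; they should reproduce the 0.06/0.24, 0.0384/0.0432 and 0.01344/0.002592 columns worked out above:

```python
def survivor_steps(obs, states, start_p, trans_p, emit_p):
    # V[s]: probability of the survivor path ending in state s
    V = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    back = {s: [s] for s in states}
    steps = [dict(V)]
    for o in obs[1:]:
        new_V, new_back = {}, {}
        for nxt in states:
            # only the most likely predecessor's path survives
            src = max(states, key=lambda s: V[s] * trans_p[s][nxt])
            new_V[nxt] = V[src] * trans_p[src][nxt] * emit_p[nxt][o]
            new_back[nxt] = back[src] + [nxt]
        V, back = new_V, new_back
        steps.append(dict(V))
    best = max(states, key=lambda s: V[s])
    return steps, back[best], V[best]

steps, path, p = survivor_steps(
    ['walk', 'shop', 'clean'],
    ['Rainy', 'Sunny'],
    {'Rainy': 0.6, 'Sunny': 0.4},
    {'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
     'Sunny': {'Rainy': 0.4, 'Sunny': 0.6}},
    {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
     'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}},
)
for obs_name, column in zip(['walk', 'shop', 'clean'], steps):
    print(obs_name, column)
print(path, p)
```

The final line agrees with the conclusion above: Sunny, Rainy, Rainy with probability 0.01344.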
Simplified Algorithm
Hi, I understand Python, but I feel pseudocode is more applicable. Something which shows the idea, not the physical implementation, is more useful in a pedagogical sense. I just read the article, and I suggest the following (I hope someone can improve this basic pseudocode):
Algorithm: Viterbi algorithm
inputs:
probability of starting states: pStart[state],
probability of transition between states: pTrans[source][destination],
probability of emission given the current state: pEmit[source][symbol],
observed emissions at each interval: emission[1...n]
outputs:
most likely series of events: likelyEvents[1...n]
probability of the observed emissions: p
method:
let probability of being in state s up to time t be: p[s][t]
let most likely path that ends in state s up to time t be: path[s][t]
let probability of having taken path[s][t] up to time t: pPath[s][t]
initialize p[s][0] = pStart[s]
initialize path[s][0] = (s)
initialize pPath[s][0] = pStart[s]
at each time period t = 1...n
for each state s
let s* = the state s' for which pPath[s'][t-1]*pTrans[s'][s]*pEmit[s'][emission[t]] is maximized
p[s][t] = sum over all states s' of p[s'][t-1]*pTrans[s'][s]*pEmit[s'][emission[t]]
path[s][t] = append s to path[s*][t-1]
pPath[s][t] = pPath[s*][t-1]*pTrans[s*][s]*pEmit[s*][emission[t]]
let s* = the state s' for which pPath[s'][n] is largest
return:
likelyEvents = path[s*][n]
p = sum over all states s' of p[s'][n]
Optimize this for the layman
I think the above proposal about pseudocode is on the wrong track: Remove the code altogether, and replace it with a description, in English (not a programming language or pseudocode), of the thinking Alice has to do to figure out the weather where Bob is. Most people have never taken a programming class, or even a logic class; we should be putting this in terms that anybody can understand.--Aervanath (talk) 04:54, 2 February 2009 (UTC)