
Mathematical description

This page should have a description of the algorithm using mathematical formulas as well. I have already found two different descriptions of Viterbi and would like Wikipedia to clear things up.

Also Python may be a nice language, but using variable names like "p" or "v_prob" doesn't really explain the algorithm!

Strongly agree! I came to this page to find that you have already said what I wanted to say.--Puekai 09:29, 6 March 2007 (UTC)[reply]

Yes, I can describe the algorithm in simplified terms

I'm having a hard time figuring out where to put the description. Nothing here is incorrect, but it is a little confusing because "hidden" and "events" are used without explanation. The algorithm itself is quite simple, but I have a small problem with how it is described. I don't want to excise a lot of basically correct text, only to clarify it in a simpler manner. I was thinking of adding a section just under the table of contents as an introduction, but being a newbie, I would need some lessons on how to do this.

I can also describe Trellis modulation. --User:jlpayton 9-sept-2005


Can someone show the algorithm here? Kieff | Talk 07:33, Oct 15, 2004 (UTC)

I'll post an example illustrating the forward algorithm and the Viterbi algorithm (those are really the same algorithm). --MarkSweep 21:33, 15 Oct 2004 (UTC)

Algorithm Please

Python may be a formally defined language, but it isn't a clear language. This page should clearly provide an example of the trellis, dynamic programming, the recursion, and the loop invariant. Not this dynamically typed tripe. contributed by 24.69.52.124 a.k.a. S01060060676712c2.gv.shawcable.net

On the contrary, Python is a very clear and concise language. But let's not get into a holy war about programming language preferences. You can cut and paste the Python code and run it directly. That should give you a good idea of what's going on. Thank you for your suggestion! When you feel an article needs improvement, please feel free to make whatever changes you feel are needed. Wikipedia is a wiki, so anyone can edit almost any article by simply following the Edit this page link at the top. You don't even need to log in! (Although there are some reasons why you might like to…) The Wikipedia community encourages you to be bold. Don't worry too much about making honest mistakes—they're likely to be found and corrected quickly. If you're not sure how editing works, check out how to edit a page, or use the sandbox to try out your editing skills. New contributors are always welcome. --MarkSweep 19:15, 21 Apr 2005 (UTC)
Sorry, I must admit that I agree with the original poster. I know the Viterbi algorithm, but was completely lost at the example given and then code. My suggestion would be: Drop the "rainy" and "sunny" story (it's utterly confusing), concentrate on some easy system instead (say, something with a transfer function of 1 + 0.5z^-1), and show a graphical description of the trellis and survivor paths. An encyclopedia shouldn't require people to cut-and-paste code to understand what's going on. 80.202.213.120
Is this a joke? I found the rainy and sunny story to be very easy to follow and a great example. Although the page probably could do with a proper mathematical definition to complement the easy explanation I found this to be very refreshing compared to other comp.sci./math articles around here. —Preceding unsigned comment added by 84.166.155.142 (talk) 21:22, 23 November 2007 (UTC)[reply]
It is definitely not a joke. -Sesse (talk) 02:57, 29 January 2008 (UTC)[reply]

What exactly is the relation between Trellis modulation and the Viterbi algorithm ? --DavidCary 30 June 2005 19:32 (UTC)

I can't remember it clearly, but you should investigate Factor Graphs and see how the Viterbi algorithm can be done with a factor graph --ReptileLawyer 18:13, 24 April 2006 (UTC)[reply]

Complexity?

Anyone know what the complexity of Viterbi is? I skimmed over it and it looks like it's O(s*n^2), where s is the size of the sequence it's given, and n is the number of possible states. Any thoughts? --aciel 20:41, 22 November 2005 (UTC)[reply]

That's correct. Depending on the data structures used to represent the underlying HMM, the Viterbi/Forward algorithm can be made to run in O(s*n*d) time, where s is the length of the sequence, n is the number of hidden states, and d is the average out-degree of each state. This is because you only need to explore pairs of states that are actually connected by an edge. In a fully connected HMM, the running time is O(s*n^2). --MarkSweep (call me collect) 22:11, 23 November 2005 (UTC)[reply]
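To make the sparse case concrete, here is a minimal sketch (my own, not the article's code; out_edges is an assumed adjacency-list representation mapping each state to its (successor, probability) pairs, and emissions here are attached to states rather than to outgoing edges as in the article's code). Each induction step touches only the edges that actually exist, which is where the O(s*n*d) bound comes from:

def viterbi_scores(obs, states, start_p, out_edges, emit_p):
    """Return the best-path score for each state after consuming obs."""
    # Initialization: one entry per state.
    v = {i: start_p[i] * emit_p[i][obs[0]] for i in states}
    for o in obs[1:]:                      # s - 1 induction steps
        nv = {j: 0.0 for j in states}
        for i in states:                   # n states ...
            for j, a_ij in out_edges[i]:   # ... times ~d outgoing edges each
                cand = v[i] * a_ij * emit_p[j][o]
                if cand > nv[j]:
                    nv[j] = cand
        v = nv
    return v

In a fully connected HMM every out_edges[i] has n entries, so the inner work per step becomes n^2 and the total O(s*n^2), matching the figure above.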

Is the Python correct?

The Python code contains the line p = ep[state][output] * tp[state][next_state]

which in my understanding should be p = ep[next_state][output] * tp[state][next_state]


I assume the first state returned by the example is the initial state and the second state is the state that matches the first observation.

The example returns ['Sunny', 'Rainy', 'Rainy', 'Rainy'] and, after the modification, it returns ['Sunny', 'Sunny', 'Rainy', 'Rainy'],

which would seem better given the example (observation 1 = 'walk').

Am I mistaken? If so, a little help on the 4 returned states would be welcome.

The code is correct as is. It's written the way it is to simplify the main loop and to allow an empty sequence of observations. For example, here's what happens when you run the code on an empty observation sequence:
 >>> forward_viterbi([], ['q','r'], {'q':0.8, 'r':0.2}, {}, {})
 (1.0, ['q'], 0.80000000000000004)
 >>> 
This return value is the triple (1.0, ['q'], 0.8), which indicates the following: The probability of the observation sequence conditional on its length is 1.0 (that's obviously correct, since there is exactly one sequence with length zero); the Viterbi path consists of a single edge, leading from the (unnamed) start state to the state 'q'; and the probability of the Viterbi path is 0.8. The Viterbi path as returned by this code contains one more state, namely the state that is reached after leaving the state corresponding to the last observation (in this case, the state reached from the start state with probability 0.8). That's because emissions are associated with outgoing edges, rather than incoming edges (you may have seen a presentation of this algorithm that assumes the latter). Both versions of HMMs exist, in addition to a third one where emissions are properly associated with edges, so there are at least three common variants of the forward algorithm and the Viterbi algorithm. --MarkSweep (call me collect) 17:23, 15 December 2005 (UTC)[reply]
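To make the distinction concrete, here is a minimal sketch (my own; the helper names are hypothetical, and tp/ep mirror the article's code) of how the inner update differs between the two conventions discussed above:

def step_emit_on_source(v_prob, state, next_state, output, tp, ep):
    # Convention used by the article's code: the current state emits the
    # symbol on its outgoing edge, then we transition. This is why the
    # returned Viterbi path ends one state past the last observation.
    return v_prob * ep[state][output] * tp[state][next_state]

def step_emit_on_target(v_prob, state, next_state, output, tp, ep):
    # Rabiner-style convention: we transition first, and the state being
    # entered emits the symbol. The path then has one state per emission.
    return v_prob * tp[state][next_state] * ep[next_state][output]

Both variants visit the same edges; they differ in which state's emission probability multiplies a given transition.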
I (a different user) still don't understand. I haven't tested the change to the Python code, but in my Haskell version both ways seem to work for the no-observation case. --196.210.100.237 (talk) 20:27, 11 February 2008 (UTC)[reply]
Running the algorithm by hand seems to indicate that the proposed correction to the algorithm is right... I don't think this algorithm is correct.--151.201.108.130 (talk) 18:40, 6 May 2008 (UTC)[reply]
Really? I've never seen emissions associated with edges at all, only with nodes. That is certainly the case in the Hidden Markov model page here and in, e.g., the Rabiner reference used on that page, or in the Viterbi paper itself. If the Viterbi path is the path of states corresponding to the sequence of emissions, then there should be one state for each emission, not an extra state at the end. 24.91.117.221 (talk) 14:32, 7 July 2008 (UTC)[reply]

SOVA

I intend to add a section on the soft-output version of the VA... give me some time and let the link stay!!!

Pizzadeliveryboy 02:47, 19 January 2006 (UTC)[reply]

Algorithm seems wrong

I tried to apply the forward algorithm to the example given in the article, but I do not get the same result. Briefly, here is the forward algorithm I applied (from the Rabiner paper). Take an HMM with transition matrix A, emission matrix B, and an observation sequence O = (O[0], ..., O[T]); let alpha_t(i) be the probability of observing O[0], ..., O[t] and of the HMM being in state i at time t, so we have alpha_t(i) = P(O[0], ..., O[t], state i at time t).

The algorithm is:

1. Initialization:

alpha_0(i) = pi(i) * B[i][O[0]], with pi(i) the start probability of state i

2. Induction:

alpha_t(j) = ( sum over states i of alpha_{t-1}(i) * A[i][j] ) * B[j][O[t]]

3. Termination:

P(O) = sum over states i of alpha_T(i)

Applying this algorithm with this code in Python:

def compute_alpha(pi, A, B, O, alpha, t):
    # Forward pass: fill alpha[0..t] recursively.
    if t == 0:
        # Initialization: alpha_0(i) = pi(i) * B[i][O[0]]
        for state in pi:
            alpha[0][state] = pi[state] * B[state][O[0]]
    else:
        compute_alpha(pi, A, B, O, alpha, t - 1)
        # Induction: alpha_t(j) = (sum_i alpha_{t-1}(i) * A[i][j]) * B[j][O[t]]
        for state_j in pi:
            temp = 0
            for state_i in pi:
                temp += alpha[t - 1][state_i] * A[state_i][state_j]
            alpha[t][state_j] = temp * B[state_j][O[t]]

def evaluate(pi, A, B, O):
    """Run the forward algorithm on O and return the largest final alpha."""
    # One dict of state scores per time step. (The original allocated one
    # per state instead, which only worked because the example happened to
    # have as many time steps as states plus one.)
    alpha = dict()
    for t in range(len(O)):
        alpha[t] = dict()
    T = len(O) - 1
    compute_alpha(pi, A, B, O, alpha, T)
    best = 0
    for state in alpha[T]:
        if best <= alpha[T][state]:
            best = alpha[T][state]
    print(alpha)
    return best

I got 0.02904 as the probability for the observation sequence ['walk', 'shop', 'clean'] given in the example. So which algorithm is wrong? I can give you the steps of the calculation; they are quite simple for such a small example.JeDi 05:14, 27 July 2006 (UTC)[reply]

Your algorithm failed to select the survivor path for each new symbol. I suggest using

def compute_alpha(pi, A, B, O, alpha, t):
    if t == 0:
        for state in pi:
            alpha[0][state] = pi[state] * B[state][O[0]]
    else:
        compute_alpha(pi, A, B, O, alpha, t - 1)
        for state_j in pi:
            # Viterbi induction: keep only the best (survivor) predecessor
            # instead of summing over all of them.
            maxval = 0
            for state_i in pi:
                if alpha[t - 1][state_i] * A[state_i][state_j] > maxval:
                    maxval = alpha[t - 1][state_i] * A[state_i][state_j]
            alpha[t][state_j] = maxval * B[state_j][O[t]]

Using this modified code suggests that the Viterbi path is Sunny, Rainy, Rainy, with probability 0.01344. This is the same value I get when doing all of the computations by hand (following Viterbi's paper, not your math or the article here). It corresponds to the product 0.4*0.6*0.4*0.4*0.7*0.5: the probabilities of beginning in Sunny, emitting walk, transitioning to Rainy, emitting shop, transitioning to Rainy, and finally emitting clean. If this is correct, the article will need to be changed. (Though your algorithm is misleading in that it prints, at each step, the currently most likely state, which will not necessarily end up on the final survivor path.)24.91.117.221 (talk) 14:46, 7 July 2008 (UTC)[reply]
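As a sanity check (my own sketch, not part of the original discussion), brute-forcing all 2^3 = 8 state sequences with the article's parameters reproduces the 0.01344 figure, and summing over all paths gives the forward probability for comparison:

from itertools import product

# Parameters as given in the article's example.
start = {'Rainy': 0.6, 'Sunny': 0.4}
trans = {'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
         'Sunny': {'Rainy': 0.4, 'Sunny': 0.6}}
emit = {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
        'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}
obs = ['walk', 'shop', 'clean']

best, total = (0.0, None), 0.0
for path in product(['Rainy', 'Sunny'], repeat=len(obs)):
    p = start[path[0]] * emit[path[0]][obs[0]]
    for t in range(1, len(obs)):
        p *= trans[path[t - 1]][path[t]] * emit[path[t]][obs[t]]
    total += p
    best = max(best, (p, path))

print(best)   # ~0.01344 for ('Sunny', 'Rainy', 'Rainy') -- the Viterbi path
print(total)  # ~0.033612 -- forward probability, the sum over all paths

Note that the 0.02904 quoted earlier in this thread is the largest of the final forward values (0.02904 vs. 0.004572), not this sum.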

A concrete example is partially wrong

From the transition matrix one can derive the equilibrium (stationary) state probabilities: for the given transition matrix, the probability of a rainy day is 4/7, while the probability of a sunny day is 3/7. It appears that the values in the example (0.6 and 0.4) are rounded. 08:20, 28 December 2005 (UTC)
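For clarity (this derivation is implied but not shown above): the stationary distribution solves pi = pi * A, i.e. pi_Rainy = 0.7*pi_Rainy + 0.4*pi_Sunny together with pi_Rainy + pi_Sunny = 1, which gives pi_Rainy = 4/7 ≈ 0.571 and pi_Sunny = 3/7 ≈ 0.429.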

logs, comments

I realize it would make the code more complicated, but the implementation might be misleading because you really want to use logs of the probabilities to avoid underflows. The variable names could also be improved.

I think the output should have the same number of states as the number of observations. The observations are usually associated with states, not transitions between states.

The code, as is, works fine for the given example. However, for anything but trivial examples it is likely to fail. Using logs avoids this (just a hint, really).
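A minimal sketch of that hint (my own example, not the article's code): carrying log-probabilities turns the products into sums, so the scores stay in a safe range no matter how long the sequence is.

import math

def to_log(table):
    # Convert a nested probability table to log space; log(0) -> -inf.
    return {k: {k2: (math.log(p) if p > 0 else float('-inf'))
                for k2, p in row.items()}
            for k, row in table.items()}

def viterbi_log_step(prev, states, log_tp, log_ep, output):
    # One Viterbi induction step in log space: addition replaces
    # multiplication, and float('-inf') makes impossible paths lose
    # every max() comparison.
    return {j: max(prev[i] + log_tp[i][j] for i in states) + log_ep[j][output]
            for j in states}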

Markov vs. probabilities

How does the Markov assumption of non-dependence on history coexist with the state transition probabilities being essential for the algorithm? Can that aspect be made clearer? 130.126.180.235 04:01, 30 May 2007 (UTC)BillBusen[reply]

The Markov assumption simply means non-dependence on history BEYOND one state back, so state transition probabilities, which encompass a one-state history, are permissible. I hope that clears up any confusion. --Tekhnofiend (talk) 17:43, 27 April 2008 (UTC)[reply]
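In symbols (added for clarity; this is just the standard first-order Markov property, with q_t denoting the state at time t):

P(q_t | q_{t-1}, q_{t-2}, ..., q_1) = P(q_t | q_{t-1})

so the transition matrix entry A[i][j] = P(q_t = j | q_{t-1} = i) captures all of the history the algorithm is allowed to use.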

concrete example, trellis diagrams

I'm somewhat disappointed by the section called "concrete example".

Sorely missing from this article are pictures of trellis diagrams, and an in-depth comparison with the space/time performance of the forward algorithm, and in particular of the forward-backward algorithm. It's critical (I think) to note that the Viterbi algo drops many of the links that the other algorithms keep, in exchange for an O(N) run time (as opposed to O(N^2) or whatever), and only a minor loss of entropy.

Certainly, I was utterly unable to understand Viterbi until I saw it laid out in comparison to the other algos, at which point things clicked together. linas (talk) 21:06, 10 June 2008 (UTC)[reply]

numerical example

I thought this might be useful. Using the data given in the article, here is a suggested numerical example of the algorithm. I'm rather new to the algorithm, so I'm posting it here to get comments before it goes in the main article. It is possible that I have misunderstood the algorithm and that this is incorrect. It is also possible that the algorithm as now used and as described in the article is different from that of the Viterbi paper on error bounds for convolutional codes.

What is the most probable path (and its probability) corresponding to the sequence of observations ['walk', 'shop', 'clean']? We first find the state most likely to have emitted 'walk', taking into account the probability of the system being in that state at all. In this case, the Rainy state is initially occupied with probability 0.6 and Rainy emits 'walk' with probability 0.1, so the probability of the actual sequence beginning with Rainy is 0.6*0.1 = 0.06. Similarly, the probability of the sequence beginning with Sunny is 0.4*0.6 = 0.24, because there is a probability of 0.4 of beginning in Sunny and Sunny emits 'walk' with probability 0.6. As there is only one path ending in Sunny and one path ending in Rainy, each becomes the survivor path (one state long) of this first step.

So the system is expected to be in the state Rainy with probability 0.06. For each state that can follow Rainy, what is the probability of that state emitting the next observation in the sequence: 'shop'? It is the probability that the system was in Rainy at all, 0.06, times the probability of Rainy transitioning to Rainy, 0.7, times the probability of Rainy emitting 'shop', 0.4. The probability that the first two states in the sequence are Rainy and Rainy is 0.06*0.7*0.4 = 0.0168. In the same way, we find the probability of the first two states being Sunny then Rainy is 0.0384. These are the only two paths of two states that can end in Rainy. Only the more likely one becomes the survivor path to be used in the next step. In this case, it is the path Sunny then Rainy. Similarly, we look at the two paths that end in Sunny, one beginning with Rainy and one with Sunny. Multiplying the appropriate probabilities shows that Sunny then Sunny is the most likely path that ends with Sunny, with probability 0.0432.
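In general terms (added for clarity; this is the standard Viterbi recurrence, using notation not in the original post): each survivor score is V_t(j) = max over i of [ V_{t-1}(i) * a(i,j) ] * b_j(o_t), where a(i,j) is the transition probability and b_j(o_t) the probability of state j emitting observation o_t. The calculations below are exactly this recurrence for the second observation.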

Specifically, the calculations are

Rainy->Rainy: 0.06*0.7*0.4 = 0.0168
Sunny->Rainy: 0.24*0.4*0.4 = 0.0384 "survivor"
Rainy->Sunny: 0.06*0.3*0.3 = 0.0054
Sunny->Sunny: 0.24*0.6*0.3 = 0.0432 "survivor"

That step is repeated on the two survivor paths: Sunny then Rainy and Sunny then Sunny. The four new calculations are

Sunny->Rainy->Rainy: 0.0384*0.7*0.5 = 0.01344 "survivor"
Sunny->Sunny->Rainy: 0.0432*0.4*0.5 = 0.00864
Sunny->Rainy->Sunny: 0.0384*0.3*0.1 = 0.001152
Sunny->Sunny->Sunny: 0.0432*0.6*0.1 = 0.002592 "survivor"

Of these, the most probable path that would emit the specified observations is Sunny, Rainy, Rainy, with probability 0.01344.
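For completeness, here is a short dynamic-programming transcription of this worked example (a sketch of my own, using the emissions-on-states convention that this example follows; it reproduces the survivor tables above step by step):

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[s]: probability of the best path ending in s; path[s]: that path.
    V = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    path = {s: [s] for s in states}
    for o in obs[1:]:
        new_V, new_path = {}, {}
        for j in states:
            # Pick the survivor: the best predecessor i for state j.
            i_best = max(states, key=lambda i: V[i] * trans_p[i][j])
            new_V[j] = V[i_best] * trans_p[i_best][j] * emit_p[j][o]
            new_path[j] = path[i_best] + [j]
        V, path = new_V, new_path
    best = max(states, key=lambda s: V[s])
    return V[best], path[best]

states = ['Rainy', 'Sunny']
start_p = {'Rainy': 0.6, 'Sunny': 0.4}
trans_p = {'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
           'Sunny': {'Rainy': 0.4, 'Sunny': 0.6}}
emit_p = {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
          'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}

print(viterbi(['walk', 'shop', 'clean'], states, start_p, trans_p, emit_p))
# -> probability ~0.01344 for the path ['Sunny', 'Rainy', 'Rainy']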