Talk:Aho–Corasick algorithm

From Wikipedia, the free encyclopedia
WikiProject Computer science (Rated Start-class, Mid-importance)
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Start-Class on the project's quality scale.
This article has been rated as Mid-importance on the project's importance scale.

Something is wrong with the first sentence in the "Informally, the algorithm constructs a trie ..." paragraph -- some kind of copy-paste replacement error; haven't figured out exactly how to fix it yet. Looks like there isn't a clean version to go back to; the mismatched parenthetical expression hasn't changed since it was added (17:36, 27 July 2006). Anyone? dvgrn 18:46, 19 September 2006 (UTC)

  • Got rid of the obvious error, but may have introduced some subtle inaccuracy -- will have to study the algorithm some more to be sure. dvgrn 20:40, 19 September 2006 (UTC)
  • The link that should lead to the PHP/JavaScript implementation of the algorithm is broken. Does anyone know where to find it? —Preceding unsigned comment added by (talk) 20:25, 23 May 2008 (UTC)

Bad example dictionary[edit]

The example is bad from a didactic perspective, because it doesn't contain any shared nodes/paths except for proper prefixes, like the root/empty prefix. Shinobu (talk) 02:20, 6 October 2008 (UTC)


Could somebody add details regarding when this algorithm was invented, when it was made public, etc.? It would also be better to add some background about the algorithm, such as what kinds of practical problems it addresses and what its applications are.

Lokesh 2000 (talk) 04:42, 20 January 2010 (UTC)

"The Aho–Corasick string matching algorithm formed the basis of the original Unix command fgrep."

Not a great example. There were better algorithms for finding a single fixed string in text which fgrep could have used (e.g., Knuth–Morris–Pratt). Remember that the strength of Aho–Corasick lies in finding a match for any of a large set of fixed strings in text, and this is not what fgrep does. Aho–Corasick was easily extended to implement Kleene's regular expressions, and this extension is used in grep and egrep. So Aho–Corasick's use in fgrep was simply a case of ease of implementation. A virus scanner is by far the best example of the use of Aho–Corasick: in one sequential read of a file, many thousands of virus "signature" strings are compared. Gdt (talk) 06:16, 10 August 2012 (UTC)

I suppose, but an important part of fgrep is that it allows for many fixed strings, likely with the -f flag. As I understand it, regular grep was then extended with -f. GNU grep then, as far as I know, added a Boyer–Moore search on the longest fixed string before doing the regex match, thus also allowing for very fast fixed-string matching. That removes the need for a separate fgrep. But yes, the optimal case for Aho–Corasick is a large number of pattern strings. Gah4 (talk) 22:27, 4 July 2013 (UTC)
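For readers following the discussion above, the strength being described (matching any of many fixed strings in a single pass over the text) can be illustrated with a minimal sketch of the automaton. This is an unofficial illustration, not the fgrep or BLAST source; function names are mine:

```python
from collections import deque

def build_automaton(patterns):
    """Build a simple Aho-Corasick automaton: a trie plus failure links."""
    goto = [{}]      # goto[state][char] -> next state
    fail = [0]       # failure link for each state
    output = [[]]    # patterns recognized at each state
    for pat in patterns:
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({}); fail.append(0); output.append([])
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        output[state].append(pat)
    # Breadth-first pass to set failure links, shallowest states first.
    queue = deque(goto[0].values())
    while queue:
        state = queue.popleft()
        for ch, nxt in goto[state].items():
            queue.append(nxt)
            f = fail[state]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            # Inherit matches reachable via the failure link (suffix matches).
            output[nxt] += output[fail[nxt]]
    return goto, fail, output

def search(text, patterns):
    """Yield (end_index, pattern) for every match, in one pass over text."""
    goto, fail, output = build_automaton(patterns)
    state = 0
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for pat in output[state]:
            yield (i, pat)
```

Note that the text is read exactly once regardless of how many patterns are loaded, which is the property Gdt points to with the virus-scanner example.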

The text should also mention the Rabin-Karp algorithm with Bloom filters, as that is the major alternative to Aho-Corasick. The two algorithms have differing strengths and weaknesses, so one can't be recommended over the other. Gdt (talk) 06:24, 10 August 2012 (UTC)
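To make the comparison above concrete, here is a rough sketch of the multi-pattern Rabin–Karp idea: one rolling hash over the text, checked against the set of pattern hashes. For simplicity it assumes all patterns have the same length and uses a plain Python set where a real matcher would use a Bloom filter; the function name and constants are my own choices, not from any particular implementation:

```python
def rabin_karp_multi(text, patterns):
    """Multi-pattern Rabin-Karp sketch: equal-length patterns only."""
    patterns = set(patterns)
    m = len(next(iter(patterns)))
    assert all(len(p) == m for p in patterns), "equal-length patterns only"
    B, M = 256, (1 << 61) - 1           # base and a large prime modulus
    def h(s):
        v = 0
        for ch in s:
            v = (v * B + ord(ch)) % M
        return v
    hashes = {h(p) for p in patterns}   # stand-in for the Bloom filter
    if len(text) < m:
        return []
    top = pow(B, m - 1, M)              # weight of the outgoing character
    cur = h(text[:m])
    out = []
    for i in range(len(text) - m + 1):
        # On a hash hit, verify the substring to filter out collisions
        # (with a Bloom filter this also filters its false positives).
        if cur in hashes and text[i:i + m] in patterns:
            out.append((i, text[i:i + m]))
        if i + m < len(text):
            cur = ((cur - ord(text[i]) * top) * B + ord(text[i + m])) % M
    return out
```

The differing strengths are visible even in the sketch: Rabin–Karp does O(1) work per text position regardless of dictionary size but handles variable-length patterns awkwardly, whereas Aho–Corasick handles mixed lengths naturally at the cost of building and storing the automaton.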

The diagram doesn't show the dictionary suffix links. I'll upload a new image that does. (talk) 12:56, 16 August 2012 (UTC)


I have made an interactive visualization of the Aho–Corasick algorithm in Flash. If you find it appropriate for this article, please add it to External links; I don't want to be accused of COI. The link is

Just regex?[edit]

Isn't Aho-Corasick really just a specific case that regular expression engines that generate compiled state machines already support? — Preceding unsigned comment added by (talk) 18:17, 16 June 2013 (UTC)

It is an optimization for (very) large numbers of constant strings. Regexp engines might support it, especially if the strings are not actually regexps, but the ones I know compile each one separately. I believe that GNU grep uses Boyer–Moore to speed up its searching, based on the longest constant string in the regexp, but I haven't looked recently. Gah4 (talk) 15:04, 21 September 2016 (UTC)

Java vs. C++[edit]

There is a reference to a C++ implementation in the references. Someone added a reference to a Java implementation, but it was reverted. Is this a Java vs. C++ thing? Why one and not the other? I don't actually know about either of them. Gah4 (talk) 15:33, 21 September 2016 (UTC)

No, it is not a language thing. This article is about an algorithm; it is agnostic to programming issues. Generally, the external links section is for information that would be included in the article later or may be too detailed. External links that include readable content are better than links to just code. A link to an article about the algorithm (e.g., the lecture slides) would be much better than a code listing. It's poor practice to subject encyclopedia readers to digesting code, but we do it from time to time. Some code implementations also have substantial documentation in plain language.
A week ago an IP inserted an EL to a GitHub-hosted Java implementation.[1] I followed the link and found a project that appeared to have been posted a few days previously. That suggests some programmer has just done an implementation and posted it. That's the same as a blog; there's no independent review or assessment; it is not a reliable source. Its documentation is just a link to the 1975 paper that is already cited in this article. Then a link is placed on WP to advertise the implementation. Consequently, I reverted the addition with the edit comment "recent project? advert?"[2]
Although I recognized there was a GitHub C++ implementation, I did not research that link at the time and just left it (the WP article does not have an implementation). I just looked at that post now, and its documentation is a pointer to this WP article. I'm tempted to delete the C++ implementation link.
Glrx (talk) 17:51, 21 September 2016 (UTC)
I didn't look at either one, just wondered why one and not the other. Funny story, it seems that the algorithm was rediscovered by people working at the NCBI [1] in developing BLAST [2] software for DNA and protein sequence searching. I used their implementation, and cited it, for a project I was working on, and then someone told me that it was Aho-Corasick. I told the BLAST people, and they agreed. I suppose I could add BLAST to the external links section. It is definitely production level software, produced with government funding, not someone at home. I believe it is all C. Gah4 (talk) 18:45, 21 September 2016 (UTC)

What are the input what is the output[edit]

It's not clear from reading the article what the input must be. Is it a dictionary of words (a set, actually) and an array of chars, i.e. a string? Or is it an operation on two sets of words to find their intersection? What is the output? How does it relate to grep? i⋅am⋅amz3 (talk) 18:47, 9 May 2017 (UTC)
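As I understand the article, the input is a finite set of strings (the "dictionary") plus one text string, and the output is every occurrence of a dictionary word in the text. A quadratic-time reference version of that input/output contract (what Aho–Corasick computes in linear time; the function name is mine, for illustration only) would be:

```python
def dictionary_matches(text, dictionary):
    """Return every (start_index, word) where a dictionary word
    occurs as a substring of text.  Naive O(len(text) * sum of
    word lengths) reference, not the Aho-Corasick automaton."""
    return [(i, w)
            for i in range(len(text))
            for w in dictionary
            if text.startswith(w, i)]
```

The relation to grep is that `grep -F -f wordfile` solves this same problem: report which lines of input contain any of the fixed strings listed in wordfile.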

Aho Corasick vs TF-IDF[edit]

How does Aho–Corasick compare to a search-engine retrieval scoring function like TF-IDF? i⋅am⋅amz3 (talk) 18:47, 9 May 2017 (UTC)

  1. ^ "National Center for Biotechnology Information". NCBI. Retrieved 21 September 2016. 
  2. ^ "BLAST". NCBI. Retrieved 21 September 2016.