Boyer–Moore string-search algorithm
The Boyer–Moore string search algorithm is a particularly efficient string searching algorithm, and it has been the standard benchmark for the practical string search literature.[1] It was developed by Bob Boyer and J Strother Moore in 1977. The algorithm preprocesses the target string (key) that is being searched for, but not the string being searched (unlike some algorithms which preprocess the string to be searched and can then amortize the expense of the preprocessing by searching repeatedly). The execution time of the Boyer-Moore algorithm can actually be sub-linear: it does not need to check every character of the string to be searched, but rather skips over some of them. Generally the algorithm gets faster as the key being searched for becomes longer. Its efficiency derives from the fact that, with each unsuccessful attempt to find a match between the search string and the text it is searching in, it uses the information gained from that attempt to rule out as many positions of the text as possible where the string could not match.

How the algorithm works

- - - - - - - X - - - - - - -
A N P A N M A N - - - - - - -
- A N P A N M A N - - - - - -
- - A N P A N M A N - - - - -
- - - A N P A N M A N - - - -
- - - - A N P A N M A N - - -
- - - - - A N P A N M A N - -
- - - - - - A N P A N M A N -
- - - - - - - A N P A N M A N
The X in position 8 excludes all 8 of the possible starting positions shown.

What people frequently find surprising about the Boyer-Moore algorithm when they first encounter it is that its verifications – its attempts to check whether a match exists at a particular position – work backwards. If it starts a search at the beginning of a text for the word "ANPANMAN", for instance, it checks the eighth position of the text to see if it contains an "N". If it finds the "N", it moves to the seventh position to see if that contains the last "A" of the word, and so on until it checks the first position of the text for an "A".

Why Boyer-Moore takes this backward approach is clearer when we consider what happens if the verification fails – for instance, if instead of an "N" in the eighth position, we find an "X". The "X" doesn't appear anywhere in "ANPANMAN", and this means there is no match for the search string at the very start of the text – or at the next seven positions following it, since those would all fall across the "X" as well. After checking just one character, we're able to skip ahead and start looking for a match starting at the ninth position of the text, just after the "X".

This explains why the best-case performance of the algorithm, for a text of length N and a fixed pattern of length M, is N/M: in the best case, only one in M characters needs to be checked. This also explains the somewhat counter-intuitive result that the longer the pattern we are looking for, the faster the algorithm will usually be able to find it.

The algorithm precomputes two tables to process the information it obtains in each failed verification: one table calculates how many positions ahead to start the next search based on the identity of the character that caused the match attempt to fail; the other makes a similar calculation based on how many characters were matched successfully before the match attempt failed. (Because these two tables return results indicating how far ahead in the text to "jump", they are sometimes called "jump tables", which should not be confused with the more common meaning of jump tables in computer science.)
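As a minimal sketch, the combination of the two shifts at a mismatch typically looks like the following C fragment. It uses the same occ[] and skip[] conventions as the example implementation later in this article (occ[c] is the index of the last occurrence of character c in the pattern, or -1; skip[i] is the good-suffix shift for a mismatch at pattern index i, assumed to be at least 1 as produced by the preprocessing shown there); the function name is illustrative.

#include <stddef.h>      /* size_t */
#include <sys/types.h>   /* ssize_t (POSIX) */

/* Sketch: choose the shift to apply after a mismatch at pattern index
 * mismatch_pos, where bad_char is the text character that failed to match. */
static size_t shift_after_mismatch(const size_t skip[], const ssize_t occ[],
                                   unsigned char bad_char, size_t mismatch_pos)
{
    ssize_t bad_char_shift    = (ssize_t)mismatch_pos - occ[bad_char]; /* may be <= 0 */
    ssize_t good_suffix_shift = (ssize_t)skip[mismatch_pos];           /* >= 1 here */
    return (size_t)(bad_char_shift > good_suffix_shift ? bad_char_shift
                                                       : good_suffix_shift);
}

Taking the larger of the two shifts is safe because each shift individually rules out the skipped alignments.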

The first table

The first table is easy to calculate: Start at the last character of the sought string and move towards the first character. Each time you move left, if the character you are on is not in the table already, add it; its Shift value is its distance from the rightmost character. All other characters receive a count equal to the length of the search string.

Example: For the string ANPANMAN, the first table would be as shown (for clarity, entries are listed in the order they would be added to the table). Note that N receives a shift of 3 rather than 0: only the first m−1 characters are used to build the table, so the entry for N comes from the second N from the right.

Character Shift
N 3
A 1
M 2
P 5
all other characters 8

The amount of shift calculated by the first table is sometimes called the "bad character shift"[1].
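The following short sketch shows one way this table might be built in C, following the description above; the function name make_delta1 is illustrative ("delta1" is a name often used for this table in the literature). Running it on "ANPANMAN" reproduces the values in the table above.

#include <limits.h>  /* UCHAR_MAX */
#include <stddef.h>  /* size_t */

/* Sketch: build the first ("bad character") table.
 * delta1[c] = distance of the rightmost occurrence of c from the last
 * character of the pattern, looking only at the first m-1 characters;
 * characters that do not occur there get the full pattern length m. */
static void make_delta1(size_t delta1[UCHAR_MAX + 1],
                        const unsigned char *pat, size_t m)
{
    for (size_t c = 0; c <= UCHAR_MAX; ++c)
        delta1[c] = m;                      /* "all other characters" */
    for (size_t i = 0; i + 1 < m; ++i)      /* last character excluded */
        delta1[pat[i]] = m - 1 - i;         /* distance from the rightmost character */
}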

The second table

- - - - A M A N - - - - - - -
A N P A N M A N - - - - - - -
- A N P A N M A N - - - - - -
- - A N P A N M A N - - - - -
- - - A N P A N M A N - - - -
- - - - A N P A N M A N - - -
- - - - - A N P A N M A N - -
- - - - - - A N P A N M A N -
The mismatch "A" in position 5 (3 back from the last letter of the needle) excludes the first 6 of the possible starting positions shown.

The second table is slightly more difficult to calculate: for each value of i less than the length of the search string, we must first calculate the pattern consisting of the last i characters of the search string, preceded by a mismatch for the character before it; then we initially line it up with the search pattern and determine the least number of characters the partial pattern must be shifted left before the two patterns match. For instance, for the search string ANPANMAN, the table would be as follows (here (¬X) denotes any single character that is not X):

i Pattern Shift
0 (¬N) 1
1 (¬A)N 8
2 (¬M)AN 3
3 (¬N)MAN 6
4 (¬A)NMAN 6
5 (¬P)ANMAN 6
6 (¬N)PANMAN 6
7 (¬A)NPANMAN 6

The amount of shift calculated by the second table is sometimes called the "good suffix shift"[2] or "(strong) good suffix rule". The originally published Boyer-Moore algorithm[2] uses a simpler, weaker version of the good suffix rule, in which each entry in the above table does not require a mismatch for the left-most character. This is sometimes called the "weak good suffix rule" and is not sufficient for proving that Boyer-Moore runs in linear worst-case time.
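The following brute-force sketch builds this table directly from the definition above; it is written for clarity rather than speed, and the function name make_delta2 is illustrative. It indexes the result by the position of the mismatch, the same convention used by the skip[] array in the example implementation below; running it on "ANPANMAN" reproduces the shifts in the table above.

#include <stddef.h>  /* size_t */

/* Sketch: build the second ("good suffix") table by trying each shift in turn.
 * delta2[i] is the shift to apply when the characters after pattern index i
 * have matched but pat[i] itself has not.  This construction is O(m^3) and is
 * only meant to illustrate the definition. */
static void make_delta2(size_t delta2[], const unsigned char *pat, size_t m)
{
    for (size_t a = 0; a < m; ++a) {          /* a = number of characters already matched */
        size_t s;
        for (s = 1; s <= m; ++s) {            /* try shifts from smallest to largest */
            int ok = 1;
            /* the matched suffix pat[m-a .. m-1] must still agree after shifting by s,
             * for the positions that remain inside the pattern */
            for (size_t k = 0; k < a && ok; ++k)
                if (m - a + k >= s && pat[m - a + k - s] != pat[m - a + k])
                    ok = 0;
            /* strong rule: the character that mismatched must not reappear
             * at the corresponding position */
            if (ok && m - a - 1 >= s && pat[m - a - 1 - s] == pat[m - a - 1])
                ok = 0;
            if (ok)
                break;
        }
        delta2[m - a - 1] = s;                /* index by the mismatch position */
    }
}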

Performance of the Boyer-Moore string search algorithm

In the worst case, finding all occurrences in a text requires approximately 3*N comparisons, hence the complexity is O(N), regardless of whether the text contains a match or not. The proof is due to Richard Cole; see R. Cole, "Tight bounds on the complexity of the Boyer-Moore algorithm", Proceedings of the 2nd Annual ACM-SIAM Symposium on Discrete Algorithms (1991), for details. This bound took some years to establish: in 1977, the year the algorithm was devised, the maximum number of comparisons was shown to be no more than 6*N; in 1980 it was shown to be no more than 4*N; Cole's 3*N result followed in 1991.

Example implementation

Here is an example implementation of the Boyer-Moore algorithm, written in C99.

Note: For simplicity of implementation, the method used here to construct the good-suffix table (skip[]) is slower than it needs to be. It would therefore not give a fair comparison against other algorithms should you try to compare their speeds; a faster construction method should be used for that purpose.

#include <string.h>
#include <limits.h>
#include <sys/types.h>  /* ssize_t (POSIX) */

/* Is the needle, shifted right by (nlen - offset), consistent with a suffix of
 * length suffixlen having matched and the character just before it having
 * mismatched?  Positions that fall off the left end of the needle are not
 * checked.
 */
static int suffix_match(const unsigned char *needle, size_t nlen, size_t offset, size_t suffixlen)
{
    if (offset > suffixlen)
        /* The whole suffix still lies inside the needle: it must recur ending
         * at "offset", and the character preceding it must differ
         * (strong good suffix rule). */
        return needle[offset - suffixlen - 1] != needle[nlen - suffixlen - 1] &&
            memcmp(needle + nlen - suffixlen, needle + offset - suffixlen, suffixlen) == 0;
    else
        /* Only the last "offset" characters of the suffix remain inside the
         * needle: they must match a prefix of the needle. */
        return memcmp(needle + nlen - offset, needle, offset) == 0;
}

static size_t max(ssize_t a, ssize_t b)
{
    return a > b ? a : b;
}

/* Returns a pointer to the first occurrence of "needle"
 * within "haystack", or NULL if not found.
 */
const unsigned char* memmem_boyermoore
    (const unsigned char* haystack, size_t hlen,
     const unsigned char* needle,   size_t nlen)
{
    if (nlen > hlen || nlen == 0 || !haystack || !needle)
        return NULL;

    size_t skip[nlen];          /* Good-suffix shifts, indexed by mismatch position */
    ssize_t occ[UCHAR_MAX + 1]; /* Index of the last occurrence of each character */

    /* Preprocess #1: init occ[] */

    /* Initialize the table to the default value */
    for (size_t a = 0; a < UCHAR_MAX + 1; ++a)
        occ[a] = -1;

    /* Then populate it with the analysis of the needle,
     * ignoring the last letter */
    for (size_t a = 0; a < nlen - 1; ++a)
        occ[needle[a]] = a;

    /* Preprocess #2: init skip[] */
    /* Note: This step could be made a lot faster.
     * A simple implementation is shown here. */
    for (size_t a = 0; a < nlen; ++a)
    {
        size_t offs = nlen;
        while (offs && !suffix_match(needle, nlen, offs, a))
            --offs;
        skip[nlen - a - 1] = nlen - offs;
    }

    /* Search: */
    for (size_t hpos = 0; hpos <= hlen - nlen; )
    {
        size_t npos = nlen - 1;
        while (needle[npos] == haystack[npos + hpos])
        {
            if (npos == 0)
                return haystack + hpos;

            --npos;
        }
        /* Advance by the larger of the good-suffix and bad-character shifts */
        hpos += max(skip[npos], (ssize_t)npos - occ[haystack[npos + hpos]]);
    }
    return NULL;
}
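A minimal usage sketch follows, assuming memmem_boyermoore() above is compiled in the same translation unit; the sample strings are arbitrary test data.

#include <stdio.h>
#include <string.h>

int main(void)
{
    const unsigned char haystack[] = "WHICH-FINALLY-HALTS.--AT-THAT-POINT";
    const unsigned char needle[]   = "AT-THAT";

    const unsigned char *hit =
        memmem_boyermoore(haystack, strlen((const char *)haystack),
                          needle,   strlen((const char *)needle));
    if (hit)
        printf("found at offset %ld\n", (long)(hit - haystack));   /* prints 22 */
    else
        printf("not found\n");
    return 0;
}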

Variants

The Turbo Boyer-Moore algorithm takes a constant amount of additional space to complete a search within 2n comparisons (as opposed to 3n for Boyer-Moore), where n is the number of characters in the text to be searched.[3]

The Boyer-Moore-Horspool algorithm is a simplification of the Boyer-Moore algorithm that leaves out the "second table". The Boyer-Moore-Horspool algorithm requires (in the worst case) M*N comparisons, while the Boyer-Moore algorithm requires (in the worst case) only 3*N comparisons[citation needed].
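For illustration, here is a compact sketch of the Boyer-Moore-Horspool variant, using the same interface as the example implementation above; the function name memmem_horspool is illustrative.

#include <string.h>
#include <limits.h>
#include <stddef.h>

/* Sketch of Boyer-Moore-Horspool: only the bad-character table is kept, and
 * the shift is always taken from the text character currently aligned with
 * the last character of the needle. */
const unsigned char *memmem_horspool(const unsigned char *haystack, size_t hlen,
                                     const unsigned char *needle,   size_t nlen)
{
    size_t shift[UCHAR_MAX + 1];

    if (nlen == 0 || nlen > hlen || !haystack || !needle)
        return NULL;

    for (size_t c = 0; c <= UCHAR_MAX; ++c)
        shift[c] = nlen;                     /* default: jump past the whole window */
    for (size_t i = 0; i + 1 < nlen; ++i)
        shift[needle[i]] = nlen - 1 - i;     /* distance from the last character */

    for (size_t hpos = 0; hpos <= hlen - nlen; hpos += shift[haystack[hpos + nlen - 1]])
        if (memcmp(haystack + hpos, needle, nlen) == 0)
            return haystack + hpos;

    return NULL;
}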

Uses

Wikipedia uses a patched version of PHP which uses an implementation of Boyer-Moore to speed up page loading. Wikipedia database developer Domas Mituzas discussed Wikipedia's use of the Boyer-Moore algorithm in a 2007 presentation:

"Fast String Search" is a replacement module for PHP’s strtr() function, which uses a Commentz-Walter–style algorithm for multiple search terms, or the Boyer-Moore algorithm for single search terms. License collisions (GPL code was used for it) do not allow its inclusion in PHP. Using a proper algorithm instead of foreach loops is an incredible boost for some applications.[3]

References

  1. ^ Hume, A.; Sunday, D. (1991). "Fast String Searching". Software—Practice and Experience 21 (11): 1221–1248.
  2. ^ Boyer, R. S.; Moore, J. S. (1977). "A fast string searching algorithm". Communications of the ACM 20 (10): 762–772. doi:10.1145/359842.359859.
  3. ^ Mituzas, D. "Wikipedia: Site internals, configuration, code examples and management issues". http://dammit.lt/uc/workbook2007.pdf
