# Suffix array

| Suffix array | |
|---|---|
| Type | Array |
| Invented by | Manber & Myers (1990) |
| Space (average / worst case) | ${\displaystyle {\mathcal {O}}(n)}$ / ${\displaystyle {\mathcal {O}}(n)}$ |
| Construction (average / worst case) | ${\displaystyle {\mathcal {O}}(n)}$ / ${\displaystyle {\mathcal {O}}(n)}$ |

In computer science, a suffix array is a sorted array of all suffixes of a string. It is a data structure used in, among other applications, full-text indexes, data compression algorithms, and the field of bibliometrics.

Suffix arrays were introduced by Manber & Myers (1990) as a simple, space efficient alternative to suffix trees. They had independently been discovered by Gaston Gonnet in 1987 under the name PAT array (Gonnet, Baeza-Yates & Snider 1992).

Li, Li & Huo (2016) gave the first in-place ${\displaystyle {\mathcal {O}}(n)}$ time suffix array construction algorithm that is optimal both in time and space, where in-place means that the algorithm only needs ${\displaystyle {\mathcal {O}}(1)}$ additional space beyond the input string and the output suffix array.

Enhanced suffix arrays (ESAs) are suffix arrays with additional tables that reproduce the full functionality of suffix trees while preserving the same time and memory complexity.[1] A suffix array built over a subset of all suffixes of a string is called a sparse suffix array.[2] Multiple probabilistic algorithms have been developed to minimize the additional memory usage, including an optimal time and memory algorithm.[3]

## Definition

Let ${\displaystyle S=S[1]S[2]...S[n]}$ be a string of length ${\textstyle n}$, and let ${\displaystyle S[i,j]}$ denote the substring of ${\displaystyle S}$ ranging from position ${\displaystyle i}$ to ${\displaystyle j}$ (inclusive).

The suffix array ${\displaystyle A}$ of ${\displaystyle S}$ is now defined to be an array of integers providing the starting positions of suffixes of ${\displaystyle S}$ in lexicographical order. This means that an entry ${\displaystyle A[i]}$ contains the starting position of the ${\displaystyle i}$-th smallest suffix in ${\displaystyle S}$, and thus for all ${\displaystyle 1<i\leq n}$: ${\displaystyle S[A[i-1],n]<S[A[i],n]}$.

Each suffix of ${\displaystyle S}$ appears in ${\displaystyle A}$ exactly once. Suffixes are ordinary strings; they are sorted (as in a paper dictionary) before their starting positions (integer indices) are saved in ${\displaystyle A}$.
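To make the definition concrete, here is a minimal (and deliberately naive) Python sketch that builds the suffix array directly from the definition by sorting all suffixes; efficient construction algorithms are discussed later in the article:

```python
def naive_suffix_array(s: str) -> list[int]:
    """Suffix array of s, built directly from the definition.

    Sorts the suffixes lexicographically and returns their 1-based
    starting positions, matching the 1-based indexing used above.
    """
    order = sorted(range(len(s)), key=lambda i: s[i:])  # sort by suffix
    return [i + 1 for i in order]  # convert to 1-based positions

print(naive_suffix_array("banana$"))  # → [7, 6, 4, 2, 1, 5, 3]
```

This is only practical for short strings: materializing the slice `s[i:]` for every suffix makes the approach quadratic in space and worse than linearithmic in time.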

## Example

Consider the text ${\displaystyle S}$ = banana$ to be indexed:

| i | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| ${\displaystyle S[i]}$ | b | a | n | a | n | a | $ |

The text ends with the special sentinel letter $ that is unique and lexicographically smaller than any other character. The text has the following suffixes:

| Suffix | i |
|---|---|
| banana$ | 1 |
| anana$ | 2 |
| nana$ | 3 |
| ana$ | 4 |
| na$ | 5 |
| a$ | 6 |
| $ | 7 |

These suffixes can be sorted in ascending order:

| Suffix | i |
|---|---|
| $ | 7 |
| a$ | 6 |
| ana$ | 4 |
| anana$ | 2 |
| banana$ | 1 |
| na$ | 5 |
| nana$ | 3 |

The suffix array ${\displaystyle A}$ contains the starting positions of these sorted suffixes:

| i | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| ${\displaystyle A[i]}$ | 7 | 6 | 4 | 2 | 1 | 5 | 3 |

The suffix array with the suffixes written out vertically underneath for clarity:

```
i    = 1 2 3 4 5 6 7
A[i] = 7 6 4 2 1 5 3
       $ a a a b n n
         $ n n a a a
           a a n $ n
           $ n a   a
             a n   $
             $ a
               $
```

So for example, ${\displaystyle A[3]}$ contains the value 4, and therefore refers to the suffix starting at position 4 within ${\displaystyle S}$, which is the suffix ana$.

## Correspondence to suffix trees

Suffix arrays are closely related to suffix trees:

- Suffix arrays can be constructed by performing a depth-first traversal of a suffix tree. The suffix array corresponds to the leaf labels given in the order in which they are visited during the traversal, if edges are visited in the lexicographical order of their first character.
- A suffix tree can be constructed in linear time by using a combination of suffix array and LCP array. For a description of the algorithm, see the corresponding section in the LCP array article.

It has been shown that every suffix tree algorithm can be systematically replaced with an algorithm that uses a suffix array enhanced with additional information (such as the LCP array) and solves the same problem in the same time complexity.[4] Advantages of suffix arrays over suffix trees include improved space requirements, simpler linear-time construction algorithms (e.g., compared to Ukkonen's algorithm) and improved cache locality.[5]

## Space efficiency

Suffix arrays were introduced by Manber & Myers (1990) in order to improve over the space requirements of suffix trees: suffix arrays store ${\displaystyle n}$ integers. Assuming an integer requires 4 bytes, a suffix array requires ${\displaystyle 4n}$ bytes in total. This is significantly less than the ${\displaystyle 20n}$ bytes required by a careful suffix tree implementation.[6]

However, in certain applications, the space requirements of suffix arrays may still be prohibitive. Analyzed in bits, a suffix array requires ${\displaystyle {\mathcal {O}}(n\log n)}$ space, whereas the original text over an alphabet of size ${\displaystyle \sigma }$ only requires ${\displaystyle {\mathcal {O}}(n\log \sigma )}$ bits.
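To put numbers on this gap, a quick back-of-the-envelope check in Python (the 4-byte integer size and the 2-bits-per-base genome encoding are illustrative assumptions):

```python
# Space needed by a plain suffix array vs. the text itself, for a DNA
# alphabet (sigma = 4). Assumptions: 4-byte suffix-array entries and a
# 2-bit-per-base encoding of the genome.
n = 3_400_000_000                  # approximate length of the human genome
suffix_array_bytes = 4 * n         # one integer per suffix
genome_bytes = n // 4              # 2 bits per base = 4 bases per byte
print(suffix_array_bytes // genome_bytes)  # → 16
```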
For a human genome with ${\displaystyle \sigma =4}$ and ${\displaystyle n=3.4\times 10^{9}}$, the suffix array would therefore occupy about 16 times more memory than the genome itself. Such discrepancies motivated a trend towards compressed suffix arrays and BWT-based compressed full-text indices such as the FM-index. These data structures require only space within the size of the text or even less.

## Construction algorithms

A suffix tree can be built in ${\displaystyle {\mathcal {O}}(n)}$ time and then converted into a suffix array by a depth-first traversal, also in ${\displaystyle {\mathcal {O}}(n)}$ time, so there exist algorithms that can build a suffix array in ${\displaystyle {\mathcal {O}}(n)}$ time.

A naive approach to constructing a suffix array is to use a comparison-based sorting algorithm. Such an algorithm requires ${\displaystyle {\mathcal {O}}(n\log n)}$ suffix comparisons, but a single suffix comparison runs in ${\displaystyle {\mathcal {O}}(n)}$ time, so the overall runtime of this approach is ${\displaystyle {\mathcal {O}}(n^{2}\log n)}$.

More advanced algorithms take advantage of the fact that the suffixes to be sorted are not arbitrary strings but related to each other. These algorithms strive to achieve the following goals:[7]

- minimal asymptotic complexity ${\displaystyle \Theta (n)}$
- lightweight in space, meaning little or no working memory beside the text and the suffix array itself is needed
- fast in practice

One of the first algorithms to achieve all three goals is the SA-IS algorithm of Nong, Zhang & Chan (2009). The algorithm is also rather simple (< 100 LOC) and can be enhanced to simultaneously construct the LCP array.[8] The SA-IS algorithm is one of the fastest known suffix array construction algorithms; a careful implementation by Yuta Mori outperforms most other linear or super-linear construction approaches.
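A classic strategy that exploits the relatedness of suffixes is prefix doubling in the style of Karp, Miller & Rosenberg: sort suffixes by their first ${\displaystyle k}$ characters, encode that order as integer ranks, then sort by rank pairs to double ${\displaystyle k}$ each round. A minimal Python sketch (function name and structure are illustrative, not a reference implementation); with Python's built-in sort it runs in ${\displaystyle {\mathcal {O}}(n\log ^{2}n)}$:

```python
def prefix_doubling_sa(s: str) -> list[int]:
    """Suffix array (0-based positions) via prefix doubling.

    rank[i] is the rank of suffix i when suffixes are compared by their
    first k characters; each round doubles k by sorting the pairs
    (rank[i], rank[i + k]).
    """
    n = len(s)
    sa = list(range(n))
    rank = [ord(c) for c in s]  # round 0: rank by single character
    k = 1
    while k < n:
        # Sort by (rank of first k chars, rank of the following k chars).
        key = lambda i: (rank[i], rank[i + k] if i + k < n else -1)
        sa.sort(key=key)
        # Re-rank: suffixes with equal key pairs keep equal ranks.
        new_rank = [0] * n
        for j in range(1, n):
            new_rank[sa[j]] = new_rank[sa[j - 1]] + (key(sa[j]) != key(sa[j - 1]))
        rank = new_rank
        k *= 2
        if rank[sa[-1]] == n - 1:  # all ranks distinct: order is final
            break
    return sa
```

Reusing the previous round's ranks is what makes each comparison constant-time, instead of rescanning up to ${\displaystyle k}$ characters.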
Beside time and space requirements, suffix array construction algorithms are also differentiated by their supported alphabet: constant alphabets, where the alphabet size is bounded by a constant; integer alphabets, where characters are integers in a range depending on ${\displaystyle n}$; and general alphabets, where only character comparisons are allowed.[9]

Most suffix array construction algorithms are based on one of the following approaches:[7]

- Prefix doubling algorithms are based on a strategy of Karp, Miller & Rosenberg (1972). The idea is to find prefixes that honor the lexicographic ordering of suffixes. The assessed prefix length doubles in each iteration of the algorithm until a prefix is unique and provides the rank of the associated suffix.
- Recursive algorithms follow the approach of the suffix tree construction algorithm by Farach (1997): recursively sort a subset of suffixes, use this subset to infer a suffix array of the remaining suffixes, and merge the two suffix arrays to compute the final suffix array.
- Induced copying algorithms are similar to recursive algorithms in that they use an already sorted subset to induce a fast sort of the remaining suffixes. The difference is that these algorithms favor iteration over recursion to sort the selected suffix subset.

A survey of this diverse group of algorithms has been put together by Puglisi, Smyth & Turpin (2007). A well-known recursive algorithm for integer alphabets is the DC3 / skew algorithm of Kärkkäinen & Sanders (2003). It runs in linear time and has successfully been used as the basis for parallel[10] and external-memory[11] suffix array construction algorithms.

Recent work by Salson et al. (2010) proposes an algorithm for updating the suffix array of a text that has been edited, instead of rebuilding a new suffix array from scratch.
Even if the theoretical worst-case time complexity is ${\displaystyle {\mathcal {O}}(n\log n)}$, it appears to perform well in practice: experimental results from the authors showed that their implementation of dynamic suffix arrays is generally more efficient than rebuilding when considering the insertion of a reasonable number of letters into the original text.

In practical open source work, a commonly used routine for suffix array construction was qsufsort, based on the 1999 Larsson-Sadakane algorithm.[12] This routine has been superseded by Yuta Mori's DivSufSort, "the fastest known suffix sorting algorithm in main memory" as of 2017. It too can be modified to compute an LCP array. It uses induced copying combined with Itoh-Tanaka.[13]

## Applications

The suffix array of a string can be used as an index to quickly locate every occurrence of a substring pattern ${\displaystyle P}$ within the string ${\displaystyle S}$. Finding every occurrence of the pattern is equivalent to finding every suffix that begins with the substring. Thanks to the lexicographical ordering, these suffixes will be grouped together in the suffix array and can be found efficiently with two binary searches. The first search locates the starting position of the interval, and the second one determines the end position:[citation needed]

```python
from typing import Tuple

# S is the text to be searched, A its suffix array (0-based starting positions).

def suffixAt(i: int) -> str:
    """Return the suffix of S starting at position i."""
    return S[i:]

def search(P: str) -> Tuple[int, int]:
    """
    Return indices (s, r) such that the half-open interval A[s:r]
    (excluding the end index) represents all suffixes of S that start
    with the pattern P.
    """
    n = len(S)
    # Find starting position of interval
    l = 0  # in Python, arrays are indexed starting at 0
    r = n
    while l < r:
        mid = (l + r) // 2  # division rounding down to nearest integer
        # suffixAt(A[i]) is the i-th smallest suffix
        if P > suffixAt(A[mid]):
            l = mid + 1
        else:
            r = mid
    s = l

    # Find ending position of interval
    r = n
    while l < r:
        mid = (l + r) // 2
        if suffixAt(A[mid]).startswith(P):
            l = mid + 1
        else:
            r = mid

    return (s, r)
```

Finding the substring pattern ${\displaystyle P}$ of length ${\displaystyle m}$ in the string ${\displaystyle S}$ of length ${\displaystyle n}$ takes ${\displaystyle {\mathcal {O}}(m\log n)}$ time, given that a single suffix comparison needs to compare ${\displaystyle m}$ characters. Manber & Myers (1990) describe how this bound can be improved to ${\displaystyle {\mathcal {O}}(m+\log n)}$ time using LCP information. The idea is that a pattern comparison does not need to re-compare certain characters when it is already known that these are part of the longest common prefix of the pattern and the current search interval. Abouelhoda, Kurtz & Ohlebusch (2004) improve the bound even further and achieve a search time of ${\displaystyle {\mathcal {O}}(m)}$, as known from suffix trees.

Suffix sorting algorithms can be used to compute the Burrows–Wheeler transform (BWT). The BWT requires sorting of all cyclic permutations of a string. If this string ends in a special end-of-string character that is lexicographically smaller than all other characters (i.e., $), then the order of the sorted rotations in the BWT matrix corresponds to the order of suffixes in a suffix array. The BWT can therefore be computed in linear time by first constructing a suffix array of the text and then deducing the BWT string: ${\displaystyle BWT[i]=S[A[i]-1]}$.

Suffix arrays can also be used to look up substrings in example-based machine translation, demanding much less storage than a full phrase table as used in statistical machine translation.

Many additional applications of the suffix array require the LCP array. Some of these are detailed in the application section of that article.

## Notes

1. ^ Abouelhoda, Mohamed Ibrahim; Kurtz, Stefan; Ohlebusch, Enno (March 2004). "Replacing suffix trees with enhanced suffix arrays". Journal of Discrete Algorithms. 2 (1): 53–86. doi:10.1016/s1570-8667(03)00065-0. ISSN 1570-8667.
2. ^ Kärkkäinen, Juha; Ukkonen, Esko (1996), "Sparse suffix trees", Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. 219–230, doi:10.1007/3-540-61332-3_155, ISBN 9783540613329
3. ^ Gawrychowski, Paweł; Kociumaka, Tomasz (January 2017). "Sparse Suffix Tree Construction in Optimal Time and Space". Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms. Philadelphia, PA: Society for Industrial and Applied Mathematics: 425–439. arXiv:1608.00865. doi:10.1137/1.9781611974782.27. ISBN 9781611974782.
4. ^ Abouelhoda, Kurtz & Ohlebusch 2004.
5. ^
6. ^
7. ^ a b
8. ^
9. ^
10. ^
11. ^
12. ^ Larsson, N. Jesper; Sadakane, Kunihiko (22 November 2007). "Faster suffix sorting". Theoretical Computer Science. 387 (3): 258–272. doi:10.1016/j.tcs.2007.07.017. ISSN 0304-3975.
13. ^ Fischer, Johannes; Kurpicz, Florian (5 October 2017). "Dismantling DivSufSort". Proceedings of the Prague Stringology Conference 2017. arXiv:1710.01896.