Integer sorting

From Wikipedia, the free encyclopedia

In computer science, integer sorting is the algorithmic problem of sorting a collection of data values by numeric keys, each of which is an integer. Algorithms designed for integer sorting may also often be applied to sorting problems in which the keys are floating point numbers or text strings.[1] The ability to perform integer arithmetic on the keys allows integer sorting algorithms to be faster than comparison sorting algorithms in many cases, depending on the details of which operations are allowed in the model of computing and how large the integers to be sorted are.

The classical integer sorting algorithms of bucket sort, counting sort, and radix sort are widely used and practical.[2] Much of the subsequent research on integer sorting algorithms has focused less on practicality and more on theoretical improvements in their worst case analysis, and the algorithms that come from this line of research are not believed to be practical for current 64-bit computer architectures, although experiments have shown that some of these methods may be an improvement on radix sorting for data with 128 or more bits per key.[3] Additionally, for large data sets, the near-random memory access patterns of many integer sorting algorithms can handicap them compared to comparison sorting algorithms that have been designed with the memory hierarchy in mind.[4]

Integer sorting provides one of the six benchmarks in the DARPA High Productivity Computing Systems Discrete Mathematics benchmark suite,[5] and one of eleven benchmarks in the NAS Parallel Benchmarks suite.

Models of computation

Time bounds for integer sorting algorithms typically depend on three parameters: the number n of data values to be sorted, the magnitude K of the largest possible key to be sorted, and the number w of bits that can be represented in a single machine word of the computer on which the algorithm is to be performed. Typically, it is assumed that w ≥ log2(max(n, K)); that is, that machine words are large enough to represent an index into the sequence of input data and that they are large enough to represent a single key.[6]

Integer sorting algorithms are usually designed to work in either the pointer machine or random access machine models of computing; the main difference between these two models is that the random access machine allows any value that is stored in a register to be used as the address of memory read and write operations, with unit cost per operation, allowing certain complex operations on data to be implemented quickly using table lookups. In contrast, the pointer machine model allows memory access only via pointers to memory that may not be manipulated using arithmetic operations. In both models, addition, bitwise Boolean operations, and binary shift operations may typically also be accomplished in unit time per operation. Different integer sorting algorithms make different assumptions, however, about whether integer multiplication is also allowed as a unit-time operation.[7] Other more specialized models of computation such as the parallel random access machine have also been considered.[8]

Andersson, Miltersen & Thorup (1999) showed that in some cases the multiplications or table lookups required by some integer sorting algorithms could be replaced by customized operations that would be more easily implemented in hardware but that are not typically available on general-purpose computers; Thorup (2003) improved this by showing how to replace these operations by the bit-field manipulation instructions available on Pentium processors.

Sorting versus integer priority queues

It is possible to base a selection sort algorithm on any priority queue data structure that allows one to maintain sets of n items with keys in the range from 0 to K − 1, subject to operations that find and remove the item with the minimum key. Simply create a priority queue containing all of the items, and then repeatedly apply the delete-min operation until the queue is empty; the sequence in which items are deleted is the sorted sequence of the items. The time for this algorithm is the time for creating the queue, plus the time for n delete-min operations. For instance, among comparison sort algorithms, heap sort has this form. However, there are also algorithms of this type that are faster for integer keys, based on priority queue data structures that are specialized to integers.
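With a binary heap as the priority queue, this scheme is exactly heap sort; a minimal sketch (the integer-specialized queues discussed below would plug into the same loop):

```python
import heapq

def pq_sort(items):
    """Sort by building a priority queue and repeatedly applying delete-min.

    Here the queue is a comparison-based binary heap, so this is heap sort;
    substituting an integer-specialized queue gives the faster bounds."""
    heap = list(items)
    heapq.heapify(heap)                   # create the queue from all items
    out = []
    while heap:
        out.append(heapq.heappop(heap))   # repeated delete-min
    return out
```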

In particular, a van Emde Boas tree may be used as a priority queue to sort a set of n keys, each in the range from 0 to K − 1, in time O(n log log K). This is a theoretical improvement over radix sorting when K is sufficiently large. However, in order to use a van Emde Boas tree, one either needs a directly addressable memory of K words, or one needs to simulate it using a hash table, reducing the space to linear but making the algorithm randomized. Another priority queue with similar performance (including the need for randomization in the form of hash tables) is the Y-fast trie of Willard (1983).

Thorup (2007) showed that the equivalence between priority queues and sorting also goes in the other direction: if it is possible to perform integer sorting in time T(n) per key, then the same time bound applies to the time per insertion or deletion operation in a priority queue data structure. Thorup's reduction is complicated and assumes the availability of either fast multiplication operations or table lookups, but he also provides an alternative priority queue using only addition and Boolean operations with time T(n) + T(log n) + T(log log n) + ... per operation, at most multiplying the time by an iterated logarithm.

Algorithms for small keys

It has long been known that bucket sort or counting sort can sort a set of n keys, each in the range from 0 to K − 1, in time O(n + K). In bucket sort, pointers to the data items are distributed to a table of "buckets" (represented as collection data types such as linked lists) using the keys as indices into the table. Then, all of the buckets are concatenated together to form the output list.[9] In counting sort, the buckets are replaced by counters that determine the number of items with each value; then, a prefix sum computation is used to determine the subarray, within an output array, where the items with each value should be placed.[10]
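A minimal counting sort sketch following the description above: one counting pass, a prefix-sum pass to find each key's subarray in the output, and a stable placement pass.

```python
def counting_sort(items, key, K):
    """Stable counting sort of items whose key(item) lies in range(K).

    Runs in O(n + K) time: count each key, take prefix sums to find where
    each key's subarray begins in the output, then place items in order."""
    count = [0] * K
    for x in items:
        count[key(x)] += 1
    total = 0
    for k in range(K):                 # prefix sums: starting positions
        count[k], total = total, total + count[k]
    out = [None] * len(items)
    for x in items:                    # stable: equal keys keep their order
        out[count[key(x)]] = x
        count[key(x)] += 1
    return out
```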

Radix sort is a general technique in which some other sorting algorithm that is suited only for small keys is repeatedly applied, allowing the algorithm to be extended to larger keys. The key used for the ith application of the other sorting algorithm is the ith digit in the positional notation for the full key, according to some specified radix, starting from the least significant digit and progressing to the most significant. For this algorithm to work, the base algorithm must be stable: items with equal keys should not change positions with each other. Using radix sort, with a radix chosen to be proportional to n and with bucket sort or counting sort as the base algorithm, it is possible to sort a set of n keys, each in the range from 0 to K − 1, in time O(n log K/log n). Using a power of two as the radix allows the keys for each application of the base algorithm to be constructed using only fast binary shift and mask operations.[11]
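A sketch of least-significant-digit radix sort with a power-of-two radix: each digit is extracted with a shift and a mask, and each per-digit pass is a stable counting sort (the digit width of 8 bits here is an arbitrary illustrative choice).

```python
def radix_sort(keys, K, bits=8):
    """Sort integer keys in range(K) by LSD radix sort with radix 2**bits.

    Each pass is a stable counting sort on one digit, extracted with a
    binary shift and mask; passes continue until all digits are exhausted."""
    radix = 1 << bits
    mask = radix - 1
    shift = 0
    while K > (1 << shift):            # more nonzero digits remain
        count = [0] * radix
        for x in keys:
            count[(x >> shift) & mask] += 1
        total = 0
        for d in range(radix):         # prefix sums: output positions
            count[d], total = total, total + count[d]
        out = [0] * len(keys)
        for x in keys:                 # stable placement within this digit
            d = (x >> shift) & mask
            out[count[d]] = x
            count[d] += 1
        keys = out
        shift += bits
    return keys
```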

A more sophisticated technique with a similar flavor and with better theoretical performance was developed by Kirkpatrick & Reisch (1984). They observe that each pass of radix sort can be interpreted as a "range reduction" technique that, in linear time, reduces the maximum key size by a factor of n; instead, their technique reduces the key size to the square root of its previous value (halving the number of bits needed to represent a key), again in linear time. As in radix sort, they interpret the keys as two-digit base-b numbers for a base b that is approximately √K. They then group the items to be sorted into buckets according to their high digits, in linear time, using either a large but uninitialized direct addressed memory or a hash table. Each bucket has a representative, the item in the bucket with the largest key; they then sort the list of items using as keys the high digits for the representatives and the low digits for the non-representatives. By grouping the items from this list into buckets again, each bucket may be placed into sorted order, and by extracting the representatives from the sorted list the buckets may be concatenated together into sorted order. Thus, in linear time, the sorting problem is reduced to another recursive sorting problem in which the keys are much smaller, the square root of their previous magnitude. Repeating this range reduction until the keys are small enough to bucket sort leads to an algorithm with running time O(n log(log K/log n)).
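One level of this range reduction can be sketched as follows. This is an illustrative sketch only: it assumes distinct keys for simplicity, and it substitutes Python's built-in stable sort for the recursive call on the reduced keys (the point being that every reduced key is at most about √K).

```python
def kr_reduce_and_sort(keys, K):
    """One level of Kirkpatrick–Reisch range reduction (illustrative sketch,
    assuming distinct keys). Keys in range(K) are viewed as two-digit
    base-b numbers with b approximately sqrt(K)."""
    b = 1 << (((K - 1).bit_length() + 1) // 2)   # power of two near sqrt(K)
    buckets = {}
    for x in keys:
        buckets.setdefault(x // b, []).append(x)
    # Each bucket's representative is its largest key. Representatives are
    # keyed by their high digit, all other items by their low digit, so
    # every reduced key is below about sqrt(K).
    pairs = []
    for items in buckets.values():
        rep = max(items)
        for x in items:
            pairs.append((x // b if x == rep else x % b, x, x == rep))
    # The recursive sort on the much smaller reduced keys is replaced here
    # by Python's stable sort; in the real algorithm this is the recursion.
    pairs.sort(key=lambda t: t[0])
    # Regroup by high digit: within a bucket the non-representatives now
    # appear in sorted order (they share a high digit, so low-digit order
    # is full order), and the representative, being the maximum, goes last.
    regrouped, rep_order = {}, []
    for _, x, is_rep in pairs:
        regrouped.setdefault(x // b, []).append(x)
        if is_rep:
            rep_order.append(x // b)
    out = []
    for hi in rep_order:               # buckets concatenated in sorted order
        items = regrouped[hi]
        rep = max(items)
        out.extend(x for x in items if x != rep)
        out.append(rep)
    return out
```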

A complicated randomized algorithm of Han & Thorup (2002) allows these time bounds to be reduced even further, to O(n √(log log K)).

Algorithms for large words

An integer sorting algorithm is said to be non-conservative if it requires a word size w that is significantly larger than log max(n, K).[12] As an extreme instance, if w ≥ K, and all keys are distinct, then the set of keys may be sorted by representing it as a bitvector, with a 1 bit in position i when i is one of the input keys, and then repeatedly removing the least significant bit.[13]
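This bitvector approach is simple enough to sketch directly; here Python's arbitrary-precision integers stand in for the very wide machine word.

```python
def bitvector_sort(keys):
    """Sort distinct non-negative integers by packing them into a single
    bitvector and repeatedly stripping the least significant set bit.

    A Python int plays the role of a machine word with w >= K bits."""
    v = 0
    for k in keys:
        v |= 1 << k                    # set bit i when i is an input key
    out = []
    while v:
        low = v & -v                   # isolate the least significant 1 bit
        out.append(low.bit_length() - 1)
        v ^= low                       # remove that bit
    return out
```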

The non-conservative "packed sorting" algorithm of Albers & Hagerup (1997) uses a subroutine, based on Ken Batcher's bitonic sorting network, for merging two sorted sequences of keys that are each short enough to be packed into a single machine word. The input to the packed sorting algorithm, a sequence of items stored one per word, is transformed into a packed form, a sequence of words each holding multiple items in sorted order, by using this subroutine repeatedly to double the number of items packed into each word. Once the sequence is in packed form, Albers and Hagerup use a form of merge sort to sort it; when two sequences are being merged to form a single longer sequence, the same bitonic sorting subroutine can be used to repeatedly extract packed words consisting of the smallest remaining elements of the two sequences. This algorithm gains enough of a speedup from its packed representation to sort its input in linear time whenever it is possible for a single word to contain Ω(log n log log n) keys; that is, when log K log n log log n ≤ cw for some constant c > 0.

Algorithms for few items

Bucket sort, counting sort, radix sort, and van Emde Boas tree sorting all work best when the key size is small; for large enough keys, they become slower than comparison sorting algorithms. However, when the key size or the word size is very large relative to the number of items (or equivalently when the number of items is small), it may again become possible to sort quickly, using different algorithms that take advantage of the parallelism inherent in the ability to perform arithmetic operations on large words.

An early result in this direction was provided by Ajtai, Fredman & Komlós (1984) using the cell probe model of computation (an artificial model in which the complexity of an algorithm is measured only by the number of memory accesses it performs). Building on their work, Fredman & Willard (1994) described two data structures, the Q-heap and the atomic heap, that are implementable on a random access machine. The Q-heap is a bit-parallel version of a binary trie, and allows both priority queue operations and successor and predecessor queries to be performed in constant time for sets of O((log N)^(1/4)) items, where N ≤ 2^w is the size of the precomputed tables needed to implement the data structure. The atomic heap is a B-tree in which each tree node is represented as a Q-heap; it allows constant time priority queue operations (and therefore sorting) for sets of (log N)^O(1) items.

Andersson et al. (1998) provide a randomized algorithm called signature sort that allows for linear time sorting of sets of up to 2^O((log w)^(1/2 − ε)) items at a time, for any constant ε > 0. As in the algorithm of Kirkpatrick and Reisch, they perform range reduction using a representation of the keys as numbers in base b for a careful choice of b. Their range reduction algorithm replaces each digit by a "signature", a hashed value with O(log n) bits such that different digit values have different signatures. If n is sufficiently small, the numbers formed by this replacement process will be significantly smaller than the original keys, allowing the non-conservative packed sorting algorithm of Albers & Hagerup (1997) to sort the replaced numbers in linear time. From the sorted list of replaced numbers, it is possible to form a compressed trie of the keys in linear time, and the children of each node in the trie may be sorted recursively using only keys of size b, after which a tree traversal produces the sorted order of the items.

Trans-dichotomous algorithms

Fredman & Willard (1993) introduced the transdichotomous model of analysis for integer sorting algorithms, in which nothing is assumed about the range of the integer keys and one must bound the algorithm's performance by a function of the number of data values alone. Alternatively, in this model, the running time for an algorithm on a set of n items is assumed to be the worst case running time for any possible combination of values of K and w. For instance, Fredman and Willard's fusion tree sorting algorithm runs in time O(n log n / log log n), an improvement over comparison sorting for any choice of K and w. An alternative version of their algorithm that includes the use of random numbers and integer division operations improves this to O(n √(log n)).

Since their work, even better algorithms have been developed. For instance, by repeatedly applying the Kirkpatrick–Reisch range reduction technique until the keys are small enough to apply the Albers–Hagerup packed sorting algorithm, it is possible to sort in time O(n log log n); however, the range reduction part of this algorithm requires either a large memory (proportional to K) or randomization in the form of hash tables.[14]

Han & Thorup (2002) showed how to sort in randomized time O(n √(log log n)). Their technique involves using ideas related to signature sorting to partition the data into many small sublists, of a size small enough that signature sorting can sort each of them efficiently. It is also possible to use similar ideas to sort integers deterministically in time O(n log log n) and linear space.[15] Using only simple arithmetic operations (no multiplications or table lookups) it is possible to sort in randomized expected time O(n log log n)[16] or deterministically in time O(n (log log n)^(1 + ε)) for any constant ε > 0.[1]

Notes

  1. ^ a b Han & Thorup (2002).
  2. ^ McIlroy, Bostic & McIlroy (1993); Andersson & Nilsson (1998).
  3. ^ Rahman & Raman (1998).
  4. ^ Pedersen (1999).
  5. ^ DARPA HPCS Discrete Mathematics Benchmarks, Duncan A. Buell, University of South Carolina, retrieved 2011-04-20.
  6. ^ Fredman & Willard (1993).
  7. ^ The question of whether integer multiplication or table lookup operations should be permitted goes back to Fredman & Willard (1993); see also Andersson, Miltersen & Thorup (1999).
  8. ^ Reif (1985); comment in Cole & Vishkin (1986); Hagerup (1987); Bhatt et al. (1991); Albers & Hagerup (1997).
  9. ^ Goodrich & Tamassia (2002). Note that, although Cormen et al. (2001) also describe a version of bucket sort, the version they describe is adapted to inputs where the keys are real numbers with a known distribution, rather than integer sorting.
  10. ^ Cormen et al. (2001), 8.2 Counting Sort, pp. 168–169.
  11. ^ Comrie (1929–1930); Cormen et al. (2001), 8.3 Radix Sort, pp. 170–173.
  12. ^ Kirkpatrick & Reisch (1984); Albers & Hagerup (1997).
  13. ^ Kirkpatrick & Reisch (1984).
  14. ^ Andersson et al. (1998).
  15. ^ Han (2004).
  16. ^ Thorup (2002).

References