Bit-reversal permutation

From Wikipedia, the free encyclopedia
Figure: A Hammersley set whose coordinates are the integers from 0 to 255 and their bit-reversals

In applied mathematics, a bit-reversal permutation is a permutation of a sequence of n items, where n = 2^k is a power of two. It is defined by indexing the elements of the sequence by the numbers from 0 to n − 1, representing each of these numbers by its binary representation (padded to have length exactly k), and mapping each item to the item whose representation has the same bits in the reversed order.

Repeating the same permutation twice returns to the original ordering on the items, so the bit-reversal permutation is an involution.

This permutation can be applied to any sequence in linear time while performing only simple index calculations. It has applications in the generation of low-discrepancy sequences and in the evaluation of fast Fourier transforms.


Example

Consider the sequence of eight letters abcdefgh. Their indexes are the binary numbers 000, 001, 010, 011, 100, 101, 110, and 111, which when reversed become 000, 100, 010, 110, 001, 101, 011, and 111. Thus, the letter a in position 000 is mapped to the same position (000), the letter b in position 001 is mapped to the fifth position (the one numbered 100), etc., giving the new sequence aecgbfdh. Repeating the same permutation on this new sequence returns to the starting sequence.
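The mapping described above can be sketched in a few lines of Python. The function name `bit_reverse` is illustrative, not from the source; it reverses the k low-order bits of an index by shifting them out of one integer and into another.

```python
def bit_reverse(i, k):
    """Reverse the k low-order bits of the integer i."""
    r = 0
    for _ in range(k):
        r = (r << 1) | (i & 1)  # append the lowest bit of i to r
        i >>= 1
    return r

seq = "abcdefgh"  # n = 8 items, so each index has k = 3 bits
k = 3
permuted = "".join(seq[bit_reverse(i, k)] for i in range(len(seq)))
# permuted is "aecgbfdh"; applying the same permutation again
# restores "abcdefgh", since the permutation is an involution
```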

Writing the index numbers in decimal (but, as above, starting with position 0 rather than the more conventional start of 1 for a permutation), the bit-reversal permutations on n = 2^k items, for k = 0, 1, 2, 3, 4, are:[1]

k   n    permutation
0   1    0
1   2    0 1
2   4    0 2 1 3
3   8    0 4 2 6 1 5 3 7
4   16   0 8 4 12 2 10 6 14 1 9 5 13 3 11 7 15

Each permutation in this sequence can be generated by concatenating two sequences of numbers: the previous permutation, with its values doubled, and the same sequence with each value increased by one. Thus, for example doubling the length-4 permutation 0 2 1 3 gives 0 4 2 6, adding one gives 1 5 3 7, and concatenating these two sequences gives the length-8 permutation 0 4 2 6 1 5 3 7.[2]
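The doubling-and-concatenation construction translates directly into code. The following sketch (the function name is illustrative) builds the permutation on 2^k items iteratively:

```python
def bit_reversal_permutation(k):
    """Build the bit-reversal permutation on 2**k items by repeatedly
    doubling the previous permutation and concatenating it with the
    doubled values plus one."""
    perm = [0]
    for _ in range(k):
        doubled = [2 * x for x in perm]
        perm = doubled + [x + 1 for x in doubled]
    return perm

# bit_reversal_permutation(3) gives [0, 4, 2, 6, 1, 5, 3, 7],
# matching the length-8 row of the table above
```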


Generalizations

The generalization to radix-b representations, for b > 2, and to n = b^k, is a digit-reversal permutation, in which the base-b digits of the index of each element are reversed to obtain the permuted index. The same idea can also be generalized to mixed-radix number systems. In such cases, the digit-reversal permutation should simultaneously reverse the digits of each item and the bases of the number system, so that each reversed digit remains within the range defined by its base.[3]
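A minimal sketch of digit reversal in a fixed base b (the function name is an illustrative assumption) repeatedly peels off the low-order base-b digit and appends it to the result:

```python
def digit_reverse(i, b, k):
    """Reverse the k base-b digits of the integer i."""
    r = 0
    for _ in range(k):
        i, d = divmod(i, b)  # d is the current lowest base-b digit
        r = r * b + d        # append it to the reversed number
    return r

# In base 3 with two digits, index 5 = (12)_3 reverses to (21)_3 = 7.
```

As in the binary case, applying the permutation twice restores the original order, since reversing the digits is an involution.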

Permutations that generalize the bit-reversal permutation by reversing contiguous blocks of bits within the binary representations of their indices can be used to interleave two equal-length sequences of data in-place.[4]

There are two extensions of the bit-reversal permutation to sequences of arbitrary length. These extensions coincide with bit-reversal for sequences whose length is a power of 2, and their purpose is to separate adjacent items in a sequence for the efficient operation of the Kaczmarz algorithm. The first of these extensions, called efficient ordering,[5] operates on composite numbers, and it is based on decomposing the number into its prime components.

The second extension, called EBR (extended bit-reversal), is similar in spirit to bit-reversal. Given an array of size n, EBR fills the array with a permutation of the numbers in the range 0, …, n − 1 in linear time. Successive numbers are separated in the permutation by at least ⌊n/4⌋ positions.[6]


Applications

Bit reversal is most important for radix-2 Cooley–Tukey FFT algorithms, where the recursive stages of the algorithm, operating in-place, imply a bit reversal of the inputs or outputs. Similarly, mixed-radix digit reversals arise in mixed-radix Cooley–Tukey FFTs.[7]
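The role of bit reversal in an iterative radix-2 FFT can be sketched as follows: the inputs are first permuted into bit-reversed order, after which the butterfly stages operate in place. This is a minimal illustration, not an optimized implementation; the function name and structure are assumptions for the sketch.

```python
import cmath

def fft_iterative(a):
    """Iterative radix-2 Cooley-Tukey FFT.  The bit-reversal permutation
    is applied first so that the butterfly stages can run in place."""
    a = list(a)
    n = len(a)                 # assumed to be a power of two
    k = n.bit_length() - 1
    # in-place bit-reversal reordering: swap each index with its reversal
    for i in range(n):
        j = int(format(i, "b").zfill(k)[::-1], 2) if k else 0
        if i < j:
            a[i], a[j] = a[j], a[i]
    # butterfly stages of increasing size
    m = 2
    while m <= n:
        w_m = cmath.exp(-2j * cmath.pi / m)  # principal m-th root of unity
        for start in range(0, n, m):
            w = 1.0
            for j in range(m // 2):
                t = w * a[start + j + m // 2]
                u = a[start + j]
                a[start + j] = u + t
                a[start + j + m // 2] = u - t
                w *= w_m
        m *= 2
    return a
```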

The bit reversal permutation has also been used to devise lower bounds in distributed computation.[8]

The Van der Corput sequence, a low-discrepancy sequence of numbers in the unit interval, is formed by reinterpreting the indexes of the bit-reversal permutation as the fixed-point binary representations of dyadic rational numbers.
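This reinterpretation can be sketched directly: reflecting the bits of the index i across the binary point yields the i-th element of the base-2 van der Corput sequence (the function name is illustrative).

```python
def van_der_corput(i):
    """i-th element of the base-2 van der Corput sequence, obtained by
    reversing the bits of i across the binary point."""
    x, denom = 0.0, 1.0
    while i:
        denom *= 2
        x += (i & 1) / denom  # low-order bits become high-order fractions
        i >>= 1
    return x

# The first eight values are 0, 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8 --
# the bit-reversal permutation 0 4 2 6 1 5 3 7 divided by 8.
```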

Bit-reversal permutations are often used in finding lower bounds on dynamic data structures. For example, subject to certain assumptions, the cost of looking up the integers between 0 and n − 1, inclusive, in any binary search tree holding those values, is Ω(n log n) when those numbers are queried in bit-reversed order. This bound applies even to trees like splay trees that are allowed to rearrange their nodes between accesses.[9]


Algorithms

Mainly because of the importance of fast Fourier transform algorithms, numerous efficient algorithms for applying a bit-reversal permutation to a sequence have been devised.[2]

Because the bit-reversal permutation is an involution, it may be performed easily in place (without copying the data into another array) by swapping pairs of elements. In the random-access machine commonly used in algorithm analysis, a simple algorithm that scans the indexes in input order and swaps whenever the scan encounters an index whose reversal is a larger number would perform a linear number of data moves.[10] However, computing the reversal of each index may take a non-constant number of steps. Alternative algorithms can perform a bit reversal permutation in linear time while using only simple index calculations.[11]

Because bit-reversal permutations may be repeated multiple times as part of a calculation, it may be helpful to separate out the steps of the algorithm that calculate index data used to represent the permutation (for instance, by using the doubling and concatenation method) from the steps that use the results of this calculation to permute the data (for instance, by scanning the data indexes in order and performing a swap whenever the swapped location is greater than the current index, or by using more sophisticated vector scatter–gather operations).[2]
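The in-place scan-and-swap idea can be sketched as follows. Rather than recomputing the full reversal at every step, this version maintains the reversed index incrementally: incrementing i by one corresponds to adding one to its reversal "from the left", which takes constant amortized time per index. This is a classical trick found in many FFT implementations, presented here as an illustration rather than any particular source's algorithm.

```python
def apply_bit_reversal_in_place(a):
    """Permute list a (whose length is a power of two) into bit-reversed
    order, swapping each index with its reversal exactly once."""
    n = len(a)
    j = 0  # j is maintained as the bit-reversal of i
    for i in range(1, n):
        # increment j "from the left": clear leading 1-bits, set the next 0
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:  # swap each pair only once
            a[i], a[j] = a[j], a[i]

data = list("abcdefgh")
apply_bit_reversal_in_place(data)
# data is now ['a', 'e', 'c', 'g', 'b', 'f', 'd', 'h']
```

The `i < j` test is what makes the involution property useful: each two-element cycle of the permutation is visited from its smaller index only, so no element is swapped twice.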

Another consideration that is even more important for the performance of these algorithms is the effect of the memory hierarchy on running time. Because of this effect, more sophisticated algorithms that consider the block structure of memory can be faster than this naive scan.[2][10] An alternative to these techniques is special computer hardware that allows memory to be accessed both in normal and in bit-reversed order.[12]

Improving the performance of bit reversal on both uniprocessors and multiprocessors has received serious attention in high-performance computing, because architecture-aware algorithm development can best utilize hardware and system software resources, including caches, TLBs, and multicore processors, significantly accelerating the computation.[13]


References

  1. ^ Sloane, N. J. A. (ed.), "Sequence A030109", The On-Line Encyclopedia of Integer Sequences, OEIS Foundation
  2. ^ a b c d Karp, Alan H. (1996), "Bit reversal on uniprocessors", SIAM Review, 38 (1): 1–26, CiteSeerX, doi:10.1137/1038001, MR 1379039. Karp surveys and compares 30 different algorithms for bit reversal, developed between 1965 and the 1996 publication of his survey.
  3. ^ Elster, Anne C. (1989), "Fast bit-reversal algorithms", IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP '89, Glasgow, Scotland, May 23-26, 1989, pp. 1099–1102, doi:10.1109/ICASSP.1989.266624, S2CID 15028026
  4. ^ Yang, Qingxuan; Ellis, John; Mamakani, Khalegh; Ruskey, Frank (2013), "In-place permuting and perfect shuffling using involutions", Information Processing Letters, 113 (10–11): 386–391, arXiv:1204.1958, doi:10.1016/j.ipl.2013.02.017, MR 3037467, S2CID 14672841.
  5. ^ Herman, Gabor T. (2009), Fundamentals of Computerized Tomography (2nd ed.), London: Springer, p. 209, ISBN 978-1-85233-617-2
  6. ^ Gordon, Dan (June 2017), "A derandomization approach to recovering bandlimited signals across a wide range of random sampling rates", Numerical Algorithms, 77 (4): 1141–1157, doi:10.1007/s11075-017-0356-3, S2CID 254889989
  7. ^ B. Gold and C. M. Rader, Digital Processing of Signals (New York: McGraw–Hill, 1969).
  8. ^ Frederickson, Greg N.; Lynch, Nancy A. (1984), "The impact of synchronous communication on the problem of electing a leader in a ring" (PDF), Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing (STOC '84), pp. 493–503, doi:10.1145/800057.808719, ISBN 978-0897911337.
  9. ^ Wilber, Robert (1989), "Lower Bounds for Accessing Binary Search Trees With Rotations", 27th Annual Symposium on Foundations of Computer Science (sfcs 1986), pp. 61–70, doi:10.1109/SFCS.1986.28, ISBN 0-8186-0740-8.
  10. ^ a b Carter, Larry; Gatlin, Kang Su (1998), "Towards an optimal bit-reversal permutation program", Proceedings of the 39th Annual Symposium on Foundations of Computer Science (FOCS), pp. 544–553, CiteSeerX, doi:10.1109/SFCS.1998.743505, ISBN 978-0-8186-9172-0, S2CID 14307262.
  11. ^ Jeong, Jechang; Williams, W.J. (1990), "A fast recursive bit-reversal algorithm", International Conference on Acoustics, Speech, and Signal Processing (ICASSP-90), vol. 3, pp. 1511–1514, doi:10.1109/ICASSP.1990.115695, S2CID 122373780.
  12. ^ Harley, T. R.; Maheshwaramurthy, G. P. (2004), "Address generators for mapping arrays in bit-reversed order", IEEE Transactions on Signal Processing, 52 (6): 1693–1703, doi:10.1109/TSP.2004.827148, S2CID 10043478.
  13. ^ Zhang, Zhao; Zhang, Xiaodong (2000), "Fast bit-reversals on uniprocessors and shared-memory multiprocessors", SIAM Journal on Scientific Computing, 22 (6): 2113–2134, doi:10.1137/S1064827599359709, MR 1856305