# Merge-insertion sort

In computer science, merge-insertion sort or the Ford–Johnson algorithm is a comparison sorting algorithm published in 1959 by L. R. Ford Jr. and Selmer M. Johnson.[1][2][3][4] It uses fewer comparisons in the worst case than the best previously known algorithms, binary insertion sort and merge sort,[1] and for 20 years it was the sorting algorithm with the fewest known comparisons.[5] Although not of practical significance, it remains of theoretical interest in connection with the problem of sorting with a minimum number of comparisons.[3] The same algorithm may have also been independently discovered by Stanisław Trybuła and Czen Ping.[4]

## Algorithm

Merge-insertion sort performs the following steps, on an input ${\displaystyle X}$ of ${\displaystyle n}$ elements:[6]

1. Group the elements of ${\displaystyle X}$ into ${\displaystyle \lfloor n/2\rfloor }$ pairs of elements, arbitrarily, leaving one element unpaired if there is an odd number of elements.
2. Perform ${\displaystyle \lfloor n/2\rfloor }$ comparisons, one per pair, to determine the larger of the two elements in each pair.
3. Recursively sort the ${\displaystyle \lfloor n/2\rfloor }$ larger elements from each pair, creating a sorted sequence ${\displaystyle S}$ of ${\displaystyle \lfloor n/2\rfloor }$ of the input elements, in ascending order.
4. Insert at the start of ${\displaystyle S}$ the element that was paired with the first and smallest element of ${\displaystyle S}$.
5. Insert the remaining ${\displaystyle \lceil n/2\rceil -1}$ elements of ${\displaystyle X\setminus S}$ into ${\displaystyle S}$, one at a time, with a specially chosen insertion ordering described below. Use binary search in subsequences of ${\displaystyle S}$ (as described below) to determine the position at which each element should be inserted.

The algorithm is designed to take advantage of the fact that the binary searches used to insert elements into ${\displaystyle S}$ are most efficient (from the point of view of worst case analysis) when the length of the subsequence that is searched is one less than a power of two. This is because, for those lengths, all outcomes of the search use the same number of comparisons as each other.[1] To choose an insertion ordering that produces these lengths, consider the sorted sequence ${\displaystyle S}$ after step 4 of the outline above (before inserting the remaining elements), and let ${\displaystyle x_{i}}$ denote the ${\displaystyle i}$th element of this sorted sequence. Thus,

${\displaystyle S=(x_{1},x_{2},x_{3},\dots ),}$

where each element ${\displaystyle x_{i}}$ with ${\displaystyle i\geq 3}$ is paired with an element ${\displaystyle y_{i}}$ that has not yet been inserted. (There are no elements ${\displaystyle y_{1}}$ or ${\displaystyle y_{2}}$ because ${\displaystyle x_{1}}$ and ${\displaystyle x_{2}}$ were paired with each other.) If ${\displaystyle n}$ is odd, the remaining unpaired element should also be numbered as ${\displaystyle y_{i}}$ with ${\displaystyle i}$ larger than the indexes of the paired elements. Then, the final step of the outline above can be expanded into the following steps:[1][2][3][4]

• Partition the uninserted elements ${\displaystyle y_{i}}$ into groups with contiguous indexes. There are two elements ${\displaystyle y_{3}}$ and ${\displaystyle y_{4}}$ in the first group, and the sums of sizes of every two adjacent groups form a sequence of powers of two. Thus, the sizes of groups are: 2, 2, 6, 10, 22, 42, ...
• Order the uninserted elements by their groups (smaller indexes to larger indexes), but within each group order them from larger indexes to smaller indexes. Thus, the ordering becomes
${\displaystyle y_{4},y_{3},y_{6},y_{5},y_{12},y_{11},y_{10},y_{9},y_{8},y_{7},y_{22},y_{21}\dots }$
• Use this ordering to insert the elements ${\displaystyle y_{i}}$ into ${\displaystyle S}$. For each element ${\displaystyle y_{i}}$, use a binary search from the start of ${\displaystyle S}$ up to but not including ${\displaystyle x_{i}}$ to determine where to insert ${\displaystyle y_{i}}$.
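Putting the outline and the insertion ordering together, the whole algorithm can be sketched in Python. This is a minimal illustration rather than a comparison-optimal implementation from the literature; it assumes distinct elements (so that each larger element identifies its partner uniquely), and all names are chosen for exposition:

```python
import bisect

def merge_insertion_sort(x):
    """Sketch of the Ford-Johnson algorithm; assumes distinct elements."""
    n = len(x)
    if n <= 1:
        return list(x)
    # Steps 1-2: pair the elements and find the larger of each pair.
    pairs = [(max(x[i], x[i + 1]), min(x[i], x[i + 1]))
             for i in range(0, n - 1, 2)]
    straggler = x[-1] if n % 2 == 1 else None
    # Step 3: recursively sort the larger elements of the pairs.
    larger = merge_insertion_sort([p[0] for p in pairs])
    partner = dict(pairs)  # larger element -> its smaller partner
    # Step 4: the partner of the minimum precedes everything in S.
    s = [partner[larger[0]]] + larger
    # y_i (for i >= 3) is the uninserted partner of x_i; an odd
    # straggler is numbered after all of the paired elements.
    y = {i + 3: partner[v] for i, v in enumerate(larger[1:])}
    if straggler is not None:
        y[len(y) + 3] = straggler
    # Step 5: group sizes 2, 2, 6, 10, 22, ... (t_k = 2^k - t_{k-1});
    # within each group, insert from larger index to smaller.
    lo, t_prev, k = 3, 0, 1
    while lo <= len(y) + 2:
        t = 2 ** k - t_prev
        hi = min(lo + t - 1, len(y) + 2)
        for i in range(hi, lo - 1, -1):
            # Binary-search only the part of s before x_i (all of s
            # for the straggler, which has no partner inside s).
            limit = s.index(larger[i - 2]) if i - 2 < len(larger) else len(s)
            s.insert(bisect.bisect_left(s, y[i], 0, limit), y[i])
        lo, t_prev, k = hi + 1, t, k + 1
    return s
```

Note that `s.index(...)` looks up the current position of ${\displaystyle x_{i}}$, which shifts as earlier insertions land before it, so the binary search examines only the prefix of ${\displaystyle S}$ below ${\displaystyle x_{i}}$, as specified in the last step above.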

## Analysis

Let ${\displaystyle C(n)}$ denote the number of comparisons that merge-insertion sort makes, in the worst case, when sorting ${\displaystyle n}$ elements. This number of comparisons can be broken down as the sum of three terms:

• ${\displaystyle \lfloor n/2\rfloor }$ comparisons among the pairs of items,
• ${\displaystyle C(\lfloor n/2\rfloor )}$ comparisons for the recursive call, and
• some number of comparisons for the binary insertions used to insert the remaining elements.

In the third term, the worst-case number of comparisons for the elements in the first group is two, because each is inserted into a subsequence of ${\displaystyle S}$ of length at most three. First, ${\displaystyle y_{4}}$ is inserted into the three-element subsequence ${\displaystyle (x_{1},x_{2},x_{3})}$. Then, ${\displaystyle y_{3}}$ is inserted into the sorted three-element subsequence formed by ${\displaystyle x_{1}}$, ${\displaystyle x_{2}}$, and ${\displaystyle y_{4}}$ (when ${\displaystyle y_{4}}$ was placed before ${\displaystyle x_{3}}$), or into the two-element subsequence ${\displaystyle (x_{1},x_{2})}$ (when ${\displaystyle y_{4}}$ was placed after ${\displaystyle x_{3}}$). Similarly, the elements ${\displaystyle y_{6}}$ and ${\displaystyle y_{5}}$ of the second group are each inserted into a subsequence of length at most seven, using three comparisons. More generally, the worst-case number of comparisons for the elements in the ${\displaystyle i}$th group is ${\displaystyle i+1}$, because each is inserted into a subsequence of length at most ${\displaystyle 2^{i+1}-1}$.[1][2][3][4] By summing the number of comparisons used for all the elements and solving the resulting recurrence relation, this analysis can be used to compute the values of ${\displaystyle C(n)}$, giving the formula[7]

${\displaystyle C(n)=\sum _{i=1}^{n}\left\lceil \log _{2}{\frac {3i}{4}}\right\rceil \approx n\log _{2}n-1.415n}$

or, in closed form,[8]

${\displaystyle C(n)=n{\biggl \lceil }\log _{2}{\frac {3n}{4}}{\biggr \rceil }-{\biggl \lfloor }{\frac {2^{\lfloor \log _{2}6n\rfloor }}{3}}{\biggr \rfloor }+{\biggl \lfloor }{\frac {\log _{2}6n}{2}}{\biggr \rfloor }.}$
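Both formulas can be evaluated with exact integer arithmetic, using the identity ${\displaystyle \lceil \log _{2}m\rceil ={\text{(m-1).bit\_length()}}}$ for integers ${\displaystyle m\geq 1}$ together with ${\displaystyle \lceil \log _{2}(3i/4)\rceil =\lceil \log _{2}3i\rceil -2}$. A sketch (function names are illustrative):

```python
def C_sum(n):
    # Summation formula: sum over i of ceil(log2(3i/4)),
    # computed as (3i - 1).bit_length() - 2 to avoid floating point.
    return sum((3 * i - 1).bit_length() - 2 for i in range(1, n + 1))

def C_closed(n):
    # Closed form: n*ceil(log2(3n/4)) - floor(2^floor(log2 6n)/3)
    #              + floor(log2(6n)/2).
    k = (6 * n).bit_length() - 1              # floor(log2(6n))
    ceil_log = (3 * n - 1).bit_length() - 2   # ceil(log2(3n/4))
    return n * ceil_log - 2 ** k // 3 + k // 2
```

The two functions agree and reproduce the sequence of values listed below.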

For ${\displaystyle n=1,2,\dots }$ the numbers of comparisons are[1]

0, 1, 3, 5, 7, 10, 13, 16, 19, 22, 26, 30, 34, ... (sequence A001768 in the OEIS)
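These values can also be generated directly from the three-term breakdown above, charging ${\displaystyle k+1}$ comparisons to each element of the ${\displaystyle k}$th insertion group. A sketch (names are illustrative; matching the tabulated values for small ${\displaystyle n}$ is a consistency check, not a proof):

```python
def group_insertion_cost(m):
    # Worst-case comparisons to binary-insert the m remaining elements:
    # group sizes are 2, 2, 6, 10, 22, ... (t_k = 2^k - t_{k-1}), and
    # each element of the k-th group costs k + 1 comparisons.
    cost, k, t_prev = 0, 1, 0
    while m > 0:
        t = 2 ** k - t_prev
        g = min(t, m)      # the last group may be only partially filled
        cost += g * (k + 1)
        m -= g
        t_prev, k = t, k + 1
    return cost

def comparisons(n):
    # C(n) = floor(n/2) pair comparisons + recursive call on the larger
    # elements + worst-case insertions of the ceil(n/2) - 1 remaining.
    if n <= 1:
        return 0
    return n // 2 + comparisons(n // 2) + group_insertion_cost((n + 1) // 2 - 1)
```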

## Relation to other comparison sorts

The algorithm is called merge-insertion sort because the initial comparisons that it performs before its recursive call (pairing up arbitrary items and comparing each pair) are the same as the initial comparisons of merge sort, while the comparisons that it performs after the recursive call (using binary search to insert elements one by one into a sorted list) follow the same principle as insertion sort. In this sense, it is a hybrid algorithm that combines both merge sort and insertion sort.[9]

For small inputs (up to ${\displaystyle n=11}$) its numbers of comparisons equal the lower bound on comparison sorting of ${\displaystyle \lceil \log _{2}n!\rceil \approx n\log _{2}n-1.443n}$. However, for larger inputs the number of comparisons made by the merge-insertion algorithm is bigger than this lower bound. Merge-insertion sort also performs fewer comparisons than the sorting numbers, which count the comparisons made by binary insertion sort or merge sort in the worst case. The sorting numbers fluctuate between ${\displaystyle n\log _{2}n-0.915n}$ and ${\displaystyle n\log _{2}n-n}$, with the same leading term but a worse constant factor in the lower-order linear term.[1]
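The comparison with the lower bound and with the sorting numbers can be made concrete with a small sketch, reusing the summation formula from the Analysis section (function names are illustrative):

```python
import math

def merge_insertion(n):
    # Worst case of merge-insertion sort: sum of ceil(log2(3i/4)).
    return sum((3 * i - 1).bit_length() - 2 for i in range(1, n + 1))

def sorting_number(n):
    # Worst case of binary insertion sort: sum of ceil(log2 i).
    return sum((i - 1).bit_length() for i in range(1, n + 1))

def info_lower_bound(n):
    # ceil(log2 n!), the information-theoretic lower bound.
    return (math.factorial(n) - 1).bit_length()
```

For example, merge-insertion sort matches the lower bound up to ${\displaystyle n=11}$, while at ${\displaystyle n=12}$ it uses 30 comparisons against a lower bound of 29, and it never exceeds the sorting numbers.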

Merge-insertion sort is the sorting algorithm with the minimum possible comparisons for ${\displaystyle n}$ items whenever ${\displaystyle n\leq 22}$, and it has the fewest comparisons known for ${\displaystyle n\leq 46}$.[10][11] For 20 years, merge-insertion sort was the sorting algorithm with the fewest comparisons known for all input lengths. However, in 1979 Glenn Manacher published another sorting algorithm that used even fewer comparisons, for large enough inputs.[3][5] It remains unknown exactly how many comparisons are needed for sorting, for all ${\displaystyle n}$, but Manacher's algorithm and later record-breaking sorting algorithms have all used modifications of the merge-insertion sort ideas.[3]

## References

1. Ford, Lester R. Jr.; Johnson, Selmer M. (1959), "A tournament problem", American Mathematical Monthly, 66: 387–389, doi:10.2307/2308750, MR 0103159
2. Williamson, Stanley Gill (2002), "2.31 Merge insertion (Ford–Johnson)", Combinatorics for Computer Science, Dover books on mathematics, Courier Corporation, pp. 66–68, ISBN 9780486420769
3. Mahmoud, Hosam M. (2011), "12.3.1 The Ford–Johnson algorithm", Sorting: A Distribution Theory, Wiley Series in Discrete Mathematics and Optimization, vol. 54, John Wiley & Sons, pp. 286–288, ISBN 9781118031131
4. Knuth, Donald E. (1998), "Merge insertion", The Art of Computer Programming, Vol. 3: Sorting and Searching (2nd ed.), pp. 184–186
5. Manacher, Glenn K. (July 1979), "The Ford-Johnson Sorting Algorithm Is Not Optimal", Journal of the ACM, 26 (3): 441–456, doi:10.1145/322139.322145
6. The original description by Ford & Johnson (1959) sorted the elements in descending order. The steps listed here reverse the output, following the description in Knuth (1998). The resulting algorithm makes the same comparisons but produces ascending order instead.
7. Knuth (1998) credits the summation formula to the 1960 Ph.D. thesis of A. Hadian. The approximation formula was already given by Ford & Johnson (1959).
8. Guy, Richard K.; Nowakowski, Richard J. (December 1995), "Monthly Unsolved Problems, 1969–1995", American Mathematical Monthly, 102 (10): 921–926, doi:10.2307/2975272
9. Knuth (1998), p. 184: "Since it involves some aspects of merging and some aspects of insertion, we call it merge insertion."
10. Peczarski, Marcin (2004), "New results in minimum-comparison sorting", Algorithmica, 40 (2): 133–145, doi:10.1007/s00453-004-1100-7, MR 2072769
11. Peczarski, Marcin (2007), "The Ford-Johnson algorithm still unbeaten for less than 47 elements", Information Processing Letters, 101 (3): 126–128, doi:10.1016/j.ipl.2006.09.001, MR 2287331