# Partition problem

In number theory and computer science, the partition problem, or number partitioning, is the task of deciding whether a given multiset S of positive integers can be partitioned into two subsets S1 and S2 such that the sum of the numbers in S1 equals the sum of the numbers in S2. Although the partition problem is NP-complete, there is a pseudo-polynomial time dynamic programming solution, and there are heuristics that solve the problem in many instances, either optimally or approximately. For this reason, it has been called "the easiest hard problem".

There is an optimization version of the partition problem, which is to partition the multiset S into two subsets S1, S2 such that the difference between the sum of elements in S1 and the sum of elements in S2 is minimized. The optimization version is NP-hard, but can be solved efficiently in practice.

The partition problem is a special case of two related problems:

• In the subset sum problem, the goal is to find a subset of S whose sum is a certain target number T given as input (the partition problem is the special case in which T is half the sum of S).
• In multiway number partitioning, there is an integer parameter k, and the goal is to decide whether S can be partitioned into k subsets of equal sum (the partition problem is the special case in which k = 2).
• However, it is quite different from the 3-partition problem: in that problem, the number of subsets is not fixed in advance - it must be |S|/3, and each subset must have exactly 3 elements. 3-partition is much harder than partition - it has no pseudo-polynomial time algorithm unless P = NP.

## Examples

Given S = {3,1,1,2,2,1}, a valid solution to the partition problem is the two sets S1 = {1,1,1,2} and S2 = {2,3}. Both sets sum to 5, and they partition S. Note that this solution is not unique. S1 = {3,1,1} and S2 = {2,2,1} is another solution.

Not every multiset of positive integers has a partition into two subsets with equal sum. An example of such a set is S = {2,5}.
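Whether such a partition exists can be decided with the standard pseudo-polynomial dynamic program over achievable subset sums; a minimal sketch:

```python
def can_partition(nums):
    """Decide whether nums can be split into two subsets of equal sum.

    Dynamic program over the set of achievable subset sums up to half the
    total (pseudo-polynomial: cost depends on the magnitude of the numbers).
    """
    total = sum(nums)
    if total % 2 == 1:          # an odd total can never be split evenly
        return False
    target = total // 2
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(can_partition([3, 1, 1, 2, 2, 1]))  # True: e.g. {1,1,1,2} and {2,3}
print(can_partition([2, 5]))              # False: no equal-sum split exists
```

This is the pseudo-polynomial approach referred to in the introduction; it is efficient when the numbers are small, but not polynomial in the number of input bits.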

## Computational hardness

The partition problem is NP-hard. This can be proved by reduction from the subset sum problem. An instance of SubsetSum consists of a set S of positive integers and a target sum T < sum(S); the goal is to decide if there is a subset of S with sum exactly T.

Given such an instance, construct an instance of Partition in which the input set contains the original set plus two elements: z1 and z2, with z1 = sum(S) and z2 = 2T. The sum of this input set is sum(S) + z1 + z2 = 2 sum(S) + 2T, so the target sum for Partition is sum(S) + T.

• Suppose there exists a solution S' to the SubsetSum instance. Then sum(S') = T, so sum(S' ∪ {z1}) = sum(S) + T, so S' ∪ {z1} is a solution to the Partition instance.
• Conversely, suppose there exists a solution S'' to the Partition instance. Then S'' must contain either z1 or z2, but not both, since their sum is more than sum(S) + T. If S'' contains z1, then it must contain elements from S with a sum of exactly T, so S'' without z1 is a solution to the SubsetSum instance. If S'' contains z2, then it must contain elements from S with a sum of exactly sum(S) − T, so the remaining elements of S sum to T and form a solution to the SubsetSum instance.
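The reduction can be exercised end-to-end on a tiny instance. A sketch using brute-force checkers (exponential, for illustration only; the instance values are hypothetical):

```python
from itertools import combinations

def has_subset_with_sum(S, T):
    """Brute-force subset-sum: does any subset of S sum to exactly T?"""
    return any(sum(c) == T
               for r in range(len(S) + 1)
               for c in combinations(S, r))

def reduce_to_partition(S, T):
    """The reduction from the text: add z1 = sum(S) and z2 = 2T."""
    return list(S) + [sum(S), 2 * T]

def has_equal_partition(nums):
    """Brute-force partition check via subset-sum at half the total."""
    total = sum(nums)
    return total % 2 == 0 and has_subset_with_sum(nums, total // 2)

S, T = [1, 2, 4], 6
instance = reduce_to_partition(S, T)   # [1, 2, 4, 7, 12]
print(has_subset_with_sum(S, T), has_equal_partition(instance))  # True True
```

Both answers agree, as the two directions of the proof above guarantee.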

## Approximation algorithms

As mentioned above, the partition problem is a special case of multiway-partitioning and of subset-sum. Therefore, it can be solved by algorithms developed for each of these problems. Algorithms developed for multiway number partitioning include:

• Greedy number partitioning - loops over the numbers and puts each number in the set whose current sum is smallest. If the numbers are not sorted, the runtime is O(n) and the approximation ratio is at most 3/2 ("approximation ratio" means the larger sum in the algorithm's output divided by the larger sum in an optimal partition). Sorting the numbers increases the runtime to O(n log n) and improves the approximation ratio to 7/6. If the numbers are distributed uniformly in [0,1], then the approximation ratio is at most $1+O(\log \log n/n)$ almost surely, and $1+O(1/n)$ in expectation.
• Largest Differencing Method (also called the Karmarkar-Karp algorithm) sorts the numbers in descending order and repeatedly replaces the two largest numbers by their difference. The runtime is O(n log n). In the worst case, its approximation ratio is similar - at most 7/6. However, in the average case it performs much better than the greedy algorithm: when the numbers are distributed uniformly in [0,1], its approximation ratio is at most $1+1/n^{\Theta(\log n)}$ in expectation. It also performs better in simulation experiments.
• The Multifit algorithm uses binary search combined with an algorithm for bin packing. In the worst case, its approximation ratio is 8/7.
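The sorted greedy heuristic from the first bullet can be sketched as follows:

```python
def greedy_partition(nums):
    """Sorted greedy heuristic: process numbers largest-first and put each
    number into the subset whose current sum is smaller."""
    sets, sums = ([], []), [0, 0]
    for x in sorted(nums, reverse=True):
        i = 0 if sums[0] <= sums[1] else 1   # index of the lighter subset
        sets[i].append(x)
        sums[i] += x
    return sets

a, b = greedy_partition([3, 1, 1, 2, 2, 1])
print(a, b)  # [3, 1, 1] [2, 2, 1] - both sum to 5
```

For this instance the heuristic happens to find a perfect partition; in general it only guarantees the 7/6 ratio stated above.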
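The differencing step of the largest differencing method can be implemented with a max-heap; a sketch that returns only the final difference between the two subset sums (recovering the subsets themselves requires extra bookkeeping):

```python
import heapq

def karmarkar_karp_difference(nums):
    """Largest differencing method: repeatedly replace the two largest
    numbers by their difference; the last remaining value is the final
    difference between the two subset sums."""
    heap = [-x for x in nums]        # heapq is a min-heap, so negate
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)     # largest
        b = -heapq.heappop(heap)     # second largest
        heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0

print(karmarkar_karp_difference([3, 1, 1, 2, 2, 1]))  # 0: a perfect partition
print(karmarkar_karp_difference([4, 5, 6, 7, 8]))     # 2, though {8,7} vs {6,5,4} is perfect
```

The second example illustrates that the method is only a heuristic: it returns difference 2 even though an equal-sum partition exists.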

## Exact algorithms

There are exact algorithms that always find the optimal partition. Since the problem is NP-hard, such algorithms might take exponential time in general, but may be practically usable in certain cases. Algorithms developed for multiway number partitioning include:

• The pseudo-polynomial time number partitioning algorithm takes $O(nm)$ memory, where m is the largest number in the input.
• The Complete Greedy Algorithm (CGA) considers all partitions by constructing a binary tree. Each level in the tree corresponds to an input number, where the root corresponds to the largest number, the level below to the next-largest number, etc. Each branch corresponds to a different set in which the current number can be put. Traversing the tree in depth-first order requires only $O(n)$ space, but might take $O(2^{n})$ time. The runtime can be improved by using a greedy heuristic: in each level, develop first the branch in which the current number is put in the set with the smallest sum. This algorithm first finds the solution found by greedy number partitioning, but then proceeds to look for better solutions. Some variations of this idea are fully polynomial-time approximation schemes for the subset-sum problem, and hence for the partition problem as well.
• The Complete Karmarkar-Karp algorithm (CKK) considers all partitions by constructing a binary tree. Each level corresponds to a pair of numbers. The left branch corresponds to putting them in different subsets (i.e., replacing them by their difference), and the right branch corresponds to putting them in the same subset (i.e., replacing them by their sum). This algorithm first finds the solution found by the largest differencing method, but then proceeds to find better solutions. It runs substantially faster than CGA on random instances. Its advantage is much larger when an equal partition exists, and can reach several orders of magnitude. In practice, problems of arbitrary size can be solved by CKK if the numbers have at most 12 significant digits. CKK can also run as an anytime algorithm: it finds the KK solution first, and then finds progressively better solutions as time allows (possibly requiring exponential time to reach optimality, for the worst instances). It requires $O(n)$ space, but in the worst case might take $O(2^{n})$ time.
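A minimal sketch of the CGA's branch-and-bound search, returning the smallest achievable difference (the pruning bound and variable names are illustrative; production implementations add more pruning rules):

```python
def complete_greedy(nums):
    """Complete Greedy Algorithm: depth-first search over a binary tree in
    which each level assigns one number (largest first) to one of the two
    subsets; the greedy branch (lighter subset) is explored first."""
    nums = sorted(nums, reverse=True)
    best = sum(nums)                  # difference of the trivial partition

    def dfs(i, s1, s2, remaining):
        nonlocal best
        if i == len(nums):
            best = min(best, abs(s1 - s2))
            return
        # Prune: even assigning everything left to the lighter subset
        # cannot beat the best difference found so far.
        if abs(s1 - s2) - remaining >= best:
            return
        x = nums[i]
        greedy, other = (((s1 + x, s2), (s1, s2 + x)) if s1 <= s2
                         else ((s1, s2 + x), (s1 + x, s2)))
        dfs(i + 1, *greedy, remaining - x)   # greedy branch first
        dfs(i + 1, *other, remaining - x)

    dfs(0, 0, 0, sum(nums))
    return best

print(complete_greedy([4, 5, 6, 7, 8]))  # 0: e.g. {8, 7} vs {6, 5, 4}
```

Because the greedy branch is explored first, the first leaf reached is exactly the greedy-heuristic solution; the search then keeps improving on it, as described above.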

Algorithms developed for subset sum include:

• Horowitz and Sahni - runs in time $O(2^{n/2}\cdot (n/2))$, but requires $O(2^{n/2})$ space.
• Schroeppel and Shamir - runs in time $O(2^{n/2}\cdot (n/4))$, and requires much less space - $O(2^{n/4})$.
• Howgrave-Graham and Joux - runs in time $O(2^{n/3})$, but it is a randomized algorithm that only solves the decision problem (not the optimization problem).
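The meet-in-the-middle idea behind the Horowitz-Sahni algorithm can be sketched as follows. This simplified version hashes one half's subset sums instead of sorting both halves, so it conveys the $O(2^{n/2})$ split but not the exact time bound of the real algorithm:

```python
def subset_sums(nums):
    """All achievable subset sums of nums (up to 2^len(nums) values)."""
    sums = {0}
    for x in nums:
        sums |= {s + x for s in sums}
    return sums

def meet_in_the_middle(nums, T):
    """Subset-sum via meet in the middle: enumerate the subset sums of each
    half and look for a pair of half-sums adding up to the target T."""
    half = len(nums) // 2
    right = subset_sums(nums[half:])
    return any(T - s in right for s in subset_sums(nums[:half]))

# Partition is the special case T = sum(S) / 2:
S = [3, 1, 1, 2, 2, 1]
print(meet_in_the_middle(S, sum(S) // 2))  # True
```

Each half contributes at most $2^{n/2}$ sums, which is the source of the space bounds quoted above.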

## Hard instances and phase-transition

Sets with only one, or no, equal-sum partitions tend to be the hardest (most expensive) to solve relative to their input sizes. When the values are small compared to the size of the set, perfect partitions are more likely. The problem is known to undergo a "phase transition": perfect partitions are likely for some parameter ranges and unlikely for others. If m is the number of bits needed to express any number in the set and n is the size of the set, then instances with $m/n<1$ tend to have many solutions and instances with $m/n>1$ tend to have few or no solutions. As n and m get larger, the probability of a perfect partition goes to 1 or 0 respectively. This was originally argued based on empirical evidence by Gent and Walsh, then using methods from statistical physics by Mertens, and later proved by Borgs, Chayes, and Pittel.

## Probabilistic version

A related problem, somewhat analogous to the birthday paradox, is to determine the size of the input set such that there is a solution with probability one half, under the assumption that each element in the set is selected uniformly at random between 1 and some given value. As with the birthday paradox, the answer can be counter-intuitive.

## Variants and generalizations

The variants in which the two subsets are required to have equal cardinality, or in which all input integers must be distinct, are also NP-hard.[citation needed]

Product partition is the problem of partitioning a set of integers into two sets with the same product (rather than the same sum). This problem is strongly NP-hard.

Kovalyov and Pesch discuss a generic approach to proving NP-hardness of partition-type problems.

## Applications

One application of the partition problem is manipulation of elections. Suppose there are three candidates (A, B and C). A single candidate should be elected using a voting rule based on scoring, e.g. the veto rule (each voter vetoes a single candidate, and the candidate with the fewest vetoes wins). If a coalition wants to ensure that C is elected, they should partition their votes among A and B so as to maximize the smallest number of vetoes each of them gets. If the votes are weighted, then the problem can be reduced to the partition problem, and thus it can be solved efficiently using CKK. The same is true for any other voting rule that is based on scoring.