# Bin packing problem

In the bin packing problem, items of different volumes must be packed into a finite number of bins or containers each of a fixed given volume in a way that minimizes the number of bins used. In computational complexity theory, it is a combinatorial NP-hard problem.[1] The decision problem (deciding if items will fit into a specified number of bins) is NP-complete.[2]

There are many variations of this problem, such as 2D packing, linear packing, packing by weight, packing by cost, and so on. They have many applications, such as filling up containers, loading trucks subject to weight capacity constraints, creating file backups on media, and technology mapping in field-programmable gate array (FPGA) semiconductor chip design.

The bin packing problem can also be seen as a special case of the cutting stock problem. When the number of bins is restricted to 1 and each item is characterised by both a volume and a value, the problem of maximising the value of items that can fit in the bin is known as the knapsack problem.

Although the bin packing problem is NP-hard, optimal solutions to very large instances can be produced with sophisticated algorithms. In addition, many heuristics have been developed: for example, the first-fit algorithm provides a fast but often non-optimal solution, placing each item into the first bin in which it fits. It requires Θ(n log n) time, where n is the number of items to be packed. The algorithm can be made much more effective by first sorting the list of items into decreasing order (sometimes known as the first-fit decreasing algorithm), although this still does not guarantee an optimal solution and for longer lists may increase the running time. It is known, however, that there always exists at least one ordering of items that allows first-fit to produce an optimal solution.[3]
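
The first-fit heuristic and its decreasing-order variant can be sketched in a few lines of Python. This is a minimal illustration, not a tuned implementation: a true Θ(n log n) version needs a search structure over bin loads, while this sketch scans the bins linearly.

```python
def first_fit(items, cap=1.0):
    """Place each item into the lowest-indexed bin with enough room,
    opening a new bin when none fits. Returns the list of bin loads."""
    bins = []  # bins[i] is the current load of bin i
    for size in items:
        for i, load in enumerate(bins):
            if load + size <= cap:
                bins[i] = load + size
                break
        else:  # no open bin can take the item
            bins.append(size)
    return bins

def first_fit_decreasing(items, cap=1.0):
    """First-fit applied after sorting the items into decreasing order."""
    return first_fit(sorted(items, reverse=True), cap)
```

For example, `first_fit([0.7, 0.5, 0.3, 0.4, 0.1])` packs the five items into two bins.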

A variant of bin packing that occurs in practice is when items can share space when packed into a bin. Specifically, a set of items could occupy less space when packed together than the sum of their individual sizes. This variant is known as VM packing[4] since when virtual machines (VMs) are packed in a server, their total memory requirement could decrease due to pages shared by the VMs that need only be stored once. If items can share space in arbitrary ways, the bin packing problem is hard to even approximate. However, if the space sharing fits into a hierarchy, as is the case with memory sharing in virtual machines, the bin packing problem can be efficiently approximated.

Another variant of bin packing of interest in practice is the so-called online bin packing. Here the items of different volume are supposed to arrive sequentially, and the decision maker has to decide whether to select and pack the currently observed item, or else to let it pass. Each decision is without recall. In contrast, offline bin packing allows rearranging the items in the hope of achieving a better packing once additional items arrive. This will of course require additional storage for holding the items to be rearranged.

## Formal statement

In Computers and Intractability[5] Garey and Johnson list the bin packing problem under the reference [SR1]. They define its decision variant as follows.

Instance: Finite set ${\displaystyle I}$ of items, a size ${\displaystyle s(i)\in \mathbb {Z} ^{+}}$ for each ${\displaystyle i\in I}$, a positive integer bin capacity ${\displaystyle B}$, and a positive integer ${\displaystyle K}$.
Question: Is there a partition of ${\displaystyle I}$ into disjoint sets ${\displaystyle I_{1},\dots ,I_{K}}$ such that the sum of the sizes of the items in each ${\displaystyle I_{j}}$ is ${\displaystyle B}$ or less?

Note that in the literature, an equivalent notation is often used, where ${\displaystyle B=1}$ and ${\displaystyle s(i)\in \mathbb {Q} \cap (0,1]}$ for each ${\displaystyle i\in I}$. Furthermore, research is mostly interested in the optimization variant, which asks for the smallest possible value of ${\displaystyle K}$. A solution is optimal if it has minimal ${\displaystyle K}$. The ${\displaystyle K}$-value for an optimal solution for a set of items ${\displaystyle I}$ is denoted by ${\displaystyle \mathrm {OPT} (I)}$, or just ${\displaystyle \mathrm {OPT} }$ if the set of items is clear from the context.

A possible integer linear programming formulation of the problem is:

 minimize ${\displaystyle K=\sum _{j=1}^{n}y_{j}}$

 subject to ${\displaystyle K\geq 1,}$

 ${\displaystyle \sum _{i\in I}s(i)x_{ij}\leq By_{j}\quad \forall j\in \{1,\ldots ,n\},}$

 ${\displaystyle \sum _{j=1}^{n}x_{ij}=1\quad \forall i\in I,}$

 ${\displaystyle y_{j}\in \{0,1\}\quad \forall j\in \{1,\ldots ,n\},}$

 ${\displaystyle x_{ij}\in \{0,1\}\quad \forall i\in I\,\forall j\in \{1,\ldots ,n\},}$

where ${\displaystyle y_{j}=1}$ if bin ${\displaystyle j}$ is used, ${\displaystyle x_{ij}=1}$ if item ${\displaystyle i}$ is put into bin ${\displaystyle j}$, and ${\displaystyle n=|I|}$ serves as a trivial upper bound on the number of bins.[6]
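
The decision variant can be checked directly, if very inefficiently, by enumerating all assignments of items to bins; the Python sketch below (the function name `fits` is mine, not standard) mirrors the role of the ${\displaystyle x_{ij}}$ variables and is only usable for tiny instances, since it tries all ${\displaystyle K^{n}}$ assignments.

```python
from itertools import product

def fits(sizes, B, K):
    """Decision variant: can the items be partitioned into K bins of
    capacity B? Brute force over all K**len(sizes) assignments."""
    for assignment in product(range(K), repeat=len(sizes)):
        loads = [0] * K
        for size, bin_idx in zip(sizes, assignment):
            loads[bin_idx] += size
        if all(load <= B for load in loads):
            return True
    return False
```

For example, four items of size 4 fit into two bins of capacity 8, but not into one.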

## Hardness of bin packing

The bin packing problem is strongly NP-complete.[5] This can be proven by reducing the strongly NP-complete 3-partition problem to bin packing. Furthermore, a reduction from the partition problem shows that there can be no approximation algorithm with absolute approximation ratio smaller than ${\displaystyle 3/2}$ unless ${\displaystyle P=NP}$.[7] On the other hand, it is solvable in pseudo-polynomial time for any fixed number of bins ${\displaystyle K}$, and solvable in polynomial time for any fixed bin capacity ${\displaystyle B}$.[5]

In the online version of the bin packing problem, the items arrive one after another, and the (irreversible) decision where to place an item has to be made before knowing the next item or even whether there will be another one. Yao[8] proved in 1980 that there can be no online algorithm with an asymptotic competitive ratio smaller than ${\displaystyle 3/2}$. Brown[9] and Liang[10] improved this bound to ${\displaystyle 1.53635}$. Afterward, this bound was improved to ${\displaystyle 1.54014}$ by van Vliet.[11] In 2012, this lower bound was again improved by Békési and Galambos[12] to ${\displaystyle 248/161\approx 1.54037}$.

## Approximation algorithms for bin packing

To measure the performance of an approximation algorithm there are two approximation ratios considered in the literature. For a given list of items ${\displaystyle L}$ the number ${\displaystyle A(L)}$ denotes the number of bins used when algorithm ${\displaystyle A}$ is applied to list ${\displaystyle L}$, while ${\displaystyle \mathrm {OPT} (L)}$ denotes the optimum number for this list. The absolute worst-case performance ratio ${\displaystyle R_{A}}$ for an algorithm ${\displaystyle A}$ is defined as

${\displaystyle R_{A}\equiv \inf\{r\geq 1:A(L)/\mathrm {OPT} (L)\leq r{\text{ for all lists }}L\}.}$

On the other hand, the asymptotic worst-case ratio ${\displaystyle R_{A}^{\infty }}$ is defined as

${\displaystyle R_{A}^{\infty }\equiv \inf\{r\geq 1:\exists N>0,A(L)/\mathrm {OPT} (L)\leq r{\text{ for all lists }}L{\text{ with }}\mathrm {OPT} (L)\geq N\}.}$

Additionally, one can restrict the lists to those for which all items have a size of at most ${\displaystyle \alpha }$. For such lists, the bounded size performance ratios are denoted as ${\displaystyle R_{A}(\alpha )}$ and ${\displaystyle R_{A}^{\infty }(\alpha )}$.

Approximation algorithms for bin packing can be classified into two categories:

1. Online heuristics, which consider the items in a given order and place them one by one inside the bins. These heuristics are also applicable to the online version of this problem.
2. Offline heuristics, which modify the given list of items, e.g., by sorting the items by size. These algorithms are no longer applicable to the online variant of this problem. However, they achieve an improved approximation guarantee while maintaining the advantage of their small time-complexity. A sub-category of offline heuristics are the asymptotic approximation schemes. These algorithms have an approximation guarantee of the form ${\displaystyle (1+\varepsilon )\mathrm {OPT} (L)+C}$ for some constant ${\displaystyle C}$ that may depend on ${\displaystyle 1/\varepsilon }$. For arbitrarily large ${\displaystyle \mathrm {OPT} (L)}$, the ratio of these algorithms gets arbitrarily close to ${\displaystyle 1+\varepsilon }$. However, this comes at the cost of a (drastically) increased time complexity compared to the heuristic approaches.

## Online heuristics

A diverse set of offline and online heuristics for bin packing has been studied by Johnson.[13] He introduced two characterizations for online heuristics. An algorithm is an any-fit (AF) algorithm if it fulfills the following property: a new bin is opened for the considered item only if the item does not fit into any already open bin. An algorithm is an almost-any-fit (AAF) algorithm if it has the additional property: if a bin is the unique bin with the lowest non-zero level, it cannot be chosen unless the item does not fit into any other bin with a non-zero level. He proved that each AAF-algorithm ${\displaystyle A}$ satisfies ${\displaystyle R_{A}^{\infty }=17/10}$, meaning that its asymptotic approximation ratio is at most ${\displaystyle 17/10}$ and that there are lists on which this ratio is attained.

An online algorithm uses k-bounded space if, for each new item, the number of bins in which it may be packed is at most k.[14] Examples for these algorithms are Next-k-Fit and Harmonic-k.

| Algorithm | Approximation guarantee | Worst case list ${\displaystyle L}$ | Time-complexity |
|---|---|---|---|
| Next-fit (NF) | ${\displaystyle NF(L)\leq 2\cdot \mathrm {OPT} (L)-1}$[13] | ${\displaystyle NF(L)=2\cdot \mathrm {OPT} (L)-2}$[13] | ${\displaystyle {\mathcal {O}}(\vert L\vert )}$ |
| First-fit (FF) | ${\displaystyle FF(L)\leq \lfloor 1.7\mathrm {OPT} (L)\rfloor }$[15] | ${\displaystyle FF(L)=\lfloor 1.7\mathrm {OPT} (L)\rfloor }$[15] | ${\displaystyle {\mathcal {O}}(\vert L\vert \log(\vert L\vert ))}$[13] |
| Best-fit (BF) | ${\displaystyle BF(L)\leq \lfloor 1.7\mathrm {OPT} (L)\rfloor }$[16] | ${\displaystyle BF(L)=\lfloor 1.7\mathrm {OPT} (L)\rfloor }$[16] | ${\displaystyle {\mathcal {O}}(\vert L\vert \log(\vert L\vert ))}$[13] |
| Worst-Fit (WF) | ${\displaystyle WF(L)\leq 2\cdot \mathrm {OPT} (L)-1}$[13] | ${\displaystyle WF(L)=2\cdot \mathrm {OPT} (L)-2}$[13] | ${\displaystyle {\mathcal {O}}(\vert L\vert \log(\vert L\vert ))}$[13] |
| Almost-Worst-Fit (AWF) | ${\displaystyle R_{AWF}^{\infty }\leq 17/10}$[13] | ${\displaystyle R_{AWF}^{\infty }=17/10}$[13] | ${\displaystyle {\mathcal {O}}(\vert L\vert \log(\vert L\vert ))}$[13] |
| Refined-First-Fit (RFF) | ${\displaystyle RFF(L)\leq (5/3)\cdot \mathrm {OPT} (L)+5}$[8] | ${\displaystyle RFF(L)=(5/3)\mathrm {OPT} (L)+1/3}$ (for ${\displaystyle \mathrm {OPT} (L)=6k+1}$)[8] | ${\displaystyle {\mathcal {O}}(\vert L\vert \log(\vert L\vert ))}$[8] |
| Harmonic-k (Hk) | ${\displaystyle R_{Hk}^{\infty }\leq 1.69103}$ for ${\displaystyle k\rightarrow \infty }$[17] | ${\displaystyle R_{Hk}^{\infty }\geq 1.69103}$[17] | ${\displaystyle {\mathcal {O}}(\vert L\vert \log(\vert L\vert ))}$[17] |
| Refined Harmonic (RH) | ${\displaystyle R_{RH}^{\infty }\leq 373/228\approx 1.63597}$[17] | | ${\displaystyle {\mathcal {O}}(\vert L\vert )}$[17] |
| Modified Harmonic (MH) | ${\displaystyle R_{MH}^{\infty }\leq 538/333\approx 1.61562}$[18] | | |
| Modified Harmonic 2 (MH2) | ${\displaystyle R_{MH2}^{\infty }\leq 239091/148304\approx 1.61217}$[18] | | |
| Harmonic + 1 (H+1) | | ${\displaystyle R_{H+1}^{\infty }\geq 1.59217}$[19] | |
| Harmonic ++ (H++) | ${\displaystyle R_{H++}^{\infty }\leq 1.58889}$[19] | ${\displaystyle R_{H++}^{\infty }\geq 1.58333}$[19] | |

### Next-Fit (NF)

Next Fit (NF) is a bounded space AF-algorithm with only one partially filled bin that is open at any time. The algorithm works as follows. It considers the items in an order defined by a list ${\displaystyle L}$. If an item fits inside the currently considered bin, the item is placed inside it. Otherwise, the current bin is closed, a new bin is opened and the current item is placed inside this new bin.

This algorithm was studied by Johnson in his doctoral thesis[13] in 1973. It has the following properties:

• The running time can be bounded by ${\displaystyle {\mathcal {O}}(n\log(n))}$, where ${\displaystyle n}$ is the number of items.[13]
• For each list ${\displaystyle L}$ it holds that ${\displaystyle NF(L)\leq 2\cdot \mathrm {OPT} (L)-1}$ and hence ${\displaystyle R_{NF}=2}$.[13]
• For each ${\displaystyle N\in \mathbb {N} }$ there exists a list ${\displaystyle L}$ such that ${\displaystyle \mathrm {OPT} (L)=N}$ and ${\displaystyle NF(L)=2\cdot \mathrm {OPT} (L)-2}$.[13]
• ${\displaystyle R_{NF}^{\infty }(\alpha )\leq 2}$ for all ${\displaystyle \alpha \geq 1/2}$.[13]
• ${\displaystyle R_{NF}^{\infty }(\alpha )\leq 1/(1-\alpha )}$ for all ${\displaystyle \alpha \leq 1/2}$.[13]
• For each algorithm ${\displaystyle A}$ that is an AF-algorithm it holds that ${\displaystyle R_{A}^{\infty }(\alpha )\leq R_{NF}^{\infty }(\alpha )}$.[13]

The intuition behind the proof of the upper bound ${\displaystyle NF(L)\leq 2\cdot \mathrm {OPT} (L)}$ is the following: at most one of the resulting bins can be at most half full. If two bins were at most half full, then at some point exactly one bin was at most half full and a new one was opened to accommodate an item of size at most ${\displaystyle B/2}$. But since the first bin still had free space of at least ${\displaystyle B/2}$, the algorithm would not have opened a new bin for such an item: it opens a new bin only after the current bin is filled to more than ${\displaystyle B/2}$, or when an item of size larger than ${\displaystyle B/2}$ arrives. Thus if the algorithm uses ${\displaystyle K}$ bins, at least ${\displaystyle K-1}$ of them are more than half full, and therefore ${\displaystyle \sum _{i\in I}s(i)>{\tfrac {K-1}{2}}B}$. Because ${\displaystyle {\tfrac {\sum _{i\in I}s(i)}{B}}}$ is a lower bound on the optimum value ${\displaystyle \mathrm {OPT} }$, it follows that ${\displaystyle K-1<2\mathrm {OPT} }$ and therefore ${\displaystyle K\leq 2\mathrm {OPT} }$.[20]

The family of lists for which it holds that ${\displaystyle NF(L)=2\cdot \mathrm {OPT} (L)-2}$ is given by ${\displaystyle L:=\left({\frac {1}{2}},{\frac {1}{2(N-1)}},{\frac {1}{2}},{\frac {1}{2(N-1)}},\dots ,{\frac {1}{2}},{\frac {1}{2(N-1)}}\right)}$ with ${\displaystyle |L|=4(N-1)}$. The optimal solution for this list has ${\displaystyle N-1}$ bins containing two items with size ${\displaystyle 1/2}$ and one bin with ${\displaystyle 2(N-1)}$ items with size ${\displaystyle 1/(2(N-1))}$ (i.e., ${\displaystyle N}$ bins total), while the solution generated by NF has ${\displaystyle 2(N-1)}$ bins with one item of size ${\displaystyle 1/2}$ and one item with size ${\displaystyle 1/(2(N-1))}$.
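
Next-Fit and this worst-case family can be reproduced in a few lines of Python; this is an illustrative sketch, using exact rational arithmetic via `fractions` to avoid floating-point boundary effects.

```python
from fractions import Fraction

def next_fit(items, cap=1):
    """Next-Fit: keep one open bin; close it whenever an item doesn't fit."""
    bins, load = 0, None  # load is None while no bin is open yet
    for size in items:
        if load is not None and load + size <= cap:
            load += size
        else:
            bins += 1       # close the current bin, open a fresh one
            load = size
    return bins

# Worst-case family from the text: 2(N-1) pairs (1/2, 1/(2(N-1)))
N = 5
L = []
for _ in range(2 * (N - 1)):
    L += [Fraction(1, 2), Fraction(1, 2 * (N - 1))]
```

Here `next_fit(L)` returns 8 = 2·5 − 2 bins, while the optimum for this list is N = 5.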

### Next-k-Fit (NkF)

NkF works as NF, but instead of keeping only one bin open, the algorithm keeps the last ${\displaystyle k}$ bins open and chooses the first bin in which the item fits.

For ${\displaystyle k\geq 2}$, NkF delivers improved results compared to NF; however, increasing ${\displaystyle k}$ to constant values larger than ${\displaystyle 2}$ improves the algorithm no further in its worst-case behavior. If algorithm ${\displaystyle A}$ is an AAF-algorithm and ${\displaystyle m=\lfloor 1/\alpha \rfloor \geq 2}$, then ${\displaystyle R_{A}^{\infty }(\alpha )\leq R_{N2F}^{\infty }(\alpha )=1+1/m}$.[13]
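
A minimal sketch of Next-k-Fit (variable names are mine): the k most recently opened bins stay open and are searched first-fit style; opening a bin beyond the k-th closes the oldest one.

```python
def next_k_fit(items, k, cap=1.0):
    """Next-k-Fit: at most k bins are open; an item goes into the first
    open bin with room, otherwise a new bin is opened and, if necessary,
    the oldest open bin is closed."""
    closed, open_bins = 0, []   # open_bins holds the loads, oldest first
    for size in items:
        for i, load in enumerate(open_bins):
            if load + size <= cap:
                open_bins[i] = load + size
                break
        else:
            open_bins.append(size)       # open a new bin
            if len(open_bins) > k:       # too many open: close the oldest
                open_bins.pop(0)
                closed += 1
    return closed + len(open_bins)
```

On the list `[0.6, 0.7, 0.4, 0.3]`, `k=1` behaves like Next-Fit and uses 3 bins, while `k=2` needs only 2.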

### First-Fit (FF)

First-Fit is an AF-algorithm that processes the items in a given arbitrary order ${\displaystyle L}$. For each item in ${\displaystyle L}$, it attempts to place the item in the first bin that can accommodate the item. If no bin is found, it opens a new bin and puts the item within the new bin.

The first upper bound ${\displaystyle FF(L)\leq 1.7\mathrm {OPT} +3}$ for FF was proven by Ullman[21] in 1971. In 1972, this upper bound was improved to ${\displaystyle FF(L)\leq 1.7\mathrm {OPT} +2}$ by Garey et al.[22] In 1976, it was improved by Garey et al.[23] to ${\displaystyle FF(L)\leq \lceil 1.7\mathrm {OPT} \rceil }$, which is equivalent to ${\displaystyle FF(L)\leq 1.7\mathrm {OPT} +0.9}$ due to the integrality of ${\displaystyle FF(L)}$ and ${\displaystyle \mathrm {OPT} }$. The next improvement, by Xia and Tan[24] in 2010, lowered the bound to ${\displaystyle FF(L)\leq 1.7\mathrm {OPT} +0.7}$. Finally, in 2013, this bound was improved to ${\displaystyle FF(L)\leq \lfloor 1.7\mathrm {OPT} \rfloor }$ by Dósa and Sgall.[15] They also present an example input list ${\displaystyle L}$ for which ${\displaystyle FF(L)}$ matches this bound.

### Best-Fit (BF)

Best-fit is an AAF-algorithm similar to First-fit. Instead of placing the next item in the first bin in which it fits, it is placed in the bin with the maximum load among those in which it fits.
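
A minimal sketch of this rule (a linear scan over bin loads rather than the balanced-tree index an ${\displaystyle {\mathcal {O}}(n\log n)}$ implementation would use):

```python
def best_fit(items, cap=1.0):
    """Best-fit: each item goes into the fullest bin that still has room;
    a new bin is opened only when no bin fits."""
    bins = []  # bins[i] is the current load of bin i
    for size in items:
        best = -1
        for i, load in enumerate(bins):
            if load + size <= cap and (best == -1 or load > bins[best]):
                best = i
        if best == -1:
            bins.append(size)   # nothing fits: open a new bin
        else:
            bins[best] += size
    return len(bins)
```

For example, `best_fit([0.5, 0.7, 0.5, 0.3])` uses two bins: the second 0.5 tops off the first bin, and 0.3 tops off the 0.7 bin.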

The first upper bound ${\displaystyle BF(L)\leq 1.7\mathrm {OPT} +3}$ for BF was proven by Ullman[21] in 1971. This upper bound was improved to ${\displaystyle BF(L)\leq 1.7\mathrm {OPT} +2}$ by Garey et al.[22] Afterward, it was improved by Garey et al.[23] to ${\displaystyle BF(L)\leq \lceil 1.7\mathrm {OPT} \rceil }$. Finally, this bound was improved to ${\displaystyle BF(L)\leq \lfloor 1.7\mathrm {OPT} \rfloor }$ by Dósa and Sgall.[16] They also present an example input list ${\displaystyle L}$ for which ${\displaystyle BF(L)}$ matches this bound.

### Worst-Fit (WF)

This algorithm is similar to Best-fit. Instead of placing the item inside the bin with the maximum load, the item is placed inside the bin with the minimum load.

This algorithm can behave as badly as Next-Fit, and does so on the worst-case list for which ${\displaystyle NF(L)=2\cdot \mathrm {OPT} (L)-2}$.[13] Furthermore, it holds that ${\displaystyle R_{WF}^{\infty }(\alpha )=R_{NF}^{\infty }(\alpha )}$; since WF is an AF-algorithm, there thus exists an AF-algorithm ${\displaystyle A}$ with ${\displaystyle R_{A}^{\infty }(\alpha )=R_{NF}^{\infty }(\alpha )}$.[13]

### Almost Worst-Fit (AWF)

AWF is an AAF-algorithm that considers the items in the order of a given list ${\displaystyle L}$. It tries to place the next item inside the second-emptiest open bin (or the emptiest bin if there are two such bins). If it does not fit, it tries the emptiest one, and if the item does not fit there either, the algorithm opens a new bin. As AWF is an AAF-algorithm, it has an asymptotic worst-case ratio of ${\displaystyle 17/10}$.[13]
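
The rule just described can be sketched as follows (my own helper names; loads are rescanned and sorted per item rather than kept in a priority structure):

```python
def almost_worst_fit(items, cap=1.0):
    """Almost-Worst-Fit: try the open bin with the second-lowest load,
    then the lowest; open a new bin only if neither has room."""
    bins = []  # bins[i] is the current load of bin i
    for size in items:
        order = sorted(range(len(bins)), key=lambda i: bins[i])  # emptiest first
        for i in order[1:2] + order[:1]:     # second-emptiest, then emptiest
            if bins[i] + size <= cap:
                bins[i] += size
                break
        else:
            bins.append(size)
    return len(bins)
```

Note that trying only these two bins loses nothing: if an item does not fit into the emptiest bin, it fits into no open bin.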

### Refined-First-Fit (RFF)

The items are categorized into four classes. An item ${\displaystyle i}$ is called an ${\displaystyle A}$-piece, ${\displaystyle B_{1}}$-piece, ${\displaystyle B_{2}}$-piece, or ${\displaystyle X}$-piece if its size lies in the interval ${\displaystyle (1/2,1]}$, ${\displaystyle (2/5,1/2]}$, ${\displaystyle (1/3,2/5]}$, or ${\displaystyle (0,1/3]}$, respectively. Similarly, the bins are categorized into four classes. Let ${\displaystyle m\in \{6,7,8,9\}}$ be a fixed integer. The next item ${\displaystyle i\in L}$ is assigned according to the following rules: it is placed using First-Fit into a bin in

• Class 1, if ${\displaystyle i}$ is an ${\displaystyle A}$-piece,
• Class 2, if ${\displaystyle i}$ is a ${\displaystyle B_{1}}$-piece,
• Class 3, if ${\displaystyle i}$ is a ${\displaystyle B_{2}}$-piece, but not the ${\displaystyle (mk)}$th ${\displaystyle B_{2}}$-piece seen so far, for any integer ${\displaystyle k\geq 1}$,
• Class 1, if ${\displaystyle i}$ is the ${\displaystyle (mk)}$th ${\displaystyle B_{2}}$-piece seen so far,
• Class 4, if ${\displaystyle i}$ is an ${\displaystyle X}$-piece.

Note that this algorithm is not an any-fit algorithm, since it may open a new bin even though the current item fits inside an open bin. This algorithm was first presented by Andrew Chi-Chih Yao,[8] who proved that it has an approximation guarantee of ${\displaystyle RFF(L)\leq (5/3)\cdot \mathrm {OPT} (L)+5}$ and presented a family of lists ${\displaystyle L_{k}}$ with ${\displaystyle RFF(L_{k})=(5/3)\mathrm {OPT} (L_{k})+1/3}$ for ${\displaystyle \mathrm {OPT} (L_{k})=6k+1}$.
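
The classification rules above can be written down directly; this sketch only assigns classes (the full algorithm additionally runs First-Fit within each bin class).

```python
def rff_classes(sizes, m=7):
    """Return the RFF bin class (1-4) for each item, in input order.
    m is the fixed integer from {6, 7, 8, 9}; every (mk)-th B2-piece
    is diverted to class 1."""
    classes, b2_count = [], 0
    for s in sizes:
        if s > 1/2:                    # A-piece
            classes.append(1)
        elif s > 2/5:                  # B1-piece
            classes.append(2)
        elif s > 1/3:                  # B2-piece
            b2_count += 1
            classes.append(1 if b2_count % m == 0 else 3)
        else:                          # X-piece
            classes.append(4)
    return classes
```

With ${\displaystyle m=6}$, for instance, the sixth ${\displaystyle B_{2}}$-piece in a row is sent to class 1 rather than class 3.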

### Harmonic-k

The Harmonic-k algorithm partitions the interval of sizes ${\displaystyle (0,1]}$ harmonically into ${\displaystyle k-1}$ pieces ${\displaystyle I_{j}:=(1/(j+1),1/j]}$ for ${\displaystyle 1\leq j<k}$ and ${\displaystyle I_{k}:=(0,1/k]}$ such that ${\displaystyle \bigcup _{j=1}^{k}I_{j}=(0,1]}$. An item ${\displaystyle i\in L}$ is called an ${\displaystyle I_{j}}$-item if ${\displaystyle s(i)\in I_{j}}$. The algorithm divides the set of empty bins into ${\displaystyle k}$ infinite classes ${\displaystyle B_{j}}$ for ${\displaystyle 1\leq j\leq k}$, one bin type for each item type. A bin of type ${\displaystyle B_{j}}$ is only used to pack items of type ${\displaystyle j}$. Each bin of type ${\displaystyle B_{j}}$ for ${\displaystyle 1\leq j<k}$ can contain exactly ${\displaystyle j}$ ${\displaystyle I_{j}}$-items. The algorithm now acts as follows: If the next item ${\displaystyle i\in L}$ is an ${\displaystyle I_{j}}$-item for ${\displaystyle 1\leq j<k}$, the item is placed in the first (only open) ${\displaystyle B_{j}}$ bin that contains fewer than ${\displaystyle j}$ pieces, or a new one is opened if no such bin exists. If the next item ${\displaystyle i\in L}$ is an ${\displaystyle I_{k}}$-item, the algorithm places it into the bins of type ${\displaystyle B_{k}}$ using Next-Fit.
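
A compact sketch of this procedure: only bin counts and the fill state of the single open bin per class are tracked, which is exactly why the algorithm is bounded-space.

```python
import math

def harmonic_k(items, k, cap=1.0):
    """Harmonic-k: an I_j-item (size in (cap/(j+1), cap/j], j < k) goes
    into the open type-j bin, j items per bin; I_k-items (size <= cap/k)
    are packed by Next-Fit into type-k bins. Returns the bin count."""
    bins = 0
    in_open = [0] * k     # items in the open type-j bin, for j < k
    nf_load = None        # load of the open type-k bin, None if closed
    for size in items:
        j = min(int(cap // size), k)   # size in (cap/(j+1), cap/j]  =>  class j
        if j < k:
            if in_open[j] == 0:
                bins += 1              # open a fresh type-j bin
            in_open[j] = (in_open[j] + 1) % j   # bin closes after j items
        else:                          # I_k-item: Next-Fit
            if nf_load is not None and nf_load + size <= cap:
                nf_load += size
            else:
                bins += 1
                nf_load = size
    return bins
```

For example, with ${\displaystyle k=3}$ the list `[0.6, 0.4, 0.4, 0.2, 0.2, 0.2]` needs three bins: one for 0.6, one for the two 0.4-items, and one Next-Fit bin for the small items.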

This algorithm was first described by Lee and Lee.[17] It has a time complexity of ${\displaystyle {\mathcal {O}}(|L|\log(|L|))}$ and at each step, there are at most ${\displaystyle k}$ open bins that can potentially be used to place items, i.e., it is a k-bounded space algorithm. They also studied its asymptotic approximation ratio: they defined a sequence ${\displaystyle \sigma _{1}:=1}$, ${\displaystyle \sigma _{i+1}:=\sigma _{i}(\sigma _{i}+1)}$ for ${\displaystyle i\geq 1}$ and proved that for ${\displaystyle \sigma _{l}<k\leq \sigma _{l+1}}$ it holds that ${\displaystyle R_{Hk}^{\infty }\leq \sum _{i=1}^{l}1/\sigma _{i}+k/(\sigma _{l+1}(k-1))}$. For ${\displaystyle k\rightarrow \infty }$ it holds that ${\displaystyle R_{Hk}^{\infty }\approx 1.6910}$. Additionally, they presented a family of worst-case examples for which ${\displaystyle R_{Hk}^{\infty }=\sum _{i=1}^{l}1/\sigma _{i}+k/(\sigma _{l+1}(k-1))}$.

### Refined-Harmonic (RH)

The Refined-Harmonic algorithm combines ideas from the Harmonic-k algorithm with ideas from Refined-First-Fit. It places items larger than ${\displaystyle 1/3}$ similarly to Refined-First-Fit, while smaller items are placed using Harmonic-k. The intuition behind this strategy is to reduce the large waste in bins containing pieces that are only slightly larger than ${\displaystyle 1/2}$.

The algorithm classifies the items with regard to the following intervals: ${\displaystyle I_{1}:=(59/96,1]}$, ${\displaystyle I_{a}:=(1/2,59/96]}$, ${\displaystyle I_{2}:=(37/96,1/2]}$, ${\displaystyle I_{b}:=(1/3,37/96]}$, ${\displaystyle I_{j}:=(1/(j+1),1/j]}$, for ${\displaystyle j\in \{3,\dots ,k-1\}}$, and ${\displaystyle I_{k}:=(0,1/k]}$. The algorithm places the ${\displaystyle I_{j}}$-items as in Harmonic-k, while it follows a different strategy for the items in ${\displaystyle I_{a}}$ and ${\displaystyle I_{b}}$. There are four possibilities to pack ${\displaystyle I_{a}}$-items and ${\displaystyle I_{b}}$-items into bins.

• An ${\displaystyle I_{a}}$-bin contains only one ${\displaystyle I_{a}}$-item.
• An ${\displaystyle I_{b}}$-bin contains only one ${\displaystyle I_{b}}$-item.
• An ${\displaystyle I_{a}b}$-bin contains one ${\displaystyle I_{a}}$-item and one ${\displaystyle I_{b}}$-item.
• An ${\displaystyle I_{b}b}$-bin contains two ${\displaystyle I_{b}}$-items.

An ${\displaystyle I_{b}'}$-bin denotes a bin that is designated to contain a second ${\displaystyle I_{b}}$-item. The algorithm uses the counters N_a, N_b, N_ab, N_bb, and N_b' for the numbers of the corresponding bins in the solution, and additionally maintains N_c := N_b + N_ab.

Algorithm Refined-Harmonic-k for a list ${\displaystyle L=(i_{1},\dots ,i_{n})}$:

```
1. N_a = N_b = N_ab = N_bb = N_b' = N_c = 0
2. If i_j is an I_k-piece
       then use algorithm Harmonic-k to pack it
3. else if i_j is an I_a-item
       then if N_b >= 1
           then pack i_j into any I_b-bin; N_b--; N_ab++
           else place i_j in a new (empty) bin; N_a++
4. else if i_j is an I_b-item
       then if N_b' = 1
           then place i_j into the I_b'-bin; N_b' = 0; N_bb++
5.         else if N_bb <= 3N_c
           then place i_j in a new bin and designate it as an I_b'-bin; N_b' = 1
           else if N_a != 0
               then place i_j into any I_a-bin; N_a--; N_ab++; N_c++
               else place i_j in a new bin; N_b++; N_c++
```

This algorithm was first described by Lee and Lee.[17] They proved that for ${\displaystyle k=20}$ it holds that ${\displaystyle R_{RH}^{\infty }\leq 373/228}$.

## Offline algorithms

| Algorithm | Approximation guarantee | Worst case instance |
|---|---|---|
| First-fit-decreasing (FFD) | ${\displaystyle FFD(I)\leq 11/9\mathrm {OPT} (I)+6/9}$[25] | ${\displaystyle FFD(I)=11/9\mathrm {OPT} (I)+6/9}$[25] |
| Modified-first-fit-decreasing (MFFD) | ${\displaystyle MFFD(I)\leq (71/60)\mathrm {OPT} (I)+1}$[26] | ${\displaystyle R_{MFFD}^{\infty }\geq 71/60}$[27] |
| Hoberg and Rothvoss[28] | ${\displaystyle HB(I)\leq OPT(I)+O(\log {OPT(I)})}$ | |

### First Fit Decreasing (FFD)

This algorithm works analogously to First-Fit; however, before placing the items, it sorts them in non-increasing order of size. It can be implemented with a running time of at most ${\displaystyle O(n\log(n))}$.

In 1973, D.S. Johnson proved in his doctoral thesis[13] that ${\displaystyle FFD(I)\leq 11/9\mathrm {OPT} (I)+4}$. In 1985, B.S. Baker[29] gave a slightly simpler proof and showed that the additive constant is at most 3. Yue Minyi[30] proved that ${\displaystyle FFD(I)\leq 11/9\mathrm {OPT} (I)+1}$ in 1991 and, in 1997, improved this analysis to ${\displaystyle FFD(I)\leq 11/9\mathrm {OPT} (I)+7/9}$ together with Li Rongheng.[31] In 2007, György Dósa[25] proved the tight bound ${\displaystyle FFD(I)\leq 11/9\mathrm {OPT} (I)+6/9}$ and presented an example for which ${\displaystyle FFD(I)=11/9\mathrm {OPT} (I)+6/9}$.

The lower bound example given by Dósa[25] is the following: Consider the two bin configurations ${\displaystyle B_{1}:=\{1/2+\varepsilon ,1/4+\varepsilon ,1/4-2\varepsilon \}}$ and ${\displaystyle B_{2}:=\{1/4+2\varepsilon ,1/4+2\varepsilon ,1/4-2\varepsilon ,1/4-2\varepsilon \}}$. If there are 4 copies of ${\displaystyle B_{1}}$ and 2 copies of ${\displaystyle B_{2}}$ in the optimal solution, FFD will compute the following bins: 4 bins with configuration ${\displaystyle \{1/2+\varepsilon ,1/4+2\varepsilon \}}$, one bin with configuration ${\displaystyle \{1/4+\varepsilon ,1/4+\varepsilon ,1/4+\varepsilon \}}$, one bin with configuration ${\displaystyle \{1/4+\varepsilon ,1/4-2\varepsilon ,1/4-2\varepsilon ,1/4-2\varepsilon \}}$, one bin with configuration ${\displaystyle \{1/4-2\varepsilon ,1/4-2\varepsilon ,1/4-2\varepsilon ,1/4-2\varepsilon \}}$, and one final bin with configuration ${\displaystyle \{1/4-2\varepsilon \}}$, i.e. 8 bins total, while the optimum has only 6 bins. Therefore, the upper bound is tight, because ${\displaystyle 11/9\cdot 6+6/9=72/9=8}$. This example can be extended to all sizes of ${\displaystyle OPT(I)}$.[25]
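
Dósa's instance can be checked numerically. The sketch below implements plain FFD and reproduces the 8-versus-6 gap, using exact arithmetic via `fractions` with ${\displaystyle \varepsilon =1/100}$.

```python
from fractions import Fraction

def ffd(items, cap=1):
    """First-Fit-Decreasing: sort by size, then first-fit. Returns bin count."""
    bins = []  # bins[i] is the current load of bin i
    for s in sorted(items, reverse=True):
        for i, load in enumerate(bins):
            if load + s <= cap:
                bins[i] = load + s
                break
        else:
            bins.append(s)
    return len(bins)

eps = Fraction(1, 100)
B1 = [Fraction(1, 2) + eps, Fraction(1, 4) + eps, Fraction(1, 4) - 2 * eps]
B2 = [Fraction(1, 4) + 2 * eps] * 2 + [Fraction(1, 4) - 2 * eps] * 2
instance = 4 * B1 + 2 * B2   # optimal packing: 6 exactly-full bins
```

Each configuration sums to exactly 1, so the optimum is 6 by construction, while `ffd(instance)` returns 8 = 11/9 · 6 + 6/9.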

### Modified First Fit Decreasing (MFFD)

Modified first fit decreasing (MFFD)[27] improves on FFD for items larger than half a bin by classifying items into four size classes: large, medium, small, and tiny, corresponding to items with size > 1/2 bin, > 1/3 bin, > 1/6 bin, and smaller, respectively. It then proceeds through five phases:

1. Allot a bin for each large item, ordered largest to smallest.
2. Proceed forward through the bins. On each: If the smallest remaining medium item does not fit, skip this bin. Otherwise, place the largest remaining medium item that fits.
3. Proceed backward through those bins that do not contain a medium item. On each: If the two smallest remaining small items do not fit, skip this bin. Otherwise, place the smallest remaining small item and the largest remaining small item that fits.
4. Proceed forward through all bins. If the smallest remaining item of any size class does not fit, skip this bin. Otherwise, place the largest item that fits and stay on this bin.
5. Use FFD to pack the remaining items into new bins.
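
The five phases can be turned into executable form. The sketch below is one direct reading of the phase descriptions (the tie-breaking details and the unit capacity are my assumptions), checked only for feasibility on a small instance, not validated against a reference implementation.

```python
def mffd(items, cap=1):
    """One reading of the five MFFD phases described above.
    Returns a list of bins, each a list of item sizes."""
    large  = sorted((s for s in items if s > cap / 2), reverse=True)
    medium = sorted((s for s in items if cap / 3 < s <= cap / 2), reverse=True)
    small  = sorted((s for s in items if cap / 6 < s <= cap / 3), reverse=True)
    tiny   = sorted((s for s in items if s <= cap / 6), reverse=True)

    bins = [[s] for s in large]            # phase 1: one bin per large item
    got_medium = [False] * len(bins)

    for i in range(len(bins)):             # phase 2: forward, add one medium
        if not medium:
            break
        room = cap - sum(bins[i])
        if medium[-1] > room:              # smallest medium doesn't fit: skip
            continue
        j = next(j for j, s in enumerate(medium) if s <= room)
        bins[i].append(medium.pop(j))      # largest medium that fits
        got_medium[i] = True

    for i in reversed(range(len(bins))):   # phase 3: backward, add two smalls
        if got_medium[i] or len(small) < 2:
            continue
        room = cap - sum(bins[i])
        if small[-1] + small[-2] > room:   # two smallest don't fit: skip
            continue
        bins[i].append(small.pop())        # smallest remaining small
        room = cap - sum(bins[i])
        j = next((j for j, s in enumerate(small) if s <= room), None)
        if j is not None:
            bins[i].append(small.pop(j))   # largest small that fits

    rest = sorted(medium + small + tiny, reverse=True)
    for i in range(len(bins)):             # phase 4: fill with largest that fits
        while rest:
            room = cap - sum(bins[i])
            if rest[-1] > room:            # smallest remaining doesn't fit
                break
            j = next(j for j, s in enumerate(rest) if s <= room)
            bins[i].append(rest.pop(j))

    new_bins = []
    for s in rest:                         # phase 5: FFD into new bins
        for b in new_bins:
            if sum(b) + s <= cap:
                b.append(s)
                break
        else:
            new_bins.append([s])
    return bins + new_bins
```

Exact rationals keep the boundary comparisons honest; for example, `mffd([...], cap=Fraction(1))` with two large, one medium, two small, and two tiny items packs everything into three feasible bins.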

This algorithm was first studied by Johnson and Garey[27] in 1985, where they proved that ${\displaystyle MFFD(I)\leq (71/60)\mathrm {OPT} (I)+(31/6)}$. This bound was improved in the year 1995 by Yue and Zhang[26] who proved that ${\displaystyle MFFD(I)\leq (71/60)\mathrm {OPT} (I)+1}$.

### Asymptotic approximation schemes

Fernandez de la Vega and Lueker[32] presented the first asymptotic approximation scheme for bin packing. For every ${\displaystyle \varepsilon >0}$, their algorithm finds a solution of size at most ${\displaystyle (1+\varepsilon )\mathrm {OPT} +{\mathcal {O}}(1/\varepsilon ^{2})}$ and runs in time ${\displaystyle {\mathcal {O}}(n\log(1/\varepsilon ))+{\mathcal {O}}_{\varepsilon }(1)}$, where ${\displaystyle {\mathcal {O}}_{\varepsilon }(1)}$ denotes a function depending only on ${\displaystyle 1/\varepsilon }$.

Karmarkar and Karp[33] improved the time complexity of this algorithm to polynomial in ${\displaystyle n}$ and ${\displaystyle 1/\varepsilon }$. Their algorithm finds a solution with size at most ${\displaystyle \mathrm {OPT} +{\mathcal {O}}(\log ^{2}(OPT))}$.

Rothvoss[34] presented an algorithm that generates a solution with size at most ${\displaystyle \mathrm {OPT} +O(\log(\mathrm {OPT} )\cdot \log \log(\mathrm {OPT} ))}$.

Hoberg and Rothvoss[35] improved this algorithm to generate a solution with size at most ${\displaystyle \mathrm {OPT} +O(\log(\mathrm {OPT} ))}$. The algorithm is randomized, and its running-time is polynomial in the total number of items.

## Exact algorithm

Martello and Toth[36] developed an exact algorithm for the 1-D bin-packing problem, called MTP. A faster alternative is the Bin Completion algorithm proposed by Korf in 2002[37] and later improved.[38]

A further improvement was presented by Schreiber and Korf in 2013.[39] The new Improved Bin Completion algorithm is shown to be up to five orders of magnitude faster than Bin Completion on non-trivial problems with 100 items, and outperforms the BCP (branch-and-cut-and-price) algorithm by Belov and Scheithauer on problems that have fewer than 20 bins as the optimal solution. Which algorithm performs best depends on problem properties like number of items, optimal number of bins, unused space in the optimal solution and value precision.
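
For intuition only — this is neither MTP nor Bin Completion — a plain branch-and-bound exact solver fits in a few lines: each item is tried in every open bin (skipping bins with equal loads, which are symmetric) or a new bin, and branches that cannot beat the incumbent are pruned.

```python
def pack_exact(items, cap=1.0):
    """Minimum number of bins, by depth-first branch and bound.
    Exponential time; only suitable for small instances."""
    items = sorted(items, reverse=True)  # big items first prunes faster
    best = len(items)                    # trivial upper bound: 1 bin per item

    def search(i, bins):
        nonlocal best
        if len(bins) >= best:            # cannot improve on the incumbent
            return
        if i == len(items):
            best = len(bins)
            return
        s = items[i]
        seen = set()
        for j in range(len(bins)):
            load = bins[j]
            if load + s <= cap and load not in seen:  # skip symmetric bins
                seen.add(load)
                bins[j] = load + s
                search(i + 1, bins)
                bins[j] = load
        bins.append(s)                   # or open a new bin
        search(i + 1, bins)
        bins.pop()

    search(0, [])
    return best
```

For example, `pack_exact([0.5, 0.5, 0.5, 0.5, 0.3, 0.3, 0.3])` returns the optimum of 3 bins.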

## Implementations

• The binpacking package contains greedy algorithms in Python for solving two typical bin packing problems: (i) packing items into a constant number of bins, (ii) packing items into a low number of bins of constant size.[40]
• The OR-tools package contains bin packing algorithms in C++, with wrappers in Python, C# and Java.

## Related problems

In the bin packing problem, the size of the bins is fixed and their number can be enlarged (but should be as small as possible).

In contrast, in the multiway number partitioning problem, the number of bins is fixed and their size can be enlarged. The objective is to find a partition in which the bin sizes are as nearly equal as possible (in the variant called the multiprocessor scheduling problem or minimum makespan problem, the goal is specifically to minimize the size of the largest bin).

In the inverse bin packing problem,[41] both the number of bins and their sizes are fixed, but the item sizes can be changed. The objective is to achieve the minimum perturbation to the item size vector so that all the items can be packed into the prescribed number of bins.

In the maximum resource bin packing problem,[42] the goal is to maximize the number of bins used, such that, for some ordering of the bins, no item in a later bin fits in an earlier bin. In a dual problem, the number of bins is fixed, and the goal is to minimize the total number or the total size of items placed into the bins, such that no remaining item fits into an unfilled bin.

In the bin covering problem, the bin size is bounded from below: the goal is to maximize the number of bins used such that the total size in each bin is at least a given threshold.
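A simple greedy approach to bin covering (often called dual next fit) closes the current bin as soon as its total reaches the threshold and starts a new one; leftover items cover no bin. A sketch:

```python
def dual_next_fit(sizes, threshold):
    """Greedy bin covering: accumulate items into the current bin until
    its total reaches the threshold, then close it and start a new one.
    Returns the list of covered (full) bins."""
    covered, current, total = [], [], 0
    for size in sizes:
        current.append(size)
        total += size
        if total >= threshold:
            covered.append(current)
            current, total = [], 0
    return covered

# Example: cover bins of threshold 10.
covered = dual_next_fit([6, 5, 4, 3, 2, 1], threshold=10)
```

Dual next fit covers at least half as many bins as an optimal covering in the worst case; better approximation ratios require more sophisticated algorithms.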

In the fair indivisible chore allocation problem (a variant of fair item allocation), the items represent chores, and each of several people assigns a different difficulty-value to each chore. The goal is to allocate to each person a set of chores with an upper bound on its total difficulty-value (thus each person corresponds to a bin). Many techniques from bin packing are used in this problem too.[43]

In the guillotine cutting problem, both the items and the "bins" are two-dimensional rectangles rather than one-dimensional numbers, and the items have to be cut from the bin using end-to-end cuts.

## References

1. ^ Korte, Bernhard; Vygen, Jens (2006). "Bin-Packing". Combinatorial Optimization: Theory and Algorithms. Algorithms and Combinatorics 21. Springer. pp. 426–441. doi:10.1007/3-540-29297-7_18. ISBN 978-3-540-25684-7.
2. ^ Barrington, David Mix (2006). "Bin Packing". Archived from the original on 2019-02-16. Retrieved 2016-02-27.
3. ^ Lewis 2009
4. ^ Sindelar, Sitaraman & Shenoy 2011, pp. 367–378
5. ^ a b c Garey, M. R.; Johnson, D. S. (1979). Victor Klee (ed.). Computers and Intractability: A Guide to the Theory of NP-Completeness. A Series of Books in the Mathematical Sciences. San Francisco, Calif.: W. H. Freeman and Co. pp. x+338. ISBN 0-7167-1045-5. MR 0519066.
6. ^ Martello & Toth 1990, p. 221
7. ^ Vazirani, Vijay V. (14 March 2013). Approximation Algorithms. Springer Berlin Heidelberg. p. 74. ISBN 978-3662045657.
8. ^ Yao, Andrew Chi-Chih (April 1980). "New Algorithms for Bin Packing". Journal of the ACM. 27 (2): 207–227. doi:10.1145/322186.322187. S2CID 7903339.
9. ^ Brown, Donna J. (1979). "A Lower Bound for On-Line One-Dimensional Bin Packing Algorithms" (PDF). Technical Report.
10. ^ Liang, Frank M. (1980). "A lower bound for on-line bin packing". Information Processing Letters. 10 (2): 76–79. doi:10.1016/S0020-0190(80)90077-0.
11. ^ van Vliet, André (1992). "An improved lower bound for on-line bin packing algorithms". Information Processing Letters. 43 (5): 277–284. doi:10.1016/0020-0190(92)90223-I.
12. ^ Balogh, János; Békési, József; Galambos, Gábor (July 2012). "New lower bounds for certain classes of bin packing algorithms". Theoretical Computer Science. 440–441: 1–13. doi:10.1016/j.tcs.2012.04.017.
13. ^ Johnson, David S (1973). "Near-optimal bin packing algorithms" (PDF). Massachusetts Institute of Technology.
14. ^ Gonzalez, Teofilo F. (23 May 2018). Handbook of approximation algorithms and metaheuristics. Volume 2 Contemporary and emerging applications. ISBN 9781498770156.
15. ^ a b c Dósa, György; Sgall, Jiri (2013). "First Fit bin packing: A tight analysis". 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013). Schloss Dagstuhl–Leibniz-Zentrum für Informatik. 20: 538–549. doi:10.4230/LIPIcs.STACS.2013.538.
16. ^ a b c Dósa, György; Sgall, Jirí (2014). "Optimal Analysis of Best Fit Bin Packing". Automata, Languages, and Programming – 41st International Colloquium (ICALP). Lecture Notes in Computer Science. 8572: 429–441. doi:10.1007/978-3-662-43948-7_36. ISBN 978-3-662-43947-0.
17. ^ Lee, C. C.; Lee, D. T. (July 1985). "A simple on-line bin-packing algorithm". Journal of the ACM. 32 (3): 562–572. doi:10.1145/3828.3833. S2CID 15441740.
18. ^ a b Ramanan, Prakash; Brown, Donna J; Lee, C.C; Lee, D.T (September 1989). "On-line bin packing in linear time". Journal of Algorithms. 10 (3): 305–326. doi:10.1016/0196-6774(89)90031-X. hdl:2142/74206.
19. ^ a b c Seiden, Steven S. (2002). "On the online bin packing problem". Journal of the ACM. 49 (5): 640–671. doi:10.1145/585265.585269. S2CID 14164016.
20. ^ Vazirani 2003, p. 74.
21. ^ a b Ullman, J. D. (1971). "The performance of a memory allocation algorithm". Technical Report 100 Princeton Univ.
22. ^ a b Garey, M. R; Graham, R. L; Ullman, J. D. (1972). "Worst-case analysis of memory allocation algorithms". Proceedings of the Fourth Annual ACM Symposium on Theory of Computing: 143–150. doi:10.1145/800152.804907. S2CID 26654056.
23. ^ a b Garey, M. R; Graham, R. L; Johnson, D. S; Yao, Andrew Chi-Chih (1976). "Resource constrained scheduling as generalized bin packing". Journal of Combinatorial Theory, Series A. 21 (3): 257–298. doi:10.1016/0097-3165(76)90001-7. ISSN 0097-3165.
24. ^ Xia, Binzhou; Tan, Zhiyi (August 2010). "Tighter bounds of the First Fit algorithm for the bin-packing problem". Discrete Applied Mathematics. 158 (15): 1668–1675. doi:10.1016/j.dam.2010.05.026.
25. ^ Dósa, György (2007). "The Tight Bound of First Fit Decreasing Bin-Packing Algorithm Is FFD(I) ≤ 11/9 OPT(I) + 6/9". Combinatorics, Algorithms, Probabilistic and Experimental Methodologies. ESCAPE. doi:10.1007/978-3-540-74450-4_1.
26. ^ a b Yue, Minyi; Zhang, Lei (July 1995). "A simple proof of the inequality MFFD(L) ≤ 71/60 OPT(L) + 1, ∀L for the MFFD bin-packing algorithm". Acta Mathematicae Applicatae Sinica. 11 (3): 318–330. doi:10.1007/BF02011198. S2CID 118263129.
27. ^ a b c Johnson, David S; Garey, Michael R (October 1985). "A 71/60 theorem for bin packing". Journal of Complexity. 1 (1): 65–106. doi:10.1016/0885-064X(85)90022-6.
28. ^ Hoberg, Rebecca; Rothvoss, Thomas (2017-01-01), "A Logarithmic Additive Integrality Gap for Bin Packing", Proceedings of the 2017 Annual ACM-SIAM Symposium on Discrete Algorithms, Proceedings, Society for Industrial and Applied Mathematics, pp. 2616–2625, doi:10.1137/1.9781611974782.172, ISBN 978-1-61197-478-2, retrieved 2020-11-22
29. ^ Baker, Brenda S. (1985). "A New Proof for the First-Fit Decreasing Bin-Packing Algorithm". J. Algorithms. 6 (1): 49–70. doi:10.1016/0196-6774(85)90018-5.
30. ^ Yue, Minyi (October 1991). "A simple proof of the inequality FFD (L) ≤ 11/9 OPT (L) + 1, ∀L for the FFD bin-packing algorithm". Acta Mathematicae Applicatae Sinica. 7 (4): 321–331. doi:10.1007/BF02009683. S2CID 189915733.
31. ^ Li, Rongheng; Yue, Minyi (August 1997). "The proof of FFD(L) ≤ 11/9 OPT(L) + 7/9". Chinese Science Bulletin. 42 (15): 1262–1265. Bibcode:1997ChSBu..42.1262L. doi:10.1007/BF02882754. S2CID 93280100.
32. ^ Fernandez de la Vega, W.; Lueker, G. S. (1981). "Bin packing can be solved within 1 + ε in linear time". Combinatorica. 1 (4): 349–355. doi:10.1007/BF02579456. ISSN 1439-6912. S2CID 10519631.
33. ^ Karmarkar, Narendra; Karp, Richard M. (November 1982). "An efficient approximation scheme for the one-dimensional bin-packing problem". 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982): 312–320. doi:10.1109/SFCS.1982.61. S2CID 18583908.
34. ^ Rothvoß, T. (2013-10-01). "Approximating Bin Packing within O(log OPT * Log Log OPT) Bins". 2013 IEEE 54th Annual Symposium on Foundations of Computer Science: 20–29. arXiv:1301.4010. doi:10.1109/FOCS.2013.11. ISBN 978-0-7695-5135-7. S2CID 15905063.
35. ^ Hoberg, Rebecca; Rothvoss, Thomas (2017-01-01), "A Logarithmic Additive Integrality Gap for Bin Packing", Proceedings of the 2017 Annual ACM-SIAM Symposium on Discrete Algorithms, Proceedings, Society for Industrial and Applied Mathematics, pp. 2616–2625, doi:10.1137/1.9781611974782.172, ISBN 978-1-61197-478-2, S2CID 1647463, retrieved 2021-02-10
36. ^ Martello & Toth 1990, pp. 237–240.
37. ^ Korf 2002
38. ^ R. E. Korf (2003), An improved algorithm for optimal bin packing. Proceedings of the International Joint Conference on Artificial Intelligence, (pp. 1252–1258)
39. ^ Schreiber & Korf 2013
40. ^ Vaccaro, Alessio (2020-11-13). "🧱 4 Steps to Easily Allocate Resources with Python & Bin Packing". Medium. Retrieved 2021-03-21.
41. ^ Chung, Yerim; Park, Myoung-Ju (2015-01-01). "Notes on inverse bin-packing problems". Information Processing Letters. 115 (1): 60–68. doi:10.1016/j.ipl.2014.09.005. ISSN 0020-0190.
42. ^ Boyar, Joan; Epstein, Leah; Favrholdt, Lene M.; Kohrt, Jens S.; Larsen, Kim S.; Pedersen, Morten M.; Wøhlk, Sanne (2006-10-11). "The maximum resource bin packing problem". Theoretical Computer Science. 362 (1): 127–139. doi:10.1016/j.tcs.2006.06.001. ISSN 0304-3975.
43. ^ Huang, Xin; Lu, Pinyan (2020-11-10). "An Algorithmic Framework for Approximating Maximin Share Allocation of Chores". arXiv:1907.04505 [cs.GT].

## Bibliography

1. Korf, Richard E. (2002), A new algorithm for optimal bin packing (PDF)
2. Vazirani, Vijay V. (2003), Approximation Algorithms, Berlin: Springer, ISBN 3-540-65367-8
3. Yue, Minyi (October 1991), "A simple proof of the inequality FFD (L) ≤ 11/9 OPT (L) + 1, ∀L for the FFD bin-packing algorithm", Acta Mathematicae Applicatae Sinica, 7 (4): 321–331, doi:10.1007/BF02009683, ISSN 0168-9673, S2CID 189915733
4. Dósa, György (2007). "The Tight Bound of First Fit Decreasing Bin-Packing Algorithm Is FFD(I) ≤ (11/9)OPT(I)+6/9". In Chen, Bo; Paterson, Mike; Zhang, Guochuan (eds.). Combinatorics, Algorithms, Probabilistic and Experimental Methodologies. Lecture Notes in Computer Science. 4614. Springer Berlin / Heidelberg. pp. 1–11. doi:10.1007/978-3-540-74450-4. ISBN 978-3-540-74449-8. ISSN 0302-9743.
5. Xia, Binzhou; Tan, Zhiyi (2010), "Tighter bounds of the First Fit algorithm for the bin-packing problem", Discrete Applied Mathematics, 158 (15): 1668–1675, doi:10.1016/j.dam.2010.05.026, ISSN 0166-218X
6. Garey, Michael R.; Johnson, David S. (1985), "A 71/60 theorem for bin packing", Journal of Complexity, 1: 65–106, doi:10.1016/0885-064X(85)90022-6
7. Yue, Minyi; Zhang, Lei (July 1995), "A simple proof of the inequality MFFD(L) ≤ 71/60 OPT(L) + 1, ∀L for the MFFD bin-packing algorithm", Acta Mathematicae Applicatae Sinica, 11 (3): 318–330, doi:10.1007/BF02011198, ISSN 0168-9673, S2CID 118263129
8. Fernandez de la Vega, W.; Lueker, G. S. (December 1981), "Bin packing can be solved within 1 + ε in linear time", Combinatorica, Springer Berlin / Heidelberg, 1 (4): 349–355, doi:10.1007/BF02579456, ISSN 0209-9683, S2CID 10519631
9. Lewis, R. (2009), "A General-Purpose Hill-Climbing Method for Order Independent Minimum Grouping Problems: A Case Study in Graph Colouring and Bin Packing" (PDF), Computers and Operations Research, 36 (7): 2295–2310, doi:10.1016/j.cor.2008.09.004
10. Martello, Silvano; Toth, Paolo (1990), "Bin-packing problem" (PDF), Knapsack Problems: Algorithms and Computer Implementations, Chichester, UK: John Wiley and Sons, ISBN 0471924202
11. Michael R. Garey and David S. Johnson (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman. ISBN 0-7167-1045-5. A4.1: SR1, p. 226.
12. David S. Johnson, Alan J. Demers, Jeffrey D. Ullman, M. R. Garey, Ronald L. Graham. Worst-Case Performance Bounds for Simple One-Dimensional Packing Algorithms. SICOMP, Volume 3, Issue 4. 1974.
13. Lodi A., Martello S., Monaci, M., Vigo, D. (2010) "Two-Dimensional Bin Packing Problems". In V.Th. Paschos (Ed.), Paradigms of Combinatorial Optimization, Wiley/ISTE, pp. 107–129
14. Dósa, György; Sgall, Jiří (2013). "First Fit bin packing: A tight analysis". 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013). Dagstuhl, Germany. pp. 538–549. ISBN 978-3-939897-50-7.
15. Benkő A., Dósa G., Tuza Z. (2010) "Bin Packing/Covering with Delivery, Solved with the Evolution of Algorithms," Proceedings 2010 IEEE 5th International Conference on Bio-Inspired Computing: Theories and Applications, BIC-TA 2010, art. no. 5645312, pp. 298–302.
16. Sindelar, Michael; Sitaraman, Ramesh; Shenoy, Prashant (2011), "Sharing-Aware Algorithms for Virtual Machine Colocation" (PDF), Proceedings of 23rd ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), San Jose, CA, June 2011: 367–378
17. Schreiber, Ethan L.; Korf, Richard E. (2013), Improved Bin Completion for Optimal Bin Packing and Number Partitioning, IJCAI '13, Beijing, China: AAAI Press, pp. 651–658, ISBN 978-1-57735-633-2