# Maximum disjoint set

In computational geometry, a maximum disjoint set (MDS) is a largest set of non-overlapping geometric shapes selected from a given set of candidate shapes.

Every set of non-overlapping shapes is an independent set in the intersection graph of the shapes. Therefore, the MDS problem is a special case of the maximum independent set (MIS) problem. Both problems are NP-complete, but finding an MDS may be easier than finding an MIS in two respects:

• For the general MIS problem, the best known exact algorithms are exponential. In some geometric intersection graphs, there are sub-exponential algorithms for finding an MDS.[1]
• The general MIS problem is hard to approximate and does not even admit a constant-factor polynomial-time approximation. In some geometric intersection graphs, there are polynomial-time approximation schemes (PTAS) for finding an MDS.

Finding an MDS is important in applications such as automatic label placement, VLSI circuit design, and cellular frequency division multiplexing.

The MDS problem can be generalized by assigning a different weight to each shape and searching for a disjoint set with a maximum total weight.

In the following text, MDS(C) denotes the maximum disjoint set in a set C.

## Greedy algorithms

Given a set C of shapes, an approximation to MDS(C) can be found by the following greedy algorithm:

• INITIALIZATION: Initialize an empty set, S.
• SEARCH: For every shape x in C:
1. Calculate N(x) – the subset of all shapes in C that intersect x (including x itself).
2. Calculate a maximum disjoint set in this subset: MDS(N(x)).
• Select a shape x for which |MDS(N(x))| is minimized, and add it to S.
• Remove N(x) from C.
• If C is not empty, go back to SEARCH.
• END: return the set S.

For every shape x that we add to S, we lose the shapes in N(x): they are intersected by x and thus cannot be added to S later on. However, some of these shapes intersect each other, so in any case not all of them can appear in the optimal solution MDS(C). The largest subset of shapes that could all appear in the optimal solution is MDS(N(x)). Therefore, selecting an x that minimizes |MDS(N(x))| minimizes the loss from adding x to S.

In particular, if we can guarantee that at every step there is an x for which |MDS(N(x))| is bounded by a constant (say, M), then this greedy algorithm yields an M-factor approximation, as we can guarantee that:

${\displaystyle |S|\geq {\frac {|MDS(C)|}{M}}}$
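As an illustration, the greedy framework can be sketched in Python. This is a didactic sketch, not a practical algorithm: the shapes are taken to be disks given as (x, y, radius) tuples, and the inner MDS computation is done by brute force, which is only feasible when each neighborhood N(x) is small (all function names here are illustrative).

```python
from itertools import combinations

def intersects(a, b):
    """Closed disks (x, y, r) intersect iff center distance <= sum of radii."""
    (x1, y1, r1), (x2, y2, r2) = a, b
    return (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (r1 + r2) ** 2

def exact_mds(shapes):
    """Brute-force maximum disjoint set -- exponential, only for tiny inputs."""
    for k in range(len(shapes), 0, -1):
        for subset in combinations(shapes, k):
            if all(not intersects(a, b) for a, b in combinations(subset, 2)):
                return list(subset)
    return []

def greedy_mds(shapes):
    """The greedy framework: repeatedly add the shape x whose neighborhood
    N(x) contains the smallest maximum disjoint set, then discard N(x)."""
    C, S = list(shapes), []
    while C:
        # loss of x = |MDS(N(x))|: the most optimal shapes we can lose by picking x
        x = min(C, key=lambda s: len(exact_mds([y for y in C if intersects(s, y)])))
        S.append(x)
        C = [y for y in C if not intersects(x, y)]
    return S
```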

Such an upper bound M exists for several interesting cases:

### 1-dimensional intervals: exact polynomial algorithm

When C is a set of intervals on a line, M=1, and thus the greedy algorithm finds the exact MDS. To see this, assume w.l.o.g. that the intervals are vertical, and let x be the interval with the highest bottom endpoint. All other intervals intersected by x must cross its bottom endpoint. Therefore, all intervals in N(x) intersect each other, and MDS(N(x)) has a size of at most 1 (see figure).

Therefore, in the 1-dimensional case, the MDS can be found exactly in time O(n log n):[2]

1. Sort the intervals in descending order of their bottom endpoints (this takes time O(n log n)).
2. Take the first remaining interval (the one with the highest bottom endpoint), add it to the output, and delete all intervals intersecting it.
3. Repeat until no intervals remain.

This algorithm is analogous to the earliest deadline first scheduling solution to the interval scheduling problem.
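A minimal Python sketch of this exact 1-dimensional algorithm, stated for horizontal intervals (the mirror image of the vertical formulation above): picking the interval with the lowest top endpoint corresponds to the classic earliest-finish-time greedy.

```python
def interval_mds(intervals):
    """Exact maximum disjoint set of closed intervals (lo, hi) in O(n log n):
    scan by increasing upper endpoint and keep every interval that starts
    strictly after the last kept interval ends."""
    chosen, last_hi = [], float("-inf")
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if lo > last_hi:          # disjoint from every interval chosen so far
            chosen.append((lo, hi))
            last_hi = hi
    return chosen
```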

In contrast to the 1-dimensional case, in two or more dimensions the MDS problem becomes NP-complete, so (unless P = NP) one must settle for either exact algorithms with super-polynomial running time or polynomial-time approximation algorithms.

### Fat shapes: constant-factor approximations

When C is a set of unit disks, M=3,[3] because the leftmost disk (the disk whose center has the smallest x coordinate) intersects at most 3 other disjoint disks (see figure). Therefore the greedy algorithm yields a 3-approximation, i.e., it finds a disjoint set with a size of at least MDS(C)/3.
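By this argument, a simplified greedy that always picks the leftmost remaining disk loses at most 3 optimal disks per step, and therefore also yields a 3-approximation. A Python sketch under the assumption that tangent disks count as disjoint (the function name is illustrative):

```python
def leftmost_greedy(disks):
    """3-approximate MDS for unit (radius-1) disks given as (x, y) centers:
    take the disk with the leftmost center, discard everything overlapping
    it (center distance < 2), and repeat."""
    remaining = sorted(disks)              # lexicographic: leftmost center first
    S = []
    while remaining:
        cx, cy = remaining[0]
        S.append((cx, cy))
        remaining = [(x, y) for (x, y) in remaining[1:]
                     if (x - cx) ** 2 + (y - cy) ** 2 >= 4.0]
    return S
```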

Similarly, when C is a set of axis-parallel unit squares, M=2.

When C is a set of arbitrary-size disks, M=5, because the disk with the smallest radius intersects at most 5 other disjoint disks (see figure).

Similarly, when C is a set of arbitrary-size axis-parallel squares, M=4.

Other constants can be calculated for other regular polygons.[3]

## Divide-and-conquer algorithms

The most common approach to finding a MDS is divide-and-conquer. A typical algorithm in this approach looks like the following:

1. Divide the given set of shapes into two or more subsets, such that the shapes in each subset cannot overlap the shapes in other subsets because of geometric considerations.
2. Recursively find the MDS in each subset separately.
3. Return the union of the MDSs from all subsets.

The main challenge with this approach is to find a geometric way to divide the set into subsets. This may require discarding a small number of shapes that do not fit into any one of the subsets, as explained in the following subsections.

### Axis-parallel rectangles with the same height: 2-approximation

Let C be a set of n axis-parallel rectangles in the plane, all with the same height H but with varying lengths. The following algorithm finds a disjoint set with a size of at least |MDS(C)|/2 in time O(n log n):[2]

• Draw m horizontal lines, such that:
1. The separation between two lines is strictly more than H.
2. Each line intersects at least one rectangle (hence m ≤ n).
3. Each rectangle is intersected by exactly one line.
• Since the height of all rectangles is H and the lines are more than H apart, no rectangle can be intersected by more than one line. Therefore the lines partition the set of rectangles into m subsets (${\displaystyle R_{1},\ldots ,R_{m}}$) – each subset contains the rectangles intersected by a single line.
• For each subset ${\displaystyle R_{i}}$, compute an exact MDS ${\displaystyle M_{i}}$ using the one-dimensional greedy algorithm (see above).
• By construction, the rectangles in ${\displaystyle R_{i}}$ can intersect only rectangles in ${\displaystyle R_{i+1}}$ or in ${\displaystyle R_{i-1}}$. Therefore, each of the following two unions is a disjoint set:
• Union of odd MDSs: ${\displaystyle M_{1}\cup M_{3}\cup \cdots }$
• Union of even MDSs: ${\displaystyle M_{2}\cup M_{4}\cup \cdots }$
• Return the larger of these two unions. Its size must be at least |MDS(C)|/2.
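The steps above can be sketched in Python, with rectangles given as (x1, x2, ybottom) triples of common height H (the representation and helper names are illustrative):

```python
def same_height_2approx(rects, H):
    """2-approximate MDS for axis-parallel rectangles of common height H.
    Stab the rectangles with horizontal lines more than H apart, solve each
    stabbed group exactly as a 1-D interval problem, and keep the better of
    the odd/even group unions."""
    # 1. Greedy stabbing lines: each line sits at the top of the lowest
    #    unstabbed rectangle, so consecutive lines are more than H apart
    #    and every rectangle is stabbed exactly once.
    remaining = sorted(rects, key=lambda r: r[2])        # by bottom edge
    groups = []
    while remaining:
        line = remaining[0][2] + H
        groups.append([r for r in remaining if r[2] <= line <= r[2] + H])
        remaining = [r for r in remaining if not (r[2] <= line <= r[2] + H)]
    # 2. Exact 1-D MDS inside each group (earliest-right-endpoint greedy).
    mdss = []
    for g in groups:
        chosen, last = [], float("-inf")
        for r in sorted(g, key=lambda r: r[1]):
            if r[0] > last:
                chosen.append(r)
                last = r[1]
        mdss.append(chosen)
    # 3. Groups two lines apart are more than 2H apart and cannot interact,
    #    so each parity class forms a disjoint set; return the larger one.
    odd = [r for m in mdss[0::2] for r in m]
    even = [r for m in mdss[1::2] for r in m]
    return odd if len(odd) >= len(even) else even
```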

### Axis-parallel rectangles with the same height: PTAS

Let C be a set of n axis-parallel rectangles in the plane, all with the same height but with varying lengths. There is an algorithm that finds a disjoint set with a size of at least |MDS(C)|/(1 + 1/k) in time ${\displaystyle O(n^{2k-1})}$, for every constant k > 1.[2]

The algorithm is an improvement of the above-mentioned 2-approximation, by combining dynamic programming with the shifting technique of Hochbaum and Maass.[4]

This algorithm can be generalized to d dimensions. If the labels have the same size in all dimensions except one, it is possible to find a similar approximation by applying dynamic programming along one of the dimensions. This also reduces the time to ${\displaystyle n^{O(1/\varepsilon )}}$.[5]

### Axis-parallel rectangles: Logarithmic-factor approximation

Let C be a set of n axis-parallel rectangles in the plane. The following algorithm finds a disjoint set with a size of at least ${\displaystyle {\frac {|MDS(C)|}{\log {n}}}}$ in time ${\displaystyle O(n\log {n})}$:[2]

• INITIALIZATION: sort the horizontal edges of the given rectangles by their y-coordinate, and the vertical edges by their x-coordinate (this step takes time O(n log n)).
• STOP CONDITION: If there are at most 2 shapes, compute the MDS directly and return.
• RECURSIVE PART:
1. Let ${\displaystyle x_{\mathrm {med} }}$ be the median x-coordinate.
2. Partition the input rectangles into three groups according to their relation to the line ${\displaystyle x=x_{\mathrm {med} }}$: those entirely to its left (${\displaystyle R_{\mathrm {left} }}$), those entirely to its right (${\displaystyle R_{\mathrm {right} }}$), and those intersected by it (${\displaystyle R_{\mathrm {int} }}$). By construction, the cardinalities of ${\displaystyle R_{\mathrm {left} }}$ and ${\displaystyle R_{\mathrm {right} }}$ are at most n/2.
3. Recursively compute an approximate MDS in ${\displaystyle R_{\mathrm {left} }}$ (${\displaystyle M_{\mathrm {left} }}$) and in ${\displaystyle R_{\mathrm {right} }}$ (${\displaystyle M_{\mathrm {right} }}$), and calculate their union. By construction, the rectangles in ${\displaystyle M_{\mathrm {left} }}$ and ${\displaystyle M_{\mathrm {right} }}$ are all disjoint, so ${\displaystyle M_{\mathrm {left} }\cup M_{\mathrm {right} }}$ is a disjoint set.
4. Compute an exact MDS in ${\displaystyle R_{\mathrm {int} }}$ (${\displaystyle M_{\mathrm {int} }}$). Since all rectangles in ${\displaystyle R_{\mathrm {int} }}$ intersect a single vertical line ${\displaystyle x=x_{\mathrm {med} }}$, this computation is equivalent to finding an MDS from a set of intervals, and can be solved exactly in time O(n log n) (see above).
• Return either ${\displaystyle M_{\mathrm {left} }\cup M_{\mathrm {right} }}$ or ${\displaystyle M_{\mathrm {int} }}$ – whichever of them is larger.

It is provable by induction that, at the last step, either ${\displaystyle M_{\mathrm {left} }\cup M_{\mathrm {right} }}$ or ${\displaystyle M_{\mathrm {int} }}$ has a cardinality of at least ${\displaystyle {\frac {|MDS(C)|}{\log {n}}}}$.
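A Python sketch of the recursion, with rectangles given as (x1, y1, x2, y2) tuples (the names and representation are illustrative, and the tiny base case is handled directly):

```python
def disjoint(a, b):
    """Axis-parallel rectangles (x1, y1, x2, y2) sharing no common point."""
    return a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1]

def log_approx(rects):
    """Divide and conquer on the median x-coordinate; returns a disjoint
    set of size at least |MDS| / log n."""
    if len(rects) <= 2:
        if len(rects) == 2 and not disjoint(rects[0], rects[1]):
            return [rects[0]]          # two overlapping rectangles: keep one
        return list(rects)
    xs = sorted(x for r in rects for x in (r[0], r[2]))
    x_med = xs[len(xs) // 2]           # median of all vertical edges
    left = [r for r in rects if r[2] < x_med]
    right = [r for r in rects if r[0] > x_med]
    mid = [r for r in rects if r[0] <= x_med <= r[2]]
    # Rectangles strictly left and strictly right of the line never interact.
    sides = log_approx(left) + log_approx(right)
    # Everything in `mid` crosses x = x_med, so only y-extents matter:
    # exact 1-D MDS by the earliest-top-endpoint greedy.
    middle, last = [], float("-inf")
    for r in sorted(mid, key=lambda r: r[3]):
        if r[1] > last:
            middle.append(r)
            last = r[3]
    return sides if len(sides) >= len(middle) else middle
```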

The approximation factor was later improved to ${\displaystyle O(\log {\log {n}})}$,[6] and the algorithm was generalized to the case in which rectangles have different weights.[7]

### Axis-parallel rectangles: constant-factor approximation

For a long time, it was not known whether a constant-factor approximation exists for axis-parallel rectangles of different lengths and heights. It was conjectured that such an approximation could be found using guillotine cuts. In particular, if there exists a guillotine separation of axis-parallel rectangles in which ${\displaystyle \Omega (n)}$ rectangles are separated, then it can be used in a dynamic programming approach to find a constant-factor approximation to the MDS.[8]: sub.1.2

To date, it is not known whether such a guillotine separation exists. However, Joseph S. B. Mitchell has presented a constant-factor approximation algorithm using a special class of non-guillotine cuts.[9]

### Fat objects with identical sizes: PTAS

Let C be a set of n squares or circles of identical size. There is a polynomial-time approximation scheme for finding an MDS using a simple shifted-grid strategy. It finds a solution within (1 − ε) of the maximum in time ${\displaystyle n^{O(1/\varepsilon ^{2})}}$ and linear space.[4] The strategy generalizes to any collection of fat objects of roughly the same size (i.e., when the maximum-to-minimum size ratio is bounded by a constant).

### Fat objects with arbitrary sizes: PTAS

Let C be a set of n fat objects (e.g. squares or circles) of arbitrary sizes. There is a PTAS for finding an MDS based on multi-level grid alignment. It was discovered independently by two groups at approximately the same time, and is described here in two different ways.

#### Version 1

Version 1 finds a disjoint set with a size of at least ${\displaystyle (1-1/k)^{2}\cdot |MDS(C)|}$ in time ${\displaystyle n^{O(k^{2})}}$, for every constant k > 1:[10]

Scale the disks so that the smallest disk has diameter 1. Partition the disks into levels, based on the logarithm of their size: the j-th level contains all disks with diameter between ${\displaystyle (k+1)^{j}}$ and ${\displaystyle (k+1)^{j+1}}$, for j ≥ 0 (the smallest disk is in level 0).

For each level j, impose a grid on the plane that consists of lines that are ${\displaystyle (k+1)^{j+1}}$ apart from each other. By construction, every disk can intersect at most one horizontal line and one vertical line from its level.

For every r, s between 0 and k − 1, define D(r,s) as the subset of disks that are not intersected by any horizontal line whose index modulo k is r, nor by any vertical line whose index modulo k is s. By the pigeonhole principle, there is at least one pair (r,s) such that ${\displaystyle |\mathrm {MDS} (D(r,s))|\geq (1-{\frac {1}{k}})^{2}\cdot |\mathrm {MDS} |}$, i.e., we can search for an MDS within D(r,s) alone and miss only a small fraction of the disks in the optimal solution:

• For all k² possible values of (r,s) (0 ≤ r,s < k), compute MDS(D(r,s)) using dynamic programming.
• Return the largest of these k² sets.

#### Version 2


Version 2 finds a disjoint set with a size of at least ${\displaystyle (1-2/k)\cdot |MDS(C)|}$ in time ${\displaystyle n^{O(k)}}$, for every constant k > 1.[5]

The algorithm uses shifted quadtrees. The key concept of the algorithm is alignment to the quadtree grid. An object of size r is called k-aligned (where k ≥ 1 is a constant) if it is contained in a quadtree cell of size R ≤ kr.

By definition, a k-aligned object that intersects the boundary of a quadtree cell of size R must have a size of at least R/k (r > R/k). The boundary of a cell of size R can be covered by 4k squares of size R/k; hence the number of disjoint fat objects intersecting the boundary of that cell is at most 4kc, where c is a constant measuring the fatness of the objects.

Therefore, if all objects are fat and k-aligned, it is possible to find the exact maximum disjoint set in time ${\displaystyle n^{O(kc)}}$ using a divide-and-conquer algorithm. Start with a quadtree cell that contains all objects. Then recursively divide it into smaller quadtree cells, find the maximum in each smaller cell, and combine the results to get the maximum in the larger cell. Since the number of disjoint fat objects intersecting the boundary of every quadtree cell is bounded by 4kc, we can simply "guess" which objects intersect the boundary in the optimal solution, and then apply divide-and-conquer to the objects inside.

If almost all objects are k-aligned, we can just discard the objects that are not k-aligned and find a maximum disjoint set of the remaining objects in time ${\displaystyle n^{O(k)}}$. This results in a (1 − ε) approximation, where ε is the fraction of objects that are not k-aligned.

If most objects are not k-aligned, we can try to make them aligned by shifting the grid in multiples of (1/k,1/k). First, scale the objects so that they are all contained in the unit square. Then, consider k shifts of the grid: (0,0), (1/k,1/k), (2/k,2/k), ..., ((k − 1)/k,(k − 1)/k); i.e., for each j in {0,...,k − 1}, consider a shift of the grid by (j/k,j/k). It is possible to prove that every object will be 2k-aligned for at least k − 2 values of j. Now, for every j, discard the objects that are not 2k-aligned under the (j/k,j/k) shift, and find a maximum disjoint set of the remaining objects; call that set A(j). Call the real maximum disjoint set A*. Then:

${\displaystyle \sum _{j=0,\ldots ,k-1}{|A(j)|}\geq (k-2)|A*|}$

Therefore, the largest A(j) has a size of at least (1 − 2/k)|A*|. The algorithm returns the largest A(j); its approximation factor is (1 − 2/k), and its run time is ${\displaystyle n^{O(k)}}$. The factor can be made as close to 1 as desired by increasing k, so this is a PTAS.

Both versions can be generalized to d dimensions (with different approximation ratios) and to the weighted case.

## Geometric separator algorithms

Several divide-and-conquer algorithms are based on a certain geometric separator theorem. A geometric separator is a line or shape that separates a given set of shapes into two smaller subsets, such that the number of shapes lost during the division is relatively small. This allows both PTASs and sub-exponential exact algorithms, as explained below.

### Fat objects with arbitrary sizes: PTAS using geometric separators

Let C be a set of n fat objects of arbitrary sizes. The following algorithm finds a disjoint set with a size of at least ${\displaystyle (1-O(1/{\sqrt {b}}))\cdot |MDS(C)|}$ in time ${\displaystyle n^{O(b)}}$, for every constant b > 1.[5]

The algorithm is based on the following geometric separator theorem, which can be proved similarly to the proof of the existence of geometric separator for disjoint squares:

For every set C of fat objects, there is a rectangle that partitions C into three subsets of objects – Cinside, Coutside and Cboundary, such that:
• |MDS(Cinside)| ≤ a·|MDS(C)|
• |MDS(Coutside)| ≤ a·|MDS(C)|
• |MDS(Cboundary)| ≤ ${\displaystyle c\cdot {\sqrt {|MDS(C)|}}}$

where a and c are constants. If we could calculate MDS(C) exactly, we could make the constant a as low as 2/3 by a proper selection of the separator rectangle. But since we can only approximate MDS(C) by a constant factor, the constant a must be larger. Fortunately, a remains a constant independent of |C|.

This separator theorem makes it possible to build the following PTAS:

Select a constant b. Check all possible combinations of up to b + 1 labels.

• If MDS(C) has a size of at most b (i.e., no set of b + 1 labels is disjoint), then just return that MDS and exit. This step takes ${\displaystyle n^{O(b)}}$ time.
• Otherwise, use a geometric separator to separate C into two subsets, find the approximate MDS in Cinside and in Coutside separately, and return their union as the approximate MDS in C.

Let E(m) be the error of the above algorithm when the optimal MDS size is MDS(C) = m. When m ≤ b, the error is 0 because the maximum disjoint set is calculated exactly; when m > b, the error increases by at most ${\displaystyle c\cdot {\sqrt {m}}}$ – the number of labels intersected by the separator. The worst case for the algorithm is when the split in each step is in the maximum possible ratio, a:(1 − a). Therefore the error function satisfies the following recurrence relation:

${\displaystyle E(m)=0\ \ \ \ {\text{ if }}m\leq b}$
${\displaystyle E(m)=E(a\cdot m)+E((1-a)\cdot m)+c\cdot {\sqrt {m}}{\text{ if }}m>b}$

The solution to this recurrence is:

${\displaystyle E(m)={\frac {c}{{\sqrt {b}}({\sqrt {a}}+{\sqrt {1-a}}-1)}}\cdot m-{\frac {c}{{\sqrt {a}}+{\sqrt {1-a}}-1}}\cdot {\sqrt {m}}.}$

i.e., ${\displaystyle E(m)=O(m/{\sqrt {b}})}$. We can make the approximation factor as small as we want by a proper selection of b.

This PTAS is more space-efficient than the PTAS based on quadtrees, and can handle a generalization where the objects may slide, but it cannot handle the weighted case.

### Disks with a bounded size-ratio: exact sub-exponential algorithm

Let C be a set of n disks, such that the ratio between the largest radius and the smallest radius is at most r. The following algorithm finds MDS(C) exactly in time ${\displaystyle 2^{O(r\cdot {\sqrt {n}})}}$.[11]

The algorithm is based on a width-bounded geometric separator on the set Q of the centers of all disks in C. This separator theorem makes it possible to build the following exact algorithm:

• Find a separator line such that at most 2n/3 centers are to its right (Cright), at most 2n/3 centers are to its left (Cleft), and at most ${\displaystyle O(r\cdot {\sqrt {n}})}$ centers are at a distance of less than r/2 from the line (Cint).
• Consider all possible disjoint subsets of Cint; there are at most ${\displaystyle 2^{O(r\cdot {\sqrt {n}})}}$ such subsets. For each such subset, recursively compute the MDS of the disks in Cleft and in Cright that do not overlap it, and return the largest combined set found.

The run time of this algorithm satisfies the following recurrence relation:

${\displaystyle T(1)=1}$
${\displaystyle T(n)=2^{O(r\cdot {\sqrt {n}})}T\left({\frac {2n}{3}}\right){\text{ if }}n>1}$

The solution to this recurrence is:

${\displaystyle T(n)=2^{O(r\cdot {\sqrt {n}})}}$

## Local search algorithms

### Pseudo-disks: a PTAS

A pseudo-disks-set is a set of objects in which the boundaries of every pair of objects intersect at most twice. (Note that this definition relates to a whole collection, and does not say anything about the shapes of the specific objects in the collection). A pseudo-disks-set has a bounded union complexity, i.e., the number of intersection points on the boundary of the union of all objects is linear in the number of objects.

Let C be a pseudo-disks-set with n objects. The following local search algorithm finds a disjoint set of size at least ${\displaystyle (1-O({\frac {1}{\sqrt {b}}}))\cdot |MDS(C)|}$ in time ${\displaystyle O(n^{b+3})}$, for every integer constant ${\displaystyle b\geq 0}$:[12]

• INITIALIZATION: Initialize an empty set, ${\displaystyle S}$.
• SEARCH: Loop over all the subsets of ${\displaystyle C-S}$ whose size is between 1 and ${\displaystyle b+1}$. For each such subset X:
• Verify that X itself is independent (otherwise go to the next subset);
• Calculate the set Y of objects in S that intersect X.
• If ${\displaystyle |Y|<|X|}$, then remove Y from S and insert X: ${\displaystyle S:=S-Y+X}$.
• END: return the set S.

Every exchange in the search step increases the size of S by at least 1, and thus can happen at most n times.
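The local search can be sketched in Python for a generic intersection predicate (the names are illustrative; a disk predicate is included only for the usage example, and the brute-force subset enumeration is what keeps the run time polynomial for constant b):

```python
from itertools import combinations

def disks_intersect(a, b):
    """Closed disks (x, y, r) intersect iff center distance <= sum of radii."""
    (x1, y1, r1), (x2, y2, r2) = a, b
    return (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (r1 + r2) ** 2

def local_search_mds(objects, intersects, b=1):
    """(b+1)-exchange local search: while some independent subset X of at
    most b+1 unused objects conflicts with fewer than |X| members of S,
    swap X in.  Each swap grows S by at least 1, so there are at most
    n swaps in total."""
    S = []
    improved = True
    while improved:
        improved = False
        outside = [o for o in objects if o not in S]
        for size in range(1, b + 2):
            for X in combinations(outside, size):
                if any(intersects(p, q) for p, q in combinations(X, 2)):
                    continue                       # X itself is not independent
                Y = [s for s in S if any(intersects(s, x) for x in X)]
                if len(Y) < len(X):                # profitable exchange
                    S = [s for s in S if s not in Y] + list(X)
                    improved = True
                    break
            if improved:
                break
    return S
```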

The algorithm is very simple; the difficult part is to prove the approximation ratio.[12]

Let C be a pseudo-disks-set with n objects and union complexity u. Using linear programming relaxation, it is possible to find a disjoint set of size at least ${\displaystyle {\frac {n}{u}}\cdot |MDS(C)|}$. This is possible either with a randomized algorithm that has a high probability of success and run time ${\displaystyle O(n^{3})}$, or a deterministic algorithm with a slower run time (but still polynomial). This algorithm can be generalized to the weighted case.[12]