# Maximum coverage problem

The maximum coverage problem is a classical question in computer science, computational complexity theory, and operations research. It is widely taught in courses on approximation algorithms.

As input you are given several sets and a number ${\displaystyle k}$. The sets may have some elements in common. You must select at most ${\displaystyle k}$ of these sets such that the maximum number of elements are covered, i.e. the union of the selected sets has maximal size.

Formally, (unweighted) Maximum Coverage

Instance: A number ${\displaystyle k}$ and a collection of sets ${\displaystyle S=\{S_{1},S_{2},\ldots ,S_{m}\}}$.
Objective: Find a subset ${\displaystyle S^{'}\subseteq S}$ of sets, such that ${\displaystyle \left|S^{'}\right|\leq k}$ and the number of covered elements ${\displaystyle \left|\bigcup _{S_{i}\in S^{'}}{S_{i}}\right|}$ is maximized.
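The objective can be stated directly in code. The following brute-force sketch (illustrative only; the function name is invented here) enumerates every subcollection of at most ${\displaystyle k}$ sets, so it is exponential in ${\displaystyle m}$ and practical only for small instances:

```python
from itertools import combinations

def max_coverage_exact(sets, k):
    """Try every subcollection of at most k sets and return one whose
    union is largest, together with that union's size.  Exponential in
    the number of sets; for illustrating the objective, not for use."""
    best_cover, best_size = [], 0
    for r in range(1, k + 1):
        for combo in combinations(sets, r):
            covered = set().union(*combo)
            if len(covered) > best_size:
                best_cover, best_size = list(combo), len(covered)
    return best_cover, best_size
```

For example, with `sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}]` and `k = 2`, the optimum picks the first and last sets and covers all six elements.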

The maximum coverage problem is NP-hard, and cannot be approximated within ${\displaystyle 1-{\frac {1}{e}}+o(1)\approx 0.632}$ under standard assumptions. This result essentially matches the approximation ratio achieved by the generic greedy algorithm used for maximization of submodular functions with a cardinality constraint.[1]

## ILP formulation

The maximum coverage problem can be formulated as the following integer linear program.

maximize ${\displaystyle \sum _{e_{j}\in E}y_{j}}$. (maximizing the number of covered elements).
subject to ${\displaystyle \sum {x_{i}}\leq k}$; (no more than ${\displaystyle k}$ sets are selected).
${\displaystyle \sum _{e_{j}\in S_{i}}x_{i}\geq y_{j}}$; (if ${\displaystyle y_{j}>0}$ then at least one set containing ${\displaystyle e_{j}}$ is selected).
${\displaystyle y_{j}\in \{0,1\}}$; (if ${\displaystyle y_{j}=1}$ then ${\displaystyle e_{j}}$ is covered)
${\displaystyle x_{i}\in \{0,1\}}$ (if ${\displaystyle x_{i}=1}$ then ${\displaystyle S_{i}}$ is selected for the cover).

## Greedy algorithm

The greedy algorithm for maximum coverage chooses sets according to one rule: at each stage, choose a set which contains the largest number of uncovered elements. It can be shown that this algorithm achieves an approximation ratio of ${\displaystyle 1-{\frac {1}{e}}}$.[2] Inapproximability results show that the greedy algorithm is essentially the best-possible polynomial time approximation algorithm for maximum coverage unless ${\displaystyle P=NP}$.[3]
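The greedy rule above can be sketched in a few lines of Python (an illustrative implementation, not taken from any reference library):

```python
def greedy_max_coverage(sets, k):
    """Greedy rule for maximum coverage: repeatedly pick the set that
    covers the most still-uncovered elements.  Achieves the
    (1 - 1/e)-approximation guarantee discussed above."""
    covered = set()
    chosen = []
    for _ in range(k):
        best = max(sets, key=lambda s: len(s - covered))
        if not (best - covered):   # no remaining set adds new elements
            break
        chosen.append(best)
        covered |= best
    return chosen, covered
```

On the instance `[{1, 2, 3, 4}, {3, 4, 5}, {5, 6}]` with `k = 2`, the greedy choice of the first set followed by `{5, 6}` happens to be optimal; in general the greedy solution can be a ${\displaystyle 1-1/e}$ fraction of the optimum.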

## Known extensions

The inapproximability results apply to all extensions of the maximum coverage problem since they contain the maximum coverage problem as a special case.

The Maximum Coverage Problem can be applied to road traffic situations; one such example is selecting which bus routes in a public transportation network should be installed with pothole detectors to maximise coverage, when only a limited number of sensors is available. This problem is a known extension of the Maximum Coverage Problem and was first explored in literature by Junade Ali and Vladimir Dyo.[4]

## Weighted version

In the weighted version every element ${\displaystyle e_{j}}$ has a weight ${\displaystyle w(e_{j})}$. The task is to select at most ${\displaystyle k}$ sets such that the total weight of the covered elements is maximized. The basic version is the special case in which all weights are ${\displaystyle 1}$.

maximize ${\displaystyle \sum _{e_{j}\in E}w(e_{j})\cdot y_{j}}$. (maximizing the weighted sum of covered elements).
subject to ${\displaystyle \sum {x_{i}}\leq k}$; (no more than ${\displaystyle k}$ sets are selected).
${\displaystyle \sum _{e_{j}\in S_{i}}x_{i}\geq y_{j}}$; (if ${\displaystyle y_{j}>0}$ then at least one set containing ${\displaystyle e_{j}}$ is selected).
${\displaystyle y_{j}\in \{0,1\}}$; (if ${\displaystyle y_{j}=1}$ then ${\displaystyle e_{j}}$ is covered)
${\displaystyle x_{i}\in \{0,1\}}$ (if ${\displaystyle x_{i}=1}$ then ${\displaystyle S_{i}}$ is selected for the cover).

The greedy algorithm for the weighted maximum coverage at each stage chooses a set that contains the maximum weight of uncovered elements. This algorithm achieves an approximation ratio of ${\displaystyle 1-{\frac {1}{e}}}$.[1]
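The weighted greedy rule differs from the unweighted one only in how the gain of a set is measured. A sketch, assuming weights are given as a dictionary (illustrative names, not from the cited sources):

```python
def weighted_greedy_max_coverage(sets, weight, k):
    """Weighted greedy rule: at each stage pick the set whose uncovered
    elements have the largest total weight.  With all weights equal to 1
    this reduces to the unweighted greedy algorithm."""
    covered, chosen = set(), []

    def gain(s):
        # total weight of the elements of s not yet covered
        return sum(weight[e] for e in s - covered)

    for _ in range(k):
        best = max(sets, key=gain)
        if gain(best) == 0:
            break
        chosen.append(best)
        covered |= best
    return chosen, sum(weight[e] for e in covered)
```

For example, with `sets = [{1, 2}, {2, 3}, {4}]`, weights `{1: 1, 2: 1, 3: 5, 4: 2}`, and `k = 2`, the algorithm first takes `{2, 3}` (gain 6) and then `{4}` (gain 2), for total covered weight 8.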

## Budgeted maximum coverage

In the budgeted maximum coverage version, not only does every element ${\displaystyle e_{j}}$ have a weight ${\displaystyle w(e_{j})}$, but every set ${\displaystyle S_{i}}$ also has a cost ${\displaystyle c(S_{i})}$. Instead of a number ${\displaystyle k}$ limiting how many sets may be chosen, a budget ${\displaystyle B}$ is given that limits the total cost of the cover.

maximize ${\displaystyle \sum _{e_{j}\in E}w(e_{j})\cdot y_{j}}$. (maximizing the weighted sum of covered elements).
subject to ${\displaystyle \sum {c(S_{i})\cdot x_{i}}\leq B}$; (the cost of the selected sets cannot exceed ${\displaystyle B}$).
${\displaystyle \sum _{e_{j}\in S_{i}}x_{i}\geq y_{j}}$; (if ${\displaystyle y_{j}>0}$ then at least one set containing ${\displaystyle e_{j}}$ is selected).
${\displaystyle y_{j}\in \{0,1\}}$; (if ${\displaystyle y_{j}=1}$ then ${\displaystyle e_{j}}$ is covered)
${\displaystyle x_{i}\in \{0,1\}}$ (if ${\displaystyle x_{i}=1}$ then ${\displaystyle S_{i}}$ is selected for the cover).

A greedy algorithm alone no longer produces solutions with a performance guarantee; its worst-case behavior can be arbitrarily far from the optimal solution. The approximation algorithm is therefore extended in the following way. First, define a modified greedy algorithm that selects the set ${\displaystyle S_{i}}$ with the best ratio of weight of uncovered elements to cost. Second, among covers of cardinality ${\displaystyle 1,2,...,k-1}$, find the best cover that does not violate the budget. Call this cover ${\displaystyle H_{1}}$. Third, find all covers of cardinality ${\displaystyle k}$ that do not violate the budget. Using these covers of cardinality ${\displaystyle k}$ as starting points, apply the modified greedy algorithm, maintaining the best cover found so far. Call this cover ${\displaystyle H_{2}}$. At the end of the process, the approximate best cover will be either ${\displaystyle H_{1}}$ or ${\displaystyle H_{2}}$. This algorithm achieves an approximation ratio of ${\displaystyle 1-{1 \over e}}$ for values of ${\displaystyle k\geq 3}$. This is the best possible approximation ratio unless ${\displaystyle NP\subseteq DTIME(n^{O(\log \log n)})}$.[5]
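The scheme above can be sketched as follows. This is a simplified illustration of the Khuller–Moss–Naor algorithm, not an optimized implementation: the names are invented here, sets are passed as a name-to-elements dictionary, and the enumeration of all cardinality-${\displaystyle k}$ starting covers makes it run in roughly ${\displaystyle O(m^{k})}$ time.

```python
from itertools import combinations

def budgeted_max_coverage(sets, weight, cost, B, k=3):
    """Return the better of (a) the best budget-feasible cover with
    fewer than k sets, and (b) the ratio-greedy completion of every
    budget-feasible cover of exactly k sets."""
    names = list(sets)

    def total_weight(chosen):
        covered = set().union(*(sets[n] for n in chosen)) if chosen else set()
        return sum(weight[e] for e in covered)

    def feasible(chosen):
        return sum(cost[n] for n in chosen) <= B

    # (a) best cover of cardinality < k that fits the budget
    h1, h1_w = [], 0
    for r in range(1, k):
        for combo in combinations(names, r):
            if feasible(combo) and total_weight(combo) > h1_w:
                h1, h1_w = list(combo), total_weight(combo)

    # (b) greedily extend every feasible cover of cardinality k
    h2, h2_w = [], 0
    for combo in combinations(names, k):
        if not feasible(combo):
            continue
        chosen = list(combo)
        covered = set().union(*(sets[n] for n in chosen))
        spent = sum(cost[n] for n in chosen)
        while True:
            # modified greedy step: best uncovered-weight-to-cost ratio
            best, best_ratio = None, 0.0
            for n in names:
                if n in chosen or spent + cost[n] > B:
                    continue
                gain = sum(weight[e] for e in sets[n] - covered)
                if gain / cost[n] > best_ratio:
                    best, best_ratio = n, gain / cost[n]
            if best is None:
                break
            chosen.append(best)
            covered |= sets[best]
            spent += cost[best]
        if total_weight(chosen) > h2_w:
            h2, h2_w = chosen, total_weight(chosen)

    return (h1, h1_w) if h1_w >= h2_w else (h2, h2_w)
```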

## Generalized maximum coverage

In the generalized maximum coverage version every set ${\displaystyle S_{i}}$ has a cost ${\displaystyle c(S_{i})}$, and every element ${\displaystyle e_{j}}$ has a weight and cost that depend on which set covers it. Namely, if ${\displaystyle e_{j}}$ is covered by set ${\displaystyle S_{i}}$, the weight of ${\displaystyle e_{j}}$ is ${\displaystyle w_{i}(e_{j})}$ and its cost is ${\displaystyle c_{i}(e_{j})}$. A budget ${\displaystyle B}$ is given for the total cost of the solution.

maximize ${\displaystyle \sum _{e_{j}\in E}\sum _{S_{i}}w_{i}(e_{j})\cdot y_{ij}}$. (maximizing the weighted sum of covered elements in the sets in which they are covered).
subject to ${\displaystyle \sum {c_{i}(e_{j})\cdot y_{ij}}+\sum {c(S_{i})\cdot x_{i}}\leq B}$; (the cost of the selected sets cannot exceed ${\displaystyle B}$).
${\displaystyle \sum _{i}y_{ij}\leq 1}$; (each element ${\displaystyle e_{j}}$ can be covered by at most one set).
${\displaystyle x_{i}\geq y_{ij}}$; (if ${\displaystyle y_{ij}=1}$ then ${\displaystyle S_{i}}$ is selected).
${\displaystyle y_{ij}\in \{0,1\}}$; (if ${\displaystyle y_{ij}=1}$ then ${\displaystyle e_{j}}$ is covered by set ${\displaystyle S_{i}}$)
${\displaystyle x_{i}\in \{0,1\}}$ (if ${\displaystyle x_{i}=1}$ then ${\displaystyle S_{i}}$ is selected for the cover).

### Generalized maximum coverage algorithm

The algorithm uses the concept of residual cost/weight. The residual cost/weight of an item is measured relative to a tentative solution: it is the original cost/weight minus the cost/weight already accounted for by the tentative solution.

The algorithm has several stages. First, find a solution using a greedy algorithm: in each iteration, add to the tentative solution the set that maximizes the residual weight of its elements divided by the residual cost of those elements plus the residual cost of the set. Second, compare the solution obtained in the first step to the best solution that uses a small number of sets. Third, return the best of all examined solutions. This algorithm achieves an approximation ratio of ${\displaystyle 1-1/e-o(1)}$.[6]
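The greedy phase can be sketched as below. This is a deliberately simplified illustration with invented names: it never reassigns an element once covered (so residual weights of covered elements are treated as zero), and it omits the comparison against small solutions that the full algorithm of Cohen and Katzir performs.

```python
def gmc_greedy(sets, w, c_elem, c_set, B):
    """Simplified residual-ratio greedy for generalized maximum
    coverage: at each step pick the set maximizing
    (residual weight gained) / (residual cost incurred),
    subject to the budget B."""
    covered = {}            # element -> name of the set that covers it
    chosen, spent, value = [], 0, 0
    while True:
        best, best_ratio = None, 0.0
        for i in sets:
            new = [e for e in sets[i] if e not in covered]
            gain = sum(w[i][e] for e in new)
            price = c_set[i] + sum(c_elem[i][e] for e in new)
            if i in chosen or gain == 0 or spent + price > B:
                continue
            if gain / price > best_ratio:
                best, best_ratio = i, gain / price
        if best is None:
            break
        new = [e for e in sets[best] if e not in covered]
        for e in new:
            covered[e] = best
        spent += c_set[best] + sum(c_elem[best][e] for e in new)
        value += sum(w[best][e] for e in new)
        chosen.append(best)
    return chosen, value
```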

## Notes

1. ^ a b G. L. Nemhauser, L. A. Wolsey and M. L. Fisher. An analysis of approximations for maximizing submodular set functions I, Mathematical Programming 14 (1978), 265–294
2. ^ Hochbaum, Dorit S. (1997). "Approximating Covering and Packing Problems: Set Cover, Vertex Cover, Independent Set, and Related Problems". In Hochbaum, Dorit S. Approximation Algorithms for NP-Hard Problems. Boston: PWS Publishing Company. pp. 94–143. ISBN 0-534-94968-1.
3. ^ Feige, Uriel (July 1998). "A Threshold of ln n for Approximating Set Cover". Journal of the ACM. 45 (4). New York, NY, USA: Association for Computing Machinery. pp. 634–652. doi:10.1145/285055.285059. ISSN 0004-5411.
4. ^ Ali, Junade; Dyo, Vladimir (2017). "Coverage and Mobile Sensor Placement for Vehicles on Predetermined Routes: A Greedy Heuristic Approach". Proceedings of the 14th International Joint Conference on e-Business and Telecommunications. Volume 2: WINSYS: 83–88. doi:10.5220/0006469800830088.
5. ^ Khuller, S., Moss, A., and Naor, J. 1999. The budgeted maximum coverage problem. Inf. Process. Lett. 70, 1 (Apr. 1999), 39-45.
6. ^ Cohen, R. and Katzir, L. 2008. The Generalized Maximum Coverage Problem. Inf. Process. Lett. 108, 1 (Sep. 2008), 15-22.