Gap reduction

In computational complexity theory, a gap reduction is a reduction to a particular type of decision problem, known as a c-gap problem. Such reductions provide information about the hardness of approximating solutions to optimization problems. In short, a gap problem is one in which the objective is to distinguish cases where the best solution is above one threshold from cases where it is below another threshold, such that the two thresholds have a gap in between. Gap reductions can be used to demonstrate inapproximability results: if a problem can be approximated to a factor better than the size of the gap, then the approximation algorithm can be used to solve the corresponding gap problem.

c-gap problem

We define a c-gap problem as follows:[1] given an optimization (maximization or minimization) problem P, the equivalent c-gap problem distinguishes between two cases, for an input k and an instance x of problem P:

OPT(x) < k. Here, the best solution to instance x of problem P has a cost, or score, below k.
OPT(x) > c⋅k. Here, the best solution to instance x of problem P has a cost above c⋅k. The gap between the two thresholds is thus c.

Note that whenever OPT falls strictly between the two thresholds, there is no requirement on the output: a valid algorithm for the c-gap problem may answer anything in that case. The value c does not need to be constant; it can depend on the size of the instance of P. Note also that c-approximating the solution to an instance of P is at least as hard as solving the c-gap version of P, since a c-approximation algorithm immediately decides the gap problem.
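
This last observation can be made concrete. The following minimal Python sketch treats the minimization case and assumes a hypothetical helper approx_solve that returns the cost of a c-approximate solution; any such c-approximation algorithm immediately decides the c-gap problem.

    def solve_c_gap(x, k, c, approx_solve):
        """Decide the c-gap version of a minimization problem P.

        approx_solve(x) is a hypothetical c-approximation oracle: it
        returns the cost of some solution to x, guaranteed to satisfy
        OPT(x) <= approx_solve(x) <= c * OPT(x).
        """
        cost = approx_solve(x)
        # If OPT(x) < k (a YES instance), then cost <= c * OPT(x) < c * k.
        # If OPT(x) > c * k (a NO instance), then cost >= OPT(x) > c * k.
        # The threshold c * k therefore separates the two cases.
        return cost <= c * k  # True means YES, False means NO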

One can define an (a,b)-gap problem similarly. The difference is that the thresholds do not depend on the input; instead, the lower threshold is a and the upper threshold is b.

Gap-producing reduction

A gap-producing reduction is a reduction from an optimization problem to a c-gap problem, so that solving the c-gap problem quickly would enable solving the optimization problem quickly. The term gap-producing arises from the nature of the reduction: the optimal solution in the optimization problem maps to one side of the gap, while every other solution maps to the other side. Thus, a gap is produced between the optimal solution and every other solution.

A simple example of a gap-producing reduction is from the Hamiltonian path problem to the nonmetric traveling salesman problem (i.e. where the graph's edge costs need not satisfy the conditions of a metric). Given a graph G = (V, E), we construct the complete graph G' = (V, E') and, for each edge e ∈ E', let the cost of traversing it be 1 if e is in the original graph G and ∞ otherwise. A Hamiltonian path in the original graph G exists if and only if there exists a traveling salesman solution, here a path visiting every vertex exactly once, with weight (|V|-1). However, if no such Hamiltonian path exists, then any traveling salesman solution must use at least one edge of cost ∞, and so has weight at least |V|. Thus, Hamiltonian path reduces to |V|/(|V|-1)-gap nonmetric traveling salesman.
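
The construction itself is mechanical. Below is a small Python sketch under an assumed representation: the graph is a set of vertices and a set of frozenset edges, with float('inf') standing in for the infinite cost.

    from itertools import combinations

    def hamiltonian_to_gap_tsp(vertices, edges):
        """Build the edge costs of the complete TSP instance G' from G = (V, E).

        Every pair of vertices gets cost 1 if it is an edge of G and
        infinity otherwise, so a Hamiltonian path in G corresponds exactly
        to a traveling salesman solution of weight |V| - 1 in G'.
        """
        return {
            frozenset(pair): 1 if frozenset(pair) in edges else float("inf")
            for pair in combinations(vertices, 2)
        }

For example, hamiltonian_to_gap_tsp({1, 2, 3}, {frozenset((1, 2))}) assigns cost 1 to the pair {1, 2} and infinite cost to the two remaining pairs.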

Gap-preserving reduction

A gap-preserving reduction is a reduction from a c-gap problem to a c'-gap problem. More specifically, we are given an instance x of a problem A with |x| = n and want to reduce it to an instance x' of a problem B with |x'| = n'. A gap-preserving reduction from A to B consists of functions (k(n), k'(n'), c(n), c'(n')) such that

For minimization problems:

OPT_A(x) ≤ k ⇒ OPT_B(x') ≤ k', and
OPT_A(x) ≥ c⋅k ⇒ OPT_B(x') ≥ c'⋅k'

For maximization problems:

OPT_A(x) ≥ k ⇒ OPT_B(x') ≥ k', and
OPT_A(x) ≤ k/c ⇒ OPT_B(x') ≤ k'/c'

If c' > c, then this is a gap-amplifying reduction.
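
Such reductions compose in the natural way: combined with an algorithm for the c'-gap version of B, a gap-preserving reduction yields an algorithm for the c-gap version of A. A minimal sketch under the minimization convention above, with hypothetical helpers reduce_instance and solve_b_gap:

    def solve_a_gap(x, k_prime, reduce_instance, solve_b_gap):
        """Decide c-gap A given a decision procedure for c'-gap B.

        reduce_instance is a hypothetical gap-preserving reduction: it maps
        any x with OPT_A(x) <= k to an x' with OPT_B(x') <= k', and any x
        with OPT_A(x) >= c*k to an x' with OPT_B(x') >= c'*k'.
        """
        x_prime = reduce_instance(x)
        # YES instances of c-gap A map to YES instances of c'-gap B and NO
        # instances to NO instances, so the answer carries over unchanged.
        return solve_b_gap(x_prime, k_prime)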

Examples

Max E3SAT

This problem is a form of the Boolean satisfiability problem (SAT), where each clause contains three distinct literals and we want to maximize the number of clauses satisfied.[2]

Håstad showed that the (1/2+ε, 1-ε)-gap version of a similar problem, MAX E3-X(N)OR-SAT, is NP-hard.[3] The MAX E3-X(N)OR-SAT problem is a form of SAT where each clause is the XOR of three distinct literals, exactly one of which is negated. We can reduce from MAX E3-X(N)OR-SAT to MAX E3SAT as follows:

A clause xi ⊕ xj ⊕ xk = 1 is converted to (xi ∨ xj ∨ xk) ∧ (¬xi ∨ ¬xj ∨ xk) ∧ (¬xi ∨ xj ∨ ¬xk) ∧ (xi ∨ ¬xj ∨ ¬xk)
A clause xi ⊕ xj ⊕ xk = 0 is converted to (¬xi ∨ ¬xj ∨ ¬xk) ∧ (¬xi ∨ xj ∨ xk) ∧ (xi ∨ ¬xj ∨ xk) ∧ (xi ∨ xj ∨ ¬xk)
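
This conversion is easy to implement. In the sketch below, a literal is a nonzero integer (negative meaning negated), and each XOR constraint is turned into the four clauses that each rule out one of its four falsifying assignments; the function name and clause encoding are illustrative choices, not a standard API.

    from itertools import product

    def xor_clause_to_cnf(i, j, k, parity):
        """Convert the constraint x_i XOR x_j XOR x_k = parity into the
        four equivalent 3-CNF clauses (tuples of integer literals)."""
        clauses = []
        for a, b, c in product((0, 1), repeat=3):
            if a ^ b ^ c != parity:
                # (a, b, c) falsifies the XOR constraint; emit the unique
                # OR-clause that is false exactly on this assignment.
                clauses.append((i if a == 0 else -i,
                                j if b == 0 else -j,
                                k if c == 0 else -k))
        return clauses

For instance, xor_clause_to_cnf(1, 2, 3, 1) returns exactly the four clauses shown in the first line above.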

If a clause is not satisfied in the original instance of MAX E3-X(N)OR-SAT, then at most three of the four corresponding clauses in our MAX E3SAT instance can be satisfied; if it is satisfied, all four are. Using a gap argument, it follows that a YES instance of the problem has at least a (1-ε) fraction of the clauses satisfied, while a NO instance of the problem has at most a (1/2+ε)(1) + (1/2-ε)(3/4) = (7/8 + ε/4) fraction of the clauses satisfied. Thus, it follows that (7/8 + ε, 1 - ε)-gap MAX E3SAT is NP-hard. Note that this bound is tight, as a random assignment of variables gives an expected 7/8 fraction of satisfied clauses.
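
The case analysis behind this argument, that all four clauses are satisfied when the XOR clause is and exactly three otherwise, is small enough to verify by brute force, continuing the sketch above:

    def count_satisfied(clauses, assignment):
        """Count the clauses with at least one true literal."""
        return sum(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )

    for parity in (0, 1):
        clauses = xor_clause_to_cnf(1, 2, 3, parity)
        for a, b, c in product((0, 1), repeat=3):
            expected = 4 if a ^ b ^ c == parity else 3
            assert count_satisfied(clauses, {1: a, 2: b, 3: c}) == expected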

Label Cover

The label cover problem is defined as follows: given a bipartite graph G = (A∪B, E), with

A = A1 ∪ A2 ∪ ... ∪ Ak, |A| = n, and |Ai| = n/k
B = B1 ∪ B2 ∪ ... ∪ Bk, |B| = n, and |Bi| = n/k

We define a "superedge" between Ai and Bj if at least one edge exists from Ai to Bj in G, and we say that such a superedge is covered if at least one edge from Ai to Bj has both of its endpoints among the chosen vertices.

In the max-rep version of the problem, we are allowed to choose one vertex from each Ai and each Bi, and we aim to maximize the number of covered superedges. In the min-rep version, we are required to cover every superedge in the graph and want to minimize the number of vertices we choose. Manurangsi and Moshkovitz show that the (O(n^(1/4)), 1)-gap version of both problems is solvable in polynomial time.[4]
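
To make the max-rep objective concrete, here is a short sketch under an assumed data layout: edges are pairs (u, v) with u ∈ A and v ∈ B, group_a and group_b map each vertex to the index of its group, and rep_a and rep_b give the single vertex chosen from each group.

    def covered_superedges(edges, group_a, group_b, rep_a, rep_b):
        """Count the superedges (i, j) covered by the chosen vertices.

        A superedge between A_i and B_j is covered when some edge (u, v)
        with u in A_i and v in B_j has both endpoints chosen, i.e.
        u == rep_a[i] and v == rep_b[j].
        """
        covered = {
            (group_a[u], group_b[v])
            for u, v in edges
            if rep_a[group_a[u]] == u and rep_b[group_b[v]] == v
        }
        return len(covered)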

References

  1. ^ Demaine, Erik (Fall 2014). "Algorithmic Lower Bounds: Fun with Hardness Proofs Lecture 12 Notes" (PDF).
  2. ^ Demaine, Erik (Fall 2014). "Algorithmic Lower Bounds: Fun with Hardness Proofs Lecture 12 Notes" (PDF).
  3. ^ Håstad, Johan (July 2001). "Some Optimal Inapproximability Results". J. ACM. 48 (4): 798–859. doi:10.1145/502090.502098. S2CID 5120748.
  4. ^ Manurangsi, Pasin; Moshkovitz, Dana (2013). "Improved Approximation Algorithms for Projection Games". ESA 2013. 8125: 683–694. arXiv:1408.4048. doi:10.1007/s00453-015-0088-5. S2CID 12959740.