Isolation lemma

In theoretical computer science, the term isolation lemma (or isolating lemma) refers to randomized algorithms that reduce the number of solutions to a problem to one, should a solution exist. This is achieved by constructing random constraints such that, with non-negligible probability, exactly one solution satisfies these additional constraints if the solution space is not empty. Isolation lemmas have important applications in computer science, such as the Valiant–Vazirani theorem and Toda's theorem in computational complexity theory.

The first isolation lemma was introduced by Valiant & Vazirani (1986), albeit not under that name. Their isolation lemma chooses a random number of random hyperplanes, and has the property that, with non-negligible probability, the intersection of any fixed non-empty solution space with the chosen hyperplanes contains exactly one element. This suffices to show the Valiant–Vazirani theorem: there exists a randomized polynomial-time reduction from the satisfiability problem for Boolean formulas to the problem of detecting whether a Boolean formula has a unique solution. Mulmuley, Vazirani & Vazirani (1987) introduced an isolation lemma of a slightly different kind: Here every coordinate of the solution space gets assigned a random weight in a certain range of integers, and the property is that, with non-negligible probability, there is exactly one element in the solution space that has minimum weight. This can be used to obtain a randomized parallel algorithm for the maximum matching problem.
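The hyperplane-style isolation of Valiant and Vazirani can be illustrated with a small simulation. The sketch below is illustrative rather than a faithful reproduction of their construction (the solution set, the range of the random constraint count, and the helper names are all assumptions): it imposes a random number of random affine constraints over GF(2) on a fixed non-empty solution set and measures how often exactly one solution survives.

```python
import random

def survivors(solutions, n, k, rng):
    """Keep the solutions x satisfying k random affine GF(2)
    constraints <r_i, x> = b_i (k random 'hyperplanes')."""
    constraints = [([rng.randrange(2) for _ in range(n)], rng.randrange(2))
                   for _ in range(k)]
    return [x for x in solutions
            if all(sum(rj & xj for rj, xj in zip(r, x)) % 2 == b
                   for r, b in constraints)]

def isolation_frequency(solutions, n, trials=2000, seed=0):
    """Empirical probability that the random constraints leave
    exactly one solution standing."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        k = rng.randrange(n + 2)  # random number of hyperplanes, 0..n+1
        if len(survivors(solutions, n, k, rng)) == 1:
            hits += 1
    return hits / trials

n = 6
# an arbitrary non-empty solution set in {0,1}^6
sols = [tuple(int(b) for b in format(v, "06b")) for v in (3, 5, 17, 29, 44, 63)]
freq = isolation_frequency(sols, n)
print(f"fraction of trials isolating a unique solution: {freq:.3f}")
```

The point is only that the success rate is non-negligible (roughly inverse-polynomial in $n$), which is all the Valiant–Vazirani reduction needs, since the experiment can be repeated.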

Stronger isolation lemmas have been introduced in the literature to fit different needs in various settings. For example, the isolation lemma of Chari, Rohatgi & Srinivasan (1993) has similar guarantees as that of Mulmuley et al., but it uses fewer random bits. In the context of the exponential time hypothesis, Calabro et al. (2008) prove an isolation lemma for k-CNF formulas.

The isolation lemma of Mulmuley, Vazirani, and Vazirani

Any linear program with a randomly chosen linear cost function has a unique optimum with high probability. The isolation lemma of Mulmuley, Vazirani, and Vazirani extends this fact to arbitrary sets and a random cost function that is sampled using few random bits.
Lemma. Let $n$ and $N$ be positive integers, and let $\mathcal F$ be an arbitrary family of subsets of the universe $\{1,\dots,n\}$. Suppose each element $x\in\{1,\dots,n\}$ in the universe receives an integer weight $w(x)$, each of which is chosen independently and uniformly at random from $\{1,\dots,N\}$. The weight of a set $S$ in $\mathcal F$ is defined as
$w(S) = \sum_{x \in S} w(x)\,.$
Then, with probability at least $1-n/N$, there is a unique set in $\mathcal F$ that has the minimum weight among all sets of $\mathcal F$.

It is remarkable that the lemma assumes nothing about the nature of the family $\mathcal F$: for instance, $\mathcal F$ may include all $2^n-1$ nonempty subsets. Since the weight of each set in $\mathcal F$ lies between $1$ and $nN$, on average there will be $(2^n-1) / (nN)$ sets of each possible weight. Still, with high probability, there is a unique set of minimum weight.
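A quick Monte Carlo experiment makes the lemma concrete. The sketch below (the family, parameters, and function name are arbitrary illustrative choices) draws many random weightings for a deliberately dense family and checks how often the minimum-weight set is unique; the observed frequency should be no worse than the $1-n/N$ bound.

```python
import random
from itertools import combinations

def unique_min_probability(n, N, family, trials=5000, seed=1):
    """Estimate the probability that a uniform weighting
    w: {1,...,n} -> {1,...,N} gives the family a unique
    minimum-weight set."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        w = [rng.randint(1, N) for _ in range(n)]
        weights = sorted(sum(w[x] for x in S) for S in family)
        # unique minimizer iff the smallest weight is strictly smaller
        # than the second smallest
        if len(weights) == 1 or weights[0] < weights[1]:
            hits += 1
    return hits / trials

n, N = 8, 64
# a dense family: all 3-element subsets of {0,...,7}
family = list(combinations(range(n), 3))
p = unique_min_probability(n, N, family)
print(f"empirical: {p:.3f}   lemma bound: {1 - n/N:.3f}")
```

In practice the empirical uniqueness rate typically exceeds the bound, since $1-n/N$ is only a worst-case guarantee over all families.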

Examples/applications

• The original application was to minimum-weight (or maximum-weight) perfect matchings in a graph. Each edge is assigned a random weight in $\{1,\dots,2m\}$, where $m$ is the number of edges, and $\mathcal F$ is the set of perfect matchings, so that with probability at least 1/2, there exists a unique minimum-weight perfect matching. When each indeterminate $x_{ij}$ in the Tutte matrix of the graph is replaced with $2^{w_{ij}}$, where $w_{ij}$ is the random weight of the edge, one can show that the determinant of the matrix is nonzero, and further use this to find the matching.
• More generally, the paper also observed that any search problem of the form "Given a set system $(S,\mathcal F)$, find a set in $\mathcal F$" can be reduced to a decision problem of the form "Is there a set in $\mathcal F$ with total weight at most k?". For instance, it showed how to solve the following problem posed by Papadimitriou and Yannakakis, for which (at the time the paper was written) no deterministic polynomial-time algorithm was known: given a graph and a subset of the edges marked as "red", find a perfect matching with exactly k red edges.
• The Valiant–Vazirani theorem, concerning unique solutions to NP-complete problems, has a simpler proof using the isolation lemma. This is proved by giving a randomized reduction from CLIQUE to UNIQUE-CLIQUE.[2]
• Ben-David, Chor & Goldreich (1989) use the proof of Valiant–Vazirani in their search-to-decision reduction for average-case complexity.
• Avi Wigderson used the isolation lemma in 1994 to give a randomized reduction from NL to UL, and thereby prove that NL/poly ⊆ ⊕L/poly.[3] Reinhardt and Allender later used the isolation lemma again to prove that NL/poly = UL/poly.[4]
• The book by Hemaspaandra and Ogihara has a chapter on the isolation technique, including generalizations.[5]
• The isolation lemma has been proposed as the basis of a scheme for digital watermarking.[6]
• There is ongoing work on derandomizing the isolation lemma in specific cases[7] and on using it for identity testing.[8]
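The matching application in the first bullet can be sketched on a toy graph. In this illustrative example (the graph, the hand-picked weights, and the helper `det` are assumptions, not taken from the paper), substituting $2^{w_{ij}}$ for each indeterminate in the Tutte matrix of a 4-cycle gives a nonzero determinant, and the largest power of 2 dividing it is $2^{2W}$, where $W$ is the weight of the unique minimum-weight perfect matching.

```python
def det(M):
    """Exact integer determinant by cofactor expansion
    (fine for small matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

# 4-cycle 0-1-2-3-0; weights chosen so that the minimum-weight
# perfect matching {(0,1), (2,3)}, of total weight W = 2, is unique
# (the other matching {(1,2), (0,3)} has weight 5)
edges = {(0, 1): 1, (1, 2): 2, (2, 3): 1, (0, 3): 3}

n = 4
T = [[0] * n for _ in range(n)]
for (i, j), w in edges.items():
    T[i][j] = 2 ** w        # substitute x_ij -> 2^{w_ij} in the Tutte matrix
    T[j][i] = -(2 ** w)     # Tutte matrices are skew-symmetric

d = det(T)
v2 = 0                      # 2-adic valuation of the determinant
x = abs(d)
while x % 2 == 0:
    x //= 2
    v2 += 1
print(f"det = {d}, 2-adic valuation = {v2}")  # valuation equals 2W = 4
```

A nonzero determinant certifies that a perfect matching exists, and reading off the 2-adic valuation recovers the weight of the isolated minimum matching, which is the key step in the parallel matching algorithm of Mulmuley, Vazirani & Vazirani.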