Schwartz–Zippel lemma


In mathematics, the Schwartz–Zippel lemma is a tool commonly used in probabilistic polynomial identity testing, i.e. in the problem of determining whether a given multivariate polynomial is the 0-polynomial (or identically equal to 0).

Statement of the lemma

The input to the problem is an n-variable polynomial over a field F. It can occur in the following forms:

Algebraic form

For example, is

(x_1 + 3x_2 - x_3)(3x_1 + x_4 - 1) \cdots (x_7 - x_2) \equiv 0\  ?

To solve this, we can multiply it out and check that all the coefficients are 0. However, this takes exponential time. In general, a polynomial can be algebraically represented by an arithmetic formula or circuit.
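Random evaluation sidesteps this blow-up: the factored form can be evaluated directly at a random point in time linear in the number of factors. A minimal sketch, using only the three displayed factors of the example as a stand-in (the elided factors are unspecified), with hypothetical helper names:

```python
import random

# Sketch: decide whether a polynomial given in factored form is identically
# zero by evaluating it at random points, instead of expanding the product.
# Only the three displayed factors of the example are used as a stand-in.

def poly(x):
    # (x1 + 3*x2 - x3) * (3*x1 + x4 - 1) * (x7 - x2), with 0-based indices
    return (x[0] + 3 * x[1] - x[2]) * (3 * x[0] + x[3] - 1) * (x[6] - x[1])

def is_probably_zero(p, nvars, S, trials=20):
    """True if p vanished at every random point tried.

    For a nonzero polynomial of total degree d, this wrongly returns True
    with probability at most (d/|S|)**trials by the Schwartz-Zippel lemma.
    """
    for _ in range(trials):
        point = [random.choice(S) for _ in range(nvars)]
        if p(point) != 0:
            return False  # a nonzero value certifies p is not the 0-polynomial
    return True

S = range(100)  # |S| = 100, so each trial errs with probability at most 3/100
print(is_probably_zero(poly, 7, S))  # False (with overwhelming probability)
```

A single nonzero evaluation is a proof of non-identity; only "zero everywhere tried" remains probabilistic.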

Determinant of a matrix with polynomial entries
Let
p(x_1,x_2, \ldots, x_n) \,

be the determinant of a given matrix whose entries are polynomials.

Currently, there is no known deterministic sub-exponential-time algorithm that can solve this problem. However, there are randomized polynomial-time algorithms for testing polynomial identities. The first of these algorithms was discovered independently by Jack Schwartz and Richard Zippel.[1][2]

It bounds the probability that a non-zero polynomial will have roots at randomly selected test points. The formal statement is as follows:

Theorem 1 (Schwartz, Zippel). Let

P\in F[x_1,x_2,\ldots,x_n]

be a non-zero polynomial of total degree d ≥ 0 over a field F. Let S be a finite subset of F and let r1, r2, ..., rn be selected independently and uniformly at random from S. Then

\Pr[P(r_1,r_2,\ldots,r_n)=0]\leq\frac{d}{|S|}. \,

In the single-variable case, this follows directly from the fact that a polynomial of degree d can have no more than d roots. One might expect a similar statement to hold for multivariate polynomials, and this is indeed the case.

Proof. The proof is by mathematical induction on n. For n = 1, as was mentioned before, P can have at most d roots. This gives us the base case. Now, assume that the theorem holds for all polynomials in n − 1 variables. We can then consider P to be a polynomial in x1 by writing it as

P(x_1,\dots,x_n)=\sum_{i=0}^d x_1^i P_i(x_2,\dots,x_n).

Since P is not identically 0, there is some i such that P_i is not identically 0. Take the largest such i. Then \deg P_i\leq d-i, since the degree of x_1^iP_i is at most d.

Now we randomly pick r_2,\dots,r_n from S. By the induction hypothesis, \Pr[P_i(r_2,\ldots,r_n)=0]\leq\frac{d-i}{|S|}. If P_i(r_2,\ldots,r_n)\neq 0, then P(x_1,r_2,\ldots,r_n) is a polynomial of degree i in x_1, so

\Pr[P(r_1,r_2,\ldots,r_n)=0|P_i(r_2,\ldots,r_n)\neq 0]\leq\frac{i}{|S|}.

If we denote the event P(r_1,r_2,\ldots,r_n)=0 by A, the event P_i(r_2,\ldots,r_n)=0 by B, and the complement of B by B^c, we have

\Pr[A] =\Pr[A\cap B]+\Pr[A\cap B^c]
=\Pr[B]\Pr[A|B]+\Pr[B^c]\Pr[A|B^c]
\leq \Pr[B]+\Pr[A|B^c]
\leq \frac{d-i}{|S|}+\frac{i}{|S|}=\frac{d}{|S|}.
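For intuition, the bound can be verified exhaustively on a tiny example. The polynomial P(x, y) = xy below is a hypothetical illustration (total degree d = 2) with S = {0, 1, 2}:

```python
from itertools import product

# Exhaustively count the zeros of P(x, y) = x*y (total degree d = 2) over
# all |S|^2 points of S^2 and compare the zero fraction with d/|S|.

def P(x, y):
    return x * y

S = [0, 1, 2]
zeros = sum(1 for x, y in product(S, S) if P(x, y) == 0)
fraction = zeros / len(S) ** 2
print(fraction)                # 5/9: the zeros lie on the lines x = 0 and y = 0
print(fraction <= 2 / len(S))  # True: 5/9 <= d/|S| = 2/3, as the lemma states
```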

Applications

The importance of the Schwartz–Zippel lemma and of polynomial identity testing stems from the many problems that can be reduced to polynomial identity testing, yielding efficient randomized algorithms for them.

Comparison of two polynomials

Given a pair of polynomials p_1(x) and p_2(x), is

p_1(x) \equiv p_2(x)?

This problem can be solved by reducing it to the problem of polynomial identity testing. It is equivalent to checking if

[p_1(x) - p_2(x)] \equiv 0.

Hence if we can determine that

p(x) \equiv 0,

where

p(x) = p_1(x)\;-\;p_2(x),

then we can determine whether the two polynomials are equivalent.
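A minimal sketch of this reduction, with hypothetical single-variable polynomials chosen for illustration:

```python
import random

# Probabilistically compare p1 = (x+1)^2 and p2 = x^2 + 2x + 1 by testing
# whether their difference vanishes at random points drawn from S.

def p1(x):
    return (x + 1) ** 2

def p2(x):
    return x * x + 2 * x + 1

def probably_equal(f, g, S, trials=20):
    """True if f - g vanished at every random point tried."""
    return all(f(r) == g(r) for r in (random.choice(S) for _ in range(trials)))

S = range(1000)
print(probably_equal(p1, p2, S))               # True: p1 and p2 are identical
print(probably_equal(p1, lambda x: x * x, S))  # False: they differ by 2x + 1
```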

Comparison of polynomials has applications for branching programs (also called binary decision diagrams). A read-once branching program can be represented by a multilinear polynomial which computes (over any field) on {0,1}-inputs the same Boolean function as the branching program, and two branching programs compute the same function if and only if the corresponding polynomials are equal. Thus, identity of Boolean functions computed by read-once branching programs can be reduced to polynomial identity testing.

Comparison of two polynomials (and therefore testing polynomial identities) also has applications in 2D-compression, where the problem of finding the equality of two 2D-texts A and B is reduced to the problem of comparing equality of two polynomials p_A(x,y) and p_B(x,y).

Primality testing

Given n \in \mathbb{Z}^+, is n a prime number?

A simple randomized algorithm developed by Manindra Agrawal and Somenath Biswas can determine probabilistically whether n is prime and uses polynomial identity testing to do so.

They use the fact that all prime numbers n (and only prime numbers) satisfy the following polynomial identity:

(1+z)^n \equiv 1+z^n \pmod{n}.

This is a consequence of the Frobenius endomorphism.

Let

\mathcal{P}_n(z) = (1+z)^n - 1 -z^n.\,

Then \mathcal{P}_n(z) \equiv 0 \pmod{n} if and only if n is prime. The proof can be found in [4]. However, since this polynomial has degree n, and since n may or may not be prime, the Schwartz–Zippel method would not work directly. Instead, Agrawal and Biswas use a more sophisticated technique, which divides \mathcal{P}_n by a random monic polynomial of small degree.
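For small n the identity itself can be checked naively by reducing every binomial coefficient modulo n. This exponential-size expansion is exactly what Agrawal and Biswas avoid; the sketch below only illustrates the identity, not their algorithm:

```python
from math import comb

# n is prime iff (1+z)^n = 1 + z^n (mod n), i.e. iff every interior
# binomial coefficient C(n, k) with 0 < k < n is divisible by n.

def identity_holds(n):
    return all(comb(n, k) % n == 0 for k in range(1, n))

for n in [2, 3, 4, 7, 9, 11, 15]:
    print(n, identity_holds(n))  # True exactly for the primes 2, 3, 7, 11
```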

Prime numbers are used in a number of applications such as hash table sizing, pseudorandom number generation, and key generation for cryptography. Therefore, finding very large prime numbers (on the order of at least 10^{308} \approx 2^{1024}) is very important, and efficient primality-testing algorithms are required.

Perfect matching

Let G = (V, E) be a graph on n vertices, where n is even. Does G contain a perfect matching?

Theorem 2 (Tutte 1947): The determinant of the Tutte matrix is not the 0-polynomial if and only if G contains a perfect matching.

A subset D of E is called a matching if each vertex in V is incident with at most one edge in D. A matching is perfect if each vertex in V is incident with exactly one edge in D. Create the Tutte matrix A in the following way:

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1\mathit{n}} \\ a_{21} & a_{22} & \cdots & a_{2\mathit{n}} \\ \vdots & \vdots & \ddots & \vdots \\ a_{\mathit{n}1} & a_{\mathit{n}2} & \ldots & a_{\mathit{nn}} \end{bmatrix}

where

a_{ij} = \begin{cases} x_{ij}\;\;\mbox{if}\;(i,j) \in E \mbox{ and } i<j\\
-x_{ji}\;\;\mbox{if}\;(i,j) \in E \mbox{ and } i>j\\
0\;\;\;\;\mbox{otherwise}. \end{cases}

The Tutte matrix determinant (in the variables xij, i < j) is then defined as the determinant of this skew-symmetric matrix; it coincides with the square of the Pfaffian of the matrix A and is non-zero (as a polynomial) if and only if a perfect matching exists. One can then use polynomial identity testing to determine whether G contains a perfect matching.
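A sketch of the resulting randomized test: substitute random values modulo a large prime for the variables xij and compute the determinant over GF(p) by Gaussian elimination. The graphs, prime, and helper names below are illustrative choices:

```python
import random

# Randomized perfect-matching test: evaluate the Tutte matrix at random
# values modulo a prime p and compute its determinant over GF(p).
# The prime and the example graphs are illustrative choices.

P_MOD = 1_000_003  # a prime; by the lemma one evaluation errs w.p. <= n / P_MOD

def det_mod_p(M, p):
    """Determinant of a square integer matrix modulo a prime p,
    computed by Gaussian elimination over GF(p)."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] % p), None)
        if pivot is None:
            return 0  # no pivot in this column: singular matrix
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det  # a row swap flips the sign
        det = det * M[col][col] % p
        inv = pow(M[col][col], -1, p)
        for r in range(col + 1, n):
            factor = M[r][col] * inv % p
            for c in range(col, n):
                M[r][c] = (M[r][c] - factor * M[col][c]) % p
    return det % p

def has_perfect_matching(n, edges):
    """One random evaluation of the Tutte matrix determinant.

    The error is one-sided: if G has a perfect matching, the evaluation is
    nonzero w.p. >= 1 - n/P_MOD; if it does not, the result is always 0.
    """
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        x = random.randrange(1, P_MOD)
        A[i][j], A[j][i] = x, -x  # skew-symmetric entries x_ij and -x_ji
    return det_mod_p(A, P_MOD) != 0

print(has_perfect_matching(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 4-cycle: True (w.h.p.)
print(has_perfect_matching(4, [(0, 1), (1, 2)]))  # False: vertex 3 is isolated
```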

In the special case of a balanced bipartite graph on n = m + m vertices this matrix takes the form of a block matrix

A = \begin{pmatrix} 0 & X \\ -X^t & 0 \end{pmatrix}

if the first m rows (resp. columns) are indexed with the first subset of the bipartition and the last m rows (resp. columns) with the complementary subset. In this case the Pfaffian coincides (up to sign) with the usual determinant of the m × m matrix X. Here X is the Edmonds matrix.
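In the bipartite case the test therefore reduces to an ordinary determinant of the Edmonds matrix X with random entries. A minimal 2 + 2 illustration (graph and prime are hypothetical choices):

```python
import random

# Edmonds matrix test for a balanced bipartite graph with left and right
# parts {0, 1}: X[i][j] gets a random nonzero value mod p when left vertex i
# is joined to right vertex j, and 0 otherwise. det X is nonzero as a
# polynomial iff a perfect matching exists.

p = 1_000_003  # a prime modulus, chosen for illustration

def edmonds_det(edges):
    """det X mod p for the 2x2 Edmonds matrix at a random evaluation point."""
    X = [[0, 0], [0, 0]]
    for i, j in edges:
        X[i][j] = random.randrange(1, p)
    return (X[0][0] * X[1][1] - X[0][1] * X[1][0]) % p

print(edmonds_det([(0, 0), (1, 1)]) != 0)  # True: the two edges are a matching
print(edmonds_det([(0, 0), (0, 1)]) != 0)  # False: left vertex 1 has no edge
```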
