# Resultant

In mathematics, the resultant of two polynomials is a polynomial expression of their coefficients that is equal to zero if and only if the polynomials have a common root (possibly in a field extension), or, equivalently, a common factor (over their field of coefficients). In some older texts, the resultant is also called the eliminant.[1]

The resultant is widely used in number theory, either directly or through the discriminant, which is essentially the resultant of a polynomial and its derivative. The resultant of two polynomials with rational or polynomial coefficients may be computed efficiently on a computer. It is a basic tool of computer algebra, and is a built-in function of most computer algebra systems. It is used, among other applications, for cylindrical algebraic decomposition, integration of rational functions, and drawing of curves defined by a bivariate polynomial equation.

The resultant of n homogeneous polynomials in n variables (also called the multivariate resultant, or sometimes Macaulay's resultant) is a generalization, introduced by Macaulay, of the usual resultant. It is, with Gröbner bases, one of the main tools of effective elimination theory (elimination theory on computers).

## Notation

The resultant of two univariate polynomials A and B is commonly denoted ${\displaystyle \operatorname {res} (A,B)}$ or ${\displaystyle \operatorname {Res} (A,B).}$

In many applications of the resultant, the polynomials depend on several indeterminates and may be considered as univariate polynomials in one of their indeterminates, with polynomials in the other indeterminates as coefficients. In this case, the indeterminate that is selected for defining and computing the resultant is indicated as a subscript: ${\displaystyle \operatorname {res} _{x}(A,B)}$ or ${\displaystyle \operatorname {Res} _{x}(A,B).}$

The degrees of the polynomials are used in the definition of the resultant. However, a polynomial of degree d may also be considered as a polynomial of higher degree whose leading coefficients are zero. If such a higher degree is used for the resultant, it is usually indicated as a subscript or a superscript, such as ${\displaystyle \operatorname {res} _{d,e}(A,B).}$

## Definition

The resultant of two univariate polynomials over a field or over a commutative ring is commonly defined as the determinant of their Sylvester matrix. More precisely, let

${\displaystyle A=a_{0}x^{d}+a_{1}x^{d-1}+\cdots +a_{d}}$

and

${\displaystyle B=b_{0}x^{e}+b_{1}x^{e-1}+\cdots +b_{e}}$

be nonzero polynomials of respective degrees d and e. Let us denote by ${\displaystyle {\mathcal {P}}_{i}}$ the vector space (or free module if the coefficients belong to a commutative ring) of dimension i whose elements are the polynomials of degree less than i. The map

${\displaystyle \varphi :{\mathcal {P}}_{e}\times {\mathcal {P}}_{d}\rightarrow {\mathcal {P}}_{d+e}}$

such that

${\displaystyle \varphi (P,Q)=AP+BQ}$

is a linear map between two spaces of the same dimension. Over the basis of the powers of x, this map is represented by a square matrix of dimension d + e, which is called the Sylvester matrix of A and B (for many authors and in the article Sylvester matrix, the Sylvester matrix is defined as the transpose of this matrix; this convention is not used here, as it breaks the usual convention for writing the matrix of a linear map).

The resultant of A and B is thus the determinant

${\displaystyle {\begin{vmatrix}a_{0}&0&\cdots &0&b_{0}&0&\cdots &0\\a_{1}&a_{0}&\cdots &0&b_{1}&b_{0}&\cdots &0\\a_{2}&a_{1}&\ddots &0&b_{2}&b_{1}&\ddots &0\\\vdots &\vdots &\ddots &a_{0}&\vdots &\vdots &\ddots &b_{0}\\\vdots &\vdots &\cdots &a_{1}&\vdots &\vdots &\cdots &b_{1}\\a_{d}&a_{d-1}&\cdots &\vdots &b_{e}&b_{e-1}&\cdots &\vdots \\0&a_{d}&\ddots &\vdots &0&b_{e}&\ddots &\vdots \\\vdots &\vdots &\ddots &a_{d-1}&\vdots &\vdots &\ddots &b_{e-1}\\0&0&\cdots &a_{d}&0&0&\cdots &b_{e}\end{vmatrix}},}$

which has e columns of ai and d columns of bj (for simplification, d = e in the displayed determinant).

In the case of monic polynomials over an integral domain, the resultant is equal to the product

${\displaystyle \prod _{(x,y)\colon A(x)=B(y)=0}(x-y),}$

where x runs over the roots of A and y over the roots of B, counted with their multiplicities, in an algebraically closed field containing the coefficients. For polynomials that are not monic, with leading coefficients a0 and b0 respectively, the above product is multiplied by ${\displaystyle a_{0}^{e}b_{0}^{d}.}$
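
As a concrete check of this definition, here is a minimal Python sketch (the helper names `sylvester_matrix`, `determinant`, and `resultant` are illustrative, not a standard API) that builds the Sylvester matrix in the column convention displayed above and evaluates its determinant exactly over the rationals:

```python
from fractions import Fraction

def sylvester_matrix(a, b):
    """Sylvester matrix of A and B, coefficients listed from the leading one down.

    Columns 0..e-1 hold shifted copies of A's coefficients and columns e..e+d-1
    shifted copies of B's, matching the determinant displayed above."""
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    M = [[Fraction(0)] * n for _ in range(n)]
    for j in range(e):
        for i, c in enumerate(a):
            M[i + j][j] = Fraction(c)
    for j in range(d):
        for i, c in enumerate(b):
            M[i + j][e + j] = Fraction(c)
    return M

def determinant(M):
    """Determinant by Gaussian elimination with exact rational arithmetic."""
    M = [row[:] for row in M]
    n, sign = len(M), 1
    for k in range(n):
        pivot = next((i for i in range(k, n) if M[i][k] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != k:
            M[k], M[pivot] = M[pivot], M[k]
            sign = -sign
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
    result = Fraction(sign)
    for k in range(n):
        result *= M[k][k]
    return result

def resultant(a, b):
    return determinant(sylvester_matrix(a, b))

# A = x^2 - 1 (roots 1, -1) and B = x^2 - 4 (roots 2, -2) are monic, so the
# resultant equals the product of all root differences: (1-2)(1+2)(-1-2)(-1+2) = 9.
assert resultant([1, 0, -1], [1, 0, -4]) == 9
# A = (x-1)(x-2) and B = (x-2)(x-3) share the root 2, so the resultant vanishes.
assert resultant([1, -3, 2], [1, -5, 6]) == 0
```

For monic inputs the answer can be cross-checked against the root-product formula above, as the assertions do.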

## Properties

In this section and its subsections, A and B are two polynomials of respective degrees d and e, and their resultant is denoted ${\displaystyle \operatorname {res} (A,B).}$

### Characterizing properties

• If d = 0 (that is if ${\displaystyle A=a_{0}}$ is a nonzero constant) then ${\displaystyle \operatorname {res} (A,B)=a_{0}^{e}.}$ Similarly, if e = 0, then ${\displaystyle \operatorname {res} (A,B)=b_{0}^{d}.}$
• ${\displaystyle \operatorname {res} (x-a,\,x-b)=a-b}$
• ${\displaystyle \operatorname {res} (B,A)=(-1)^{de}\operatorname {res} (A,B)}$
• ${\displaystyle \operatorname {res} (AB,C)=\operatorname {res} (A,C)\operatorname {res} (B,C)}$
• The preceding properties characterize the resultant. In other words, the resultant is the unique function of the coefficients of polynomials that has these properties.
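
A small sketch of the sign-swap and multiplicativity rules in the monic case, using the root-product characterization from the Definition section (the helper `res_from_roots` is hypothetical):

```python
from itertools import product

def res_from_roots(roots_a, roots_b):
    """Resultant of two monic polynomials given by their root lists:
    the product of all differences (alpha - beta)."""
    r = 1
    for x, y in product(roots_a, roots_b):
        r *= x - y
    return r

A, B, C = [1, 2], [3, 4, 5], [6, 7]   # root lists of three monic polynomials
d, e = len(A), len(B)

# res(B, A) = (-1)^{de} res(A, B): each of the d*e factors flips sign.
assert res_from_roots(B, A) == (-1) ** (d * e) * res_from_roots(A, B)

# res(AB, C) = res(A, C) res(B, C): the roots of AB are those of A and B together.
assert res_from_roots(A + B, C) == res_from_roots(A, C) * res_from_roots(B, C)
```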

### Zeros

• The resultant of two polynomials with coefficients in an integral domain is zero if and only if they have a common divisor of positive degree.
• The resultant of two polynomials with coefficients in an integral domain is zero if and only if they have a common root in an algebraically closed field containing the coefficients.
• There exist a polynomial P of degree less than e and a polynomial Q of degree less than d such that ${\displaystyle \operatorname {res} (A,B)=AP+BQ.}$ This is a generalization of Bézout's identity to polynomials over an arbitrary commutative ring. In other words, the resultant of two polynomials belongs to the ideal generated by these polynomials.

### Invariance by ring homomorphisms

Let A and B be two polynomials of respective degrees d and e with coefficients in a commutative ring R, and ${\displaystyle \varphi \colon R\to S}$ a ring homomorphism of R into another commutative ring S. Applying ${\displaystyle \varphi }$ to the coefficients of a polynomial extends ${\displaystyle \varphi }$ to a homomorphism of polynomial rings ${\displaystyle R[x]\to S[x]}$, which is also denoted ${\displaystyle \varphi .}$ With this notation, we have:

• If ${\displaystyle \varphi }$ preserves the degrees of A and B (that is, if ${\displaystyle \deg(\varphi (A))=d}$ and ${\displaystyle \deg(\varphi (B))=e}$), then
${\displaystyle \operatorname {res} (\varphi (A),\varphi (B))=\varphi (\operatorname {res} (A,B)).}$
• If ${\displaystyle \deg(\varphi (A))<d}$ and ${\displaystyle \deg(\varphi (B))<e,}$ then
${\displaystyle \varphi (\operatorname {res} (A,B))=0.}$
• If ${\displaystyle \deg(\varphi (A))=d}$ and ${\displaystyle \deg(\varphi (B))=f<e,}$ and the leading coefficient of A is ${\displaystyle a_{0},}$ then
${\displaystyle \varphi (a_{0})^{e-f}\operatorname {res} (\varphi (A),\varphi (B))=\varphi (\operatorname {res} (A,B)).}$
• If ${\displaystyle \deg(\varphi (A))=f<d}$ and ${\displaystyle \deg(\varphi (B))=e,}$ and the leading coefficient of B is ${\displaystyle b_{0},}$ then
${\displaystyle \varphi (b_{0})^{d-f}\operatorname {res} (\varphi (A),\varphi (B))=(-1)^{e(d-f)}\varphi (\operatorname {res} (A,B)).}$

These properties are easily deduced from the definition of the resultant as a determinant. They are mainly used in two situations. For computing the resultant of polynomials with integer coefficients, it is generally faster to compute it modulo several primes and to retrieve the desired resultant with the Chinese remainder theorem. When R is a polynomial ring in other indeterminates, and S is the ring obtained by specializing some or all indeterminates of R to numerical values, these properties may be restated as follows: if the degrees are preserved by the specialization, then the resultant of the specializations of two polynomials is the specialization of their resultant. This property is fundamental, for example, for cylindrical algebraic decomposition.
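
A minimal sketch of the modular strategy, assuming integer coefficients and primes whose product exceeds the (non-negative, here) resultant, so that the residues determine it (helper names are illustrative):

```python
from math import prod

def sylvester_matrix(a, b):
    # Coefficients from the leading one down; e columns of a's, d columns of b's.
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    M = [[0] * n for _ in range(n)]
    for j in range(e):
        for i, c in enumerate(a):
            M[i + j][j] = c
    for j in range(d):
        for i, c in enumerate(b):
            M[i + j][e + j] = c
    return M

def det_mod(M, p):
    """Determinant modulo a prime p, by Gaussian elimination in GF(p)."""
    M = [[x % p for x in row] for row in M]
    n, det = len(M), 1
    for k in range(n):
        pivot = next((i for i in range(k, n) if M[i][k]), None)
        if pivot is None:
            return 0
        if pivot != k:
            M[k], M[pivot] = M[pivot], M[k]
            det = -det
        det = det * M[k][k] % p
        inv = pow(M[k][k], -1, p)          # modular inverse (p prime)
        for i in range(k + 1, n):
            f = M[i][k] * inv % p
            for j in range(k, n):
                M[i][j] = (M[i][j] - f * M[k][j]) % p
    return det % p

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise coprime moduli."""
    N = prod(moduli)
    x = sum(r * (N // m) * pow(N // m, -1, m) for r, m in zip(residues, moduli))
    return x % N

# res(x^2 - 1, x^2 - 4) = 9: recover it from its images modulo two primes.
a, b = [1, 0, -1], [1, 0, -4]
primes = [101, 103]
images = [det_mod(sylvester_matrix(a, b), p) for p in primes]
assert crt(images, primes) == 9
# (A real implementation would map the result to a symmetric range around 0,
# since the resultant may be negative.)
```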

### Invariance under change of polynomials

• If a and b are nonzero constants (that is, they are independent of the indeterminate x), and A and B are as above, then
${\displaystyle \operatorname {res} (aA,bB)=a^{e}b^{d}\operatorname {res} (A,B)}$
• If ${\displaystyle d=\deg(A)\geq e=\deg(B),}$ if a is a constant and ${\displaystyle b_{0}}$ is the leading coefficient of B, and if C is a polynomial of degree at most ${\displaystyle d-e,}$ then
${\displaystyle b_{0}^{d-e}\operatorname {res} (aA-CB,B)=a^{e}\operatorname {res} (A,B)}$

These properties imply that, in the Euclidean algorithm for polynomials, the resultant of two successive remainders differs from the resultant of the initial polynomials by a factor that is easy to compute. Moreover, the constant a in the second formula above may be chosen so that the successive remainders have their coefficients in the ring of coefficients of the input polynomials. This is the starting idea of the subresultant-pseudo-remainder-sequence algorithm for computing the greatest common divisor and the resultant of two polynomials. This algorithm works for polynomials over the integers or, more generally, over an integral domain, without any division other than exact divisions (that is, without involving fractions). It involves ${\displaystyle O(de)}$ arithmetic operations, while the computation of the determinant of the Sylvester matrix with standard algorithms requires ${\displaystyle O((d+e)^{3})}$ arithmetic operations.

## Computation

Since the resultant depends polynomially (with integer coefficients) on the roots of P and Q, and it is invariant with respect to permutations of each set of roots, it must be possible to calculate it using an (integer) polynomial formula on the coefficients of P and Q. See elementary symmetric polynomial for details.

More concretely, the resultant is the determinant of the Sylvester matrix (and of the Bézout matrix) associated to P and Q. This is the standard definition of the resultant over a commutative ring.

The above definition of the resultant can be rewritten as

${\displaystyle p^{\deg(Q)}\prod _{P(x)=0}Q(x),}$

where p is the leading coefficient of P and the product runs over the roots of P (counted with multiplicities), so the resultant can be expressed polynomially in terms of the coefficients of Q for each fixed P. By the symmetry of the defining formula, the resultant is also a polynomial in the coefficients of P for each fixed Q. It follows that the resultant is a polynomial in the coefficients of P and Q jointly.
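
A quick numerical check of this rewriting for a monic example (the names are illustrative):

```python
# P = (x - 1)(x - 2), monic (p = 1), and Q = x^2 - 9 with roots 3 and -3.
Q = lambda t: t * t - 9
roots_P, roots_Q = [1, 2], [3, -3]
p = 1                                     # leading coefficient of P

# p^{deg Q} times the product of Q over the roots of P ...
lhs = p ** 2 * Q(roots_P[0]) * Q(roots_P[1])

# ... agrees with the product of all root differences from the Definition section.
rhs = 1
for x in roots_P:
    for y in roots_Q:
        rhs *= x - y

assert lhs == rhs == 40
```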

This expression is essentially unchanged if P is replaced by the remainder P' = P mod Q of the Euclidean division of P by Q: by the symmetry of the resultant, it is, up to sign and a power of the leading coefficient q of Q, the product of the values of P at the roots of Q, and P and P' take the same values at these roots. This idea can be continued by swapping the roles of P' and Q. However, P' has a set of roots different from that of P. This can be resolved by writing res(P',Q) as a determinant again, in which P' is padded with leading zero coefficients. This determinant can now be simplified by iterative expansion with respect to the columns in which only the leading coefficient q of Q appears, giving ${\displaystyle \operatorname {res} (P,Q)=(-1)^{\deg Q\,(\deg P-\deg P')}\,q^{\deg P-\deg P'}\operatorname {res} (P',Q)}$ (the sign factor arises from the row permutations in the expansion). Continuing this procedure yields a variant of Euclid's algorithm.
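
The resulting Euclid-style recursion can be sketched as follows, with exact rational arithmetic; note the sign factor ${\displaystyle (-1)^{\deg Q\,(\deg P-\deg P')}}$ that the determinant expansion produces (helper names are illustrative):

```python
from fractions import Fraction

def poly_mod(P, Q):
    """Remainder of P under Euclidean division by Q (coefficients from the
    leading one down); Q is assumed non-constant."""
    P = [Fraction(c) for c in P]
    Q = [Fraction(c) for c in Q]
    while len(P) >= len(Q):
        f = P[0] / Q[0]
        for i, c in enumerate(Q):
            P[i] -= f * c
        P.pop(0)                 # the leading term has been cancelled
    while len(P) > 1 and P[0] == 0:
        P.pop(0)
    return P

def resultant(P, Q):
    """res(P, Q) via the Euclid-style recursion, with sign bookkeeping."""
    d, e = len(P) - 1, len(Q) - 1
    if d == 0:
        return Fraction(P[0]) ** e           # res(a0, B) = a0^e
    if e == 0:
        return Fraction(Q[0]) ** d           # res(A, b0) = b0^d
    if d < e:
        return (-1) ** (d * e) * resultant(Q, P)   # swap arguments
    R = poly_mod(P, Q)
    if all(c == 0 for c in R):
        return Fraction(0)                   # Q divides P: common factor
    dr = len(R) - 1
    # res(P, Q) = (-1)^{deg Q (deg P - deg R)} q^{deg P - deg R} res(R, Q)
    return (-1) ** (e * (d - dr)) * Fraction(Q[0]) ** (d - dr) * resultant(R, Q)

assert resultant([1, 0, -1], [1, 0, -4]) == 9     # res(x^2 - 1, x^2 - 4)
assert resultant([1, 0, 0, 0], [1, -1]) == -1     # res(x^3, x - 1), sign matters
```

The second assertion is a case where the sign factor is essential: dropping it would give +1 instead of the correct Sylvester determinant −1.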

This procedure requires a number of arithmetic operations on the coefficients that is of the order of the product of the degrees. However, when the coefficients are integers, rational numbers or polynomials, these arithmetic operations imply a number of GCD computations of coefficients that is of the same order, which makes the algorithm inefficient.

The subresultant pseudo-remainder sequences were introduced to solve this problem and avoid any fraction and any GCD computation of coefficients. A more efficient algorithm is obtained by using the good behavior of the resultant under a ring homomorphism of the coefficients: to compute a resultant of two polynomials with integer coefficients, one computes their resultants modulo sufficiently many prime numbers and then reconstructs the result with the Chinese remainder theorem.

## Applications

• If x and y are algebraic numbers such that ${\displaystyle P(x)=Q(y)=0}$ (where Q has degree n), then ${\displaystyle z=x+y}$ is a root of the resultant with respect to x of ${\displaystyle P(x)}$ and ${\displaystyle Q(z-x)}$, and ${\displaystyle t=xy}$ is a root of the resultant with respect to x of ${\displaystyle P(x)}$ and ${\displaystyle x^{n}Q(t/x)}$; combined with the fact that ${\displaystyle 1/y}$ is a root of ${\displaystyle y^{n}Q(1/y)}$, this shows that the set of algebraic numbers is a field.
• The discriminant of a polynomial P of degree d is, up to the sign ${\displaystyle (-1)^{d(d-1)/2}}$, the quotient by its leading coefficient of the resultant of P and its derivative.
• Resultants can be used in algebraic geometry to determine intersections. For example, let
${\displaystyle f(x,y)=0}$
and
${\displaystyle g(x,y)=0}$
define algebraic curves in ${\displaystyle \mathbb {A} _{k}^{2}}$. If ${\displaystyle f}$ and ${\displaystyle g}$ are viewed as polynomials in ${\displaystyle x}$ with coefficients in ${\displaystyle k[y]}$, then the resultant of ${\displaystyle f}$ and ${\displaystyle g}$ is a polynomial in ${\displaystyle y}$ whose roots are the ${\displaystyle y}$-coordinates of the intersection of the curves and of the common asymptotes parallel to the ${\displaystyle x}$ axis.
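
A sketch of this computation for the unit circle f = x² + y² − 1 and the line g = x − y (an assumed example, not from the text above). Since the degrees in x are preserved for every value of y, specializing y commutes with taking the resultant, so the polynomial res_x(f, g) can be checked pointwise:

```python
def res_x_at(y):
    """res_x(f, g) specialized at a value of y, where f = x^2 + y^2 - 1 and
    g = x - y are viewed as polynomials in x with coefficients in k[y]."""
    a = [1, 0, y * y - 1]        # coefficients of f in x
    b = [1, -y]                  # coefficients of g in x
    # 3x3 Sylvester determinant: one column of a's, two shifted columns of b's.
    M = [[a[0], b[0], 0],
         [a[1], b[1], b[0]],
         [a[2], 0,    b[1]]]
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# res_x(f, g) = 2y^2 - 1, whose roots y = ±1/sqrt(2) are exactly the
# y-coordinates of the two intersection points of the circle and the line.
# A degree-2 polynomial is determined by three values, so checking four
# sample points confirms the identity.
assert all(res_x_at(y) == 2 * y * y - 1 for y in (0, 1, 2, 5))
```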

## Generalizations and related concepts

The resultant is sometimes defined for two homogeneous polynomials in two variables, in which case it vanishes when the polynomials have a common non-zero solution, or equivalently when they have a common zero on the projective line. More generally, the multipolynomial resultant,[2] multivariate resultant or Macaulay's resultant of n homogeneous polynomials in n variables is a polynomial in their coefficients that vanishes when they have a common non-zero solution, or equivalently when the n hypersurfaces corresponding to them have a common zero in (n − 1)-dimensional projective space. The multivariate resultant is, with Gröbner bases, one of the main tools of effective elimination theory (elimination theory on computers).

## Notes

1. ^ Salmon 1885, lesson VIII, p. 66.
2. ^ Cox, David; Little, John; O'Shea, Donal (2005), Using Algebraic Geometry, Springer Science+Business Media, ISBN 978-0387207339, Chapter 3. Resultants