From Wikipedia, the free encyclopedia
This article is about the resultant of polynomials. For the result of adding two or more vectors, see Parallelogram rule. For the musical phenomenon, see Resultant tone.

In mathematics, the resultant of two polynomials is a polynomial expression of their coefficients, which is equal to zero if and only if the polynomials have a common root (possibly in a field extension), or, equivalently, a common factor (over their field of coefficients). In some older texts, the resultant is also called eliminant.[1]

The resultant is widely used in number theory, either directly or through the discriminant, which is essentially the resultant of a polynomial and its derivative. The resultant of two polynomials with rational or polynomial coefficients may be computed efficiently on a computer. It is a basic tool of computer algebra, and is a built-in function of most computer algebra systems. It is used, among other applications, for cylindrical algebraic decomposition, integration of rational functions and drawing of curves defined by a bivariate polynomial equation.

The resultant of n homogeneous polynomials in n variables, or multivariate resultant, sometimes called Macaulay's resultant, is a generalization of the usual resultant, introduced by Macaulay. It is, with Gröbner bases, one of the main tools of effective elimination theory (elimination theory on computers).


Notation[edit]

The resultant of two univariate polynomials A and B is commonly denoted $\operatorname{res}(A,B)$ or $\operatorname{Res}(A,B).$

In many applications of the resultant, the polynomials depend on several indeterminates and may be considered as univariate polynomials in one of their indeterminates, with polynomials in the other indeterminates as coefficients. In this case, the indeterminate that is selected for defining and computing the resultant is indicated as a subscript: $\operatorname{res}_x(A,B)$ or $\operatorname{Res}_x(A,B).$

The degrees of the polynomials are used in the definition of the resultant. However, a polynomial of degree d may also be considered as a polynomial of higher degree whose leading coefficients are zero. If such a higher degree is used for the resultant, it is usually indicated as a subscript or a superscript, such as $\operatorname{res}_{d,e}(A,B)$ or $\operatorname{res}_x^{d,e}(A,B).$


Definition[edit]

The resultant of two univariate polynomials over a field or over a commutative ring is commonly defined as the determinant of their Sylvester matrix. More precisely, let

$$A = a_0x^d + a_1x^{d-1} + \cdots + a_d$$

and

$$B = b_0x^e + b_1x^{e-1} + \cdots + b_e$$

be nonzero polynomials of respective degrees d and e. Let us denote by $\mathcal{P}_i$ the vector space (or free module if the coefficients belong to a commutative ring) of dimension i whose elements are the polynomials of degree less than i. The map

$$\varphi\colon \mathcal{P}_e \times \mathcal{P}_d \to \mathcal{P}_{d+e}$$

such that

$$\varphi(P,Q) = AP + BQ$$

is a linear map between two spaces of the same dimension. Over the basis of the powers of x, this map is represented by a square matrix of dimension d + e, which is called the Sylvester matrix of A and B (for many authors and in the article Sylvester matrix, the Sylvester matrix is defined as the transpose of this matrix; this convention is not used here, as it breaks the usual convention for writing the matrix of a linear map).

The resultant of A and B is thus the determinant

$$\operatorname{res}(A,B)=\begin{vmatrix}
a_0 & 0 & \cdots & 0 & b_0 & 0 & \cdots & 0\\
a_1 & a_0 & \ddots & \vdots & b_1 & b_0 & \ddots & \vdots\\
\vdots & \vdots & \ddots & a_0 & \vdots & \vdots & \ddots & b_0\\
a_d & a_{d-1} & \cdots & \vdots & b_e & b_{e-1} & \cdots & \vdots\\
0 & a_d & \ddots & \vdots & 0 & b_e & \ddots & \vdots\\
\vdots & \vdots & \ddots & a_{d-1} & \vdots & \vdots & \ddots & b_{e-1}\\
0 & 0 & \cdots & a_d & 0 & 0 & \cdots & b_e
\end{vmatrix},$$

which has e columns of $a_i$ and d columns of $b_j$ (for simplification, d = e in the displayed determinant).
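This definition can be made concrete with a short program. The following Python sketch (a minimal illustration with exact rational arithmetic, not an efficient implementation; all helper names are ours) builds the Sylvester matrix from coefficient lists in decreasing degree order and takes its determinant:

```python
from fractions import Fraction

def sylvester_matrix(a, b):
    """Sylvester matrix of A and B, given as coefficient lists in
    decreasing degree order (a[0] is the leading coefficient)."""
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    m = [[Fraction(0)] * n for _ in range(n)]
    # e columns of shifted coefficients of A ...
    for j in range(e):
        for i, c in enumerate(a):
            m[i + j][j] = Fraction(c)
    # ... and d columns of shifted coefficients of B.
    for j in range(d):
        for i, c in enumerate(b):
            m[i + j][e + j] = Fraction(c)
    return m

def det(m):
    """Determinant by Gaussian elimination over the rationals."""
    m = [row[:] for row in m]
    n, sign = len(m), 1
    for k in range(n):
        piv = next((i for i in range(k, n) if m[i][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            m[k], m[piv] = m[piv], m[k]
            sign = -sign
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n):
                m[i][j] -= f * m[k][j]
    r = Fraction(sign)
    for k in range(n):
        r *= m[k][k]
    return r

def resultant(a, b):
    return det(sylvester_matrix(a, b))

# A = (x-1)(x-2) = x^2 - 3x + 2,  B = (x-3)(x-4) = x^2 - 7x + 12
print(resultant([1, -3, 2], [1, -7, 12]))  # 12
```

The two polynomials have no common root, and the resultant is indeed nonzero; replacing B by $(x-1)^2$, which shares the root 1 with A, gives 0.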

In the case of monic polynomials over an integral domain, the resultant is equal to the product

$$\operatorname{res}(A,B)=\prod_{\substack{A(x)=0\\ B(y)=0}}(x-y),$$

where x and y run over the roots of the polynomials in an algebraically closed field containing the coefficients. For non-monic polynomials with leading coefficients $a_0$ and $b_0$, respectively, the above product is multiplied by $a_0^e b_0^d.$
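As a quick numerical check of this product formula, one can take monic polynomials with known roots and multiply the pairwise root differences directly; a minimal Python sketch:

```python
from fractions import Fraction
from itertools import product

# Monic A with roots {1, 2} and monic B with roots {3, 4}, i.e.
# A = x^2 - 3x + 2 and B = x^2 - 7x + 12.
roots_a, roots_b = [1, 2], [3, 4]

# Product formula: res(A, B) = product of (x - y) over all root pairs.
res = Fraction(1)
for x, y in product(roots_a, roots_b):
    res *= x - y

# Equivalent form for monic A: res(A, B) = product of B(x) over roots x of A.
def B(x):
    return x * x - 7 * x + 12

alt = Fraction(1)
for x in roots_a:
    alt *= B(x)

print(res, alt)  # 12 12
```

Both forms give 12, which agrees with the determinant of the Sylvester matrix of these two polynomials.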


Properties[edit]

In this section and its subsections, A and B are two polynomials in x of respective degrees d and e, and their resultant is denoted $\operatorname{res}(A,B).$

Characterizing properties[edit]

  • If d = 0 (that is, if $A=a_0$ is a nonzero constant) then $\operatorname{res}(A,B)=a_0^e.$ Similarly, if e = 0, then $\operatorname{res}(A,B)=b_0^d.$
  • The preceding properties characterize the resultant. In other words, the resultant is the unique function of the coefficients of polynomials that has these properties.


Zeros[edit]

  • The resultant of two polynomials with coefficients in an integral domain is zero if and only if they have a common divisor of positive degree.
  • The resultant of two polynomials with coefficients in an integral domain is zero if and only if they have a common root in an algebraically closed field containing the coefficients.
  • There exist a polynomial P of degree less than e and a polynomial Q of degree less than d such that $AP+BQ=\operatorname{res}(A,B).$ This is a generalization of Bézout's identity to polynomials over an arbitrary commutative ring. In other words, the resultant of two polynomials belongs to the ideal generated by these polynomials.

Invariance by ring homomorphisms[edit]

Let A and B be two polynomials of respective degrees d and e with coefficients in a commutative ring R, and $\varphi\colon R\to S$ a ring homomorphism of R into another commutative ring S. Applying $\varphi$ to the coefficients of a polynomial extends $\varphi$ to a homomorphism of polynomial rings $R[x]\to S[x]$, which is also denoted $\varphi.$ With this notation, we have:

  • If $\varphi$ preserves the degrees of A and B (that is, if $\deg(\varphi(A))=d$ and $\deg(\varphi(B))=e$), then $\varphi(\operatorname{res}(A,B))=\operatorname{res}(\varphi(A),\varphi(B)).$
  • If $\deg(\varphi(A))<d$ and $\deg(\varphi(B))<e,$ then $\varphi(\operatorname{res}(A,B))=0.$
  • If $\deg(\varphi(A))=d$ and $\deg(\varphi(B))=f<e,$ and the leading coefficient of A is $a_0,$ then $\varphi(\operatorname{res}(A,B))=\varphi(a_0)^{e-f}\operatorname{res}(\varphi(A),\varphi(B)).$
  • If $\deg(\varphi(A))=f<d$ and $\deg(\varphi(B))=e,$ and the leading coefficient of B is $b_0,$ then $\varphi(\operatorname{res}(A,B))=(-1)^{e(d-f)}\varphi(b_0)^{d-f}\operatorname{res}(\varphi(A),\varphi(B)).$

These properties are easily deduced from the definition of the resultant as a determinant. They are mainly used in two situations. For computing a resultant of polynomials with integer coefficients, it is generally faster to compute it modulo several primes and to retrieve the desired resultant with the Chinese remainder theorem. When R is a polynomial ring in other indeterminates, and S is the ring obtained by specializing to numerical values some or all indeterminates of R, these properties may be restated as follows: if the degrees are preserved by the specialization, the resultant of the specialization of two polynomials is the specialization of the resultant. This property is fundamental, for example, for cylindrical algebraic decomposition.
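For instance, reduction modulo a prime is such a ring homomorphism; when it preserves both degrees, it commutes with taking the resultant. A small Python sketch (assuming the determinant definition of the resultant; helper names are ours):

```python
def sylvester(a, b):
    """Sylvester matrix; coefficient lists in decreasing degree order."""
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    m = [[0] * n for _ in range(n)]
    for j in range(e):
        for i, c in enumerate(a):
            m[i + j][j] = c
    for j in range(d):
        for i, c in enumerate(b):
            m[i + j][e + j] = c
    return m

def det_mod(m, p):
    """Determinant over GF(p), p prime, by Gaussian elimination."""
    m = [[x % p for x in row] for row in m]
    n, sign, r = len(m), 1, 1
    for k in range(n):
        piv = next((i for i in range(k, n) if m[i][k]), None)
        if piv is None:
            return 0
        if piv != k:
            m[k], m[piv] = m[piv], m[k]
            sign = -sign
        inv = pow(m[k][k], p - 2, p)  # modular inverse of the pivot
        for i in range(k + 1, n):
            f = m[i][k] * inv % p
            for j in range(k, n):
                m[i][j] = (m[i][j] - f * m[k][j]) % p
        r = r * m[k][k] % p
    return r * sign % p

# res(A, B) = 12 over the integers for these two polynomials;
# reduction mod 7 preserves both degrees (the leading coefficients are 1),
# so the reduction of the resultant is the resultant of the reductions:
A, B = [1, -3, 2], [1, -7, 12]
print(det_mod(sylvester(A, B), 7), 12 % 7)  # 5 5
```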

Invariance under change of variable[edit]

  • If $A_1(x)=x^dA(1/x)$ and $B_1(x)=x^eB(1/x)$ are the reciprocal polynomials of A and B, respectively, then $\operatorname{res}(A_1,B_1)=(-1)^{de}\operatorname{res}(A,B).$

This means that the property of the resultant being zero is invariant under linear and projective changes of the variable.

Invariance under change of polynomials[edit]

  • If a and b are nonzero constants (that is, they are independent of the indeterminate x), and A and B are as above, then $\operatorname{res}(aA,bB)=a^e b^d\operatorname{res}(A,B).$
  • If a is a nonzero constant, $b_0$ is the leading coefficient of B, and C is a polynomial such that $\delta=\deg(aA+CB),$ then $b_0^{\,d-\delta}\operatorname{res}(aA+CB,B)=(-1)^{(d-\delta)e}a^e\operatorname{res}(A,B).$ In particular, if C has degree at most $d-e$ and $\delta=d,$ then $\operatorname{res}(aA+CB,B)=a^e\operatorname{res}(A,B).$

These properties imply that in the Euclidean algorithm for polynomials, the resultant of two successive remainders differs from the resultant of the initial polynomials only by a factor which is easy to compute. Moreover, the constant a in the second formula above may be chosen so that the successive remainders have their coefficients in the ring of coefficients of the input polynomials. This is the starting idea of the subresultant pseudo-remainder sequence algorithm for computing the greatest common divisor and the resultant of two polynomials. This algorithm works for polynomials over the integers or, more generally, over an integral domain, without any division other than exact divisions (that is, without involving fractions). It involves $O(de)$ arithmetic operations, while the computation of the determinant of the Sylvester matrix with standard algorithms requires $O((d+e)^3)$ arithmetic operations.
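The Euclidean recursion can be written down directly. The following Python sketch (over the rationals, so it is not the fraction-free subresultant algorithm itself; helper names are ours) computes the resultant from successive remainders, using $\operatorname{res}(A,B)=(-1)^{de}\,\operatorname{lc}(B)^{d-\deg R}\operatorname{res}(B,R)$ with R the remainder of A by B:

```python
from fractions import Fraction

def polymod(a, b):
    """Remainder of A by B; coefficient lists in decreasing degree order."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b):
        f = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= f * b[i]
        a.pop(0)  # the leading term has been cancelled
    while a and a[0] == 0:
        a.pop(0)
    return a

def resultant(a, b):
    """res(A, B) by the Euclidean recursion."""
    d, e = len(a) - 1, len(b) - 1
    if d < e:
        return Fraction(-1) ** (d * e) * resultant(b, a)
    if e == 0:  # B is a nonzero constant b0: res(A, b0) = b0^d
        return Fraction(b[0]) ** d
    r = polymod(a, b)
    if not r:
        return Fraction(0)  # zero remainder: common factor
    f = len(r) - 1
    return Fraction(-1) ** (d * e) * Fraction(b[0]) ** (d - f) * resultant(b, r)

print(resultant([1, -3, 2], [1, -7, 12]))  # 12
```

On (x − 1)(x − 2) and (x − 3)(x − 4) this reproduces the Sylvester determinant value 12, using only polynomial divisions.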

Generic properties[edit]

In this section, we consider two polynomials

$$A = a_0x^d + a_1x^{d-1} + \cdots + a_d$$

and

$$B = b_0x^e + b_1x^{e-1} + \cdots + b_e,$$

whose d + e + 2 coefficients are distinct indeterminates. Let

$$R = \mathbb{Z}[a_0,\ldots,a_d,b_0,\ldots,b_e]$$

be the polynomial ring over the integers defined by these indeterminates. The resultant $\operatorname{res}(A,B)$ is often called the generic resultant for the degrees d and e. It has the following properties.

  • $\operatorname{res}(A,B)$ is an absolutely irreducible polynomial.
  • If $I$ is the ideal of $R[x]$ generated by A and B, then $I\cap R$ is the principal ideal generated by $\operatorname{res}(A,B)$.


Homogeneity[edit]

The generic resultant for the degrees d and e is homogeneous in various ways. More precisely:

  • It is homogeneous of degree e in $a_0,\ldots,a_d.$
  • It is homogeneous of degree d in $b_0,\ldots,b_e.$
  • It is homogeneous of degree d + e in all the variables $a_i$ and $b_j.$
  • If $a_i$ and $b_i$ are given the weight i (that is, the weight of each coefficient is its degree as an elementary symmetric polynomial), then it is quasi-homogeneous of total weight de.
  • If P and Q are homogeneous multivariate polynomials of respective degrees d and e, then their resultant in degrees d and e with respect to an indeterminate x, denoted $\operatorname{res}_x^{d,e}(P,Q)$ in § Notation, is homogeneous of degree de in the other indeterminates.

Elimination properties[edit]

Let $I$ be the ideal generated by two polynomials A and B in a polynomial ring $R[x],$ where R is itself a polynomial ring over a field. Then:

  • is a principal ideal generated by r for some
  • There exists a positive integer k such that
  • If I is a prime ideal, then


Computation[edit]

Theoretically, the resultant could be computed by using the formula expressing it as a product of differences of roots. However, as the roots may generally not be computed exactly, such an algorithm would be inefficient and numerically unstable. As the resultant is a symmetric function of the roots of each polynomial, it could also be computed by using the fundamental theorem of symmetric polynomials, but this would be highly inefficient.

As the resultant is the determinant of the Sylvester matrix (and of the Bézout matrix), it may be computed by using any algorithm for computing determinants. This needs $O((d+e)^3)$ arithmetic operations. As algorithms with a better complexity are known (see below), this method is not used in practice.

It follows from § Invariance under change of polynomials that the computation of a resultant is strongly related to the Euclidean algorithm for polynomials. This shows that the computation of the resultant of two polynomials of degrees d and e may be done in $O(de)$ arithmetic operations in the field of coefficients.

However, when the coefficients are integers, rational numbers or polynomials, these arithmetic operations imply a number of GCD computations of coefficients which is of the same order and makes the algorithm inefficient. The subresultant pseudo-remainder sequences were introduced to solve this problem and avoid any fraction and any GCD computation of coefficients. A more efficient algorithm is obtained by using the good behavior of the resultant under a ring homomorphism on the coefficients: to compute a resultant of two polynomials with integer coefficients, one computes their resultants modulo sufficiently many prime numbers and then reconstructs the result with the Chinese remainder theorem.
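A minimal Python sketch of the modular approach (helper names are ours): compute the determinant of the Sylvester matrix modulo two primes whose product exceeds twice the absolute value of the resultant, then reconstruct the signed integer with the Chinese remainder theorem; here the result is checked against exact rational elimination.

```python
from fractions import Fraction

def sylvester(a, b):
    """Sylvester matrix; coefficient lists in decreasing degree order."""
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    m = [[0] * n for _ in range(n)]
    for j in range(e):
        for i, c in enumerate(a):
            m[i + j][j] = c
    for j in range(d):
        for i, c in enumerate(b):
            m[i + j][e + j] = c
    return m

def det_frac(m):
    """Exact determinant over the rationals."""
    m = [[Fraction(x) for x in row] for row in m]
    n, sign = len(m), 1
    for k in range(n):
        piv = next((i for i in range(k, n) if m[i][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            m[k], m[piv] = m[piv], m[k]
            sign = -sign
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n):
                m[i][j] -= f * m[k][j]
    r = Fraction(sign)
    for k in range(n):
        r *= m[k][k]
    return r

def det_mod(m, p):
    """Determinant over GF(p), p prime, by Gaussian elimination."""
    m = [[x % p for x in row] for row in m]
    n, sign, r = len(m), 1, 1
    for k in range(n):
        piv = next((i for i in range(k, n) if m[i][k]), None)
        if piv is None:
            return 0
        if piv != k:
            m[k], m[piv] = m[piv], m[k]
            sign = -sign
        inv = pow(m[k][k], p - 2, p)
        for i in range(k + 1, n):
            f = m[i][k] * inv % p
            for j in range(k, n):
                m[i][j] = (m[i][j] - f * m[k][j]) % p
        r = r * m[k][k] % p
    return r * sign % p

def crt_signed(residues, primes):
    """Reconstruct a signed integer from residues (|value| < product / 2)."""
    x, M = 0, 1
    for r, p in zip(residues, primes):
        t = (r - x) * pow(M, -1, p) % p
        x += M * t
        M *= p
    return x - M if x > M // 2 else x

A, B = [3, 1, 1, 5], [2, 4, 3]   # arbitrary small integer polynomials
primes = [10007, 10009]          # product far exceeds 2 |res(A, B)| here
residues = [det_mod(sylvester(A, B), p) for p in primes]
print(crt_signed(residues, primes) == det_frac(sylvester(A, B)))  # True
```

In a real implementation the primes are chosen from a bound on the resultant (e.g. Hadamard's bound on the Sylvester determinant), and the modular determinants are themselves computed by fast methods.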

The use of fast multiplication of integers and polynomials allows algorithms for resultants and greatest common divisors that have a better time complexity, which is of the order of the complexity of the multiplication, multiplied by the logarithm of the size of the input ($\log s$, where s is an upper bound on the number of digits of the input polynomials).

Application to polynomial systems[edit]

Resultants were introduced for solving systems of polynomial equations, and provide the oldest proof that there exist algorithms for solving such systems. They are primarily intended for systems of two equations in two unknowns, but also allow solving general systems.

Case of two equations in two unknowns[edit]

Let us consider the system of two polynomial equations

$$P(x,y)=0,\qquad Q(x,y)=0,$$

where P and Q are polynomials of respective total degrees d and e. Then $R=\operatorname{res}_y(P,Q)$ is a polynomial in x, which is generically of degree de (by properties of § Homogeneity). A value $\alpha$ of x is a root of R if and only if either there exists $\beta$ in an algebraically closed field containing the coefficients such that $P(\alpha,\beta)=Q(\alpha,\beta)=0,$ or $\deg_y P(\alpha,y)<d$ and $\deg_y Q(\alpha,y)<e$ (in this case, one says that P and Q have a common root at infinity for $x=\alpha$).

Therefore, solving the system amounts to computing the roots of R, and, for each root $\alpha,$ computing the common root(s) of $P(\alpha,y)$ and $Q(\alpha,y).$
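A small Python sketch of this elimination for the circle P = x² + y² − 1 and the line Q = x − y (helper names are ours): specialize x to sample values, compute each univariate resultant as a Sylvester determinant, and interpolate R(x) = res_y(P, Q), which has degree at most 2 here.

```python
from fractions import Fraction

def sylvester(a, b):
    """Sylvester matrix; coefficient lists in decreasing degree order."""
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    m = [[Fraction(0)] * n for _ in range(n)]
    for j in range(e):
        for i, c in enumerate(a):
            m[i + j][j] = Fraction(c)
    for j in range(d):
        for i, c in enumerate(b):
            m[i + j][e + j] = Fraction(c)
    return m

def det(m):
    """Determinant by Gaussian elimination over the rationals."""
    m = [row[:] for row in m]
    n, sign = len(m), 1
    for k in range(n):
        piv = next((i for i in range(k, n) if m[i][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            m[k], m[piv] = m[piv], m[k]
            sign = -sign
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n):
                m[i][j] -= f * m[k][j]
    r = Fraction(sign)
    for k in range(n):
        r *= m[k][k]
    return r

def res_y_at(x0):
    """res_y(P, Q) with x specialized to x0, for
    P = y^2 + (x^2 - 1) and Q = -y + x (coefficients in y, decreasing)."""
    x0 = Fraction(x0)
    return det(sylvester([1, 0, x0 * x0 - 1], [-1, x0]))

def mul_linear(p, c):
    """Multiply polynomial p (decreasing coefficients) by (x - c)."""
    out = [Fraction(0)] * (len(p) + 1)
    for k, v in enumerate(p):
        out[k] += v
        out[k + 1] -= c * v
    return out

def lagrange(xs, ys):
    """Coefficients (decreasing degree) of the interpolating polynomial."""
    n = len(xs)
    coeffs = [Fraction(0)] * n
    for i in range(n):
        basis, denom = [Fraction(1)], Fraction(1)
        for j in range(n):
            if j != i:
                basis = mul_linear(basis, Fraction(xs[j]))
                denom *= xs[i] - xs[j]
        for k in range(n):
            coeffs[k] += ys[i] / denom * basis[k]
    return coeffs

# R(x) has degree at most 2, so three samples suffice:
xs = [0, 1, 2]
R = lagrange(xs, [res_y_at(x) for x in xs])
print([int(c) for c in R])  # [2, 0, -1], i.e. R = 2x^2 - 1
```

The roots x = ±1/√2 of R = 2x² − 1 are exactly the x-coordinates of the intersections of the circle with the line y = x.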

It is worth remarking that Bézout's theorem results from the value de of the degree of R.

General case[edit]

At first glance, it seems that resultants may be applied to a general polynomial system of equations

$$P_1(x_1,\ldots,x_n)=0,\ \ldots,\ P_k(x_1,\ldots,x_n)=0$$

by computing the resultants of every pair $(P_i,P_j)$ with respect to $x_n$ for eliminating one unknown, and repeating the process until getting univariate polynomials. Unfortunately, this introduces many spurious solutions, which are difficult to remove.

The method, which dates from the end of the 19th century, works roughly as follows: introduce k − 1 new indeterminates $U_2,\ldots,U_k$ and compute

$$\operatorname{res}_{x_n}(P_1,\ U_2P_2+\cdots+U_kP_k).$$

This is a polynomial in $U_2,\ldots,U_k$ whose coefficients are polynomials in $x_1,\ldots,x_{n-1},$ which have the property that $(\alpha_1,\ldots,\alpha_{n-1})$ is a common zero of these polynomial coefficients if and only if the univariate polynomials $P_1(\alpha_1,\ldots,\alpha_{n-1},x_n),\ldots,P_k(\alpha_1,\ldots,\alpha_{n-1},x_n)$ have a common zero, possibly at infinity. This process may be iterated until finding univariate polynomials.

To get a correct algorithm, two complements have to be added to the method. Firstly, at each step, a linear change of variables may be needed in order that the degrees of the polynomials in the last variable are the same as their total degrees. Secondly, if, at any step, the resultant is zero, this means that the polynomials have a common factor, and that the solutions split into two components: one where the common factor is zero, and the other, which is obtained by factoring out this common factor before continuing.

This algorithm is very complicated and has a huge time complexity. Therefore, its interest is mainly historical.

Other applications[edit]

Number theory[edit]

The discriminant of a polynomial, which is a fundamental tool in number theory, is, up to its sign, the quotient by its leading coefficient of the resultant of the polynomial and its derivative.
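A Python sketch of this relation, using the usual sign convention $\operatorname{disc}(A)=(-1)^{d(d-1)/2}\operatorname{res}(A,A')/a_0$ (helper names are ours):

```python
from fractions import Fraction

def sylvester(a, b):
    """Sylvester matrix; coefficient lists in decreasing degree order."""
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    m = [[Fraction(0)] * n for _ in range(n)]
    for j in range(e):
        for i, c in enumerate(a):
            m[i + j][j] = Fraction(c)
    for j in range(d):
        for i, c in enumerate(b):
            m[i + j][e + j] = Fraction(c)
    return m

def det(m):
    """Determinant by Gaussian elimination over the rationals."""
    m = [row[:] for row in m]
    n, sign = len(m), 1
    for k in range(n):
        piv = next((i for i in range(k, n) if m[i][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            m[k], m[piv] = m[piv], m[k]
            sign = -sign
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n):
                m[i][j] -= f * m[k][j]
    r = Fraction(sign)
    for k in range(n):
        r *= m[k][k]
    return r

def derivative(a):
    """Derivative of a polynomial given by decreasing-degree coefficients."""
    d = len(a) - 1
    return [(d - i) * c for i, c in enumerate(a[:-1])]

def discriminant(a):
    d = len(a) - 1
    sign = (-1) ** (d * (d - 1) // 2)
    return sign * det(sylvester(a, derivative(a))) / Fraction(a[0])

print(discriminant([1, 3, 2]))      # 1 = b^2 - 4c for x^2 + 3x + 2
print(discriminant([1, 0, -3, 2]))  # 0: x^3 - 3x + 2 = (x - 1)^2 (x + 2)
```

The quadratic case reproduces the familiar b² − 4c, and the discriminant vanishes exactly when the polynomial has a multiple root.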

If x and y are algebraic numbers such that $P(x)=0$ and $Q(y)=0$ (with P of degree m and Q of degree n), then $z=x+y$ is a root of the resultant $\operatorname{res}_t(P(t),\,Q(z-t)),$ and $z=xy$ is a root of $\operatorname{res}_t(P(t),\,t^nQ(z/t)).$ Combined with the fact that $1/y$ is a root of $t^nQ(1/t),$ this shows that the set of algebraic numbers is a field.
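For example, with x = √2 (a root of t² − 2) and y = √3 (a root of t² − 3), the resultant res_t(t² − 2, (z − t)² − 3) is z⁴ − 10z² + 1, whose root √2 + √3 is thereby proved algebraic. A Python sketch that recovers this polynomial by specializing z and interpolating (helper names are ours):

```python
from fractions import Fraction

def sylvester(a, b):
    """Sylvester matrix; coefficient lists in decreasing degree order."""
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    m = [[Fraction(0)] * n for _ in range(n)]
    for j in range(e):
        for i, c in enumerate(a):
            m[i + j][j] = Fraction(c)
    for j in range(d):
        for i, c in enumerate(b):
            m[i + j][e + j] = Fraction(c)
    return m

def det(m):
    """Determinant by Gaussian elimination over the rationals."""
    m = [row[:] for row in m]
    n, sign = len(m), 1
    for k in range(n):
        piv = next((i for i in range(k, n) if m[i][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            m[k], m[piv] = m[piv], m[k]
            sign = -sign
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n):
                m[i][j] -= f * m[k][j]
    r = Fraction(sign)
    for k in range(n):
        r *= m[k][k]
    return r

def res_t_at(z):
    """res_t(t^2 - 2, (z - t)^2 - 3) with z specialized:
    (z - t)^2 - 3 = t^2 - 2z t + (z^2 - 3) as a polynomial in t."""
    z = Fraction(z)
    return det(sylvester([1, 0, -2], [1, -2 * z, z * z - 3]))

def mul_linear(p, c):
    """Multiply polynomial p (decreasing coefficients) by (x - c)."""
    out = [Fraction(0)] * (len(p) + 1)
    for k, v in enumerate(p):
        out[k] += v
        out[k + 1] -= c * v
    return out

def lagrange(xs, ys):
    """Coefficients (decreasing degree) of the interpolating polynomial."""
    n = len(xs)
    coeffs = [Fraction(0)] * n
    for i in range(n):
        basis, denom = [Fraction(1)], Fraction(1)
        for j in range(n):
            if j != i:
                basis = mul_linear(basis, Fraction(xs[j]))
                denom *= xs[i] - xs[j]
        for k in range(n):
            coeffs[k] += ys[i] / denom * basis[k]
    return coeffs

# The resultant has degree 4 in z, so five samples suffice:
zs = [-2, -1, 0, 1, 2]
M = lagrange(zs, [res_t_at(z) for z in zs])
print([int(c) for c in M])  # [1, 0, -10, 0, 1], i.e. z^4 - 10 z^2 + 1
```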

Let $K=k(\theta)$ be an algebraic field extension generated by an element $\theta$ which has $F$ as minimal polynomial. Every element $\beta$ of K may be written as $\beta=Q(\theta),$ where Q is a polynomial. Then $\beta$ is a root of $\operatorname{res}_t(F(t),\,z-Q(t)),$ and this resultant is a power of the minimal polynomial of $\beta.$

Algebraic geometry[edit]

Given two plane algebraic curves defined as the zeros of the polynomials P(x, y) and Q(x, y), the resultant allows the computation of their intersection. More precisely, the roots of $\operatorname{res}_y(P,Q)$ are the x-coordinates of the intersection points and of the common vertical asymptotes, and the roots of $\operatorname{res}_x(P,Q)$ are the y-coordinates of the intersection points and of the common horizontal asymptotes.

A rational plane curve may be defined by a parametric equation

$$x=\frac{P(t)}{R(t)},\qquad y=\frac{Q(t)}{R(t)},$$

where P, Q and R are polynomials. An implicit equation of the curve is given by

$$\operatorname{res}_t\bigl(xR(t)-P(t),\ yR(t)-Q(t)\bigr)=0.$$

The degree of this curve is the highest degree of P, Q and R, which is equal to the total degree of the resultant.
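A Python sketch of this implicitization for the rational parametrization x = (1 − t²)/(1 + t²), y = 2t/(1 + t²) of the unit circle: the resultant res_t(xR − P, yR − Q), evaluated at a point, vanishes exactly when the point lies on the curve (away from parameter values where the degrees in t drop; helper names are ours).

```python
from fractions import Fraction

def sylvester(a, b):
    """Sylvester matrix; coefficient lists in decreasing degree order."""
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    m = [[Fraction(0)] * n for _ in range(n)]
    for j in range(e):
        for i, c in enumerate(a):
            m[i + j][j] = Fraction(c)
    for j in range(d):
        for i, c in enumerate(b):
            m[i + j][e + j] = Fraction(c)
    return m

def det(m):
    """Determinant by Gaussian elimination over the rationals."""
    m = [row[:] for row in m]
    n, sign = len(m), 1
    for k in range(n):
        piv = next((i for i in range(k, n) if m[i][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            m[k], m[piv] = m[piv], m[k]
            sign = -sign
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n):
                m[i][j] -= f * m[k][j]
    r = Fraction(sign)
    for k in range(n):
        r *= m[k][k]
    return r

def implicit_at(x, y):
    """Value at (x, y) of res_t(x R(t) - P(t), y R(t) - Q(t)) for the
    circle parametrization P = 1 - t^2, Q = 2t, R = 1 + t^2.
    (Assumes x != -1 and y != 0, so both polynomials keep degree 2 in t.)"""
    x, y = Fraction(x), Fraction(y)
    A = [x + 1, Fraction(0), x - 1]   # x (1 + t^2) - (1 - t^2)
    B = [y, Fraction(-2), y]          # y (1 + t^2) - 2 t
    return det(sylvester(A, B))

print(implicit_at(Fraction(3, 5), Fraction(4, 5)))  # 0: point on the circle
print(implicit_at(1, 1))                            # 4: point off the circle
```

As expected, the value vanishes at (3/5, 4/5), a point of the circle, and is a nonzero multiple of x² + y² − 1 off the curve.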

Symbolic integration[edit]

In symbolic integration, for computing the antiderivative of a rational fraction, one uses partial fraction decomposition for decomposing the integral into a "rational part", which is a sum of rational fractions whose antiprimitives are rational fractions, and a "logarithmic part" which is a sum of rational fractions of the form

$$\frac{P(x)}{Q(x)},$$

where Q is a square-free polynomial and P is a polynomial of lower degree than Q. The antiderivative of such a function necessarily involves logarithms, and generally algebraic numbers (the roots of Q). In fact, the antiderivative is

$$\int\frac{P(x)}{Q(x)}\,dx=\sum_{\alpha:\,Q(\alpha)=0}\frac{P(\alpha)}{Q'(\alpha)}\log(x-\alpha),$$

where the sum runs over all complex roots of Q.

The number of algebraic numbers involved in this expression is generally equal to the degree of Q, but it frequently occurs that an expression with fewer algebraic numbers may be computed. The Lazard–Rioboo–Trager method produces an expression where the number of algebraic numbers is minimal, without any computation with algebraic numbers.


Let

$$\operatorname{res}_x\bigl(P(x)-tQ'(x),\,Q(x)\bigr)=c\prod_{i=1}^{n}S_i(t)^i$$

be the square-free factorization of the resultant of $P-tQ'$ and $Q$ with respect to x (the roots of this resultant, as a polynomial in t, are exactly the values $P(\alpha)/Q'(\alpha)$ appearing above). Trager proved that the antiderivative is

$$\int\frac{P(x)}{Q(x)}\,dx=\sum_{i=1}^{n}\ \sum_{\alpha:\,S_i(\alpha)=0}\alpha\log\bigl(T_i(\alpha,x)\bigr),$$

where the internal sums run over the roots of the $S_i$ (if $S_i$ is constant, the sum is zero, being the empty sum), and $T_i(t,x)$ is a polynomial of degree i in x. The Lazard–Rioboo contribution is the proof that $T_i(t,x)$ is the subresultant of degree i of $P(x)-tQ'(x)$ and $Q(x).$ It is thus obtained for free if the resultant is computed by the subresultant pseudo-remainder sequence.

Computer algebra[edit]

All preceding applications, and many others, show that the resultant is a fundamental tool in computer algebra. In fact most computer algebra systems include an efficient implementation of the computation of resultants.

Generalizations and related concepts[edit]

The resultant is sometimes defined for two homogeneous polynomials in two variables, in which case it vanishes when the polynomials have a common non-zero solution, or equivalently when they have a common zero on the projective line. More generally, the multipolynomial resultant,[2] multivariate resultant or Macaulay's resultant of n homogeneous polynomials in n variables is a polynomial in their coefficients that vanishes when they have a common non-zero solution, or equivalently when the n hypersurfaces corresponding to them have a common zero in (n − 1)-dimensional projective space. The multivariate resultant is, with Gröbner bases, one of the main tools of effective elimination theory (elimination theory on computers).

References[edit]
  1. ^ Salmon 1885, lesson VIII, p. 66.
  2. ^ Cox, David; Little, John; O'Shea, Donal (2005), Using Algebraic Geometry, Springer Science+Business Media, ISBN 978-0387207339, Chapter 3. Resultants.

