Resultant

From Wikipedia, the free encyclopedia
This article is about the resultant of polynomials. For the result of adding two or more vectors, see Parallelogram rule. For the musical phenomenon, see Resultant tone.

In mathematics, the resultant of two polynomials is a polynomial expression of their coefficients, which is equal to zero if and only if the polynomials have a common root (possibly in a field extension), or, equivalently, a common factor (over their field of coefficients). In some older texts, the resultant is also called the eliminant.[1]

The resultant is widely used in number theory, either directly or through the discriminant, which is essentially the resultant of a polynomial and its derivative. The resultant of two polynomials with rational or polynomial coefficients may be computed efficiently on a computer. It is a basic tool of computer algebra, and is a built-in function of most computer algebra systems. It is used, among other applications, for cylindrical algebraic decomposition, integration of rational functions and drawing of curves defined by a bivariate polynomial equation.

The resultant of n homogeneous polynomials in n variables (also called the multivariate resultant, or Macaulay's resultant) is a generalization, introduced by Macaulay, of the usual resultant. It is, with Gröbner bases, one of the main tools of effective elimination theory (elimination theory on computers).

Notation

The resultant of two univariate polynomials A and B is commonly denoted $\operatorname{res}(A,B)$ or $\operatorname{Res}(A,B)$.

In many applications of the resultant, the polynomials depend on several indeterminates and may be considered as univariate polynomials in one of their indeterminates, with polynomials in the other indeterminates as coefficients. In this case, the indeterminate that is selected for defining and computing the resultant is indicated as a subscript: $\operatorname{res}_x(A,B)$ or $\operatorname{Res}_x(A,B)$.

The degrees of the polynomials are used in the definition of the resultant. However, a polynomial of degree d may also be considered as a polynomial of higher degree whose leading coefficients are zero. If such a higher degree is used for the resultant, it is usually indicated as a subscript or a superscript, such as $\operatorname{res}_{d,e}(A,B)$ or $\operatorname{res}_x^{d,e}(A,B)$.

Definition

The resultant of two univariate polynomials over a field or over a commutative ring is commonly defined as the determinant of their Sylvester matrix. More precisely, let

$$A=a_0x^d+a_1x^{d-1}+\cdots+a_d$$

and

$$B=b_0x^e+b_1x^{e-1}+\cdots+b_e$$

be nonzero polynomials of respective degrees d and e. Let us denote by $\mathcal{P}_i$ the vector space (or free module if the coefficients belong to a commutative ring) of dimension i whose elements are the polynomials of degree less than i. The map

$$\varphi\colon \mathcal{P}_e\times\mathcal{P}_d\to\mathcal{P}_{d+e}$$

such that

$$\varphi(P,Q)=AP+BQ$$

is a linear map between two spaces of the same dimension. Over the basis of the powers of x, this map is represented by a square matrix of dimension d + e, which is called the Sylvester matrix of A and B (for many authors and in the article Sylvester matrix, the Sylvester matrix is defined as the transpose of this matrix; this convention is not used here, as it breaks the usual convention for writing the matrix of a linear map).

The resultant of A and B is thus the determinant

$$\operatorname{res}(A,B)=\begin{vmatrix}
a_0 & 0 & \cdots & 0 & b_0 & 0 & \cdots & 0\\
a_1 & a_0 & \cdots & 0 & b_1 & b_0 & \cdots & 0\\
a_2 & a_1 & \ddots & 0 & b_2 & b_1 & \ddots & 0\\
\vdots & \vdots & \ddots & a_0 & \vdots & \vdots & \ddots & b_0\\
a_d & a_{d-1} & \cdots & \vdots & b_e & b_{e-1} & \cdots & \vdots\\
0 & a_d & \ddots & \vdots & 0 & b_e & \ddots & \vdots\\
\vdots & \vdots & \ddots & a_{d-1} & \vdots & \vdots & \ddots & b_{e-1}\\
0 & 0 & \cdots & a_d & 0 & 0 & \cdots & b_e
\end{vmatrix},$$

which has e columns of $a_i$ and d columns of $b_j$ (for simplification, d = e in the displayed determinant).
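
For example, for $A=x^2-3x+2$ and $B=x-3$ (so $d=2$, $e=1$), the columns of the Sylvester matrix are the coefficient vectors of $A$, $xB$ and $B$ over the basis $\{x^2,x,1\}$, and
$$\operatorname{res}(A,B)=\begin{vmatrix}1&1&0\\-3&-3&1\\2&0&-3\end{vmatrix}=2.$$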

In the case of monic polynomials over an integral domain, the resultant is equal to the product

$$\operatorname{res}(A,B)=\prod_{\substack{A(x)=0\\B(y)=0}}(x-y),$$

where x and y run over the roots of the polynomials in an algebraically closed field containing the coefficients. For non-monic polynomials with leading coefficients $a_0$ and $b_0$, respectively, the above product is multiplied by $a_0^e b_0^d$.
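
This gives a quick check of the example above: $A=x^2-3x+2=(x-1)(x-2)$ and $B=x-3$ are monic, and the product of the differences of their roots is $(1-3)(2-3)=2$, which agrees with the determinant computed above.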

Properties

In this section and its subsections, A and B are two polynomials of respective degrees d and e, and their resultant is denoted $\operatorname{res}(A,B)$.

Characterizing properties

  • If d = 0 (that is, if $A=a_0$ is a nonzero constant) then $\operatorname{res}(A,B)=a_0^e$. Similarly, if e = 0, then $\operatorname{res}(A,B)=b_0^d$.
  • The preceding properties characterize the resultant. In other words, the resultant is the unique function of the coefficients of polynomials that has these properties.

Zeros

  • The resultant of two polynomials with coefficients in an integral domain is zero if and only if they have a common divisor of positive degree.
  • The resultant of two polynomials with coefficients in an integral domain is zero if and only if they have a common root in an algebraically closed field containing the coefficients.
  • There exists a polynomial P of degree less than e and a polynomial Q of degree less than d such that $\operatorname{res}(A,B)=AP+BQ$. This is a generalization of Bézout's identity to polynomials over an arbitrary commutative ring. In other words, the resultant of two polynomials belongs to the ideal generated by these polynomials.
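
For instance, for $A=x^2-3x+2$ and $B=x-3$, whose resultant is 2 (see the example above), one may take $P=1$ (of degree less than $e=1$) and $Q=-x$ (of degree less than $d=2$): indeed $A\cdot 1+B\cdot(-x)=(x^2-3x+2)-(x^2-3x)=2=\operatorname{res}(A,B)$.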

Invariance by ring homomorphisms

Let A and B be two polynomials of respective degrees d and e with coefficients in a commutative ring R, and let $\varphi\colon R\to S$ be a ring homomorphism of R into another commutative ring S. Applying $\varphi$ to the coefficients of a polynomial extends $\varphi$ to a homomorphism of polynomial rings $R[x]\to S[x]$, which is also denoted $\varphi$. With this notation, we have:

  • If $\varphi$ preserves the degrees of A and B (that is, if $\deg(\varphi(A))=d$ and $\deg(\varphi(B))=e$), then $\varphi(\operatorname{res}(A,B))=\operatorname{res}(\varphi(A),\varphi(B))$.
  • If $\deg(\varphi(A))<d$ and $\deg(\varphi(B))<e$, then $\varphi(\operatorname{res}(A,B))=0$.
  • If $\deg(\varphi(A))=d$ and $\deg(\varphi(B))<e$, and the leading coefficient of A is $a_0$, then $\varphi(\operatorname{res}(A,B))=\varphi(a_0)^{\,e-\deg(\varphi(B))}\operatorname{res}(\varphi(A),\varphi(B))$.
  • If $\deg(\varphi(A))<d$ and $\deg(\varphi(B))=e$, and the leading coefficient of B is $b_0$, then $\varphi(\operatorname{res}(A,B))=(-1)^{e(d-\deg(\varphi(A)))}\,\varphi(b_0)^{\,d-\deg(\varphi(A))}\operatorname{res}(\varphi(A),\varphi(B))$.

These properties are easily deduced from the definition of the resultant as a determinant. They are mainly used in two situations. For computing a resultant of polynomials with integer coefficients, it is generally faster to compute it modulo several primes and to retrieve the desired resultant with the Chinese remainder theorem. When R is a polynomial ring in other indeterminates, and S is the ring obtained by specializing to numerical values some or all indeterminates of R, these properties may be restated as: if the degrees are preserved by the specialization, then the resultant of the specialization of two polynomials is the specialization of the resultant. This property is fundamental, for example, for cylindrical algebraic decomposition.
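
For example, take $R=\mathbb{Q}[y]$ and let $\varphi\colon\mathbb{Q}[y]\to\mathbb{Q}$ be the specialization $y\mapsto 3$. For $A=x+y$ and $B=x-y$, viewed as polynomials in x, one has
$$\operatorname{res}_x(A,B)=\begin{vmatrix}1&1\\y&-y\end{vmatrix}=-2y,$$
and, since the degrees in x are preserved by this specialization, $\operatorname{res}_x(x+3,\,x-3)=-6=\varphi(-2y)$, as asserted.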

Invariance under change of variable

  • $\operatorname{res}(A(x+a),\,B(x+a))=\operatorname{res}(A(x),\,B(x))$
  • If $A_r(x)=x^dA(1/x)$ and $B_r(x)=x^eB(1/x)$ are the reciprocal polynomials of A and B, respectively, and if $A(0)$ and $B(0)$ are nonzero (so that $A_r$ and $B_r$ also have degrees d and e), then $\operatorname{res}(A_r,B_r)=(-1)^{de}\operatorname{res}(A,B)$.

Invariance under change of polynomials

  • If a and b are nonzero constants (that is, they are independent of the indeterminate x), and A and B are as above, then $\operatorname{res}(aA,bB)=a^e b^d\operatorname{res}(A,B)$.
  • If a is a nonzero constant, $b_0$ is the leading coefficient of B, and C is a polynomial of degree at most $d-e$, then, denoting by f the degree of $aA+CB$, one has $a^e\operatorname{res}(A,B)=(-1)^{e(d-f)}\,b_0^{\,d-f}\operatorname{res}(aA+CB,\,B)$.

These properties imply that in the Euclidean algorithm for polynomials, and in all its variants (pseudo-remainder sequences), the resultant of two successive remainders (or pseudo-remainders) differs from the resultant of the initial polynomials by a factor which is easy to compute. Moreover, the constant a in the above second formula may be chosen so that the successive remainders have their coefficients in the ring of coefficients of the input polynomials. This is the starting idea of the subresultant-pseudo-remainder-sequence algorithm for computing the greatest common divisor and the resultant of two polynomials. This algorithm works for polynomials over the integers or, more generally, over an integral domain, without any division other than exact divisions (that is, without involving fractions). It involves $O(de)$ arithmetic operations, while the computation of the determinant of the Sylvester matrix with standard algorithms requires $O((d+e)^3)$ arithmetic operations.
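
As a small check on the polynomials used above, the Euclidean division of $A=x^2-3x+2$ by $B=x-3$ gives $A=xB+2$, that is, $1\cdot A+(-x)B=2$; this is the case $a=1$, $C=-x$, $f=0$ of the second formula, which reads $1^{1}\operatorname{res}(A,B)=(-1)^{1\cdot(2-0)}\cdot 1^{2-0}\cdot\operatorname{res}(2,\,x-3)=2^{1}=2$, in agreement with the value $\operatorname{res}(A,B)=2$ found in the Definition section.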

Computation

Since the resultant depends polynomially (with integer coefficients) on the roots of P and Q, and it is invariant with respect to permutations of each set of roots, it must be possible to calculate it using an (integer) polynomial formula on the coefficients of P and Q. See elementary symmetric polynomial for details.

More concretely, the resultant is the determinant of the Sylvester matrix (and of the Bézout matrix) associated to P and Q. This is the standard definition of the resultant over a commutative ring.

The above definition of the resultant can be rewritten as

$$\operatorname{res}(P,Q)=p^{\deg Q}\prod_{P(x)=0}Q(x),$$

where p is the leading coefficient of P and the product runs over the roots of P in an algebraically closed field containing the coefficients, so the resultant can be expressed polynomially in terms of the coefficients of Q for each fixed P. By the symmetry of the defining formula, the resultant is also a polynomial in the coefficients of P for each fixed Q. It follows that the resultant is a polynomial in the coefficients of P and Q jointly.

By the symmetry $\operatorname{res}(P,Q)=(-1)^{\deg P\,\deg Q}\operatorname{res}(Q,P)$, the resultant can also be written as $(-1)^{\deg P\,\deg Q}\,q^{\deg P}\prod_{Q(x)=0}P(x)$, where q is the leading coefficient of Q. This latter expression remains unchanged if P is replaced by the remainder P mod Q of the Euclidean division of P by Q, since P and P mod Q take the same values at the roots of Q. If we set P' = P mod Q, this idea can be continued by swapping the roles of P' and Q. However, P' has a degree lower than that of P, so the expression obtained is not exactly res(P', Q). This can be resolved by writing res(P', Q) as a determinant again, where P' has leading zero coefficients. This determinant can now be simplified by iterative expansion with respect to the columns where only the leading coefficient q of Q appears, giving (up to sign) $\operatorname{res}(P,Q)=q^{\deg P-\deg P'}\operatorname{res}(P',Q)$. Continuing this procedure ends up in a variant of Euclid's algorithm.
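
The following Python function is a minimal illustrative sketch of this variant of Euclid's algorithm, written for this article (the names resultant and poly_mod are not taken from any library); it works over the rational numbers, representing a polynomial by its list of coefficients with the leading coefficient first, and applies the relation above, with the sign made explicit, at each step.

from fractions import Fraction

def poly_mod(p, q):
    """Remainder of the Euclidean division of p by q.
    Polynomials are lists of Fractions, leading coefficient first."""
    p = list(p)
    while p and len(p) >= len(q):
        if p[0] == 0:
            p.pop(0)
            continue
        c = p[0] / q[0]
        for i, qi in enumerate(q):
            p[i] -= c * qi
        p.pop(0)                       # the leading term has been cancelled
    while p and p[0] == 0:             # trim remaining leading zeros
        p.pop(0)
    return p

def resultant(P, Q):
    """Resultant of two nonzero polynomials with rational coefficients,
    computed by the Euclid-like recursion described above."""
    P = [Fraction(c) for c in P]
    Q = [Fraction(c) for c in Q]
    dP, dQ = len(P) - 1, len(Q) - 1
    if dQ == 0:                        # res(P, b0) = b0 ** deg P
        return Q[0] ** dP
    if dP == 0:                        # res(a0, Q) = a0 ** deg Q
        return P[0] ** dQ
    if dP < dQ:                        # res(P, Q) = (-1)^(deg P * deg Q) res(Q, P)
        return (-1) ** (dP * dQ) * resultant(Q, P)
    R = poly_mod(P, Q)                 # P' = P mod Q
    if not R:                          # zero remainder: common factor of positive degree
        return 0
    dR = len(R) - 1
    # res(P, Q) = (-1)^(deg Q * (deg P - deg P')) * q^(deg P - deg P') * res(P', Q)
    return (-1) ** (dQ * (dP - dR)) * Q[0] ** (dP - dR) * resultant(R, Q)

print(resultant([1, -3, 2], [1, -3]))  # res(x^2 - 3x + 2, x - 3) = 2

Because the intermediate coefficients are rational numbers, every operation on them hides GCD computations; this is exactly the inefficiency discussed in the next paragraph, which the subresultant pseudo-remainder sequences avoid.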

This procedure needs a number of arithmetic operations on the coefficients that is of the order of the product of the degrees. However, when the coefficients are integers, rational numbers or polynomials, these arithmetic operations imply a number of GCD computations of coefficients which is of the same order, and this makes the algorithm inefficient.

The subresultant pseudo-remainder sequences were introduced to solve this problem and avoid any fraction and any GCD computation of coefficients. A more efficient algorithm is obtained by using the good behavior of the resultant under a ring homomorphism of the coefficients: to compute a resultant of two polynomials with integer coefficients, one computes their resultants modulo sufficiently many prime numbers and then reconstructs the result with the Chinese remainder theorem.
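
The following sketch illustrates the modular approach for integer polynomials; it is written for this article (none of the function names come from a particular library) and, for simplicity, obtains each modular image by reducing the Sylvester matrix modulo a prime and computing its determinant there, rather than by a faster modular resultant algorithm. The caller must supply distinct primes whose product exceeds twice a bound on the absolute value of the resultant (such a bound can be obtained from Hadamard's inequality applied to the Sylvester matrix).

def sylvester_matrix(A, B):
    """Sylvester matrix of the integer polynomials A and B, given as
    coefficient lists with the leading coefficient first."""
    d, e = len(A) - 1, len(B) - 1
    rows = [[0] * i + A + [0] * (e - 1 - i) for i in range(e)]
    rows += [[0] * i + B + [0] * (d - 1 - i) for i in range(d)]
    return rows

def det_mod(M, p):
    """Determinant of the integer matrix M modulo the prime p,
    by Gaussian elimination over Z/pZ."""
    n = len(M)
    M = [[x % p for x in row] for row in M]
    det = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col]), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det = det * M[col][col] % p
        inv = pow(M[col][col], p - 2, p)          # modular inverse (p is prime)
        for r in range(col + 1, n):
            f = M[r][col] * inv % p
            M[r] = [(a - f * b) % p for a, b in zip(M[r], M[col])]
    return det % p

def resultant_by_crt(A, B, primes):
    """Reconstruct res(A, B) from its images modulo the given distinct primes,
    whose product must exceed 2*|res(A, B)|."""
    S = sylvester_matrix(A, B)
    r, m = 0, 1
    for p in primes:
        rp = det_mod(S, p)
        t = (rp - r) * pow(m % p, p - 2, p) % p   # Chinese remainder step
        r, m = r + m * t, m * p
    return r if r <= m // 2 else r - m            # symmetric representative of r mod m

print(resultant_by_crt([1, -3, 2], [1, -3], [10007, 10009]))   # prints 2

In practice one would combine this with a fast resultant computation modulo each prime (for example, the Euclidean variant above run over Z/pZ), choosing primes that do not divide the leading coefficients so that, by the invariance properties above, the modular image of the resultant is the resultant of the modular images.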

Applications

  • If x and y are algebraic numbers such that $P(x)=Q(y)=0$ (where Q has degree n), we see that $z=x+y$ is a root of the resultant (in x) of $P(x)$ and $Q(z-x)$, and that $t=xy$ is a root of the resultant (in x) of $P(x)$ and $x^nQ(t/x)$; combined with the fact that $1/y$ is a root of $y^nQ(1/y)$, this shows that the set of algebraic numbers is a field (a worked example is given after this list).
  • The discriminant of a polynomial is, up to sign, the quotient by its leading coefficient of the resultant of the polynomial and its derivative; more precisely, the discriminant of a polynomial A of degree d and leading coefficient $a_0$ is $(-1)^{d(d-1)/2}\operatorname{res}(A,A')/a_0$.
  • Resultants can be used in algebraic geometry to determine intersections. For example, let $f(x,y)=0$ and $g(x,y)=0$ define algebraic curves in $\mathbb{A}^2_k$. If f and g are viewed as polynomials in x with coefficients in $k[y]$, then the resultant of f and g with respect to x is a polynomial in y whose roots are the y-coordinates of the intersection points of the curves and of the common asymptotes parallel to the x axis (see the worked example after this list).
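
For a worked illustration of the first item, take $x=\sqrt2$, a root of $P=x^2-2$, and $y=\sqrt3$, a root of $Q=y^2-3$; then
$$\operatorname{res}_x\bigl(x^2-2,\;(z-x)^2-3\bigr)=\bigl((z-\sqrt2)^2-3\bigr)\bigl((z+\sqrt2)^2-3\bigr)=z^4-10z^2+1,$$
a polynomial with rational coefficients having $z=\sqrt2+\sqrt3$ among its roots. Likewise, for the last item, the circle $f=x^2+y^2-1$ and the line $g=x-y$ give $\operatorname{res}_x(f,g)=2y^2-1$, whose roots $y=\pm1/\sqrt2$ are the y-coordinates of the two intersection points.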

Generalizations and related concepts

The resultant is sometimes defined for two homogeneous polynomials in two variables, in which case it vanishes when the polynomials have a common non-zero solution, or equivalently when they have a common zero on the projective line. More generally, the multipolynomial resultant,[2] multivariate resultant or Macaulay's resultant of n homogeneous polynomials in n variables is a polynomial in their coefficients that vanishes when they have a common non-zero solution, or equivalently when the n hypersurfaces corresponding to them have a common zero in (n − 1)-dimensional projective space. The multivariate resultant is, with Gröbner bases, one of the main tools of effective elimination theory (elimination theory on computers).

See also

Notes

  1. Salmon 1885, lesson VIII, p. 66.
  2. Cox, David; Little, John; O'Shea, Donal (2005), Using Algebraic Geometry, Springer Science+Business Media, ISBN 978-0387207339, Chapter 3: Resultants.

References

  • Salmon, George (1885). Lessons Introductory to the Modern Higher Algebra (4th ed.). Dublin: Hodges, Figgis, and Co.

External links