
Geometrical properties of polynomial roots



In mathematics, a univariate polynomial is an expression of the form

$$a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0,$$

where the a_i belong to some field, which, in this article, is always the field of the complex numbers, and a_n ≠ 0. The natural number n is known as the degree of the polynomial.

In the following, p will be used to represent the polynomial, so we have

$$p(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0.$$
A root of the polynomial p is a solution of the equation p = 0: that is, a complex number a such that p(a) = 0.

The fundamental theorem of algebra combined with the factor theorem states that the polynomial p has n roots in the complex plane, if they are counted with their multiplicities.

This article concerns various properties of the roots of p, including their location in the complex plane.

Continuous dependence on coefficients

The n roots of a polynomial of degree n depend continuously on the coefficients.

This result implies that the eigenvalues of a matrix depend continuously on the matrix. A proof can be found in a book by Tyrtyshnikov.[1]

The problem of approximating the roots given the coefficients is ill-conditioned. See, for example, Wilkinson's polynomial.
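The ill-conditioning is easy to observe numerically. The sketch below (the perturbation is the classic one for Wilkinson's polynomial; the code itself is only illustrative) changes a single coefficient by 2^−23 and measures how far the computed roots move:

```python
import numpy as np

# Wilkinson's polynomial (x-1)(x-2)...(x-20): a tiny change in the x^19
# coefficient moves several roots by a large amount.
coeffs = np.poly(np.arange(1, 21))      # coefficients, highest degree first
perturbed = coeffs.copy()
perturbed[1] -= 2.0 ** -23              # the classic tiny perturbation

r0 = np.sort_complex(np.roots(coeffs))
r1 = np.sort_complex(np.roots(perturbed))
print(np.max(np.abs(r1 - r0)))          # of order 1: some roots move dramatically
```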

Algebraic expression of roots

All roots of polynomials with rational coefficients are algebraic numbers, by definition of the latter. If the degree n of the polynomial is no greater than 4, all the roots of the polynomial can be written as algebraic expressions in terms of the coefficients—that is, applying only the four basic arithmetic operations and the extraction of n-th roots. But by the Abel-Ruffini theorem, this cannot be done in general for higher-degree equations.
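As an illustration (the example polynomials are arbitrary), a computer algebra system such as SymPy returns explicit radical expressions up to degree 4, while a generic quintic yields only implicit root objects:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.solve(x**4 - 2, x))        # quartic: roots expressed with radicals
print(sp.solve(x**5 - x - 1, x))    # quintic: CRootOf objects, no radical form
```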

Complex conjugate root theorem

The complex conjugate root theorem states that if the coefficients of a polynomial are real, then the non-real roots appear in pairs of the type a ± ib.

For example, the equation x^2 + 1 = 0 has roots ±i.
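A quick numerical check (examples mine):

```python
import numpy as np

# Real coefficients: non-real roots come out in conjugate pairs a ± ib.
print(np.roots([1, 0, 1]))              # x^2 + 1: roots i and -i
print(np.roots([1, -2, 2, -2, 1]))      # roots i, -i and a double root at 1
```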

Radical conjugate roots

It can be proved that if a polynomial P(x) with rational coefficients has a + √b as a root, where a, b are rational and √b is irrational, then a − √b is also a root. First observe that

$$\left(x - \left(a + \sqrt{b}\right)\right)\left(x - \left(a - \sqrt{b}\right)\right) = x^2 - 2ax + \left(a^2 - b\right).$$

Denote this quadratic polynomial by D(x). Then, by the Euclidean division of polynomials,

$$P(x) = D(x)\,Q(x) + cx + d,$$

where c, d are rational numbers (by virtue of the fact that the coefficients of P(x) and D(x) are all rational). But a + √b is a root of P(x), and D(a + √b) = 0, so

$$c\left(a + \sqrt{b}\right) + d = (ca + d) + c\sqrt{b} = 0.$$

It follows that c and d must both be zero, since otherwise √b = −(ca + d)/c would be a rational number, contradicting the irrationality of √b. Hence P(x) = D(x)Q(x), for some quotient polynomial Q(x), and D(x) is a factor of P(x).[2] Since a − √b is a root of D(x), it is therefore also a root of P(x).

This property may be generalized as: If an irreducible polynomial P has a root in common with a polynomial Q, then P divides Q evenly.
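The argument is easy to check with a computer algebra system; the polynomial below is a hypothetical example with 1 + √2 as a root:

```python
import sympy as sp

x = sp.symbols('x')
P = sp.expand((x**2 - 2*x - 1) * (x - 3))   # rational coefficients, root 1 + sqrt(2)
D = x**2 - 2*x - 1                          # (x - (1 + sqrt(2)))(x - (1 - sqrt(2)))
quotient, remainder = sp.div(P, D, x)
print(quotient, remainder)                  # remainder 0: D(x) is a factor of P(x)
print(sp.solve(P, x))                       # [3, 1 - sqrt(2), 1 + sqrt(2)]
```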

Bounds on (complex) polynomial roots

Based on the Rouché theorem

A very general class of bounds on the magnitude of roots is implied by the Rouché theorem. If there is a positive real number R and a coefficient index k such that

$$|a_k|\,R^k > \sum_{i \neq k} |a_i|\,R^i,$$

then there are exactly k (counted with multiplicity) roots of absolute value less than R. For k = 0 and k = n there is always a solution to this inequality, for example

  • for k = n,
$$R = 1 + \frac{1}{|a_n|}\max_{0 \le i \le n-1} |a_i| \qquad\text{or}\qquad R = \max\left\{1,\ \sum_{i=0}^{n-1}\left|\frac{a_i}{a_n}\right|\right\}$$
are upper bounds for the size of all roots,

  • for k = 0,
$$R = \left(1 + \frac{1}{|a_0|}\max_{1 \le i \le n} |a_i|\right)^{-1} \qquad\text{or}\qquad R = \left(\max\left\{1,\ \sum_{i=1}^{n}\left|\frac{a_i}{a_0}\right|\right\}\right)^{-1}$$
are lower bounds for the size of all of the roots (assuming a_0 ≠ 0, that is, 0 is not a root).

  • for all other indices, the function
$$h_k(R) = \sum_{i \neq k} |a_i|\,R^{\,i-k} - |a_k|$$
is convex on the positive real numbers, thus the minimizing point is easy to determine numerically. If the minimal value is negative, one has found additional information on the location of the roots: any R with h_k(R) < 0 gives a circle |x| < R containing exactly k roots, counted with multiplicity (a numerical sketch follows this list).
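The following sketch scans a grid of radii and reports every pair (k, R) satisfying the inequality above; the function name, the radius grid and the example polynomial are illustrative choices:

```python
import numpy as np

def separating_radii(coeffs, radii):
    """coeffs[i] = a_i (lowest degree first); report every (k, R) with |a_k| R^k
    larger than the sum of the other |a_i| R^i, which certifies that exactly k
    roots have absolute value less than R."""
    a = np.abs(np.asarray(coeffs, dtype=float))
    n = len(a) - 1
    found = []
    for R in radii:
        terms = a * R ** np.arange(n + 1)
        for k in range(n + 1):
            if terms[k] > terms.sum() - terms[k]:
                found.append((k, R))
    return found

# p(x) = x^2 - 101x + 100 = (x - 1)(x - 100)
print(separating_radii([100, -101, 1], [0.5, 10.0, 200.0]))
# [(0, 0.5), (1, 10.0), (2, 200.0)]: no root in |x|<0.5, one in |x|<10, two in |x|<200
```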

One can increase the separation of the roots and thus the ability to find additional separating circles from the coefficients, by applying the root squaring operation of the Dandelin-Graeffe iteration to the polynomial.
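A single Dandelin–Graeffe root-squaring step can be written in a few lines, using the identity q(x^2) = (−1)^n p(x) p(−x); the helper name and the example are illustrative:

```python
import numpy as np

def graeffe_step(coeffs):
    """One root-squaring step: returns the coefficients (highest degree first)
    of a polynomial whose roots are the squares of the roots of p."""
    p = np.poly1d(coeffs)
    n = p.order
    pm = np.poly1d([c * (-1) ** (n - i) for i, c in enumerate(coeffs)])  # p(-x)
    prod = ((-1) ** n) * (p * pm)           # q(x^2) = (-1)^n p(x) p(-x)
    return prod.coeffs[::2]                 # keep the even powers only

q = graeffe_step([1.0, -6.0, 11.0, -6.0])   # p(x) = (x-1)(x-2)(x-3)
print(np.roots(q))                          # approximately 9, 4, 1
```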

A different approach is to use the Gershgorin circle theorem applied to some companion matrix of the polynomial, as is done in the Weierstrass (Durand–Kerner) method. From initial estimates of the roots, which might be quite random, one gets unions of circles that contain the roots of the polynomial.
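A minimal sketch of this idea (helper names and the example are mine): form a companion matrix of the polynomial and read off its Gershgorin discs, whose union contains all the roots.

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of a polynomial given highest degree first
    (leading coefficient assumed nonzero)."""
    a = np.asarray(coeffs, dtype=complex)
    n = len(a) - 1
    C = np.zeros((n, n), dtype=complex)
    C[1:, :-1] = np.eye(n - 1)              # ones on the sub-diagonal
    C[:, -1] = -a[:0:-1] / a[0]             # last column: -(coeff of x^j)/(leading), j = 0..n-1
    return C

def gershgorin_discs(C):
    radii = np.sum(np.abs(C), axis=1) - np.abs(np.diag(C))
    return list(zip(np.diag(C), radii))     # (centre, radius) of each disc

C = companion([1.0, -6.0, 11.0, -6.0])      # p(x) = (x-1)(x-2)(x-3)
print(np.linalg.eigvals(C))                 # the roots 1, 2, 3
print(gershgorin_discs(C))                  # discs whose union contains them
```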

Other bounds

Useful upper bounds for the magnitudes of all of a polynomial's roots[3] include the near-optimal Fujiwara bound[4]

$$2\,\max\left\{ \left|\frac{a_{n-1}}{a_n}\right|,\ \left|\frac{a_{n-2}}{a_n}\right|^{\frac{1}{2}},\ \ldots,\ \left|\frac{a_1}{a_n}\right|^{\frac{1}{n-1}},\ \left|\frac{a_0}{2a_n}\right|^{\frac{1}{n}} \right\},$$

which is an improvement (as the geometric mean) of Kojima's bound:[5]

$$2\,\max\left\{ \left|\frac{a_{n-1}}{a_n}\right|,\ \left|\frac{a_{n-2}}{a_{n-1}}\right|,\ \ldots,\ \left|\frac{a_0}{2a_1}\right| \right\}.$$

Other bounds are the Cauchy bound[6]

$$1 + \max\left\{ \left|\frac{a_{n-1}}{a_n}\right|,\ \left|\frac{a_{n-2}}{a_n}\right|,\ \ldots,\ \left|\frac{a_0}{a_n}\right| \right\}$$
and the Lagrange bound[7][8]

$$\max\left\{1,\ \sum_{i=0}^{n-1}\left|\frac{a_i}{a_n}\right|\right\}$$
or, more generally, for a Hölder conjugate pair 1 ≤ p, q ≤ ∞ (with 1/p + 1/q = 1), the bound

$$R_p = \left(1 + \left(\sum_{i=0}^{n-1}\left|\frac{a_i}{a_n}\right|^{p}\right)^{q/p}\right)^{1/q},$$

which reduces to the Lagrange bound above in the limit p → 1 and to the Cauchy bound for p = ∞; this is the bound |ζ| ≤ R_p established in the proof below.
These expressions never return a bound smaller than 1, so they give no useful information for polynomials whose roots all have absolute value less than 1.
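For comparison, the following sketch evaluates these bounds on a small example and prints the true largest root magnitude (function name and example are mine; Kojima's bound requires the coefficients involved in its ratios to be nonzero):

```python
import numpy as np

def root_bounds(coeffs):
    """coeffs[i] = a_i (lowest degree first, a_n != 0, no zero coefficients)."""
    a = np.abs(np.asarray(coeffs, dtype=float))
    n = len(a) - 1
    fujiwara = 2 * max((a[n - k] / a[n] / (2 if k == n else 1)) ** (1 / k)
                       for k in range(1, n + 1))
    kojima = 2 * max(a[i - 1] / a[i] / (2 if i == 1 else 1)
                     for i in range(1, n + 1))
    cauchy = 1 + max(a[:n]) / a[n]
    lagrange = max(1, a[:n].sum() / a[n])
    return fujiwara, kojima, cauchy, lagrange

coeffs = [-6, 11, -6, 1]                        # p(x) = x^3 - 6x^2 + 11x - 6, roots 1, 2, 3
print(root_bounds(coeffs))
print(np.max(np.abs(np.roots(coeffs[::-1]))))   # true largest root magnitude: 3.0
```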

A tighter upper bound on the magnitudes of the roots is[9]

Without loss of generality, assume the polynomial to be monic with general term a_i x^i. Let {a_j} be the set of its negative coefficients. An upper bound for the positive real roots is given by the sum of the two largest numbers in the set {|a_j|^{1/(n−j)}}. This is an improvement on Fujiwara's bound, which uses twice the maximum value of this set as its upper bound.

A similar bound, also due to Lagrange, holds for a polynomial with complex coefficients. Again assume the polynomial to be monic with general term a_i x^i. Then the upper bound for the absolute values of the roots is given by the sum of the two greatest values in the set {|a_i|^{1/(n−i)} : 0 ≤ i ≤ n − 1}. Again this is an improvement on Fujiwara's bound, which uses twice the maximum value of this set as its upper bound.

A third bound, also due to Lagrange, holds for a polynomial with real coefficients. Write the polynomial with general term a_i x^{n−i} and a_0 > 0, let a_d x^{n−d} be its first term with a negative coefficient, and let A be the largest of the absolute values of its negative coefficients. Then 1 + (A / a_0)^{1/d} is an upper bound for the positive roots of the polynomial.
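The two bounds on positive roots described above can be evaluated in a few lines; the function names and the example polynomial are illustrative choices:

```python
import numpy as np

def sum_two_largest_bound(coeffs):
    """Monic polynomial with general term a_i x^i (lowest degree first)."""
    a = np.asarray(coeffs, dtype=float)
    n = len(a) - 1
    vals = sorted((abs(a[j]) ** (1.0 / (n - j)) for j in range(n) if a[j] < 0),
                  reverse=True)
    return sum(vals[:2])                    # sum of the (at most) two largest

def first_negative_term_bound(coeffs):
    a = np.asarray(coeffs, dtype=float)[::-1]    # now highest degree first
    negatives = [i for i, c in enumerate(a) if c < 0]
    if not negatives:
        return 0.0                          # no negative coefficient, no positive root
    d = negatives[0]                        # index of the first negative term
    A = max(-c for c in a if c < 0)
    return 1.0 + (A / a[0]) ** (1.0 / d)

coeffs = [6, -5, -2, 1]                     # p(x) = x^3 - 2x^2 - 5x + 6, positive roots 1, 3
print(sum_two_largest_bound(coeffs), first_negative_term_bound(coeffs))   # 4.24..., 6.0
print(np.roots(coeffs[::-1]))
```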

Sun and Hsieh obtained an improvement on Cauchy's bound.[10] Assume the polynomial is monic with general term a_i x^i. Sun and Hsieh showed that upper bounds 1 + d_1 and 1 + d_2 can be obtained from the following equations.

d_2 is the positive root of the cubic equation

They also noted that d_2 ≤ d_1.

Proof

Let ζ be a root of the polynomial

$$p(x) = x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0$$

(after dividing by a_n we may assume the polynomial to be monic, with coefficient vector a = (a_0, …, a_{n−1})); in order to prove the inequality |ζ| ≤ R_p we can assume, of course, |ζ| > 1. Writing the equation as

$$-\zeta^n = a_{n-1}\zeta^{n-1} + \cdots + a_1\zeta + a_0,$$

and using Hölder's inequality we find

$$|\zeta|^n \le \|a\|_p\,\left\|\left(\zeta^{n-1}, \ldots, \zeta, 1\right)\right\|_q.$$

Now, if p = 1, this is

$$|\zeta|^n \le \|a\|_1 \max\left\{|\zeta|^{n-1}, \ldots, |\zeta|, 1\right\} = \|a\|_1\,|\zeta|^{n-1},$$

thus

$$|\zeta| \le \max\left\{1,\ \|a\|_1\right\}.$$

In the case 1 < p ≤ ∞, taking into account the summation formula for a geometric progression, we have

$$|\zeta|^n \le \|a\|_p\left(|\zeta|^{q(n-1)} + \cdots + |\zeta|^q + 1\right)^{1/q} = \|a\|_p\left(\frac{|\zeta|^{qn} - 1}{|\zeta|^q - 1}\right)^{1/q} \le \|a\|_p\left(\frac{|\zeta|^{qn}}{|\zeta|^q - 1}\right)^{1/q},$$

thus

$$|\zeta|^{nq} \le \|a\|_p^q\,\frac{|\zeta|^{qn}}{|\zeta|^q - 1}$$

and simplifying we get

$$|\zeta|^q \le 1 + \|a\|_p^q.$$

Therefore

$$|\zeta| \le \left(1 + \|a\|_p^q\right)^{\frac{1}{q}} = R_p$$

holds, for all 1 ≤ p ≤ ∞ (the case p = 1 being understood as the limit q → ∞, which gives R_1 = max{1, ‖a‖_1}).
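A quick numerical check of the bound just proved (the helper name and example are mine; a is the coefficient vector of the monic polynomial):

```python
import numpy as np

def holder_root_bound(a, p):
    """R_p = (1 + ||a||_p^q)^(1/q) for the monic polynomial with coefficients a,
    with the limiting cases p = 1 and p = inf handled separately."""
    a = np.abs(np.asarray(a, dtype=float))
    if p == 1:
        return max(1.0, a.sum())            # q = infinity
    if p == np.inf:
        return 1.0 + a.max()                # q = 1 (Cauchy bound)
    q = p / (p - 1.0)
    return (1.0 + np.linalg.norm(a, p) ** q) ** (1.0 / q)

a = [-6.0, 11.0, -6.0]                      # p(x) = x^3 - 6x^2 + 11x - 6, roots 1, 2, 3
for p in (1, 1.5, 2, np.inf):
    print(p, holder_root_bound(a, p))       # every bound is at least 3
```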

Landau's inequality

The previous bounds are upper bounds for each root separately. Landau's inequality provides an upper bound for the absolute values of the product of the roots that have an absolute value greater than one. This bound for the product of roots is not much greater than the preceding bounds of each root separately.[11]

Let z_1, …, z_n be the n roots of the polynomial p, and

$$M(p) = |a_n| \prod_{k=1}^{n} \max\{1, |z_k|\}$$

(the Mahler measure of p). Then

$$M(p) \le \sqrt{\sum_{i=0}^{n} |a_i|^2}.$$

This bound is useful to bound the coefficients of a divisor of a polynomial: if

$$q(x) = \sum_{i=0}^{m} b_i x^i$$

is a divisor of p, then

$$\sum_{i=0}^{m} |b_i| \le 2^m\,\frac{|b_m|}{|a_n|}\,\sqrt{\sum_{i=0}^{n} |a_i|^2}.$$
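A small numerical check of this divisor bound on a hypothetical pair of polynomials (chosen only for illustration), using SymPy:

```python
import sympy as sp

x = sp.symbols('x')
q = sp.Poly(x**3 - 7*x + 6, x)              # divisor, roots 1, 2, -3
p = q * sp.Poly(x**2 + x + 17, x)           # p is a multiple of q
m = q.degree()
lhs = sum(abs(c) for c in q.all_coeffs())                       # sum of |b_i|
rhs = 2**m * abs(q.LC()) / abs(p.LC()) * sp.sqrt(sum(c**2 for c in p.all_coeffs()))
print(lhs, sp.N(rhs), bool(lhs <= rhs))     # the bound holds with room to spare
```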

Bounds on positive polynomial roots

There also exist bounds on just the positive roots of polynomials; these bounds were developed by Akritas, Strzeboński and Vigklas based on previous work by Doru Stefanescu. They are used in the computer algebra systems Mathematica, Sage, SymPy, Xcas etc.[12][13]

Polynomials with real roots

It is possible to determine the bounds of the roots of a polynomial using Samuelson's inequality. This method is due to a paper by Laguerre.[14]

Let

$$p(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$$

be a polynomial with all real roots. Then its roots are located in the interval with endpoints

$$x_\pm = -\frac{a_{n-1}}{n a_n} \pm \frac{n-1}{n a_n}\sqrt{a_{n-1}^2 - \frac{2n}{n-1}\,a_n a_{n-2}}.$$

Example: The polynomial x^4 + 5x^3 + 5x^2 − 5x − 6 has four real roots −3, −2, −1 and 1. The above formula gives

$$x_\pm = -\frac{5}{4} \pm \frac{3}{4}\sqrt{\frac{35}{3}},$$

thus its roots are contained in the interval [−3.8117, 1.3117].
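The endpoints are easy to evaluate numerically; the helper below (name mine) implements the formula above and checks it on the example:

```python
import numpy as np

def samuelson_interval(coeffs):
    """Endpoints x_-, x_+ for a polynomial with only real roots,
    coefficients given highest degree first with a_n > 0."""
    a = np.asarray(coeffs, dtype=float)
    n = len(a) - 1
    an, an1, an2 = a[0], a[1], a[2]
    centre = -an1 / (n * an)
    half = (n - 1) / (n * an) * np.sqrt(an1**2 - 2 * n / (n - 1) * an * an2)
    return centre - half, centre + half

p = [1.0, 5.0, 5.0, -5.0, -6.0]             # (x+3)(x+2)(x+1)(x-1)
print(samuelson_interval(p))                # approximately (-3.8117, 1.3117)
print(np.roots(p))                          # -3, -2, -1, 1
```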

Gauss–Lucas theorem

The Gauss–Lucas theorem states that the convex hull of the roots of a polynomial contains the roots of the derivative of the polynomial.

A sometimes useful corollary is that if all roots of a polynomial have positive real part, then so do the roots of all derivatives of the polynomial.
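A numerical illustration of this corollary (the roots below are chosen arbitrarily, all with positive real part):

```python
import numpy as np

roots = np.array([1.0, 2.0 + 1.0j, 2.0 - 1.0j, 0.5 + 3.0j, 0.5 - 3.0j])
p = np.poly(roots)                          # polynomial with these roots
dp_roots = np.roots(np.polyder(p))          # roots of the derivative
print(dp_roots)                             # all lie in the convex hull of `roots`
print(np.all(dp_roots.real > 0))            # True: positive real parts are inherited
```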

A related result is Bernstein's inequality. It states that for a polynomial P of degree n with derivative P′ we have

$$\max_{|z| \le 1} \left|P'(z)\right| \le n \max_{|z| \le 1} \left|P(z)\right|.$$
Statistical repartition of the roots

The statistical properties of the roots of a random polynomial have been the subject of several studies. Let

$$p(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$$

be a random polynomial. If the coefficients a_i are independently and identically distributed with a mean of zero, the real roots are mostly located near ±1. The complex roots can be shown to be on or close to the unit circle.

If the coefficients are Gaussian distributed with a mean of zero and variance of σ then the mean density of real roots is given by the Kac formula[15][16]

$$m(x) = \frac{\sqrt{A(x)\,C(x) - B(x)^2}}{\pi\,A(x)},$$

where

$$A(x) = \sigma \sum_{i=0}^{n} x^{2i}, \qquad B(x) = \frac{1}{2}\,\frac{d}{dx}A(x), \qquad C(x) = \frac{1}{4}\,\frac{d^2}{dx^2}A(x) + \frac{1}{4x}\,\frac{d}{dx}A(x).$$
When the coefficients are Gaussian distributed with a non-zero mean and variance of σ, a similar but more complex formula is known.[citation needed]

Asymptotic results

For large n, a number of asymptotic formulae are known. For a fixed x,

$$m(x) \sim \frac{1}{\pi\,\left|1 - x^2\right|},$$

and for x = ±1,

$$m(\pm 1) \sim \frac{1}{\pi}\sqrt{\frac{n^2 - 1}{12}},$$

where m(x) is the mean density of real roots. The expected number of real roots is

$$E(N_n) = \frac{2}{\pi}\ln n + C + \frac{2}{n\pi} + O\!\left(n^{-2}\right),$$
where C is a constant approximately equal to 0.6257358072 and O() is the order operator.

This result has been shown by Kac, Erdős and others to be insensitive to the actual distribution of coefficients. Numerical testing of this formula has confirmed these earlier results.
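Such a numerical test is easy to reproduce. The following Monte Carlo sketch (degree, sample size and tolerance are arbitrary choices) compares the empirical mean number of real roots of random Gaussian polynomials with the formula above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 2000
counts = []
for _ in range(trials):
    r = np.roots(rng.standard_normal(n + 1))        # random degree-n polynomial
    counts.append(np.sum(np.abs(r.imag) < 1e-8))    # count the (numerically) real roots
print(np.mean(counts))                              # empirical average
print(2 / np.pi * np.log(n) + 0.6257358072)         # asymptotic prediction, about 3.1
```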


Notes

  1. ^ Tyrtyshnikov, E.E. (1997). A Brief Introduction to Numerical Analysis. Birkhäuser Boston. ISBN 0-8176-3916-0. Zbl 0874.65001.
  2. ^ S. Sastry (2004). Engineering Mathematics. PHI Learning. pp. 72–73. ISBN 81-203-2579-6.
  3. ^ M. Marden (1966). Geometry of Polynomials. Amer. Math. Soc. ISBN 0-8218-1503-2.
  4. ^ Fujiwara M (1916) Über die obere Schranke des absoluten Betrages der Wurzeln einer algebraischen Gleichung, Tôhoku Math J 10: 167–171
  5. ^ Kojima T (1917) On the limits of the roots of an algebraic equation, Tôhoku Math J 11 119–127
  6. ^ Cauchy AL (1829) Exercices de mathématiques. Oeuvres 2 (9), p. 122
  7. ^ Lagrange J–L (1798) Traité de la résolution des équations numériques. Paris.
  8. ^ Hirst HP & Macey WT (1997) Bounding the roots of polynomials. Coll Math J 28 (4) 292
  9. ^ Cohen, Alan M., "Bounds for the roots of polynomial equations", Mathematical Gazette 93, March 2009, 87–88.
  10. ^ Sun YJ and Hsieh JG (1996) A note on circular bound of polynomial zeros, IEEE Trans Circuits Syst. I 43, 476-478
  11. ^ Mignotte, Maurice, "Some useful bounds". Computer algebra, 259–263, Springer, Vienna, 1983
  12. ^ Vigklas, Panagiotis S. (2010). Upper bounds on the values of the positive roots of polynomials (PDF). Ph.D. thesis, University of Thessaly, Greece.
  13. ^ Akritas, Alkiviadis G. (2009). "Linear and Quadratic Complexity Bounds on the Values of the Positive Roots of Polynomials". Journal of Universal Computer Science. 15 (3): 523–537.
  14. ^ Laguerre E (1880). "Sur une méthode pour obtenir par approximation les racines d'une équation algébrique qui a toutes ses racines réelles". Nouvelles Annales de Mathématiques. 2. 19: 161–172, 193–202.
  15. ^ Kac M (1943) Bull Am Math Soc 49, 314
  16. ^ Kac M (1948) Proc London Math Soc 50, 390
