In mathematics, a univariate polynomial of degree n with real or complex coefficients has n complex roots, if counted with their multiplicities. They form a multiset of n points in the complex plane. This article concerns the geometry of these points, that is, the information about their localization in the complex plane that can be deduced from the degree and the coefficients of the polynomial.
Some of these geometrical properties are related to a single polynomial, such as upper bounds on the absolute values of the roots, which define a disk containing all roots, or lower bounds on the distance between two roots. Such bounds are widely used for root-finding algorithms for polynomials, either for tuning them, or for computing their computational complexity.
Some other properties are probabilistic, such as the expected number of real roots of a random polynomial of degree n with real coefficients, which, for n sufficiently large, is less than $\frac{2}{\pi}\ln n + 1$.
In this article, a polynomial that is considered is always denoted
$$p(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0,$$
where $a_0,\ldots,a_n$ are real or complex numbers and $a_n\neq 0$; thus n is the degree of the polynomial.
The n roots of a polynomial of degree n depend continuously on the coefficients. For simple roots, this results immediately from the implicit function theorem. This is true also for multiple roots, but some care is needed for the proof.
A small change of coefficients may induce a dramatic change of the roots, including the change of a real root into a complex root with a rather large imaginary part (see Wilkinson's polynomial). A consequence is that, for classical numeric root-finding algorithms, the problem of approximating the roots given the coefficients can be ill-conditioned for many inputs.
The complex conjugate root theorem states that if the coefficients of a polynomial are real, then the non-real roots appear in pairs of the form (a + ib, a − ib).
It follows that the roots of a polynomial with real coefficients are mirror-symmetric with respect to the real axis.
This can be extended to algebraic conjugation: the roots of a polynomial with rational coefficients are conjugate (that is, invariant) under the action of the Galois group of the polynomial. However, this symmetry can rarely be interpreted geometrically.
Upper bounds on the absolute values of polynomial roots are widely used for root-finding algorithms, either for limiting the regions where roots should be searched, or for the computation of the computational complexity of these algorithms.
Many such bounds have been given, and the sharpest one depends generally on the specific sequence of coefficients that is considered. Most bounds are greater than or equal to one, and are thus not sharp for a polynomial that has only roots of absolute value lower than one. However, such polynomials are very rare, as shown below.
Any upper bound on the absolute values of roots provides a corresponding lower bound. In fact, if $a_0\neq 0$ and U is an upper bound of the absolute values of the roots of
$$p(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_0,$$
then 1/U is a lower bound of the absolute values of the roots of
$$x^n\,p(1/x)=a_0x^n+a_1x^{n-1}+\cdots+a_n,$$
since the roots of either polynomial are the multiplicative inverses of the roots of the other. Therefore, in the remainder of the article, lower bounds will not be given explicitly.
Lagrange and Cauchy were the first to provide upper bounds on all complex roots.[1] Lagrange's bound is[2]
$$\max\left(1,\ \sum_{i=0}^{n-1}\left|\frac{a_i}{a_n}\right|\right),$$
and Cauchy's bound is[3]
$$1+\max_{0\le i\le n-1}\left|\frac{a_i}{a_n}\right|.$$
Lagrange's bound is sharper (smaller) than Cauchy's bound only when 1 is larger than the sum of all the ratios $|a_i/a_n|$ but the largest one. This is relatively rare in practice, and explains why Cauchy's bound is more widely used than Lagrange's.
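Both bounds are computed directly from the coefficients; the following is a minimal Python sketch (the function names are ours), with the coefficients listed as $(a_0,\ldots,a_n)$:

```python
def lagrange_bound(a):
    """Lagrange's bound max(1, sum |a_i / a_n|) for a = [a_0, ..., a_n]."""
    an = abs(a[-1])
    return max(1.0, sum(abs(c) / an for c in a[:-1]))

def cauchy_bound(a):
    """Cauchy's bound 1 + max |a_i / a_n| for a = [a_0, ..., a_n]."""
    an = abs(a[-1])
    return 1.0 + max(abs(c) / an for c in a[:-1])

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3); all roots have |z| <= 3
a = [-6, 11, -6, 1]
print(lagrange_bound(a))  # 23.0
print(cauchy_bound(a))    # 12.0
```

Both bounds are far from sharp in this example, which is typical; their interest lies in their simplicity.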
Both bounds result from the Gershgorin circle theorem applied to the companion matrix of the polynomial and its transpose. They can also be proved by elementary methods.
Proof of Lagrange's and Cauchy's bounds. If z is a root of the polynomial and |z| ≥ 1, one has
$$|a_n|\,|z|^n=\left|\sum_{i=0}^{n-1}a_iz^i\right|\le\sum_{i=0}^{n-1}|a_i|\,|z|^i\le|z|^{n-1}\sum_{i=0}^{n-1}|a_i|.$$
Dividing by $|a_n|\,|z|^{n-1}$, one gets
$$|z|\le\sum_{i=0}^{n-1}\left|\frac{a_i}{a_n}\right|,$$
which is Lagrange's bound when there is at least one root of absolute value larger than 1. Otherwise, 1 is a bound on the roots, and is not larger than Lagrange's bound.
Similarly, for Cauchy's bound, one has, if |z| ≥ 1,
$$|a_n|\,|z|^n\le\max_{i<n}|a_i|\sum_{i=0}^{n-1}|z|^i=\max_{i<n}|a_i|\,\frac{|z|^n-1}{|z|-1}\le\max_{i<n}|a_i|\,\frac{|z|^n}{|z|-1}.$$
Thus
$$|a_n|\,(|z|-1)\le\max_{i<n}|a_i|.$$
Solving in |z|, one gets Cauchy's bound if there is a root of absolute value larger than 1. Otherwise the bound is also correct, as Cauchy's bound is larger than 1.
These bounds are not invariant by scaling. That is, the roots of the polynomial p(sx) are the quotients by s of the roots of p, but the bounds given for the roots of p(sx) are not the quotients by s of the bounds for p. Thus, one may get sharper bounds by minimizing over possible scalings; this applies to both Lagrange's and Cauchy's bounds.
Another bound, originally given by Lagrange, but attributed to Zassenhaus by Donald Knuth, is[4]
$$2\,\max_{1\le i\le n}\left|\frac{a_{n-i}}{a_n}\right|^{1/i}.$$
This bound is invariant by scaling.
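This bound can be sketched in Python as follows (the function name is ours), again with the coefficients listed as $(a_0,\ldots,a_n)$:

```python
def zassenhaus_bound(a):
    """Lagrange-Zassenhaus bound 2 * max |a_{n-i} / a_n|^(1/i) for a = [a_0, ..., a_n]."""
    n = len(a) - 1
    an = abs(a[-1])
    return 2.0 * max((abs(a[n - i]) / an) ** (1.0 / i) for i in range(1, n + 1))

# p(x) = x^3 - 6x^2 + 11x - 6; the candidates are 6, 11^(1/2), 6^(1/3), so the bound is 12
a = [-6, 11, -6, 1]
print(zassenhaus_bound(a))  # 12.0
```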
Proof of the preceding bound. Let A be the largest $|a_i/a_n|^{1/(n-i)}$ for 0 ≤ i < n. Thus one has
$$|a_i|\le|a_n|\,A^{n-i}\quad\text{for }0\le i<n.$$
If z is a root of p, one has
$$|a_n|\,|z|^n\le\sum_{i=0}^{n-1}|a_i|\,|z|^i\le|a_n|\sum_{i=0}^{n-1}A^{n-i}|z|^i,$$
and thus, after dividing by $|a_n|$,
$$|z|^n\le\sum_{i=0}^{n-1}A^{n-i}|z|^i.$$
As we want to prove |z| ≤ 2A, we may suppose that |z| > A (otherwise there is nothing to prove). Thus
$$|z|^n\le A\,|z|^{n-1}\sum_{j=0}^{n-1}\left(\frac{A}{|z|}\right)^{j}<\frac{A\,|z|^{n-1}}{1-\frac{A}{|z|}}=\frac{A\,|z|^{n}}{|z|-A},$$
which gives the result, since the inequality implies $|z|-A<A$.
Lagrange improved this latter bound into the sum of the two largest values (possibly equal) in the sequence[4]
$$\left|\frac{a_{n-1}}{a_n}\right|,\ \left|\frac{a_{n-2}}{a_n}\right|^{1/2},\ \ldots,\ \left|\frac{a_0}{a_n}\right|^{1/n}.$$
Lagrange also provided the bound[citation needed]
where $b_i$ denotes the $i$th nonzero coefficient when the terms of the polynomial are sorted by increasing degrees.
Hölder's inequality allows the extension of Lagrange's and Cauchy's bounds to every h-norm. The h-norm of a sequence
$$a=(a_0,\ldots,a_n)$$
is
$$\|a\|_h=\left(\sum_{i=0}^n|a_i|^h\right)^{1/h}$$
for any real number h ≥ 1, and
$$\|a\|_\infty=\max_{0\le i\le n}|a_i|.$$
If $\tfrac 1h+\tfrac 1k=1$, with 1 ≤ h, k ≤ ∞ and 1/∞ = 0, an upper bound on the absolute values of the roots of p is
$$\frac{1}{|a_n|}\,\Bigl\|\bigl(|a_n|,\ \|(a_{n-1},\ldots,a_0)\|_h\bigr)\Bigr\|_k.$$
For k = 1 and k = ∞, one gets respectively Cauchy's and Lagrange's bounds.
For h = k = 2, one has the bound
$$\frac{1}{|a_n|}\sqrt{\sum_{i=0}^n|a_i|^2}.$$
This is not only a bound of the absolute values of the roots, but also a bound of the product of their absolute values larger than 1; see § Landau's inequality, below.
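For h = k = 2 the bound is simply the Euclidean norm of the coefficient vector divided by |a_n|; a quick Python check (the function name is ours):

```python
import math

def norm2_bound(a):
    """Upper root bound sqrt(sum |a_i|^2) / |a_n| (the case h = k = 2), a = [a_0, ..., a_n]."""
    return math.sqrt(sum(abs(c) ** 2 for c in a)) / abs(a[-1])

# p(x) = x^3 - 6x^2 + 11x - 6; the bound is sqrt(194) ~ 13.93, larger than the root 3
a = [-6, 11, -6, 1]
print(norm2_bound(a))
```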
Proof. Let z be a root of the polynomial
$$p(x)=a_nx^n+\cdots+a_1x+a_0.$$
Setting
$$A=\|(a_{n-1},\ldots,a_0)\|_h,$$
we have to prove that every root z of p satisfies
$$|z|\le\frac{1}{|a_n|}\,\bigl\|(|a_n|,A)\bigr\|_k.$$
If $|z|\le 1$, the inequality is true, since the right-hand side is at least 1; so, one may suppose $|z|>1$ for the remainder of the proof. Writing the equation as
$$-a_nz^n=a_{n-1}z^{n-1}+\cdots+a_0,$$
Hölder's inequality implies
$$|a_n|\,|z|^n\le A\,\bigl\|(z^{n-1},\ldots,z,1)\bigr\|_k.$$
If k = ∞, this is
$$|a_n|\,|z|^n\le A\,|z|^{n-1}.$$
Thus
$$|z|\le\frac{A}{|a_n|}\le\frac 1{|a_n|}\,\|(|a_n|,A)\|_\infty.$$
In the case 1 ≤ k < ∞, the summation formula for a geometric progression gives
$$\bigl\|(z^{n-1},\ldots,1)\bigr\|_k^k=\sum_{i=0}^{n-1}|z|^{ki}=\frac{|z|^{kn}-1}{|z|^k-1}\le\frac{|z|^{kn}}{|z|^k-1}.$$
Thus
$$|a_n|^k|z|^{kn}\le A^k\,\frac{|z|^{kn}}{|z|^k-1},$$
which simplifies to
$$|z|^k\le 1+\frac{A^k}{|a_n|^k}=\frac{|a_n|^k+A^k}{|a_n|^k}.$$
Thus, in all cases,
$$|z|\le\frac 1{|a_n|}\,\bigl\|(|a_n|,A)\bigr\|_k,$$
which finishes the proof.
Many other upper bounds for the magnitudes of all roots have been given.[5]
Fujiwara's bound[6]
$$2\,\max\left(\left|\frac{a_{n-1}}{a_n}\right|,\ \left|\frac{a_{n-2}}{a_n}\right|^{\frac 12},\ \ldots,\ \left|\frac{a_1}{a_n}\right|^{\frac 1{n-1}},\ \left|\frac{a_0}{2a_n}\right|^{\frac 1n}\right)$$
slightly improves the bound given above by dividing the last argument of the maximum by two.
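A Python sketch of Fujiwara's bound (the function name is ours); note the halving of the constant-term argument:

```python
def fujiwara_bound(a):
    """Fujiwara's bound 2 * max_i |a_{n-i} / a_n|^(1/i), with a_0 replaced by a_0 / 2."""
    n = len(a) - 1
    an = abs(a[-1])
    terms = [(abs(a[n - i]) / an) ** (1.0 / i) for i in range(1, n)]
    terms.append((abs(a[0]) / (2.0 * an)) ** (1.0 / n))
    return 2.0 * max(terms)

# p(x) = x^2 - 4 has roots +-2; Fujiwara's bound is 2 * sqrt(2) ~ 2.83,
# while the bound of the preceding paragraph gives 4
print(fujiwara_bound([-4, 0, 1]))
```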
Kojima's bound is[7][verification needed]
$$2\,\max\left(\left|\frac{b_{m-1}}{b_m}\right|,\ \left|\frac{b_{m-2}}{b_{m-1}}\right|,\ \ldots,\ \left|\frac{b_0}{2b_1}\right|\right),$$
where $b_i$ denotes the $i$th nonzero coefficient when the terms of the polynomial are sorted by increasing degrees. If all coefficients are nonzero, Fujiwara's bound is sharper, since each element in Fujiwara's bound is the geometric mean of the first elements in Kojima's bound.
Sun and Hsieh obtained another improvement on Cauchy's bound.[8] Assume the polynomial is monic with general term $a_ix^i$. Sun and Hsieh showed that upper bounds 1 + d1 and 1 + d2 could be obtained from the following equations.
d2 is the positive root of the cubic equation
They also noted that d2 ≤ d1.
The previous bounds are upper bounds for each root separately. Landau's inequality provides an upper bound for the absolute values of the product of the roots that have an absolute value greater than one. This inequality, discovered in 1905 by Edmund Landau,[9] has been forgotten and rediscovered at least three times during the 20th century.[10][11][12]
This bound of the product of roots is not much greater than the best preceding bounds of each root separately.[13] Let $z_1,\ldots,z_n$ be the n roots of the polynomial p. If
$$M(p)=|a_n|\prod_{i=1}^n\max(1,|z_i|)$$
is the Mahler measure of p, then
$$M(p)\le\sqrt{\sum_{i=0}^n|a_i|^2}.$$
Surprisingly, this bound of the product of the absolute values larger than 1 of the roots is not much larger than the best bounds given above for a single root. This bound is even exactly equal to the bound that is obtained using Hölder's inequality with h = k = 2.
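Landau's inequality can be checked numerically; here is a small Python illustration on a polynomial with known roots (the variable names are ours):

```python
import math

# p(x) = (x - 2)(x - 1/2)(x + 3) = x^3 + 0.5x^2 - 6.5x + 3
roots = [2.0, 0.5, -3.0]
a = [3.0, -6.5, 0.5, 1.0]  # coefficients a_0, ..., a_3

mahler = abs(a[-1]) * math.prod(max(1.0, abs(z)) for z in roots)  # M(p) = 2 * 3 = 6
norm2 = math.sqrt(sum(abs(c) ** 2 for c in a))                    # sqrt(52.5) ~ 7.25

print(mahler, norm2)  # Landau's inequality: M(p) <= ||p||_2
```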
This bound is also useful to bound the coefficients of a divisor of a polynomial with integer coefficients:[14] if
$$q=b_mx^m+\cdots+b_1x+b_0$$
is a divisor of p, then
$$M(q)\le\frac{|b_m|}{|a_n|}\,M(p),$$
and, by Vieta's formulas,
$$|b_i|\le\binom{m}{i}M(q)$$
for i = 0, ..., m, where $\binom mi$ is a binomial coefficient. Thus
$$|b_i|\le\binom{m}{i}\frac{|b_m|}{|a_n|}\,M(p),$$
and
$$|b_i|\le\binom{m}{i}\frac{|b_m|}{|a_n|}\sqrt{\sum_{j=0}^n|a_j|^2}.$$
Rouché's theorem allows defining discs centered at zero and containing a given number of roots. More precisely, if there is a positive real number R and an integer 0 ≤ k ≤ n such that
$$|a_k|\,R^k>\sum_{i\ne k}|a_i|\,R^i,$$
then there are exactly k roots, counted with multiplicity, of absolute value less than R.
Proof. If $|z|=R$, then
$$\left|p(z)-a_kz^k\right|\le\sum_{i\ne k}|a_i|\,R^i<|a_k|\,R^k=\left|a_kz^k\right|.$$
By Rouché's theorem, this implies directly that $p(x)$ and $a_kx^k$ have the same number of roots of absolute value less than R, counted with multiplicities. As this number is k, the result is proved.
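The dominance condition is easy to test numerically. The following Python sketch (the function name is ours) returns, when some term dominates at radius R, the number k of roots in the disc |z| < R:

```python
def rouche_root_count(a, R):
    """Return k if |a_k| R^k > sum_{i != k} |a_i| R^i for some k, else None.
    When it returns k, exactly k roots (with multiplicity) satisfy |z| < R."""
    terms = [abs(c) * R ** i for i, c in enumerate(a)]
    total = sum(terms)
    for k, t in enumerate(terms):
        if t > total - t:
            return k
    return None

# p(x) = (x - 0.1)(x - 1)(x - 10) = x^3 - 11.1x^2 + 11.1x - 1
a = [-1.0, 11.1, -11.1, 1.0]
print(rouche_root_count(a, 0.5))  # 1: only the root 0.1 lies in |z| < 0.5
print(rouche_root_count(a, 3.0))  # 2: the roots 0.1 and 1 lie in |z| < 3
```

At R = 1 the test fails (a root lies on the circle), and the function returns None: the condition is sufficient, not necessary.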
The above result may be applied if the polynomial
$$h_k(x)=\sum_{i\ne k}|a_i|\,x^i-|a_k|\,x^k$$
takes a negative value for some positive real value of x.
In the remainder of the section, suppose that a0 ≠ 0. If this is not the case, zero is a root, and the localization of the other roots may be studied by dividing the polynomial by a power of the indeterminate, getting a polynomial with a nonzero constant term.
For k = 0 and k = n, Descartes' rule of signs shows that the polynomial $\sum_{i\ne k}|a_i|x^i-|a_k|x^k$ has exactly one positive real root. If $R_0$ and $R_n$ are these roots, for k = 0 and k = n respectively, the above result shows that all the roots z satisfy
$$R_0\le|z|\le R_n.$$
As equality is reached for the polynomials $\sum_{i=1}^n|a_i|x^i-|a_0|$ and $|a_n|x^n-\sum_{i=0}^{n-1}|a_i|x^i$, these bounds are optimal for polynomials with a given sequence of the absolute values of their coefficients. They are thus sharper than all bounds given in the preceding sections.
For 0 < k < n, Descartes' rule of signs implies that the polynomial $\sum_{i\ne k}|a_i|x^i-|a_k|x^k$ either has two positive real roots that are not multiple, or is nonnegative for every positive value of x. So, the above result may be applied only in the first case. If $t_1<t_2$ are these two roots, the above result implies that
$$|z|\le t_1$$
for k roots of p, and that
$$|z|\ge t_2$$
for the n − k other roots.
Let $h_k(x)=\sum_{i\ne k}|a_i|x^i-|a_k|x^k$ denote the polynomial considered above. Instead of explicitly computing its two positive roots, it is generally sufficient to compute a value $R_k$ such that $h_k(R_k)<0$ (necessarily $R_k$ lies between the two roots). These $R_k$ have the property of separating roots in terms of their absolute values: if, for h < k, both $R_h$ and $R_k$ exist, there are exactly k − h roots z such that $R_h<|z|<R_k$.
For computing $R_k$, one can use the fact that the function $u\mapsto\sum_{i\ne k}|a_i|e^{(i-k)u}-|a_k|$, which has the same sign as $h_k(e^u)$, is a convex function of u (its second derivative is positive). Thus $R_k$ exists if and only if this function is negative at its unique minimum. For computing this minimum, one can use any optimization method, or, alternatively, Newton's method for computing the unique zero of the derivative (it converges rapidly, as the derivative is a monotonic function).
One can increase the number of existing $R_k$'s by applying the root-squaring operation of the Dandelin–Graeffe iteration. If the roots have distinct absolute values, one can eventually completely separate the roots in terms of their absolute values, that is, compute n + 1 positive numbers $R_0<R_1<\cdots<R_n$ such that there is exactly one root with an absolute value in the open interval $(R_{k-1},R_k)$ for k = 1, ..., n.
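One possible way to compute such a separating radius numerically is to minimize the convex function mentioned above by ternary search; a Python sketch (the function name and search interval are ours, not part of any standard method):

```python
import math

def separating_radius(a, k, lo=-30.0, hi=30.0, iters=200):
    """Ternary-search the convex function g(u) = sum_{i != k} |a_i| e^((i-k)u) - |a_k|,
    which has the same sign as h_k(e^u).  If its minimum is negative, R_k = e^u
    satisfies the Rouche condition, so exactly k roots have absolute value below R_k."""
    def g(u):
        return sum(abs(c) * math.exp((i - k) * u)
                   for i, c in enumerate(a) if i != k) - abs(a[k])
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    u = (lo + hi) / 2
    return math.exp(u) if g(u) < 0 else None

# p(x) = (x - 0.1)(x - 1)(x - 10): exactly one root below R_1, so 0.1 < R_1 < 1
a = [-1.0, 11.1, -11.1, 1.0]
R1 = separating_radius(a, 1)
print(R1)
```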
The Gershgorin circle theorem applies the companion matrix of the polynomial, on a basis related to Lagrange interpolation, to define discs centered at the interpolation points, each containing a root of the polynomial; see Durand–Kerner method § Root inclusion via Gerschgorin's circles for details.
If the interpolation points are close to the roots of the polynomial, the radii of the discs are small, and this is a key ingredient of the Durand–Kerner method for computing polynomial roots.
For polynomials with real coefficients, it is often useful to bound only the real roots. It suffices to bound the positive roots, as the negative roots of p(x) are the positive roots of p(−x).
Clearly, every bound of all roots applies also for real roots. But in some contexts, tighter bounds of real roots are useful. For example, the efficiency of the method of continued fractions for real-root isolation strongly depends on the tightness of a bound of positive roots. This has led to establishing new bounds that are tighter than the general bounds of all roots. These bounds are generally expressed not only in terms of the absolute values of the coefficients, but also in terms of their signs.
Other bounds apply only to polynomials whose roots are all real (see below).
To give a bound of the positive roots, one can assume $a_n>0$ without loss of generality, as changing the signs of all coefficients does not change the roots.
Every upper bound of the positive roots of
$$q(x)=a_nx^n-\sum_{\substack{0\le i<n\\ a_i<0}}|a_i|\,x^i$$
is also a bound of the positive real zeros of
$$p(x)=a_nx^n+\cdots+a_1x+a_0.$$
In fact, if B is such a bound, for all x > B, one has p(x) ≥ q(x) > 0.
Applied to Cauchy's bound, this gives the upper bound
$$1+\max_{\{i\,:\,a_i<0\}}\frac{|a_i|}{a_n}$$
for the real roots of a polynomial with real coefficients. If this bound is not greater than 1, this means that all nonzero coefficients have the same sign, and that there is no positive root.
Similarly, another upper bound of the positive roots is
$$2\,\max_{\{i\,:\,a_i<0\}}\left(\frac{|a_i|}{a_n}\right)^{\frac 1{n-i}}.$$
If all nonzero coefficients have the same sign, there is no positive root, and the maximum must be taken as zero.
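A Cauchy-type bound for the positive roots, namely 1 + max{|a_i|/a_n : a_i < 0} with the empty maximum taken as zero, can be sketched in Python (the function name is ours):

```python
def positive_root_bound(a):
    """1 + max(|a_i| / a_n for negative a_i); equals 1 when no coefficient is
    negative (all coefficients of the same sign, hence no positive root).
    Assumes a[-1] > 0, with a = [a_0, ..., a_n]."""
    an = a[-1]
    negatives = [abs(c) / an for c in a[:-1] if c < 0]
    return 1.0 + (max(negatives) if negatives else 0.0)

# p(x) = x^3 - 6x^2 + 11x - 6 has positive roots 1, 2, 3
print(positive_root_bound([-6, 11, -6, 1]))  # 7.0
print(positive_root_bound([1, 1, 1]))        # 1.0 (no positive root)
```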
Other bounds have been recently developed, mainly for the method of continued fractions for real-root isolation.[15][16]
If all roots of a polynomial are real, Laguerre proved the following lower and upper bounds of the roots, by using what is now called Samuelson's inequality.[17]
Let $p(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_0$ be a polynomial with all real roots. Then its roots are located in the interval with endpoints
$$-\frac{a_{n-1}}{n\,a_n}\pm\frac{n-1}{n\,a_n}\sqrt{a_{n-1}^2-\frac{2n}{n-1}\,a_na_{n-2}}.$$
For example, the roots of the polynomial $x^3-6x^2+11x-6=(x-1)(x-2)(x-3)$ satisfy
$$2-\frac{2\sqrt 3}{3}\le x\le 2+\frac{2\sqrt 3}{3},$$
that is, approximately 0.845 ≤ x ≤ 3.155.
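The interval endpoints follow directly from the three leading coefficients; a Python sketch (the function name is ours, and $a_n>0$ is assumed):

```python
import math

def laguerre_samuelson_interval(a):
    """Interval containing all roots of a polynomial a_0 + ... + a_n x^n
    whose roots are all real (Laguerre / Samuelson); assumes a[-1] > 0."""
    n = len(a) - 1
    an, an1, an2 = a[-1], a[-2], a[-3]
    center = -an1 / (n * an)
    half = (n - 1) / (n * an) * math.sqrt(an1 ** 2 - 2 * n / (n - 1) * an * an2)
    return center - half, center + half

# p(x) = (x - 1)(x - 2)(x - 3): the interval is 2 +- (2/3)sqrt(3) ~ [0.845, 3.155]
lo, hi = laguerre_samuelson_interval([-6, 11, -6, 1])
print(lo, hi)
```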
The root separation of a polynomial is the minimal distance between two roots, that is, the minimum of the absolute values of the differences of two distinct roots:
$$\operatorname{sep}(p)=\min_{\substack{i<j\\ z_i\ne z_j}}|z_i-z_j|,$$
where $z_1,\ldots,z_n$ are the roots of p.
The root separation is a fundamental parameter of the computational complexity of root-finding algorithms for polynomials. In fact, the root separation determines the precision of number representation that is needed for being certain of distinguishing distinct roots. Also, for real-root isolation, it allows bounding the number of interval divisions that are needed for isolating all roots.
For polynomials with real or complex coefficients, it is not possible to express a lower bound of the root separation in terms of the degree and the absolute values of the coefficients only, because a small change on a single coefficient transforms a polynomial with multiple roots into a square-free polynomial with a small root separation, and essentially the same absolute values of the coefficients. However, involving the discriminant of the polynomial allows a lower bound.
For square-free polynomials with integer coefficients, the discriminant is an integer, and has thus an absolute value that is not smaller than 1. This allows lower bounds for root separation that are independent of the discriminant.
Mignotte's separation bound is[18][19][20]
$$\operatorname{sep}(p)>\frac{\sqrt{3\,|\Delta|}}{n^{(n+2)/2}\,\|p\|_2^{\,n-1}},$$
where $\Delta$ is the discriminant, and $\|p\|_2=\sqrt{\sum_{i=0}^n|a_i|^2}$.
For a square-free polynomial with integer coefficients, this implies
$$\operatorname{sep}(p)>\frac{\sqrt 3}{n^{(n+2)/2}\,\|p\|_2^{\,n-1}}>2^{-O(n(s+\log n))},$$
where s is the bit size of p, that is, the sum of the bit sizes of its coefficients.
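For a concrete square-free integer polynomial, the bound can be compared with the true separation; a Python sketch (the variable names are ours):

```python
import math

# p(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6: sep(p) = 1,
# and the discriminant is prod_{i<j} (z_i - z_j)^2 = (1 * 2 * 1)^2 = 4
a = [-6, 11, -6, 1]
n = 3
disc = 4
norm2 = math.sqrt(sum(c * c for c in a))  # sqrt(194)
bound = math.sqrt(3 * disc) / (n ** ((n + 2) / 2) * norm2 ** (n - 1))
print(bound)  # about 0.00115, indeed below the true separation 1
```

As is typical of such worst-case bounds, it underestimates the actual separation by several orders of magnitude.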
The Gauss–Lucas theorem states that the convex hull of the roots of a polynomial contains the roots of the derivative of the polynomial.
A sometimes-useful corollary is that, if all roots of a polynomial have positive real part, then so do the roots of all derivatives of the polynomial.
A related result is Bernstein's inequality. It states that for a polynomial P of degree n with derivative P′ we have
$$\max_{|z|\le 1}|P'(z)|\le n\,\max_{|z|\le 1}|P(z)|.$$
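The Gauss–Lucas theorem can be illustrated with a cubic whose roots are all real, where the convex hull is just the interval between the extreme roots; a quick Python check (the variable names are ours):

```python
import math

# p(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6; its roots span the hull [1, 3]
# p'(x) = 3x^2 - 12x + 11, whose roots are 2 -+ 1/sqrt(3)
disc = 12 ** 2 - 4 * 3 * 11
crit1 = (12 - math.sqrt(disc)) / 6
crit2 = (12 + math.sqrt(disc)) / 6
print(crit1, crit2)  # both lie inside [1, 3], as Gauss-Lucas predicts
```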
If the coefficients ai of a random polynomial are independently and identically distributed with a mean of zero, most complex roots are on the unit circle or close to it. In particular, the real roots are mostly located near ±1, and, moreover, their expected number is, for a large degree, less than the natural logarithm of the degree.
If the coefficients are Gaussian distributed with a mean of zero and variance of σ, then the mean density of real roots is given by the Kac formula[21][22]
$$m(x)=\frac{\sqrt{A(x)\,C(x)-B(x)^2}}{\pi\,A(x)},$$
where
$$A(x)=\sum_{i=0}^n x^{2i},\qquad B(x)=\sum_{i=1}^n i\,x^{2i-1},\qquad C(x)=\sum_{i=1}^n i^2\,x^{2i-2}.$$
When the coefficients are Gaussian distributed with a non-zero mean and variance of σ, a similar but more complex formula is known.[citation needed]
For large n, the mean density of real roots near x is asymptotically
$$m(x)\approx\frac{1}{\pi\,|1-x^2|}$$
if $x^2\neq 1$, and the density at $x=\pm 1$ is
$$m(\pm 1)=\frac 1\pi\sqrt{\frac{(n+1)^2-1}{12}}.$$
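As a consistency check, the Kac density at x = 1 can be evaluated directly from the sums A, B, C; its value is $\frac 1\pi\sqrt{((n+1)^2-1)/12}$, which the following Python sketch verifies numerically (the variable names are ours):

```python
import math

n = 50
A = sum(1 for i in range(n + 1))                    # A(1) = n + 1
B = sum(i for i in range(n + 1))                    # B(1) = n(n + 1)/2
C = sum(i * i for i in range(n + 1))                # C(1) = n(n + 1)(2n + 1)/6
density = math.sqrt(A * C - B * B) / (math.pi * A)  # Kac density at x = 1
closed_form = math.sqrt(((n + 1) ** 2 - 1) / 12) / math.pi
print(density, closed_form)
```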
It follows that the expected number of real roots is, using big O notation,
$$N_n=\frac 2\pi\ln n+C+O\!\left(n^{-1}\right),$$
where C is a constant approximately equal to 0.6257358072.[23]
In other words, the expected number of real roots of a random polynomial of high degree is lower than the natural logarithm of the degree.
Kac, Erdős and others have shown that these results are insensitive to the distribution of the coefficients, if they are independent and have the same distribution with mean zero. However, if the variance of the $i$th coefficient is equal to $\binom ni$, the expected number of real roots is exactly[23]
$$\sqrt n.$$
A polynomial p can be written in the form
$$p(x)=a_n(x-z_1)^{m_1}\cdots(x-z_k)^{m_k}$$
with distinct roots $z_1,\ldots,z_k$ and corresponding multiplicities $m_1,\ldots,m_k$. A root $z_j$ is a simple root if $m_j=1$, or a multiple root if $m_j\ge 2$. Simple roots are Lipschitz continuous with respect to the coefficients, but multiple roots are not. In other words, simple roots have bounded sensitivities, but multiple roots are infinitely sensitive if the coefficients are perturbed arbitrarily. As a result, most root-finding algorithms suffer substantial loss of accuracy on multiple roots in numerical computation.
In 1972, William Kahan proved that there is an inherent stability of multiple roots.[24] Kahan discovered that polynomials with a particular set of multiplicities form what he called a pejorative manifold, and proved that a multiple root is Lipschitz continuous if the perturbation maintains its multiplicity.
This geometric property of multiple roots is crucial in the numerical computation of multiple roots.