A system of polynomial equations (sometimes simply a polynomial system) is a set of simultaneous equations f1 = 0, ..., fh = 0 where the fi are polynomials in several variables, say x1, ..., xn, over some field k.
A solution of a polynomial system is a set of values for the xi's which belong to some algebraically closed field extension K of k, and make all equations true. When k is the field of rational numbers, K is generally assumed to be the field of complex numbers, because each solution belongs to a field extension of k, which is isomorphic to a subfield of the complex numbers.
This article is about the methods for solving, that is, finding all solutions or describing them. As these methods are designed to be implemented on a computer, emphasis is given to fields k in which computation (including equality testing) is easy and efficient, that is, the field of rational numbers and finite fields.
Searching for solutions that belong to a specific set is a problem which is generally much more difficult, and is outside the scope of this article, except for the case of the solutions in a given finite field. For the case of solutions of which all components are integers or rational numbers, see Diophantine equation.

A simple example of a system of polynomial equations is

x² + y² − 5 = 0
xy − 2 = 0.

Its solutions are the four pairs (x, y) = (1, 2), (2, 1), (−1, −2), (−2, −1). These solutions can easily be checked by substitution, but more work is needed to prove that there are no other solutions.
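These four solutions can be recovered with sympy, taking the system to be x² + y² − 5 = 0, xy − 2 = 0 (a standard example having exactly these solutions):

```python
from sympy import symbols, solve

x, y = symbols("x y")
# The system x^2 + y^2 - 5 = 0, x*y - 2 = 0, whose solutions are the
# four pairs listed in the text.
solutions = solve([x**2 + y**2 - 5, x*y - 2], [x, y], dict=True)
pairs = sorted((s[x], s[y]) for s in solutions)
print(pairs)  # [(-2, -1), (-1, -2), (1, 2), (2, 1)]
```

Substituting y = 2/x into the first equation gives x⁴ − 5x² + 4 = 0, which shows that these four pairs are indeed the only solutions.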
The subject of this article is the study of generalizations of such examples, and the description of the methods that are used for computing the solutions.
A system of polynomial equations, or polynomial system, is a collection of equations
where each fh is a polynomial in the indeterminates x1, ..., xm, with integer coefficients, or coefficients in some fixed field, often the field of rational numbers or a finite field.[1] Other fields of coefficients, such as the real numbers, are less often used, as their elements cannot be represented in a computer (only approximations of real numbers can be used in computations, and these approximations are always rational numbers).
A solution of a polynomial system is a tuple of values of (x1, ..., xm) that satisfies all equations of the polynomial system. The solutions are sought in the complex numbers, or more generally in an algebraically closed field containing the coefficients. In particular, in characteristic zero, all complex solutions are sought. Searching for the real or rational solutions poses much more difficult problems that are not considered in this article.
The set of solutions is not always finite; for example, the solutions of the system

x(x − 1) = 0
x(y − 1) = 0
are a point (x, y) = (1, 1) and a line x = 0.[2] Even when the solution set is finite, there is, in general, no closed-form expression of the solutions (in the case of a single equation, this is the Abel–Ruffini theorem).
The Barth surface, shown in the figure, is the geometric representation of the solutions of a polynomial system reduced to a single equation of degree 6 in 3 variables. Some of its numerous singular points are visible on the image. They are the solutions of a system of 4 equations of degree 5 in 3 variables. Such an overdetermined system has no solution in general (that is, if the coefficients are not specific). If it has a finite number of solutions, this number is at most 5³ = 125, by Bézout's theorem. However, it has been shown that, for the case of the singular points of a surface of degree 6, the maximum number of solutions is 65, and it is reached by the Barth surface.
A system is overdetermined if the number of equations is higher than the number of variables. A system is inconsistent if it has no complex solution (or, if the coefficients are not complex numbers, no solution in an algebraically closed field containing the coefficients). By Hilbert's Nullstellensatz, this means that 1 is a linear combination (with polynomials as coefficients) of the first members of the equations. Most but not all overdetermined systems, when constructed with random coefficients, are inconsistent. For example, the system x³ − 1 = 0, x² − 1 = 0 is overdetermined (having two equations but only one unknown), but it is not inconsistent since it has the solution x = 1.
A system is underdetermined if the number of equations is lower than the number of variables. An underdetermined system is either inconsistent or has infinitely many complex solutions (or solutions in an algebraically closed field that contains the coefficients of the equations). This is a non-trivial result of commutative algebra that involves, in particular, Hilbert's Nullstellensatz and Krull's principal ideal theorem.
A system is zero-dimensional if it has a finite number of complex solutions (or solutions in an algebraically closed field). This terminology comes from the fact that the algebraic variety of the solutions has dimension zero. A system with infinitely many solutions is said to be positive-dimensional.
A zero-dimensional system with as many equations as variables is sometimes said to be well-behaved.[3] Bézout's theorem asserts that a well-behaved system whose equations have degrees d1, ..., dn has at most d1⋯dn solutions. This bound is sharp. If all the degrees are equal to d, this bound becomes dⁿ and is exponential in the number of variables. (The fundamental theorem of algebra is the special case n = 1.)
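As a sanity check of the bound, the product d1⋯dn can be compared with the solution count of a small decoupled system (a sketch using sympy; the particular system is an illustrative choice):

```python
from math import prod
from sympy import symbols, solve

x, y, z = symbols("x y z")
# Bezout bound for a well-behaved system of three equations of degree 2
# in three variables: at most 2 * 2 * 2 = 8 solutions.
degrees = [2, 2, 2]
bound = prod(degrees)

# An illustrative decoupled system attaining the bound:
sols = solve([x**2 - 1, y**2 - 4, z**2 - 9], [x, y, z], dict=True)
print(bound, len(sols))  # 8 8
```

The bound is attained here because each equation contributes its full number of roots independently; coupled systems may have fewer solutions.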
This exponential behavior makes solving polynomial systems difficult and explains why there are few solvers that are able to automatically solve systems with Bézout's bound higher than, say, 25 (three equations of degree 3 or five equations of degree 2 are beyond this bound).[citation needed]
The first thing to do for solving a polynomial system is to decide whether it is inconsistent, zero-dimensional or positive-dimensional. This may be done by the computation of a Gröbner basis of the left-hand sides of the equations. The system is inconsistent if this Gröbner basis is reduced to 1. The system is zero-dimensional if, for every variable, there is a leading monomial of some element of the Gröbner basis which is a pure power of this variable. For this test, the best monomial order (that is, the one which generally leads to the fastest computation) is usually the graded reverse lexicographic one (grevlex).
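These two Gröbner-basis tests can be illustrated with sympy (the systems are illustrative choices):

```python
from sympy import symbols, groebner

x, y = symbols("x y")

# An inconsistent system: its reduced Groebner basis is reduced to 1
# (the ideal generated by x^2 - 1 and x - 2 contains the constant 3).
g_inc = groebner([x**2 - 1, x - 2], x, y, order="grevlex")
print(g_inc.exprs)               # [1]

# A zero-dimensional system: for each variable, some basis element has a
# leading monomial that is a pure power of that variable.
g_zd = groebner([x**2 + y**2 - 5, x*y - 2], x, y, order="grevlex")
print(g_zd.is_zero_dimensional)  # True
```

The `is_zero_dimensional` property performs exactly the pure-power test described above on the leading monomials of the basis.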
If the system is positive-dimensional, it has infinitely many solutions. It is thus not possible to enumerate them. It follows that, in this case, solving may only mean "finding a description of the solutions from which the relevant properties of the solutions are easy to extract". There is no commonly accepted description of this kind; in fact, there are many different "relevant properties", which involve almost every subfield of algebraic geometry.
A natural example of such a question concerning positive-dimensional systems is the following: decide whether a polynomial system over the rational numbers has a finite number of real solutions, and compute them. A generalization of this question is: find at least one solution in each connected component of the set of real solutions of a polynomial system. The classical algorithm for solving these questions is cylindrical algebraic decomposition, which has a doubly exponential computational complexity and therefore cannot be used in practice, except for very small examples.
For zero-dimensional systems, solving consists of computing all the solutions. There are two different ways of outputting the solutions. The most common way is possible only for real or complex solutions, and consists of outputting numeric approximations of the solutions. Such a solution is called numeric. A solution is certified if it is provided with a bound on the error of the approximations, and if this bound separates the different solutions.
The other way of representing the solutions is said to be algebraic. It uses the fact that, for a zero-dimensional system, the solutions belong to the algebraic closure of the field k of the coefficients of the system. There are several ways to represent the solution in an algebraic closure, which are discussed below. All of them allow one to compute a numerical approximation of the solutions by solving one or several univariate equations. For this computation, it is preferable to use a representation that involves solving only one univariate polynomial per solution, because computing the roots of a polynomial which has approximate coefficients is a highly unstable problem.
A trigonometric equation is an equation g = 0 where g is a trigonometric polynomial. Such an equation may be converted into a polynomial system by expanding the sines and cosines in it (using sum and difference formulas), replacing sin(x) and cos(x) by two new variables s and c, and adding the new equation s² + c² − 1 = 0.
For example, because of the identity

sin(2x) = 2 sin(x) cos(x),

solving the equation

sin(2x) = cos(x)

is equivalent to solving the polynomial system

2sc − c = 0
s² + c² − 1 = 0.

For each solution (c0, s0) of this system, there is a unique solution x of the equation such that 0 ≤ x < 2π.
In the case of this simple example, it may be unclear whether the system is easier to solve than the equation or not. On more complicated examples, one lacks systematic methods for solving the equation directly, while software is available for automatically solving the corresponding system.
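A sketch of this conversion with sympy, taking the illustrative equation sin(2x) = cos(x): the double-angle identity turns it into 2sc − c = 0, to which s² + c² − 1 = 0 is adjoined:

```python
from sympy import symbols, solve, Rational

s, c = symbols("s c")
# Polynomial system obtained from sin(2x) = cos(x) with s = sin(x),
# c = cos(x): 2*s*c - c = 0 together with s^2 + c^2 - 1 = 0.
sols = solve([2*s*c - c, s**2 + c**2 - 1], [s, c], dict=True)
print(len(sols))  # 4 pairs (c0, s0), each giving a unique x in [0, 2*pi)
```

The four pairs correspond to c = 0 with s = ±1, and s = 1/2 with c = ±√3/2, i.e. to x = π/2, 3π/2, π/6 and 5π/6.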
When solving a system over a finite field k with q elements, one is primarily interested in the solutions in k. As the elements of k are exactly the solutions of the equation x^q − x = 0, it suffices, for restricting the solutions to k, to add the equation xi^q − xi = 0 for each variable xi.
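For small q, adding the equations xi^q − xi = 0 amounts to keeping only the points of kⁿ, which can be checked by brute force (a sketch; the system over GF(5) and the helper `solutions_in_gf` are illustrative choices):

```python
from itertools import product

def solutions_in_gf(q, equations, nvars):
    """All points of GF(q)^nvars at which every equation vanishes mod q.

    Equivalent, for small q, to adding x_i**q - x_i = 0 for each variable:
    the elements of GF(q) are exactly the roots of x**q - x.
    """
    return [pt for pt in product(range(q), repeat=nvars)
            if all(eq(*pt) % q == 0 for eq in equations)]

# Illustrative system over GF(5): x^2 + y^2 - 2 = 0, x - y = 0.
eqs = [lambda x, y: x**2 + y**2 - 2, lambda x, y: x - y]
print(solutions_in_gf(5, eqs, 2))  # [(1, 1), (4, 4)]
```

Here x = y forces 2x² ≡ 2 (mod 5), i.e. x² ≡ 1, whose roots in GF(5) are 1 and 4. Dedicated methods (Gröbner bases over GF(q)) scale far beyond what enumeration can handle.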
The elements of an algebraic number field are usually represented as polynomials in a generator of the field which satisfies some univariate polynomial equation. To work with a polynomial system whose coefficients belong to a number field, it suffices to consider this generator as a new variable and to add the equation of the generator to the equations of the system. Thus solving a polynomial system over a number field is reduced to solving another system over the rational numbers.
For example, if a system contains √2, a system over the rational numbers is obtained by adding the equation r2² − 2 = 0 and replacing √2 by r2 in the other equations.
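A sketch of this substitution with sympy, for an illustrative system containing √2 (the equation x² − √2·x = 0 and the variable name r2 are assumptions made for the example):

```python
from sympy import symbols, solve, sqrt

x, r2 = symbols("x r2")
# The system x^2 - sqrt(2)*x = 0 over Q(sqrt(2)) becomes a system over
# the rationals: replace sqrt(2) by r2 and adjoin r2^2 - 2 = 0.
sols = solve([x**2 - r2*x, r2**2 - 2], [x, r2], dict=True)

# Keeping only the branch r2 = sqrt(2) recovers the original solutions.
orig = sorted(s[x] for s in sols if s[r2] == sqrt(2))
print(orig)  # [0, sqrt(2)]
```

The other branch, r2 = −√2, corresponds to the conjugate embedding of the number field; it is discarded when the original system is the one of interest.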
In the case of a finite field, the same transformation makes it always possible to assume that the field k has prime order.
The usual way of representing the solutions is through zero-dimensional regular chains. Such a chain consists of a sequence of polynomials f1(x1), f2(x1, x2), ..., fn(x1, ..., xn) such that, for every i with 1 ≤ i ≤ n, fi is a polynomial in x1, ..., xi only, of positive degree di in xi, whose leading coefficient, as a polynomial in xi, is invertible modulo f1, ..., fi−1.
To such a regular chain is associated a triangular system of equations

f1(x1) = 0
f2(x1, x2) = 0
⋮
fn(x1, ..., xn) = 0.
The solutions of this system are obtained by solving the first univariate equation, substituting the solutions into the other equations, then solving the second equation, which is now univariate, and so on. The definition of regular chains implies that the univariate equation obtained from fi has degree di, and thus that the system has d1⋯dn solutions, provided that there is no multiple root in this resolution process (fundamental theorem of algebra).
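The back-substitution process can be sketched with sympy on an illustrative chain f1(x) = x² − 5x + 6, f2(x, y) = y² − x:

```python
from sympy import symbols, roots, Poly

x, y = symbols("x y")
# An illustrative zero-dimensional regular chain: f1 in x alone, f2 with
# positive degree in y, to be solved by back-substitution.
f1 = Poly(x**2 - 5*x + 6, x)          # roots x = 2, 3
f2 = y**2 - x

solutions = []
for x0 in roots(f1):                  # solve the first univariate equation
    # substitute, then solve the next equation, now univariate in y
    for y0 in roots(Poly(f2.subs(x, x0), y)):
        solutions.append((x0, y0))
print(len(solutions))  # d1 * d2 = 2 * 2 = 4 solutions
```

With no multiple roots in the process, the count d1·d2 = 4 is obtained exactly, matching the degree bound stated above.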
Every zero-dimensional system of polynomial equations is equivalent (i.e. has the same solutions) to a finite number of regular chains. Several regular chains may be needed; this is the case, for example, for some systems that have only three solutions.
There are several algorithms for computing a triangular decomposition of an arbitrary polynomial system (not necessarily zero-dimensional)[4] into regular chains (or regular semi-algebraic systems).
There is also an algorithm which is specific to the zero-dimensional case and is competitive, in this case, with the direct algorithms. It consists of computing first the Gröbner basis for the graded reverse lexicographic order (grevlex), then deducing the lexicographical Gröbner basis by the FGLM algorithm,[5] and finally applying the Lextriangular algorithm.[6]
This representation of the solutions is fully convenient for coefficients in a finite field. However, for rational coefficients, two aspects have to be taken care of: the size of the coefficients that appear in the output, and the numerical stability of deducing approximate values of the solutions from it.
The first issue has been solved by Dahan and Schost:[7][8] among the sets of regular chains that represent a given set of solutions, there is a set for which the coefficients are explicitly bounded in terms of the size of the input system, with a nearly optimal bound. This set, called the equiprojectable decomposition, depends only on the choice of the coordinates. This allows the use of modular methods for computing the equiprojectable decomposition efficiently.[9]
The second issue is generally solved by outputting regular chains of a special form, sometimes called shape lemma, for which all the di but the first one are equal to 1. To obtain such regular chains, one may have to add a further variable, called the separating variable, which is given the index 0. The rational univariate representation, described below, allows one to compute such a special regular chain, satisfying the Dahan–Schost bound, by starting from either a regular chain or a Gröbner basis.
The rational univariate representation or RUR is a representation of the solutions of a zero-dimensional polynomial system over the rational numbers which was introduced by F. Rouillier.[10]
A RUR of a zero-dimensional system consists of a linear combination x0 of the variables, called the separating variable, and a system of equations[11]

h(x0) = 0
x1 = g1(x0)/g0(x0)
⋮
xn = gn(x0)/g0(x0),

where h is a univariate polynomial in x0 of degree D and g0, ..., gn are univariate polynomials in x0 of degree less than D.
Given a zero-dimensional polynomial system over the rational numbers, the RUR has the properties described below.
For example, for the system in the previous section, every linear combination of the variables, except the multiples of x, y and x + y, is a separating variable. If one chooses t = x − y/2 as the separating variable, the corresponding RUR expresses x and y as rational functions of t.
The RUR is uniquely defined for a given separating variable, independently of any algorithm, and it preserves the multiplicities of the roots. This is a notable difference from triangular decompositions (even the equiprojectable decomposition), which, in general, do not preserve multiplicities. The RUR shares with the equiprojectable decomposition the property of producing an output with coefficients of relatively small size.
For zero-dimensional systems, the RUR allows retrieval of the numeric values of the solutions by computing the roots of a single univariate polynomial and substituting them into rational functions. This allows the production of certified approximations of the solutions to any given precision.
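This two-step evaluation can be sketched with sympy, using a small hand-made RUR for the illustrative system x + y = 3, xy = 2, with separating variable t = x:

```python
from sympy import symbols, nroots

t = symbols("t")
# An illustrative RUR for x + y = 3, x*y = 2 with t = x as separating
# variable: h(t) = 0, x = g1/g0, y = g2/g0, all univariate in t.
h, g0, g1, g2 = t**2 - 3*t + 2, 1, t, 3 - t

solutions = []
for t0 in nroots(h):                  # a single univariate solve
    x0 = (g1 / g0).subs(t, t0)        # then substitution into
    y0 = (g2 / g0).subs(t, t0)        # rational functions of t
    solutions.append((x0, y0))
print(len(solutions))  # 2 solutions: (1, 2) and (2, 1)
```

Only the roots of h need to be approximated; the rational functions are then evaluated at these approximations, which is the numerically stable part of the process.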
Moreover, the univariate polynomial h(x0) of the RUR may be factorized, and this gives a RUR for every irreducible factor. This provides the prime decomposition of the given ideal (that is, the primary decomposition of the radical of the ideal). In practice, this provides an output with much smaller coefficients, especially in the case of systems with high multiplicities.
Unlike triangular decompositions and equiprojectable decompositions, the RUR is not defined in positive dimension.
The general numerical algorithms which are designed for any system of nonlinear equations work also for polynomial systems. However, the specific methods will generally be preferred, as the general methods generally do not allow one to find all solutions. In particular, when a general method does not find any solution, this is usually not an indication that there is no solution.
Nevertheless, two methods deserve to be mentioned here.
This is a semi-numeric method which assumes that the number of equations is equal to the number of variables. This method is relatively old but it has been dramatically improved in recent decades.[13]
This method divides into three steps. First, an upper bound on the number of solutions is computed. This bound has to be as sharp as possible. Therefore, it is computed by at least four different methods, and the best value, say N, is kept.
In the second step, a system of polynomial equations is generated which has exactly N solutions that are easy to compute. This new system has the same number n of variables, the same number n of equations, and the same general structure as the system to solve.
Then a homotopy between the two systems is considered. If P denotes the system to solve and Q the system generated in the second step, it consists, for example, of the straight line between the two systems,

(1 − t)Q + tP = 0,

but other paths may be considered, in particular to avoid some singularities.
The homotopy continuation consists of deforming the parameter t from 0 to 1 and following the solutions during this deformation. This gives the desired solutions for t = 1. Following means that, if t1 < t2, the solutions for t = t2 are deduced from the solutions for t = t1 by Newton's method. The difficulty here is to choose the value of t2 − t1 well: if it is too large, Newton's convergence may be slow and it may even jump from one solution path to another; if it is too small, the number of steps slows down the method.
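The step-by-step following of solutions can be sketched on a single univariate equation (a minimal illustration; the target x² − 2 = 0, the start system x² − 1 = 0, and the fixed step size are arbitrary choices, and real solvers such as PHCpack or Bertini use adaptive step control on multivariate systems):

```python
# Minimal homotopy continuation: track the roots of the start system
# q(x) = x^2 - 1 to the roots of the target p(x) = x^2 - 2 along the
# straight-line homotopy (1 - t)*q + t*p, deforming t from 0 to 1.
def newton(f, df, x, iters=20):
    """Refine an approximate root of f by Newton's method."""
    for _ in range(iters):
        x = x - f(x) / df(x)
    return x

p = lambda x: x**2 - 2
q = lambda x: x**2 - 1
tracked = [1.0, -1.0]        # the easy-to-compute roots of q

steps = 10
for k in range(1, steps + 1):
    t = k / steps            # deform the parameter from 0 to 1
    h = lambda x, t=t: (1 - t) * q(x) + t * p(x)
    dh = lambda x: 2 * x     # d/dx of h, the same for every t here
    # roots at the previous t are the Newton starting points at this t
    tracked = [newton(h, dh, r) for r in tracked]

print(tracked)               # approximately [1.41421..., -1.41421...]
```

Here the homotopy is h(x, t) = x² − 1 − t, so each tracked root moves smoothly from ±1 to ±√2; with a too-large step, Newton could jump from one path to the other.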
To deduce the numeric values of the solutions from a RUR seems easy: it suffices to compute the roots of the univariate polynomial and to substitute them in the other equations. This is not so easy because the evaluation of a polynomial at the roots of another polynomial is highly unstable.
The roots of the univariate polynomial thus have to be computed at a high precision, which may not be defined once and for all. There are two algorithms which fulfill this requirement.
There are at least four software packages which can solve zero-dimensional systems automatically (by automatically, one means that no human intervention is needed between input and output, and thus that no knowledge of the method by the user is needed). There are also several other software packages which may be useful for solving zero-dimensional systems. Some of them are listed after the automatic solvers.
The Maple function RootFinding[Isolate] takes as input any polynomial system over the rational numbers (if some coefficients are floating point numbers, they are converted to rational numbers) and outputs the real solutions represented either (optionally) as intervals of rational numbers or as floating point approximations of arbitrary precision. If the system is not zero-dimensional, this is signaled as an error.
Internally, this solver, designed by F. Rouillier, computes first a Gröbner basis and then a Rational Univariate Representation, from which the required approximations of the solutions are deduced. It works routinely for systems having up to a few hundred complex solutions.
The rational univariate representation may be computed with the Maple function Groebner[RationalUnivariateRepresentation].
To extract all the complex solutions from a rational univariate representation, one may use MPSolve, which computes the complex roots of univariate polynomials to any precision. It is recommended to run MPSolve several times, doubling the precision each time, until the solutions remain stable, as the substitution of the roots in the equations of the input system can be highly unstable.
The second solver is PHCpack,[13][16] written under the direction of J. Verschelde. PHCpack implements the homotopy continuation method. This solver computes the isolated complex solutions of polynomial systems having as many equations as variables.
The third solver is Bertini,[17][18] written by D. J. Bates, J. D. Hauenstein, A. J. Sommese, and C. W. Wampler. Bertini uses numerical homotopy continuation with adaptive precision. In addition to computing zero-dimensional solution sets, both PHCpack and Bertini are capable of working with positive dimensional solution sets.
The fourth solver is the Maple library RegularChains, written by Marc Moreno-Maza and collaborators. It contains various functions for solving polynomial systems by means of regular chains.