
In computability theory and computational complexity theory, a reduction is an algorithm for transforming one problem into another problem. A sufficiently efficient reduction from one problem to another may be used to show that the second problem is at least as difficult as the first.
Intuitively, problem A is reducible to problem B if an algorithm for solving problem B efficiently (if it exists) could also be used as a subroutine to solve problem A efficiently. When this is true, solving A cannot be harder than solving B. "Harder" means having a higher estimate of the required computational resources in a given context (e.g., higher time complexity, greater memory requirement, need for extra hardware processor cores for a parallel solution compared to a single-threaded solution, etc.). The existence of a reduction from A to B can be written in the shorthand notation A ≤m B, usually with a subscript on the ≤ to indicate the type of reduction being used (m: many-one reduction, p: polynomial reduction).
The mathematical structure generated on a set of problems by the reductions of a particular type generally forms a preorder, whose equivalence classes may be used to define degrees of unsolvability and complexity classes.
There are two main situations where we need to use reductions: to solve a new problem by transforming its instances into instances of a problem we already know how to solve, and to show that a new problem is hard by transforming instances of a known hard problem into instances of the new one.
A very simple example of a reduction is from multiplication to squaring. Suppose all we know how to do is to add, subtract, take squares, and divide by two. We can use this knowledge, combined with the following formula, to obtain the product of any two numbers: a × b = ((a + b)² − a² − b²) / 2.
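This reduction can be sketched directly, using the identity a × b = ((a + b)² − a² − b²) / 2. The function names below are illustrative, not from the original text; `square` stands in for the one operation we assume we already know how to perform.

```python
def square(x):
    """The operation we assume is already available."""
    return x * x

def halve(x):
    """Divide by two, one of the permitted basic operations."""
    return x / 2

def multiply(a, b):
    """Multiply using only addition, subtraction, squaring, and
    halving, via a*b = ((a+b)^2 - a^2 - b^2) / 2."""
    return halve(square(a + b) - square(a) - square(b))
```

Every multiplication instance is thus solved by a fixed number of calls to the squaring subroutine plus cheap arithmetic, which is exactly what the reduction claims.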
We also have a reduction in the other direction; obviously, if we can multiply two numbers, we can square a number. This seems to imply that these two problems are equally hard. This kind of reduction corresponds to Turing reduction.
However, the reduction becomes much harder if we add the restriction that we can only use the squaring function one time, and only at the end. In this case, even if we're allowed to use all the basic arithmetic operations, including multiplication, no reduction exists in general, because in order to get the desired result as a square we have to compute its square root first, and this square root could be an irrational number like √2 that cannot be constructed by arithmetic operations on rational numbers. Going in the other direction, however, we can certainly square a number with just one multiplication, only at the end. Using this limited form of reduction, we have shown the unsurprising result that multiplication is harder in general than squaring. This corresponds to many-one reduction.
Reducibility is a preordering, that is, a reflexive and transitive relation, on P(N) × P(N), where P(N) is the power set of the natural numbers.
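The preorder structure can be made concrete: reflexivity is witnessed by the identity map, and transitivity by composing reduction functions. The following sketch uses toy problems and reductions of my own choosing (odd numbers, even numbers, multiples of four), not examples from the original text, with each problem modeled as a membership predicate on integers.

```python
# A many-one reduction from X to Y is a computable map f with:
#   n in X  <=>  f(n) in Y.

in_A = lambda n: n % 2 == 1      # A: odd numbers
in_B = lambda n: n % 2 == 0      # B: even numbers
in_C = lambda n: n % 4 == 0      # C: multiples of 4

f = lambda n: n + 1              # witnesses A <= B
g = lambda n: 2 * n              # witnesses B <= C

# Transitivity: the composition g . f witnesses A <= C.
h = lambda n: g(f(n))

# Reflexivity: the identity map witnesses A <= A.
identity = lambda n: n
```

Composition is why equivalence classes under mutual reducibility are well defined: chains of reductions collapse into a single reduction.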
As described in the example above, there are two main types of reductions used in computational complexity theory: the many-one reduction and the Turing reduction. Many-one reductions map instances of one problem to instances of another; Turing reductions compute the solution to one problem, assuming the other problem is easy to solve. The many-one reduction is a stronger type of Turing reduction, and is more effective at separating problems into distinct complexity classes. However, the increased restrictions on many-one reductions make them more difficult to find.
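The contrast between the two types can be sketched with a deliberately trivial toy pair of problems (chosen for illustration; not from the original text): deciding oddness, given a solver for evenness.

```python
def is_even(n):
    """Problem B, which we assume we can already solve (the oracle)."""
    return n % 2 == 0

def is_odd_many_one(n):
    """Many-one style: transform the instance once, ask B a single
    question, and return B's answer unchanged."""
    return is_even(n + 1)

def is_odd_turing(n):
    """Turing style: call the oracle for B freely and post-process
    its answers (here, negating the result)."""
    return not is_even(n)
```

The many-one version is more restrictive: it may not negate or combine oracle answers, only hand back the answer to one transformed instance, which is why many-one reducibility is the finer tool for separating classes.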
A problem is complete for a complexity class if every problem in the class reduces to that problem, and it is also in the class itself. In this sense the problem represents the class, since any solution to it can, in combination with the reductions, be used to solve every problem in the class.
However, in order to be useful, reductions must be easy. For example, it's quite possible to reduce a difficult-to-solve NP-complete problem like the boolean satisfiability problem to a trivial problem, like determining if a number equals zero, by having the reduction machine solve the problem in exponential time and output zero only if there is a solution. However, this does not achieve much, because even though we can solve the new problem, performing the reduction is just as hard as solving the old problem. Likewise, a reduction computing a noncomputable function can reduce an undecidable problem to a decidable one. As Michael Sipser points out in Introduction to the Theory of Computation: "The reduction must be easy, relative to the complexity of typical problems in the class [...] If the reduction itself were difficult to compute, an easy solution to the complete problem wouldn't necessarily yield an easy solution to the problems reducing to it."
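The "useless" reduction described above can be written out to make the point vivid. This is a sketch under simple assumptions: a CNF formula is a list of clauses, each clause a list of nonzero integers with negative values meaning negated variables; the names `brute_force_sat` and `useless_reduction` are hypothetical.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Exponential-time satisfiability check by trying every
    assignment of the n_vars variables."""
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def useless_reduction(clauses, n_vars):
    """'Reduces' SAT to 'is the number zero?' -- but only by solving
    SAT itself inside the reduction, so it proves nothing about the
    hardness of the target problem."""
    return 0 if brute_force_sat(clauses, n_vars) else 1

def is_zero(n):
    """The trivial target problem."""
    return n == 0
```

The composed algorithm is correct, but all the exponential work sits inside the reduction, not the target problem, which is precisely why such a reduction establishes nothing.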
Therefore, the appropriate notion of reduction depends on the complexity class being studied. When studying the complexity class NP and many harder classes such as the polynomial hierarchy, polynomial-time reductions are used. When studying classes within P such as NC and NL, log-space reductions are used. Reductions are also used in computability theory to show whether problems are or are not solvable by machines at all; in this case, reductions are restricted only to computable functions (for many-one reductions) or oracle machines (for Turing reductions).
In the case of optimization (maximization or minimization) problems, we often think in terms of approximation-preserving reductions. Suppose we have two optimization problems such that instances of one problem can be mapped onto instances of the other, in a way that nearly optimal solutions to instances of the latter problem can be transformed back to yield nearly optimal solutions to the former. This way, if we have an optimization algorithm (or approximation algorithm) that finds near-optimal (or optimal) solutions to instances of problem B, and an efficient approximation-preserving reduction from problem A to problem B, by composition we obtain an optimization algorithm that yields near-optimal solutions to instances of problem A. Approximation-preserving reductions are often used to prove hardness of approximation results: if some optimization problem A is hard to approximate (under some complexity assumption) within a factor better than α for some α, and there is a β-approximation-preserving reduction from problem A to problem B, we can conclude that problem B is hard to approximate within factor α/β.
The following example shows how to use reduction from the halting problem to prove that a language is undecidable. Suppose H(M, w) is the problem of determining whether a given Turing machine M halts (by accepting or rejecting) on input string w. This language is known to be undecidable. Suppose E(M) is the problem of determining whether the language a given Turing machine M accepts is empty (in other words, whether M accepts any strings at all). We show that E is undecidable by a reduction from H.
To obtain a contradiction, suppose R is a decider for E. We will use this to produce a decider S for H (which we know does not exist). Given input M and w (a Turing machine and some input string), define S(M, w) with the following behavior: S creates a Turing machine N that accepts only if the input string to N is w and M halts on input w, and does not halt otherwise. The decider S can now evaluate R(N) to check whether the language accepted by N is empty. If R accepts N, then the language accepted by N is empty, so in particular M does not halt on input w, so S can reject. If R rejects N, then the language accepted by N is nonempty, so M does halt on input w, so S can accept. Thus, if we had a decider R for E, we would be able to produce a decider S for the halting problem H(M, w) for any machine M and input w. Since we know that such an S cannot exist, it follows that the language E is also undecidable.
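The construction of N and S can be sketched in code, with heavy caveats: here a "Turing machine" is just a Python function, "does not halt" is an infinite loop, and the names `make_N` and `S` are hypothetical. No actual decider R for emptiness exists; the sketch only shows the wiring of the reduction.

```python
def make_N(M, w):
    """Build the machine N used in the reduction: N accepts its
    input x only if x == w and M halts on w; otherwise N loops."""
    def N(x):
        if x != w:
            while True:       # input is not w: reject by looping
                pass
        M(w)                  # run M on w; if M loops, so does N
        return True           # M halted and x == w: accept
    return N

def S(R, M, w):
    """A decider for halting, given a (nonexistent) decider R for
    emptiness: M halts on w  iff  L(N) is nonempty, i.e. R rejects N."""
    return not R(make_N(M, w))
```

Running `make_N` on a machine that halts shows L(N) = {w} as the proof requires; the impossibility of a real R is exactly what the contradiction exploits.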