
In computability theory, a system of data-manipulation rules (such as a model of computation, a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing-complete or computationally universal if it can be used to simulate any Turing machine[1][2] (devised by English mathematician and computer scientist Alan Turing). This means that the system is able to recognize or decode other data-manipulation rule sets. Turing completeness is used as a way to express the power of such a data-manipulation rule set. Virtually all programming languages today are Turing-complete.[a]
A related concept is that of Turing equivalence – two computers P and Q are called equivalent if P can simulate Q and Q can simulate P.[4] The Church–Turing thesis conjectures that any function whose values can be computed by an algorithm can be computed by a Turing machine, and therefore that if any real-world computer can simulate a Turing machine, it is Turing equivalent to a Turing machine. A universal Turing machine can be used to simulate any Turing machine and by extension the purely computational aspects of any possible real-world computer.[5][6]
To show that something is Turing-complete, it is enough to demonstrate that it can be used to simulate some Turing-complete system. No physical system can have infinite memory, but if the limitation of finite memory is ignored, most programming languages are otherwise Turing-complete.[7][8]
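The standard route for such a demonstration is simulation. As a purely illustrative sketch, here is a minimal Turing machine simulator in Python; the transition-table format, the state names, and the bit-flipping example machine are hypothetical choices for the example, not drawn from the cited sources:

```python
# Minimal Turing machine simulator. A transition table maps
# (state, read symbol) -> (symbol to write, head move, next state).
def run_turing_machine(transitions, tape, state="start", blank="_"):
    """Run until the machine enters the 'halt' state; return the tape."""
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: walk right, inverting 0s and 1s, halting at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip, "10110"))  # prints 01001
```

A system that can host such a simulator for an arbitrary transition table (setting aside the finite-memory caveat above) is, by the argument in the text, Turing-complete.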
In colloquial usage, the terms "Turing-complete" and "Turing-equivalent" are used to mean that any real-world general-purpose computer or computer language can approximately simulate the computational aspects of any other real-world general-purpose computer or computer language. In real life, this leads to the practical concepts of computing virtualization and emulation.[citation needed]
Real computers constructed so far can be functionally analyzed like a single-tape Turing machine (which uses a "tape" for memory); thus the associated mathematics can apply by abstracting their operation far enough. However, real computers have limited physical resources, so they are only linear bounded automaton complete. In contrast, the abstraction of a universal computer is defined as a device with a Turing-complete instruction set, infinite memory, and infinite available time.[citation needed]
In computability theory, several closely related terms are used to describe the computational power of a computational system (such as an abstract machine or programming language).
Turing completeness is significant in that every real-world design for a computing device can be simulated by a universal Turing machine. The Church–Turing thesis states that this is a law of mathematics – that a universal Turing machine can, in principle, perform any calculation that any other programmable computer can. This says nothing about the effort needed to write the program, or the time it may take for the machine to perform the calculation, or any abilities the machine may possess that have nothing to do with computation.
Charles Babbage's analytical engine (1830s) would have been the first Turing-complete machine if it had been built at the time it was designed. Babbage appreciated that the machine was capable of great feats of calculation, including primitive logical reasoning, but he did not appreciate that no other machine could do better.[citation needed] From the 1830s until the 1940s, mechanical calculating machines such as adders and multipliers were built and improved, but they could not perform a conditional branch and therefore were not Turing-complete.
In the late 19th century, Leopold Kronecker formulated notions of computability, defining primitive recursive functions. These functions can be calculated by rote computation, but they are not enough to make a universal computer, because the instructions that compute them do not allow for an infinite loop. In the early 20th century, David Hilbert led a program to axiomatize all of mathematics with precise axioms and precise logical rules of deduction that could be performed by a machine. Soon it became clear that a small set of deduction rules is enough to produce the consequences of any set of axioms. These rules were proved by Kurt Gödel in 1930 to be enough to produce every theorem.
The actual notion of computation was isolated soon after, starting with Gödel's incompleteness theorem. This theorem showed that axiom systems were limited when reasoning about the computation that deduces their theorems. Church and Turing independently demonstrated that Hilbert's Entscheidungsproblem (decision problem) was unsolvable,[9] thus identifying the computational core of the incompleteness theorem. This work, along with Gödel's work on general recursive functions, established that there are sets of simple instructions, which, when put together, are able to produce any computation. The work of Gödel showed that the notion of computation is essentially unique.
In 1941 Konrad Zuse completed the Z3 computer. Zuse was not familiar with Turing's work on computability at the time. In particular, the Z3 lacked dedicated facilities for a conditional jump, thereby precluding it from being Turing complete. However, in 1998, Rojas showed that the Z3 is capable of simulating conditional jumps, and is therefore Turing complete in theory. To do this, its tape program would have to be long enough to execute every possible path through both sides of every branch.[10]
The first computer capable of conditional branching in practice, and therefore Turing complete in practice, was the ENIAC in 1946. Zuse's Z4 computer was operational in 1945, but it did not support conditional branching until 1950.[11]
Computability theory uses models of computation to analyze problems and determine whether they are computable and under what circumstances. The first result of computability theory is that there exist problems for which it is impossible to predict what a (Turing-complete) system will do over an arbitrarily long time.
The classic example is the halting problem: create an algorithm that takes as input a program in some Turing-complete language and some data to be fed to that program, and determines whether the program, operating on the input, will eventually stop or will continue forever. It is trivial to create an algorithm that can do this for some inputs, but impossible to do this in general. For any nontrivial characteristic of the program's eventual behavior, it is impossible in general to determine whether this characteristic will hold (this is Rice's theorem).
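The "possible for some inputs" half of that claim can be made concrete with a step-bounded checker, sketched here in Python (an illustrative toy built on `sys.settrace`; the function names are hypothetical). Running a program for a bounded number of steps can prove that it halts, but exhausting the budget proves nothing:

```python
import sys

class StepLimit(Exception):
    pass

def halts_within(f, arg, max_steps):
    """Return True if f(arg) is observed to halt within max_steps trace
    events; return None (unknown) if the budget runs out first."""
    count = 0
    def tracer(frame, event, obj):
        nonlocal count
        count += 1
        if count > max_steps:
            raise StepLimit   # abort the traced computation
        return tracer
    sys.settrace(tracer)
    try:
        f(arg)
        return True
    except StepLimit:
        return None           # not False: the program might still halt later
    finally:
        sys.settrace(None)

def countdown(n):
    while n > 0:
        n -= 1

print(halts_within(countdown, 3, 1000))      # True: halting was observed
print(halts_within(countdown, 10**9, 1000))  # None: no conclusion either way
```

The checker is sound but incomplete: the halting problem says precisely that no amount of cleverness can upgrade every `None` into a correct yes-or-no answer.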
This impossibility poses problems when analyzing real-world computer programs. For example, one cannot write a tool that entirely protects programmers from writing infinite loops or protects users from supplying input that would cause infinite loops.
One can instead limit a program to executing only for a fixed period of time (timeout) or limit the power of flow-control instructions (for example, providing only loops that iterate over the items of an existing array). However, another theorem shows that there are problems solvable by Turing-complete languages that cannot be solved by any language with only finite looping abilities (i.e., languages that guarantee that every program will eventually halt). So any such language is not Turing-complete. For example, a language in which programs are guaranteed to complete and halt cannot compute the computable function produced by Cantor's diagonal argument on all computable functions in that language.
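The diagonal argument can be played out on a deliberately tiny total language (a toy constructed for this illustration, not from the cited sources): let "programs" be arithmetic expressions in n over the symbols n, 0, 1, 2, +, *, and parentheses, every one of which halts on every input. Enumerating those programs and adding 1 on the diagonal yields a perfectly computable total function that, by construction, no program of the toy language computes:

```python
from itertools import count, product

ALPHABET = "n012+*()"

def valid(expr):
    """A string is a program if it parses and evaluates as arithmetic in n."""
    try:
        eval(compile(expr, "<expr>", "eval"), {"n": 1})
        return True
    except Exception:
        return False

def programs():
    """Enumerate every program of the toy language, shortest first."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            expr = "".join(chars)
            if valid(expr):
                yield expr

def run(expr, n):
    return eval(expr, {"n": n})

def diagonal(n):
    """Total and computable, yet differs from the n-th program at input n."""
    gen = programs()
    for _ in range(n):
        next(gen)
    return run(next(gen), n) + 1
```

`diagonal` is itself a terminating computation, but it disagrees with every program of the expression language at some input, so it cannot be written in that language; the same argument defeats any language all of whose programs halt.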
A computer with access to an infinite tape of data may be more powerful than a Turing machine: for instance, the tape might contain the solution to the halting problem or some other Turing-undecidable problem. Such an infinite tape of data is called a Turing oracle. Even a Turing oracle with random data is not computable (with probability 1), since there are only countably many computations but uncountably many oracles. So a computer with a random Turing oracle can compute things that a Turing machine cannot.
All known laws of physics have consequences that are computable by a series of approximations on a digital computer. A hypothesis called digital physics states that this is no accident, because the universe itself is computable on a universal Turing machine. This would imply that no computer more powerful than a universal Turing machine can be built physically.[12]
The computational systems (algebras, calculi) that are discussed as Turing-complete systems are those intended for studying theoretical computer science. They are intended to be as simple as possible, so that the limits of computation are easier to understand.
Most programming languages (in terms of their abstract models, perhaps with particular constructs that assume finite memory omitted), conventional and unconventional, are Turing-complete.
Some rewrite systems are Turing-complete.
Turing completeness is an abstract statement of ability, rather than a prescription of specific language features used to implement that ability. The features used to achieve Turing completeness can be quite different; Fortran systems would use loop constructs or possibly even goto statements to achieve repetition; Haskell and Prolog, lacking looping almost entirely, would use recursion. Most programming languages describe computations on von Neumann architectures, which have memory (RAM and registers) and a control unit. These two elements make this architecture Turing-complete. Even pure functional languages are Turing-complete.[15][16]
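The contrast can be sketched in a few lines of Python (an illustrative example; neither function comes from the cited sources), computing the same factorial once with a loop and once with recursion:

```python
def factorial_loop(n):
    """Repetition via an explicit loop, as a Fortran program might express it."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_rec(n):
    """Repetition via recursion, in the style of Haskell or Prolog."""
    return 1 if n <= 1 else n * factorial_rec(n - 1)

print(factorial_loop(10), factorial_rec(10))  # both print 3628800
```

Either mechanism alone suffices for unbounded repetition, which is why such different languages end up with the same computational power.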
Turing completeness in declarative SQL is implemented through recursive common table expressions. Unsurprisingly, procedural extensions to SQL (PL/SQL, etc.) are also Turing-complete. This illustrates one reason why relatively powerful non-Turing-complete languages are rare: the more powerful the language is initially, the more complex are the tasks to which it is applied and the sooner its lack of completeness becomes perceived as a drawback, encouraging its extension until it is Turing-complete.
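A runnable illustration of a recursive common table expression, using Python's built-in sqlite3 module (the table and column names are arbitrary choices for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A recursive CTE: a non-recursive seed row plus a self-referencing
# recursive step, here generating the first ten Fibonacci numbers.
rows = conn.execute("""
    WITH RECURSIVE fib(n, a, b) AS (
        SELECT 1, 0, 1
        UNION ALL
        SELECT n + 1, b, a + b FROM fib WHERE n < 10
    )
    SELECT a FROM fib
""").fetchall()

print([a for (a,) in rows])  # -> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Iteration is expressed purely declaratively: the recursive step plays the role of a loop body and the WHERE clause its termination test, and without such a bound the query recurses indefinitely, which is exactly the kind of unbounded computation Turing completeness requires.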
The untyped lambda calculus is Turing-complete, but many typed lambda calculi, including System F, are not. The value of typed systems lies in their ability to represent most typical computer programs while detecting more errors.
Rule 110 and Conway's Game of Life, both cellular automata, are Turing-complete.
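Rule 110 in particular is easy to state: each cell's next value depends only on itself and its two neighbors. A minimal step function in Python (an illustrative sketch using a fixed-width row padded with zeros, rather than the infinite row of the formal automaton):

```python
# The eight neighborhood patterns of Rule 110 (binary 01101110 = 110).
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply one synchronous update, treating cells beyond the edge as 0."""
    padded = [0] + list(cells) + [0]
    return [RULE_110[tuple(padded[i - 1:i + 2])]
            for i in range(1, len(padded) - 1)]

row = [0] * 7 + [1]
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

That this local rule supports universal computation (via interacting gliders, encoded in a sufficiently wide initial row) is what the Turing-completeness result establishes.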
Some software and video games are Turing-complete by accident, i.e. not by design.
Many computational languages exist that are not Turing-complete. One such example is the set of regular languages, which are generated by regular expressions and which are recognized by finite automata. A more powerful but still not Turing-complete extension of finite automata is the category of pushdown automata and context-free grammars, which are commonly used to generate parse trees in an initial stage of program compilation. Further examples include some of the early versions of the pixel shader languages embedded in Direct3D and OpenGL extensions.[citation needed]
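A finite automaton for a regular language fits in a handful of lines. As an illustrative sketch (the state names and helper function are hypothetical), here is a deterministic finite automaton for the regular expression (ab)*:

```python
# Transitions of a DFA recognizing (ab)*; missing entries act as a dead state.
TRANSITIONS = {
    ("even", "a"): "odd",   # saw the 'a' of a pair
    ("odd", "b"): "even",   # completed an 'ab' pair
}

def accepts(s):
    state = "even"                         # start state, also accepting
    for ch in s:
        state = TRANSITIONS.get((state, ch))
        if state is None:                  # no transition: reject immediately
            return False
    return state == "even"

print(accepts("abab"), accepts("aba"))  # True False
```

Because the machine has finitely many states and no auxiliary storage, it cannot recognize, for example, balanced parentheses; that limitation is what separates this tier from pushdown automata and, further up, Turing-complete languages.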
In total functional programming languages, such as Charity and Epigram, all functions are total and must terminate. Charity uses a type system and control constructs based on category theory, whereas Epigram uses dependent types. The LOOP language is designed so that it computes only the functions that are primitive recursive. All of these compute proper subsets of the total computable functions, since the full set of total computable functions is not computably enumerable. Also, since all functions in these languages are total, algorithms for recursively enumerable sets cannot be written in these languages, in contrast with Turing machines.
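The flavor of LOOP-style programs can be sketched in Python (a toy illustration; LOOP's actual syntax differs): every loop bound is fixed before the loop begins and cannot change mid-loop, so every program terminates, yet primitive recursive functions such as addition and multiplication are expressible:

```python
def add(x, y):
    # LOOP-style: the bound y is fixed before the loop body ever runs.
    for _ in range(y):
        x += 1
    return x

def mul(x, y):
    result = 0
    for _ in range(y):
        result = add(result, x)   # repetition built from simpler repetition
    return result

print(add(3, 4), mul(3, 4))  # 7 12
```

Functions like the Ackermann function are total and computable but grow faster than anything such nested bounded loops can express, which is one way to see that these languages compute only a proper subset of the total computable functions.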
Although (untyped) lambda calculus is Turing-complete, simply typed lambda calculus is not.