In computational complexity theory, P, also known as PTIME or DTIME(n^O(1)), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.
Cobham's thesis holds that P is the class of computational problems that are "efficiently solvable" or "tractable". This is inexact: in practice, some problems not known to be in P have practical solutions, and some that are in P do not, but this is a useful rule of thumb.
A language L is in P if and only if there exists a deterministic Turing machine M, such that:
- M runs for polynomial time on all inputs
- For all x in L, M outputs 1
- For all x not in L, M outputs 0
P can also be viewed as a uniform family of Boolean circuits. A language L is in P if and only if there exists a polynomial-time uniform family of Boolean circuits {C_n : n ∈ N}, such that:
- For all n ∈ N, C_n takes n bits as input and outputs 1 bit
- For all x in L, C_|x|(x) = 1
- For all x not in L, C_|x|(x) = 0
The circuit definition can be weakened to use only a logspace uniform family without changing the complexity class.
P is known to contain many natural problems, including the decision versions of linear programming and finding a maximum matching. In 2002, it was shown that the problem of determining if a number is prime is in P.[1] The related class of function problems is FP.
Several natural problems are complete for P, including st-connectivity (or reachability) on alternating graphs.[2] The article on P-complete problems lists further relevant problems in P.
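As a concrete illustration, ordinary st-connectivity on a directed graph is decidable in polynomial time by breadth-first search; a minimal sketch (the adjacency-list representation and names here are illustrative, not from the article):

```python
from collections import deque

def reachable(graph, s, t):
    """Decide st-connectivity: is there a directed path from s to t?

    graph: dict mapping each vertex to a list of its successors.
    Runs in O(V + E) time, so the problem is in P.
    """
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False
```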


A generalization of P is NP, which is the class of decision problems decidable by a non-deterministic Turing machine that runs in polynomial time. Equivalently, it is the class of decision problems where each "yes" instance has a polynomial size certificate, and certificates can be checked by a polynomial time deterministic Turing machine. The class of problems for which this is true for the "no" instances is called co-NP. P is trivially a subset of NP and of co-NP; most experts believe it is a proper subset,[3] although this belief (the P ≠ NP hypothesis) remains unproven. Another open problem is whether NP = co-NP; since P = co-P,[4] a negative answer would imply P ≠ NP.
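As an illustration of certificate checking, a satisfying assignment serves as a polynomial-size certificate for a SAT instance, and verifying it takes polynomial time; a minimal sketch (the DIMACS-style literal encoding is an assumption, not from the article):

```python
def verify_sat(clauses, assignment):
    """Polynomial-time verifier for SAT certificates.

    clauses: list of clauses; each clause is a list of nonzero ints,
             where k means variable k and -k its negation.
    assignment: dict mapping variable number -> bool (the certificate).
    Returns True iff the assignment satisfies every clause.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )
```

The check is a single linear scan over the formula, which is what places SAT in NP even though no polynomial-time algorithm is known for finding such a certificate.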
P is also known to be at least as large as L, the class of problems decidable in a logarithmic amount of memory space. A decider using O(log n) space cannot use more than 2^O(log n) = n^O(1) time, because this is the total number of possible configurations; thus, L is a subset of P. Another important problem is whether L = P. We do know that P = AL, the set of problems solvable in logarithmic memory by alternating Turing machines. P is also known to be no larger than PSPACE, the class of problems decidable in polynomial space. PSPACE is equivalent to NPSPACE by Savitch's theorem. Again, whether P = PSPACE is an open problem. To summarize:

L ⊆ AL = P ⊆ NP ⊆ PSPACE ⊆ EXPTIME
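The configuration-counting argument behind L ⊆ P can be made explicit; a standard back-of-the-envelope derivation (the symbols Q, Γ, and s(n) for the state set, tape alphabet, and space bound are conventional, not from the article):

```latex
\#\,\text{configurations} \;\le\; |Q| \cdot (n+1) \cdot s(n) \cdot |\Gamma|^{s(n)}
\;=\; 2^{O(\log n)} \;=\; n^{O(1)} \qquad \text{when } s(n) = O(\log n)
```

A deterministic machine that halts never repeats a configuration, so its running time is bounded by this count.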
Here, EXPTIME is the class of problems solvable in exponential time. Of all the classes shown above, only two strict containments are known:
- P is strictly contained in EXPTIME (by the time hierarchy theorem).
- L is strictly contained in PSPACE (by the space hierarchy theorem).
The most difficult problems in P are P-complete problems.
Another generalization of P is P/poly, or Nonuniform Polynomial-Time. If a problem is in P/poly, then it can be solved in deterministic polynomial time provided that an advice string is given that depends only on the length of the input. Unlike for NP, however, the polynomial-time machine doesn't need to detect fraudulent advice strings; it is not a verifier. P/poly is a large class containing nearly all practical problems, including all of BPP. If it contains NP, then the polynomial hierarchy collapses to the second level. On the other hand, it also contains some impractical problems, including some undecidable problems such as the unary version of any undecidable problem.
In 1999, Jin-Yi Cai and D. Sivakumar, building on work by Mitsunori Ogihara, showed that if there exists a sparse language that is P-complete, then L = P.[5]

P is contained in BQP; it is unknown whether this containment is strict.
Polynomial-time algorithms are closed under composition. Intuitively, this says that if one writes a function that is polynomial-time assuming that function calls are constant-time, and if those called functions themselves require polynomial time, then the entire algorithm takes polynomial time. One consequence of this is that P is low for itself. This is also one of the main reasons that P is considered to be a machine-independent class; any machine "feature", such as random access, that can be simulated in polynomial time can simply be composed with the main polynomial-time algorithm to reduce it to a polynomial-time algorithm on a more basic machine.
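The composition property can be seen in an everyday sketch (the function name and task are illustrative, not from the article): each subroutine below runs in polynomial time, so the whole remains polynomial.

```python
def has_duplicate(items):
    """Polynomial-time check assembled from polynomial-time pieces.

    sorted() runs in O(n log n); the pairwise scan below is O(n).
    Composing polynomial-time steps yields a polynomial-time whole,
    O(n log n) overall.
    """
    ordered = sorted(items)  # polynomial-time subroutine
    return any(a == b for a, b in zip(ordered, ordered[1:]))
```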
Languages in P are also closed under reversal, intersection, union, concatenation, Kleene closure, inverse homomorphism, and complementation.[6]
Some problems are known to be solvable in polynomial time, but no concrete algorithm is known for solving them. For example, the Robertson–Seymour theorem guarantees that there is a finite list of forbidden minors that characterizes (for example) the set of graphs that can be embedded on a torus; moreover, Robertson and Seymour showed that there is an O(n^3) algorithm for determining whether a graph has a given graph as a minor. This yields a nonconstructive proof that there is a polynomial-time algorithm for determining if a given graph can be embedded on a torus, despite the fact that no concrete algorithm is known for this problem.
In descriptive complexity, P can be described as the problems expressible in FO(LFP), the first-order logic with a least fixed point operator added to it, on ordered structures. In Immerman's 1999 textbook on descriptive complexity,[7] Immerman ascribes this result to Vardi[8] and to Immerman.[9]
It was published in 2001 that PTIME corresponds to (positive) range concatenation grammars.[10]
P can also be defined as an algorithmic complexity class for problems that are not decision problems[11] (even though, for example, finding the solution to a 2-satisfiability instance in polynomial time automatically gives a polynomial algorithm for the corresponding decision problem). In that case P is not a subset of NP, but P ∩ DEC is, where DEC is the class of decision problems.
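The 2-satisfiability problem mentioned above does admit a concrete polynomial-time algorithm; a sketch of the standard implication-graph method (the literal encoding and names are illustrative, not from the article): each clause (a ∨ b) yields implications ¬a → b and ¬b → a, and the formula is satisfiable iff no variable shares a strongly connected component with its negation.

```python
def two_sat(num_vars, clauses):
    """Decide 2-SAT in linear time via the implication graph.

    clauses: list of (a, b) pairs of nonzero ints meaning (a OR b);
    k stands for variable k, -k for its negation.
    """
    n = 2 * num_vars
    graph = [[] for _ in range(n)]
    rgraph = [[] for _ in range(n)]

    def node(lit):  # literal -> vertex index
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

    for a, b in clauses:
        # (a or b) gives implications: not a -> b and not b -> a
        for u, v in ((node(a) ^ 1, node(b)), (node(b) ^ 1, node(a))):
            graph[u].append(v)
            rgraph[v].append(u)

    # Kosaraju: order vertices by DFS finish time (iteratively) ...
    order, visited = [], [False] * n
    for s in range(n):
        if visited[s]:
            continue
        visited[s] = True
        stack = [(s, 0)]
        while stack:
            u, i = stack.pop()
            if i < len(graph[u]):
                stack.append((u, i + 1))
                v = graph[u][i]
                if not visited[v]:
                    visited[v] = True
                    stack.append((v, 0))
            else:
                order.append(u)

    # ... then label strongly connected components on the reverse graph.
    comp, label = [-1] * n, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = label
        stack = [s]
        while stack:
            u = stack.pop()
            for v in rgraph[u]:
                if comp[v] == -1:
                    comp[v] = label
                    stack.append(v)
        label += 1

    # Unsatisfiable iff some variable and its negation share a component.
    return all(comp[2 * v] != comp[2 * v + 1] for v in range(num_vars))
```

Both passes are linear in the size of the formula, so the decision procedure runs well within polynomial time.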
Kozen[12] states that Cobham and Edmonds are "generally credited with the invention of the notion of polynomial time," though Rabin also invented the notion independently and around the same time (Rabin's paper[13] was in a 1967 proceedings of a 1966 conference, while Cobham's[14] was in a 1965 proceedings of a 1964 conference and Edmonds's[15] was published in a journal in 1965, though Rabin makes no mention of either and was apparently unaware of them). Cobham invented the class as a robust way of characterizing efficient algorithms, leading to Cobham's thesis. However, H. C. Pocklington, in a 1910 paper,[16][17] analyzed two algorithms for solving quadratic congruences, and observed that one took time "proportional to a power of the logarithm of the modulus" and contrasted this with one that took time proportional "to the modulus itself or its square root", thus explicitly drawing a distinction between an algorithm that ran in polynomial time versus one that ran in (moderately) exponential time.