
A quantum computer is a (real or theoretical) computer that exploits superposed and entangled states, and the intrinsically non-deterministic outcomes of quantum measurements, as features of its computation. Quantum computers can be viewed as sampling from quantum systems that evolve in ways that may be described as operating on an enormous number of possibilities simultaneously, though still subject to strict computational constraints. By contrast, ordinary ("classical") computers operate according to deterministic rules. (A classical computer can, in principle, be replicated by a classical mechanical device, with only a simple multiple of time cost. On the other hand, it is believed that a quantum computer would require exponentially more time and energy to be simulated classically.) It is widely believed that a quantum computer could perform some calculations exponentially faster than any classical computer. For example, a large-scale quantum computer could break some widely used public-key cryptographic schemes and aid physicists in performing physical simulations. However, current hardware implementations of quantum computation are largely experimental and only suitable for specialized tasks.
The basic unit of information in quantum computing, the qubit (or "quantum bit"), serves the same function as the bit in ordinary or "classical" computing.[1] However, unlike a classical bit, which can be in one of two states (a binary), a qubit can exist in a linear combination of two states known as a quantum superposition. The result of measuring a qubit is one of the two states, given by a probabilistic rule. If a quantum computer manipulates the qubit in a particular way, wave interference effects amplify the probability of the desired measurement result. The design of quantum algorithms involves creating procedures that allow a quantum computer to perform this amplification.
Quantum computers are not yet practical for real-world applications. Physically engineering high-quality qubits has proven to be challenging. If a physical qubit is not sufficiently isolated from its environment, it suffers from quantum decoherence, introducing noise into calculations. National governments have invested heavily in experimental research aimed at developing scalable qubits with longer coherence times and lower error rates. Example implementations include superconductors (which isolate an electrical current by eliminating electrical resistance) and ion traps (which confine a single atomic particle using electromagnetic fields). Researchers have claimed, and are widely believed to be correct, that certain quantum devices can outperform classical computers on narrowly defined tasks, a milestone referred to as quantum advantage or quantum supremacy. These tasks are not necessarily useful for real-world applications.
For many years, the fields of quantum mechanics and computer science formed distinct academic communities.[2] Modern quantum theory was developed in the 1920s to explain perplexing physical phenomena observed at atomic scales,[3][4] and digital computers emerged in the following decades to replace human computers for tedious calculations.[5] Both disciplines had practical applications during World War II; computers played a major role in wartime cryptography,[6] and quantum physics was essential for the nuclear physics used in the Manhattan Project.[7]
As physicists applied quantum mechanical models to computational problems and swapped digital bits for qubits, the fields of quantum mechanics and computer science began to converge. In 1980, Paul Benioff introduced the quantum Turing machine, which uses quantum theory to describe a simplified computer.[8] When digital computers became faster, physicists faced an exponential increase in overhead when simulating quantum dynamics,[9] prompting Yuri Manin and Richard Feynman to independently suggest that hardware based on quantum phenomena might be more efficient for computer simulation.[10][11][12] In a 1984 paper, Charles Bennett and Gilles Brassard applied quantum theory to cryptography protocols and demonstrated that quantum key distribution could enhance information security.[13][14]
Quantum algorithms then emerged for solving oracle problems, such as Deutsch's algorithm in 1985,[15] the Bernstein–Vazirani algorithm in 1993,[16] and Simon's algorithm in 1994.[17] These algorithms did not solve practical problems, but demonstrated mathematically that one could obtain more information by querying a black box with a quantum state in superposition, sometimes referred to as quantum parallelism.[18]

Peter Shor built on these results with his 1994 algorithm for breaking the widely used RSA and Diffie–Hellman encryption protocols,[19] which drew significant attention to the field of quantum computing. In 1996, Grover's algorithm established a quantum speedup for the widely applicable unstructured search problem.[20][21] The same year, Seth Lloyd proved that quantum computers could simulate quantum systems without the exponential overhead present in classical simulations,[22] validating Feynman's 1982 conjecture.[23]
Over the years, experimentalists have constructed small-scale quantum computers using trapped ions and superconductors.[24] In 1998, a two-qubit quantum computer demonstrated the feasibility of the technology,[25][26] and subsequent experiments have increased the number of qubits and reduced error rates.[24]
In 2019, Google AI and NASA announced that they had achieved quantum supremacy with a 54-qubit machine, performing a computation that any classical computer would find impossible.[27][28][29][30]
This announcement was met with a rebuttal from IBM, which contended that the calculation Google claimed would take 10,000 years could be performed in just 2.5 days on its Summit supercomputer if its architecture were optimized, sparking a debate over the precise threshold for "quantum supremacy".[31]
Computer engineers typically describe a modern computer's operation in terms of classical electrodynamics. In these "classical" computers, some components (such as semiconductors and random number generators) may rely on quantum behavior; however, because they are not isolated from their environment, any quantum information quickly decoheres. While programmers may depend on probability theory when designing a randomized algorithm, quantum-mechanical notions such as superposition and wave interference are largely irrelevant in program analysis.
Quantum programs, in contrast, rely on precise control of coherent quantum systems. Physicists describe these systems mathematically using linear algebra. Complex numbers model probability amplitudes, vectors model quantum states, and matrices model the operations that can be performed on these states. Programming a quantum computer is then a matter of composing operations in such a way that the resulting program computes a useful result in theory and is implementable in practice.
As physicist Charlie Bennett describes the relationship between quantum and classical computers,[32]
A classical computer is a quantum computer ... so we shouldn't be asking about "where do quantum speedups come from?" We should say, "Well, all computers are quantum. ... Where do classical slowdowns come from?"
Just as the bit is the basic concept of classical information theory, the qubit is the fundamental unit of quantum information. The same term qubit is used to refer to an abstract mathematical model and to any physical system that is represented by that model. A classical bit, by definition, exists in either of two physical states, which can be denoted 0 and 1. A qubit is also described by a state, and two states, often written |0⟩ and |1⟩, serve as the quantum counterparts of the classical states 0 and 1. However, the quantum states |0⟩ and |1⟩ belong to a vector space, meaning that they can be multiplied by constants and added together, and the result is again a valid quantum state. Such a combination is known as a superposition of |0⟩ and |1⟩.[33][34]
A two-dimensional vector mathematically represents a qubit state. Physicists typically use bra–ket notation for quantum mechanical linear algebra, writing |ψ⟩ ("ket psi") for a vector labeled ψ. Because a qubit is a two-state system, any qubit state takes the form α|0⟩ + β|1⟩, where |0⟩ and |1⟩ are the standard basis states,[a] and α and β are the probability amplitudes, which are in general complex numbers.[34] If either α or β is zero, the qubit is effectively a classical bit; when both are nonzero, the qubit is in superposition. Such a quantum state vector behaves similarly to a (classical) probability vector, with one key difference: unlike probabilities, probability amplitudes are not necessarily positive numbers.[36] Negative amplitudes allow for destructive wave interference.
When a qubit is measured in the standard basis, the result is a classical bit. The Born rule describes the norm-squared correspondence between amplitudes and probabilities: when measuring a qubit α|0⟩ + β|1⟩, the state collapses to |0⟩ with probability |α|², or to |1⟩ with probability |β|². Any valid qubit state has coefficients α and β such that |α|² + |β|² = 1. As an example, measuring the qubit 1/√2|0⟩ + 1/√2|1⟩ would produce either |0⟩ or |1⟩ with equal probability.
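The Born rule can be sketched numerically. The following is a minimal illustration using NumPy as the linear-algebra library (the variable names are ours, chosen for clarity, not standard):

```python
import numpy as np

# A qubit state as a 2-dimensional complex vector a|0> + b|1>.
# Example: the equal superposition (1/sqrt(2))(|0> + |1>).
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Born rule: outcome probabilities are the squared magnitudes of the
# amplitudes; a valid state is normalized so that they sum to 1.
probs = np.abs(psi) ** 2

# Simulating many measurements of freshly prepared copies of this
# qubit reproduces the 50/50 statistics.
rng = np.random.default_rng(seed=0)
outcomes = rng.choice([0, 1], size=10_000, p=probs)
```

Each simulated measurement yields a single classical bit, mirroring the collapse described above.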
Each additional qubit doubles the dimension of the state space.[35] As an example, the vector 1/√2|00⟩ + 1/√2|01⟩ represents a two-qubit state, a tensor product of the qubit |0⟩ with the qubit 1/√2|0⟩ + 1/√2|1⟩. This vector inhabits a four-dimensional vector space spanned by the basis vectors |00⟩, |01⟩, |10⟩, and |11⟩. The Bell state 1/√2|00⟩ + 1/√2|11⟩ is impossible to decompose into the tensor product of two individual qubits; the two qubits are entangled because neither qubit has a state vector of its own. In general, the vector space for an n-qubit system is 2^n-dimensional, and this makes it challenging for a classical computer to simulate a quantum one: representing a 100-qubit system requires storing 2^100 classical values.
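The tensor-product construction can be checked directly with a few lines of linear algebra. This is a sketch using NumPy, where `np.kron` computes the tensor (Kronecker) product of vectors:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The two-qubit state (1/sqrt2)|00> + (1/sqrt2)|01> is the tensor
# product of |0> with the superposition (1/sqrt2)(|0> + |1>).
plus = (ket0 + ket1) / np.sqrt(2)
product_state = np.kron(ket0, plus)   # 4-dim: basis |00>, |01>, |10>, |11>

# The Bell state (1/sqrt2)|00> + (1/sqrt2)|11> has no such factorization:
# kron((a,b), (c,d)) = (ac, ad, bc, bd) would need ad = bc = 0 while
# ac and bd stay nonzero, which is impossible.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
```

The comment in the code restates, in coordinates, why the Bell state is entangled.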
The state of this one-qubit quantum memory can be manipulated by applying quantum logic gates, analogous to how classical memory can be manipulated with classical logic gates. One important gate for both classical and quantum computation is the NOT gate, which can be represented by the matrix

X = ( 0 1 )
    ( 1 0 ).

Mathematically, the application of such a logic gate to a quantum state vector is modeled with matrix multiplication. Thus X|0⟩ = |1⟩ and X|1⟩ = |0⟩.
The mathematics of single-qubit gates can be extended to operate on multi-qubit quantum memories in two important ways. One way is simply to select a qubit and apply that gate to the target qubit while leaving the remainder of the memory unaffected. Another way is to apply the gate to its target only if another part of the memory is in a desired state. These two choices can be illustrated using another example. The possible states of a two-qubit quantum memory are |00⟩, |01⟩, |10⟩, and |11⟩. The controlled NOT (CNOT) gate can then be represented using the following matrix:

CNOT = ( 1 0 0 0 )
       ( 0 1 0 0 )
       ( 0 0 0 1 )
       ( 0 0 1 0 ).

As a mathematical consequence of this definition, CNOT|00⟩ = |00⟩, CNOT|01⟩ = |01⟩, CNOT|10⟩ = |11⟩, and CNOT|11⟩ = |10⟩. In other words, the CNOT applies a NOT gate (X from before) to the second qubit if and only if the first qubit is in the state |1⟩. If the first qubit is |0⟩, nothing is done to either qubit.
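These gate actions can be verified by explicit matrix multiplication. A small NumPy sketch, using the basis order |00⟩, |01⟩, |10⟩, |11⟩:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# NOT (Pauli-X) gate: matrix multiplication exchanges |0> and |1>.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)

# CNOT on two qubits: flip the second qubit iff the first is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# "Apply X to the first qubit only" is the tensor product X (x) I.
X_on_first = np.kron(X, np.eye(2))

flipped = X @ ket0                 # X|0> = |1>
ket10 = np.kron(ket1, ket0)        # the state |10>
after_cnot = CNOT @ ket10          # control is |1>, so the target flips
```

The tensor product with the identity is how a single-qubit gate is extended to act on one qubit of a larger memory while leaving the rest unaffected.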
In summary, quantum computation can be described as a network of quantum logic gates and measurements. However, any measurement can be deferred to the end of the computation, though this deferment may come at a computational cost, so most quantum circuits depict a network consisting only of quantum logic gates and no measurements.
Quantum parallelism is the heuristic that quantum computers can be thought of as evaluating a function for multiple input values simultaneously. This can be achieved by preparing a quantum system in a superposition of input states and applying a unitary transformation that encodes the function to be evaluated. The resulting state encodes the function's output values for all input values in the superposition, enabling the simultaneous computation of multiple outputs. This property is key to the speedup of many quantum algorithms. However, "parallelism" in this sense is insufficient to speed up a computation, because the measurement at the end of the computation gives only one value. To be useful, a quantum algorithm must also incorporate some other conceptual ingredient.[37][38]
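A minimal statevector illustration of this heuristic (our own NumPy sketch, with a hypothetical one-bit function `f`): the oracle unitary U_f maps |x, y⟩ to |x, y⊕f(x)⟩, and applying it once to a superposed input yields a state containing both function values, though a measurement reveals only one of them.

```python
import numpy as np

def oracle(f):
    """Unitary U_f with U_f|x, y> = |x, y XOR f(x)>; basis index is 2x + y."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

f = lambda x: 1 - x                            # hypothetical example function
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
ket0 = np.array([1.0, 0.0])

# Superpose the input register, then query the oracle once:
# the result is (1/sqrt2)(|0, f(0)> + |1, f(1)>).
state = oracle(f) @ np.kron(H @ ket0, ket0)
```

Both values f(0) and f(1) are encoded after a single application of U_f, but extracting more than one classical bit from the state requires the additional algorithmic ingredients mentioned above.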
There are multiple models of computation for quantum computing, distinguished by the basic elements into which the computation is decomposed.

A quantum gate array decomposes computation into a sequence of few-qubit quantum gates.
Any quantum computation (which is, in the above formalism, any unitary matrix of size 2^n × 2^n acting on n qubits) can be represented as a network of quantum logic gates from a fairly small family of gates. A choice of gate family that enables this construction is known as a universal gate set, since a computer that can run such circuits is a universal quantum computer. One common such set includes all single-qubit gates as well as the CNOT gate from above. This means any quantum computation can be performed by executing a sequence of single-qubit gates together with CNOT gates. Though this gate set is infinite, it can be replaced with a finite gate set by appealing to the Solovay–Kitaev theorem. Implementations of Boolean functions using few-qubit quantum gates have also been demonstrated.[39]
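As an illustration of composing a computation from this universal set, the following NumPy sketch builds two derived gates using standard circuit identities: conjugating a CNOT with Hadamards on both qubits reverses its control and target, and three alternating CNOTs make a SWAP gate.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Hadamards on both qubits before and after a CNOT exchange its
# control and target roles.
HH = np.kron(H, H)
CNOT_reversed = HH @ CNOT @ HH

# Composing three alternating CNOTs yields the two-qubit SWAP gate.
SWAP = CNOT @ CNOT_reversed @ CNOT
```

Both constructions use nothing beyond single-qubit gates and CNOT, in line with the universality claim above.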
A measurement-based quantum computer decomposes computation into a sequence of Bell state measurements and single-qubit quantum gates applied to a highly entangled initial state (a cluster state), using a technique called quantum gate teleportation.
An adiabatic quantum computer, based on quantum annealing, decomposes computation into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian whose ground state contains the solution.[40]
Neuromorphic quantum computing (abbreviated "n.quantum computing") is an unconventional computing process that uses neuromorphic computing to perform quantum operations. It has been suggested that quantum algorithms, which are algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing. Both traditional quantum computing and neuromorphic quantum computing are physics-based unconventional approaches to computation and do not follow the von Neumann architecture. Both construct a system (a circuit) that represents the physical problem at hand and then leverage the respective physical properties of the system to seek the "minimum". Neuromorphic quantum computing and quantum computing share similar physical properties during computation.
A topological quantum computer decomposes computation into the braiding of anyons in a 2D lattice.[41]
A quantum Turing machine is the quantum analog of a Turing machine.[8] All of these models of computation—quantum circuits,[42] one-way quantum computation,[43] adiabatic quantum computation,[44] and topological quantum computation[45]—have been shown to be equivalent to the quantum Turing machine; given a perfect implementation of one such quantum computer, it can simulate all the others with no more than polynomial overhead. This equivalence need not hold for practical quantum computers, since the overhead of simulation may be too large to be practical.
The threshold theorem shows how increasing the number of qubits can mitigate errors,[46] yet fully fault-tolerant quantum computing remains "a rather distant dream".[47] According to some researchers, noisy intermediate-scale quantum (NISQ) machines may have specialized uses in the near future, but noise in quantum gates limits their reliability.[47] Scientists at Harvard University successfully created "quantum circuits" that correct errors more efficiently than alternative methods, which may potentially remove a major obstacle to practical quantum computers.[48] The Harvard research team was supported by MIT, QuEra Computing, Caltech, and Princeton University and funded by DARPA's Optimization with Noisy Intermediate-Scale Quantum devices (ONISQ) program.[49][50]
Digital cryptography enables communications to remain private by preventing unauthorized parties from accessing them. Conventional encryption, the obscuring of a message with a key through an algorithm, relies on that algorithm being difficult to reverse. Encryption is also the basis for digital signatures and authentication mechanisms. Quantum computers may be powerful enough to make such difficult reversals feasible, allowing messages protected by conventional encryption to be read.[51]
Quantum cryptography replaces conventional algorithms with computations based on quantum computing. In principle, quantum encryption would be impossible to decode even with a quantum computer. This advantage comes at a significant cost in terms of elaborate infrastructure, while effectively preventing legitimate decoding of messages by governmental security officials.[51]
Ongoing research in quantum and post-quantum cryptography has led to new algorithms for quantum key distribution, initial work on quantum random number generation, and some early technology demonstrations.[52]: 1012–1036
Quantum cryptography enables new ways to transmit data securely; for example, quantum key distribution uses entangled quantum states to establish secure cryptographic keys.[52]: 1017 When a sender and receiver exchange quantum states, they can guarantee that an adversary does not intercept the message, as any unauthorized eavesdropper would disturb the delicate quantum system and introduce a detectable change.[53] With appropriate cryptographic protocols, the sender and receiver can thus establish shared private information resistant to eavesdropping.[13][54]
Modern fiber-optic cables can transmit quantum information over relatively short distances. Ongoing experimental research aims to develop more reliable hardware (such as quantum repeaters), hoping to scale this technology to long-distance quantum networks with end-to-end entanglement. Theoretically, this could enable novel technological applications, such as distributed quantum computing and enhanced quantum sensing.[55][56]
Progress in finding quantum algorithms typically focuses on the quantum circuit model, though exceptions like the quantum adiabatic algorithm exist. Quantum algorithms can be roughly categorized by the type of speedup achieved over corresponding classical algorithms.[57]
Quantum algorithms that offer more than a polynomial speedup over the best-known classical algorithm include Shor's algorithm for factoring and the related quantum algorithms for computing discrete logarithms, solving Pell's equation, and, more generally, solving the hidden subgroup problem for abelian finite groups.[57] These algorithms depend on the primitive of the quantum Fourier transform. No mathematical proof has been found showing that an equally fast classical algorithm cannot be discovered, but evidence suggests that this is unlikely.[58] Certain oracle problems like Simon's problem and the Bernstein–Vazirani problem do give provable speedups, though this is in the quantum query model, a restricted model where lower bounds are much easier to prove and do not necessarily translate to speedups for practical problems.
Other problems, including the simulation of quantum physical processes from chemistry and solid-state physics, the approximation of certain Jones polynomials, and the quantum algorithm for linear systems of equations, have quantum algorithms appearing to give super-polynomial speedups and are BQP-complete. Because these problems are BQP-complete, an equally fast classical algorithm for any of them would imply that no quantum algorithm provides a super-polynomial speedup, which is believed to be unlikely.[59]
In addition to these problems, quantum algorithms are being explored for applications in cryptography, optimization, and machine learning, although most of these remain at the research stage and require significant advances in error correction and hardware scalability before practical implementation.[60]
Some quantum algorithms, such as Grover's algorithm and amplitude amplification, give polynomial speedups over corresponding classical algorithms.[57] Though these algorithms give comparably modest quadratic speedups, they are widely applicable and thus give speedups for a wide range of problems.[21] These speedups are, however, over the theoretical worst case of classical algorithms, and concrete real-world speedups over algorithms used in practice have not been demonstrated.
Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, quantum simulation may be an important application of quantum computing.[61] Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions, such as the reactions inside a collider.[62] In June 2023, IBM computer scientists reported that a quantum computer produced better results for a physics problem than a conventional supercomputer.[63][64]
About 2% of the annual global energy output is used for nitrogen fixation to produce ammonia for the Haber process in the agricultural fertiliser industry (even though naturally occurring organisms also produce ammonia). Quantum simulations might be used to understand this process and increase the energy efficiency of production.[65] It is expected that an early use of quantum computing will be modeling that improves the efficiency of the Haber–Bosch process[66] by the mid-2020s,[67] although some have predicted it will take longer.[68]
A notable application of quantum computing is in attacking cryptographic systems that are currently in use. Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible on a classical computer for large integers that are the product of a few prime numbers (e.g., the product of two 300-digit primes).[69] By contrast, a quantum computer could solve this problem exponentially faster using Shor's algorithm to factor the integer.[70] This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. Notably, the RSA, Diffie–Hellman, and elliptic curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
Identifying cryptographic systems that may be secure against quantum algorithms is an actively researched topic under the field of post-quantum cryptography.[71][72] Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, such as the McEliece cryptosystem, which relies on a hard problem in coding theory.[71][73] Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice-based cryptosystems, is a well-studied open problem.[74] It has been shown that applying Grover's algorithm to break a symmetric (secret-key) algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case,[75] meaning that symmetric key lengths are effectively halved: AES-256 would have comparable security against an attack using Grover's algorithm to that which AES-128 has against classical brute-force search (see Key size).
The most well-known example of a problem that allows for a polynomial quantum speedup is unstructured search, which involves finding a marked item out of a list of N items in a database. This can be solved by Grover's algorithm using O(√N) queries to the database, quadratically fewer than the Ω(N) queries required for classical algorithms. In this case, the advantage is not only provable but also optimal: it has been shown that Grover's algorithm gives the maximal possible probability of finding the desired element for any number of oracle lookups. Many examples of provable quantum speedups for query problems are based on Grover's algorithm, including Brassard, Høyer, and Tapp's algorithm for finding collisions in two-to-one functions,[76] and Farhi, Goldstone, and Gutmann's algorithm for evaluating NAND trees.[77]
Problems that can be efficiently addressed with Grover's algorithm have the following properties:[78][79] (1) there is no searchable structure in the collection of possible answers, (2) the number of possible answers to check is the same as the number of inputs to the algorithm, and (3) there exists a Boolean function that evaluates each input and determines whether it is the correct answer.
For problems with all these properties, the running time of Grover's algorithm on a quantum computer scales as the square root of the number of inputs (or elements in the database), as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied[80] is the Boolean satisfiability problem, where the database through which the algorithm iterates is that of all possible answers. An example and possible application of this is a password cracker that attempts to guess a password. Breaking symmetric ciphers with this algorithm is of interest to government agencies.[81]
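The square-root scaling can be demonstrated with a direct statevector simulation. This is our own minimal NumPy sketch of Grover's algorithm for a single marked item, not an implementation on quantum hardware:

```python
import numpy as np

def grover_probabilities(n_qubits, marked):
    """Simulate Grover search for one marked item among N = 2**n_qubits."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))        # uniform superposition
    iterations = int(np.pi / 4 * np.sqrt(N))  # ~ (pi/4)*sqrt(N) oracle queries
    for _ in range(iterations):
        state[marked] *= -1                   # oracle: flip the marked sign
        state = 2 * state.mean() - state      # diffusion: invert about the mean
    return np.abs(state) ** 2                 # measurement probabilities

# 8 qubits: the marked item among N = 256 is found with ~12 oracle
# queries instead of the ~256 checks a classical exhaustive search needs.
probs = grover_probabilities(8, marked=3)
```

After roughly (π/4)√N iterations of the oracle-plus-diffusion step, nearly all probability concentrates on the marked index, so a single measurement returns it with high probability.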
Quantum annealing relies on the adiabatic theorem to undertake calculations. A system is placed in the ground state of a simple Hamiltonian, which slowly evolves to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough, the system will stay in its ground state at all times through the process. Quantum annealing can solve Ising models and the (computationally equivalent) QUBO problem, which in turn can be used to encode a wide range of combinatorial optimization problems.[82] Adiabatic optimization may be helpful for solving computational biology problems.[83]
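To make the QUBO encoding concrete, here is a classical brute-force sketch (our own toy example, not annealing itself): Max-Cut on a triangle graph written as minimizing x^T Q x over binary vectors, the same objective whose ground state a quantum annealer would seek.

```python
import itertools
import numpy as np

# Upper-triangular QUBO matrix for Max-Cut on a triangle (edges 0-1,
# 1-2, 0-2): each edge contributes -x_i - x_j + 2*x_i*x_j, which equals
# -1 exactly when the edge is cut (x_i != x_j) and 0 otherwise.
Q = np.array([[-2,  2,  2],
              [ 0, -2,  2],
              [ 0,  0, -2]])

def energy(x):
    x = np.array(x)
    return x @ Q @ x

# Exhaustive search over the 2^3 binary assignments stands in for the
# annealer's physical minimization of the same energy function.
best = min(itertools.product([0, 1], repeat=3), key=energy)
```

Any bipartition cutting two of the triangle's edges attains the minimum energy of -2, matching the maximum cut of 2.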
Since quantum computers can produce outputs that classical computers cannot produce efficiently, and since quantum computation is fundamentally linear algebraic, some express hope in developing quantum algorithms that can speed up machine learning tasks.[47][84]
For example, the HHL algorithm, named after its discoverers Harrow, Hassidim, and Lloyd, is believed to provide speedup over classical counterparts.[47][85] Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks.[86][87][88]
Deep generative chemistry models have emerged as powerful tools to expedite drug discovery. However, the immense size and complexity of the structural space of all possible drug-like molecules pose significant obstacles, which could be overcome in the future by quantum computers. Quantum computers are naturally well suited to solving complex quantum many-body problems[22] and thus may be instrumental in applications involving quantum chemistry. Therefore, one can expect that quantum-enhanced generative models,[89] including quantum GANs,[90] may eventually be developed into ultimate generative chemistry algorithms.

As of 2023, classical computers outperform quantum computers for all real-world applications. While current quantum computers may speed up solutions to particular mathematical problems, they give no computational advantage for practical tasks. Scientists and engineers are exploring multiple technologies for quantum computing hardware and hope to develop scalable quantum architectures, but serious obstacles remain.[91][92]
There are a number of technical challenges in building a large-scale quantum computer.[93] Physicist David DiVincenzo has listed these requirements for a practical quantum computer:[94] a scalable physical system with well-characterized qubits; the ability to initialize the qubits to a simple fiducial state; decoherence times much longer than the gate operation time; a universal set of quantum gates; and the ability to reliably measure individual qubits.
Sourcing parts for quantum computers is also very difficult. Superconducting quantum computers, like those constructed by Google and IBM, need helium-3, a nuclear research byproduct, and special superconducting cables made only by the Japanese company Coax Co.[95]
The control of multi-qubit systems requires the generation and coordination of a large number of electrical signals with tight and deterministic timing resolution. This has led to the development of quantum controllers that enable interfacing with the qubits. Scaling these systems to support a growing number of qubits is an additional challenge.[96]
One of the greatest challenges involved in constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, the lattice vibrations, and the background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (in NMR and MRI technology also called the dephasing time), typically range between nanoseconds and seconds at low temperatures.[97] Currently, some quantum computers require their qubits to be cooled to 20 millikelvin (usually using a dilution refrigerator[98]) in order to prevent significant decoherence.[99] A 2020 study argues that ionizing radiation such as cosmic rays can nevertheless cause certain systems to decohere within milliseconds.[100]
As a result, time-consuming tasks may render some quantum algorithms inoperable, as attempting to maintain the state of qubits for a long enough duration will eventually corrupt the superpositions.[101]
These issues are more difficult for optical approaches, as the timescales are orders of magnitude shorter, and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time; hence, any operation must be completed much more quickly than the decoherence time.
As described by the threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often-cited figure for the required error rate in each gate for fault-tolerant computation is 10^-3, assuming the noise is depolarizing.
Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^2, where L is the number of binary digits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^4 bits without error correction.[102] With error correction, the figure would rise to about 10^7 bits. Computation time is about L^2, or about 10^7 steps; at 1 MHz, this takes about 10 seconds. However, the encoding and error-correction overheads increase the size of a real fault-tolerant quantum computer by several orders of magnitude. Careful estimates[103][104] show that at least 3 million physical qubits would factor a 2,048-bit integer in 5 months on a fully error-corrected trapped-ion quantum computer. In terms of the number of physical qubits, to date this remains the lowest estimate[105] for practically useful integer factorization problems of 1,024 bits or larger.
One approach to overcoming errors combines low-density parity-check codes with cat qubits that have intrinsic bit-flip error suppression. Implementing 100 logical qubits with 768 cat qubits could reduce the error rate to one part in 10⁸ per cycle per bit.[106]
Another approach to the stability–decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads, relying on braid theory to form stable logic gates.[107][108] Non-Abelian anyons can, in effect, remember how they have been manipulated, making them potentially useful in quantum computing.[109] As of 2025, Microsoft and other organizations are investing in quasi-particle research.[109]
Physicist John Preskill coined the term quantum supremacy to describe the engineering feat of demonstrating that a programmable quantum device can solve a problem beyond the capabilities of state-of-the-art classical computers.[110][47][111] The problem need not be useful, so some view the quantum supremacy test only as a potential future benchmark.[112]
In October 2019, Google AI Quantum, with the help of NASA, became the first to claim to have achieved quantum supremacy by performing calculations on the Sycamore quantum computer more than 3,000,000 times faster than they could be done on Summit, generally considered the world's fastest computer.[28][113][114] This claim has been subsequently challenged: IBM has stated that Summit can perform samples much faster than claimed,[115][116] and researchers have since developed better algorithms for the sampling problem used to claim quantum supremacy, giving substantial reductions to the gap between Sycamore and classical supercomputers[117][118][119] and even beating it.[120][121][122]
In December 2020, a group at USTC implemented a type of boson sampling on 76 photons with a photonic quantum computer, Jiuzhang, to demonstrate quantum supremacy.[123][124][125] The authors claim that a classical contemporary supercomputer would require a computational time of 600 million years to generate the number of samples their quantum processor can generate in 20 seconds.[126]
Claims of quantum supremacy have generated hype around quantum computing,[127] but they are based on contrived benchmark tasks that do not directly imply useful real-world applications.[91][128]
In January 2024, a study published in Physical Review Letters provided direct verification of quantum supremacy experiments by computing exact amplitudes for experimentally generated bitstrings on a new-generation Sunway supercomputer, using a multiple-amplitude tensor network contraction algorithm. The result demonstrates a significant advance in classical simulation capability and illustrates the difficulty of validating quantum supremacy claims.[129]
Despite high hopes for quantum computing, significant progress in hardware, and optimism about future applications, a 2023 Nature spotlight article summarized current quantum computers as being "For now, [good for] absolutely nothing".[91] The article elaborated that quantum computers have yet to prove more useful or efficient than conventional computers in any case, though it also argued that, in the long term, such computers are likely to be useful. A 2023 Communications of the ACM article[92] found that current quantum computing algorithms are "insufficient for practical quantum advantage without significant improvements across the software/hardware stack". It argues that the most promising candidates for achieving speedup with quantum computers are "small-data problems", for example, in chemistry and materials science. However, the article also concludes that a large range of the potential applications it considered, such as machine learning, "will not achieve quantum advantage with current quantum algorithms in the foreseeable future", and it identified I/O constraints that make speedup unlikely for "big data problems, unstructured linear systems, and database search based on Grover's algorithm".
This state of affairs can be traced to several current and long-term considerations.
In particular, building computers with large numbers of qubits may be futile if those qubits are not connected well enough and cannot maintain a sufficiently high degree of entanglement for a long time. When trying to outperform conventional computers, quantum computing researchers often look for new tasks that can be solved on quantum computers, but this leaves the possibility that efficient non-quantum techniques will be developed in response, as seen for quantum supremacy demonstrations. Therefore, it is desirable to prove lower bounds on the complexity of best possible non-quantum algorithms (which may be unknown) and show that some quantum algorithms asymptotically improve upon those bounds.
Bill Unruh doubted the practicality of quantum computers in a paper published in 1994.[132] Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle.[133] Skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved.[134][135][136] Physicist Mikhail Dyakonov has expressed skepticism of quantum computing as follows:

A practical quantum computer must use a physical system as a programmable quantum register.[139] Researchers are exploring several technologies as candidates for reliable qubit implementations.[140] Superconductors and trapped ions are some of the most developed proposals, but experimentalists are considering other hardware possibilities as well.[141] For example, topological quantum computer approaches are being explored for more fault-tolerant computing systems.[142]
The first quantum logic gates were implemented with trapped ions, and prototype general-purpose machines with up to 20 qubits have been realized. However, the technology behind these devices combines complex vacuum equipment, lasers, and microwave and radio frequency equipment, making full-scale processors difficult to integrate with standard computing equipment. Moreover, the trapped ion system itself has engineering challenges to overcome.[143]
The largest commercial systems are based on superconductor devices and have scaled to 2,000 qubits. However, the error rates for larger machines have been on the order of 5%. Technologically, these devices are all cryogenic, and scaling to large numbers of qubits requires wafer-scale integration, a serious engineering challenge by itself.[144]
From the point of view of business management, the potential applications of quantum computing fall into four major categories: cybersecurity; data analytics and artificial intelligence; optimization and simulation; and data management and searching.[145]
Other applications include healthcare (e.g., drug discovery), financial modeling, and natural language processing.[146]
Any computational problem solvable by a classical computer is also solvable by a quantum computer.[147] Intuitively, this is because it is believed that all physical phenomena, including the operation of classical computers, can be described using quantum mechanics, which underlies the operation of quantum computers.
Conversely, any problem solvable by a quantum computer is also solvable by a classical computer. It is possible to simulate both quantum and classical computers manually with just some paper and a pen, if given enough time. More formally, any quantum computer can be simulated by a Turing machine. In other words, quantum computers provide no additional power over classical computers in terms of computability. This means that quantum computers cannot solve undecidable problems like the halting problem, and the existence of quantum computers does not disprove the Church–Turing thesis.[148]
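This classical simulability can be made concrete with a few lines of Python that track the two complex amplitudes of a single qubit, a minimal sketch of the statevector simulation technique (real simulators track 2ⁿ amplitudes for n qubits, which is why the cost grows exponentially):

```python
import math

# One qubit is a pair of complex amplitudes [alpha, beta]; |0> = [1, 0].
state = [1.0 + 0j, 0.0 + 0j]

def hadamard(s):
    # Apply the Hadamard gate, which maps |0> to an equal superposition.
    a, b = s
    r = 1 / math.sqrt(2)
    return [r * (a + b), r * (a - b)]

state = hadamard(state)
# Born rule: the probability of each measurement outcome is |amplitude|**2.
probs = [abs(a) ** 2 for a in state]
print(probs)  # [0.5, 0.5] up to floating-point rounding
```

A classical computer runs this in microseconds for one qubit; the same bookkeeping for hundreds of entangled qubits would need more amplitudes than there are atoms in the observable universe, which is the (believed) exponential simulation cost mentioned above.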
While quantum computers cannot solve any problems that classical computers cannot already solve, it is suspected that they can solve certain problems faster than classical computers. For instance, it is known that quantum computers can efficiently factor integers, while this is not believed to be the case for classical computers.
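As an illustration, the classical post-processing in Shor's factoring algorithm can be sketched in Python, with the order-finding step, the only part that requires a quantum computer, replaced here by brute force (which is what makes this sketch slow for large numbers):

```python
from math import gcd

def order(a, N):
    # Brute-force the multiplicative order of a mod N: the smallest r with
    # a**r = 1 (mod N). This is the step a quantum computer performs
    # efficiently via phase estimation; classically it takes exponential time.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N, a):
    # Given the order r of a mod N, recover a factor of N via gcd.
    r = order(a, N)
    if r % 2 == 1:
        return None  # need an even order; in practice, retry with another a
    y = pow(a, r // 2, N)
    f = gcd(y - 1, N)
    return f if 1 < f < N else gcd(y + 1, N)

print(shor_classical_part(15, 7))  # order(7, 15) = 4, yielding the factor 3
```

The entire quantum speedup lives inside `order`: everything else is cheap classical arithmetic.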
The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP, for "bounded error, quantum, polynomial time". More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with an error probability of at most 1/3. As a class of probabilistic problems, BQP is the quantum counterpart to BPP ("bounded error, probabilistic, polynomial time"), the class of problems that can be solved by polynomial-time probabilistic Turing machines with bounded error.[149] It is known that BPP ⊆ BQP and is widely suspected that the inclusion is strict (BPP ⊊ BQP), which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity.[150]
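The particular constant 1/3 in the definition is not essential: any error probability bounded below 1/2 can be driven arbitrarily low by repeating the computation and taking a majority vote. A short simulation, using a hypothetical stand-in for a bounded-error machine, illustrates the amplification:

```python
import random

def noisy_decider(x, p_err=1/3):
    # Stand-in for a bounded-error (BPP/BQP-style) machine: it returns the
    # correct answer, hard-coded here as True, with probability 1 - p_err.
    return random.random() >= p_err

def amplified(x, runs=501):
    # Run the machine many times and take a majority vote; the residual
    # error probability falls exponentially in the number of runs.
    votes = sum(noisy_decider(x) for _ in range(runs))
    return votes > runs // 2

random.seed(0)  # fixed seed so the demonstration is reproducible
print(amplified(None))  # the majority vote recovers the correct answer, True
```

This is why the definitions of BPP and BQP are robust: replacing 1/3 by any constant in (0, 1/2) yields the same class.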

The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that P ⊆ BQP ⊆ PSPACE; that is, all problems that can be efficiently solved by a deterministic classical computer can also be efficiently solved by a quantum computer, and all problems that can be efficiently solved by a quantum computer can also be solved by a deterministic classical computer with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning that there exist problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems that are believed not to be in P are also in BQP (integer factorization and the discrete logarithm problem are both in NP, for example). It is suspected that NP ⊄ BQP; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems (if an NP-complete problem were in BQP, then it would follow from NP-hardness that all problems in NP are in BQP).[151]
Quantum physics has replaced classical physics as the correct fundamental description of our physical universe. It is used routinely to describe most phenomena that occur at short distances. [...] The era of quantum physics began in earnest in 1925 with the discoveries of Erwin Schrödinger and Werner Heisenberg. The seeds for these discoveries were planted by Max Planck, Albert Einstein, Niels Bohr, Louis de Broglie, and others.