Combining physics, mathematics and computer science, quantum computing and its sister discipline of quantum information have developed in the past few decades from visionary ideas to two of the most fascinating areas of quantum theory. General interest and excitement in quantum computing was initially triggered by P. W. Shor (1994), who showed how a quantum algorithm can apparently factor large numbers into primes far more efficiently than any known classical algorithm. Shor’s algorithm was soon followed by several other algorithms for solving combinatorial and algebraic problems, and in the years since, the theoretical study of quantum computational systems has achieved tremendous progress. Although no proof yet exists for the general superiority of quantum computers over classical computers, the implementation of Shor’s algorithm on a large-scale quantum computer would render ineffective currently widely used cryptosystems that rely on the premise that no efficient algorithm for factoring exists. Consequently, experimentalists around the world are engaged in attempts to tackle the technological difficulties that prevent the realisation of a large-scale quantum computer.
The philosophical interest in quantum computing is manifold. From a social-historical perspective, quantum computing is a domain where experimentalists find themselves ahead of their fellow theorists. Indeed, quantum mysteries such as entanglement and nonlocality were historically considered to be of “merely philosophical” interest until physicists discovered that these mysteries might be harnessed to devise new efficient algorithms. But while the technology for harnessing the power of hundreds of qubits (the basic unit of information in the quantum computer) is now within reach, only a handful of quantum algorithms exist, and the question of whether these can truly outperform any conceivable classical alternative is still, for theoreticians, open.
From a foundational point of view, reflecting on features of the design and implementation of efficient quantum algorithms may help us to better understand just what it is that makes quantum systems quantum, and it may illuminate fundamental concepts such as measurement and causality. Further, the idea that abstract mathematical concepts such as computability and complexity may not only be translated into physics, but also re-written by physics bears directly on the autonomous character of computer science and the status of its theoretical entities—the so-called “computational kinds”. As such it is also relevant to the long-standing philosophical debate on the relationship between mathematics and the physical world.
A mathematical model for a universal computer was defined long before the invention of quantum computers and is called the Turing machine. It consists of (a) an unbounded tape divided (in one dimension) into cells, (b) a “read-write head” capable of reading or writing one of a finite number of symbols from or to a cell at a specific location, and (c) an instruction table (instantiating a transition function) which, given the machine’s initial “state of mind” (one of a finite number of such states that can be visited any number of times in the course of a computation) and the input read from the tape in that state, determines (i) the symbol to be written to the tape at the current head position, (ii) the subsequent displacement (to the left or to the right) of the head, and (iii) the machine’s final state. Turing (1936) showed that since one can encode the instruction table of any given Turing machine \(T\) as a binary number \(\#(T)\), there exists a universal Turing machine \(U\) which, upon reading a given \(\#(T)\) from its tape, can simulate the operation of \(T\) on any input.
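The three components just described can be sketched in a few lines of code. The following Python fragment is an illustrative reconstruction, not part of the formal definition; the bit-flipping machine and its instruction table are invented purely for the example.

```python
# A minimal sketch of a deterministic Turing machine: an unbounded tape,
# a read-write head, and an instruction table mapping (state, symbol) to
# (symbol to write, head displacement, next state). The example machine
# below (which flips every bit and halts on the first blank) is invented
# for illustration only.

def run_turing_machine(transitions, tape, state="start", max_steps=10_000):
    cells = dict(enumerate(tape))    # a sparse dict models the unbounded tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")              # '_' marks a blank cell
        write, move, state = transitions[(state, symbol)]
        cells[head] = write                        # (i) write a symbol
        head += move                               # (ii) move left or right
                                                   # (iii) adopt the new state
    return "".join(cells[i] for i in sorted(cells))

flip_bits = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}

print(run_turing_machine(flip_bits, "1011"))  # prints 0100_
```

Encoding the `transitions` table itself as a binary number, along the lines sketched in the text, is what allows a single universal machine to simulate any such table.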
In mathematics, an effective method, informally speaking, is a method consisting of a finite number of precise finite-length instructions, guaranteed to produce some desired result in a finite number of steps if followed exactly by a human being using nothing other than paper and pencil (Papayannopoulos 2023). That the Turing machine model formally captures the concept of effective calculability in its entirety is the essence of the Church-Turing thesis. Since the thesis involves both a precise mathematical notion (i.e., that of a Turing machine) and an informal and intuitive notion (i.e., that of an effective method), however, strictly speaking it cannot be proved or disproved, but is arguably best thought of as an explication in Carnap’s sense (Carnap 1962, ch. I).
Simple cardinality considerations show, in any case, that not all functions are Turing-computable (the set of all Turing machines is countable, while the set of all functions from the natural numbers to the natural numbers is not), and the discovery of this fact came as a complete surprise in the 1930s (Davis 1958). But as interesting and important as the question of whether a given function is computable by Turing machine—the purview of computability theory (Boolos, Burgess, & Jeffrey 2007)—is, it is not the only question that interests computer scientists. Beginning especially in the 1960s (Cobham 1965; Edmonds 1965; Hartmanis & Stearns 1965), the question of the cost of computing a function also came to be of great importance. This cost, also known as computational complexity, is measured naturally in terms of the physical resources (in particular time and space, given in terms of computational steps and memory locations, respectively) required in order to solve the computational problem at hand. Computer scientists classify computational problems according to the way their cost function behaves as a function of their input size, \(n\) (the number of bits required to store the input). Tractable, or efficiently solvable, problems are those that can be solved in “polynomial time”, i.e., in a number of time steps that is bounded by a polynomial function of the size of the input, while intractable problems are those which cannot, i.e., that require “exponential” time.
For a deterministic Turing machine (DTM) like the ones we have been discussing so far, its behaviour at any given time is wholly determined by its state plus whatever its input happens to be. In other words such machines have a unique transition function. We can generalise the Turing model, however, by allowing a machine to instantiate more than one transition function simultaneously. A nondeterministic Turing machine (NTM), upon being presented with a given input in a given state, is allowed to ‘choose’ which of its transition functions to follow, and we say that it solves a given problem whenever, given some input, there exists at least one path through its state space leading to a solution. Exactly how an NTM “chooses” whether to follow one transition function rather than another at a given moment in time is left undefined (Turing originally conceived these choices as those of an external operator). In particular, we do not assume that any probabilities are attached to these choices. In a probabilistic Turing machine (PTM), by contrast, we characterise the computer’s choices by associating a particular probability with each of its possible transitions.
Probabilistic and deterministic Turing machines (DTMs) have different success criteria. A successful deterministic algorithm for a given problem is guaranteed to yield the correct answer given its input. Of a successful probabilistic algorithm, on the other hand, we only demand that it yield a correct answer with “high” probability (minimally, we demand that it be strictly greater than 1/2). It was generally believed, until relatively recently, that for some problems (see, e.g. Rabin 1976) probabilistic algorithms are dramatically more efficient than any deterministic alternative; in other words, that the set or “class” of problems efficiently solvable by PTM is larger than the class of problems efficiently solvable by DTM. It is now generally believed that the PTM model does not, in fact, offer a computational advantage in this sense over the DTM model (Arora & Barak 2009, ch. 20). Probabilistic (Turing) computation is nevertheless interesting to consider, because abstractly a quantum computer is just a variation on the PTM that does appear to offer computational advantages over deterministic computation, although as already mentioned this conjecture still awaits a proof.
The class \(\mathbf{P}\) (for Polynomial) is the class containing all the computational decision problems that can be solved by a DTM in polynomial time. The class NP (for Non-deterministic Polynomial) is the class containing all the computational decision problems that can be solved by an NTM in polynomial time.[1] The most famous problems in NP are called “NP-complete”, where “complete” designates the fact that these problems stand or fall together: Either they are all tractable, or none of them is! If we knew how to solve an NP-complete problem efficiently (i.e., with polynomial cost) we could use it to efficiently solve any other problem in NP (Cook 1971). Today we know of hundreds of examples of NP-complete problems (Garey & Johnson 1979), all of which are reducible one to another with no more than a polynomial slowdown. Since the best known algorithm for any of these problems is exponential, the widely believed conjecture is that there is no polynomial algorithm that can solve them. Clearly \(\mathbf{P} \subseteq \mathbf{NP}\). Proving or disproving the conjecture that \(\mathbf{P} \ne \mathbf{NP}\), however, remains perhaps one of the most important open questions in computer science. The class BPP (bounded-error probabilistic polynomial) is the class of problems that can be solved in polynomial time with “high” probability (see above) by a PTM. Finally, the class BQP is the class of problems that can be solved in polynomial time with “high” probability by a quantum computer. From the perspective of computer science, answering the question of whether quantum computers are more powerful than classical computers amounts to determining whether BPP \(\subsetneq\) BQP is true (see Cuffaro 2018b).
Although the original Church-Turing thesis involves the abstract mathematical notion of computability, physicists as well as computer scientists often interpret it as saying something about the scope and limitations of physical computing machines. Wolfram (1985) claims that any physical system can be simulated (to any degree of approximation) by a universal Turing machine, and that complexity bounds on Turing machine simulations have physical significance. For example, if the computation of the minimum energy of some system of \(n\) particles requires at least an exponentially increasing number of steps in \(n\), then the actual relaxation of this system to its minimum energy state will also take exponential time. Aharonov (1999) strengthens this thesis (in the context of showing its putative incompatibility with quantum mechanics) when she says that a PTM can simulate any reasonable physical device at polynomial cost.
In order for the “physical Church-Turing thesis” to make sense one has to relate physical space and time parameters to their computational counterparts: memory capacity and number of computation steps, respectively. There are various ways to do that, leading to different formulations of the thesis (see Copeland 2018; Gandy 1980; Pitowsky 1990; Sieg & Byrnes 1999). For example, one can encode the set of instructions of a universal Turing machine and the state of its infinite tape in the binary expansion of the position coordinates of a single particle. Consequently, one can physically ‘realise’ a universal Turing machine as a billiard ball with hyperbolic mirrors (Moore 1990; Pitowsky 1996).
It should be stressed that strictly speaking there is no relation between the original Church-Turing thesis and its physical version (Pitowsky & Shagrir 2003). While the former concerns the concept of computation that is relevant to logic (since it is strongly tied to the notion of proof, which requires validation), it does not analytically entail that all computations should be subject to validation. Indeed, there is a long historical tradition of analog computation (Dewdney 1984; Maley 2023; Papayannopoulos 2020), and the output of such computations is validated either by repetitive “runs” or by validating the physical theory that presumably governs the behaviour of the analog computer.
Do physical processes exist which contradict the physical Church-Turing thesis? Apart from analog computation, there exist at least two main kinds of example purporting to show that the notion of recursion, or Turing-computability, is not a natural physical property (Hogarth 1994; Pitowsky 1990; Pour-El & Richards 1981). Although the physical systems involved (a specific initial condition for the wave equation in three dimensions and an exotic solution to Einstein’s field equations, respectively) are somewhat contrived, a school of “hypercomputation” has nevertheless emerged that aspires to extend the limited examples of physical “hypercomputers” and in so doing to physically “compute” the non-Turing-computable (Andréka, Madarász, Németi, Németi, & Székely 2018; Copeland 2002, 2011; Davis 2003). Quantum hypercomputation is less frequently discussed in the literature (see, e.g., Adamyan, Calude, & Pavlov 2004), but arguably the most concrete attempt to harness quantum theory to compute the non-computable is the suggestion to use the quantum adiabatic algorithm (see below) to solve Hilbert’s Tenth Problem (Kieu 2002, 2004)—a Turing-undecidable problem equivalent to the halting problem—though this alleged quantum adiabatic hypercomputer has been criticised as unphysical (see Hagar & Korolev 2007; Hodges 2005 [Other Internet Resources]).
Setting aside hypercomputers, even if we restrict ourselves only to Turing-computable functions, one can still find many proposals in the literature that purport to display “short-cuts” in computational resources. Consider, e.g., the DNA model of computation that was claimed (Adleman 1994; Lipton 1995) to solve NP-complete problems in polynomial time. A closer inspection shows that the cost of the computation in this model is still exponential, since the number of molecules in the physical system grows exponentially with the size of the problem. Or take an allegedly instantaneous solution to another NP-complete problem using a construction of rods and balls (Vergis, Steiglitz, & Dickinson 1986) that unfortunately ignores the accumulating time-delays in the rigid rods that result in an exponential overall slowdown. It appears that these and other similar models cannot serve as counter-examples to the physical Church-Turing thesis (as far as complexity is concerned) since they all require some exponential physical resource. Note, however, that all these models are described using classical physics, hence the unavoidable question: Can the shift to quantum physics allow us to find computational short-cuts? The quest for the quantum computer began with the possibility of giving a positive answer to this question.
As early as 1969 Steven Wiesner suggested quantum information processing as a possible way to better accomplish cryptologic tasks. But the first four published papers on quantum information (Wiesner published his only in 1983) belong to Alexander Holevo (1973), R. P. Poplavskii (1975), Roman Ingarden (1976), and Yuri Manin (1980). Better known are contributions made in the early 1980s by Charles H. Bennett of the IBM Thomas J. Watson Research Center, Paul A. Benioff of Argonne National Laboratory in Illinois, David Deutsch of the University of Oxford, and Richard P. Feynman of the California Institute of Technology. The idea emerged when scientists were investigating the fundamental physical limits of computation: If technology continued to abide by “Moore’s Law” (the observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every 18 months since the integrated circuit was invented), then the continually shrinking size of circuitry packed onto silicon chips would eventually reach a point where individual elements would be no larger than a few atoms. But since the physical laws that govern the behaviour and properties of the putative circuit at the atomic scale are inherently quantum-mechanical in nature, not classical, the natural question arose whether a new kind of computer could be devised based on the principles of quantum physics.
Inspired by Ed Fredkin’s ideas on reversible computation (see Hagar 2016), Feynman was among the first to attempt to provide an answer to this question by producing an abstract model in 1982 that showed how a quantum system could be used to do computations. He also explained how such a machine would be able to act as a simulator for quantum physics, conjecturing that any classical computer could do the same task only inefficiently. In 1985, David Deutsch proposed the first universal quantum Turing machine, which paved the way to the quantum circuit model (Deutsch 1989) and the development of quantum algorithms.
The 1990s saw the discovery of the Deutsch-Jozsa algorithm (1992) and of Simon’s algorithm (1994). The latter supplied the basis for Shor’s factoring algorithm. Published in 1994, this algorithm marked a “phase transition” in the development of quantum computing and sparked a tremendous interest even outside the physics community. In that year the first experimental realisation of the quantum CNOT (controlled-not) gate with trapped ions was proposed by Cirac & Zoller (1995). In 1995, Peter Shor and Andrew Steane proposed (independently) the first schemes for quantum error-correction. In that same year the first realisation of a quantum logic gate was carried out in Boulder, Colorado, following Cirac and Zoller’s proposal. In 1996, Lov Grover from Bell Labs invented a quantum search algorithm which yields a provable (though only quadratic) “speed-up” compared to its classical counterparts. A year later the first model for quantum computation based on nuclear magnetic resonance (NMR) techniques was proposed. This technique was realised in 1998 with a 2-qubit register, and was scaled up to 7 qubits in the Los Alamos National Lab in 2000.
The adiabatic and cluster-state models of quantum computing were discovered in 2000 and 2002, respectively (Farhi, Goldstone, Gutmann, & Sipser 2000; Raussendorf & Briegel 2002), and in 2011 D-Wave Systems announced the creation of “D-Wave One”, an adiabatic quantum computer system running on a 128-qubit processor (Johnson, Amin, Gildert, et al. 2011). The late 2010s saw the beginning of the Noisy Intermediate-Scale Quantum (NISQ) era (Preskill 2018), and in 2019 scientists affiliated with Google LLC announced (Arute, Arya, Babbush, & coauthors 2019) that they had achieved “quantum computational supremacy” (Aaronson 2019 [Other Internet Resources])—the actual existence of a (in this case, NISQ) quantum computer capable of solving a specific problem for which no efficient classical algorithm is known—at least until 2022, when a classical algorithm to outperform Google LLC’s quantum computer was discovered (Pan, Chen, & Zhang 2022), not to mention subsequent theoretical results demonstrating the inherent limitations of Google LLC’s approach (Aharonov, Gao, Landau, Liu, & Vazirani 2023). Despite the tremendous growth of the field since the discovery of Shor’s algorithm, the basic questions remain open even today: (1) theoretically, can quantum algorithms efficiently solve classically intractable problems? (2) operationally, can we actually realise a large-scale quantum computer to run these algorithms?
In this section we review the basic paradigm for quantum algorithms, namely the quantum circuit model, which comprises the basic quantum unit of information (the qubit) and the basic logical manipulations thereof (quantum gates). For more detailed introductions see Nielsen & Chuang (2010) and Mermin (2007).
The qubit is the quantum analogue of the bit, the classical fundamental unit of information. It is a mathematical object with specific properties that can be realised in an actual physical system in many different ways. Just as the classical bit has a state—either 0 or 1—a qubit also has a state. Yet contrary to the classical bit, \(\lvert 0\rangle\) and \(\lvert 1\rangle\) are but two possible states of the qubit, and any linear combination (superposition) thereof is also possible. In general, then, the physical state of a qubit is the superposition \(\lvert\psi\rangle = \alpha\lvert 0\rangle + \beta\lvert 1\rangle\) (where \(\alpha\) and \(\beta\) are complex numbers). The state of a qubit can be described as a vector in a two-dimensional Hilbert space, a complex vector space (see the entry on quantum mechanics). The special states \(\lvert 0\rangle\) and \(\lvert 1\rangle\) are known as the computational basis states, and form an orthonormal basis for this vector space. According to quantum theory, when we try to measure the qubit in this basis in order to determine its state, we get either \(\lvert 0\rangle\) with probability \(\lvert\alpha\rvert^2\) or \(\lvert 1\rangle\) with probability \(\lvert\beta\rvert^2\). Since \(\lvert\alpha\rvert^2 + \lvert\beta\rvert^2 = 1\) (i.e., the qubit is a unit vector in the aforementioned two-dimensional Hilbert space), we may (ignoring the overall phase factor) effectively write its state as \(\lvert\psi\rangle = \cos(\theta)\lvert 0\rangle + e^{i\phi}\sin(\theta)\lvert 1\rangle\), where the numbers \(\theta\) and \(\phi\) define a point on the unit three-dimensional sphere, as shown in the figure below. This sphere is typically called the Bloch sphere, and it provides a useful means to visualise the state space of a single qubit.
[Figure: the Bloch sphere, with \(\lvert 0\rangle\) at the top pole and \(\lvert 1\rangle\) at the bottom pole]
The Bloch Sphere
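The amplitudes and measurement probabilities just described can be sketched numerically. The following Python fragment (using NumPy; the particular amplitudes are arbitrary illustrative choices) represents a qubit as a unit vector in \(\mathbb{C}^2\):

```python
import numpy as np

# A qubit as a unit vector in C^2. The amplitudes alpha and beta below are
# arbitrary illustrative choices satisfying |alpha|^2 + |beta|^2 = 1.

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

alpha, beta = 3 / 5, 4j / 5
psi = alpha * ket0 + beta * ket1        # |psi> = alpha|0> + beta|1>

probs = np.abs(psi) ** 2                # Born-rule probabilities in the
print(probs)                            # computational basis: [0.36 0.64]
assert np.isclose(probs.sum(), 1.0)     # |psi> is a unit vector
```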
Since \(\alpha\) and \(\beta\) are complex and therefore continuous variables, one might think that a single qubit is capable of storing an infinite amount of information. When measured, however, it yields only the classical result (0 or 1) with certain probabilities specified by the quantum state. In other words, the measurement changes the state of the qubit, “collapsing” it from a superposition to one of its terms. In fact one can prove (Holevo 1973) that the amount of information actually retrievable from a single qubit (what Timpson (2013, 47ff.) calls its “accessible information”) is no more than one bit. If the qubit is not measured, however, the amount of “hidden” information it “stores” (what Timpson calls its “specification information”) is conserved under its (unitary) dynamical evolution. This feature of quantum mechanics allows one to manipulate the information stored in unmeasured qubits with quantum gates (i.e. unitary transformations), and is one of the sources for the putative power of quantum computers.
As an illustration, let us suppose we have two qubits at our disposal. A pair of qubits has four computational basis states: {\(\lvert 00\rangle, \lvert 01\rangle, \lvert 10\rangle, \lvert 11\rangle\)}. If these were classical bits, they would represent the four physically possible states of the system. But a pair of qubits can also exist in what can be described as a superposition of these four basis states, each of which has its own complex coefficient (the squared moduli of these coefficients, interpreted as probabilities, sum to one). As long as the quantum system evolves unitarily and is unmeasured, it can be imagined to “store” that many bits of (specification) information. The difficult task, however, is to use this information efficiently in light of the bound on the state’s accessible information.
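As a rough numerical sketch of this, the four-dimensional state space of a qubit pair can be built from tensor (Kronecker) products of single-qubit states; the equal superposition below is an illustrative choice:

```python
import numpy as np

# A two-qubit state as a vector in C^4, with basis ordering
# |00>, |01>, |10>, |11>. The equal superposition chosen here is
# illustrative: each of the four outcomes has probability 1/4.

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)       # (|0> + |1>)/sqrt(2)

pair = np.kron(plus, plus)              # superposition of all four states
print(np.abs(pair) ** 2)                # [0.25 0.25 0.25 0.25]
```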
Classical computational gates are Boolean logic gates that manipulate information stored in bits. In quantum computing such gates are represented by matrices, and can be visualised as rotations of the Bloch sphere. This visualisation represents the fact that quantum gates are unitary operators, i.e., they preserve the norm of the quantum state (i.e., \(U^{\dagger}U=I\), where \(U\) is a linear operator representing a quantum gate and \(U^{\dagger}\) is its adjoint). In classical computing some gates are “universal”. For instance, all of the possible logical connections between two inputs A and B can be realised using some string of NAND gates (which each evaluate the function “not both A and B”). Another universal gate is NOR. In the context of quantum computing it was shown (DiVincenzo 1995) that two-qubit gates (i.e. gates that transform two qubits) are sufficient to realise any quantum circuit, in the sense that a circuit composed exclusively from (a small set of) one- and two-qubit gates can approximate to arbitrary accuracy any unitary transformation of \(n\) qubits. Barenco et al. (1995) showed in particular that any multiple-qubit logic gate may be composed in this sense from a combination of single-qubit gates and the two-qubit controlled-not (CNOT) gate, which either flips or preserves its “target” input bit depending on the state of its “control” input bit (specifically: in a CNOT gate the output state of the target qubit is the result of an operation analogous to the classical exclusive-OR (XOR) gate on the inputs). One general feature of quantum gates that distinguishes them from classical gates is that they are always reversible: the inverse of a unitary matrix is also a unitary matrix, and thus a quantum gate can always be inverted by another quantum gate.

The CNOT Gate
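In matrix form, the CNOT gate just pictured can be written as a \(4 \times 4\) unitary acting on the ordered basis {\(\lvert 00\rangle, \lvert 01\rangle, \lvert 10\rangle, \lvert 11\rangle\)} (control qubit first). The sketch below checks its unitarity and its action on \(\lvert 10\rangle\):

```python
import numpy as np

# CNOT as a 4x4 matrix on the basis |00>, |01>, |10>, |11> (control qubit
# first): the target bit is flipped exactly when the control is |1>.

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Unitarity (U†U = I) guarantees reversibility; CNOT is its own inverse.
assert np.allclose(CNOT.conj().T @ CNOT, np.eye(4))

ket10 = np.array([0, 0, 1, 0], dtype=complex)   # control 1, target 0
print((CNOT @ ket10).real)                      # -> |11>, i.e. [0. 0. 0. 1.]
```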
Unitary gates manipulate information stored in the “quantum register”—a quantum system—and in this sense ordinary (unitary) quantum evolution can be regarded as a computation. In order to read the result of this computation, however, the quantum register must be measured. Measurement is represented as a non-unitary gate that “collapses” the quantum superposition in the register onto one of its terms, with a probability given by the squared modulus of that term’s complex coefficient. Usually this is described with respect to the computational basis, but in principle a measurement could be carried out in any of the infinitely many possible orthonormal bases with respect to which a given state \(\lvert\psi\rangle\) can be expressed as a linear combination of basis states. It so happens that some such measurements are more difficult to implement than others.
Quantum circuits are similar to classical computer circuits in that they consist of logical wires and gates. The wires are used to carry the information, while the gates manipulate it (note that the wires are abstract and do not necessarily correspond to physical wires; they may correspond to a physical particle, e.g. a photon, moving from one location to another in space, or even to time-evolution). Conventionally, the input of the quantum circuit is assumed to be a number of qubits each initialised to a computational basis state (typically \(\lvert 0\rangle\)). The output state of the circuit is then measured in some orthonormal basis (usually the computational basis).
The first quantum algorithms (i.e., Deutsch-Jozsa, Simon, Shor and Grover) were constructed in this paradigm. Additional paradigms for quantum computing exist today that differ from the circuit model in many interesting ways. So far, however, they have all been demonstrated to be computationally equivalent to the circuit model (see below), in the sense that any computational problem that can be solved by the circuit model can be solved by these new models with only a polynomial overhead in computational resources. We note the parallel here with the various classical computational models, for which it is also the case that any “reasonable” such model can be efficiently simulated by any other (for discussion, see Cuffaro 2018b, 274).
Algorithm design is a highly complicated task, and in quantum computing, delicately leveraging the features of quantum mechanics in order to make an algorithm more efficient makes the task even more complicated. But before discussing this aspect of quantum algorithm design, let us first convince ourselves that quantum computers can actually simulate classical computation. In some sense this is obvious, given the belief in the universal character of quantum mechanics, and the observation that any quantum computation that is diagonal in the computational basis, i.e., that involves no interference between the qubits, is effectively classical. Yet the demonstration that quantum circuits can be used to simulate classical circuits is not straightforward (recall that the former are always reversible while the latter use gates which are in general irreversible). Indeed, quantum circuits cannot be used directly to simulate classical computation, but the latter can still be simulated on a quantum computer using an intermediate gate, namely the Toffoli gate. This universal classical gate has three input bits and three output bits. Two of the input bits are control bits, unaffected by the action of the gate. The third input bit is a target bit that is flipped if both control bits are set to 1, and otherwise is left alone. This gate is reversible (it is its own inverse), and by stringing a number of such gates together one can simulate any classical circuit. Consequently, using the quantum version of the Toffoli gate (which by definition permutes the computational basis states in the same way as the classical Toffoli gate) one can simulate, although rather tediously, irreversible classical logic gates with quantum reversible ones. Quantum computers are thus capable of performing any computation which a classical deterministic computer can do.
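As a rough illustration of how this works, the sketch below builds the Toffoli gate as an \(8 \times 8\) permutation matrix and uses it, with the target bit initialised to 1, to recover the irreversible classical NAND gate:

```python
import numpy as np

# The Toffoli gate permutes the eight 3-bit basis states |abc>, flipping
# the third (target) bit exactly when both control bits are 1 -- i.e. it
# swaps |110> and |111>. Initialising the target to 1 makes the output
# target bit equal NAND(a, b), simulating an irreversible classical gate
# with a reversible one.

TOFFOLI = np.eye(8, dtype=int)
TOFFOLI[[6, 7]] = TOFFOLI[[7, 6]]          # swap |110> <-> |111>

def nand_via_toffoli(a, b):
    index = (a << 2) | (b << 1) | 1        # prepare basis state |a b 1>
    state = np.zeros(8, dtype=int)
    state[index] = 1
    out = int(np.argmax(TOFFOLI @ state))  # resulting basis state index
    return out & 1                         # read off the target bit

print([nand_via_toffoli(a, b) for a in (0, 1) for b in (0, 1)])
# prints [1, 1, 1, 0], the truth table of NAND
```

Since NAND is universal for classical circuits, stringing such gates together simulates any classical computation, as the text notes.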
What about probabilistic computation? Not surprisingly, a quantum computer can also simulate this type of computation by using another famous quantum gate, namely the Hadamard gate, a single-qubit gate that takes the input state \(\lvert 0\rangle\) to \(\frac{\lvert 0\rangle + \lvert 1\rangle}{\sqrt{2}}\) and the input state \(\lvert 1\rangle\) to \(\frac{\lvert 0\rangle - \lvert 1\rangle}{\sqrt{2}}\). Measuring either of these output states yields \(\lvert 0\rangle\) or \(\lvert 1\rangle\) with 50/50 probability, which can be used to simulate a fair coin toss.

The Hadamard Gate
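A minimal numerical sketch of the Hadamard gate's action on \(\lvert 0\rangle\):

```python
import numpy as np

# The Hadamard gate as a 2x2 unitary: H|0> = (|0>+|1>)/sqrt(2) and
# H|1> = (|0>-|1>)/sqrt(2). Measuring either output in the computational
# basis yields 0 or 1 with equal probability -- a fair quantum coin toss.

H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

ket0 = np.array([1, 0], dtype=complex)
plus = H @ ket0
print(np.abs(plus) ** 2)    # [0.5 0.5]
```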
Obviously, if quantum algorithms could be used only to simulate classical algorithms, the interest in them would be far more limited than it currently is. But while there may always be some computational problems that resist quantum speed-up (see Myers 1997 and Linden & Popescu 1998 [Other Internet Resources]), there is a general confidence in the community that quantum algorithms may not only simulate classical ones, but will actually outperform the latter in some cases, with debatable (Cuffaro 2018b; Hagar 2007) implications for our abstract notions of tractability and intractability.
The first quantum algorithms were designed to solve problems which essentially involve the use of an “oracle”, so let us begin by explaining this term. An oracle is a conceptual device that has proven useful in the complexity-theoretic analysis of computational problems, which one can think of as a kind of imaginary magic black box (Arora & Barak 2009, 72–73; Aaronson 2013a, 29ff.) to which, like the famous oracle at Delphi, one poses (yes or no) questions. Unlike that ancient oracle, the oracles considered in computer science always return an answer in a single time step. For example, we can imagine an oracle to determine whether a given Boolean formula is satisfiable or not: Given as input the description of a particular propositional formula, the oracle outputs—in a single time step—a single bit indicating whether or not there is a truth-value assignment satisfying that formula. Obviously such a machine does not really exist—SAT (short for satisfiability) is an NP-complete problem—but that is not the point. The point of using such imaginary devices is to abstract away from certain “implementational details” which are for whatever reason deemed unimportant for answering a given complexity-theoretic question. For example, Simon’s problem (Simon 1994) is that of determining the period of a given function \(f\) that is periodic under bit-wise modulo-2 addition. Relative to Simon’s problem, we judge the internal complexity of \(f\) to be unimportant, and so abstract away from it by imagining that we have an oracle to evaluate it in a single step. As useful as these conceptual devices are, however, their usefulness has limitations. To take one example, there are oracles relative to which P = NP, as well as oracles relative to which P \(\ne\) NP. These (and many other) questions are not clarified by oracles (see Fortnow 1994).
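To see concretely what a SAT oracle abstracts away, consider the obvious brute-force alternative, which checks all \(2^n\) truth assignments; the formula below is an invented example:

```python
from itertools import product

# Brute-force satisfiability testing: try every assignment of n Boolean
# variables, at a cost that grows as 2^n. A SAT oracle is imagined to
# return the same verdict in a single time step. The example formula,
# (x0 or x1) and (not x0 or x2), is invented for illustration.

def brute_force_sat(formula, n):
    for assignment in product([False, True], repeat=n):  # 2^n candidates
        if formula(assignment):
            return assignment         # a satisfying truth-value assignment
    return None                       # unsatisfiable

f = lambda v: (v[0] or v[1]) and ((not v[0]) or v[2])
print(brute_force_sat(f, 3))          # a satisfying assignment, if any
```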
Deutsch (1989) asks the following question: Suppose we have a function \(f\) which can be either constant—i.e., such that it produces the same output value for each of its possible inputs—or balanced—i.e., such that the output of one half of its possible inputs is the opposite of the output of the other half. The particular example considered is a function \(f : \{0,1\} \rightarrow \{0,1\}\), which is constant if \(f(0) = f(1)\) and balanced if \(f(0) \ne f(1)\). Classically it would take two evaluations of the function to tell whether it is one or the other. Quantum-mechanically, we can answer this question in one evaluation.

A Schematic Representation of Deutsch’s Algorithm
After initially preparing (Mermin 2007, ch. 2) the first and second qubits of the computer in the state \(\lvert 0\rangle\lvert 0\rangle\), one then “flips” both qubits (see the Figure above) using “NOT” gates (i.e. Pauli X transformations) to \(\lvert 1 \rangle\), and then subjects each qubit to a Hadamard gate. One then sends the two qubits through an oracle or ‘black box’ which one imagines as a unitary gate, \(\mathbf{U}_f\), representative of the function whose character (of being either constant or balanced) we wish to determine, where we define \(\mathbf{U}_f\) so that it takes inputs like \(\lvert x,y\rangle\) to \(\lvert x, y\oplus f(x)\rangle\), where \(\oplus\) is addition modulo two (i.e. exclusive-or). The first qubit is then fed into a further Hadamard gate, and the final output of the algorithm (prior to measurement) is the state:
\[\frac{1}{2}\lvert 1 \rangle(\lvert f(0) \rangle - \lvert \hat{f}(0) \rangle)\] whenever \(f\) is constant, and the state:
\[\frac{1}{2}\lvert 0 \rangle(\lvert f(0) \rangle - \lvert \hat{f}(0) \rangle)\] whenever \(f\) is balanced, where \(\hat{f}(x) \equiv 1 \oplus f(x)\). Since the computational basis states are orthogonal to one another, a single measurement of the first qubit suffices to retrieve the answer to our original question regarding the function’s nature. And since there are two possible constant functions and two possible balanced functions \(f : \{0,1\} \rightarrow \{0,1\}\), we can characterise the algorithm as distinguishing, using only one oracle call, between two quantum disjunctions without finding out the truth values of the disjuncts themselves, i.e. without determining which balanced or which constant function \(f\) is (Bub 2010).
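The circuit just described can be checked with a small state-vector simulation. The following sketch (in Python with NumPy; the function and variable names are our own illustrative choices) builds the two-qubit circuit exactly as above and reads off the first qubit, which comes out \(\lvert 1\rangle\) with certainty when \(f\) is constant and \(\lvert 0\rangle\) when it is balanced:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
X = np.array([[0, 1], [1, 0]])                  # "NOT" (Pauli X) gate
I = np.eye(2)

def U_f(f):
    # Oracle taking |x, y> to |x, y XOR f(x)>; basis order |00>,|01>,|10>,|11>
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.zeros(4)
    state[0] = 1.0                     # prepare |0>|0>
    state = np.kron(X, X) @ state      # flip both qubits to |1>|1>
    state = np.kron(H, H) @ state      # Hadamard each qubit
    state = U_f(f) @ state             # the single oracle call
    state = np.kron(H, I) @ state      # final Hadamard on the first qubit
    p1 = state[2] ** 2 + state[3] ** 2 # probability of measuring 1 on qubit 1
    return 'constant' if p1 > 0.5 else 'balanced'
```

Running `deutsch` on each of the four functions \(f : \{0,1\} \rightarrow \{0,1\}\) confirms the analysis: one oracle call suffices.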
A generalisation of Deutsch’s problem, called the Deutsch-Jozsa problem (Deutsch & Jozsa 1992), enlarges the class of functions under consideration so as to include all of the functions \(f:\{0,1\}^n\to\{0,1\}\) that are promised to be either constant or balanced, rather than only the case \(n = 1\). The best deterministic classical algorithm for determining whether a given such function is constant or balanced requires \(\frac{2^{n}}{2}+1\) queries to an oracle. In a quantum computer, however, we can answer the question using one oracle call. As with Deutsch’s algorithm, an analysis shows that the reason why a quantum computer requires only one call to the oracle to evaluate the global property of the function in question is that the output state of the computer is a superposition of balanced and constant states such that the balanced states all lie in a subspace of the system’s Hilbert space orthogonal to that of the constant states, and can therefore be distinguished from the latter in a single measurement (Bub 2006a).
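The \(n\)-qubit case can be simulated just as easily using the phase-kickback form of the oracle (a standard simplification in which the oracle multiplies each basis state \(\lvert x\rangle\) by \((-1)^{f(x)}\)); the sketch below, with names of our own choosing, exhibits the one-query separation:

```python
import numpy as np

def deutsch_jozsa(f, n):
    """Decide whether f: {0,1}^n -> {0,1}, promised constant or balanced,
    is constant or balanced using a single (phase) oracle query."""
    N = 2 ** n
    # state of the query register after the oracle's phase kickback
    state = np.array([(-1) ** f(x) for x in range(N)]) / np.sqrt(N)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)            # n-fold Hadamard transform
    state = Hn @ state
    # the amplitude of |0...0> is +/-1 if f is constant, exactly 0 if balanced
    return 'constant' if abs(state[0]) > 0.5 else 'balanced'
```

The orthogonality mentioned above is visible here: the constant states project entirely onto \(\lvert 0\ldots 0\rangle\), the balanced states entirely onto its complement.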
Suppose we have a Boolean function \(f\) on \(n\) bits that is 2-to-1, i.e. that takes \(n\) bits to \(n-1\) bits in such a way that for every \(n\)-bit integer \(x_1\) there is a distinct \(n\)-bit integer \(x_2\) for which \(f(x_{1}) = f(x_{2})\). The function is moreover periodic in the sense that \(f(x_1) = f(x_2)\) if and only if \(x_1 = x_2 \oplus a\), where \(\oplus\) designates bit-wise modulo-2 addition and \(a\) is an \(n\)-bit nonzero number called the period of \(f\). Simon’s problem is the problem of finding \(a\) given \(f\). Relative to an oracle \(U_f\) that evaluates \(f\) in a single step, Simon’s quantum algorithm (Simon 1994) finds the period of \(f\) in a number of oracle calls that grows only linearly with \(n\), while the best known classical algorithm requires an exponentially greater number of oracle calls. Simon’s algorithm reduces to Deutsch’s algorithm when \(n=2\), and can be regarded as an extension of the latter, in the sense that in both cases a global property of a function is evaluated in no more than a (sub-)polynomial number of oracle invocations, owing to the fact that the output state of the computer just before the final measurement decomposes into orthogonal subspaces, only one of which contains the problem’s solution. Note that one important difference between Deutsch’s and Simon’s algorithms is that the former yields a solution with certainty, whereas the latter yields a solution only with probability very close to 1. For more on the logical analysis of these first quantum circuit-based algorithms see Bub (2006a) and Bub (2010).
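For contrast, the classical approach that Simon’s algorithm outperforms is a collision search: query \(f\) until two inputs give the same output, whose XOR is then the period. The sketch below is ours, and the construction \(f(x) = \min(x, x \oplus a)\) is merely one convenient way of building a 2-to-1 function with a given period \(a\):

```python
def make_periodic_f(a):
    # A convenient 2-to-1 function with f(x1) = f(x2) iff x1 = x2 XOR a:
    # each pair {x, x XOR a} is mapped to its smaller member.
    return lambda x: min(x, x ^ a)

def find_period_classically(f, n):
    # Search for a collision f(x1) = f(x2); then a = x1 XOR x2.
    # In the worst case this needs on the order of 2^(n/2) oracle calls,
    # whereas Simon's quantum algorithm needs only about n of them.
    seen = {}
    for x in range(2 ** n):
        y = f(x)
        if y in seen:
            return seen[y] ^ x
        seen[y] = x
```

The exponential gap between this brute-force search and Simon’s linear number of oracle calls is precisely what made the result so striking.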
The algorithms just described, although demonstrating the potential superiority of quantum over classical computation, nevertheless deal with apparently unimportant computational problems. Moreover, the speed-ups in each of them are only relative to their respective oracles. It is therefore doubtful whether research into quantum computing would have attracted so much attention in the 1990s had Shor not realised that Simon’s algorithm could be harnessed to solve a much more interesting and crucial problem, namely factoring, which lies at the heart of widely-used cryptographic protocols such as RSA (Rivest, Shamir, & Adleman 1978). Shor’s algorithm turned quantum computing into one of the most exciting research domains in quantum mechanics.
Shor’s algorithm exploits the ingenious number-theoretic argument that two prime factors \(p,q\) of a positive integer \(N=pq\) can be found by determining the period, \(r\), of the function \(f(x) = y^x \bmod N\), for any \(y < N\) which has no common factors with \(N\) other than 1 (Nielsen & Chuang 2010, app. 4). The period of \(f(x)\) depends on \(y\) and \(N\). If one knows it, one can factor \(N\) if \(r\) is even and if \(y^{\,\frac{r}{2}} \neq -1 \bmod N\). This will jointly be the case with probability greater than \(\frac{1}{2}\) for any \(y\) chosen randomly (otherwise one chooses another value of \(y\) and tries again). The factors of \(N\) are then the greatest common divisors of \(y^{\,\frac{r}{2}} \pm 1\) and \(N\), which can be found in polynomial time using the well-known Euclidean algorithm. In other words, Shor’s remarkable result rests on the discovery that the problem of factoring reduces to the problem of finding the period of a certain periodic function. That this problem can be solved efficiently by a quantum computer is hinted at by Simon’s algorithm, which considers the more restricted case of functions periodic under bit-wise modulo-2 addition, as opposed to the functions periodic under ordinary addition considered here.
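The classical side of this reduction is easy to demonstrate. In the sketch below (names ours), the period-finding step, which is the part a quantum computer performs efficiently, is simply done by brute force, so only small \(N\) are feasible:

```python
from math import gcd

def factor_via_period(N, y):
    """Factor N = p*q given a suitable y coprime to N, via the period of
    f(x) = y^x mod N. The period search here is brute force; finding r
    efficiently is exactly what Shor's quantum algorithm contributes."""
    assert gcd(y, N) == 1
    r = 1
    while pow(y, r, N) != 1:   # smallest r with y^r = 1 (mod N)
        r += 1
    if r % 2 == 1 or pow(y, r // 2, N) == N - 1:
        return None            # unlucky choice of y: try another
    # the factors are gcd(y^(r/2) +/- 1, N), via Euclid's algorithm
    p = gcd(pow(y, r // 2) - 1, N)
    q = gcd(pow(y, r // 2) + 1, N)
    return sorted((p, q))
```

For example, `factor_via_period(15, 7)` finds the period \(r=4\) of \(7^x \bmod 15\) and returns the factors 3 and 5.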
Notwithstanding that factoring is believed to be in NP but not NP-complete (see Aaronson 2013a, 64–66), Shor’s result is arguably the most dramatic example of quantum speed-up known. To verify whether \(n\) is prime takes a number of steps which is polynomial in \(\log_{2}n\) (the binary encoding of a natural number \(n\) requires \(\log_{2}n\) resources). But nobody knows how to classically factor numbers into primes in polynomial time, and the best classical algorithms we have for this problem are sub-exponential. A number of widely used modern cryptographic protocols are based on these facts (Giblin 1993), and the discovery that quantum computers can solve factoring in polynomial time has therefore had a dramatic effect. The implementation of the algorithm on a scalable architecture would consequently have economic, as well as scientific, consequences (Alléaume et al. 2014).
In a brilliant undercover operation, Agent 13 has managed to secure two crucial bits of information concerning the whereabouts of the arch-villain Siegfried: the phone number of the secret hideout from which he intends to begin carrying out KAOS’s plans for world domination, and the fact that the number is a listed one (apparently an oversight on Siegfried’s part). Unfortunately you and your colleagues at CONTROL have no other information besides this. Can you find Siegfried’s hideout using only this number and a phone directory? In theoretical computer science this task is known as an unstructured search. In the worst case, if there are \(n\) entries in the directory, the computational resources required to find the entry will be linear in \(n\). Grover (1996) showed how this task could be done with a quantum algorithm using computational resources on the order of only \(\sqrt{n}\). Agreed, this speed-up is more modest than Shor’s, since unstructured search belongs to the class \(\mathbf{P}\); but contrary to Shor’s case, where the classical complexity of factoring is still unknown, here the superiority of the quantum algorithm, however modest, is definitely provable. That this quadratic speed-up is also the optimal quantum speed-up possible for this problem was proved by Bennett, Bernstein, Brassard, & Vazirani (1997).
Although the purpose of Grover’s algorithm is usually described as “searching a database”, it may be more accurate to describe it as “inverting a function”. Roughly speaking, if we have a function \(y=f(x)\) that can be evaluated on a quantum computer, Grover’s algorithm allows us to calculate \(x\) given \(y\). Inverting a function is related to searching a database because we could come up with a function that produces a particular value of \(y\) if \(x\) matches a desired entry in a database, and another value of \(y\) for other values of \(x\). The applications of Grover’s algorithm are far-reaching (even more so than foiling Siegfried’s plans for world domination). For example, it can be used to determine efficiently the number of solutions to an \(N\)-item search problem, hence to perform exhaustive searches on a class of solutions to an NP-complete problem and substantially reduce the computational resources required for solving it.
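Grover’s amplitude-amplification loop (a phase-flip of the marked entry followed by an “inversion about the mean”) is easy to simulate directly on the state vector. The sketch below, with illustrative names of our own, recovers a marked item among \(N = 2^n\) entries using on the order of \(\sqrt{N}\) oracle calls:

```python
import numpy as np

def grover(n_qubits, marked):
    """Find the index 'marked' among 2**n_qubits items with ~sqrt(N) oracle calls."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))      # uniform superposition
    oracle = np.ones(N)
    oracle[marked] = -1                     # oracle: phase-flip the marked entry
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state = oracle * state              # one oracle call
        state = 2 * state.mean() - state    # inversion about the mean (diffusion)
    # after ~ (pi/4) sqrt(N) rounds, nearly all probability sits on 'marked'
    return int(np.argmax(state ** 2))
```

With \(N = 16\), three iterations already concentrate roughly 96% of the probability on the marked item, versus the \(1/16\) chance of a single classical guess.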
Many decades have passed since the discovery of the first quantum algorithm, but so far little progress has been made with respect to the “Holy Grail” of solving an NP-complete problem with a quantum circuit. In 2000 a group of physicists from MIT and Northeastern University (Farhi et al. 2000 [Other Internet Resources]) proposed a novel paradigm for quantum computing that differs from the circuit model in several interesting ways. Their goal was to use this new paradigm to decide instances of one of the most famous NP-complete problems: satisfiability. According to the adiabatic theorem (see, e.g., Messiah 1961), and given certain specific conditions, a quantum system remains in its lowest energy state, known as the ground state, along an adiabatic transformation in which the system is deformed slowly and smoothly from an initial Hamiltonian to a final Hamiltonian (as an illustration, think of moving a sleeping baby in a cradle from the living room to the bedroom: if the transition is done slowly and smoothly enough, and if the baby is a sound sleeper, then it will remain asleep during the whole transition). The most important condition in this theorem concerns the energy gap between the ground state and the next excited state (in our analogy, this gap reflects how sound asleep the baby is). Being inversely related to the required evolution time \(T\), this gap controls the latter. If this gap exists during the entire evolution (i.e., there is no level crossing between the energy states of the system), the theorem dictates that in the adiabatic limit (when \(T\rightarrow \infty\)) the system will remain in its ground state. In practice, of course, \(T\) is always finite, but the longer it is, the less likely it is that the system will deviate from its ground state during the time evolution.
The crux of the quantum adiabatic algorithm, which rests on this theorem, lies in the possibility of encoding a specific instance of a given decision problem in a certain Hamiltonian (this can be done by capitalising on the well-known fact that any decision problem can be derived from an optimisation problem by incorporating into it a numerical bound as an additional parameter). One then starts the system in the ground state of another Hamiltonian which is easy to construct, and slowly evolves the system in time, deforming it towards the desired Hamiltonian. According to the quantum adiabatic theorem, and given the gap condition, the result of such a physical process is another energy ground state that encodes the solution to the desired decision problem. The adiabatic algorithm is thus a rather ‘laid back’ algorithm: one need only start the system in its ground state, deform it adiabatically, and measure its final ground state in order to retrieve the desired result. But whether or not this algorithm yields the desired speed-up depends crucially on the behaviour of the energy gap as the number of degrees of freedom in the system increases. If this gap decreases exponentially with the size of the input, then the evolution time of the algorithm will increase exponentially; if the gap decreases only polynomially, the decision problem so encoded could be solved efficiently. Physicists have been studying spectral gaps for almost a century, but they have only relatively recently begun to do so with computing in mind. It is now known that there exists no algorithm to determine, given the Hamiltonian of an arbitrary quantum many-body system, whether it is gapped or gapless (Cubitt, Perez-Garcia, & Wolf 2015). In practice, gap amplification techniques are employed in adiabatic quantum computers to ensure the existence of a gap throughout a computation (Albash & Lidar 2018, sec. F).
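The role of the gap can be illustrated numerically. The following sketch tracks the gap between the two lowest eigenvalues along the interpolation \(H(s) = (1-s)H_0 + sH_1\) for a toy two-qubit instance of our own devising (not any published problem Hamiltonian): \(H_0\) is a transverse field whose ground state is easy to prepare, and \(H_1\) is a diagonal “cost” Hamiltonian whose ground state \(\lvert 00\rangle\) encodes the sought solution.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X
I2 = np.eye(2)

# H0: transverse field with an easily prepared ground state
H0 = -(np.kron(X, I2) + np.kron(I2, X))
# H1: toy diagonal cost Hamiltonian; its ground state |00> is the "answer"
H1 = np.diag([0.0, 1.0, 1.0, 2.0])

def minimum_gap(steps=201):
    """Smallest gap between the two lowest eigenvalues of H(s) on [0, 1]."""
    gaps = []
    for s in np.linspace(0.0, 1.0, steps):
        evals = np.linalg.eigvalsh((1 - s) * H0 + s * H1)
        gaps.append(evals[1] - evals[0])
    return min(gaps)
```

If the minimum gap along the path stays bounded away from zero, as it does for this toy instance, the adiabatic theorem licenses a modest evolution time; it is a gap closing exponentially in the input size that forces exponential run time.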
The quantum adiabatic algorithm holds much promise (Farhi et al. 2001). It has been shown (Aharonov et al. 2008) to be polynomially equivalent to the circuit model (that is, each model can simulate the other with only polynomial overhead in the number of qubits and computational steps), but a caveat that is sometimes left unmentioned is that its application to an intractable computational problem may sometimes require solving another, equally intractable task (this general worry was first raised by a philosopher; see Pitowsky 1990). Indeed, Reichardt (2004) has shown that there are simple problems for which the algorithm will get stuck in a local minimum, in which there are exponentially many eigenvalues all exponentially close to the ground state energy, so applying the adiabatic theorem, even for these simple problems, will take exponential time, and we are back to square one. For a recent survey of the state of the art, see Albash & Lidar (2018).
Measurement-based algorithms differ from circuit algorithms in that, instead of employing unitary evolution as the basic mechanism for the manipulation of information, these algorithms make essential use of measurements in the course of a computation, and not only at the readout stage. They fall into two categories. The first is teleportation quantum computing, based on an idea of Gottesman & Chuang (1999) and developed into a computational model by Nielsen (2003) and Leung (2004). The second is the “one way quantum computer”, known also as the “cluster state” model (Raussendorf & Briegel 2002). The interesting feature of these models, which are polynomially equivalent to the circuit model (Raussendorf, Browne, & Briegel 2003), is that they can efficiently simulate unitary quantum dynamics using non-unitary measurements. This is accomplished (see Duwell 2021, 5.2) via measurements on a pool of highly entangled quantum systems such that the orthonormal basis in which each measurement is performed is calculated, via a classical computer, using the results of earlier measurements.
Measurement-based models are interesting from a foundational perspective for a number of reasons. To begin with, in these models there is a clear separation between the classical (i.e., the calculation of the next measurement basis) and quantum (i.e., measurements on the entangled qubits) parts of a computation, which may make it easier to pinpoint the quantum resources that are responsible for speed-up. Further, they may offer insight into the role of entanglement in quantum computing. They may also have interesting engineering-related consequences, suggesting a different kind of computer architecture which is more fault tolerant (Brown & Roberts 2020; Nielsen & Dawson 2005).
Another model for quantum computing which has attracted a lot of attention, especially from Microsoft Inc. (Freedman 1998), is the Topological Quantum Field Theory model (Lahtinen & Pachos 2017). In contrast to the easily visualisable circuit model, this model resides in the most abstract reaches of theoretical physics. The exotic physical systems TQFT describes are topological states of matter. That the formalism of TQFT can be applied to computational problems was shown by Witten (1989), and the idea was later developed by others. One of the principal merits of the model, which is polynomially equivalent to the circuit model (Aharonov, Jones, & Landau 2009; Freedman, Kitaev, & Wang 2002), lies in its high tolerance to the errors which are inevitably introduced in the implementation of a large scale quantum computer (see below). Topology is especially helpful here because many global topological properties are, by definition, invariant under deformation, and given that most errors are local, information encoded in topological properties is robust against them.
The quantum computer might be the theoretician’s dream, but as far as experimentalists are concerned, its full realisation, which involves resolving the still open question of how to combine the elements needed to build a quantum computer into scalable systems (see Van Meter & Horsman 2013), is a nightmare. Shor’s algorithm may break RSA encryption, but it will remain an anecdote if the largest number that it can factor is 21 (Martín-López et al. 2012; Skosana & Tame 2021). In the circuit-based model the problem is to achieve a scalable quantum system that at the same time will allow one to (1) robustly represent quantum information with (2) a time to decoherence significantly longer than the length of the computation, (3) implement a universal family of unitary transformations, (4) prepare a fiducial initial state, and (5) measure the output result (these are DiVincenzo’s (2000) five criteria). Alternative paradigms may trade some of these requirements for others, but the gist will remain the same, i.e., one would have to achieve control of one’s quantum system in such a way that the system will remain “quantum” albeit macroscopic, or at least mesoscopic, in its dimensions.
In order to deal with these challenges, several ingenious solutions have been devised, including quantum error correction codes and fault tolerant computation (Aharonov & Ben-Or 1997; de Beaudrap & Horsman 2020; Horsman, Fowler, Devitt, & Van Meter 2012; Raussendorf, Harrington, & Goyal 2008; Shor 1995; Shor & DiVincenzo 1996; Steane 1996), which can dramatically reduce the spread of errors during a ‘noisy’ quantum computation. An important criticism of these active error correction schemes, however, is that they are devised for a very unrealistic noise model which treats the computer as quantum and the environment as classical (Alicki, Lidar, & Zanardi 2006). Once a more realistic noise model is allowed, the feasibility of large scale, fault tolerant and computationally superior quantum computers is less clear (Hagar 2009; Tabakin 2017).
In the near term, a promising avenue for realising a quantum advantage in a limited number of problem domains is the Noisy Intermediate-Scale Quantum (NISQ) paradigm (Lau, Lim, Shrotriya, & Kwek 2022; Preskill 2018). The NISQ paradigm does not employ any error correction mechanisms (postponing the problem of implementing scalable versions of these to the future) but rather focuses on building computational components, and on tackling computational problems, that are inherently more resilient to noise. These include, for example, certain classes of optimisation problems, quantum semidefinite programming, and digital quantum simulation (Tacchino, Chiesa, Carretta, & Gerace 2020). A caveat here is that the more resilient to noise a circuit is, the more classically it behaves.
As just mentioned, one of the envisioned applications of NISQ computing is for digital quantum simulation (i.e. simulation using a gate-based programmable quantum computer). There is an older tradition of analog quantum simulation, however, wherein one utilises a quantum system whose dynamics resemble the dynamics of a particular target system of interest. Although it is believed that digital quantum simulation will eventually supersede it, the field of analog quantum simulation has progressed substantially in the years since it was first proposed, and analog quantum simulators have already been used to study quantum dynamics in regimes thought to be beyond the reach of classical simulators (see, e.g., Bernien et al. 2017; for further discussion of the philosophical issues involved, see Hangleiter, Carolan, & Thébault 2022).
In this section we review some of the important philosophical issues related to quantum computing that have been discussed in the philosophical and physical literature. For more detailed surveys of some of these issues that are still accessible to non-specialists, see Cuffaro (2022) and Duwell (2021).
Putting aside the problem of practically realising and implementing a large scale quantum computer, a crucial theoretical question remains open: What physical resources—which of the essential features of quantum mechanics—are responsible for the putative power of quantum computers to outperform classical computers? A number of candidates have been put forward. Fortnow (2003) posits interference as the key, though it has been suggested that this is not truly a quantum phenomenon (Spekkens 2007). Jozsa (1997) and many others point to entanglement, although there are purported counter-examples to this thesis (see, e.g., Linden & Popescu 1998 [Other Internet Resources], Biham, Brassard, Kenigsberg, & Mor 2004; for a philosophical discussion of their significance see Cuffaro 2017). Howard, Wallman, Veitch, & Emerson (2014) appeal to quantum contextuality. For Bub (2010), the answer lies in the logical structure of quantum mechanics (cf. Dalla Chiara, Giuntini, Leporini, & Sergioli 2018), while Duwell (2018) argues for quantum parallelism. And for Deutsch (1997) and others it is “parallel worlds” which are the resource.
Speculative as it may seem, the question “what is quantum in quantum computing?” has significant practical consequences. It is almost certain that one of the reasons for the paucity of quantum algorithms that have actually been discovered is the lack of a full understanding of what makes a quantum computer quantum. Quantum computing skeptics (Levin 2003) happily capitalise on this: If no one knows why quantum computers are superior to classical ones, how can we be sure that they are, indeed, superior?
The answer that has tended to dominate the popular literature on quantum computing is motivated by evolutions such as:
\[\tag{1}\sum_{x} \lvert x\rangle \lvert 0\rangle \rightarrow \sum_{x}\lvert x\rangle \lvert f(x)\rangle,\] which were common to many early quantum algorithms. Note the appearance that \(f\) is evaluated for each of its possible inputs simultaneously. The idea that we should take this at face value—that quantum computers actually do compute a function for many different input values simultaneously—is what Duwell (2018, 2021) calls the Quantum Parallelism Thesis (QPT). For Deutsch, who accepts it as true, the only reasonable explanation for the QPT is that the many worlds interpretation (MWI) of quantum mechanics is also true. For Deutsch, a quantum computer in superposition, like any other quantum system, exists in some sense in many classical universes simultaneously. These provide the physical arena within which the computer effects its parallel computations. This conclusion is also defended by Hewitt-Horsman (2009) and by Wallace (2012). Wallace notes, however, that the QPT—and hence the explanatory need for many worlds—may not be true of all or even most quantum algorithms.
For Pitowsky (2002) and Steane (2003), the explanation for quantum speedup is not to be found in quantum parallelism. Pitowsky (2002) asks us to consider a given solution, which has been found using a quantum circuit-based algorithm, to a problem like satisfiability. The quantum algorithm may appear to solve this problem by testing exponentially many assignments “at once” as suggested by (1), yet this quantum ‘miracle’ helps us very little since, as previously mentioned, any measurement performed on the output state collapses it, and if there is one possible truth assignment that solves this decision problem, the probability of retrieving it is no greater than it would be for a classical probabilistic Turing machine which guesses the solution and then checks it. Pitowsky’s conclusion is that achieving quantum speedup requires us to construct ‘clever’ superpositions that increase the probability of successfully retrieving the result far more than that of a pure guess (see also Aaronson 2022 [Other Internet Resources]). Steane (2003), among other things, argues that if we compare the information actually produced by quantum and classical algorithms, then we should conclude that quantum algorithms perform not more but fewer, cleverer, computations than classical algorithms (see also Section 5.1.2 below). Additionally, Steane argues that the motivation for the QPT is at least partly due to misleading aspects of the standard quantum formalism.
Another critic of the MWI approach is Duwell, who (contra Pitowsky and Steane) accepts the QPT (Duwell 2018), but nevertheless denies (contra Deutsch) that it uniquely supports the MWI (Duwell 2007). Considering the phase relations between the terms in a superposition such as (1) is crucially important when evaluating a quantum algorithm’s computational efficiency. Phase relations, however, are global properties of a state. Thus a quantum computation, Duwell argues, does not consist solely of local parallel computations. But in this case, the QPT does not uniquely support the MWI over other explanations.
Defending the MWI, Hewitt-Horsman (2009) argues (contra Steane) that it is false, on this view, to say that quantum computers do not actually generate each of the evaluation instances represented in (1): on the MWI such information could be extracted in principle given sufficiently advanced technology. Further, Hewitt-Horsman emphasises that the MWI is not motivated simply by a suggestive mathematical representation. Worlds on the MWI are defined according to their explanatory usefulness, manifested in particular by their stability and independence over the time scales relevant to the computation. Wallace (2012) argues similarly.
Aaronson (2013b) and Cuffaro (2012, 2022) point out that there is a prima facie tension between the Many Worlds Explanation (MWX) of quantum computing and the MWI. The latter typically employs decoherence as a criterion for distinguishing macroscopic worlds from one another. Quantum circuit model algorithms, however, utilise coherent superpositions. To distinguish computational worlds from one another, therefore, one needs to somehow modify the decoherence criterion, but Cuffaro questions whether this can be successfully motivated independently of a prior commitment to the MWI. Further, Cuffaro argues that the MWX is for all practical purposes incompatible with measurement-based computation, for even granting the legitimacy of a modified world identification criterion, there is no natural way in this model to identify worlds that are stable and independent in the way required.
Even if we could rule out the MWX, identifying the physical resource(s) responsible for quantum speed-up would remain a difficult problem. Among other things the question raises important issues about how to measure the complexity of a given quantum algorithm, as well as issues about which quantum operations we can realistically expect to be able to implement (Geroch 2009, ch. 18; Schmitz 2023). The answers differ according to the particular model under consideration. In the adiabatic model one need only estimate the energy gap behaviour and its relation to the input size (encoded in the number of degrees of freedom of the Hamiltonian of the system). In the measurement-based model one counts the number of measurements needed to reveal the solution that is hidden in the input cluster state (since the preparation of the cluster state is a polynomial process, it does not add to the complexity of the computation). But in the circuit model things are not as straightforward. After all, the whole of the quantum circuit-based computation can be represented as a single unitary transformation from the input state to the output state.
This arguably suggests that the source of the power of a quantum computer, if any, lies not in its dynamics (i.e., the Schrödinger equation) per se, but rather in some feature of the quantum state, or “wave function”. Consider also that the Hilbert subspace “visited” during a quantum computational process is, at any moment, a linear space spanned by all of the vectors in the total Hilbert space which have been created by the computational process up to that moment. This subspace is spanned by a polynomial number of vectors, and is thus at most a polynomial subspace of the total Hilbert space. A classical simulation of the quantum evolution on a Hilbert space with a polynomial number of dimensions (that is, a Hilbert space spanned by a number of basis vectors which is polynomial in the number of qubits involved in the computation), however, can be carried out in a polynomial number of classical computations. Were quantum dynamics solely responsible for the power of quantum computing, the latter could be mimicked in a polynomial number of steps with a classical computer (see, e.g., Vidal 2003).
This is not to say that quantum computation is no more powerful than classical computation. The key point, of course, is that one does not end a quantum computation with an arbitrary superposition, but aims for a very special, ‘clever’ state—to use Pitowsky’s term. Quantum computations are not always efficiently classically simulable because the characterisation of the computational subspace of certain quantum states is difficult. Consequently, in the quantum circuit model one should count the number of computational steps in the computation not by counting the number of transformations of the state, but by counting the number of one- or two-qubit local transformations that are required to create the ‘clever’ superposition that ensures the desired speed-up. (Note that Shor’s algorithm, for example, involves three major steps in this context: first, one creates the ‘clever’ entangled state with a set of unitary transformations, so that the result of the computation—a global property of a function—is now ‘hidden’ in this state; second, in order to retrieve this result, one projects it onto a subspace of the Hilbert space; and finally one performs another set of unitary transformations in order to make the result measurable in the original computational basis. All of these steps count as computational steps as far as the efficiency of the algorithm is concerned. See also Bub 2006b.) The trick is to perform these local one- or two-qubit transformations in polynomial time, and it is likely that it is here that the physical power of quantum computing may be found.
The quantum information revolution has prompted discussion and debate (in which both physicists and philosophers have figured centrally) over what the rising new science can tell us about the foundations of quantum mechanics (see, e.g., Bub 2016; Bub & Pitowsky 2010; Chiribella & Spekkens 2016; Cuffaro 2020; Dunlap 2022; Duwell 2020; Felline 2016; Henderson 2020; Koberinski & Müller 2018; Janas, Cuffaro, & Janssen 2022; Myrvold 2010; Timpson 2013). To be sure (though see below), no resolution to the quantum measurement problem would seem to be forthcoming (see Felline 2020; Hagar 2003; Hagar & Hemmo 2006). But what the rise of the new science has motivated, some would argue, is a reconsideration of whether that is a problem worth solving at all. On “informational approaches” to the interpretation of quantum mechanics such as these (see Cuffaro 2023), quantum mechanics is seen as elevating something that we already know effectively constrains the practice of classical physics (Curiel 2020 [Other Internet Resources]) to the level of a postulate, namely, that interpreting the outcome of a measurement interaction as providing us with information about a given system of interest requires the specification of a schematic representation of an observer, minimally in terms of a “Boolean frame” within which one represents the answers to a set of yes-or-no questions associated with the system. On such a view, classical physics can be understood as a special case of this more general conception of a theory, in which such a schematic representation adds no information that is not already contained, in principle, in a given system’s state description. That quantum mechanics is more general than this is the reason why, it is argued, it is able to represent correlational phenomena that cannot be represented efficiently in classical mechanics.
And furthermore, this ought to make us reconsider the usefulness for physics of the quest for a theory underlying quantum mechanics that satisfies our classical intuitions, such as the intuition that a “fundamental” theory of physics must solve the measurement problem.
Not all of the foundational work prompted by the rising science of quantum computing takes this attitude toward the measurement problem, and it is the hope of some that recent advances in the realisation of a large scale quantum computer may actually provide us with an empirical solution to it. As it turns out, collapse theories—one family of alternatives to quantum theory which aim to solve the measurement problem—modify Schrödinger’s equation and give different predictions from quantum theory in certain specific circumstances. These circumstances can be realised, moreover, if decoherence effects can be suppressed (Bassi, Adler, & Ippoliti 2004). Now one of the most difficult obstacles facing the construction of a large scale quantum computer is its robustness against decoherence effects (Unruh 1995). It thus appears that the technological capabilities required for the realisation of a large scale quantum computer are potentially related to those upon which the distinction between “true” and “false” collapse (Pearle 1997), i.e., between collapse theories and environmentally induced decoherence, is contingent. Consequently the physical realisation of a large-scale quantum computer, if it were of the right architecture, could potentially shed light on one of the long-standing conceptual problems in the foundations of the theory, and if so this would serve as yet another example of experimental metaphysics (the term was coined by Abner Shimony to designate the chain of events that led from the EPR argument via Bell’s theorem to Aspect’s experiments). Note, however, that as just mentioned, one would need to consider the computer’s architecture before drawing any metaphysical conclusions. The architecture is important because while dynamical collapse theories tend to collapse superpositions involving the positions of macroscopic quantities of mass, they tend not to collapse large complicated superpositions of photon polarisation or spin.
Is quantum mechanics compatible with the principle of causality? This is an old question (Hermann 2017; Schlick 1961, 1962). In the contemporary literature there is considerable skepticism regarding the prospects of explaining quantum phenomena causally (Hausman & Woodward 1999; Woodward 2007), or at any rate locally causally, especially in the wake of Bell’s theorem (Myrvold 2016). Inspired by ideas very familiar to computer scientists, however, a strand in the physical and philosophical literature on causation has begun to reconsider whether the prospects for a locally causal explanation of quantum phenomena, at least in the context of an interventionist theory of causation, are quite as hopeless as they may initially have seemed (Adlam 2023; Allen, Barrett, Horsman, Lee, & Spekkens 2017; Costa & Shrapnel 2016; Lorenz 2022; Shrapnel 2017). This is not to say that decades of physical and philosophical investigations into the consequences of Bell’s theorem have all been mistaken, of course. For one thing, the interventionist frameworks utilised in this new work are operationalist, thus the relevance of this work to so-called hidden variables theories of quantum mechanics is unclear. Second, the interventionist frameworks utilised are not classical, and neither is the kind of causality they explicate. Indeed it is arguably the key insight emerging from this work that the frameworks previously utilised for analysing interventionist causation in the quantum context are inappropriate to that context. In contrast to a classical interventionist framework in which events are thought of as primitive (i.e., as not further analysable), events in these generalised frameworks are characterised as processes with associated inputs and outputs. Specifically, one characterises quantum events using a concept from quantum computation and information theory called a quantum channel.
And within this generalised interventionist framework, causal models of quantum phenomena can be given which do not need to posit non-local causal influences, and which satisfy certain other desiderata typically required of a causal model (in particular, that such a model respect the causal Markov condition and that it not require ‘fine-tuning’; see Shrapnel 2017).
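The idea of a quantum channel as a process taking input states to output states can be made concrete in a few lines. The sketch below (assuming only numpy; the depolarising channel is a standard textbook example, and the function names and the parameter `p` are my own choices, not drawn from the text) represents a channel by its Kraus operators {K_i} and applies it to a density matrix as ρ → Σ_i K_i ρ K_i†.

```python
import numpy as np

# Illustrative sketch of a quantum channel: a completely positive,
# trace-preserving map specified by Kraus operators {K_i}, acting on
# a density matrix as  rho -> sum_i K_i rho K_i^dagger.

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

def depolarising_kraus(p):
    """Kraus operators of the single-qubit depolarising channel."""
    return [np.sqrt(1 - 3 * p / 4) * I2,
            np.sqrt(p / 4) * X,
            np.sqrt(p / 4) * Y,
            np.sqrt(p / 4) * Z]

def apply_channel(kraus, rho):
    """Map an input state (density matrix) to the channel's output."""
    return sum(K @ rho @ K.conj().T for K in kraus)

ks = depolarising_kraus(0.3)
# Trace preservation requires  sum_i K_i^dagger K_i = identity.
completeness = sum(K.conj().T @ K for K in ks)
rho_in = np.array([[1, 0], [0, 0]], dtype=complex)  # pure state |0><0|
rho_out = apply_channel(ks, rho_in)
```

The input/output structure is exactly what the generalised interventionist frameworks exploit: the event is the process (the channel), and its causal role is analysed in terms of what it does to whatever state is fed into it. Here the pure input |0⟩⟨0| comes out as the mixed state diag(1 − p/2, p/2).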
Physics is traditionally conceived as a primarily “theoretical” activity, in the sense that it is generally thought to be the goal of physics to tell us, even if only indirectly (Fuchs 2002, pp. 5–6), what the world is like independently of any considerations of purpose. This is not the case with every science. Chemistry, for example, is arguably best thought of as a “practically” oriented discipline concerned with the ways in which systems can be manipulated for particular purposes (Bensaude-Vincent 2009). Even within physics there are sub-disciplines which are best construed in this way (Ladyman 2018; Myrvold 2011; Wallace 2014), and indeed some have even advocated that physics should be reconceptualised as the science of possible and impossible transformations (Deutsch 2013; Marletto 2022; Marletto et al. 2022).
Elaborating upon ideas which one can glean from Pitowsky’s work (1990, 1996, 2002), Cuffaro (2017, 2018a) argues that quantum computation and information theory (QCIT) are practical sciences in this sense, as opposed to the “theoretical sciences” exemplified by physics under its traditional characterisation, and further that recognising this distinction illuminates both areas of activity. On the one hand, practical investigators attempting to isolate and/or quantify the computational resources made available by quantum computers are in danger of conceptual confusion if they are not cognisant of the differences between practical and traditional sciences. On the other hand, one should be wary of the significance of classical computer simulations of quantum mechanical phenomena for the purposes of a foundational analysis of the latter. For example, certain mathematical results can legitimately be thought of as no-go theorems for the purposes of a traditional foundational analysis, and yet are not really relevant for the purpose of characterising the class of efficiently simulable quantum phenomena.
The Church-Turing thesis, which asserts that every function naturally regarded as computable is Turing-computable, is argued by Deutsch to presuppose a physical principle, namely that:
[DP]: Every finitely realisable physical system can be perfectly simulated by a universal model computing machine operating by finite means. (Deutsch 1985)
Since no machine operating by finite means can simulate classical physics’ continuity of states and dynamics, Deutsch argues that DP is false in a classical world. He argues that it is true for quantum physics, however, owing to the existence of the universal quantum Turing machine he introduces in the same paper, which thus proves both DP and the Church-Turing thesis it underlies to be sound. This idea—that the Church-Turing thesis requires a physical grounding—is set into historical context by Lupacchini (2018), who traces its roots in the thought of Gödel, Post, and Gandy. It is criticised by Timpson (2013), who views it as methodologically fruitful, but as nevertheless resting on a confusion regarding the meaning of the Church-Turing thesis, which in itself has to do with the notion of an effective method and has nothing, per se, to do with physics (cf. Sprevak 2022).
In the general philosophy of science literature on scientific explanation there is a distinction between so-called “how-actually” and “how-possibly” explanation, where the former aims to convey how a particular outcome actually came about, and the latter aims to convey how the occurrence of an event could have been possible. That how-actually explanation actually explains is uncontroversial, but the merit (if any) of how-possibly explanation has been debated. While some view how-possibly explanation as genuinely explanatory, others have argued that how-possibly ‘explanation’ is better thought of as, at best, a merely heuristically useful exercise.
It turns out that the science of quantum computation is able to illuminate this debate. Cuffaro (2015) argues that when one examines the question of the source of quantum speed-up, one sees that to answer this question is to compare algorithmic processes of various kinds, and in so doing to describe the possibility spaces associated with these processes. By doing so one explains how it is possible for one process to outperform its rival. Further, Cuffaro argues that in examples like this, once one has answered the how-possibly question, nothing is actually gained by subsequently asking a how-actually question.
Finally, another philosophical implication of the realisation of a large scale quantum computer concerns the long-standing debate in the philosophy of mind over the autonomy of computational theories of the mind (Fodor 1974). In the shift from strong to weak artificial intelligence, the advocates of this view tried to impose constraints on computer programs before they could qualify as theories of cognitive science (Pylyshyn 1984). These constraints include, for example, the nature of physical realisations of symbols and the relations between abstract symbolic computations and the physical causal processes that execute them. The search for the computational feature of these theories, i.e., for what makes them computational theories of the mind, involved isolating some features of the computer as such. In other words, the advocates of weak AI were looking for computational properties, or kinds, that would be machine independent, at least in the sense that they would not be associated with the physical constitution of the computer, nor with the specific machine model that was being used. These features were thought to be instrumental in debates within cognitive science, e.g., the debates surrounding functionalism (Fodor & Pylyshyn 1988).
Note, however, that once the physical Church-Turing thesis is violated, arguably some computational notions cease to be autonomous. In other words, given that quantum computers may be able to efficiently solve classically intractable problems, and hence re-describe the abstract space of computational complexity (Bernstein & Vazirani 1997), computational concepts and even computational kinds such as ‘an efficient algorithm’ or ‘the class NP’ arguably become machine-dependent, and recourse to ‘hardware’ becomes inevitable in any analysis thereof (Cuffaro 2018b; Hagar 2007). Advances in quantum computing may thus militate against the functionalist view about the unphysical character of the types and properties that are used in computer science. Consequently, efficient quantum algorithms may serve as counterexamples to a priori arguments against reductionism (Pitowsky 1996)—although the conceptual challenges to the physicalist version of that view would also seem to be non-trivial (Brown 2023).
Related entries: Bell’s Theorem | Church-Turing Thesis | computability and complexity | computational complexity theory | quantum mechanics | quantum mechanics: collapse theories | quantum mechanics: the role of decoherence in | quantum theory: philosophical issues in | quantum theory: quantum entanglement and information | quantum theory: the Einstein-Podolsky-Rosen argument in | Turing, Alan | Turing machines
The Stanford Encyclopedia of Philosophy is copyright © 2024 by The Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054