Quantum indeterminacy is the apparent necessary incompleteness in the description of a physical system, which has become one of the characteristics of the standard description of quantum physics. Prior to quantum physics, it was thought that a physical system had a determinate state that uniquely determined all the values of its measurable properties, and conversely that the values of its measurable properties uniquely determined the state.
Quantum indeterminacy can be quantitatively characterized by a probability distribution on the set of outcomes of measurements of an observable. The distribution is uniquely determined by the system state, and moreover quantum mechanics provides a recipe for calculating this probability distribution.
Indeterminacy in measurement was not an innovation of quantum mechanics, since it had been established early on by experimentalists that errors in measurement may lead to indeterminate outcomes. By the latter half of the 18th century, measurement errors were well understood, and it was known that they could either be reduced by better equipment or accounted for by statistical error models. In quantum mechanics, however, indeterminacy is of a much more fundamental nature, having nothing to do with errors or disturbance.
An adequate account of quantum indeterminacy requires a theory of measurement. Many theories have been proposed since the beginning of quantum mechanics, and quantum measurement continues to be an active research area in both theoretical and experimental physics.[1] Possibly the first systematic attempt at a mathematical theory was developed by John von Neumann. The kinds of measurements he investigated are now called projective measurements. That theory was based in turn on the theory of projection-valued measures for self-adjoint operators that had been recently developed (by von Neumann and independently by Marshall Stone) and the Hilbert space formulation of quantum mechanics (attributed by von Neumann to Paul Dirac).
In this formulation, the state of a physical system corresponds to a vector of length 1 in a Hilbert space H over the complex numbers. An observable is represented by a self-adjoint (i.e. Hermitian) operator A on H. If H is finite-dimensional, by the spectral theorem, A has an orthonormal basis of eigenvectors. If the system is in state ψ, then immediately after measurement the system will occupy a state that is an eigenvector e of A, and the observed value λ will be the corresponding eigenvalue of the equation Ae = λe. It is immediate from this that measurement in general will be non-deterministic. Quantum mechanics, moreover, gives a recipe for computing a probability distribution Pr on the possible outcomes given the initial system state is ψ. The probability is

Pr(λ) = ⟨E(λ)ψ, ψ⟩ = ‖E(λ)ψ‖²,

where E(λ) is the projection onto the space of eigenvectors of A with eigenvalue λ.
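This recipe can be illustrated with a minimal numerical sketch (added here for illustration; the observable A, the state ψ, and the helper name measurement_distribution below are arbitrary examples, not part of the original formulation). It diagonalizes a Hermitian matrix with NumPy and computes Pr(λ) = ‖E(λ)ψ‖² for each eigenvalue:

```python
import numpy as np

def measurement_distribution(A, psi):
    """Return {eigenvalue: Born-rule probability} for a Hermitian matrix A and unit vector psi."""
    eigvals, eigvecs = np.linalg.eigh(A)              # spectral decomposition of A
    probs = {}
    for lam, v in zip(np.round(eigvals, 10), eigvecs.T):
        # |<v, psi>|^2 summed over an eigenspace equals ||E(lambda) psi||^2
        probs[lam] = probs.get(lam, 0.0) + abs(np.vdot(v, psi))**2
    return probs

# Example: an arbitrary 2x2 Hermitian observable and a unit state vector
A = np.array([[1.0, 1.0 - 1.0j],
              [1.0 + 1.0j, -1.0]])
psi = np.array([1.0, 0.0], dtype=complex)
print(measurement_distribution(A, psi))               # the probabilities sum to 1
```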
In this example, we consider a single spin-1/2 particle (such as an electron) in which we only consider the spin degree of freedom. The corresponding Hilbert space is the two-dimensional complex Hilbert space C², with each quantum state corresponding to a unit vector in C² (unique up to phase). In this case, the state space can be represented geometrically as the surface of a sphere (the Bloch sphere).
The Pauli spin matrices

σ₁ = [[0, 1], [1, 0]],   σ₂ = [[0, −i], [i, 0]],   σ₃ = [[1, 0], [0, −1]]

are self-adjoint and correspond to spin measurements along the 3 coordinate axes.
The Pauli matrices all have the eigenvalues +1, −1.
Thus in the state

ψ = (1/√2) (1, 1),

σ₁ has the determinate value +1, while measurement of σ₃ can produce either +1 or −1, each with probability 1/2. In fact, there is no state in which measurement of both σ₁ and σ₃ have determinate values.
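The probabilities in this example can be checked with a short NumPy sketch (an added illustration; the standard matrix forms of σ₁ and σ₃ given above are assumed):

```python
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)    # the state psi above

for name, op in [("sigma_1", sigma1), ("sigma_3", sigma3)]:
    eigvals, eigvecs = np.linalg.eigh(op)
    probs = {int(round(lam)): abs(np.vdot(v, psi))**2
             for lam, v in zip(eigvals, eigvecs.T)}
    print(name, probs)   # sigma_1: all weight on +1; sigma_3: weight 1/2 on each of +1 and -1
```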
There are various questions that can be asked about the above indeterminacy assertion:

1) Can the apparent indeterminacy be understood as in fact deterministic, but depending on quantities (hidden variables) not modeled in the present theory, which would therefore be incomplete?

2) Can the indeterminacy be understood as a disturbance of the system being measured?
Von Neumann formulated question 1) and provided an argument why the answer had to be no, if one accepted the formalism he was proposing. However, according to Bell, von Neumann's formal proof did not justify his informal conclusion.[2] A definitive but partial negative answer to 1) has been established by experiment: because Bell's inequalities are violated, any such hidden variable(s) cannot be local (see Bell test experiments).
The answer to 2) depends on how disturbance is understood, particularly since measurement entails disturbance (however, note that this is the observer effect, which is distinct from the uncertainty principle). Still, in the most natural interpretation the answer is also no. To see this, consider two sequences of measurements: (A) that measures exclusively σ₁ and (B) that measures only σ₃ of a spin system in the state ψ. The measurement outcomes of (A) are all +1, while the statistical distribution of the measurements (B) is still divided between +1 and −1 with equal probability.
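A simple simulation makes the contrast between the two sequences concrete (an added sketch, not part of the original argument; outcomes are sampled from the Born-rule probabilities, with each shot using a freshly prepared copy of ψ):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

def sample_outcomes(op, state, shots):
    """Sample measurement outcomes of the Hermitian operator op, one per fresh copy of state."""
    eigvals, eigvecs = np.linalg.eigh(op)
    probs = np.array([abs(np.vdot(v, state))**2 for v in eigvecs.T])
    return rng.choice(eigvals, size=shots, p=probs / probs.sum())

a = sample_outcomes(sigma1, psi, 1000)   # sequence (A): every outcome is +1
b = sample_outcomes(sigma3, psi, 1000)   # sequence (B): roughly half +1, half -1
print((a == 1).mean(), (b == 1).mean())
```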
Quantum indeterminacy can also be illustrated in terms of a particle with a definitely measured momentum for which there must be a fundamental limit to how precisely its location can be specified. This quantum uncertainty principle can be expressed in terms of other variables, for example, a particle with a definitely measured energy has a fundamental limit to how precisely one can specify how long it will have that energy. The magnitude involved in quantum uncertainty is on the order of the Planck constant (6.62607015×10⁻³⁴ J⋅Hz⁻¹[3]).
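As a rough order-of-magnitude illustration (an added example; the 1% momentum spread and the typical atomic electron speed below are assumed values, not from the article), the position-momentum form of the relation, Δx·Δp ≥ ħ/2, gives:

```python
hbar = 1.054_571_817e-34      # reduced Planck constant, J*s
m_e = 9.109_383_7015e-31      # electron mass, kg
v = 2.2e6                     # typical electron speed in a hydrogen atom, m/s (assumed)

delta_p = 0.01 * m_e * v              # assumed 1% momentum uncertainty, kg*m/s
delta_x_min = hbar / (2 * delta_p)    # minimum position uncertainty, m
print(f"{delta_x_min:.2e} m")         # roughly 2.6e-9 m, i.e. a few nanometres
```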
Quantum indeterminacy is the assertion that the state of a system does not determine a unique collection of values for all its measurable properties. Indeed, according to the Kochen–Specker theorem, in the quantum mechanical formalism it is impossible that, for a given quantum state, each one of these measurable properties (observables) has a determinate (sharp) value. The values of an observable will be obtained non-deterministically, in accordance with a probability distribution that is uniquely determined by the system state. Note that the state is destroyed by measurement, so when we refer to a collection of values, each measured value in this collection must be obtained using a freshly prepared state.
This indeterminacy might be regarded as a kind of essential incompleteness in our description of a physical system. Notice, however, that the indeterminacy as stated above only applies to the values of measurements, not to the quantum state. For example, in the spin-1/2 example discussed above, the system can be prepared in the state ψ by using measurement of σ₁ as a filter that retains only those particles for which σ₁ yields +1. By the so-called von Neumann postulates, immediately after the measurement the system is assuredly in the state ψ.
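The filtering preparation can be sketched numerically (an added illustration; the helper name filter_plus_one is hypothetical): project an arbitrary input state onto the +1 eigenspace of σ₁ and renormalize, which is the post-measurement state prescribed by the projection postulate.

```python
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)

def filter_plus_one(state):
    """Keep only the sigma_1 = +1 component of state and renormalize (projection postulate)."""
    eigvals, eigvecs = np.linalg.eigh(sigma1)
    e_plus = eigvecs[:, np.argmax(eigvals)]        # eigenvector with eigenvalue +1
    projected = np.vdot(e_plus, state) * e_plus    # E(+1) applied to the state
    norm = np.linalg.norm(projected)
    if norm == 0:
        raise ValueError("state has no sigma_1 = +1 component")
    return projected / norm

print(filter_plus_one(np.array([1, 0], dtype=complex)))   # (1, 1)/sqrt(2), up to phase
```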
However, Albert Einstein believed that the quantum state cannot be a complete description of a physical system and, it is commonly thought, never came to terms with quantum mechanics. In fact, Einstein, Boris Podolsky and Nathan Rosen showed that if quantum mechanics is correct, then the classical view of how the real world works (at least after special relativity) is no longer tenable. This view included the following two ideas:

1) A measurable property of a physical system whose value can be predicted with certainty is an element of physical reality.

2) Effects of local actions, such as measurements, have a finite speed of propagation.
This failure of the classical view was one of the conclusions of the EPR thought experiment, in which two remotely located observers, now commonly referred to as Alice and Bob, perform independent measurements of spin on a pair of electrons prepared at a source in a special state called a spin singlet state. It was a conclusion of EPR, using the formal apparatus of quantum theory, that once Alice measured spin in the x direction, Bob's measurement in the x direction was determined with certainty, whereas immediately before Alice's measurement Bob's outcome was only statistically determined. From this it follows that either the value of spin in the x direction is not an element of reality or the effect of Alice's measurement has an infinite speed of propagation.
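The singlet correlation can be illustrated numerically (an added sketch, not part of the EPR argument itself; the standard singlet state and σ_x eigenvectors are assumed):

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# spin singlet: (|up, down> - |down, up>) / sqrt(2)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# eigenvectors of sigma_x with eigenvalues +1 and -1
plus = (up + down) / np.sqrt(2)
minus = (up - down) / np.sqrt(2)

for a_name, a in [("+1", plus), ("-1", minus)]:
    for b_name, b in [("+1", plus), ("-1", minus)]:
        p = abs(np.vdot(np.kron(a, b), singlet))**2
        print(f"Alice {a_name}, Bob {b_name}: {p:.2f}")
# Only the anticorrelated pairs (+1, -1) and (-1, +1) occur (probability 0.5 each),
# so Alice's x-direction result fixes Bob's x-direction result with certainty.
```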
We have described indeterminacy for a quantum system that is in a pure state. Mixed states are a more general kind of state obtained by a statistical mixture of pure states. For mixed states, the "quantum recipe" for determining the probability distribution of a measurement is as follows:
Let A be an observable of a quantum mechanical system. A is given by a densely defined self-adjoint operator on H. The spectral measure of A is a projection-valued measure defined by the condition

E_A(U) = 1_U(A),
for every Borel subset U of R, where 1_U denotes the indicator function of U. Given a mixed state S, we introduce the distribution of A under S as follows:

D_A(U) = Tr(E_A(U) S).
This is a probability measure defined on the Borel subsets of R that is the probability distribution obtained by measuring A in S.
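In the finite-dimensional case this trace formula is easy to illustrate (an added example; the helper name mixed_state_distribution is introduced here for illustration only):

```python
import numpy as np

def mixed_state_distribution(A, S):
    """Return {eigenvalue: Tr(E(lambda) S)} for a Hermitian matrix A and density matrix S."""
    eigvals, eigvecs = np.linalg.eigh(A)
    dist = {}
    for lam, v in zip(np.round(eigvals, 10), eigvecs.T):
        E = np.outer(v, v.conj())                        # rank-1 piece of the projection E(lambda)
        dist[lam] = dist.get(lam, 0.0) + np.trace(E @ S).real
    return dist

# Example: the maximally mixed state measured with sigma_1
sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
S = 0.5 * np.eye(2, dtype=complex)
print(mixed_state_distribution(sigma1, S))               # probability 1/2 for each of -1 and +1
```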
Quantum indeterminacy is often understood as information (or lack of it) whose existence we infer, occurring in individual quantum systems, prior to measurement. Quantum randomness is the statistical manifestation of that indeterminacy, witnessable in results of experiments repeated many times. However, the relationship between quantum indeterminacy and randomness is subtle and can be considered differently.[4]
In classical physics, experiments of chance, such as coin-tossing and dice-throwing, are deterministic, in the sense that perfect knowledge of the initial conditions would render outcomes perfectly predictable. The ‘randomness’ stems from ignorance of physical information in the initial toss or throw. In diametrical contrast, in the case of quantum physics, the theorems of Kochen and Specker,[5] the inequalities of John Bell,[6] and the experimental evidence of Alain Aspect[7][8] all indicate that quantum randomness does not stem from any such physical information.
In 2008, Tomasz Paterek et al. provided an explanation in terms of mathematical information. They proved that quantum randomness is, exclusively, the output of measurement experiments whose input settings introduce logical independence into quantum systems.[9][10]
Logical independence is a well-known phenomenon in mathematical logic. It refers to the null logical connectivity that exists between mathematical propositions (in the same language) that neither prove nor disprove one another.[11]
In the work of Paterek et al., the researchers demonstrate a link connecting quantum randomness and logical independence in a formal system of Boolean propositions. In experiments measuring photon polarisation, Paterek et al. demonstrate statistics correlating predictable outcomes with logically dependent mathematical propositions, and random outcomes with propositions that are logically independent.[12][13]
In 2020, Steve Faulkner reported on work following up on the findings of Tomasz Paterek et al., showing what logical independence in the Paterek Boolean propositions means in the domain of matrix mechanics proper. He showed how indeterminacy's indefiniteness arises in evolved density operators representing mixed states, where measurement processes encounter irreversible 'lost history' and ingression of ambiguity.[14]