Quantum mechanics, with its revolutionary implications, has posed innumerable problems to philosophers of science. In particular, it has suggested reconsidering basic concepts such as the existence of a world that is, at least to some extent, independent of the observer, the possibility of getting reliable and objective knowledge about it, and the possibility of taking (under appropriate circumstances) at least some properties to be objectively possessed by physical systems. It has also raised many other questions which are well known to those involved in the debate on the interpretation of this pillar of modern science. One can argue that most of the problems are not only due to the intrinsic revolutionary nature of the phenomena which have led to the development of the theory. They are also related to the fact that, in its standard formulation and interpretation, quantum mechanics is a theory which is excellent (in fact it has had an unprecedented success in the history of science) in telling us everything about what we observe, but it meets with serious difficulties in telling us what there is. We are making here specific reference to the central problem of the theory, usually referred to as the measurement problem, which has accompanied quantum theory since its birth. It is just one of the many attempts to overcome the difficulties posed by this problem that has led to the development of Collapse Theories, i.e., to the Dynamical Reduction Program (DRP). As we shall see, this approach consists in accepting that the dynamical equation of the standard theory should be modified by the addition of stochastic and nonlinear terms.
The nice fact is that the resulting theory is capable, on the basis of a single dynamics which is assumed to govern all natural processes, of accounting at the same time for all well-established facts about microscopic systems as described by the standard theory, as well as for the so-called postulate of wave packet reduction (WPR), which accompanies the interaction of a microscopic system with a measuring device. As is well known, such a postulate is assumed in the standard scheme just in order to guarantee that measurements have outcomes but, as we shall discuss below, it meets with insurmountable difficulties if one tries to derive it by assuming the measurement itself to be a process governed by the linear laws of the theory. Finally, Collapse Theories account in a completely satisfactory way for the classical behavior of macroscopic systems.
Two specifications are necessary in order to make clear from the beginning what the limitations and the merits of the program are. The only satisfactory explicit models of this type (the model proposed by Ghirardi, Rimini, and Weber (1986), usually referred to as the GRW theory, as well as all subsequent developments) are phenomenological attempts to solve a foundational problem. At present, they involve phenomenological parameters which, if the theory is taken seriously, acquire the status of new constants of nature. Moreover, the problem of building satisfactory relativistic generalizations of collapse models is very difficult, though some improvements have been made, which have elucidated some crucial points.
In spite of their phenomenological character, Collapse Theories are assuming a growing relevance, since they provide a clear resolution of the difficulties of the formalism, to close the circle in the precise sense defined by Abner Shimony (1989). Moreover, they have allowed a clear identification of the formal features which should characterize any unified theory of micro and macro processes.
Last but not least, Collapse Theories qualify themselves as rival theories of quantum mechanics, and one can easily identify some of their physical implications which, in principle, would allow crucial tests discriminating between the two. Getting stringent indications from such tests requires experiments whose technology has been developed only very recently. Actually, it is just due to remarkable improvements in the field of opto-mechanics and cold atoms, as well as nuclear physics, that specific bounds have already been obtained for the parameters characterizing the theories under investigation; more importantly, precise families of physical processes in which a violation of the linear nature of the standard formalism might emerge have been clearly identified and are the subject of systematic investigations which might lead, in the end, to relevant discoveries.
A very natural question, which all scientists who are concerned about the meaning and the value of science have to face, is whether one can develop a coherent worldview that can accommodate our knowledge of natural phenomena as it is embodied in our best theories. Such a program meets serious difficulties with quantum mechanics, essentially because of two formal aspects of the theory according to its standard formulation, which are common to all of its versions, from the original nonrelativistic formulations of the 1920s to current quantum field theories: the linear nature of the state space and of the evolution equation; in other words, the validity of the superposition principle and the related phenomenon of entanglement, which, in Schrödinger’s words:
is not one but the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought (Schrödinger 1935: 807).
These two formal features have embarrassing consequences, since they imply
For the sake of generality, we shall first of all present a very concise sketch of ‘the rules of the quantum game’.
Let us recall the axiomatic structure of quantum theory:
States of physical systems are associated with normalized vectors in a Hilbert space, a complex, infinite-dimensional, complete and separable linear vector space equipped with a scalar product. Linearity implies that the superposition principle holds: if \(\ket{f}\) is a state and \(\ket{g}\) is a state, then (for \(a\) and \(b\) arbitrary complex numbers) also
\[ \ket{K} = a\ket{f} + b\ket{g} \]
is a state. Moreover, the state evolution is linear, i.e., it preserves superpositions: if \(\ket{f,t}\) and \(\ket{g,t}\) are the states obtained by evolving the states \(\ket{f,0}\) and \(\ket{g,0}\), respectively, from the initial time \(t=0\) to the time \(t\), then \(a\ket{f,t} + b\ket{g,t}\) is the state obtained by the evolution of \(a\ket{f,0} + b\ket{g,0}\). Finally, the completeness assumption is made, i.e., that the knowledge of its statevector represents, in principle, the most accurate information one can have about the state of an individual physical system.
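The linearity of the evolution can be checked on a toy example. The following sketch is our own illustration, not part of the standard presentation: a random \(2\times 2\) unitary \(U\) stands in for the Schrödinger propagator over a fixed time, and one verifies that evolving a superposition gives the superposition of the evolved states.

```python
import numpy as np

# Toy illustration (assumption: a two-level system; a random unitary U
# plays the role of the Schrodinger propagator over a fixed time t).
rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(M)          # QR of a random matrix yields a unitary Q

f0 = np.array([1.0, 0.0], dtype=complex)   # |f, 0>
g0 = np.array([0.0, 1.0], dtype=complex)   # |g, 0>
a, b = 0.6, 0.8j                           # arbitrary complex amplitudes

# Evolving the superposition equals superposing the evolved states.
lhs = U @ (a * f0 + b * g0)
rhs = a * (U @ f0) + b * (U @ g0)
assert np.allclose(lhs, rhs)
```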
Observable quantities are represented by self-adjoint operators \(B\) on the Hilbert space containing the possible states of the system. The associated eigenvalue equations \(B\ket{b_k} = b_k \ket{b_k}\) and the corresponding eigenmanifolds (the linear manifolds spanned by the eigenvectors associated to a given eigenvalue, also called eigenspaces) play a basic role for the predictive content of the theory. In fact:
We stress that, according to the above scheme, quantum mechanics makes only conditional probabilistic predictions (conditional on the measurement being actually performed) for the outcomes of prospective (and in general mutually incompatible) measurement processes. Only if a state belongs to an eigenmanifold of the observable which is going to be measured, already before the act of measurement, can one predict the outcome with certainty. In all other cases—if the completeness assumption is made—one has objective nonepistemic probabilities for the different outcomes.
The orthodox position gives a very simple answer to the question: what determines the outcome, when different outcomes are possible? The answer is “nothing”—the theory is complete and therefore it is illegitimate to raise any question about properties possessed prior to a measurement, when different outcomes of a measurement of an observable have non-vanishing probabilities of occurring, if the measurement is actually performed. Correspondingly, the referents of the theory are only the results of measurements. These are to be described in classical terms and involve in general mutually exclusive physical conditions.
Regarding the legitimacy of attributing properties to physical systems, one could say that quantum mechanics warns us against attributing too many properties to physical systems. However—with Einstein—one can adopt, as a sufficient condition for the existence of an objective individual property, the possibility of predicting with certainty the outcome of a measurement. This implies that, whenever the overall statevector factorizes into a state of the Hilbert space of the physical system \(S\) times a state for the rest of the universe, \(S\) does possess some properties (actually a complete set of properties, i.e., those associated with appropriate maximal sets of commuting observables).
Before concluding this section, some comments about the measurement process are of relevance. Quantum theory was created to describe the behavior of microscopic phenomena as it was emerging from observations. In order to obtain information about systems at the molecular and (sub-)atomic scale, one must be able to establish strict correlations between the states of the microscopic system and the states of the measuring devices, which we directly perceive. Within the formalism, this is described by considering appropriate micro-macro interactions. The fact that when the measurement is completed one can make statements about the outcome is accounted for by the already mentioned WPR postulate (Dirac 1935): a measurement always causes a system to jump into an eigenstate of the observed quantity. Correspondingly, the statevector of the apparatus also ‘jumps’ into the manifold associated with the recorded outcome.
We shall now clarify why the formalism we have just presented gives rise to the measurement problem. To this purpose we shall, first of all, discuss the standard oversimplified argument based on the so-called von Neumann ideal measurement scheme.
Let us begin by recalling how measurements are described in the standard formalism:
Suppose that a microsystem \(S\), immediately before the measurement of one of its observables, say \(B\), is in the eigenstate \(\ket{b_j}\) of the corresponding operator. The apparatus (a macrosystem) used to gain information about \(B\) is initially assumed to be in a precise macroscopic state, its ready state, corresponding to a definite macro property—e.g., its pointer points at 0 on a scale. Since the apparatus \(A\) is made of elementary particles, atoms and so on, it should be possible to describe it within quantum mechanics, which will associate a well defined state vector \(\ket{A_0}\) to it (at least in principle). One then assumes that there is an appropriate system-apparatus interaction lasting for a finite time, such that when the initial state of the apparatus is triggered by the state \(\ket{b_j}\), it ends up in a final configuration \(\ket{A_j}\), which is macroscopically distinguishable from the initial one and from the other configurations \(\ket{A_k}\) in which it would end up if triggered by a different eigenstate \(\ket{b_k}\). Moreover, for simplicity one assumes that the system is left by the measurement in its initial state. In brief, one assumes that one can dispose things in such a way that the system-apparatus interaction can be described as:
\[\begin{align}\tag{1} (\textit{initial state}){:}\ & \ket{b_k} \ket{A_0} \\ (\textit{final state}){:}\ & \ket{b_k} \ket{A_k} \end{align}\]
Equation (1) and the hypothesis that the superposition principle governs all natural processes tell us that, if the initial state of the microsystem is a linear superposition of different eigenstates (for simplicity we will consider only two of them), one has:
\[\begin{align}\tag{2} (\textit{initial state}){:}\ & (a\ket{b_k} + b\ket{b_j})\ket{A_0 } \\ (\textit{final state}){:}\ & (a\ket{b_k} \ket{A_k} + b\ket{b_j} \ket{A_j}). \end{align}\]
Some remarks about this are in order:
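The two steps above can be reproduced numerically. The following sketch is our own toy construction (a two-dimensional system space, a three-dimensional ‘pointer’ space, and a map \(V\) defined from the two transitions in (1) and extended by linearity); the final state then comes out exactly as the entangled superposition in (2).

```python
import numpy as np

# System eigenstates |b_k>, |b_j>; apparatus states |A_0>, |A_k>, |A_j>
# (toy dimensions chosen for illustration only).
bk = np.array([1, 0], dtype=complex)
bj = np.array([0, 1], dtype=complex)
A0 = np.array([1, 0, 0], dtype=complex)
Ak = np.array([0, 1, 0], dtype=complex)
Aj = np.array([0, 0, 1], dtype=complex)

# Linear map realizing (1): |b_k>|A_0> -> |b_k>|A_k>, |b_j>|A_0> -> |b_j>|A_j>.
# It is defined on the relevant subspace and extended by linearity.
V = (np.outer(np.kron(bk, Ak), np.kron(bk, A0))
     + np.outer(np.kron(bj, Aj), np.kron(bj, A0)))

a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)
final = V @ np.kron(a * bk + b * bj, A0)
expected = a * np.kron(bk, Ak) + b * np.kron(bj, Aj)   # equation (2)
assert np.allclose(final, expected)

# The final state is entangled: the Schmidt rank (number of nonzero
# singular values of the reshaped amplitude matrix) is 2, not 1.
schmidt = np.linalg.svd(final.reshape(2, 3), compute_uv=False)
assert (schmidt > 1e-12).sum() == 2
```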
Nowadays, there is a general consensus that this solution is absolutely unacceptable. It corresponds to assuming that the linear nature of the theory is broken at some point, without clearly specifying when. Thus, quantum theory is unable to explain how it can happen that apparatuses behave as required by the WPR postulate (which is one of the axioms of the theory), instead of satisfying the Schrödinger equation. Even if one were to accept that quantum mechanics has a limited field of applicability, so that it does not account for all natural processes and, in particular, it breaks down at the macrolevel, it is clear that the theory does not contain any precise criterion for identifying the borderline between micro and macro, linear and nonlinear, deterministic and stochastic, reversible and irreversible. To use the words of J.S. Bell, there is nothing in the theory fixing such a borderline, and the split between the two above types of processes is fundamentally shifty.
If one looks at the historical debate on this problem, one can easily see that it is precisely by continuously resorting to this ambiguity about the split that adherents of the Copenhagen orthodoxy or easy solvers (Bell 1990) of the measurement problem have rejected the criticism of the heretics (Gottfried 2000). For instance, Bohr succeeded in rejecting Einstein’s criticisms at the Solvay Conferences by stressing that some macroscopic parts of the apparatus had to be treated fully quantum mechanically; von Neumann and Wigner displaced the split by locating it between the physical processes and consciousness (but what is a conscious being, from the physical point of view?), and so on.
It is not our task to review here the various attempts to solve the above difficulties. One can find many exhaustive treatments of this problem in the literature. We conclude this section by discussing how the measurement problem is indeed a consequence of very general, in fact unavoidable, assumptions on the nature of measurements, and not specifically of the assumptions of the (oversimplified) von Neumann model. This was established in a series of theorems of increasing generality, notably the ones by Fine (1970), d’Espagnat (1971), Shimony (1974), Brown (1986) and Busch & Shimony (1996). Possibly the most general and direct proof is given by Bassi and Ghirardi (2000), whose results we briefly summarize. The assumptions of the theorem are:
From these very general assumptions one can show that, repeating the measurement on systems prepared in the superposition of the two given eigenstates, in the great majority of cases one ends up in a superposition of macroscopically and perceptually different situations of the whole universe. This, again, is the measurement problem of quantum mechanics, which calls for a resolution.
The debate on the macro-objectification problem continued for many years after the early days of quantum mechanics. In the early 1950s an important step was taken by D. Bohm, who presented (Bohm 1952) a mathematically precise deterministic completion of quantum mechanics (see the entry on Bohmian Mechanics), which was anticipated by de Broglie in the 1920s. In the area of Collapse Theories, one should mention the contribution by Bohm and Bub (1966), which was based on the interaction of the statevector with Wiener-Siegel hidden variables. But let us come to Collapse Theories in the sense currently attached to this expression.
Important investigations during the 1970s can be considered as preliminary steps for the subsequent developments. In those years the school of L. Fonda in Italy was concerned with quantum decay processes and in particular with the possibility of deriving, within a quantum context, the exponential decay law (Fonda, Ghirardi, and Rimini 1978). Some features of this approach turned out to be relevant for the subsequent development of Collapse Theories:
Obviously, in these papers the considered reduction processes were not assumed to be ‘spontaneous and fundamental’ natural processes, but due to system-environment interactions. Accordingly, these attempts did not represent original proposals for solving the macro-objectification problem, yet they paved the way for the elaboration of the GRW theory.
Around the same years, P. Pearle (1976, 1979), and subsequently N. Gisin (1984a,b) and Diosi (1988), developed models which accounted for the reduction process in terms of stochastic differential equations. While they were looking for a new dynamical equation offering a solution to the macro-objectification problem, they did not succeed in identifying the states to which the dynamical equation should lead. These states were assumed to depend on the particular measurement process one was considering, leaving incomplete the program of formulating a universal dynamics accounting for the quantum properties of microscopic systems together with the classical properties of macroscopic objects. In those years N. Gisin formulated an interesting argument (Gisin 1989) according to which nonlinear modifications of the Schrödinger equation are in general unacceptable, since they imply the possibility of sending superluminal signals. This argument eventually led to the proof that only a very specific class of nonlinear (and stochastic) modifications of the Schrödinger equation is physically acceptable (Caiaffa, Smirne & Bassi 2017, and references therein), the class to which collapse models belong.
As already mentioned, the Collapse Theory we are going to describe amounts to accepting a modification of the standard evolution law of quantum theory such that microprocesses and macroprocesses are governed by a single dynamics. Such a dynamics must imply that the micro-macro interaction in a measurement process leads to WPR. Bearing this in mind, recall that the characteristic feature distinguishing the quantum evolution from WPR is that, while Schrödinger’s equation is linear and deterministic (at the wave function level), WPR is nonlinear and stochastic. It is then natural to consider, as was suggested for the first time in the above quoted papers by P. Pearle, the possibility of nonlinear and stochastic modifications of the standard Schrödinger dynamics. Such modifications, to be universal, must satisfy one important requirement, called the trigger problem by Pearle (1989): the reduction mechanism must become more and more effective in going from the micro to the macro domain. The solution to this problem constitutes the central feature of Collapse Theories of the GRW type. To discuss these points, let us briefly review the GRW model, the first consistent model to appear in the literature.
Within such a model, initially referred to as QMSL (Quantum Mechanics with Spontaneous Localizations), the problem of the choice of the preferred basis is solved by noting that the most embarrassing superpositions, at the macroscopic level, are those involving different spatial locations of macroscopic objects. Actually, as Einstein stressed, this is a crucial point which has to be faced by anybody aiming at taking a macro-objective position about natural phenomena: ‘A macro-body must always have a quasi-sharply defined position in the objective description of reality’ (Born 1971: 223). Accordingly, QMSL considers the possibility of spontaneous processes, assumed to occur instantaneously and at the microscopic level, which tend to suppress the linear superpositions of differently localized states. The required trigger mechanism must then follow consistently.
The key assumption of QMSL is the following: each elementary constituent of any physical system is subjected, at random times, to random and spontaneous localization processes (which we will call hittings) around appropriate positions. To have a precise mathematical model one has to be very specific about the above assumptions, making explicit HOW the process works (which modifications of the wave function are induced by the localizations), WHERE it occurs (what determines the occurrence of a localization at a certain position rather than at another one), and finally WHEN (at what times) it occurs. The answers to these questions are now presented.
Let us consider a system of \(N\) distinguishable particles and let us denote by \(F(\boldsymbol{q}_1, \boldsymbol{q}_2, \ldots, \boldsymbol{q}_N)\) the coordinate representation (wave function) of the state vector (we disregard spin variables since hittings are assumed not to act on them).
If the \(i\)-th particle suffers a hitting around the point \(\boldsymbol{x}\), the wave function is multiplied by a Gaussian

\[\tag{3} G(\boldsymbol{q}_i, \boldsymbol{x}) = K \exp\left[-\frac{(\boldsymbol{q}_i - \boldsymbol{x})^2}{2d^2}\right], \]

where \(d\) represents the localization accuracy. Let us denote as
\[ L_i (\boldsymbol{q}_1, \boldsymbol{q}_2, \ldots, \boldsymbol{q}_N ; \boldsymbol{x}) = F(\boldsymbol{q}_1, \boldsymbol{q}_2, \ldots, \boldsymbol{q}_N) G(\boldsymbol{q}_i, \boldsymbol{x}) \]
the wave function immediately after the localization, as yet unnormalized.
It is straightforward to see that the hitting process leads, when it occurs, to the localization of states of the particle which are initially delocalized over distances greater than \(d\). As a simple example we can consider a single particle whose wavefunction is different from zero only in two small and far apart regions \(h\) and \(t\). Suppose that a localization occurs around \(h\); the state after the hitting is then appreciably different from zero only in a region around \(h\) itself. A completely analogous argument holds if the hitting takes place around \(t\). Regarding the possibility for the state to collapse around points which are far from both \(h\) and \(t\), one easily sees that the probability density for such hittings, according to the multiplication rule determining \(L_i\), is practically zero; moreover, if such a hitting were to occur, the normalized wave function of the system would remain almost unchanged.
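This two-region example can be simulated directly. The sketch below is our own numerical illustration, with arbitrary made-up units (peaks of width 0.5 at \(\pm 10\) and \(d = 1\)): it multiplies a two-peaked wavefunction by the localization Gaussian, renormalizes, and confirms the claims above.

```python
import numpy as np

d = 1.0                              # localization accuracy (arbitrary units)
q = np.linspace(-30, 30, 6001)
dq = q[1] - q[0]

def peak(center, width=0.5):
    return np.exp(-(q - center)**2 / (2 * width**2))

# Wavefunction nonzero only in two small, far apart regions h and t.
psi = peak(-10.0) + peak(+10.0)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dq)

def hit(state, x):
    """Multiply by the localization Gaussian around x and renormalize."""
    L = state * np.exp(-(q - x)**2 / (2 * d**2))   # unnormalized L_i
    prob = np.sum(np.abs(L)**2) * dq               # hitting probability weight
    return L / np.sqrt(prob), prob

after_h, p_h = hit(psi, -10.0)   # hitting around h
_, p_far = hit(psi, 0.0)         # hitting far from both regions

# After a hitting around h, essentially no weight remains near t...
assert np.sum(np.abs(after_h[q > 0])**2) * dq < 1e-6
# ...and hittings far from both regions are overwhelmingly improbable.
assert p_far < 1e-6 * p_h
```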
We can now discuss the most important feature of the theory: the trigger mechanism. To understand the way in which the spontaneous localization mechanism is enhanced by increasing the number of particles which are in far apart spatial regions (as compared to \(d\)), one can consider, for simplicity, the superposition \(\ket{S}\), with equal weights, of two macroscopic pointer states \(\ket{H}\) and \(\ket{T}\), corresponding to two different pointer positions \(H\) and \(T\), respectively. Taking into account that the pointer is ‘almost rigid’ and contains a macroscopic number \(N\) of microscopic constituents, the state can be written, in obvious notation, as:
\[\tag{4} \ket{S} = \frac{1}{\sqrt{2}}[\ket{1 \near h_1} \ldots \ket{N \near h_N} + \ket{1 \near t_1} \ldots \ket{N \near t_N}], \]
where \(h_i\) is near \(H\), and \(t_i\) is near \(T\). The states appearing in the first term on the right-hand side of equation (4) are different from zero only when their arguments \((1, \ldots, N)\) are all near \(H\), while those of the second term are different from zero only when they are all near \(T\). It is now evident that if any of the particles (say, the \(i\)-th particle) undergoes a hitting process, for example around \(h_i\), the multiplication prescription leads practically to the suppression of the second term in (4). Thus, any spontaneous localization of any of the constituents amounts to a localization of the pointer. The hitting frequency is therefore effectively amplified proportionally to the number of constituents. Notice that, for simplicity, the argument refers to an almost rigid body, one for which all particles are around \(H\) in one of the states of the superposition and around \(T\) in the other state. It should however be obvious that what really matters in amplifying the reductions is the number of particles which are in different positions in the two states appearing in the superposition itself.
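The amplification can be seen in a back-of-the-envelope sketch (again our own illustration with made-up numbers, here a branch separation of \(10d\)): a single hitting of a single constituent multiplies the amplitude of each branch of the superposition by the Gaussian factor appropriate to that branch, which suppresses the distant branch of the whole pointer; and since each of the \(N\) constituents is hit at rate \(f\), some constituent is hit at rate \(Nf\).

```python
import numpy as np

d = 1.0                      # localization accuracy
D = 10.0 * d                 # separation between the two pointer positions
amp = np.array([1.0, 1.0]) / np.sqrt(2)   # equal weights of branches H and T

# A hitting of particle i around h_i multiplies branch H by ~1 and branch T
# by the Gaussian evaluated a distance D away.
factors = np.exp(-np.array([0.0, D])**2 / (2 * d**2))
after = amp * factors
after /= np.linalg.norm(after)

# One hit on one constituent localizes the whole pointer around H.
assert after[1] < 1e-10 and abs(after[0] - 1.0) < 1e-12
```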
Under these premises we can now proceed to choose the parameters \(d\) and \(f\) of the theory, i.e., the localization accuracy and the mean localization frequency. The argument given above suggests how one can choose the parameters in such a way that the quantum predictions for microscopic systems remain fully valid, while the embarrassing macroscopic superpositions in measurement-like situations are suppressed in very short times. Accordingly, as a consequence of the unified dynamics governing all physical processes, individual macroscopic objects acquire definite macroscopic properties. The choice suggested in the GRW model is:
\[\begin{align}\tag{5} f &= 10^{-16} \text{ s}^{-1} \\ d &= 10^{-5} \text{ cm} \end{align}\]
It follows that a microscopic system undergoes a localization, on average, every hundred million years, while a macroscopic one undergoes a localization every \(10^{-7}\) seconds. With reference to the challenging version of the macro-objectification problem presented by Schrödinger with the famous example of his cat, J.S. Bell comments (1987: 44): [within QMSL] the cat is not both dead and alive for more than a split second. Besides the extremely low frequency of the hittings for microscopic systems, also the fact that the localization width is large compared to the dimensions of atoms (so that even when a localization occurs it does very little violence to the internal economy of an atom) plays an important role in guaranteeing that no violation of well-tested quantum mechanical predictions is implied by the modified dynamics.
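These timescales follow from simple arithmetic; the sketch below assumes an Avogadro-scale pointer of \(N \sim 10^{23}\) nucleons (our assumption, for illustration).

```python
f = 1e-16                    # mean localization frequency per particle (s^-1)
N = 1e23                     # constituents of a macroscopic pointer (assumed)

t_micro = 1.0 / f            # mean time between hits for a single particle
t_macro = 1.0 / (N * f)      # any single hit localizes the whole pointer

seconds_per_year = 3.15e7
assert 1e8 < t_micro / seconds_per_year < 1e9   # ~ hundred million years
assert abs(t_macro - 1e-7) < 1e-12              # ~ 1e-7 seconds
```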
Contrary to standard quantum mechanics, the GRW theory allows one to locate precisely the ‘split’ between micro and macro, reversible and irreversible, quantum and classical. This is another way of saying that GRW solves the measurement problem. The transition between the two types of ‘regimes’ is governed by the number of particles which are localized by the collapse process. A consequence of this is that GRW makes predictions which differ from the standard quantum mechanical ones. We will come back to this important issue.
Concerning the choice of the parameters of the model, it has to be stressed that, as is obvious, the just mentioned transition region from micro to macro depends crucially on their values. Departing from the original choice of GRW, Adler (2003) has suggested increasing the value of \(f\) by a factor of the order of \(10^9\). The reasons for this derive from requiring that, during latent image formation in photography, the collapse becomes effective right after a grain of the emulsion has been excited; this is equivalent to requiring that when a human eye is hit by a few photons (the perceptual threshold being very low) reduction takes place in the rods of the eye (Bassi, Deckert and Ferialdi 2010). As we will discuss in what follows, if one takes the original GRW value for \(f\), reduction cannot occur in the rods (because a relatively small number of molecules—less than \(10^5\)—are affected), but only during the transmission of the nervous signal within the brain, a process which involves the displacement of a number of ions of the order of \(10^{12}\).
It is interesting to remark that the drastic change suggested by Adler (2003) has physical implications which have already been experimentally falsified; see Curceanu, Hiesmayr, and Piscicchia 2015; Bassi, Deckert, and Ferialdi 2010; Vinante et al. 2016; and Toroš and Bassi 2018.
The model just presented (QMSL) has a serious drawback: it does not allow one to deal with systems containing identical constituents, because it does not respect the symmetry or antisymmetry requirements for such particles. A quite natural idea to overcome this difficulty is to relate the hitting process not to the individual particles but to the particle number density averaged over an appropriate volume. This can be done by introducing a new phenomenological parameter in the theory, which however can be eliminated by an appropriate limiting procedure (see below).
Another way to overcome this problem derives from injecting the physically appropriate principles of the GRW model within the original approach of P. Pearle. This line of thought has led to what is known as the CSL (Continuous Spontaneous Localization) model (Pearle 1989; Ghirardi, Pearle, and Rimini 1990), in which the discontinuous jumps which characterize QMSL are replaced by a continuous stochastic evolution in the Hilbert space (a sort of Brownian motion of the statevector).
The basic working principles of CSL are similar to those of the GRW model, though the technical details might differ significantly. For a review see Bassi and Ghirardi 2003; Adler 2007; Bassi, Lochan, et al. 2013. In this regard, it is interesting to note (Ghirardi, Pearle, & Rimini 1990) that for any CSL dynamics there is a hitting dynamics which, from a physical point of view, is ‘as close to it as one wants’. Instead of entering into the details of the CSL formalism, it is useful, for the discussion below, to analyze a simplified version of it.
With the aim of understanding the physical implications of the CSL model, such as the rate of suppression of coherence, we now make some simplifying assumptions. First, we assume that we are dealing with only one kind of particle (e.g., the nucleons); secondly, we disregard the standard Schrödinger term in the evolution; and, finally, we divide the whole space into cells of volume \(d^3\). We denote by \(\ket{n_1, n_2, \ldots}\) a Fock state in which there are \(n_i\) particles in cell \(i\), and we consider a superposition of two states \(\ket{n_1, n_2, \ldots}\) and \(\ket{m_1, m_2, \ldots}\) which differ in the occupation numbers of the various cells of the universe. With these assumptions it is quite easy to prove that the rate of suppression of the coherence between the two states (so that the final state is one of the two and not their superposition) is governed by the quantity:
\[\tag{6} \exp\{-f [(n_1 - m_1)^2 + (n_2 - m_2)^2 + \ldots]t\}, \]
all cells of the universe appearing in the sum within the square brackets in the exponent. Apart from differences relating to the identity of the constituents, the overall physics is quite similar to that implied by QMSL.
Equation (6) offers the opportunity of discussing the possibility of relating the suppression of coherence to gravitational effects. In fact, with reference to this equation we notice that the worst case scenario (from the point of view of the time necessary to suppress coherence) is the one corresponding to the superposition of two states for which the occupation numbers of the individual cells differ only by one unit. In this case the amplifying effect of taking the square of the differences disappears. Let us then ask the question: how many nucleons (at worst) should occupy different cells, in order for the given superposition to be dynamically suppressed within the time which characterizes human perceptual processes? Since such a time is of the order of \(10^{-2}\) sec and \(f = 10^{-16}\text{ sec}^{-1}\), the number of displaced nucleons must be of the order of \(10^{18}\), which corresponds, to a remarkable accuracy, to a Planck mass. This figure seems to point in the same direction as Penrose’s attempts to relate reduction mechanisms to quantum gravitational effects (Penrose 1989).
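The estimate can be reproduced in a few lines (a sketch; the nucleon and Planck masses below are standard values quoted for comparison, not parameters of the model):

```python
f = 1e-16          # CSL/GRW rate (s^-1)
t = 1e-2           # human perceptual timescale (s)

# Worst case of equation (6): n displaced nucleons, each contributing
# (n_i - m_i)^2 = 1, damp coherence as exp(-f * n * t); the damping is of
# order one when n = 1 / (f * t).
n = 1.0 / (f * t)
assert abs(n - 1e18) / 1e18 < 1e-9            # ~1e18 displaced nucleons

mass = n * 1.67e-27                           # nucleon mass ~1.67e-27 kg
planck_mass = 2.18e-8                         # kg
assert 0.01 < mass / planck_mass < 1.0        # comparable to the Planck mass
```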
By modifying the quantum dynamics, CSL, like all collapse models, makes predictions which slightly differ from the standard quantum mechanical ones. We review the most important cases.
Effects in superconducting devices. A detailed analysis has been presented in Ghirardi & Rimini 1990. As shown there, and as follows from estimates about possible effects for superconducting devices (Rae 1990; Gallis and Fleming 1990; Rimini 1995) and for the excitation of atoms (Squires 1991), it turns out not to be possible, with present technology, to perform clear-cut experiments allowing one to discriminate the model from standard quantum mechanics.
Loss of coherence in diffraction experiments with macromolecules. The Viennese groups of A. Zeilinger first, and then of M. Arndt, have performed important diffraction experiments involving macromolecules. The most relevant ones involve C\(_{60}\) (720 nucleons) (Arndt et al. 1999), C\(_{70}\) (840 nucleons) (Hackermueller et al. 2004) and more complex molecules (over 10,000 nucleons; Eibenberger et al. 2013, Fein et al. 2019). So far these experiments are not capable of testing the proposal of Adler, and therefore even less the weaker proposal of GRW, for the collapse rate (Toroš, Gasbarri, and Bassi 2017). Significant technological development is necessary in order to probe these values, possibly by realizing the experiment in outer space, where coherence can be maintained for longer times (Kaltenbaek, Hechenblaikner, et al. 2012).
Loss of coherence in opto-mechanical interferometers. Recently, an interesting proposal of testing the superposition principle by resorting to an experimental set-up involving a (mesoscopic) mirror has been advanced (Marshall et al. 2003). This stimulating proposal has led a group of scientists directly interested in Collapse Theories (Bassi, Ippoliti & Adler 2005; Adler, Bassi & Ippoliti 2005) to check whether the proposed experiment might be a crucial one for testing dynamical reduction models versus quantum mechanics. The problem is extremely subtle because the extension of the oscillations of the mirror is much smaller than the localization accuracy of GRW, so that the localization processes become almost ineffective. However, quite recently a detailed reconsideration of the physics of such systems has been performed, and it has allowed one to draw the relevant conclusion that the proposal by Adler (2007) of increasing the localization frequency of the GRW theory by a factor like the one he has considered is untenable.
Non-interferometric tests in opto-mechanical systems. In 2003, an interesting proposal of testing the superposition of a mesoscopic mirror was put forward (Marshall et al. 2003). This stimulating proposal faced strong technical difficulties, such as environmental decoherence, which prevents the detection of the superposition. There is, however, a side effect of collapse theories that can be exploited in such systems. Indeed, the collapse of the wavefunction leads to an effective noise on the center of mass of the system (Collett & Pearle 2003), which can eventually be bounded through experiments. The optomechanical application has been proposed (Bahrami, Paternostro, et al. 2014; Nimmrichter, et al. 2014; Diosi 2015) to test such an effect, and various experiments showed the potential of this technique. They range from nanomechanical cantilevers cooled to millikelvin temperatures (Vinante, Bahrami, et al. 2016; Vinante, Mezzena, et al. 2017; Vinante et al. 2020) to the gravitational wave detectors LIGO, AURIGA and LISA Pathfinder (Carlesso, Bassi, et al. 2016; Helou et al. 2017), as well as optically or magnetically levitated systems at room temperature (Zheng et al. 2020; Pontin et al. 2020). In the meantime, several proposals were presented to push the bounds on the collapse parameters even further, by exploiting different possible modifications of current experiments: i) a multi-layer structure of the test mass (Carlesso, Vinante, et al. 2018), which would amplify the bound for particular values of d; ii) resorting to different degrees of freedom, for example the rotational ones, which are more sensitive to collapse effects (Carlesso, Paternostro, et al. 2018; Schrinski, Stickler, & Hornberger 2017; Altamura et al. 2024; Altamura et al. 2025). A first application was implemented in a torsional experiment (Komori et al. 2020).
Non-interferometric experiments with cold atoms. The recent advances in trapping, cooling and manipulating ensembles of atoms paved the way for testing collapse models with cold atoms. Similarly to optomechanical systems, bounds on the collapse parameters are derived by quantifying the Brownian noise induced by the collapse process. The focus is on the energy increase or the position diffusion. To give an example, if the atomic cloud can evolve freely, its energy will grow linearly with time, while the position spread grows with the cubic power of time. Experimental bounds were obtained from both variables and analyzed in Laloe, Mullin, and Pearle 2014 and Bilardello et al. 2016, respectively.
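The quoted scalings can be illustrated with a toy simulation (ours, not the cited analyses): treat the collapse noise as white-noise kicks on the momentum of a free particle, and watch how the momentum variance (hence the energy) and the position variance grow with time. All parameter values below are arbitrary illustrative choices.

```python
import numpy as np

# Toy model of collapse-induced Brownian motion for a free particle:
# the momentum receives independent Gaussian kicks (white noise), and
# the position is the time integral of momentum / mass.

rng = np.random.default_rng(0)

n_traj, n_steps, dt = 8000, 400, 0.01
mass, D_p = 1.0, 1.0   # mass and momentum-diffusion strength (arbitrary units)

kicks = rng.normal(0.0, np.sqrt(2 * D_p * dt), size=(n_traj, n_steps))
p = np.cumsum(kicks, axis=1)              # momentum random walk
x = np.cumsum(p, axis=1) * dt / mass      # position = time integral of p/m

half, full = n_steps // 2 - 1, n_steps - 1
print("Var(p) ratio on doubling t:", p[:, full].var() / p[:, half].var())  # ~2
print("Var(x) ratio on doubling t:", x[:, full].var() / x[:, half].var())  # ~8
```

Doubling the evolution time doubles the momentum variance (energy grows linearly in \(t\)), but multiplies the position variance by roughly \(2^3 = 8\): the cubic growth mentioned above.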
Spontaneous X-ray emission from Germanium detectors. Collapse models not only forbid macroscopic superpositions from being stable; they also exhibit several other features which are forbidden by the standard theory. One of these is the spontaneous emission of radiation from otherwise stable systems, like atoms. While the standard theory predicts that such systems—if not excited—do not emit radiation, collapse models allow for radiation to be produced, as a consequence of the interaction between the system and the noise responsible for the collapse. The emission rate has been computed for free charged particles (Fu 1997; Adler, Bassi, & Donadi 2013), a harmonic oscillator (Bassi & Donadi 2014; Donadi, Deckert, & Bassi 2014) and hydrogenic atoms (Adler et al. 2007). A formula for the radiation emission from a generic system is given in (Donadi & Bassi 2014). The importance of this class of experiments lies in the fact that—so far—they provide the strongest upper bounds on the collapse parameters (Curceanu, Hiesmayr, & Piscicchia 2015; Curceanu, Bartalucci, et al. 2016; Piscicchia et al. 2017; Donadi et al. 2021; Arnquist et al. 2022). In particular, it has been proven experimentally that the proposal by Adler (2007) of a drastic increase of the frequency of the localizations with respect to that of the original GRW paper is incompatible with the experimental data, unless the CSL model is modified by taking a non-white noise (which is actually a reasonable assumption, if the noise is physical); the GRW value itself is also about to be explored by the same kind of experiment. Moreover, the so-called Diosi-Penrose model has been falsified experimentally (Donadi et al. 2021), at least in its simplest formulation.
Overall, these experiments now place constraints just below \(f = 10^{-13}\text{ sec}^{-1}\) at \(d = 10^{-7}\text{ m}\).
CSL and conscious perceptions. One interesting feature of CSL is that, when conscious perceptions are involved, the collapse time of two brain states in a superposition and the time which is necessary for the emergence of a definite perception are quite similar, and this has some (small but significant) implications concerning the probabilities of the outcomes. This point has been analyzed in detail and explicitly evaluated by resorting to a simple model of a quantum system subjected to reduction processes (Ghirardi & Romano 2014). The idea is to consider a spin 1/2 particle whose spin rotates around the x-axis with a frequency of about one hundredth of that of the random measurements ascertaining whether its spin is UP or DOWN with respect to the z-axis. It turns out that for a superposition with amplitudes \(a\) and \(b\) of the two eigenstates of S\(_z\), the probabilities of the two supervening perceptions associated to the two outcomes will differ by about 1% from those predicted by quantum mechanics, i.e., \(\lvert a\rvert^2\) and \(\lvert b\rvert^2\), respectively.
The test would be quite interesting also for the general meaning of collapse theories, because it would give some practical evidence that, when a superposition of two different microscopic states, each able to trigger a precise (and different) perception, is produced, the brain actually collapses the wavefunction, yielding only one perception: a clear-cut indication that the standard theory cannot run the whole process.
Summarizing: due to fast technological improvements, experiments in which one might test the deviations from Standard Quantum Theory implied by Collapse Models seem to have become more and more feasible.
A. Pais famously recalls in his biography of Einstein:
We often discussed his notions on objective reality. I recall that during one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it. (Pais 1982: 5)
In the context of Einstein’s remarks in Albert Einstein: Philosopher-Scientist (Schilpp 1949), we can regard this reference to the moon as an extreme example of ‘a fact that belongs entirely within the sphere of macroscopic concepts’, as is also a mark on a strip of paper that is used to register the outcome of a decay experiment, so that
as a consequence, there is hardly likely to be anyone who would be inclined to consider seriously […] that the existence of the location is essentially dependent upon the carrying out of an observation made on the registration strip. For, in the macroscopic sphere it simply is considered certain that one must adhere to the program of a realistic description in space and time; whereas in the sphere of microscopic situations one is more readily inclined to give up, or at least to modify, this program. (1949: 671)
However,
the ‘macroscopic’ and the ‘microscopic’ are so inter-related that it appears impracticable to give up this program in the ‘microscopic’ alone. (1949: 674)
One might speculate that Einstein would not have taken the DRP seriously, given that it is a fundamentally indeterministic program. On the other hand, the DRP allows precisely for this middle ground, between giving up a ‘classical description in space and time’ altogether (the moon is not there when nobody looks) and requiring that it be applicable also at the microscopic level (as within some kind of ‘hidden variables’ theory). It would seem that, for Einstein, the pursuit of ‘realism’ was more a program that had been very successful than an a priori commitment, and that in principle he would have accepted attempts requiring a radical change in our classical conceptions concerning microsystems, provided they would nevertheless allow one to take a macrorealist position matching our definite perceptions at this scale.
In the DRP, we can say of an electron in an EPR-Bohm situation that ‘when nobody looks’ it has no definite spin in any direction, and in particular that when it is in a superposition of two states localised far away from each other, it cannot be thought to be at a definite place (see, however, the remarks in Section 11). In the macrorealm, however, objects do have definite positions and are generally describable in classical terms. That is, in spite of the fact that the DRP is not adding ‘hidden variables’ to the theory, it implies that the moon is definitely there even if no sentient being looks at it. In the words of J. S. Bell, the DRP
allows electrons (in general microsystems) to enjoy the cloudiness of waves, while allowing tables and chairs, and ourselves, and black marks on photographs, to be rather definitely in one place rather than another, and to be described in classical terms. (Bell 1989a: 364)
Such a program, as we have seen, is implemented by assuming only the existence of wave functions, and by proposing a unified dynamics that governs both microscopic processes and ‘measurements’. With reference to the latter, no vague definitions are needed. The new dynamical equations govern the unfolding of any physical process, and the macroscopic ambiguities that would arise from the linear evolution are theoretically possible, but only of momentary duration, of no practical importance and no source of embarrassment.
We have not yet analyzed the implications concerning locality but, since in the DRP no hidden variables are introduced, the situation can be no worse than in ordinary quantum mechanics: ‘by adding mathematical precision to the jumps in the wave function’, the GRW theory ‘simply makes precise the action at a distance of ordinary quantum mechanics’ (Bell 1987: 46). Indeed, a detailed investigation of the locality properties of the theory becomes possible, as shown by Bell himself (Bell 1987: 47). Moreover, as will become clear when we discuss the interpretation of the theory in terms of mass density, the QMSL and CSL theories naturally account for the behaviour of macroscopic objects, corresponding to our definite perceptions about them, the main objective of Einstein’s requirements.
The achievements of the DRP which are relevant for the debate about the foundations of quantum mechanics can also be concisely summarized in the words of H.P. Stapp:
The collapse mechanisms so far proposed could, on the one hand, be viewed as ad hoc mutilations designed to force ontology to kneel to prejudice. On the other hand, these proposals show that one can certainly erect a coherent quantum ontology that generally conforms to ordinary ideas at the macroscopic level. (Stapp 1989: 157)
As soon as the GRW proposal appeared and attracted the attention of J.S. Bell, it stimulated him to look at it from the point of view of relativity theory. As he stated subsequently:
When I saw this theory first, I thought that I could blow it out of the water, by showing that it was grossly in violation of Lorentz invariance. That’s connected with the problem of ‘quantum entanglement’, the EPR paradox. (Bell 1989b: 1)
Actually, he had already investigated this point by studying the effect on the theory of a transformation mimicking a nonrelativistic approximation of a Lorentz transformation, and he arrived at a surprising conclusion:
… the model is as Lorentz invariant as it could be in its nonrelativistic version. It takes away the ground of my fear that any exact formulation of quantum mechanics must conflict with fundamental Lorentz invariance. (Bell 1987: 49)
What Bell had actually proved, by resorting to a two-times formulation of the Schrödinger equation, is that the model violates locality by violating outcome independence and not, as deterministic hidden variable theories do, parameter independence.
Indeed, with reference to this point we recall that, as extensively discussed in the literature (Suppes & Zanotti 1976; van Fraassen 1982; Jarrett 1984; Shimony 1984; see also the entry on Bell’s Theorem), Bell’s locality assumption is equivalent to the conjunction of two other assumptions, viz., in Shimony’s terminology, parameter independence and outcome independence. In view of the experimental violation of Bell’s inequality, one has to give up either one or both of these assumptions. The above splitting of the locality requirement into two logically independent conditions is particularly useful in discussing the different status of CSL and deterministic hidden variable theories with respect to relativistic requirements. Actually, as proved by Jarrett himself, when parameter independence is violated, if one had access to the variables which completely specify the state of individual physical systems, one could send faster-than-light signals from one wing of the apparatus to the other. Moreover, in Ghirardi and Grassi (1996) it has been proved that it is impossible to build a genuinely relativistically invariant theory which, in its nonrelativistic limit, exhibits parameter dependence. Here we use the term genuinely invariant to denote a theory for which there is no (hidden) preferred reference frame. On the other hand, if locality is violated only by the occurrence of outcome dependence, then faster-than-light signaling cannot be achieved (Eberhard 1978; Ghirardi, Rimini, & Weber 1980). A few years after the just-mentioned proof by Bell, it was shown in complete generality (Ghirardi, Grassi, Butterfield, & Fleming 1993) that the GRW and CSL theories, just as standard quantum mechanics, exhibit only outcome dependence. This is to some extent encouraging, and shows that there are no reasons of principle making the project of building a relativistically invariant dynamical reduction model unviable.
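For readers who want the conditions spelled out, the standard decomposition can be sketched in the usual notation, where \(\lambda\) is the complete state specification, \(a, b\) are the apparatus settings and \(A, B\) the outcomes on the two wings:

\[
\text{Factorizability:}\quad P(A, B \mid a, b, \lambda) = P(A \mid a, \lambda)\, P(B \mid b, \lambda),
\]
\[
\text{Parameter independence:}\quad P(A \mid a, b, \lambda) = P(A \mid a, \lambda),
\]
\[
\text{Outcome independence:}\quad P(A \mid a, b, B, \lambda) = P(A \mid a, b, \lambda).
\]

Factorizability holds if and only if both component conditions hold; GRW and CSL, like standard quantum mechanics, violate only the last one.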
Let us be more specific about this crucial problem. P. Pearle was the first to propose (Pearle 1990) a relativistic generalization of CSL to a quantum field theory describing a fermion field coupled to a scalar meson field, enriched with the introduction of stochastic and nonlinear terms. A quite detailed discussion of this proposal was presented in (Ghirardi, Grassi & Pearle 1990), where it was shown that the theory enjoys all the properties which are necessary in order to meet the relativistic constraints. Pearle’s approach requires the precise formulation of the idea of stochastic Lorentz invariance.
In this model, one considers a fermion field coupled to a meson field and puts forward the idea of inducing localizations for the fermions through their coupling to the mesons, with a stochastic dynamical reduction mechanism acting on the meson variables. In practice, working in the interaction picture, one considers the standard Heisenberg evolution equations for the two coupled fields and a Tomonaga-Schwinger CSL-type evolution equation, with a skew-hermitian coupling to a c-number stochastic potential, for the state vector. This approach has been systematically investigated by Ghirardi, Grassi, and Pearle (1990), to which we refer the reader for a detailed discussion. Here we stress that, under specific approximations, one obtains in the non-relativistic limit a CSL-type equation inducing spatial localization. However, due to the white-noise nature of the stochastic potential, novel renormalization problems arise: the increase per unit time and per unit volume of the energy of the meson field is infinite, due to the fact that infinitely many mesons are created. This point has also been lucidly discussed by Bell (1989c [2007]) in the talk he delivered at Trieste on the occasion of the 25th anniversary of the International Centre for Theoretical Physics; this talk appeared under the title The Trieste Lecture of John Stewart Bell. For these reasons one cannot consider this a satisfactory example of a relativistic reduction model.
In the years following the just-mentioned attempts there has been a flourishing of research aimed at getting the desired result. Let us briefly comment on it. As already mentioned, the source of the divergences is the assumption of point interactions between the quantum field operators in the dynamical equation for the state vector or, equivalently, the white character of the stochastic noise. Having this aspect in mind, P. Pearle (1989), L. Diosi (1990) and A. Bassi and G.C. Ghirardi (2002) reconsidered the problem from the beginning by investigating nonrelativistic theories with nonwhite Gaussian noises. The problem turns out to be very difficult from the mathematical point of view, but steps forward have been made. In recent years, a precise formulation of the nonwhite generalization (Bassi & Ferialdi 2009a and 2009b) of the so-called QMUPL model, which represents a simplified version of GRW and CSL, has been proposed. Moreover, a perturbative approach for the CSL model has been worked out (Adler & Bassi 2007, 2008). Further work is necessary. This line of thought is very interesting at the nonrelativistic level; however, it is not yet clear whether it will lead to a real step forward in the development of relativistic theories of spontaneous collapse.
In the same spirit, Nicrosini and Rimini (2003) tried to smear out the point interactions, without success: in their approach, a preferred reference frame had to be chosen in order to circumvent the nonintegrability of the Tomonaga-Schwinger equation.
Other interesting and different approaches have been suggested. Among them we mention the one by Dove and Squires (1996), based on discrete rather than continuous stochastic processes, and those by Dowker and Herbauts (2004) and Dowker and Henson (2004), formulated on a discrete space-time.
Precisely in the same years, similar attempts to formulate a relativistic generalization of Bohmian Mechanics were ongoing, encountering difficulties. Relevant steps are represented by a paper (Dürr 1999) resorting to a preferred spacetime slicing, by the investigations of Goldstein and Tumulka (2003), and by other scientists (Berndl et al. 1996). However, we must acknowledge that none of these attempts has led to a fully satisfactory solution of the problem of having a theory without observers, like Bohmian mechanics, which is also perfectly satisfactory from the relativistic point of view, precisely due to the fact that these attempts are not genuinely Lorentz invariant in the sense we have made precise above. Mention should be made also of the attempt by Horton and Dewdney (2001) to build a relativistically invariant model based on particle trajectories.
Let us come back to the relativistic DRP. Some important changes have occurred quite recently. Tumulka (2006a) succeeded in proposing a relativistic version of the GRW theory for N non-interacting distinguishable particles, based on the consideration of a multi-time wavefunction whose evolution is governed by Dirac-like equations; it adopts as its Primitive Ontology (see the next section) the one which attaches a primary role to the space-time points at which spontaneous localizations occur, as originally suggested by Bell (1987). To our knowledge this represents the first proposal of a relativistic dynamical reduction mechanism which satisfies all relativistic requirements. In particular, it is free from divergences and foliation independent. However, it is formulated only for systems containing a fixed number of noninteracting fermions.
D. Bedingham (2011), following closely the original proposal by Pearle (1990) of a quantum field theory inducing reductions based on a Tomonaga-Schwinger equation, has worked out an analogous model which, however, overcomes the difficulties of the original one. In fact, Bedingham has circumvented the crucial problems arising from point interactions by (paying the price of) introducing, besides the fields characterizing the Quantum Field Theories he is interested in, an auxiliary relativistic field that amounts to a smearing of the interactions whilst preserving Lorentz invariance and frame independence. Adopting this point of view, and taking advantage also of the proposal by Ghirardi (2000) concerning the appropriate way to define objective properties at any space-time point \(x\), he has been able to work out a fully satisfactory and consistent relativistic scheme for quantum field theories in which reduction processes may occur.
Taking once more advantage of the ideas of the paper by Ghirardi (2000), several of the just-quoted authors (Bedingham, Duerr, Ghirardi, et al. 2014) have been able to prove that it is possible to work out a relativistic generalization of Collapse models when their primitive ontology is taken to be the one given by the mass density interpretation which, for the nonrelativistic case, we will present in what follows.
In view of these results, and taking into account the interesting investigations concerning relativistic Bohmian-like theories, the conclusions that Tumulka has drawn concerning the status of attempts to account for the macro-objectification process from a relativistic perspective are well-founded:
A somewhat surprising feature of the present situation is that we seem to arrive at the following alternative: Bohmian mechanics shows that one can explain quantum mechanics, exactly and completely, if one is willing to pay with using a preferred slicing of spacetime; our model suggests that one should be able to avoid a preferred slicing of spacetime if one is willing to pay with a certain deviation from quantum mechanics, (Tumulka 2006a: 842)
a conclusion that he has rephrased and reinforced in Tumulka (2006c):
Thus, with the presently available models we have the alternative: either the conventional understanding of relativity is not right, or quantum mechanics is not exact.
Very recently, a thorough and illuminating discussion of the important approach by Tumulka has been presented by Tim Maudlin (2011) in the third revised edition of his book Quantum Non-Locality and Relativity. Tumulka’s position is perfectly consistent with the present ideas concerning the attempts to transform relativistic standard quantum mechanics into an ‘exact’ theory in the sense which has been made precise by J. Bell. Since the only unified, mathematically precise and formally consistent formulations of the quantum description of natural processes are Bohmian mechanics and GRW-like theories, if one chooses the first alternative one has to accept the existence of a preferred reference frame, while in the second case one is not led to such a drastic change of position with respect to relativistic concepts, but must accept that the ensuing theory disagrees with the predictions of quantum mechanics and acquires the status of a rival theory with respect to it.
In spite of the fact that the situation is, to some extent, still open and requires further investigation, it has to be recognized that the efforts which have been spent on such a program have made a better understanding of some crucial points possible and have thrown light on some important conceptual issues. First, they have led to a completely general and rigorous formulation of the concept of stochastic invariance. Second, they have prompted a critical reconsideration, based on the discussion of smeared observables with compact support, of the problem of locality at the individual level. This analysis has brought out the necessity of reconsidering the criteria for the attribution of objective local properties to physical systems. In specific situations, one cannot attribute any local property to a microsystem: any attempt to do so gives rise to ambiguities. However, when dealing with macroscopic systems, the impossibility of attributing local properties to them (or, equivalently, the ambiguity associated with such properties) lasts only for the time intervals necessary for the dynamical reduction to take place. Moreover, no objective property corresponding to a local observable, even for microsystems, can emerge as a consequence of a measurement-like event occurring in a space-like separated region: such properties emerge only in the future light cone of the considered macroscopic event. Finally, recent investigations (Ghirardi & Grassi 1996; Ghirardi 2000) have shown that the very formal structure of the theory is such that it does not allow one, even conceptually, to establish cause-effect relations between space-like separated events.
The conclusion of this section is that the question of whether a relativistic dynamical reduction program can find a satisfactory formulation seems to admit a positive answer.
Efforts to embed collapse models within a relativistic quantum field-theoretic framework continue (Tumulka 2020). In pursuing this program, one has to face an important no-go result, valid for any relativistically invariant, Markovian collapse theory formulated within the standard framework of quantum field theory (Myrvold 2017): if the theory is required to ensure vacuum stability and to exclude superluminal signaling, then it cannot be consistently constructed without the introduction of nonstandard degrees of freedom. This implies that either the demand for strict Lorentz invariance, the Markovian character of the collapse dynamics, or the standard field-theoretic ontology must be abandoned if collapse theories are to be made compatible with relativity, as done for example in the above-cited works of Tumulka.
Some authors (Albert & Vaidman 1989; Albert 1990, 1992) have raised an interesting objection concerning the emergence of definite perceptions within Collapse Theories. The objection is based on the fact that one can easily imagine situations leading to definite perceptions that nevertheless do not involve the displacement of a large number of particles up to the stage of the perception itself. These cases would then constitute actual measurement situations which cannot be described by the GRW theory, contrary to what happens for the idealized (according to the authors) situations considered in many presentations of it, i.e., those involving the displacement of some sort of pointer. To be more specific, the above papers consider a ‘measurement-like’ process whose output is the emission of a burst of a few photons, triggered by the position in which a particle hits a screen. This can easily be devised by considering, e.g., a Stern-Gerlach set-up in which a spin 1/2 microsystem, according to the value of its spin component, hits a fluorescent screen in different places and excites a small number of atoms which subsequently decay, emitting a small number of photons.
The argument goes as follows: if one triggers the apparatus with a superposition of two spin states, since only a few atoms are excited, since the excitations involve displacements which are smaller than the characteristic localization distance of GRW, since GRW does not induce reductions on photon states and, finally, since the photon states immediately overlap, there is no way for the spontaneous localization mechanism to become effective in suppressing the ensuing superposition of the states ‘photons emerging from point \(A\) of the screen’ and ‘photons emerging from point \(B\) of the screen’. On the other hand, since the visual perception threshold is quite low (about 6–7 photons), there is no doubt that the naked eye of a human observer is sufficient to detect whether the luminous spot on the screen is at \(A\) or at \(B\). The conclusion follows: in the case under consideration no dynamical reduction can take place and, as a consequence, no measurement is over and no outcome is definite up to the moment at which a conscious observer perceives the spot.
Aicardi et al. (1991) have presented a detailed answer to this criticism. It is agreed that in the considered case the superposition persists for long times (actually the superposition must persist since, the system under consideration being microscopic, one could perform interference experiments which everybody would expect to confirm quantum mechanics). However, to deal with such a criticism in the appropriate and correct way, one has to consider all the systems which enter into play (electron, screen, photons and brain) and the universal dynamics governing all relevant physical processes. A simple estimate of the number of ions involved in the transmission of the nervous signal up to the higher visual cortex makes it perfectly plausible that, in the process, a sufficient number of particles are displaced by a sufficient spatial amount to satisfy the conditions under which, according to the GRW theory, the suppression of the superposition of the two nervous signals will take place within the time scale of the perception.
This analysis by no means amounts to attributing a special role to the conscious observer or to the perception process. The observer’s brain is the only system present in the set-up in which a superposition of two states involving different locations of a large number of particles occurs. As such, it is the only place where the reduction can and actually must take place according to the theory. It is extremely important to stress that if, in place of the eye of a human being, one puts in front of the photon beam a spark chamber or a device leading to the displacement of a macroscopic pointer, or producing ink spots on a computer output, reduction will equally take place. In the given example, the human nervous system is simply a physical system, a specific assembly of particles, which performs the same function as any other such device, if no other device interacts with the photons before the human observer does. It follows that it is incorrect and seriously misleading to claim that the GRW theory requires a conscious observer in order for measurements to have a definite outcome.
A further remark may be appropriate. The above analysis could be taken by the reader as indicating a very naive and oversimplified attitude towards the deep problem of the mind-brain correspondence. There is no claim and no presumption that GRW allows a physicalist explanation of conscious perception. It is only pointed out that, based on what we know about the purely physical aspects of the process, one can state that before the nervous pulses reach the higher visual cortex, the conditions guaranteeing the suppression of one of the two signals are verified. In brief, a consistent use of the dynamical reduction mechanism in the above situation accounts for the definiteness of the conscious perception, even in the extremely peculiar situation devised by Albert and Vaidman.
As stressed in the opening sentences of this contribution, the most serious problem of standard quantum mechanics lies in its being extremely successful in telling us about what we observe, but basically silent on what there is. This specific feature is closely related to the probabilistic interpretation of the statevector, combined with the assumption of the completeness of the theory. Notice that what is under discussion is the probabilistic interpretation, not the probabilistic character, of the theory. Collapse theories too have a fundamentally stochastic character but, due to their most specific feature, i.e., that of driving the statevector of any individual physical system into appropriate and physically meaningful manifolds, they allow for a different interpretation. One could even say (if one wants to avoid that they too, like the standard theory, speak only of what we find) that they require a different interpretation, one that accounts for our perceptions at the appropriate, i.e., macroscopic, level.
We must admit that this opinion is not universally shared. According to various authors, the ‘rules of the game’ embodied in the precise formulation of the GRW and CSL theories represent all there is to say about them. However, this cannot be the whole story: stricter and more precise requirements than the purely formal ones must be imposed for a theory to be taken seriously as a fundamental description of natural processes (an opinion shared by J. Bell). This request of going beyond the purely formal aspects of a theoretical scheme has been denoted as (the necessity of specifying) the Primitive Ontology (PO) of the theory in an extremely interesting paper (Allori et al. 2008). The fundamental requisite of the PO is that it should make absolutely precise what the theory is fundamentally about.
This is not a new problem; as already mentioned, it was raised by J. Bell in his first presentation of the GRW theory. Let us summarize the terms of the debate. Given that the wavefunction of a many-particle system lives in a (high-dimensional) configuration space, which is not endowed with a direct physical meaning connected to our experience of the world around us, Bell wanted to identify the ‘local beables’ of the theory, the quantities on which one could base a description of the perceived reality in ordinary three-dimensional space. In the specific context of QMSL, he (Bell 1987: 45) suggested that the ‘GRW jumps’, which we called ‘hittings’, could play this role. In fact they occur at precise times in precise positions of three-dimensional space. Following Allori et al. (2008), we will denote this position concerning the PO of the GRW theory as the ‘flash ontology’.
However, later Bell himself suggested that the most natural interpretation of the wavefunction in the context of a collapse theory would be that it describes the ‘density […] of stuff’ in the 3N-dimensional configuration space (Bell 1990: 30), the natural mathematical framework for describing a system of \(N\) particles. Allori et al. (2008) have appropriately pointed out that this position amounts to avoiding commitment about the PO of the theory and, consequently, to leaving vague the precise and meaningful connections it permits to be established between the mathematical description of the unfolding of physical processes and our perception of them.
The interpretation which, in our opinion, is most appropriate for collapse theories has been proposed in (Ghirardi, Grassi, & Benatti 1995) and has been referred to in Allori et al. (2008) as ‘the mass density ontology’. Let us briefly describe it.
First of all, various investigations (Pearle & Squires 1994) had made clear that QMSL and CSL needed a modification: the characteristic localization frequency of the elementary constituents of matter had to be made proportional to the mass characterizing the particle under consideration. In particular, the original frequency for the hitting processes, \(f = 10^{-16}\) sec\(^{-1}\), is the one characterizing the nucleons, while, e.g., electrons would suffer hittings with a frequency reduced by a factor of about 2000. Unfortunately we have no space to discuss here the physical reasons which make this choice appropriate; we refer the reader to the above paper, as well as to the detailed analysis by Peruzzi and Rimini (2000). With this modification, what the nonlinear dynamics strives to make ‘objectively definite’ is the mass distribution in the whole universe. Second, a deep critical reconsideration (Ghirardi, Grassi, & Benatti 1995) has made evident how the concept of ‘distance’ that characterizes the Hilbert space is inappropriate for accounting for the similarity or difference between macroscopic situations. Just to give a convincing example, consider three states \(\ket{h}\), \(\ket{h^*}\) and \(\ket{t}\) of a macrosystem (let us say a massive macroscopic bulk of matter), the first corresponding to its being located here, the second to its having the same location but one of its atoms (or molecules) being in a state orthogonal to the corresponding state in \(\ket{h}\), and the third having exactly the same internal state as the first but being differently located (there). Then, despite the fact that the first two states are indistinguishable from each other at the macrolevel, while the first and the third correspond to completely different and directly perceivable situations, the Hilbert space distance between \(\ket{h}\) and \(\ket{h^*}\) is equal to that between \(\ket{h}\) and \(\ket{t}\).
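The mass-proportionality just described can be illustrated with a line of arithmetic (assuming, as the text states, simple proportionality between hitting frequency and mass; the mass ratio below is the standard proton-to-electron value):

```python
# Mass-proportional hitting frequency: f_k = (m_k / m_nucleon) * f.
f_nucleon = 1e-16              # s^-1, the original QMSL value quoted above
mass_ratio = 1836.15           # nucleon (proton) mass / electron mass
f_electron = f_nucleon / mass_ratio
# f_electron is ~5.4e-20 s^-1: reduced by a factor of roughly 2000,
# as stated in the text.
```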
When the localization frequency is related to the mass of the constituents, then, in complete generality (i.e., even when one is dealing with a body which is not almost rigid, such as a gas or a cloud), the mechanism leading to the suppression of the superpositions of macroscopically different states is fundamentally governed by the integral of the squared differences of the mass densities associated with the two superposed states. Actually, in the original paper the mass density at a point was identified with its average over the characteristic volume of the theory, i.e., \(10^{-15}\) cm\(^3\) around that point. It is however easy to convince oneself that there is no need to do so and that the mass density at any point, directly identified by the statevector (see below), is the appropriate quantity on which to base the ontology. Accordingly, we take the following attitude: what the theory is about, what is real ‘out there’ at a given space point \(\boldsymbol{x}\), is just a field, i.e., a variable \(m(\boldsymbol{x},t)\) given by the expectation value of the mass density operator \(M(\boldsymbol{x})\) at \(\boldsymbol{x}\), obtained by multiplying the mass of any kind of particle by the number density operator for the considered type of particle and summing over all possible types of particles which can be present:
\[\begin{align}\tag{7} m(\boldsymbol{x},t) &= \langle F,t \mid M(\boldsymbol{x}) \mid F,t \rangle; \\ M(\boldsymbol{x}) &= {\sum}_{(k)} m_{(k)}a^*_{(k)}(\boldsymbol{x})a_{(k)}(\boldsymbol{x}). \end{align} \]Here \(\ket{F,t}\) is the statevector characterizing the system at the given time, and \(a^*_{(k)}(\boldsymbol{x})\) and \(a_{(k)}(\boldsymbol{x})\) are the creation and annihilation operators for a particle of type \(k\) at point \(\boldsymbol{x}\). It is obvious that within standard quantum mechanics such a function cannot be endowed with any objective physical meaning, due to the occurrence of linear superpositions which give rise to values that do not correspond to what we find in a measurement process or what we perceive. In the case of the GRW or CSL theories, if one considers only the states allowed by the dynamics, one can give a description of the world in terms of \(m(\boldsymbol{x},t)\), i.e., one recovers a physically meaningful account of physical reality in ordinary 3-dimensional space and time. To illustrate this crucial point we consider, first of all, the embarrassing situation of a macroscopic object in a superposition of two differently located position states. We have then simply to recall that, in a collapse model relating reductions to mass density differences, the dynamics suppresses in extremely short times the embarrassing superpositions of such states, recovering the mass distribution corresponding to our perceptions. Let us come now to a microsystem and consider the equal-weight superposition of two states \(\ket{h}\) and \(\ket{t}\) describing a microscopic particle in two different locations. Such a state gives rise to a mass distribution corresponding to 1/2 of the mass of the particle in each of the two considered space regions. This seems, at first sight, to contradict what is revealed by any measurement process.
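The microsystem case can be made concrete with a one-dimensional, one-particle numerical sketch (units and packet parameters are illustrative choices of ours; for a single particle, eq. (7) reduces to \(m(\boldsymbol{x},t) = m\,\lvert\psi(\boldsymbol{x},t)\rvert^2\)):

```python
import numpy as np

m = 1.0                                  # particle mass, arbitrary units
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def packet(center, width=0.5):
    """Normalized Gaussian wave packet centred at `center`."""
    g = np.exp(-(x - center) ** 2 / (2 * width ** 2))
    return g / np.sqrt((np.abs(g) ** 2).sum() * dx)

phi_h = packet(-4.0)                     # 'here'
phi_t = packet(+4.0)                     # 'there'
psi = (phi_h + phi_t) / np.sqrt(2)       # equal-weight superposition

mass_density = m * np.abs(psi) ** 2      # eq. (7) for one particle
mass_left = mass_density[x < 0].sum() * dx
mass_right = mass_density[x > 0].sum() * dx
# Each region carries ~m/2: half the mass 'here', half 'there',
# exactly the situation discussed in the text.
```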
But in such a case we know that the dynamics governing all natural processes within GRW ensures that, whenever one tries to locate the particle, one will always find it in a definite position: e.g., one and only one of the Geiger counters which might be triggered by the passage of the proton will fire, just because a superposition of ‘a counter which has fired’ and ‘one which has not fired’ is dynamically forbidden.
This analysis shows that one can consider, at all levels (the microscopic and the macroscopic ones), the field \(m(\boldsymbol{x},t)\) as accounting for ‘what is out there’, as originally suggested by Schrödinger with his realistic interpretation of the square of the wave function of a particle as representing the ‘fuzzy’ character of the mass (or charge) of the particle. Obviously, within standard quantum mechanics such a position cannot be maintained because
wavepackets diffuse, and with the passage of time become infinitely extended … but however far the wavefunction has extended, the reaction of a detector … remains spotty. (Bell 1990: 39)
As we hope to have made clear, the picture is radically different when one takes into account the new dynamics, which succeeds perfectly in reconciling the spread of the wavefunction with the sharp features of the detection process.
It is also extremely important to stress that, by resorting to the quantity (7), one can define an appropriate ‘distance’ between two states as the integral over the whole 3-dimensional space of the square of the difference of \(m(\boldsymbol{x},t)\) for the two given states, a quantity which turns out to be perfectly appropriate to ground the concept of macroscopically similar or distinguishable Hilbert space states. In turn, this distance can be used as a basis to define a sensible psychophysical correspondence within the theory.
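In our own notation (a sketch, not the authors’ exact formulas), the distance just described reads:

\[ \Delta(\ket{\phi},\ket{\psi}) = \left[\int d^3x\, \bigl(m_{\phi}(\boldsymbol{x}) - m_{\psi}(\boldsymbol{x})\bigr)^{2}\right]^{1/2}, \qquad m_{\chi}(\boldsymbol{x}) = \bra{\chi} M(\boldsymbol{x}) \ket{\chi}. \]

For the three states of the earlier example, \(\Delta(\ket{h},\ket{h^*}) \approx 0\), since flipping a single atom’s internal state leaves the mass distribution essentially unchanged, while \(\Delta(\ket{h},\ket{t})\) is of the order of the whole mass of the body: the new distance thus separates exactly the cases that the Hilbert space norm conflates.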
There has been a lively debate around a problem which has its origin, according to some of the authors who have raised it, in the fact that the localization process, which corresponds to multiplying the wave function by a Gaussian and thus leads to wave functions strongly peaked around the position of the hitting, nevertheless leaves the final wavefunction different from zero over the whole space. The first criticism of this kind was raised by A. Shimony (1990) and can be summarized by his sentence,
[one should not] tolerate “tails” which are so broad that different parts […] can be discriminated by the senses, even if very low probability amplitude is assigned to the tail. (1990: 53)
After a localization of a macroscopic system, typically the pointer of the apparatus, its centre of mass will be associated with a wave function which is different from zero over the whole space. If one adopts the probabilistic interpretation of the standard theory, this means that even when the measurement process is over, there is a nonzero (even though extremely small) probability of finding the pointer in an arbitrary position, instead of the one corresponding to the registered outcome. This is taken as unacceptable, as indicating that the DRP does not actually overcome the macro-objectification problem.
Let us state immediately that the (alleged) problem arises entirely from keeping the standard interpretation of the wave function unchanged, in particular from assuming that its squared modulus gives the probability density of the position variable. However, as we have discussed in the previous section, there are much more serious reasons of principle which require abandoning the probabilistic interpretation and replacing it either with the ‘flash ontology’ or with the ‘mass density ontology’ which we have discussed above.
Before entering into a detailed discussion of this subtle point, we need to focus the problem better. Suppose one adopts, for the moment, the conventional quantum position. We agree that, within such a framework, the fact that wave functions never have strictly compact spatial support can be considered puzzling. However, this is an unavoidable problem arising directly from the mathematical features (spreading of wave functions) and from the probabilistic interpretation of the theory, and not at all a problem peculiar to dynamical reduction models. Indeed, the fact that, e.g., the wave function of the center of mass of a pointer or of a table does not have compact support has never been taken to be a problem for standard quantum mechanics. When, e.g., the wave function of the center of mass of a table is extremely well peaked around a given point in space, it has always been accepted that it describes a table located at some position, and that this corresponds in some way to our perception of it. It is obviously true that, for the given wave function, the quantum rules entail that if a measurement were performed the table could be found (with an extremely small probability) kilometers away, but this is not the measurement or the macro-objectification problem of the standard theory. The latter concerns a completely different situation, i.e., that in which one is confronted with a superposition, with comparable weights, of two macroscopically separated wave functions, both of which possess tails (i.e., have non-compact support) but are appreciably different from zero only in far-away narrow intervals. This is the really embarrassing situation which conventional quantum mechanics is unable to make understandable. To which perception of the position of the pointer (or of the table) does this wave function correspond?
Within GRW, superpositions of two states which, when considered individually, are assumed to lead to different and definite perceptions of macroscopic locations, are dynamically forbidden. If some process tends to produce such superpositions, then the reducing dynamics induces the localization of the centre of mass (the associated wave function being appreciably different from zero only in a narrow and precise interval). Correspondingly, the possibility arises of attributing to the system the property of being in a definite place, and thus of accounting for our definite perception of it. Summarizing, we stress once more that the criticism about the tails, as well as the requirement that the appearance of macroscopically extended (even though extremely small) tails be strictly forbidden, is exclusively motivated by an uncritical commitment to the probabilistic interpretation of the theory, even for what concerns the psychophysical correspondence: when this position is taken, states assigning non-exactly-vanishing probabilities to different outcomes of position measurements should correspond to ambiguous perceptions about these positions. Since neither within the standard formalism nor within the framework of dynamical reduction models can a wave function have compact support, taking such a position leads to the conclusion that it is just the linear character of the Hilbert space description of physical systems which has to be given up.
It ought to be stressed that there is nothing in the GRW theory which forbids, or makes problematic, the assumption that the localization function has compact support; but it also has to be noted that following this line would be totally useless: since the evolution equation contains the kinetic energy term, any function, even if it has compact support at a given time, will instantaneously spread, acquiring a tail extending over the whole space. If one sticks to the probabilistic interpretation and accepts the completeness of the description of the states of physical systems in terms of the wave function, the tail problem cannot be avoided.
The solution to the tails problem can only derive from abandoning completely the probabilistic interpretation and adopting a more physical and realistic interpretation relating ‘what is out there’ to, e.g., the mass density distribution over the whole universe. In this connection, the following example will be instructive. Take a massive sphere of normal density and a mass of about 1 kg. Classically, the mass of this body would be totally concentrated within the radius of the sphere, call it \(r\). In QMSL, after the extremely short time interval in which the collapse dynamics leads to a ‘regime’ situation, and if one considers a sphere with radius \(r + 10^{-5}\) cm, the integral of the mass density over the rest of space turns out to be an incredibly small fraction (of the order of \(10^{-10^{15}}\)) of the mass of a single proton. In such conditions, it seems quite legitimate to claim that the macroscopic body is localised within the sphere.
However, even this quite reasonable conclusion has been questioned: it has been claimed (Lewis 1997) that the very existence of the tails implies that the enumeration principle (i.e., the fact that the claim ‘particle 1 is within this box & particle 2 is within this box & … & particle \(n\) is within this box & no other particle is within this box’ implies the claim ‘there are \(n\) particles within this box’) does not hold, if one takes seriously the mass density interpretation of collapse theories. This paper has given rise to a long debate which it would be inappropriate to reproduce here.
We conclude this brief analysis by stressing once more that, in our opinion, all the disagreements and misunderstandings concerning this problem have their origin in the fact that the idea that the probabilistic interpretation of the wave function must be abandoned has not been fully accepted by the authors who find some difficulties in the proposed mass density interpretation of the Collapse Theories. For a more recent reconsideration of the problem we refer the reader to the paper by Lewis (2003).
We recall that, as stated in Section 3, the macro-objectification problem has been at the centre of the liveliest and most challenging debate originated by the quantum view of natural processes. According to the majority of those who adhere to the orthodox position, such a problem does not deserve particular attention: classical concepts are a logical prerequisite for the very formulation of quantum mechanics and, consequently, the measurement process itself, the dividing line between the quantum and the classical world, cannot and must not be investigated, but simply accepted. This position has been lucidly summarized by J. Bell himself:
Making a virtue of necessity and influenced by positivistic and instrumentalist philosophies, many came to hold not only that it is difficult to find a coherent picture but that it is wrong to look for one—if not actually immoral then certainly unprofessional. (1981: 45)
The situation has seen many changes in the course of time, and the necessity of making a clear distinction between what is quantum and what is classical has given rise to many proposals for ‘easy solutions’ to the problem, based on the possibility, for all practical purposes (FAPP), of locating the splitting between these two faces of reality at different levels.
Then came Bohmian mechanics, a theory which has made clear, in a lucid and perfectly consistent way, that there is no reason of principle to require a dichotomic description of the world. A universal dynamical principle governs all physical processes and, even though the theory ‘completely agrees with standard quantum predictions’, it accounts for the standard wave-packet reduction in micro-macro interactions as well as for the classical behaviour of macroscopic objects.
As we have mentioned, the other consistent proposal, at the nonrelativistic level, of a conceptually satisfactory solution to the macro-objectification problem is represented by the Collapse Theories which are the subject of these pages. Contrary to Bohmian mechanics, they are rivals to quantum mechanics, since they make different predictions (even though ones quite difficult to put into evidence) concerning various physical processes.
A common criticism makes reference to the fact that, within any collapse model, the ensuing dynamics for the statistical operator can be considered as the reduced dynamics deriving from a unitary (and, consequently, essentially standard quantum) dynamics for the states of an enlarged Hilbert space of a composite quantum system \(S+E\) involving, besides the physical system \(S\) of interest, an ancilla \(E\) whose degrees of freedom are completely inaccessible. Due to the quantum dynamical semigroup nature of the evolution equation for the statistical operator, any GRW-like model can always be seen as a phenomenological model deriving from a standard quantum evolution on a larger Hilbert space. In this way, the unitary deterministic evolution characterizing quantum mechanics would be fully restored.
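For concreteness, here is a sketch (in our notation, for the single-particle QMSL case) of the semigroup structure this criticism exploits. The statistical operator evolves according to an equation of the form

\[ \frac{d\rho}{dt} = -\frac{i}{\hbar}[H,\rho] - \lambda\left(\rho - \int d^3x\; L_{\boldsymbol{x}}\,\rho\,L_{\boldsymbol{x}}\right), \qquad L_{\boldsymbol{x}} = \left(\frac{\alpha}{\pi}\right)^{3/4} e^{-\frac{\alpha}{2}(\hat{\boldsymbol{q}}-\boldsymbol{x})^2}, \quad \int d^3x\, L_{\boldsymbol{x}}^2 = \mathbb{1}, \]

which is of Lindblad (GKLS) form; any such quantum dynamical semigroup can, by standard dilation results, be obtained by tracing out an ancilla from a unitary evolution on a larger Hilbert space. Note, however, that this dilation reproduces only the ensemble-level description \(\rho\), not the histories of individual statevectors.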
Apart from the obvious remark that such a critical attitude completely fails to grasp the most important feature of collapse theories, i.e., that of dealing with individual quantum systems and not with statistical ensembles, and of yielding a perfectly satisfactory description matching our perceptions concerning individual macroscopic systems, invoking an inaccessible ancilla to account for the nonlinear and stochastic character of GRW-type theories is once more a purely verbal way of avoiding facing the really puzzling aspects of the quantum description of macroscopic systems.
Other reasons for ignoring the dynamical reduction program have been put forward within the quantum information community. We will not spend too much time in analyzing and discussing the new position on the foundational issues which have motivated the elaboration of collapse theories. The crucial fact is that, from this perspective, one takes the theory not to be about something real ‘occurring out there’ in a real world, but simply about information. This point is made extremely explicit by Zeilinger (2002: 252):
information is the most basic notion of quantum mechanics, and it is information about possible measurement results that is represented in the quantum state. Measurement results are nothing more than states of the classical apparatus used by the experimentalist. The quantum system then is nothing other than the consistently constructed referent of the information represented in the quantum state.
It is clear that if one takes such a position, almost all motivations to be worried by the measurement problem disappear, and with them the reasons to work out what Bell has denoted as ‘an exact version of quantum mechanics’. The most appropriate reply to this type of criticism is to recall that J. Bell (1990) has included ‘information’ among the words which must have no place in a formulation with any pretension to physical precision. In particular, he has stressed that one cannot even mention information unless one has given a precise answer to the two following questions: Whose information? and Information about what?
A much more serious attitude is to call attention, as many serious authors do, to the fact that, since collapse theories represent rival theories with respect to standard quantum mechanics, they lead to the identification of experimental situations which would allow, in principle, crucial tests to discriminate between the two. As we have discussed above, at present fully discriminating tests are not out of reach.
We have presented a comprehensive picture of the ideas, the implications, the achievements and the problems of the DRP. We conclude by stressing once more our position with respect to Collapse Theories. Their interest derives entirely from the fact that they have given some hints about a possible way out of the difficulties characterizing standard quantum mechanics, by proving that explicit and precise models can be worked out which agree with all known predictions of the theory and nevertheless allow one, on the basis of a universal dynamics governing all natural processes, to overcome in a mathematically clean and precise way the basic problems of the standard theory. In particular, Collapse Models show how one can work out a theory that makes it perfectly legitimate to take a macrorealistic position about natural processes, without contradicting any of the experimentally tested predictions of standard quantum mechanics. Finally, they might give precise hints about where to look in order to put into evidence, experimentally, possible violations of the superposition principle.
The Stanford Encyclopedia of Philosophy is copyright © 2025 by The Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054