Causal determinism is, roughly speaking, the idea that every event is necessitated by antecedent events and conditions together with the laws of nature. The idea is ancient, but first became subject to clarification and mathematical analysis in the eighteenth century. Determinism is deeply connected with our understanding of the physical sciences and their explanatory ambitions, on the one hand, and with our views about human free action on the other. In both of these general areas there is no agreement over whether determinism is true (or even whether it can be known true or false), or over what the import for human agency would be in either case.
In most of what follows, I will speak simply of determinism, rather than of causal determinism. This follows recent philosophical practice of sharply distinguishing views and theories of what causation is from any conclusions about the success or failure of determinism (cf. Earman, 1986; an exception is Mellor 1994). For the most part this disengagement of the two concepts is appropriate. But as we will see later, the notion of cause/effect is not so easily disengaged from much of what matters to us about determinism.
Traditionally determinism has been given various, usually imprecise definitions. This is only problematic if one is investigating determinism in a specific, well-defined theoretical context; but it is important to avoid certain major errors of definition. In order to get started we can begin with a loose and (nearly) all-encompassing definition as follows:
Determinism: The world is governed by (or is under the sway of) determinism if and only if, given a specified way things are at a time t, the way things go thereafter is fixed as a matter of natural law.
The italicized phrases are elements that require further explanation and investigation, in order for us to gain a clear understanding of the concept of determinism.
The roots of the notion of determinism surely lie in a very common philosophical idea: the idea that everything can, in principle, be explained, or that everything that is, has a sufficient reason for being and being as it is, and not otherwise. In other words, the roots of determinism lie in what Leibniz named the Principle of Sufficient Reason. But since precise physical theories began to be formulated with apparently deterministic character, the notion has become separable from these roots. Philosophers of science are frequently interested in the determinism or indeterminism of various theories, without necessarily starting from a view about Leibniz' Principle.
Since the first clear articulations of the concept, there has been a tendency among philosophers to believe in the truth of some sort of determinist doctrine. There has also been a tendency, however, to confuse determinism proper with two related notions: predictability and fate.
Fatalism is easily disentangled from determinism, to the extent that one can disentangle mystical forces and gods' wills and foreknowledge (about specific matters) from the notion of natural/causal law. Not every metaphysical picture makes this disentanglement possible, of course. As a general matter, we can imagine that certain things are fated to happen, without this being the result of deterministic natural laws alone; and we can imagine the world being governed by deterministic laws, without anything at all being fated to occur (perhaps because there are no gods, nor mystical forces deserving the titles fate or destiny, and in particular no intentional determination of the “initial conditions” of the world). In a looser sense, however, it is true that under the assumption of determinism, one might say that given the way things have gone in the past, all future events that will in fact happen are already destined to occur.
Prediction and determinism are also easy to disentangle, barring certain strong theological commitments. As the following famous expression of determinism by Laplace shows, however, the two are also easy to commingle:
We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes. The perfection that the human mind has been able to give to astronomy affords but a feeble outline of such an intelligence. (Laplace 1820)
In the twentieth century, Karl Popper also defined determinism in terms of predictability.
Laplace probably had God in mind as the powerful intelligence to whose gaze the whole future is open. If not, he should have: 19th and 20th century mathematical studies have shown convincingly that neither a finite, nor an infinite but embedded-in-the-world intelligence can have the computing power necessary to predict the actual future, in any world remotely like ours. “Predictability” is therefore a façon de parler that at best makes vivid what is at stake in determinism; in rigorous discussions it should be eschewed. The world could be highly predictable, in some senses, and yet not deterministic; and it could be deterministic yet highly unpredictable, as many studies of chaos (sensitive dependence on initial conditions) show.
Predictability does however make vivid what is at stake in determinism: our fears about our own status as free agents in the world. In Laplace's story, a sufficiently bright demon who knew how things stood in the world 100 years before my birth could predict every action, every emotion, every belief in the course of my life. Were she then to watch me live through it, she might smile condescendingly, as one who watches a marionette dance to the tugs of strings that it knows nothing about. We can't stand the thought that we are (in some sense) marionettes. Nor does it matter whether any demon (or even God) can, or cares to, actually predict what we will do: the existence of the strings of physical necessity, linked to far-past states of the world and determining our current every move, is what alarms us. Whether such alarm is actually warranted is a question well outside the scope of this article (see the entries on free will and incompatibilist theories of freedom). But a clear understanding of what determinism is, and how we might be able to decide its truth or falsity, is surely a useful starting point for any attempt to grapple with this issue. We return to the issue of freedom in Determinism and Human Action below.
Recall that we loosely defined causal determinism as follows, with terms in need of clarification italicized:
Causal determinism: The world is governed by (or is under the sway of) determinism if and only if, given a specified way things are at a time t, the way things go thereafter is fixed as a matter of natural law.
Why should we start so globally, speaking of the world, with all its myriad events, as deterministic? One might have thought that a focus on individual events is more appropriate: an event E is causally determined if and only if there exists a set of prior events {A, B, C, …} that constitute a (jointly) sufficient cause of E. Then if all—or even just most—events E that are our human actions are causally determined, the problem that matters to us, namely the challenge to free will, is in force. Nothing so global as states of the whole world need be invoked, nor even a complete determinism that claims all events to be causally determined.
For a variety of reasons this approach is fraught with problems, and the reasons explain why philosophers of science mostly prefer to drop the word “causal” from their discussions of determinism. Generally, as John Earman quipped (1986), to go this route is to “… seek to explain a vague concept—determinism—in terms of a truly obscure one—causation.” More specifically, neither philosophers' nor laymen's conceptions of events have any correlate in any modern physical theory.[1] The same goes for the notions of cause and sufficient cause. A further problem is posed by the fact that, as is now widely recognized, a set of events {A, B, C, …} can only be genuinely sufficient to produce an effect-event if the set includes an open-ended ceteris paribus clause excluding the presence of potential disruptors that could intervene to prevent E. For example, the start of a football game on TV on a normal Saturday afternoon may be sufficient ceteris paribus to launch Ted toward the fridge to grab a beer; but not if a million-ton asteroid is approaching his house at .75c from a few thousand miles away, nor if the phone is about to ring with news of a tragic nature, …, and so on. Bertrand Russell famously argued against the notion of cause along these lines (and others) in 1912, and the situation has not changed. By trying to define causal determination in terms of a set of prior sufficient conditions, we inevitably fall into the mess of an open-ended list of negative conditions required to achieve the desired sufficiency.
Moreover, thinking about how such determination relates to free action, a further problem arises. If the ceteris paribus clause is open-ended, who is to say that it should not include the negation of a potential disruptor corresponding to my freely deciding not to go get the beer? If it does, then we are left saying “When A, B, C, …, Ted will then go to the fridge for a beer, unless D or E or F or … or Ted decides not to do so.” The marionette strings of a “sufficient cause” begin to look rather tenuous.
They are also too short. For the typical set of prior events that can (intuitively, plausibly) be thought to be a sufficient cause of a human action may be so close in time and space to the agent as to not look like a threat to freedom so much as like enabling conditions. If Ted is propelled to the fridge by {seeing the game's on; desiring to repeat the satisfactory experience of other Saturdays; feeling a bit thirsty; etc.}, such things look more like good reasons to have decided to get a beer, not like external physical events far beyond Ted's control. Compare this with the claim that {state of the world in 1900; laws of nature} entail Ted's going to get the beer: the difference is dramatic. So we have a number of good reasons for sticking to the formulations of determinism that arise most naturally out of physics. And this means that we are not looking at how a specific event of ordinary talk is determined by previous events; we are looking at how everything that happens is determined by what has gone before. The state of the world in 1900 only entails that Ted grabs a beer from the fridge by way of entailing the entire physical state of affairs at the later time.
The typical explication of determinism fastens on the state of the (whole) world at a particular time (or instant), for a variety of reasons. We will briefly explain some of them. Why take the state of the whole world, rather than some (perhaps very large) region, as our starting point? One might, intuitively, think that it would be enough to give the complete state of things on Earth, say, or perhaps in the whole solar system, at t, to fix what happens thereafter (for a time at least). But notice that all sorts of influences from outside the solar system come in at the speed of light, and they may have important effects. Suppose Mary looks up at the sky on a clear night, and a particularly bright blue star catches her eye; she thinks “What a lovely star; I think I'll stay outside a bit longer and enjoy the view.” The state of the solar system one month ago did not fix that blue light from Sirius would arrive and strike Mary's retina; the light entered the solar system only a day ago, let's say. So evidently, for Mary's actions (and hence, all physical events generally) to be fixed by the state of things a month ago, that state will have to be fixed over a much larger spatial region than just the solar system. (If no physical influences can go faster than light, then the state of things must be given over a spherical volume of space 1 light-month in radius.)
But in making vivid the “threat” of determinism, we often want to fasten on the idea of the entire future of the world as being determined. No matter what the “speed limit” on physical influences is, if we want the entire future of the world to be determined, then we will have to fix the state of things over all of space, so as not to miss out something that could later come in “from outside” to spoil things. In the time of Laplace, of course, there was no known speed limit to the propagation of physical things such as light-rays. In principle light could travel at any arbitrarily high speed, and some thinkers did suppose that it was transmitted “instantaneously.” The same went for the force of gravity. In such a world, evidently, one has to fix the state of things over the whole of the world at a time t, in order for events to be strictly determined, by the laws of nature, for any amount of time thereafter.
In all this, we have been presupposing the common-sense Newtonian framework of space and time, in which the world-at-a-time is an objective and meaningful notion. Below, when we discuss determinism in relativistic theories, we will revisit this assumption.
For a wide class of physical theories (i.e., proposed sets of laws of nature), if they can be viewed as deterministic at all, they can be viewed as bi-directionally deterministic. That is, a specification of the state of the world at a time t, along with the laws, determines not only how things go after t, but also how things go before t. Philosophers, while not exactly unaware of this symmetry, tend to ignore it when thinking of the bearing of determinism on the free will issue. The reason for this is that we tend to think of the past (and hence, states of the world in the past) as done, over, fixed and beyond our control. Forward-looking determinism then entails that these past states—beyond our control, perhaps occurring long before humans even existed—determine everything we do in our lives. It then seems a mere curious fact that it is equally true that the state of the world now determines everything that happened in the past. We have an ingrained habit of taking the direction of both causation and explanation as being past-to-present, even when discussing physical theories free of any such asymmetry. We will return to this point shortly.
Another point to notice here is that the notion of things being determined thereafter is usually taken in an unlimited sense—i.e., determination of all future events, no matter how remote in time. But conceptually speaking, the world could be only imperfectly deterministic: things could be determined only, say, for a thousand years or so from any given starting state of the world. For example, suppose that near-perfect determinism were regularly (but infrequently) interrupted by spontaneous particle creation events, which occur on average only once every thousand years in a thousand-light-year-radius volume of space. This unrealistic example shows how determinism could be strictly false, and yet the world be deterministic enough for our concerns about free action to be unchanged.
In the loose statement of determinism we are working from, metaphors such as “govern” and “under the sway of” are used to indicate the strong force being attributed to the laws of nature. Part of understanding determinism—and especially, whether and why it is metaphysically important—is getting clear about the status of the presumed laws of nature.
In the physical sciences, the assumption that there are fundamental, exceptionless laws of nature, and that they have some strong sort of modal force, usually goes unquestioned. Indeed, talk of laws “governing” and so on is so commonplace that it takes an effort of will to see it as metaphorical. We can characterize the usual assumptions about laws in this way: the laws of nature are assumed to be pushy explainers. They make things happen in certain ways, and by having this power, their existence lets us explain why things happen in certain ways. (For a recent defense of this perspective on laws, see Maudlin (2007).) Laws, we might say, are implicitly thought of as the cause of everything that happens. If the laws governing our world are deterministic, then in principle everything that happens can be explained as following from states of the world at earlier times. (Again, we note that even though the entailment typically works in the future-to-past direction also, we have trouble thinking of this as a legitimate explanatory entailment. In this respect also, we see that laws of nature are being implicitly treated as the causes of what happens: causation, intuitively, can only go past-to-future.)
It is a remarkable fact that philosophers tend to acknowledge the apparent threat determinism poses to free will, even when they explicitly reject the view that laws are pushy explainers. Earman (1986), for example, explicitly adopts a theory of laws of nature that takes them to be simply the best system of regularities that systematizes all the events in universal history. This is the Best Systems Analysis (BSA), with roots in the work of Hume, Mill and Ramsey, and most recently refined and defended by David Lewis (1973, 1994) and by Earman (1984, 1986) (cf. the entry on laws of nature). Yet he ends his comprehensive Primer on Determinism with a discussion of the free will problem, taking it as a still-important and unresolved issue. Prima facie at least, this is quite puzzling, for the BSA is founded on the idea that the laws of nature are ontologically derivative, not primary; it is the events of universal history, as brute facts, that make the laws be what they are, and not vice-versa. Taking this idea seriously, the actions of every human agent in history are simply a part of the universe-wide pattern of events that determines what the laws are for this world. It is then hard to see how the most elegant summary of this pattern, the BSA laws, can be thought of as determiners of human actions. The determination or constraint relations, it would seem, can go one way or the other, not both!
On second thought, however, it is not so surprising that broadly Humean philosophers such as Ayer, Earman, Lewis and others still see a potential problem for freedom posed by determinism. For even if human actions are part of what makes the laws be what they are, this does not mean that we automatically have freedom of the kind we think we have, particularly freedom to have done otherwise given certain past states of affairs. It is one thing to say that everything occurring in and around my body, and everything everywhere else, conforms to Maxwell's equations and thus the Maxwell equations are genuine exceptionless regularities, and that because they in addition are simple and strong, they turn out to be laws. It is quite another thing to add: thus, I might have chosen to do otherwise at certain points in my life, and if I had, then Maxwell's equations would not have been laws. One might try to defend this claim—unpalatable as it seems intuitively, to ascribe ourselves law-breaking power—but it does not follow directly from a Humean approach to laws of nature. Instead, on such views that deny laws most of their pushiness and explanatory force, questions about determinism and human freedom simply need to be approached afresh.
A second important genre of theories of laws of nature holds that the laws are in some sense necessary. For any such approach, laws are just the sort of pushy explainers that are assumed in the traditional language of physical scientists and free will theorists. But a third and growing class of philosophers holds that (universal, exceptionless, true) laws of nature simply do not exist. Among those who hold this are influential philosophers such as Nancy Cartwright, Bas van Fraassen, and John Dupré. For these philosophers, there is a simple consequence: determinism is a false doctrine. As with the Humeans, this does not mean that concerns about human free action are automatically resolved; instead, they must be addressed afresh in the light of whatever account of physical nature without laws is put forward. See Dupré (2001) for one such discussion.
We can now put our—still vague—pieces together. Determinism requires a world that (a) has a well-defined state or description, at any given time, and (b) laws of nature that are true at all places and times. If we have all these, then if (a) and (b) together logically entail the state of the world at all other times (or, at least, all times later than that given in (a)), the world is deterministic. Logical entailment, in a sense broad enough to encompass mathematical consequence, is the modality behind the determination in “determinism.”
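One way to make this more precise, in the spirit of Earman (1986), compares possible worlds that share the same laws (the notation below is mine, introduced only for illustration). Writing S_W(t) for the complete state of world W at time t:

\[
W \text{ is deterministic} \iff \text{for every world } W' \text{ with the same laws:}\quad
S_W(t) = S_{W'}(t) \;\Rightarrow\; S_W(t') = S_{W'}(t') \text{ for all } t' > t.
\]

Bi-directional determinism, discussed below, results from replacing "for all t' > t" with "for all t'".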
How could we ever decide whether our world is deterministic or not? Given that some philosophers and some physicists have held firm views—with many prominent examples on each side—one would think that it should be at least a clearly decidable question. Unfortunately, even this much is not clear, and the epistemology of determinism turns out to be a thorny and multi-faceted issue.
As we saw above, for determinism to be true there have to be some laws of nature. Most philosophers and scientists since the 17th century have indeed thought that there are. But in the face of more recent skepticism, how can it be proven that there are? And if this hurdle can be overcome, don't we have to know, with certainty, precisely what the laws of our world are, in order to tackle the question of determinism's truth or falsity?
The first hurdle can perhaps be overcome by a combination of metaphysical argument and appeal to knowledge we already have of the physical world. Philosophers are currently pursuing this issue actively, in large part due to the efforts of the anti-laws minority. The debate has been most recently framed by Cartwright in The Dappled World (Cartwright 1999) in terms psychologically advantageous to her anti-laws cause. Those who believe in the existence of traditional, universal laws of nature are fundamentalists; those who disbelieve are pluralists. This terminology seems to be becoming standard (see Belot 2001), so the first task in the epistemology of determinism is for fundamentalists to establish the reality of laws of nature (see Hoefer 2002b).
Even if the first hurdle can be overcome, the second, namely establishing precisely what the actual laws are, may seem daunting indeed. In a sense, what we are asking for is precisely what 19th and 20th century physicists sometimes set as their goal: the Final Theory of Everything. But perhaps, as Newton said of establishing the solar system's absolute motion, “the thing is not altogether desperate.” Many physicists in the past 60 years or so have been convinced of determinism's falsity, because they were convinced that (a) whatever the Final Theory is, it will be some recognizable variant of the family of quantum mechanical theories; and (b) all quantum mechanical theories are non-deterministic. Both (a) and (b) are highly debatable, but the point is that one can see how arguments in favor of these positions might be mounted. The same was true in the 19th century, when theorists might have argued that (a) whatever the Final Theory is, it will involve only continuous fluids and solids governed by partial differential equations; and (b) all such theories are deterministic. (Here, (b) is almost certainly false; see Earman (1986), ch. XI.) Even if we now are not, we may in future be in a position to mount a credible argument for or against determinism on the grounds of features we think we know the Final Theory must have.
Determinism could perhaps also receive direct support—confirmation in the sense of probability-raising, not proof—from experience and experiment. For theories (i.e., potential laws of nature) of the sort we are used to in physics, it is typically the case that if they are deterministic, then to the extent that one can perfectly isolate a system and repeatedly impose identical starting conditions, the subsequent behavior of the systems should also be identical. And in broad terms, this is the case in many domains we are familiar with. Your computer starts up every time you turn it on, and (if you have not changed any files, have no anti-virus software, re-set the date to the same time before shutting down, and so on …) always in exactly the same way, with the same speed and resulting state (until the hard drive fails). The light comes on exactly 32 µsec after the switch closes (until the day the bulb fails). These cases of repeated, reliable behavior obviously require some serious ceteris paribus clauses, are never perfectly identical, and are always subject to catastrophic failure at some point. But we tend to think that for the small deviations, there probably are explanations in terms of different starting conditions or failed isolation, and that for the catastrophic failures, there definitely are explanations in terms of different conditions.
There have even been studies of paradigmatically “chancy” phenomena such as coin-flipping, which show that if starting conditions can be precisely controlled and outside interferences excluded, identical behavior results (see Diaconis, Holmes & Montgomery 2004). Most of these bits of evidence for determinism no longer seem to cut much ice, however, because of faith in quantum mechanics and its indeterminism. Indeterminist physicists and philosophers are ready to acknowledge that macroscopic repeatability is usually obtainable, where phenomena are so large-scale that quantum stochasticity gets washed out. But they would maintain that this repeatability is not to be found in experiments at the microscopic level, and also that at least some failures of repeatability (in your hard drive, or coin-flipping experiments) are genuinely due to quantum indeterminism, not just failures to isolate properly or establish identical initial conditions.
If quantum theories were unquestionably indeterministic, and deterministic theories guaranteed repeatability of a strong form, there could conceivably be further experimental input on the question of determinism's truth or falsity. Unfortunately, the existence of Bohmian quantum theories casts strong doubt on the former point, while chaos theory casts strong doubt on the latter. More will be said about each of these complications below.
If the world were governed by strictly deterministic laws, might it still look as though indeterminism reigns? This is one of the difficult questions that chaos theory raises for the epistemology of determinism.
A deterministic chaotic system has, roughly speaking, two salient features: (i) the evolution of the system over a long time period effectively mimics a random or stochastic process—it lacks predictability or computability in some appropriate sense; (ii) two systems with nearly identical initial states will have radically divergent future developments, within a finite (and typically, short) timespan. We will use “randomness” to denote the first feature, and “sensitive dependence on initial conditions” (SDIC) for the latter. Definitions of chaos may focus on either or both of these properties; Batterman (1993) argues that only (ii) provides an appropriate basis for defining chaotic systems.
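SDIC is standardly quantified by a positive Lyapunov exponent (a textbook characterization, added here only for concreteness): if two trajectories start a small distance \(\delta_0\) apart, their separation typically grows as

\[
|\delta(t)| \approx |\delta_0|\, e^{\lambda t}, \qquad \lambda > 0,
\]

so that any error in specifying the initial state, however tiny, is amplified exponentially fast.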
A simple and very important example of a chaotic system in both randomness and SDIC terms is the Newtonian dynamics of a pool table with a convex obstacle (or obstacles) (Sinai 1970 and others). See Figure 1:
Figure 1: Billiard table with convex obstacle
The usual idealizing assumptions are made: no friction, perfectly elastic collisions, no outside influences. The ball's trajectory is determined by its initial position and direction of motion. If we imagine a slightly different initial direction, the trajectory will at first be only slightly different. And collisions with the straight walls will not tend to increase very rapidly the difference between trajectories. But collisions with the convex object will have the effect of amplifying the differences. After several collisions with the convex body or bodies, trajectories that started out very close to one another will have become wildly different—SDIC.
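A minimal numerical sketch of SDIC can be given with the logistic map rather than the billiard itself (the map, the parameter value, and the perturbation size below are illustrative choices of mine, not anything drawn from Sinai's example):

```python
# A minimal illustration of sensitive dependence on initial conditions (SDIC),
# using the logistic map x -> r*x*(1-x) with r = 4 as a stand-in for the
# billiard dynamics.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0; return the list of visited states."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)  # one initial condition
b = logistic_trajectory(0.400000001)  # a neighbor, perturbed by 1e-9

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.9f}")
# The difference grows from 1e-9 to order 1 within a few dozen steps:
# deterministic dynamics, yet wildly divergent trajectories.
```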
In the example of the billiard table, we know that we are starting out with a Newtonian deterministic system—that is how the idealized example is defined. But chaotic dynamical systems come in a great variety of types: discrete and continuous, 2-dimensional, 3-dimensional and higher, particle-based and fluid-flow-based, and so on. Mathematically, we may suppose all of these systems share SDIC. But generally they will also display properties such as unpredictability, non-computability, Kolmogorov-random behavior, and so on—at least when looked at in the right way, or at the right level of detail. This leads to the following epistemic difficulty: if, in nature, we find a type of system that displays some or all of these latter properties, how can we decide which of the following two hypotheses is true?
1. The system is governed by genuinely stochastic, indeterministic laws (or by no laws at all), i.e., its apparent randomness is in fact real randomness.

2. The system is governed by underlying deterministic laws, but is chaotic.
In other words, once one appreciates the varieties of chaotic dynamical systems that exist, mathematically speaking, it starts to look difficult—maybe impossible—for us to ever decide whether apparently random behavior in nature arises from genuine stochasticity, or rather from deterministic chaos. Patrick Suppes (1993, 1996) argues, on the basis of theorems proven by Ornstein (1974 and later) that “There are processes which can equally well be analyzed as deterministic systems of classical mechanics or as indeterministic semi-Markov processes, no matter how many observations are made.” And he concludes that “Deterministic metaphysicians can comfortably hold to their view knowing they cannot be empirically refuted, but so can indeterministic ones as well.” (Suppes (1993), p. 254)
There is certainly an interesting problem area here for the epistemology of determinism, but it must be handled with care. It may well be true that there are some deterministic dynamical systems that, when viewed properly, display behavior indistinguishable from that of a genuinely stochastic process. For example, using the billiard table above, if one divides its surface into quadrants and looks at which quadrant the ball is in at 30-second intervals, the resulting sequence is no doubt highly random. But this does not mean that the same system, when viewed in a different way (perhaps at a higher degree of precision) does not cease to look random and instead betray its deterministic nature. If we partition our billiard table into squares 2 centimeters on a side and look at which square the ball is in at .1-second intervals, the resulting sequence will be far from random. And finally, of course, if we simply look at the billiard table with our eyes, and see it as a billiard table, there is no obvious way at all to maintain that it may be a truly random process rather than a deterministic dynamical system. (See Winnie (1996) for a nice technical and philosophical discussion of these issues. Winnie explicates Ornstein's and others' results in some detail, and disputes Suppes' philosophical conclusions.)
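The point can be made concrete with the same logistic-map stand-in used above (again a toy of mine, not the billiard): coarse-grained observation yields a coin-flip-like sequence, while fine-grained observation exposes the exact deterministic rule.

```python
# The same deterministic system, viewed two ways (reusing the
# logistic_trajectory function from the sketch above).
xs = logistic_trajectory(0.123456789, steps=10_000)

# Coarse-grained view: record only which half of [0, 1] the state is in.
# The resulting H/T sequence looks like a run of fair coin flips.
symbols = ''.join('H' if x < 0.5 else 'T' for x in xs)
print(symbols[:40])

# Fine-grained view: successive states obey x_{n+1} = 4 x_n (1 - x_n)
# exactly, betraying the underlying deterministic rule.
worst = max(abs(xs[n + 1] - 4.0 * xs[n] * (1.0 - xs[n]))
            for n in range(len(xs) - 1))
print(worst)  # prints 0.0
```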
The dynamical systems usually studied under the label of “chaos” are either purely abstract, mathematical systems, or classical Newtonian systems. It is natural to wonder whether chaotic behavior carries over into the realm of systems governed by quantum mechanics as well. Interestingly, it is much harder to find natural correlates of classical chaotic behavior in true quantum systems (see Gutzwiller (1990)). Some, at least, of the interpretive difficulties of quantum mechanics would have to be resolved before a meaningful assessment of chaos in quantum mechanics could be achieved. For example, SDIC is hard to find in the Schrödinger evolution of a wavefunction for a system with finite degrees of freedom; but in Bohmian quantum mechanics it is handled quite easily on the basis of particle trajectories (see Dürr, Goldstein and Zanghì (1992)).
The popularization of chaos theory in the past decade and a half has perhaps made it seem self-evident that nature is full of genuinely chaotic systems. In fact, it is far from self-evident that such systems exist, other than in an approximate sense. Nevertheless, the mathematical exploration of chaos in dynamical systems helps us to understand some of the pitfalls that may attend our efforts to know whether our world is genuinely deterministic or not.
Let us suppose that we shall never have the Final Theory of Everything before us—at least in our lifetime—and that we also remain unclear (on physical/experimental grounds) as to whether that Final Theory will be of a type that can or cannot be deterministic. Is there nothing left that could sway our belief toward or against determinism? There is, of course: metaphysical argument. Metaphysical arguments on this issue are not currently very popular. But philosophical fashions change at least twice a century, and grand systemic metaphysics of the Leibnizian sort might one day come back into favor. Conversely, the anti-systemic, anti-fundamentalist metaphysics propounded by Cartwright (1999) might also come to predominate. As likely as not, for the foreseeable future metaphysical argument may be just as good a basis on which to discuss determinism's prospects as any arguments from mathematics or physics.
John Earman's Primer on Determinism (1986) remains the richest storehouse of information on the truth or falsity of determinism in various physical theories, from classical mechanics to quantum mechanics and general relativity. (See also his recent update on the subject, “Aspects of Determinism in Modern Physics” (2007).) Here I will give only a brief discussion of some key issues, referring the reader to Earman (1986) and other resources for more detail. Figuring out whether well-established theories are deterministic or not (or to what extent, if they fall only a bit short) does not do much to help us know whether our world is really governed by deterministic laws; all our current best theories, including General Relativity and the Standard Model of particle physics, are too flawed and ill-understood to be mistaken for anything close to a Final Theory. Nevertheless, as Earman (1986) stressed, the exploration is very valuable because of the way it deepens our understanding of the richness and complexity of determinism.
Despite the common belief that classical mechanics (the theory that inspired Laplace in his articulation of determinism) is perfectly deterministic, in fact the theory is rife with possibilities for determinism to break down. One class of problems arises due to the absence of an upper bound on the velocities of moving objects. Below we see the trajectory of an object that is accelerated unboundedly, its velocity becoming in effect infinite in a finite time. See Figure 2:
Figure 2: An object accelerates so as to reach spatial infinity in a finite time
By the time t = t*, the object has literally disappeared from the world—its world-line never reaches the t = t* surface. (Never mind how the object gets accelerated in this way; there are mechanisms that are perfectly consistent with classical mechanics that can do the job. In fact, Xia (1992) showed that such acceleration can be accomplished by gravitational forces from only 5 finite objects, without collisions. No mechanism is shown in these diagrams.) This “escape to infinity,” while disturbing, does not yet look like a violation of determinism. But now recall that classical mechanics is time-symmetric: any model has a time-inverse, which is also a consistent model of the theory. The time-inverse of our escaping body is playfully called a “space invader.”
Figure 3: A ‘space invader’ comes in from spatial infinity
Clearly, a world with a space invader does fail to be deterministic. Before t = 0, there was nothing in the state of things to enable the prediction of the appearance of the invader at t = 0+.[2] One might think that the infinity of space is to blame for this strange behavior, but this is not obviously correct. In finite, “rolled-up” or cylindrical versions of Newtonian space-time, space-invader trajectories can be constructed, though whether a “reasonable” mechanism to power them exists is not clear.[3]
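To see how escape to spatial infinity in finite time is kinematically possible at all, consider a toy one-dimensional trajectory (my illustration, not Xia's construction): let the object's position be

\[
x(t) = \tan\!\left(\frac{\pi t}{2 t^*}\right), \qquad 0 \le t < t^*.
\]

Then \(x(0) = 0\), but \(x(t) \to \infty\) as \(t \to t^*\): the world-line never reaches the \(t = t^*\) surface. The velocity and the force required grow without bound, yet both are finite at every instant before \(t^*\); running the motion in reverse gives the schematic form of a space invader.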
A second class of determinism-breaking models can be constructed on the basis of collision phenomena. The first problem is that of multiple-particle collisions, for which Newtonian particle mechanics simply does not have a prescription for what happens. (Consider three identical point-particles approaching each other at 120-degree angles and colliding simultaneously. That they bounce back along their approach trajectories is possible; but it is equally possible for them to bounce in other directions (again with 120-degree angles between their paths), so long as momentum conservation is respected.)
Moreover, there is a burgeoning literature of physical or quasi-physical systems, usually set in the context of classical physics, that carry out supertasks (see Earman and Norton (1998) and the entry on supertasks for a review). Frequently, the puzzle presented is to decide, on the basis of the well-defined behavior before time t = a, what state the system will be in at t = a itself. A failure of classical mechanics (CM) to dictate a well-defined result can then be seen as a failure of determinism.
In supertasks, one frequently encounters infinite numbers of particles, infinite (or unbounded) mass densities, and other dubious infinitary phenomena. Coupled with some of the other breakdowns of determinism in CM, one begins to get a sense that most, if not all, breakdowns of determinism rely on some combination of the following set of (physically) dubious mathematical notions: {infinite space; unbounded velocity; continuity; point-particles; singular fields}. The trouble is, it is difficult to imagine any recognizable physics (much less CM) that eschews everything in the set.
Finally, an elegant example of apparent violation of determinism in classical physics has been created by John Norton (2003). As illustrated in Figure 4, imagine a ball sitting at the apex of a frictionless dome whose equation is specified as a function of radial distance from the apex point. This rest-state is our initial condition for the system; what should its future behavior be? Clearly one solution is for the ball to remain at rest at the apex indefinitely.
Figure 4: A ball may spontaneously start sliding down this dome, with no violation of Newton's laws.
(Reproduced courtesy of John D. Norton andPhilosopher's Imprint)
But curiously, this is not the only solution under standard Newtonian laws. The ball may also start into motion sliding down the dome—at any moment in time, and in any radial direction. This example displays “uncaused motion” without, Norton argues, any violation of Newton's laws, including the First Law. And it does not, unlike some supertask examples, require an infinity of particles. Still, many philosophers are uncomfortable with the moral Norton draws from his dome example, and point out reasons for questioning the dome's status as a Newtonian system (see e.g. Malament (2007)).
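The mathematics behind the example is brief (a sketch of Norton's (2003) construction, with r the radial distance along the dome's surface and units chosen so the constants drop out). The dome's shape makes the net force on the ball along the surface equal to \(\sqrt{r}\), so Newton's second law reads

\[
\frac{d^2 r}{dt^2} = \sqrt{r}.
\]

One solution is \(r(t) = 0\) for all \(t\): the ball sits at the apex forever. But for any time \(T\), the function

\[
r(t) =
\begin{cases}
0 & t \le T \\[2pt]
(t - T)^4 / 144 & t \ge T
\end{cases}
\]

also satisfies the equation, as direct differentiation confirms: the ball sits at rest until the arbitrary moment \(T\), then spontaneously slides off. Uniqueness of solutions fails because \(\sqrt{r}\) is not Lipschitz continuous at \(r = 0\), so the standard uniqueness theorem for differential equations does not apply.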
Two features of special relativistic physics make it perhaps the most hospitable environment for determinism of any major theoretical context: the fact that no process or signal can travel faster than the speed of light, and the static, unchanging spacetime structure. The former feature, including a prohibition against tachyons (hypothetical particles travelling faster than light),[4] rules out space invaders and other unbounded-velocity systems. The latter feature makes the space-time itself nice and stable and non-singular—unlike the dynamic space-time of General Relativity, as we shall see below. For source-free electromagnetic fields in special-relativistic space-time, a nice form of Laplacean determinism is provable. Unfortunately, interesting physics needs more than source-free electromagnetic fields. Earman (1986) ch. IV surveys in depth the pitfalls for determinism that arise once things are allowed to get more interesting (e.g. by the addition of particles interacting gravitationally).
Defining an appropriate form of determinism for the context of general relativistic physics is extremely difficult, due to both foundational interpretive issues and the plethora of weirdly-shaped space-time models allowed by the theory's field equations. The simplest way of treating the issue of determinism in GTR would be to state flatly: determinism fails, frequently, and in some of the most interesting models. To leave it at that would however be to miss an important opportunity to use determinism to probe physical and philosophical issues of great importance (a use of determinism stressed frequently by Earman). Here we will briefly describe some of the most important challenges that arise for determinism, directing the reader yet again to Earman (1986), and also Earman (1995), for more depth.
In GTR, we specify a model of the universe by giving a triple of mathematical objects, <M, g, T>. M represents a continuous “manifold”: that means a sort of unstructured space(-time), made up of individual points and having smoothness or continuity, and dimensionality (usually, 4-dimensional), but no further structure. What further structure does a space-time need? Typically, at least, we expect the time-direction to be distinguished from space-directions; and we expect there to be well-defined distances between distinct points; and also a determinate geometry (making certain continuous paths in M be straight lines, etc.). All of this extra structure is coded into g. So M and g together represent space-time. T represents the matter and energy content distributed around in space-time (if any, of course).
For mathematical reasons not relevant here, it turns out to be possible to take a given model spacetime and perform a mathematical operation called a “hole diffeomorphism” h* on it; the diffeomorphism's effect is to shift around the matter content T and the metric g relative to the continuous manifold M.[5] If the diffeomorphism is chosen appropriately, it can move around T and g after a certain time t = 0, but leave everything alone before that time. Thus, the new model represents the matter content (now h*T) and the metric (h*g) as differently located relative to the points of M making up space-time. Yet, the new model is also a perfectly valid model of the theory. This looks on the face of it like a form of indeterminism: GTR's equations do not specify how things will be distributed in space-time in the future, even when the past before a given time t is held fixed. See Figure 5:
Figure 5: “Hole” diffeomorphism shifts contents of spacetime
Usually the shift is confined to a finite region called the hole (for historical reasons). Then it is easy to see that the state of the world at time t = 0 (and all the history that came before) does not suffice to fix whether the future will be that of our first model, or its shifted counterpart in which events inside the hole are different.
This is a form of indeterminism first highlighted by Earman and Norton (1987) as an interpretive philosophical difficulty for realism about GTR's description of the world, especially the point manifold M. They showed that realism about the manifold as a part of the furniture of the universe (which they called “manifold substantivalism”) commits us to a radical, automatic indeterminism in GTR, and they argued that this is unacceptable. (See the entry on the hole argument, and Hoefer (1996) for one response on behalf of the space-time realist, and discussion of other responses.) For now, we will simply note that this indeterminism, unlike most others we are discussing in this section, is empirically vacuous: our two models <M, g, T> and the shifted model <M, h*g, h*T> are empirically indistinguishable.
The separation of space-time structures into manifold and metric (or connection) facilitates mathematical clarity in many ways, but also opens up Pandora's box when it comes to determinism. The indeterminism of the Earman and Norton hole argument is only the tip of the iceberg; singularities make up much of the rest of the berg. In general terms, a singularity can be thought of as a “place where things go bad” in one way or another in the space-time model. For example, near the center of a Schwarzschild black hole, curvature increases without bound, and at the center itself it is undefined, which means that Einstein's equations cannot be said to hold, which means (arguably) that this point does not exist as a part of the space-time at all! Some specific examples are clear, but giving a general definition of a singularity, like defining determinism itself in GTR, is a vexed issue (see Earman (1995) for an extended treatment; Callender and Hoefer (2001) gives a brief overview). We will not attempt here to catalog the various definitions and types of singularity.
Different types of singularity bring different types of threat to determinism. In the case of ordinary black holes, mentioned above, all is well outside the so-called “event horizon”, which is the spherical surface defining the black hole: once a body or light signal passes through the event horizon to the interior region of the black hole, it can never escape again. Generally, no violation of determinism looms outside the event horizon; but what about inside? Some black hole models have so-called “Cauchy horizons” inside the event horizon, i.e., surfaces beyond which determinism breaks down.
Another way for a model spacetime to be singular is to have points or regions go missing, in some cases by simple excision. Perhaps the most dramatic form of this involves taking a nice model with a space-like surface t = E (i.e., a well-defined part of the space-time that can be considered “the state of the world at time E”), and cutting out and throwing away this surface and all points temporally later. The resulting spacetime satisfies Einstein's equations; but, unfortunately for any inhabitants, the universe comes to a sudden and unpredictable end at time E. This is too trivial a move to be considered a real threat to determinism in GTR; we can impose a reasonable requirement that space-time not “run out” in this way without some physical reason (the spacetime should be “maximally extended”). For discussion of precise versions of such a requirement, and whether they succeed in eliminating unwanted singularities, see Earman (1995, chapter 2).
The most problematic kinds of singularities, in terms of determinism, are naked singularities (singularities not hidden behind an event horizon). When a singularity forms from gravitational collapse, the usual model of such a process involves the formation of an event horizon (i.e. a black hole). A universe with an ordinary black hole has a singularity, but as noted above, (outside the event horizon at least) nothing unpredictable happens as a result. A naked singularity, by contrast, has no such protective barrier. In much the way that anything can disappear by falling into an excised-region singularity, or appear out of a white hole (white holes themselves are, in fact, technically naked singularities), there is the worry that anything at all could pop out of a naked singularity, without warning (hence, violating determinism en passant). While most white hole models have Cauchy surfaces and are thus arguably deterministic, other naked singularity models lack this property. Physicists disturbed by the unpredictable potentialities of such singularities have worked to try to prove various cosmic censorship hypotheses that show—under (hopefully) plausible physical assumptions—that such things do not arise by stellar collapse in GTR (and hence are not liable to come into existence in our world). To date no very general and convincing forms of the hypothesis have been proven, so the prospects for determinism in GTR as a mathematical theory do not look terribly good.
As indicated above, QM is widely thought to be a strongly non-deterministic theory. Popular belief (even among most physicists) holds that phenomena such as radioactive decay, photon emission and absorption, and many others are such that only a probabilistic description of them can be given. The theory does not say what happens in a given case, but only says what the probabilities of various results are. So, for example, according to QM the fullest description possible of a radium atom (or a chunk of radium, for that matter), does not suffice to determine when a given atom will decay, nor how many atoms in the chunk will have decayed at any given time. The theory gives only the probabilities for a decay (or a number of decays) to happen within a given span of time. Einstein and others perhaps thought that this was a defect of the theory that should eventually be removed, by a supplemental hidden variable theory[6] that restores determinism; but subsequent work showed that no such hidden variables account could exist. At the microscopic level the world is ultimately mysterious and chancy.
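To make the probabilistic character concrete: for radioactive decay, the statistics take the textbook exponential form sketched below, yielding only a chance of decay within a given interval, never a decay time (the half-life figure is the standard value for radium-226; the 100-year interval is an arbitrary example of mine):

```python
import math

def decay_probability(half_life, t):
    """Chance that a single atom decays within time t, given its half-life
    (both in the same units); P(t) = 1 - exp(-lambda * t)."""
    lam = math.log(2) / half_life  # decay constant lambda
    return 1.0 - math.exp(-lam * t)

# Radium-226 has a half-life of about 1600 years. The theory will not say
# *when* a given atom decays; it yields only figures like this:
print(decay_probability(1600.0, 100.0))  # ~0.042: chance of decay in 100 yr
```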
So goes the story; but like much popular wisdom, it is partly mistaken and/or misleading. Ironically, quantum mechanics is one of the best prospects for a genuinely deterministic theory in modern times! Even more than in the case of GTR and the hole argument, everything hinges on what interpretational and philosophical decisions one adopts. The fundamental law at the heart of non-relativistic QM is the Schrödinger equation. The evolution of a wavefunction describing a physical system under this equation is normally taken to be perfectly deterministic.[7] If one adopts an interpretation of QM according to which that's it—i.e., nothing ever interrupts Schrödinger evolution, and the wavefunctions governed by the equation tell the complete physical story—then quantum mechanics is a perfectly deterministic theory. There are several interpretations that physicists and philosophers have given of QM which go this way. (See the entry on quantum mechanics.)
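In its standard textbook form, the equation is

\[
i\hbar\, \frac{\partial}{\partial t}\, \psi(t) = \hat{H}\, \psi(t),
\]

where \(\hat{H}\) is the Hamiltonian operator for the system. Because the equation is first-order in time and linear, a specification of \(\psi\) at any one time fixes \(\psi\) at all other times, earlier as well as later: determinism of exactly the bi-directional sort discussed earlier.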
More commonly—and this is part of the basis for the popular wisdom—physicists have resolved the quantum measurement problem by postulating that some process of “collapse of the wavefunction” occurs from time to time (particularly during measurements and observations) that interrupts Schrödinger evolution. The collapse process is usually postulated to be indeterministic, with probabilities for various outcomes, via Born's rule, calculable on the basis of a system's wavefunction. The once-standard Copenhagen interpretation of QM posits such a collapse. It has the virtue of solving certain paradoxes such as the infamous Schrödinger's cat paradox, but few philosophers or physicists can take it very seriously unless they are either idealists or instrumentalists. The reason is simple: the collapse process is not physically well-defined, and feels too ad hoc to be a fundamental part of nature's laws.[8]
In 1952 David Bohm created an alternative interpretation of QM—perhaps better thought of as an alternative theory—that realizes Einstein's dream of a hidden variable theory, restoring determinism and definiteness to micro-reality. In Bohmian quantum mechanics, unlike other interpretations, it is postulated that all particles have, at all times, a definite position and velocity. In addition to the Schrödinger equation, Bohm posited a guidance equation that determines, on the basis of the system's wavefunction and the particles' initial positions, what their future positions and velocities will be. As much as any classical theory of point particles moving under force fields, then, Bohm's theory is deterministic. Amazingly, he was also able to show that, as long as the statistical distribution of initial particle positions is chosen so as to meet a “quantum equilibrium” condition, his theory is empirically equivalent to standard Copenhagen QM. In one sense this is a philosopher's nightmare: with genuine empirical equivalence as strong as Bohm obtained, it seems experimental evidence can never tell us which description of reality is correct. (Fortunately, we can safely assume that neither is perfectly correct, and hope that our Final Theory has no such empirically equivalent rivals.) In other senses, the Bohm theory is a philosopher's dream come true, eliminating much (but not all) of the weirdness of standard QM and restoring determinism to the physics of atoms and photons. The interested reader can find out more from the link above, and references therein.
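Schematically, in the modern formulation of Dürr, Goldstein and Zanghì (1992) cited above, the guidance equation gives each particle's velocity as a function of the wavefunction evaluated at the actual particle configuration:

\[
\frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\, \mathrm{Im}\!\left( \frac{\nabla_k \psi}{\psi} \right)\!(Q_1, \ldots, Q_N),
\]

where \(Q_k\) and \(m_k\) are the position and mass of the k-th particle. Together with the Schrödinger equation for \(\psi\), this is a first-order deterministic dynamics: the wavefunction and the initial configuration \((Q_1, \ldots, Q_N)\) fix the entire history.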
This small survey of determinism's status in some prominent physical theories, as indicated above, does not really tell us anything about whether determinism is true of our world. Instead, it raises a couple of further disturbing possibilities for the time when we do have the Final Theory before us (if such a time ever comes): first, we may have difficulty establishing whether the Final Theory is deterministic or not—depending on whether the theory comes loaded with unsolved interpretational or mathematical puzzles. Second, we may have reason to worry that the Final Theory, if indeterministic, has an empirically equivalent yet deterministic rival (as illustrated by Bohmian quantum mechanics).
Some philosophers maintain that if determinism holds in our world, then there are no objective chances in our world. And often the word ‘chance’ here is taken to be synonymous with ‘probability’, so these philosophers maintain that there are no non-trivial objective probabilities for events in our world. (The caveat “non-trivial” is added here because on some accounts all future events that actually happen have probability, conditional on past history, equal to one, and future events that do not happen have probability equal to zero. Non-trivial probabilities are probabilities strictly between zero and one.) Conversely, it is often held that if there are laws of nature that are irreducibly probabilistic, determinism must be false. (Some philosophers would go on to add that such irreducibly probabilistic laws are the basis of whatever genuine objective chances obtain in our world.)
The discussion of quantum mechanics in section 4 shows that it may be difficult to know whether a physical theory postulates genuinely irreducible probabilistic laws or not. If a Bohmian version of QM is correct, then the probabilities dictated by the Born rule are not irreducible. If that is the case, should we say that the probabilities dictated by quantum mechanics are not objective? Or should we say that we need to distinguish ‘chance’ and ‘probability’ after all—and hold that not all objective probabilities should be thought of as objective chances? The first option may seem hard to swallow, given the many-decimal-place accuracy with which such probability-based quantities as half-lives and cross-sections can be reliably predicted and verified experimentally with QM.
Whether objective chance and determinism are really incompatible or not may depend on what view of the nature of laws is adopted. On a “pushy explainers” view of laws such as that defended by Maudlin (2007), probabilistic laws are interpreted as irreducible dynamical transition-chances between allowed physical states, and the incompatibility of such laws with determinism is immediate. But what should a defender of a Humean view of laws, such as the BSA theory (section 2.4 above), say about probabilistic laws? The first thing that needs to be done is explain how probabilistic laws can fit into the BSA account at all, and this requires modification or expansion of the view, since as first presented the only candidates for laws of nature are true universal generalizations. If ‘probability’ were a univocal, clearly understood notion then this might be simple: we allow universal generalizations whose logical form is something like: “Whenever conditions Y obtain, Pr(A) = x”. But it is not at all clear how the meaning of ‘Pr’ should be understood in such a generalization; and it is even less clear what features the Humean pattern of actual events must have, for such a generalization to be held true. (See the entry on interpretations of probability and Lewis (1994).)
Humeans about laws believe that what laws there are is a matter of what patterns are there to be discerned in the overall mosaic of events that happen in the history of the world. It seems plausible enough that the patterns to be discerned may include not only strict associations (whenever X, Y), but also stable statistical associations. If the laws of nature can include either sort of association, a natural question to ask seems to be: why can't there be non-probabilistic laws strong enough to ensure determinism, and on top of them, probabilistic laws as well? If a Humean wanted to capture the laws not only of fundamental theories, but also of non-fundamental branches of physics such as (classical) statistical mechanics, such a peaceful coexistence of deterministic laws plus further probabilistic laws would seem to be desirable. Loewer (2004) argues that this peaceful coexistence can be achieved within Lewis' version of the BSA account of laws.
In the introduction, we noted the threat that determinism seems to pose to human free agency. It is hard to see how, if the state of the world 1000 years ago fixes everything I do during my life, I can meaningfully say that I am a free agent, the author of my own actions, which I could have freely chosen to perform differently. After all, I have neither the power to change the laws of nature, nor to change the past! So in what sense can I attribute freedom of choice to myself?
Philosophers have not lacked ingenuity in devising answers to this question. There is a long tradition of compatibilists arguing that freedom is fully compatible with physical determinism. Hume went so far as to argue that determinism is a necessary condition for freedom—or at least, he argued that some causality principle along the lines of “same cause, same effect” is required. There have been equally numerous and vigorous responses by those who are not convinced. Can a clear understanding of what determinism is, and how it tends to succeed or fail in real physical theories, shed any light on the controversy?
Physics, particularly 20th century physics, does have one lesson to impart to the free will debate; a lesson about the relationship between time and determinism. Recall that we noticed that the fundamental theories we are familiar with, if they are deterministic at all, are time-symmetrically deterministic. That is, earlier states of the world can be seen as fixing all later states; but equally, later states can be seen as fixing all earlier states. We tend to focus only on the former relationship, but we are not led to do so by the theories themselves.
Nor does 20th (or 21st) century physics countenance the idea that there is anything ontologically special about the past, as opposed to the present and the future. In fact, it fails to use these categories in any respect, and teaches that in some senses they are probably illusory.[9] So there is no support in physics for the idea that the past is “fixed” in some way that the present and future are not, or that it has some ontological power to constrain our actions that the present and future do not have. It is not hard to uncover the reasons why we naturally do tend to think of the past as special, and assume that both physical causation and physical explanation work only in the past-to-present/future direction (see the entry on thermodynamic asymmetry in time). But these pragmatic matters have nothing to do with fundamental determinism. If we shake loose from the tendency to see the past as special, when it comes to the relationships of determinism, it may prove possible to think of a deterministic world as one in which each part bears a determining—or partial-determining—relation to other parts, but in which no particular part (i.e., region of space-time) has a special, stronger determining role than any other. Hoefer (2002) uses these considerations to argue in a novel way for the compatibility of determinism with human free agency.
Related entries: compatibilism | free will | Hume, David | incompatibilism: (nondeterministic) theories of free will | laws of nature | Popper, Karl | probability, interpretations of | quantum mechanics | quantum mechanics: Bohmian mechanics | Russell, Bertrand | space and time: supertasks | space and time: the hole argument | time: thermodynamic asymmetry in
The author would like to acknowledge the invaluable help of John Norton in the preparation of this entry. Thanks also to A. Ilhamy Amiry for bringing to my attention some errors in an earlier version of this entry.