In philosophy, the philosophy of physics deals with conceptual and interpretational issues in physics, many of which overlap with research done by certain kinds of theoretical physicists. Historically, philosophers of physics have engaged with questions such as the nature of space, time, matter and the laws that govern their interactions, as well as the epistemological and ontological basis of the theories used by practicing physicists. The discipline draws upon insights from various areas of philosophy, including metaphysics, epistemology, and philosophy of science, while also engaging with the latest developments in theoretical and experimental physics.
Contemporary work focuses on issues at the foundations of the three pillars of modern physics: quantum mechanics, relativity, and thermodynamics and statistical mechanics.
Other areas of focus include the nature of physical laws, symmetries, and conservation principles; the role of mathematics; and philosophical implications of emerging fields like quantum gravity, quantum information, and complex systems. Philosophers of physics have argued that conceptual analysis clarifies foundations, interprets implications, and guides theory development in physics.
The existence and nature of space and time (or space-time) are central topics in the philosophy of physics.[1] Issues include (1) whether space and time are fundamental or emergent, and (2) how space and time are operationally different from one another.
In classical mechanics, time is taken to be a fundamental quantity (that is, a quantity which cannot be defined in terms of other quantities). However, certain theories such as loop quantum gravity claim that spacetime is emergent. As Carlo Rovelli, one of the founders of loop quantum gravity, has said: "No more fields on spacetime: just fields on fields".[2] Time is defined via measurement, by its standard time interval. Currently, the standard time interval (called the "conventional second", or simply "second") is defined as the duration of 9,192,631,770 periods of the radiation corresponding to a hyperfine transition in the caesium-133 atom (ISO 31-1). What time is and how it works follows from this definition. Time can then be combined mathematically with the fundamental quantities of space and mass to define concepts such as velocity, momentum, energy, and fields.
Both Isaac Newton and Galileo Galilei,[3] as well as most people up until the 20th century, thought that time was the same for everyone everywhere.[4] The modern conception of time is based on Albert Einstein's theory of relativity and Hermann Minkowski's spacetime, in which rates of time run differently in different inertial frames of reference, and space and time are merged into spacetime. Einstein's general relativity, as well as the redshift of the light from receding distant galaxies, indicates that the entire Universe and possibly spacetime itself began about 13.8 billion years ago in the Big Bang. Einstein's theory of special relativity has made theories of time on which there is something metaphysically special about the present seem much less plausible to most (though not all) philosophers, since the reference-frame dependence of time appears to leave no room for a privileged present moment.
Space is one of the few fundamental quantities in physics, meaning that it cannot be defined via other quantities because nothing more fundamental is known at present. Thus, similar to the definition of other fundamental quantities (like time and mass), space is defined via measurement. Currently, the standard space interval, called the standard metre or simply metre, is defined as the distance traveled by light in a vacuum during a time interval of exactly 1/299792458 of a second.
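These operational definitions chain together: the second is fixed by the caesium frequency, and the metre is then fixed by the second together with an exact value for the speed of light. A minimal sketch in Python (the constant names are illustrative, not from any standard library):

```python
# Exact defining constants of the SI second and metre
CS_HYPERFINE_HZ = 9_192_631_770       # caesium-133 hyperfine frequency: defines the second
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # fixed exactly: defines the metre via the second

# One second is the duration of 9,192,631,770 caesium periods,
# so a single period lasts:
caesium_period_s = 1 / CS_HYPERFINE_HZ

# One metre is the distance light travels in 1/299792458 of a second:
one_metre = SPEED_OF_LIGHT_M_PER_S * (1 / SPEED_OF_LIGHT_M_PER_S)
print(one_metre)  # ~1.0, by construction
```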
In classical physics, space is a three-dimensional Euclidean space in which any position can be described using three coordinates and parameterised by time. Special and general relativity use four-dimensional spacetime rather than three-dimensional space; and currently there are many speculative theories which use more than three spatial dimensions.
Quantum mechanics is a major focus of contemporary philosophy of physics, specifically concerning the correct interpretation of quantum mechanics. Very broadly, much of the philosophical work done in quantum theory tries to make sense of superposition states:[5] the property that particles seem not to be in just one determinate position at a given time, but somehow both 'here' and 'there' at the same time. Such a radical view turns many common-sense metaphysical ideas on their head. Much of contemporary philosophy of quantum mechanics aims to make sense of what the very empirically successful formalism of quantum mechanics tells us about the physical world.
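In the standard formalism, a superposition is simply a normalized sum of state vectors, and the Born rule assigns each measurement outcome a probability equal to its squared amplitude. A minimal sketch (the labels 'here' and 'there' are informal stand-ins for two basis states):

```python
import numpy as np

# Two basis states, informally 'here' = |0> and 'there' = |1>
here = np.array([1.0, 0.0])
there = np.array([0.0, 1.0])

# An equal superposition: the particle is not simply in one position
psi = (here + there) / np.sqrt(2)

# Born rule: probability of each outcome is the squared amplitude
probs = np.abs(psi) ** 2
print(probs)  # each outcome equally likely on measurement
```

This is what makes superposition philosophically puzzling: before measurement, the state assigns no determinate position, only probabilities for outcomes.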
The uncertainty principle is a mathematical relation asserting a fundamental limit to the precision with which any pair of conjugate variables, e.g. position and momentum, can be simultaneously measured. In the formalism of operator notation, this limit follows from the commutator of the variables' corresponding operators: for position and momentum, [x, p] = iħ, which yields the bound σ_x σ_p ≥ ħ/2.
The uncertainty principle arose as an answer to the question: How does one measure the location of an electron around a nucleus if an electron is a wave? When quantum mechanics was developed, it was seen to be a relation between the classical and quantum descriptions of a system using wave mechanics.
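The position–momentum bound can be checked numerically. The sketch below (working in units where ħ = 1, with a Gaussian wavepacket, for which the bound happens to be saturated) estimates the position spread on a grid and the momentum spread via a Fourier transform:

```python
import numpy as np

hbar = 1.0  # natural units
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

# Gaussian wavepacket with position spread sigma_x = 1
psi = np.exp(-x**2 / 4.0)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Position uncertainty (mean position is zero by symmetry)
sigma_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

# Momentum-space wavefunction via FFT; p = hbar * k
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
phi = np.fft.fft(psi)
phi /= np.sqrt(np.sum(np.abs(phi)**2))  # normalize; grid factors cancel in the ratio
sigma_p = hbar * np.sqrt(np.sum(k**2 * np.abs(phi)**2))

print(sigma_x * sigma_p)  # ~0.5 = hbar/2: the Gaussian saturates the bound
```

Narrowing the wavepacket in position (a smaller width in the exponential) widens it in momentum, and the product never drops below ħ/2.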
Bell's theorem is a term encompassing a number of closely related results in physics, all of which determine that quantum mechanics is incompatible with local hidden-variable theories given some basic assumptions about the nature of measurement. "Local" here refers to the principle of locality, the idea that a particle can only be influenced by its immediate surroundings, and that interactions mediated by physical fields cannot propagate faster than the speed of light. "Hidden variables" are putative properties of quantum particles that are not included in the theory but nevertheless affect the outcome of experiments. In the words of physicist John Stewart Bell, for whom this family of results is named, "If [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local."[6]
The term is broadly applied to a number of different derivations, the first of which was introduced by Bell in a 1964 paper titled "On the Einstein Podolsky Rosen Paradox". Bell's paper was a response to a 1935 thought experiment that Albert Einstein, Boris Podolsky and Nathan Rosen proposed, arguing that quantum physics is an "incomplete" theory.[7][8] By 1935, it was already recognized that the predictions of quantum physics are probabilistic. Einstein, Podolsky and Rosen presented a scenario that involves preparing a pair of particles such that the quantum state of the pair is entangled, and then separating the particles to an arbitrarily large distance. The experimenter has a choice of possible measurements that can be performed on one of the particles. When they choose a measurement and obtain a result, the quantum state of the other particle apparently collapses instantaneously into a new state depending upon that result, no matter how far away the other particle is. This suggests that either the measurement of the first particle somehow also influenced the second particle faster than the speed of light, or that the entangled particles had some unmeasured property which pre-determined their final quantum states before they were separated. Therefore, assuming locality, quantum mechanics must be incomplete, as it cannot give a complete description of the particle's true physical characteristics. In other words, quantum particles, like electrons and photons, must carry some property or attributes not included in quantum theory, and the uncertainties in quantum theory's predictions would then be due to ignorance or unknowability of these properties, later termed "hidden variables".
Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes of the two measurements are correlated. This constraint would later be named the Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles can carry non-classical correlations no matter how widely separated they become.[9][10]
Multiple variations on Bell's theorem were put forward in the following years, introducing other closely related conditions generally known as Bell (or "Bell-type") inequalities. The first rudimentary experiment designed to test Bell's theorem was performed in 1972 by John Clauser and Stuart Freedman.[11] More advanced experiments, known collectively as Bell tests, have been performed many times since. To date, Bell tests have consistently found that physical systems obey quantum mechanics and violate Bell inequalities, which is to say that the results of these experiments are incompatible with any local hidden-variable theory.[12][13]
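The violation can be reproduced directly from the quantum formalism. A minimal sketch of the CHSH variant (assuming a singlet state and spin measurements in the x–z plane, with the standard angle choices that maximize the CHSH quantity):

```python
import numpy as np

# Pauli matrices
sz = np.array([[1, 0], [0, -1]], dtype=float)
sx = np.array([[0, 1], [1, 0]], dtype=float)

def spin(theta):
    """Spin observable along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet (maximally entangled) state of two qubits: (|01> - |10>)/sqrt(2)
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

def E(ta, tb):
    """Correlation of outcomes for settings ta (Alice) and tb (Bob)."""
    return singlet @ np.kron(spin(ta), spin(tb)) @ singlet

# Standard CHSH measurement settings
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

print(abs(S))  # ~2.828 = 2*sqrt(2), exceeding the local hidden-variable bound of 2
```

Any local hidden-variable account constrains |S| ≤ 2; the quantum prediction of 2√2 is what Bell tests have repeatedly confirmed.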
The exact nature of the assumptions required to prove a Bell-type constraint on correlations has been debated by physicists and by philosophers. While the significance of Bell's theorem is not in doubt, its full implications for the interpretation of quantum mechanics remain unresolved.
In March 1927, working in Niels Bohr's institute, Werner Heisenberg formulated the principle of uncertainty, thereby laying a foundation of what became known as the Copenhagen interpretation of quantum mechanics. Heisenberg had been studying the papers of Paul Dirac and Pascual Jordan. He discovered a problem with the measurement of basic variables in the equations: his analysis showed that uncertainties, or imprecisions, always turned up if one tried to measure the position and the momentum of a particle at the same time. Heisenberg concluded that these uncertainties were not the fault of the experimenter, but fundamental in nature: they are inherent mathematical properties of the operators of quantum mechanics, arising from the definitions of those operators.[14]
The Copenhagen interpretation is somewhat loosely defined, as many physicists and philosophers of physics have advanced similar but not identical views of quantum mechanics. It is principally associated with Heisenberg and Bohr, despite their philosophical differences.[15][16] Features common to Copenhagen-type interpretations include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that objects have certain pairs of complementary properties that cannot all be observed or measured simultaneously.[17] Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object, except according to the results of its measurement. Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of any arbitrary factors in the physicist's mind.[18]: 85–90
The many-worlds interpretation of quantum mechanics by Hugh Everett III holds that the wave-function of a quantum system is a literal description of the reality of that physical system. It denies wavefunction collapse, and claims that superposition states should be interpreted literally, as describing a reality of many worlds in which objects are located, and not simply as indicating the indeterminacy of those variables. This is sometimes argued as a corollary of scientific realism,[19] which states that scientific theories aim to give us literally true descriptions of the world.
One issue for the Everett interpretation is the role that probability plays in this account. The Everettian account is completely deterministic, whereas probability seems to play an ineliminable role in quantum mechanics.[20] Contemporary Everettians have argued that one can recover an account of probability that follows the Born rule through certain decision-theoretic proofs,[21] but there is as yet no consensus about whether any of these proofs are successful.[22][23][24]
Physicist Roland Omnès noted that it is impossible to experimentally differentiate between Everett's view, on which the wave-function decoheres into distinct worlds each of which exists equally, and the more traditional view on which a decoherent wave-function leaves only one unique real result. Hence, the dispute between the two views represents a great "chasm": "Every characteristic of reality has reappeared in its reconstruction by our theoretical model; every feature except one: the uniqueness of facts."[25]
The philosophy of thermal and statistical physics is concerned with the foundational issues and conceptual implications of thermodynamics and statistical mechanics. These branches of physics deal with the macroscopic behavior of systems comprising a large number of microscopic entities, such as particles, and with the nature of laws that emerge from these systems, like irreversibility and entropy. Philosophers' interest in statistical mechanics first arose from the observation of an apparent conflict between the time-reversal symmetry of fundamental physical laws and the irreversibility observed in thermodynamic processes, known as the arrow of time problem. Philosophers have sought to understand how the asymmetric behavior of macroscopic systems, such as the tendency of heat to flow from hot to cold bodies, can be reconciled with the time-symmetric laws governing the motion of individual particles.
Another key issue is the interpretation of probability in statistical mechanics, which is primarily concerned with the question of whether probabilities in statistical mechanics are epistemic, reflecting our lack of knowledge about the precise microstate of a system, or ontic, representing an objective feature of the physical world. The epistemic interpretation, also known as the subjective or Bayesian view, holds that probabilities in statistical mechanics are a measure of our ignorance about the exact state of a system. According to this view, we resort to probabilistic descriptions only due to the practical impossibility of knowing the precise properties of all of a system's micro-constituents, like the positions and momenta of particles. As such, the probabilities are not objective features of the world but rather arise from our ignorance. In contrast, the ontic interpretation, also called the objective or frequentist view, asserts that probabilities in statistical mechanics are real, physical properties of the system itself. Proponents of this view argue that the probabilistic nature of statistical mechanics is not merely a reflection of our ignorance but an intrinsic feature of the physical world, and that even if we had complete knowledge of the microstate of a system, its macroscopic behavior would still be best described by probabilistic laws.
Aristotelian physics viewed the universe as a sphere with a center. Matter, composed of the classical elements (earth, water, air, and fire), sought to go down, towards the center of the universe and of the Earth, or up, away from it. Things in the aether, such as the Moon, the Sun, the planets, or the stars, circled the center of the universe.[26] Movement is defined as change in place,[26] i.e. space.[27]
The implicit axioms of Aristotelian physics with respect to movement of matter in space were superseded in Newtonian physics by Newton's first law of motion.[28]
Every body perseveres in its state either of rest or of uniform motion in a straight line, except insofar as it is compelled to change its state by impressed forces.
"Every body" includes the Moon and an apple; and includes all types of matter, air as well as water, stones, or even a flame. Nothing has a natural or inherent motion.[29] Absolute space is three-dimensional Euclidean space, infinite and without a center.[29] Being "at rest" means being at the same place in absolute space over time.[30] The topology and affine structure of space must permit movement in a straight line at a uniform velocity; thus both space and time must have definite, stable dimensions.[31]
Gottfried Wilhelm Leibniz, 1646–1716, was a contemporary of Newton. He contributed a fair amount to the statics and dynamics emerging around him, often disagreeing with Descartes and Newton. He devised a new theory of motion (dynamics) based on kinetic energy and potential energy, which posited space as relative, whereas Newton was thoroughly convinced that space was absolute. An important example of Leibniz's mature physical thinking is his Specimen Dynamicum of 1695.[32]
Until the discovery of subatomic particles and the quantum mechanics governing them, many of Leibniz's speculative ideas about aspects of nature not reducible to statics and dynamics made little sense.
He anticipated Albert Einstein by arguing, against Newton, that space, time and motion are relative, not absolute:[33] "As for my own opinion, I have said more than once, that I hold space to be something merely relative, as time is, that I hold it to be an order of coexistences, as time is an order of successions."[34]