In cosmology, the cosmological constant problem or vacuum catastrophe is the substantial disagreement between the observed values of vacuum energy density (the small value of the cosmological constant) and the much larger theoretical value of zero-point energy suggested by quantum field theory.
Depending on the Planck energy cutoff and other factors, the quantum vacuum energy contribution to the effective cosmological constant is calculated to be between 50 and as many as 120 orders of magnitude greater than has actually been observed,[1][2] a state of affairs described by physicists as "the largest discrepancy between theory and experiment in all of science"[1] and "probably the worst theoretical prediction in the history of physics".[3]
The basic problem of a vacuum energy producing a gravitational effect was identified as early as 1916 by Walther Nernst.[4][5][6] He predicted that the value had to be either zero or very small. In 1926, Wilhelm Lenz concluded that "If one allows waves of the shortest observed wavelengths λ ≈ 2 × 10⁻¹¹ cm, ... and if this radiation, converted to material density (u/c² ≈ 10⁶), contributed to the curvature of the observable universe – one would obtain a vacuum energy density of such a value that the radius of the observable universe would not reach even to the Moon."[7][6]
After the development of quantum field theory in the 1940s, the first to address contributions of quantum fluctuations to the cosmological constant was Yakov Zeldovich in the 1960s.[8][9] In quantum mechanics, the vacuum itself should experience quantum fluctuations. In general relativity, those quantum fluctuations constitute energy that would add to the cosmological constant. However, this calculated vacuum energy density is many orders of magnitude bigger than the observed cosmological constant.[10] Original estimates of the degree of mismatch were as high as 120 to 122 orders of magnitude;[11][12] however, modern research suggests that, when Lorentz invariance is taken into account, the degree of mismatch is closer to 60 orders of magnitude.[12][13]
With the development of inflationary cosmology in the 1980s, the problem became much more important: as cosmic inflation is driven by vacuum energy, differences in modeling vacuum energy lead to huge differences in the resulting cosmologies. Were the vacuum energy precisely zero, as was once believed, then the expansion of the universe would not accelerate as observed, according to the standard Λ-CDM model.[14]
The vacuum energy density of the Universe based on 2015 measurements by the Planck collaboration is ρvac = 5.96×10⁻²⁷ kg/m³ ≘ 5.3566×10⁻¹⁰ J/m³ = 3.35 GeV/m³,[15][note 1] or about 2.5×10⁻⁴⁷ GeV⁴ in geometrized units.
One assessment, made by Jérôme Martin of the Institut d'Astrophysique de Paris in 2012, placed the expected theoretical vacuum energy scale around 10⁸ GeV⁴, for a difference of about 55 orders of magnitude.[12]
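These figures can be reproduced with a short numerical sketch. The snippet below is illustrative only: the physical constants are standard CODATA values, while the density and the 10⁸ GeV⁴ theoretical scale are the values quoted above.

```python
# Illustrative check of the unit conversions above and of the ~55-order
# gap quoted from Martin (2012). Constants are standard CODATA values;
# rho_vac is the Planck 2015 figure from the text.
import math

c = 2.99792458e8            # speed of light, m/s
GeV_in_J = 1.602176634e-10  # 1 GeV in joules
hbar_c = 1.97326980e-16     # hbar * c in GeV * m

rho_vac_kg = 5.96e-27                   # kg/m^3 (Planck 2015)
rho_vac_J = rho_vac_kg * c**2           # mass density -> energy density, J/m^3
rho_vac_GeV = rho_vac_J / GeV_in_J      # -> GeV/m^3
rho_vac_GeV4 = rho_vac_GeV * hbar_c**3  # -> GeV^4 (natural units)

print(f"{rho_vac_J:.4e} J/m^3")      # ~5.3566e-10 J/m^3
print(f"{rho_vac_GeV:.1f} GeV/m^3")  # ~3.3 GeV/m^3
print(f"{rho_vac_GeV4:.2e} GeV^4")   # ~2.5e-47 GeV^4

theory = 1e8  # GeV^4, the expected theoretical scale quoted above
print(f"gap: {math.log10(theory / rho_vac_GeV4):.0f} orders of magnitude")  # ~55
```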
The calculated vacuum energy is a positive, rather than negative, contribution to the cosmological constant because the existing vacuum has negative quantum-mechanical pressure, while in general relativity, the gravitational effect of negative pressure is a kind of repulsion. (Pressure here is defined as the flux of quantum-mechanical momentum across a surface.) Roughly, the vacuum energy is calculated by summing over all known quantum-mechanical fields, taking into account interactions and self-interactions between the ground states, and then removing all interactions below a minimum "cutoff" wavelength to reflect that existing theories break down and may fail to be applicable around the cutoff scale. Because the energy is dependent on how fields interact within the current vacuum state, the vacuum energy contribution would have been different in the early universe; for example, the vacuum energy would have been significantly different prior to electroweak symmetry breaking during the quark epoch.[12]
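Schematically, for a single free field of mass m (in units where ħ = c = 1), the sum over zero-point energies described above takes the textbook form below; the numerical prefactor depends on conventions, and the formula is illustrative rather than drawn verbatim from the cited sources.

```latex
% Zero-point energy density of one free field of mass m, with modes
% summed up to a hard momentum cutoff k_max (units: \hbar = c = 1).
% For k_max much larger than m, the integral is dominated by the k^4 term.
\rho_{\text{vac}} \approx \int_{0}^{k_{\max}} \frac{\mathrm{d}^{3}k}{(2\pi)^{3}}\,
  \frac{1}{2}\sqrt{k^{2}+m^{2}}
  \;\approx\; \frac{k_{\max}^{4}}{16\pi^{2}} \quad (k_{\max} \gg m).
```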
The vacuum energy in quantum field theory can be set to any value by renormalization. This view treats the cosmological constant as simply another fundamental physical constant not predicted or explained by theory.[16] Such a renormalization constant must be chosen very accurately because of the many-orders-of-magnitude discrepancy between theory and observation, and many theorists consider this ad hoc constant equivalent to ignoring the problem.[1]
Using the Planck mass as the cutoff in a cutoff regularization scheme yields a difference of 120 orders of magnitude between the vacuum energy and the cosmological constant.[17] However, this method violates Lorentz covariance.[17] Using dimensional regularization instead reduces this difference to about 56 orders of magnitude.[17]
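As a rough numerical illustration of the cutoff estimate (an assumption-laden sketch, not the calculation from the cited source), taking the Planck mass as k_max in the schematic integral above and comparing with the observed density gives approximately the quoted gap:

```python
# Rough sketch of the Planck-cutoff estimate. Assumptions: hard cutoff at
# the Planck mass and the 1/(16*pi^2) prefactor from the schematic integral
# above; the result is only meaningful at the order-of-magnitude level.
import math

k_max = 1.22e19                            # Planck mass in GeV (assumed cutoff)
rho_theory = k_max**4 / (16 * math.pi**2)  # ~1.4e74 GeV^4
rho_obs = 2.5e-47                          # GeV^4, observed value from above

# Prints ~121, consistent with the "120 orders of magnitude" quoted above.
print(f"{math.log10(rho_theory / rho_obs):.0f} orders of magnitude")
```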
Some proposals involve modifying gravity to diverge from general relativity. These proposals face the hurdle that the results of observations and experiments so far have tended to be extremely consistent with general relativity and the ΛCDM model, and inconsistent with thus-far proposed modifications. In addition, some of the proposals are arguably incomplete, because they solve the "new" cosmological constant problem by proposing that the actual cosmological constant is exactly zero rather than a tiny number, but fail to solve the "old" cosmological constant problem of why quantum fluctuations seem to fail to produce substantial vacuum energy in the first place. Nevertheless, many physicists argue that, due in part to a lack of better alternatives, proposals to modify gravity should be considered "one of the most promising routes to tackling" the cosmological constant problem.[18]
Bill Unruh and collaborators have argued that when the energy density of the quantum vacuum is modeled more accurately as a fluctuating quantum field, the cosmological constant problem does not arise.[19] Going in a different direction, George F. R. Ellis and others have suggested that in unimodular gravity, the troublesome contributions simply do not gravitate.[20][21] Recently, a fully diffeomorphism-invariant action principle that gives the equations of motion for trace-free Einstein gravity has been proposed, where the cosmological constant emerges as an integration constant.[22]
Another argument, due to Stanley Brodsky and Robert Shrock, is that in light-front quantization, the quantum field theory vacuum becomes essentially trivial. In the absence of vacuum expectation values, there is no contribution from quantum electrodynamics, weak interactions, and quantum chromodynamics to the cosmological constant. It is thus predicted to be zero in a flat spacetime.[23][24] From the perspective of light-front quantization, the origin of the cosmological constant problem is traced back to unphysical non-causal terms in the standard calculation, which lead to an erroneously large value of the cosmological constant.[25]
In 2018, a mechanism for cancelling Λ was proposed through the use of a symmetry-breaking potential in a Lagrangian formalism in which matter exhibits non-vanishing pressure. The model assumes that standard matter provides a pressure which counterbalances the action due to the cosmological constant. Luongo and Muccino have shown that this mechanism permits the vacuum energy to take the value that quantum field theory predicts, with its huge magnitude removed by a counterbalancing term due to baryons and cold dark matter only.[26]
In 1999, Andrew Cohen, David B. Kaplan and Ann Nelson proposed that correlations between the UV and IR cutoffs in effective quantum field theory are enough to reduce the theoretical cosmological constant down to the measured cosmological constant, due to the Cohen–Kaplan–Nelson (CKN) bound.[27] In 2021, Nikita Blinov and Patrick Draper confirmed through the holographic principle that the CKN bound predicts the measured cosmological constant, all while maintaining the predictions of effective field theory in less extreme conditions.[28]
Some propose an anthropic solution,[29] and argue that we live in one region of a vast multiverse that has different regions with different vacuum energies. These anthropic arguments posit that only regions of small vacuum energy such as the one in which we live are reasonably capable of supporting intelligent life. Such arguments have existed in some form since at least 1981. Around 1987, Steven Weinberg estimated that the maximum allowable vacuum energy for gravitationally bound structures to form is problematically large, even given the observational data available in 1987, and concluded the anthropic explanation appears to fail; however, more recent estimates by Weinberg and others, based on other considerations, find the bound to be closer to the actual observed level of dark energy.[30][31] Anthropic arguments gradually gained credibility among many physicists after the discovery of dark energy and the development of the theoretical string theory landscape, but are still derided by a substantial skeptical portion of the scientific community as being problematic to verify. Proponents of anthropic solutions are themselves divided on multiple technical questions surrounding how to calculate the proportion of regions of the universe with various dark energy constants.[30][18]