For methodological challenges in quantifying and mitigating the risk, proposed mitigation measures, and related organizations, see Global catastrophic risk.
Nuclear war is an often-predicted cause of the extinction of humankind.[1]
The scientific consensus is that there is a relatively low risk of near-term human extinction due to natural causes.[2][3] The likelihood of human extinction through humankind's own activities, however, is a current area of research and debate.
Before the 18th and 19th centuries, the possibility that humans or other organisms could become extinct was viewed with scepticism.[4] It contradicted the principle of plenitude, a doctrine that all possible things exist.[4] The principle traces back to Aristotle and was an important tenet of Christian theology.[5] Ancient philosophers such as Plato, Aristotle, and Lucretius wrote of the end of humankind only as part of a cycle of renewal. Marcion of Sinope was a proto-Protestant who advocated for antinatalism that could lead to human extinction.[6][7] Later philosophers such as Al-Ghazali, William of Ockham, and Gerolamo Cardano expanded the study of logic and probability and began wondering if abstract worlds existed, including a world without humans. Physicist Edmond Halley stated that the extinction of the human race may be beneficial to the future of the world.[8]
The notion that species can become extinct gained scientific acceptance during the Age of Enlightenment in the 17th and 18th centuries, and by 1800 Georges Cuvier had identified 23 extinct prehistoric species.[4] The idea was gradually bolstered by evidence from the natural sciences, particularly the discovery of fossil evidence of species that appeared to no longer exist and the development of theories of evolution.[5] In On the Origin of Species, Charles Darwin discussed the extinction of species as a natural process and a core component of natural selection.[9] Notably, Darwin was skeptical of the possibility of sudden extinction, viewing it as a gradual process. He held that the abrupt disappearances of species from the fossil record were not evidence of catastrophic extinctions but rather represented unrecognized gaps in the record.[9]
As the possibility of extinction became more widely established in the sciences, so did the prospect of human extinction.[4] In the 19th century, human extinction became a popular topic in science (e.g., Thomas Robert Malthus's An Essay on the Principle of Population) and fiction (e.g., Jean-Baptiste Cousin de Grainville's The Last Man). In 1863, a few years after Darwin published On the Origin of Species, William King proposed that Neanderthals were an extinct species of the genus Homo. The Romantic authors and poets were particularly interested in the topic.[4] Lord Byron wrote about the extinction of life on Earth in his 1816 poem "Darkness," and in 1824 envisaged humanity being threatened by a comet impact and employing a missile system to defend against it.[4] Mary Shelley's 1826 novel The Last Man is set in a world where humanity has been nearly destroyed by a mysterious plague.[4] At the turn of the 20th century, Russian cosmism, a precursor to modern transhumanism, advocated avoiding humanity's extinction by colonizing space.[4]
The invention of the atomic bomb prompted a wave of discussion among scientists, intellectuals, and the public at large about the risk of human extinction.[4] In a 1945 essay, Bertrand Russell wrote:
The prospect for the human race is sombre beyond all precedent. Mankind are faced with a clear-cut alternative: either we shall all perish, or we shall have to acquire some slight degree of common sense.[10]
In 1950, Leo Szilard suggested it was technologically feasible to build a cobalt bomb that could render the planet unlivable. A 1950 Gallup poll found that 19% of Americans believed that another world war would mean "an end to mankind".[11] Rachel Carson's 1962 book Silent Spring raised awareness of environmental catastrophe. In 1983, Brandon Carter proposed the Doomsday argument, which used Bayesian probability to predict the total number of humans that will ever exist.
The discovery of "nuclear winter" in the early 1980s, a specific mechanism by which nuclear war could result in human extinction, again raised the issue to prominence. Writing about these findings in 1983, Carl Sagan argued that measuring the severity of extinction solely in terms of those who die "conceals its full impact," and that nuclear war "imperils all of our descendants, for as long as there will be humans."[12]
John Leslie's 1996 book The End of the World was an academic treatment of the science and ethics of human extinction. In it, Leslie considered a range of threats to humanity and what they have in common. In 2003, British Astronomer Royal Sir Martin Rees published Our Final Hour, in which he argues that advances in certain technologies create new threats to the survival of humankind and that the 21st century may be a critical moment in history when humanity's fate is decided.[13] Edited by Nick Bostrom and Milan M. Ćirković, Global Catastrophic Risks, published in 2008, is a collection of essays from 26 academics on various global catastrophic and existential risks.[14] Nicholas P. Money's 2019 book The Selfish Ape delves into the environmental consequences of overexploitation.[15] Toby Ord's 2020 book The Precipice argues that preventing existential risks is one of the most important moral issues of our time. The book discusses, quantifies, and compares different existential risks, concluding that the greatest risks are presented by unaligned artificial intelligence and biotechnology.[16] Lyle Lewis' 2024 book Racing to Extinction explores the roots of human extinction from an evolutionary biology perspective. Lewis argues that humanity treats unused natural resources as waste and is driving ecological destruction through overexploitation, habitat loss, and denial of environmental limits. He uses vivid examples, like the extinction of the passenger pigeon and the environmental cost of rice production, to show how interconnected and fragile ecosystems are.[17]
Humans (i.e., Homo sapiens sapiens) as a species may also be considered to have "gone extinct" simply by being replaced with distant descendants whose continued evolution may produce new species or subspecies of Homo or of hominids.
Without intervention from unforeseen forces, the stellar evolution of the Sun is expected to render Earth uninhabitable and ultimately lead to its destruction. The entire universe may eventually become uninhabitable, depending on its ultimate fate and the processes that govern it.
Experts generally agree that anthropogenic existential risks are (much) more likely than natural risks.[18][13][19][2][20] A key difference between these risk types is that empirical evidence can place an upper bound on the level of natural risk.[2] Humanity has existed for at least 200,000 years, over which it has been subject to a roughly constant level of natural risk. If the natural risk were high enough, humanity would not have survived this long. Based on a formalization of this argument, researchers have concluded that we can be confident that natural risk is lower than 1 in 14,000 per year (equivalent to 1 in 140 per century, on average).[2]
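The logic of this bound can be illustrated with a short calculation (a simplified sketch assuming a constant annual risk; the cited paper's statistical treatment is more careful):

```python
# Sketch: under a constant annual natural extinction risk p, the chance of
# surviving T years is (1 - p)**T. A ~200,000-year track record therefore
# rules out large values of p with high confidence.

T = 200_000  # minimum number of years Homo sapiens has already survived

def survival_probability(p: float, years: int = T) -> float:
    """Probability of surviving `years` under constant annual risk p."""
    return (1.0 - p) ** years

# At the bound of 1 in 14,000 per year, a 200,000-year survival streak
# would have been astronomically unlikely:
print(survival_probability(1 / 14_000))  # ~6e-7

# The equivalent per-century risk (risks compound, so it is slightly
# less than 100x the annual figure):
print(1 - (1 - 1 / 14_000) ** 100)  # ~0.0071, roughly 1 in 140
```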
Another empirical method to study the likelihood of certain natural risks is to investigate the geological record.[18] For example, the probability of a comet or asteroid impact event sufficient in scale to cause an impact winter and human extinction before the year 2100 has been estimated at one in a million.[21][22] Moreover, large supervolcano eruptions may cause a volcanic winter that could endanger the survival of humanity.[23] The geological record suggests that supervolcanic eruptions occur on average about once every 50,000 years, though most such eruptions would not reach the scale required to cause human extinction.[23] Famously, the supervolcano Mt. Toba may have almost wiped out humanity at the time of its last eruption (though this is contentious).[23][24]
Since anthropogenic risk is a relatively recent phenomenon, humanity's track record of survival cannot provide similar assurances.[2] Nuclear weapons have existed for only about 80 years, and there is no historical track record at all for future technologies. This has led thinkers like Carl Sagan to conclude that humanity is currently in a "time of perils,"[25] a uniquely dangerous period in human history, beginning when humans first started posing risks to themselves through their own actions, in which humanity is subject to unprecedented levels of risk.[18][26] Paleobiologist Olev Vinn has suggested that humans presumably have a number of inherited behavior patterns (IBPs) that are not fine-tuned for conditions prevailing in technological civilization. Some IBPs may be highly incompatible with such conditions and have a high potential to induce self-destruction. These patterns may include responses of individuals seeking power over conspecifics in relation to harvesting and consuming energy.[27] Nonetheless, there are ways to address the issue of inherited behavior patterns.[28]
Given the limitations of ordinary observation and modeling, expert elicitation is frequently used instead to obtain probability estimates.[29]
Humanity has a 95% probability of being extinct within 8,000,000 years, according to J. Richard Gott's formulation of the controversial doomsday argument, which holds that we have probably already lived through half the duration of human history (a derivation of this bound is sketched below).[30]
In 1996, John A. Leslie estimated a 30% risk over the next five centuries (equivalent to around 6% per century, on average).[31]
The Global Challenges Foundation's 2016 annual report estimates a probability of human extinction of at least 0.05% per year (equivalent to 5% per century, on average).[32]
As of July 29, 2025, Metaculus users estimate a 1% probability of human extinction by 2100.[33]
A 2020 study published in Scientific Reports warns that if deforestation and resource consumption continue at current rates, they could lead to a "catastrophic collapse in human population" and possibly "an irreversible collapse of our civilization" within the next 20 to 40 years. Even in the study's most optimistic scenario, the chance that human civilization survives is smaller than 10%. To avoid this collapse, the study says, humanity should pass from a civilization dominated by the economy to a "cultural society" that "privileges the interest of the ecosystem above the individual interest of its components, but eventually in accordance with the overall communal interest."[34][35]
Nick Bostrom argues that it would be "misguided"[36] to assume that the probability of near-term extinction is less than 25%, and that it will be "a tall order" for the human race to "get our precautions sufficiently right the first time," given that an existential risk provides no opportunity to learn from failure.[3][21]
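Gott's figure above can be reproduced with a short calculation (an illustrative sketch of the "delta t" reasoning, assuming our observation point falls uniformly at random within the total lifespan of our species):

```python
# Sketch of Gott's doomsday argument: if the fraction r of total human
# history already elapsed is uniform on (0, 1), the future duration is
# t_future = t_past * (1 - r) / r.

t_past = 200_000  # approximate years of human history to date

# With 95% confidence, r lies in (0.025, 0.975), so:
lower = t_past * (1 - 0.975) / 0.975  # ~5,100 years
upper = t_past * (1 - 0.025) / 0.025  # ~7,800,000 years
print(f"{lower:,.0f} to {upper:,.0f} years")  # the ~8,000,000-year bound
```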
Philosopher John A. Leslie assigns a 70% chance of humanity surviving the next five centuries, based partly on the controversial philosophical doomsday argument that Leslie champions. Leslie's argument is somewhat frequentist, based on the observation that human extinction has never been observed, but it requires subjective anthropic arguments.[37] Leslie also discusses the anthropic survivorship bias (which he calls an "observational selection" effect) and states that the a priori certainty of observing an "undisastrous past" could make it difficult to argue that we must be safe because nothing terrible has yet occurred. He quotes Holger Bech Nielsen's formulation: "We do not even know if there should exist some extremely dangerous decay of, say, the proton, which caused the eradication of the earth, because if it happens we would no longer be there to observe it, and if it does not happen there is nothing to observe."[38]
Jean-Marc Salotti calculated the probability of human extinction caused by a giant asteroid impact.[39] If no planets are colonized, he puts the probability at 0.03 to 0.3 over the next billion years. According to that study, the most dangerous object is a giant long-period comet, with a warning time of only a few years and therefore no time for any intervention in space or settlement on the Moon or Mars. The probability of a giant comet impact in the next hundred years is 2.2×10^−12.[39]
Bill Gates told The Wall Street Journal on January 27, 2025, that he believes there is a 10–15% (midpoint 12.5%) chance of a natural pandemic hitting in the next four years, and a 65–97.5% (midpoint 81.25%) chance of a natural pandemic hitting in the next 26 years.[41]
On March 19, 2025, Henry Gee predicted that humanity will be extinct within the next 10,000 years. To prevent this, he called for humanity to establish space colonies within the next 200–300 years.[42]
On September 11, 2025, Warp News estimated a 20% chance of global catastrophe and a 6% chance of human extinction by 2100. They also estimated a 100% chance of global catastrophe and a 30% chance of human extinction by 2500.[43]
On November 13, 2024, the American Enterprise Institute estimated the probability of nuclear war during the 21st century at between 0% and 80% (midpoint 40%).[44] A 2023 article in The Economist estimated an 8% chance of nuclear war causing global catastrophe and a 0.5625% chance of nuclear war causing human extinction.[45]
On November 13, 2024, the American Enterprise Institute estimated an annual probability of a supervolcanic eruption of around 0.0067% (0.67% per century, on average).[44]
A 2008 survey by the Future of Humanity Institute estimated a 5% probability of extinction by superintelligence by 2100.[19]
A 2016 survey of AI experts found a median estimate of 5% that human-level AI would cause an outcome that was "extremely bad (e.g., human extinction)".[46] In 2019, the median estimate fell to 2%, but in 2022 it returned to 5%. In 2023, it doubled to 10%, and in 2024 it rose to 15%.[47]
In 2020, Toby Ord estimated existential risk over the next century at "1 in 6" in his book The Precipice.[18][48] He also estimated a "1 in 10" risk of extinction caused by unaligned AI within the next century.
According to a July 10, 2023 article in The Economist, scientists estimated a 12% chance of AI-caused catastrophe and a 3% chance of AI-caused extinction by 2100. They also estimated a 100% chance of AI-caused catastrophe and a 25% chance of AI-caused extinction by 2833.
On December 27, 2024, Geoffrey Hinton estimated a 10–20% (midpoint 15%) probability of AI-caused extinction in the next 30 years.[49] He also estimated a 50–100% (midpoint 75%) probability of AI-caused extinction in the next 150 years.
On May 6, 2025, Scientific American estimated a 0–10% (midpoint 5%) probability of AI-caused extinction by 2100.[50]
On August 1, 2025, Holly Elmore estimated a 15–20% (midpoint 17.5%) probability of AI-caused extinction in the next 1–10 years (midpoint 5.5 years). She also estimated a 75–100% (midpoint 87.5%) probability of AI-caused extinction in the next 5–50 years (midpoint 27.5 years).[51]
On November 10, 2025, Elon Musk estimated the probability of AI-driven human extinction at 20%, while others, including Yoshua Bengio's colleagues, placed the risk anywhere between 10% and 90% (midpoint 50%).[52]
In a 2010 interview with The Australian, the late Australian scientist Frank Fenner predicted the extinction of the human race within a century, primarily as the result of human overpopulation, environmental degradation, and climate change.[53] Several economists have discussed the importance of global catastrophic risks. For example, Martin Weitzman argues that most of the expected economic damage from climate change may come from the small chance that warming greatly exceeds the mid-range expectations, resulting in catastrophic damage.[54] Richard Posner has argued that humanity is doing far too little, in general, about small, hard-to-estimate risks of large-scale catastrophes.[55]
Although existential risks are less manageable by individuals than, for example, health risks, according to Ken Olum, Joshua Knobe, and Alexander Vilenkin, the possibility of human extinction does have practical implications. For instance, if the "universal" doomsday argument is accepted, it changes the most likely source of disasters and hence the most efficient means of preventing them.[56]
Some scholars argue that certain scenarios, including global thermonuclear war, would struggle to eradicate every last settlement on Earth. Physicist Willard Wells points out that any credible extinction scenario would have to reach a diverse set of areas, including the underground subways of major cities, the mountains of Tibet, the remotest islands of the South Pacific, and even McMurdo Station in Antarctica, which has contingency plans and supplies for long isolation.[57] In addition, elaborate bunkers exist for government leaders to occupy during a nuclear war.[21] The existence of nuclear submarines, which can remain hundreds of meters deep in the ocean for potentially years at a time, should also be taken into account. Any number of events could lead to a massive loss of human life, but unless the last few most resilient humans (see minimum viable population) are also likely to die off, that particular human extinction scenario may not seem credible.[58]
"Existential risks" are risks that threaten the entire future of humanity, whether by causing human extinction or by otherwise permanently crippling human progress.[3] Multiple scholars have argued, based on the size of the "cosmic endowment," that because of the inconceivably large number of potential future lives that are at stake, even small reductions of existential risk have enormous value.
In one of the earliest discussions of the ethics of human extinction, Derek Parfit offers the following thought experiment:[59]
I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:
(1) Peace. (2) A nuclear war that kills 99% of the world's existing population. (3) A nuclear war that kills 100%.
(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater.
— Derek Parfit
The scale of what is lost in an existential catastrophe is determined by humanity's long-term potential: what humanity could expect to achieve if it survived.[18] From a utilitarian perspective, the value of protecting humanity is the product of its duration (how long humanity survives), its size (how many humans there are over time), and its quality (on average, how good life is for future people).[18]: 273 [60] On average, species survive for around a million years before going extinct. Parfit points out that the Earth will remain habitable for around a billion years.[59] And these might be lower bounds on our potential: if humanity is able to expand beyond Earth, it could greatly increase the human population and survive for trillions of years.[61][18]: 21 The foregone potential that would be lost were humanity to become extinct is therefore very large, and reducing existential risk by even a small amount would have very significant moral value.[3][62]
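Schematically, this utilitarian calculus can be expressed as a product (an illustrative formalization of the framing above, not a formula quoted from the cited sources):

```latex
V \;\approx\; \underbrace{T}_{\text{duration}} \;\times\; \underbrace{N}_{\text{size (people per unit time)}} \;\times\; \underbrace{Q}_{\text{average quality of life}}
```

On this rough accounting, anything that extends humanity's duration or expands its size multiplies the total value at stake, which is why the foregone potential dominates the calculation.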
Carl Sagan wrote in 1983:

If we are required to calibrate extinction in numerical terms, I would be sure to include the number of people in future generations who would not be born.... (By one calculation), the stakes are one million times greater for extinction than for the more modest nuclear wars that kill "only" hundreds of millions of people. There are many other possible measures of the potential loss – including culture and science, the evolutionary history of the planet, and the significance of the lives of all of our ancestors who contributed to the future of their descendants. Extinction is the undoing of the human enterprise.[63]
Philosopher Robert Adams in 1989 rejected Parfit's "impersonal" views but spoke instead of a moral imperative for loyalty and commitment to "the future of humanity as a vast project... The aspiration for a better society—more just, more rewarding, and more peaceful... our interest in the lives of our children and grandchildren, and the hopes that they will be able, in turn, to have the lives of their children and grandchildren as projects."[64]
Philosopher Nick Bostrom argued in 2013 that preference-satisfactionist, democratic, custodial, and intuitionist arguments all converge on the common-sense view that preventing existential risk is a high moral priority, even if the exact "degree of badness" of human extinction varies between these philosophies.[65]
Parfit argues that the size of the "cosmic endowment" can be calculated from the following argument: if Earth remains habitable for a billion more years and can sustainably support a population of more than a billion humans, then there is a potential for 10^16 (or 10,000,000,000,000,000) human lives of normal duration.[66] Bostrom goes further, stating that if the universe is empty, then the accessible universe can support at least 10^34 biological human life-years and, if some humans were uploaded onto computers, could even support the equivalent of 10^54 cybernetic human life-years.[3]
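The arithmetic behind Parfit's 10^16 figure is straightforward (the roughly 100-year lifespan is an assumed round number standing in for "normal duration"):

```latex
\underbrace{10^{9}\ \text{years}}_{\text{remaining habitability}} \times \underbrace{10^{9}\ \text{people}}_{\text{sustainable population}} \;\div\; \underbrace{10^{2}\ \text{years per life}}_{\text{assumed lifespan}} \;=\; 10^{16}\ \text{human lives}
```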
Some economists and philosophers have defended views, including exponential discounting and person-affecting views of population ethics, on which future people do not matter (or matter much less), morally speaking.[67] While these views are controversial,[21][68][69] even their proponents would agree that an existential catastrophe would be among the worst things imaginable. It would cut short the lives of eight billion presently existing people, destroying all of what makes their lives valuable and most likely subjecting many of them to profound suffering. So even setting aside the value of future generations, there may be strong reasons to reduce existential risk, grounded in concern for presently existing people.[70]
Beyond utilitarianism, other moral perspectives lend support to the importance of reducing existential risk. An existential catastrophe would destroy more than just humanity: it would destroy all cultural artifacts, languages, and traditions, and many of the things we value.[18][71] So moral viewpoints on which we have duties to protect and cherish things of value would see this as a huge loss that should be avoided.[18] One can also consider reasons grounded in duties to past generations. For instance, Edmund Burke writes of a "partnership... between those who are living, those who are dead, and those who are to be born".[72] If one takes seriously the debt humanity owes to past generations, Ord argues, the best way of repaying it might be to "pay it forward" and ensure that humanity's inheritance is passed down to future generations.[18]: 49–51
Some philosophers adopt the antinatalist position that human extinction would be a beneficial thing. David Benatar argues that coming into existence is always a serious harm, and therefore it is better that people do not come into existence in the future.[73] Further, Benatar, animal rights activist Steven Best, and anarchist Todd May posit that human extinction would be a positive thing for the other organisms on the planet and the planet itself, citing, for example, the omnicidal nature of human civilization.[74][75][76] The environmental view in favor of human extinction is shared by the members of the Voluntary Human Extinction Movement and the Church of Euthanasia, who call for refraining from reproduction and allowing the human species to go peacefully extinct, thus stopping further environmental degradation.[77]
^ Di Mardi (October 15, 2020). "The grim fate that could be 'worse than extinction'". BBC News. Retrieved November 11, 2020. When we think of existential risks, events like nuclear war or asteroid impacts often come to mind.
^ Raup, David M. (1995). "The Role of Extinction in Evolution". In Fitch, W. M.; Ayala, F. J. (eds.). Tempo And Mode in Evolution: Genetics And Paleontology 50 Years After Simpson. National Academies Press (US).
^ Rees, Martin (2003). Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future In This Century – On Earth and Beyond. Basic Books. ISBN 0-465-06863-4.
^ Money, Nicholas P. (2019). The Selfish Ape: Human Nature and Our Path to Extinction. London: Reaktion Books, Limited. ISBN 978-1-78914-155-9.
^ Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. New York: Hachette. 4:15–31. ISBN 9780316484916. This is an equivalent, though crisper, statement of Nick Bostrom's definition: "An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development." Source: Bostrom, Nick (2013). "Existential Risk Prevention as Global Priority". Global Policy.
^ Sagan, Carl (1994). Pale Blue Dot. Random House. pp. 305–6. ISBN 0-679-43841-6. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.
^ Parfit, Derek (2011). On What Matters Vol. 2. Oxford University Press. p. 616. ISBN 9780199681044. We live during the hinge of history ... If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period.
^ Bostrom, Nick (2002), "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards", Journal of Evolution and Technology, vol. 9. My subjective opinion is that setting this probability lower than 25% would be misguided, and the best estimate may be considerably higher.
^ Grace, Katja; Salvatier, John; Dafoe, Allen; Zhang, Baobao; Evans, Owain (May 3, 2018). "When Will AI Exceed Human Performance? Evidence from AI Experts". arXiv:1705.08807 [cs.AI].
^ Adams, Robert Merrihew (October 1989). "Should Ethics be More Impersonal? A Critical Notice of Derek Parfit, Reasons and Persons". The Philosophical Review. 98 (4): 439–484. doi:10.2307/2185115. JSTOR 2185115.
^ Best, Steven (2014). "Conclusion: Reflections on Activism and Hope in a Dying World and Suicidal Culture". The Politics of Total Liberation: Revolution for the 21st Century. Palgrave Macmillan. p. 165. doi:10.1057/9781137440723_7. ISBN 978-1137471116. In an era of catastrophe and crisis, the continuation of the human species in a viable or desirable form, is obviously contingent and not a given or necessary good. But considered from the standpoint of animals and the earth, the demise of humanity would be the best imaginable event possible, and the sooner the better. The extinction of Homo sapiens would remove the malignancy ravaging the planet, destroy a parasite consuming its host, shut down the killing machines, and allow the earth to regenerate while permitting new species to evolve.
^ May, Todd (December 17, 2018). "Would Human Extinction Be a Tragedy?". The New York Times. Human beings are destroying large parts of the inhabitable earth and causing unimaginable suffering to many of the animals that inhabit it. This is happening through at least three means. First, human contribution to climate change is devastating ecosystems ... Second, the increasing human population is encroaching on ecosystems that would otherwise be intact. Third, factory farming fosters the creation of millions upon millions of animals for whom it offers nothing but suffering and misery before slaughtering them in often barbaric ways. There is no reason to think that those practices are going to diminish any time soon. Quite the opposite.
^ Barcella, Laura (2012). The end: 50 apocalyptic visions from pop culture that you should know about – before it's too late. San Francisco, California: Zest Books. ISBN 978-0982732250.
^ Dinello, Daniel (2005). Technophobia!: science fiction visions of posthuman technology (1st ed.). Austin, Texas: University of Texas Press. ISBN 978-0-292-70986-7.
de Bellaigue, Christopher, "A World Off the Hinges" (review of Peter Frankopan, The Earth Transformed: An Untold History, Knopf, 2023, 695 pp.), The New York Review of Books, vol. LXX, no. 18 (23 November 2023), pp. 40–42. De Bellaigue writes: "Like the Maya and the Akkadians we have learned that a broken environment aggravates political and economic dysfunction and that the inverse is also true. Like the Qing we rue the deterioration of our soils. But the lesson is never learned. [...] Denialism [...] is one of the most fundamental of human traits and helps explain our current inability to come up with a response commensurate with the perils we face." (p. 41.)
Holt, Jim, "The Power of Catastrophic Thinking" (review of Toby Ord, The Precipice: Existential Risk and the Future of Humanity, Hachette, 2020, 468 pp.), The New York Review of Books, vol. LXVIII, no. 3 (February 25, 2021), pp. 26–29. Jim Holt writes (p. 28): "Whether you are searching for a cure for cancer, or pursuing a scholarly or artistic career, or engaged in establishing more just institutions, a threat to the future of humanity is also a threat to the significance of what you do."
Torres, Phil (2017). Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. Pitchstone Publishing. ISBN 978-1634311427.
Michel Weber, "Book Review: Walking Away from Empire", Cosmos and History: The Journal of Natural and Social Philosophy, vol. 10, no. 2, 2014, pp. 329–336.
"Treading Thin Air: Geoff Mann on Uncertainty and Climate Change",London Review of Books, vol. 45, no. 17 (7 September 2023), pp. 17–19. "[W]e are in desperate need of apolitics that looks [the] catastrophicuncertainty [ofglobal warming andclimate change] square in the face. That would mean taking much bigger and more transformative steps: all but eliminatingfossil fuels... and prioritizingdemocratic institutions over markets. The burden of this effort must fall almost entirely on the richest people and richest parts of the world, because it is they who continue to gamble with everyone else's fate." (p. 19.)