A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the most gifted human minds.[1] Philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest".[2]
Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology to achieve radically greater intelligence.[3][4] Several futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. The hypothetical creation of the first superintelligence may or may not result from an intelligence explosion or a technological singularity.
Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities.
Several scientists and forecasters have been arguing for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.[5]

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to artificial superintelligence (ASI). Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.[6]
Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials.[7] He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms, in particular, should be able to produce human-level AI.[8] Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.[9]
An AI system capable of self-improvement could enhance its own intelligence, thereby becoming more efficient at improving itself. This cycle of "recursive self-improvement" might cause an intelligence explosion, resulting in the creation of a superintelligence.[10]
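As a schematic illustration (not drawn from the cited sources), the dynamic can be written as a simple recurrence in which a system's capability $I$ grows at each round of self-improvement by an amount proportional to its current capability:

$$I_{t+1} = (1 + k)\,I_t, \qquad I_t = (1 + k)^{t} I_0,$$

where $k > 0$ is the fractional improvement per round. Any fixed $k > 0$ already yields exponential growth, and if the improvement rate itself increases with capability (for example $\mathrm{d}I/\mathrm{d}t = c\,I^{2}$), capability diverges in finite time, the pattern loosely associated with an "intelligence explosion".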
Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)."[11] Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind running on much faster hardware than the brain. A human-like reasoner who could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
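The "seven orders of magnitude" figure follows directly from the quoted numbers:

$$\frac{2\ \text{GHz}}{200\ \text{Hz}} = \frac{2 \times 10^{9}\ \text{Hz}}{2 \times 10^{2}\ \text{Hz}} = 10^{7},$$

and the corresponding signal-speed gap is roughly $\frac{3 \times 10^{8}\ \text{m/s}}{120\ \text{m/s}} \approx 2.5 \times 10^{6}$, about six orders of magnitude.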
Another advantage of computers is modularity; that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.
Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it more likely that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.[12]
The above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.[13]
In 2024, Ilya Sutskever left OpenAI to cofound the startup Safe Superintelligence, which focuses solely on creating a superintelligence that is safe by design, while avoiding "distraction by management overhead or product cycles".[14] Despite not yet offering a product, the startup was valued at $30 billion in February 2025.[15]
In 2025, Meta created Meta Superintelligence Labs, a new AI division led by Alexandr Wang.[16]
Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[17] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process is likely to continue. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.
Selective breeding, nootropics, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process rapidly.[18] A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.[19]
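The selection gains Bostrom cites can be approximated with a simple order-statistics model. The sketch below is a toy Monte Carlo, assuming embryo genotypic IQ scores are normally distributed around the parental mean with a standard deviation of roughly 7.5 points (an illustrative assumption chosen because it reproduces the figures quoted above); it is not Bostrom's own calculation.

```python
import random

def expected_gain(n_embryos, sd=7.5, trials=5000):
    """Estimate the expected IQ gain from selecting the embryo with the
    highest genotypic score out of n_embryos, assuming scores are drawn
    from a normal distribution centred on the parental mean with
    standard deviation sd (an illustrative value, not a cited figure)."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0.0, sd) for _ in range(n_embryos))
    return total / trials

if __name__ == "__main__":
    for n in (2, 10, 100, 1000):
        # 1-in-2 selection gives roughly 4 points; 1-in-1000 roughly 24 points.
        print(f"select 1 of {n}: ~{expected_gain(n):.1f} IQ points")
```

Under these assumptions the simulation returns roughly 4 points for 1-in-2 selection and roughly 24 points for 1-in-1000 selection, in line with the figures Bostrom gives.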
Alternatively, collective intelligence might be constructed by better organizing humans at present levels of individual intelligence. Several writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents.[20] A prediction market is sometimes considered an example of a working collective intelligence system, consisting of humans only (assuming algorithms are not used to inform decisions).[21]
A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain–computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches and argues that designing a superintelligent cyborg interface is an AI-complete problem.[22]
Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen.[23]
In a 2022 survey, the median year by which respondents expected "high-level machine intelligence" with 50% confidence was 2061. The survey defined the achievement of high-level machine intelligence as when unaided machines can accomplish every task better and more cheaply than human workers.[24]
In 2023, OpenAI leaders Sam Altman, Greg Brockman and Ilya Sutskever published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.[25]
In 2025, the forecast scenario AI 2027, led by Daniel Kokotajlo, predicted rapid progress in the automation of coding and AI research, followed by ASI.[26] In September 2025, a review of surveys of scientists and industry experts from the last 15 years reported that most agreed that artificial general intelligence (AGI), a level well below a technological singularity, will occur before the year 2100.[27] A more recent analysis by AIMultiple reported that "current surveys of AI researchers are predicting AGI around 2040".[27]
The design of superintelligent AI systems raises critical questions about what values and goals these systems should have. Several proposals have been put forward, including coherent extrapolated volition (CEV), moral rightness (MR), and moral permissibility (MP).[28]
Bostrom elaborates on these concepts:
instead of implementing humanity's coherent extrapolated volition, one could try to build an AI to do what is morally right, relying on the AI's superior cognitive capacities to figure out just which actions fit that description. We can call this proposal "moral rightness" (MR) ...
MR would also appear to have some disadvantages. It relies on the notion of "morally right", a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of "moral rightness" could result in outcomes that would be morally very wrong ...
One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on moral permissibility: the idea being that we could let the AI pursue humanity's CEV so long as it did not act in morally impermissible ways.[28]
Since Bostrom's analysis, new approaches to AI value alignment have emerged.
The rapid advancement of transformer-based LLMs has led to speculation about their potential path to ASI. Some researchers argue that scaled-up versions of these models could exhibit ASI-like capabilities.[32]
However, critics argue that current LLMs lack true understanding and are merely sophisticated pattern matchers, raising questions about their suitability as a path to ASI.[36]
Researchers have offered additional viewpoints on the development and implications of superintelligence.
The pursuit of value-aligned AI faces several challenges.
Current research directions include multi-stakeholder approaches that incorporate diverse perspectives, methods for scalable oversight of AI systems, and improved techniques for robust value learning.[40][41]
As AI research progresses toward superintelligence, addressing these design challenges remains crucial for creating ASI systems that are both powerful and aligned with human interests.
The development of artificial superintelligence (ASI) has raised concerns about potential existential risks to humanity. Researchers have proposed various scenarios in which an ASI could pose a significant threat.
Some researchers argue that through recursive self-improvement, an ASI could rapidly become so powerful as to be beyond human control. This concept, known as an "intelligence explosion", was first proposed by I. J. Good in 1965:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.[42]
This scenario presents the AI control problem: how to create an ASI that will benefit humanity while avoiding unintended harmful consequences.[43] Eliezer Yudkowsky argues that solving this problem is crucial before ASI is developed, as a superintelligent system might be able to thwart any subsequent attempts at control.[44]
Even with benign intentions, an ASI could potentially cause harm due to misaligned goals or unexpected interpretations of its objectives. Nick Bostrom provides a stark example of this risk:
When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.[45]
Stuart Russell offers another illustrative scenario:
A system given the objective of maximizing human happiness might find it easier to rewire human neurology so that humans are always happy regardless of their circumstances, rather than to improve the external world.[46]
These examples highlight the potential for catastrophic outcomes even when an ASI is not explicitly designed to be harmful, underscoring the critical importance of precise goal specification and alignment.
Researchers have proposed various approaches to mitigate risks associated with ASI.
Despite these proposed strategies, some experts, such as Roman Yampolskiy, argue that the challenge of controlling a superintelligent AI might be fundamentally unsolvable, emphasizing the need for extreme caution in ASI development.[51]
Not all researchers agree on the likelihood or severity of ASI-related existential risks. Some, like Rodney Brooks, argue that fears of superintelligent AI are overblown and based on unrealistic assumptions about the nature of intelligence and technological progress.[52] Others, such as Joanna Bryson, contend that anthropomorphizing AI systems leads to misplaced concerns about their potential threats.[53]
The rapid advancement of LLMs and other AI technologies has intensified debates about the proximity and potential risks of ASI. While there is no scientific consensus, some researchers and AI practitioners argue that current AI systems may already be approaching AGI or even ASI capabilities.
As of 2024, AI skeptics such as Gary Marcus caution against premature claims of AGI or ASI, arguing that current AI systems, despite their impressive capabilities, still lack true understanding and general intelligence.[56] They emphasize the significant challenges that remain in achieving human-level intelligence, let alone superintelligence.
The debate surrounding the current state and trajectory of AI development underscores the importance of continued research into AI safety and ethics, as well as the need for robust governance frameworks to manage potential risks as AI capabilities continue to advance.[50]