Artificial intelligence was founded as an academic discipline in 1956,[6] and the field went through multiple cycles of optimism throughout its history,[7][8] followed by periods of disappointment and loss of funding, known as AI winters.[9][10] Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques.[11] This growth accelerated further after 2017 with the transformer architecture,[12] and by the early 2020s many billions of dollars were being invested in AI and the field experienced rapid ongoing progress in what has become known as the AI boom. The emergence of advanced generative AI in the midst of the AI boom, and its ability to create and modify content, exposed several unintended consequences and harms in the present and raised concerns about the risks of AI and its long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.
Goals
The general problem of simulating (or creating) intelligence has been broken into subproblems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research.[a]
Reasoning and problem-solving
Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[13] By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics.[14]
Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": they become exponentially slower as the problems grow.[15] Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments.[16] Accurate and efficient reasoning is an unsolved problem.
Knowledge representation
An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.
Knowledge representation and knowledge engineering[17] allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval,[18] scene interpretation,[19] clinical decision support,[20] knowledge discovery (mining "interesting" and actionable inferences from large databases),[21] and other areas.[22]
A knowledge base is a body of knowledge represented in a form that can be used by a program. An ontology is the set of objects, relations, concepts, and properties used by a particular domain of knowledge.[23] Knowledge bases need to represent things such as objects, properties, categories, and relations between objects;[24] situations, events, states, and time;[25] causes and effects;[26] knowledge about knowledge (what we know about what other people know);[27] default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing);[28] and many other aspects and domains of knowledge.
Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous);[29] and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally).[16] There is also the difficulty of knowledge acquisition, the problem of obtaining knowledge for AI applications.[c]
Planning and decision-making
An "agent" is anything that perceives and takes actions in the world. Arational agent has goals or preferences and takes actions to make them happen.[d][32] Inautomated planning, the agent has a specific goal.[33] Inautomated decision-making, the agent has preferences—there are some situations it would prefer to be in, and some situations it is trying to avoid. The decision-making agent assigns a number to each situation (called the "utility") that measures how much the agent prefers it. For each possible action, it can calculate the "expected utility": theutility of all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility.[34]
In classical planning, the agent knows exactly what the effect of any action will be.[35] In most real-world problems, however, the agent may not be certain about the situation it is in (the situation is "unknown" or "unobservable") and may not know for certain what will happen after each possible action (the world is not "deterministic"). It must choose an action by making a probabilistic guess and then reassess the situation to see if the action worked.[36]
In some problems, the agent's preferences may be uncertain, especially if there are other agents or humans involved. These can be learned (e.g., with inverse reinforcement learning), or the agent can seek information to improve its preferences.[37] Information value theory can be used to weigh the value of exploratory or experimental actions.[38] The space of possible future actions and situations is typically intractably large, so the agents must take actions and evaluate situations while being uncertain of what the outcome will be.
A Markov decision process has a transition model that describes the probability that a particular action will change the state in a particular way and a reward function that supplies the utility of each state and the cost of each action. A policy associates a decision with each possible state. The policy could be calculated (e.g., by iteration), be heuristic, or it can be learned.[39]
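The following sketch shows one way such a policy can be calculated by iteration (here, value iteration); the toy transition model, rewards, and discount factor are hypothetical:

```python
# Minimal sketch of value iteration on a toy Markov decision process.
# States, actions, transition probabilities, and rewards are hypothetical.

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "s0": {"a": [(0.8, "s1", 5.0), (0.2, "s0", 0.0)],
           "b": [(1.0, "s0", 1.0)]},
    "s1": {"a": [(1.0, "s1", 2.0)]},
}
gamma = 0.9  # discount factor: how much future utility is worth now

values = {s: 0.0 for s in transitions}
for _ in range(100):  # repeat the Bellman backup until it converges
    values = {
        s: max(sum(p * (r + gamma * values[nxt]) for p, nxt, r in outcomes)
               for outcomes in actions.values())
        for s, actions in transitions.items()
    }

# The policy picks, in each state, the action with the highest backed-up value.
policy = {
    s: max(actions, key=lambda a: sum(p * (r + gamma * values[nxt])
                                      for p, nxt, r in actions[a]))
    for s, actions in transitions.items()
}
print(policy)  # {'s0': 'a', 's1': 'a'}
```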
Game theory describes the rational behavior of multiple interacting agents and is used in AI programs that make decisions that involve other agents.[40]
Learning
Machine learning is the study of programs that can improve their performance on a given task automatically.[41] It has been a part of AI from the beginning.[e]
There are several kinds of machine learning. Unsupervised learning analyzes a stream of data and finds patterns and makes predictions without any other guidance.[44] Supervised learning requires labeling the training data with the expected answers, and comes in two main varieties: classification (where the program must learn to predict what category the input belongs in) and regression (where the program must deduce a numeric function based on numeric input).[45]
In reinforcement learning, the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as "good".[46] Transfer learning is when the knowledge gained from one problem is applied to a new problem.[47] Deep learning is a type of machine learning that runs inputs through biologically inspired artificial neural networks for all of these types of learning.[48]
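A minimal sketch of reinforcement learning, using tabular Q-learning on a hypothetical five-cell corridor where the agent is rewarded only for reaching the goal cell:

```python
import random

# Minimal sketch of reinforcement learning with tabular Q-learning.
# The environment is a hypothetical 5-cell corridor: the agent starts
# in cell 0 and is rewarded only for reaching cell 4.
N_STATES, ACTIONS = 5, ("left", "right")
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: move one cell; reward 1.0 only at the goal."""
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for _ in range(500):                          # training episodes
    state = 0
    while state != N_STATES - 1:
        if random.random() < epsilon:         # sometimes explore at random
            action = random.choice(ACTIONS)
        else:                                 # otherwise act greedily
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

print(max(ACTIONS, key=lambda a: Q[(0, a)]))  # after training, typically "right"
```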
Modern deep learning techniques for NLP include word embedding (representing words, typically as vectors encoding their meaning),[52] transformers (a deep learning architecture using an attention mechanism),[53] and others.[54] In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text,[55][56] and by 2023, these models were able to achieve human-level scores on the bar exam, the SAT, the GRE, and many other real-world tests.[57]
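The following sketch illustrates the scaled dot-product attention at the core of the transformer architecture; the token embeddings are random stand-ins for learned word vectors:

```python
import numpy as np

# Minimal sketch of the scaled dot-product attention used by transformers.
# The embeddings are random stand-ins for learned word vectors.

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V: mix values by query-key similarity."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))      # 4 tokens, each an 8-dimensional embedding
print(attention(x, x, x).shape)  # self-attention output: (4, 8)
```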
Perception
Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Computer vision is the ability to analyze visual input.[58]
Kismet, a robot head made in the 1990s that can recognize and simulate emotions.[64]
Affective computing is a field that comprises systems that recognize, interpret, process, or simulate human feeling, emotion, and mood.[65] For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; this makes them appear more sensitive to the emotional dynamics of human interaction, or otherwise facilitates human–computer interaction.
However, this tends to give naïve users an unrealistic conception of the intelligence of existing computer agents.[66] Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal sentiment analysis, wherein AI classifies the affects displayed by a videotaped subject.[67]
AI research uses a wide variety of techniques to accomplish the goals above.[b]
Search and optimization
AI can solve many problems by intelligently searching through many possible solutions.[68] There are two very different kinds of search used in AI: state space search and local search.
State space search
State space search searches through a tree of possible states to try to find a goal state.[69] For example, planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[70]
Adversarial search is used for game-playing programs, such as chess or Go. It searches through a tree of possible moves and countermoves, looking for a winning position.[73]
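A minimal sketch of adversarial search, using plain minimax over a small hand-made game tree (real game-playing programs add pruning and evaluation heuristics):

```python
# Minimal sketch of adversarial (minimax) search over a hypothetical game tree.
# Leaves hold scores from the maximizing player's point of view.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):        # leaf: return its score
        return node
    children = (minimax(child, not maximizing) for child in node)
    return max(children) if maximizing else min(children)

# A tiny hand-made tree: each inner list is a move choice, numbers are outcomes.
tree = [[3, 12], [2, 8], [1, 14]]
print(minimax(tree, maximizing=True))  # 3: best achievable against optimal play
```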
Local search
Illustration of gradient descent for 3 different starting points; two parameters (represented by the horizontal coordinates) are adjusted in order to minimize the loss function (the height)
Gradient descent is a type of local search that optimizes a set of numerical parameters by incrementally adjusting them to minimize a loss function. Variants of gradient descent are commonly used to train neural networks,[75] through the backpropagation algorithm.
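A minimal sketch of gradient descent on a hypothetical one-parameter loss L(w) = (w - 3)^2, whose gradient is known in closed form:

```python
# Minimal sketch of gradient descent minimizing a hypothetical loss
# L(w) = (w - 3)^2, whose gradient is 2 * (w - 3).

def grad(w):
    return 2.0 * (w - 3.0)

w, learning_rate = 0.0, 0.1
for _ in range(100):
    w -= learning_rate * grad(w)   # step against the gradient

print(round(w, 4))  # ~3.0, the minimum of the loss
```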
Another type of local search is evolutionary computation, which aims to iteratively improve a set of candidate solutions by "mutating" and "recombining" them, selecting only the fittest to survive each generation.[76]
Deductive reasoning in logic is the process of proving a new statement (the conclusion) from other statements that are given and assumed to be true (the premises).[81] Proofs can be structured as proof trees, in which nodes are labelled by sentences, and child nodes are connected to parent nodes by inference rules.
Given a problem and a set of premises, problem-solving reduces to searching for a proof tree whose root node is labelled by a solution of the problem and whose leaf nodes are labelled by premises or axioms. In the case of Horn clauses, problem-solving search can be performed by reasoning forwards from the premises or backwards from the problem.[82] In the more general case of the clausal form of first-order logic, resolution is a single, axiom-free rule of inference, in which a problem is solved by proving a contradiction from premises that include the negation of the problem to be solved.[83]
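The following sketch illustrates backward chaining over a hypothetical set of Horn clauses, working from the goal back to premise-free facts (it omits the loop handling a full system would need):

```python
# Minimal sketch of backward chaining over hypothetical Horn clauses.
# Each rule maps a conclusion to lists of premises that establish it;
# a rule with no premises is simply a known fact.

rules = {
    "mortal(socrates)": [["human(socrates)"]],
    "human(socrates)": [[]],          # a premise-free rule, i.e. a fact
}

def prove(goal):
    """Work backwards from the goal to premises that are known facts."""
    for premises in rules.get(goal, []):
        if all(prove(p) for p in premises):
            return True
    return False

print(prove("mortal(socrates)"))  # True
```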
Probabilistic algorithms can also be used for filtering, prediction, smoothing, and finding explanations for streams of data, thus helping perception systems analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[90]
Expectation–maximization clustering of Old Faithful eruption data starts from a random guess but then successfully converges on an accurate clustering of the two physically distinct modes of eruption.
Classifiers and statistical learning methods
The simplest AI applications can be divided into two types: classifiers (e.g., "if shiny then diamond"), on one hand, and controllers (e.g., "if diamond then pick up"), on the other hand. Classifiers[98] are functions that use pattern matching to determine the closest match. They can be fine-tuned based on chosen examples using supervised learning. Each pattern (also called an "observation") is labeled with a certain predefined class. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[45]
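As an illustration, the following sketch implements one simple classifier of this kind, a nearest-neighbour rule over a hypothetical labeled data set: a new observation receives the class of the closest previous observation:

```python
import math

# Minimal sketch of a nearest-neighbour classifier: a new observation is
# assigned the class of the closest labeled example. The data set is hypothetical.

data_set = [((1.0, 1.0), "diamond"), ((1.2, 0.9), "diamond"),
            ((5.0, 5.0), "glass"), ((5.1, 4.8), "glass")]

def classify(observation):
    def distance(example):
        features, _ = example
        return math.dist(features, observation)
    _, label = min(data_set, key=distance)   # closest labeled example wins
    return label

print(classify((1.1, 1.0)))  # "diamond"
```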
A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.
An artificial neural network is based on a collection of nodes, also known as artificial neurons, which loosely model the neurons in a biological brain. It is trained to recognise patterns; once trained, it can recognise those patterns in fresh data. There is an input layer, at least one hidden layer of nodes, and an output layer. Each node applies a function to its inputs, and if the weighted sum crosses a specified threshold, the data is transmitted to the next layer. A network is typically called a deep neural network if it has at least two hidden layers.[104]
Learning algorithms for neural networks use local search to choose the weights that will get the right output for each input during training. The most common training technique is the backpropagation algorithm.[105] Neural networks learn to model complex relationships between inputs and outputs and find patterns in data. In theory, a neural network can learn any function.[106]
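The following sketch trains a one-hidden-layer network by backpropagation on the XOR function; the layer sizes and learning rate are arbitrary illustrative choices:

```python
import numpy as np

# Minimal sketch: a one-hidden-layer network trained by backpropagation
# to fit XOR, a function no single-layer network can represent.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                    # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)         # backward pass: output layer
    d_h = (d_out @ W2.T) * h * (1 - h)          # ...propagated to hidden layer
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())                     # converges toward [0, 1, 1, 0]
```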
Deep learning[110] uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits, letters, or faces.[112]
Deep learning has profoundly improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing, image classification,[113] and others. The reason that deep learning performs so well in so many applications is not known as of 2021.[114] The sudden success of deep learning in 2012–2015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s)[i] but because of two factors: the incredible increase in computer power (including the hundred-fold increase in speed by switching to GPUs) and the availability of vast amounts of training data, especially the giant curated datasets used for benchmark testing, such as ImageNet.[j]
GPT
Generative pre-trained transformers (GPT) are large language models (LLMs) that generate text based on the semantic relationships between words in sentences. Text-based GPT models are pretrained on a large corpus of text that can be from the Internet. The pretraining consists of predicting the next token (a token being usually a word, subword, or punctuation). Throughout this pretraining, GPT models accumulate knowledge about the world and can then generate human-like text by repeatedly predicting the next token. Typically, a subsequent training phase makes the model more truthful, useful, and harmless, usually with a technique called reinforcement learning from human feedback (RLHF). Current GPT models are prone to generating falsehoods called "hallucinations", although this can be reduced with RLHF and quality data. They are used in chatbots, which allow people to ask a question or request a task in simple text.[122][123]
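The following sketch illustrates the generation loop described above: repeatedly predicting the next token and appending it. A toy bigram table stands in for the billions of learned parameters of a real GPT model:

```python
# Minimal sketch of how a GPT-style model generates text: repeatedly predict
# the most likely next token and append it. A toy bigram table stands in for
# the learned parameters of a real large language model.

next_token_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def generate(prompt, max_tokens=3):
    tokens = prompt.split()
    for _ in range(max_tokens):
        options = next_token_probs.get(tokens[-1])
        if not options:
            break
        tokens.append(max(options, key=options.get))  # greedy decoding
    return " ".join(tokens)

print(generate("the"))  # "the cat sat down"
```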
The application of AI in medicine and medical research has the potential to improve patient care and quality of life.[131] Through the lens of the Hippocratic Oath, medical professionals are ethically compelled to use AI, if applications can more accurately diagnose and treat patients.[132][133]
For medical research, AI is an important tool for processing and integrating big data. This is particularly important for organoid and tissue engineering development which use microscopy imaging as a key technique in fabrication.[134] It has been suggested that AI can overcome discrepancies in funding allocated to different fields of research.[134][135] New AI tools can deepen the understanding of biomedically relevant pathways. For example, AlphaFold 2 (2021) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein.[136] In 2023, it was reported that AI-guided drug discovery helped find a class of antibiotics capable of killing two different types of drug-resistant bacteria.[137] In 2024, researchers used machine learning to accelerate the search for Parkinson's disease drug treatments. Their aim was to identify compounds that block the clumping, or aggregation, of alpha-synuclein (the protein that characterises Parkinson's disease). They were able to speed up the initial screening process ten-fold and reduce the cost by a thousand-fold.[138][139]
Game playing programs have been used since the 1950s to demonstrate and test AI's most advanced techniques.[140] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.[141] In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[142] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. Then, in 2017, it defeated Ke Jie, who was the best Go player in the world.[143] Other programs handle imperfect-information games, such as the poker-playing program Pluribus.[144] DeepMind developed increasingly generalist reinforcement learning models, such as MuZero, which could be trained to play chess, Go, or Atari games.[145] In 2019, DeepMind's AlphaStar achieved grandmaster level in StarCraft II, a particularly challenging real-time strategy game that involves incomplete knowledge of what happens on the map.[146] In 2021, an AI agent competed in a PlayStation Gran Turismo competition, winning against four of the world's best Gran Turismo drivers using deep reinforcement learning.[147] In 2024, Google DeepMind introduced SIMA, a type of AI capable of autonomously playing nine previously unseen open-world video games by observing screen output, as well as executing short, specific tasks in response to natural language instructions.[148]
Mathematics
Large language models, such as GPT-4, Gemini, Claude, LLaMa or Mistral, are increasingly used in mathematics. These probabilistic models are versatile, but can also produce wrong answers in the form of hallucinations. They sometimes need a large database of mathematical problems to learn from, but also methods such as supervised fine-tuning[149] or trained classifiers with human-annotated data to improve answers for new problems and learn from corrections.[150] A February 2024 study showed that the performance of some language models for reasoning capabilities in solving math problems not included in their training data was low, even for problems with only minor deviations from trained data.[151] One technique to improve their performance involves training the models to produce correct reasoning steps, rather than just the correct result.[152] The Alibaba Group developed a version of its Qwen models called Qwen2-Math, which achieved state-of-the-art performance on several mathematical benchmarks, including 84% accuracy on the MATH dataset of competition mathematics problems.[153] In January 2025, Microsoft proposed the technique rStar-Math that leverages Monte Carlo tree search and step-by-step reasoning, enabling a relatively small language model like Qwen-7B to solve 53% of the AIME 2024 and 90% of the MATH benchmark problems.[154]
Alternatively, dedicated models for mathematical problem solving with higher precision, including proving theorems, have been developed, such as AlphaTensor, AlphaGeometry and AlphaProof, all from Google DeepMind,[155] Llemma from EleutherAI,[156] or Julius.[157]
When natural language is used to describe mathematical problems, converters can transform such prompts into a formal language such as Lean to define mathematical tasks.
Some models have been developed to solve challenging problems and reach good results in benchmark tests, others to serve as educational tools in mathematics.[158]
Finance is one of the fastest-growing sectors where applied AI tools are being deployed: from retail online banking to investment advice and insurance, where automated "robot advisers" have been in use for some years.[159]
According to Nicolas Firzli, director of the World Pensions & Investments Forum, it may be too early to see the emergence of highly innovative AI-informed financial products and services. He argues that "the deployment of AI tools will simply further automatise things: destroying tens of thousands of jobs in banking, financial planning, and pension advice in the process, but I'm not sure it will unleash a new wave of [e.g., sophisticated] pension innovation."[160]
Various countries are deploying AI military applications.[161] The main applications enhance command and control, communications, sensors, integration and interoperability.[162] Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles.[161] AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, and coordination and deconfliction of distributed Joint Fires between networked combat vehicles, both human-operated and autonomous.[162]
AI has been used in military operations in Iraq, Syria, Israel and Ukraine.[161][163][164][165]
Generative AI
Vincent van Gogh in watercolour created by generative AI software
Generative artificial intelligence (Generative AI, GenAI,[166] or GAI) is a subset of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data.[167][168][169] These models learn the underlying patterns and structures of their training data and use them to produce new data[170][171] based on the input, which often comes in the form of natural language prompts.[172][173]
Generative AI has uses across a wide range of industries, including software development,[180] healthcare,[181] finance,[182] entertainment,[183] customer service,[184] sales and marketing,[185] art, writing,[186] fashion,[187] and product design.[188] However, concerns have been raised about the potential misuse of generative AI, such as cybercrime, the use of fake news or deepfakes to deceive or manipulate people, and the mass replacement of human jobs.[189][190] Intellectual property law concerns also exist around generative models that are trained on and emulate copyrighted works of art.[191]
Agents
Artificial intelligence (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks.[192][193][194]
Sexuality
Applications of AI in this domain include AI-enabled menstruation and fertility trackers that analyze user data to offer predictions,[195] AI-integrated sex toys (e.g., teledildonics),[196] AI-generated sexual education content,[197] and AI agents that simulate sexual and romantic partners (e.g., Replika).[198] AI is also used for the production of non-consensual deepfake pornography, raising significant ethical and legal concerns.[199]
There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated "AI" in some offerings or processes.[202] A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management.
AI applications for evacuation and disaster management are growing. AI has been used to investigate if and how people evacuated in large-scale and small-scale evacuations, using historical data from GPS, videos or social media. Further, AI can provide real-time information on evacuation conditions.[203][204][205]
In agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatment, or increased yield. Agronomists use AI to conduct research and development. AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify the emotions of livestock pig calls, automate greenhouses, detect diseases and pests, and save water.
Artificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for "classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights." For example, it is used for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. Additionally, it could be used for activities in space, such as space exploration, including the analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.
During the 2024 Indian elections, US$50 million was spent on authorized AI-generated content, notably by creating deepfakes of allied (including sometimes deceased) politicians to better engage with voters, and by translating speeches to various local languages.[206]
AI has potential benefits and potential risks.[207] AI may be able to advance science and find solutions for serious problems: Demis Hassabis of DeepMind hopes to "solve intelligence, and then use that to solve everything else".[208] However, as the use of AI has become widespread, several unintended consequences and risks have been identified.[209] In-production systems can sometimes fail to factor ethics and bias into their AI training processes, especially when the AI algorithms are inherently unexplainable in deep learning.[210]
Machine learning algorithms require large amounts of data. The techniques used to acquire this data have raised concerns about privacy, surveillance and copyright.
AI-powered devices and services, such as virtual assistants and IoT products, continuously collect personal information, raising concerns about intrusive data gathering and unauthorized access by third parties. The loss of privacy is further exacerbated by AI's ability to process and combine vast amounts of data, potentially leading to a surveillance society where individual activities are constantly monitored and analyzed without adequate safeguards or transparency.
Sensitive user data collected may include online activity records, geolocation data, video, or audio.[211] For example, in order to build speech recognition algorithms, Amazon has recorded millions of private conversations and allowed temporary workers to listen to and transcribe some of them.[212] Opinions about this widespread surveillance range from those who see it as a necessary evil to those for whom it is clearly unethical and a violation of the right to privacy.[213]
AI developers argue that this is the only way to deliver valuable applications and have developed several techniques that attempt to preserve privacy while still obtaining the data, such as data aggregation, de-identification and differential privacy.[214] Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness. Brian Christian wrote that experts have pivoted "from the question of 'what they know' to the question of 'what they're doing with it'."[215]
Generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under the rationale of "fair use". Experts disagree about how well and under what circumstances this rationale will hold up in courts of law; relevant factors may include "the purpose and character of the use of the copyrighted work" and "the effect upon the potential market for the copyrighted work".[216][217] Website owners who do not wish to have their content scraped can indicate it in a "robots.txt" file.[218] In 2023, leading authors (including John Grisham and Jonathan Franzen) sued AI companies for using their work to train generative AI.[219][220] Another discussed approach is to envision a separate sui generis system of protection for creations generated by AI to ensure fair attribution and compensation for human authors.[221]
In January 2024, the International Energy Agency (IEA) released Electricity 2024, Analysis and Forecast to 2026, forecasting electric power use.[227] This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with additional electric power usage equal to electricity used by the whole Japanese nation.[228]
Prodigious power consumption by AI is responsible for the growth of fossil fuel use, and might delay closings of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search involves the use of 10 times as much electrical energy as a Google search. The large firms are in haste to find power sources, from nuclear energy to geothermal to fusion. The tech firms argue that, in the long view, AI will eventually be kinder to the environment, but they need the energy now. According to technology firms, AI will make the power grid more efficient and "intelligent", assist in the growth of nuclear power, and track overall carbon emissions.[229]
A 2024 Goldman Sachs research paper, AI Data Centers and the Coming US Power Demand Surge, found "US power demand (is) likely to experience growth not seen in a generation...." and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means.[230] Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all.[231]
In 2024, the Wall Street Journal reported that big AI companies have begun negotiations with US nuclear power providers to provide electricity to the data centers. In March 2024, Amazon purchased a Pennsylvania nuclear-powered data center for US$650 million.[232] Nvidia CEO Jen-Hsun Huang said nuclear power is a good option for the data centers.[233]
In September 2024, Microsoft announced an agreement with Constellation Energy to re-open the Three Mile Island nuclear power plant to provide Microsoft with 100% of all electric power produced by the plant for 20 years. Reopening the plant, which suffered a partial nuclear meltdown of its Unit 2 reactor in 1979, will require Constellation to get through strict regulatory processes which will include extensive safety scrutiny from the US Nuclear Regulatory Commission. If approved (this will be the first ever US re-commissioning of a nuclear plant), over 835 megawatts of power, enough for 800,000 homes, will be produced. The cost for re-opening and upgrading is estimated at US$1.6 billion and is dependent on tax breaks for nuclear power contained in the 2022 US Inflation Reduction Act.[234] The US government and the state of Michigan are investing almost US$2 billion to reopen the Palisades Nuclear reactor on Lake Michigan. Closed since 2022, the plant is planned to be reopened in October 2025. The Three Mile Island facility will be renamed the Crane Clean Energy Center after Chris Crane, a nuclear proponent and former CEO of Exelon, who was responsible for Exelon's spinoff of Constellation.[235]
After the last approval in September 2023, Taiwan suspended the approval of data centers north of Taoyuan with a capacity of more than 5 MW in 2024, due to power supply shortages.[236] Taiwan aims to phase out nuclear power by 2025.[236] Singapore, on the other hand, banned the opening of new data centers in 2019 because of electric power constraints, but lifted the ban in 2022.[236]
Although most nuclear plants in Japan have been shut down after the 2011 Fukushima nuclear accident, according to an October 2024 Bloomberg article in Japanese, the cloud gaming services company Ubitus, in which Nvidia has a stake, is looking for land in Japan near a nuclear power plant for a new data center for generative AI.[237] Ubitus CEO Wesley Kuo said nuclear power plants are the most efficient, cheap and stable power source for AI.[237]
On 1 November 2024, the Federal Energy Regulatory Commission (FERC) rejected an application submitted by Talen Energy for approval to supply some electricity from the nuclear power station Susquehanna to Amazon's data center.[238] According to Commission Chairman Willie L. Phillips, it is a burden on the electricity grid as well as a significant cost-shifting concern to households and other business sectors.[238]
In 2025, a report prepared by the International Energy Agency estimated the greenhouse gas emissions from the energy consumption of AI at 180 million tons. By 2035, these emissions could rise to 300–500 million tonnes depending on what measures are taken. This is below 1.5% of the energy sector's emissions. The emissions reduction potential of AI was estimated at 5% of the energy sector's emissions, but rebound effects (for example, if people switch from public transport to autonomous cars) could reduce it.[239]
YouTube, Facebook and others use recommender systems to guide users to more content. These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it. Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation.[240] This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government.[241] The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took some steps to mitigate the problem.[242]
In 2022, generative AI began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films, or human writing. It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda.[243] AI pioneer Geoffrey Hinton expressed concern about AI enabling "authoritarian leaders to manipulate their electorates" on a large scale, among other risks.[244]
Machine learning applications will be biased[k] if they learn from biased data.[246] The developers may not be aware that the bias exists.[247] Bias can be introduced by the way training data is selected and by the way a model is deployed.[248][246] If a biased algorithm is used to make decisions that can seriously harm people (as it can in medicine, finance, recruitment, housing or policing) then the algorithm may cause discrimination.[249] The field of fairness studies how to prevent harms from algorithmic biases.
On June 28, 2015, Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as "gorillas" because they were black. The system was trained on a dataset that contained very few images of black people,[250] a problem called "sample size disparity".[251] Google "fixed" this problem by preventing the system from labelling anything as a "gorilla". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon.[252]
COMPAS is a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. In 2016, Julia Angwin at ProPublica discovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants. Although the error rate for both whites and blacks was calibrated equal at exactly 61%, the errors for each race were different: the system consistently overestimated the chance that a black person would re-offend and underestimated the chance that a white person would re-offend.[253] In 2017, several researchers[l] showed that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness when the base rates of re-offense were different for whites and blacks in the data.[255]
A program can make biased decisions even if the data does not explicitly mention a problematic feature (such as "race" or "gender"). The feature will correlate with other features (like "address", "shopping history" or "first name"), and the program will make the same decisions based on these features as it would on "race" or "gender".[256] Moritz Hardt said "the most robust fact in this research area is that fairness through blindness doesn't work."[257]
Criticism of COMPAS highlighted that machine learning models are designed to make "predictions" that are only valid if we assume that the future will resemble the past. If they are trained on data that includes the results of racist decisions in the past, machine learning models must predict that racist decisions will be made in the future. If an application then uses these predictions as recommendations, some of these "recommendations" will likely be racist.[258] Thus, machine learning is not well suited to help make decisions in areas where there is hope that the future will be better than the past. It is descriptive rather than prescriptive.[m]
Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are women.[251]
There are various conflicting definitions and mathematical models of fairness. These notions depend on ethical assumptions, and are influenced by beliefs about society. One broad category is distributive fairness, which focuses on the outcomes, often identifying groups and seeking to compensate for statistical disparities. Representational fairness tries to ensure that AI systems do not reinforce negative stereotypes or render certain groups invisible. Procedural fairness focuses on the decision process rather than the outcome. The most relevant notions of fairness may depend on the context, notably the type of AI application and the stakeholders. The subjectivity in the notions of bias and fairness makes it difficult for companies to operationalize them. Having access to sensitive attributes such as race or gender is also considered by many AI ethicists to be necessary in order to compensate for biases, but it may conflict with anti-discrimination laws.[245]
At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), held in Seoul, South Korea, the Association for Computing Machinery presented and published findings recommending that until AI and robotics systems are demonstrated to be free of bias mistakes, they are unsafe, and the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed.[260]
Many AI systems are so complex that their designers cannot explain how they reach their decisions.[261] This is especially true of deep neural networks, in which there is a large number of non-linear relationships between inputs and outputs; some popular explainability techniques nevertheless exist.[262]
It is impossible to be certain that a program is operating correctly if no one knows how exactly it works. There have been many cases where a machine learning program passed rigorous tests, but nevertheless learned something different than what the programmers intended. For example, a system that could identify skin diseases better than medical professionals was found to actually have a strong tendency to classify images with a ruler as "cancerous", because pictures of malignancies typically include a ruler to show the scale.[263] Another machine learning system designed to help effectively allocate medical resources was found to classify patients with asthma as being at "low risk" of dying from pneumonia. Having asthma is actually a severe risk factor, but since the patients having asthma would usually get much more medical care, they were relatively unlikely to die according to the training data. The correlation between asthma and low risk of dying from pneumonia was real, but misleading.[264]
People who have been harmed by an algorithm's decision have a right to an explanation.[265] Doctors, for example, are expected to clearly and completely explain to their colleagues the reasoning behind any decision they make. Early drafts of the European Union's General Data Protection Regulation in 2016 included an explicit statement that this right exists.[n] Industry experts noted that this is an unsolved problem with no solution in sight. Regulators argued that nevertheless the harm is real: if the problem has no solution, the tools should not be used.[266]
DARPA established the XAI ("Explainable Artificial Intelligence") program in 2014 to try to solve these problems.[267]
Several approaches aim to address the transparency problem. SHAP makes it possible to visualise the contribution of each feature to the output.[268] LIME can locally approximate a model's outputs with a simpler, interpretable model.[269] Multitask learning provides a large number of outputs in addition to the target classification. These other outputs can help developers deduce what the network has learned.[270] Deconvolution, DeepDream and other generative methods can allow developers to see what different layers of a deep network for computer vision have learned, and produce output that can suggest what the network is learning.[271] For generative pre-trained transformers, Anthropic developed a technique based on dictionary learning that associates patterns of neuron activations with human-understandable concepts.[272]
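As an illustration of the local-approximation idea behind techniques such as LIME, the following sketch explains one prediction of a hypothetical black-box model by fitting a linear surrogate to its outputs on small perturbations of the input. This is a simplification of the idea, not LIME's actual algorithm:

```python
import numpy as np

# Illustrative sketch, in the spirit of LIME: explain one prediction of an
# opaque model by fitting a simple linear surrogate to its behaviour on
# small perturbations of the input. The black-box model here is hypothetical.

def black_box(x):                      # stands in for any opaque model
    return np.sin(x[0]) + 0.1 * x[1] ** 2

rng = np.random.default_rng(0)
x0 = np.array([1.0, 2.0])              # the input to explain

# Sample points near x0 and record the model's outputs.
samples = x0 + rng.normal(scale=0.1, size=(200, 2))
outputs = np.array([black_box(s) for s in samples])

# Fit a local linear model: its coefficients approximate each feature's
# contribution to the prediction near x0.
A = np.hstack([samples, np.ones((200, 1))])
coeffs, *_ = np.linalg.lstsq(A, outputs, rcond=None)
print(coeffs[:2])  # ~[cos(1.0), 0.4]: local importance of each feature
```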
A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision.[o] Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction.[274] Even when used in conventional warfare, they currently cannot reliably choose targets and could potentially kill an innocent person.[274] In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons; however, the United States and others disagreed.[275] By 2015, over fifty countries were reported to be researching battlefield robots.[276]
There are many other ways in which AI is expected to help bad actors, some of which cannot be foreseen. For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours.[280]
Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.[281]
In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI.[282] A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed.[283] Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classified only 9% of U.S. jobs as "high risk".[p][285] The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies.[281] In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.[286][287]
Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".[288] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[289]
From the early days of the development of artificial intelligence, there have been arguments, for example, those put forward by Joseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculation and qualitative, value-based judgement.[290]
It has been argued that AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist Stephen Hawking stated, "spell the end of the human race".[291] This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like "self-awareness" (or "sentience" or "consciousness") and becomes a malevolent character.[q] These sci-fi scenarios are misleading in several ways.
First, AI does not require human-like sentience to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager).[293] Stuart Russell gives the example of a household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that "you can't fetch the coffee if you're dead."[294] In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is "fundamentally on our side".[295]
Second, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like ideologies, law, government, money and the economy are built on language; they exist because there are stories that billions of people believe. The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive.[296]
In May 2023, Geoffrey Hinton announced his resignation from Google in order to be able to "freely speak out about the risks of AI" without "considering how this impacts Google".[299] He notably mentioned risks of an AI takeover,[300] and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in use of AI.[301]
In 2023, many leading AI experts endorsed the joint statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".[302]
Some other researchers were more optimistic. AI pioneer Jürgen Schmidhuber did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making "human lives longer and healthier and easier."[303] While the tools that are now being used to improve lives can also be used by bad actors, "they can also be used against the bad actors."[304][305] Andrew Ng also argued that "it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests."[306] Yann LeCun "scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction."[307] In the early 2010s, experts argued that the risks are too distant in the future to warrant research, or that humans will be valuable from the perspective of a superintelligent machine.[308] However, after 2016, the study of current and future risks and possible solutions became a serious area of research.[309]
Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk.[310]
Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas.[311] The field of machine ethics is also called computational morality,[311] and was founded at an AAAI symposium in 2005.[312]
Active organizations in the AI open-source community include Hugging Face,[315] Google,[316] EleutherAI and Meta.[317] Various AI models, such as Llama 2, Mistral or Stable Diffusion, have been made open-weight,[318][319] meaning that their architecture and trained parameters (the "weights") are publicly available. Open-weight models can be freely fine-tuned, which allows companies to specialize them with their own data and for their own use-case.[320] Open-weight models are useful for research and innovation but can also be misused. Since they can be fine-tuned, any built-in security measure, such as objecting to harmful requests, can be trained away until it becomes ineffective. Some researchers warn that future AI models may develop dangerous capabilities (such as the potential to drastically facilitate bioterrorism) and that once released on the Internet, they cannot be deleted everywhere if needed. They recommend pre-release audits and cost-benefit analyses.[321]
Frameworks
Artificial Intelligence projects can be guided by ethical considerations during the design, development, and implementation of an AI system. An AI framework such as the Care and Act Framework, developed by the Alan Turing Institute and based on the SUM values, outlines four main ethical dimensions, defined as follows:[322][323]
Respect the dignity of individual people
Connect with other people sincerely, openly, and inclusively
Care for the wellbeing of everyone
Protect social values, justice, and the public interest
Other developments in ethical frameworks include those decided upon during the Asilomar Conference, the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others;[324] however, these principles are not without criticism, especially with regard to the people chosen to contribute to these frameworks.[325]
Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.[326]
The UK AI Safety Institute released in 2024 a testing toolset called 'Inspect' for AI safety evaluations, available under an MIT open-source licence; it is freely available on GitHub and can be improved with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities.[327]
The first global AI Safety Summit was held in the United Kingdom in November 2023 with a declaration calling for international cooperation.
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms.[328] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally.[329] According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone.[330][331] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.[332] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and Vietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia.[332] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology.[332] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI.[333] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.[334] In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, government officials and academics.[335] In 2024, the Council of Europe created the first international legally binding treaty on AI, called the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law". It was adopted by the European Union, the United States, the United Kingdom, and other signatories.[336]
In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".[330] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity.[337] In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".[338][339]
In November 2023, the first global AI Safety Summit was held in Bletchley Park in the UK to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.[340] 28 countries including the United States, China, and the European Union issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence.[341][342] In May 2024 at the AI Seoul Summit, 16 global AI tech companies agreed to safety commitments on the development of AI.[343][344]
In 2024, AI patents in China and the US accounted for more than three-fourths of AI patents worldwide.[345] Though China had more AI patents, the US had 35% more patents per AI patent-applicant company than China.[345]
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable form of mathematical reasoning.[346][347] This, along with concurrent discoveries in cybernetics, information theory and neurobiology, led researchers to consider the possibility of building an "electronic brain".[r] They developed several areas of research that would become part of AI,[349] such as McCulloch and Pitts's design for "artificial neurons" in 1943,[115] and Turing's influential 1950 paper 'Computing Machinery and Intelligence', which introduced the Turing test and showed that "machine intelligence" was plausible.[350][347]
The field of AI research was founded at a workshop at Dartmouth College in 1956.[s][6] The attendees became the leaders of AI research in the 1960s.[t] They and their students produced programs that the press described as "astonishing":[u] computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English.[v][7] Artificial intelligence laboratories were set up at a number of British and U.S. universities in the latter 1950s and early 1960s.[347]
Researchers in the 1960s and the 1970s were convinced that their methods would eventually succeed in creating a machine with general intelligence and considered this the goal of their field.[354] In 1965, Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do".[355] In 1967, Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".[356] They had, however, underestimated the difficulty of the problem.[w] In 1974, both the U.S. and British governments cut off exploratory research in response to the criticism of Sir James Lighthill[358] and ongoing pressure from the U.S. Congress to fund more productive projects.[359] Minsky's and Papert's book Perceptrons was understood as proving that artificial neural networks would never be useful for solving real-world tasks, thus discrediting the approach altogether.[360] The "AI winter", a period when obtaining funding for AI projects was difficult, followed.[9]
In the early 1980s, AI research was revived by the commercial success of expert systems,[361] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[8] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.[10]
Up to this point, most of AI's funding had gone to projects that used high-level symbols to represent mental objects like plans, goals, beliefs, and known facts. In the 1980s, some researchers began to doubt that this approach would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition,[362] and began to look into "sub-symbolic" approaches.[363] Rodney Brooks rejected "representation" in general and focussed directly on engineering machines that move and survive.[x] Judea Pearl, Lotfi Zadeh, and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic.[86][368] But the most important development was the revival of "connectionism", including neural network research, by Geoffrey Hinton and others.[369] In 1990, Yann LeCun successfully showed that convolutional neural networks can recognize handwritten digits, the first of many successful applications of neural networks.[370]
AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This "narrow" and "formal" focus allowed researchers to produce verifiable results and collaborate with other fields (such as statistics, economics and mathematics).[371] By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence" (a tendency known as the AI effect).[372] However, several academic researchers became concerned that AI was no longer pursuing its original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s.[4]
Deep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field.[11] For many specific tasks, other methods were abandoned.[y] Deep learning's success was based on both hardware improvements (faster computers,[374] graphics processing units, cloud computing[375]) and access to large amounts of data[376] (including curated datasets,[375] such as ImageNet). Deep learning's success led to an enormous increase in interest and funding in AI.[z] The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019.[332]
In 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. The alignment problem became a serious field of academic study.[309]
In the late 2010s and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2016, AlphaGo, developed by DeepMind, beat world champion Go player Lee Sedol. The program was taught only the game's rules and developed its strategy by itself. GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text.[377] ChatGPT, launched on November 30, 2022, became the fastest-growing consumer software application in history, gaining over 100 million users in two months.[378] It marked what is widely regarded as AI's breakout year, bringing AI into the public consciousness.[379] These programs, and others, inspired an aggressive AI boom, in which large companies began investing billions of dollars in AI research. According to AI Impacts, about $50 billion annually was invested in "AI" around 2022 in the U.S. alone, and about 20% of new U.S. computer science PhD graduates had specialized in "AI".[380] About 800,000 "AI"-related U.S. job openings existed in 2022.[381] According to PitchBook research, 22% of newly funded startups in 2024 claimed to be AI companies.[382]
Philosophical debates have historically sought to determine the nature of intelligence and how to make intelligent machines.[383] Another major focus has been whether machines can be conscious, and the associated ethical implications.[384] Many other topics in philosophy are relevant to AI, such as epistemology and free will.[385] Rapid advancements have intensified public discussions on the philosophy and ethics of AI.[384]
Alan Turing wrote in 1950, "I propose to consider the question 'Can machines think?'"[386] He advised changing the question from whether a machine "thinks" to "whether or not it is possible for machinery to show intelligent behaviour".[386] He devised the Turing test, which measures the ability of a machine to simulate human conversation.[350] Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes that we cannot determine these things about other people either, but "it is usual to have a polite convention that everyone thinks".[387]
The Turing test can provide some evidence of intelligence, but it penalizes non-human intelligent behavior.[388]
Russell and Norvig agree with Turing that intelligence must be defined in terms of external behavior, not internal structure.[1] However, they are critical that the test requires the machine to imitate humans. "Aeronautical engineering texts", they wrote, "do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.'"[389] AI founder John McCarthy agreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence".[390]
McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world".[391] Another AI founder, Marvin Minsky, similarly describes it as "the ability to solve hard problems".[392] The leading AI textbook defines it as the study of agents that perceive their environment and take actions that maximize their chances of achieving defined goals.[1] These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the machine's "intelligence"; no further philosophical discussion is required, and may not even be possible.
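To make the agent-based definition concrete, the following is a minimal sketch of a goal-seeking agent: it perceives its environment and picks whichever action scores best against a defined goal. The toy LineWorld environment, the action set, and the utility function are hypothetical illustrations, not drawn from the textbook cited above.

```python
# A minimal sketch of the "intelligent agent" definition: an agent perceives
# its environment and takes the action that maximizes its chance of achieving
# a defined goal. All names here (LineWorld, utility) are illustrative.

class LineWorld:
    """A trivial environment: the agent sits at an integer position on a
    line and wants to reach the goal position."""
    def __init__(self, start: int, goal: int):
        self.position = start
        self.goal = goal

    def observe(self) -> int:
        return self.position          # the agent's percept

    def apply(self, action: int) -> None:
        self.position += action       # actions are moves of -1, 0, or +1

def utility(percept: int, action: int, goal: int) -> int:
    """Score an action by how close it would bring the agent to the goal."""
    return -abs((percept + action) - goal)

world = LineWorld(start=0, goal=3)
actions = [-1, 0, +1]
while world.observe() != world.goal:
    # Choose the utility-maximizing action given the current percept.
    best = max(actions, key=lambda a: utility(world.observe(), a, world.goal))
    world.apply(best)
print("reached goal at position", world.observe())
```

Under this view, the "intelligence" of the agent is measured entirely by how well it performs against the utility function, which is exactly the sense in which these definitions sidestep further philosophical discussion.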
Another definition has been adopted by Google,[393] a major practitioner in the field of AI. This definition treats the ability of systems to synthesize information as the manifestation of intelligence, similar to the way intelligence is defined in biological systems.
Some authors have suggested that, in practice, the definition of AI is vague and contested: there is disagreement over whether classical algorithms should be categorised as AI,[394] and many companies during the early 2020s AI boom used the term as a marketing buzzword, often even if they did "not actually use AI in a material way".[395]
Evaluating approaches to AI
No established unifying theory or paradigm has guided AI research for most of its history.[aa] The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that the questions raised by these choices, discussed below, may have to be revisited by future generations of AI researchers.
Symbolic AI and its limits
Symbolic AI (or "GOFAI")[397] simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. Symbolic programs were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."[398]
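As an illustration of this style of step-by-step reasoning, the sketch below searches a space of explicit symbolic states for the classic water-jug puzzle. The puzzle and the breadth-first strategy are standard textbook illustrations chosen here for concreteness, not examples drawn from this article's sources.

```python
# A minimal sketch of symbolic problem solving: states are explicit symbolic
# structures, and the program searches through legal operations on them.
from collections import deque

def solve_jugs(cap_a=4, cap_b=3, target=2):
    """Find a move sequence leaving `target` liters in either jug."""
    start = (0, 0)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        (a, b), path = frontier.popleft()
        if target in (a, b):
            return path
        # Enumerate the legal symbolic operations on the current state.
        moves = {
            "fill A": (cap_a, b), "fill B": (a, cap_b),
            "empty A": (0, b), "empty B": (a, 0),
            "pour A->B": (a - min(a, cap_b - b), b + min(a, cap_b - b)),
            "pour B->A": (a + min(b, cap_a - a), b - min(b, cap_a - a)),
        }
        for name, state in moves.items():
            if state not in seen:
                seen.add(state)
                frontier.append((state, path + [name]))
    return None

print(solve_jugs())   # a shortest sequence of named moves, e.g. starting 'fill A'
```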
However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level "intelligent" tasks were easy for AI, but low-level "instinctive" tasks were extremely difficult.[399] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge.[400] Although his arguments were ridiculed and ignored when first presented, AI research eventually came to agree with him.[ab][16]
The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue that continuing research into symbolic AI will still be necessary to attain general intelligence,[402][403] in part because sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.
"Neats" hope that intelligent behavior is described using simple, elegant principles (such aslogic,optimization, orneural networks). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s,[404] but eventually was seen as irrelevant. Modern AI has elements of both.
Soft vs. hard computing
Finding a provably correct or optimal solution is intractable for many important problems.[15] Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s, and most successful AI programs in the 21st century are examples of soft computing with neural networks.
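For instance, a genetic algorithm, one of the soft-computing techniques just named, trades guarantees of correctness for a cheap approximate answer by repeatedly mutating and selecting a population of candidate solutions. The following is a hypothetical minimal sketch; the target function and parameters are chosen purely for illustration.

```python
# A minimal sketch of a genetic algorithm: keep a population of candidate
# solutions, select the fitter half, and refill with mutated copies.
# No guarantee of an exact optimum, only a good approximation.
import random

def genetic_maximize(fitness, pop_size=30, generations=100, bounds=(-10, 10)):
    population = [random.uniform(*bounds) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Variation: refill the population with mutated copies of survivors.
        children = [s + random.gauss(0, 0.5) for s in survivors]
        population = survivors + children
    return max(population, key=fitness)

# Approximately locate the maximum of a simple function, which lies at x = 2.
best = genetic_maximize(lambda x: -(x - 2) ** 2 + 3)
print(round(best, 2))   # close to 2.0, but not guaranteed to be exact
```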
Narrow vs. general AI
AI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals.[405][406] General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The sub-field of artificial general intelligence studies this area exclusively.
Machine consciousness, sentience, and mind
It is an open question in the philosophy of mind whether a machine can have a mind, consciousness, and mental states in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers the issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. Russell and Norvig add that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."[407] However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction.
David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness.[408] The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels, or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While human information processing is easy to explain, human subjective experience is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like.[409]
Computationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware, and thus may offer a solution to the mind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam.[410]
Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[ac] Searle challenges this claim with his Chinese room argument, which attempts to show that even a computer capable of perfectly simulating human behavior would not have a mind.[414]
AI welfare and rights
It is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree.[415] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals.[416][417] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights.[416] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society.[418]
In 2017, the European Union considered granting "electronic personhood" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities.[419] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part in society on their own.[420][421]
Progress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited.[417][416]
Forecasts of ever-accelerating, exponentially self-improving AI should, however, be treated with caution: technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do.[423]
Dan McQuillan (Resisting AI: An Anti-fascist Approach to Artificial Intelligence, 2022) has raised arguments for "decomputing": opposition to the sweeping application and expansion of artificial intelligence. Similar to degrowth, the approach criticizes AI as an outgrowth of the systemic issues of the capitalist world we live in, and argues that a different future is possible, in which distance between people is reduced rather than increased through AI intermediaries.[426]
Isaac Asimov introduced the Three Laws of Robotics in many stories, most notably with the "Multivac" super-intelligent computer. Asimov's laws are often brought up during lay discussions of machine ethics;[430] while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.[431]
^It is among the reasons that expert systems proved to be inefficient for capturing knowledge.[30][31]
^"Rational agent" is general term used ineconomics,philosophy and theoretical artificial intelligence. It can refer to anything that directs its behavior to accomplish goals, such as a person, an animal, a corporation, a nation, or in the case of AI, a computer program.
^Alan Turing discussed the centrality of learning as early as 1950, in his classic paper "Computing Machinery and Intelligence".[42] In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: "An Inductive Inference Machine".[43]
^Compared with symbolic logic, formal Bayesian inference is computationally expensive. For inference to be tractable, most observations must be conditionally independent of one another. AdSense uses a Bayesian network with over 300 million edges to learn which ads to serve.[93]
^Expectation–maximization, one of the most popular algorithms in machine learning, allows clustering in the presence of unknown latent variables.[95]
^Geoffrey Hinton said, of his work on neural networks in the 1990s, "our labeled datasets were thousands of times too small. [And] our computers were millions of times too slow."[121]
^In statistics, a bias is a systematic error or deviation from the correct value. But in the context of fairness, it refers to a tendency in favor of or against a certain group or individual characteristic, usually in a way that is considered unfair or harmful. A statistically unbiased AI system that produces disparate outcomes for different demographic groups may thus be viewed as biased in the ethical sense.[245]
^Moritz Hardt (a director at the Max Planck Institute for Intelligent Systems) argues that machine learning "is fundamentally the wrong tool for a lot of domains, where you're trying to design interventions and mechanisms that change the world."[259]
^When the law was passed in 2018, it still contained a form of this provision.
^"Electronic brain" was the term used by the press around this time.[346][348]
^Daniel Crevier wrote, "the conference is generally recognized as the official birthdate of the new science."[351] Russell and Norvig called the conference "the inception of artificial intelligence."[115]
^Russell and Norvig wrote "for the next 20 years the field would be dominated by these people and their students."[352]
^Russell and Norvig wrote, "it was astonishing whenever a computer did anything kind of smartish".[353]
^Matteo Wong wrote in The Atlantic: "Whereas for decades, computer-science fields such as natural-language processing, computer vision, and robotics used extremely different methods, now they all use a programming method called "deep learning". As a result, their code and approaches have become more similar, and their models are easier to integrate into one another."[373]
^Jack Clark wrote in Bloomberg: "After a half-decade of quiet breakthroughs in artificial intelligence, 2015 has been a landmark year. Computers are smarter and learning faster than ever", and noted that the number of software projects that use machine learning at Google increased from a "sporadic usage" in 2012 to more than 2,700 projects in 2015.[375]
^Nils Nilsson wrote in 1983: "Simply put, there is wide disagreement in the field about what AI is all about."[396]
^Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."[401]
^Searle presented this definition of "Strong AI" in 1999.[411] Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."[412] Strong AI is defined similarly by Russell and Norvig: "Strong AI – the assertion that machines that do so are actually thinking (as opposed to simulating thinking)."[413]
^Kaplan, Andreas; Haenlein, Michael (2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons. 62: 15–25. doi:10.1016/j.bushor.2018.08.004. ISSN 0007-6813. S2CID 158433736.
^Ciaramella, Alberto; Ciaramella, Marco (2024). Introduction to Artificial Intelligence: from data analysis to generative AI. Intellisemantic Editions. ISBN 978-8-8947-8760-3.
^Srivastava, Saurabh (29 February 2024). "Functional Benchmarks for Robust Evaluation of Reasoning Performance, and the Reasoning Gap". arXiv:2402.19450 [cs.AI].
^Azerbayev, Zhangir; Schoelkopf, Hailey; Paster, Keiran; Santos, Marco Dos; McAleer, Stephen; Jiang, Albert Q.; Deng, Jia; Biderman, Stella; Welleck, Sean (16 October 2023). "Llemma: An Open Language Model For Mathematics". EleutherAI Blog. Retrieved 26 January 2025.
^Newsom, Gavin; Weber, Shirley N. (5 September 2023). "Executive Order N-12-23" (PDF). Executive Department, State of California. Archived (PDF) from the original on 21 February 2024. Retrieved 7 September 2023.
^Pinaya, Walter H. L.; Graham, Mark S.; Kerfoot, Eric; Tudosiu, Petru-Daniel; Dafflon, Jessica; Fernandez, Virginia; Sanchez, Pedro; Wolleb, Julia; da Costa, Pedro F.; Patel, Ashay (2023). "Generative AI for Medical Imaging: extending the MONAI Framework". arXiv:2307.15208 [eess.IV].
^Karpathy, Andrej; Abbeel, Pieter; Brockman, Greg; Chen, Peter; Cheung, Vicki; Duan, Yan; Goodfellow, Ian; Kingma, Durk; Ho, Jonathan; Rein Houthooft; Tim Salimans; John Schulman; Ilya Sutskever; Wojciech Zaremba (16 June 2016). "Generative models". OpenAI. Archived from the original on 17 November 2023. Retrieved 15 March 2023.
^Brynjolfsson, Erik; Li, Danielle; Raymond, Lindsey R. (April 2023). Generative AI at Work (Working Paper). Working Paper Series. doi:10.3386/w31161. Archived from the original on 28 March 2024. Retrieved 21 January 2024.
^Sun, Yuran; Zhao, Xilei; Lovreglio, Ruggiero; Kuligowski, Erica (1 January 2024). Naser, M. Z. (ed.). "8 – AI for large-scale evacuation modeling: promises and challenges". Interpretable Machine Learning for the Analysis, Design, Assessment, and Informed Decision Making for Civil Infrastructure. Woodhead Publishing Series in Civil and Structural Engineering. Woodhead Publishing. pp. 185–204. ISBN 978-0-1282-4073-1. Archived from the original on 19 May 2024. Retrieved 28 June 2024.
^ab Mochizuki, Takashi; Oda, Shoko (18 October 2024). "エヌビディア出資の日本企業、原発近くでAIデータセンター新設検討" [Nvidia-backed Japanese company considers new AI data center near nuclear plant]. Bloomberg (in Japanese). Archived from the original on 8 November 2024. Retrieved 7 November 2024.
Ertel, Wolfgang (2017). Introduction to Artificial Intelligence (2nd ed.). Springer. ISBN 978-3-3195-8486-7.
Ciaramella, Alberto; Ciaramella, Marco (2024). Introduction to Artificial Intelligence: from data analysis to generative AI (1st ed.). Intellisemantic Editions. ISBN 978-8-8947-8760-3.
Anderson, Susan Leigh (2008). "Asimov's "three laws of robotics" and machine metaethics". AI & Society. 22 (4): 477–493. doi:10.1007/s00146-007-0094-5. S2CID 1809459.
Anderson, Michael; Anderson, Susan Leigh (2011). Machine Ethics. Cambridge University Press.
Arntz, Melanie; Gregory, Terry; Zierahn, Ulrich (2016). "The risk of automation for jobs in OECD countries: A comparative analysis". OECD Social, Employment, and Migration Working Papers 189.
Asada, M.; Hosoda, K.; Kuniyoshi, Y.; Ishiguro, H.; Inui, T.; Yoshikawa, Y.; Ogino, M.; Yoshida, C. (2009). "Cognitive developmental robotics: a survey". IEEE Transactions on Autonomous Mental Development. 1 (1): 12–34. doi:10.1109/tamd.2009.2021702. S2CID 10168773.
Barfield, Woodrow; Pagallo, Ugo (2018). Research handbook on the law of artificial intelligence. Cheltenham, UK: Edward Elgar Publishing. ISBN 978-1-7864-3904-8. OCLC 1039480085.
Bertini, M; Del Bimbo, A; Torniai, C (2006). "Automatic annotation and semantic retrieval of video sequences using multimedia ontologies". MM '06 Proceedings of the 14th ACM international conference on Multimedia. 14th ACM international conference on Multimedia. Santa Barbara: ACM. pp. 679–682.
Bushwick, Sophie (16 March 2023). "What the New GPT-4 AI Can Do". Scientific American. Archived from the original on 22 August 2023. Retrieved 5 October 2024.
Butler, Samuel (13 June 1863). "Darwin among the Machines". Letters to the Editor. The Press. Christchurch, New Zealand. Archived from the original on 19 September 2008. Retrieved 16 October 2014 – via Victoria University of Wellington.
Buttazzo, G. (July 2001). "Artificial consciousness: Utopia or real possibility?". Computer. 34 (7): 24–30. doi:10.1109/2.933500.
Cambria, Erik; White, Bebo (May 2014). "Jumping NLP Curves: A Review of Natural Language Processing Research [Review Article]". IEEE Computational Intelligence Magazine. 9 (2): 48–57. doi:10.1109/MCI.2014.2307227. S2CID 206451986.
Cybenko, G. (1988). Continuous valued neural networks with two hidden layers are sufficient (Report). Department of Computer Science, Tufts University.
Fearn, Nicholas (2007). The Latest Answers to the Oldest Questions: A Philosophical Adventure with the World's Greatest Thinkers. New York: Grove Press. ISBN 978-0-8021-1839-4.
Galvan, Jill (1 January 1997). "Entering the Posthuman Collective in Philip K. Dick's "Do Androids Dream of Electric Sheep?"". Science Fiction Studies. 24 (3): 413–429. JSTOR 4240644.
Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016). Deep Learning. MIT Press. Archived from the original on 16 April 2016. Retrieved 12 November 2017.
Iphofen, Ron; Kritikos, Mihalis (3 January 2019). "Regulating artificial intelligence and robotics: ethics by design in a digital society". Contemporary Social Science. 16 (2): 170–184. doi:10.1080/21582041.2018.1563803. ISSN 2158-2041. S2CID 59298502.
Law Library of Congress (U.S.). Global Legal Research Directorate (2019). Regulation of artificial intelligence in selected jurisdictions. LCCN 2019668143. OCLC 1110727808.
McGarry, Ken (1 December 2005). "A survey of interestingness measures for knowledge discovery". The Knowledge Engineering Review. 20 (1): 39–61. doi:10.1017/S0269888905000408. S2CID 14987656.
Merkle, Daniel; Middendorf, Martin (2013). "Swarm Intelligence". In Burke, Edmund K.; Kendall, Graham (eds.). Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques. Springer Science & Business Media. ISBN 978-1-4614-6940-7.
NRC (United States National Research Council) (1999). "Developments in Artificial Intelligence". Funding a Revolution: Government Support for Computing Research. National Academy Press.
Omohundro, Steve (2008). The Nature of Self-Improving Artificial Intelligence. Presented and distributed at the 2007 Singularity Summit, San Francisco, CA.
Pennachin, C.; Goertzel, B. (2007). "Contemporary Approaches to Artificial General Intelligence". Artificial General Intelligence. Cognitive Technologies. Berlin, Heidelberg: Springer. pp. 1–30. doi:10.1007/978-3-540-68677-4_1. ISBN 978-3-5402-3733-4.
Smoliar, Stephen W.; Zhang, HongJiang (1994). "Content based video indexing and retrieval". IEEE MultiMedia. 1 (2): 62–72. doi:10.1109/93.311653. S2CID 32710913.
Solomonoff, Ray (1956). An Inductive Inference Machine (PDF). Dartmouth Summer Research Conference on Artificial Intelligence. Archived (PDF) from the original on 26 April 2011. Retrieved 22 March 2011 – via std.com, pdf scanned copy of the original. Later published as Solomonoff, Ray (1957). "An Inductive Inference Machine". IRE Convention Record. Vol. Section on Information Theory, part 2. pp. 56–62.
Wallach, Wendell (2010). Moral Machines. Oxford University Press.
Wason, P. C.; Shapiro, D. (1966). "Reasoning". In Foss, B. M. (ed.). New horizons in psychology. Harmondsworth: Penguin. Archived from the original on 26 July 2020. Retrieved 18 November 2019.
Cukier, Kenneth, "Ready for Robots? How to Think about the Future of AI", Foreign Affairs, vol. 98, no. 4 (July/August 2019), pp. 192–198. George Dyson, historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." (p. 197.) Computer scientist Alex Pentland writes: "Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force." (p. 198.)
Frank, Michael (22 September 2023). "US Leadership in Artificial Intelligence Can Shape the 21st Century Global Order". The Diplomat. Archived from the original on 16 September 2024. Retrieved 8 December 2023. "Instead, the United States has developed a new area of dominance that the rest of the world views with a mixture of awe, envy, and resentment: artificial intelligence... From AI models and research to cloud computing and venture capital, U.S. companies, universities, and research labs – and their affiliates in allied countries – appear to have an enormous lead in both developing cutting-edge AI and commercializing it. The value of U.S. venture capital investments in AI start-ups exceeds that of the rest of the world combined."
Gertner, Jon (2023). "Wikipedia's Moment of Truth: Can the online encyclopedia help teach A.I. chatbots to get their facts right — without destroying itself in the process?" New York Times Magazine (July 18, 2023). Online. Archived 20 July 2023 at the Wayback Machine.
Gleick, James, "The Fate of Free Will" (review of Kevin J. Mitchell, Free Agents: How Evolution Gave Us Free Will, Princeton University Press, 2023, 333 pp.), The New York Review of Books, vol. LXXI, no. 1 (18 January 2024), pp. 27–28, 30. "Agency is what distinguishes us from machines. For biological creatures, reason and purpose come from acting in the world and experiencing the consequences. Artificial intelligences – disembodied, strangers to blood, sweat, and tears – have no occasion for that." (p. 30.)
Halpern, Sue, "The Coming Tech Autocracy" (review of Verity Harding, AI Needs You: How We Can Change AI's Future and Save Our Own, Princeton University Press, 274 pp.; Gary Marcus, Taming Silicon Valley: How We Can Ensure That AI Works for Us, MIT Press, 235 pp.; Daniela Rus and Gregory Mone, The Mind's Mirror: Risk and Reward in the Age of AI, Norton, 280 pp.; Madhumita Murgia, Code Dependent: Living in the Shadow of AI, Henry Holt, 311 pp.), The New York Review of Books, vol. LXXI, no. 17 (7 November 2024), pp. 44–46. "'We can't realistically expect that those who hope to get rich from AI are going to have the interests of the rest of us close at heart,' ... writes [Gary Marcus]. 'We can't count on governments driven by campaign finance contributions [from tech companies] to push back.'... Marcus details the demands that citizens should make of their governments and the tech companies. They include transparency on how AI systems work; compensation for individuals if their data [are] used to train LLMs (large language models) and the right to consent to this use; and the ability to hold tech companies liable for the harms they cause by eliminating Section 230, imposing cash penalties, and passing stricter product liability laws... Marcus also suggests... that a new, AI-specific federal agency, akin to the FDA, the FCC, or the FTC, might provide the most robust oversight.... [T]he Fordham law professor Chinmayi Sharma... suggests... establish[ing] a professional licensing regime for engineers that would function in a similar way to medical licenses, malpractice suits, and the Hippocratic oath in medicine. 'What if, like doctors,' she asks..., 'AI engineers also vowed to do no harm?'" (p. 46.)
Hughes-Castleberry, Kenna, "A Murder Mystery Puzzle: The literary puzzle Cain's Jawbone, which has stumped humans for decades, reveals the limitations of natural-language-processing algorithms", Scientific American, vol. 329, no. 4 (November 2023), pp. 81–82. "This murder mystery competition has revealed that although NLP (natural-language processing) models are capable of incredible feats, their abilities are very much limited by the amount of context they receive. This [...] could cause [difficulties] for researchers who hope to use them to do things such as analyze ancient languages. In some cases, there are few historical records on long-gone civilizations to serve as training data for such a purpose." (p. 82.)
Immerwahr, Daniel, "Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?", The New Yorker, 20 November 2023, pp. 54–59. "If by 'deepfakes' we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren't deep, and the deeps aren't fake. [...] A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of cartoons, especially smutty ones." (p. 59.)
Johnston, John (2008). The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI. MIT Press.
Leffer, Lauren, "The Risks of Trusting AI: We must avoid humanizing machine-learning models used in scientific research", Scientific American, vol. 330, no. 6 (June 2024), pp. 80–81.
Lepore, Jill, "The Chit-Chatbot: Is talking with a machine a conversation?", The New Yorker, 7 October 2024, pp. 12–16.
Marcus, Gary, "Artificial Confidence: Even the newest, buzziest systems of artificial general intelligence are stymied by the same old problems", Scientific American, vol. 327, no. 4 (October 2022), pp. 42–45.
Mitchell, Melanie (2019). Artificial intelligence: a guide for thinking humans. New York: Farrar, Straus and Giroux. ISBN 978-0-3742-5783-5.
Press, Eyal, "In Front of Their Faces: Does facial-recognition technology lead police to ignore contradictory evidence?", The New Yorker, 20 November 2023, pp. 20–26.
Roivainen, Eka, "AI's IQ: ChatGPT aced a [standard intelligence] test but showed that intelligence cannot be measured by IQ alone", Scientific American, vol. 329, no. 1 (July/August 2023), p. 7. "Despite its high IQ, ChatGPT fails at tasks that require real humanlike reasoning or an understanding of the physical and social world.... ChatGPT seemed unable to reason logically and tried to rely on its vast database of... facts derived from online texts."
Scharre, Paul, "Killer Apps: The Real Dangers of an AI Arms Race", Foreign Affairs, vol. 98, no. 3 (May/June 2019), pp. 135–144. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.)
Tarnoff, Ben, "The Labor Theory of AI" (review of Matteo Pasquinelli, The Eye of the Master: A Social History of Artificial Intelligence, Verso, 2024, 264 pp.), The New York Review of Books, vol. LXXII, no. 5 (27 March 2025), pp. 30–32. The reviewer, Ben Tarnoff, writes: "The strangeness at the heart of the generative AI boom is that nobody really knows how the technology works. We know how the large language models within ChatGPT and its counterparts are trained, even if we don't always know which data they're being trained on: they are asked to predict the next string of characters in a sequence. But exactly how they arrive at any given prediction is a mystery. The computations that occur inside the model are simply too intricate for any human to comprehend." (p. 32.)
Vincent, James, "Horny Robot Baby Voice: James Vincent on AI chatbots", London Review of Books, vol. 46, no. 19 (10 October 2024), pp. 29–32. "[AI chatbot] programs are made possible by new technologies but rely on the timeless human tendency to anthropomorphise." (p. 29.)