The number of Google searches for the term "AI" accelerated in 2022.
In 1950, Alan Turing proposed the idea of "Thinking Machines": computers that would be able to reason at the same level as humans.[8] He also devised his well-known "Turing Test", in which an interrogator converses with two unseen participants and must determine which responses come from an artificial intelligence and which from a human being.[8][9] In 1956, John McCarthy used the term "artificial intelligence" for the first time, eventually being labeled the father of artificial intelligence.[8][10]
In 1956, the Dartmouth conference was held, organized by John McCarthy, Nathaniel Rochester, Marvin Minsky, and Claude Shannon.[11] This conference is considered the birthplace of artificial intelligence as a field of study; over a two-month workshop, top researchers explored the concept of creating machines that could mimic human intelligence.[11][12]
In 1958, John McCarthy created the programming language LISP.[13] LISP stands for "List Processing" and served as the main programming language for artificial intelligence.[14] The language gained traction at MIT, where it was used on machines such as the IBM 704 for many of the institute's AI projects. While many languages rose and fell, LISP remained the most common programming language for artificial intelligence in the United States even in 2006.[15] LISP suited artificial intelligence work because AI programs of the time often manipulated lists that constantly changed size, making fixed-length structures, such as vectors, unusable.[15]
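The appeal of list processing can be sketched outside LISP itself. A minimal illustration in Python (function names here are hypothetical, chosen for clarity) of LISP-style "cons" cells, which let a list grow by sharing structure rather than by reallocating a fixed-length block:

```python
# A LISP-style linked list built from "cons" pairs: each cell holds a
# head value and a reference to the rest of the list (or None for the
# empty list). Unlike a fixed-length vector, it can grow freely.
def cons(head, tail):
    return (head, tail)

def to_pylist(cell):
    # Walk the chain of cons cells and collect the heads in order.
    out = []
    while cell is not None:
        head, cell = cell
        out.append(head)
    return out

# Build the list (rule-1 rule-2), then prepend rule-0 without copying:
# the old list is shared as the tail of the new one.
rules = cons("rule-1", cons("rule-2", None))
rules = cons("rule-0", rules)
print(to_pylist(rules))  # ['rule-0', 'rule-1', 'rule-2']
```

Prepending an element is a single allocation regardless of list length, which is why ever-changing symbolic lists were a natural fit for this representation.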
In 1966, Joseph Weizenbaum created ELIZA.[18] ELIZA was designed to simulate a "Rogerian psychotherapist", made to seem as if the chatbot reflected on the user's input by turning questions back to the user.[18] ELIZA is known as the first artificial intelligence chatbot. It uses strategies such as pattern matching and substitution to produce outputs that make users believe they are talking to a real person. Weizenbaum's ELIZA was a major advancement for everyday AI, acting as a building block for future chatbots such as OpenAI's ChatGPT or Google's Gemini.
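The pattern-matching-and-substitution approach can be sketched in a few lines. This is a hypothetical, minimal ELIZA-style responder (the rules and pronoun table are illustrative, not Weizenbaum's original script): each regular expression captures a fragment of the user's statement, which is then reflected back inside a canned question.

```python
import re
import random

# Illustrative rules: each pattern maps to responses that turn the
# user's statement back into a question, with {0} holding the captured
# fragment. The final catch-all rule always matches.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"(.*)", re.I),
     ["Please tell me more.", "How does that make you feel?"]),
]

# Pronoun substitution so reflected fragments read naturally
# ("my job" becomes "your job").
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_input):
    for pattern, responses in RULES:
        match = pattern.match(user_input)
        if match:
            reply = random.choice(responses)
            return reply.format(*(reflect(g) for g in match.groups()))

print(respond("I need a vacation"))  # e.g. "Why do you need a vacation?"
```

The trick, then as now, is that no understanding is involved: a handful of templates and a pronoun table are enough to sustain the illusion of a listener.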
Artificial intelligence began being added to new devices, one popular implementation being AI assistants. In 2011, Apple released the iPhone 4S, which included a new AI assistant named Siri,[19] originally developed by Dag Kittlaus, Adam Cheyer, and Tom Gruber in 2007.[20] The three originally ran their own company, Siri Inc., but Apple saw the assistant's potential and chose to integrate it into its new iOS.[20] Siri was revolutionary as the first mainstream smartphone AI assistant: navigating or setting tasks became far simpler, requiring only the user's voice for a hands-free approach to interacting with a smartphone.[19] After the success of Siri, companies like Google and Amazon took inspiration to create their own AI assistants. In 2014, Amazon released its AI assistant Alexa with its new Echo smart speaker, letting users interact with the assistant through a speaker rather than a smartphone.[21] In 2016, Google released its Google Assistant, with the same functions as Amazon's Alexa.[21] The text-to-image models DALL-E 2 and Midjourney were released in 2022.[22]
ChatGPT, an AI chatbot created by OpenAI, was launched at the end of 2022. It grew to over 100 million users in only two months,[23] becoming the fastest-growing software application.[24] As of 2025, ChatGPT remains the 4th-most visited website, behind sites such as Google and Facebook.[25] Other chatbots such as Gemini, Claude, and Copilot fall under the same category, known as large language models (LLMs).[26] Large language models are designed to respond appropriately to human language and to conduct a wide range of tasks.[27] They do this by being trained on an immense amount of data in order to produce acceptable responses.[27] Present-day chatbots incorporate generative AI, including AI image generation. Over half of American adults who responded to a 2025 survey stated they had used an LLM at least once.[26]
An image generated by Stable Diffusion based on the text prompt "a photograph of an astronaut riding a horse"
As time passed, generative AI grew more powerful. Initial popularity began to grow in 2015 with the release of Google's DeepDream, a generative AI that takes a previous image as input and morphs it to produce hallucinogenic images.[32]
In January 2021, OpenAI released DALL-E, allowing for image generation through text prompts.[33] This allows users to generate any image with a simple prompt. Soon after, other powerful models followed DALL-E, such as Google's Gemini.[34]
The popularity of text-to-video generative AI tools grew rapidly. With the release of models such as OpenAI's Sora in 2024, text-to-video tools became normalized as people used them for advertisements, saving on production costs and increasing production speed.[35][36]
Generative AI is growing at a rapid rate, outpacing modern-day detection tools.[37] With the general public having access to these tools, concerns have been raised about the ethical use of generative AI. On multiple occasions, political misinformation has spread over the internet through generated or deep-faked videos, posing a security threat.[38][39]
In 2016, Google's DeepMind produced WaveNet, which allowed the generation of raw audio of speech and piano.[45] WaveNet is able to generate different voices by identifying the speakers.[45] This acted as a fundamental building block for future models, allowing audio to be formed from scratch and helping not only with the production of music but with voice generation as well.[46]
Following in the footsteps of WaveNet, OpenAI released Jukebox, the first large-scale model to generate songs. Jukebox produced raw audio in different genres and styles, showing that AI had the power to generate complex audio.[47] Google published MusicLM, allowing users to generate raw audio through text prompts.[48] The model can also create full songs from only a hummed melody and text.[48] This marked a leap as music generation tools became more accessible to the public.
In March 2020, 15.ai was founded. 15.ai allowed for voice imitation, playing a major role in the AI boom. With only a small amount of training data, it was able to generate acceptable voices and became mainstream as people used it to voice their favorite fictional characters.[49]
Artificially generated vocals can be created with tools such as ElevenLabs, which allows the creation of vocals from any public audio.[50] Any celebrity or politician with voice clips on the internet is thus subject to voice imitation, and songs from artists that never existed began being produced. This also led to deep-faking the voices of politicians, as Joe Biden received attention for a fake robocall that voters received.[51]
Electricity consumed by hardware used for AI has increased demands on power grids, leading to prolonged use of fossil fuel power plants that would otherwise have been deactivated.[52][53][54]
Microsoft, Google, and Amazon have all invested in existing or proposed nuclear power plants to meet these demands.[55][56] In September 2024, Microsoft signed a deal with Constellation Energy to purchase power from a reactor at Three Mile Island which had been shut down in 2019. The reactor is set to reopen in 2028 to provide power to Microsoft's data centers. The reactor is next to the unit which caused the worst nuclear power accident in US history in 1979.[57][58][59]
As artificial intelligence rises, opinions on AI have become split. Some people support AI as it becomes more normalized in society, while others oppose it over the concerns it raises for the public. Many Americans believe that AI would help with data analysis, medicine development, and weather forecasting.[60] People have also been shown to accept AI when they are aware that it is being controlled correctly.[61] Others believe the opposite: that AI will be the demise of humans. A major point of this view is that AI will weaken human creativity and limit human relations,[60] as humans come to rely on artificial intelligence to communicate with people and to perform creative tasks such as making art. The issue of artificial intelligence replacing people's jobs is another strong point that gets raised, as many people in the tech industry could be replaced by an AI.[62]
In 2024, AI patents in China and the U.S. accounted for more than three-fourths of AI patents worldwide.[63] Though China had more AI patents overall, the U.S. had 35% more patents per AI patent-applicant company than China.[63]
Some economists have been optimistic about the potential of the current wave of AI to boost productivity and economic growth. Notably, Stanford University economist Erik Brynjolfsson has argued in a series of articles for an "AI-powered Productivity Boom"[64] and a "Coming Productivity Boom".[65] At the same time, others like Northwestern University economist Robert Gordon remain more pessimistic.[66] Brynjolfsson and Gordon have made a formal bet, registered at Long Bets, about the rate of productivity growth in the 2020s, to be resolved at the end of the decade.[67]
Big Tech companies view the AI boom as both opportunity and threat; Alphabet's Google, for example, realized that ChatGPT could be an innovator's dilemma-like replacement for Google Search. The company merged DeepMind and Google Brain, a rival internal unit, to accelerate its AI research.[68]
The market capitalization of Nvidia, whose GPUs are in high demand for training and using generative AI models, rose to over US$3.3 trillion, making it the world's largest company by market capitalization as of June 19, 2024.[69] It became the first company to reach US$4 trillion on July 9, 2025,[70] and subsequently US$5 trillion on October 29, 2025, just 112 days later.[71]
In 2023,San Francisco's population increased for the first time in years, with the boom cited as a contributing factor.[72]
Machine learning resources, whether hardware or software, can be bought and licensed off-the-shelf or as cloud platform services.[73] This enables wide and publicly available use, spreading AI skills.[73] Over half of businesses consider AI a top organizational priority and the most crucial technological advancement in decades.[74]
Across industries, generative AI tools have become widely available through the AI boom and are increasingly used in businesses across regions.[75] A main area of use is data analytics. Seen as an incremental change, machine learning improves industry performance.[76] Businesses report AI to be most useful for increasing process efficiency, improving decision-making, and strengthening existing services and products.[77] Through adoption, AI has already positively influenced revenue generation in multiple business functions, with businesses reporting revenue increases of up to 16%, mainly in manufacturing, risk management, and research and development.[75]
AI and generative AI investments have grown with the boom, rising from $18 billion in 2014 to $119 billion in 2021; most notably, the share of generative AI investments was around 30% in 2023.[78] Further, generative AI businesses have seen considerable venture capital investment even though regulatory and economic outlooks remain in question.[79]
Tech giants capture the bulk of the monetary gains from AI and act as major suppliers to or customers of private users and other businesses.[80][81]
With the introduction of AI, business production has risen sharply. Workers are expected to be able to use resources provided by artificial intelligence to boost their productivity.[82] As many small businesses do not yet use AI, it is believed that broader adoption could change the whole work structure, as many tasks would be automated by AI.[83]
Although production would increase, the effects on the economy could be negative. AI risks concentrating wealth and power, causing more inequality and possibly a socioeconomic divide.[84] AI could also change aspects such as wages or payroll, since employers could automate jobs for less than the cost of a human worker, saving businesses money on labor costs.
Inaccuracy, cybersecurity and intellectual property infringement are considered to be the main risks associated with the boom, although few actively attempt to mitigate them.[75] Large language models have been criticized for reproducing biases inherited from their training data, including discriminatory biases related to ethnicity or gender.[85] As a dual-use technology, AI carries risks of misuse by malicious actors.[86] As AI becomes more sophisticated, it may eventually become cheaper and more efficient than human workers, which could cause technological unemployment and a transition period of economic turmoil.[87][88] Public reaction to the AI boom has been mixed: some hail the new possibilities that AI creates, its sophistication and potential for benefiting humanity,[89][90] while others denounce it for threatening job security[91][92] and for giving "uncanny" or flawed responses.[93]
Tech companies such as Meta, OpenAI and Nvidia have been sued by artists, writers, journalists, and software developers for using their work to train AI models.[99][100] Early generative AI models, such as GPT-1, used the BookCorpus, and books remain among the best sources of training data for producing high-quality language models. ChatGPT aroused suspicion that its sources included libraries of pirated content after the chatbot produced detailed summaries of every part of Sarah Silverman's The Bedwetter and verbatim excerpts of paywalled content from The New York Times.[101][102] In protest of the UK government holding consultations on how copyrighted music can legally be used to train AI models,[103] more than a thousand British musicians released an album with no sounds, entitled Is This What We Want?[104]
A Voice of America video covering potential dangers of AI-generated impersonation, and laws passed in California to combat it
The ability to generate convincing, personalized messages as well as realistic images may facilitate large-scale misinformation, manipulation, and propaganda.[105]
On May 20, 2024, following the release a week earlier of a demo of updates to OpenAI's ChatGPT Voice Mode feature,[108][109] actor Scarlett Johansson issued a statement[110] about the "Sky" voice shown in the demo, accusing OpenAI of producing it to be very similar to her own voice and to her portrayal of the artificial intelligence voice assistant Samantha in the film Her (2013), despite Johansson having refused an earlier offer from the company to provide her voice for the system. The agent of the unnamed voice actress who voiced Sky stated that she had recorded her lines in her natural speaking voice and that OpenAI had mentioned neither the movie Her nor Johansson.[111][112]
Several incidents involving the sharing of non-consensual deepfake pornography have occurred. In late January 2024, deepfake images of American musician Taylor Swift proliferated. Several experts have warned that deepfake pornography is quickly created and disseminated due to the relative ease of using the technology.[113] Canada introduced federal legislation targeting the sharing of non-consensual sexually explicit AI-generated photos; most provinces already had such laws.[114] In the United States, the DEFIANCE Act was introduced in March 2024.[115]
A large amount of electricity is needed to power generative AI products,[116] making it more difficult for companies to achieve net zero emissions. From 2019 to 2024, Google's greenhouse gas emissions increased by nearly 50%, partly as a result of increased energy consumption by AI data centres.[117]
AI is expected by researchers of the Center for AI Safety to improve the "accessibility, success rate, scale, speed, stealth and potency of cyberattacks", potentially causing "significant geopolitical turbulence" if it reinforces attack more than defense.[86][118] Concerns have been raised about the potential capability of future AI systems to engineer particularly lethal and contagious pathogens.[119][120]
The AI boom is said to have started an arms race in which large companies are competing against each other to have the most powerful AI model on the market, with speed and profit prioritized over safety and user protection.[121][122][123]
Coverage of advances in machine learning and artificial intelligence has coincided with discussions of digital sentience and morality,[126] such as whether AI programs should be granted rights.[127]
Much of the AI boom has been funded by loans and venture capital, but many commercial AI services remain of questionable practical utility or quality for business.[128] Despite more than $60 billion in corporate investment in AI in 2025,[129] 95% of business AI projects are unprofitable, according to research from MIT.[130] Producers of generative AI, such as OpenAI, also themselves currently have costs greatly exceeding their revenue.[131] As other major tech companies such as Nvidia are both heavily invested into AI and dependent on the AI ecosystem and its hardware demands for their own ongoing growth,[128][130] this has raised speculation of a wider economic bubble in the tech industry, particularly if future demand falls short of the current levels of AI investment.[132][133][134]
^ Oord, Aaron van den; Dieleman, Sander; Zen, Heiga; Simonyan, Karen; Vinyals, Oriol; Graves, Alex; Kalchbrenner, Nal; Senior, Andrew; Kavukcuoglu, Koray (September 19, 2016), WaveNet: A Generative Model for Raw Audio, arXiv:1609.03499
^ "Jukebox". openai.com. September 21, 2022. Retrieved December 1, 2025.