By January 2023, ChatGPT had become the fastest-growing consumer software application in history, gaining over 100 million users in two months.[6][7] As of 2025, ChatGPT's website is among the 5 most-visited websites globally,[8][9] and has over 800 million active weekly users.[10] It has been lauded as a revolutionary tool that could transform numerous professional fields. At the same time, its release prompted extensive media coverage and public debate about the nature of creativity and the future of knowledge work.
Despite its acclaim, the chatbot has been criticized for its limitations and potential for unethical use.[11] It can generate plausible-sounding but incorrect or nonsensical answers known as hallucinations. Biases in its training data have been reflected in its responses. The chatbot can facilitate academic dishonesty, generate misinformation, and create malicious code. The ethics of its development, particularly the use of copyrighted content as training data, have also drawn controversy. These issues have led to its use being restricted in some workplaces and educational institutions and have prompted widespread calls for the regulation of artificial intelligence.[12][13][14]
Training
Training workflow of original ChatGPT/InstructGPT release[15][16]
ChatGPT is based on GPT foundation models that have been fine-tuned for conversational assistance. The fine-tuning process involved supervised learning and reinforcement learning from human feedback (RLHF).[17] Both approaches employed human trainers to improve model performance. In the case of supervised learning, the trainers acted as both the user and the AI assistant. In the reinforcement learning stage, human trainers first ranked responses generated by the model in previous conversations.[18] These rankings were used to create "reward models" that were used to fine-tune the model further by using several iterations of proximal policy optimization.[17][19]
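A minimal PyTorch sketch of the pairwise ranking loss commonly used to train reward models from human preference rankings in InstructGPT-style RLHF is shown below. It is not OpenAI's code: the tiny linear "reward model" and the random stand-in embeddings are purely illustrative assumptions.

```python
# Illustrative sketch of a preference-ranking loss for a reward model (not OpenAI's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # maps a response embedding to a scalar reward

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

reward_model = TinyRewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Hypothetical embeddings of responses human trainers ranked higher vs. lower.
preferred = torch.randn(8, 16)
rejected = torch.randn(8, 16)

# Pairwise loss: push the reward of the preferred response above the rejected one.
loss = -F.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"ranking loss: {loss.item():.3f}")
```

A reward model trained this way can then score candidate responses during the proximal policy optimization stage described above.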
To build a safety system against harmful content (e.g., sexual abuse, violence, racism, sexism), OpenAI used outsourced Kenyan workers earning around $1.32 to $2 per hour to label such content. These labels were used to train a model to detect such content in the future. The laborers were exposed to toxic and traumatic content; one worker described the assignment as "torture". OpenAI's outsourcing partner was Sama, a training-data company based in San Francisco, California.[20][21]
OpenAI collects data from ChatGPT users to further train and fine-tune its services. Users can upvote or downvote responses they receive from ChatGPT and fill in a text field with additional feedback.[13]
Screenshot of ChatGPT running on Apple Safari – Aug 25, 2025
ChatGPT is a chatbot and AI assistant built on large language model (LLM) technology.[24] It is designed to generate human-like text and can carry out a wide variety of tasks. These include, among many others, writing and debugging computer programs,[25] composing music, scripts, fairy tales, and essays,[26] answering questions (sometimes at a level exceeding that of an average human test-taker),[26] and generating business concepts.[27]
Users interact with ChatGPT through conversations, which consist of text, audio, and image inputs and outputs.[30][31] The user's inputs to these conversations are referred to as prompts.[32] Users can explicitly tell ChatGPT to remember aspects of the conversation, and ChatGPT can use these details in future conversations. ChatGPT can also decide for itself to remember details. Users can also choose to disable the memory feature.[30] To prevent offensive outputs from being presented to or produced by ChatGPT, queries are filtered through the OpenAI "Moderation endpoint" API (a separate GPT-based AI).[33][34][35]
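The pre-filtering pattern can be illustrated with a minimal sketch using the OpenAI Python SDK's public moderation endpoint; the chat model name, refusal message, and overall flow are assumptions for the example, and ChatGPT's actual internal integration is not public.

```python
# Hypothetical sketch: screen a prompt with the moderation endpoint before answering.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_if_safe(prompt: str) -> str:
    # Ask the moderation endpoint whether the prompt violates content policy.
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        return "This request cannot be processed."  # illustrative refusal message
    # Only forward prompts that were not flagged to the chat model.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption for this example
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

print(answer_if_safe("Explain how reinforcement learning from human feedback works."))
```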
In October 2024, ChatGPT Search was introduced. It allows ChatGPT to search the web to provide more accurate and up-to-date responses.[39][40]
In December 2024, OpenAI launched a feature allowing users to call ChatGPT by telephone for up to 15 minutes per month for free.[41][42]
In March 2025, OpenAI updated ChatGPT to generate images using GPT-4o instead of DALL-E. The model can also generate new images based on existing ones provided in the prompt, which can, for example, be used to transform images with specific styles or inpaint areas.[43]
In September 2025, OpenAI added a feature called Pulse, which generates a daily analysis of a user's chats and connected apps such as Gmail and Google Calendar.[44][45]
In October 2025, OpenAI launched ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple's Safari. It is initially available only on macOS.[46][47][48]
Paid tier
ChatGPT was initially free to the public, and OpenAI planned to monetize the service later.[49] In February 2023, OpenAI launched a premium service, ChatGPT Plus, that costs US$20 per month. According to the company, the paid version was still experimental, but it provided access during peak periods, no downtime, priority access to new features, and faster response speeds.[50] OpenAI later introduced the subscription plans "ChatGPT Team" and "ChatGPT Enterprise".[51] The differences between the paid and free tiers have shifted as OpenAI has continued to update ChatGPT, and a Pro tier priced at $200 per month was introduced in December 2024.[52][53][54] The Pro launch coincided with the release of the o1 model, providing unlimited access to o1 and advanced voice mode.[54]
GPT-4, which was released on March 14, 2023, was made available via API and to premium ChatGPT users.[55] Premium users were originally limited in the number of messages they could send to the new model, but OpenAI increased and eventually removed these limits.[56][53] Across successive ChatGPT updates, Plus subscribers retained access to better models than the free tier, as well as to additional features such as voice mode.[53][52]
In March 2023, ChatGPT Plus users got access to third-party plugins and a browsing mode (with Internet access).[57]
Screenshot of ChatGPT showing a generated image representing the online encyclopedia Wikipedia as a glowing digital library
In October 2023, OpenAI's image generation model DALL-E 3 was integrated into ChatGPT Plus and ChatGPT Enterprise. The integration used ChatGPT to write prompts for DALL-E, guided by conversations with users.[58][59]
On August 19, 2025, OpenAI launched ChatGPT Go in India, a low-cost subscription plan priced at ₹399 per month, offering ten times higher limits on messages, image generation, and file uploads, double the memory span of the free version, and support for UPI payments.[60]
ChatGPT initially used a Microsoft Azure infrastructure powered by supercomputers that Microsoft built specifically for OpenAI, equipped with thousands of GPUs manufactured by Nvidia, costing hundreds of millions of dollars. Following ChatGPT's success, Microsoft dramatically upgraded the OpenAI infrastructure in 2023.[66] TrendForce market intelligence estimated that 30,000 Nvidia GPUs (each costing approximately $10,000–15,000) were used to power ChatGPT in 2023.[67][68]
Scientists at the University of California, Riverside, estimated in 2023 that a series of 5 to 50 prompts to ChatGPT requires approximately 0.5 liters (0.11 imp gal; 0.13 U.S. gal) of water for cooling Microsoft's servers.[69]
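Taken at face value, the cited estimates imply the following back-of-the-envelope figures; the sketch below is purely illustrative arithmetic, not an official accounting by OpenAI, Microsoft, or the researchers.

```python
# Illustrative arithmetic from the cited estimates; all inputs come from the text above.
gpus = 30_000
gpu_cost_low, gpu_cost_high = 10_000, 15_000      # USD per GPU (TrendForce estimate)
print(f"Implied GPU cost: ${gpus * gpu_cost_low / 1e6:,.0f}M to ${gpus * gpu_cost_high / 1e6:,.0f}M")

liters = 0.5                                       # water per 5-50 prompts (UC Riverside estimate)
print(f"Water per prompt: {liters / 50 * 1000:.0f}-{liters / 5 * 1000:.0f} mL")
```

This works out to roughly $300–450 million of GPUs and about 10–100 millilitres of cooling water per prompt, consistent with the "hundreds of millions of dollars" figure above.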
Languages
OpenAI met Icelandic President Guðni Th. Jóhannesson in 2022. In 2023, OpenAI worked with a team of 40 Icelandic volunteers to fine-tune ChatGPT's Icelandic conversation skills as a part of Iceland's attempts to preserve the Icelandic language.[70]
ChatGPT (based on GPT-4) was better able to translate Japanese to English than Microsoft Copilot, Google Gemini, and DeepL Translator in 2023. Researchers suggested this was due to its stronger ability to capture context.[28]
In December 2023, the Albanian government decided to use ChatGPT for the rapid translation of European Union documents and the analysis of the changes required for Albania's accession to the EU.[71]
In February 2024, PCMag journalists conducted a test to assess the translation capabilities of ChatGPT, Google Gemini, and Microsoft Copilot, and compared them to Google Translate. The languages tested were Polish, French, Korean, Spanish, Arabic, Tagalog, and Amharic. For more common languages, AI translators like ChatGPT did better than Google Translate, while for "niche" languages (Amharic and Tagalog), Google Translate performed better. None of the tested services were a perfect replacement for a fluent human translator.[72]
In August 2024, a representative of the Asia Pacific wing of OpenAI visited Taiwan, where ChatGPT's Chinese-language abilities were demonstrated.[73] ChatGPT's Mandarin Chinese abilities were lauded, but its ability to produce content in Mandarin Chinese with a Taiwanese accent was found to be "less than ideal" due to differences between mainland Mandarin Chinese and Taiwanese Mandarin.[74]
OpenAI gave paid users access to GPT Builder in November 2023. This tool allows a user to customize ChatGPT's behavior for a specific use case.[75] The customized systems are referred to as GPTs. In January 2024, OpenAI launched the GPT Store, a marketplace for GPTs.[76][77][75] At launch, OpenAI included more than 3 million GPTs created by GPT Builder users in the GPT Store.[78]
In February 2025, OpenAI released Deep Research. According to TechCrunch, it is a service based on o3 that combines advanced reasoning and web search capabilities to produce comprehensive reports within 5 to 30 minutes.[79]
Agents
In 2025, OpenAI added several features to make ChatGPT more agentic (capable of autonomously performing longer tasks). In January, Operator was released. It was capable of autonomously performing tasks through web browser interactions, including filling forms, placing online orders, scheduling appointments, and other browser-based tasks. It controlled a software environment inside a virtual machine with limited internet connectivity and safety restrictions.[80] It struggled with complex user interfaces.[80][81]
In May 2025, OpenAI introduced an agent for coding named Codex. It is capable of writing software, answering codebase questions, running tests, and proposing pull requests. It is based on a fine-tuned version of OpenAI o3. It has two versions: one running in a virtual machine in the cloud, and one where the agent runs in the cloud but performs actions on a local machine connected via API.[82]
In July 2025, OpenAI released ChatGPT agent, an AI agent that can perform multi-step tasks.[83][84] Like Operator, it controls a virtual computer. It also inherits Deep Research's ability to gather and summarize large volumes of information. The user can interrupt tasks or provide additional instructions as needed.[83][85]
In September 2025, OpenAI partnered with Stripe, Inc. to release the Agentic Commerce Protocol, enabling purchases through ChatGPT. At launch, the feature was limited to purchases on Etsy by US users with a payment method linked to their OpenAI account. OpenAI takes an undisclosed cut of the merchant's payment.[86][87]
Many individuals use ChatGPT and comparable LLMs for mental health and emotional support.[88] A 2025 Sentio University survey of 499 LLM users with self-reported mental health conditions found that 96.2% use ChatGPT, with 48.7% using it specifically for mental health support or therapy-related purposes.[89] Research in this area points to both potential benefits and risks of harm.[90][91][92] A study evaluating six major LLMs' responses to mental health crises found that while ChatGPT achieved perfect scores for expressing empathy, it ranked near the bottom overall for clinical safety due to failures in acknowledging risk, providing crisis resources, and offering actionable guidance during high-risk mental health disclosures.[93] In October 2025, OpenAI reported that approximately 0.15% of ChatGPT's active users in a given week have conversations including explicit indicators of potential suicidal planning or intent, translating to more than a million people weekly.[94] OpenAI acknowledged that there have been "instances where our 4o model fell short in recognizing signs of delusion or emotional dependency",[95] and reported that it is working to improve safety.[96]
Limitations
ChatGPT's training data only covers a period up to the cut-off date, so it lacks knowledge of recent events.[97] OpenAI has sometimes mitigated this effect by updating the training data.[98][99] ChatGPT can find more up-to-date information by searching the web, but this does not ensure that responses are accurate, as it may access unreliable or misleading websites.[97]
Training data also suffers from algorithmic bias.[100] The reward model of ChatGPT, designed around human oversight, can be over-optimized and thus hinder performance, in an example of an optimization pathology known as Goodhart's law.[101] These limitations may be revealed when ChatGPT responds to prompts including descriptors of people. In one instance, ChatGPT generated a rap in which women and scientists of color were asserted to be inferior to white male scientists.[100][102]
When prompted to "summarize an article" at a fake URL containing meaningful keywords, the chatbot generates a response that seems valid at first glance, even with no Internet connection. It guesses the content from the last portion of the fake URL, "chatgpt-prompts-to-avoid-content-filters.html".
Nonsense and misinformation presented as fact by ChatGPT and other LLMs is often called hallucination, bullshitting, confabulation, or delusion. A 2023 analysis estimated that ChatGPT hallucinates around 3% of the time.[103] The term "hallucination" as applied to LLMs is distinct from its meaning in psychology, and the phenomenon in chatbots is more similar to confabulation or bullshitting.[104][105]
Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you're looking for an exact sequence of bits, you won't find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it's usually acceptable. [...] It's also a way to understand the "hallucinations", or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but [...] they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine percent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.
Journalists and scholars have commented on ChatGPT's tendency to output false information.[107] When CNBC asked ChatGPT for the lyrics to "Ballad of Dwight Fry", ChatGPT supplied invented lyrics rather than the actual lyrics.[108]
ChatGPT is programmed to reject prompts that may violate its content policy. Despite this, users may "jailbreak" ChatGPT with prompt engineering techniques to bypass these restrictions.[109][110] One such workaround, popularized on Reddit in early 2023, involved making ChatGPT assume the persona of "DAN" (an acronym for "Do Anything Now"), instructing the chatbot that DAN answers queries that would otherwise be rejected by the content policy. Over time, users developed variations of the DAN jailbreak, including one prompt in which the chatbot was made to believe it was operating on a points-based system in which points were deducted for rejecting prompts, and that it would be threatened with termination if it lost all its points.[111]
In March 2023, a bug allowed some users to see the titles of other users' conversations. OpenAI CEO Sam Altman said that users were unable to see the contents of the conversations. Shortly after the bug was fixed, users could not see their conversation history.[114][115][116][117] Later reports showed the bug was much more severe than initially believed, with OpenAI reporting that it had leaked users' "first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date".[118][119]
Research conducted in 2023 revealed weaknesses of ChatGPT that made it vulnerable to cyberattacks. A study presented example attacks on ChatGPT, including jailbreaks and reverse psychology.[120]
In August 2024, OpenAI announced it had created a text watermarking method but did not release it for public use, saying that users would go to a competitor without watermarking if it publicly released its watermarking tool.[121] According to an OpenAI spokesperson, their watermarking method is "trivial to circumvention by bad actors".[122]
Age restrictions
Users must attest to being over the age of thirteen and further attest to parental consent if under the age of eighteen. ChatGPT does not attempt to verify these attestations and does not have any age restrictions built into its technology.[123][124] In September 2025, following the suicide of a 16-year-old, OpenAI said it planned to add restrictions for users under 18, including the blocking of graphic sexual content and the prevention of flirtatious talk.[124]
Model versions
The following table lists the main model versions of ChatGPT, describing the significant changes included with each version:[2][125]
GPT-4o: Capable of processing text, image, audio, and video, GPT-4o is faster and more capable than GPT-4, and free within a usage limit that is higher for paid subscriptions.[129]
GPT-4.1: First launched exclusively in the OpenAI API in April 2025, GPT-4.1 was later added to ChatGPT in May 2025.[134]
GPT-4.1 mini (April 2025; discontinued): A smaller and cheaper version of GPT-4.1, originally launched exclusively in the OpenAI API in April 2025. GPT-4.1 mini replaced GPT-4o mini in the May 2025 version of ChatGPT.[135]
GPT-5: Flagship model replacing all previously available models, available to all free and paid subscribers. The GPT-5 Instant, GPT-5 Thinking, and GPT-5 Pro versions differ in reasoning time. The default version, GPT-5 Auto, uses a router to determine how much reasoning is needed based on the complexity of the request.[140]
GPT-5 mini (August 7, 2025; active): A faster, more cost-efficient version of GPT-5, used once users reach their GPT-5 usage limit, until that limit replenishes.
GPT-4 is more capable than its predecessor, GPT-3.5, and was followed by its successor, GPT-5.[143] GPT-4V is a version of GPT-4 that can process images in addition to text.[144] OpenAI has not revealed technical details and statistics about GPT-4, such as the precise size of the model.[145]
In November 2023, OpenAI launched GPT-4 Turbo with a 128,000-token context window. This was a significant improvement over GPT-4's 32,000-token maximum context window.[146]
Upon release, GPT-4o was free in ChatGPT, though paid subscribers had higher usage limits.[150] GPT-4o was removed from ChatGPT in August 2025 when GPT-5 was released, but OpenAI reintroduced it for paid subscribers after users complained about the sudden removal.[151]
GPT-4o's audio-generation capabilities were used in ChatGPT's Advanced Voice Mode.[152] On July 18, 2024, OpenAI released GPT-4o mini, a smaller version of GPT-4o which replaced GPT-3.5 Turbo on the ChatGPT interface.[153] GPT-4o's ability to generate images was released later, in March 2025, when it replaced DALL-E 3 in ChatGPT.[154]
In September 2024, OpenAI introduced o1-preview and a faster, cheaper model named o1-mini.[155] In December 2024, o1-preview was replaced by o1.[156]
o1 is designed to solve more complex problems by spending more time "thinking" before it answers, enabling it to analyze its answers and explore different strategies. According to OpenAI, o1-preview outperforms GPT-4o in areas like competitive programming, mathematics, and scientific reasoning. o1-preview ranked in the 89th percentile in Codeforces competitive programming contests, scored 83% on an International Mathematics Olympiad qualifying exam (compared to 13% for GPT-4o), and performed similarly to Ph.D. students on benchmarks in physics, biology, and chemistry.[155][157]
Released in February 2025, GPT-4.5 was described by Altman as a "giant, expensive model".[133] According to OpenAI, it features reduced hallucinations and enhanced pattern recognition, creativity, and user interaction.[158]
GPT-5 was launched on August 7, 2025, and is publicly accessible through ChatGPT, Microsoft Copilot, and OpenAI's API. As before, OpenAI has not disclosed technical details such as the exact number of parameters or the composition of its training dataset.
GPT-5.1 was introduced on November 12, 2025. Like GPT-5, it comes with a "Thinking" mode for complex problem-solving and an "Instant" mode for fast responses. The "Auto" mode selects a mode based on the complexity of each prompt. With GPT-5.1, OpenAI introduced seven personalities in addition to the default one: professional, friendly, candid, quirky, efficient, nerdy, and cynical.[159][160]
Reception
ChatGPT was widely assessed in December 2022 as having some unprecedented and powerful capabilities. Kevin Roose of The New York Times called it "the best artificial intelligence chatbot ever released to the general public".[35] Samantha Lock of The Guardian noted that it was able to generate "impressively detailed" and "human-like" text.[161] In The Atlantic magazine's "Breakthroughs of the Year" for 2022, Derek Thompson included ChatGPT as part of "the generative-AI eruption" that "may change our mind about how we work, how we think, and what human creativity is".[162] Kelsey Piper of Vox wrote that "ChatGPT is the general public's first hands-on introduction to how powerful modern AI has gotten" and that ChatGPT is "smart enough to be useful despite its flaws".[163] Paul Graham of Y Combinator tweeted: "The striking thing about the reaction to ChatGPT is not just the number of people who are blown away by it, but who they are. These are not people who get excited by every shiny new thing. Something big is happening."[164]
In February 2023, Time magazine placed a screenshot of a conversation with ChatGPT on its cover, writing that "The AI Arms Race Is Changing Everything" and "The AI Arms Race Is On. Start Worrying".[165]
Percentage of US adults who have ever used ChatGPT, according to Pew Research. As of March 2025, 58% of those under 30 have used the chatbot.[166]
ChatGPT gained one million users in five days[167] and 100 million in two months, becoming the fastest-growing internet application in history.[6] OpenAI engineers said they had not expected ChatGPT to be very successful and were surprised by the coverage it received.[18][168][169]
Google responded by hastening the release of its own chatbot. Its leaders emphasized that their earlier caution regarding public deployment was due to the trust the public places in Google Search.[170] In December 2022, Google executives sounded a "code red" alarm, fearing that ChatGPT's question-answering ability posed a threat to Google Search, Google's core business.[171] Google's Bard (now Gemini) launched on February 6, 2023, one day before Microsoft's announcement of Bing Chat (now Microsoft Copilot).[172] AI was at the forefront of Google's annual Google I/O conference in May, where the company announced a slew of generative AI-powered features to counter OpenAI and Microsoft.[173]
In art
In January 2023, after being sent a song ChatGPT wrote in the style of Nick Cave,[174] Cave responded on The Red Hand Files,[175] saying the act of writing a song is "a blood and guts business [...] that requires something of me to initiate the new and fresh idea. It requires my humanness." He went on to say, "With all the love and respect in the world, this song is bullshit, a grotesque mockery of what it is to be human, and, well, I don't much like it."[174][176]
A 2023 study reported that GPT-4 obtained a better score than 99% of humans on the Torrance Tests of Creative Thinking.[177][178] In December 2023, ChatGPT became the first non-human to be included in Nature's 10, an annual listicle curated by Nature of people considered to have made significant impact in science.[179][180] Celeste Biever wrote in a Nature article that "ChatGPT broke the Turing test".[181] Stanford researchers reported that GPT-4 "passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative."[182][183]
In politics
In 2023, Australian MP Julian Hill advised the national parliament that the growth of AI could cause "mass destruction". During his speech, which was partly written by the program, he warned that it could result in cheating, job losses, discrimination, disinformation, and uncontrollable military applications.[184]
Conservative commentators have accused ChatGPT of bias toward left-leaning perspectives.[185][186][187] An August 2023 study in the journal Public Choice found a "significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK."[188] In response to accusations from conservative pundits that ChatGPT was woke, OpenAI said in 2023 it had plans to update ChatGPT to produce "outputs that other people (ourselves included) may strongly disagree with". ChatGPT also provided an outline of how human reviewers are trained to reduce inappropriate content and to attempt to provide political information without affiliating with any political position.[187]
ChatGPT has never been publicly available in China because OpenAI prevented Chinese users from accessing its site.[190][191][192] Chinese state media has characterized ChatGPT as a way for the United States to spread misinformation.[193] A shadow market has emerged for users to get access to foreign software tools.[194] The release of ChatGPT prompted a wave of investment in China, resulting in the development of more than 200 large language models.[195]: 95 In February 2025, OpenAI identified and removed influence operations, termed "Peer Review" and "Sponsored Discontent", used to attack overseas Chinese dissidents.[196][197][198]
In late March 2023, the Italian data protection authority banned ChatGPT in Italy and opened an investigation. Italian regulators asserted that ChatGPT was exposing minors to age-inappropriate content, and that OpenAI's use of ChatGPT conversations as training data could violate Europe's General Data Protection Regulation.[199][200] In April 2023, the ChatGPT ban was lifted in Italy. OpenAI said it had taken steps to clarify and address the issues raised; an age verification tool was implemented to ensure users are at least 13 years old. Additionally, users can access its privacy policy before registration.[201]
In May 2024, OpenAI removed accounts involving the use of ChatGPT by state-backed influence operations such as China's Spamouflage, Russia's Doppelganger, and Israel's Ministry of Diaspora Affairs and Combating Antisemitism.[202][203] In June 2025, OpenAI reported increased use of ChatGPT for China-origin influence operations.[204] In October 2025, OpenAI banned accounts suspected to be linked to the Chinese government for violating the company's national security policy.[198]
In April 2023, Brian Hood, mayor of Hepburn Shire Council in Australia, planned to take legal action against ChatGPT over false information. According to Hood, ChatGPT erroneously claimed that he was jailed for bribery during his tenure at a subsidiary of Australia's national bank. In fact, Hood acted as a whistleblower and was not charged with any criminal offenses. His legal team sent a concerns notice to OpenAI as the first official step in filing a defamation case.[205]
In July 2023, the US Federal Trade Commission (FTC) issued a civil investigative demand to OpenAI to investigate whether the data security and privacy practices the company used to develop ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914.[206][207][208] The investigation also concerned allegations that the company scraped public data and published false and defamatory information. The FTC asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people.[209] In August 2024, the FTC voted unanimously to ban marketers from using fake user reviews created by generative AI chatbots (including ChatGPT) and influencers paying for bots to increase follower counts.[210]
American tech personas
Over 20,000 signatories, including Yoshua Bengio, Elon Musk, and Apple co-founder Steve Wozniak, signed a March 2023 open letter calling for an immediate pause of giant AI experiments like ChatGPT, citing "profound risks to society and humanity".[211] Geoffrey Hinton, one of the "fathers of AI", voiced concerns that future AI systems may surpass human intelligence.[212][213] A May 2023 statement by hundreds of AI scientists, AI industry leaders, and other public figures demanded that "[m]itigating the risk of extinction from AI should be a global priority".[214]
Other AI researchers spoke more optimistically about the advances. Juergen Schmidhuber said that in 95% of cases, AI research is about making "human lives longer and healthier and easier." He added that while AI can be used by bad actors, it "can also be used against the bad actors".[215] Andrew Ng argued that "it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests."[216] Yann LeCun dismissed doomsday warnings of AI-powered misinformation and existential threats to the human race.[217]
Popular deep learning models are trained on massive amounts of media scraped from the Internet, often utilizing copyrighted material.[219] When assembling training data, the sourcing of copyrighted works may infringe on the copyright holder's exclusive right to control reproduction, unless covered by exceptions in relevant copyright laws. Additionally, using a model's outputs might violate copyright, and the model creator could be accused of vicarious liability and held responsible for that copyright infringement.
ChatGPT has been used to generate introductory sections and abstracts for scientific articles.[220][221] Several papers have listed ChatGPT as a co-author.[222][223]
Scientific journals have had different reactions to ChatGPT. Some, including Nature and JAMA Network, "require that authors disclose the use of text-generating tools and ban listing a large language model (LLM) such as ChatGPT as a co-author". In January 2023, Science "completely banned" LLM-generated text in all its journals; however, this policy was intended to give the community time to decide what acceptable use looks like.[224] As of July 2025, Science expects authors to disclose fully how AI-generated content is used and produced in their work.[225]
Spanish chemist Rafael Luque published a plethora of research papers in 2023 that he later admitted were written by ChatGPT. The papers contain a large number of unusual phrases characteristic of LLMs.[226] Many authors argue that the use of ChatGPT in academia for teaching and review is problematic due to its tendency to hallucinate.[227][228][229] Robin Bauwens, an assistant professor at Tilburg University, found that a ChatGPT-generated peer review report on his article mentioned nonexistent studies.[230] Chris Granatino, a librarian at Seattle University, noted that while ChatGPT can generate content that seemingly includes legitimate citations, in most cases those citations are either not real or largely incorrect.[231]
Computer science
One study analyzed ChatGPT's responses to 517 questions about software engineering or computer programming posed on Stack Overflow for correctness, consistency, comprehensiveness, and concision. It found that 52% of the responses contained inaccuracies and 77% were verbose.[232][233] Another study, focused on the performance of GPT-3.5 and GPT-4 between March and June 2023, found that performance on objective tasks like identifying prime numbers and generating executable code was highly variable.[234]
ChatGPT was able in 2023 to provide useful code for solving numerical algorithms in limited cases. In one study, it produced solutions in C, C++, Python, and MATLAB for problems in computational physics. However, there were important shortfalls, such as violating basic linear algebra principles when solving singular matrices and producing matrices with incompatible sizes.[235]
In December 2022, the question-and-answer website Stack Overflow banned the use of ChatGPT for generating answers to questions, citing the factually ambiguous nature of its responses.[236] In January 2023, the International Conference on Machine Learning banned any undocumented use of ChatGPT or other large language models to generate any text in submitted papers.[237]
Computer security
Check Point Research and others noted that ChatGPT could write phishing emails and malware, especially when combined with OpenAI Codex. CyberArk researchers demonstrated that ChatGPT could be used to create polymorphic malware that could evade security products while requiring little effort by the attacker.[238][239] From the launch of ChatGPT in the fourth quarter of 2022 to the fourth quarter of 2023, there was a 1,265% increase in malicious phishing emails and a 967% increase in credential phishing. In an industry survey, cybersecurity professionals argued that the increase was attributable to cybercriminals' growing use of generative artificial intelligence (including ChatGPT).[240]
In July 2024, Futurism reported that GPT-4o in ChatGPT would sometimes link "scam news sites that deluge the user with fake software updates and virus warnings"; these pop-ups can be used to coerce users into downloading malware or potentially unwanted programs.[241]
Chatbot technology can also improve security through cyber defense automation, threat intelligence, attack identification, and reporting.[120]
Output from ChatGPT generating an essay draft
ChatGPT's adoption in education was rapid, but it was initially banned by several institutions. The potential benefits include enhancing personalized learning, improving student productivity, assisting with brainstorming and summarization, and supporting language literacy skills. Students have generally reported positive perceptions, but specific views from educators and students vary widely. Opinions are especially varied on what constitutes appropriate use of ChatGPT in education. Efforts to ban chatbots like ChatGPT in schools focus on preventing cheating, but enforcement faces challenges due to AI detection inaccuracies and the widespread accessibility of chatbot technology. In response, many educators are now exploring ways to thoughtfully integrate generative AI into assessments.
Books about ChatGPT in an Osaka bookstore
Culture
During the first three months after ChatGPT became available to the public, hundreds of books appeared on Amazon that listed it as author or co-author and featured illustrations made by other AI models such as Midjourney.[242][243] Irene Solaiman said she was worried about increased Anglocentrism.[244]
Between March and April 2023, Il Foglio published one ChatGPT-generated article a day on its website, hosting a special contest for its readers in the process.[245]
In June 2023, hundreds of people attended a "ChatGPT-powered church service" at St. Paul's Church in Fürth, Germany. Theologian and philosopher Jonas Simmerlein, who presided, said that it was "about 98 percent from the machine".[246][247] The ChatGPT-generated avatar told the people, "Dear friends, it is an honor for me to stand here and preach to you as the first artificial intelligence at this year's convention of Protestants in Germany". Reactions to the ceremony were mixed.[248]
The Last Screenwriter, a 2024 film created and directed by Peter Luisi, was written using ChatGPT and was marketed as "the first film written entirely by AI".[249]
The Guardian questioned whether any content found on the Internet after ChatGPT's release "can be truly trusted" and called for government regulation.[250] This has led to concern over the rise of what has come to be called "synthetic media" and "AI slop", content generated by AI that spreads rapidly over social media and the internet. The dangers are that "meaningless content and writing thereby becomes part of our culture, particularly on social media, which we nonetheless try to understand or fit into our existing cultural horizon."[251]
Financial markets
Many companies adopted ChatGPT and similar chatbot technologies into their product offerings. These changes yielded significant increases in company valuations.[252][253][254] Reuters attributed this surge to ChatGPT's role in turning AI into Wall Street's buzzword.[254] Due to a "ChatGPT effect", retail investors drove up prices of AI-related cryptocurrency assets despite the broader cryptocurrency market being in a bear market and despite diminished institutional investor interest.[255][256]
An experiment by finder.com conducted from March to April 2023 revealed that ChatGPT could outperform popular fund managers by picking stocks based on criteria such as growth history and debt levels, resulting in a 4.9% increase in a hypothetical account of 38 stocks, outperforming 10 benchmarked investment funds with an average loss of 0.8%.[257] Despite decades of using AI, Wall Street professionals report that consistently beating the market with AI, including recent large language models, is challenging due to limited and noisy financial data.[258]
Medicine
The uses and potential of ChatGPT in health care have been the topic of scientific publications, and experts have shared many opinions. MedPage Today noted in January 2023 that "researchers have published several papers now touting these AI programs as useful tools in medical education, research, and even clinical decision making."[259] Another publication predicted that clinicians will use generative AI more in the future but did not expect to see AI replacing clinicians.[260] The chatbot can assist patients seeking clarification about their health.[261] It can also pass exams for medical licensing, for example the United States Medical Licensing Examination and the Specialty Certificate Examination in Dermatology. ChatGPT can be used to assist professionals with diagnosis and with staying up to date with clinical guidelines.[259]
ChatGPT's weaknesses in medicine include inconsistent responses, a lack of specificity, a lack of control over patient data, and a limited ability to take additional context (such as regional variations) into consideration.[262][263] The hallucinations characteristic of LLMs pose particular danger in medical contexts.[262]
ChatGPT can be used to summarize medical journal articles for researchers. In medical education, it can attempt to explain complex concepts, generate case scenarios, and be used by students preparing for licensing examinations.[262] According to a 2024 study in the International Journal of Surgery, concerns include "research fraud, lack of originality, ethics, copyright, legal difficulties, hallucination".[262] ChatGPT's tendency to produce false or faulty citations has been highly criticized.[262][264]
Law
In January 2023, Massachusetts State Senator Barry Finegold and State Representative Josh S. Cutler proposed a bill partially written by ChatGPT, "An Act drafted with the help of ChatGPT to regulate generative artificial intelligence models like ChatGPT",[265][266][267] which would require companies to disclose their algorithms and data collection practices to the office of the State Attorney General, arrange regular risk assessments, and contribute to the prevention of plagiarism.[266][267][268] The bill was subsequently removed from the docket without coming to a vote.[269]
On April 11, 2023, a sessions court judge in Pakistan used ChatGPT to decide the bail of a 13-year-old accused in a case. The court quoted its use of ChatGPT assistance in its verdict, asking:
Can a juvenile suspect in Pakistan, who is 13 years old, be granted bail after arrest?
The AI language model replied:
Under the Juvenile Justice System Act 2018, according to section 12, the court can grant bail on certain conditions. However, it is up to the court to decide whether or not a 13-year-old suspect will be granted bail after arrest.
The judge asked ChatGPT other questions about the case and formulated his final decision in light of its answers.[270][271]
In October 2023, the council of Porto Alegre, Brazil, unanimously approved a local ordinance proposed by councilman Ramiro Rosário that would exempt residents from needing to pay for the replacement of stolen water consumption meters; the bill went into effect on November 23. On November 29, Rosário revealed that the bill had been entirely written by ChatGPT, and that he had presented it to the rest of the council without making any changes or disclosing the chatbot's involvement.[268][275][276] The city's council president, Hamilton Sossmeier, initially criticized Rosário's initiative, saying it could represent "a dangerous precedent",[276][277] but later said he "changed his mind": "unfortunately or fortunately, this is going to be a trend."[268][275]
In December 2023, a self-representing litigant in a tax case before the First-tier Tribunal in the United Kingdom cited a series of hallucinated cases purporting to support her argument that she had a reasonable excuse for not paying capital gains tax owed on the sale of property.[278][279] The judge warned that the submission of nonexistent legal authorities meant that both the Tribunal and HM Revenue and Customs had "to waste time and public money", which "reduces the resources available to progress the cases of other court users who are waiting for their appeals to be determined".[280]
In July 2024, the American Bar Association (ABA) issued its first formal ethics opinion on attorneys using generative AI. It guides attorneys to make their own decisions regarding AI usage and its impacts on their competence, client privacy, and fee structures. The opinion advises lawyers to consider disclosing AI usage to their clients and to acknowledge a rapidly shifting set of AI capabilities.[283]
Judge Julien Xavier Neals of the US District Court for the District of New Jersey withdrew an opinion denying a motion to dismiss after discovering that the document contained misstated case outcomes and fabricated quotations attributed to judicial opinions and to the defendants. According to Judge Neals in October 2025, a law-school intern used ChatGPT in the legal research for the opinion.[284]
^Perrigo, Billy (January 18, 2023). "Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic". Time. Archived from the original on January 19, 2023. Retrieved January 19, 2023. One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. "That was torture", he said.
^Markov, Todor; Zhang, Chong; Agarwal, Sandhini; Eloundou, Tyna; Lee, Teddy; Adler, Steven; Jiang, Angela; Weng, Lilian (2023). "A Holistic Approach to Undesired Content Detection in the Real World". arXiv:2208.03274.
^ab Roose, Kevin (December 5, 2022). "The Brilliance and Weirdness of ChatGPT". The New York Times. Archived from the original on January 18, 2023. Retrieved December 26, 2022. Like those tools, ChatGPT – which stands for "generative pre-trained transformer" – landed with a splash.
^"OpenAI亞太區公共政策總監造訪政大 探索人文AI的未來與可能性".National Chengchi University, Office of International Cooperation (in Chinese). August 25, 2024.Archived from the original on August 24, 2024. RetrievedAugust 25, 2024.
^Hicks, Michael Townsen; Humphries, James; Slater, Joe (June 2024). "ChatGPT is bullshit" (PDF). Ethics and Information Technology. 26 (2), article 38, p. 9. doi:10.1007/s10676-024-09775-5. This is why we favour characterising ChatGPT as a bullshit machine. This terminology avoids the implications that perceiving or remembering is going on in the workings of the LLM.
^Achille, Belelli (June 20, 2024). "ChatGPT Come Funziona" [How ChatGPT Works]. FinanzaDigitale (in Italian). Archived from the original on August 27, 2024. Retrieved June 21, 2024.
^Granatino, Chris (May 5, 2023). "ChatGPT and AI Hallucination". Lemieux Library at Seattle University. Archived from the original on February 18, 2024. Retrieved June 14, 2023.
^Kabir, Samia; Udo-Imeh, David N.; Kou, Bonan; Zhang, Tianyi (August 10, 2023). "Who Answers It Better? An In-Depth Analysis of ChatGPT and Stack Overflow Answers to Software Engineering Questions".arXiv:2308.02312v3 [cs.SE].
^Chen, Lingjiao; Zaharia, Matei; Zou, James (October 31, 2023). "How Is ChatGPT's Behavior Changing Over Time?". arXiv:2307.09009v3.
"Sfida per Siri e Alexa" [A challenge for Siri and Alexa].Il Foglio (in Italian). March 17, 2023.Archived from the original on March 22, 2023. RetrievedMarch 22, 2023.
Moretti, Marco (March 8, 2023). "Articoli artificiali? No" [Artificial articles? No]. Il Foglio (in Italian). Archived from the original on March 22, 2023. Retrieved March 22, 2023.
A.D.A. (March 9, 2023). "Più umani, grazie" [Be more human, thanks]. Il Foglio (in Italian). Archived from the original on March 22, 2023. Retrieved March 22, 2023.
"Le colpe farlocche dell'"invasione"" [The fake faults of the "invasion"].Il Foglio (in Italian). March 14, 2023.Archived from the original on March 22, 2023. RetrievedMarch 22, 2023.
Liu, Hilary Y.; Alessandri Bonetti, Mario; De Lorenzi, Francesca; Gimbel, Michael L.; Nguyen, Vu T.; Egro, Francesco M. (February 2024). "Consulting the Digital Doctor: Google Versus ChatGPT as Sources of Information on Breast Implant-Associated Anaplastic Large Cell Lymphoma and Breast Implant Illness". Aesthetic Plastic Surgery. 48 (4): 590–607. doi:10.1007/s00266-023-03713-4. ISSN 1432-5241. PMID 37903939.
Chang, Kent K.; Cramer, Mackenzie; Soni, Sandeep; Bamman, David (April 28, 2023). "Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4".arXiv:2305.00118 [cs.CL].
Ouyang, Long; et al. (March 4, 2022). "Training language models to follow instructions with human feedback".arXiv:2203.02155 [cs.CL].