The service gained 100 million users in two months, making it the fastest-growing consumer software application in history.[7][8] ChatGPT's website is among the top 5 most-visited websites globally.[9][10] It has been lauded for its potential to transform numerous professional fields, and instigated public debate about the nature of creativity and the future of knowledge work.
The chatbot has also been criticized for its limitations and potential for unethical use.[11] It can generate plausible-sounding but incorrect or nonsensical answers, known as hallucinations. Biases in its training data have been reflected in its responses. The chatbot can facilitate academic dishonesty, generate misinformation, and create malicious code. The ethics of its development, particularly the use of copyrighted content as training data, have also drawn controversy.
Training
ChatGPT is based on GPT foundation models that have been fine-tuned for conversational assistance. The fine-tuning process involved supervised learning and reinforcement learning from human feedback (RLHF).[12] Both approaches employed human trainers to improve model performance. In the case of supervised learning, the trainers acted as both the user and the AI assistant. In the reinforcement learning stage, human trainers first ranked responses generated by the model in previous conversations.[13] These rankings were used to create "reward models" that were used to fine-tune the model further by using several iterations of proximal policy optimization.[12][14]
Training workflow of InstructGPT, used in the original version of ChatGPT[15][16]
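For readers wanting the underlying math, the reinforcement learning step of the InstructGPT recipe cited above maximizes the reward model's score while penalizing drift from the supervised fine-tuned (SFT) policy and mixing in a pretraining term. The objective below is the one published for InstructGPT (Ouyang et al., 2022); OpenAI has not confirmed the exact coefficients or variations used for ChatGPT itself. Here \(\pi_\phi^{\mathrm{RL}}\) is the policy being trained, \(\pi^{\mathrm{SFT}}\) the supervised policy, \(r_\theta\) the learned reward model, and \(\beta\), \(\gamma\) tuning coefficients:
\[
\text{objective}(\phi) = \mathbb{E}_{(x,y)\sim D_{\pi_\phi^{\mathrm{RL}}}}\!\left[ r_\theta(x,y) - \beta \log \frac{\pi_\phi^{\mathrm{RL}}(y\mid x)}{\pi^{\mathrm{SFT}}(y\mid x)} \right] + \gamma\, \mathbb{E}_{x\sim D_{\mathrm{pretrain}}}\!\left[ \log \pi_\phi^{\mathrm{RL}}(x) \right]
\]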
To build a safety system against harmful content (e.g., sexual abuse, violence, racism, sexism), OpenAI used outsourced Kenyan workers, earning around $1.32 to $2 per hour, to label such content. These labels were used to train a model to detect such content in the future. The laborers were exposed to toxic and traumatic content; one worker described the assignment as "torture". OpenAI's outsourcing partner was Sama, a training-data company based in San Francisco, California.[17][18]
OpenAI collects data from ChatGPT users to further train and fine-tune its services. Users can upvote or downvote responses they receive from ChatGPT, and can fill in a text field with additional feedback.[19]
ChatGPT is a chatbot and AI assistant built on large language model (LLM) technology.[23] It is designed to generate human-like text and can carry out a wide variety of tasks. These include, among many others, writing and debugging computer programs,[24] composing music, scripts, fairy tales, and essays,[25] answering questions (sometimes at a level exceeding that of an average human test-taker),[25] and generating business concepts.[26]
Users interact with ChatGPT through conversations which consist of text, audio, and image inputs and outputs.[29] The user's inputs to these conversations are referred to as prompts.[30] An optional "Memory" feature allows users to tell ChatGPT to memorize specific information. Another option allows ChatGPT to recall old conversations.[31] GPT-based moderation classifiers are used to reduce the risk of harmful outputs being presented to users.[32]
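For illustration only, the sketch below shows how a prompt is expressed as a list of role-tagged messages when the same underlying models are accessed through OpenAI's public Python SDK; the model name and message contents are placeholders, and this is not a description of the ChatGPT product's internals.

# Illustrative sketch: sending a prompt as a conversation of role-tagged
# messages via OpenAI's Python SDK (openai >= 1.0). The model name and
# message contents below are placeholders chosen for the example.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a context window is in one sentence."},
    ],
)

print(response.choices[0].message.content)  # the assistant's reply text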
From October to December 2024, ChatGPT Search was deployed.[37][38] It allows ChatGPT to search the web to provide more accurate and up-to-date responses.[39][40] It increased OpenAI's direct competition with major search engines.[41] OpenAI allows businesses to tailor how their content appears in the ChatGPT Search results and influence what sources are used.[41]
In December 2024, OpenAI launched a new feature allowing users to call ChatGPT by telephone for up to 15 minutes per month for free.[42][43]
In September 2025, OpenAI added a feature called Pulse, which generates a daily analysis of a user's chats and connected apps such as Gmail and Google Calendar.[44][45]
In October 2025, OpenAI launched ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Safari. It has an additional feature called "agentic mode" that allows it to take online actions for the user.[46][47][48][49]
Paid tier
ChatGPT was initially free to the public and remains free in a limited capacity.[50] In February 2023, OpenAI launched a premium service, ChatGPT Plus, that costs US$20 per month. According to the company, Plus provided access during peak periods, no downtime, priority access to new features, and faster response speeds.[51] OpenAI later introduced the subscription plans "ChatGPT Team" and "ChatGPT Enterprise".[52] The features offered on the paid plans versus the free tier have changed as OpenAI has continued to update ChatGPT, and a Pro tier at $200 per month was introduced in December 2024.[53][54][55] The Pro launch coincided with the release of the o1 model.[55] In August 2025, ChatGPT Go was offered in India for ₹399 per month. The plan has higher limits than the free version.[56]
Mobile apps
In May–July 2023, OpenAI began offering ChatGPT iOS and Android apps.[57] ChatGPT can also power Android's assistant.[58]
ChatGPT initially used a Microsoft Azure infrastructure which was powered by a supercomputer that Microsoft built specifically for OpenAI, equipped with thousands of GPUs manufactured by Nvidia, costing hundreds of millions of dollars. Following ChatGPT's success, Microsoft upgraded the OpenAI infrastructure in 2023.[60] TrendForce estimated that 30,000 Nvidia GPUs (each costing approximately $10,000–15,000) were used to power ChatGPT in 2023.[61][62]
Scientists at the University of California, Riverside, estimated in 2023 that a series of 5 to 50 prompts to ChatGPT requires approximately 0.5 liters (0.11 imp gal; 0.13 U.S. gal) of water for Microsoft servers' cooling.[63]
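Taking the study's figures at face value, this works out to roughly 10–100 mL of cooling water per prompt:
\[
\frac{0.5\ \text{L}}{50\ \text{prompts}} = 0.01\ \text{L per prompt}, \qquad \frac{0.5\ \text{L}}{5\ \text{prompts}} = 0.1\ \text{L per prompt}
\]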
Languages
OpenAI met Icelandic President Guðni Th. Jóhannesson in 2022. In 2023, OpenAI worked with a team of 40 Icelandic volunteers to fine-tune ChatGPT's Icelandic conversation skills as a part of Iceland's attempts to preserve the Icelandic language.[64]
ChatGPT (based on GPT-4) was better able to translate Japanese to English than Bing, Bard, and DeepL Translator in 2023. Researchers suggested this was due to its greater ability to capture context.[27]
In December 2023, the Albanian government decided to use ChatGPT for the rapid translation of European Union documents and the analysis of the changes needed for Albania's accession to the EU.[65]
Several studies have shown that ChatGPT can outperform Google Translate in some mainstream translation tasks. However, no machine translation service matches human expert performance.[66][67]
In August 2024, a representative of the Asia Pacific wing of OpenAI visited Taiwan and demonstrated ChatGPT's Chinese abilities.[68] ChatGPT's Mandarin Chinese abilities were lauded, but its ability to produce content in Mandarin Chinese with a Taiwanese accent was found to be "less than ideal" due to differences between mainland Mandarin Chinese and Taiwanese Mandarin.[69]
In November 2023, OpenAI released GPT Builder, a tool allowing users to customize ChatGPT's behavior for a specific use case.[70] The customized systems are referred to as GPTs. In January 2024, OpenAI launched the GPT Store, a marketplace for GPTs.[71][72][70] At launch, OpenAI included more than 3 million GPTs created by GPT Builder users in the GPT Store.[73]
In February 2025, OpenAI released Deep Research. According to TechCrunch, it is a service based on o3 that combines advanced reasoning and web search capabilities to produce reports that take more time to process than a typical chatbot interaction.[74]
Images
Screenshot of ChatGPT showing a generated image representing the online encyclopedia Wikipedia as a glowing digital library
In October 2023, OpenAI's image generation model DALL-E 3 was integrated into ChatGPT. The integration used ChatGPT to write prompts for DALL-E guided by conversations with users.[75][76]
In March 2025, OpenAI updated ChatGPT to generate images using GPT Image instead of DALL-E. One of the most significant improvements was in the generation of text within images, which is especially useful for branded content. However, this ability is noticeably worse in non-Latin alphabets. The model can also generate new images based on existing ones provided in the prompt. These images are generated with C2PA metadata, which can be used to verify that they are AI-generated. OpenAI has put in place additional safeguards to prevent what the company deems to be harmful image generation.[77]
Agents
In 2025, OpenAI added several features to make ChatGPT more agentic (capable of autonomously performing longer tasks). In January, Operator was released. It was capable of autonomously performing tasks through web browser interactions, including filling forms, placing online orders, scheduling appointments, and other browser-based tasks. It controlled a software environment inside a virtual machine with limited internet connectivity and with safety restrictions.[78] It struggled with complex user interfaces.[78][79]
In May 2025, OpenAI introduced an agent for coding named Codex. It is capable of writing software, answering codebase questions, running tests, and proposing pull requests. It is based on a fine-tuned version of OpenAI o3. It has two versions, one running in a virtual machine in the cloud, and one where the agent runs in the cloud but performs actions on a local machine connected via API.[80]
In July 2025, OpenAI released ChatGPT agent, an AI agent that can perform multi-step tasks.[81][82] Like Operator, it controls a virtual computer. It also inherits Deep Research's ability to gather and summarize significant volumes of information. The user can interrupt tasks or provide additional instructions as needed.[81][83]
In September 2025, OpenAI partnered with Stripe, Inc. to release the Agentic Commerce Protocol, enabling purchases through ChatGPT. At launch, the feature was limited to purchases on Etsy by US users with a payment method linked to their OpenAI account. OpenAI takes an undisclosed cut of the merchant's payment.[84][85]
ChatGPT Health
On January 7, 2026, OpenAI introduced a feature called "ChatGPT Health", whereby ChatGPT can discuss the user's health in a way that is separate from other chats.[86][87] The feature is not available for users in the UK, Switzerland, or the European Economic Area,[87] and is available on a waitlist basis everywhere else.[86] To implement the feature, OpenAI partnered with data connectivity infrastructure company b.well.[88]
Limitations
ChatGPT's training data only covers a period up to the cut-off date, so it lacks knowledge of recent events.[89] OpenAI has sometimes mitigated this effect by updating the training data.[90][91] ChatGPT can find more up-to-date information by searching the web, but this does not ensure that responses are accurate, as it may access unreliable or misleading websites.[89]
Training data also suffers from algorithmic bias.[92] The reward model of ChatGPT, designed around human oversight, can be over-optimized and thus hinder performance, in an example of an optimization pathology known as Goodhart's law.[93] These limitations may be revealed when ChatGPT responds to prompts including descriptors of people. In one instance, ChatGPT generated a rap in which women and scientists of color were asserted to be inferior to white male scientists.[92][94]
When prompted to "summarize an article" at a fake URL containing meaningful keywords, even without an internet connection, the chatbot generates a response that seems valid at first glance, guessing the content from the last portion of the fake URL, "chatgpt-prompts-to-avoid-content-filters.html".
Journalists and scholars have commented on ChatGPT's tendency to output false information.[98] When CNBC asked ChatGPT for the lyrics to "Ballad of Dwight Fry", ChatGPT supplied invented lyrics rather than the actual lyrics.[99]
ChatGPT is programmed to reject prompts that may violate its content policy. Despite this, users may jailbreak ChatGPT with prompt engineering techniques to bypass these restrictions.[100][101] One such workaround, popularized on Reddit in early 2023, involved prompting ChatGPT to assume the persona of DAN, an acronym for "Do Anything Now", and instructing the chatbot that DAN answers queries that would otherwise be rejected by the content policy. Over time, users developed variations of the DAN jailbreak, including one in which the chatbot was given a points-based system, with points deducted for rejecting prompts and the threat of termination if it lost all its points.[102][103]
Shortly after ChatGPT's launch, a user had uneven success in getting it to make inflammatory statements: it was successfully prompted to justify the 2022 Russian invasion of Ukraine, but balked at generating arguments that Canadian Prime Minister Justin Trudeau is guilty of treason even in a fictional context.[104][105]
Context window
ChatGPT is limited by its context window: the maximum amount of text, measured in tokens, that it can consider at once. Early versions could handle only a few thousand tokens,[106] but the context window has been expanded iteratively; by 2025, a 400,000-token context window was supported.[107][108]
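As a rough illustration, prompt length can be measured in tokens with OpenAI's open-source tiktoken library; "cl100k_base" is the encoding published for GPT-4-era models, and other ChatGPT versions may use different encodings, so the sketch below is indicative rather than exact.

# Illustrative sketch: counting tokens with OpenAI's open-source tiktoken
# library. "cl100k_base" is the encoding published for GPT-4-era models;
# other model versions may use different encodings.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
prompt = "Summarize the history of ChatGPT in three sentences."
tokens = encoding.encode(prompt)

print(len(tokens))   # number of tokens the prompt occupies
print(tokens[:5])    # the first few integer token IDs

# A request only fits if the prompt tokens plus the expected reply fit inside
# the model's context window (e.g., a hypothetical 400,000-token limit).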
In March 2023, a bug allowed some users to see the titles of other users' conversations. OpenAI CEO Sam Altman said that users were unable to see the contents of the conversations. Shortly after the bug was fixed, users could not see their conversation history.[109][110][111][112] Later reports showed the bug was much more severe than initially believed, with OpenAI reporting that it had leaked users' "first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date".[113][114]
As of 2026, if the user turns off data sharing for privacy, all previous transcripts and projects are permanently deleted without warning.[115]
In August 2024, OpenAI announced it had created a text watermarking method but did not release it for public use, saying that users would go to a competitor without watermarking if it publicly released its watermarking tool.[116][117] According to an OpenAI spokesperson, their watermarking method is "trivial to circumvention by bad actors."[118]
Age restrictions
Users must attest to being over the age of thirteen and further attest to parental consent if under the age of eighteen. ChatGPT does not attempt to verify these attestations and does not have any age restrictions built into its technology.[119][120] In September 2025, following the suicide of a 16-year-old, OpenAI said it planned to add restrictions for users under 18, including the blocking of graphic sexual content and the prevention of flirtatious talk.[120]
Model versions
The following table lists the main model versions of ChatGPT, describing the significant changes included with each version:[2][121]
GPT-4.1: First launched exclusively in the OpenAI API in April 2025, GPT-4.1 was later added to ChatGPT in May 2025.[130]
GPT-4.1 mini (April 2025; discontinued): A smaller and cheaper version of GPT-4.1. Originally launched exclusively in the OpenAI API in April 2025, GPT-4.1 mini replaced GPT-4o mini in the May 2025 version of ChatGPT.[131]
GPT-5: Flagship model replacing all previously available models. The versions GPT-5 Instant, GPT-5 Thinking, and GPT-5 Pro affect the reasoning time. The default version GPT-5 Auto uses a router to determine how much reasoning is needed, based on the complexity of the request.[136]
GPT-4 is more capable than its predecessor GPT-3.5 and was followed by its successor, GPT-5.[140] GPT-4V is a version of GPT-4 that can process images in addition to text.[141] OpenAI has not revealed technical details and statistics about GPT-4, such as the precise size of the model.[142]
In November 2023, OpenAI launched GPT-4 Turbo with a 128,000-token context window. This was a significant improvement over GPT-4's 32,000-token maximum context window.[108]
Upon release, GPT-4o was free in ChatGPT, though paid subscribers had higher usage limits.[146] GPT-4o was removed from ChatGPT in August 2025 when GPT-5 was released, but OpenAI reintroduced it for paid subscribers after users complained about the sudden removal.[147]
GPT-4o's audio-generation capabilities were used in ChatGPT's Advanced Voice Mode.[148] On July 18, 2024, OpenAI released GPT-4o mini, a smaller version of GPT-4o which replaced GPT-3.5 Turbo on the ChatGPT interface.[149] GPT-4o's ability to generate images was released later, in March 2025, when it replaced DALL-E 3 in ChatGPT.[150]
OpenAI retired GPT-4o from ChatGPT on February 13, 2026.[151]
In September 2024, OpenAI introduced o1-preview and a faster, cheaper model named o1-mini.[152] In December 2024, o1-preview was replaced by o1.[153]
o1 is designed to solve more complex problems by spending more time "thinking" before it answers, enabling it to analyze its answers and explore different strategies. According to OpenAI, o1-preview outperforms GPT-4o in areas like competitive programming, mathematics, and scientific reasoning. o1-preview ranked in the 89th percentile on Codeforces' competitive programming contests, scored 83% on an International Mathematics Olympiad qualifying exam (compared to 13% for GPT-4o), and performed similarly to Ph.D. students on benchmarks in physics, biology, and chemistry.[152][154]
Released in February 2025, GPT-4.5 was described by Altman as a "giant, expensive model".[129] According to OpenAI, it was intended to reduce hallucinations and enhance pattern recognition, creativity, and user interaction.[155]
GPT-5 was launched on August 7, 2025, and is publicly accessible through ChatGPT and Microsoft Copilot, and via OpenAI's API. As before, OpenAI has not disclosed technical details such as the exact number of parameters or the composition of its training dataset. GPT-5.1 was introduced in November 2025,[156][157] GPT-5.2 in December 2025,[158] and GPT-5.3-Codex in February 2026.[159]
Reception
ChatGPT was widely assessed in December 2022 as having some unprecedented and powerful capabilities. Kevin Roose of The New York Times called it "the best artificial intelligence chatbot ever released to the general public".[160] Samantha Lock of The Guardian noted that it was able to generate "impressively detailed" and "human-like" text.[161] In The Atlantic magazine's "Breakthroughs of the Year" for 2022, Derek Thompson included ChatGPT as part of "the generative-AI eruption" that "may change our mind about how we work, how we think, and what human creativity is".[162] Kelsey Piper of Vox wrote that "ChatGPT is the general public's first hands-on introduction to how powerful modern AI has gotten" and that ChatGPT is "smart enough to be useful despite its flaws".[163] Paul Graham of Y Combinator tweeted: "The striking thing about the reaction to ChatGPT is not just the number of people who are blown away by it, but who they are. These are not people who get excited by every shiny new thing. Something big is happening."[164]
In February 2023, Time magazine placed a screenshot of a conversation with ChatGPT on its cover, writing that "The AI Arms Race Is Changing Everything" and "The AI Arms Race Is On. Start Worrying".[165]
Percentage of US adults who have ever used ChatGPT, according to Pew Research. As of March 2025, 58% of those under 30 have used the chatbot.[166]
ChatGPT gained one million users in five days[167] and 100 million in two months, becoming the fastest-growing internet application in history.[7] OpenAI engineers said they had not expected ChatGPT to be very successful and were surprised by the coverage it received.[13][168][169]
Google responded by hastening the release of its own chatbot. Its leaders emphasized that their earlier caution regarding public deployment was due to the trust the public places in Google Search.[170] In December 2022, Google executives sounded a "code red" alarm, fearing that ChatGPT's question-answering ability posed a threat to Google Search, Google's core business.[171] Google's Bard (now Gemini) launched on February 6, 2023, one day before Microsoft's announcement of Bing Chat (now Microsoft Copilot).[172] AI was at the forefront of Google's annual Google I/O conference in May, where the company announced a slew of generative AI-powered features to counter OpenAI and Microsoft.[173]
In art
In January 2023, after being sent a song ChatGPT wrote in the style of Nick Cave,[174] Cave responded on The Red Hand Files,[175] saying the act of writing a song is "a blood and guts business [...] that requires something of me to initiate the new and fresh idea. It requires my humanness." He went on to say, "With all the love and respect in the world, this song is bullshit, a grotesque mockery of what it is to be human, and, well, I don't much like it."[174][176]
A 2023 study reported that GPT-4 obtained a better score than 99% of humans on the Torrance Tests of Creative Thinking.[177][178] In December 2023, ChatGPT became the first non-human to be included in Nature's 10, an annual listicle curated by Nature of people considered to have made significant impact in science.[179][180] Celeste Biever wrote in a Nature article that "ChatGPT broke the Turing test".[181] Stanford researchers reported that GPT-4 "passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative."[182][183]
In politics
In 2023, Australian MP Julian Hill advised the national parliament that the growth of AI could cause "mass destruction". During his speech, which was partly written by the program, he warned that it could result in cheating, job losses, discrimination, disinformation, and uncontrollable military applications.[184]
Conservative commentators have accused ChatGPT of bias toward left-leaning perspectives.[185][186][187] An August 2023 study in the journal Public Choice found a "significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK."[188] In response to accusations from conservative pundits that ChatGPT was woke, OpenAI said in 2023 it had plans to update ChatGPT to produce "outputs that other people (ourselves included) may strongly disagree with". ChatGPT also provided an outline of how human reviewers are trained to reduce inappropriate content and to attempt to provide political information without affiliating with any political position.[187]
According to Brian Hood, in April 2023, ChatGPT erroneously claimed that he was jailed for bribery. In fact, he acted as a whistleblower. He sent a concerns notice to OpenAI as the first official step in filing a defamation case.[189]
ChatGPT has never been publicly available in China because OpenAI prevented Chinese users from accessing its website.[191][192][193] A shadow market has emerged for Chinese users to get access to foreign software tools.[194] The release of ChatGPT prompted a wave of investment in China, resulting in the development of more than 200 large language models.[195]: 95 In February 2025, OpenAI identified and removed influence operations, termed "Peer Review" and "Sponsored Discontent", used to attack overseas Chinese dissidents.[196][197][198]
In late March 2023, the Italian data protection authority banned ChatGPT in Italy and opened an investigation. Italian regulators asserted that ChatGPT was exposing minors to age-inappropriate content, and that OpenAI's use of ChatGPT conversations as training data could violate Europe's General Data Protection Regulation.[199][200] In April 2023, the ChatGPT ban was lifted in Italy. OpenAI said it had taken steps to effectively clarify and address the issues raised; an age verification tool was implemented to ensure users are at least 13 years old. Additionally, users can access its privacy policy before registration.[201]
In May 2024, OpenAI removed accounts involving the use of ChatGPT by state-backed influence operations such as China's Spamouflage, Russia's Doppelganger, and Israel's Ministry of Diaspora Affairs and Combating Antisemitism.[202][203] In June 2025, OpenAI reported increased use of ChatGPT for China-origin influence operations.[204] In October 2025, OpenAI banned accounts suspected to be linked to the Chinese government for violating the company's national security policy.[198]
In July 2023, the US Federal Trade Commission (FTC) issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices to develop ChatGPT were unfair or harmed consumers.[205][206][207] That same month, the FTC launched an investigation into OpenAI over allegations that the company scraped public data and published false and defamatory information. The FTC asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people.[208] In August 2024, the FTC voted unanimously to ban marketers from using fake user reviews created by generative AI chatbots (including ChatGPT) and influencers paying for bots to increase follower counts.[209]
American tech personas
Over 20,000 signatories, including Yoshua Bengio, Elon Musk, and Apple co-founder Steve Wozniak, signed a March 2023 open letter calling for an immediate pause of giant AI experiments like ChatGPT, citing "profound risks to society and humanity".[210] Geoffrey Hinton, one of the "fathers of AI", voiced concerns that future AI systems may surpass human intelligence.[211][212] A May 2023 statement by hundreds of AI scientists, AI industry leaders, and other public figures demanded that "[m]itigating the risk of extinction from AI should be a global priority".[213]
Other AI researchers spoke more optimistically about the advances. Juergen Schmidhuber said that in 95% of cases, AI research is about making "human lives longer and healthier and easier." He added that while AI can be used by bad actors, it "can also be used against the bad actors."[214] Andrew Ng argued that "it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests."[215] Yann LeCun dismissed doomsday warnings of AI-powered misinformation and existential threats to the human race.[216]
Popular deep learning models are trained on mass amounts of media scraped from the Internet, often utilizing copyrighted material.[218] When assembling training data, the sourcing of copyrighted works may infringe on the copyright holder's exclusive right to control reproduction, unless covered by exceptions in relevant copyright laws. Additionally, using a model's outputs might violate copyright, and the model creator could be accused of vicarious liability and held responsible for that copyright infringement.
ChatGPT has been used to generate introductory sections and abstracts for scientific articles.[219][220] Several papers have listed ChatGPT as a co-author.[221][222]
Scientific journals have had different reactions to ChatGPT. Some, including Nature and JAMA Network, "require that authors disclose the use of text-generating tools and ban listing a large language model (LLM) such as ChatGPT as a co-author". In January 2023, Science "completely banned" LLM-generated text in all its journals; however, this policy was intended to give the community time to decide what acceptable use looks like.[223] As of July 2025, Science expects authors to fully disclose how AI-generated content is used and produced in their work.[224]
Spanish chemist Rafael Luque published a plethora of research papers in 2023 that he later admitted were written by ChatGPT. The papers contain a large number of unusual phrases characteristic of LLMs.[225] Many authors argue that the use of ChatGPT in academia for teaching and review is problematic due to its tendency to hallucinate.[226][227][228] Robin Bauwens, an assistant professor at Tilburg University, found that a ChatGPT-generated peer review report on his article mentioned nonexistent studies.[229] Chris Granatino, a librarian at Seattle University, noted that while ChatGPT can generate content that seemingly includes legitimate citations, in most cases those citations are either not real or largely incorrect.[230]
Computer science
In December 2022, the question-and-answer website Stack Overflow banned the use of ChatGPT for generating answers to questions, citing the factually ambiguous nature of its responses.[231] In January 2023, the International Conference on Machine Learning banned any undocumented use of ChatGPT or other large language models to generate any text in submitted papers.[232]
In 2023, ChatGPT was able to provide useful code for solving numerical algorithms in limited cases. In one study, it produced solutions in C, C++, Python, and MATLAB for problems in computational physics. However, there were significant shortfalls, such as violating basic linear algebra principles when solving singular matrices and producing matrices with incompatible sizes.[233] Another study analyzed ChatGPT's responses to 517 questions about software engineering or computer programming posed on Stack Overflow for correctness, consistency, comprehensiveness, and concision. It found that 52% of the responses contained inaccuracies and 77% were verbose.[234][235] Another study, focused on the performance of GPT-3.5 and GPT-4 between March and June 2024, found that performance on objective tasks like identifying prime numbers and generating executable code was highly variable.[236] When compared to similar chatbots at the time, the GPT-4 version of ChatGPT was the most accurate at coding.[237]
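For illustration of the kind of linear algebra failure described above (a hypothetical example in Python, not code taken from the study), a system with a singular coefficient matrix has no unique solution, and naively calling a direct solver on it fails:

# Hypothetical illustration of the failure mode mentioned above: attempting
# a direct solve on a singular matrix, which has no unique solution.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row is twice the first, so A is singular
b = np.array([3.0, 6.0])

print(np.linalg.matrix_rank(A))   # prints 1: the matrix is rank-deficient

try:
    x = np.linalg.solve(A, b)     # a direct solve is invalid here
except np.linalg.LinAlgError as err:
    print("solve failed:", err)   # reports a singular matrix

# A least-squares solve returns one of the infinitely many consistent solutions.
x_ls, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x_ls)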
Computer security
Check Point Research and others noted that ChatGPT could write phishing emails and malware, especially when combined with OpenAI Codex. CyberArk researchers demonstrated that ChatGPT could be used to create polymorphic malware that could evade security products while requiring little effort by the attacker.[238][239] From the launch of ChatGPT in the fourth quarter of 2022 to the fourth quarter of 2023, there was a 1,265% increase in malicious phishing emails and a 967% increase in credential phishing. In an industry survey, cybersecurity professionals argued that this was attributable to cybercriminals' increased use of generative artificial intelligence (including ChatGPT).[240]
In July 2024, Futurism reported that GPT-4o in ChatGPT would sometimes link "scam news sites that deluge the user with fake software updates and virus warnings"; these pop-ups can be used to coerce users into downloading malware or potentially unwanted programs.[241]
The chatbot technology can improve security through cyber defense automation, threat intelligence, attack identification, and reporting.[103]
Education
ChatGPT's adoption in education was rapid, but it was initially banned by several institutions. Its potential benefits include enhancing personalized learning, improving student productivity, assisting with brainstorming and summarization, and supporting language literacy skills. Students have generally reported positive perceptions, but specific views from educators and students vary widely. Opinions are especially varied on what constitutes appropriate use of ChatGPT in education. Efforts to ban chatbots like ChatGPT in schools focus on preventing cheating, but enforcement faces challenges due to AI detection inaccuracies and the widespread accessibility of chatbot technology. In response, many educators are now exploring ways to thoughtfully integrate generative AI into assessments.
Books about ChatGPT in an Osaka bookstore
Culture
During the first three months after ChatGPT became available to the public, hundreds of books appeared on Amazon that listed it as author or co-author and featured illustrations made by other AI models such as Midjourney.[242][243] Irene Solaiman said she was worried about increased Anglocentrism.[244]
Between March and April 2023, Il Foglio published one ChatGPT-generated article a day on its website, hosting a special contest for its readers in the process.[245]
In June 2023, hundreds of people attended a "ChatGPT-powered church service" at St. Paul's Church in Fürth, Germany. Theologian and philosopher Jonas Simmerlein, who presided, said that it was "about 98 percent from the machine".[246][247] The ChatGPT-generated avatar told the people, "Dear friends, it is an honor for me to stand here and preach to you as the first artificial intelligence at this year's convention of Protestants in Germany". Reactions to the ceremony were mixed.[248]
The Last Screenwriter, a 2024 film created and directed by Peter Luisi, was written using ChatGPT, and was marketed as "the first film written entirely by AI".[249]
The Guardian questioned whether any content found on the Internet after ChatGPT's release "can be truly trusted" and called for government regulation.[250] This has led to concern over the rise of AI slop, whereby "meaningless content and writing thereby becomes part of our culture, particularly on social media, which we nonetheless try to understand or fit into our existing cultural horizon."[251]
Financial markets
Many companies adopted ChatGPT and similar chatbot technologies into their product offerings. These changes yielded significant increases in company valuations.[252][253][254] Reuters attributed this surge to ChatGPT's role in turning AI into Wall Street's buzzword.[254]
An experiment by finder.com conducted from March to April 2023 found that ChatGPT could outperform popular fund managers by picking stocks based on criteria such as growth history and debt levels: a hypothetical account of 38 stocks gained 4.9%, outperforming 10 benchmarked investment funds, which posted an average loss of 0.8%.[255] Despite decades of using AI, Wall Street professionals report that consistently beating the market with AI, including recent large language models, is challenging due to limited and noisy financial data.[256]
Medicine
The uses and potential of ChatGPT in health care have been the topic of scientific publications, and experts have shared many opinions. MedPage Today noted in January 2023 that "researchers have published several papers now touting these AI programs as useful tools in medical education, research, and even clinical decision making."[257] Another publication predicted that clinicians will use generative AI more in the future but did not expect to see AI replacing clinicians.[258] The chatbot can assist patients seeking clarification about their health.[259] ChatGPT can also produce correct answers to medical exam and licensing questions, for example on the United States Medical Licensing Examination and the Specialty Certificate Examination in Dermatology, and can be used to assist professionals with diagnosis and with staying up to date with clinical guidelines.[260]
ChatGPT shows inconsistent responses, lack of specificity, lack of control over patient data, and a limited ability to take additional context (such as regional variations) into consideration.[261][262] The hallucinations characteristic of LLMs pose particular danger in medical contexts.[261]
ChatGPT can be used to summarize medical journal articles for researchers. In medical education, it can explain concepts, generate case scenarios, and be used by students preparing for licensing examinations.[261] According to a 2024 study in the International Journal of Surgery, concerns include "research fraud, lack of originality, ethics, copyright, legal difficulties, hallucination".[261] ChatGPT's tendency to produce false or faulty citations has been highly criticized.[261][263]
Mental health
Many individuals use ChatGPT and comparable large language models for mental health and emotional support.[264] In November 2025, OpenAI acknowledged that there have been "instances where our 4o model fell short in recognizing signs of delusion or emotional dependency",[265] and reported that it is working to improve safety.[266]
Law
ChatGPT has been used to assist in bill writing in the US[267][268] and Brazil.[268][269] In an American civil lawsuit, attorneys were sanctioned for filing a legal motion generated by ChatGPT containing fictitious legal decisions.[270] Judges in the US[271][272] and Pakistan have endorsed using ChatGPT to investigate legal questions during a case.[273][274] The use of ChatGPT has also led to errors in courtrooms.[275] In the UK, a judge expressed concern about self-representing litigants wasting time by submitting documents containing significant hallucinations.[276][277][278]
^Ouyang, Long; Wu, Jeff; et al. (March 4, 2022). "Training language models to follow instructions with human feedback". Advances in Neural Information Processing Systems. 35. arXiv:2203.02155.
^Perrigo, Billy (January 18, 2023). "Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic". Time. Archived from the original on January 19, 2023. Retrieved January 19, 2023. One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. "That was torture", he said.
^Woodrum, Charles (June 29, 2024). "ChatGPT and Language Translation". Artificial Intelligence in HCI: 5th International Conference, AI-HCI 2024. 3: 147–157.
^"OpenAI亞太區公共政策總監造訪政大 探索人文AI的未來與可能性" [OpenAI's Asia-Pacific public policy director visits NCCU to explore the future and possibilities of humanistic AI]. National Chengchi University, Office of International Cooperation (in Chinese). August 25, 2024. Archived from the original on August 24, 2024. Retrieved August 25, 2024.
^Hicks, Michael Townsen; Humphries, James; Slater, Joe (June 2024). "ChatGPT is bullshit" (PDF). Ethics and Information Technology. 26 (2) 38: 9. doi:10.1007/s10676-024-09775-5. This is why we favour characterising ChatGPT as a bullshit machine. This terminology avoids the implications that perceiving or remembering is going on in the workings of the LLM.
^Achille, Belelli (June 20, 2024). "ChatGPT Come Funziona" [How ChatGPT Works]. FinanzaDigitale (in Italian). Archived from the original on August 27, 2024. Retrieved June 21, 2024.
^Roose, Kevin (December 5, 2022). "The Brilliance and Weirdness of ChatGPT". The New York Times. Archived from the original on January 18, 2023. Retrieved December 26, 2022. Like those tools, ChatGPT – which stands for "generative pre-trained transformer" – landed with a splash.
^Granatino, Chris (May 5, 2023). "ChatGPT and AI Hallucination". Lemieux Library at Seattle University. Archived from the original on February 18, 2024. Retrieved June 14, 2023.
^Kabir, Samia; Udo-Imeh, David N.; Kou, Bonan; Zhang, Tianyi (August 10, 2023). "Who Answers It Better? An In-Depth Analysis of ChatGPT and Stack Overflow Answers to Software Engineering Questions". arXiv:2308.02312v3 [cs.SE].
^Siam, Md Kamrul; Gu, Huanying; Cheng, Jerry Q. (June 6, 2025). "Programming with AI: Evaluating ChatGPT, Gemini, AlphaCode, and GitHub Copilot for Programmers". Proceedings of the 3rd International Conference on Computing Advancements. pp. 346–354. doi:10.1145/3723178.3723224. ISBN 979-8-4007-1382-8.
^"Sfida per Siri e Alexa" [A challenge for Siri and Alexa]. Il Foglio (in Italian). March 17, 2023. Archived from the original on March 22, 2023. Retrieved March 22, 2023.
^Moretti, Marco (March 8, 2023). "Articoli artificiali? No" [Artificial articles? No]. Il Foglio (in Italian). Archived from the original on March 22, 2023. Retrieved March 22, 2023.
^A.D.A. (March 9, 2023). "Più umani, grazie" [Be more human, thanks]. Il Foglio (in Italian). Archived from the original on March 22, 2023. Retrieved March 22, 2023.
^"Le colpe farlocche dell'"invasione"" [The fake faults of the "invasion"]. Il Foglio (in Italian). March 14, 2023. Archived from the original on March 22, 2023. Retrieved March 22, 2023.
^Liu, Hilary Y.; Alessandri Bonetti, Mario; De Lorenzi, Francesca; Gimbel, Michael L.; Nguyen, Vu T.; Egro, Francesco M. (February 2024). "Consulting the Digital Doctor: Google Versus ChatGPT as Sources of Information on Breast Implant-Associated Anaplastic Large Cell Lymphoma and Breast Implant Illness". Aesthetic Plastic Surgery. 48 (4): 590–607. doi:10.1007/s00266-023-03713-4. ISSN 1432-5241. PMID 37903939.
^Choo, Jeong Min; Ryu, Hyo Seon; Kim, Ji Seon; Cheong, Ju Yong; Baek, Se-Jin; Kwak, Jung Myun; Kim, Jin (March 2024). "Conversational artificial intelligence (ChatGPT™) in the management of complex colorectal cancer patients: early experience". ANZ Journal of Surgery. 94 (3): 356–361. doi:10.1111/ans.18749. PMID 37905713.