On March 23, 2016, Microsoft released Tay,[2] a chatbot designed to mimic the language patterns of a 19-year-old American girl and learn from interactions with Twitter users.[3] Soon after its launch, Tay began posting racist, sexist, and otherwise inflammatory tweets after Twitter users deliberately taught it offensive phrases and exploited its "repeat after me" capability.[4] Examples of controversial outputs included Holocaust denial and calls for genocide using racial slurs.[4] Within 16 hours of its release, Microsoft suspended the Twitter account, deleted the offensive tweets, and stated that Tay had suffered from a "coordinated attack by a subset of people" that "exploited a vulnerability."[4][5][6][7] Tay was briefly and accidentally re-released on March 30 during testing, after which it was permanently shut down.[8][9] Microsoft CEO Satya Nadella later stated that Tay "has had a great influence on how Microsoft is approaching AI" and taught the company the importance of taking accountability.[10]
On January 14, 2022, voice actor Troy Baker announced a partnership with Voiceverse, a blockchain-based company that marketed proprietary AI voice cloning technology as non-fungible tokens (NFTs), triggering immediate backlash over environmental concerns, fears that AI could displace human voice actors, and concerns about fraud.[11][12][13] Later that same day, the pseudonymous creator of 15.ai—a free, non-commercial AI voice synthesis research project—revealed through server logs that Voiceverse had used 15.ai to generate voice samples, pitch-shifted them to make them unrecognizable, and falsely marketed them as their own proprietary technology before selling them as NFTs;[14][15] the developer of 15.ai had previously stated that they had no interest in incorporating NFTs into their work.[15] Voiceverse confessed within an hour and stated that their marketing team had used 15.ai without attribution while rushing to create a demo.[14][15] News publications and AI watchdog groups universally characterized the incident as theft stemming from generative artificial intelligence.[14][15][16][17]
On August 29, 2022, Jason Michael Allen won first place in the "emerging artist" (non-professional) division of the "Digital Arts/Digitally-Manipulated Photography" category of the Colorado State Fair's fine arts competition with Théâtre D'opéra Spatial, a digital artwork created using the AI image generator Midjourney, Adobe Photoshop, and AI upscaling tools, becoming one of the first images made using generative AI to win such a prize.[18][19][20][21] Allen disclosed his use of Midjourney when submitting; although the judges did not know it was an AI tool, they stated they would have awarded him first place regardless.[19][22] While there was little contention about the image at the fair, reactions to the win on social media were negative.[22][23] On September 5, 2023, the United States Copyright Office ruled that the work was not eligible for copyright protection, as the human creative input was de minimis and copyright rules "exclude works produced by non-humans."[20][24]
Timnit Gebru has criticized both statements on AI risk as needlessly focusing on speculative risks rather than on known ones.
On March 22, 2023, the Future of Life Institute published an open letter calling on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control.[25] The letter, published a week after the release of OpenAI's GPT-4, asserted that current large language models were "becoming human-competitive at general tasks".[25] It received more than 30,000 signatures, including academic AI researchers and industry CEOs such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, and Yuval Noah Harari.[25][26][27] The letter was criticized for diverting attention from more immediate societal risks such as algorithmic biases,[28] with Timnit Gebru and others arguing that it amplified "some futuristic, dystopian sci-fi scenario" instead of current problems with AI.[29]
On November 17, 2023, OpenAI's board of directors ousted co-founder and chief executive Sam Altman, stating that "the board no longer has confidence in his ability to continue leading OpenAI."[37] The removal was precipitated by employee concerns about his handling of artificial intelligence safety[38][39] and allegations of abusive behavior.[40] Altman was reinstated on November 22 after pressure from employees and investors, including a letter signed by 745 of OpenAI's 770 employees threatening mass resignations if the board did not resign.[41][42][43] The removal and subsequent reinstatement caused widespread reactions, including Microsoft's stock falling nearly three percent following the initial announcement and then rising over two percent to an all-time high after Altman was hired to lead a Microsoft AI research team before his reinstatement.[44][45] The incident also prompted investigations from the Competition and Markets Authority and the Federal Trade Commission into Microsoft's relationship with OpenAI.[46][47]
Taylor Swift deepfake pornography controversy (2024)
In late January 2024, sexually explicit AI-generated deepfake images of Taylor Swift proliferated on X, with one post reported to have been seen over 47 million times before its removal.[48][49] Disinformation research firm Graphika traced the images back to 4chan,[50] while members of a Telegram group had discussed ways to circumvent censorship safeguards of AI image generators to create pornographic images of celebrities.[51] The images prompted responses from anti-sexual assault advocacy groups, US politicians, and Swifties.[52][53] Microsoft CEO Satya Nadella called the incident "alarming and terrible."[54] X briefly blocked searches of Swift's name on January 27, 2024,[55] and Microsoft enhanced its text-to-image model safeguards to prevent future abuse.[56] On January 30, US senators Dick Durbin, Lindsey Graham, Amy Klobuchar, and Josh Hawley introduced a bipartisan bill that would allow victims to sue individuals who produced or possessed "digital forgeries" with intent to distribute, or those who received the material knowing it was made without consent.[57]
Google Gemini's response when asked to "generate a picture of a U.S. senator from the 1800s" in February 2024, as shown by The Verge[58]
In February 2024, social media users reported that Google's Gemini chatbot was generating images that featured people of color and women in historically inaccurate contexts—such as Vikings, Nazi soldiers, and the Founding Fathers—and refusing prompts to generate images of white people. The images were derided on social media, including by conservatives who cited them as evidence of Google's "wokeness",[58][59][60] and criticized by Elon Musk, who denounced Google's products as biased and racist.[61][62][63] In response, Google paused Gemini's ability to generate images of people.[64][65][66] Google executive Prabhakar Raghavan released a statement explaining that Gemini had "overcompensate[d]" in its efforts to strive for diversity and acknowledging that the images were "embarrassing and wrong".[67][68][69] Google CEO Sundar Pichai called the incident offensive and unacceptable in an internal memo, promising structural and technical changes,[70][71][72] and several employees in Google's trust and safety team were laid off days later.[73][74] The market reacted negatively, with Google's stock falling by 4.4 percent,[75] and Pichai faced growing calls to resign.[76][77][78] The image generation feature was relaunched in late August 2024, powered by its new Imagen 3 model.[79][80]