Artificial intelligence controversies

From Wikipedia, the free encyclopedia

There have been many debates on the societal effects of artificial intelligence (AI), particularly in the late 2010s and 2020s, beginning with the accelerated period of development known as the AI boom. Advocates of AI have emphasized its potential to solve complex problems and improve the quality of life of humans. Detractors have argued that AI presents dangers and challenges involving ethics, plagiarism and theft, fraud, safety and alignment, environmental impacts, unemployment, misinformation, artificial superintelligence and existential risks.[1]

Pre-2020


Microsoft Tay chatbot (2016)

Main article: Tay (chatbot)

On March 23, 2016, Microsoft released Tay,[2] a chatbot designed to mimic the language patterns of a 19-year-old American girl and learn from interactions with Twitter users.[3] Soon after its launch, Tay began posting racist, sexist, and otherwise inflammatory tweets after Twitter users deliberately taught it offensive phrases and exploited its "repeat after me" capability.[4] Examples of controversial outputs included Holocaust denial and calls for genocide using racial slurs.[4] Within 16 hours of its release, Microsoft suspended the Twitter account, deleted the offensive tweets, and stated that Tay had suffered from a "coordinated attack by a subset of people" that "exploited a vulnerability."[4][5][6][7] Tay was briefly and accidentally re-released on March 30 during testing, after which it was permanently shut down.[8][9] Microsoft CEO Satya Nadella later stated that Tay "has had a great influence on how Microsoft is approaching AI" and taught the company the importance of taking accountability.[10]

2020–2024


Voiceverse NFT plagiarism scandal (2022)

Main article: Voiceverse NFT plagiarism scandal
The ponysona of the creator of 15.ai, a non-commercial generative artificial intelligence voice synthesis research project

On January 14, 2022, voice actor Troy Baker announced a partnership with Voiceverse, a blockchain-based company that marketed proprietary AI voice cloning technology as non-fungible tokens (NFTs), triggering immediate backlash over environmental concerns, fears that AI could displace human voice actors, and concerns about fraud.[11][12][13] Later that same day, the pseudonymous creator of 15.ai (a free, non-commercial AI voice synthesis research project) revealed through server logs that Voiceverse had used 15.ai to generate voice samples, pitch-shifted them to make them unrecognizable, and falsely marketed them as their own proprietary technology before selling them as NFTs;[14][15] the developer of 15.ai had previously stated that they had no interest in incorporating NFTs into their work.[15] Voiceverse confessed within an hour and stated that their marketing team had used 15.ai without attribution while rushing to create a demo.[14][15] News publications and AI watchdog groups universally characterized the incident as theft stemming from generative artificial intelligence.[14][15][16][17]

Théâtre D'opéra Spatial (2022)

Main article: Théâtre D'opéra Spatial
Théâtre D'opéra Spatial (Space Opera Theater; 2022), an award-winning image made using generative artificial intelligence

On August 29, 2022, Jason Michael Allen won first place in the "emerging artist" (non-professional) division of the "Digital Arts/Digitally-Manipulated Photography" category of the Colorado State Fair's fine arts competition with Théâtre D'opéra Spatial, a digital artwork created using the AI image generator Midjourney, Adobe Photoshop, and AI upscaling tools, becoming one of the first images made using generative AI to win such a prize.[18][19][20][21] Allen disclosed his use of Midjourney when submitting; although the judges did not know it was an AI tool, they stated they would have awarded him first place regardless.[19][22] While there was little contention about the image at the fair, reactions to the win on social media were negative.[22][23] On September 5, 2023, the United States Copyright Office ruled that the work was not eligible for copyright protection because the human creative input was de minimis and copyright rules "exclude works produced by non-humans."[20][24]

Statements on AI risk (2023)

Main articles: Pause Giant AI Experiments: An Open Letter and Statement on AI Risk
Timnit Gebru has criticized both statements on AI risk as needlessly focusing on speculative risks rather than on known ones.

On March 22, 2023, the Future of Life Institute published an open letter calling on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control.[25] The letter, published a week after the release of OpenAI's GPT-4, asserted that current large language models were "becoming human-competitive at general tasks".[25] It received more than 30,000 signatures, including those of academic AI researchers and industry CEOs such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak and Yuval Noah Harari.[25][26][27] The letter was criticized for diverting attention from more immediate societal risks such as algorithmic biases,[28] with Timnit Gebru and others arguing that it amplified "some futuristic, dystopian sci-fi scenario" instead of current problems with AI.[29]

On May 30, 2023, the Center for AI Safety released a one-sentence statement signed by hundreds of artificial intelligence experts and other notable figures: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."[30][31] Signatories included Turing laureates Geoffrey Hinton and Yoshua Bengio, as well as the scientific and executive leaders of several major AI companies, including Sam Altman, Demis Hassabis, and Bill Gates.[30][31][32] The statement prompted responses from political leaders, including UK Prime Minister Rishi Sunak, who retweeted it with a statement that the UK government would look carefully into it, and White House Press Secretary Karine Jean-Pierre, who commented that AI "is one of the most powerful technologies that we see currently in our time."[33][34] Skeptics, including from Human Rights Watch, argued that scientists should focus on known risks of AI instead of speculative future risks.[35][36]

Removal of Sam Altman from OpenAI (2023)

Main article: Removal of Sam Altman from OpenAI
Sam Altman pictured in 2019

On November 17, 2023, OpenAI's board of directors ousted co-founder and chief executive Sam Altman, stating that "the board no longer has confidence in his ability to continue leading OpenAI."[37] The removal was precipitated by employee concerns about his handling of artificial intelligence safety[38][39] and allegations of abusive behavior.[40] Altman was reinstated on November 22 after pressure from employees and investors, including a letter signed by 745 of OpenAI's 770 employees threatening mass resignations if the board did not resign.[41][42][43] The removal and subsequent reinstatement drew widespread reactions: Microsoft's stock fell nearly three percent after the initial announcement, then rose over two percent to an all-time high after Altman was hired to lead a Microsoft AI research team before his reinstatement.[44][45] The incident also prompted investigations from the Competition and Markets Authority and the Federal Trade Commission into Microsoft's relationship with OpenAI.[46][47]

Taylor Swift deepfake pornography controversy (2024)

Main article: Taylor Swift deepfake pornography controversy

In late January 2024, sexually explicit AI-generated deepfake images of Taylor Swift proliferated on X, with one post reported to have been seen over 47 million times before its removal.[48][49] Disinformation research firm Graphika traced the images back to 4chan,[50] while members of a Telegram group had discussed ways to circumvent the censorship safeguards of AI image generators to create pornographic images of celebrities.[51] The images prompted responses from anti-sexual assault advocacy groups, US politicians, and Swifties.[52][53] Microsoft CEO Satya Nadella called the incident "alarming and terrible."[54] X briefly blocked searches of Swift's name on January 27, 2024,[55] and Microsoft enhanced its text-to-image model safeguards to prevent future abuse.[56] On January 30, US senators Dick Durbin, Lindsey Graham, Amy Klobuchar, and Josh Hawley introduced a bipartisan bill that would allow victims to sue individuals who produced or possessed "digital forgeries" with intent to distribute, or those who received the material knowing it was made without consent.[57]

Google Gemini image generation controversy (2024)

Main article: Google Gemini image generation controversy
Google Gemini's response when asked to "generate a picture of a U.S. senator from the 1800s" in February 2024, as shown by The Verge[58]

In February 2024, social media users reported that Google's Gemini chatbot was generating images that featured people of color and women in historically inaccurate contexts (such as Vikings, Nazi soldiers, and the Founding Fathers) and refusing prompts to generate images of white people. The images were derided on social media, including by conservatives who cited them as evidence of Google's "wokeness",[58][59][60] and criticized by Elon Musk, who denounced Google's products as biased and racist.[61][62][63] In response, Google paused Gemini's ability to generate images of people.[64][65][66] Google executive Prabhakar Raghavan released a statement explaining that Gemini had "overcompensate[d]" in its efforts to strive for diversity and acknowledging that the images were "embarrassing and wrong".[67][68][69] Google CEO Sundar Pichai called the incident offensive and unacceptable in an internal memo, promising structural and technical changes,[70][71][72] and several employees in Google's trust and safety team were laid off days later.[73][74] The market reacted negatively, with Google's stock falling by 4.4 percent,[75] and Pichai faced growing calls to resign.[76][77][78] The image generation feature was relaunched in late August 2024, powered by Google's new Imagen 3 model.[79][80]


References

  1. Müller, Vincent C. (April 30, 2020). "Ethics of Artificial Intelligence and Robotics". Stanford Encyclopedia of Philosophy. Archived from the original on 10 October 2020.
  2. Andrew Griffin (March 23, 2016). "Tay tweets: Microsoft creates bizarre Twitter robot for people to chat to". The Independent. Archived from the original on 2022-05-26.
  3. Rob Price (March 24, 2016). "Microsoft is deleting its AI chatbot's incredibly racist tweets". Business Insider. Archived from the original on January 30, 2019.
  4. Ohlheiser, Abby (25 March 2016). "Trolls turned Tay, Microsoft's fun millennial AI bot, into a genocidal maniac". The Washington Post. Archived from the original on March 25, 2016. Retrieved 25 March 2016.
  5. Hern, Alex (24 March 2016). "Microsoft scrambles to limit PR damage over abusive AI bot Tay". The Guardian. Archived from the original on December 18, 2016. Retrieved December 16, 2016.
  6. Worland, Justin (24 March 2016). "Microsoft Takes Chatbot Offline After It Starts Tweeting Racist Messages". Time. Archived from the original on 25 March 2016. Retrieved 25 March 2016.
  7. Lee, Peter (25 March 2016). "Learning from Tay's introduction". Official Microsoft Blog. Microsoft. Archived from the original on June 30, 2016. Retrieved 29 June 2016.
  8. Graham, Luke (30 March 2016). "Tay, Microsoft's AI program, is back online". CNBC. Archived from the original on September 20, 2017. Retrieved 30 March 2016.
  9. Meyer, David (30 March 2016). "Microsoft's Tay 'AI' Bot Returns, Disastrously". Fortune. Archived from the original on March 30, 2016. Retrieved 30 March 2016.
  10. Moloney, Charlie (29 September 2017). ""We really need to take accountability", Microsoft CEO on the 'Tay' chatbot". Access AI. Archived from the original on 1 October 2017. Retrieved 30 September 2017.
  11. McWhertor, Michael (January 14, 2022). "The Last of Us voice actor promotes 'voice NFTs,' pulls out after criticism". Polygon. Archived from the original on January 14, 2022. Retrieved October 5, 2025.
  12. Brown, Andy (January 14, 2022). "Troy Baker criticised for supporting company that makes AI "voice NFTs"". NME. Archived from the original on October 7, 2025. Retrieved October 5, 2025.
  13. McKean, Kirk (January 14, 2022). "Actor Troy Baker endorses NFT voice AI that aims to replace actors". USA Today. Archived from the original on July 30, 2025. Retrieved October 5, 2025.
  14. Phillips, Tom (January 17, 2022). "Troy Baker-backed NFT firm admits using voice lines taken from another service without permission". Eurogamer. Archived from the original on January 17, 2022. Retrieved October 5, 2025.
  15. Innes, Ruby (January 18, 2022). "Voiceverse Is The Latest NFT Company Caught Using Someone Else's Content". Kotaku Australia. Archived from the original on July 26, 2024. Retrieved October 5, 2025.
  16. "Voiceverse NFT caught plagiarising voice lines from AI service 15.ai". AI, Algorithmic, and Automation Incidents and Controversies. January 2022. Archived from the original on October 4, 2025. Retrieved October 5, 2025.
  17. Sonali, Pednekar (January 14, 2022). Lam, Khoa (ed.). "Incident 277: Voices Created Using Publicly Available App Stolen and Resold as NFT without Attribution". AI Incident Database. Archived from the original on January 13, 2025. Retrieved October 5, 2025.
  18. "Explained: The controversy surrounding the AI-generated artwork that won US competition". Firstpost. 2022-09-05. Retrieved 2023-03-26.
  19. Roose, Kevin (2022-09-02). "An A.I.-Generated Picture Won an Art Prize. Artists Aren't Happy". The New York Times. ISSN 0362-4331. Retrieved 2023-03-26.
  20. Wilson, Suzanne V.; Strong, Maria; Rubel, Jordana; et al. (U.S. Copyright Office Review Board) (5 September 2023). "Théâtre D'opéra Spatial Review Board Decision letter" (PDF). United States Copyright Office.
  21. Todorovic, Milos (2024). "AI and Heritage: A Discussion on Rethinking Heritage in a Digital World". International Journal of Cultural and Social Studies. 10 (1): 4. doi:10.46442/intjcss.1397403. Retrieved 4 July 2024.
  22. Metz, Rachel (2022-09-03). "AI won an art contest, and artists are furious". CNN. Retrieved 2023-03-26.
  23. Harwell, Drew (2022-09-02). "He used AI to win a fine-arts competition. Was it cheating?". The Washington Post. Archived from the original on 24 September 2023. Retrieved 2024-09-04.
  24. Helmore, Edward (24 September 2023). "An old master? No, it's an image AI just knocked up ... and it can't be copyrighted". The Guardian.
  25. "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 2024-07-19.
  26. Metz, Cade; Schmidt, Gregory (2023-03-29). "Elon Musk and Others Call for Pause on A.I., Citing 'Profound Risks to Society'". The New York Times. ISSN 0362-4331. Retrieved 2024-08-20.
  27. Hern, Alex (2023-03-29). "Elon Musk joins call for pause in creation of giant AI 'digital minds'". The Guardian. ISSN 0261-3077. Retrieved 2024-08-20.
  28. Paul, Kari (2023-04-01). "Letter signed by Elon Musk demanding AI research pause sparks controversy". The Guardian. ISSN 0261-3077. Retrieved 2023-04-14.
  29. Anderson, Margo (7 April 2023). "'AI Pause' Open Letter Stokes Fear and Controversy". IEEE Spectrum.
  30. "Statement on AI Risk". Center for AI Safety. May 30, 2023.
  31. Roose, Kevin (2023-05-30). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn". The New York Times. ISSN 0362-4331. Retrieved 2023-05-30.
  32. Vincent, James (2023-05-30). "Top AI researchers and CEOs warn against 'risk of extinction' in 22-word statement". The Verge. Retrieved 2024-07-03.
  33. "Artificial intelligence warning over human extinction – all you need to know". The Independent. 2023-05-31. Retrieved 2023-06-03.
  34. "President Biden warns artificial intelligence could 'overtake human thinking'". USA Today. Retrieved 2023-06-03.
  35. Ryan-Mosley, Tate (12 June 2023). "It's time to talk about the real AI risks". MIT Technology Review. Retrieved 2024-07-03.
  36. Gregg, Aaron; Lima-Strong, Cristiano; Vynck, Gerrit De (2023-05-31). "AI poses 'risk of extinction' on par with nukes, tech leaders say". The Washington Post. ISSN 0190-8286. Retrieved 2024-07-03.
  37. "OpenAI announces leadership transition" (Press release). OpenAI. 2023-11-17. Retrieved 2025-09-23.
  38. Tong, Anna; Dastin, Jeffrey; Hu, Krystal (2023-11-23). "Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say". Reuters. Archived from the original on December 11, 2023. Retrieved 2023-11-23.
  39. "OpenAI made huge breakthrough before ousting Sam Altman, introducing Q*". TweakTown. 2023-11-22. Archived from the original on November 23, 2023. Retrieved 2023-11-23.
  40. Tiku, Nitasha (December 8, 2023). "OpenAI leaders warned of abusive behavior before Sam Altman's ouster". The Washington Post. Archived from the original on December 8, 2023. Retrieved December 8, 2023.
  41. Efrati, Amir (November 19, 2023). "Dozens of Staffers Quit OpenAI After Sutskever Says Altman Won't Return". The Information. Archived from the original on November 20, 2023. Retrieved November 19, 2023.
  42. Fried, Ina. "OpenAI's drama has two likely endings. Microsoft will like either one". Axios. Retrieved November 21, 2023.
  43. Kastrenakes, Jacob; Warren, Tom (November 20, 2023). "Hundreds of OpenAI employees threaten to resign and join Microsoft". The Verge. Archived from the original on November 20, 2023. Retrieved November 20, 2023.
  44. Metz, Rachel (November 17, 2023). "OpenAI Ousts Altman as CEO, Board Says It Lost Confidence In Him". Bloomberg News. Archived from the original on November 20, 2023. Retrieved November 18, 2023.
  45. Hur, Krystal (November 20, 2023). "Microsoft stock hits all-time high after hiring former OpenAI CEO Sam Altman". CNN. Archived from the original on November 20, 2023. Retrieved November 20, 2023.
  46. Gammell, Katharine; Seal, Thomas; Bergen, Mark (December 8, 2023). "Microsoft's OpenAI Ties Face Potential UK Antitrust Probe". Bloomberg News. Archived from the original on December 8, 2023. Retrieved December 8, 2023.
  47. Nylen, Leah (December 8, 2023). "Microsoft's OpenAI Investment Risks Scrutiny from US, UK Regulators". Bloomberg News. Archived from the original on December 8, 2023. Retrieved December 8, 2023.
  48. "Taylor Swift deepfakes spread online, sparking outrage". CBS News. January 26, 2024. Archived from the original on 6 February 2024. Retrieved February 7, 2024.
  49. Wilkes, Emma (February 5, 2024). "Taylor Swift deepfakes spark calls for new legislation". NME. Archived from the original on February 7, 2024. Retrieved February 7, 2024.
  50. Hsu, Tiffany (February 5, 2024). "Fake and Explicit Images of Taylor Swift Started on 4chan, Study Says". The New York Times. Archived from the original on February 9, 2024. Retrieved February 10, 2024.
  51. Belanger, Ashley (January 26, 2024). "Toxic Telegram group produced X's X-rated fake AI Taylor Swift images, report says". Ars Technica. Archived from the original on January 25, 2024. Retrieved February 7, 2024.
  52. Kastrenakes, Jacob (January 26, 2024). "White House calls for legislation to stop Taylor Swift AI fakes". The Verge. Archived from the original on June 10, 2024. Retrieved June 10, 2024.
  53. Rosenzweig-Ziff, Dan. "AI deepfakes of Taylor Swift spread on X. Here's what to know". The Washington Post. Archived from the original on January 30, 2024. Retrieved February 7, 2024.
  54. Yang, Angela (January 30, 2024). "Microsoft CEO Satya Nadella calls for coordination to address AI risk". NBC News. Archived from the original on February 25, 2024. Retrieved February 26, 2024.
  55. Saner, Emine (January 31, 2024). "Inside the Taylor Swift deepfake scandal: 'It's men telling a powerful woman to get back in her box'". The Guardian. ISSN 0261-3077. Archived from the original on February 27, 2024. Retrieved February 26, 2024.
  56. Weatherbed, Jess (January 25, 2024). "Trolls have flooded X with graphic Taylor Swift AI fakes". The Verge. Archived from the original on January 25, 2024. Retrieved February 7, 2024.
  57. Montgomery, Blake (January 31, 2024). "Taylor Swift AI images prompt US bill to tackle nonconsensual, sexual deepfakes". The Guardian. Archived from the original on January 31, 2024. Retrieved January 31, 2024.
  58. Robertson, Adi (February 21, 2024). "Google apologizes for 'missing the mark' after Gemini generated racially diverse Nazis". The Verge. Archived from the original on February 21, 2024. Retrieved February 22, 2024.
  59. Franzen, Carl (February 21, 2024). "Google Gemini's 'wokeness' sparks debate over AI censorship". VentureBeat. Archived from the original on February 22, 2024. Retrieved February 22, 2024.
  60. Titcomb, James (February 21, 2024). "Google chatbot ridiculed for ethnically diverse images of Vikings and knights". The Daily Telegraph. ISSN 0307-1235. Archived from the original on February 22, 2024. Retrieved February 22, 2024.
  61. Tan, Kwan Wei Kevin (February 22, 2024). "Elon Musk is accusing Google of running 'insane racist, anti-civilizational programming' with its AI". Business Insider. Archived from the original on February 23, 2024. Retrieved February 24, 2024.
  62. Hart, Robert (February 23, 2024). "Elon Musk Targets Google Search After Claiming Company AI Is 'Insane' And 'Racist'". Forbes. Archived from the original on February 23, 2024. Retrieved February 24, 2024.
  63. Nolan, Beatrice (February 27, 2024). "Elon Musk is going to war with Google". Business Insider. Archived from the original on February 27, 2024. Retrieved February 28, 2024.
  64. Kharpal, Arjun (February 22, 2024). "Google pauses Gemini AI image generator after it created inaccurate historical pictures". CNBC. Archived from the original on February 22, 2024. Retrieved February 22, 2024.
  65. Milmo, Dan (February 22, 2024). "Google pauses AI-generated images of people after ethnicity criticism". The Guardian. ISSN 0261-3077. Archived from the original on February 22, 2024. Retrieved February 22, 2024.
  66. Duffy, Catherine; Thorbecke, Clare (February 22, 2024). "Google halts AI tool's ability to produce images of people after backlash". CNN Business. Archived from the original on February 22, 2024. Retrieved February 22, 2024.
  67. Roth, Emma (February 23, 2024). "Google explains Gemini's 'embarrassing' AI pictures of diverse Nazis". The Verge. Archived from the original on February 23, 2024. Retrieved February 24, 2024.
  68. Moon, Mariella (February 24, 2024). "Google explains why Gemini's image generation feature overcorrected for diversity". Engadget. Archived from the original on February 24, 2024. Retrieved February 24, 2024.
  69. Ingram, David (February 23, 2024). "Google says Gemini AI glitches were product of effort to address 'traps'". NBC News. Archived from the original on February 23, 2024. Retrieved February 24, 2024.
  70. Love, Julia (February 27, 2024). "Google CEO Blasts 'Unacceptable' Gemini Image Generation Failure". Bloomberg News. Archived from the original on February 28, 2024. Retrieved February 28, 2024.
  71. Allyn, Bobby (March 3, 2024). "Google CEO Pichai says Gemini's AI image results "offended our users"". NPR. Archived from the original on February 28, 2024. Retrieved March 3, 2024.
  72. Albergotti, Reed (February 27, 2024). "Google CEO calls AI tool's controversial responses 'completely unacceptable'". Semafor. Archived from the original on February 28, 2024. Retrieved February 28, 2024.
  73. Alba, Davey; Ghaffary, Shirin (March 1, 2024). "Google Trims Jobs in Trust and Safety While Others Work 'Around the Clock'". Bloomberg News. Archived from the original on March 1, 2024. Retrieved March 3, 2024.
  74. Victor, Jon (March 2, 2024). "Google Lays Off Trust and Safety Staff Amid Gemini Backlash". The Information. Archived from the original on March 3, 2024. Retrieved March 3, 2024.
  75. Wittenstein, Jeran (February 26, 2024). "Alphabet Drops During Renewed Fears About Google's AI Offerings". Bloomberg News. Archived from the original on February 26, 2024. Retrieved February 28, 2024.
  76. Langley, Hugh (March 1, 2024). "There are growing calls for Google CEO Sundar Pichai to step down". Business Insider. Archived from the original on March 1, 2024. Retrieved March 3, 2024.
  77. Kafka, Peter (February 26, 2024). "Google has another 'woke' AI problem with Gemini — and it's going to be hard to fix". Business Insider. Archived from the original on February 26, 2024. Retrieved February 26, 2024.
  78. Cao, Sissi (February 15, 2024). "Criticism is Mounting Over Sundar Pichai's Stumbles as Google CEO". The New York Observer. Archived from the original on April 8, 2024. Retrieved May 21, 2024.
  79. Grant, Nico (August 28, 2024). "Google Says It Fixed Image Generator That Failed to Depict White People". The New York Times. ISSN 0362-4331. Archived from the original on August 28, 2024. Retrieved August 28, 2024.
  80. Cerullo, Megan (August 29, 2024). "Google relaunches Gemini AI tool that lets users create images of people". CBS News. Archived from the original on August 29, 2024. Retrieved September 2, 2024.
