Statement on AI Risk

From Wikipedia, the free encyclopedia
Open letter about extinction risk from AI

On May 30, 2023, hundreds of artificial intelligence experts and other notable figures signed the following short Statement on AI Risk:[1][2][3]

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

At release time, the signatories included over 100 professors of AI, among them the two most-cited computer scientists and Turing laureates Geoffrey Hinton and Yoshua Bengio, as well as the scientific and executive leaders of several major AI companies, and experts in pandemics, climate, nuclear disarmament, philosophy, social sciences, and other fields.[1][2][4] Media coverage emphasized the signatures from several tech leaders;[2] other newspapers subsequently raised concerns that the statement could be motivated by public relations or regulatory capture.[5] The statement was released shortly after an open letter calling for a pause on AI experiments.

The statement is hosted on the website of the AI research and advocacy non-profit Center for AI Safety. The idea for such a one-sentence statement was originally proposed by David Scott Krueger, then a professor at the University of Cambridge.[6] It was released with an accompanying text which states that it is still difficult to speak up about extreme risks of AI and that the statement aims to overcome this obstacle.[1] The center's CEO Dan Hendrycks stated that "systemic bias, misinformation, malicious use, cyberattacks, and weaponization" are all examples of "important and urgent risks from AI... not just the risk of extinction" and added, "[s]ocieties can manage multiple risks at once; it's not 'either/or' but 'yes/and.'"[7][4]

Among the well-known signatories are: Sam Altman, Bill Gates, Peter Singer, Daniel Dennett, Sam Harris, Grimes, Stuart J. Russell, Jaan Tallinn, Vitalik Buterin, David Chalmers, Ray Kurzweil, Max Tegmark, Lex Fridman, Martin Rees, Demis Hassabis, Dawn Song, Ted Lieu, Ilya Sutskever, Martin Hellman, Bill McKibben, Angela Kane, Audrey Tang, David Silver, Andrew Barto, Mira Murati, Pattie Maes, Eric Horvitz, Peter Norvig, Joseph Sifakis, Erik Brynjolfsson, Ian Goodfellow, Baburam Bhattarai, Kersti Kaljulaid, Rusty Schweickart, Nicholas Fairfax, David Haussler, Peter Railton, Bart Selman, Dustin Moskovitz, Scott Aaronson, Bruce Schneier, Martha Minow, Andrew Revkin, Rob Pike, Jacob Tsimerman, Ramy Youssef, James Pennebaker, and Ronald C. Arkin.[8]

Reception

The Prime Minister of the United Kingdom, Rishi Sunak, retweeted the statement and wrote, "The government is looking very carefully at this."[9] When asked about the statement, the White House Press Secretary, Karine Jean-Pierre, commented that AI "is one of the most powerful technologies that we see currently in our time. But in order to seize the opportunities it presents, we must first mitigate its risks."[10]

Skeptics of the letter point out that AI has failed to reach certain predicted milestones, such as those around self-driving cars.[4] Skeptics also note that the letter's signatories continued to fund AI research,[3] and that companies would benefit from a public perception that AI algorithms are far more advanced than is currently possible.[3] Skeptics, including from Human Rights Watch, have argued that scientists should focus on the known risks of AI instead of being distracted by speculative future risks.[11][3] Timnit Gebru has criticized elevating the risk of AI agency, especially by the "same people who have poured billions of dollars into these companies."[11] Émile P. Torres and Gebru both argue against the statement, suggesting it may be motivated by TESCREAL ideologies.[12]

References

  1. "Statement on AI Risk". Center for AI Safety. 2023-05-30.
  2. Roose, Kevin (2023-05-30). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn". The New York Times. ISSN 0362-4331. Retrieved 2023-05-30.
  3. Gregg, Aaron; Lima-Strong, Cristiano; Vynck, Gerrit De (2023-05-31). "AI poses 'risk of extinction' on par with nukes, tech leaders say". Washington Post. ISSN 0190-8286. Retrieved 2024-07-03.
  4. Vincent, James (2023-05-30). "Top AI researchers and CEOs warn against 'risk of extinction' in 22-word statement". The Verge. Retrieved 2024-07-03.
  5. Wong, Matteo (2023-06-02). "AI Doomerism Is a Decoy". The Atlantic. Retrieved 2023-12-26.
  6. "Frequently Asked Questions". Center for AI Safety. 2025-09-15.
  7. Lomas, Natasha (2023-05-30). "OpenAI's Altman and other AI giants back warning of advanced AI as 'extinction' risk". TechCrunch. Retrieved 2023-05-30.
  8. "Statement on AI Risk | CAIS". www.safe.ai. Retrieved 2024-03-18.
  9. "Artificial intelligence warning over human extinction – all you need to know". The Independent. 2023-05-31. Retrieved 2023-06-03.
  10. "President Biden warns artificial intelligence could 'overtake human thinking'". USA Today. Retrieved 2023-06-03.
  11. Ryan-Mosley, Tate (2023-06-12). "It's time to talk about the real AI risks". MIT Technology Review. Retrieved 2024-07-03.
  12. Torres, Émile P. (2023-06-11). "AI and the threat of "human extinction": What are the tech-bros worried about? It's not you and me". Salon. Retrieved 2024-07-03.