The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.
In 2000, Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence with funding from Brian and Sabine Atkins, with the purpose of accelerating the development of artificial intelligence (AI).[1][2][3] However, Yudkowsky grew concerned that AI systems developed in the future could become superintelligent and pose risks to humanity,[1] and in 2005 the institute moved to Silicon Valley and began to focus on ways to identify and manage those risks, which were at the time largely ignored by scientists in the field.[2]
Starting in 2006, the Institute organized the Singularity Summit to discuss the future of AI, including its risks, initially in cooperation with Stanford University and with funding from Peter Thiel. The San Francisco Chronicle described the first conference as a "Bay Area coming-out party for the tech-inspired philosophy called transhumanism".[4][5] In 2011, its offices were four apartments in downtown Berkeley.[6] In December 2012, the institute sold its name, web domain, and the Singularity Summit to Singularity University,[7] and in the following month took the name "Machine Intelligence Research Institute".[8]
In 2014 and 2015, public and scientific interest in the risks of AI grew, increasing donations to fund research at MIRI and similar organizations.[3][9]: 327
In 2019, Open Philanthropy recommended a general-support grant of approximately $2.1 million over two years to MIRI.[10] In April 2020, Open Philanthropy supplemented this with a $7.7 million grant over two years.[11][12]
Nate Soares presenting an overview of the AI alignment problem at Google in 2016
MIRI's approach to identifying and managing the risks of AI, led by Yudkowsky, primarily addresses how to design friendly AI: both the initial design of AI systems and the creation of mechanisms to ensure that evolving AI systems remain friendly.[3][14][15]
MIRI researchers advocate early safety work as a precautionary measure.[16] However, they have expressed skepticism about the views of singularity advocates like Ray Kurzweil that superintelligence is "just around the corner".[14] MIRI has funded forecasting work through an initiative called AI Impacts, which studies historical instances of discontinuous technological change and has developed new measures of the relative computational power of humans and computer hardware.[17]
MIRI aligns itself with the principles and objectives of the effective altruism movement.[18]
Soares, Nate; Levinstein, Benjamin A. (2017). "Cheating Death in Damascus" (PDF). Formal Epistemology Workshop. Retrieved 28 July 2018.
Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer; Armstrong, Stuart (2015). "Corrigibility". AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications. Archived from the original on 15 January 2016. Retrieved 16 October 2015.