Max Tegmark, professor at MIT, one of the founders and current president of the Future of Life Institute
FLI's stated mission is to steer transformative technology towards benefiting life and away from large-scale risks.[2] FLI's philosophy focuses on the potential risk to humanity from the development of human-level or superintelligent artificial general intelligence (AGI), but the institute also works to mitigate risks from biotechnology, nuclear weapons and global warming.[3]
Starting in 2017, FLI has offered an annual "Future of Life Award", with the first awardee being Vasili Arkhipov. The same year, FLI released Slaughterbots, a short arms-control advocacy film. FLI released a sequel in 2021.[7]
In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper.[9][10] In response, Tegmark said that the institute had only become aware of Nya Dagbladet's positions during due diligence processes a few months after the grant was initially offered, and that the grant had been immediately revoked.[10]
In March 2023, FLI published a letter titled "Pause Giant AI Experiments: An Open Letter". This called on major AI developers to agree on a verifiable six-month pause of any systems "more powerful than GPT-4" and to use that time to institute a framework for ensuring safety; or, failing that, for governments to step in with a moratorium. The letter said: "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control".[11] The letter referred to the possibility of "a profound change in the history of life on Earth" as well as potential risks of AI-generated propaganda, loss of jobs, human obsolescence, and society-wide loss of control.[12][13]
Prominent signatories of the letter included Elon Musk, Steve Wozniak, Evan Sharp, Chris Larsen, and Gary Marcus; AI lab CEOs Connor Leahy and Emad Mostaque; politician Andrew Yang; deep-learning researcher Yoshua Bengio; and Yuval Noah Harari.[14] Marcus stated "the letter isn't perfect, but the spirit is right." Mostaque stated, "I don't think a six month pause is the best idea or agree with everything but there are some interesting things in that letter." In contrast, Bengio explicitly endorsed the six-month pause in a press conference.[15][16] Musk predicted that "Leading AGI developers will not heed this warning, but at least it was said."[17] Some signatories, including Musk, said they were motivated by fears of existential risk from artificial general intelligence.[18] Some of the other signatories, such as Marcus, instead said they signed out of concern about risks such as AI-generated propaganda.[19]
In October 2025, another letter, the "Statement on Superintelligence", was published.[22] It called for a prohibition on the development of superintelligence, not to be lifted before there is "broad scientific consensus that it will be done safely and controllably" and "strong public buy-in". FLI director Anthony Aguirre explained that "time is running out", expecting that the technology could arrive in as little as one to two years and counting on "widespread realization among society at all its levels" to stop it. He added that "whether it's soon or it takes a while, after we develop superintelligence, the machines are going to be in charge" and "that is not an experiment that we want to just run toward".[23]
Polling released alongside the letter showed that 64% of Americans agreed that superintelligence "shouldn't be developed until it's provably safe and controllable", and only 5% believed it should be developed as quickly as possible.[23]
FLI has actively contributed to policymaking on AI. In October 2023, for example, U.S. Senate majority leader Chuck Schumer invited FLI to share its perspective on AI regulation with selected senators.[25] In Europe, FLI successfully advocated for the inclusion of more general AI systems, such as GPT-4, in the EU's Artificial Intelligence Act.[26]
FLI speaking on autonomous weapons at the United Nations headquarters in Geneva, 2021.
The FLI research program started in 2015 with an initial donation of $10 million from Elon Musk.[30][31][32] In this initial round, a total of $7 million was awarded to 37 research projects.[33] In July 2021, FLI announced that it would launch a new $25 million grant program with funding from the Russian–Canadian programmer Vitalik Buterin.[34]
In 2014, the Future of Life Institute held its opening event at MIT: a panel discussion on "The Future of Technology: Benefits and Risks", moderated by Alan Alda.[35][36] The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek, and Skype co-founder Jaan Tallinn.[37][38]
Since 2015, FLI has organised biennial conferences with the stated purpose of bringing together AI researchers from academia and industry. As of April 2023, the following conferences have taken place:
"The Future of AI: Opportunities and Challenges" conference in Puerto Rico (2015). The stated goal was to identify promising research directions that could help maximize the future benefits of AI.[39] At the conference, FLI circulated anopen letter on AI safety which was subsequently signed byStephen Hawking, Elon Musk, and many artificial intelligence researchers.[40]
The Beneficial AI conference in Asilomar, California (2017),[41] a private gathering of what The New York Times called "heavy hitters of A.I." (including Yann LeCun, Elon Musk, and Nick Bostrom).[42] The institute released a set of principles for responsible AI development that came out of the discussion at the conference, signed by Yoshua Bengio, Yann LeCun, and many other AI researchers.[43] These principles may have influenced the regulation of artificial intelligence and subsequent initiatives, such as the OECD Principles on Artificial Intelligence.[44]
The beneficial AGI conference in Puerto Rico (2019).[45] The stated focus of the meeting was answering long-term questions with the goal of ensuring that artificial general intelligence is beneficial to humanity.[46]
"The Fight to Define When AI is 'High-Risk'" inWired.
"Lethal Autonomous Weapons exist; They Must Be Banned" inIEEE Spectrum.
"United States and Allies Protest U.N. Talks to Ban Nuclear Weapons" inThe New York Times.
"Is Artificial Intelligence a Threat?" inThe Chronicle of Higher Education, including interviews with FLI founders Max Tegmark, Jaan Tallinn and Viktoriya Krakovna.
"But What Would the End of Humanity Mean for Me?", an interview withMax Tegmark on the ideas behind FLI inThe Atlantic.
^ Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (2021-03-03). "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Virtual Event, Canada: ACM. pp. 610–623. doi:10.1145/3442188.3445922. ISBN 978-1-4503-8309-7.
^ Government of Costa Rica (February 24, 2023). "FLI address" (PDF). Latin American and the Caribbean conference on the social and humanitarian impact of autonomous weapons.
^ Metz, Cade (June 9, 2018). "Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots". The New York Times. Archived from the original on February 15, 2021. Retrieved June 10, 2018. The private gathering at the Asilomar Hotel was organized by the Future of Life Institute, a think tank built to discuss the existential risks of A.I. and other technologies.