| Formation | 2005 |
|---|---|
| Dissolved | 16 April 2024 |
| Purpose | Research big-picture questions about humanity and its prospects |
| Headquarters | Oxford, England |
| Director | Nick Bostrom |
| Parent organization | Faculty of Philosophy, University of Oxford |
| Website | futureofhumanityinstitute.org |
The Future of Humanity Institute (FHI) was an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School.[1] Its director was philosopher Nick Bostrom, and its research staff included futurist Anders Sandberg and Giving What We Can founder Toby Ord.[2]
The institute shared an office and worked closely with the Centre for Effective Altruism, and its stated objective was to focus research where it could make the greatest positive difference for humanity in the long term.[3][4] It engaged in a mix of academic and outreach activities, seeking to promote informed discussion and public engagement in government, businesses, universities, and other organizations. The centre's largest research funders included Amlin, Elon Musk, the European Research Council, the Future of Life Institute, and the Leverhulme Trust.[5]
On 16 April 2024, the University of Oxford closed the institute, which said it had "faced increasing administrative headwinds within the Faculty of Philosophy".[6][7]
Nick Bostrom established the institute in November 2005 as part of the Oxford Martin School, then known as the James Martin 21st Century School.[1] Between 2008 and 2010, FHI hosted the Global Catastrophic Risks conference, wrote 22 academic journal articles, and published 34 chapters in academic volumes. FHI researchers gave policy advice at the World Economic Forum, to the private and non-profit sectors (such as the MacArthur Foundation and the World Health Organization), and to governmental bodies in Sweden, Singapore, Belgium, the United Kingdom, and the United States.
Bostrom and bioethicist Julian Savulescu also published the book Human Enhancement in March 2009.[8] In later years, FHI focused on the dangers of advanced artificial intelligence (AI). In 2014, its researchers published several books on AI risk, including Stuart Armstrong's Smarter Than Us and Bostrom's Superintelligence: Paths, Dangers, Strategies.[9][10]
In 2018, Open Philanthropy recommended a grant of up to approximately £13.4 million to FHI over three years, with a large portion conditional on successful hiring.[11]
The largest topic FHI explored was global catastrophic risk, and in particular existential risk. In a 2002 paper, Bostrom defined an "existential risk" as one "where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential".[12] This includes scenarios where humanity is not directly harmed but fails to colonize space and make use of the observable universe's available resources in humanly valuable projects, as discussed in Bostrom's 2003 paper "Astronomical Waste: The Opportunity Cost of Delayed Technological Development".[13]
Bostrom and Milan Ćirković's 2008 book Global Catastrophic Risks collects essays on a variety of such risks, both natural and anthropogenic. Possible catastrophic risks from nature include super-volcanism, impact events, and energetic astronomical events such as gamma-ray bursts, cosmic rays, solar flares, and supernovae. These dangers are characterized as relatively small and relatively well understood, though pandemics may be exceptions because they are more common and dovetail with technological trends.[14][4]
Synthetic pandemics via weaponized biological agents are given more attention by FHI. Technological outcomes the institute is particularly interested in include anthropogenic climate change, nuclear warfare and nuclear terrorism, molecular nanotechnology, and artificial general intelligence. In expecting the largest risks to stem from future technologies, and from advanced artificial intelligence in particular, FHI agrees with other existential risk reduction organizations, such as the Centre for the Study of Existential Risk and the Machine Intelligence Research Institute.[15][16] FHI researchers have also studied the impact of technological progress on social and institutional risks, such as totalitarianism, automation-driven unemployment, and information hazards.[17]
In 2020, FHI Senior Research Fellow Toby Ord published his book The Precipice: Existential Risk and the Future of Humanity, in which he argues that safeguarding humanity's future is among the most important moral issues of our time.[18][19]
FHI devotes much of its attention to exotic threats that have been little explored by other organizations, and to methodological considerations that inform existential risk reduction and forecasting. The institute has particularly emphasized anthropic reasoning in its research, as an under-explored area with general epistemological implications.
Anthropic arguments FHI has studied include the doomsday argument, which claims that humanity is likely to go extinct soon because it is unlikely that one is observing a point in human history that is extremely early. Instead, present-day humans are likely to be near the middle of the distribution of humans that will ever live.[14] Bostrom has also popularized the simulation argument.
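A simplified Gott-style version of the underlying self-sampling calculation can illustrate the reasoning; the figures below are assumed round numbers for illustration, not taken from FHI's publications. If one's birth rank $n$ is treated as a uniform random draw from the $N$ humans who will ever be born, then

$$P\!\left(\frac{n}{N} > 0.05\right) = 0.95 \quad\Longrightarrow\quad N < 20\,n \text{ at 95\% confidence},$$

so with roughly $n \approx 10^{11}$ humans born to date, the bound would place the total number of humans ever born below about $2 \times 10^{12}$.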
A recurring theme in FHI's research is the Fermi paradox, the surprising absence of observable alien civilizations. Robin Hanson has argued that, to account for the paradox, there must be a "Great Filter" preventing space colonization. That filter may lie in the past, if intelligence is much rarer than current biology would predict, or it may lie in the future, if existential risks are even larger than is currently recognized.
Closely linked to FHI's work on risk assessment, astronomical waste, and the dangers of future technologies is its work on the promise and risks of human enhancement. The modifications in question may be biological, digital, or sociological, and an emphasis is placed on the most radical hypothesized changes, rather than on the likeliest short-term innovations. FHI's bioethics research focuses on the potential consequences of gene therapy, life extension, brain implants and brain–computer interfaces, and mind uploading.[20]
FHI's focus has been on methods for assessing and enhancing human intelligence and rationality, as a way of shaping the speed and direction of technological and social progress. FHI's work on human irrationality, as exemplified in cognitive heuristics and biases, includes an ongoing collaboration with Amlin to study the systemic risk arising from biases in modeling.[21][22]