Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. The premise is that an all-powerful artificial intelligence from the future could retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being. It is named after the member of the rationalist community LessWrong who first publicly described it, though he did not originate it or the underlying ideas. The basilisk resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and as a result accept particular singularitarian ideas or financially support their development. Despite widespread incredulity, this argument is taken quite seriously by some people, primarily some denizens of LessWrong. While neither LessWrong nor its founder Eliezer Yudkowsky advocate the basilisk as true, they do advocate almost all of the premises that add up to it. Roko's posited solution to this quandary is to buy a lottery ticket, because you'll win in some quantum branch.