By Brian Tomasik
Following is a quick summary of my beliefs on various propositions and my moral values. The topics include philosophy, physics, artificial intelligence, and animal welfare. A few of the questions are drawn from The PhilPapers Surveys. Sometimes I link to essays that justify the beliefs further. Even if I haven't taken the time to defend a belief, I think sharing my subjective probability for it is an efficient way to communicate information. What a person believes about a proposition may be more informative than any single object-level argument, because a probability assessment aggregates many facts, intuitions, and heuristics together.
While a few of the probabilities in this piece are the results of careful thought, most of my numbers are just quick intuitive guesses about somewhat vague propositions. I use numbers only because they're somewhat more specific than words like "probable" or "unlikely". My numbers shouldn't be taken to imply any degree of precision or any underlying methodology more complex than "Hm, this probability seems about right to express my current intuitions...".
Pablo Stafforini has written his own version of this piece.
Note: By "causes net suffering" in this piece, I mean "causes more suffering than is prevented", and the opposite for "prevents net suffering". For example, an action that causes 1 unit of suffering and prevents 4 other units of suffering prevents 3 units of net suffering. Idon't mean the net balance of happiness minus suffering. Net suffering is the relevant quantity for a negative-utilitarian evaluation of an action; for negative utilitarians, an action is good if it prevents net suffering.
| Belief | Probability |
|---|---|
| "Aesthetic value: objective or subjective?" Answer: subjective | 99.5% |
| "Abstract objects: Platonism or nominalism?" Answer: nominalism | 99% |
| Compatibilism on free will | 98% |
| Moral anti-realism | 98% |
| Artificial general intelligence (AGI) is possible in principle | 98% |
| Humans will eventually build human-level AGI conditional on no other major intervening disruptions to civilization as we know it | 80% |
| Human-inspired colonization of space will cause more suffering than it prevents if it happens | 72% |
| Earth will eventually be controlled by a singleton of some sort | 72% |
| Soft AGI takeoff | 70% |
| Eternalism on philosophy of time | 70% |
| Type-A physicalism regarding consciousness | 69% |
| Rare Earth explanation of the Fermi Paradox | 67% |
| By at least 10 years before human-level AGI is built, debate about AGI risk will be as mainstream as global warming is in 2015 | 67% |
| A government will build the first human-level AGI, assuming humans build one at all | 62% |
| By 2100, if biological humans still exist, most of them will regard factory farming as a great evil of the past | 60% |
| The Foundational Research Institute reduces net suffering in the far future | 58% |
| The Machine Intelligence Research Institute reduces net suffering in the far future | 53% |
| Electing more liberal politicians reduces net suffering in the far future | 52% |
| Human-controlled AGI in expectation would result in less suffering than uncontrolled | 52% |
| Climate change will cause more suffering than it prevents | 50% |
| The effective-altruism movement, all things considered, reduces rather than increases total suffering in the far future (not counting happiness)^a | 50% |
| Cognitive closure of some philosophical problems | 50% |
| Faster technological innovationincreases net suffering in the far future | 50% |
| Crop cultivation prevents net suffering | 50% |
| Conditional on a government building the first human-level AGI, it will be the USA (rather than China, etc.) | 50% |
| Earth-originating intelligence will colonize the entire galaxy (ignoring anthropic arguments) | 50% |
| Faster economic growth will cause net suffering in the far future | 47% |
| Modal realism | 40% |
| Many-worlds interpretation of quantum mechanics (or close kin) | 40%^b |
| At bottom, physics is discrete/digital rather than continuous | 40% |
| The universe/multiverse is finite | 37% |
| Whole brain emulation will come before de novo AGI, assuming both are possible to build | 30%^c |
| A full world government will develop before human-level AGI | 25% |
| Wild-animal suffering will be a mainstream moral issue by 2100, conditional on biological humans still existing | 15% |
| Humans will go extinct within millions of years for some reason other than AGI | 5%^d |
| A design very close to CEV will be implemented in humanity's AGI, conditional on AGI being built (excluding other value-learning approaches and other machine-ethics proposals) | 0.5% |
While I'm a moral anti-realist, I find the Parliamentary Model of moral uncertainty helpful for thinking about different and incompatible values that I hold. One might also think in terms of the fraction of one's resources (time, money, social capital) that each of one's values controls. A significant portion of my moral parliament as revealed by my actual choices is selfish, even if I theoretically would prefer to be perfectly altruistic. Among the altruistic portion of my parliament, what I value roughly breaks down as follows:
| Value system | Fraction of moral parliament |
|---|---|
| Negative utilitarianism focused on extreme suffering | 90% |
| Ethical pluralism for other values (happiness, love, friendship, knowledge, accomplishment, diversity, paperclips, and other things that agents care about) | 10% |
However, as is true for most people, my morality can at times be squishy, and I may have random whims in a particular direction on a particular issue. I also may have a few deontological side-constraints on top of consequentialism.
While I think high-level moral goals should be based on utilitarianism, my intuition is that once you've made a solemn promise or entered into a trusting friendship/relationship with another person, you should roughly act deontologically ("ends don't justify the means") in that context. On an emotional level, this deontological intuition feels like a "pure" moral value, although it's also supported by sophisticated consequentialist considerations. Nobody is perfect, but if you regularly and intentionally violate people's trust, you might acquire a reputation as untrustworthy and lose out on the benefits of trusting relationships in the long term.
| The kind of suffering that matters most is... | Fraction of moral parliament |
|---|---|
| hedonic experience | 70% |
| preference frustration | 30% |
This section discusses how much I care about suffering at different levels of abstraction.
My negative-utilitarian intuitions lean toward a "threshold" view according to which small, everyday pains don't really matter, but extreme pains (e.g., burning in a brazen bull or being disemboweled by a predator while conscious) are awful and can't be outweighed by any amount of pleasure, although they can be compared among themselves. I don't know how I would answer the "torture vs. dust specks" dilemma, but this issue doesn't matter as much for practical situations.
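One way to make this threshold intuition concrete is a lexical comparison between outcomes, sketched below in Python; the two-component representation of an outcome is my own illustrative assumption, not something proposed above.

```python
def outcome_a_is_better(a, b):
    """Compare outcomes represented as (extreme_suffering, ordinary_welfare) pairs.

    Extreme suffering is compared first, so no amount of ordinary welfare
    (including pleasure) can outweigh it; outcomes tied on extreme suffering
    are ranked by ordinary welfare. This mirrors a lexical "threshold" view.
    """
    if a[0] != b[0]:
        return a[0] < b[0]  # less extreme suffering always wins
    return a[1] > b[1]      # otherwise, more ordinary welfare wins

# No amount of pleasure outweighs one unit of extreme suffering:
print(outcome_a_is_better((0, 0), (1, 10**9)))  # True
```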
I assess the degree of consciousness of an agent roughly in terms of analytic functionalism, i.e., with a focus on what the system does rather than other factors that don't relate to its idealized computation, such as what it's made of or how quickly it runs. That said, I reserve the right to care about non-functional parts of a system to some degree. For instance, I might give greater moral weight to a huge computer implementing a given subroutine than to a tiny computer implementing the exact same subroutine.
I feel that the moral badness of suffering by an animal with N neurons is roughly proportional to N^(2/5), based on a crude interpolation of how much I care about different types of animals. By this measure, and based on Wikipedia's neuron counts, a human's suffering with some organism-relative intensity would be about 11 times as bad as a rat's suffering with comparable organism-relative intensity and about 240 times as bad as a fruit fly's suffering. Note that this doesn't lead to anthropocentrism, though. It's probably much easier to prevent 11 rats or 240 fruit flies from suffering terribly than to prevent the same for one human. For instance, consider that in some buildings, over the course of a summer, dozens of rats may be killed, while hundreds of fruit flies may be crushed, drowned, or poisoned.
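To make the arithmetic explicit, here is a small Python sketch. The neuron counts are rough round figures of the kind listed on Wikipedia and should be treated as assumptions, so the ratios come out near, but not exactly at, the numbers quoted above.

```python
# Approximate neuron counts (rough round numbers; assumptions for illustration).
NEURONS = {"human": 86e9, "rat": 200e6, "fruit fly": 100e3}

def moral_weight(n_neurons, exponent=2 / 5):
    """Moral weight of suffering as a function of neuron count N: N**exponent."""
    return n_neurons ** exponent

for animal in ("rat", "fruit fly"):
    ratio = moral_weight(NEURONS["human"]) / moral_weight(NEURONS[animal])
    print(f"Human suffering counts about {ratio:.0f}x as much as {animal} suffering")
# With these inputs: about 11x for the rat and about 236x (roughly 240x) for the fruit fly.
```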
My intuitions on the exact exponent for N change a lot over time. Sometimes I use N^(1/2), N^(2/3), or maybe even just N for weighting different animals. Exponents closer to 1 can be motivated by not wanting tiny invertebrates to completely swamp all other animals into oblivion in moral calculations (Shulman 2015). The same goal could also be accomplished with a piecewise function for moral weight as a function of N: for example, one with a small exponent for N within the set of mammals, another small exponent for N within the set of insects, and a big gap between mammals and insects.
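The piecewise idea could look something like the sketch below; the within-group exponent, the size of the mammal/insect gap, and the example neuron counts are placeholder numbers chosen purely for illustration.

```python
def piecewise_moral_weight(n_neurons, group):
    """Piecewise moral weight: a small exponent *within* each group, with a
    large fixed gap *between* groups.

    All constants here are placeholders for illustration, not values endorsed
    in the text.
    """
    within_group_exponent = 0.2  # gentle scaling within mammals or within insects
    group_baseline = {"mammal": 1000.0, "insect": 1.0}  # big gap between the groups
    return group_baseline[group] * n_neurons ** within_group_exponent

# A mouse (~70 million neurons) vs. an ant (~250,000 neurons), using rough counts:
print(piecewise_moral_weight(70e6, "mammal") / piecewise_moral_weight(250e3, "insect"))
```

Here the baseline factor supplies the large gap between mammals and insects, while the small exponent keeps within-group differences modest.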
^a Maximizing ideologies like classical utilitarianism, which are more common among effective altruists than among other social groups, seem more willing than common-sense morality to take big moral risks and incur big moral costs for the sake of creating as many blissful experiences as possible. Such ideologies may also aim to maximize creation of new universes if doing so is possible. And so on. Of course, some uncontrolled-AI outcomes would also lead to fanatical maximizing goals, some of which might cause more suffering than classical utilitarianism would, and effective altruism may help reduce the risk of such AI outcomes.
^c The difficulty of modeling nervous systems raises my estimate of the difficulty of AGI in general, both de novo and emulation. But humans seem to do an okay job at developing useful software systems without needing to reverse-engineer the astoundingly complicated morass that is biology, which suggests that de novo AGI will probably be easier. As far as I'm aware, most software innovations have come from people making up their own ideas (whether through theoretical insight or trial and error), and fewer discoveries have relied crucially on biological inspiration. Tyler (2009) puts it this way:
Engineers did not learn how to fly by scanning and copying birds. Nature may have provided a proof of the concept, and inspiration - but it didn't provide the details the engineers actually used. A bird is not much like a propeller-driven aircraft, a jet aircraft or a helicopter.
The argument applies across many domains. Water filters are not scanned kidneys. The Hoover dam is not a scan of a beaver dam. Solar panels are not much like leaves. Humans do not tunnel much like moles do. Submarines do not closely resemble fish. From this perspective, it would be very strange if machine intelligence was much like human intelligence.
The artificial neural networks now prominent in machine learning were, of course, originally inspired by neuroscience [...]. While neuroscience has continued to play a role [...], many of the major developments were guided by insights into the mathematics of efficient optimization, rather than neuroscientific findings [...].