The theory of morality we can call full rule-consequentialism selects rules solely in terms of the goodness of their consequences and then claims that these rules determine which kinds of acts are morally wrong. George Berkeley was arguably the first rule-consequentialist. He wrote, “In framing the general laws of nature, it is granted we must be entirely guided by the public good of mankind, but not in the ordinary moral actions of our lives. … The rule is framed with respect to the good of mankind; but our practice must be always shaped immediately by the rule” (Berkeley 1712: section 31).
A moral theory is a form of consequentialism if and only if it assesses acts and/or character traits, practices, and institutions solely in terms of the goodness of the consequences. Historically, utilitarianism has been the best-known form of consequentialism. Utilitarianism assesses acts and/or character traits, practices, and institutions solely in terms of overall net benefit. Overall net benefit is often referred to as aggregate well-being or welfare. Aggregate welfare is calculated by counting a benefit or harm to any one individual the same as the same size benefit or harm to any other individual, and then adding all the benefits and harms together to reach an aggregate sum. There is considerable dispute among consequentialists about what the best account of welfare is.
Classical utilitarians (i.e., Jeremy Bentham, J.S. Mill, and Henry Sidgwick) took benefit and harm to be purely a matter of pleasure and pain. The view that welfare is a matter of pleasure minus pain has generally been called hedonism. It has grown in sophistication (Parfit 1984: Appendix I; Sumner 1996; Crisp 2006; de Lazari-Radek and Singer 2014: ch. 9) but remains committed to the thesis that how well someone’s life goes depends entirely on his or her pleasure minus pain, albeit with pleasure and pain being construed very broadly.
Even if pleasures and pains are construed very broadly, hedonism encounters difficulties. The main one is that many (if not all) people care very strongly about things other than their own pleasures and pains. Of course these other things can be important as means to pleasures and to the avoidance of pain. But many people care very strongly about things over and beyond their hedonistic instrumental value. For example, many people want to know the truth about various matters even if this won’t increase their (or anyone else’s) pleasure. Another example is that many people care about achieving things over and beyond the pleasure such achievements might produce. Again, many people care about the welfare of their family and friends in a non-instrumental way. A rival account of these points, especially the last, is that people care about many things other than their own welfare.
On any plausible view of welfare, the satisfaction people can feel when their desires are fulfilled constitutes an addition to their welfare. Likewise, on any plausible view, frustration felt as a result of unfulfilled desires constitutes a reduction in welfare. What is controversial is whether the fulfilment of someone’s desire constitutes a benefit to that person apart from any effect that the fulfilment of the desire has on that person’s felt satisfaction or frustration. Hedonism answers No, claiming that only effects on felt satisfaction or felt frustration matter.
A different theory of welfare answers Yes. This theory holds that the fulfilment of any desire of the agent’s constitutes a benefit to the agent, even if the agent never knows that desire has been fulfilled and even if the agent derives no pleasure from its fulfilment. This theory of human welfare is often referred to as the desire-fulfillment theory of welfare.
Clearly, the desire-fulfillment theory of welfare is broader than hedonism, in that the desire-fulfillment theory accepts that what can constitute a benefit is wider than merely pleasure. But there are reasons for thinking that this broader theory is too broad. For one thing, people can have sensible desires that are simply too disconnected from their own lives to be relevant to their own welfare (Williams 1973a: 262; Overvold 1980, 1982; Parfit 1984: 494). I desire that the starving in far-away countries get food. But the fulfilment of this desire of mine does not benefit me.
For another thing, people can have desires for absurd things for themselves. Suppose I desire to count all the blades of grass in the lawns on this road. If I get satisfaction out of doing this, the felt satisfaction constitutes a benefit to me. But the bare fulfilment of my desire to count all the blades of grass in the lawns on this road does not (Rawls 1971: 432; Parfit 1984: 500; Crisp 1997: 56).
On careful reflection, we might think that the fulfilment of someone’s desire constitutes an addition to that person’s welfare if and only if that desire has one of a certain set of contents. We might think, for example, that the fulfilment of someone’s desire for pleasure, friendship, knowledge, achievement, or autonomy for herself does constitute an addition to her welfare, and that the fulfilment of any desires she might have for things that do not fall into these categories does not directly benefit her (though, again, the pleasure she derives from their satisfaction does). If we think this, it seems we think there is a list of things that constitute anyone’s welfare (Parfit 1984: Appendix I; Brink 1989: 221–36; Griffin 1996: ch. 2; Crisp 1997: ch. 3; Gert 1998: 92–4; Arneson 1999a).
Insofar as the goods to be promoted are parts of welfare, the theory remains utilitarian. There is a lot to be said for utilitarianism. Obviously, how lives go is important. And there is something deeply attractive (if not downright irresistible) in the idea that morality is fundamentally impartial, i.e., the idea that, at the most fundamental level of morality, everyone is equally important — women and men, strong and weak, rich and poor, Blacks, Whites, Hispanics, Asians, etc. And utilitarianism plausibly interprets this equal importance as dictating that in the calculation of overall welfare a benefit or harm to any one individual counts neither more nor less than the same size benefit or harm to any other individual.
The nonutilitarian members of the consequentialist family are theories that assess acts and/or character traits, practices, and institutions solely in terms of resulting good, where good is not restricted to welfare. “Nonutilitarian” here means “not purely utilitarian”, rather than “completely unutilitarian”. When writers describe themselves as consequentialists rather than as utilitarians, they are normally signalling that their fundamental evaluations will be in terms of not only welfare but also some other goods.
What are these other goods? The most common answers have been justice, fairness, and equality.
Justice, according to Plato, is “rendering to each his due” (Republic, Bk. 1). We might suppose that what people are due is a matter of what people are owed, either because they deserve it or because they have a moral right to it. Suppose we plug these ideas into consequentialism. Then we get the theory that things should be assessed in terms of not only how much welfare results but also the extent to which people get what they deserve and the extent to which moral rights are respected.
For consequentialism to take this line, however, is for it to restrict its explanatory ambitions. What a theory simply presupposes, it does not explain. A consequentialist theory that presupposes both that justice is constituted by such-and-such and that justice is one of the things to be promoted does not explain why the components of justice are important. It does not explain what desert is. It does not explain the importance of moral rights, much less try to determine what the contents of these moral rights are. These are matters too important and contentious for a consequentialist theory to leave unexplained or open. If consequentialism is going to refer to justice, desert, and moral rights, it needs to analyze these concepts and justify the role it gives them.
Similar things can be said about fairness. If a consequentialist theory presupposes an account of fairness, and simply stipulates that fairness is to be promoted, then this consequentialist theory is not explaining fairness. But fairness (like justice, desert, and moral rights) is a concept too important for consequentialism not to try to explain.
One way for consequentialists to deal with justice and fairness is to contend that justice and fairness are constituted by conformity with a certain set of justified social practices, and that what justifies these practices is that they generally promote overall welfare and equality. Indeed, the contention might be that what people are due, what people have a moral right to, what justice and fairness require, is conformity to whatever practices promote overall welfare and equality.
Whether equality needs to be included in the formula, however, is very controversial. Many think that a purely utilitarian formula has sufficiently egalitarian implications. They think that, even if the goal is promotion of welfare, not the promotion of welfare-plus-equality, there are some contingent but pervasive facts about human beings that push in the direction of equal distribution of material resources. According to the “law of diminishing marginal utility of material resources”, the amount of benefit a person gets out of a certain unit of material resources is less the more units of that material good the person already has. Suppose I go from having no way of getting around except by foot to having a bicycle, or, though I live in a place where one can get very cold, I go from having no warm coat to having one. I will benefit more from getting that first bicycle or coat than I would if I go from having nine bicycles or coats to having ten.
There are exceptions to the law of diminishing marginal utility. In most of these exceptions, an additional unit of material resource pushes someone over some important threshold. For example, consider the meal or pill or gulp of air that saves someone’s life, or the car whose acquisition pushes the competitive collector into first place. In such cases, the unit that puts the person over the threshold might well be as beneficial to that person as any prior unit was. Still, as a general rule, material resources do have diminishing marginal utility.
To the assumption that material resources have diminishing marginal utility, let us add the assumption that different people generally get roughly the same benefits from the same material resources. Again, there are exceptions. If you live in a freezing climate and I live in a hot climate, then you would benefit much more from a warm coat than I would.
But suppose we live in the same place, which has freezing winters, good paths for riding bicycles, and no public transportation. And suppose you have ten bicycles and ten coats (though you are not vying for some bicycle- or coat-collector prize). Meanwhile, I am so poor that I have none. Then, redistributing one of your bicycles and one of your coats to me will probably harm you less than it will benefit me. This sort of phenomenon pervades societies where resources are unequally distributed. Wherever the phenomenon occurs, a fundamentally impartial morality is under pressure to redistribute resources from the richer to the poorer.
However, there are also contingent but pervasive facts about human beings that pull in favor of practices that have the foreseen consequence of material inequality. First of all, higher levels of overall welfare can require higher levels of productivity (think of the welfare gains resulting from improvements in agricultural productivity). In many areas of the economy, the provision of material rewards for greater productivity seems the most efficient acceptable way of eliciting higher productivity. Some individuals and groups will be more productive than others (especially if there are incentive schemes). So the practice of providing material rewards for greater productivity will result in material inequality.
Thus, on the one hand, the diminishing marginal utility of material resources exerts pressure in favor of more equal distributions of resources. On the other hand, the need to promote productivity exerts pressure in favor of incentive schemes that have the foreseen consequence of material inequality. Utilitarians and most other consequentialists find themselves balancing these opposed pressures.
Note that those pressures concern the distribution of resources. There is a further question about how equally welfare itself should be distributed. Many recent writers have taken utilitarianism to be indifferent about the distribution of welfare. Imagine a choice between an outcome where overall welfare is large but distributed unequally and an outcome where overall welfare is smaller but distributed equally. Utilitarians are taken to favor outcomes with greater overall welfare even if it is also less equally distributed.
To illustrate this, let us take an artificially simple population, divided into just two groups.
Alternative 1

                              Units of welfare    Total welfare
                              per person          per group
  10,000 people in group A          1                  10,000
  100,000 people in group B        10               1,000,000

  Total welfare for both groups, impartially calculated: 1,010,000

Alternative 2

                              Units of welfare    Total welfare
                              per person          per group
  10,000 people in group A          8                  80,000
  100,000 people in group B         9                 900,000

  Total welfare for both groups, impartially calculated: 980,000
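The impartial calculation behind these figures can be sketched in a few lines of code. This is only an illustration of the arithmetic described above (group total = population × per-person welfare, all totals summed with equal weight); the function name and data layout are my own, not anything from the literature.

```python
def impartial_total(groups):
    """Utilitarian aggregate welfare: every person's welfare counts
    equally, so each group contributes population * welfare_per_person.

    `groups` is a list of (population, welfare_per_person) pairs.
    """
    return sum(population * welfare for population, welfare in groups)

# Alternative 1: group A has 10,000 people at 1 unit each;
# group B has 100,000 people at 10 units each.
alternative_1 = [(10_000, 1), (100_000, 10)]
# Alternative 2: group A at 8 units each; group B at 9 units each.
alternative_2 = [(10_000, 8), (100_000, 9)]

print(impartial_total(alternative_1))  # 1010000
print(impartial_total(alternative_2))  # 980000
```

On this impartial accounting, Alternative 1 comes out ahead despite its much greater inequality, which is exactly the feature of utilitarianism under discussion.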
Many people would think Alternative 2 above better than Alternative 1, and might think that the comparison between these alternatives shows that there is always pressure in favor of greater equality of welfare.
As Derek Parfit (1997) in particular has argued, however, we must not be too hasty. Consider the following choice:
Alternative 1

                              Units of welfare    Total welfare
                              per person          per group
  10,000 people in group A          1                  10,000
  100,000 people in group B        10               1,000,000

  Total welfare for both groups, impartially calculated: 1,010,000

Alternative 3

                              Units of welfare    Total welfare
                              per person          per group
  10,000 people in group A          1                  10,000
  100,000 people in group B         1                 100,000

  Total welfare for both groups, impartially calculated: 110,000
Is equality of welfare so important that Alternative 3 is superior to Alternative 1? To take an example of Parfit’s, suppose the only way to make everyone equal with respect to sight is to make everyone totally blind. Is such “levelling down” required by morality? Indeed, is it in any way at all morally desirable?
If we think the answer is No, then we might think that equality of welfare as such is not really an ideal (cf. Temkin 1993). Losses to the better off are justified only where this benefits the worse off. What we had thought of as pressure in favor of equality of welfare was instead pressure in favor of levelling up. We might say that additions to welfare matter more the worse off the person is whose welfare is affected. This view has come to be called prioritarianism (Parfit 1997; Arneson 1999b). It has tremendous intuitive appeal.
For a simplistic example of how prioritarianism might work, suppose the welfare of the worst off counts five times as much as the welfare of the better off. Then Alternative 1 from the tables above comes out at:
\((1 \times 5 \times 10,000) + (10 \times 100,000)\),
which comes to 1,050,000 total units of welfare. Again with the welfare of the worst off counting five times as much, Alternative 2 comes out at:
\((8 \times 5 \times 10,000) + (9 \times 100,000)\),
which comes to 1,300,000 total units of welfare. This accords with the common reaction that Alternative 2 is morally superior to Alternative 1.
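The weighted calculation above can be sketched as code. This is a minimal illustration of the simplistic two-group case only, assuming (as the text does) a single multiplier of 5 for the worst-off group; the function name and the choice of multiplier are illustrative, not a canonical prioritarian formula.

```python
def prioritarian_total(groups, worst_off_weight=5):
    """Weighted aggregate welfare: the group whose members are worst off
    has its welfare multiplied by `worst_off_weight`; other groups count
    at face value.

    `groups` is a list of (population, welfare_per_person) pairs.
    """
    lowest_welfare = min(welfare for _, welfare in groups)
    total = 0
    for population, welfare in groups:
        weight = worst_off_weight if welfare == lowest_welfare else 1
        total += weight * welfare * population
    return total

alternative_1 = [(10_000, 1), (100_000, 10)]
alternative_2 = [(10_000, 8), (100_000, 9)]

print(prioritarian_total(alternative_1))  # 1050000
print(prioritarian_total(alternative_2))  # 1300000
```

With the weighting applied, the ranking of the two alternatives reverses relative to the unweighted impartial calculation, matching the common reaction the text reports.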
Of course in real examples there is never only one division in society. Rather there is a scale from the worst off, to the not quite so badly off, and so on up to the best off. Prioritarianism is committed to variable levels of importance of benefitting people at different places on this scale: the worse off a person is, the greater the importance attached to that person’s gain.
This raises two serious worries about prioritarianism. The first concerns prioritarianism’s difficulty in nonarbitrarily determining how much more importance to give to the welfare of the worse off. For example, should a unit of benefit to the worst off count 10 times more than the same size benefit to the best off and 5 times more than the same size benefit to the averagely well off? Or should the multipliers be 20 and 10, or 4 and 2, or other amounts? The second worry about prioritarianism is whether attaching greater importance to increases in welfare for some than to the same size increases in welfare for others contradicts fundamental impartiality (Hooker 2000: 60–2).
This is not the place to go further into debates between prioritarianism and its critics. So the rest of this article sets aside those debates.
Consequentialists have distinguished three components of their theory: (1) their thesis about what makes acts morally wrong, (2) their thesis about the procedure agents should use to make their moral decisions, and (3) their thesis about the conditions under which moral sanctions such as blame, guilt, and praise are appropriate.
What we might call full rule-consequentialism consists of rule-consequentialist criteria for all three. Thus, full rule-consequentialism claims that an act is morally wrong if and only if it is forbidden by rules justified by their consequences. It also claims that agents should do their moral decision-making in terms of rules justified by their consequences. And it claims that the conditions under which moral sanctions should be applied are determined by rules justified by their consequences.
Full rule-consequentialists may think that there is really only one set of rules about these three different subject matters. Or they may think that there are different sets that in some sense correspond to or complement one another.
Much more important than the distinction between different kinds of full rule-consequentialism is the distinction between full rule-consequentialism and partial rule-consequentialism. Partial rule-consequentialism might take many forms. Let us focus on the most common form. The most common form of partial rule-consequentialism claims that agents should make their moral decisions about what to do by reference to rules justified by their consequences, but does not claim that moral wrongness is determined by rules justified by their consequences. Partial rule-consequentialists typically subscribe to the theory that moral wrongness is determined directly in terms of the consequences of the act compared to the consequences of alternative possible acts. This theory of wrongness is called act-consequentialism.
Distinguishing between full and partial rule-consequentialism clarifies the contrast between act-consequentialism and rule-consequentialism. Act-consequentialism is best conceived of as maintaining merely the following:
Act-consequentialist criterion of wrongness: An act is wrong if and only if it results in less good than would have resulted from some available alternative act.
When confronted with that criterion of moral wrongness, many people naturally assume that the way to decide what to do is to apply the criterion, i.e.,
Act-consequentialist moral decision procedure: On each occasion, an agent should decide what to do by calculating which act would produce the most good.
However, consequentialists nearly never defend this act-consequentialist decision procedure as a general and typical way of making moral decisions (Mill 1861: ch. 2; Sidgwick 1907: 405–6, 413, 489–90; Moore 1903: 162–4; Smart 1956: 346; 1973: 43, 71; Bales 1971: 257–65; Hare 1981; Parfit 1984: 24–9, 31–43; Railton 1984: 140–6, 152–3; Brink 1989: 216–7, 256–62, 274–6; Pettit and Brennan 1986; Pettit 1991, 1994, 1997: 156–61, 2017; de Lazari-Radek and Singer 2014: ch. 10). There are a number of compelling consequentialist reasons why the act-consequentialist decision procedure would be counter-productive.
First, very often the agent does not have detailed information about what the consequences would be of various acts.
Second, obtaining such information would often involve greater costs than are at stake in the decision to be made.
Third, even if the agent had the information needed to make calculations, the agent might make mistakes in the calculations. (This is especially likely when the agent’s natural biases intrude, or when the calculations are complex, or when they have to be made in a hurry.)
Fourth, there are what we might call expectation effects. Imagine a society in which people know that others are naturally biased towards themselves and towards their loved ones but are trying to make their every moral decision by calculating overall good. In such a society, each person might well fear that others will go around breaking promises, stealing, lying, and even assaulting whenever they convinced themselves that such acts would produce the greatest overall good (Woodard 2019: 149). In such a society, people would not feel they could trust one another.
This fourth consideration is more controversial than the first three. For example, Hodgson 1967, Hospers 1972, and Harsanyi 1982 argue that trust would break down. Singer 1972 and Lewis 1972 argue that it would not.
Nevertheless, most philosophers accept that, for all four of the reasons above, using an act-consequentialist decision procedure would not maximize the good. Hence even philosophers who espouse the act-consequentialist criterion of moral wrongness reject the act-consequentialist moral decision procedure. In its place, they typically advocate the following:
Rule-consequentialist decision procedure: At least normally, agents should decide what to do by applying rules whose acceptance will produce the best consequences, rules such as “Don’t harm innocent others”, “Don’t steal or vandalize others’ property”, “Don’t break your promises”, “Don’t lie”, “Pay special attention to the needs of your family and friends”, “Do good for others generally”.
Since act-consequentialists about the criterion of wrongness typically accept this decision procedure, act-consequentialists are in fact partial rule-consequentialists. Often, what writers refer to as indirect consequentialism is this combination of act-consequentialism about wrongness and rule-consequentialism about the appropriate decision procedure.
Standardly, the decision procedure that full rule-consequentialism endorses is the one that it would be best for society to accept. The qualification “standardly” is needed because there are versions of rule-consequentialism that let the rules be relativised to small groups or even individuals (D. Miller 2010; Kahn 2012). And act-consequentialism insists upon the decision procedure it would be best for the individual to accept. So, according to act-consequentialism, since Jack’s and Jill’s capacities and situations may be very different, the best decision procedure for Jack to accept may be different from the best decision procedure for Jill to accept. However, in practice act-consequentialists typically ignore for the most part such differences and endorse the above rule-consequentialist decision procedure (Hare 1981, chs. 2, 3, 8, 9, 11; Levy 2000).
When act-consequentialists endorse the above rule-consequentialist decision procedure, they acknowledge that following this decision procedure does not guarantee that we will do the act with the best consequences. Sometimes, for example, our following a decision procedure that rules out harming an innocent person will prevent us from doing the act that would produce the best consequences. Similarly, there will be some circumstances in which stealing, breaking our promises, etc., would produce the best consequences. Still, our following a decision procedure that generally rules out such acts will in the long run and on the whole probably produce far better consequences than our trying to run consequentialist calculations on an act-by-act basis.
Because act-consequentialists typically agree with a rule-consequentialist decision procedure, whether to classify some philosopher as an act-consequentialist or as a rule-consequentialist can be problematic. For example, G.E. Moore (1903, 1912) is sometimes classified as an act-consequentialist and sometimes as a rule-consequentialist. Like so many others, including his teacher Henry Sidgwick, Moore combined an act-consequentialist criterion of moral wrongness with a rule-consequentialist procedure for deciding what to do. Moore simply went further than most in stressing the danger of departing from the rule-consequentialist decision procedure (see Shaw 2000). Hare (1981) and Pettit (1991, 1994, 1997) have also been especially influential act-consequentialists about what makes acts right or wrong while holding that everyday decision-making should be conducted in terms of familiar rules focused on things other than consequences.
Some writers propose that the purest and most consistent form of consequentialism is the view that absolutely everything should be assessed by its consequences, including not only acts but also rules, motives, the imposition of sanctions, etc. Let us follow Pettit and Smith (2000) in referring to this view as global consequentialism. Kagan (2000) pictures it as multi-dimensional direct consequentialism, in that each thing is assessed directly in terms of whether its own consequences are as good as the consequences of alternatives.
How does this global consequentialism differ from what we have been calling partial rule-consequentialism? What we have been calling partial rule-consequentialism is nothing but the combination of the act-consequentialist criterion of moral wrongness with the rule-consequentialist decision procedure. So defined, partial rule-consequentialism leaves open the question of when moral sanctions are appropriate.
Some partial rule-consequentialists say that agents should be blamed and feel guilty whenever they fail to choose an act that would result in the best consequences. A much more reasonable position for a partial rule-consequentialist to take is that agents should be blamed and feel guilty whenever they choose an act that is forbidden by the rule-consequentialist decision procedure, whether or not that individual act fails to result in the best consequences. Finally, partial rule-consequentialism, as we have defined it, is compatible with the claim that whether agents should be blamed or feel guilty depends not on the wrongness of what they did, nor on whether the recommended procedure for making moral decisions would have led them to choose the act they choose, but instead solely on whether this blame or guilt will do any good. This is precisely the view of sanctions that global consequentialism takes.
One objection to global consequentialism is that simultaneously applying a consequentialist criterion to acts, decision procedures, and the imposition of sanctions leads to apparent paradoxes (Crisp 1992; Streumer 2003; Lang 2004).
Suppose, on the whole and in the long run, the best decision procedure for you to accept is one that leads you to do act x now. But suppose also that in fact the act with the best consequences in this situation is not x but y. So global consequentialism tells you (a) to use the best possible decision procedure but also (b) not to do the act picked out by this decision procedure. That seems paradoxical.
Things get worse when we consider blame and guilt. Suppose you follow the best possible decision procedure but fail to do the act with the best consequences. Are you to be blamed? Should you feel guilty? Global consequentialism claims that you should be blamed if and only if blaming you will produce the best consequences, and that you should feel guilty if and only if this will produce the best consequences. Suppose that for some reason the best consequences would result from blaming you for following the prescribed decision procedure (and thus doing x). But surely it is paradoxical for a moral theory to call for you to be blamed although you followed the moral decision procedure mandated by the theory. Or suppose that for some reason the best consequences would result from blaming you for intentionally choosing the act with the best consequences (y). Again, surely it is paradoxical for a moral theory to call for you to be blamed although you intentionally chose the very act required by the theory.
So one problem with global consequentialism is that it creates potential gaps between the acts it claims are required and the decision procedures it tells agents to use, and between each of these and blamelessness. (For explicit replies to this line of attack, see Driver 2014: 175 and de Lazari-Radek and Singer 2014: 315–16.)
That is not the most familiar problem with global consequentialism. The most familiar problem with it is instead its maximising act-consequentialist criterion of wrongness. According to this maximising criterion, an act is wrong if and only if it fails to result in the greatest good. This criterion judges some acts to be not wrong which certainly seem to be wrong. It also judges some acts that seem not wrong to be wrong.
For example, consider an act of murder that results in slightly more good than any other act would have produced. According to the most familiar, maximising act-consequentialist criterion of wrongness, this act of murder is not wrong. Many other kinds of act such as assaulting, stealing, promise breaking, and lying can be wrong even when doing them would produce slightly more good than not doing them would. Again, the familiar, maximising form of act-consequentialism denies this.
Or consider someone who gives to her child, or keeps for herself, some resource of her own instead of contributing it to help some stranger who would have gained slightly more from that resource. Such an action hardly seems wrong. Yet the maximising act-consequentialist criterion judges it to be wrong. Indeed, imagine how much self-sacrifice an averagely well-off person would have to make before her further actions satisfied the maximising act-consequentialist criterion of wrongness. She would have to give to the point where further sacrifices from her in order to benefit others would harm her more than they would benefit the others. Thus, the maximising act-consequentialist criterion of wrongness is often accused of being unreasonably demanding.
The objections just directed at maximising act-consequentialism could be side-stepped by a version of act-consequentialism that did not require maximising the good. This sort of act-consequentialism is now called satisficing consequentialism. See the entry on consequentialism for more on such a theory.
There are a number of different ways of formulating rule-consequentialism. For example, it can be formulated in terms of the good that actually results from rules or in terms of the rationally expected good of the consequences of rules. It can be formulated in terms of the consequences of compliance with rules or in terms of the wider consequences of acceptance of rules. It can be formulated in terms of the consequences of absolutely everyone’s accepting the rules or in terms of the rules’ acceptance by something less than everyone. Rule-consequentialism can also be formulated in terms of the teaching of a code to everyone in the next generation, in the full realization that the degrees of resulting acceptance will vary (Mulgan 2006, 2017, 2020; D. Miller 2014, 2021; T. Miller 2016, 2021). Rule-consequentialism is more plausible if formulated in some ways than it is if formulated in other ways. This is explained in the following three subsections. Questions of formulation are also relevant in later sections on objections to rule-consequentialism.
As indicated, full rule-consequentialism consists in rule-consequentialist answers to three questions. The first is, what makes acts morally wrong? The second is, what procedure should agents use to make their moral decisions? The third is, what are the conditions under which moral sanctions such as blame, guilt, and praise are appropriate?
As we have seen, the answer that full rule-consequentialists give to the question about decision procedure is much like the answer that other kinds of consequentialist give to that question. So let us focus on the points of contrast, i.e., the other two questions. These two questions — about what makes acts wrong and about when sanctions are appropriate — are more tightly connected than sometimes realized.
Indeed, J.S. Mill, one of the fathers of consequentialism, affirmed their tight connection:
We do not call anything wrong, unless we mean to imply that a person ought to be punished in some way or other for doing it; if not by law, by the opinion of his fellow creatures; if not by opinion, by the reproaches of his own conscience. (1861: ch. 5, para. 14)
Let us assume that Mill took “ought to be punished, at least by one’s own conscience if not by others” to be roughly the same as “blameworthy”. With this assumption in hand, we can interpret Mill as tying wrongness tightly to blameworthiness. In a moment, we can consider what follows if Mill is mistaken that wrongness is tied tightly to blameworthiness. First, let us consider what follows if Mill is correct that wrongness is tied tightly to blameworthiness.
Consider the following argument, whose first premise comes from Mill:
If an act is wrong, it is blameworthy.
Surely, an agent cannot rightly be blamed for accepting and following rules that the agent could not foresee would have sub-optimal consequences. From this, we get our second premise:
If an act is blameworthy, the sub-optimal consequences of rules allowing that act must have been foreseeable.
From these two premises we get the conclusion:
So if an act is wrong, the sub-optimal consequences of rules allowing that act must have been foreseeable.
Of course, the actual consequences of accepting a set of rules may not be the same as the foreseeable consequences of accepting that set. Hence, if full rule-consequentialism claims that an act is wrong if and only if the foreseeable consequences of rules allowing that act are sub-optimal, rule-consequentialism cannot also hold that an act is wrong if and only if the actual consequences of rules allowing that act will be sub-optimal.
Now suppose instead the relation between wrongness and blameworthiness is far looser than Mill suggested (cf. Sorensen 1996). That is, suppose that our criterion of wrongness can be quite different from our criterion of blameworthiness. In that case, we could hold:
Actualist rule-consequentialist criterion of wrongness: An act is wrong if and only if it is forbidden by rules that would actually result in the greatest good.
and
Expectablist rule-consequentialist criterion of blameworthiness: An act is blameworthy if and only if it is forbidden by the rules that would foreseeably result in the greatest good.
Let us replace “foreseeably result in the greatest good” with “result in the greatest expected good”. Here is how the expected good of a set of rules is calculated. Suppose we can identify the value or disvalue of each possible outcome a set of rules might have. Multiply the value of each possible outcome by the probability of that outcome’s occurring. Take all the products of these multiplications and add them together. The resulting number quantifies the expected good of that set of rules.
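The calculation just described can be sketched in a few lines of code. The particular outcome values and probabilities below are invented purely for illustration; nothing in the article fixes them.

```python
# A minimal sketch of the expected-good calculation described above.
# The values and probabilities are invented for illustration; on the
# expectablist view they should be rational, justified estimates.

def expected_good(outcomes):
    """Sum over possible outcomes of (value of outcome) x (probability)."""
    return sum(value * probability for value, probability in outcomes)

# Hypothetical code of rules with three possible outcomes:
# good of 10 with probability 0.5, good of 4 with probability 0.3,
# and harm of -2 with probability 0.2.
code = [(10, 0.5), (4, 0.3), (-2, 0.2)]
print(round(expected_good(code), 2))  # 5.8
```

The ranking of rival codes then simply compares these sums, with the caveats about imprecise probability estimates discussed below.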
Expected good is not to be calculated by employing whatever crazy estimates of probabilities people might assign to possible outcomes. Rather, expected good is calculated by multiplying the value or disvalue of possible outcomes by rational or justified probability estimates.
Different agents have different evidence and thus have different rational and justified probability estimates. Such differences are sometimes exactly what explains disagreements about what changes to an extant moral code would be improvements. In some cases, the cause of such disagreement is that at least one of the parties is not aware of, or has not fully assimilated, evidence that is available. Expectablist rule-consequentialists would likely want to tie rational or justified probability estimates to the evidence that is available at the time, even if some people are not aware of it or have not fully appreciated its implications.
There might be considerable scepticism about how calculations of expected good are even possible (Lenman 2000). Even where such calculations are possible, they will often be quite impressionistic and imprecise. Nevertheless, we can reasonably hope to make at least some informed judgements about the likely consequences of alternative possible rules (Burch-Brown 2014). And we could be guided by such judgements, as legislators often say they are. In contrast, which rules would actually have the very best consequences will normally be inaccessible. Hence, the expectablist rule-consequentialist criterion of blameworthiness is appealing.
Now return to the proposal that, while the criterion of blameworthiness is the expectablist rule-consequentialist one, the correct criterion of moral wrongness is the actualist rule-consequentialist one. This proposal rejects Mill’s move of tying moral wrongness to blameworthiness. There is, however, a very strong objection to this proposal. What is the role and importance of moral wrongness if it is disassociated from blameworthiness?
In order to retain an obvious role and importance for moral wrongness, those committed to the expectablist rule-consequentialist criterion of blameworthiness are likely to endorse:
Simple expectablist rule-consequentialist criterion of moral wrongness: An act is morally wrong if and only if it is forbidden by the rules that would result in the greatest expected good.
Indeed, once we have before us the distinction between (a) the amount of value that actually results and (b) the rationally expected good, the full rule-consequentialist is likely to go for expectablist criteria of moral wrongness, blameworthiness, and decision procedures.
What if, as far as we can tell, no one code has greater expected value than its rivals? We will need to amend our expectablist criteria in order to accommodate this possibility:
Sophisticated expectablist rule-consequentialist criterion of moral wrongness: An act is morally wrong if and only if it is forbidden either by the rules that would result in the greatest expected good, or, if two or more alternative codes of rules are equally best in terms of expected good, by the one of these codes closest to conventional morality.
The argument for using closeness to conventional morality to break ties between otherwise equally promising codes begins with the observation that social change regularly has unexpected consequences. And these unexpected consequences usually seem to be negative. Furthermore, the greater the difference between a new code and the one already conventionally accepted, the greater the scope for unexpected consequences. So, as between two codes we judge to have equally high expected value, we should choose the one closest to the one we already know. (For discussion of the situation where two codes have equally high expected value and seem equally close to conventional morality, see Hooker 2008: 83–4.)
An implication is that people should make changes to the status quo where, but only where, these changes have greater expected value than sticking with the status quo. Rule-consequentialism manifestly has the capacity to recommend change. But it does not favor change for the sake of change.
Rule-consequentialism most definitely does need to be formulated so as to deal with ties in expected value. However, for most of the rest of this article, I will ignore this complication.
There are other important issues of formulation that rule-consequentialists face. One is the issue of whether rule-consequentialism should be formulated in terms of compliance with rules, in terms of acceptance of rules, or in terms of the teaching of rules. Admittedly, the most important consequence of teaching and accepting rules is compliance with them. And early formulations of rule-consequentialism did indeed explicitly mention compliance. For example, they said an act is morally wrong if and only if it is forbidden by rules the compliance with which will maximize the good (or the expected good). (See Austin 1832; Brandt 1959; M. Singer 1955, 1961.)
However, acceptance of a rule can have consequences other than compliance with the rule. As Kagan (2000: 139) writes, “once embedded, rules can have an impact on results that is independent of their impact on acts: it might be, say, that merely thinking about a set of rules reassures people, and so contributes to happiness.” (For more on what we might call these ‘beyond-compliance consequences’ of rules, see Sidgwick 1907: 405–6, 413; Lyons 1965: 140; Williams 1973: 119–20, 122, 129–30; Adams 1976: 470; Scanlon 1998: 203–4; Kagan 1998: 227–34.)
These consequences of acceptance of rules should most definitely be part of a cost-benefit analysis of rules. Formulating rule-consequentialism in terms of the consequences of acceptance allows them to be part of this analysis. Important consequences of communal acceptance of a set of rules include assurance, incentive, and deterrent effects. And consideration of assurance and incentive effects has played a large role in the development of rule-consequentialism (Harsanyi 1977, 1982: 56–61; 1993: 116–18; Brandt 1979: 271–77; 1988: 346ff [1992: 142ff.]; 1996: 126, 144; Johnson 1991, especially chs. 3, 4, 9). Hence, we should not be surprised that rule-consequentialism has gone from being formulated in terms of compliance with rules to being formulated in terms of acceptance of rules.
However, just as we need to move from thinking about the consequences of compliance to thinking about the wider consequences of acceptance, we need to go further. Focusing purely on the consequences of acceptance of rules ignores the “transition” costs for the teachers of teaching those rules, and the costs to the teaching’s recipients of internalizing these rules. And yet these costs can certainly be significant (Brandt 1963: section 4; 1967 [1992: 126]; 1983: 98; 1988: 346–47, 349–50 [1992: 140–143, 144–47]; 1996: 126–28, 145, 148, 152, 223).
Suppose, for example, that, once a fairly simple and relatively undemanding code of rules Code A has been accepted, the expected value of Code A would be \(n\). Suppose a more complicated and demanding alternative Code B would have an expected value of \(n + 5\) once Code B has been accepted. So if we just consider the expected values of acceptance of the two alternative codes, Code B wins.
But now let us factor into our cost/benefit analysis of rival codes the relative costs of teaching the two codes and of getting them internalized by new generations. Since Code A is fairly simple and relatively undemanding, the cost of getting it internalized is −1. Since Code B is more complicated and demanding, the cost of getting it internalized is −7. So if our comparison of the two codes considers the respective costs of getting them internalized, Code A’s expected value is \(n-1\), and Code B’s is \(n+5-7\). Once we include the respective costs of getting the codes internalized, Code A wins.
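The Code A / Code B comparison can be put in a short sketch. The baseline value \(n\) is arbitrary (100 is chosen here only so the numbers are concrete); only the difference between the two codes matters.

```python
# Sketch of the cost/benefit comparison of Code A and Code B above,
# using the article's illustrative figures: internalization costs of
# 1 and 7, and post-acceptance expected values of n and n + 5.

def net_value(acceptance_value, internalization_cost):
    """Expected value of a code once internalization costs are counted."""
    return acceptance_value - internalization_cost

n = 100  # arbitrary baseline; only the comparison between codes matters
code_a = net_value(n, 1)      # n - 1
code_b = net_value(n + 5, 7)  # n + 5 - 7 = n - 2
print(code_a > code_b)  # True: Code A wins once teaching costs count
```

Whatever value \(n\) takes, Code A comes out ahead by 1, which is the point of the example.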
As indicated, the costs of teaching a code successfully, so that the code is very widely internalized, are “transition costs”. But of course such transitions are always to one arrangement from another. The arrangement we are imagining the transition being to is the acceptance of a certain proposed code. The arrangement we are imagining the transition being from is … well, what?
One answer is that the arrangement from which the transition is supposed to be starting is whatever moral code the society happens to accept already. That might seem like the natural answer. However, there is a strong objection to this answer, namely that rule-consequentialism should not let the cost/benefit analysis of a proposed code be influenced by the costs of getting people to give up whatever rules they may have already internalised. This is for two reasons.
Most importantly, rule-consequentialist assessment of codes needs to avoid giving weight directly or indirectly to moral ideas that have their source in other moral theories but not in rule-consequentialism itself. Suppose people in a given society were brought up to believe that women should be subservient to men. Should rule-consequentialist evaluation of a proposed non-sexist code have to count the costs of getting people to give up the sexist rules they have already internalised so as to accept the new, non-sexist ones? Since the sexist rules are unjustifiable, the fact that they were accepted should not be allowed to infect rule-consequentialist assessment.
Another reason for rejecting the answer we are considering is that it threatens to underwrite an unattractive relativism. Different societies may differ considerably in their extant moral beliefs. Thus, a way of assessing proposed codes that counts the costs of converting people already committed to some other code will end up having to countenance different transition costs to get to the same code. For example, the transition costs to a non-racist code are much higher from an already accepted racist code than from an already accepted non-racist one. Formulating rule-consequentialism so that it endorses the same code for 1960s Michigan as for 1960s Mississippi is desirable. (For opposing arguments that rule-consequentialism should be formulated so as to countenance social relativism, see Kahn 2012; D. Miller 2014, 2021; T. Miller 2021.)
A way to avoid ending up with the social relativism just identified is to formulate rule-consequentialism in terms of acceptance by new generations of humans. The proposal might be that we compare the respective “teaching costs” of alternative codes, on the assumption that these codes will be taught to new generations of children, i.e., children who have not already been educated to accept a moral code. We are to imagine the children start off with natural (non-moral) inclinations to be very partial towards themselves and a few others. We should also assume that there is a cognitive cost associated with the learning of each rule.
These are realistic assumptions, with big implications. One is that a cost/benefit analysis of alternative codes of rules would have reason to favor simpler codes over more complex ones. Of course there can also be benefits from having more, or more complicated, rules. Yet there is probably a limit on how complicated or complex a code can be and still have greater expected value than simpler codes, once teaching costs are included.
Another implication concerns prospective rules about making sacrifices to help others. Since children start off focused on their own gratifications, getting them to internalize a kind of impartiality that requires them to make large sacrifices repeatedly for the sake of others would have extremely high costs. There would also, of course, be enormous benefits from the internalization of such a rule — predominantly, benefits to others. Would the benefits be greater than the costs? At least since Sidgwick (1907: 434), many utilitarians have taken for granted that human nature is such that the real possibilities are (1) that human beings care passionately about some and less about each of the rest of humanity or (2) that human beings care weakly but impartially about everyone. In other words, what is not a realistic possibility, according to this view of human nature, is human beings’ caring strongly and impartially about everyone in the world. If this view of human nature is correct, then one enormous cost of successfully making people completely impartial is that doing so would leave them with only weak concerns.
Even if that picture of human nature is not correct, that is, even if making people completely impartial could be achieved without draining them of enthusiasm and passion, the cost of successfully making people care as much about every other individual as they do about themselves would be prohibitive. At some point on the spectrum running from complete partiality to complete impartiality, the costs of pushing and inducing everyone further along the spectrum outweigh the benefits.
A complication worth mentioning comes from the obvious point that moral education is developmental. How many rules are to be taught to children, how complicated these rules are, how demanding the rules are, and how to resolve conflicts among the rules depend on the developmental stage of the children. Hence, there can be conflict between the simpler and less demanding rules taught to the very young and the more elaborate, nuanced, and demanding rules taught at higher developmental stages. Of course, rule-consequentialism is best formulated in terms of the rules featured at the end of this development rather than the ones figuring in earlier stages.
While rule-consequentialist cost/benefit analyses of codes should count the cost of getting those codes internalised by new generations, such analyses should acknowledge that the internalisation will not be achieved in every last person (D. Miller 2021: 130). No matter how good the teaching is, the results will be imperfect. Some people will learn rules that differ to some degree from the ones that were taught, and some of these people will end up with very mistaken views about what is morally required, morally optional, or morally wrong. Others will end up with insufficient moral motivation, and still others with no moral motivation at all (psychopaths). Rule-consequentialism needs to have rules for coping with the inevitably imperfect results of moral teaching.
Such rules will crucially include rules about punishment. From a rule-consequentialist point of view, one point of punishment is to deter certain kinds of act. Another point of punishment is to get undeterred, dangerous people off the streets. Perhaps rule-consequentialism can admit that another point of punishment is to appease the primitive lust for revenge on the part of victims of such acts and their family and friends. Finally, there is the expressive and reinforcing power of rules about punishment.
Some ways of formulating rule-consequentialism make having rules about punishment difficult to explain. One such way of formulating rule-consequentialism is:
An act is morally wrong if and only if it is prohibited by the code of rules the full acceptance of which by absolutely everyone would produce the greatest expected good.
Imagine a world in which absolutely every adult human fully accepts rules forbidding (for example) physical attacks on the innocent, stealing, promise breaking, and lying. Suppose these rules have been internalized so deeply by everyone in this world that there is complete compliance with them. Also assume that, if everyone in this world always complies with these rules, this perfect compliance becomes common knowledge. In this world, there would be little or no need for rules about punishment and thus little or no benefit from having such rules. But there are teaching and internalization costs associated with each rule taught and internalized. So there are teaching and internalization costs associated with the inclusion of any rule about punishment. The combination of costs without benefits is repellent. Therefore, for a world of complete compliance, the version of rule-consequentialism immediately above would not endorse rules about punishment.
We need a form of rule-consequentialism that includes rules for dealing with people who are not committed to the right rules, indeed even for people who are irredeemable. In other words, rule-consequentialism needs to be formulated so as to conceptualize society as containing some people insufficiently committed to the right rules, and even some people not committed to any moral rules. Here is a way of doing so:
An act is wrong if and only if it is prohibited by a code of rules the acceptance of which by the overwhelming majority of people in each new generation would have the greatest expected value.
Note that rule-consequentialism neither endorses nor condones the non-acceptance of the code by those outside the overwhelming majority. On the contrary, rule-consequentialism claims those people are morally mistaken. Indeed, the whole point of formulating rule-consequentialism this way is to make room for rules about how to respond negatively to such people.
Of course, the term “overwhelming majority” is very imprecise. Suppose we remove the imprecision by picking a precise percentage of society, say 90%. Picking any precise percentage has an obvious element of arbitrariness to it. For example, if we pick 90%, why not 89% or 91%?
Perhaps we can argue for a number in the range of 90% as a reasonable compromise between two pressures. On the one hand, the percentage we pick should be close enough to 100% to retain the idea that, ideally, moral rules would be accepted by everyone. On the other hand, the percentage needs to be far enough short of 100% to leave considerable scope for rules about punishment. It seems that 90% is in a defensible range, given the need to balance those considerations. (For dissent from this, see Ridge 2006; for a reply to Ridge, see Hooker and Fletcher 2008. The matter receives further discussion in H. Smith 2010; Tobia 2013, 2018; T. Miller 2014, 2021; Toppinen 2016; Portmore 2017; Yeo 2017; Podgorski 2018; Perl 2021.)
Holly Smith (2010) pointed out that a cost/benefit analysis of the acceptance of any particular code by a positive percentage of the population less than 100% depends on what the rest of the population accepts and is disposed to do. Consider the following contrast. One imagined scenario is that 90% of the population accept one code, and the other 10% accept a very similar code, such that the two codes rarely diverge in practice. A second imagined scenario is that 90% of the population accept one code, and the other 10% accept various codes that frequently and dramatically conflict in practice with the code accepted by the 90%. Conflict resolution and enforcement are less important in the first imagined scenario than in the second. Hence, if rule-consequentialism is formulated in terms of the acceptance of a code by less than 100% of people, it matters what assumptions are made about whatever percentage of the population do not accept this code.
Some theorists propose that formulating rule-consequentialism in terms of the code the teaching of which has the greatest expected value is superior to formulating the theory in terms of either a fixed or variable acceptance rate for codes (Mulgan 2006: 141, 147; 2017: 291; 2020: 12–21; T. Miller 2016, 2021; D. Miller 2021). A “teaching formulation” of rule-consequentialism holds:
An act is morally prohibited, obligatory, or optional if and only if and because it is prohibited, obligatory, or permitted by the code of rules the teaching of which to everyone has at least as much expected value as the teaching of any other code.
Two clarifications are needed immediately. First, “everyone” here needs to be qualified so as not to include people “with significant cognitive or conative deficits” (D. Miller 2021: 10). Second, teaching a code to everyone is not assumed to lead to everyone’s internalization of this code. Hence, even if everyone is taught a certain code, foreseeably there will be some who internalize rules that are more or less different from the rules that were taught. There will also be some whose moral motivation is unreliable or even indiscernible. And there will be some who take themselves to be sceptics about moral rules and even about the rest of morality.
There are some definite advantages of teaching formulations of rule-consequentialism. To be sure, we have to build into our cost/benefit analysis that enough costs are incurred to make the teaching at least partly successful. In other words, we must insist that the teaching be successful enough to get enough people to internalize rules such that a good degree of cooperation and security results. But we do not need to be precise about what percentage of people taught this code remain amoralists, what percentage of people taught this code end up internalizing codes somewhat different from the one being taught, or what percentage end up insufficiently motivated to comply with the moral code they have internalized unless effective enforcement is in place (D. Miller 2021; T. Miller 2021).
Another advantage of teaching formulations is that the idea of teaching a code to everyone connects tightly to the idea that this code should be public knowledge. The idea that a moral code must be suitable for being public knowledge is very appealing (Baier 1958: 195–196; Rawls 1971: 133; Gert 1998: 10–13, 225, 239–240; Hill 2005; Cureton 2015; Pettit 2017: 39, 65, 102). And rule-consequentialists have championed the idea that moral rules are subject to this “publicity condition” (Brandt 1992: 136, 144; Hooker 2010; 2016, forthcoming; Parfit 2011, 2017a; D. Miller 2021: 131–32).
What rules will rule-consequentialism endorse? It will endorse rules prohibiting physically attacking innocent people, taking or harming the property of others, breaking one’s promises, and lying. It will also endorse rules requiring one to pay special attention to the needs of one’s family and friends, but more generally to be willing to help others with their (morally permissible) projects. A society where such rules are prominent in a public code would be likely to have more good in it than one lacking such rules.
The fact that these rules are endorsed by rule-consequentialism makes rule-consequentialism attractive. For, intuitively, these rules seem right. However, other moral theories endorse these rules as well. Most obviously, a familiar kind of moral pluralism contends that these intuitively attractive rules constitute the most basic level of morality, i.e., that there is no deeper moral principle underlying and unifying these rules. Call this view Rossian pluralism (in honor of its champion W.D. Ross (1930, 1939)).
Rule-consequentialism may agree with Rossian pluralism in endorsing rules against physically attacking the innocent, stealing, promise breaking, and rules requiring various kinds of loyalty and more generally doing good for others. But rule-consequentialism goes beyond Rossian pluralism by specifying an underlying unifying principle that provides impartial justification for such rules. Other moral theories try to do this too. Such theories include some forms of Kantianism (Audi 2001, 2004) and some forms of contractualism (Scanlon 1998; Parfit 2011; Levy 2013). In any case, the first way of arguing for rule-consequentialism is to argue that it specifies an underlying principle that provides impartial justification for intuitively plausible moral rules, and that no rival theory does this as well (Urmson 1953; Brandt 1967; Hospers 1972; Hooker 2000). (Attacks on this line of argument for rule-consequentialism include Stratton-Lake 1997; Thomas 2000; D. Miller 2000; Montague 2000; Arneson 2005; Moore 2007; Hills 2010; Levy 2014.)
This first way of arguing for rule-consequentialism might be seen as drawing on the idea that a theory is better justified to us to the extent that it increases coherence within our beliefs (Rawls 1951, 1971: 19–21, 46–51; DePaul 1987; Ebertz 1993; Sayre-McCord 1986, 1996). [See the entry on coherentist theories of epistemic justification.] But the approach might also be seen as moderately foundationalist in that it begins with a set of beliefs (in various moral rules) to which it assigns independent credibility though not infallibility (Audi 1996, 2004; Crisp 2000). [See the entry on foundationalist theories of epistemic justification.] Admittedly, coherence with our moral beliefs does not make a moral theory true, since our moral beliefs might be mistaken. Nevertheless, if a moral theory fails significantly to cohere with our moral beliefs, this undermines the theory’s ability to be justified to us.
Wolf (2016) and Copp (2020) argue that the meta-ethical view that morality is a social practice with a particular function might lead us to rule-consequentialism. However, Hooker (forthcoming) contends that the meta-ethical view that morality is a social practice with a particular function stands in need of justification in terms of whether it coheres with our considered moral principles and more specific considered moral judgements. In other words, that meta-ethical view about the function of morality needs to be judged in terms of whether or not it helps us achieve a reflective equilibrium among our beliefs.
The second way of arguing for rule-consequentialism is very different. It begins with a commitment to consequentialist assessment. With that commitment as a first premise, the point is then made that assessing acts indirectly, e.g., by focusing on the consequences of communal acceptance of rules, will in fact produce better consequences than assessing acts directly in terms of their own consequences (Austin 1832; Brandt 1963, 1979; Harsanyi 1982: 58–60; 1993; Riley 2000). After all, making decisions about what to do is the main point of moral assessment of acts. So if a way of morally assessing acts is likely to lead to bad decisions, or more generally lead to bad consequences, then, from a consequentialist point of view, so much the worse for that way of assessing acts.
Earlier we saw that all consequentialists now accept that assessing each act individually by its expected value is in general a terrible procedure for making moral decisions. Agents should decide how to act by instead appealing to certain rules such as “don’t physically attack others”, “don’t steal”, “don’t break your promises”, “pay special attention to the needs of your family and friends”, and “be generally helpful to others”. And these are the rules that rule-consequentialism endorses. Many consequentialists, however, think this hardly shows that full rule-consequentialism is the best form of consequentialism. Once a distinction is made between, on the one hand, the best procedure for making moral decisions about what to do and, on the other hand, the criteria of moral rightness and wrongness, all consequentialists can admit that we need rule-consequentialism’s rules for our decision procedure. But consequentialists who are not rule-consequentialists contend that such rules play no role in the criterion of moral rightness. Hence these consequentialists reject what this article has called full rule-consequentialism.
However, whether the objection we have just been considering to the second way of arguing for rule-consequentialism is a good objection depends on whether it is legitimate to distinguish between procedures appropriate for making moral decisions and the criteria of moral rightness or wrongness. That matter remains contentious (Hooker 2010; de Lazari-Radek and Singer 2014: ch. 10).
Yet the second way of arguing for rule-consequentialism runs into another and quite different objection. This objection is that the first step in this argument for rule-consequentialism is a commitment to consequentialist assessment. This first step itself needs justification. Why assume that assessing things in a consequentialist way is uniquely justified?
It might be said that consequentialist assessment is justified because promoting the impartial good has an obvious intuitive appeal. But that won’t do, since there are alternatives to consequentialist assessment that also have obvious intuitive appeal, for example, “act on the code that no one could reasonably reject”. What we need is a way of arguing for a moral theory that does not start by begging the question which kind of theory is most plausible.
A third way of arguing for rule-consequentialism is contractualist (Harsanyi 1953, 1955, 1982, 1993; Brandt 1979, 1988, 1996; Scanlon 1982, 1998; Parfit 2011; Levy 2013). Suppose we can specify reasonable conditions under which everyone would choose, or at least would have sufficient reason to choose, the same code of rules. Intuitively, such an idealized agreement would legitimate that code of rules. Now if those rules are the ones whose internalisation would maximise the expected good, contractualism is leading us to rule-consequentialism’s rules.
There are different views about what would be reasonable conditions for choosing among alternative possible moral rules. One view is that everyone’s impartiality would have to be ensured by the imposition of a hypothetical “veil of ignorance” behind which no one knew any specific facts about himself or herself (Harsanyi 1953, 1955). Another view is that we should imagine that people would be choosing a moral code on the basis of (a) full empirical information about the different effects on everyone, (b) normal concerns (self-interested as well as altruistic), and (c) roughly equal bargaining power (Brandt 1979; cf. Gert 1998). Parfit (2011) proposes seeking rules that everyone has (personal or impartial) reason to choose or to will that everyone accept. If impartial reasons are always sufficient even when opposed by personal ones, then everyone has sufficient reason to will that everyone accept the rules whose universal acceptance will have the best consequences impartially considered. Similarly, Levy (2013) supposes that no one could reasonably reject a code of rules that would impose on her burdens that add up to less than the aggregate of burdens that every other code would impose on others. Such arguments suggest the extensional equivalence of contractualism and rule-consequentialism. (For assessment of whether Parfit’s contractualist arguments for rule-consequentialism succeed, see J. Ross 2009; Nebel 2012; Hooker 2014. For similarities between rule-consequentialism and Scanlon’s contractualism, and the difficulty of deciding between these two theories, see Suikkanen 2022.)
An oft-repeated line of objection to rule-consequentialism from the mid 1960s to the mid 1990s was that this theory is fatally impaled on one or the other horn of the following dilemma: either rule-consequentialism collapses into practical equivalence with the simpler act-consequentialism, or rule-consequentialism is incoherent.
Here is why some have thought rule-consequentialism collapses into practical equivalence with act-consequentialism. Consider a rule that rule-consequentialism purports to favor — e.g., "don't steal". Now suppose an agent is in some situation where stealing would produce more good than not stealing. If rule-consequentialism selects rules on the basis of their expected good, rule-consequentialism seems driven to admit that compliance with the rule "don't steal except when … or … or …" is better than compliance with the simpler rule "don't steal". This point generalizes. In other words, for every situation where compliance with some rule would not produce the greatest expected good, rule-consequentialism seems driven to favor instead compliance with some amended rule that does not miss out on producing the greatest expected good in the case at hand. But if rule-consequentialism operates this way, then in practice it will end up requiring the very same acts that act-consequentialism requires.
If rule-consequentialism ends up requiring the very same acts that act-consequentialism requires, then rule-consequentialism is indeed in terrible trouble. Rule-consequentialism is the more complicated of the two theories. This leads to the following objection. What is the point of rule-consequentialism with its infinitely amended rules if we can get the same practical result much more efficiently with the simpler act-consequentialism?
Rule-consequentialists in fact have an excellent reply to the objection that their theory collapses into practical equivalence with act-consequentialism. This reply relies on the point that the best kind of rule-consequentialism ranks systems of rules not in terms of the expected good of complying with them, but in terms of the expected good of their teaching and acceptance. Now if a rule forbidding stealing, for example, has exception clause after exception clause after exception clause tacked on to it, the rule with all these exception clauses will provide too much opportunity for temptation to convince agents that one of the exception clauses applies, when in fact stealing would be advantageous to the agent. And this point about temptation will also undermine other people's confidence that their property won't be stolen. The same is true of most other moral rules: incorporating too many exception clauses could undermine people's assurance that others will behave in certain ways (such as keeping promises and avoiding stealing).
Furthermore, when comparing alternative rules, we must also consider the relative costs of getting them internalised by new generations. Clearly, the costs of getting new generations to learn either an enormous number of rules or hugely complicated rules would be prohibitive. So rule-consequentialism will favor a code of rules without too many rules, and without too much complication within the rules.
There are also costs associated with getting new generations to internalise rules that require one to make enormous sacrifices for others with whom one has no particular connection. Of course, following such demanding rules will produce many benefits, mainly to others. But the costs associated with internalising such rules should be weighed against the benefits of following them. At some level of demandingness, the costs of getting such demanding rules internalised will outweigh the benefits that following them will produce. Hence, a careful cost/benefit analysis of internalising demanding rules will come out against rules' being too demanding.
The code of rules that rule-consequentialism favours, a code comprised of rules that are not too numerous, too complicated, or too demanding, can sometimes lead people to do acts that do not have the greatest expected value. For example, following the simpler rule "Don't steal" will sometimes produce less good consequences than following a more complicated rule "Don't steal except when … or … or … or … or …". Another example might be following a rule that allows people to give some degree of priority to their own projects, even when they could produce more good by sacrificing their own projects in order to help others. Still, rule-consequentialism's contention is that bringing about widespread acceptance of a simpler and less demanding code, even if acceptance of that code does sometimes lead people to do acts with sub-optimal consequences, has higher expected value in the long run than bringing about widespread acceptance of a maximally complicated and demanding code. Since rule-consequentialism can tell people to follow this simpler and less demanding code, even when following it will not maximise expected good, rule-consequentialism escapes collapse into practical equivalence to act-consequentialism.
To the extent that rule-consequentialism circumvents collapse, this theory is accused of incoherence. Rule-consequentialism is accused of incoherence for maintaining that an act can be morally permissible or even required though the act fails to maximise expected good. Behind this accusation must be the assumption that rule-consequentialism contains an overriding commitment to maximise the good. It is incoherent to have this overriding commitment and then to oppose an act required by the commitment. (For developments of this line of thought, see Arneson 2005; Card 2007; Wall 2009.)
In order to evaluate the incoherence objection to rule-consequentialism, we need to be clearer about the supposed location of an overriding commitment to maximize the good. Is this commitment supposed to be part of the rule-consequentialist agent's moral psychology? Or is it supposed to be part of the theory rule-consequentialism?
Well, rule-consequentialists need not have maximizing the good as their ultimate and overriding moral goal. Instead, they could have a moral psychology as follows:
Their fundamental moral motivation is to do what is impartially defensible.
They believe acting on impartially justified rules is impartially defensible.
They also believe that rule-consequentialism is on balance the best account of impartially justified rules.
Agents with this moral psychology — i.e., this combination of moral motivation and beliefs — would be morally motivated to do as rule-consequentialism prescribes. This moral psychology is certainly possible. And, for agents who have it, there is nothing incoherent about following rules when doing so will not maximize the expected good.
So, even if rule-consequentialist agents need not have an overriding commitment to maximize expected good, does their theory contain such a commitment? No, rule-consequentialism is essentially the conjunction of two claims: (1) that rules are to be selected solely in terms of their consequences and (2) that these rules determine which kinds of acts are morally wrong. This is really all there is to the theory — in particular, there is not some third component consisting in or entailing an overriding commitment to maximize expected good.
Without an overriding commitment to maximize the expected good, there is nothing incoherent in rule-consequentialism's forbidding some kinds of act, even when they maximize the expected good. Likewise, there is nothing incoherent about rule-consequentialism's requiring other kinds of act, even when they conflict with maximizing the expected good. The best-known objection to rule-consequentialism dies once we realize that neither the rule-consequentialist agent nor the theory itself contains an overriding commitment to maximize the good.
The viability of this defence of rule-consequentialism against the incoherence objection may depend in part on what the argument for rule-consequentialism is supposed to be. The defence seems less viable if the argument for rule-consequentialism starts from a commitment to consequentialist assessment. For starting with such a commitment seems very close to starting from an overriding commitment to maximize the expected good. The defence against the incoherence objection seems far more secure, however, if the argument for rule-consequentialism is that this theory does better than any other moral theory at specifying an impartial justification for intuitively plausible moral rules. (For more on this, see Hooker 2005, 2007; Rajczi 2016; Wolf 2016; Copp 2020.)
Another old objection to rule-consequentialism is that rule-consequentialists must be "rule-worshipers" — i.e., people who will stick to the rules even when doing so will obviously be disastrous.
The answer to this objection is that rule-consequentialism endorses a rule requiring one to prevent disaster, even if doing so requires breaking other rules (Brandt 1992: 87–8, 150–1, 156–7). To be sure, there are many complexities about what counts as a disaster. Think about what counts as a disaster when the "prevent disaster" rule is in competition with a rule against lying. Now think about what counts as a disaster when the "prevent disaster" rule is in competition with a rule against stealing, or even more when in competition with a rule against physically harming the innocent. Rule-consequentialism may need to be clearer about such matters. But at least it cannot rightly be accused of potentially leading to disaster.
An important confusion to avoid is to think that rule-consequentialism's including a "prevent disaster" rule means that rule-consequentialism collapses into practical equivalence with maximising act-consequentialism. Maximising act-consequentialism holds that we should lie, or steal, or harm the innocent whenever doing so would produce even a little more expected good than not doing so would. A rule requiring one to prevent disaster does not have this implication. Rather, the "prevent disaster" rule comes into play only when there is a very much larger difference in the amounts of expected value at stake.
Woodard (2022) contends that the "prevent disaster" rule needs reconceptualizing. Many rules identify kinds of reasons. Examples are the reason to prevent harm, the reason not to steal, and the reason to keep your promises. But, Woodard holds, we should not think of the "prevent disaster" rule as being a rule identifying a kind of reason, which then is claimed to have overriding force. We should rather think that the "prevent disaster" rule distinguishes between cases in which the reason to prevent harm overrides opposing moral reasons and cases in which the reason to prevent harm is weaker than opposing reasons.
Even if Woodard is correct about this, a "prevent disaster" rule can be accused of vagueness and indeterminacy. Indeed, the line between cases in which the moral reason to prevent harm overrides opposing moral reasons and cases in which the moral reason to prevent harm is weaker than opposing moral reasons might well be indeterminate. Woollard (2022) argues that admitting such vagueness and indeterminacy does not weaken rule-consequentialism. And she draws on her earlier work (2015) to explain how rule-consequentialism's "prevent disaster" rule need not impale the theory on the dilemma of either being overly demanding or placing a counterintuitive limit on a requirement to come to the aid of others.
From the mid 1960s until the mid 1990s, most philosophers thought rule-consequentialism couldn't survive the objections discussed in the previous section. So, during those three decades, most philosophers didn't bother with other objections to the theory. However, if rule-consequentialism has convincing replies to all three of the objections just discussed, then a good question is whether or not there are other fatal objections to the theory.
Some other objections try to show that, given the theory's criterion for selecting rules, there are conditions under which it selects intuitively unacceptable rules. For example, Tom Carson (1991) argued that rule-consequentialism turns out to be extremely demanding in the real world. Mulgan (2001, esp. ch. 3) agreed with Carson about that, and went on to argue that, even if rule-consequentialism's implications in the actual world are fine, the theory has counterintuitive implications in possible worlds. If Mulgan were right about that, this would cast doubt on rule-consequentialism's claim to explain why certain demands are appropriate in the actual world. Debate about such matters continues (Hooker 2003; Lawlor 2004; Parfit 2011, 2017a, 2017b; Woollard 2015: 181–205; Rajczi 2016; Portmore 2017; Podgorski 2018; D. Miller 2021; Perl 2021, 2022). And Mulgan has become a developer of rule-consequentialism rather than a critic (Mulgan 2006, 2009, 2015, 2017, 2020).
A related objection to rule-consequentialism is that it makes the justification of familiar rules contingent on various empirical facts, such as what human nature is like, and how many people there are in need or in positions to help. The objection is that some familiar moral rules are necessarily, not merely contingently, justified (McNaughton and Rawling 1998; Gaut 1999, 2002; Montague 2000; Suikkanen 2008). A sibling of this objection is that rule-consequentialism makes the justification of rules depend on the wrong facts (Arneson 2005; Portmore 2009; cf. Woollard 2015: esp. pp. 185–86, 203–205; 2022).
The mechanics of teaching new codes throws up serious questions for forms of rule-consequentialism that count the costs of getting rules internalised by new generations. As explained earlier, limiting the targets of the teaching to new generations is meant to avoid having to count the costs of getting rules internalised by existing generations of people who have already internalised some other moral rules and ideas. But can we come up with a coherent description of those who are supposed to do the teaching of these new generations? If the teachers are imagined to have already internalised the ideal code themselves, then how is that supposed to have happened? If these teachers are imagined not to have already internalised the ideal code, then there will be costs associated with the conflict between the ideal code and whatever they have already internalised. (This objection was formulated by John Andrews, Robert Ehman, and Andrew Moore. Cf. Levy 2000; D. Miller 2021.) A related objection is that rule-consequentialism has not yet been formulated in a way that enables it to deal plausibly with conflicts among rules (Eggleston 2007). But see Copp 2020; D. Miller 2021; Woodard 2022.
This entry has benefited from very generous comments by Rob Lawlor, Gerald Lang, Andrew Moore, Tim Mulgan, Walter Sinnott-Armstrong, and Peter Vallentyne.
The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054