While moral reasoning can be undertaken on another’s behalf, it is paradigmatically an agent’s first-personal (individual or collective) practical reasoning about what, morally, they ought to do. Philosophical examination of moral reasoning faces both distinctive puzzles – about how we recognize moral considerations and cope with conflicts among them and about how they move us to act – and distinctive opportunities for gleaning insight about what we ought to do from how we reason about what we ought to do.
Part I of this article characterizes moral reasoning more fully, situates it in relation both to first-order accounts of what morality requires of us and to philosophical accounts of the metaphysics of morality, and explains the interest of the topic. Part II then takes up a series of philosophical questions about moral reasoning, so understood and so situated.
This article takes up moral reasoning as a species of practical reasoning – that is, as a type of reasoning directed towards deciding what to do and, when successful, issuing in an intention (see entry on practical reason). Of course, we also reason theoretically about what morality requires of us; but the nature of purely theoretical reasoning about ethics is adequately addressed in the various articles on ethics. It is also true that, on some understandings, moral reasoning directed towards deciding what to do involves forming judgments about what one ought, morally, to do. On these understandings, asking what one ought (morally) to do can be a practical question, a certain way of asking about what to do. (See section 1.5 on the question of whether this is a distinctive practical question.) In order to do justice to the full range of philosophical views about moral reasoning, we will need to have a capacious understanding of what counts as a moral question. For instance, since a prominent position about moral reasoning is that the relevant considerations are not codifiable, we would beg a central question if we here defined “morality” as involving codifiable principles or rules. For present purposes, we may understand issues about what is right or wrong, or virtuous or vicious, as raising moral questions.
Even when moral questions explicitly arise in daily life, just as when we are faced with child-rearing, agricultural, and business questions, sometimes we act impulsively or instinctively rather than pausing to reason, not just about what to do, but about what we ought to do. Jean-Paul Sartre described a case of one of his students who came to him in occupied Paris during World War II, asking advice about whether to stay by his mother, who otherwise would have been left alone, or rather to go join the forces of the Free French, then massing in England (Sartre 1975). In the capacious sense just described, this is probably a moral question; and the young man paused long enough to ask Sartre’s advice. Does that mean that this young man was reasoning about his practical question? Not necessarily. Indeed, Sartre used the case to expound his skepticism about the possibility of addressing such a practical question by reasoning. But what is reasoning?
Reasoning, of the sort discussed here, is active or explicit thinking, in which the reasoner, responsibly guided by her assessments of her reasons (Kolodny 2005) and of any applicable requirements of rationality (Broome 2009, 2013), attempts to reach a well-supported answer to a well-defined question (Hieronymi 2013). For Sartre’s student, at least such a question had arisen. Indeed, the question was relatively definite, implying that the student had already engaged in some reflection about the various alternatives available to him – a process that has well been described as an important phase of practical reasoning, one that aptly precedes the effort to make up one’s mind (Harman 1986, 2).
Characterizing reasoning as responsibly conducted thinking of course does not suffice to analyze the notion. For one thing, it fails to address the fraught question of reasoning’s relation to inference (Harman 1986, Broome 2009). In addition, it does not settle whether formulating an intention about what to do suffices to conclude practical reasoning or whether such intentions cannot be adequately worked out except by starting to act. Perhaps one cannot adequately reason about how to repair a stone wall or how to make an omelet with the available ingredients without actually starting to repair or to cook (cf. Fernandez 2016). Still, it will do for present purposes. It suffices to make clear that the idea of reasoning involves norms of thinking. These norms of aptness or correctness in practical thinking surely do not require us to think along a single prescribed pathway, but rather permit only certain pathways and not others (Broome 2013, 219). Even so, we doubtless often fail to live up to them.
Our thinking, including our moral thinking, is often not explicit. We could say that we also reason tacitly, thinking in much the same way as during explicit reasoning, but without any explicit attempt to reach well-supported answers. In some situations, even moral ones, we might be ill-advised to attempt to answer our practical questions by explicit reasoning. In others, it might even be a mistake to reason tacitly – because, say, we face a pressing emergency. “Sometimes we should not deliberate about what to do, and just drive” (Arpaly and Schroeder 2014, 50). Yet even if we are not called upon to think through our options in all situations, and even if sometimes it would be positively better if we did not, still, if we are called upon to do so, then we should conduct our thinking responsibly: we should reason.
Recent work in empirical ethics has indicated that even when we are called upon to reason morally, we often do so badly. When asked to give reasons for our moral intuitions, we are often “dumbfounded,” finding nothing to say in their defense (Haidt 2001). Our thinking about hypothetical moral scenarios has been shown to be highly sensitive to arbitrary variations, such as in the order of presentation. Even professional philosophers have been found to be prone to such lapses of clear thinking (e.g., Schwitzgebel & Cushman 2012). Some of our dumbfounding and confusion has been laid at the feet of our having both a fast, more emotional way of processing moral stimuli and a slow, more cognitive way (e.g., Greene 2014). An alternative explanation of moral dumbfounding looks to social norms of moral reasoning (Sneddon 2007). And a more optimistic reaction to our confusion sees our established patterns of “moral consistency reasoning” as being well-suited to cope with the clashing input generated by our fast and slow systems (Campbell & Kumar 2012) or as constituting “a flexible learning system that generates and updates a multidimensional evaluative landscape to guide decision and action” (Railton 2014, 813).
Eventually, such empirical work on our moral reasoning may yield revisions in our norms of moral reasoning. This has not yet happened. This article is principally concerned with philosophical issues posed by our current norms of moral reasoning. For example, given those norms and assuming that they are more or less followed, how do moral considerations enter into moral reasoning, get sorted out by it when they clash, and lead to action? And what do those norms indicate about what we ought to do?
The topic of moral reasoning lies in between two other commonly addressed topics in moral philosophy. On the one side, there is the first-order question of what moral truths there are, if any. For instance, are there any true general principles of morality, and if so, what are they? At this level utilitarianism competes with Kantianism, for instance, and both compete with anti-theorists of various stripes, who recognize only particular truths about morality (Clarke & Simpson 1989). On the other side, a quite different sort of question arises from seeking to give a metaphysical grounding for moral truths or for the claim that there are none. Supposing there are some moral truths, what makes them true? What account can be given of the truth-conditions of moral statements? Here arise familiar questions of moral skepticism and moral relativism; here, the idea of “a reason” is wielded by many hoping to defend a non-skeptical moral metaphysics (e.g., Smith 2013). The topic of moral reasoning lies in between these two other familiar topics in the following simple sense: moral reasoners operate with what they take to be morally true but, instead of asking what makes their moral beliefs true, they proceed responsibly to attempt to figure out what to do in light of those considerations. The philosophical study of moral reasoning concerns itself with the nature of these attempts.
These three topics clearly interrelate. Conceivably, the relations between them would be so tight as to rule out any independent interest in the topic of moral reasoning. For instance, if all that could usefully be said about moral reasoning were that it is a matter of attending to the moral facts, then all interest would devolve upon the question of what those facts are – with some residual focus on the idea of moral attention (McNaughton 1988). Alternatively, it might be thought that moral reasoning is simply a matter of applying the correct moral theory via ordinary modes of deductive and empirical reasoning. Again, if that were true, one’s sufficient goal would be to find that theory and get the non-moral facts right. Neither of these reductive extremes seems plausible, however. Take the potential reduction to getting the facts right, first.
Contemporary advocates of the importance of correctly perceiving the morally relevant facts tend to focus on facts that we can perceive using our ordinary sense faculties and our ordinary capacities of recognition, such as that this person has an infection or that this person needs my medical help. On such a footing, it is possible to launch powerful arguments against the claim that moral principles undergird every moral truth (Dancy 1993) and for the claim that we can sometimes perfectly well decide what to do by acting on the reasons we perceive instinctively – or as we have been trained – without engaging in any moral reasoning. Yet this is not a sound footing for arguing that moral reasoning, beyond simply attending to the moral facts, is always unnecessary. On the contrary, we often find ourselves facing novel perplexities and moral conflicts in which our moral perception is an inadequate guide. In addressing the moral questions surrounding whether society ought to enforce surrogate-motherhood contracts, for instance, the scientific and technological novelties involved make our moral perceptions unreliable and shaky guides. When a medical researcher who has noted an individual’s illness also notes the fact that diverting resources to caring, clinically, for this individual would inhibit the progress of my research, thus harming the long-term health chances of future sufferers of this illness, he or she comes face to face with conflicting moral considerations. At this juncture, it is far less plausible or satisfying simply to say that, employing one’s ordinary sensory and recognitional capacities, one sees what is to be done, both things considered. To posit a special faculty of moral intuition that generates such overall judgments in the face of conflicting considerations is to wheel in a deus ex machina. It cuts inquiry short in a way that serves the purposes of fiction better than it serves the purposes of understanding.
It is plausible instead to suppose that moral reasoning comes in at this point (Campbell & Kumar 2012).
For present purposes, it is worth noting, David Hume and the moral sense theorists do not count as short-circuiting our understanding of moral reasoning in this way. It is true that Hume presents himself, especially in the Treatise of Human Nature, as a disbeliever in any specifically practical or moral reasoning. In doing so, however, he employs an exceedingly narrow definition of “reasoning” (Hume 2000, Book I, Part iii, sect. ii). For present purposes, by contrast, we are using a broader working gloss of “reasoning,” one not controlled by an ambition to parse out the relative contributions of (the faculty of) reason and of the passions. And about moral reasoning in this broader sense, as responsible thinking about what one ought to do, Hume has many interesting things to say, starting with the thought that moral reasoning must involve a double correction of perspective (see section 2.4) adequately to account for the claims of other people and of the farther future, a double correction that is accomplished with the aid of the so-called “calm passions.”
If we turn from the possibility that perceiving the facts aright will displace moral reasoning to the possibility that applying the correct moral theory will displace – or exhaust – moral reasoning, there are again reasons to be skeptical. One reason is that moral theories do not arise in a vacuum; instead, they develop against a broad backdrop of moral convictions. Insofar as the first potentially reductive strand, emphasizing the importance of perceiving moral facts, has force – and it does have some – it also tends to show that moral theories need to gain support by systematizing or accounting for a wide range of moral facts (Sidgwick 1981). As in most other arenas in which theoretical explanation is called for, the degree of explanatory success will remain partial and open to improvement via revisions in the theory (see section 2.6). Unlike the natural sciences, however, moral theory is an endeavor that, as John Rawls once put it, is “Socratic” in that it is a subject pertaining to actions “shaped by self-examination” (Rawls 1971, 48f.). If this observation is correct, it suggests that the moral questions we set out to answer arise from our reflections about what matters. By the same token – and this is the present point – a moral theory is subject to being overturned because it generates concrete implications that do not sit well with us on due reflection. This being so, and granting the great complexity of the moral terrain, it seems highly unlikely that we will ever generate a moral theory on the basis of which we can serenely and confidently proceed in a deductive way to generate answers to what we ought to do in all concrete cases. This conclusion is reinforced by a second consideration, namely that insofar as a moral theory is faithful to the complexity of the moral phenomena, it will contain within it many possibilities for conflicts among its own elements. Even if it does deploy some priority rules, these are unlikely to be able to cover all contingencies.
Hence, some moral reasoning that goes beyond the deductive application of the correct theory is bound to be needed.
In short, a sound understanding of moral reasoning will not take the form of reducing it to one of the other two levels of moral philosophy identified above. Neither the demand to attend to the moral facts nor the directive to apply the correct moral theory exhausts or sufficiently describes moral reasoning.
In addition to posing philosophical problems in its own right, moral reasoning is of interest on account of its implications for moral facts and moral theories. Accordingly, attending to moral reasoning will often be useful to those whose real interest is in determining the right answer to some concrete moral problem or in arguing for or against some moral theory. The characteristic ways we attempt to work through a given sort of moral quandary can be just as revealing about our considered approaches to these matters as are any bottom-line judgments we may characteristically come to. Further, we may have firm, reflective convictions about how a given class of problems is best tackled, deliberatively, even when we remain in doubt about what should be done. In such cases, attending to the modes of moral reasoning that we characteristically accept can usefully expand the set of moral information from which we start, suggesting ways to structure the competing considerations.
Facts about the nature of moral inference and moral reasoning may have important direct implications for moral theory. For instance, it might be taken to be a condition of adequacy of any moral theory that it play a practically useful role in our efforts at self-understanding and deliberation. It should be deliberation-guiding (Richardson 2018, §1.2). If this condition is accepted, then any moral theory that would require agents to engage in abstruse or difficult reasoning may be inadequate for that reason, as would be any theory that assumes that ordinary individuals are generally unable to reason in the ways that the theory calls for. J.S. Mill (1979) conceded that we are generally unable to do the calculations called for by utilitarianism, as he understood it, and argued that we should be consoled by the fact that, over the course of history, experience has generated secondary principles that guide us well enough. Rather more dramatically, R. M. Hare defended utilitarianism as well capturing the reasoning of ideally informed and rational “archangels” (1981). Taking seriously a deliberation-guidance desideratum for moral theory would favor, instead, theories that more directly inform efforts at moral reasoning by us “proletarians,” to use Hare’s contrasting term.
Accordingly, the close relations between moral reasoning, the moral facts, and moral theory do not eliminate moral reasoning as a topic of interest. To the contrary, because moral reasoning has important implications about moral facts and moral theories, these close relations lend additional interest to the topic of moral reasoning.
The final threshold question is whether moral reasoning is truly distinct from practical reasoning more generally understood. (The question of whether moral reasoning, even if practical, is structurally distinct from theoretical reasoning that simply proceeds from a proper recognition of the moral facts has already been implicitly addressed and answered, for the purposes of the present discussion, in the affirmative.) In addressing this final question, it is difficult to overlook the way different moral theories project quite different models of moral reasoning – again a link that might be pursued by the moral philosopher seeking leverage in either direction. For instance, Aristotle’s views might be as follows: a quite general account can be given of practical reasoning, which includes selecting means to ends and determining the constituents of a desired activity. The reasoning of a vicious person differs not at all in its structure from that of a virtuous person, but only in its content, for the virtuous person pursues true goods, whereas the vicious person simply gets side-tracked by apparent ones. To be sure, the virtuous person may be able to achieve a greater integration of his or her ends via practical reasoning (because of the way the various virtues cohere), but this is a difference in the result of practical reasoning and not in its structure. At an opposite extreme, Kant’s categorical imperative has been taken to generate an approach to practical reasoning (via a “typic of practical judgment”) that is distinctive from other practical reasoning both in the range of considerations it addresses and in its structure (Nell 1975). Whereas prudential practical reasoning, on Kant’s view, aims to maximize one’s happiness, moral reasoning addresses the potential universalizability of the maxims – roughly, the intentions – on which one acts. Views intermediate between Aristotle’s and Kant’s in this respect include Hare’s utilitarian view and Aquinas’ natural-law view.
On Hare’s view, just as an ideal prudential agent applies maximizing rationality to his or her own preferences, an ideal moral agent’s reasoning applies maximizing rationality to the set of everyone’s preferences that its archangelic capacity for sympathy has enabled it to internalize (Hare 1981). Thomistic, natural-law views share the Aristotelian view about the general unity of practical reasoning in pursuit of the good, rightly or wrongly conceived, but add that practical reason, in addition to demanding that we pursue the fundamental human goods, also, and distinctly, demands that we not attack these goods. In this way, natural-law views incorporate some distinctively moral structuring – such as the distinctions between doing and allowing and the so-called doctrine of double effect’s distinction between intending as a means and accepting as a by-product – within a unified account of practical reasoning (see entry on the natural law tradition in ethics). In light of this diversity of views about the relation between moral reasoning and practical or prudential reasoning, a general account of moral reasoning that does not want to presume the correctness of a definite moral theory will do well to remain agnostic on the question of how moral reasoning relates to non-moral practical reasoning.
To be sure, most great philosophers who have addressed the nature of moral reasoning were far from agnostic about the content of the correct moral theory, and developed their reflections about moral reasoning in support of or in derivation from their moral theory. Nonetheless, contemporary discussions that are somewhat agnostic about the content of moral theory have arisen around important and controversial aspects of moral reasoning. We may group these around the following seven questions:
The remainder of this article takes up these seven questions in turn.
One advantage to defining “reasoning” capaciously, as here, is that it helps one recognize that the processes whereby we come to be concretely aware of moral issues are integral to moral reasoning as it might more narrowly be understood. Recognizing moral issues when they arise requires a highly trained set of capacities and a broad range of emotional attunements. Philosophers of the moral sense school of the 17th and 18th centuries stressed innate emotional propensities, such as sympathy with other humans. Classically influenced virtue theorists, by contrast, give more importance to the training of perception and the emotional growth that must accompany it. Among contemporary philosophers working in empirical ethics there is a similar divide, with some arguing that we process situations using an innate moral grammar (Mikhail 2011) and some emphasizing the role of emotions in that processing (Haidt 2001, Prinz 2007, Greene 2014). For the moral reasoner, a crucial task for our capacities of moral recognition is to mark out certain features of a situation as being morally salient. Sartre’s student, for instance, focused on the competing claims of his mother and the Free French, giving them each an importance to his situation that he did not give to eating French cheese or wearing a uniform. To say that certain features are marked out as morally salient is not to imply that the features thus singled out answer to the terms of some general principle or other: we will come to the question of particularism, below. Rather, it is simply to say that recognitional attention must have a selective focus.
What will be counted as a moral issue or difficulty, in the sense requiring moral agents’ recognition, will again vary by moral theory. Not all moral theories would count filial loyalty and patriotism as moral duties. It is only at great cost, however, that any moral theory could claim to do without a layer of moral thinking involving situation-recognition. A calculative sort of utilitarianism, perhaps, might be imagined according to which there is no need to spot a moral issue or difficulty, as every choice node in life presents the agent with the same, utility-maximizing task. Perhaps Jeremy Bentham held a utilitarianism of this sort. For the more plausible utilitarianisms mentioned above, however, such as Mill’s and Hare’s, agents need not always calculate afresh, but must instead be alive to the possibility that because the ordinary “landmarks and direction posts” lead one astray in the situation at hand, they must make recourse to a more direct and critical mode of moral reasoning. Recognizing whether one is in one of those situations thus becomes the principal recognitional task for the utilitarian agent. (Whether this task can be suitably confined, of course, has long been one of the crucial questions about whether such indirect forms of utilitarianism, attractive on other grounds, can prevent themselves from collapsing into a more Benthamite, direct form: cf. Brandt 1979.)
Note that, as we have been describing moral uptake, we have not implied that what is perceived is ever a moral fact. Rather, it might be that what is perceived is some ordinary, descriptive feature of a situation that is, for whatever reason, morally relevant. An account of moral uptake will interestingly impinge upon the metaphysics of moral facts, however, if it holds that moral facts can be perceived. Importantly intermediate, in this respect, is the set of judgments involving so-called “thick” evaluative concepts – for example, that someone is callous, boorish, just, or brave (see the entry on thick ethical concepts). These do not invoke the supposedly “thinner” terms of overall moral assessment, “good,” or “right.” Yet they are not innocent of normative content, either. Plainly, we do recognize callousness when we see clear cases of it. Plainly, too – whatever the metaphysical implications of the last fact – our ability to describe our situations in these thick normative terms is crucial to our ability to reason morally.
It is debated how closely our abilities of moral discernment are tied to our moral motivations. For Aristotle and many of his ancient successors, the two are closely linked, in that someone not brought up into virtuous motivations will not see things correctly. For instance, cowards will overestimate dangers, the rash will underestimate them, and the virtuous will perceive them correctly (Eudemian Ethics 1229b23–27). By the Stoics, too, having the right motivations was regarded as intimately tied to perceiving the world correctly; but whereas Aristotle saw the emotions as allies to enlist in support of sound moral discernment, the Stoics saw them as inimical to clear perception of the truth (cf. Nussbaum 2001).
That one discerns features and qualities of some situation that are relevant to sizing it up morally does not yet imply that one explicitly or even implicitly employs any general claims in describing it. Perhaps all that one perceives are particularly embedded features and qualities, without saliently perceiving them as instantiations of any types. Sartre’s student may be focused on his mother and on the particular plights of several of his fellow Frenchmen under Nazi occupation, rather than on any purported requirements of filial duty or patriotism. Having become aware of some moral issue in such relatively particular terms, he might proceed directly to sorting out the conflict between them. Another possibility, however, and one that we frequently seem to exploit, is to formulate the issue in general terms: “An only child should stick by an otherwise isolated parent,” for instance, or “one should help those in dire need if one can do so without significant personal sacrifice.” Such general statements would be examples of “moral principles,” in a broad sense. (We do not here distinguish between principles and rules. Those who do include Dworkin 1978 and Gert 1998.)
We must be careful, here, to distinguish the issue of whether principles commonly play an implicit or explicit role in moral reasoning, including well-conducted moral reasoning, from the issue of whether principles necessarily figure as part of the basis of moral truth. The latter issue is best understood as a metaphysical question about the nature and basis of moral facts. What is currently known as moral particularism is the view that there are no defensible moral principles and that moral reasons, or well-grounded moral facts, can exist independently of any basis in a general principle. A contrary view holds that moral reasons are necessarily general, whether because the sources of their justification are all general or because a moral claim is ill-formed if it contains particularities. But whether principles play a useful role in moral reasoning is certainly a different question from whether principles play a necessary role in accounting for the ultimate truth-conditions of moral statements. Moral particularism, as just defined, denies their latter role. Some moral particularists seem also to believe that moral particularism implies that moral principles cannot soundly play a useful role in reasoning. This claim is disputable, as it seems a contingent matter whether the relevant particular facts arrange themselves in ways susceptible to general summary and whether our cognitive apparatus can cope with them at all without employing general principles. Although the metaphysical controversy about moral particularism lies largely outside our topic, we will revisit it in section 2.5, in connection with the weighing of conflicting reasons.
With regard to moral reasoning, while there are some self-styled “anti-theorists” who deny that abstract structures of linked generalities are important to moral reasoning (Clarke, et al. 1989), it is more common to find philosophers who recognize both some role for particular judgment and some role for moral principles. Thus, neo-Aristotelians like Nussbaum who emphasize the importance of “finely tuned and richly aware” particular discernment also regard that discernment as being guided by a set of generally describable virtues whose general descriptions will come into play in at least some kinds of cases (Nussbaum 1990). “Situation ethicists” of an earlier generation (e.g. Fletcher 1997) emphasized the importance of taking into account a wide range of circumstantial differentiae, but against the background of some general principles whose application the differentiae help sort out. Feminist ethicists influenced by Carol Gilligan’s pathbreaking work on moral development have stressed the moral centrality of the kind of care and discernment that are salient and well-developed by people immersed in particular relationships (Held 1995); but this emphasis is consistent with such general principles as “one ought to be sensitive to the wishes of one’s friends” (see the entry on feminist moral psychology). Again, if we distinguish the question of whether principles are useful in responsibly-conducted moral thinking from the question of whether moral reasons ultimately all derive from general principles, and concentrate our attention solely on the former, we will see that some of the opposition to general moral principles melts away.
It should be noted that we have been using a weak notion of generality, here. It is contrasted only with the kind of strict particularity that comes with indexicals and proper names. General statements or claims – ones that contain no such particular references – are not necessarily universal generalizations, making an assertion about all cases of the mentioned type. Thus, “one should normally help those in dire need” is a general principle, in this weak sense. Possibly, such logically loose principles would be obfuscatory in the context of an attempt to reconstruct the ultimate truth-conditions of moral statements. Such logically loose principles would clearly be useless in any attempt to generate a deductively tight “practical syllogism.” In our day-to-day, non-deductive reasoning, however, such logically loose principles appear to be quite useful. (Recall that we are understanding “reasoning” quite broadly, as responsibly conducted thinking: nothing in this understanding of reasoning suggests any uniquely privileged place for deductive inference: cf. Harman 1986. For more on defeasible or “default” principles, see section 2.5.)
In this terminology, establishing that general principles are essential to moral reasoning leaves open the further question whether logically tight, or exceptionless, principles are also essential to moral reasoning. Certainly, much of our actual moral reasoning seems to be driven by attempts to recast or reinterpret principles so that they can be taken to be exceptionless. Adherents and inheritors of the natural-law tradition in ethics (e.g. Donagan 1977) are particularly supple defenders of exceptionless moral principles, as they are able to avail themselves not only of a refined tradition of casuistry but also of a wide array of subtle – some would say overly subtle – distinctions, such as those mentioned above between doing and allowing and between intending as a means and accepting as a byproduct.
A related role for a strong form of generality in moral reasoning comes from the Kantian thought that one’s moral reasoning must counter one’s tendency to make exceptions for oneself. Accordingly, Kant holds, as we have noted, that we must ask whether the maxims of our actions can serve as universal laws. As most contemporary readers understand this demand, it requires that we engage in a kind of hypothetical generalization across agents, and ask about the implications of everybody acting that way in those circumstances. The grounds for developing Kant’s thought in this direction have been well explored (e.g., Nell 1975, Korsgaard 1996, Engstrom 2009). The importance and the difficulties of such a hypothetical generalization test in ethics were discussed in the influential works Gibbard 1965 and Goldman 1974.
Whether or not moral considerations need the backing of general principles, we must expect situations of action to present us with multiple moral considerations. In addition, of course, these situations will also present us with a lot of information that is not morally relevant. On any realistic account, a central task of moral reasoning is to sort out relevant considerations from irrelevant ones, as well as to determine which are especially relevant and which only slightly so. That a certain woman is Sartre’s student’s mother seems arguably to be a morally relevant fact; what about the fact (supposing it is one) that she has no other children to take care of her? Addressing the task of sorting what is morally relevant from what is not, some philosophers have offered general accounts of morally relevant features. Others have given accounts of how we sort out which of the relevant features are most relevant, a process of thinking that sometimes goes by the name of “casuistry.”
Before we look at ways of sorting out which features are morally relevant or most morally relevant, it may be useful to note a prior step taken by some casuists, which was to attempt to set out a schema that would capture all of the features of an action or proposed action. The Roman Catholic casuists of the middle ages did so by drawing on Aristotle’s categories. Accordingly, they asked where, when, why, how, by what means, to whom, or by whom the action in question is to be done or avoided (see Jonsen and Toulmin 1988). The idea was that complete answers to these questions would contain all of the features of the action, of which the morally relevant ones would be a subset. Although metaphysically uninteresting, the idea of attempting to list all of an action’s features in this way represents a distinctive – and extreme – heuristic for moral reasoning.
Turning to the morally relevant features, one of the most developed accounts is Bernard Gert’s. He develops a list of features relevant to whether the violation of a moral rule should be generally allowed. Given the designed function of Gert’s list, it is natural that most of his morally relevant features make reference to the set of moral rules he defended. Accordingly, some of Gert’s distinctions between dimensions of relevant features reflect controversial stances in moral theory. For example, one of the dimensions is whether “the violation [is] done intentionally or only knowingly” (Gert 1998, 234) – a distinction that those who reject the doctrine of double effect would not find relevant.
In deliberating about what we ought, morally, to do, we also often attempt to figure out which considerations are most relevant. To take an issue mentioned above: Are surrogate motherhood contracts more akin to agreements with babysitters (clearly acceptable) or to agreements with prostitutes (not clearly so)? That is, which feature of surrogate motherhood is more relevant: that it involves a contract for child-care services or that it involves payment for the intimate use of the body? Both in such relatively novel cases and in more familiar ones, reasoning by analogy plays a large role in ordinary moral thinking. When this reasoning by analogy starts to become systematic – a social achievement that requires some historical stability and reflectiveness about what are taken to be moral norms – it begins to exploit comparison to cases that are “paradigmatic,” in the sense of being taken as settled. Within such a stable background, a system of casuistry can develop that lends some order to the appeal to analogous cases. To use an analogy: the availability of a widely accepted and systematic set of analogies and the availability of what are taken to be moral norms may stand to one another as chicken does to egg: each may be an indispensable moment in the genesis of the other.
Casuistry, thus understood, is an indispensable aid to moral reasoning. At least, that it is would follow from conjoining two features of the human moral situation mentioned above: the multifariousness of moral considerations that arise in particular cases and the need and possibility for employing moral principles in sound moral reasoning. We require moral judgment, not simply a deductive application of principles or a particularist bottom-line intuition about what we should do. This judgment must be responsible to moral principles yet cannot be straightforwardly derived from them. Accordingly, our moral judgment is greatly aided if it is able to rest on the sort of heuristic support that casuistry offers. Thinking through which of two analogous cases provides a better key to understanding the case at hand is a useful way of organizing our moral reasoning, and one on which we must continue to depend. If we lack the kind of broad consensus on a set of paradigm cases on which the Renaissance Catholic or Talmudic casuists could draw, our casuistic efforts will necessarily be more controversial and tentative than theirs; but we are not wholly without settled cases from which to work. Indeed, as Jonsen and Toulmin suggest at the outset of their thorough explanation and defense of casuistry, the depth of disagreement about moral theories that characterizes a pluralist society may leave us having to rest comparatively more weight on the cases about which we can find agreement than did the classic casuists (Jonsen and Toulmin 1988).
Despite the long history of casuistry, there is little that can usefully be said about how one ought to reason about competing analogies. In the law, where previous cases have precedential importance, more can be said. As Sunstein notes (Sunstein 1996, chap. 3), the law deals with particular cases, which are always “potentially distinguishable” (72); yet the law also imposes “a requirement of practical consistency” (67). This combination of features makes reasoning by analogy particularly influential in the law, for one must decide whether a given case is more like one set of precedents or more like another. Since the law must proceed even within a pluralist society such as ours, Sunstein argues, we see that analogical reasoning can go forward on the basis of “incompletely theorized judgments” or of what Rawls calls an “overlapping consensus” (Rawls 1996). That is, although a robust use of analogous cases depends, as we have noted, on some shared background agreement, this agreement need not extend to all matters or all levels of individuals’ moral thinking. Accordingly, although in a pluralist society we may lack the kind of comprehensive normative agreement that made the high casuistry of Renaissance Christianity possible, the path of the law suggests that normatively forceful, case-based, analogical reasoning can still go on. A modern, competing approach to case-based or precedent-respecting reasoning has been developed by John F. Horty (2016). On Horty’s approach, which builds on the default logic developed in Horty 2012, the body of precedent systematically shifts the weights of the reasons arising in a new case.
Reasoning by appeal to cases is also a favorite mode of some recent moral philosophers. Since our focus here is not on the methods of moral theory, we do not need to go into any detail in comparing different ways in which philosophers wield cases for and against alternative moral theories. There is, however, an important and broadly applicable point worth making about ordinary reasoning by reference to cases that emerges most clearly from the philosophical use of such reasoning. Philosophers often feel free to imagine cases, often quite unlikely ones, in order to attempt to isolate relevant differences. An infamous example is a pair of cases offered by James Rachels to cast doubt on the moral significance of the distinction between killing and letting die, here slightly redescribed. In both cases, there is at the outset a boy in a bathtub and a greedy older cousin downstairs who will inherit the family manse if and only if the boy predeceases him (Rachels 1975). In Case A, the cousin hears a thump, runs up to find the boy unconscious in the bath, and reaches out to turn on the tap so that the water will rise up to drown the boy. In Case B, the cousin hears a thump, runs up to find the boy unconscious in the bath with the water running, and decides to sit back and do nothing until the boy drowns. Since there is surely no moral difference between these cases, Rachels argued, the general distinction between killing and letting die is undercut. “Not so fast!” is the well-justified reaction (cf. Beauchamp 1979). Just because a factor is morally relevant in a certain way in comparing one pair of cases does not mean that it either is or must be relevant in the same way or to the same degree when comparing other cases. Shelly Kagan has dubbed the failure to take account of this fact of contextual interaction when wielding comparison cases the “additive fallacy” (1988).
Kagan concludes from this that the reasoning of moral theorists must depend upon some theory that helps us anticipate and account for ways in which factors will interact in various contexts. A parallel lesson, reinforcing what we have already observed in connection with casuistry proper, would apply for moral reasoning in general: reasoning from cases must at least implicitly rely upon a set of organizing judgments or beliefs, of a kind that would, on some understandings, count as a moral “theory.” If this is correct, it provides another kind of reason to think that moral considerations could be crystallized into principles that make manifest the organizing structure involved.
We are concerned here with moral reasoning as a species of practical reasoning – reasoning directed to deciding what to do and, if successful, issuing in an intention. But how can such practical reasoning succeed? How can moral reasoning hook up with motivationally effective psychological states so as to have this kind of causal effect? “Moral psychology” – the traditional name for the philosophical study of intention and action – has a lot to say to such questions, both in its traditional, a priori form and its newly popular empirical form. In addition, the conclusions of moral psychology can have substantive moral implications, for it may be reasonable to assume that if there are deep reasons that a given type of moral reasoning cannot be practical, then any principles that demand such reasoning are unsound. In this spirit, Samuel Scheffler has explored “the importance for moral philosophy of some tolerably realistic understanding of human motivational psychology” (Scheffler 1992, 8) and Peter Railton has developed the idea that certain moral principles might generate a kind of “alienation” (Railton 1984). In short, we may be interested in what makes practical reasoning of a certain sort psychologically possible both for its own sake and as a way of working out some of the content of moral theory.
The issue of psychological possibility is an important one for all kinds of practical reasoning (cf. Audi 1989). In morality, it is especially pressing, as morality often asks individuals to depart from satisfying their own interests. As a result, it may appear that moral reasoning’s practical effect could not be explained by a simple appeal to the initial motivations that shape or constitute someone’s interests, in combination with a requirement, like that mentioned above, to will the necessary means to one’s ends. Morality, it may seem, instead requires individuals to act on ends that may not be part of their “motivational set,” in the terminology of Williams 1981. How can moral reasoning lead people to do that? The question is a traditional one. Plato’s Republic answered that the appearances are deceiving, and that acting morally is, in fact, in the enlightened self-interest of the agent. Kant, in stark contrast, held that our transcendent capacity to act on our conception of a practical law enables us to set ends and to follow morality even when doing so sharply conflicts with our interests. Many other answers have been given. In recent times, philosophers have defended what has been called “internalism” about morality, which claims that there is a necessary conceptual link between agents’ moral judgment and their motivation. Michael Smith, for instance, puts the claim as follows (Smith 1994, 61):
If an agent judges that it is right for her to Φ in circumstances C, then either she is motivated to Φ in C or she is practically irrational.
Even this defeasible version of moral judgment internalism may be too strong; but instead of pursuing this issue further, let us turn to a question more internal to moral reasoning. (For more on the issue of moral judgment internalism, see moral motivation.)
The traditional question we were just glancing at picks up when moral reasoning is done. Supposing that we have some moral conclusion, it asks how agents can be motivated to go along with it. A different question about the intersection of moral reasoning and moral psychology, one more immanent to the former, concerns how motivational elements shape the reasoning process itself.
A powerful philosophical picture of human psychology, stemming from Hume, insists that beliefs and desires are distinct existences (Hume 2000, Book II, part iii, sect. iii; cf. Smith 1994, 7). This means that there is always a potential problem about how reasoning, which seems to work by concatenating beliefs, links up to the motivations that desire provides. The paradigmatic link is that of instrumental action: the desire to Ψ links with the belief that by Φing in circumstances C one will Ψ. Accordingly, philosophers who have examined moral reasoning within an essentially Humean, belief-desire psychology have sometimes accepted a constrained account of moral reasoning. Hume’s own account exemplifies the sort of constraint that is involved. As Hume has it, the calm passions support the dual correction of perspective constitutive of morality, alluded to above. Since these calm passions are seen as competing with our other passions in essentially the same motivational coinage, as it were, our passions limit the reach of moral reasoning.
An important step away from a narrow understanding of Humean moral psychology is taken if one recognizes the existence of what Rawls has called “principle-dependent desires” (Rawls 1996, 82–83; Rawls 2000, 46–47). These are desires whose objects cannot be characterized without reference to some rational or moral principle. An important special case of these is that of “conception-dependent desires,” in which the principle-dependent desire in question is seen by the agent as belonging to a broader conception, and as important on that account (Rawls 1996, 83–84; Rawls 2000, 148–152). For instance, conceiving of oneself as a citizen, one may desire to bear one’s fair share of society’s burdens. Although it may look like any content, including this, may substitute for Ψ in the Humean conception of desire, and although Hume set out to show how moral sentiments such as pride could be explained in terms of simple psychological mechanisms, his influential empiricism actually tends to restrict the possible content of desires. Introducing principle-dependent desires thus seems to mark a departure from a Humean psychology. As Rawls remarks, if “we may find ourselves drawn to the conceptions and ideals that both the right and the good express … , [h]ow is one to fix limits on what people might be moved by in thought and deliberation and hence may act from?” (1996, 85). While Rawls developed this point by contrasting Hume’s moral psychology with Kant’s, the same basic point is also made by neo-Aristotelians (e.g., McDowell 1998).
The introduction of principle-dependent desires bursts any would-be naturalist limit on their content; nonetheless, some philosophers hold that this notion remains too beholden to an essentially Humean picture to be able to capture the idea of a moral commitment. Desires, it may seem, remain motivational items that compete on the basis of strength. Saying that one’s desire to be just may be outweighed by one’s desire for advancement may seem to fail to capture the thought that one has a commitment – even a non-absolute one – to justice. Sartre designed his example of the student torn between staying with his mother and going to fight with the Free French so as to make it seem implausible that he ought to decide simply by determining which he more strongly wanted to do.
One way to get at the idea of commitment is to emphasize our capacity to reflect about what we want. By this route, one might distinguish, in the fashion of Harry Frankfurt, between the strength of our desires and “the importance of what we care about” (Frankfurt 1988). Although this idea is evocative, it provides relatively little insight into how it is that we thus reflect. Another way to model commitment is to take it that our intentions operate at a level distinct from our desires, structuring what we are willing to reconsider at any point in our deliberations (e.g. Bratman 1999). While this two-level approach offers some advantages, it is limited by its concession of a kind of normative primacy to the unreconstructed desires at the unreflective level. A more integrated approach might model the psychology of commitment in a way that reconceives the nature of desire from the ground up. One attractive possibility is to return to the Aristotelian conception of desire as being for the sake of some good or apparent good (cf. Richardson 2004). On this conception, the end for the sake of which an action is done plays an important regulating role, indicating, in part, what one will not do (Richardson 2018, §§8.3–8.4). Reasoning about final ends accordingly has a distinctive character (see Richardson 1994, Schmidtz 1995). Whatever the best philosophical account of the notion of a commitment – for another alternative, see Tiberius 2000 – much of our moral reasoning does seem to involve expressions of and challenges to our commitments (Anderson and Pildes 2000).
Recent experimental work, employing both survey instruments and brain imaging technologies, has allowed philosophers to approach questions about the psychological basis of moral reasoning from novel angles. The initial brain data seems to show that individuals with damage to the pre-frontal lobes tend to reason in more straightforwardly consequentialist fashion than those without such damage (Koenigs et al. 2007). Some theorists take this finding as tending to confirm that fully competent human moral reasoning goes beyond a simple weighing of pros and cons to include assessment of moral constraints (e.g., Wellman & Miller 2008, Young & Saxe 2008). Others, however, have argued that the emotional responses of the prefrontal lobes interfere with the more sober and sound, consequentialist-style reasoning of the other parts of the brain (e.g. Greene 2014). The survey data reveals or confirms, among other things, interesting, normatively loaded asymmetries in our attribution of such concepts as responsibility and causality (Knobe 2006). It also reveals that many of moral theory’s most subtle distinctions, such as the distinction between an intended means and a foreseen side-effect, are deeply built into our psychologies, being present cross-culturally and in young children, in a way that suggests to some the possibility of an innate “moral grammar” (Mikhail 2011).
A final question about the connection between moral motivation and moral reasoning is whether someone without the right motivational commitments can reason well, morally. On Hume’s official, narrow conception of reasoning, which essentially limits it to tracing empirical and logical connections, the answer would be yes. The vicious person could trace the causal and logical implications of acting in a certain way just as a virtuous person could. The only difference would be practical, not rational: the two would not act in the same way. Note, however, that the Humean’s affirmative answer depends on departing from the working definition of “moral reasoning” used in this article, which casts it as a species of practical reasoning. Interestingly, Kant can answer “yes” while still casting moral reasoning as practical. On his view in the Groundwork and the Critique of Practical Reason, reasoning well, morally, does not depend on any prior motivational commitment, yet remains practical reasoning. That is because he thinks the moral law can itself generate motivation. (Kant’s Metaphysics of Morals and Religion offer a more complex psychology.) For Aristotle, by contrast, an agent whose motivations are not virtuously constituted will systematically misperceive what is good and what is bad, and hence will be unable to reason excellently. The best reasoning that a vicious person is capable of, according to Aristotle, is a defective simulacrum of practical wisdom that he calls “cleverness” (Nicomachean Ethics 1144a25).
Moral considerations often conflict with one another. So do moral principles and moral commitments. Assuming that filial loyalty and patriotism are moral considerations, then Sartre’s student faces a moral conflict. Recall that it is one thing to model the metaphysics of morality or the truth conditions of moral statements and another to give an account of moral reasoning. In now looking at conflicting considerations, our interest here remains with the latter and not the former. Our principal interest is in ways that we need to structure or think about conflicting considerations in order to negotiate well our reasoning involving them.
One influential building-block for thinking about moral conflicts is W. D. Ross’s notion of a “prima facie duty”. Although this term misleadingly suggests mere appearance – the way things seem at first glance – it has stuck. Some moral philosophers prefer the term “pro tanto duty” (e.g., Hurley 1989). Ross explained that his term provides “a brief way of referring to the characteristic (quite distinct from that of being a duty proper) which an act has, in virtue of being of a certain kind (e.g., the keeping of a promise), of being an act which would be a duty proper if it were not at the same time of another kind which is morally significant.” Illustrating the point, he noted that a prima facie duty to keep a promise can be overridden by a prima facie duty to avert a serious accident, resulting in a proper, or unqualified, duty to do the latter (Ross 1988, 18–19). Ross described each prima facie duty as a “parti-resultant” attribute, grounded or explained by one aspect of an act, whereas “being one’s [actual] duty” is a “toti-resultant” attribute resulting from all such aspects of an act, taken together (28; see Pietroski 1993). This suggests that in each case there is, in principle, some function that generally maps from the partial contributions of each prima facie duty to some actual duty. What might that function be? To Ross’s credit, he writes that “for the estimation of the comparative stringency of these prima facie obligations no general rules can, so far as I can see, be laid down” (41). Accordingly, a second strand in Ross simply emphasizes, following Aristotle, the need for practical judgment by those who have been brought up into virtue (42).
How might considerations of the sort constituted by prima facie duties enter our moral reasoning? They might do so explicitly, or only implicitly. There is also a third, still weaker possibility (Scheffler 1992, 32): it might simply be the case that if the agent had recognized a prima facie duty, he would have acted on it unless he considered it to be overridden. This is a fact about how he would have reasoned.
Despite Ross’s denial that there is any general method for estimating the comparative stringency of prima facie duties, there is a further strand in his exposition that many find irresistible and that tends to undercut this denial. In the very same paragraph in which he states that he sees no general rules for dealing with conflicts, he speaks in terms of “the greatest balance of prima facie rightness.” This language, together with the idea of “comparative stringency,” ineluctably suggests the idea that the mapping function might be the same in each case of conflict and that it might be a quantitative one. On this conception, if there is a conflict between two prima facie duties, the one that is strongest in the circumstances should be taken to win. Duly cautioned about the additive fallacy (see section 2.3), we might recognize that the strength of a moral consideration in one set of circumstances cannot be inferred from its strength in other circumstances. Hence, this approach will still need to rely on intuitive judgments in many cases. But this intuitive judgment will be about which prima facie consideration is stronger in the circumstances, not simply about what ought to be done.
The thought that our moral reasoning either requires or is benefited by a virtual quantitative crutch of this kind has a long pedigree. Can we really reason well morally in a way that boils down to assessing the weights of the competing considerations? Addressing this question will require an excursus on the nature of moral reasons. Philosophical support for this possibility involves an idea of practical commensurability. We need to distinguish, here, two kinds of practical commensurability or incommensurability, one defined in metaphysical terms and one in deliberative terms. Each of these forms might be stated evaluatively or deontically. The first, metaphysical sort of value incommensurability is defined directly in terms of what is the case. Thus, to state an evaluative version: two values are metaphysically incommensurable just in case neither is better than the other nor are they equally good (see Chang 1998). Now, the metaphysical incommensurability of values, or its absence, is only loosely linked to how it would be reasonable to deliberate. If all values or moral considerations are metaphysically (that is, in fact) commensurable, still it might well be the case that our access to the ultimate commensurating function is so limited that we would fare ill by proceeding in our deliberations to try to think about which outcomes are “better” or which considerations are “stronger.” We might have no clue about how to measure the relevant “strength.” Conversely, even if metaphysical value incommensurability is common, we might do well, deliberatively, to proceed as if this were not the case, just as we proceed in thermodynamics as if the gas laws obtained in their idealized form. Hence, in thinking about the deliberative implications of incommensurable values, we would do well to think in terms of a definition tailored to the deliberative context. Start with a local, pairwise form.
We may say that two options, A and B, are deliberatively commensurable just in case there is some one dimension of value in terms of which, prior to – or logically independently of – choosing between them, it is possible adequately to represent the force of the considerations bearing on the choice.
Philosophers as diverse as Immanuel Kant and John Stuart Mill have argued that unless two options are deliberatively commensurable, in this sense, it is impossible to choose rationally between them. Interestingly, Kant limited this claim to the domain of prudential considerations, recognizing moral reasoning as invoking considerations incommensurable with those of prudence. For Mill, this claim formed an important part of his argument that there must be some one, ultimate “umpire” principle – namely, on his view, the principle of utility. Henry Sidgwick elaborated Mill’s argument and helpfully made explicit its crucial assumption, which he called the “principle of superior validity” (Sidgwick 1981; cf. Schneewind 1977). This is the principle that conflict between distinct moral or practical considerations can be rationally resolved only on the basis of some third principle or consideration that is both more general and more firmly warranted than the two initial competitors. From this assumption, one can readily build an argument for the rational necessity not merely of local deliberative commensurability, but of a global deliberative commensurability that, like Mill and Sidgwick, accepts just one ultimate umpire principle (cf. Richardson 1994, chap. 6).
Sidgwick’s explicitness, here, is valuable also in helping one see how to resist the demand for deliberative commensurability. Deliberative commensurability is not necessary for proceeding rationally if conflicting considerations can be rationally dealt with in a holistic way that does not involve the appeal to a principle of “superior validity.” That our moral reasoning can proceed holistically is strongly affirmed by Rawls. Rawls’s characterizations of the influential ideal of reflective equilibrium and his related ideas about the nature of justification imply that we can deal with conflicting considerations in less hierarchical ways than imagined by Mill or Sidgwick. Instead of proceeding up a ladder of appeal to some highest court or supreme umpire, Rawls suggests, when we face conflicting considerations “we work from both ends” (Rawls 1999, 18). Sometimes indeed we revise our more particular judgments in light of some general principle to which we adhere; but we are also free to revise more general principles in light of some relatively concrete considered judgment. On this picture, there is no necessary correlation between degree of generality and strength of authority or warrant. That this holistic way of proceeding (whether in building moral theory or in deliberating: cf. Hurley 1989) can be rational is confirmed by the possibility of a form of justification that is similarly holistic: “justification is a matter of the mutual support of many considerations, of everything fitting together into one coherent view” (Rawls 1999, 19, 507). (Note that this statement, which expresses a necessary aspect of moral or practical justification, should not be taken as a definition or analysis thereof.)
So there is an alternative to depending, deliberatively, on finding a dimension in terms of which considerations can be ranked as “stronger” or “better” or “more stringent”: one can instead “prune and adjust” with an eye to building more mutual support among the considerations that one endorses on due reflection. If even the desideratum of practical coherence is subject to such re-specification, then this holistic possibility really does represent an alternative to commensuration, as the deliberator, and not some coherence standard, retains reflective sovereignty (Richardson 1994, sec. 26). The result can be one in which the originally competing considerations are not so much compared as transformed (Richardson 2018, chap. 1).
Suppose that we start with a set of first-order moral considerations that are all commensurable as a matter of ultimate, metaphysical fact, but that our grasp of the actual strength of these considerations is quite poor and subject to systematic distortions. Perhaps some people are much better placed than others to appreciate certain considerations, and perhaps our strategic interactions would cause us to reach suboptimal outcomes if we each pursued our own unfettered judgment of how the overall set of considerations plays out. In such circumstances, there is a strong case for departing from maximizing reasoning without swinging all the way to the holist alternative. This case has been influentially articulated by Joseph Raz, who develops the notion of an “exclusionary reason” to occupy this middle position (Raz 1990).
“An exclusionary reason,” in Raz’s terminology, “is a second order reason to refrain from acting for some reason” (39). A simple example is that of Ann, who is tired after a long and stressful day, and hence has reason not to act on her best assessment of the reasons bearing on a particularly important investment decision that she immediately faces (37). This notion of an exclusionary reason allowed Raz to capture many of the complexities of our moral reasoning, especially as it involves principled commitments, while conceding that, at the first order, all practical reasons might be commensurable. Raz’s early strategy for reconciling commensurability with complexity of structure was to limit the claim that reasons are comparable with regard to strength to reasons of a given order. First-order reasons compete on the basis of strength; but conflicts between first- and second-order reasons “are resolved not by the strength of the competing reasons but by a general principle of practical reasoning which determines that exclusionary reasons always prevail” (40).
If we take for granted this “general principle of practical reasoning,” why should we recognize the existence of any exclusionary reasons, which by definition prevail independently of any contest of strength? Raz’s principal answer to this question shifts from the metaphysical domain of the strengths that various reasons “have” to the epistemically limited viewpoint of the deliberator. As in Ann’s case, we can see in certain contexts that a deliberator is likely to get things wrong if he or she acts on his or her perception of the first-order reasons. Second-order reasons indicate, with respect to a certain range of first-order reasons, that the agent “must not act for those reasons” (185). The broader justification of an exclusionary reason, then, can consistently be put in terms of the commensurable first-order reasons. Such a justification can have the following form: “Given this agent’s deliberative limitations, the balance of first-order reasons will likely be better conformed with if he or she refrains from acting for certain of those reasons.”
Raz’s account of exclusionary reasons might be used to reconcile ultimate commensurability with the structured complexity of our moral reasoning. Whether such an attempt could succeed would depend, in part, on the extent to which we have an actual grasp of first-order reasons, conflict among which can be settled solely on the basis of their comparative strength. Our consideration, above, of casuistry, the additive fallacy, and deliberative incommensurability may combine to make it seem that only in rare pockets of our practice do we have a good grasp of first-order reasons, if these are defined, à la Raz, as competing only in terms of strength. If that is right, then we will almost always have good exclusionary reasons to reason on some other basis than in terms of the relative strength of first-order reasons. Under those assumptions, the middle way that Raz’s idea of exclusionary reasons seems to open up would more closely approach the holist’s.
The notion of a moral consideration’s “strength,” whether put forward as part of a metaphysical picture of how first-order considerations interact in fact or as a suggestion about how to go about resolving a moral conflict, should not be confused with the bottom-line determination of whether one consideration, and specifically one duty, overrides another. In Ross’s example of conflicting prima facie duties, someone must choose between averting a serious accident and keeping a promise to meet someone. (Ross chose the case to illustrate that an “imperfect” duty, or a duty of commission, can override a strict, prohibitive duty.) Ross’s assumption is that all well brought-up people would agree, in this case, that the duty to avert serious harm to someone overrides the duty to keep such a promise. We may take it, if we like, that this judgment implies that we consider the duty to avert the harm, here, to be stronger than the duty to keep the promise; but in fact this claim about relative strength adds nothing to our understanding of the situation, for we do not reach our practical conclusion in this case by determining that the duty to avert the harm is stronger. The statement that this duty is here stronger is simply a way to embellish the conclusion that of the two prima facie duties that here conflict, it is the one that states the all-things-considered duty. To be “overridden” is just to be a prima facie duty that fails to generate an actual duty because another prima facie duty that conflicts with it – or several of them that do – does generate an actual duty. Hence, the judgment that some duties override others can be understood just in terms of their deontic upshots and without reference to considerations of strength. To confirm this, note that we can say, “As a matter of fidelity, we ought to keep the promise; as a matter of beneficence, we ought to avert the harm; we cannot do both; and both categories considered we ought to avert the harm.”
Understanding the notion of one duty overriding another in this way puts us in a position to take up the topic of moral dilemmas. Since this topic is covered in a separate article, here we may simply take up one attractive definition of a moral dilemma. Sinnott-Armstrong (1988) suggested that a moral dilemma is a situation in which the following are true of a single agent:

(1) The agent morally ought to do A.
(2) The agent morally ought to do B.
(3) The agent cannot do both A and B.
(4) Neither of these two oughts is overridden by the other.
This way of defining moral dilemmas distinguishes them from the kind of moral conflict, such as Ross’s promise-keeping/accident-prevention case, in which one of the duties is overridden by the other. Arguably, Sartre’s student faces a moral dilemma. Making sense of a situation in which neither of two duties overrides the other is easier if deliberative commensurability is denied. Whether moral dilemmas are possible will depend crucially on whether “ought” implies “can” and whether any pair of duties such as those comprised by (1) and (2) implies a single, “agglomerated” duty that the agent do both A and B. If either of these purported principles of the logic of duties is false, then moral dilemmas are possible.
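The role these two principles play can be made explicit in a short deontic-logic sketch (the notation – O for “ought,” ◇ for “can” – is standard deontic logic rather than anything introduced in this article):

```latex
% Suppose, for reductio, a situation of the kind just defined:
\begin{align*}
&\text{(i) } O A \qquad \text{(ii) } O B \qquad
 \text{(iii) } \neg\Diamond(A \wedge B)\\
&\text{Agglomeration: } (O A \wedge O B) \rightarrow O(A \wedge B)
 \quad\Rightarrow\quad O(A \wedge B) \ \text{from (i), (ii)}\\
&\text{``Ought'' implies ``can'': } O\varphi \rightarrow \Diamond\varphi
 \quad\Rightarrow\quad \Diamond(A \wedge B),\ \text{contradicting (iii)}
\end{align*}
```

If both principles hold, then no situation can satisfy (i)–(iii) together; so anyone who takes genuine moral dilemmas to be possible must reject at least one of the two, which is just the point made above.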
Jonathan Dancy has well highlighted a kind of contextual variability in moral reasons that has come to be known as “reasons holism”: “a feature that is a reason in one case may be no reason at all, or an opposite reason, in another” (Dancy 2004). To adapt one of his examples: while there is often moral reason not to lie, when playing liar’s poker one generally ought to lie; otherwise, one will spoil the game (cf. Dancy 1993, 61). Dancy argues that reasons holism supports moral particularism of the kind discussed in section 2.2, according to which there are no defensible moral principles. Taking this conclusion seriously would radically affect how we conduct our moral reasoning. The argument’s premise of holism has been challenged (e.g., Audi 2004, McKeever & Ridge 2006). Philosophers have also challenged the inference from reasons holism to particularism in various ways. Mark Lance and Margaret Olivia Little (2007) have done so by exhibiting how defeasible generalizations, in ethics and elsewhere, depend systematically on context. We can work with them, they suggest, by utilizing a skill that is similar to the skill of discerning morally salient considerations, namely the skill of discerning relevant similarities among possible worlds. More generally, John F. Horty has developed a logical and semantic account according to which reasons are defaults and so behave holistically, but there are nonetheless general principles that explain how they behave (Horty 2012). And Mark Schroeder has argued that our holistic views about reasons are actually better explained by supposing that there are general principles (Schroeder 2011).
This excursus on moral reasons suggests that there are a number of good reasons why reasoning about moral matters might not simply reduce to assessing the weights of competing considerations.
If we have any moral knowledge, whether concerning general moral principles or concrete moral conclusions, it is surely very imperfect. What moral knowledge we are capable of will depend, in part, on what sorts of moral reasoning we are capable of. Although some moral learning may result from the theoretical work of moral philosophers and theorists, much of what we learn with regard to morality surely arises in the practical context of deliberation about new and difficult cases. This deliberation might be merely instrumental, concerned only with settling on means to moral ends, or it might be concerned with settling those ends. There is no special problem about learning what conduces to morally obligatory ends: that is an ordinary matter of empirical learning. But by what sorts of process can we learn which ends are morally obligatory, or which norms morally required? And, more specifically, is strictly moral learning possible via moral reasoning?
Much of what was said above with regard to moral uptake applies again in this context, with approximately the same degree of dubiousness or persuasiveness. If there is a role for moral perception or for emotions in agents’ becoming aware of moral considerations, these may function also to guide agents to new conclusions. For instance, it is conceivable that our capacity for outrage is a relatively reliable detector of wrong actions, even novel ones, or that our capacity for pleasure is a reliable detector of actions worth doing, even novel ones. (For a thorough defense of the latter possibility, which intriguingly interprets pleasure as a judgment of value, see Millgram 1997.) Perhaps these capacities for emotional judgment enable strictly moral learning in roughly the same way that chess-players’ trained sensibilities enable them to recognize the threat in a previously unencountered situation on the chessboard (Lance and Tanesini 2004). That is to say, perhaps our moral emotions play a crucial role in the exercise of a skill whereby we come to be able to articulate moral insights that we have never before attained. Perhaps competing moral considerations interact in contextually specific and complex ways much as competing chess considerations do. If so, it would make sense to rely on our emotionally guided capacities of judgment to cope with complexities that we cannot model explicitly, but also to hope that, once having been so guided, we might in retrospect be able to articulate something about the lesson of a well-navigated situation.
A different model of strictly moral learning puts the emphasis on our after-the-fact reactions rather than on any prior, tacit emotional or judgmental guidance: the model of “experiments in living,” to use John Stuart Mill’s phrase (see Anderson 1991). Here, the basic thought is that we can try something and see if “it works.” For this to be an alternative to empirical learning about what causally conduces to what, it must be the case that we remain open as to what we mean by things “working.” In Mill’s terminology, for instance, we need to remain open as to what are the important “parts” of happiness. If we are, then perhaps we can learn by experience what some of them are – that is, what are some of the constitutive means of happiness. These paired thoughts, that our practical life is experimental and that we have no firmly fixed conception of what it is for something to “work,” come to the fore in Dewey’s pragmatist ethics (see esp. Dewey 1967 [1922]). This experimentalist conception of strictly moral learning is brought to bear on moral reasoning in Dewey’s eloquent characterizations of “practical intelligence” as involving a creative and flexible approach to figuring out “what works” in a way that is thoroughly open to rethinking our ultimate aims.
Once we recognize that moral learning is a possibility for us, we can recognize a broader range of ways of coping with moral conflicts than was canvassed in the last section. There, moral conflicts were described in a way that assumed that the set of moral considerations, among which conflicts were arising, was to be taken as fixed. If we can learn, morally, however, then we probably can and should revise the set of moral considerations that we recognize. Often, we do this by re-interpreting some moral principle that we had started with, whether by making it more specific, making it more abstract, or in some other way (cf. Richardson 2000 and 2018).
So far, we have mainly been discussing moral reasoning as if it were a solitary endeavor. This is, at best, a convenient simplification. At worst, it is, as Jürgen Habermas has long argued, deeply distorting of reasoning’s essentially dialogical or conversational character (e.g., Habermas 1984; cf. Laden 2012). In any case, it is clear that we often do need to reason morally with one another.
Here, we are interested in how people may actually reason with one another – not in how imagined participants in an original position or ideal speech situation may be said to reason with one another, which is a concern for moral theory, proper. There are two salient and distinct ways of thinking about people morally reasoning with one another: as members of an organized or corporate body that is capable of reaching practical decisions of its own; and as autonomous individuals working outside any such structure to figure out with each other what they ought, morally, to do.
The nature and possibility of collective reasoning within an organized collective body has recently been the subject of some discussion. Collectives can reason if they are structured as an agent. This structure might or might not be institutionalized. In line with the gloss of reasoning offered above, which presupposes being guided by an assessment of one’s reasons, it is plausible to hold that a group agent “counts as reasoning, not just rational, only if it is able to form not only beliefs in propositions – that is, object-language beliefs – but also beliefs about propositions” (List and Pettit 2011, 63). As List and Pettit have shown (2011, 109–113), participants in a collective agent will unavoidably have incentives to misrepresent their own preferences in conditions involving ideologically structured disagreements where the contending parties are oriented to achieving or avoiding certain outcomes – as is sometimes the case where serious moral disagreements arise. In contexts where what ultimately matters is how well the relevant group or collective ends up faring, “team reasoning” that takes advantage of orientation towards the collective flourishing of the group can help it reach a collectively optimal outcome (Sugden 1993, Bacharach 2006; see entry on collective intentionality). Where the group in question is smaller than the set of all persons, however, such a collectively prudential focus is distinct from a moral focus and seems at odds with the kind of impartiality typically thought distinctive of the moral point of view.
Thinking about what a “team-orientation” to the set of all persons might look like might bring us back to thoughts of Kantian universalizability; but recall that here we are focused on actual reasoning, not hypothetical reasoning. With regard to actual reasoning, even if individuals can take up such an orientation towards the “team” of all persons, there is serious reason, highlighted by another strand of the Kantian tradition, for doubting that any individual can aptly surrender their moral judgment to any group’s verdict (Wolff 1998).
This does not mean that people cannot reason together, morally. It suggests, however, that such joint reasoning is best pursued as a matter of working out together, as independent moral agents, what they ought to do with regard to an issue on which they have some need to cooperate. Even if deferring to another agent’s verdict as to how one morally ought to act is off the cards, it is still possible that one may licitly take account of the moral testimony of others (for differing views, see McGrath 2009, Enoch 2014).
In the case of independent individuals reasoning morally with one another, we may expect that moral disagreement provides the occasion rather than an obstacle. To be sure, if individuals’ moral disagreement is very deep, they may not be able to get this reasoning off the ground; but as Kant’s example of Charles V and his brother each wanting Milan reminds us, intractable disagreement can arise also from disagreements that, while conceptually shallow, are circumstantially sharp. If it were true that clear-headed justification of one’s moral beliefs required seeing them as being ultimately grounded in a priori principles, as G.A. Cohen argued (Cohen 2008, chap. 6), then room for individuals to work out their moral disagreements by reasoning with one another would seem to be relatively restricted; but whether the nature of (clear-headed) moral grounding is really so restricted is seriously doubtful (Richardson 2018, §9.2). In contrast to what such a picture suggests, individuals’ moral commitments seem sufficiently open to being re-thought that people seem able to engage in principled – that is, not simply loss-minimizing – compromise (Richardson 2018, §8.5).
What about the possibility that the moral community as a whole – roughly, the community of all persons – can reason? This possibility does not raise the kind of threat to impartiality that is raised by the team reasoning of a smaller group of people; but it is hard to see it working in a way that does not run afoul of the concern about whether any person can aptly defer, in a strong sense, to the moral judgments of another agent. Even so, a residual possibility remains, which is that the moral community can reason in just one way, namely by accepting or ratifying a moral conclusion that has already become shared in a sufficiently inclusive and broad way (Richardson 2018, chap. 7).
agency: shared | intentionality: collective | moral dilemmas | moral particularism | moral particularism: and moral generalism | moral relativism | moral skepticism | practical reason | prisoner’s dilemma | reflective equilibrium | value: incommensurable
The author is grateful for help received from Gopal Sreenivasan and the students in a seminar on moral reasoning taught jointly with him, to the students in a more recent seminar in moral reasoning, and, for criticisms received, to David Brink, Margaret Olivia Little and Mark Murphy. He welcomes further criticisms and suggestions for improvement.
The Stanford Encyclopedia of Philosophy iscopyright © 2023 byThe Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054