Among the many questions that arise in the attempt to come to philosophical grips with morality is what role, if any, moral principles have to play. Moral generalists think morality is best understood in terms of moral principles; moral particularists deny this. To many people, ordinary moral practice seems suffused with principles (keep your promises; do not steal; do unto others as you would have them do unto you). To many moral theorists, the central task of moral theory has been to articulate and defend moral principles, or, perhaps, a single ultimate moral principle (maximize impersonal happiness; act only on maxims that can be willed as universal law). The debate between particularists and generalists thus has the potential to force a reassessment of both moral theory and moral practice.
This characterization of the debate is so far too impressionistic to provide a tractable framework for philosophical inquiry. The literature reveals many ways to sharpen the debate, and sharpening is indeed needed. But both generalism and particularism are best seen as intellectual traditions in moral philosophy, each of which has a number of distinct but related strands. This article attempts to disentangle some of those strands, with the most attention being given to recent stages of this debate.
The arguments for and against both particularism and generalism are also diverse, arising from metaphysics, epistemology, normative theory and the philosophy of language. These arguments also interact in interesting ways with other debates in moral philosophy. Finally, it is very much an open and interesting question to what extent other areas of philosophy (e.g., the philosophy of language and epistemology) can usefully draw on ideas developed in the debate between moral particularists and moral generalists.
More recently, these debates have been juxtaposed with non-Western traditions. For example, there has been some discussion of whether some central works in Taoism presuppose particularism (Fraser 2022). There has also been some discussion of how these debates interact with the pragmatist tradition (Jackson 2017).
Aristotle might reasonably be characterized as the “forefather” of particularism. Aristotle famously emphasizes that ethical inquiry is mistaken if it aims for “a degree of exactness” too great for its subject matter, and adds that moral generalizations can hold only “for the most part”. Moreover, Aristotle tirelessly emphasizes that ethics ultimately concerns particular cases, that no theory can fully address them all, and that “judgment depends on perception” (NE, 1109b). These ideas have all deeply inspired contemporary particularists (John McDowell is a prominent case, though he does not tend to label himself as a particularist; see McDowell 1981, 1998). Whether Aristotle should ultimately be interpreted as a particularist is a matter of debate (Irwin 2000; Leibovitz 2013).
Interestingly, no single major historic figure is most obviously characterized as the “forefather” of generalism. This is presumably because the most important historic generalists in effect defended generalism by defending specific moral theories or principles. The two most important traditions here are the deontological tradition, which owes so much to Kant, and the consequentialist tradition, which owes so much to the British utilitarians (Bentham, Mill and Sidgwick). Nonetheless, each of these traditions substantially enriched the generalist approach with a wealth of ideas and distinctions which need not be restricted to the theories in which they were originally formulated.
The Kantian tradition puts enormous weight on the idea that morality must be principled and that the ultimate principle of morality must be one we can know a priori. According to Kant, the moral law, as applied to imperfect agents who are subject to temptation, provides what he called a “categorical imperative”—an imperative whose rational authority is not dependent on the agent’s contingent ends. Kant provided several formulations of the categorical imperative. The so-called “universal law” formulation holds that one must always act so that one’s maxim could at the same time be willed as a universal law. The humanity formulation holds that one must always act so as to treat humanity, whether in one’s own person or that of another, always as an end and never merely as a means. The Kantian tradition emphasizes common sense moral ideas like respect and dignity, and provides a distinctive interpretation of the role of universalizability in moral thought. On some readings of Kant, the moral law must itself be constitutive of being a rational agent at all. This idea has, in turn, been enormously influential, especially in the late twentieth and early twenty-first century.
Consequentialism enriched the generalist framework in other ways. Most notably, perhaps, consequentialists have often distinguished between two very different kinds of principles, corresponding to two rather different roles they may play. On the one hand, there are principles—call them “standards”—which provide the deepest explanation of why certain actions are right or wrong. On the other hand, there are the principles which ordinary agents ought to follow in their day-to-day deliberations. Such principles are “guides”. Consider a simple analogy with the stock market. The principle which explains what counts as success might simply be “buy low and sell high”, but this principle is woefully inadequate as a guide to making investment decisions in real time. A principle like “have a diversified portfolio” seems much more suitable for the latter role.
Each of these traditions (the Kantian and the consequentialist) faces a number of prima facie powerful objections. It is therefore perhaps not surprising that there was ultimately a reaction against the broader generalist aspirations that these theories embodied. On some readings, one of the earliest particularists in the modern sense was Ewing, who in The Morality of Punishment (1929) argued that consequentialism and deontology were the only plausible principled conceptions of morality, that neither was defensible, and that morality was therefore not principled (cf. Lind & Brännmark 2008 interview with Dancy, who explicitly characterizes Ewing in this way at p. 10).
Just one year after Ewing in effect defended a fairly radical form of moral particularism, W.D. Ross argued for a more moderate form. Ross occupies a very interesting place in the history of particularism, as he has served as both an inspiration and a foil to modern particularists. Ross put forward a battery of “prima facie duties” specifying types of conduct—for example, acts of gratitude—that are always, in some sense, obligatory. The obligation in question need not be an all-things-considered one, however, since a conflicting prima facie duty might, in the circumstances, be more important.
Setting aside whether Ross thought anything theoretically useful could be said about how to adjudicate conflicts of prima facie duty, he did not think it useful to try to formulate exceptionless principles with regard to all-things-considered duty (cf. Postow 2006). Ross thus appears to be a generalist about prima facie duty, but a defender of particularism about overall duty. Some contemporary particularists, however, insist on going beyond Ross and casting doubt even on principles of prima facie duty, or principles specifying which considerations are pro tanto (or “contributory” in Jonathan Dancy’s terminology) reasons.
Jonathan Dancy has done more than anyone to articulate and defend an especially radical form of particularism. Although Ross was both a foil and an inspiration for Dancy, R.M. Hare was a more immediate opponent. Hare’s prescriptivism drew on ideas from both the Kantian and the consequentialist traditions. Hare defended a strong form of universalizability which can be traced to Kant, but Hare then argued that universalizability lent support not to a deontological moral theory, but to a form of consequentialism. Indeed, it led to a form of consequentialism which emphasized the distinction between standards and guides (cf. Hare 1963).
In the introduction of Moral Reasons, Dancy summarizes his conclusions as the “mirror image” of Hare’s. Perhaps most notably, Dancy objected to an idea which he took to be implicit in Hare’s universalizability principle: that if a consideration is a reason in one context then it is a reason with the same valence in any possible context in which it occurs. (This reading of Hare is open to objection. See McNaughton and Rawling (2000) for discussion.) Dancy calls this idea, which he also attributes to Ross, “atomism” in the theory of reasons and argues against it and in favour of what he calls “holism”.
As Dancy’s early work came to fruition, it inspired his then-colleague David McNaughton to advance distinct but complementary arguments for particularism. McNaughton was also heavily influenced by the work of John McDowell, who had argued that it was an advantage of his own brand of moral realism that it did not presuppose generalism (see McDowell 1981; see also Blackburn 1981). In Moral Vision, McNaughton defended a form of moral realism which he argued lent support to particularism. He also argued that particularism better accounts for moral conflict, fits reasonably well with ordinary practice, and can explain why we might reasonably be suspicious of the very idea of a moral expert.
The work of Dancy and McNaughton inspired a host of other philosophers to carry forward the particularist research programme, sometimes in rather different directions. This eventually led to a wide variety of views all going under the heading of “particularism”. Nor were the challenges posed by these many moral particularisms ignored by those with more generalist sympathies. Woken from their generalist slumbers, they began to develop arguments for generalism which did not depend on the correctness of any particular moral principle(s). This generated a healthy debate, the contours of which the rest of this entry will outline.
Particularists are united in their opposition to moral principles and generalists are united in their allegiance to them. What, though, is a moral principle? At least three conceptions of principles are worth distinguishing. First, there are principles qua standards. Standards purport to offer explanations of why given actions are right or wrong, why a given consideration is a reason with a certain valence and weight, why a given character trait is a virtue, and the like. An especially robust metaphysical spin on this conception understands standards as being truth-makers for moral propositions (cf. Armstrong 2004). Second, there are principles qua guides. These purport to be well suited to guiding action. Third, there are principles purporting to play both of these roles simultaneously—action-guiding standards.
It is not hard to see these different conceptions of moral principles at work in the history of moral philosophy. In the utilitarian tradition in moral philosophy, the principle of utility (however it is formulated) is characteristically understood as a standard. Even if some politically minded utilitarians see advantages to using the principle of utility to guide public choice and justification, moral philosophers tend to follow Mill in thinking that the principle is seldom apt for use in individual moral decision-making. They thus deny that it should be understood as a principle qua guide. Indeed, some utilitarians go further and argue that the principle of utility is self-effacing, in the sense that it recommends its own rejection (cf. Railton 1984; see also Parfit 1984). Utilitarians instead hold that various maxims of common sense morality should be understood as heuristics which work well enough for normal human beings, so there is room in this picture for principles qua guides.
Kant, on the other hand, seems to have understood the categorical imperative as a kind of action-guiding standard. Kant’s discussion of examples in the Groundwork (1785) and his characterization of the Formula of Universal Law as an appropriate method for testing our maxims make clear that he thinks of it as appropriately guiding the actions of the morally virtuous agent. Equally clearly, the categorical imperative is to be understood as the most fundamental explanation of why given actions are right or wrong, and so also counts as a principle qua standard. On a constitutivist reading, the categorical imperative is meant to play both of these roles in virtue of its constituting our rational agency. Finally, it is worth noting in this context that a principle can function usefully as a guide even if its application requires judgment and sensitivity; principles qua guides need not be algorithmic.
Principles can also be distinguished in terms of their scope. Some principles have purely non-moral antecedents (e.g., the principle of utility), whereas others use moral concepts in both their antecedents and consequents (e.g., “if an action is just then it is morally permissible”). Finally, principles can be distinguished in terms of whether they are in some sense “hedged”, including a ceteris paribus clause of some kind (e.g., “other things equal, lying is wrong”), or unhedged.
One might be a particularist or a generalist about moral principles understood in any of these ways. Whether being a particularist or generalist about principles in one sense drives one to be a particularist or a generalist about principles in another sense is not a trivial question. Further complicating matters, there is more than one way to oppose principles (however those principles are conceived). Last, the form a particularist’s opposition takes might reasonably vary across different types of principle. Let us now review different ways one might oppose principles.
The simplest form of opposition, Principle Eliminativism, simply denies that there are any moral principles. Of course, it must be borne in mind here and below that a principle eliminativist may deny that there are any principles of one sort, while allowing for principles of another sort. For example, one might be an eliminativist about principles purporting to give the application conditions for moral predicates in entirely non-normative terms (McNaughton 1988) or an eliminativist about exceptionless principles (Little 2000). Principle Scepticism holds, more modestly, that we do not have sufficient reason to believe there are any moral principles. Principled Particularism holds that while any given moral truth is explained by a moral principle, no finite set of moral principles can explain all the moral truths (Holton 2002). Anti-Transcendental Particularism, which at one point at least was Dancy’s favoured gloss of the view, holds that moral thought and judgment do not depend on the supply of a suitable stock of moral principles. Finally, Principle Abstinence asserts a more practical opposition to moral principles, holding that we ought not be guided by moral principles. For each of these forms of particularism, there is a corresponding form of generalism which is simply the denial of the particularist thesis in question.
Although this taxonomy entails that the logical space for particularist (and generalist) views is wide and heterogeneous, it would be a mistake to assume that all of the positions which can be derived from a matrix constructed on the basis of these distinctions really are distinct in any deep way. For example, Principle Eliminativism about principles qua guides arguably is equivalent to Principle Abstinence about principles tout court.
The debate between particularists and generalists is often framed in metaphysical terms. In this guise, the debate primarily concerns principles conceived as standards. Moral principles might then be thought of as true and law-like generalizations about moral properties or, alternatively, as nomic regularities involving moral properties. To be clear, generalizations in the relevant sense need not actually be instantiated to be true; plausibly, the moral law would still be true in a world with no agents and hence with no right or wrong actions. This view has been developed, in different ways, by McKeever and Ridge (2006), Väyrynen (2006, 2009) and Lance and Little (2007), though it is not universally accepted (see Robinson 2008, 2011). Thus understood, particularism is often taken to have an affinity with non-reductive or non-naturalist views in ethics. Furthermore, it has been noted that contemporary particularism arose alongside a resurgence of interest in non-naturalism (cf. Little 1994). To a degree, this is understandable. After all, some naturalist views appear to entail generalism. A form of reductive naturalism according to which being morally right is metaphysically identical with being an act which maximizes human happiness appears to entail a robust utilitarian principle and so, a fortiori, to entail generalism. Furthermore, if one denies any kind of reduction of moral properties to natural properties then it becomes more difficult to see how any informative statements connecting moral and non-moral properties could be sufficiently law-like to count as principles. Nevertheless, one ought not to take it as given that non-naturalists must be particularists or that naturalists must be generalists. Both classic and contemporary non-naturalists have endeavored to defend principles (Moore 1922 [1903]; Shafer-Landau 2003).
Furthermore, there may be room for particularists to embrace naturalism by claiming that particular reasons are always grounded in some particular natural property instance, while maintaining particularism by claiming that there are no law-like generalizations connecting moral and non-moral property types. Perhaps because the commitments and resources of non-naturalist and naturalist views in metaethics (and even how properly to distinguish these views from each other) remain contested, one cannot uncontroversially map the generalism/particularism debate onto the naturalism/non-naturalism debate.
Non-naturalism would seem to have less bearing on a further question: whether there are moral principles connecting one moral property to another. This issue is at play in Ross’s rejection of Moore’s claim that the right action is the action which maximizes the good. Even if Ross relied on Moore’s own “open question” strategy to challenge Moore’s utilitarian principle, it is nevertheless the case that Moore’s principle is at least consistent with non-naturalism. More recently, philosophers sympathetic to particularism have divided over the availability of intra-moral principles. Some are content to allow that a claim such as “the fact that an action is just is a reason in its favour” is true and informative (cf. McNaughton and Rawling 2000). Others propose a more radical particularism according to which even intra-moral principles—that is, principles linking one moral concept with another—are unavailable (cf. Dancy 2004: ch. 7).
Naturalists and non-naturalists typically share a commitment to supervenience. Roughly put, supervenience says that, necessarily, there can be no moral difference without some natural (or non-moral) difference. There are various ways to interpret supervenience and its metaphysical significance. So long as we reject a global error theory, though, supervenience seems to guarantee some necessarily true universal generalizations involving moral predicates.
Nevertheless, it is generally agreed by all sides that such “supervenience functions” should not count as moral principles. The grounds offered for this are various, but include that such generalizations contain much potentially irrelevant information, that they are massively disjunctive, and that they lack explanatory import (cf. Little 2000; Dancy 2004; McKeever and Ridge 2005). The idea is that to be a successful moral principle (qua standard) requires more than a true or even necessary connection between the descriptive and the moral. The connection must be explanatory and must not cite irrelevant features in the antecedent either. Even those who gesture at supervenience in mounting arguments for generalism concede that a successful argument requires significant additional semantic or epistemological premises (see below, and cf. Jackson, Pettit, & Smith 2000).
More recently, the metaphysical status of principles (as standards) has been taken up by philosophers exploring the metaphysical relation of grounding. These philosophers agree that moral principles must be law-like and that they must be metaphysically explanatory, but they defend importantly different positions on the specific role principles play. On one view (Rosen 2017a, 2017b; Enoch 2019), we should allow that moral principles themselves serve as the partial grounds for particular moral facts. By analogy, one might think that a particular legal fact is partially grounded in non-legal facts, say facts about what someone did, while also being partially grounded in legal facts to the effect that conduct such as that has a certain legal status. Similarly, entirely non-moral facts or properties would not be sufficient to ground particular moral facts. A moral principle, or in Rosen’s terminology a bridge law, is needed. If this bridge law is not itself grounded in non-moral facts, then this variety of generalism constitutes a form of non-naturalism. One route for generalists to avoid any commitment to non-naturalism is to adopt a more deflationary account of moral principles according to which moral principles do not serve as even the partial grounds of particular moral facts. On this view, developed and defended by Berker (2019), moral principles are “explanation serving” because they refer to the non-moral properties that themselves fully ground particular moral facts.
While particularism has strong affinities with non-naturalism, the most prominent argument for particularism—the argument from the holism of reasons—has proceeded from a more targeted and specific claim about the metaphysics of moral reasons (see McNaughton 1988; Dancy 1993; Little 2000). According to holism about reasons, a consideration that counts as a reason in one case may not count as a reason in another case, or may count as a reason but in a different direction. By way of illustration, the fact that a remark would be funny might be, in one case, a reason for making the remark, in another case a reason against making the remark, and in still another case no reason at all. In short, it depends on context. Importantly, holism is meant to be a universal and modal claim. It says that for any consideration that is a reason, it is possible that that consideration might behave differently in another case. Thus understood, holism is consistent with the possibility that some considerations are, as a matter of fact, reasons (of the same force and direction) in every case. Holism also seems to presuppose that the considerations that are reasons are not brutely unique; the insight of holism—if it is an insight—is built upon the thought that a consideration which is a reason in one case is repeatable in other cases. Only if, for example, the funniness of a remark is a consideration that is repeatable can we say, as the holist wants us to, that the funniness of a remark is a reason in one case but is not a reason in another.
Those attracted to holism about reasons agree that it is typically specific elements of context that further account for whether a consideration counts as a reason, and a rich language for characterizing context has been developed. For example, a putative reason might be defeated, enabled, or intensified by specific elements of the context. To continue the previous example, the fact that a genuinely funny remark would also be offensive might defeat whatever reason-giving force the humor might otherwise have had. On this reading, the humor of the remark is no reason at all, not just a reason that is outweighed. To vary the case again, the fact that one’s audience will appreciate a (non-offensive) funny remark enables the humor to be a reason. Here the humor is a reason, but only against the background of a receptive audience; the background functions as an “enabler”. Finally, the fact that a funny remark will break an unduly somber mood may intensify the force of humor as a reason. Humor itself is especially apt, but only because the mood is unduly somber. Other factors could function as attenuators, weakening the force of a reason (see Dancy 2004: ch. 5).
Holism depends crucially on the sustainability of the distinction between the particular considerations that count as reasons and the contextual factors (defeaters, enablers, etc.) that impact whether a consideration counts as a reason. Context-sensitivity without such a distinction would be unable to explain why a feature which is a reason in one context can fail to be a reason in another. Moreover, we need to know why the relevant features should not be “hoovered up” into the content of the reasons themselves. After all, atomists need not be simple atomists. A hedonist who held that pleasure and pain are both always reasons and the only reasons would certainly count as an atomist. But atomists can allow for significant pluralism and complexity. One way to do so is to insist that the considerations a holist calls, variously, reasons, defeaters, enablers, and so on are all just parts of a larger, more complex “whole” reason. Such views have been proposed by Bennett (1995), Crisp (2000), Hooker (2003) and Raz (2000), and rejected by Dancy (2004: 6.2). One worry about the atomists’ appeal to whole reasons is that if reasons are identified with large complexes of facts, then the same reason may seldom recur across cases and the claim that agents act for reasons may fall under threat (Price 2013).
Setting aside whether holism is true, does it support particularism? Generalists have rejected the inference on the grounds that holism leaves open the possibility that the behaviour of reasons, defeaters, enablers, and intensifiers/attenuators is codifiable (see Väyrynen 2004; McKeever and Ridge 2005). They also observe that some paradigmatic generalists seem to have exploited this logical space. For example, Kant arguably thinks that the fact that a course of action would advance someone’s happiness is of variable moral significance, counting in favor whenever the happiness and its purchase is consistent with the categorical imperative and counting as no reason at all otherwise (McKeever and Ridge 2005). Particularists have countered that even if holism is logically consistent with principles, it would nevertheless render them “cosmic accidents” (Dancy 2004: 82). If this were right, it would be enough to cast doubt on the generalist tradition in moral philosophy. Why should the heart of a discipline be the search for cosmic accidents?! Generalists counter that whether principles are cosmic accidents depends entirely upon underlying metaphysical issues and not on whether principles tolerate holism. For example, if the property of being good is identical to the property of being non-malicious pleasure, then the associated holism-tolerating principle does not look to be a cosmic accident (see McKeever and Ridge 2006: 2.2).
Selim Berker (2007) has challenged the particularist argument from holism in another way. He argues that the conjunction of holism with what he calls the particularist’s “noncombinatorialism” about the ways reasons combine to fix an overall verdict leaves the particularist with no coherent notion of a reason for action. To understand noncombinatorialism, one must first understand the idea of a “combinatorial function”. A combinatorial function takes as input all the reasons and their valences in a given situation and gives the rightness or wrongness of actions in that situation as an output. Noncombinatorialism simply asserts that the combinatorial function for reasons cannot be finitely expressed (and so, in particular, is not additive). Berker argues that particularists are committed to noncombinatorialism, but that this leaves them with no coherent notion of a reason for action. Particularists typically gloss being a reason as “counting in favour” of that for which the consideration is a reason. Berker’s point is that talk of a consideration “counting in favour” of something is itself hard to make intelligible once we abandon a combinatorial conception of how reasons combine to fix an overall verdict. We are left with a metaphor that cannot be cashed out in any helpful way. Particularists could of course abandon the noncombinatorial conception of reasons, but Berker argues that this would commit them to the truth of numerous exceptionless principles, thus compromising their particularism. (For critical discussion of Berker’s argument, see Lechler 2012.)
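The contrast at issue can be sketched schematically. What follows is an illustrative formalism only (the notation is not Berker’s own): it shows the simplest finitely expressible combinatorial function, namely an additive one, which noncombinatorialism denies can exist in any form.

```latex
% Schematic illustration (not Berker's notation). Let r_1, ..., r_n be the
% considerations bearing on an action A in a situation, and let w_i be the
% signed weight of r_i: positive when r_i counts in favour of A, negative
% when it counts against. A simple additive combinatorial function is:
V(A) = \sum_{i=1}^{n} w_i,
\qquad A \text{ is right iff } V(A) \geq V(B) \text{ for every alternative } B.
% Noncombinatorialism asserts that no finitely expressible function
% (additive or otherwise) takes the reasons and their valences in a
% situation to the overall verdict.
```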
So far we have focused on generalist replies to metaphysical arguments for particularism. Generalists are not without positive metaphysical arguments for their own views, though. Most notably, so-called “constitutivists” sometimes invoke premises about the metaphysics of rational agency to argue for generalism. Kantian constitutivists are the most influential and clear-cut instance of this style of argument. Christine Korsgaard, for example, argues that the categorical imperative is constitutive of rational agency (Korsgaard 2008, 2009). The rough idea is that the principles of practical reason unify us as agents, and allow us to take control over our representations of the world and our movements (Korsgaard 2008: 9). Insofar as simply being a rational agent commits one to the relevant principle(s), this strategy for defending generalism is also meant to be especially effective at silencing sceptical challenges, e.g., classic “why be moral?” challenges. The thought is that the sceptic has no coherent perspective from which to reject the relevant principles.
Whatever the success of these metaphysical arguments, some particularists have worried that an excessive focus on metaphysics threatens to lead us astray—not to falsehood so much as to misplaced emphasis. The ontological status of moral laws and the grounding of moral reasons ought not to dominate the particularist program. Instead, the particularist should emphasize how their account of moral psychology makes sense of moral development and competence (see Bakhurst 2008, 2013).
Particularists and generalists typically share a commitment to moral knowledge. This common ground is not strictly entailed by either view. For example, proponents of Hare’s universal prescriptivism will insist that moral thought is principled even if, in their rejection of the truth-aptness of moral language, they deny that there is moral knowledge. On the other side, a fictionalist might reject moral knowledge while insisting that the moral fiction is itself devoid of principled structure, just as the particularist insists. Nevertheless, both generalists and particularists do in fact typically see moral thought and judgment as achieving (sometimes) significant success, and in this context the shared commitment to moral knowledge is not surprising.
Particularists and generalists often treat moral knowledge as being on a par with other types of commonly accepted knowledge. Just as we can know that our internet connection is running slow, that the milk is on the verge of going stale, or that our friend is annoyed by the story just told at his expense, so too we can know that it would be wrong to refuse directions to the person who is lost, that our co-worker was courageous to criticize her supervisor, and that the American justice system treats many people unfairly. Because the commitment to moral knowledge is a shared one, many of the arguments both for and against particularism have sought to use it for dialectical leverage. The questions at stake include whether particularism or generalism best explains our capacity to achieve moral knowledge and whether particularism or generalism best models the person who has and uses moral knowledge.
Some moral knowledge, it is agreed, involves the transmission or extension of other moral knowledge already achieved. If Solomon tells me the treaty is unjust, I may know this by relying on his testimony. If every member of the Diogenes Society whom I have met is honest, then I may know that Walter, who is also a member, is honest. Here, I rely on an induction from my other moral knowledge. While highly interesting, these types of knowledge are typically regarded as derivative (Zangwill 2006) and are therefore set aside in arguments over moral particularism. The question is what explains our most basic moral knowledge. How strong an assumption can be made about our moral knowledge while remaining on ground common to both generalists and particularists? Obviously, particularists will not grant that we have knowledge of moral principles, and the point of surest agreement is that we sometimes know the moral status of a particular case, e.g., that this act was wrong. However, most arguments both for and against particularism deploy somewhat stronger assumptions.
First, one may make a stronger assumption about the objects of moral knowledge. Of special interest is the possibility that we can know general truths about morality even while such general truths fall short of counting as moral principles. For example, while particularists will deny that there is any exceptionless moral principle to the effect that pain is bad, many sympathetic to particularism would agree that, as a general matter, pain is bad and that we can know this. On a deflationary reading, one might treat the claim that pain is bad as a useful heuristic, a reminder that pain has often been bad in the past and may well be so in the future (Dancy 1993). Alternatively, that pain is bad might capture an interesting metaphysical fact about pain: its default status is the status of being bad. We can understand default status in terms of an explanatory asymmetry. When pain is not bad there must be something that explains why it is not, but when pain is bad there needn’t be any further explanation of what makes pain bad (Dancy 1993, 2004). Finally, one might try to invest such generalizations with real explanatory power while insisting they remain exception-laden. That pain is bad is a kind of defeasible generalization, where this amounts to the claim that pain is bad in a privileged set of worlds (Little 2000; Lance and Little 2004; for critical discussion of each of these possibilities, see Stangl 2006).
Second, one may make a stronger assumption about the scope of moral knowledge, at least for some people. Some people, one may assume, are (or become) especially good at acquiring moral knowledge; they have a measure of practical wisdom or expertise. Their knowledge thus readily extends not just to their actual circumstances but to a broad array of novel circumstances as they arise. In so far as this is so, we should like to have a good explanation not only of how humans acquire specific knowledge but also of how they develop over time into more competent moral knowers (Bakhurst 2005, 2013).
Two models of moral knowledge predominate in defences of particularism: a perceptual model and a skill-based model. According to the perceptual model, successful moral judgment is properly analogous to sense perception even if it is not, literally, a form of sense perception (McDowell 1979, 1985; McNaughton 1988). Moral judgment on this view depends upon a range of sensibilities, developed through experience and acculturation. Once developed, however, one can just “see”, for example, that a certain response is merited by a situation. As John McDowell puts it,
Occasion by occasion, one knows what to do, if one does, not by applying universal principles, but by being a certain kind of person: one who sees situations in a certain distinctive way. (McDowell 1979: 350)
Since sensibilities may be more or less refined, the perceptual model appears to fit well with the idea that there are both moral novices and experts. Whatever further account is to be given of these sensibilities, the resulting knowledge is not dependent upon any deduction from general moral principles, at least not one transparently and readily available to the knower. Generalists have observed that similar perceptual metaphors seem equally apt in domains that admit of principles (McKeever and Ridge 2006: ch. 4). For example, one might just “see” that a sentence is ungrammatical even if grammaticality is rule-governed. Furthermore, some who develop and defend a perceptual model are not led by it to particularism (Audi 2013). So the perceptual model of moral knowledge seems not to establish particularism, but it was likely never meant to. Instead, the perceptual model is intended to offer more indirect support. If our moral experts reliably achieve moral knowledge without self-consciously adverting to moral principles, then the generalist inherits at least some burden to explain why principles should be a centerpiece of moral theorizing.
Two related constraints confront the development of the perceptual model and may threaten the particularism the model is taken to support. The first is that the model must extend to prospective and hypothetical cases. Particularists and generalists typically agree that we sometimes have knowledge that if we were to perform an action (say maintain a confidence) in our actual circumstances, that action would be right. This seems essential if moral knowledge is to precede and guide virtuous conduct. Similarly, if slightly more controversially, we sometimes have knowledge that a certain course of conduct would be right in some hypothetical circumstance. While one might try to account for such cases by appeal to inductive reasoning from past actual cases, this is not, in fact, how proponents of the perceptual model have proceeded. The second constraint arises from the fact that moral properties are grounded in other properties. For example, it barely makes sense to say that an action’s wrongness is a brute fact about it; if wrong, an action must be wrong on account of some otherwise specifiable features it has. While this “because constraint” admits of various explications, the basic idea is common ground and widely thought to be a priori (Zangwill 2006).
Generalists argue that, once spelled out, the perceptual model is not the a posteriori epistemology it might first have seemed but instead commits us to a priori intuitions relating moral features to their grounds. Particularists may grant that basic moral knowledge is a priori knowledge of “what is a moral reason for what” (Dancy 2004: 141) while maintaining that the object of knowledge remains particularized and does not implicate principles. One challenge for the particularist is that “what is a reason for what” in a particular case is contingent, and so the particularist risks being committed to a priori knowledge of contingent truths. In his defense of particularism, Dancy has been willing to embrace this apparent consequence (Dancy 2004; for criticism, see McKeever and Ridge 2006). Other particularists have sought to avoid the implication (Leibowitz 2014).
Particularists sometimes pursue a somewhat different model of moral knowledge, one that likens the practically wise agent to a person who has a developed skill. Just how different this model is from the perceptual one must depend upon how each is spelled out. But whereas the perceptual model directs our attention first to how the virtuous person understands her situation, the skill model draws attention to the knower as agent, someone who classifies actions, agents, and states of affairs as falling under moral categories, who reasons to moral conclusions, and who ultimately puts this knowledge into outward action. How might this skill be understood and, relatedly, how much of an account of it does the particularist owe? Some particularists (sometimes) demur. For example, Dancy described the virtuous agent as someone possessed of a “contentless ability” to discern what matters when it matters (Dancy 1993: 50). But many sympathetic to particularism would want to say more (Garfield 2000; Leibowitz 2014). One might liken the skill of the virtuous person to a behavioral ability, such as the ability to ride a bicycle. While this analogy could prove apt, it risks underrating the extent to which the virtuous person’s action is rich with judgment and reasoning and is not simply a sequence of successful performances.
One way to develop the skill model is to urge that the skill of the virtuous person is akin to the skill of the person with conceptual competence and then to rely on Wittgensteinian rule-following considerations to urge that conceptual competence cannot be fully understood in terms of rules or principles. This approach lends itself to a form of particularism that is less hostile to principles. The claim is not that there are no principles but that practical wisdom cannot be fully reduced to principles (McDowell 1979). A perhaps similar strategy can be pursued by focusing on principles of reasoning and urging that valid principles of reasoning cannot be treated simply as objects of propositional knowledge akin to premises (Carroll 1895; Thomas 2011). One seeming consequence of these strategies is that particularism is not something distinctive of morality and cognate domains. Particularism would be true everywhere we apply concepts or everywhere we reason. Some might welcome such global particularism, but we would have lost what for some was initially attractive—that particularism seemed to identify something distinctive (even if not unique) about morality. A second worry is that particularism may no longer pose the threat to traditional moral theory that is sometimes supposed. If the categorical imperative were shown to have the same status as modus ponens, Kantians could sleep easily.
Another way to develop the skill model could urge that the skill in question is the skill of applying (or reasoning with) generalizations of a certain kind. Here the claim may be that moral principles (or generalizations) require judgment to apply, or are defeasible, or come with implicit ceteris paribus clauses. This commits the particularist to principles of a kind, while also allowing both that morality is importantly distinctive and that some traditional moral theorists have erred by seeking a sort of principle that is not to be found. This path leads to interesting intermediate positions that are certainly more friendly to principles than Dancy’s particularism while at the same time concerned to emphasize the limitations of principles. For example, Richard Holton (2002) suggests that sound moral principles are conditionals containing an implicit “that’s it” clause. The dictum that lying is wrong is then more perspicuously expressed as the claim that if an action amounts to lying and “that’s it”, then the action is wrong. In this context, “that’s it” expresses the idea that no other moral principle, given the facts at hand, supersedes the principle that lying is wrong.
A different but similar idea is developed by Mark Lance and Margaret Little, who advance a model of true but defeasible moral generalizations. Here, the claim that lying is morally wrong is elaborated as the claim that lying is wrong under privileged or normal conditions. Conditions might fail to be privileged for any number of reasons—perhaps because the murderer is at the door looking for your helpful information or, less dramatically, because we might be playing a game in which deception is part of the fun. As this last possibility suggests, Lance and Little’s proposal seems more expansive than Holton’s in so far as they allow that a moral generalization might fail to govern a situation not only when it is superseded by other moral principles but also when the circumstances are such that the point of the moral generalization is simply lost. (See Little 2000 and Lance and Little 2004. For discussion of the skill needed to apply generalizations see Garfield 2000 and Thomas 2011. For discussion of reasoning with defaults see Horty 2007.) Many of Lance and Little’s most effective examples are of generics like “tigers have stripes”. More recently, Ravi Thakral has argued that moral principles just are applications of generics (Thakral forthcoming). His work draws heavily on recent work in the philosophy of language and philosophy of mind on how generics work. Thakral shows the explanatory power of this framework by emphasizing how some generics admit of far more qualification than others. For example, compare ‘mosquitos have wings’ with ‘mosquitos carry West Nile virus’. Thakral makes good use of this range of generics to argue that a moral principle like ‘pleasure is good’ might on his model admit of far fewer exceptions than ‘stealing is wrong’. In a limiting case a moral principle on this approach may even be exceptionless, with ‘torturing babies just for fun is wrong’ being like the generic ‘in chess, bishops move diagonally’.
Thakral also provides some very interesting discussion of how this bears on reasoning with principles, as well as the role of moral principles in explanation, in moral knowledge, and in teaching morality to children.
One interesting question for this approach is whether the skill part of the equation can be further explicated in terms of principles, even if these further principles are grasped only implicitly. This issue has received significant attention from philosophers outside the particularism debate who are interested in the question whether knowledge-how can be reduced to knowledge-that (Ryle 1946; Stanley 2011). Perhaps surprisingly, the literature on particularism has not (to our knowledge) drawn significantly from that debate.
Some generalists, agreeing with particularists that moral knowledge presupposes a sensitivity to the moral landscape and skill in deploying what appear to be ceteris paribus laden principles, argue that such sensitivity and skill are possible only if the landscape itself is sufficiently patterned (McKeever and Ridge 2006). This argument is supposed to allow for holism about reasons, and so the relevant patterning consists in there being a finite number of considerations that can function as reasons, which can be affected by a finite number of enablers and defeaters operating in regular, “principled” ways. Particular pieces of moral knowledge, on this argument, presuppose only “default moral principles” which specify features that ground reasons, ceteris paribus. A full array of exceptionless principles, the argument continues, is presupposed by practical wisdom, characterized as including a capacity for reliably acquiring moral knowledge in a full range of novel circumstances.
Some resist this argument in its entirety on grounds that we can regularly gain knowledge in other areas without recourse to principles (Schroeder 2009). Some charge that the second stage of the argument depends on overly strong assumptions about the extent of practical wisdom (Schroeder 2009), and that more modest forms of practical wisdom can be explained without recourse to exceptionless principles. Some argue that a proper account of hedged moral principles is enough (Väyrynen 2009); others prefer to see moral wisdom as a skill which, while wide-ranging, can fail in utterly novel circumstances (Leibowitz 2014). Still others have worried that the argument relies on inflated assumptions about what is required for justification and knowledge, for example that the knower must be in a position to affirmatively rule out any possible defeaters (Thomas 2011).
A recurring charge against generalism is that it assumes an outmoded deductive-nomological (D-N) account of successful explanation. According to that account, any successful explanation must take a deductive structure in which a covering law is identified that, together with empirical information, could yield a conclusion expressing the phenomenon to be explained. For many reasons, the D-N model is now widely thought to be misguided.
On behalf of the generalist, one might make two points. First, it is not clear that a generalist argument from practical wisdom needs to assume that all successful explanations must conform to the D-N model. The argument draws upon claims about the person of (highly ideal) practical wisdom and asks how best to explain her reliability. Second, while the argument does insist that we must credit the virtuous agent with at least an implicit grasp of a principle, it is less clear that the argument must treat this principle as functioning as the major premise in a deduction. By analogy, we might credit an agent with grasping modus ponens to explain her logical success without thereby assuming that she uses modus ponens as a premise.
Generalists sometimes invoke premises about the nature of moral concepts or about the meanings of moral words to argue for their view. Ultimately these arguments appeal to what can be derived from a certain kind of competence—either semantic or conceptual competence. It is probably no accident that purely semantic/conceptual arguments to settle the debate over particularism/generalism have been monopolized by generalists. If the generalist could show that semantic/conceptual competence commits one to some specific moral principle(s), or to the existence and availability of some such principles, then that would already be enough to establish an ambitious form of generalism. By contrast, a particularist who showed only that such competencies do not yet commit one either to some specific moral principle(s) or to the existence and availability of such principles would not yet have established a very ambitious form of particularism. For that negative conclusion is logically consistent with the availability of a convincing epistemological, practical or metaphysical argument for the existence and availability of suitable moral principles.
Generalists can and have proceeded in one of two main ways here. First, they might argue that semantic/conceptual competence directly commits one to the truth of some specific moral principle(s). Second, they might argue that such competence commits one only to the weaker thesis that if there are any substantive moral truths then there must be some true moral principle(s). This second thesis is weaker both in that it takes a conditional form, so that an error theorist could endorse it but deny the existence of any true moral principles, and in that it does not entail that there is some specific moral principle(s) to which one is committed insofar as one thinks there are substantive moral truths. Consider each of these strategies in turn.
The most ambitious and straightforward version of the first strategy is effectively just to argue for a form of analytic naturalism in meta-ethics. For example, consider the meta-ethical theory that “is morally right” just means “is an action which maximizes happiness”, where “happiness” is itself cashed out in purely naturalistic terms. Any convincing argument for that theory would provide a way of carrying out a very ambitious version of the first of the two strategies discussed above. Clearly, insofar as that theory is correct, semantic competence with “is morally right” is enough to commit one to the thesis that, necessarily, an action is morally right if and only if it maximizes happiness, and that certainly looks like just the right sort of generalization to function as a principle qua standard in the sense laid out in section 2.
Of course, this strategy for defending generalism is for good reason a highly controversial one. For a start, nobody has come close to offering a fully reductive definition of predicates like “is morally right” which has met with widespread assent. Moreover, some philosophers are sceptical of the very idea that knowing the meaning of a word (or possessing a concept) is already enough, in principle, to know how to live a good life. In a way, this concern about pulling a highly substantive rabbit out of a purely semantic/conceptual hat can be seen as what lies behind one of the historically most influential arguments against analytic naturalism, namely G.E. Moore’s “Open Question Argument” (Moore 1922 [1903]). Finally, anyone who is initially sympathetic to particularism is very unlikely to find analytic naturalism an ex ante plausible view, given how trivially it entails a very robust form of generalism. There is a sense, then, in which this strategy for defending generalism, however sound it might turn out to be, is unlikely to convince anyone who needs convincing (cf. Jackson, Pettit, & Smith 2000—the argument there seems ultimately to turn into a version of this first strategy).
A less ambitious form of the first strategy focuses on so-called “thick” evaluative words or concepts. Such words/concepts in some sense have both specific descriptive and normative contents. Concepts associated with virtues and vices are classic examples of thick evaluative concepts—concepts like courage, justice, fairness and generosity are all paradigm cases. The argument for generalism focusing on these concepts takes the same form as the more ambitious argument just canvassed. That is, the argument derives a commitment to moral principle(s) from mere conceptual/semantic competence.
However, the intended conclusion of an argument in this style is more modest. For here the relevant principles do not take one from a purely descriptive antecedent to a purely normative all-things-considered consequent, as with (e.g.) the principle of utility. Rather, the relevant principles here take one from an antecedent deploying a thick evaluative concept (like the concept of justice) to a consequent deploying a thin normative concept (like the concept of a reason). Such an argument might maintain, for example, that competence with the concept of justice commits one to a moral principle of the form, “if an action is just then there is at least some reason to perform it (namely, its justice)”.
Even this modest form of generalism is not uncontroversial. Dancy, for example, argues that even thick evaluative features can vary in their normative valence from one context to another, going so far as to maintain that “almost all the standard thick concepts … are of variable relevance” (Dancy 2004: 121). Insofar as this sort of view is as much as semantically/conceptually coherent, there can be no straightforward derivation of moral principles of the sort canvassed above from mere semantic/conceptual competence. Of course, there may be more “hedged” principles linking thick evaluative concepts with thin normative concepts—principles which either enumerate or quantify over further conditions which must be met before the application of a thick evaluative predicate entails the application of a thin normative predicate. In effect, this is just the point about holism not entailing particularism again. However, it is also unclear just how one would plausibly argue that insofar as such hedged principles are not vacuous, they really do follow from mere semantic/conceptual competence.
There may also be some interesting asymmetries between virtue concepts and vice concepts which are relevant to how we should think about these arguments. In an interesting series of papers, Rebecca Stangl has argued for a view she calls “asymmetrical virtue particularism” (Stangl 2010). On this view, an action is right, all things considered, insofar as it is overall virtuous. However, the virtues of an action in any specific respect (justice, courage, or whatever) can vary in normative valence. Vices, on this view, are invariable—they always count against an action. The deeper explanation of this asymmetry, on Stangl’s view, is that virtues have “targets” at which they aim, whereas vices are simply tendencies to miss the relevant targets. Vices are thus parasitic on virtues but not vice versa. Thus a given virtue (e.g., mercy) can sometimes be wrong-making because it helps explain why the agent (badly) misses the target associated with some other virtue (e.g., justice). By contrast, Stangl argues, a vice is always a tendency to miss some relevant target, and so is therefore always to that extent wrong-making. Insofar as Stangl makes a convincing case for this asymmetry (and obviously a lot more could be said about this), we should be less sympathetic to arguments which hold that there is a semantic/conceptual link between the virtuousness of an action in a specific way and the presence of an associated reason for action.
A different take on the virtues in relation to particularism has been developed by Constantine Sandis (Sandis 2021). Sandis argues that we should be generalists about the moral virtues but particularists about deontic concepts like the concept of what one ought to do, a reason for action, moral duty, and the like. While a directive like “be kind” might hold without exception on this view, Sandis argues that this does not entail that you should always act kindly – and that we should reject principles of the latter form. One advantage of this combination of views, according to Sandis, is that it can more easily explain our practices of moral education. If the virtues have invariant moral value, then it is easier to see how moral education can proceed. Sandis also discusses the relationship between exemplifying a virtue and acting virtuously on his view, about which he argues more radical particularists must say implausible things.
Moreover, this more modest form of generalism presupposes that our thick concepts of justice, courage, generosity, and the like must be genuinely evaluative concepts. But this is controversial. Pekka Väyrynen (2013), for instance, argues that the evaluations we typically associate with thick terms such as “just” and “courageous” are conversational implications which arise from our use of those words in a wide range of contexts. Very roughly, the idea is that evaluative content is a kind of generalized content which is explained pragmatically. For example, it may become common knowledge that only people who disapprove of the sexually explicit tend to use the word “lewd”. In that case, someone who uses that word thereby implies that she disapproves of the sexually explicit – otherwise, why use the word “lewd” instead of “sexually explicit”, given that one’s interlocutors will reasonably infer from the use of the former that one disapproves of the sexually explicit?
If this is right, then the status of thick words as evaluative depends on contingent facts about the pragmatics of our use of those words. There is then an important sense in which thick terms are, on this view, descriptive in their semantic content. So although there is a broader kind of speaker competence which involves understanding the conversational defaults associated with the relevant words, this is not the kind of semantic competence that could ground an argument for generalism or particularism. Semantic competence with thick words is also unlikely to commit one to any interesting moral principles. Depending on our views of concepts, this view about thick language can allow that some thinkers’ concepts of justice, generosity, and courage may be evaluative. But that is unlikely to be essential to those concepts, nor will the capacity for thought about justice and other thick notions depend on having genuinely evaluative thick concepts (Väyrynen 2013: 123–4, 206). In that case, competence with concepts like justice, courage, and generosity is also unlikely to commit us to any interesting moral principles. Moreover, the more modest form of generalism may require that a concept isn’t a concept of justice, or courage, or generosity, unless it is evaluative. In that case the pragmatic view of thick evaluative language would support the view that there are no thick evaluative concepts and, therefore, no such thing as competence with thick evaluative concepts.
However, generalists do not have a monopoly on arguments which take theses about thick evaluative concepts/predicates as their main premise. Some particularists argue that thick evaluative concepts are “shapeless” with respect to the descriptive (see especially McDowell 1981). Others take a more metaphysical approach, and argue that thick evaluative properties are “irreducibly thick” in a way that puts pressure on the generalist. Indeed, some go so far as to suggest that this even undermines some important forms of supervenience (see, e.g., Roberts 2011). Whether these arguments are forceful may depend on the extent to which the argument that there really are thick evaluative concepts or properties in the needed sense can avoid begging the question. In this context, it is not enough that no “shape” at the descriptive level is built into the meaning of evaluative concepts. Such a weaker shapelessness thesis would seem to rule out only principles that are both analytic and reductive. But it seems compatible with the possibility that someone who did know the extension of evaluative concepts could then discover a unity or “shape” to that extension which could be expressed using descriptive concepts. To rule out this possibility would seem to require a stronger shapelessness thesis according to which the extension of evaluative terms, properly understood, has no shape at all. Generalists will want to see an argument for this stronger thesis. Perhaps more importantly, though, settling whether this stronger version of the shapelessness thesis is true would seem to require more than a priori theorizing about moral concepts and more than semantic theorizing about evaluative terms. (For discussion of shapelessness and the metaethical lessons to be drawn from it see Väyrynen 2014 and Miller 2003.)
So much for the first of the two strategies for giving a semantic/conceptual argument for generalism canvassed above. What about the second? Recall that the second strategy is less ambitious insofar as it aims to establish only a conditional thesis linking substantive moral truth to the existence of some moral principle(s) or other. The guiding idea here is that a proper analysis of our moral concepts will reveal that deploying those concepts to make a substantive moral judgment commits one to the existence and truth of some moral principle(s) or other which somehow explains the truth of that judgment. Crucially, though, this commitment to the existence of some such moral principle(s) does not entail that the speaker is committed to any particular moral principle(s), or even to the possibility in principle of discovering what the relevant principle(s) are.
A modified version of T.M. Scanlon’s contractualist theory of “what we owe to each other” (Scanlon 1998) helps to illustrate this strategy. Scanlon himself does not intend his theory as a conceptual analysis, in part because there are strands of moral thinking, like our thoughts about the moral status of nonhuman animals and the environment and certain forms of moralizing about human sexuality, which do not fit very well into his proposed framework. However, a version of his theory which was offered as an analysis of our moral concepts would provide a clear illustration of the strategy for defending generalism under consideration. On Scanlon’s view, to be morally wrong in the sense of “wrong” associated with what he calls the “morality of what we owe to each other” is to be forbidden by principles for the general regulation of human behaviour which nobody could reasonably reject. The notion of the “reasonable” is a thick evaluative concept, so the view is not a reductive one. If the view were to be understood as following directly from an analysis of our moral concepts, then it would follow that anyone who makes a substantive moral judgment that some action is morally wrong would thereby be committed to the existence of at least a range of moral principles (the “reasonable” ones) which are such that they all forbid the action in question. At the same time, making such a judgment does not entail that one can articulate what the relevant principle(s) is (are), or even that they are such that one could in principle discover them.
Another way of arguing for this sort of view is to take a broader focus on normative and evaluative language. On some views (e.g., Ridge 2014), all uses of “good”, “reason”, “ought” and “must” advert to standards of some kind, but the context of utterance determines the relevant kind of standards. Sometimes, as in moral contexts, the relevant standards will be normative in some rich sense. Other times, the relevant standards will be purely conventional, as when we discuss what one ought to do as a matter of etiquette. In other contexts the standards will be purely strategic/instrumental, as when we discuss what move one ought to make in a game of chess, say, or what military strategy is best, but where one can sincerely make these judgments while finding chess a total waste of time or being a committed pacifist. The view aims to accommodate the context-sensitivity of the relevant words without implausibly postulating a brute ambiguity across the wide variety of contexts in which such words are used. As with the conceptual version of Scanlon’s view, this view is also one on which making a substantive moral judgment commits one to the existence of a range of moral standards which require the relevant action (or count the relevant consideration as a reason, or whatever).
An attraction of this strategy is that it draws its plausibility from high-level semantic features of words like “good”, “reason”, “ought” and “must” which are not specific to normative contexts. It is therefore perhaps especially unlikely to beg the question against the particularist. This stands in sharp contrast with the attempt to derive specific moral principles from mere competence with moral words or concepts.
As the taxonomy of section 2 above emphasized, whether moral principles are necessary for moral understanding or moral explanation is not the only debate between particularists and generalists. Distinct questions remain about the place and value of principles in guiding moral decision-making and action and in interpersonal justification. Generalists typically see a larger and more important role for principles to play in these contexts. Particularists typically find at least some sympathy with David McNaughton’s claim that moral principles are “at best useless and at worst a hindrance” (McNaughton 1988: 191). In considering this aspect of the debate, it is helpful to treat as common ground the idea that it is at least possible for an agent to be (in some sense) guided by a principle. This assumption has, of course, been challenged, most prominently by some readings of Wittgenstein’s arguments concerning rule-following (Kripke 1982). If guidance by principle were utterly impossible, then questions about the value and importance of principled guidance would be largely moot. For similar reasons, it is helpful to assume, at least provisionally, that an agent can eschew being guided by principles and yet still act rationally and for reasons and with some measure of consistency.
Against this background, we may distinguish two questions. First, we might ask whether guidance by principles constitutes a superior strategy for acting well as compared to guidance by particular judgments untutored by principles. One familiar way to understand the superiority of a strategy is in terms of its reliability at leading an agent to act rightly and for morally good reasons (McKeever and Ridge 2006; Väyrynen 2008). Second, we might ask whether guidance by principles enables us to secure morally valuable goods (or avoid significant moral evils) that would otherwise be out of reach. If particularism tells us to eschew guidance by principles and if doing so comes with significant costs, then, to use Brad Hooker’s phrase, there is something “bad” about particularism (Hooker 2000, 2008). Similarly, if generalism tells us to use principles and this has serious costs, then there is something bad about generalism.
These questions leave one familiar and related question largely to the side. This is the question whether there is something inherently morally valuable about being a “person of principle” independent of the content of those principles and how, more specifically, they lead one to act. Generalists may, but need not, subscribe to such a view, and even particularists could (consistently with holism) allow that across some range of contexts being principled is, itself, a favoring consideration. Turning to the first question just noted, how might principles constitute a good strategy for moral action?
Most ambitiously, the ultimate principles qua standards—that is, the principles which provide the deepest explanations of why right actions are right—could be well suited to guiding action directly. Arguably this is the view we find in Kant and in many modern Kantian moral theories. The categorical imperative is both the ultimate standard of right action and at the same time is well suited to guide the decision-making of a conscientious moral agent. This view of principled guidance ought to be distinguished from a distinct meta-ethical view according to which an ultimate moral standard must, if it is to be valid at all, be such that agents can be (in some sense) guided by it (Bales 1971; Smith 2012). Such a view may be attractive to those (such as Kant) who think that moral principles must comport with autonomy and that morality is a species of rationality. It may also be attractive to those who believe that moral principles must provide reasons on which agents can act. But even a very ambitious generalist model of principled guidance need not subscribe to this meta-ethical view. Kant, at least in some passages, encourages optimism about our ability to apply the ultimate standard of right and wrong directly to our individual decisions. Other philosophers within the generalist tradition, such as Ross, defend principles which look, on their face, to be eminently usable, and if Ross is correct that such principles are ultimate standards, then one might feel entitled at least to a weak presumption in favour of the claim that using them would be a good strategy.
When we consider other candidates for the ultimate moral principle, however, many find reasons to be sceptical that the ambitious model just canvassed will carry us very far. This has been a recurring worry for act consequentialism and, for that reason, many of the most influential attempts to deal with it have emerged from philosophers working in that tradition. The basic idea is that the consequences of our action are so many, so various, and (often) so far reaching that agents cannot figure out in a timely fashion what the right act is by directly using a consequentialist principle. Using the consequentialist principle in this sense must of course include gathering the facts about the consequences, not just applying the principles to the facts as one believes or knows them to be. (For discussion of weaker and stronger senses in which an agent might “use” a principle, see Smith 2012.) Properly understood, the worry here is not that the act consequentialist principle provides no guidance whatsoever; it may point quite clearly to the kinds of information that must be gathered and heeded. The worry is that attempts to follow the principle will not reliably lead to morally right action. Moreover, the worry is not simply that the principle fails to constitute a complete and reliable strategy. Any model of principled guidance—even one such as Kant’s—is liable to require that we rely also on cognitive and emotional powers that go beyond the principle itself. The worry is that our normal cognitive and emotional powers together with the principle do not yield a reliable strategy for performing morally right actions.
Instead of concluding that principled guidance is hopeless, many act consequentialists have proposed that we give up the project of being guided directly by the ultimate moral standard (assuming this for the moment to be some form of act consequentialism) and instead be guided by some more tractable set of principles. According to such “indirect” consequentialism, the principles we typically employ in deliberation are not the ultimate standards of right conduct. However, an agent who employs them in deliberation will regularly and systematically act rightly. Such proposals have been a staple of consequentialist thinking dating back at least to the work of Mill and Sidgwick. An especially well known recent version of the idea is defended by R.M. Hare, who calls reliance on such principles “intuitive moral thinking”. By contrast, “critical moral thinking” proceeds in terms of the actual standards of moral conduct (Hare 1981). Importantly, neither indirect models of principled guidance nor the worry that inspires them need be married to a consequentialist view of moral standards. Kantian moral philosophers have sometimes stressed the need for “mid-level” principles (Hill 1989, 1992). Even particularists about standards could consistently embrace the use of such an indirect strategy and so embrace a kind of generalism about moral guidance, though so far as we know no one has actually adopted this position.
Discussions of indirect consequentialism often proceed as if the correct moral standard could, in principle, be applied directly to any given circumstance and, if so applied, would indicate the morally right action(s) to take. Leaving aside whether this is true of (some) consequentialist principles, many claim that it is not true of other candidate moral standards. Consider, for example, principles such as “all persons must be treated as moral equals”, or “property rights must be respected”, or, to borrow a less morally loaded example from Onora O’Neill, “teachers must assign work appropriate to their students’ abilities” (O’Neill 1996: 73–77). Such principles may not yield determinate guidance in concrete circumstances even given a full array of non-moral facts. To be properly applied, such principles may require additional moral judgment. We must determine just which individuals are persons and what it is to treat persons as moral equals. We must determine which claims to property correspond to valid rights and what invasions of property amount to a failure to respect those rights. May an exhausted runner harmlessly trespass in order to cool off beneath the shade of another person’s tree? We must even decide how difficult is too difficult when it comes to challenging students. The obstacle to using the standard as a direct guide to conduct is not that our cognitive resources come up short, but that the standard is itself not yet sufficiently determinate. This situation presents an opportunity for principles to play a guiding role by helping to fill in the normative content of higher level standards. (Whether such guiding principles would themselves count as non-ultimate standards is a question we here set aside.) Importantly, however, guiding principles in this sense need not make fully determinate the higher level principles that they help to fill out. They may, instead, explicitly identify further questions to be settled—whether by other principles or by judgment.
For example, the principle that a duly convicted criminal ought to receive only the amount of punishment he deserves is highly abstract. How ought we to determine whether a punishment is deserved? The further principle that the punishment ought to be proportional to the crime may direct us to find a way to proportionately rank less and more serious crimes, and it may thus point us part, but not all, of the way towards complying with the higher level principle.
We now have at least three accounts of how principles might figure in a reliable strategy for acting well. But why think that principles do or must figure in the best strategies for moral action? Or, taking the other side, why think that principles are useless or even counterproductive? If one could establish or assume a specific generalist account of moral standards, this would open up many lines of argument for guidance by principle. The same would be true if one could establish particularism about standards. However, such assumptions are not dialectically available in the generalism/particularism debate. Accordingly, we here focus on arguments that are largely neutral about the content of the moral domain and whether it is “principled”.
Some generalists argue that moral principles help avoid “special pleading”—interpreting one’s moral duties in ways that favour one’s own interests and in ways that go beyond what a reasonable accommodation of self-interest would allow. Agents who engage in special pleading do not do so consciously, but rather think they are impartially assessing what morality demands in their circumstances. The adoption of moral principles might be thought to help with this problem. For one thing, principles can be adopted and internalized well before any conflict with the agent’s interests arises. Having internalized the relevant principle well in advance may make it easier to avoid special pleading when a conflict does arise (cf. McKeever and Ridge 2006: 202–203). Furthermore, a practice of articulating these principles publicly endows them with symbolic meaning. Violating explicitly endorsed principles or adding caveats in an ad hoc manner to suit one’s interests can come to stand for our lacking the right kind of commitment to morality more generally (see Nozick 1993: 29; McKeever and Ridge 2006: 204–205). Anecdotally, some people seem to think New Year’s resolutions work in this way, and George Ainslie has provided a body of empirical evidence that such public resolutions can help motivate agents to (e.g.) stop smoking in a way that somehow prevents the thought that “one cigarette won’t make any non-negligible difference” from undermining their resolve (Ainslie 1975, 1986).
Particularists agree that special pleading is a problem but they do not think that principles afford the proper solution to that problem. Instead, they typically suggest that one simply needs to “look harder” at the case at hand to avoid such special pleading:
…the remedy for poor moral judgment is not a different style of moral judgment, principle-based judgment, but just better moral judgment. There is only one real way to stop oneself distorting things in one’s favour, and that is to look again, as hard as one can, at the reasons present in the case, and see if really one is so different from others that what would be required of them is not required of oneself. The method is not infallible, I know; but then nor was the appeal to principle. (Dancy 2013)
Generalists worry that the exhortation to look again is simply unrealistic, given human nature, and is therefore not only fallible but unlikely to do much good. If so, then even if principles are far from infallible, rejecting them wholesale is premature. The best way to avoid special pleading could involve an array of more specific strategies with principles playing some significant role.
On the other side, particularists worry that reliance on principles breeds inflexibility and a problematic tendency to shoehorn a morally complex situation into some more familiar set of categories. McNaughton describes such inflexibility as a “serious vice” and claims that reliance on principles is partly to blame (McNaughton 1988: 203). Dancy remarks that,
We all know the sort of person who refuses to make the decision here that the facts are obviously calling for, because he cannot see how to make that decision consistent with one he made on a different occasion. (Dancy 1993: 64)
Importantly, this worry cannot be dismissed simply on the grounds that generalists can (and do) allow judgment to also play a role in our use and application of principles; the worry is that the use of principles has a distorting influence of its own. One interesting and empirically minded proposal for evaluating the force of the particularist’s concern looks to the literature on the comparative success of rules and expert judgment in other domains (Zamzow 2015). Much of this literature suggests that rules outperform expert judgment (see Grove et al. 2000).
Let us turn now to a second family of arguments for principled guidance. Setting aside whether principles are a winning strategy for the individual aiming at virtuous action, one might think that our collective use of principles enables us to achieve morally valuable goods. One such argument appeals to the value of predictability (Hooker 2000, 2008). Successful cooperation and coordination yield enormous benefits, yet they require an ability to predict the behaviour of others and a willingness to rely on those predictions when making one’s own choices. If principled guidance supports predictability, so much the better for principles. Not surprisingly, particularists have questioned whether principles are necessary for predictability. “People are quite capable of judging how to behave case by case, and in a way that would enable us to predict what they will in fact do” (Dancy 2004: 83). The key issue is comparative. Is the person guided by principles thereby more predictable than the person who eschews principles? Someone who rejects moral rules altogether and always just tries to judge each case on its own merits plausibly is less predictable than someone who has internalized and follows a set of moral principles. But as we saw above, assessing the force of this generalist argument would benefit from consideration of careful empirical research. One challenge for generalists who might further develop this argument is that it stands in some tension with other themes stressed by generalists, for example, that principles can incorporate various hedges and so exhibit the kind of flexibility particularists embrace (Väyrynen 2008) and that principles are often indeterminate and must be supplemented by judgment. To be consistent, generalists will need to show not only that guidance by crude principles makes one more predictable, but that guidance by a combination of hedged principles and judgment makes one more predictable than guidance by judgment alone.
A very different practical argument for generalism has roots in the Kantian tradition and has recently been advanced by Stephen Darwall (2013, see also Darwall 2006). He contends that publicly formulable principles are necessary for us to realize a valuable form of interpersonal accountability in our shared moral life. He further argues that such accountability is necessary for moral obligations (though not necessarily for moral reasons). Within the framework here developed, one might see Darwall’s argument as a defense of generalism about standards but with the argument restricted to standards of moral obligation. Alternatively, one might see it as a practical argument for attempting to formulate shared public principles because, if we fail to do so (or fail to continue to do so), we will lose something that we take to be valuable about morality, namely the respect for persons that is inherent in a practice of interpersonal accountability (Darwall 2013: especially 183–191). Darwall’s argument fits very well with Kantian contractualism of the sort defended by T.M. Scanlon, which emphasizes the value of our being able to justify ourselves to others and sees principles as mediating justification. It might also be instructive to compare Darwall’s argument with some of the ideas found in the tradition of discourse ethics associated with Jürgen Habermas (see, e.g., Habermas 1990). An important challenge for this argument is to persuasively establish the premise that accountability (or interpersonal justification) must advert to principles. Particularists may allow that accountability is an important value while urging that the interpersonal process of holding one another accountable can proceed entirely in terms of the reasons, defeaters, enablers, and intensifiers that are at play in the case at hand.
Section 2 draws on McKeever and Ridge 2006: chapter 1, though this entry departs somewhat from the details of that taxonomy.