Fitting Attitudes accounts of value analogize or equate being good with being desirable, on the premise that ‘desirable’ means not ‘able to be desired’, as Mill has been accused of mistakenly assuming, but ‘ought to be desired’, or something similar. The appeal of this idea is visible in the critical reaction to Mill, which generally goes along with his equation of ‘good’ with ‘desirable’ and balks only at the second step, and it crosses broad boundaries in terms of philosophers’ other commitments. For example, Fitting Attitudes accounts play a central role both in T.M. Scanlon’s [1998] case against teleology, and in Michael Smith’s [2003], [unpublished] and Doug Portmore’s [2007] cases for it. And of course they have a long and distinguished history.
This is an encyclopedia entry on consequentializing. It explains what consequentializing is, what makes it possible, why someone might be motivated to consequentialize, and how to consequentialize a non-consequentialist theory.
This paper draws on the 'Fitting Attitudes' analysis of value to argue that we should take the concept of fittingness (rather than value) as our normative primitive. I will argue that the fittingness framework enhances the clarity and expressive power of our normative theorising. Along the way, we will see how the fittingness framework illuminates our understanding of various moral theories, and why it casts doubt on the Global Consequentialist idea that acts and (say) eye colours are normatively on a par. We will see why even consequentialists, in taking rightness to be in some sense determined by goodness, should not think that rightness is conceptually reducible to goodness. Finally, I will use the fittingness framework to explicate the distinction between consequentialist and deontological theories, with particular attention to the contentious case of Rule Consequentialism.
I propose and defend a novel view called “de se consequentialism,” which is noteworthy for two reasons. First, it demonstrates—contra Doug Portmore, Mark Schroeder, Campbell Brown, and Michael Smith, among others—that agent-neutral consequentialism is consistent with agent-centered constraints. Second, it clarifies the nature of agent-centered constraints, thereby meriting attention from even dedicated nonconsequentialists. Scrutiny reveals that moral theories in general, whether consequentialist or not, incorporate constraints by assessing states in a first-personal guise. Consequently, de se consequentialism enacts constraints through the very same feature that nonconsequentialist theories do.
To 'consequentialise' is to take a putatively non-consequentialist moral theory and show that it is actually just another form of consequentialism. Some have speculated that every moral theory can be consequentialised. If this were so, then consequentialism would be empty; it would have no substantive content. As I argue here, however, this is not so. Beginning with the core consequentialist commitment to 'maximising the good', I formulate a precise definition of consequentialism and demonstrate that, given this definition, several sorts of moral theory resist consequentialisation. My strategy is to decompose consequentialism into three conditions, which I call 'agent neutrality', 'no moral dilemmas', and 'dominance', and then to exhibit some moral theories which violate each of these.
Consequentialists say we may always promote the good. Deontologists object: not if that means killing one to save five. “Consequentializers” reply: this act is wrong, but it is not for the best, since killing is worse than letting die. I argue that this reply undercuts the “compellingness” of consequentialism, which comes from an outcome-based view of action that collapses the distinction between killing and letting die.
We present a new “reason-based” approach to the formal representation of moral theories, drawing on recent decision-theoretic work. We show that any moral theory within a very large class can be represented in terms of two parameters: a specification of which properties of the objects of moral choice matter in any given context, and a specification of how these properties matter. Reason-based representations provide a very general taxonomy of moral theories, as differences among theories can be attributed to differences in their two key parameters. We can thus formalize several distinctions, such as between consequentialist and non-consequentialist theories, between universalist and relativist theories, between agent-neutral and agent-relative theories, between monistic and pluralistic theories, between atomistic and holistic theories, and between theories with a teleological structure and those without. Reason-based representations also shed light on an important but under-appreciated phenomenon: the “underdetermination of moral theory by deontic content”.
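To fix ideas, here is a minimal sketch of the two-parameter structure the abstract describes: a specification of which properties matter in a context, and a specification of how bundles of relevant properties are weighed. The encoding, the function names, and the promise-keeping example are illustrative assumptions of mine, not Dietrich and List's own formalism.

```python
# Toy encoding of a "reason-based" representation as two parameters (my own
# illustrative assumption, not Dietrich and List's formalism): (1) which
# properties of the options are normatively relevant in a given context, and
# (2) how bundles of relevant properties are weighed against each other.

def relevant_properties(option, context, relevance):
    """Keep only those of the option's properties that matter in this context."""
    return frozenset(p for p in option["properties"] if p in relevance(context))

def chosen_over(x, y, context, relevance, weigh):
    """x is choosable over y in this context iff x's relevant-property bundle
    beats y's according to the weighing relation."""
    return weigh(relevant_properties(x, context, relevance),
                 relevant_properties(y, context, relevance))

# Example theory (hypothetical): only promise-keeping matters, in every context,
# and any bundle containing it beats any bundle lacking it.
relevance = lambda context: {"keeps_a_promise"}
weigh = lambda px, py: "keeps_a_promise" in px and "keeps_a_promise" not in py

keep_promise = {"properties": {"keeps_a_promise", "costs_time"}}
break_promise = {"properties": {"maximizes_pleasure"}}
assert chosen_over(keep_promise, break_promise, "any_context", relevance, weigh)
```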
To consequentialize a non-consequentialist theory, take whatever considerations the non-consequentialist theory holds to be relevant to determining the deontic statuses of actions and insist that those considerations are relevant to determining the proper ranking of outcomes. In this way, the consequentialist can produce an ordering of outcomes that, when combined with her criterion of rightness, yields the same set of deontic verdicts that the non-consequentialist theory yields. In this paper, I argue that any plausible non-consequentialist theory can be consequentialized. I explain the motivation for the consequentializing project and defend it against recent criticisms by Mark Schroeder and others.
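The recipe just described can be made vivid with a small sketch. The code below is only a toy illustration of the general idea (the option names, the 0/1 ranking, and the maximizing criterion of rightness are my assumptions, not Portmore's own formulation): take a non-consequentialist theory's verdicts in one choice situation, rank outcomes so that permissible options sit at the top, and maximizing that ranking returns the original verdicts.

```python
# Toy illustration of consequentializing (my assumptions, not Portmore's own
# formulation): given a non-consequentialist theory's verdicts over the options
# in one choice situation, build an outcome ranking that a maximizing criterion
# of rightness will turn back into exactly those verdicts. The sketch assumes
# at least one option is permissible (moral dilemmas need separate treatment).

def consequentialize(verdicts):
    """verdicts: dict mapping option -> True (permissible) / False (impermissible).
    Returns an outcome ranking (higher is better)."""
    return {option: (1 if permissible else 0) for option, permissible in verdicts.items()}

def verdicts_from_ranking(ranking):
    """Maximizing criterion: an option is permissible iff no alternative's
    outcome is ranked higher."""
    best = max(ranking.values())
    return {option: value == best for option, value in ranking.items()}

# A constraint-respecting theory: killing one to save five is wrong.
verdicts = {"kill_one_to_save_five": False, "let_five_die": True}
assert verdicts_from_ranking(consequentialize(verdicts)) == verdicts
```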
Reasons, it is often said, are king in contemporary normative theory. Some philosophers say not only that the vocabulary of reasons is useful, but that reasons play a fundamental explanatory role in normative theory—that many, most, or even all, other normative facts are grounded in facts about reasons. Even if Reasons Fundamentalism, the strongest version of this view, has only been wholeheartedly endorsed by a few philosophers, it has a kind of prominence in contemporary normative theory that suits it to be described as orthodoxy by its critics. It is the purpose of this paper to make progress toward understanding what appeal Reasons Fundamentalism should have, and whether that appeal is deserved. I will do so by exploring and comparing two central motivations for Reasons Fundamentalism.
Metaethics, understood as a distinct branch of ethics, is often traced to G. E. Moore's 1903 classic, Principia Ethica. Whereas normative ethics is concerned to answer first-order moral questions about what is good and bad, right and wrong, metaethics is concerned to answer second-order non-moral questions about the semantics, metaphysics, and epistemology of moral thought and discourse. Moore has continued to exert a powerful influence, and the sixteen essays here represent the most up-to-date work in metaethics after, and in some cases directly inspired by, the work of Moore.
Why should ‘better than’ be transitive? The leading answer in ethics is that values do not change with context. But this cannot be the entire source of transitivity, I argue, since transitivity can fail even if values never change, so long as they are complex, with multiple dimensions combined non-additively. I conclude by exploring a new hypothesis: that all alleged cases of nontransitive betterness, such as Parfit’s Repugnant Conclusion, can and should be modelled as the result of complexity, not context-relativity.
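The abstract does not spell out a worked case, but one simple way such a cycle can arise is sketched below (my illustration, using a majority-of-dimensions rule as the non-additive method of combination, not necessarily the paper's own example): three options with fixed scores on three dimensions generate a betterness cycle even though no value changes with context.

```python
# Illustration only (my own example, not necessarily the paper's): combine three
# fixed dimensions of value by a majority-of-dimensions rule, a simple
# non-additive aggregation. The values never change, yet betterness cycles.

A = (3, 2, 1)   # scores on three dimensions of value
B = (2, 1, 3)
C = (1, 3, 2)

def better(x, y):
    """x is better than y iff x beats y on a strict majority of dimensions."""
    return sum(xi > yi for xi, yi in zip(x, y)) > len(x) / 2

assert better(A, B) and better(B, C) and better(C, A)   # A > B > C > A: a cycle
```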
A growing trend of thought has it that any plausible nonconsequentialist theory can be consequentialized, which is to say that it can be given a consequentialist representation. In this essay, I explore both whether this claim is true and what its implications are. I also explain the procedure for consequentializing a nonconsequentialist theory and give an account of the motivation for doing so.
Accuracy-first epistemology is an approach to formal epistemology which takes accuracy to be a measure of epistemic utility and attempts to vindicate norms of epistemic rationality by showing how conformity with them is beneficial. If accuracy-first epistemology can actually vindicate any epistemic norms, it must adopt a plausible account of epistemic value. Any such account must avoid the epistemic version of Derek Parfit's “repugnant conclusion.” I argue that the only plausible way of doing so is to say that accurate credences in certain propositions have no, or almost no, epistemic value. I prove that this is incompatible with standard accuracy-first arguments for probabilism, and argue that there is no way for accuracy-first epistemology to show that all credences of all agents should be coherent.
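For readers unfamiliar with the framework, the Brier score is the standard textbook example of an accuracy measure of the kind accuracy-first epistemology relies on; the snippet below is only meant to fix ideas, and the paper itself may work with a different or more general class of measures.

```python
# The Brier score, a standard accuracy measure in accuracy-first epistemology:
# a credence's inaccuracy at a world is its squared distance from the truth
# value there. (Shown only to fix ideas; the paper's argument concerns which
# accounts of epistemic value built on such measures avoid an epistemic
# analogue of the repugnant conclusion.)

def brier_inaccuracy(credences, truth_values):
    """credences: dict proposition -> credence in [0, 1];
    truth_values: dict proposition -> 1 (true) or 0 (false)."""
    return sum((credences[p] - truth_values[p]) ** 2 for p in credences)

credences = {"rain": 0.8, "wind": 0.3}
world = {"rain": 1, "wind": 0}
print(brier_inaccuracy(credences, world))   # 0.2**2 + 0.3**2 ≈ 0.13
```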
The term “value theory” is used in at least three different ways in philosophy. In its broadest sense, “value theory” is a catch-all label used to encompass all branches of moral philosophy, social and political philosophy, aesthetics, and sometimes feminist philosophy and the philosophy of religion — whatever areas of philosophy are deemed to encompass some “evaluative” aspect. In its narrowest sense, “value theory” is used for a relatively narrow area of normative ethical theory of particular concern to consequentialists. In this narrow sense, “value theory” is roughly synonymous with “axiology”. Axiology can be thought of as primarily concerned with classifying what things are good, and how good they are. For instance, a traditional question of axiology concerns whether the objects of value are subjective psychological states, or objective states of the world.
The agent-relative/agent-neutral distinction is one of the most important in contemporary moral theory. Yet, providing an adequate formal account of it has proven difficult. In this article I defend a new formal account of the distinction, one that avoids various problems faced by other accounts. My account is based on an influential account of the distinction developed by McNaughton and Rawling. I argue that their approach is on the right track but that it succumbs to two serious objections. I then show how to formulate a new account that follows the key insights of McNaughton and Rawling’s approach yet avoids the two objections.
How should deontologists concerned with the ethics of killing apply their moral theory when we don’t know all the facts relevant to the permissibility of our action? Though the stakes couldn’t be higher, and uncertainty is endemic where killing is concerned, few deontologists have an answer to this question. In this paper I canvass two possibilities: that we should apply a threshold standard, equivalent to the ‘beyond a reasonable doubt’ standard applied for criminal punishment; and that we should fit our deontological ethical theory into the apparatus of decision theory. I show that the first approach faces insurmountable obstacles, while the second holds much more promise for deontologists than they might first have assumed.
In traditional consequentialism the good is position-neutral. A single evaluative ranking of states of affairs is correct for everyone, everywhere, regardless of their positions. Recently, position-relative forms of consequentialism have been developed. These allow for the correct rankings of states to depend on connections that hold between the state being evaluated and the position of the evaluator. For example, perhaps being an agent who acts in a certain state requires me to rank that state differently from someone else who lacks this connection. In this chapter several different kinds of position-relative rankings related to agents, times, physical locations, and possible worlds are explored. Arguments for and against adopting a position-relative axiology are examined, and it is suggested that position-relative consequentialism is a promising moral theory that has been underestimated.
The aim of the consequentializing project is to show that, for every plausible ethical theory, there is a version of consequentialism that is extensionally equivalent to it. One challenge this project faces is that there are common-sense ethical theories that posit moral dilemmas. There has been some speculation about how the consequentializers should react to these theories, but so far there has not been a systematic treatment of the topic. In this article, I show that there are at least five ways in which we can construct versions of consequentialism that are extensionally equivalent to the ethical theories that contain moral dilemmas. I argue that all these consequentializing strategies face a dilemma: either they must posit moral dilemmas in unintuitive cases or they must rely on unsupported assumptions about value, permissions, requirements, or options. I also consider this result's consequences for the consequentializing project.
In this paper we explore the connections between ethics and decision theory. In particular, we consider the question of whether decision theory carries with it a bias towards consequentialist ethical theories. We argue that there are plausible versions of the other ethical theories that can be accommodated by “standard” decision theory, but there are also variations of these ethical theories that are less easily accommodated. So while “standard” decision theory is not exclusively consequentialist, it is not necessarily ethically neutral. Moreover, even if our decision-theoretic models get the right answers vis-à-vis morally correct action, the question remains as to whether the motivation for the non-consequentialist theories and the psychological processes of the agents who subscribe to those ethical theories are lost or poorly represented in the resulting models.
Many believe that employment can be wrongfully exploitative, even if it is consensual and mutually beneficial. At the same time, it may seem that third parties should not do anything to preclude or eliminate such arrangements, given these same considerations of consent and benefit. I argue that there are perfectly sensible, intuitive ethical positions that vindicate this ‘Reasonable View’. The view requires such defense because the literature often suggests that there is no theoretical space for it. I respond to arguments for the clearest symptom of this obscuration: the so-called nonworseness claim that a consensual, mutually beneficial transaction cannot be ‘morally worse’ than its absence. In addition to making space for the Reasonable View, this serves my dialectical goal of encouraging distinct attention to first- and third-party obligations.
Recently, a number of philosophers have argued that we can and should “consequentialize” non-consequentialist moral theories, putting them into a consequentialist framework. I argue that these philosophers, usually treated as a group, in fact offer three separate arguments, two of which are incompatible. I show that none represents a significant threat to a committed non-consequentialist, and that the literature has suffered due to a failure to distinguish these arguments. I conclude by showing that the failure of the consequentializers’ arguments has implications for disciplines, such as economics, logic, decision theory, and linguistics, which sometimes use a consequentialist structure to represent non-consequentialist ethical theories.
I challenge the common picture of the “Standard Story” of Action as a neutral account of action within which debates in normative ethics can take place. I unpack three commitments that are implicit in the Standard Story, and demonstrate that these commitments together entail a teleological conception of reasons, upon which all reasons to act are reasons to bring about states of affairs. Such a conception of reasons, in turn, supports a consequentialist framework for the evaluation of action, upon which the normative status of actions is properly determined through appeal to rankings of states of affairs as better and worse. This covert support for consequentialism from the theory of action, I argue, has had a distorting effect on debates in normative ethics. I then present challenges to each of these three commitments: a challenge to the first commitment by T.M. Scanlon, a challenge to the second by recent interpreters of Anscombe, and a new challenge to the third commitment that requires only minimal and prima facie plausible modifications to the Standard Story. The success of any one of the challenges, I demonstrate, is sufficient to block support from the theory of action for the teleological conception of reasons and the consequentialist evaluative framework. I close by demonstrating the pivotal role that such arguments grounded in the theory of action play in the current debate between evaluator-relative consequentialists and their critics.
It is common to distinguish moral rules, reasons, or values that are agent-relative from those that are agent-neutral. One can also distinguish moral rules, reasons, or values that are moment-relative from those that are moment-neutral. In this article, I introduce a third distinction that stands alongside these two distinctions—the distinction between moral rules, reasons, or values that are patient-relative and those that are patient-neutral. I then show how patient-relativity plays an important role in several moral theories, gives us a better understanding of agent-relativity and moment-relativity, and provides a novel objection to Derek Parfit’s “appeal to full relativity” argument.
Morality seems important, in the sense that there are practical reasons — at least for most of us, most of the time — to be moral. A central theoretical motivation for consequentialism is that it appears clear that there are practical reasons to promote good outcomes, but mysterious why we should care about non-consequentialist moral considerations or how they could be genuine reasons to act. In this paper we argue that this theoretical motivation is mistaken, and that because many arguments for consequentialism rely upon it, the mistake substantially weakens the overall case for consequentialism. We argue that there is indeed a theoretical connection between good states and reasons to act, because good states are those it is fitting to desire and there is a conceptual connection between the fittingness of a motive and reasons to perform the acts it motivates. But while some of our motives are directed at states, others are directed at acts themselves. We contend that just as the fittingness of desires for states generates reasons to promote the good, the fittingness of these act-directed motives generates reasons to do other things. Moreover, we argue that an act’s moral status consists in the fittingness of act-directed feelings of obligation to perform or avoid performing it, so the connection between fitting motives and reasons to act explains reasons to be moral whether or not morality directs us to promote the good. This, we contend, de-mystifies how there could be non-consequentialist reasons that are both moral and practical.
In this paper I discuss the kinds of idealisations invoked in normative theories—logic, epistemology, and decision theory. I argue that very often the so-called norms of rationality are in fact mere idealisations invoked to make life easier. As such, these idealisations are not too different from various idealisations employed in scientific modelling. Examples of the latter include: fluids are incompressible (in fluid mechanics), growth rates are constant (in population ecology), and the gravitational influence of distant bodies can be ignored (in celestial mechanics). Thinking of logic, epistemology, and decision theory as normative models employing various idealisations of these kinds changes the way we approach the justification of the models in question.
Deontic constraints prohibit an agent from performing acts of a certain type even when doing so will prevent more instances of that act being performed by others. In this article I show how deontic constraints can be interpreted as either maximizing or non-maximizing rules. I then argue that they should be interpreted as maximizing rules, because interpreting them as non-maximizing rules results in a problem with moral advice. Given this conclusion, a strong case can be made that consequentialism provides the best account of deontic constraints.
This paper looks at the phenomenon of ethical vagueness by asking the question: how ought one to reason about what to do when confronted with a case of ethical vagueness? I begin by arguing that we must confront this question, since ethical vagueness is inescapable. I then outline one attractive answer to the question: we ought to maximize expected moral value when confronted with ethical vagueness. This idea yields determinate results for what one rationally ought to do in cases of ethical vagueness. But what it recommends is dependent on which substantive theory of vagueness is true; one can't draw conclusions about how to reason about vagueness in ethics in the absence of concrete assumptions about the nature of vagueness.
A traditional picture is that cases of deontic constraints—cases where an act is wrong (or one that there is most reason not to do) even though performing that act will prevent more acts of the same morally (or practically) relevant type from being performed—form a kind of fault line in ethical theory separating (agent-neutral) consequentialist theories from other ethical theories. But certain results in the recent literature, such as those due to Graham Oddie and Peter Milne in "Act and Value", do not sit well with this traditional wisdom. My aim in this paper is to argue that both the traditional wisdom and Oddie and Milne are mistaken. I begin by looking more closely at the traditional wisdom and why it fails (§1). Then I develop my account of this fault line in ethical theory and its importance (§2). Finally, I show that a diagnosis of where Oddie and Milne go wrong follows as a corollary of this new account (§3). An important upshot will be that discussions of cases of deontic constraints would do best to focus on the account of the nature and importance of the cases identified in this paper rather than continuing to work with the mistaken traditional picture.
The purpose of this paper is to conceptualize and explore what I shall call the Common Subject Problem for ethics. The problem is that there seems to be no good answer to what property everyone who makes moral claims could be talking and thinking about. The Common Subject Problem is not a new problem; on the contrary, I will argue that it is one of the central animating concerns in the history of both metaethics and normative theory. But despite its importance, the Common Subject Problem is essentially invisible on many contemporary ways of carving up the problems of metaethics and normative ethical theory. My aim, therefore, is to make progress – in part by naming the problem, but also by beginning to sketch out the contours of what gives the problem its force, by distinguishing between different paths of response to the problem and assessing some of their chief merits, and finally, by distinguishing the Common Subject Problem from another problem with which it has come to be conflated. This nearby problem is the Moral Twin Earth Problem. Whereas the Common Subject Problem is a problem about what property ‘wrong’ could refer to, the Moral Twin Earth Problem is a problem about how ‘wrong’ could refer to it. The upshot of the paper, therefore, is to rescue one of the historically significant problems in normative ethics and metaethics – a problem that is essentially about normative semantics – from the illusion that has persisted over the last twenty years that it is really, somehow, a problem about metasemantics. Once we have reclaimed this problem, we can see that it could still be a problem even if there are no distinctively metasemantic problems in metaethics at all, that it is a problem faced by a wider variety of views, and that the space of possible solutions is much wider and more interesting for normative theory, moral psychology, and moral epistemology.
Decision-making typically requires judgments about causal relations: we need to know the causal effects of our actions and the causal relevance of various environmental factors. We investigate how several individuals' causal judgments can be aggregated into collective causal judgments. First, we consider the aggregation of causal judgments via the aggregation of probabilistic judgments, and identify the limitations of this approach. We then explore the possibility of aggregating causal judgments independently of probabilistic ones. Formally, we introduce the problem of causal-network aggregation. Finally, we revisit the aggregation of probabilistic judgments when this is constrained by prior aggregation of qualitative causal judgments.
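As a point of comparison for the probabilistic route the abstract mentions, linear pooling is the simplest textbook rule for aggregating individual probabilistic judgments; the sketch below (with my own toy proposition and numbers) illustrates only that baseline, not the paper's causal-network aggregation proposal.

```python
# Linear pooling: a baseline rule for aggregating individual probabilistic
# judgments into a collective judgment by taking a (weighted) average. This is
# only the simple probabilistic route mentioned in the abstract, not the
# paper's richer causal-network aggregation.

def linear_pool(individual_probs, weights=None):
    """individual_probs: list of dicts, each mapping proposition -> probability."""
    n = len(individual_probs)
    weights = weights if weights is not None else [1.0 / n] * n
    propositions = individual_probs[0].keys()
    return {p: sum(w * probs[p] for w, probs in zip(weights, individual_probs))
            for p in propositions}

judgments = [{"exposure_causes_harm": 0.9}, {"exposure_causes_harm": 0.7}]
print(linear_pool(judgments))   # equal weights give roughly 0.8
```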
The practices of using hostages to obtain concessions and using human shields to deter aggression share an important characteristic which warrants a univocal reference to both sorts of conduct: they both involve manipulating our commitment to morality, as a means to achieving wrongful ends. I call this type of conduct “moral coercion”. In this paper I (a) present an account of moral coercion by linking it to coercion more generally, (b) determine whether and to what degree the coerced agent is liable for the harms resulting from acceding to moral coercion, and (c) investigate factors relevant to determining whether we ought to accede to moral coercion. In so doing, I provide grounds for the intuition that we “allow evil to succeed” when we accede to moral coercion.
This paper argues that evidential decision theory is incompatible with options having objective values. If options have objective values, then it should always be rationally permissible for an agent to choose an option if they are certain that the option uniquely maximizes objective value. But, as we show, if options have objective values and evidential decision theory is true, then it is not always rationally permissible for an agent to choose an option if they are certain that the option uniquely maximizes objective value.
The paper explores a new interpretation of the consequentializing project. Three prominent interpretations are criticized for neglecting the explanatory dimension of moral theories. Instead...
This Handbook focuses on value theory as it pertains to ethics, broadly construed, and provides a comprehensive overview of contemporary debates pertaining not only to philosophy but also to other disciplines—most notably, political theory...
This essay begins by describing T.M. Scanlon’s contractualism, according to which an action is right when it is authorised by the moral principles no one could reasonably reject. This view has been argued to have implausible consequences with regard to how different-sized groups, non-human animals, and cognitively limited human beings should be treated. It has also been accused of being theoretically redundant and unable to vindicate the so-called deontic distinctions. I then distinguish between the general contractualist framework and Scanlon’s version of contractualism. I explain how the general framework enables us to formulate many other versions of contractualism, some of which can already be found in the literature. Understanding contractualism in this new way enables us both to understand the structural similarities and differences between different versions of contractualism and to see the different objections to contractualism as internal debates about which version of contractualism is correct.
The topic of this thesis is axiological uncertainty – the question of how you should evaluate your options if you are uncertain about which axiology is true. As an answer, I defend Expected Value Maximisation (EVM), the view that one option is better than another if and only if it has the greater expected value across axiologies. More precisely, I explore the axiomatic foundations of this view. I employ results from state-dependent utility theory, extend them in various ways and interpret them accordingly, and thus provide axiomatisations of EVM as a theory of axiological uncertainty.
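Stated informally (my paraphrase of the view named in the abstract, leaving aside the thesis's axiomatic details and any intertheoretic-comparability assumptions), EVM ranks options by their credence-weighted value across axiologies:

```latex
% My paraphrase of EVM, not the thesis's official statement: option A is at
% least as good as option B under axiological uncertainty iff
\[
  \sum_{i} p(T_i)\, V_{T_i}(A) \;\ge\; \sum_{i} p(T_i)\, V_{T_i}(B),
\]
% where p(T_i) is the agent's credence in axiology T_i and V_{T_i} is that
% axiology's value function.
```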
The verdicts standard consequentialism gives about what we are obligated to do crucially depend on what theory of value the consequentialist accepts. This makes it hard to say what separates standard consequentialist theories from non-consequentialist theories. This article discusses how we can draw sharp lines separating standard consequentialist theories from other theories and what assumptions about goodness we must make in order to draw these lines. The discussion touches on cases of deontic constraints, cases of deontic options, and cases involved in the so-called "actualism"/"possibilism" debate. What emerges is that there are various interesting patterns relating the different commitments of consequentialism, different principles about obligation and about goodness, and different rules concerning how facts about values determine facts about obligation.
Consequentialism is a state-of-affairs-centered moral theory that finds support in state-of-affairs-centered views of value, reason, action, and desire/preference. Together these views form a mutually reinforcing circle. I map an exit route out of this circle by distinguishing between two different senses in which actions can be understood as bringing about states of affairs. All actions, reasons, desires, and values involve bringing about in the first, deflationary sense, but only some appear to involve bringing about in a second, rationalizing sense. I demonstrate that the views making up this circle hold, implausibly, that all reasons, values, desires, and actions involve bringing about in both senses, and that failure to distinguish these senses obscures the implausibility of these views as a set. I demonstrate, in addition, that the distinction blocks two common arguments that otherwise threaten to leverage us back into the consequentialist circle.
It is through our actions that we affect the way the world goes. Whenever we face a choice of what to do, we also face a choice of which of various possible worlds to actualize. Moreover, whenever we act intentionally, we act with the aim of making the world go a certain way. It is only natural, then, to suppose that an agent's reasons for action are a function of her reasons for preferring some of these possible worlds to others, such that what she has most reason to do is to bring about the possible world which, of all those available to her, is the one that she has most reason to want to obtain. This is what is known as the ‘teleological conception of practical reasons’. Whether this is the correct conception of practical reasons is important not only in its own right, but also in virtue of its potential implications for what sort of moral theory we should accept. Below, I argue that the teleological conception is indeed the correct conception of practical reasons.
An e-book devoted to 13 critical discussions of Thaddeus Metz's book "Meaning in Life: An Analytic Study", with a lengthy reply from the author. Contents:
Preface, Masahiro Morioka (i)
Précis of Meaning in Life: An Analytic Study, Thaddeus Metz (ii-vi)
Source and Bearer: Metz on the Pure Part-Life View of Meaning, Hasko von Kriegstein (1-18)
Fundamentality and Extradimensional Final Value, David Matheson (19-32)
Meaningful and More Meaningful: A Modest Measure, Peter Baumann (33-49)
Is Meaning in Life Comparable?: From the Viewpoint of ‘The Heart of Meaning in Life’, Masahiro Morioka (50-65)
Agreement and Sympathy: On Metz’s Meaning in Life, Sho Yamaguchi (66-89)
Metz’s Quest for the Holy Grail, James Tartaglia (90-111)
Meaning without Ego, Christopher Ketcham (112-133)
Death and the Meaning of Life: A Critical Study of Metz’s Meaning in Life, Fumitake Yoshizawa (134-149)
Metz’ Incoherence Objection: Some Epistemological Considerations, Nicholas Waghorn (150-168)
Meaning in Consequences, Mark Wells (169-179)
Defending the Purpose Theory of Meaning in Life, Jason Poettcker (180-207)
Review of Thaddeus Metz’s Meaning in Life, Minao Kukita (208-214)
A Psychological Model to Determine Meaning in Life and Meaning of Life, Yu Urata (215-227)
Assessing Lives, Giving Supernaturalism Its Due, and Capturing Naturalism: Reply to 13 Critics of Meaning in Life, Thaddeus Metz (228-278)
Recent work on consequentialism has revealed it to be more flexible than previously thought. Consequentialists have shown how their theory can accommodate certain features with which it has long been considered incompatible, such as agent-centered constraints. This flexibility is usually thought to work in consequentialism’s favor. I want to cast doubt on this assumption. I begin by putting forward the strongest statement of consequentialism’s flexibility: the claim that, whatever set of intuitions the best nonconsequentialist theory accommodates, we can construct a consequentialist theory that can do the same while still retaining whatever is compelling about consequentialism. I argue that if this is true, then most likely the non-consequentialist theory with which we started will turn out to have that same compelling feature. So while this extreme flexibility, if indeed consequentialism has it (a question I leave to the side), makes consequentialism more appealing, it makes non-consequentialism more appealing too.
Agent-relative reasons are an important feature of any nonconsequentialist moral theory. Many authors think that they cannot be accommodated within a value-first theory that understands all value as agent-neutral. In this paper, I offer a novel explanation of agent-relative reasons that accommodates them fully within an agent-neutral value-first view. I argue that agent-relative reasons are to be understood in terms of second-order value responses: when an agent acts on an agent-relative reason, she responds appropriately to the agent-neutral value of her own appropriate response to some agent-neutral value. This view helps reconcile important elements of deontology and consequentialism.
Many philosophers display relaxed scepticism about the Doctrine of Doing and Allowing (DDA) and the Doctrine of Double Effect (DDE), suspecting, without great alarm, that one or both of these Doctrines are indefensible. This relaxed scepticism is misplaced. Anyone who aims to endorse a theory of right action with Nonconsequentialist implications should accept both the DDA and the DDE. First, even to state a Nonconsequentialist theory requires drawing a distinction between respecting and promoting values. This cannot be done without accepting some deontological distinction. Second, if someone is going to accept any deontological distinction, she should accept either the DDE or the DDA or some replacement. Finally, anyone who accepts either the DDE or the DDA should accept both doctrines or a replacement of each. Unless both Doctrines can be defended or given a defensible replacement, any Nonconsequentialist is in trouble.
As an indirect ethical theory, rule consequentialism first evaluates moral codes in terms of how good the consequences of their general adoption are, and then evaluates individual actions in terms of whether or not the optimific code authorises them. There are three well-known and powerful objections to rule consequentialism’s indirect structure: the ideal world objection, the rule worship objection, and the incoherence objection. These objections are all based on cases in which following the optimific code has suboptimal consequences in the real world. After outlining the traditional objections and the cases used to support them, this paper first constructs a new hybrid version of consequentialism that combines elements of both act and rule consequentialism. It then argues that this novel view has sufficient resources for responding to the previous traditional objections to pure rule consequentialism.
In this paper, I introduce a new challenge to moral realism: the skeptical argument from moral underdetermination. The challenge arises as a consequence of two recent projects in normative ethics. Both Parfit and a group called consequentializers have independently claimed that the main traditions of normative theories can agree on the set of correct particular deontic verdicts. Nonetheless, as Dietrich and List (2017: 421–479) and I (2018: 191–221; Australas J Philos 97:511–527, 2019; Ethical Theory Moral Pract 24:999–1018, 2021a) have argued, the traditions still disagree about why these are the correct verdicts. This means that we can understand the situation in terms of an idea from the philosophy of science: the underdetermination of theory by the evidence. Yet underdetermination figures in one of the most important skeptical challenges to scientific realism. I show how an analogous skeptical argument can be constructed for the moral realm. I propose a standard form for that argument. I then defend it against three possible objections, arguing that it is at least as plausible as, if not more plausible than, its counterpart in the philosophy of science.