Modalists think that knowledge requires forming your belief in a “modally stable” way: using a method that wouldn't easily go wrong, or using a method that wouldn't have given you this belief had it been false. Recent Modalist projects from Justin Clarke-Doane and Dan Baras defend a principle they call “Modal Security,” roughly: if evidence undermines your belief, then it must give you a reason to doubt the safety or sensitivity of your belief. Another recent Modalist project from Carlotta Pavese and Bob Beddor defends “Modal Virtue Epistemology”: knowledge is a belief that is maximally modally robust across “normal” worlds. We'll offer new objections to these recent Modalist projects. We will then argue for a rival view, Explanationism: knowing something is believing it because it's true. We will show how Explanationism offers a better account of undermining defeaters than Modalism, and a better account of knowledge.
A second-order conspiracy (SOC) is a conspiracy that aims to create (and typically also disseminate) a conspiracy theory. Second-order conspiracy theories (SOCT) are theories that explain the occurrence of a given conspiracy theory by appeal to a conspiracy. In this paper I argue that SOC and SOCT are useful and coherent concepts, while also having numerous philosophically interesting upshots (in terms of epistemology, explanation, and prediction). Secondly, I appeal to the nature of two specific kinds of second-order conspiracies to make the case for what has been called ‘local generalism’ (Stamatiadis-Bréhier 2023a). Specifically, I focus on so-called ‘denial industries’ to argue that the structure of these second-order conspiracies allows us to infer non-accidental generalisations about the domain of conspiracy theories. Even though it is true that there is nothing epistemically problematic with the general class of conspiracy theories, there are specific subsets of conspiracy theories that warrant immediate strong suspicion (cf. Dentith 2022). By looking at the intricate mechanisms by which these denial industries operate, we can infer that the conspiracy theories that are produced by them are epistemically unwarranted. I conclude by making some exploratory remarks about what the metaphysics of second-order conspiracies would look like.
In this paper I develop a genealogical approach for investigating and evaluating conspiracy theories. I argue that conspiracy theories with an epistemically problematic genealogy are (in virtue of that fact) epistemically undermined. I propose that a plausible type of candidate for such conspiracy theories involves what I call ‘second-order conspiracies’ (i.e. conspiracies that aim to create conspiracy theories). Then, I identify two examples involving such conspiracies: the antivaccination industry and the industry behind climate change denialism. After fleshing out the mechanisms by which these industries systematically create and disseminate specific types of conspiracy theories, I examine the implications of my proposal concerning the particularism/generalism debate and I consider the possibility of what I call local generalism. Finally, I tackle three objections. It could be objected that a problematic genealogy for T merely creates what Dentith (2022) calls ‘type-1’ (or ‘weak’) suspicion for T. I also consider a challenge according to which the genealogical method is meta-undermined, as well as an objection from epistemic laundering.
Animal ethicists have been debating the morality of speciesism for over forty years. Despite rather persuasive arguments against this form of discrimination, many philosophers continue to assign humans a higher moral status than nonhuman animals. The primary source of evidence for this position is our intuition that humans’ interests matter more than the similar interests of other animals. And it must be acknowledged that this intuition is both powerful and widespread. But should we trust it for all that? The present paper defends a negative answer to that question, based on a debunking argument. The intuitive belief that humans matter more than other animals is unjustified because it results from an epistemically defective process. It is largely shaped by tribalism, our tendency to favor ingroup members as opposed to outgroup members. And this influence is distortive for two reasons. First, tribalism evolved for reasons unrelated to moral truths; hence, it would at best produce true moral beliefs accidentally. Second, tribalism generates a vast quantity of false moral beliefs, starting with racist beliefs. Once this intuition is discarded, little evidence remains that speciesism is morally acceptable.
Disagreement and debunking arguments threaten religious belief. In this paper, I draw attention to two types of propositions and show how they reveal new ways to respond to debunking arguments and disagreement. The first type of proposition is the epistemically self-promoting proposition, which, when justifiedly believed, gives one a reason to think that one reliably believes it. Such a proposition plays a key role in my argument that some religious believers can permissibly wield an epistemically circular argument in response to certain debunking arguments. The second type of proposition is the epistemically others-demoting proposition, which, when justifiedly believed, gives one a reason to think that others are unreliable with respect to it. Such a proposition plays a key role in my argument that some religious believers can permissibly wield a question-begging argument to respond to certain types of disagreement.
Psychedelic substances elicit powerful, uncanny conscious experiences that are thought to possess therapeutic value. In those who undergo them, these altered states of consciousness often induce shifts in metaphysical beliefs about the fundamental structure of reality. The contents of those beliefs range from contentious to bizarre, especially when considered from the point of view of naturalism. Can chemically induced, radically altered states of consciousness provide reasons for or play some positive epistemic role with respect to metaphysical beliefs? In this paper, I discuss a view that has been underexplored in recent literature. I argue that psychedelic states can be rationally integrated into one’s epistemic life. Consequently, updating one’s metaphysical beliefs based on altered states of consciousness does not have to constitute an instance of epistemic irrationality.
Debunking arguments aim to undermine common sense beliefs by showing that they are not explanatorily or causally linked to the entities they are purportedly about. Rarely are facts about the aetiology of common sense beliefs invoked for the opposite aim, that is, to support the reality of entities that furnish our manifest image of the world. Here I undertake this sort of un-debunking project. My focus is on the metaphysics of ordinary physical objects. I use the view of perception as approximate Bayesian inference to show how representations of ordinary objects can be extracted from sensory input in a rational and truth-tracking manner. Drawing an analogy between perception construed as Bayesian hypothesis testing and scientific inquiry, I sketch out how some of the intuitions that traditionally inspired arguments for scientific realism also find application with regard to proverbial tables and chairs.
David Lewis (1986) criticizes moderate views of composition on the grounds that a restriction on composition must be vague, and vague composition leads, via a precisificational theory of vagueness, to an absurd vagueness of existence. I show how to resist this argument. Unlike the usual resistance, however, I do not jettison precisificational views of vagueness. Instead, I blur the connection between composition and existence that Lewis assumes. On the resulting view, in troublesome cases of vague composition, there is an object, which definitely exists, about which it is vague whether the relevant borderline parts compose it.
Many moral debunking arguments are driven by the idea that the correlation between our moral beliefs and the moral truths is a big coincidence, given a robustly realist conception of morality. One influential response is that the correlation is not a coincidence because there is a common explainer of our moral beliefs and the moral truths. For example, the reason that I believe that I should feed my child is because feeding my child helps them to survive, and natural selection instills in me beliefs and dispositions that help my children survive since that is conducive to my genes continuing through the generations. Similarly, the reason that it's morally good to feed my child is because it helps them to survive, and survival is morally valuable. But if we look at some cases from scientific practice, and from everyday life, we can see, I argue, why this response fails. A correlation can be coincidental even if there is a common explainer. I give an account of the nature of coincidence that draws upon recent literature on scientific explanation and argue that the correlation between moral belief and moral truth is a coincidence, even given such common explainers. And I use this to defend a certain form of debunking argument.
In the recent literature on the nature of knowledge, a rivalry has emerged between modalism and explanationism. According to modalism, knowledge requires that our beliefs track the truth across some appropriate set of possible worlds. Modalists tend to focus on two modal conditions: sensitivity and safety. According to explanationism, knowledge requires only that beliefs bear the right sort of explanatory relation to the truth. In slogan form: knowledge is believing something because it’s true. In this paper, we aim to vindicate explanationism from some recent objections offered by Gualtiero Piccinini, Dario Mortini, and Kenneth Boyce and Andrew Moon. Together, these authors present five purported counterexamples to the sufficiency of the explanationist analysis for knowledge. In addition, Mortini devises a clever argument that explanationism entails the violation of a plausible closure principle on knowledge. We will argue that explanationism is innocent of all these charges against it, and we hope that the strength of the defense we offer of explanationism is evidence in its favor, and a reason to investigate explanationism further as the long-elusive truth about the nature of knowledge.
In a recent paper, Nader Shoaibi (2024) makes a valuable contribution to the discussion on genealogies and conspiracy theories (CTs) by focusing on a particular kind of genealogy: what he calls 'political genealogies'. Roughly, political genealogies are not so much interested in the epistemic warrant (or rationality) of a given belief or theory. Rather, their function is to illuminate the social and political conditions that give rise to the spread of (unwarranted) CTs. Shoaibi also notes that such genealogies have an important normative dimension: by drawing on the social/political conditions surrounding CTs we are also invited to engage in a ‘constructive strategy’ concerning CT-believers. This strategy, according to Shoaibi, can be cashed out in terms of ‘world-travelling’ which, as per feminist philosopher Maria Lugones, involves radical humility and playfulness. I agree with a lot of what Shoaibi has to say in his paper. I find his notion of CT political genealogies philosophically fruitful since it carves out what I take to be novel conceptual space in the literature. And I welcome the appeal to ‘world-travelling’ when dealing with proponents of unwarranted CTs. In this piece I respond to some of Shoaibi’s worries against epistemic genealogies, and I raise a concern about the possibility of political genealogies being hijacked by malicious actors. I also make some preliminary remarks about what could be called 'genealogical pluralism' about CTs, while also arguing for the primacy of epistemic genealogies.
Skepticism about grounding is the view that ground-theoretic concepts shouldn’t be used in metaphysical theorizing. Possible reasons for adopting this attitude are numerous: perhaps grounding is unintelligible; or perhaps it’s never instantiated; or perhaps it’s just too heterogeneous to be theoretically useful. Unfortunately, as currently pursued, the debate between grounding enthusiasts and skeptics is insufficiently structured. This paper’s purpose is to impose a measure of conceptual rigor on the debate by offering an opinionated taxonomy of views with a reasonable claim to being “skeptical.” I argue that carving up logical space into pro- and anti-grounding views isn’t especially helpful; rather, we should recognize various degrees of ground-theoretic involvement depending on how inflationary our understanding of the theoretical term ‘ground’ is.
Genealogies of belief have dominated recent philosophical discussions of genealogical debunking at the expense of genealogies of concepts, which has in turn focused attention on genealogical debunking in an epistemological key. As I argue in this paper, however, this double focus encourages an overly narrow understanding of genealogical debunking. First, not all genealogical debunking can be reduced to the debunking of beliefs—concepts can be debunked without debunking any particular belief, just as beliefs can be debunked without debunking the concepts in terms of which they are articulated. Second, not all genealogical debunking is epistemological debunking. Focusing on concepts rather than beliefs brings distinct forms of genealogical debunking to the fore that cannot be comprehensively captured in terms of epistemological debunking. We thus need a broader understanding of genealogical debunking, which encompasses not just epistemological debunking, but also what I shall refer to as metaphysical debunking and ethical debunking.
Sometimes, scientific models are either intended to or plausibly interpreted as representing nonactual but possible targets. Call this “hypothetical modeling”. This paper raises two epistemological challenges concerning hypothetical modeling. To begin with, I observe that given common philosophical assumptions about the scope of objective possibility, hypothetical models are fallible with respect to what is objectively possible. There is thus a need to distinguish between accurate and inaccurate hypothetical modeling. The first epistemological challenge is that no account of the epistemology of hypothetical models seems to cohere with the most characteristic function of scientific modeling in general, i.e., surrogative representation. The second epistemological challenge is a version of “reliability challenges” familiar from other areas. There is a challenge to explain how hypothetical models could be a reliable guide to what is possible, given that they are not and cannot be compared against their nonactual targets and updated accordingly. I close with some brief remarks on possible solutions to these challenges.
At the core of the recent debate over moral debunking arguments is a disagreement between explanationist and modalist approaches. Explanationists think that the lack of an explanatory connection between our moral beliefs and the moral truths, given a non-naturalist realist conception of morality, is a reason to reject non-naturalism. Modalists disagree. They say that, given non-naturalism, our beliefs have the appropriate modal features with respect to truth -- in particular they are safe and sensitive -- so there is no problem. There is something of a stand-off here. I argue, though, that by looking at the role explanatory and modal factors have to play in theory choice more generally, and, in particular, by considering the practice of theory choice in science, we can see that the explanationist is right. The lack of an explanatory connection between our moral beliefs and the moral truths is a reason to reject non-naturalist realism about morality.
The neo-Aristotelian conception of essence has gained prominence in recent analytic metaphysics. I will present an epistemic problem for such essentialists. The challenge centers on the following question: assuming there are essence-facts, what relationship between essence-facts and essence-attitudes explains why those attitudes’ correctness is not coincidental? It is a debunking challenge—what I call the explanatory challenge. The explanatory challenge is distinctive for at least three reasons: (i) it does not centrally concern the domain in question containing abstract objects, or having evolutionary etiologies, (ii) it targets neo-Aristotelian essentialism, not merely essentialism insofar as it is modally analyzable, and (iii) the challenge comes in three grades—weak, moderate, and strong. Although debunking challenges do not pose a problem unique to essentialism, they have yet to be explicitly applied to essentialism in detail. I aim to redress this omission here. I begin by explaining the challenge’s grades, paying particular attention to a species of the moderate grade, which generates a more specific challenge I call the deflationary challenge. Then, I’ll survey David Oderberg’s and E.J. Lowe’s epistemologies of essence. I’ll argue that their accounts fail the weak challenge and that this leaves them especially vulnerable to the moderate challenge, where this involves positive reason to think essence-facts do not, in fact, play an explanatory role in forming one's essence-attitudes. Lastly, I’ll propose that Amie Thomasson’s deflationary account of identity-conditions might offer a deflationary challenge for essentialism.
When and why does awareness of a belief's genealogy render it irrational to continue holding that belief? According to explanationism, awareness of a belief’s genealogy gives rise to an epistemic defeater when and because it reveals that the belief is not explanatorily connected to the relevant worldly facts. I argue that an influential recent version of explanationism, due to Korman and Locke, incorrectly implies that it is not rationally permissible to adopt a “sparse” ontology of worldly facts or states of affairs. I then propose a new explanationist account of genealogical defeat capable of accommodating rational belief in ontological sparsity. According to my account, awareness of a belief’s genealogy gives rise to a defeater when and because it reveals that the belief is not explanatorily connected to its truthmaker.
Common sense has it that animals matter considerably less than humans; the welfare and suffering of a cow, a chicken or a fish are important but not as much as the welfare and suffering of a human being. Most animal ethicists reject this “speciesist” view as mere prejudice. In their opinion, there is no difference between humans and other animals that could justify such unequal consideration. In the opposite camp, advocates of speciesism have long tried to identify a difference that would fit the bill, but they have consistently seemed to fail. In light of this, some naturally began to appeal to Moorean arguments: the case against speciesism must be flawed somehow, these philosophers maintain, because speciesism is supported by a strong and widespread intuition. This chapter draws on recent findings in social psychology to criticize this defence of speciesism. It argues that the strong and widespread intuition that humans count more than animals is epistemically defective because it is causally shaped by a pair of irrelevant influences: cognitive dissonance and tribalism. Accordingly, it is no suitable basis for a Moorean argument.
This chapter explores global debunking arguments, debunking arguments that aim to give one a global defeater. I defend Alvin Plantinga’s view that global defeaters are possible and, once gained, are impossible to escape by reasoning. They must therefore be extinguished by other means: epistemically propitious actions, luck, or grace. I then distinguish between three types of global defeater—pure-undercutters, undercutters-because-rebutters, and undercutters-while-rebutters—and systematically consider how one can deflect such defeaters. Lastly, since I draw insights from the literature on perhaps the most widely discussed global debunking argument, Plantinga’s evolutionary argument against naturalism, I end up responding to many potential problems for it. This includes the so-called conditionalization problem, as well as those raised by Bergmann (2002), Law (2012), Deem (2018), Hendricks and Anderson (2020), and Wielenberg (in Craig and Wielenberg (2021)).
We are often confronted with attempts to debunk our aesthetic tastes, like: “You only like jazz because you’re a pretentious hipster,” or, “Your love of the Western canon is just colonialism speaking.” Such debunking arguments often try to give a socio-historical accounting, intended to de-legitimize our tastes by showing that they arise from processes uninterested in real aesthetic value. One common version is the Art Populist debunk: that claims of aesthetic expertise in esoteric arts are really just elitist gatekeeping. Then we have its mirror twin, the Art Expert debunk: that the populist love of simple arts serves the interests of profiteering entertainment corporations dispensing simplified slop. Suppose we accept one of these debunking arguments. How are we supposed to go on? Are we supposed to not like the things we like, or force ourselves to choke down food we don’t enjoy? And suppose we accept both of these debunking arguments — what then? Are we supposed to simply give up our grip on beauty altogether? This is hard to imagine. Aesthetic debunking arguments have a harder time getting a grip on us, because aesthetic life involves a distinctively tight relationship between our felt aesthetic phenomena and our aesthetic judgments. Aesthetic life gives us phenomenal resistance to debunking arguments, when our felt loves lag behind our endorsed beliefs. I suggest a way through that offers a livable accommodation. We may be able to treat such debunking arguments, not as targeting the positive content of our taste, but as targeting the boundaries and limitations on our taste. That is, a Populist may not be able to debunk my deep felt love of opera, but they may be able to debunk my dismissal of dance-pop. In this case, we can take on board both the Art Expert’s and the Art Populist’s debunking arguments, as targeting different varieties of narrowness and dismissal. These debunkings, then, move us, not towards aesthetic nihilism, but aesthetic expansionism.
Following Anthony Downs’s classic economic analysis of democracy, it has been widely noted that most voters lack the incentive to be well-informed. Recent empirical work, however, suggests further that political partisans can display selectively lazy or biased reasoning. Unfortunately, political knowledge seems to exacerbate, rather than mitigate, these tendencies. In this paper, I build on these observations to construct a more general skeptical challenge which affects what I call creedal beliefs. Such beliefs share three features: (i) the costs to the individual of being wrong are negligible, (ii) the beliefs are subject to social scrutiny, and (iii) the evidential landscape relevant to the beliefs is sufficiently complex so as to make easy verification difficult. Some philosophers and social scientists have recently argued that under such conditions, beliefs are likely to play a signaling, as opposed to a navigational role, and that our ability to hold beliefs in this way is adaptive. However, if this is right, I argue there is at least a partial debunker for such beliefs. Moreover, this offers, I suggest, one way to develop the skeptical challenge based on etiological explanation that John Stuart Mill presents in On Liberty when he claims that the same causes which lead someone to be a devout Christian in London would have made them a Confucian in Peking. Finally, I contend that this skeptical challenge is appropriately circumscribed so that it does not over-extend in an implausible way.
In this paper, I develop a theory on which each of a thing’s abundant properties is immanent in that thing. On the version of the theory I will propose, universals are abundant, each instantiated universal is immanent, and each uninstantiated universal is such that it could have been instantiated, in which case it would have been immanent. After setting out the theory, I will defend it from David Lewis’s argument that such a combination of immanence and abundance is absurd. I will then advocate the theory on the grounds that it accomplishes all of Lewis’s “new work” while providing a gain in parsimony and a new account of fine-grained content. I will close with a discussion of how the theory also affords a new reply to two objections to uninstantiated universals: Armstrong’s charge that they are inconsistent with naturalism, and a Benacerraf-Field-style objection about epistemic access.
Evolutionary debunking arguments purport to show that, if moral realism is true, all of our moral beliefs are unjustified. In this paper, I respond to two of the most enduring objections that have been raised against these arguments. The first objection claims that evolutionary debunking arguments are self-undermining, because they cannot be formulated without invoking epistemic principles, and epistemic principles are just as vulnerable to debunking as our moral beliefs. I argue that this objection suffers from several defects, the most serious of which is that it has the unpalatable consequence that we should never revise our moral beliefs in response to evidence that our capacity for normative cognition is globally impaired. The second objection, which comes to us from Katia Vavova, claims that evolutionary debunking arguments are doomed to fail, because they attempt to show that our moral beliefs are unreliable without making any assumptions about the nature of morality, and this is impossible. I argue, to the contrary, that the etiological higher-order evidence cited by debunking arguments can give us good reason to think that our moral beliefs are unreliable, even if we make no assumptions about what morality is like.
Fifteen years ago, Sharon Street and Richard Joyce advanced evolutionary debunking arguments against moral realism, which purported to show that the evolutionary history of our moral beliefs makes moral realism untenable. These arguments have since given rise to a flurry of objections; the epistemic principles Street and Joyce relied upon, in particular, have come in for a number of serious challenges. My goal in this paper is to develop a new account of evolutionary debunking which avoids the pitfalls Street and Joyce encountered and responds to the most pressing objections they faced. I begin by presenting a striking thought experiment to serve as an analogy for the evolution of morality; I then show why calibrationist views of higher-order evidence are crucial to the evolutionary debunking project; I outline a new rationale for why finding out that morality was selected to promote cooperation suggests that our moral judgments are unreliable; and I explain why evolutionary debunking arguments do not depend on our having a dedicated faculty for moral cognition. All things considered, I argue, evolutionary debunking arguments against moral realism are on relatively secure footing – provided, at least, that we accept a calibrationist account of higher-order evidence.
Several authors believe that metaethicists ought to leave their comfortable armchairs and engage with serious empirical research. This paper provides partial support for the opposing view, that metaethics is rightly conducted from the armchair. It does so by focusing on debunking arguments against robust moral realism. Specifically, the article discusses arguments based on the possibility that if robust realism is correct, then our beliefs are most likely insensitive to the relevant truths. These arguments seem at first glance to be dependent on empirical research to learn what our moral beliefs are sensitive to. It is argued, however, that this is not so. The paper then examines two thought experiments that have been thought to demonstrate that debunking arguments might depend on empirical details and argues that the conclusion is not supported.
Presentism is, roughly, the ontological view that only the present exists. Among the philosophers engaged in the metaphysics of time there is wide agreement that presentism is intuitive (or commonsensical) and that its intuitiveness counts as evidence in its favour. My contribution has two purposes: first, defending the view that presentism is intuitive from some recent criticisms; second, putting forth a genealogical (or debunking) argument aimed at depriving presentism’s intuitiveness of the evidential value commonly granted to it.
In “The Evolutionary Debunking of Quasi-Realism,” Neil Sinclair and James Chamberlain present a novel answer that quasi-realists can provide to a version of the reliability challenge in ethics—which asks for an explanation of why our moral beliefs are generally true—and in so doing, they examine whether evolutionary arguments can debunk quasi-realism. Although reliability challenges differ from evolutionary debunking arguments (EDAs) in several respects, there may well be a connection between them. For the explanatory premise of an EDA may state that a particular theory of beliefs of a certain kind does not, or cannot, provide a plausible account of why those beliefs might be generally true, and its epistemic premise may state that, if that is the case, then the beliefs in question have a negative epistemic status or the theory is false inasmuch as, if it were true, it would lead to those beliefs having such a negative epistemic status. The quasi-realist can answer the reliability challenge by claiming that, when we form our moral beliefs through a process of well-informed impartial reflection, we form them in response to the non-moral features of things on which depend the moral features they have. Hence, when we form beliefs by means of such a process, we are most likely forming true moral beliefs.
One of the most surprisingly prominent themes in Robert Brandom’s A Spirit of Trust is the role of genealogical explanations. Brandom sees genealogies or ‘debunking arguments’ as significant because of their ability to deprive our discursive acts of the normative status they require to be genuinely discursive or conceptual. His solution to the problem of genealogy is to offer rationalizing reconstructions of others’ discursive acts, which credit them with normative status. He calls this “forgiveness”. In this paper, I provide some additional conceptual resources to explicate Brandom’s notions of genealogy and forgiveness. These resources allow me to discriminate between two alternate and seemingly incompatible ways of responding to genealogies. One way depends on rationalizing explanations that still attempt to attribute commitments to their subjects, the other avoids making such attributions in favor of explaining commitments only in terms of norms accepted by the rationalizer. I argue that Brandom’s work sometimes promotes the latter response to genealogy but that this tendency should be eliminated from the account.
Cognitive science of religion has inspired several debunking arguments against theistic belief. Hans Van Eyghen’s book Arguing from Cognitive Science of Religion is the first monograph devoted to answering such arguments. This article focuses on Van Eyghen’s responses to two widely discussed debunking arguments, one by Matthew Braddock and another by John Wilkins and Paul Griffiths. Both responses have potential but also face problems. Even if Van Eyghen manages to show that these authors have not fully excluded the possibility of noninferential theistic belief being underpinned by reliable belief-forming processes, he fails to offer convincing reasons to think the processes are in fact reliable. A positive argument for their reliability might ultimately have to be based on evidence for God’s existence, namely, theistic arguments. The question of the rationality of religious belief (de jure) thus cannot be isolated from the question of God’s existence (de facto).
Metaethical debunking arguments often conclude that no moral belief is epistemically justified. Early versions of such arguments largely relied on metaphors and analogies and left the epistemology of debunking underspecified. Debunkers have since come to take on substantial and broad-ranging epistemological commitments. The plausibility of metaethical debunking has thereby become entangled in thorny epistemological issues. In this thesis, I provide a critical yet sympathetic evaluation of the prospects and challenges facing such arguments in light of this development. In doing so, I address the following central question: how could genealogical information undermine the epistemic justification of moral beliefs? In Part I, I begin answering the central question by extracting explicit and implicit epistemic principles from three popular debunking arguments. These arguments, due to Gilbert Harman, Richard Joyce, and Sharon Street, generate principles concerning ontological parsimony, explanatory dispensability, epistemic insensitivity, lack of epistemic safety, unexplained reliability, epistemic coincidences, and explanatory constraints on rational belief. Having set out the principles tasked with explaining how genealogical information undermines, Part II of the thesis seeks to evaluate whether debunking arguments built on them succeed. To this end, I consider two types of challenges faced by such arguments. First, there are strategies that attempt to block global moral debunking arguments. I argue that one popular such strategy, the so-called ‘third-factor strategy’, has been misunderstood. When understood correctly, it is of no help in answering debunking arguments. I then flesh out an alternative and more promising strategy for blocking such arguments. I then turn to internal challenges facing debunkers, particularly those who rely on ‘explanationist’ principles.
I argue that explanationist debunking arguments, as well as most others, fall prey to one or more of four internal challenges: the implausibility of first-order epistemic principles, the threat of overgeneralization, the threat of self-defeat, and the need for costly metaepistemic commitments. I conclude that current debunking arguments fail to establish that no moral belief is justified. By analyzing why existing arguments fail, I develop two conditions of adequacy that debunkers must satisfy in order to navigate the internal challenges successfully. I end by suggesting future directions that debunkers should pursue to rehabilitate the prospects for global moral debunking arguments.
This dissertation is about the partiality problem for fitting attitude (FA) analyses of value. More specifically, it is about whether and how the problem might be resolved. In Chapter 1, I set the stage by offering a short introduction to the topic and a rationale for investigating it. I then give a more detailed account of FA analyses of value in Chapter 2, including a brief outline of their history and appeal, before explaining more thoroughly just what the partiality problem is for such analyses in Chapter 3, where I distinguish between several different versions of the problem, identify two broad strategies for resolving it, and put forward six evaluation criteria in terms of which the plausibility of a resolution can be assessed. According to FA analyses of value, what it is for something to have value is, roughly, for it to be fitting to have a certain sort of attitude toward it. To have a positive value, such as being good, is to be something that it is fitting to favor (to have a pro-attitude toward); to have a negative value, such as being bad, is to be something that it is fitting to disfavor (to have a con-attitude toward); and to have a relational or comparative value like being better than, worse than, equally good as, or equally bad as something else is to be something that it is fitting to favor more than, disfavor more than, favor equally much as, or disfavor equally much as the other thing. The partiality problem is, roughly, that there are situations in which two things seem to have equal value, which, if things are as they seem and FA analyses of value are correct, means that it is fitting to respond to them equally – to be impartial between them. However, in these situations, there are some people for whom it does not seem fitting to respond in that way because they stand in a certain relationship to one of the bearers of value. For these people, it seems that the fitting response is an unequal or partial response, not an impartial one.
If such intuitions about fitting partiality and value are veridical, then how can FA analyses of value be correct? The rest of the dissertation is then a critical, in-depth study of a wide range of responses to the partiality problem. In Chapter 4, I examine the first of the two response strategies, which is to deny the veracity of at least one of the intuitions that generate the problem without making any changes to the FA analyses themselves. I then turn to the second strategy, which is to try to accommodate the intuitions by revising the FA analyses in some suitable manner. In Chapters 5–13, I consider the possibility and plausibility of solving the problem by revising the FA analyses on the basis of: the distinction between intrinsic and extrinsic properties (Chapter 5), the notion of basic value (Chapter 6), the notion of agent-relative value (Chapter 7), the notion of an impartial observer (Chapter 8), the distinction between moral and non-moral fittingness (Chapter 9), a sui generis notion of fittingness (Chapter 10), the distinction between pro tanto and overall fittingness (Chapter 11), the distinction between actions and attitudes (Chapter 12), and the notions of evaluative distance and perspective (Chapter 13). My assessment is that none of these responses is entirely plausible. Thus, in Chapter 14, I conclude that although the partiality problem may not be irresolvable, it is yet to receive a fully plausible resolution. I also discuss the ramifications of this conclusion.