Virtually every aspect of self-deception, including its definition and paradigmatic cases, is a matter of controversy among philosophers. Minimally, self-deception involves a person who (a) as a consequence of some motivation or emotion, seems to acquire and maintain some false belief despite evidence to the contrary and (b) who may display behavior suggesting some awareness of the truth. Beyond this, philosophers divide over whether self-deception is intentional, whether it involves belief or some other sub- or non-doxastic attitude, whether self-deceivers are morally responsible for their self-deception, whether self-deception is morally problematic (and if it is, in what ways and under what circumstances), whether self-deception is beneficial or harmful, whether and in what sense collectives can be self-deceived (and how, if they can be self-deceived, this might affect individuals within such collectives), and whether our penchant for self-deception might be socially, psychologically or biologically adaptive or merely an accidental byproduct of our evolutionary history.
The discussion of self-deception and its associated puzzles sheds light on the ways motivation affects belief acquisition and retention, as well as other belief-like cognitive attitudes; it also prompts us to scrutinize the notions of belief and intention, and the limits of such folk-psychological concepts for adequately explaining phenomena of this sort. Self-deception also requires careful consideration of the cognitive architecture that might accommodate this apparent irrationality regarding our beliefs.
Self-deception isn’t merely a philosophically interesting puzzle but a problem of existential concern. It raises the distinct possibility that we live with distorted views that may make us strangers to ourselves and blind to the nature of our morally significant engagements.
“What is self-deception?” sounds like a straightforward question, but the more philosophers have sought to answer it, the more puzzling it has become. Traditionally, self-deception has been modeled on interpersonal deception, where A intentionally gets B to believe some proposition p, all the while knowing or believing truly that ~p. Such deception is intentional and requires the deceiver to know or believe that ~p and the deceived to believe that p. One reason for thinking self-deception is analogous to interpersonal deception of this sort is that it helps us to distinguish self-deception from mere error, since the acquisition and maintenance of the false belief are intentional, not accidental. It also helps to explain why we think self-deceivers are responsible for and open to the evaluation of their self-deception. If self-deception is properly modeled on interpersonal deception, self-deceivers intentionally get themselves to believe that p, all the while knowing or believing truly that ~p. On this traditional model, then, self-deceivers apparently must (1) hold contradictory beliefs (the dual-belief requirement) and (2) intentionally get themselves to hold a belief they know or believe truly to be false.
The traditional model of self-deception, however, has been thought to raise two paradoxes. One concerns the self-deceiver’s state of mind, the so-called static paradox: How can a person simultaneously hold contradictory beliefs? The other concerns the process or dynamics of self-deception, the so-called dynamic or strategic paradox: How can a person intend to deceive herself without rendering her intentions ineffective? (Mele 1987a, 2001)
The dual-belief requirement raises the static paradox since it seems to pose an impossible state of mind, namely, consciously and simultaneously believing that p and ~p. As deceiver, she believes that ~p, and, as deceived, she believes that p. Accordingly, the self-deceiver consciously believes that p and ~p. But if believing both a proposition and its negation in full awareness is an impossible state of mind, self-deception, as it has traditionally been understood, seems impossible as well. Static paradoxes also arise regarding motivation, intention, emotion, and the like insofar as self-deceivers seem to harbor psychological states of these kinds that are deeply incompatible (Funkhouser 2019).
The requirement that the self-deceiver intentionally gets herself to hold a belief she knows to be false raises the dynamic or strategic paradox since it seems to involve the self-deceiver in an impossible project, namely, both deploying and being duped by some deceitful strategy. As deceiver, she must be aware that she’s deploying a deceitful strategy; but, as deceived, she must be unaware of this strategy for it to be effective. And yet it’s difficult to see how the self-deceiver could fail to be aware of her intention to deceive. A strategy known to be deceitful seems bound to fail. How could I be taken in by your efforts to get me to believe something false if I know what you’re up to? But if it’s impossible to be taken in by a strategy one knows is deceitful, then, again, self-deception, as it has traditionally been understood, seems to be impossible as well.
These paradoxes have led a minority of philosophers to be skeptical that self-deception is conceptually possible or even coherent (Paluch 1967; Haight 1980; Kipp 1980). Borge (2003) contends that accounts of self-deception inevitably give up central elements of our folk-psychological notions of “self” or “deception” to avoid paradox, leaving us to wonder whether this framework itself is what gets in the way of explaining the phenomenon. Such skepticism toward the concept may seem warranted, given the obvious paradoxes involved. Most philosophers, however, have sought some resolution to these paradoxes instead of giving up on the notion itself, both because empirical evidence suggests that self-deception is not only possible but pervasive (Sahdra and Thagard 2003) and because the concept does seem to pick out a distinct kind of motivated irrationality.
Philosophical accounts of self-deception roughly fall into two main groups: those that maintain that the paradigmatic cases of self-deception are intentional and those that deny this. Call these approaches intentionalist and revisionist, respectively. Intentionalists find the model of intentional interpersonal deception apt since it helps to explain the selectivity of self-deception and the apparent responsibility of self-deceivers, as well as to provide a clear way of distinguishing self-deception from other sorts of motivated belief, such as wishful thinking. To avoid paradox, these approaches introduce a variety of divisions that shield the deceiving from the deceived mind. Revisionists are skeptical of these divisions and the ‘psychological exotica’ (Mele 2001) apparently needed to avoid the static and dynamic paradoxes. Instead, they argue that revision of the intention requirement, the belief requirement, or both offers a simpler account of self-deception that avoids the paradoxes raised by modeling it on intentional interpersonal deception.
The chief problem facing intentional models of self-deception is the dynamic paradox, namely, that it seems impossible to form an intention to get oneself to believe what one currently disbelieves or believes is false. For one to carry out an intention to deceive oneself, one must know what one is doing; to succeed, one must be ignorant of this same fact. Intentionalists agree that self-deception is intentional or, at least, robustly purposive, but divide over whether it requires holding contradictory beliefs and, thus, over the specific content of the alleged intention involved (see §3.2 Revision of Belief). Insofar as even the bare intention to acquire the belief that p for reasons having nothing to do with one’s evidence for p seems unlikely to succeed if directly known, most intentionalists introduce some sort of temporal or psychological partition to insulate self-deceivers from their deceptive stratagems. When self-deceivers are not consciously aware of what they truly believe or intend, it’s easier to see how they can play the roles of the deceiver and the deceived. By dividing the mind into parts, temporally or psychologically, these approaches seek to show that self-deception does not involve paradox.
Some intentionalists argue that self-deception is a complex, temporally extended process during which a self-deceiver can consciously set out to deceive herself that p, knowing or believing that ~p, and along the way lose her belief that ~p, either forgetting her original deceptive intention entirely or regarding it as having, albeit accidentally, brought about the true belief she would have arrived at anyway (Sorensen 1985; Bermúdez 2000). So, for instance, an official involved in some illegal behavior might destroy any records of this behavior and create evidence that would cover it up (diary entries, emails, and the like), knowing that she will likely forget having done these things over the next few months. When her activities are investigated a year later, she has forgotten her tampering efforts and, based upon her falsified evidence, comes to believe falsely that she was not involved in the illegal activities of which she is accused. Here, the self-deceiver need never simultaneously hold contradictory beliefs even though she intends to bring it about that she believes that p, which she regards as false at the outset of the process of deceiving herself and true at its completion.
The self-deceiver need not even forget her original intention to deceive. Take an atheist who sets out to get herself to believe in God because it seems the best bet if God turns out to exist. She might well remember such an intention at the end of the process and deem that by God’s grace even this misguided path led her to the truth. What enables the intention to succeed in such cases is the operation of what Johnston (1988) terms ‘autonomous means’ (e.g., the normal degradation of memory, the tendency to believe what one practices, etc.), not the continued awareness of the intention, hinting that the process may be subintentional (see §3.1 Revision of Intention).
While such temporal partitioning accounts appear to avoid the static and dynamic paradoxes, many, if not most, cases of self-deception aren’t of this temporally extended type. Regularly, self-deception seems to occur instantaneously (Jordan 2022), as when a philosopher self-deceives that her article is high quality even while reading the substantive and accurate criticisms in the rejection letter from the prestigious peer-reviewed journal she submitted it to (example due to Mele 2001). Additionally, many of these temporally extended cases lack the distinctive opacity, indirection, and tension associated with garden-variety cases of self-deception (Levy 2004).
Another strategy employed by intentionalists is the division of the self into psychological parts that play the role of the deceiver and deceived, respectively. These strategies range from positing strong division in the self, where the deceiving part is a relatively autonomous subagency capable of belief, desire, and intention (Rorty 1988), to more moderate division, where the deceiving part still constitutes a separate center of agency (Pears 1984, 1986, 1991), to the relatively modest division of Davidson (1982, 1986), where there need only be a boundary between conflicting attitudes and intentions.
Such divisions are prompted in large part by the acceptance of the dual-belief requirement. It isn’t simply that self-deceivers hold contradictory beliefs, which, though strange, isn’t impossible, since one can believe that p and believe that ~p without believing that p & ~p. The problem such theorists face stems from the appearance that the belief that ~p motivates and thus forms a part of the intention to bring it about that one acquires and maintains the false belief that p (Davidson 1986). So, for example, the Nazi official’s recognition that his actions implicate him in serious evil motivates him to implement a strategy to deceive himself into believing he is not so involved; he can’t intend to bring it about that he holds such a false belief if he doesn’t recognize it is false, and he wouldn’t want to bring such a belief about if he didn’t recognize the evidence to the contrary. So long as this is the case, the deceiving self, whether it constitutes a separate center of agency or something less robust, must be hidden from the conscious self being deceived if the self-deceptive intention is to succeed.
While these psychological partitioning approaches seem to resolve the static and dynamic puzzles, they do so by introducing a picture of the mind that raises puzzles of its own. On this point, there appears to be consensus even among intentionalists that self-deception can and should be accounted for without invoking speculative or stipulative divisions not already used to explain non-self-deceptive behavior, what Talbott (1995) calls ‘innocent’ divisions. That said, recent, if controversial, research (e.g., Bargh and Morsella 2008; Hassin, Bargh, and Zimmerman 2009; Huang and Bargh 2014) seems to support the possibility of the sort of robust unconscious but flexible goal pursuit that could explain the way self-deceivers are able to pursue their deceptive goal while retaining the beliefs necessary for navigating the shifting evidential terrain (see Funkhouser and Barrett 2016, 2017 and Doody 2016 for skepticism about the applicability of this research to self-deception). If this kind of research shows that no ‘psychological exotica’ are necessary to explain self-deception, there is less pressure to deflate the phenomenon in ways that minimize the active, strategic role self-deceivers play in the process.
A number of philosophers have moved away from modeling self-deception directly on intentional interpersonal deception, opting instead to revise either the intention or the belief requirement that traditional intentionalist models assume. Those revising the intention requirement typically treat self-deception as a species of motivationally biased belief, thus avoiding the problems involved with intentionally deceiving oneself. Call these non-intentionalist and deflationary approaches.
Those revising the belief requirement do so in a variety of ways. Some posit other, non-doxastic or quasi-doxastic attitudes toward the proposition involved (‘misrepresentation’ Jordan 2020, 2022; ‘hope,’ ‘suspicion,’ ‘doubt,’ ‘anxiety’ Archer 2013; ‘besires’ Egan 2009; ‘pretense’ Gendler 2007; ‘imagination’ Lazar 1999). Others alter the content of the proposition believed (Holton 2001; Funkhouser 2005; Fernández 2013), while others suggest the doxastic attitudes involved are indeterminate, somehow ‘in-between believing’ (Schwitzgebel 2001; Funkhouser 2009) or subject to shifting degrees of credulity throughout the process of self-deception (Chan and Rowbottom 2019). Call these revision of belief approaches.
Deflationary approaches focus primarily on the process of self-deception, while the revision of belief approaches focus on the product. A revision of either of these aspects, of course, has ramifications for the other. For example, if self-deception doesn’t involve belief, but some other non-doxastic attitude (product), then one may well be able to intentionally enter that state without paradox (process). This section considers non-intentional and deflationary approaches and the worries such approaches raise (§3.1). It also considers revision of belief approaches (§3.2).
Non-intentionalists argue that most ‘garden-variety’ cases of self-deception can be explained without adverting to subagents, or unconscious beliefs and intentions, which, even if they resolve the static and dynamic puzzles of self-deception, raise puzzles of their own. If such non-exotic explanations are available, intentionalist explanations seem unwarranted and unnecessary.
Since the central paradoxes of self-deception arise from modeling self-deception on intentional interpersonal deception, non-intentionalists suggest this model be jettisoned in favor of one that takes ‘to be deceived’ to be nothing more than believing falsely or being mistaken in believing something (Johnston 1988; Mele 2001). For instance, Sam mishears that it will be a sunny day and relays this misinformation to Joan with the result that she believes it will be a sunny day. Joan is deceived into believing it will be sunny, and Sam has deceived her, albeit unintentionally. Initially, such a model may not appear promising for self-deception since simply being mistaken about p or accidentally causing oneself to be mistaken about p doesn’t seem to be self-deception at all but some sort of innocent error. Non-intentionalists, however, argue that in cases of self-deception, the false belief is not accidental but, rather, motivated by desire (Mele 2001), emotion (Lazar 1999), anxiety (Johnston 1988; Barnes 1997), or some other attitude regarding p or related to p. So, for instance, when Allison believes, against the preponderance of evidence available to her, that her daughter is not having learning difficulties, non-intentionalists will explain the various ways she misreads the evidence by pointing to such things as her desire that her daughter not have learning difficulties, her fear that she has such difficulties, or anxiety over this possibility. In such cases, Allison’s self-deceptive belief that her daughter is not having learning difficulties fulfills her desire, quells her fear, or reduces her anxiety, and it’s this function, not an intention, that explains why her belief formation process is biased. Allison’s false belief is not an innocent mistake but a consequence of her motivational states.
Non-intentionalists divide over the dual-belief requirement. Some accept the requirement, seeing the persistent efforts to resist the conscious recognition of the unwelcome truth or to reduce the anxiety generated by this recognition as characteristic of self-deception (Bach 1981; Johnston 1988). So, in Allison’s case, her belief that her daughter is having learning difficulties, along with her desire that this not be the case, motivates her to employ means to avoid this thought and to believe the opposite.
Others, however, argue the needed motivation can as easily be supplied by uncertainty or ignorance whether p, or suspicion that ~p (Mele 2001; Barnes 1997). Thus, Allison need not hold any opinion regarding her daughter’s having learning difficulties for her false belief to count as self-deception since it’s her regarding evidence in a motivationally biased way in the face of evidence to the contrary, not her recognition of this evidence, that makes her belief self-deceptive. Accordingly, Allison needn’t intend to deceive herself nor believe at any point that her daughter, in fact, has learning difficulties. If we think someone like Allison is self-deceived, then self-deception requires neither contradictory beliefs nor intentions regarding the acquisition or retention of the self-deceptive belief. Such approaches are ‘deflationary’ in the sense that they take self-deception to be explicable without reaching for exotic cognitive architecture since neither intentions nor dual-beliefs are required to account for self-deception. (For more on the dual-belief requirement, see §3.2 Revision of Belief.)
Mele (2001, 2012) has offered the most fully articulated deflationary account, and his view has been the target of the most scrutiny, so it is worth stating what he takes to be the jointly sufficient conditions for S’s entering self-deception in acquiring the belief that p:

1. The belief that p which S acquires is false.
2. S treats data relevant, or at least seemingly relevant, to the truth value of p in a motivationally biased way.
3. This biased treatment is a nondeviant cause of S’s acquiring the belief that p.
4. The body of data possessed by S at the time provides greater warrant for ~p than for p.
5. S consciously believes at the time that there is a significant chance that ~p.
6. S’s acquiring the belief that p is a product of “reflective, critical reasoning,” and S is wrong in regarding that reasoning as properly directed.
Mele (2012) added the last two conditions to clarify his account in view of some of the criticisms of this kind of approach addressed in the next section.
To support non-intentionalism, some have looked to the purposive mechanisms of deception operating in non-human organisms as a model (Smith 2014), while others have focused on neurobiological mechanisms triggered by affect to explain the peculiarly purposive responses to evidence involved in self-deception (Lauria et al. 2016).
Critics contend these deflationary accounts do not adequately distinguish self-deception from other sorts of motivated believing (such as wishful thinking), nor can they explain the peculiar selectivity associated with self-deception, its characteristic ‘tension,’ the way it involves a failure of self-knowledge, or the agency of the self-deceiver.
Self-Deception and Wishful Thinking: What distinguishes wishful thinking from self-deception, according to intentionalists, just is that the latter is intentional while the former is not (Bermúdez 2000). Specifically, wishful thinking does not seem ‘deceptive’ in the requisite sense. Non-intentionalists respond that what distinguishes wishful thinking from self-deception is that self-deceivers recognize evidence against their self-deceptive belief whereas wishful thinkers do not (Bach 1981; Johnston 1988) or that they merely possess, without recognizing it, greater counterevidence than wishful thinkers (Mele 2001). In either case, self-deceivers exert more agency than wishful thinkers over their belief formation. In wishful thinking, motivation triggers a belief formation process in which the person does not play an active, conscious role, “while in self-deception the subject is a willing participant in directing cognition towards the doxastic embrace of the favored proposition” (Scott-Kakures 2002; see also Szabados 1973). While the precise relationship between wishful thinking and self-deception is clearly a matter of debate, non-intentionalists offer plausible ways of distinguishing the two that do not invoke the intention to deceive.
Self-Deception and Selectivity: Another worry, termed the ‘selectivity problem’ and originally raised by Bermúdez (1997, 2000), is that deflationary accounts don’t seem to be able to explain the selective nature of self-deception (i.e., why motivation seems only selectively to produce bias). Why is it, such intentionalists ask, that we are not rendered biased in favor of the belief that p in many cases where we have a very strong desire that p (or anxiety or some other motivation related to p)? Intentionalists argue that an intention to get oneself to acquire the belief that p offers the most straightforward answer to this question.
Others, following Mele (2001, 2012, 2020), contend that selectivity may be explained in terms of the agent’s assessment of the relative costs of erroneously believing that p or ~p (see Friedrich (1993) and Trope and Lieberman (1996) on lay hypothesis testing). Essentially, this approach suggests that the minimization of costly errors is the central principle guiding hypothesis testing. So, for example, Josh would be happier believing falsely that the gourmet chocolate he finds so delicious isn’t produced by exploited farmers than falsely believing that it is, since he desires that it not be so produced. Because Josh considers the cost of erroneously believing his favorite chocolate is tainted by exploitation to be very high (no other chocolate gives him the same pleasure), it takes a great deal more evidence to convince him that his chocolate is so tainted than it does to convince him otherwise. It’s the low subjective cost of falsely believing the chocolate is not tainted that facilitates Josh’s self-deception. But we can imagine Josh having the same strong desire that his chocolate not be tainted by exploitation and yet having a different assessment of the cost of falsely believing it’s not tainted. Say, for example, he works for an organization promoting fair trade and non-exploitive labor practices among chocolate producers and believes he has an obligation to accurately represent the labor practices of the producer of his favorite chocolate and would, furthermore, lose credibility if the chocolate he himself consumes is tainted by exploitation. In these circumstances, Josh is more sensitive to evidence that his favorite chocolate is tainted, despite his desire that it not be, since the subjective cost of being wrong is higher for him than it was before. It is the relative, subjective costs of falsely believing p and ~p that explain why desire or other motivation biases belief in some circumstances and not others.
While error-cost accounts offer some explanation of selectivity, one might still complain that even when these conditions are met, self-deception needn’t follow (Bermúdez 2000, 2017). Some non-intentionalists respond that these conditions are as complete an explanation of self-deception as is possible. Given the complexity of the factors affecting belief formation and the lack of a complete account of its etiology, we shouldn’t expect a complete explanation in each case (Funkhouser 2019; Mele 2020). Others argue that attention to the role of emotion in assessing and filtering evidence sheds further light on the process. According to such approaches, affect plays a key role in triggering the conditions under which motivation leads to self-deception (Galeotti 2016a; Lauria et al. 2016; Lauria and Preissmann 2018). Our emotionally-loaded appraisal of evidence, in combination with evidential ambiguity and our potential to cope with the threatening reality, helps to explain why motivation tips us toward self-deception. Research on the role of dopamine regulation and negative somatic markers provides some empirical support for this sort of affective model (Lauria et al. 2016; Lauria and Preissmann 2018).
Non-intentionalists also point out that intentionalists have selectivity problems of their own, since it isn’t clear why intentions are formed in some cases rather than others (Jurjako 2013) or why some intentions to acquire a self-deceptive belief succeed while others do not (Mele 2001, 2020). See Bermúdez (2017) for a response to these worries.
Self-Deception and Tension: A number of philosophers have complained that deflationary accounts fail to explain certain ‘tensions’ or conflicts supposed to be present in cases of genuine self-deception (Audi 1997; Bach 1997; Nelkin 2002; Funkhouser 2005; Fernández 2013; Jordan 2022). Take Ellen, who says she is doing well in her biology class but systematically avoids looking at the results on her quizzes and tests. She says she doesn’t need to look; she knows she didn’t miss anything. When her teacher tries to catch her after class to discuss her poor grades, she rushes off. Similarly, when she sees an email from her teacher with the subject line “your class performance,” she ignores it. The prospect of looking at the test results, talking with her teacher, or reading her email sends a flash of dread through her and a pit to her stomach, even as she projects calm and confidence. Ellen’s behavior and affective responses suggest to these critics that she knows she isn’t doing well in her biology class, despite her avowals to the contrary. Ellen’s case highlights a variety of tensions that may arise in the self-deceived. Philosophers have focused on tensions that arise with respect to evidence (since there is a mismatch between what the evidence warrants and what is believed or avowed); to unconscious doxastic attitudes (since they are at variance with those consciously held); to self-knowledge (since one has a false second-order belief about what one believes); to the various components of belief, behavioral, verbal, emotional, and physical (since what one does, says, feels, or experiences can come apart in a number of ways) (Funkhouser 2019); and to authorship (since one may be aware of authoring her self-deception while presenting it to herself as not having been authored, i.e., as the truth) (Jordan 2022). These tensions seem to be rooted in the self-deceiver’s awareness of the truth.
But, since deflationary accounts deny the dual-belief requirement, specifically, that self-deceivers must hold the true belief that ~p, it’s not clear why self-deceivers would experience tension or display behaviors in conflict with their self-deceptive belief that p.
Deflationary theorists counter that suspicion that ~p, that is, thinking there’s a significant possibility that ~p, may suffice to explain these kinds of tensions (Mele 2001, 2009, 2010, 2012). Clearly, a person who self-deceptively believes that p and suspects that ~p may experience tension; moreover, such attitudes combined with a desire that p might account for the behaviors highlighted by critics.
While these attitudes may explain some of the tension in self-deception, a number of critics think they are inadequate to explain deep-conflict cases, in which what self-deceivers say about p is seriously at odds with non-verbal behavior that justifies attributing the belief that ~p to them (Audi 1997; Patten 2003; Funkhouser 2005; Gendler 2007; Fernández 2013). While some propose these cases are just a type of self-deception that deflationary approaches cannot explain (Funkhouser 2009, 2019; Fernández 2013), others go further, suggesting these cases show that deflationary approaches aren’t accounts of self-deception at all but of self-delusion, since deflationary self-deceivers seem single-mindedly to hold the false belief (Audi 2007; Funkhouser 2005; Gendler 2007).
Some defenders of deflation acknowledge that there is a significant difference between what deflationary accounts have in view (namely, people who do not believe the unwelcome truth that ~p, having a motivation-driven, unwarranted skepticism toward it) and what deep-conflict theorists do (namely, people who know the unwelcome truth that ~p and avoid reflecting on it or encountering evidence for it), a difference that warrants questioning whether these phenomena belong to the same psychological kind; but they argue that it’s the deep-conflict cases that represent something other than self-deception. Those who unconsciously hold the warranted belief and merely say or pretend they hold the unwarranted one (Audi 1997; Gendler 2007) hardly seem deceived (Lynch 2012). They more closely resemble what Longeway (1990) calls escapism, the avoidance of thinking about what we believe in order to escape reality.
Funkhouser (2019) suggests that this dispute over the depth of conflict and intensity of tension involved in self-deception is, in part, a dispute over which cases are central, typical, and most interesting. Whether deep-conflict cases constitute a distinct psychological kind or whether they reflect people’s pre-theoretical understanding of self-deception remains unclear, but deflationary approaches seem capable of explaining at least some of the behavior such theorists insist justifies attributing an unconscious belief that ~p. Deep-conflict theorists need to explain why we should think that when one avows that p, one does not also believe it to some degree, and why the behavior in question cannot be explained by nearby proxies like suspicion or doubt that p (Mele 2010, 2012). Some deflationary theorists contend that a degree-of-belief model might render deep-conflict cases more sensible (see Shifting Degrees of Belief).
Self-Deception and Self-Knowledge: Several theorists have argued that deflationary approaches miss certain failures of self-knowledge involved in cases of self-deception. Self-deceivers, these critics argue, must hold false beliefs about their own belief formation process (Holton 2001; Scott-Kakures 2002), about what beliefs they actually hold (Funkhouser 2005; Fernández 2013), or both. Holton (2001), for instance, argues that Mele’s conditions for being self-deceived are not sufficient because they do not require self-deceivers to hold false beliefs about themselves. It seems possible for a person to acquire a false belief that p as a consequence of treating data relevant to p in a motivationally biased way, when the data available to her provide greater warrant for ~p, and still retain accurate self-knowledge. Such a person would readily admit to ignoring certain data because they would undermine a belief she cherishes. She makes no mistakes about herself, her beliefs, or her belief formation process. Such a person, Holton argues, would be willfully ignorant but not self-deceived. If, however, her strategy were sufficiently opaque to her, she would be apt to deny she was ignoring relevant evidence and even to affirm that her belief was the result of what Scott-Kakures (2002) calls “reflective, critical reasoning.” These erroneous beliefs represent a failure of self-knowledge that seems, according to these critics, essential to self-deception, and they distinguish it from wishful thinking (see above), willful blindness, and other nearby phenomena.
In response to such criticisms, Mele (2009, 2012) has offered the following sufficient condition: S's acquiring the belief that p is a product of "reflective, critical reasoning," and S is wrong in regarding that reasoning as properly directed. Some worry that meeting this condition requires a degree of awareness about one's reasons for believing that would rule out those who do not engage in reflection on their reasons for belief (Fernández 2013) and fails to capture errors about what one believes that seem essential for dealing with deep-conflict cases (Fernández 2013; Funkhouser 2005). Whether Mele's (2009) proposed condition requires too much sophistication from self-deceivers is debatable, but it suggests a way of accounting for the intuition that self-deceivers fail to know themselves without requiring them to harbor hidden beliefs or intentions.
Self-Deception and Agency: Some worry that deflationary explanations render self-deceivers victims of their own motivations; they don't seem to be agents with respect to their self-deception but unwitting patients. But, in denying that self-deceivers engage in intentional activities for the purpose of deceiving themselves, non-intentionalists needn't deny that self-deceivers engage in any intentional actions. It's open to them to accept what Lynch terms agentism: "Self-deceivers end up with their unwarranted belief as a result of their own actions motivated by the desire that p" (Lynch 2017). According to agentism, motivation affects belief by means of intentional actions, not simply by triggering biasing mechanisms. Self-deceivers can act with intentions like "find any problem with unwelcome evidence" or "find any p-supporting evidence" with a view to determining whether p is true. These kinds of intentions may explain the agency of self-deceivers and how they could be responsible for self-deception and not merely victims of their own motivations. It isn't perfectly clear whether Mele-style deflationists are committed to agentism, but even if they are, questions remain about whether such unwittingly deceptive intentional actions are enough to render self-deceivers true agents of their deception, since they think that they are engaging in actions to determine the truth, not to deceive themselves.
Approaches that focus on revising the notion that self-deception requires holding that p and ~p, the dual-belief requirement implied by traditional intentionalism, either introduce some "doxastic proxy" (Baghramian and Nicholson 2013) to replace one or both beliefs or alter the content of the self-deceiver's belief in a way that preserves tension without involving outright conflict. These approaches resolve the doxastic paradox either by denying that self-deceivers hold the unwelcome but warranted belief ~p (Talbott 1995; Barnes 1997; Bermúdez 2000; Mele 2001), denying they hold the welcome but unwarranted belief p (Audi 1982, 1988; Funkhouser 2005; Gendler 2007; Fernández 2013; Jordan 2020, 2022), denying they hold either belief p or ~p (Archer 2013; Porcher 2012), or contending they have shifting degrees of belief regarding p (Chan and Rowbottom 2019). Lauria et al. (2016) argue for an integrative approach that accommodates all these products of self-deception on the basis of empirical research on the role affect plays in assessments of evidence.
Denying the Unwelcome Belief: Both intentionalists and non-intentionalists may question whether self-deceivers must hold the unwelcome but warranted belief. For intentionalists, what's necessary is some intention to form the target belief p, and this is compatible with having no views at all regarding p (lacking any evidence for or against p) or believing p is merely possible (possessing evidence too weak to warrant belief that p or ~p) (Bermúdez 2000). Moreover, rejecting this requirement relieves the pressure to introduce deep divisions (Talbott 1995). For non-intentionalists, the focus is on how the false belief is acquired, not on whether a person also believes its contradictory. For them, it suffices that self-deceivers acquire the unwarranted false belief that p in a motivated way (Mele 2001). The selectivity and tension typical of self-deception can be explained without attributing ~p, since nearby proxies like suspicion that ~p can do the same work. Citing Rorty's (1988) case of Dr. Androvna, a cancer specialist who believes she does not have cancer but who draws up a detailed will and writes uncharacteristically effusive letters suggesting her impending departure, Mele (2009) points out that Androvna's behavior might easily be explained by her holding that there's a significant chance she has cancer. And this belief is compatible with Androvna's self-deceptive belief that she does not, in fact, have cancer.
Denying the Welcome Belief: Another strand of revision of belief approaches focuses on the welcome belief that p, proposing alternatives to this belief that function in ways that explain what self-deceivers typically say and do. Self-deceivers display ambiguous behavior that not only falls short of what one would expect from a person who believes that p but seems to justify the attribution of the belief that ~p. For instance, Androvna's letter-writing and will-preparation might be taken as reasons for attributing to her the belief that she won't recover, despite her verbal assertions to the contrary. To explain the special pattern of behavior displayed by self-deceivers, some of these theorists propose proxies for full belief, such as sincere avowal (Audi 1982, 1988); pretense (Gendler 2007); an intermediate state between belief and desire, or 'besire' (Egan 2009); some other less-than-full belief state akin to imaginations or fantasies (Lazar 1999); or simply 'misrepresentation' (Jordan 2020, 2022). Such states may guide and motivate action in many, though not all, circumstances while being relatively less sensitive to evidence than beliefs.
Others substitute a higher-order belief to explain the behavior of self-deceivers as another kind of proxy for the belief that p (Funkhouser 2005; Fernández 2013). On such approaches, self-deceivers don't believe p; they believe that they believe that p, and this false second-order belief—"I think that I believe that p"—underlies and underwrites their sincere avowal that p as well as their ability to entertain p as true. Self-deception, then, is a kind of failure of self-knowledge, a misapprehension or misattribution of one's own beliefs. By shifting the content of the self-deceptive belief to the second order, this approach avoids the doxastic paradox and explains the characteristic 'tension' or 'conflict' attributed to self-deceivers in terms of the disharmony between the first-order and second-order beliefs, the latter explaining their avowed belief and the former their behavior that goes against that avowed belief (Funkhouser 2005; Fernández 2013).
Denying both the Welcome Belief and the Unwelcome Belief: Given the variety of proxies that have been offered for both the welcome and the unwelcome belief, it should not be surprising that some argue that self-deception can be explained without attributing either belief to self-deceivers, a position Archer (2013) refers to as 'nondoxasticism.' Porcher (2012) recommends against attributing beliefs to self-deceivers on the grounds that what they believe is indeterminate, since they are, as Schwitzgebel (2001, 2010) contends, "in-between believing," neither fully believing that p nor fully not believing that p. For Porcher (2012), self-deceivers show the limits of the folk psychological concept of belief and suggest the need to develop a dispositional account of self-deception that focuses on the ways that self-deceivers' dispositions deviate from those of stereotypical full belief. Funkhouser (2009) also points to the limits of folk psychological concepts and suggests that in cases involving deep conflict between behavior and avowal, "the self-deceived produce a confused belief-like condition so that it is genuinely indeterminate what they believe regarding p." Archer (2013), however, rejects the claim that the belief is indeterminate or that folk psychological concepts are inadequate, arguing that folk psychology offers a wide variety of non-doxastic attitudes such as 'hope,' 'suspicion,' 'anxiety,' and the like that are more than sufficient to explain paradigm cases of self-deception without adverting to belief.
Shifting Degrees of Belief: Some contend that attention to shifting degrees of belief offers a better explanation of paradigm cases of self-deception—especially the behavioral tensions—and avoids the static paradox (Chan and Rowbottom 2019). In their view, many so-called non-doxastic attitudes entail some degree of belief regarding p. Shifts in these beliefs are triggered by and track shifts in information and non-doxastic propositional attitudes such as desire, fear, anxiety, and anger. For instance, a husband might initially have a high degree of belief in his spouse's fidelity that plummets when he encounters threatening evidence. His low confidence reveals afresh how much he wants her fidelity and prompts him to despair. These non-doxastic attitudes trigger another shift by focusing his attention on evidence of his spouse's love and fidelity, leaving him with a higher degree of confidence than his available evidence warrants. On this shifting belief account, the self-deceiver holds both p and ~p at varying levels of confidence that are always greater than zero (example due to Chan and Rowbottom 2019).
While revision of belief approaches suggest a number of non-paradoxical ways of thinking about self-deception, some worry that those approaches denying that self-deceivers hold the welcome but unwarranted belief that p eliminate what is central to the notion of self-deception, namely, deception (see, e.g., Lynch 2012; Mele 2010). Whatever the verdict, these revision of belief approaches suggest that our way of characterizing belief may not be fine-grained enough to account for the subtle attitudes or meta-attitudes that self-deceivers bear on the proposition in question. Taken together, these approaches make it clear that the question regarding what self-deceivers believe is by no means resolved.
'Twisted' or negative self-deception differs from 'straight' or positive self-deception because it involves the acquisition of an unwelcome as opposed to a welcome belief (Mele 1999, 2001; Funkhouser 2019). Roughly, the negatively self-deceived have a belief that is neither warranted nor wanted, in consequence of some desire, emotion, or combination of both. For instance, a jealous husband, uncertain about his wife's fidelity, comes to believe she's having an affair on scant and ambiguous evidence, something he certainly doesn't want to be the case. Intentionalists may see little problem here, at least in terms of offering a unified account, since both positive and negative self-deceivers intend to produce the belief in question, and nothing about intentional deception precludes getting the victim to believe something unpleasant or undesirable. That said, intentionalists typically see the intention to believe p as serving a desire to believe p (Davidson 1985; Talbott 1995; Bermúdez 2000, 2017), so they still face the difficult task of explaining why negative self-deceivers intend to acquire a belief they don't want (Lazar 1999; Echano 2017). Non-intentionalists have a steeper hill to climb, since it's difficult to see how someone like the anxious husband could be motivated to form a belief that he doesn't at all desire. The challenge for the non-intentionalist, then, is to supply a motivational story that explains the acquisition of such an unwelcome belief. Ideally, the aim is to provide a unified explanation for both the positive and negative varieties with a view to theoretical simplicity. Attempts to provide such an explanation now constitute a significant and growing body of literature that centers on the nature of the motivations involved and the precise role affect plays in the process.
Since the desire for the welcome belief cannot serve as the motive for acquiring the unwelcome one, non-intentionalists have sought some ulterior motive (Pears 1984), such as the reduction of anxiety (Barnes 1997) or the avoidance of costly errors (Mele 1999, 2001), or have denied that the motivation is oriented toward the state of the world at all (Nelkin 2002).
The jealous husband might be motivated to believe his wife is unfaithful because it supports the vigilance needed to eliminate all rival lovers and preserve the relationship—both of which he desires (Pears 1984). Similarly, I might be anxious about my house burning down and come to hold the unwelcome belief that I've left the burner on. Ultimately, acquiring the unwelcome belief reduces my anxiety because it prompts me to double-check the stove (Barnes 1997). Some are skeptical that identifying such ulterior desires or anxieties is always possible or necessary (Lazar 1999; Mele 2001). Many, following Mele (2001, 2003), see a simpler explanation of negative cases in terms of the way motivation, broadly speaking, affects the agent's assessment of the relative costs of error. The jealous husband—not wanting to be made a fool—sees the cost of falsely believing his spouse is faithful as high, while the person anxious about their house burning down sees the cost of falsely believing the burner is on as low. Factors such as what the agent cares about and what she can do about the situation affect these error cost assessments and may explain, in part, the conditions under which negative self-deception occurs.
Since negative self-deception often involves emotions—fear, anxiety, jealousy, rage—a good deal of attention has been given to how this component is connected to the motivation driving negative self-deception. Some, like Mele (2001, 2003), acknowledge the possibility that emotion, alone or in combination with desire, is fundamental to what motivates bias in these cases but remain reluctant to say such affective motives are essential or entirely distinguishable from the desires involved. Others worry that leaving motivation so ambiguous threatens the claim to provide a unified explanation of self-deception (Galeotti 2016a). Consequently, some have sought a more central role for affect, seeing emotion as triggering or priming motivationally biased cognition (Scott-Kakures 2000, 2002; Echano 2017) or as operating as a kind of evidential filter in a pre-attentive—non-epistemic—appraisal of threatening evidence (Galeotti 2016a). On this latter affective-filter view, our emotions may lead us to see evidence regarding a situation we consider significant to our wellbeing as ambiguous and therefore potentially distressing, especially when we deem our ability to deal with the unwelcome situation as limited. Depending on how strong our affective response to the distressing evidence is, we may end up discounting evidence for the situation we want, listening instead to our negative emotions (anxiety, fear, sorrow, etc.), with the result that we become negatively self-deceived (see Lauria and Preissmann 2018). Research on the role of dopamine regulation and negative somatic markers provides some neurobiological evidence in support of this sort of affective-filter model and its potential to offer a unified account of positive and negative self-deception (Lauria et al. 2016; Lauria and Preissmann 2018).
While the philosophers considered so far take the relevant motives to be about the state of the world, some hold that the relevant motives have to do with self-deceivers' states of mind. If this latter desire-to-believe approach is taken, then there may be just one general motive for both kinds of self-deception. Nelkin (2002, 2012), for instance, argues that the motivation for self-deceptive belief formation should be restricted to a desire to believe that p and that this is compatible with not wanting p to be true. I might want to hold the belief that I have left the stove burner on but not want it to be the case that I have actually left it on. The belief is desirable in this instance because holding it ensures it won't be true. What unifies cases of self-deception—both twisted and straight—is that the self-deceptive belief is motivated by a desire to believe that p; what distinguishes them is that twisted self-deceivers do not want p to be the case, while straight self-deceivers do. Some, like Mele (2009), argue that such an approach is unnecessarily restrictive, since a variety of other motives oriented toward the state of the world might lead one to acquire the unwelcome belief; for example, even just wanting to not be wrong about the welcome belief (see Nelkin 2012 for a response). Others, like Galeotti (2016a), worry that this desire-to-believe account renders self-deceivers' epistemic confusion into something bordering on incoherence, since it seems to imply self-deceivers want to believe p regardless of the state of the world, and such a desire seems absurd even at an unconscious level.
Whether the motive for self-deception aims at the state of the world or the state of the self-deceiver's mind, the role of affect in the process remains a significant question that further research in neurobiology may shed light upon. The role of affect has been underappreciated, but attention to it seems to be gathering, and it will no doubt guide future theorizing, especially on negative self-deception.
Even though much of the contemporary philosophical discussion of self-deception has focused on epistemology, philosophical psychology, and philosophy of mind, historically the morality of self-deception has been the central focus of discussion. Self-deception has been thought to be morally wrong or, at least, morally dangerous insofar as it represents a threat to moral self-knowledge, a cover for immoral activity, or a violation of authenticity. Some thinkers, however, belonging to what Martin (1986) calls 'the vital lie tradition,' have held that self-deception can, in some instances, be salutary, protecting us from truths that would make life unlivable (e.g., Rorty 1972, 1994). There are two major questions regarding the morality of self-deception: First, can a person be held morally responsible for self-deception, and if so, under what conditions? Second, is there anything morally problematic about self-deception, and if so, what and under what circumstances? The answers to these questions are clearly intertwined. If self-deceivers cannot be held responsible for self-deception, then their responsibility for whatever morally objectionable consequences self-deception might have will be mitigated, if not eliminated. Nevertheless, self-deception might be morally significant even if one cannot be taxed for entering into it. To be ignorant of one's moral self, as Socrates saw, may represent a great obstacle to a life well lived, whether or not one is at fault for such ignorance.
Whether self-deceivers can be held responsible for their self-deception is largely a question of whether they have the requisite control over the acquisition and maintenance of their self-deceptive belief. In general, intentionalists hold that self-deceivers are responsible, since they intend to acquire the self-deceptive belief, usually recognizing the evidence to the contrary. Even when the intention is indirect, such as when one intentionally seeks evidence in favor of p or avoids collecting or examining evidence to the contrary, self-deceivers seem to intentionally flout their own normal standards for gathering and evaluating evidence. So, minimally, they are responsible for such actions and omissions.
Initially, non-intentionalist approaches may seem to absolve the agent of responsibility by rendering the process by which she is self-deceived subintentional. If my anxiety, fear, or desire triggers a process that ineluctably leads me to hold the self-deceptive belief, it seems I cannot be held responsible for holding that belief. How could I be held responsible for processes that operate without my knowledge and are set in motion without my intention? Most non-intentionalist accounts, however, do allow for the possibility that self-deceivers are responsible for individual episodes of self-deception or for the vices of cowardice and lack of self-control from which they spring. To be morally responsible in the sense of being an appropriate target for praise or blame requires, at least, that agents have control over the actions in question. Mele (2001), for example, argues that many sources of bias are controllable and that self-deceivers can recognize and resist the influence of emotion and desire on their belief acquisition and retention, particularly in matters they deem to be important—morally or otherwise. The extent of this control, however, is an empirical question. Nelkin (2012) argues that since Mele's account leaves the content of the motivation driving the bias unrestricted, the mechanism in question is so complex that "it seems unreasonable to expect the self-deceiver to guard against" its operation.
Other non-intentionalists take self-deceivers to be responsible for certain epistemic vices, such as cowardice in the face of fear or anxiety and lack of self-control with respect to the biasing influences of desire and emotion. Thus, Barnes (1997) argues that self-deceivers "can, with effort, in some circumstances, resist their biases" (83) and "can be criticized for failing to take steps to prevent themselves from being biased; they can be criticized for lacking courage in situations where having courage is neither superhumanly difficult nor costly" (175). Whether self-deception is due to a character defect or not, ascriptions of responsibility depend upon whether the self-deceiver has control over the biasing effects of her desires and emotions.
Some question whether self-deceivers do have such control. For instance, Levy (2004) has argued that deflationary accounts of self-deception that deny the contradictory-belief requirement should not suppose that self-deceivers are typically responsible, since it is rarely the case that self-deceivers possess the requisite awareness of the biasing mechanisms operating to produce their self-deceptive belief. Lacking such awareness, self-deceivers do not appear to know when or on which beliefs such mechanisms operate, rendering them unable to curb the effects of these mechanisms, even when they operate to form false beliefs about morally significant matters. Lacking the control necessary for moral responsibility in individual episodes of self-deception, self-deceivers seem also to lack control over being the sort of person disposed to self-deception.
Non-intentionalists may respond by claiming that self-deceivers often are aware of the potentially biasing effects their desires and emotions might have and can exercise control over them (DeWeese-Boyd 2007). They might also challenge the idea that self-deceivers must be aware in the ways Levy suggests. One well-known account of control, employed by Levy, holds that a person is responsible just in case she acts on a mechanism that is moderately responsive to reasons (including moral reasons), such that were she to possess such reasons, this same mechanism would act upon those reasons in at least one possible world (Fischer and Ravizza 1999). Guidance control, in this sense, requires that the mechanism in question be capable of recognizing and responding to moral and non-moral reasons sufficient for acting otherwise. Nelkin (2011, 2012), however, argues that reasons-responsiveness should be seen as applying primarily to the agent, not the mechanism, requiring only that the agent have the capacity to exercise reason in the situation under scrutiny. The question isn't whether the biasing mechanism itself is reasons-responsive but whether the agent governing its operation is—that is, whether self-deceivers typically could recognize and respond to moral and non-moral reasons to resist the influence of their desires and emotions and instead exercise special scrutiny of the belief in question. According to Nelkin (2012), it is more reasonable to expect self-deceivers to have such a capacity if we understand the desire driving their bias as a desire to believe that p, since awareness of this sort of desire would make it easier to guard against its influence on the process of determining whether p.
Van Loon (2018) points out that discussions of moral responsibility and reasons-responsiveness have focused on actions and omissions that indirectly affect belief formation, when it is more appropriate to focus on epistemic reasons-responsiveness. Attitudinal control, not action control, is what is at issue in self-deception. Drawing on McHugh's (2013, 2014, 2017) account of attitudinal control, Van Loon argues that self-deceivers on Mele-style deflationary accounts are responsible for their self-deceptive beliefs because they recognize and react to evidence against their self-deceptive belief across a wide range of counterfactual scenarios, even though their recognition of this evidence does not alter their belief and their reaction to such evidence leads them viciously to hold the self-deceptive belief.
Galeotti (2016b) rejects the idea that control is the best way to think about responsibility in cases of self-deception, since self-deceivers on deflationary approaches seem both confused and relatively powerless over the process. Instead, following a modified version of Sher's (2009) account of responsibility, she contends that self-deceivers typically have, but fail to recognize, evidence that their acts related to belief formation are wrong or foolish and so fall below some applicable standard (epistemic, moral, etc.). The self-deceiver is under motivational pressure, not incapacitated, and on exiting self-deception, recognizes "the faultiness of her condition and feels regret and shame at having fooled herself" (Galeotti 2016b). This ex post reasons-responsiveness suggests self-deceivers are responsible in Sher's sense even if their self-deception is not intentional.
In view of these various ways of cashing out responsibility, it's plausible that self-deceivers can be morally responsible for their self-deception on deflationary approaches, and it is certainly not obvious that they couldn't be.
Insofar as it seems plausible that, in some cases, self-deceivers are apt targets for censure, what prompts this attitude? Take the case of a mother who deceives herself into believing her husband is not abusing their daughter because she can't bear the thought that he is a moral monster (Barnes 1997). Why do we blame her? Here we confront the nexus between moral responsibility for self-deception and the morality of self-deception. Understanding what obligations may be involved in cases of this sort will help to clarify the circumstances in which ascriptions of moral responsibility are appropriate. While some instances of self-deception seem morally innocuous, and others may even be thought salutary in various ways (Rorty 1994), most theorists have thought there to be something morally objectionable about self-deception or its consequences in many cases. Self-deception has been considered objectionable because it facilitates harm to others (Linehan 1982; Gibson 2020; Clifford 1877) and to oneself, undermines autonomy (Darwall 1988; Baron 1988), corrupts conscience (Butler 1726), violates authenticity (Sartre 1943), manifests a vicious lack of courage and self-control that undermines the capacity for compassionate action (Jenni 2003), violates an epistemic duty to properly ground self-ascriptions (Fernández 2013), violates a general duty to form beliefs that "conform to the available evidence" (Nelkin 2012), or violates a general duty to respect our own values (MacKenzie 2022).
Linehan (1982) argues that we have an obligation to scrutinize the beliefs that guide our actions that is proportionate to the harm to others such actions might involve. When self-deceivers induce ignorance of moral obligations in connection with the particular circumstances, likely consequences of actions, or nature of their own engagements by means of their self-deceptive beliefs, they may be culpable for negligence with respect to their obligation to know the nature, circumstances, likely consequences, and so forth of their actions (Jenni 2003; Nelkin 2012). Self-deception, accordingly, undermines or erodes agency by reducing our capacity for self-scrutiny and change (Baron 1988). If I am self-deceived about actions or practices that harm others or myself, my abilities to take responsibility and change are also severely restricted.
Joseph Butler (1726), in his well-known sermon "On Self-Deceit," emphasizes the ways in which self-deception about one's moral character and conduct—'self-ignorance' driven by inordinate 'self-love'—not only facilitates vicious actions but hinders the agent's ability to change. Such ignorance, claims Butler, "undermines the whole principle of good … and corrupts conscience, which is the guide of life" (1726). Holton (2022) explores the way our motivation to see ourselves as morally good may play a role in this lack of moral self-knowledge. Existentialist philosophers such as Kierkegaard and Sartre, in very different ways, view self-deception as a threat to 'authenticity' insofar as self-deceivers fail to take responsibility for themselves and their engagements in the past, present, and future. By alienating us from our own principles, self-deception may also threaten moral integrity (Jenni 2003). MacKenzie (2022) might be seen as capturing precisely what's wrong with this sort of inauthenticity when she contends that we have a duty to properly respect our values, even non-moral ones. Since self-deception is always about something we value in some way, it represents a failure to properly respect ourselves as valuers. Others note that self-deception also manifests a certain weakness of character that disposes us to react to fear, anxiety, or the desire for pleasure in ways that bias our belief acquisition and retention so as to serve these emotions and desires rather than accuracy (Butler 1726; Clifford 1877). Such epistemic cowardice (Barnes 1997; Jenni 2003) and lack of self-control may inhibit the ability of self-deceivers to stand by or apply moral principles they hold, by biasing their beliefs regarding particular circumstances, consequences, or engagements or by obscuring the principles themselves.
Gibson (2020), following Clifford (1877), contends that self-deception increases the risk of harm to the self and others and fosters the cultivation of epistemic vices like credulity that may have devastating social ramifications. In all these ways and a myriad of others, philosophers have found some self-deception objectionable in itself or for the consequences it has, not only on our ability to shape our lives but also for the potential harm it may cause to ourselves and others.
Evaluating self-deception and its consequences for ourselves and others is a difficult task. It requires, among other things: determining the degree of control self-deceivers have; what the self-deception is about (Is it important, morally or otherwise?); what ends the self-deception serves (Does it promote mental health or provide cover for moral wrongdoing?); how entrenched it is (Is it episodic or habitual?); and whether it is escapable (What means of correction are available to the self-deceiver?). As Nelkin (2012) contends, whether and to what degree self-deceivers are culpably negligent will ultimately need to be determined on a case-by-case basis in light of answers to such questions about the stakes at play and the difficulty involved. Others, like MacKenzie (2022), hold that every case of self-deception is a violation of a general duty to respect our own values, though some cases are more egregious than others.
If self-deception is morally objectionable for any of these reasons, we ought to avoid it. But one might reasonably ask how that is possible given the subterranean ways self-deception seems to operate. Answering this question is tricky, since strategies will vary with the analyses of self-deception and our responsibility for it. Nevertheless, two broad approaches seem to apply to most accounts, namely, the cultivation of one's epistemic virtues and the cultivation of one's epistemic community (Galeotti 2016b). One might avoid the self-deceptive effects of motivation by cultivating virtues like impartiality, vigilance, conscientiousness, and resistance to the influence of emotion, desire, and the like. Additionally, one might cultivate an epistemic community that holds one accountable and guides one away from self-deception (Rorty 1994; Galeotti 2016b, 2018). By binding ourselves to communities we have authorized to referee our belief formation in this way, we protect ourselves from potential lapses in epistemic virtue. These kinds of strategies might indirectly affect our susceptibility to self-deception and offer some hope of avoiding it.
Quite aside from the doxastic, strategic, and moral puzzles self-deception raises, there is the evolutionary puzzle of its origin. Why do human beings have this capacity in the first place? Why would natural selection allow a capacity to survive that undermines the accurate representation of reality, especially when inaccuracies about individual ability or likely risk can lead to catastrophic errors?
Many argue that self-deceptively inflated views of ourselves, our abilities, our prospects, or our control—so-called ‘positive illusions’—confer direct benefits in terms of psychological well-being, physical health, and social advancement that serve fitness (Taylor and Brown 1994; Taylor 1989; McKay and Dennett 2009). Just because ‘positive illusions’ make us ‘feel good,’ of course, it does not follow that they are adaptive. From an evolutionary perspective, whether an organism ‘feels good’ or is ‘happy’ is not significant unless it enhances survival and reproduction. McKay and Dennett (2009) argue that positive illusions are not only tolerable; evolutionarily speaking, they contribute to fitness directly. Overly positive beliefs about our abilities or chances for success appear to make us more apt to exceed our abilities and achieve success than more accurate beliefs would (Taylor and Brown 1994; Bandura 1989). According to Johnson and Fowler (2011), overconfidence is “advantageous, because it encourages individuals to claim resources they could not otherwise win if it came to a conflict (stronger but cautious rivals will sometimes fail to make a claim), and it keeps them from walking away from conflicts they would surely win.” Inflated attitudes regarding the personal qualities and capacities of one’s partners and children would also seem to enhance fitness by facilitating the thriving of offspring (McKay and Dennett 2009).
Alternatively, some argue that self-deception evolved to facilitate interpersonal deception by eliminating the cues and cognitive load that consciously lying produces and by mitigating retaliation should the deceit become evident (von Hippel and Trivers 2011; Trivers 2011, 2000, 1991). On this view, the real gains associated with ‘positive illusions’ and other self-deceptions are byproducts that serve this greater evolutionary end by enhancing self-deceivers’ ability to deceive. Von Hippel and Trivers (2011) contend that “by deceiving themselves about their own positive qualities and the negative qualities of others, people are able to display greater confidence than they might otherwise feel, thereby enabling them to advance socially and materially.” Critics have pointed to data suggesting high self-deceivers are deemed less trustworthy than low self-deceivers (McKay and Dennett 2011). Others have complained that there is little data to support this hypothesis (Dunning 2011; Van Leeuwen 2007a, 2013a), and what data there is shows us to be poor lie-detectors (Funkhouser 2019; Vrij 2011). Some challenge this theory by noting that a simple disregard for the truth would serve as well as self-deception and have the advantage of retaining true representations (McKay and Prelec 2011) or that often self-deceivers are the only ones deceived (Van Leeuwen 2007a; Khalil 2011). Van Leeuwen (2013a) raises the concern that the wide variety of phenomena identified by this theory as self-deception renders the category so broad that it is difficult to tell whether it is a unified phenomenon traceable to particular mechanisms that could plausibly be sensitive to selection pressures.
Funkhouser (2019) worries that the unconscious retention of the truth that von Hippel and Trivers (2011) propose would generate tells of its own, and that the psychological complexity of this explanation is unnecessary if the goal is to deceive others (which is itself contentious), since that goal would be easier to achieve through self-delusion. So, von Hippel and Trivers’ (2011) theory may explain self-delusion but not cases of self-deception marked by deep conflict (Funkhouser 2017b).
In view of these shortcomings, Van Leeuwen (2007a) argues the capacity for self-deception is a spandrel—a byproduct of other aspects of our cognitive architecture—not an adaptation in the strong sense of being positively selected. While Funkhouser (2017b) agrees that the basic cognitive architecture that allows motivation to influence belief formation—as well as the specific tools used to form or maintain biased beliefs—was not selected for the sake of self-deception, he maintains that it nevertheless makes sense to say, for at least some contents, that self-deception is adaptive.
Whether it is an adaptation or a spandrel, it’s possible this capacity has nevertheless been retained as a consequence of its fitness value. Lopez and Fuxjager (2011) argue that the broad research on the so-called “winner effect”—the increased probability of achieving victory in social or physical conflicts following prior victories—lends support to the idea that self-deception is at least weakly adaptive, since self-deception in the form of positive illusions, like past wins, confers a fitness advantage. Lamba and Nityananda (2014) test the theory that the self-deceived are better at deceiving others—specifically, whether overconfident individuals are overrated by others and underconfident individuals underrated. In their study, students in tutorials were asked to predict their own performance on the next assignment as well as that of each of their peers in the tutorial in terms of absolute grade and relative rank. Comparing these predictions with the actual grades given on the assignment suggests a strong positive relationship between self-deception and deception: those who self-deceptively rated themselves higher were rated higher by their peers as well. These findings lend suggestive support to the claim that self-deception facilitates the deception of others. While these studies certainly do not supply all the data necessary to support the theory that the propensity to self-deception should be viewed as an adaptation, they do suggest ways to test these evolutionary hypotheses by focusing on specific phenomena.
Whether or not the psychological and social benefits identified by these theories explain the evolutionary origins of the capacity for self-deceit, they may well shed light on its prevalence and persistence, as well as point to ways to identify contexts in which this tendency presents high collective risk (Lamba and Nityananda 2014).
Collective self-deception has received scant direct philosophical attention as compared with its individual counterpart. Collective self-deception might refer simply to a group of similarly self-deceived individuals or to a group-entity (such as a corporation, committee, jury, or the like) that is self-deceived. These alternatives reflect two basic perspectives that social epistemologists have taken on ascriptions of propositional attitudes to collectives. On the one hand, such attributions might be taken summatively, as simply an indirect way of attributing those states to members of the collective (Quinton 1975/1976). This summative understanding, then, considers attitudes attributed to groups to be nothing more than metaphors expressing the sum of the attitudes held by their members. To say that students think tuition is too high is just a way of saying that most students think so. On the other hand, such attributions might be understood non-summatively, as applying to collective entities, themselves ontologically distinct from the members upon which they depend. These so-called ‘plural subjects’ (Gilbert 1989, 1994, 2005) or ‘social integrates’ (Pettit 2003), while supervening upon the individuals comprising them, may well express attitudes that diverge from those of their individual members. For instance, saying NASA believed the O-rings on the space shuttle’s booster rockets to be safe need not imply that most or all the members of this organization personally held this belief, only that the institution itself did. The non-summative understanding, then, considers collectives to be, like persons, apt targets for attributions of propositional attitudes and potentially of moral and epistemic censure as well. Following this distinction, collective self-deception may be understood in either a summative or non-summative sense.
In the summative sense, collective self-deception refers to a self-deceptive belief shared by a group of individuals who each come to hold the self-deceptive belief for similar reasons and by similar means, varying according to the account of self-deception followed. We might call this self-deception across a collective. In the non-summative sense, the subject of collective self-deception is the collective itself, not simply the individuals comprising it. The following sections offer an overview of these forms of collective self-deception, noting the significant challenges posed by each.
Understood summatively, we might define collective self-deception as the holding of a false belief in the face of evidence to the contrary by a group of people as a result of shared desires, emotions, or intentions (depending upon the account of self-deception) favoring that belief. Collective self-deception is distinct from other forms of collective false belief—such as might result from deception or lack of evidence—insofar as the false belief issues from the agents’ own self-deceptive mechanisms (however these are construed), not the absence of evidence to the contrary or the presence of misinformation. Accordingly, the individuals constituting the group would not hold the false belief if their vision weren’t distorted by their attitudes (desire, anxiety, fear, or the like) toward the belief. What distinguishes collective self-deception from solitary self-deception is just its social context; namely, that it occurs within a group that shares both the attitudes bringing about the false belief and the false belief itself.
Merely sharing desires, emotions, or intentions favoring a belief with a group does not entail that the self-deception is properly social, since these individuals may well self-deceive regardless of the fact that their motivations are shared with others (Dings 2017; Funkhouser 2019); they are just individually self-deceiving in parallel. What makes collective self-deception social, according to Dings (2017), is that others are a means used in each individual’s self-deception. So, when a person situates herself in a group of like-minded people in response to an encounter with new and threatening evidence, her self-deception becomes social. Self-deception also becomes social, in Dings’ (2017) view, when a person influences others to make them like-minded with regard to her preferred belief, using their behavior to reinforce her self-deception. Within highly homogeneous social groups, however, it may be difficult to tell who is using the group instrumentally in these ways, especially when that use is unwitting. Moreover, one may not need to seek out such a group of like-minded people if they already comprise one’s community. In this case, those people may become instrumental to one’s self-deception simply by dint of being there to provide insulation from threatening evidence and support for one’s preferred belief. In any case, this sort of self-deception is both easier to foster and more difficult to escape, being abetted by the self-deceptive efforts of others within the group.
Virtually all self-deception has a social component, being wittingly or unwittingly supported by one’s associates (see Ruddick 1988). In the case of collective self-deception, however, the social dimension comes to the fore, since each member of the collective unwittingly helps to sustain the self-deceptive belief of the others in the group. For example, my cancer-stricken friend might self-deceptively believe her prognosis to be quite good. Faced with the fearful prospect of death, she does not form accurate beliefs regarding the probability of her full recovery, attending only to evidence supporting full recovery and discounting or ignoring altogether the ample evidence to the contrary. Caring for her as I do, I share many of the anxieties, fears, and desires that sustain my friend’s self-deceptive belief, and as a consequence, I form the same self-deceptive belief via the same mechanisms. In such a case, I unwittingly support my friend’s self-deceptive belief, and she mine—our self-deceptions are mutually reinforcing. We are collectively or mutually self-deceived, albeit on a very small scale. Ruddick (1988) calls this ‘joint self-deception,’ and it is properly social just in case each person is instrumental in the formation of the self-deceptive belief in the other (Dings 2017).
On a larger scale, sharing common attitudes, large segments of a society might deceive themselves together. For example, we share a number of self-deceptive beliefs regarding our consumption patterns. Many of the goods we consume are produced by people enduring labor conditions we do not find acceptable and in ways that we recognize are environmentally destructive and likely unsustainable. Despite our being at least generally aware of these social and environmental ramifications of our consumptive practices, we hold the overly optimistic beliefs that the world will be fine, that its peril is overstated, that the suffering caused by these exploitive and ecologically degrading practices is overblown, that our own consumption habits are unconnected to these sufferings, and even that our minimal efforts at conscientious consumption are an adequate remedy (see Goleman 1989). When self-deceptive beliefs such as these are held collectively, they become entrenched, and their consequences, good or bad, are magnified (Surbey 2004).
The collective entrenches self-deceptive beliefs by providing positive reinforcement through others sharing the same false belief, as well as by protecting its members from evidence that would destabilize the target belief. There are, however, limits to how entrenched such beliefs can become and remain. Social support cannot be the sole or primary cause of the self-deceptive belief, for then the belief would simply be the result of unwitting interpersonal deception and not the deviant belief formation process that characterizes self-deception. If the environment becomes so epistemically contaminated as to make counter-evidence inaccessible to the agent, then we have a case of simple false belief, not self-deception. Thus, even within a collective, a person is self-deceived just in case her own motivations skew the belief formation process that results in her holding the false belief. But, to bar this from being a simple case of solitary self-deception, others must be instrumental to her belief formation process such that if they were not part of that process, she would not be self-deceived (Dings 2017). For instance, I might be motivated to believe that climate change is not a serious problem and form that false belief as a consequence. In such a case, I’m not socially self-deceived, even if virtually everyone I know shares a similar motivation and belief. But say I encounter distressing evidence in my environmental science class that I can’t shake on my own. I may seek to surround myself with like-minded people, thereby protecting myself from further distressing evidence and providing myself with reassuring evidence. Now my self-deception is social, and this social component drives and reinforces my own motivations to self-deceive.
Relative to solitary self-deception, the collective variety presents greater external obstacles to avoiding or escaping self-deception and is, for this reason, more entrenched. If the various proposed psychological mechanisms of self-deception pose an internal challenge to the self-deceiver’s power to control her belief formation, then these social factors pose an external challenge to the self-deceiver’s control. Determining how superable this challenge is will affect our assessment of individual responsibility for self-deception as well as the prospects of unassisted escape from it.
Collective self-deception can also be understood from the perspective of the collective itself, in a non-summative sense. Though there are varying accounts of group belief, generally speaking, a group can be said to believe, desire, value, or the like just in case its members “jointly commit” to these things as a body (Gilbert 2005). A corporate board, for instance, might be jointly committed as a body to believe, value, and strive for whatever the CEO recommends. Such commitment need not entail that each individual board member personally endorses such beliefs, values, or goals, only that they do so as members of the board (Gilbert 2005). While philosophically precise accounts of non-summative self-deception remain largely unarticulated—an exception is Galeotti’s (2018) detailed analysis of how collective self-deception occurs in the context of politics—the possibilities mirror those of individual self-deception. When collectively held attitudes motivate a group to espouse a false belief despite the group’s possession of evidence to the contrary, we can say that the group is collectively self-deceived in a non-summative sense.
For example, Robert Trivers (2000) suggests that ‘organizational self-deception’ led to NASA’s failure to represent accurately the risks posed by the space shuttle’s O-ring design, a failure that eventually led to the Challenger disaster. The organization as a whole, he argues, had strong incentives to represent such risks as small. As a consequence, NASA’s Safety Unit mishandled and misrepresented data it possessed that suggested that under certain temperature conditions, the shuttle’s O-rings were not safe. NASA, as an organization, then, self-deceptively believed the risks posed by O-ring damage were minimal. Within the institution, however, there were a number of individuals who did not share this belief, but both they and the evidence supporting their belief were treated in a biased manner by the decision-makers within the organization. As Trivers (2000) puts it, this information was relegated “to portions of … the organization that [were] inaccessible to consciousness (we can think of the people running NASA as the conscious part of the organization).” In this case, collectively held values created a climate within NASA that clouded its vision of the data and led to its endorsement of a fatally false belief.
Collective self-deceit may also play a significant role in facilitating unethical practices by corporate entities. For example, a collective commitment by members of a corporation to maximizing profits might lead members to form false beliefs about the ethical propriety of the corporation’s practices. Gilbert (2005) suggests that such a commitment might lead executives and other members to “simply lose sight of moral constraints and values they previously held.” Similarly, Tenbrunsel and Messick (2004) argue that self-deceptive mechanisms play a pervasive role in what they call ‘ethical fading,’ acting as a kind of ‘bleach’ that renders organizations blind to the ethical dimensions of their decisions. They argue that such self-deceptive mechanisms must be recognized and actively resisted at the organizational level if unethical behavior is to be avoided. More specifically, Gilbert (2005) contends that collectively accepting that “certain moral constraints must rein in the pursuit of corporate profits” might shift corporate culture in such a way that efforts to respect these constraints are recognized as part of being a good corporate citizen. In view of the ramifications this sort of collective self-deception has for the way we understand corporate misconduct and responsibility, understanding its specific nature in greater detail remains an important task.
Collective self-deception, understood in either the summative or non-summative sense, raises significant questions, such as whether individuals within collectives bear responsibility for their self-deception or the part they play in the collective’s self-deception, and whether collective entities can be held responsible for their epistemic failures (see Galeotti 2016b, 2018 on these questions). Finally, collective self-deception prompts us to ask what means are available to collectives and their members to resist, avoid, and escape self-deception. Galeotti (2016b, 2018) argues for a variety of institutional constraints and precommitments to keep groups from falling prey to self-deception.
Given the capacity of collective self-deception to entrench false beliefs and to magnify their consequences—sometimes with disastrous results—collective self-deception is not just a philosophical puzzle; it is a problem that demands attention.
action | belief | Davidson, Donald | delusion | lying and deception: definition of | moral responsibility | Sartre, Jean-Paul | self-knowledge | weakness of will
The author would like to thank Margaret DeWeese-Boyd, Douglas Young, and the editors for their help in constructing and revising this entry.
The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054