Introspection, as the term is used in contemporary philosophy of mind, is a means of learning about one’s own currently ongoing, or perhaps very recently past, mental states or processes. You can, of course, learn about your own mind in the same way you learn about others’ minds—by reading psychology texts, by observing facial expressions (in a mirror), by examining readouts of brain activity, by noting patterns of past behavior—but it’s generally thought that you can also learn about your mind introspectively, in a way that no one else can. But what exactly is introspection? No simple characterization is widely accepted.
Introspection is a central concept in epistemology, since introspective knowledge is often thought to be particularly secure, maybe even immune to skeptical doubt. Introspective knowledge is also often held to be more immediate or direct than sensory knowledge. Both of these putative features of introspection have been cited in support of the idea that introspective knowledge can serve as a ground or foundation for other sorts of knowledge.
Introspection is also central to philosophy of mind, both as a process worth study in its own right and as a court of appeal for other claims about the mind. Philosophers of mind offer a variety of theories of the nature of introspection; and philosophical claims about consciousness, emotion, free will, personal identity, thought, belief, imagery, perception, and other mental phenomena are often thought to have introspectible consequences or to be susceptible to introspective verification. For similar reasons, empirical psychologists too have discussed the accuracy of introspective judgments and the role of introspection in the science of the mind.
(This entry focuses on the cognitive process of introspection and its role in self-knowledge. For a treatment of self-knowledge more generally, with a focus on its distinctiveness and the various means of acquiring it, see the entry on self-knowledge.)
Introspection is generally regarded as a process by means of which we learn about our own currently ongoing, or very recently past, mental states or processes. Not all such processes are introspective, however: Few would say that you have introspected if you learn that you’re angry by seeing your facial expression in the mirror. But it’s unclear and contentious exactly what more is required for a process to qualify as introspective. A relatively restrictive account of introspection might require introspection to involve attention to and direct detection of one’s ongoing mental states; but many philosophers think attention to or direct detection of mental states is impossible or at least not present in some paradigmatic instances of introspection.
For a process to qualify as “introspective” as the term is ordinarily used in contemporary philosophy of mind, it must minimally meet the following three conditions:
The mentality condition: Introspection is a process that generates, or is aimed at generating, knowledge, judgments, or beliefs about mental events, states, or processes, and not about affairs outside one’s mind, at least not directly. In this respect, it is different from sensory processes that normally deliver information about outward events or about the non-mental aspects of one’s body. The border between introspective and non-introspective knowledge can seem to blur with respect to bodily self-knowledge such as proprioceptive knowledge about the position of one’s limbs or nociceptive knowledge about one’s pains. But in principle the introspective part of such processes, pertaining to judgments about one’s mind—e.g., that one has the feeling as though one’s arms were crossed or of toe-ishly located pain—can be distinguished from the non-introspective judgment that one’s arms are in fact crossed or one’s toe is being pinched.
The first-person condition: Introspection is a process that generates, or is aimed at generating, knowledge, judgments, or beliefs about one’s own mind only and no one else’s, at least not directly. Any process that in a similar manner generates knowledge of one’s own and others’ minds is by that token not an introspective process. (Some philosophers have contemplated peculiar or science fiction cases in which we might introspect the contents of others’ minds directly—for example in telepathy or when two people’s brains are directly wired together—but the proper interpretation of such cases is disputable; see Gertler 2000; Langland-Hassan 2015.)
The temporal proximity condition: Introspection is a process that generates knowledge, beliefs, or judgments about one’s currently ongoing mental life only; or, alternatively (or perhaps in addition), immediately past (or even future) mental life, within a certain narrow temporal window (sometimes called the specious present; see the entry on temporal consciousness). You may know that you were thinking about Montaigne yesterday during your morning walk, but you cannot know that fact by current introspection alone—though perhaps you can know introspectively that you currently have a vivid memory of having thought about Montaigne. Likewise, you cannot know by introspection alone that you will feel depressed if your favored candidate loses the election in November—though perhaps you can know introspectively what your current attitude is toward the election or what emotion starts to rise in you when you consider the possible outcomes. Whether the target of introspection is best thought of as one’s current mental life or one’s immediately past mental life may depend on one’s model of introspection: On self-detection models of introspection, according to which introspection is a causal process involving the detection of a mental state (see Section 2.2 below), it’s natural to suppose that a brief lapse of time will transpire between the occurrence of the mental state that is the introspective target and the final introspective judgment about that state, which invites (but does not strictly imply) the idea that introspective judgments generally pertain to immediately past states. On self-shaping and self-fulfillment models of introspection, according to which introspective judgments create or embed the very state introspected (see Sections 2.3.1 and 2.3.2 below), it seems more natural to think that the target of introspection is one’s current mental life or perhaps even the immediate future.
Few contemporary philosophers of mind would call a process “introspective” if it does not meet some version of the three conditions above, though in ordinary language the temporal proximity condition may sometimes be violated. (For example, in ordinary speech we might describe as “introspective” a process of thinking about why you abandoned a relationship last month or whether you’re really as kind to your children as you think you are.) However, many philosophers of mind will resist calling a process that meets these three conditions “introspective” unless it also meets some or all of the following three conditions:
The directness condition: Introspection yields judgments or knowledge about one’s own current mental processes relatively directly or immediately. It’s difficult to articulate exactly what directness or immediacy involves in the present context, but some examples should make the import of this condition relatively clear. Gathering sensory information about the world and then drawing theoretical conclusions based on that information should not, according to this condition, count as introspective, even if the process meets the three conditions above. Seeing that a car is twenty feet in front of you and then inferring from that fact about the external world that you are having a visual experience of a certain sort does not, by this condition, count as introspective. However, as we will see in Section 2.3.4 below, those who embrace transparency theories of introspection may reject at least strong formulations of this condition.
The detection condition: Introspection involves some sort of attunement to or detection of a pre-existing mental state or event, where the introspective judgment or knowledge is (when all goes well) causally but not ontologically dependent on the target mental state. For example, a process that involved creating the state of mind that one attributes to oneself would not be introspective, according to this condition. Suppose I say to myself in silent inner speech, “I am saying to myself in silent inner speech, ‘haecceities of applesauce’”, without any idea ahead of time how I plan to complete the embedded quotation. Now, what I say may be true, and I may know it to be true, and I may know its truth (in some sense) directly, by a means by which I could not know the truth of anyone else’s mind. That is, it may meet all four of the conditions above, and yet we may resist calling such a self-attribution introspective. Self-shaping (Section 2.3.2 below), expressivist (Section 2.3.3 below), and transparency (Section 2.3.4 below) accounts of self-knowledge emphasize the extent to which our self-knowledge often does not involve the detection of pre-existing mental states; and because something like the detection condition is implicitly or explicitly accepted by many philosophers, some philosophers (including some but not all of those who endorse self-shaping, expressivist, and/or transparency views) would regard it as inappropriate to regard such accounts of self-knowledge as accounts of introspection proper.
The effort condition: Introspection is not constant, effortless, and automatic. We are not every minute of the day introspecting. Introspection involves some sort of special reflection on one’s own mental life that differs from the ordinary un-self-reflective flow of thought and action. The mind may monitor itself regularly and constantly without requiring any special act of reflection by the thinker—for example, at a non-conscious level certain parts of the brain or certain functional systems may monitor the goings-on of other parts of the brain and other functional systems, and this monitoring may meet all five conditions above—but this sort of thing is not what philosophers generally have in mind when they talk of introspection. However, this condition, like the directness and detection conditions, is not universally accepted. For example, philosophers who think that conscious experience requires some sort of introspective monitoring of the mind and who think of conscious experience as a more or less constant feature of our lives may reject the effort condition (Armstrong 1968, 1999; Lycan 1996).
Though not all philosophical accounts that are put forward by their authors as accounts of “introspection” meet all of conditions 4–6, most meet at least two of those conditions. Because of differences in the importance accorded to conditions 4–6, it is not unusual for authors with otherwise similar accounts of self-knowledge to differ in their willingness to describe their accounts as accounts of “introspection”.
Accounts of introspection differ in what they treat as the proper targets of the introspective process. No major contemporary philosopher believes that all of mentality is available to be discovered by introspection. For example, the cognitive processes involved in early visual processing and in the detection of phonemes are generally held to be introspectively impenetrable and nonetheless (in some important sense) mental (Marr 1983; Fodor 1983). Many philosophers also accept the existence of unconscious beliefs or desires, in roughly the Freudian sense, that are not introspectively available (e.g., Gardner 1993; Velleman 2000; Moran 2001; Wollheim 2003; though see Lear 1998). Although in ordinary English usage we sometimes say we are “introspecting” when we reflect on our character traits, contemporary philosophers of mind generally do not believe that we can directly introspect character traits in the same sense in which we can introspect some of our other mental states (especially in light of research suggesting that we sometimes have poor knowledge of our traits, reviewed in Taylor and Brown 1988; Vazire 2010).
The two most commonly cited classes of introspectible mental states are attitudes, such as beliefs, desires, evaluations, and intentions, and conscious experiences, such as sensory experiences and the experiential aspects of emotion and imagery. (These two groups may not be disjoint: Depending on other aspects of their view, a philosopher may regard some or all conscious experiences as involving attitudes, and/or they may regard attitudes as things that are or can be consciously experienced.) It of course does not follow from the fact (if it is a fact) that some attitudes are introspectible that all attitudes are, or from the fact that some conscious experiences are introspectible that all conscious experiences are. Some accounts of introspection focus on attitudes (e.g., Nichols and Stich 2003), while others focus on conscious experiences (e.g., Hill 1991; Goldman 2006; Schwitzgebel 2012); and it is sometimes unclear to what extent philosophers intend their remarks about the introspection of one type of target to apply to the other type. There is no guarantee that the same mechanism or process is involved in introspecting all the different potential targets.
Generically, this article will describe the targets of introspection as mental states, though in some cases it may be more apt to think of the targets as processes rather than states. Also, in speaking of the targets of introspection as targets, no presupposition is intended of a self-detection view of introspection as opposed to a self-shaping or containment or expressivist view (see Section 2 below). The targets are simply the states self-ascribed as a consequence of the introspective process if the process works correctly, or, if the introspective process fails, the states that would have been self-ascribed.
Though philosophers have not explored the issue very thoroughly, accounts also differ regarding the products of introspection. Most philosophers hold that introspection yields something like beliefs or judgments about one’s own mind, but others prefer to characterize the products of introspection as “thoughts”, “representations”, “awareness”, “acquaintance”, and so on. For ease of exposition, this article will describe the products of the introspective process as judgments, without meaning to assume the falsity of alternative views.
This section will outline several approaches to self-knowledge. Not all deserve to be called introspective, but an understanding of introspection requires an appreciation of this diversity of approaches—some for the sake of the contrast they provide to introspection proper and some because it’s disputable whether they should be classified as introspective. These approaches are not exclusive. Surely there is more than one process by means of which we can obtain self-knowledge.
Symmetrical or self/other parity accounts of self-knowledge treat the processes by which we acquire knowledge of our own minds as essentially the same as the processes by which we acquire knowledge of other people’s minds. A simplistic version of this view is that we know both our own minds and the minds of others only by observing outward behavior. For example, we might know we like Thai food because we’ve noticed that we sometimes drive all the way across town to get it, or we might know that we’re happy because we see or feel ourselves smiling.
On such a view, introspection strictly speaking is impossible, since the first-person condition on introspection (condition 2 in Section 1.1) cannot be met: There is no distinctive process that generates knowledge of one’s own mind only. Twentieth-century behaviorist principles tended to encourage this view, but no prominent treatment of self-knowledge accepts this view in its most extreme and simple form. (The closest is probably Bem 1972.) Advocates of parity accounts sometimes characterize our knowledge of our own minds as arising from “theories” that we apply equally to ourselves and others (as in Nisbett and Ross 1980; Gopnik 1993a, 1993b). Consequently, this approach to self-knowledge is sometimes called the theory theory.
Nisbett, Wilson, and their co-authors (Nisbett and Bellows 1977; Nisbett and Wilson 1977; Nisbett and Ross 1980; Wilson 2002) argue for self/other parity in our knowledge of the bases or causes of our own and others’ attitudes and behavior, describing cases in which people seem to show poor knowledge of these bases or causes. For example, people queried in a suburban shopping center about why they chose a particular pair of stockings appeared to be ignorant of the influence of position on that choice, including explicitly denying that influence when it was suggested to them. People asked to rate various traits of supposed job applicants were unaware that their judgments of the applicant’s flexibility were greatly influenced by having been told that the applicant had spilled coffee during the job interview (see also Section 4.2.2 below). In such cases, Nisbett and his co-investigators found that participants’ descriptions of the causal influences on their own behavior closely mirrored the influences hypothesized by outside observers. From this finding, they infer that the same mechanism drives the first-person and third-person attributions, a mechanism that does not involve any special private access to the real causes of one’s attitudes and behavior and instead relies heavily on intuitive psychological theories.
Gopnik (1993a, 1993b; Gopnik and Meltzoff 1994) deploys developmental psychological evidence to support a parity theory of self-knowledge. She points to evidence that for a wide variety of mental states, including believing, desiring, and pretending, children develop the capacity to ascribe those states to themselves at the same age they develop the capacity to ascribe those states to others. For example, children do not seem to be able to ascribe to themselves past false beliefs (after having been tricked by the experimenter) any earlier than they can ascribe false beliefs to other people. This appears to be so even when that false belief is in the very recent past, having only just been revealed to be false. According to Gopnik, this pervasive parallelism shows that we are not given direct introspective access to our beliefs, desires, pretenses, and the like. Rather, we must develop a “theory of mind” in light of which we interpret evidence underwriting our self-attributions. The appearance of the immediate givenness of one’s mental states is, Gopnik suggests, merely an “illusion of expertise”: Experts engage in all sorts of tacit theorizing that they don’t recognize as such—the expert chess player for whom the strength of a move seems simply visually given, the doctor who immediately intuits cancer in a patient. Since we are all experts at mental state attribution, we don’t recognize the layers of theory underwriting the process.
The empirical evidence behind self/other parity views remains contentious (Nichols and Stich 2003; Carruthers 2011; Cassam 2014). Furthermore, though Nisbett, Wilson, and Gopnik all stress the parallelism between mental state attribution to oneself and others and the inferential and theoretical nature of such attributions, they all also leave some room for a kind of self-awareness different in kind from the awareness one has of others’ mental lives. Thus, none endorses a purely symmetrical or self/other parity view. Nisbett and Wilson emphasize that we lack access only to the “processes” or causes underlying our behavior and attitudes. Our attitudes themselves and our current sensations, they say, can be known with “near certainty” (1977, 255; though contrast Nisbett and Ross 1980, 200–202, which seems sympathetic to skepticism about special access even to our attitudes). Gopnik allows that we “may be well equipped to detect certain kinds of internal cognitive activity in a vague and unspecified way”, and that we have “genuinely direct and special access to certain kinds of first-person evidence [which] might account for the fact that we can draw some conclusions about our own psychological states when we are perfectly still and silent”, though we can “override that evidence with great ease” (1993a, 11–12). In the analytic philosophical tradition, Ryle (1949) similarly emphasizes the importance of outward behavior in the self-attribution of mental states while acknowledging the presence of “twinges”, “thrills”, “tickles”, and even “silent soliloquies”, which we know of in our own case and which do not appear to be detectable by observing outward behavior. However, none of these authors develops an account of this apparently more direct self-knowledge. Their theories are consequently incomplete.
Regardless of the importance of behavioral evidence and general theories in driving our self-attributions, in light of the considerations that drive Nisbett, Wilson, Gopnik, and Ryle to these caveats, it is probably impossible to sustain a view on which there is complete parity between first- and third-person mental state attributions. There must be some sort of introspective, or at least uniquely first-person, process.
Self/other parity views can also be restricted to particular subclasses of mental states: Any mental state that can only be known by cognitive processes identical to the processes by which we know about the same sorts of states in other people is a state to which we have no distinctively introspective access. States for which parity is often asserted include personality traits, unconscious motives, early perceptual processes, and the bases of our decisions (see Section 4.2.1 below for more on this). We learn about these states in ourselves, perhaps, in much the same way we learn about such states in other people. Carruthers (2011; see also Section 4.2.2 below) presents a case for parity of access to propositional attitudes like belief and desire (in contrast to inner speech, visual imagery, and the like, which he holds to be introspectible).
Etymologically, the term “introspection”—from the Latin “looking into”—suggests a perceptual or quasi-perceptual process. Locke writes that we have a faculty of “Perception of the Operation of our own Mind” which, “though it be not Sense, as having nothing to do with external Objects; yet it is very like it, and might properly enough be call’d internal Sense” (1690 [1975, 105], italics suppressed). Kant (1781/1997) says we have an “inner sense” by which we learn about mental aspects of ourselves that is in important ways parallel to the “outer sense” by which we learn about outer objects.
But what does it mean to say that introspection is like perception? In what respects? As Shoemaker (1994a, 1994b, 1994c) observes, in a number of respects introspection is plausibly unlike perception. Both friends and foes of self-detection accounts have tended to agree that introspection does not involve a distinctive phenomenology of “introspective appearances” (Shoemaker 1994a, 1994b, 1994c; Lycan 1996; Rosenthal 2001; Siewert 2012; though Kriegel forthcoming might be an exception): The visual experience of redness has a distinctive sensory quality or phenomenology that would be difficult or impossible to convey to a blind person; analogously for the olfactory experience of smelling a banana, the auditory experience of hearing a pipe organ, and the experience of touching something painfully hot. To be analogous to sensory experience in this respect, introspection would have to generate an analogously distinctive phenomenology—some quasi-sensory phenomenology, in addition to, say, the visual phenomenology of seeing red, that is the phenomenology of the introspective appearance of the visual phenomenology of seeing red. This would seem to require two layers of appearance in introspectively attended sensory perception: a visual appearance of the outward object and an introspective appearance of that visual appearance. (This isn’t to say, however, that introspection, or at least conscious introspection, doesn’t involve some sort of “cognitive phenomenology”—if there is such a thing—of the sort that accompanies conscious thoughts in general: See Bayne and Montague, eds., 2011.) Proponents of quasi-perceptual models of introspection concede the existence of such disanalogies (e.g., Lycan 1996).
We might consider an account of introspection to be quasi-perceptual, or less contentiously to be a “self-detection” account, if it meets the first five conditions described in Section 1.1—that is, the mentality condition, the first-person condition, the temporal proximity condition, the directness condition, and the detection condition. One aspect of the detection condition deserves special emphasis here: Detection requires the ontological independence of the target mental state and the introspective judgment—the two states will be causally connected (assuming that all has gone well) but not constitutively connected. (Shoemaker (1994a, 1994b, 1994c) calls models of self-knowledge that meet this aspect of the detection condition “broad perceptual” models.)
Self-detection accounts of self-knowledge seem to put introspection epistemically on a par with sense perception. To many philosophers, this has seemed a deficiency in these accounts. A long and widespread philosophical tradition holds that self-knowledge is epistemically special, that we have specially “privileged access” to—perhaps even infallible or indubitable knowledge of—at least some portion of our mentality, in a way that is importantly different in kind from our knowledge of the world outside us (see Section 4 below). Both self/other parity accounts (Section 2.1 above) and self-detection accounts (this section) of self-knowledge either deny any special epistemic privilege or characterize that privilege as similar to the privilege of being the only person to have an extended view of an object or a certain sort of sensory access to that object. Other accounts of self-knowledge, to be discussed later in Section 2.3, are more readily compatible with, and often to some extent driven by, more robust notions of the epistemic differences between self-knowledge and knowledge of environmental objects.
Armstrong (1968, 1981, 1999) laid the groundwork for simple, quasi-perceptual, monitoring accounts of introspection. Armstrong describes introspection as a “self-scanning process in the brain” (1968, 324), and he stresses what he sees as the important ontological distinction between the state of awareness produced by the self-scanning procedure and the target mental state of which one is aware by means of that scanning—the distinction, for example, between one’s pain and one’s introspective awareness of that pain. Armstrong also appears to hold that the quasi-perceptual introspective process proceeds at a fairly low level cognitively—quick and simple, typically without much interference by or influence from other cognitive or sensory processes, and approximately continuous. Note that in calling reflexive self-monitoring “introspection”, Armstrong violates the effort condition from Section 1.1, which requires that introspection not be constant and automatic.
Morales (forthcoming) offers a similarly simple monitoring account, inspired by the framework of “signal detection theory” in the psychology of perception (Green and Swets 1966). Morales characterizes introspection as a matter of focusing one’s attention on current conscious experiences of varying “phenomenal magnitude” or “strength” in order to produce judgments about them. The strength and accuracy of the introspective response normally covary with the strength of the target conscious experience.
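The signal-detection framing can be illustrated with a minimal sketch. This is not Morales’s own formalism: the function name, the noise level, and the decision criterion are all illustrative assumptions, meant only to show how, on such a model, the accuracy of introspective judgment would covary with the strength of the target experience.

```python
import random

def introspective_judgment(phenomenal_magnitude, criterion=0.5, noise_sd=0.3):
    """Judge that an experience is present when its phenomenal magnitude,
    perturbed by internal noise, exceeds a decision criterion."""
    evidence = phenomenal_magnitude + random.gauss(0.0, noise_sd)
    return evidence > criterion

random.seed(0)
# Proportion of correct "present" judgments for a weak vs. a strong experience:
weak = sum(introspective_judgment(0.4) for _ in range(1000)) / 1000
strong = sum(introspective_judgment(1.2) for _ in range(1000)) / 1000
# On this toy model, the stronger experience is detected far more reliably.
```

The design mirrors classical signal detection theory: the same noisy-evidence-plus-criterion structure used to model outward perception is applied to inward judgments about experience.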
Nichols and Stich (2003) employ a model of the mind on which having a propositional attitude such as a belief or desire is a matter of having a representation stored in a functionally-defined (and metaphorical) “belief box” or “desire box” (see also the entries on belief and functionalism). On their account, self-awareness of these attitudes typically involves the operation of a simple “Monitoring Mechanism” that merely takes the representations from these boxes, appends an “I believe that …”, “I desire that …”, or whatever (as appropriate) to that representation, and adds it back into the belief box. For example, if I desire that my father flies to Hong Kong on Sunday, the Monitoring Mechanism can copy the representation in my desire box with the content “my father flies to Hong Kong on Sunday” and produce a new representation in my belief box—that is, create a new belief—with the content “I desire that my father flies to Hong Kong on Sunday”. Nichols and Stich also propose an analogous but somewhat more complicated mechanism (they leave the details unspecified) that takes percepts as its input and produces beliefs about those percepts as its output.
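The copy-and-append operation of the Monitoring Mechanism can be sketched as a toy program. The box representation and the `monitor` helper are illustrative inventions, not Nichols and Stich’s own formalism; the sketch simply makes vivid how mechanically simple the proposed mechanism is.

```python
# Toy sketch of the "Monitoring Mechanism": propositional attitudes are
# representations stored in functionally defined "boxes" (here, plain sets).
belief_box = set()
desire_box = {"my father flies to Hong Kong on Sunday"}

def monitor(source_box, attitude_verb):
    """Copy each representation in source_box, prepend the matching
    attitude operator, and deposit the result in the belief box."""
    for content in source_box:
        belief_box.add(f"I {attitude_verb} that {content}")

monitor(desire_box, "desire")
# belief_box now contains the new belief:
# "I desire that my father flies to Hong Kong on Sunday"
```

Note that the mechanism needs no analysis of the copied content at all, which is precisely the feature Goldman’s criticism (discussed below) targets: nothing in the copying step explains how the mechanism knows which box, and hence which attitude verb, is appropriate.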
Nichols and Stich emphasize that this Monitoring Mechanism does not operate in isolation, but often cooperates or competes with a second means of acquiring self-knowledge, which involves deploying theories along the lines suggested by Gopnik (see Section 2.1.1 above). Nichols and Stich argue that autistic people have very poor theoretical knowledge of the mind, as suggested by their very poor performance on “theory of mind” tasks (tasks like assessing when someone will have a false belief), and yet they succeed in monitoring their mental states, as shown by their ability to describe their mental states in autobiographies and other forms of self-report. Conversely, Nichols and Stich argue that schizophrenic people remain excellent theorizers about mental states but monitor their own mental states very poorly—for example, when they fail to recognize certain actions as their own and struggle to report, or deny the existence of, ongoing thoughts. If this view is empirically correct, the pattern of “double dissociation” suggests that theoretical inference and self-monitoring are distinct and separable processes.
Goldman (2006) criticizes the account of Nichols and Stich (see Section 2.2.1 above) for not describing how the Monitoring Mechanism detects the attitude type of the representation (belief, desire, etc.). He argues that a simple mechanism could not discern the dispositional and causal relational facts in virtue of which an attitude is the type it is (see the entry on functionalism). Goldman also argues that the Nichols and Stich account leaves unclear how we can discern the strength or intensity of our beliefs, desires, and other propositional attitudes. Goldman’s positive account starts with the idea that introspection is a quasi-perceptual process that involves attending to individual mental states, which are then classified into broad categories (similarly, in visual perception we can classify seen objects into broad categories). However, on Goldman’s view this process can only generate introspective knowledge of the general types of mental states (such as belief, happiness, bodily sensation) and some properties of those mental states (such as degree of confidence for belief, and “a multitude of finely delineated categories” for bodily sensation). Specific contents, especially of attitudes like belief, are too manifold, Goldman suggests, for pre-existing classificational categories to exist for each one. Rather, we represent the specific content of such mental states by “redeploying” the representational content of the mental state—that is, simply copying the content of the introspected mental state into the content of the introspective belief or judgment (somewhat as in the Nichols and Stich account). Finally, Goldman argues that some mental states require “translation” into the mental code appropriate to belief if they are to be introspected. Visual representations, he suggests, have a different format or mental code than beliefs, and therefore cognitive work will be necessary to translate the fine-grained detail of visual experience into mental contents that can be believed introspectively.
Hill (1991, 2009) also offers a multi-process self-detection account of introspection. Like Goldman, Hill sees attention (in some broad, non-sensory sense) as central to introspection, though he also allows for introspective awareness without attention (1991, 117–118). Hill (2009) argues that introspection is a process that produces judgments about, rather than perceptual awareness of, the target states, and suggests that the processes that generate these judgments vary considerably, depending on the target state, and are often complex. For example, judgments about enduring beliefs and desires must, he says, involve complex procedures for searching “vast and heterogeneous” long-term memory stores. Central to Hill’s (1991) account is an emphasis on the capacity of introspective attention to transform—especially to amplify and enrich, even to create—the target experience. In this respect, Hill argues, the introspective act differs from the paradigmatic observational act, which does not transform the object perceived (though of course both scientific and ordinary—especially gustatory—observation can affect what is perceived); and thus Hill’s account contains a “self-fulfillment” or “self-shaping” aspect in the sense of Section 2.3.1 and Section 2.3.2 below, and only qualifiedly and conditionally meets the detection condition on accounts of introspection as described in Section 1.1 above—the condition that introspection involves attunement to or detection of a pre-existing mental state or event.
Like Hill, Prinz (2004) argues that introspection must involve multiple mechanisms, depending both on the target states (e.g., attitudes vs. perceptual experiences) and the particular mode of access to those states. Access might involve controlled attention or it might be more of a passive noticing; it might involve the verbal “captioning” or labeling of experiences or it might involve the kind of non-verbal access that even monkeys have to their mental states. Prinz (2007) sharply distinguishes between the conceptual classification of our conscious experiences into various types that can be recognized and re-identified over time—classifications which he thinks must necessarily be somewhat crude—and non-conceptual knowledge of ongoing conscious experiences attained by “pointing” at them with attention. The latter type of knowledge, Prinz argues, is much more detailed and finely structured than the former but cannot be expressed or retained over time. Prinz also follows Hill in emphasizing that introspection often intensifies or otherwise modifies the target experience. In such cases, Prinz argues, introspective “access” is only access in an attenuated sense.
There are several ways to generate judgments, or at least statements, about one’s own current mental life—self-ascriptions, let’s call them—that are reliably true though they do not involve the detection of a pre-existing state. Consider the following four types of case:
Automatically self-fulfilling self-ascriptions: I think to myself, “I am thinking”. Or: I judge that I am making a judgment about my own mental life. Or: I say to myself in inner speech “I am saying to myself in inner speech: ‘blu-bob’”. Such self-ascriptions are automatically self-fulfilling. Their existence conditions are a subset of their truth conditions.
Self-ascriptions that prompt self-shaping: I declare that I have a mental image of a pink elephant. At the same time as I make this declaration, I deliberately cause myself to form the mental image of a pink elephant. Or: A man uninitiated in romantic love declares to a prospective lover that he is the kind of person who sends flowers to his lovers. At the same time as he says this, he successfully resolves to be the kind of person who sends flowers to his lovers. The self-ascription either precipitates a change or buttresses what already exists in such a way as to make the self-ascription accurate. In these cases, unlike the cases described in (A), some change or self-maintenance is necessary to render the self-ascription true, beyond the self-ascriptional event itself.
Accurate self-ascription through self-expression: I learn to say “I’m in pain!” instead of “ow!” as an automatic, unreflective response to painful stimuli. Or: I use the self-attributive sentence “I believe Russell changed his mind about pacifism” simply as a cautious way of expressing the belief that Russell changed his mind about pacifism, this expression being the product of reflection upon Russell rather than of reflection upon my own mind. Self-expressions of this sort are assumed here to flow naturally from the states expressed, in roughly the same way that facial expressions and non-self-attributive verbal expressions flow naturally from those same states—that is, without being preceded by any attempt to detect the state self-ascribed.
Self-ascriptions derived from judgments about the outside world: From the non-self-attributive fact that Stanford is south of Berkeley I derive the self-attributive conclusion that I believe that Stanford is south of Berkeley. Or: From the non-self-attributive fact that it would be good to go home now, I derive the self-attributive judgment that I want to go home now. These derivations may be inferences, but if so, such inferences require no specific premises about ongoing mental states. Perhaps one embraces a general inference principle like “from P, it is permissible to derive I believe that P”, or “normally, if something is good, I want it”.
The following accounts of self-knowledge all take advantage of one or more of these facts about self-ascription. Because these ways of obtaining self-knowledge all violate the detection condition on introspection (condition 5 in Section 1.1 above), and because philosophers disagree about whether methods that violate that condition count as introspective methods strictly speaking, philosophers are divided about whether accounts of self-knowledge of the sort described in this section should be regarded as accounts of introspection.
An emphasis on infallible knowledge through self-fulfilling self-ascriptions goes back at least to Augustine (c. 420 C.E./1998) and is most famously deployed by Descartes in his Discourse on Method (1637/1985) and Meditations (1641/1984), where he takes the self-fulfilling thought that he is thinking as indubitably true, immune to even the most radical skepticism, and a secure ground on which to build further knowledge.
Contemporary self-fulfillment accounts tend to exploit the idea of containment. In a 1988 essay, Burge writes:
When one knows one is thinking that p, one is not taking one’s thought (or thinking) that p merely as an object. One is thinking that p in the very event of thinking knowledgeably that one is thinking it. It is thought and thought about in the same mental act. (654)
This is the case, Burge argues, because “by its reflexive, self-referential character, the content of the second-order [self-attributive] judgment is locked (self-referentially) onto the first-order content which it both contains and takes as its subject matter” (1988, 659–660; cf. Heil 1988; Gertler 2000, 2001; Heil and Gertler describe such thoughts as introspective, while Burge appears not to think of self-knowledge so structured as introspective: 1998, 244; see also 1988, 652). In judging that I am thinking of a banana, I thereby necessarily think of a banana: The self-attributive judgment contains, as a part, the very thought self-ascribed, and thus cannot be false. In a 1996 essay, Burge extends his remarks to include not just self-attributive “thoughts” as targets but also (certain types of) “judgments” (e.g., “I judge, herewith, that there are physical entities” and other judgments with “herewith”-like reflexivity; 92).
Shoemaker (1994a, 1994b, 1994c) deploys the containment idea very differently, and over a much wider array of introspective targets, including conscious states like pains and propositional attitudes like belief. Shoemaker speculates that the relevant containment relation holds not between the contents or concepts employed in the target state and in the self-ascriptive state but rather between their neural realizations in the brain. To develop this point, Shoemaker distinguishes between a mental state’s “core realization” and its “total realization”. One might think of mental processes as transpiring in fairly narrow regions of the brain (their core realization), and yet, Shoemaker suggests, it’s not as though we could simply carve off those regions from all others and still have the mental state in question. To be the mental state it is, the process must be embedded in a larger causal network involving more of the brain (the total realization). Relationships of containment or overlap between the core and total realizations of the target state and of the self-ascriptive judgment might then underwrite introspective accuracy. For example, the total brain-state realization of the state of pain may simply be a subset of the total brain-state realization of the state of believing that one is in pain. Introspective accuracy might then be explained by the fact that the introspective judgment is not an independently existing state.
Philosophers have also applied Burge-like content-containment models (as opposed to Shoemaker-like realization-containment models) to self-knowledge of conscious states, or “phenomenology”, in particular—for example, Gertler (2001), Papineau (2002), Chalmers (2003), Horgan and Kriegel (2007), Balog (2012), and Giustina (2021). Husserl (1913/1982) offers an early phenomenal containment approach, arguing that we can at any time put our “cogitatio”—our conscious experiences—consciously before us through a kind of mental glancing, with the self-perception that arises containing as a part the conscious experience toward which it is directed, and incapable of existing without it. Papineau offers a “quotational” account on which in introspection we self-attribute “the experience: ___”, where the blank is completed by the experience itself. Chalmers writes that “direct phenomenal beliefs” about our experiences are “partly constituted by an underlying phenomenal quality”, in that the two will be tightly coupled across “a wide range of nearby conceptually possible cases” (2003, 235).
One possible difficulty with such accounts is that while it seems plausible to suppose that an introspective thought or judgment might contain another thought or judgment as a part, it’s less clear how a self-attributive judgment or belief might contain a piece of conscious experience as a part. Beliefs, and other belief-like mental states like judgments, one might think, contain concepts, not conscious experiences, as their constituents (Fodor 1998); or, alternatively, one might think that beliefs are functional or dispositional patterns of response to input (Dennett 1987; Schwitzgebel 2002), again rendering it unclear how a piece of phenomenology could be part of belief. Perhaps with this concern in mind, advocates of containment accounts often appeal to “phenomenal concepts” that are, like the introspective judgments to which they contribute, partly constituted by the conscious experiences that are the contents of those concepts. Such concepts are often thought to be obtained by demonstrative attention to our conscious experiences as they are ongoing. Alternatively, Giustina’s containment account (2021, 2022) treats the product of introspection as a non-conceptual acquaintance with the target experience rather than a conceptual judgment.
It would seem, at least, that beliefs, concepts, or judgments containing pieces of phenomenology would have to expire once the phenomenology has passed, and thus that the introspective judgments could not be used in later inferences without recreating the state in question. Chalmers (2003) concedes the temporal locality of such phenomenology-containing introspective judgments and consequently their limited use in speech and in making generalizations. Papineau (2002), in contrast, embraces a theory in which the imaginative recreation of phenomenology in thinking about past experience is commonplace.
Although we can seemingly at least sometimes arrive at true self-ascriptions through the self-shaping and self-expression procedures (B and C) described at the beginning of Section 2.3, and although such procedures may meet the first three conditions on an account of introspection as described in Section 1.1—that is, they may (depending on how they are described and developed) be procedures that can yield only knowledge or judgments (or at least self-ascriptions) about one’s own currently ongoing or very recently past mental states—few philosophers would describe such procedures as “introspective”. Nonetheless, they warrant brief treatment here, partly for the same reason self/other parity accounts warranted treatment in Section 2.1 above—that is, as skeptical accounts suggesting that the scope of introspection may be considerably narrower than is generally thought—and partly as background for the “transparency” accounts to be discussed in Section 2.3.4 below, with which they are often married.
It is difficult to find accounts of self-knowledge that stress the self-shaping technique in its purest, forward-looking, causal form—perhaps because it’s clear that self-knowledge must involve considerably more than this (Gertler 2011). However, McGeer (1996, 2008; McGeer and Pettit 2002) and Zawidzki (2016) emphasize the importance of self-shaping. McGeer writes that “we learn to use our intentional self-ascriptions to instill or reinforce tendencies and inclinations that fit with these ascriptions, even though such tendencies and inclinations may at best have been only nascent at the time we first made the judgments” (McGeer 1996, 510). If I describe myself as brave in battle, or as a committed vegetarian—especially if I do so publicly—I create commitments and expectations for myself that help to make those self-ascriptions true. McGeer compares self-knowledge to the knowledge a driver, as opposed to a passenger, has of where the car is going: The driver, unlike the passenger, can make it the case that the car goes where the driver says it is going.
There are also strains in Dennett (though Dennett may not have an entirely consistent view on these matters; see Schwitzgebel 2007) that suggest either a self-fulfillment or a self-shaping view. In some places, Dennett compares “introspective” self-reports about consciousness to works of fiction, immune to refutation in the same way that fictional claims are. One could no more go wrong about one’s consciousness, Dennett says, than Doyle could go wrong about the color of Holmes’s easy chair (e.g., 1991, 81, 94). Such remarks are consistent with either an anti-realist view of fiction (there are no facts about the easy chair or about consciousness; see 366–367) or a self-fulfillment or self-shaping realist view (Doyle creates facts about Holmes as he thinks or writes about him; we create facts about what it’s like to be us in thinking or making claims about our consciousness, as perhaps on 81 and 94). More moderately, in discussing attitudes, Dennett emphasizes how the act of formulating an attitude in language—for example, when ordering a menu item—can involve self-attributing a degree of specification in one’s attitudes that was not present before, thereby committing one to, and partially or wholly creating, the specific attitude self-ascribed (1987, 20).
Commissive accounts of self-knowledge also involve self-shaping—not a form of self-shaping in which the introspective judgment brings into existence an ontologically distinct target state, but rather a kind of self-shaping involving a self-fulfillment or containment component similar to that discussed in Section 2.3.1 above. Moran (2001), for example, argues that normally when we are prompted to think about what we believe, desire, or intend (and he limits his account primarily to these three mental states), we reflect on the (outward) phenomena in question and make up our minds about what to believe, desire, or do. Rather than attempting to detect a pre-existing state, we open or re-open the matter and come to a resolution. Since we normally do believe, desire, and intend what we resolve to believe, desire, and do, we can therefore accurately self-ascribe those attitudes. Coliva (2016) argues that the self-ascription “I believe that P” is like a performative statement in that it constitutes a commitment to the belief that P or to the truth of P. (See also Wright 1989; Falvey 2000; Heal 2002; Boyle 2009, 2024; Singh forthcoming.)
Wittgenstein writes:
[H]ow does a human being learn the meaning of the names of sensations?—of the word “pain” for example. Here is one possibility: words are connected with the primitive, the natural, expressions of the sensation and used in their place. A child has hurt himself and he cries; and then adults talk to him and teach him exclamations and, later, sentences. They teach the child new pain-behaviour.
“So you are saying that the word ‘pain’ really means crying?”—On the contrary: the verbal expression of pain replaces crying and does not describe it. (1953/1968, sec. 244)
And
“It can’t be said of me at all (except perhaps as a joke) that I know I am in pain. What is it supposed to mean—except perhaps that I am in pain?” (1953/1968, sec. 246)
On Wittgenstein’s view, it is both true that I am in pain and that I say of myself that I am in pain, but the utterance in no way emerges from a process of detecting one’s pain.
A simple expressivist view—sometimes attributed to Wittgenstein on the basis of these and related passages—denies that the expressive utterances (e.g., “that hurts!”) genuinely ascribe mental states to the individuals uttering them. Such a view faces serious difficulties accommodating the evident semantics of self-ascriptive utterances, including their use in inference and the apparent symmetries between present-tense and past-tense uses and between first-person and third-person uses (Wright 1998; Bar-On 2004). Consequently, Bar-On advocates instead what she calls a neo-expressivist view, according to which expressive utterances can share logical and semantic structure with non-expressive utterances, despite the epistemic differences between them.
Expressivists have not always been clear about exactly the range of target mental states expressible in this way, but it seems plausible that at least in principle some true (or apt) self-ascriptions could arise in this manner, with no intervening introspective self-detection. The question would then be whether this is how we generally arrive at true self-ascriptions, for some particular class of mental states, or whether some more archetypically introspective process is also available. (For a more detailed treatment of expressivism, consult the section about the expressivist model of self-knowledge in the entry on self-knowledge.)
Evans writes:
[I]n making a self-ascription of belief, one’s eyes are, so to speak, or occasionally literally, directed outward—upon the world. If someone asks me, “Do you think there is going to be a third world war?”, I must attend, in answering him, to precisely the same outward phenomena as I would attend to if I were answering the question “Will there be a third world war?” I get myself into the position to answer the question whether I believe that p by putting into operation whatever procedure I have for answering the question whether p. (1982, 225)
Transparency approaches to self-knowledge, like Evans’, emphasize cases in which it seems that one arrives at an accurate self-ascription not by means of attending to, or thinking about, one’s own mental states, but rather by means of attending to or thinking about the external states of the world that the target mental states are about. Note that this claim has both a negative and a positive aspect: We do not learn about our minds by, as it were, gazing inward; and we do learn about our minds by reflecting on the aspects of the world that our mental states are about. The positive and negative theses are separable: A pluralist might accept the positive thesis without the negative one; an advocate of a self/other parity theory or an expressivist account of self-knowledge (with respect to a certain class of target states) might accept the negative thesis without the positive. (N.B.: In the philosophical literature on self-knowledge, “transparency” is also sometimes used to mean something like self-intimation in the sense of Section 4.1.1 below, for example in Wright 1998 and Bilgrami 2006. That is a completely different usage, not to be confused with the present one.) Because transparency accounts emphasize the outward focus of our thought in arriving at self-ascriptions, calling such accounts accounts of “introspection” strains against the etymology of the term. Nonetheless, some prominent advocates of transparency accounts, such as Dretske (1995) and Tye (2000), offer them explicitly as accounts of introspection.
The range of target states to which transparency applies is a matter of some dispute. Among philosophers who accept something like transparency, belief is generally regarded as transparent (Gordon 1995, 2007; Gallois 1996; Moran 2001; Fernández 2003; Byrne 2018). Perceptual states or perceptual experiences are also often regarded as transparent in the relevant sense. Harman’s example is the most cited:
When Eloise sees a tree before her, the colors she experiences are all experienced as features of the tree and its surroundings. None of them are experienced as intrinsic features of her experience. Nor does she experience any features of anything as intrinsic features of her experiences. And that is true of you too. There is nothing special about Eloise’s visual experience. When you see a tree, you do not experience any features as intrinsic features of your experience. Look at a tree and try to turn your attention to intrinsic features of your visual experience. I predict you will find that the only features there to turn your attention to will be features of the presented tree. (Harman 1990, 667)
Harman’s emphasis here is on the negative thesis, which goes back at least to Moore (1903; though Moore does not unambiguously endorse it). The view that it is impossible to attend directly to perceptual experience has been especially stressed by Tye (1995, 2000, 2002; see also Evans 1982; Van Gulick 1993; Shoemaker 1994a; Dretske 1995; Martin 2002; Stoljar 2004), and directly conflicts with accounts according to which we learn about our sensory experience primarily by directing introspective attention to it (e.g., Goldman 2006; Petitmengin 2006; Hill 2009; Siewert 2012; and back at least to Wundt 1888 and Titchener 1908 [1973]).
Gordon (2007) argues (contra Nichols and Stich 2003 and Goldman 2006) that Evans-like ascent routines (ascending from “p” to “I believe that p”) can drive the accurate self-ascription of all the attitudes, not just belief. He makes his case by wedding the transparency thesis to something like an expressive account of self-ascription: To answer a question about what I want—for example, which flavor of ice cream do I want?—I think not about my desires but rather about the different flavors available, and then I express the resulting attitude self-ascriptively. Similarly for hopes, fears, wishes, intentions, regrets, etc. Gordon points out that from a very early age, before they likely have any self-ascriptive intent, children learn to express their attitudes self-ascriptively, for example with simple phrases like “[I] want banana!” (see also Bar-On 2004).
Commissive accounts of self-knowledge (see Section 2.3.2 above) also generally affirm transparency: Reflecting on the world generates commitment to a belief, desire, or intention, which one thereby also knows or self-ascribes (Falvey 2000; Moran 2001; Coliva 2016; Boyle 2024).
The transparency thesis is in fact consistent not just with expressivism and commissive accounts but with any of the four non-detection-based self-ascription procedures described at the beginning of this section (and indeed Aydede and Güzeldere 2005 attempt to reconcile aspects of the transparency view with a broadly detection-like approach to introspection). This manifold compatibility flows from the fact that, by itself, the transparency thesis does not go far toward a positive view of the mechanisms of self-knowledge.
Byrne (2018), Dretske (1995), and Roche (2016, 2023) bring together transparency and something like a derivational model of self-knowledge—a model on which I derive the conclusion that I believe that P directly from P itself, or the conclusion that I am representing x as F from the fact that x is F—a fact which must of course, to serve as a premise in the derivation, be represented (or believed) by me. Byrne argues that just as one might abide by the following epistemic rule:
DOORBELL: If the doorbell rings, believe that there is someone at the door
so also might someone abide by the rule:
BEL: If P, believe that you believe that P.
To determine whether you believe that P, first determine whether P is the case, then follow the rule BEL. Byrne (2018) offers similar accounts of self-knowledge of intention, thinking, seeing, and desire.
Dretske analogizes introspection to ordinary cases of “displaced perception”—cases in which one perceives that something is the case by way of directly perceiving some other thing (e.g., hearing that the mail carrier has arrived by hearing the dog’s barking; seeing that you weigh 110 pounds by seeing the dial on the bathroom scale): One perceives that one represents x as F by way of perceiving the F-ness of x. Dretske notes, however, two points of disanalogy between the cases. In the case of hearing that the mail carrier has arrived by hearing the dog’s bark, the conclusion (that the mail carrier has arrived) is only established if the premise about the dog’s barking is true, and furthermore it depends on a defeasible connecting belief, that the dog’s barking is a reliable indicator of the mail’s arrival. In the introspective case, however, the inference, if it is an inference, does not require the truth of the premise about x’s being F. Even if x is not F, the conclusion that I’m representing x as F is supported. Nor does there seem to be any sort of defeasible connecting belief.
Tye also emphasizes transparency in his account of introspection, though he limits his remarks to the introspection of conscious experience or “phenomenal character”. In his 2000 book, Tye develops a view like Dretske’s, analogizing introspection to displaced perception, though Tye, unlike Dretske, explicitly denies that inference is involved, instead proposing a mechanism similar to the sort envisioned by simple monitoring accounts like those of Nichols and Stich (2003; see Section 2.2.1 above): a reliable process that, in the case of perceptual self-awareness, takes awareness of external things as its input and yields as its output awareness of phenomenal character.[1] However, in his 2009 book, Tye rejects the displaced perception model in favor of a version of the transparency view that identifies phenomenal character with external qualities in the world, so that perceiving features of the world just is perceiving phenomenal character—a view that, he recognizes, is then charged with the difficult task of explaining how phenomenal character is a property (or “quality”) of external objects rather than, as is generally assumed, a property only of experiences of those objects.
Several authors have challenged the idea that sensory experience necessarily eludes attention—that is, they have denied the central claim of transparency theories about sensory experience. Block (1996), Kind (2003), and Smith (2008) have argued that phosphenes—those little lights you see when you press on your eyes—and visual blurriness are aspects of sensory experience that can be directly attended (though see Gow 2019 for objections to this line of reasoning). Siewert (2004) has argued that what’s intuitively appealing in the transparency view is primarily the observation that in reflecting on sensory experience one does not withdraw attention from the objects sensed; but, he argues, this is compatible with also devoting a certain sort of attention to the sensory experience itself. In early discussions of attention, perceptual attention was sometimes distinguished from “intellectual attention” (James 1890 [1981]; Baldwin 1901–1905; see also Peacocke 1998; Mole 2011), that is, from the kind of attention we can devote to purely imagined word puzzles or to philosophical issues. If non-sensory forms of attention are possible, then the transparency thesis for sensory experience will require restatement: Is it only sensory attention to sensory experience that is impossible? Or is it any kind of attention whatsoever? Simply to say we don’t attend sensorily to our mental states is to make only a modest claim, akin to the claim that we see objects rather than seeing our visual experiences of objects; but to say that we cannot attend to our mental states even intellectually appears extreme. In light of this, it remains unclear how to cast the transparency intuition so as to better bring out the core idea conveyed by the slogan that introspecting sensory experience is not a matter of attending to one’s own mind (see also Weksler, Jacobson, and Bronfman 2019).
Philosophers discussing self-knowledge often write as if approaches highlighting one of these non-self-detection methods of generating self-ascriptions conflict with approaches that highlight others of these methods, and also as if approaches of this general sort conflict with self-detection approaches (Section 2.2 above). While conflicts will certainly exist between different accounts intended to serve as exhaustive approaches to self-knowledge, it is implausible that any one, or even any few, of these approaches to self-knowledge is exhaustive. Plausibly, all of the non-self-detection approaches described above can lead, at least occasionally, to accurate self-ascriptions. Enthusiasts for other models needn’t deny this. It also seems hard to deny that we at least sometimes reach conclusions about our mental lives based on the kind of theoretical inference or self-interpretation emphasized by advocates of self/other parity accounts (Section 2.1 above). Finally, even philosophers concerned about strong or oversimple self-scanning views might wish to grant that the mind can do some sort of tracking of its own present or recently past states—for example, when we trace back a stream of recently past thoughts that presumably can’t (because past) be self-ascribed by self-fulfillment, self-shaping, self-expression, or transparency methods.
Schwitzgebel (2012) elevates this pluralism into a negative account of introspection. Introspective judgments, he says, arise from a shifting confluence of many processes, recruited opportunistically, none of which can be called introspection proper. Just as there is no single, unified faculty of poster-taking-in that one employs when trying to take in a poster at a psychological conference or science fair, there is, on Schwitzgebel’s view, no single, unified faculty of introspection or one underlying core process, nor even a few dedicated mechanisms or processes. Instead, the introspector, like the poster-viewer, brings to bear a diverse range of cognitive resources as suits the occasion. A process wouldn’t be worth calling “introspective”, he says, unless the introspector aimed to reach a judgment about their current or very recently past conscious experience, using at least some resources specific to the first-person case that exhibit some relatively direct sensitivity to the target state; but this limitation does not imply the existence of any dedicated introspective processes. Defenders of less extreme versions of pluralism, compatible with the existence of several dedicated introspective processes, include Prinz (2004), Hill (2009), Coliva (2016), Samoilova (2016), and Spener (2024).
Philosophers have long made introspective claims about the human mind—or, to speak more cautiously, they’ve made claims seemingly at least in part introspectively grounded. Aristotle (4th c. BCE/1961) asserts that thought does not occur without imagery. Mengzi (3rd c. BCE/2008) argues that our hearts are pleased by moral goodness and revolted by evil, even if the pleasure and revulsion are not evident in our outward behavior. Berkeley finds in himself no “abstract ideas” like that of a triangle that is, in Locke’s terms, “neither oblique, nor rectangle, neither equilateral, equicrural, nor scalenon, but all and none of these at once” (Berkeley 1710/1965, 12; Locke 1689/1975, 596). James Mill (1829 [1878]) attempts a catalog of the varieties of sense experience.
Although a number of early modern philosophers had aimed to initiate the scientific study of the mind, it wasn’t until the middle of the 19th century—with the appearance of quantitative introspective methods, especially regarding sensory consciousness—that the study of the mind took shape as a progressive, mathematical, laboratory-based science. Early quantitative psychologists such as Helmholtz (1856/1962), Fechner (1860 [1964]), and Wundt (1896 [1902]) sought quantitative answers to questions like: By how much must two physical stimuli differ for the experiences of them to differ noticeably? How weak a stimulus can still be consciously perceived? What is the mathematical relationship between stimulus intensity and the intensity of the resulting sensation? (The Weber-Fechner law holds that the relationship is logarithmic.) Along what dimensions, exactly, can sense experience vary? (The “color solid” [see the link to the Munsell solid in Other Internet Resources, below], for example, characterizes color experience by appeal to just three dimensions of variation: hue, saturation, and lightness or brightness.) Although from very early on psychologists also employed non-introspective methods (e.g., performance on memory tests, reaction times), most early characterizations of the field stood introspection at the center. James, for example, wrote that “introspective observation is what we have to rely on first and foremost and always” (1890 [1981, 185]).
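The logarithmic relationship invoked by the Weber-Fechner law can be stated explicitly; the following is the standard textbook formulation, supplied here for illustration rather than quoted from any of the works cited above:

```latex
% Weber-Fechner law: sensation magnitude grows logarithmically
% with stimulus intensity.
%   S   : perceived sensation magnitude
%   I   : physical stimulus intensity
%   I_0 : threshold intensity (weakest consciously perceivable stimulus)
%   k   : constant that varies with the sense modality
S = k \,\log\!\left(\frac{I}{I_{0}}\right)
```

On this formulation, equal ratios of stimulus intensity yield equal increments of sensation: doubling the intensity of an already-strong stimulus produces the same increase in felt magnitude as doubling a weak one, which is why large physical differences between intense stimuli can be introspectively hard to notice.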
In contrast with the dominant philosophical tradition that has, since Descartes, stressed the special privilege or at least high accuracy of introspective judgments about consciousness (see Section 4.1 below), many early introspective psychologists held that the introspection of currently ongoing or recently past conscious experience is difficult and prone to error if the introspective observer is insufficiently trained. Wundt, for example, reportedly did not credit the introspective reports of people with fewer than 50,000 trials of practice in observing their conscious experience (Boring 1953). Titchener, a leading American introspective psychologist, wrote a 1600-page introspective training manual for students, arguing that introspective observation is at least as difficult as observation in the physical sciences (Titchener 1901–1905; see also Wundt 1874 [1908]; Müller 1904; for recent discussions of introspective training see Varela 1996; Vermersch 1999; Nahmias 2002; Schwitzgebel 2011b). This difference in optimism about untrained introspection may partly reflect differences in the types of judgments foregrounded in the two disciplines. Philosophers stressing privilege tend to focus on coarse and (seemingly) simple judgments such as “I’m having a visual experience of redness” or “I believe it’s raining”. The projects of interest to introspective psychologists often required much finer judgments—such as determining with mathematical precision whether one visual sensation has twice the “intensity” of another or determining along what dimensions emotional experience can vary.
Early introspective psychologists’ theoretical discussions of the nature of introspection were often framed in reaction to skepticism about the scientific viability of introspection, especially the concern that the introspective act interferes with or destroys the mental state or process that is its target.[2] The most influential formulation of this concern was Comte’s:
But as for observing in the same way intellectual phenomena at the time of their actual presence, that is a manifest impossibility. The thinker cannot divide himself into two, of whom one reasons whilst the other observes him reason. The organ observed and the organ observing being, in this case, identical, how could observation take place? This pretended psychological method is then radically null and void (1830, using the translation of James 1890 [1981, 188]).
Introspective psychologists tended to react to this concern in one of three ways. The most concessive approach—recommended, for example, by James (1890 [1981]; see also Mill 1865 [1961]; Lyons 1986)—was to grant Comte’s point for concurrent introspection, that is, introspection simultaneous with the target state or process, and to emphasize in contrast immediate retrospection, that is, reflecting on or attending to the target process (usually a conscious experience) very shortly after it occurs. Since the scientific observation occurs only after the target process is complete, it does not interfere with that process; but of course the delay between the process and the observation must be as brief as possible to ensure that the process is accurately remembered.
Brentano (1874 [1973]) responded to Comte’s concern by distinguishing between “inner observation” [innere Beobachtung] and “inner perception” [innere Wahrnehmung]. Observation, as Brentano characterizes it, involves dedicating full attention to a phenomenon, with the aim of apprehending it accurately. This dedication of attention necessarily interferes with the process to be observed if the process is a mental one; therefore, he says, inner observation is problematic as a scientific psychological method. Inner perception, in contrast, according to Brentano, does not involve attention to our mental lives and thus does not objectionably disturb them. While our “attention is turned toward a different object … we are able to perceive, incidentally, the mental processes which are directed toward that object” (1874 [1973, 30]). Brentano concedes that inner perception necessarily lacks the advantages of attentive observation, so he recommends conjoining it with retrospective methods.
Wundt (1888) agrees with Comte and Brentano that observation necessarily involves attention and so often interferes with the process to be observed, if that process is an inner, psychological one. To a much greater extent than Brentano, however, Wundt emphasizes the importance to scientific psychology of direct attention to experience, including planful and controlled variation. The psychological method of “inner perception” is, for Wundt, the method of holding and attentively manipulating a memory image or reproduction of a past psychological process. Although Wundt sees some value in this retrospective method, he thinks it has two crucial shortcomings: First, one can only work with what one remembers of the process in question—the manipulation of a memory-image cannot discover new elements. And second, foreign elements may be unintentionally introduced through association—one might confuse one’s memory of a process with one’s memory of another associated process or object.
Therefore, Wundt suggests, the science of psychology must depend upon perception or observation of mental processes as they occur. It is too pessimistic to think that the target mental process is necessarily distorted when a well-trained scientist is performing the introspective task. A subclass of mental processes remains relatively unperturbed by introspection—the “simpler” mental processes, especially perception (1896/1902, 27–28). The experience of seeing red, Wundt claims, is more or less the same whether or not one is introspecting the psychological fact that one is experiencing redness. Wundt also suggests that the basic processes of memory, feeling, and volition can be systematically introspected without excessive disruption. These alone, he thinks, can be studied by introspective psychology (see also Wundt 1874 [1904]; 1896 [1902]; 1907; for a detailed treatment of the history of Comte’s objection and the distinction between self-observation and inner perception, see Spener 2024). Other aspects of our psychology must be approached through non-introspective methods such as the observation of language, mythology, culture, and human and animal development.
Although introspective psychologists were able to build scientific consensus on some issues concerning sense experience—issues such as the limits of sensory perception in various modalities and some of the contours of variation in sensory experience—by the early 20th century it was becoming clear that on many issues consensus was elusive. The most famous dispute concerned the existence of “imageless thought” (see Humphrey 1951; Kusch 1999); but other topics proved similarly resistant, such as the structure of emotion or “feeling” (James 1890 [1981]; Külpe 1893 [1895]; Wundt 1896 [1902]; Titchener 1908 [1973]) and the experiential changes brought about by shifts in attention (Wundt 1896 [1902]; Pillsbury 1908; Titchener 1908 [1973]; Chapman 1933).
By the 1910s, behaviorism (which focused simply on the relationship between outward stimuli and behavioral response) had declared war on introspective psychology, portraying it as bogged down in irresolvable disputes between differing introspective “experts”, and also rebuking the introspectivists’ passive taxonomizing of experience, recommending that psychology focus instead on socially useful paradigms for modifying behavior (e.g., Watson 1913). In the 1920s and 1930s, introspective studies were increasingly marginalized. Although strict behaviorism declined in the 1960s and 1970s, its main replacement, cognitivist functionalism (which treats functionally defined internal cognitive processes as central to psychological inquiry), generally continued to share behaviorism’s disdain for introspective methods.
Psychophysics (the study of the relationship between physical sensory input and consequent psychological state or response), where the introspective psychologists had found their greatest success, underwent a subtle shift in this period from a focus on subjective methods—methods that involve asking subjects to report on their experiences or percepts—to a focus on objective methods such as asking participants to report on states of the outside world, including insisting that participants guess even when they feel they don’t know or have no relevant conscious experience (especially with the rise of signal detection theory in psychophysics: Green and Swets 1966; Cheesman and Merikle 1986; Macmillan and Creelman 1991; Merikle, Smilek, and Eastwood 2001). Perhaps in accord with transparency views of introspection (Section 2.3.4 above), the two types of instruction seem very similar (compare the subjective “tell me if you visually experience a flash of light” with the objective “tell me if the light flashes”). On the other hand, perhaps in tension with transparency views, subjective and objective instructions seem sometimes to differ importantly, especially in cases of known illusion, Gestalt effects such as perceived grouping, stimuli near the limits of perceivability, and the experience of ambiguous figures (Boring 1921; Merikle, Smilek, and Eastwood 2001; Siewert 2004).
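The signal detection framework mentioned above is what lets objective methods work even when participants are forced to guess: it separates a participant’s sensitivity to the stimulus from their bias toward saying “yes”. A minimal sketch of the standard sensitivity index d′ (the function name and the example rates are illustrative, not drawn from the works cited):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Both rates must lie strictly between 0 and 1; in practice, rates
    of exactly 0 or 1 are adjusted before taking the inverse normal
    CDF (e.g., with a 1/(2N) correction).
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# A participant who says "present" on 80% of signal trials and on
# 20% of noise trials has d' of about 1.68, regardless of whether
# they are conservative or liberal in saying "yes":
print(round(d_prime(0.8, 0.2), 2))
```

The point of the measure, in the present context, is that it is computed entirely from objective yes/no responses about the outside world; no report about the character of the participant’s experience is required.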
Introspective methods were never entirely abandoned by psychologists, and in the last few decades they have made something of a comeback, especially with the rise of the interdisciplinary field of “consciousness studies” (see, e.g., Jack and Roepstorff, eds., 2003, 2004). Ericsson and Simon (1984/1993; to be discussed further in Section 4.2.3 below) have advocated the use of “think-aloud protocols” and immediately retrospective reports in the study of problem solving. Other researchers have emphasized introspective methods in the study of imagery (Marks 1985; Kosslyn, Reisberg, and Behrmann 2006) and emotion (Lambie and Marcel 2002; Barrett et al. 2007; LeDoux and Brown 2017).
Beeper methodologies have been developed to facilitate immediate retrospection, especially by Hurlburt (1990, 2011; Hurlburt and Heavey 2006; Hurlburt and Schwitzgebel 2007) and Csikszentmihalyi (Larson and Csikszentmihalyi 1983; Csikszentmihalyi 2014). Traditional immediately retrospective methods required the introspective observer in the laboratory somehow to intentionally refrain from introspecting the target experience as it occurs, arguably a difficult task. Hurlburt and Csikszentmihalyi, in contrast, give participants beepers to wear during ordinary, everyday activity. The beepers are timed to sound only at long intervals, surprising participants and triggering an immediately retrospective assessment of their “inner experience”, emotion, or thoughts in the moment before the beep.
Introspective or subjective reports of conscious experience have also played an important role in the search for the “neural correlates of consciousness” (as reviewed in Rees and Frith 2007; Prinz 2012; Koch et al. 2016; see also Varela 1996). One paradigm is for researchers to present ambiguous sensory stimuli, holding them constant over an extended period, noting what neural changes correlate with changes in subjective reports of experience. For example, in “binocular rivalry” methods, two different images (e.g., a face and a house) are presented, one to each eye. Participants typically say that only one image is visible at a time, with the visible image switching every few seconds. Researchers have sometimes reported finding evidence that activity in “early” visual areas (such as V1) is not temporally coupled with reported changes in visual experience, while changes in conscious percept are better temporally coupled with activity in parietal and maybe also frontal areas further downstream and with large-scale changes in neural synchronization or oscillation; however, the evidence is disputed (Lumer, Friston, and Rees 1998; Polonsky et al. 2000; Tong, Meng, and Blake 2006; Frässle et al. 2014; Tsuchiya et al. 2015; Brascamp et al. 2018; Block 2020; Hesse and Tsao 2020; Bock et al. 2023).
Another version of the ambiguous sensory stimuli paradigm involves presenting the participant with an ambiguous figure such as the Rubin faces-vase figure:

[Figure: the Rubin faces-vase figure, an ambiguous image that can be seen either as two facing profiles or as a vase]
Using this paradigm, researchers have found neuronal changes both in early visual areas and in later areas, as well as changes in widespread neuronal synchrony, that correspond temporally with subjective reports of flipping between one way and another of seeing the ambiguous figure (Kleinschmidt et al. 1998; Rodriguez et al. 1999; Parkkonen et al. 2008; de Graaf et al. 2011; Megumi et al. 2015; Brascamp et al. 2018; Zhu, Hardstone, and Le 2022).
In masking paradigms, stimuli are briefly presented then followed by a “mask”. On some trials, participants report seeing the stimuli, while on others they don’t. In trials in which the participant reports that the stimulus was visually experienced, researchers have tended to find higher levels of activity through at least some of the downstream visual pathways, spontaneous electrical oscillations near 40 Hz, and a negative amplitude EEG response in “early” posterior brain regions about 200 ms after the stimulus (Dehaene et al. 2001; Summerfield, Jack, and Burgess 2002; Del Cul, Baillet, and Dehaene 2007; Quiroga et al. 2008; Salti et al. 2015; Förster, Koivisto, and Revonsuo 2020). However, it remains contentious how properly to interpret such attempts to find neural correlates of consciousness (Noë and Thompson 2004; Dehaene and Changeux 2011; Aru et al. 2012; de Graaf, Hsieh, and Sack 2012; Koch et al. 2016; Phillips 2018; Fink, Kob, and Lyre 2021; Andersen et al. 2022).
If we report our attitudes by introspecting upon them, then much of survey research is also introspective, though psychologists have not generally explicitly described it as such. As with subjective vs. objective methods in psychophysics, there appears to be only a slight difference between subjectively phrased questions (“Do you approve of the President’s handling of the war?”, “Do you think marijuana should be legalized?”) and objectively phrased questions (“Has the President handled the war well?”, “Should marijuana be legalized?”). This would seem to support the observation at the core of transparency theory (discussed in Section 2.3.4 above) that questions about the mind and questions about the outside world often call for the same type of reflection.
It’s plausible to suppose that people have some sort of privileged access to at least some of their own mental states or processes: You know about your own mind, or at least some aspects of it, in a different way and better than you know about other people’s minds, and maybe also in a different way and better than you know about the outside world. Consider pain. It seems you know your own pains differently and better than you know mine, differently and (perhaps) better than you know about the coffee cup in your hand. If so, perhaps that special “first-person” privileged knowledge arises through something like introspection, in one or more of the senses described in Section 2 above.
Just as there is a diversity of methods for acquiring knowledge of or reaching judgments about one’s own mental states and processes, to which the label “introspection” applies with more or less or disputable accuracy, so also is there a diversity of forms of “privileged access”, with different kinds of privilege and to which the idea of access applies with more or less or disputable accuracy. And as one might expect, the different introspective methods do not all align equally well with the different varieties of privilege.
A substantial philosophical tradition, going back at least to Descartes (1637/1985; 1641/1984; also Augustine c. 420 C.E./1998), ascribes a kind of epistemic perfection to at least some of our judgments (or thoughts or beliefs or knowledge) about our own minds—infallibility, indubitability, incorrigibility, or self-intimation. Consider the judgment (thought, belief, etc.) that P, where P is a proposition self-ascribing a mental state or process (for example P might be I am in pain, or I believe that it is snowing, or I am thinking of a dachshund). The judgment that P is infallible just in case, if I make that judgment, it is not possible that P is false. It is indubitable just in case, if I make the judgment, it is not possible for me to doubt the truth of P. It is incorrigible just in case, if I make the judgment, it is not possible for anyone else to show that P is false. And it is self-intimating if it is not possible for P to be true without my reaching the judgment (thought, belief, etc.) that it is true. Note that the direction of implication for the last of these is the reverse of the first three. Infallibility, indubitability, and incorrigibility all have the form: “If I judge (think, believe, etc.) that P, then …”, while self-intimation has the form “If P, then I judge (think, believe, etc.) that P”. All four theses also admit of weakening by adding conditions to the antecedent “if” clause (e.g., “If I judge that P as a result of normal introspective processes, then …”). (See Alston 1971 for a helpful dissection of these distinctions. Also note that some philosophers [e.g., Ayer 1936/1946; Armstrong 1963; Chalmers 2003; Tye 2009] use “incorrigibility” to mean infallibility as defined here, while others [e.g., Ayer 1963; Alston 1971; Rorty 1970; Dennett 2000] use it with the more etymologically specific meaning of [something like] “incapable of correction”.)
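The structural contrast drawn in these definitions can be put schematically (the notation here is ours, not Alston’s). Write J(P) for “I judge that P”, D(P) for “I doubt that P”, E(¬P) for “someone else shows that P is false”, and ◇ for possibility:

```latex
\begin{align*}
\textit{Infallibility:}   \quad & \neg\Diamond\,\bigl(J(P) \wedge \neg P\bigr)\\
\textit{Indubitability:}  \quad & \neg\Diamond\,\bigl(J(P) \wedge D(P)\bigr)\\
\textit{Incorrigibility:} \quad & \neg\Diamond\,\bigl(J(P) \wedge E(\neg P)\bigr)\\
\textit{Self-intimation:} \quad & \neg\Diamond\,\bigl(P \wedge \neg J(P)\bigr)
\end{align*}
```

Read as strict conditionals, the first three share the form □(J(P) → …) while the fourth has the form □(P → J(P)), which makes visible the reversed direction of implication noted above; weakening a thesis amounts to conjoining further conditions inside the antecedent.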
Descartes (1641/1984) famously endorsed the indubitability of “I think”, which he extends also to such mental states as doubting, understanding, affirming, and seeming to have sensory perceptions. He also appears to claim that the thought or affirmation that I am in such states is infallibly true, at least if that thought is clear and distinct. He was followed in this—especially in his infallibilism—by Locke (1690 [1975]), Hume (1739 [1978]), twentieth-century thinkers such as Husserl (1913 [1982]), Ayer (1936 [1946], 1963), Lewis (1946), the early Shoemaker (1963), and many others. Historical arguments for indubitability and infallibility have tended to center on intuitive appeals to the apparent impossibility of doubting or going wrong about such matters as whether one is having a thought with a certain content or is experiencing pain or having a visual experience as of seeing red.
Recent infallibilists have added to this intuitive appeal structural arguments based on self-fulfillment accounts of introspection or self-knowledge (see Section 2.3.1 above)—generally while also narrowing the scope of infallibility, for example to thoughts about thoughts (Burge 1988, 1996), or to “pure” phenomenal judgments about consciousness (Chalmers 2003; see also Wright 1998; Gertler 2001; Horgan, Tienson, and Graham 2006; Horgan and Kriegel 2007; Tye 2009; with important predecessors in Brentano 1874 [1973]; Husserl 1913 [1982]), or to beliefs as “commitments” (Coliva 2016). The intuitive idea behind most of these structural arguments is that somehow the self-ascriptive thought or judgment contains the mental state or process self-ascribed: the thought that I am thinking of a pink elephant contains the thought of a pink elephant; the judgment that I am having a visual experience of redness contains the red experience itself.
In contrast, self/other parity (Section 2.1) and self-detection (Section 2.2) accounts of introspection or self-knowledge appear to stand in tension with infallibilism. If introspection or self-knowledge involves a causal process from a mental state to an ontologically distinct self-ascription of that state, it appears that, however reliable such a process may generally be, there is inevitably room in principle for interference and error. Minimally, it seems, stroke, quantum accident, or clever neurosurgery could break otherwise generally reliable relationships between target mental states and the self-ascriptions of those states. Similar considerations apply to self-shaping (Section 2.3.2) and expressivist (Section 2.3.3) accounts, to the extent that these are interpreted causally rather than constitutively.
Introspective incorrigibility, as opposed to either infallibility or indubitability, was held by Rorty (1970) to be “the mark of the mental”—and thus as applying to a wide range of mental states. Dennett (2000, 2002) defends a similar view, for conscious experiences. The idea behind incorrigibility, recall, is that no one else could show your self-ascriptions to be false; or we might say, more qualifiedly and a bit differently, that if you arrive at the right kind of self-ascriptive judgment (perhaps an introspectively based judgment about a currently ongoing conscious process that survives critical reflection), then no one else, perhaps not even you in the future, aware of this, can rationally hold that judgment to be mistaken. If I judge that right now I am in severe pain, and I do so as a result of considering introspectively whether I am indeed in such pain (as opposed to, say, merely inferring that I am in pain based on outward behavior), and if I pause to think carefully about whether I really am in pain and conclude that I indeed am, then no one else who is aware of this can rationally believe that I’m not in pain, regardless of what my outward behavior might be (say, calm and relaxed) or what shows up in the course of brain imaging (say, no activation in brain centers normally associated with pain).
Incorrigibility does not imply infallibility: I may not actually be in pain, even if no one could show that I’m not. Consequently, incorrigibility is compatible with a broader array of sources of self-knowledge than is infallibility. Neither Rorty nor Dennett, for example, appears to defend incorrigibility by appeal to self-fulfillment accounts of introspection (though in both cases, interpreting their positive accounts is difficult). Causal accounts of self-knowledge may be compatible with incorrigibility if the causal connections underwriting the incorrigible judgments are vastly more trustworthy than judgments obtained without the benefit of this sort of privileged access. Of course, unless one embraces a strict self-fulfillment account, with its attendant infallibilism, one will want to rule out abnormal cases such as quantum accident; hence the need for qualifications.
Self-intimating mental states are those such that, if a person (or at least a person with the right background capacities) has them, they necessarily believe or judge or know that they do. Conscious states are often held to be in some sense self-intimating, in that the mere having of them involves, requires, or implies some sort of representation or awareness of those states. Brentano argues that consciousness, for example, of an outward stimulus like a sound, “clearly occurs together with consciousness of this consciousness”, that is, the consciousness is “of the whole mental act in which the sound is presented and in which the consciousness itself exists concomitantly” (1874 [1995, 129]; see also phenomenological approaches to self-consciousness). “Higher order” and “same order” theories of consciousness (Armstrong 1968; Rosenthal 1990, 2005; Gennaro 1996; Lycan 1996; Carruthers 2005; Kriegel 2009; Montague 2016; Lau and Brown 2019; see also higher-order theories of consciousness) explain consciousness in terms of some thought, perception, or representation of the mental state that is conscious—the presence of that thought, perception, or representation being what makes the target state conscious. (On same order theories, the target mental state, or an aspect of it, represents itself, with no need for a distinct higher order state.) Thus, Horgan, Kriegel, and others have described consciousness as “self-presenting” (Horgan, Tienson, and Graham 2005; Horgan and Kriegel 2007). Shoemaker (1995, 2012) argues that beliefs—as long as they are “available” (i.e., readily deployed in inference, assent, practical reasoning, etc.), which needn’t require that they are occurrently conscious—are self-intimating for individuals with sufficient cognitive capacity. Shoemaker’s idea is that if the belief that P is available in the relevant sense, then one is disposed to do things like say “I believe P”, and such dispositions are themselves constitutive of believing that one believes that P.
Self-intimation claims (unlike infallibility, indubitability, and incorrigibility claims) are not usually cast as claims about “introspection”. This may be because knowledge acquired through self-intimation would appear to be constant and automatic, thus violating the effort condition on introspection (condition 6 in Section 1.1 above).
A number of philosophers have argued for forms of first-person privilege involving some sort of epistemic guarantee—not just conditional accuracy as a matter of empirical fact, but something more robust than that—without embracing infallibility, indubitability, incorrigibility, or self-intimation in the senses described in Section 4.1.1 above.
Shoemaker (1968), for example, argues that self-knowledge of certain psychological facts such as “I am waving my arm” or “I see a canary”, when arrived at “in the ordinary way (without the aid of mirrors, etc.)”, is immune to error through misidentification relative to the first-person pronoun (see also Campbell 1999; Pryor 1999; Bar-On 2004; Hamilton 2008; Langland-Hassan 2015). That is, although one may be wrong about waving one’s arm (perhaps the nerves to your arm were recently severed unbeknownst to you) or about seeing a canary (perhaps it’s a goldfinch), one cannot be wrong due to mistakenly identifying the person waving the arm or seeing the canary as you, when in fact it is someone else. This immunity arises, Shoemaker argues, because there is no need for identification in the first place, and thus no opportunity for mis-identification. In this respect, Shoemaker argues, knowledge that a particular arm that is moving is your arm (not immune to misidentification, since maybe it’s someone else’s arm, misidentified in the mirror) is different from the knowledge that you are moving your arm—knowledge, that is, of what Searle (1983) calls an “intention in action”.
Shoemaker has also argued for the conceptual impossibility of introspective self-blindness with respect to one’s beliefs, desires, and intentions, and, for somewhat different reasons, one’s pains (1988, 1994b). A self-blind creature, by Shoemaker’s definition, would be a rational creature with a conception of the relevant mental states, who can entertain the thought that they have this or that belief, desire, intention, or pain, but who nonetheless utterly lacks introspective access to the type of mental state in question. A hypothetical self-blind creature could still gain “third person” knowledge of the mental states in question, through observing their own behavior, reading textbooks, and the like. (Thus, strict self/other parity accounts of self-knowledge of the sort described in Section 2.1 are accounts according to which one is self-blind in Shoemaker’s sense.) Shoemaker’s case against self-blindness with respect to belief turns on the dilemma of whether the self-blind creature can avoid “Moore-paradoxical” sentences (see Moore 1942, 1944 [1993]; Shoemaker 1995) like “it’s raining but I don’t believe that it’s raining”, in which the subject asserts both P and that they don’t believe that P. If the subject is truly self-blind, Shoemaker suggests, there should be cases in which their best evidence is both that P and that they don’t believe that P (the latter, perhaps, based on misleading facts about their behavior). But if the subject asserts “P but I don’t believe that P” in such cases, they do not (contra the initial supposition) really have a rational command of the nature of belief and assertion. Alternatively, if they can reliably avoid such Moore-paradoxical sentences, self-attributing belief in an apparently normal way, it seems that they are indistinguishable from normal people in thought and behavior and hence not self-blind. Shoemaker develops similar anti-self-blindness arguments for desire, intention, and pain.
Shoemaker uses his case against self-blindness as part of his argument against self-detection accounts of introspection (described in Section 2.2 above): If introspection were a matter of detecting the presence of states that exist independently of the introspective judgment or belief, then it ought to be possible for the faculty enabling the detection to break down entirely, as in the case of blindness, deafness, etc., in outward perception (see also Nichols and Stich 2003, who argue that schizophrenia provides such a case).
Transcendental arguments for the accuracy of certain sorts of self-knowledge offer a different sort of epistemic guarantee—“transcendental arguments” being arguments that assume the existence of some sort of experience or capacity, then develop insights about the background conditions necessary for that experience or capacity, and finally conclude that those background conditions must in fact be met. Burge (1996; see also Shoemaker 1988) argues that to be capable of “critical reasoning” one must be able to recognize one’s own attitudes, knowledgeably evaluating, identifying, and reviewing one’s beliefs, desires, commitments, suppositions, etc., where these mental states are known to be the states they are. Since we are (by assumption, for the sake of transcendental argument) capable of critical reasoning, we must have some knowledge of our attitudes. Bilgrami (2006) argues that we can only be held responsible for actions if we know the beliefs and desires that “rationalize” our actions; since we can (by assumption) sometimes be held responsible, we must sometimes know our beliefs and desires. Wright (1989) argues that the “language game” of ascribing “intentional states” such as belief and desire to oneself and others requires as a background condition that self-ascriptions have special authority within that game. Given that we successfully play this language game, we must indeed have the special authority that we assume and others grant us in the context of the game.
Developing an analogy from Wright (1998), if it’s your turn with the kaleidoscope, you have a type of privileged perspective on the shapes and colors it presents. If someone else in the room wants to know what color dominates, for example, the most straightforward course would be to ask you. But this type of privileged access comes with no guarantee. At least in principle, you might be quite wrong about the tumbling shapes. You might be dazzled by afterimages, or momentarily confused, or hallucinating, or (unbeknownst to you) colorblind. (Yes, people often don’t know they are colorblind, a point stressed by Kornblith 1998.) It is also at least in principle possible that others may know better than you, perhaps even systematically so, what is transpiring in the kaleidoscope. You might think the figure shows octagonal symmetry, but the rest of us, familiar with the kaleidoscope’s design, might know that the symmetry is hexagonal. A brilliant engineer may invent a kaleidoscope state detector that can dependably reveal from outside the shape, color, and position of the tumbling chunks.
Wright raises this analogy to suggest that people’s privilege with respect to certain aspects of their mental lives must be different from that of the person with the kaleidoscope; but other philosophers, especially those who embrace self-detection accounts of introspection, should find the analogy at least somewhat apt: Introspective privilege is akin to the privilege of having a unique and advantageous sensory perspective on something. Metaphorically speaking, we are the only ones who can gaze directly at our attitudes or our stream of experience, while others must rely on us or on outward signs. Less metaphorically, in generating introspective judgments (or beliefs or knowledge) about one’s own mentality one employs a detection process available to no one else. It is then an empirical question how accurate the deliverances of this process are; but on the assumption that the deliverances are in a broad range of conditions at least somewhat accurate and more accurate than the typical judgments other people make about those same aspects of your mind, you have a “privileged” perspective. Typically, advocates of self-detection models of introspection regard the mechanism or cognitive process generating introspective judgments or beliefs as reliable in roughly this way, but not infallible, and not immune to correction by other people (Armstrong 1968; Hill 1981, 2009; Lycan 1996; Nichols and Stich 2003; Goldman 2000, 2006; Morales forthcoming).
The arguments of the previous section are a priori in at least the broad sense of that term (the psychologists’ sense): They depend on general conceptual considerations and armchair folk psychology rather than on empirical research. To these might be added the argument, due to Boghossian (1989), that “externalism” about the content of our attitudes (the view that our attitudes depend constitutively not just on what is going on internally but also on facts about our environment; Putnam 1975; Burge 1979) seems to problematize introspective self-knowledge of those attitudes. This issue will not be treated here, since it is amply covered in the entries on externalism about mental content and externalism and self-knowledge.
Now we turn to empirical research on our self-knowledge of those aspects of our minds often thought to be accessible to introspection. Since character traits are not generally regarded as introspectible aspects of our mentality, we’ll skip the large literature on the accuracy or inaccuracy of our judgments about them (e.g., Taylor and Brown 1988; Funder 1999; Vazire 2010; see also Haybron’s 2008 skeptical perspective on our knowledge of how happy we are); nor will we discuss self-knowledge of subpersonal, nonconscious mental processes, such as the processes underlying visual recognition of color and shape.
As a general matter, while a priori accounts of the epistemology of introspection have tended to stress its privilege and accuracy, empirical accounts have tended to stress its failures.
Perhaps the most famous argument in the psychological literature on introspection and self-knowledge is Nisbett and Wilson’s argument that we have remarkably poor knowledge of the causes of, and processes underlying, our behavior and attitudes (Nisbett and Wilson 1977; Nisbett and Ross 1980; Wilson 2002). Section 2.1 above briefly mentioned their emblematic finding that people in a shopping mall were often ignorant of a major factor—position—influencing their judgments about the quality of pairs of stockings. In Nisbett and Bellows (1977), also briefly mentioned above, participants were asked to assess the influence of various factors on their judgments about features of a supposed job applicant. As in Nisbett and Wilson’s stocking study, participants denied the influence of some factors that were in fact influential; for example, they denied that the information that they would meet the applicant influenced their judgments about the applicant’s flexibility. (It actually had a major influence, as assessed by comparing the judgments of participants who were told and not told that they would meet the applicant.) Participants also attributed influence to factors that were not in fact influential; for example, they falsely reported that the information that the applicant accidentally knocked over a cup of coffee during the interview influenced “how sympathetic the person seems” to them. Nisbett and Bellows found that ordinary observers’ hypothetical ratings of the influence of the various factors on the various judgments closely paralleled the participants’ own ratings of the factors influencing them—a finding used by Nisbett to argue that people have no special access to causal influences on their judgments and instead rely on the same sorts of theoretical considerations outside observers rely on (the self/other parity view described in Section 2.1).
Despite some objections and methodological concerns (e.g., Newell and Shanks 2014), both psychologists and philosophers now tend to accept Nisbett and Wilson’s view that any first-person advantage in assessing the factors influencing our judgments and behavior is more modest than non-psychologists generally assume.
In a series of experiments, Gazzaniga (1995) presented commissurotomy patients (people with a severed corpus callosum) with different visual stimuli to each hemisphere of the brain. With cross-hemispheric communication severely impaired due to the commissurotomy, the left hemisphere, controlling speech, had information about one part of the visual stimulus, while the right hemisphere, controlling some aspects of movement (especially the left hand), had information about a different part. Gazzaniga reported finding that when these “split brain” patients were asked to explain why they did something, when that action was clearly caused by input to the right, non-verbal hemisphere, the left hemisphere would sometimes fluently confabulate an explanation. For example, Gazzaniga reports presenting an instruction like “laugh” to the right hemisphere, making the patient laugh. When asked why he laughed, the patient would say something like “You guys come up and test us every month. What a way to make a living!” (1393). When a chicken claw was shown to the left hemisphere and a snow scene to the right, and the patient was asked to select an appropriate picture from an array, the right hand would point to a chicken and the left hand to a snow shovel, and when asked why they selected those two things, the patient would say something like “Oh, that’s simple. The chicken claw goes with the chicken and you need a shovel to clean out the chicken shed” (ibid.; for a detailed discussion of such cases, see Schechter 2018). Similar confabulation about motives is sometimes (but not always) seen in people whose behavior is, unbeknownst to them, driven by post-hypnotic suggestion (Richet 1884; Moll 1889 [1911]), and in disorders such as hemineglect (anosognosia), blindness denial (Anton’s syndrome), and Korsakoff’s syndrome (Hirstein 2005).
In a normal population, Johansson and collaborators (Johansson et al. 2005; Johansson et al. 2006) manually displayed to participants pairs of pictures of women’s faces. On each trial, the participant was to point to the face they found more attractive. The picture of that face was then centered before the participant while the other face was hidden. On some trials, participants were asked to explain the reasons for their choices while continuing to look at the selected face. On a few key trials, the experimenters used sleight of hand to present to the participant the face that was not selected as though it had been the face selected. Strikingly, the switch was noticed only 28% of the time. What’s more, when the change was not detected, participants actually gave explanations for their choice that appealed to specific features of the unselected face that were not possessed by the selected face 13% of the time. For example, one participant claimed to have chosen the face before him “because I love blondes” when in fact he had chosen a dark-haired face (Johansson et al. 2006, 690). Johansson and colleagues failed to find any systematic differences in the explanations of choice between the manipulated and non-manipulated trials, using a wide variety of measures. They found, for example, no difference in linguistic markers of confidence (including pauses in speech), emotionality, specificity of detail, complexity or length of description, or general position in semantic space. These results, like Nisbett and Wilson’s, suggest that at least some of the time when people think they are explaining the bases of their decisions, they are instead merely theorizing or confabulating.
The literature on “cognitive dissonance” is replete with cases in which participants’ attitudes appear to change for reasons they do, or would, deny. According to cognitive dissonance theory, when people behave or appear to behave counternormatively (e.g., incompetently, foolishly, immorally), they will tend to adjust their attitudes so as to make the behavior seem less counternormative or “dissonant” (Festinger 1957; Cooper and Fazio 1984; Stone and Cooper 2001; Harmon-Jones 2019). For example, people induced to falsely describe as enjoyable a monotonous task they’ve just completed will tend, later, to report having a more positive attitude toward the task than those not induced to lie (though much less so if they were handsomely paid to lie, in which case the behavior is not clearly counternormative; Festinger and Carlsmith 1959). Presumably, if such attitude changes were known to the person, they would generally fail to have their dissonance-reducing effect. Research psychologists have also confirmed such familiar phenomena as “sour grapes” (Elster 1983/2016; Lyubomirsky and Ross 1999; Kay, Jiminez, and Jost 2002) and “self-deception” (Mele 2001), which presumably also involve ignorance of the factors driving the relevant judgments and actions (for a general review of these and other sources of distortion, see Pronin 2009). And of course the Freudian psychoanalytic tradition has also long held that people often have only poor knowledge of their motives and the influences on their attitudes (Wollheim 1981; Cavell 2006).
In light of this empirical research, no major philosopher now holds (perhaps no major philosopher ever held) that we have infallible, indubitable, incorrigible, or self-intimating knowledge of the causes of our judgments, decisions, and behavior. Perhaps weaker forms of privilege also come under threat. But the question arises: Whatever failures there may be in assessing the causes of our attitudes and behavior, are those failures failures of introspection, properly construed? Psychologists tend to cast these results as failures of “introspection”, but if it turns out that a very different and more trustworthy process underwrites our knowledge of some other aspects of our minds—such as what our present attitudes are (however caused) or our currently ongoing or recently past conscious experience—then perhaps we can call only that process introspection, thereby retaining some robust form of introspective privilege while acceding to the psychological consensus regarding (what we would now call non-introspective) first-person knowledge of causes. Indeed, few contemporary philosophical accounts of introspection or privileged self-knowledge highlight, as the primary locus of privilege, the causes of our attitudes and behavior (Bilgrami 2006 is a notable exception). Thus, the literature reviewed in this section can be interpreted as suggesting that the causes of our behavior are not, after all, the sorts of things to which we have introspective access.
Research psychologists have generally not been as skeptical of our knowledge of our attitudes as they have been of our knowledge of the causes of our attitudes (Section 4.2.1 above). In fact, many of the same experiments that purport to show inaccurate knowledge of the causes of our attitudes nonetheless rely unguardedly on self-report for assessment of the attitudes themselves—a feature of those experiments criticized by Bem (1967). Attitudinal surveys in psychology and social science generally rely on participants’ self-report as the principal source of evidence about attitudes (de Vaus 1985/2014; Nardi 2002/2018). However, as in the case of motives and causes, there’s a long tradition in clinical psychology skeptical of our self-knowledge of our attitudes.
A key challenge in assessing the accuracy of people’s beliefs or judgments about their attitudes is the difficulty of accurately measuring attitudes independently of self-report. There is at present no tractable measure of attitude that is generally seen by philosophers as overriding individuals’ own reports about their attitudes. However, in the psychological literature, “implicit” measures of attitudes—measures of attitudes that do not rely on self-report—have recently gained considerable attention (see Wittenbrink and Schwarz, eds., 2007; Petty, Fazio, and Briñol, eds., 2009; Gawronski, De Houwer, and Sherman 2020). Such measures are sometimes thought capable of revealing unconscious attitudes or implicit attitudes either unavailable to introspection or erroneously introspected (Wilson, Lindsey, and Schooler 2000; Lane et al. 2007; though see Hahn et al. 2014; Brownstein, Madva, and Gawronski 2019).
Research on implicit attitude measures originated to a substantial extent with attempts to measure racism in North America and Europe, in accord with the view that racist attitudes, though common, are considered socially undesirable and therefore often not self-ascribed even when present. For example, Campbell, Kruskal, and Wallace (1966) explored the use of seating distance as an index of racial attitudes, noting that racially Black and White students tended to aggregate in classroom seating arrangements. Using facial electromyography (EMG), Vanman et al. (1997) found White participants to display facial responses indicative of negative affect more frequently when asked to imagine co-operative activity with Black than with White partners—results interpreted as indicative of racist attitudes. Cunningham et al. (2004) showed White and Black faces to White participants while participants were undergoing fMRI brain imaging. They found less amygdala activation when participants looked at faces from their own group than when participants looked at other faces; and since amygdala activation is generally associated with negative emotion, they interpreted this tendency as suggesting a negative attitude toward outgroup members (see also Hart et al. 2000; and for discussion Ito and Cacioppo 2007).
Much of the recent implicit attitude research has focused on response priming and interference in speeded tasks. In priming research, a stimulus (the “prime”) is briefly displayed, followed by a mask that hides it, and then a second stimulus (the “target”) is displayed. The participant’s task is to respond as swiftly as possible to the target, typically with a classification judgment. In evaluative priming, for example, the participant is primed with a positively or negatively valenced word or picture (e.g., snake), then asked to make a swift judgment about whether the subsequently presented target word (e.g., “disgusting”) is good or bad, or has some other feature (e.g., belongs to a particular category). Generally, negative primes will speed response for negative targets while delaying response for positive targets, and positive primes will do the reverse. Researchers have found that photographs of Black faces tend to facilitate the categorization of negative targets and delay the categorization of positive targets for White participants—a result widely interpreted as revealing racist attitudes (Fazio et al. 1995; Dovidio et al. 1997; Wittenbrink, Judd, and Park 1997). In the Implicit Association Test, respondents are asked to respond disjunctively to combined categories, giving for example one response if they see either a dark-skinned face or a positively valenced word and a different response if they see either a light-skinned face or a negatively valenced word. As in evaluative priming tasks, White respondents tend to respond more slowly when asked to pair dark-skinned faces with positively valenced words than with negatively valenced words, which is interpreted as revealing a negative attitude or association (Greenwald, McGhee, and Schwartz 1998; Lane et al. 2007; Jost 2019).
However, it should be noted that despite its prominence, the Implicit Association Test has been criticized as having poor test-retest reliability, low predictive validity, and weak correlations with other measures of racism (Oswald, Mitchell, Blanton, Jaccard, and Tetlock 2013; Gawronski, Morrison, Phills, and Galdi 2017; Payne, Vuletich, and Lundberg 2017; Machery 2022).
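The latency comparison at the heart of such speeded measures can be illustrated with a minimal sketch. The data and function names below are invented for illustration only; published research uses a standardized difference score with error penalties rather than the raw mean difference shown here:

```python
# Hypothetical illustration of the latency logic behind IAT-style measures.
# Response times (in milliseconds) are invented for the example.

def mean(xs):
    return sum(xs) / len(xs)

def latency_effect(congruent_rts, incongruent_rts):
    """Raw mean latency difference: positive values indicate slower
    responding in the 'incongruent' pairing, the pattern standardly
    interpreted as revealing an implicit association."""
    return mean(incongruent_rts) - mean(congruent_rts)

# Invented data for two response blocks:
congruent = [612, 588, 634, 601, 590]    # e.g., own-group face / positive word share a key
incongruent = [655, 702, 688, 671, 694]  # e.g., other-group face / positive word share a key

print(latency_effect(congruent, incongruent))  # prints 77.0
```

Note that nothing in this arithmetic settles the interpretive question, taken up below, of what mental state the latency difference reveals.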
As mentioned above, such implicit measures are often interpreted as revealing attitudes to which people have poor or no introspective access. The evidence that people lack introspective knowledge of such attitudes generally turns on the low correlations between such implicit measures of racism and more explicit measures such as self-report—though due to the recognized social undesirability of racial prejudice, it is difficult to disentangle self-presentational from self-knowledge factors in self-reports (Fazio et al. 1995; Wilson, Lindsey, and Schooler 2000; Greenwald and Nosek 2009). People who appear racist by implicit measures might disavow racism and inhibit racist patterns of response on explicit measures (such as when asked to rate the attractiveness of faces of different races) because they don’t want to be seen as racist—a motivation that might drive them whether or not they have accurate self-knowledge of their racist attitudes. Still, it seems prima facie plausible that people have at best limited knowledge of the patterns of association that drive their responses on priming and other implicit measures.
But what do such tests really measure? In philosophy, Zimmerman (2018) and Gendler (2008a, 2008b) have argued that measures like the Implicit Association Test do not measure actual racist beliefs but rather something else, something under less rational control (Gendler calls them “aliefs”). Schwitzgebel (2010) argues that people who are implicitly prejudiced but explicitly egalitarian are “in between” believing and failing to believe the egalitarian propositions that they sincerely accept (see also Levy 2015). Machery (2016) argues that implicit measures reveal multi-track dispositions, rather than attitudes. Gawronski and Bodenhausen (2006) advance a model according to which there is a substantial difference between implicit attitudes, defined in terms of associative processes, and explicit attitudes, which have a propositional structure and are guided by standards of truth and consistency (see also Wilson, Lindsey, and Schooler 2000; Greenwald and Nosek 2009). Mandelbaum (2016) and Borgoni (2016) also endorse views on which people who are implicitly prejudiced but explicitly egalitarian have contradictory attitudes, though they argue that both the implicit and the explicit attitudes are propositionally structured and to some extent subject to norms of rationality. Payne, Vuletich, and Lundberg (2017) argue that implicit measures capture the situationally variable accessibility of culturally given concepts.
How one answers this question about the relation between implicit bias and belief or other attitudes bears on the question of the accuracy of introspection of the belief or other attitude in question. If implicit bias constitutes or partly constitutes the attitude, then, on the assumption that people are unaware of the extent of their bias, or at least have no direct introspective access to it, that failure of introspection of bias constitutes a failure of introspection with respect to the attitude in question. On the other hand, if what is at stake is merely an association or trait-like disposition, rather than an attitude, failure of introspectibility is unsurprising and does not bear on the general question of the introspection of attitudes.
The issue generalizes beyond implicit bias. To the extent attitudes are held to be reflected in, or even defined by, our explicit judgments about the matter in question and also, differently but perhaps not wholly separably (see Section 2.3.4 above), our explicit judgments about our attitudes toward the matter in question, our self-knowledge would seem to be correspondingly secure and implicit measures beside the point. To the extent attitudes are held to crucially involve swift and automatic, or unreflective, patterns of reaction and association, our self-knowledge of them would appear to be correspondingly problematic, corrigible by data from implicit measures (Bohner and Dickel 2011; Schwitzgebel 2011a, 2021).
Similarly, Carruthers (2011; see also Bem 1967, 1972; Rosenthal 2001; Cassam 2014) argues that evidence of the sort described in Section 4.2.1 above shows that people confabulate not just in reporting the causes of their attitudes but also in reporting the attitudes themselves. For example, Carruthers suggests that if someone in Nisbett and Wilson’s famous 1977 study confabulates “I thought this pair was softest” as an explanation of their choice of the rightmost pair of stockings, they err not only about the cause of their choice but also in self-ascribing the judgment that the pair was softest. On this basis, Carruthers adopts a self/other parity view (see Section 2.1 above) of our self-knowledge of our attitudes, holding that we can only introspect, in the strict sense, conscious experiences like those that arise in perception and imagery.
Currently ongoing conscious experience—or maybe immediately past conscious experience (if we hold that introspective judgment must temporally follow the state or process introspected)—is both the most universally acknowledged target of the introspective process and the target most commonly thought to be known with a high degree of privilege. Infallibility, indubitability, incorrigibility, and self-intimation claims (see Section 4.1.1) are most commonly made for self-knowledge of states such as being in pain or having a visual experience as of the color red, where these states are construed as qualitative states, or subjective experiences, or aspects of our phenomenology or consciousness. (All these terms are intended interchangeably to refer to what Block [1995], Chalmers [1996], and other contemporary philosophers call “phenomenal consciousness”.) If attitudes are sometimes conscious, then we might also be capable of introspecting those attitudes as part of our capacity to introspect conscious experience generally (Goldman 2006; Hill 2009; Smithies forthcoming).
It’s difficult to study the accuracy of self-ascriptions of conscious experience for the same reasons it’s difficult to study the accuracy of our self-ascriptions of attitudes (Section 4.2.2): There’s no widely accepted measure to trump or confirm self-report. In the medical literature on pain, for example, no behavioral or physiological measure of pain is generally thought capable of overriding self-report of current pain, despite the fact that scaling issues remain a problem within and especially between subjects (Williams, Davies, and Chadury 2000), as does retrospective assessment (Redelmeier and Kahneman 1996). When physiological markers of pain and self-report dissociate, it’s by no means clear that the physiological marker should be taken as the more accurate index (for methodological recommendations see Price and Aydede 2005). Corresponding remarks apply to the case of pleasure (Haybron 2008).
As mentioned in Section 3.3 above, early introspective psychologists asserted the difficulty of accurately introspecting conscious experience and achieved only mixed success in their attempts to obtain scientifically replicable (and thus presumably accurate) data through the use of trained introspectors. In some domains they achieved considerable success and replicability, such as in the construction of the “color solid” (a representation of the three primary dimensions of variation in color experience: hue, saturation, and lightness or brightness), the mapping of the size of “just noticeable differences” between sensations and the “liminal” threshold below which a stimulus is too faint to be experienced, and the (at least roughly) logarithmic relationship between the intensity of a sensory stimulus and the intensity of the resulting experience (the “Weber-Fechner law”). Contemporary psychophysics—the study of the relation between physical stimuli and the resulting sense experiences or percepts—is rooted in these early introspective studies. However, other sorts of phenomena proved resistant to cross-laboratory introspective consensus—such as the possibility or not of imageless thought (see the entry on “mental imagery”), the structure of emotion, and the experiential aspects of attention. Perhaps these facts about the range of early introspective agreement and apparently intractable disagreement cast light on the range over which careful and well-trained introspection is and is not reliable.
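The logarithmic relationship just mentioned has a standard textbook formulation (the symbols below are the conventional ones, not drawn from this entry):

```latex
% Weber-Fechner law: perceived intensity S grows logarithmically with
% physical stimulus intensity I, where I_0 is the threshold ("liminal")
% intensity and k is a constant that varies by sensory modality.
S = k \, \ln\!\left(\frac{I}{I_0}\right)
```

On this formulation, equal ratios of stimulus intensity (e.g., each doubling) produce equal increments of experienced intensity, which is why the early psychophysicists could map thresholds and just noticeable differences with some precision.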
Ericsson and Simon (1984/1993; Ericsson 2003) discuss and review relationships between the participant’s performance on various problem-solving tasks, their concurrent verbalizations of conscious thoughts (“think aloud protocols”), and their immediately retrospective verbalizations. The existence of good relationships in the predicted directions in many problem-solving tasks lends empirical support to the view that people’s reports about their stream of thoughts often accurately reflect those thoughts. For example, Ericsson and Simon find that think-aloud and retrospective reports of thought processes correlate with predicted patterns of eye movement and response latency. Ericsson and Simon also cite studies like that of Hamilton and Sanford (1978), who asked participants to make yes or no judgments about whether pairs of letters were in alphabetical order (like MO) or not (like RP) and then to describe retrospectively their method for arriving at the judgments. When participants retrospectively reported knowing the answer “automatically” without an intervening conscious process, reaction times were swift and did not depend on the distance between the letters. When participants retrospectively reported “running through” a sequential series of letters (such as “LMNO” when prompted with “MO”), reaction times correlated nicely with reported length of run-through. On the other hand, Flavell, Green, and Flavell (1995) report gross and widespread introspective error about recently past and even current (conscious) thought in young children; and Smallwood and Schooler (2006) review literature that suggests that people are not especially good at detecting when their mind is wandering.
In the 20th century, philosophers arguing against infallibilism often devised hypothetical examples in which they suggested it was plausible to attribute introspective error; but even if such examples succeed, they are generally confined to far-fetched scenarios, pathological cases, or very minor or very brief mistakes (e.g., Armstrong 1963; Churchland 1988). In the 21st century, philosophical critics of the accuracy of introspective judgments about consciousness shifted their focus to cases of widespread disagreement or (putative) error, either among ordinary people or among research specialists. Dennett (1991), Blackmore (2002), and Schwitzgebel (2011b), for example, argue that most people are badly mistaken about the nature of the experience of peripheral vision. These authors argue that people experience visual clarity only in a small and rapidly moving region of about 1–2 degrees of visual arc, contrary to the (they say) widespread impression most people have that they experience a substantially broader range of stable clarity in the visual field. Other recent arguments against the accuracy of introspective judgments about conscious experience turn on citing the widespread disagreement about whether there is a “phenomenology of thinking” beyond that of imagery and emotion, about whether sensory experience as a whole is “rich” (including for example constant tactile experience of one’s feet in one’s shoes) or “thin” (limited mostly just to what is in attention at any one time), and about the nature of visual imagery experience (Hurlburt and Schwitzgebel 2007; Bayne and Spener 2010; Schwitzgebel 2011b; though see Hohwy 2011).
Irvine (2013, 2021) has argued that the methodological problems in this area are so severe that the term “consciousness” should be eliminated from scientific discourse as impossible to effectively operationalize or measure. Feest (2014), Timmermans and Cleeremans (2015), and Spener (2024) similarly highlight the substantial methodological challenges of using introspective reports in the science of consciousness, though without being quite as pessimistic as Irvine. In light of such concerns, Pauen and Haynes (2021) emphasize the value of complementing introspective with “extroceptive” measures.
behaviorism | belief | bias, implicit | Brentano, Franz | consciousness | consciousness: and intentionality | consciousness: higher-order theories | consciousness: representational theories of | consciousness: unity of | delusion | Descartes, René: epistemology | externalism: and self-knowledge | externalism about the mind | folk psychology: as a theory | folk psychology: as mental simulation | functionalism | inner speech | intentionality: phenomenal | James, William | Kant, Immanuel: view of mind and consciousness of self | mental content: narrow | mental imagery | mental representation | pain | perception: the problem of | phenomenology | propositional attitude reports | qualia | Ryle, Gilbert | self-consciousness | self-consciousness: phenomenological approaches to | self-deception | self-knowledge | Wundt, Wilhelm Maximilian
The Stanford Encyclopedia of Philosophy iscopyright © 2024 byThe Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054