Stanford Encyclopedia of Philosophy

Introspection

First published Tue Feb 2, 2010; substantive revision Thu Apr 25, 2024

Introspection, as the term is used in contemporary philosophy of mind, is a means of learning about one's own currently ongoing, or perhaps very recently past, mental states or processes. You can, of course, learn about your own mind in the same way you learn about others' minds—by reading psychology texts, by observing facial expressions (in a mirror), by examining readouts of brain activity, by noting patterns of past behavior—but it's generally thought that you can also learn about your mind introspectively, in a way that no one else can. But what exactly is introspection? No simple characterization is widely accepted.

Introspection is a central concept in epistemology, since introspective knowledge is often thought to be particularly secure, maybe even immune to skeptical doubt. Introspective knowledge is also often held to be more immediate or direct than sensory knowledge. Both of these putative features of introspection have been cited in support of the idea that introspective knowledge can serve as a ground or foundation for other sorts of knowledge.

Introspection is also central to philosophy of mind, both as a process worth study in its own right and as a court of appeal for other claims about the mind. Philosophers of mind offer a variety of theories of the nature of introspection; and philosophical claims about consciousness, emotion, free will, personal identity, thought, belief, imagery, perception, and other mental phenomena are often thought to have introspectible consequences or to be susceptible to introspective verification. For similar reasons, empirical psychologists too have discussed the accuracy of introspective judgments and the role of introspection in the science of the mind.

(This entry focuses on the cognitive process of introspection and its role in self-knowledge. For a treatment of self-knowledge more generally, with a focus on its distinctiveness and the various means of acquiring it, see the entry on self-knowledge.)


1. General Features of Introspection

1.1 Necessary Features of an Introspective Process

Introspection is generally regarded as a process by means of which we learn about our own currently ongoing, or very recently past, mental states or processes. Not all such processes are introspective, however: Few would say that you have introspected if you learn that you're angry by seeing your facial expression in the mirror. However, it's unclear and contentious exactly what more is required for a process to qualify as introspective. A relatively restrictive account of introspection might require introspection to involve attention to and direct detection of one's ongoing mental states; but many philosophers think attention to or direct detection of mental states is impossible or at least not present in some paradigmatic instances of introspection.

For a process to qualify as "introspective" as the term is ordinarily used in contemporary philosophy of mind, it must minimally meet the following three conditions:

  1. The mentality condition: Introspection is a process that generates, or is aimed at generating, knowledge, judgments, or beliefs about mental events, states, or processes, and not about affairs outside one's mind, at least not directly. In this respect, it is different from sensory processes that normally deliver information about outward events or about the non-mental aspects of one's body. The border between introspective and non-introspective knowledge can seem to blur with respect to bodily self-knowledge such as proprioceptive knowledge about the position of one's limbs or nociceptive knowledge about one's pains. But in principle the introspective part of such processes, pertaining to judgments about one's mind—e.g., that one has the feeling as though one's arms were crossed or of toe-ishly located pain—can be distinguished from the non-introspective judgment that one's arms are in fact crossed or one's toe is being pinched.

  2. The first-person condition: Introspection is a process that generates, or is aimed at generating, knowledge, judgments, or beliefs about one's own mind only and no one else's, at least not directly. Any process that in a similar manner generates knowledge of one's own and others' minds is by that token not an introspective process. (Some philosophers have contemplated peculiar or science fiction cases in which we might introspect the contents of others' minds directly—for example in telepathy or when two people's brains are directly wired together—but the proper interpretation of such cases is disputable; see Gertler 2000; Langland-Hassan 2015.)

  3. The temporal proximity condition: Introspection is a process that generates knowledge, beliefs, or judgments about one's currently ongoing mental life only; or, alternatively (or perhaps in addition) immediately past (or even future) mental life, within a certain narrow temporal window (sometimes called the specious present; see the entry on temporal consciousness). You may know that you were thinking about Montaigne yesterday during your morning walk, but you cannot know that fact by current introspection alone—though perhaps you can know introspectively that you currently have a vivid memory of having thought about Montaigne. Likewise, you cannot know by introspection alone that you will feel depressed if your favored candidate loses the election in November—though perhaps you can know introspectively what your current attitude is toward the election or what emotion starts to rise in you when you consider the possible outcomes. Whether the target of introspection is best thought of as one's current mental life or one's immediately past mental life may depend on one's model of introspection: On self-detection models of introspection, according to which introspection is a causal process involving the detection of a mental state (see Section 2.2 below), it's natural to suppose that a brief lapse of time will transpire between the occurrence of the mental state that is the introspective target and the final introspective judgment about that state, which invites (but does not strictly imply) the idea that introspective judgments generally pertain to immediately past states. On self-shaping and self-fulfillment models of introspection, according to which introspective judgments create or embed the very state introspected (see Sections 2.3.1 and 2.3.2 below), it seems more natural to think that the target of introspection is one's current mental life or perhaps even the immediate future.

Few contemporary philosophers of mind would call a process "introspective" if it does not meet some version of the three conditions above, though in ordinary language the temporal proximity condition may sometimes be violated. (For example, in ordinary speech we might describe as "introspective" a process of thinking about why you abandoned a relationship last month or whether you're really as kind to your children as you think you are.) However, many philosophers of mind will resist calling a process that meets these three conditions "introspective" unless it also meets some or all of the following three conditions:

  4. The directness condition: Introspection yields judgments or knowledge about one's own current mental processes relatively directly or immediately. It's difficult to articulate exactly what directness or immediacy involves in the present context, but some examples should make the import of this condition relatively clear. Gathering sensory information about the world and then drawing theoretical conclusions based on that information should not, according to this condition, count as introspective, even if the process meets the three conditions above. Seeing that a car is twenty feet in front of you and then inferring from that fact about the external world that you are having a visual experience of a certain sort does not, by this condition, count as introspective. However, as we will see in Section 2.3.4 below, those who embrace transparency theories of introspection may reject at least strong formulations of this condition.

  5. The detection condition: Introspection involves some sort of attunement to or detection of a pre-existing mental state or event, where the introspective judgment or knowledge is (when all goes well) causally but not ontologically dependent on the target mental state. For example, a process that involved creating the state of mind that one attributes to oneself would not be introspective, according to this condition. Suppose I say to myself in silent inner speech, "I am saying to myself in silent inner speech, 'haecceities of applesauce'", without any idea ahead of time how I plan to complete the embedded quotation. Now, what I say may be true, and I may know it to be true, and I may know its truth (in some sense) directly, by a means by which I could not know the truth of anyone else's mind. That is, it may meet all four conditions above and yet we may resist calling such a self-attribution introspective. Self-shaping (Section 2.3.2 below), expressivist (Section 2.3.3 below), and transparency (Section 2.3.4 below) accounts of self-knowledge emphasize the extent to which our self-knowledge often does not involve the detection of pre-existing mental states; and because something like the detection condition is implicitly or explicitly accepted by many philosophers, some philosophers (including some but not all of those who endorse self-shaping, expressivist, and/or transparency views) would regard it as inappropriate to regard such accounts of self-knowledge as accounts of introspection proper.

  6. The effort condition: Introspection is not constant, effortless, and automatic. We are not every minute of the day introspecting. Introspection involves some sort of special reflection on one's own mental life that differs from the ordinary un-self-reflective flow of thought and action. The mind may monitor itself regularly and constantly without requiring any special act of reflection by the thinker—for example, at a non-conscious level certain parts of the brain or certain functional systems may monitor the goings-on of other parts of the brain and other functional systems, and this monitoring may meet all five conditions above—but this sort of thing is not what philosophers generally have in mind when they talk of introspection. However, this condition, like the directness and detection conditions, is not universally accepted. For example, philosophers who think that conscious experience requires some sort of introspective monitoring of the mind and who think of conscious experience as a more or less constant feature of our lives may reject the effort condition (Armstrong 1968, 1999; Lycan 1996).

Though not all philosophical accounts that are put forward by their authors as accounts of "introspection" meet all of conditions 4–6, most meet at least two of those. Because of differences in the importance accorded to conditions 4–6, it is not unusual for authors with otherwise similar accounts of self-knowledge to differ in their willingness to describe their accounts as accounts of "introspection".

1.2 The Targets of Introspection

Accounts of introspection differ in what they treat as the proper targets of the introspective process. No major contemporary philosopher believes that all of mentality is available to be discovered by introspection. For example, the cognitive processes involved in early visual processing and in the detection of phonemes are generally held to be introspectively impenetrable and nonetheless (in some important sense) mental (Marr 1983; Fodor 1983). Many philosophers also accept the existence of unconscious beliefs or desires, in roughly the Freudian sense, that are not introspectively available (e.g., Gardner 1993; Velleman 2000; Moran 2001; Wollheim 2003; though see Lear 1998). Although in ordinary English usage we sometimes say we are "introspecting" when we reflect on our character traits, contemporary philosophers of mind generally do not believe that we can directly introspect character traits in the same sense in which we can introspect some of our other mental states (especially in light of research suggesting that we sometimes have poor knowledge of our traits, reviewed in Taylor and Brown 1988; Vazire 2010).

The two most commonly cited classes of introspectible mental states are attitudes, such as beliefs, desires, evaluations, and intentions, and conscious experiences, such as sensory experiences and the experiential aspects of emotion and imagery. (These two groups may not be disjoint: Depending on other aspects of their view, a philosopher may regard some or all conscious experiences as involving attitudes, and/or they may regard attitudes as things that are or can be consciously experienced.) It of course does not follow from the fact (if it is a fact) that some attitudes are introspectible that all attitudes are, or from the fact that some conscious experiences are introspectible that all conscious experiences are. Some accounts of introspection focus on attitudes (e.g., Nichols and Stich 2003), while others focus on conscious experiences (e.g., Hill 1991; Goldman 2006; Schwitzgebel 2012); and it is sometimes unclear to what extent philosophers intend their remarks about the introspection of one type of target to apply to the other type. There is no guarantee that the same mechanism or process is involved in introspecting all the different potential targets.

Generically, this article will describe the targets of introspection as mental states, though in some cases it may be more apt to think of the targets as processes rather than states. Also, in speaking of the targets of introspection as targets, no presupposition is intended of a self-detection view of introspection as opposed to a self-shaping or containment or expressivist view (see Section 2 below). The targets are simply the states self-ascribed as a consequence of the introspective process if the process works correctly, or if the introspective process fails, the states that would have been self-ascribed.

1.3 The Products of Introspection

Though philosophers have not explored the issue very thoroughly, accounts also differ regarding the products of introspection. Most philosophers hold that introspection yields something like beliefs or judgments about one's own mind, but others prefer to characterize the products of introspection as "thoughts", "representations", "awareness", "acquaintance", and so on. For ease of exposition, this article will describe the products of the introspective process as judgments, without meaning to assume the falsity of alternative views.

2. Introspective Versus Non-Introspective Accounts of Self-Knowledge

This section will outline several approaches to self-knowledge. Not all deserve to be called introspective, but an understanding of introspection requires an appreciation of this diversity of approaches—some for the sake of the contrast they provide to introspection proper and some because it's disputable whether they should be classified as introspective. These approaches are not exclusive. Surely there is more than one process by means of which we can obtain self-knowledge.

2.1 Self/Other Parity Accounts

Symmetrical or self/other parity accounts of self-knowledge treat the processes by which we acquire knowledge of our own minds as essentially the same as the processes by which we acquire knowledge of other people's minds. A simplistic version of this view is that we know both our own minds and the minds of others only by observing outward behavior. For example, we might know we like Thai food because we've noticed that we sometimes drive all the way across town to get it, or we might know that we're happy because we see or feel ourselves smiling.

On such a view, introspection strictly speaking is impossible, since the first-person condition on introspection (condition 2 in Section 1.1) cannot be met: There is no distinctive process that generates knowledge of one's own mind only. Twentieth-century behaviorist principles tended to encourage this view, but no prominent treatment of self-knowledge accepts this view in its most extreme and simple form. (The closest is probably Bem 1972.) Advocates of parity accounts sometimes characterize our knowledge of our own minds as arising from "theories" that we apply equally to ourselves and others (as in Nisbett and Ross 1980; Gopnik 1993a, 1993b). Consequently, this approach to self-knowledge is sometimes called the theory theory.

2.1.1 Theory Theory Accounts

Nisbett, Wilson, and their co-authors (Nisbett and Bellows 1977; Nisbett and Wilson 1977; Nisbett and Ross 1980; Wilson 2002) argue for self/other parity in our knowledge of the bases or causes of our own and others' attitudes and behavior, describing cases in which people seem to show poor knowledge of these bases or causes. For example, people queried in a suburban shopping center about why they chose a particular pair of stockings appeared to be ignorant of the influence of position on that choice, including explicitly denying that influence when it was suggested to them. People asked to rate various traits of supposed job applicants were unaware that their judgments of the applicant's flexibility were greatly influenced by having been told that the applicant had spilled coffee during the job interview (see also Section 4.2.2 below). In such cases, Nisbett and his co-investigators found that participants' descriptions of the causal influences on their own behavior closely mirrored the influences hypothesized by outside observers. From this finding, they infer that the same mechanism drives the first-person and third-person attributions, a mechanism that does not involve any special private access to the real causes of one's attitudes and behavior and instead relies heavily on intuitive psychological theories.

Gopnik (1993a, 1993b; Gopnik and Meltzoff 1994) deploys developmental psychological evidence to support a parity theory of self-knowledge. She points to evidence that for a wide variety of mental states, including believing, desiring, and pretending, children develop the capacity to ascribe those states to themselves at the same age they develop the capacity to ascribe those states to others. For example, children do not seem to be able to ascribe to themselves past false beliefs (after having been tricked by the experimenter) any earlier than they can ascribe false beliefs to other people. This appears to be so even when that false belief is in the very recent past, having only just been revealed to be false. According to Gopnik, this pervasive parallelism shows that we are not given direct introspective access to our beliefs, desires, pretenses, and the like. Rather, we must develop a "theory of mind" in light of which we interpret evidence underwriting our self-attributions. The appearance of the immediate givenness of one's mental states is, Gopnik suggests, merely an "illusion of expertise": Experts engage in all sorts of tacit theorizing that they don't recognize as such—the expert chess player for whom the strength of a move seems simply visually given, the doctor who immediately intuits cancer in a patient. Since we are all experts at mental state attribution, we don't recognize the layers of theory underwriting the process.

2.1.2 Restrictions on Parity

The empirical evidence behind self/other parity views remains contentious (Nichols and Stich 2003; Carruthers 2011; Cassam 2014). Furthermore, though Nisbett, Wilson, and Gopnik all stress the parallelism between mental state attribution to oneself and others and the inferential and theoretical nature of such attributions, they all also leave some room for a kind of self-awareness different in kind from the awareness one has of others' mental lives. Thus, none endorses a purely symmetrical or self/other parity view. Nisbett and Wilson emphasize that we lack access only to the "processes" or causes underlying our behavior and attitudes. Our attitudes themselves and our current sensations, they say, can be known with "near certainty" (1977, 255; though contrast Nisbett and Ross 1980, 200–202, which seems sympathetic to skepticism about special access even to our attitudes). Gopnik allows that we "may be well equipped to detect certain kinds of internal cognitive activity in a vague and unspecified way", and that we have "genuinely direct and special access to certain kinds of first-person evidence [which] might account for the fact that we can draw some conclusions about our own psychological states when we are perfectly still and silent", though we can "override that evidence with great ease" (1993a, 11–12). In the analytic philosophical tradition, Ryle (1949) similarly emphasizes the importance of outward behavior in the self-attribution of mental states while acknowledging the presence of "twinges", "thrills", "tickles", and even "silent soliloquies", which we know of in our own case and that do not appear to be detectable by observing outward behavior. However, none of these authors develops an account of this apparently more direct self-knowledge. Their theories are consequently incomplete. Regardless of the importance of behavioral evidence and general theories in driving our self-attributions, in light of the considerations that drive Nisbett, Wilson, Gopnik, and Ryle to these caveats, it is probably impossible to sustain a view on which there is complete parity between first- and third-person mental state attributions. There must be some sort of introspective, or at least uniquely first-person, process.

Self/other parity views can also be restricted to particular subclasses of mental states: Any mental state that can only be known by cognitive processes identical to the processes by which we know about the same sorts of states in other people is a state to which we have no distinctively introspective access. States for which parity is often asserted include personality traits, unconscious motives, early perceptual processes, and the bases of our decisions (see Section 4.2.1 below for more on this). We learn about these states in ourselves, perhaps, in much the same way we learn about such states in other people. Carruthers (2011; see also Section 4.2.2 below) presents a case for parity of access to propositional attitudes like belief and desire (in contrast to inner speech, visual imagery, and the like, which he holds to be introspectible).

2.2 Self-Detection Accounts

Etymologically, the term "introspection"—from the Latin "looking into"—suggests a perceptual or quasi-perceptual process. Locke writes that we have a faculty of "Perception of the Operation of our own Mind" which, "though it be not Sense, as having nothing to do with external Objects; yet it is very like it, and might properly enough be call'd internal Sense" (1690 [1975, 105], italics suppressed). Kant (1781/1997) says we have an "inner sense" by which we learn about mental aspects of ourselves that is in important ways parallel to the "outer sense" by which we learn about outer objects.

But what does it mean to say that introspection is like perception? In what respects? As Shoemaker (1994a, 1994b, 1994c) observes, in a number of respects introspection is plausibly unlike perception. Both friends and foes of self-detection accounts have tended to agree that introspection does not involve a distinctive phenomenology of "introspective appearances" (Shoemaker 1994a, 1994b, 1994c; Lycan 1996; Rosenthal 2001; Siewert 2012; though Kriegel forthcoming might be an exception): The visual experience of redness has a distinctive sensory quality or phenomenology that would be difficult or impossible to convey to a blind person; analogously for the olfactory experience of smelling a banana, the auditory experience of hearing a pipe organ, and the experience of touching something painfully hot. To be analogous to sensory experience in this respect, introspection would have to generate an analogously distinctive phenomenology—some quasi-sensory phenomenology in addition to, say, the visual phenomenology of seeing red that is the phenomenology of the introspective appearance of the visual phenomenology of seeing red. This would seem to require two layers of appearance in introspectively attended sensory perception: a visual appearance of the outward object and an introspective appearance of that visual appearance. (This isn't to say, however, that introspection, or at least conscious introspection, doesn't involve some sort of "cognitive phenomenology"—if there is such a thing—of the sort that accompanies conscious thoughts in general: See Bayne and Montague, eds., 2011.) Proponents of quasi-perceptual models of introspection concede the existence of such disanalogies (e.g., Lycan 1996).

We might consider an account of introspection to be quasi-perceptual, or less contentiously to be a "self-detection" account, if it meets the first five conditions described in Section 1.1—that is, the mentality condition, the first-person condition, the temporal proximity condition, the directness condition, and the detection condition. One aspect of the detection condition deserves special emphasis here: Detection requires the ontological independence of the target mental state and the introspective judgment—the two states will be causally connected (assuming that all has gone well) but not constitutively connected. (Shoemaker (1994a, 1994b, 1994c) calls models of self-knowledge that meet this aspect of the detection condition "broad perceptual" models.)

Self-detection accounts of self-knowledge seem to put introspection epistemically on a par with sense perception. To many philosophers, this has seemed a deficiency in these accounts. A long and widespread philosophical tradition holds that self-knowledge is epistemically special, that we have specially "privileged access" to—perhaps even infallible or indubitable knowledge of—at least some portion of our mentality, in a way that is importantly different in kind from our knowledge of the world outside us (see Section 4 below). Both self/other parity accounts (Section 2.1 above) and self-detection accounts (this section) of self-knowledge either deny any special epistemic privilege or characterize that privilege as similar to the privilege of being the only person to have an extended view of an object or a certain sort of sensory access to that object. Other accounts of self-knowledge to be discussed later in Section 2.3 are more readily compatible with, and often to some extent driven by, more robust notions of the epistemic differences between self-knowledge and knowledge of environmental objects.

2.2.1 Simple Monitoring Accounts

Armstrong (1968, 1981, 1999) laid the groundwork for simple, quasi-perceptual, monitoring accounts of introspection. Armstrong describes introspection as a "self-scanning process in the brain" (1968, 324), and he stresses what he sees as the important ontological distinction between the state of awareness produced by the self-scanning procedure and the target mental state of which one is aware by means of that scanning—the distinction, for example, between one's pain and one's introspective awareness of that pain. Armstrong also appears to hold that the quasi-perceptual introspective process proceeds at a fairly low level cognitively—quick and simple, typically without much interference by or influence from other cognitive or sensory processes, and approximately continuous. Note that in calling reflexive self-monitoring "introspection", Armstrong violates the effort condition from Section 1.1, which requires that introspection not be constant and automatic.

Morales (forthcoming) offers a similarly simple monitoring account, inspired by the framework of "signal detection theory" in the psychology of perception (Green and Swets 1966). Morales characterizes introspection as a matter of focusing one's attention on current conscious experiences of varying "phenomenal magnitude" or "strength" in order to produce judgments about them. The strength and accuracy of the introspective response normally covaries with the strength of the target conscious experience.
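The signal-detection idea can be illustrated with a minimal sketch in code. This is our own toy illustration, not Morales's model; the function name, criterion, and noise parameters are invented for the example. The introspective judgment is modeled as a noisy comparison of the experience's phenomenal strength against a response criterion, so the reliability of the report covaries with the strength of the target experience.

```python
# Toy signal-detection-style sketch of introspective report
# (illustrative only; not Morales's own formal model).
import random

def introspective_report(strength: float, criterion: float = 0.5,
                         noise_sd: float = 0.2) -> bool:
    """Report an experience iff its noisy internal signal exceeds the criterion."""
    signal = strength + random.gauss(0.0, noise_sd)
    return signal > criterion

# Stronger experiences are reported more reliably than weaker ones.
random.seed(0)
weak_rate = sum(introspective_report(0.6) for _ in range(1000)) / 1000
strong_rate = sum(introspective_report(1.5) for _ in range(1000)) / 1000
```

On this picture, introspective error is not an all-or-nothing failure but a graded matter of signal strength relative to noise and criterion, which is what gives the account its quasi-perceptual character.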

Nichols and Stich (2003) employ a model of the mind on which having a propositional attitude such as a belief or desire is a matter of having a representation stored in a functionally-defined (and metaphorical) "belief box" or "desire box" (see also the entries on belief and functionalism). On their account, self-awareness of these attitudes typically involves the operation of a simple "Monitoring Mechanism" that merely takes the representations from these boxes, appends an "I believe that …", "I desire that …", or whatever (as appropriate) to that representation, and adds it back into the belief box. For example, if I desire that my father flies to Hong Kong on Sunday, the Monitoring Mechanism can copy the representation in my desire box with the content "my father flies to Hong Kong on Sunday" and produce a new representation in my belief box—that is, create a new belief—with the content "I desire that my father flies to Hong Kong on Sunday". Nichols and Stich also propose an analogous but somewhat more complicated mechanism (they leave the details unspecified) that takes percepts as its input and produces beliefs about those percepts as its output.
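The box-and-copy structure of this account can be made vivid with a small sketch. This is our illustration only, not Nichols and Stich's own formalism; the names `belief_box`, `desire_box`, and `monitor` are invented for the example. Attitudes are modeled as bare content sentences stored in functionally labeled sets, and monitoring copies a content out of a box, wraps it in a self-attribution, and deposits the result in the belief box.

```python
# Toy model of a "Monitoring Mechanism" in the style of Nichols and Stich
# (illustrative sketch; box names and function are our own inventions).
belief_box = {"it is raining"}
desire_box = {"my father flies to Hong Kong on Sunday"}

def monitor(content: str, attitude: str) -> None:
    """Form the belief 'I <attitude> that <content>' from a stored attitude."""
    source = belief_box if attitude == "believe" else desire_box
    if content in source:  # detection: the target state must pre-exist
        belief_box.add(f"I {attitude} that {content}")

monitor("my father flies to Hong Kong on Sunday", "desire")
```

Note that the mechanism only copies the content; it performs no analysis of it, which is what makes the proposed mechanism so computationally simple.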

Nichols and Stich emphasize that this Monitoring Mechanism does not operate in isolation, but often co-operates or competes with a second means of acquiring self-knowledge, which involves deploying theories along the lines suggested by Gopnik (see Section 2.1.1 above). Nichols and Stich argue that autistic people have very poor theoretical knowledge of the mind, as suggested by their very poor performance in "theory of mind" tasks (tasks like assessing when someone will have a false belief), and yet they succeed in monitoring their mental states as shown by their ability to describe their mental states in autobiographies and other forms of self-report. Conversely, Nichols and Stich argue that schizophrenic people remain excellent theorizers about mental states but monitor their own mental states very poorly—for example, when they fail to recognize certain actions as their own and struggle to report, or deny the existence of, ongoing thoughts. If this view is empirically correct, the pattern of "double dissociation" suggests that theoretical inference and self-monitoring are distinct and separable processes.

2.2.2 Multi-Process Monitoring Accounts

Goldman (2006) criticizes the account of Nichols and Stich (see Section 2.2.1 above) for not describing how the Monitoring Mechanism detects the attitude type of the representation (belief, desire, etc.). He argues that a simple mechanism could not discern the dispositional and causal relational facts in virtue of which an attitude is the type it is (see the entry on functionalism). Goldman also argues that the Nichols and Stich account leaves unclear how we can discern the strength or intensity of our beliefs, desires, and other propositional attitudes. Goldman's positive account starts with the idea that introspection is a quasi-perceptual process that involves attending to individual mental states, which are then classified into broad categories (similarly, in visual perception we can classify seen objects into broad categories). However, on Goldman's view this process can only generate introspective knowledge of the general types of mental states (such as belief, happiness, bodily sensation) and some properties of those mental states (such as degree of confidence for belief, and "a multitude of finely delineated categories" for bodily sensation). Specific contents, especially of attitudes like belief, are too manifold, Goldman suggests, for pre-existing classificational categories to exist for each one. Rather, we represent the specific content of such mental states by "redeploying" the representational content of the mental state, that is, simply copying the content of the introspected mental state into the content of the introspective belief or judgment (somewhat like in the Nichols and Stich account). Finally, Goldman argues that some mental states require "translation" into the mental code appropriate to belief if they are to be introspected. Visual representations, he suggests, have a different format or mental code than beliefs, and therefore cognitive work will be necessary to translate the fine-grained detail of visual experience into mental contents that can be believed introspectively.

Hill (1991, 2009) also offers a multi-process self-detection account of introspection. Like Goldman, Hill sees attention (in some broad, non-sensory sense) as central to introspection, though he also allows for introspective awareness without attention (1991, 117–118). Hill (2009) argues that introspection is a process that produces judgments about, rather than perceptual awareness of, the target states, and suggests that the processes that generate these judgments vary considerably, depending on the target state, and are often complex. For example, judgments about enduring beliefs and desires must, he says, involve complex procedures for searching “vast and heterogeneous” long-term memory stores. Central to Hill’s (1991) account is an emphasis on the capacity of introspective attention to transform—especially to amplify and enrich, even to create—the target experience. In this respect, Hill argues, the introspective act differs from the paradigmatic observational act, which does not transform the object perceived (though of course both scientific and ordinary—especially gustatory—observation can affect what is perceived). Thus Hill’s account contains a “self-fulfillment” or “self-shaping” aspect in the sense of Section 2.3.1 and Section 2.3.2 below, and only qualifiedly and conditionally meets the detection condition on accounts of introspection as described in Section 1.1 above—the condition that introspection involves attunement to or detection of a pre-existing mental state or event.

Like Hill, Prinz (2004) argues that introspection must involve multiple mechanisms, depending both on the target states (e.g., attitudes vs. perceptual experiences) and on the particular mode of access to those states. Access might involve controlled attention, or it might be more of a passive noticing; it might involve the verbal “captioning” or labeling of experiences, or it might involve the kind of non-verbal access that even monkeys have to their mental states. Prinz (2007) sharply distinguishes between the conceptual classification of our conscious experiences into various types that can be recognized and re-identified over time—classifications which he thinks must necessarily be somewhat crude—and non-conceptual knowledge of ongoing conscious experiences attained by “pointing” at them with attention. The latter type of knowledge, Prinz argues, is much more detailed and finely structured than the former but cannot be expressed or retained over time. Prinz also follows Hill in emphasizing that introspection often intensifies or otherwise modifies the target experience. In such cases, Prinz argues, introspective “access” is only access in an attenuated sense.

2.3 Introspection Without Self-Detection?

There are several ways to generate judgments, or at least statements, about one’s own current mental life—self-ascriptions, let’s call them—that are reliably true though they do not involve the detection of a pre-existing state. Consider the following four types of case:

  1. Automatically self-fulfilling self-ascriptions: I think to myself, “I am thinking”. Or: I judge that I am making a judgment about my own mental life. Or: I say to myself in inner speech, “I am saying to myself in inner speech: ‘blu-bob’”. Such self-ascriptions are automatically self-fulfilling. Their existence conditions are a subset of their truth conditions.

  2. Self-ascriptions that prompt self-shaping: I declare that I have a mental image of a pink elephant. At the same time as I make this declaration, I deliberately cause myself to form the mental image of a pink elephant. Or: A man uninitiated in romantic love declares to a prospective lover that he is the kind of person who sends flowers to his lovers. At the same time as he says this, he successfully resolves to be the kind of person who sends flowers to his lovers. The self-ascription either precipitates a change or buttresses what already exists in such a way as to make the self-ascription accurate. In these cases, unlike the cases described in (1), some change or self-maintenance is necessary to render the self-ascription true, beyond the self-ascriptional event itself.

  3. Accurate self-ascription through self-expression: I learn to say “I’m in pain!” instead of “ow!” as an automatic, unreflective response to painful stimuli. Or: I use the self-attributive sentence “I believe Russell changed his mind about pacifism” simply as a cautious way of expressing the belief that Russell changed his mind about pacifism, this expression being the product of reflection upon Russell rather than of reflection upon my own mind. Self-expressions of this sort are assumed here to flow naturally from the states expressed, in roughly the same way that facial expressions and non-self-attributive verbal expressions flow naturally from those same states—that is, without being preceded by any attempt to detect the state self-ascribed.

  4. Self-ascriptions derived from judgments about the outside world: From the non-self-attributive fact that Stanford is south of Berkeley, I derive the self-attributive conclusion that I believe that Stanford is south of Berkeley. Or: From the non-self-attributive fact that it would be good to go home now, I derive the self-attributive judgment that I want to go home now. These derivations may be inferences, but if so, such inferences require no specific premises about ongoing mental states. Perhaps one embraces a general inference principle like “from P, it is permissible to derive I believe that P”, or “normally, if something is good, I want it”.

The following accounts of self-knowledge all take advantage of one or more of these facts about self-ascription. Because these ways of obtaining self-knowledge all violate the detection condition on introspection (condition 5 in Section 1.1 above), and because philosophers disagree about whether methods that violate that condition count as introspective methods strictly speaking, philosophers are correspondingly divided about whether accounts of self-knowledge of the sort described in this section should be regarded as accounts of introspection.

2.3.1 Self-Fulfillment and Containment

An emphasis on infallible knowledge through self-fulfilling self-ascriptions goes back at least to Augustine (c. 420 C.E./1998) and is most famously deployed by Descartes in his Discourse on Method (1637/1985) and Meditations (1641/1984), where he takes the self-fulfilling thought that he is thinking as indubitably true, immune to even the most radical skepticism, and a secure ground on which to build further knowledge.

Contemporary self-fulfillment accounts tend to exploit the idea of containment. In a 1988 essay, Burge writes:

When one knows one is thinking that p, one is not taking one’s thought (or thinking) that p merely as an object. One is thinking that p in the very event of thinking knowledgeably that one is thinking it. It is thought and thought about in the same mental act. (654)

This is the case, Burge argues, because “by its reflexive, self-referential character, the content of the second-order [self-attributive] judgment is locked (self-referentially) onto the first-order content which it both contains and takes as its subject matter” (1988, 659–660; cf. Heil 1988; Gertler 2000, 2001; Heil and Gertler describe such thoughts as introspective, while Burge appears not to think of self-knowledge so structured as introspective: 1998, 244; see also 1988, 652). In judging that I am thinking of a banana, I thereby necessarily think of a banana: The self-attributive judgment contains, as a part, the very thought self-ascribed, and thus cannot be false. In a 1996 essay, Burge extends his remarks to include not just self-attributive “thoughts” as targets but also (certain types of) “judgments” (e.g., “I judge, herewith, that there are physical entities” and other judgments with “herewith”-like reflexivity, 92).

Shoemaker (1994a, 1994b, 1994c) deploys the containment idea very differently, and over a much wider array of introspective targets, including conscious states like pains and propositional attitudes like belief. Shoemaker speculates that the relevant containment relation holds not between the contents or concepts employed in the target state and in the self-ascriptive state but rather between their neural realizations in the brain. To develop this point, Shoemaker distinguishes between a mental state’s “core realization” and its “total realization”. One might think of mental processes as transpiring in fairly narrow regions of the brain (their core realization), and yet, Shoemaker suggests, it’s not as though we could simply carve off those regions from all others and still have the mental state in question. To be the mental state it is, the process must be embedded in a larger causal network involving more of the brain (the total realization). Relationships of containment or overlap between the core and total realizations of the target state and those of the self-ascriptive judgment might then underwrite introspective accuracy. For example, the total brain-state realization of the state of pain may simply be a subset of the total brain-state realization of the state of believing that one is in pain. Introspective accuracy might then be explained by the fact that the introspective judgment is not an independently existing state.

Philosophers have also applied Burge-like content-containment models (as opposed to Shoemaker-like realization-containment models) to self-knowledge of conscious states, or “phenomenology”, in particular—for example, Gertler (2001), Papineau (2002), Chalmers (2003), Horgan and Kriegel (2007), Balog (2012), and Giustina (2021). Husserl (1913/1982) offers an early phenomenal containment approach, arguing that we can at any time put our “cogitatio”—our conscious experiences—consciously before us through a kind of mental glancing, with the self-perception that arises containing as a part the conscious experience toward which it is directed, and incapable of existing without it. Papineau offers a “quotational” account on which in introspection we self-attribute “the experience: ___”, where the blank is completed by the experience itself. Chalmers writes that “direct phenomenal beliefs” about our experiences are “partly constituted by an underlying phenomenal quality”, in that the two will be tightly coupled across “a wide range of nearby conceptually possible cases” (2003, 235).

One possible difficulty with such accounts is that while it seems plausible to suppose that an introspective thought or judgment might contain another thought or judgment as a part, it’s less clear how a self-attributive judgment or belief might contain a piece of conscious experience as a part. Beliefs, and other belief-like mental states like judgments, one might think, contain concepts, not conscious experiences, as their constituents (Fodor 1998); or, alternatively, one might think that beliefs are functional or dispositional patterns of response to input (Dennett 1987; Schwitzgebel 2002), again rendering it unclear how a piece of phenomenology could be part of a belief. Perhaps with this concern in mind, advocates of containment accounts often appeal to “phenomenal concepts” that are, like the introspective judgments to which they contribute, partly constituted by the conscious experiences that are the contents of those concepts. Such concepts are often thought to be obtained by demonstrative attention to our conscious experiences as they are ongoing. Alternatively, Giustina’s containment account (2021, 2022) treats the product of introspection as a non-conceptual acquaintance with the target experience rather than a conceptual judgment.

It would seem, at least, that beliefs, concepts, or judgments containing pieces of phenomenology would have to expire once the phenomenology has passed, and thus that the introspective judgments could not be used in later inferences without recreating the state in question. Chalmers (2003) concedes the temporal locality of such phenomenology-containing introspective judgments and consequently their limited use in speech and in making generalizations. Papineau (2002), in contrast, embraces a theory on which the imaginative recreation of phenomenology in thinking about past experience is commonplace.

2.3.2 Self-Shaping

Although we can seemingly, at least sometimes, arrive at true self-ascriptions through the self-shaping and self-expression procedures (2 and 3) described at the beginning of Section 2.3, and although such procedures may meet the first three conditions on an account of introspection as described in Section 1.1—that is, they may (depending on how they are described and developed) be procedures that can yield only knowledge or judgments (or at least self-ascriptions) about one’s own currently ongoing or very recently past mental states—few philosophers would describe such procedures as “introspective”. Nonetheless, they warrant brief treatment here, partly for the same reason self/other parity accounts warranted treatment in Section 2.1 above—that is, as skeptical accounts suggesting that the scope of introspection may be considerably narrower than is generally thought—and partly as background for the “transparency” accounts to be discussed in Section 2.3.4 below, with which they are often married.

It is difficult to find accounts of self-knowledge that stress the self-shaping technique in its purest, forward-looking, causal form—perhaps because it’s clear that self-knowledge must involve considerably more than this (Gertler 2011). However, McGeer (1996, 2008; McGeer and Pettit 2002) and Zawidzki (2016) emphasize the importance of self-shaping. McGeer writes that “we learn to use our intentional self-ascriptions to instill or reinforce tendencies and inclinations that fit with these ascriptions, even though such tendencies and inclinations may at best have been only nascent at the time we first made the judgments” (McGeer 1996, 510). If I describe myself as brave in battle, or as a committed vegetarian—especially if I do so publicly—I create commitments and expectations for myself that help to make those self-ascriptions true. McGeer compares self-knowledge to the knowledge a driver has, as opposed to a passenger, of where the car is going: The driver, unlike the passenger, can make it the case that the car goes where the driver says it is going.

There are also strains in Dennett (though Dennett may not have an entirely consistent view on these matters; see Schwitzgebel 2007) that suggest either a self-fulfillment or a self-shaping view. In some places, Dennett compares “introspective” self-reports about consciousness to works of fiction, immune to refutation in the same way that fictional claims are. One could no more go wrong about one’s consciousness, Dennett says, than Doyle could go wrong about the color of Holmes’s easy chair (e.g., 1991, 81, 94). Such remarks are consistent with either an anti-realist view of fiction (there are no facts about the easy chair or about consciousness; see 366–367) or a self-fulfillment or self-shaping realist view (Doyle creates facts about Holmes as he thinks or writes about him; we create facts about what it’s like to be us in thinking or making claims about our consciousness, as perhaps on 81 and 94). More moderately, in discussing attitudes, Dennett emphasizes how the act of formulating an attitude in language—for example, when ordering a menu item—can involve self-attributing a degree of specification in one’s attitudes that was not present before, thereby committing one to, and partially or wholly creating, the specific attitude self-ascribed (1987, 20).

Commissive accounts of self-knowledge also involve self-shaping, but not a form of self-shaping in which the introspective judgment brings into existence an ontologically distinct target state; rather, they involve a kind of self-shaping with a self-fulfillment or containment component similar to that discussed in Section 2.3.1 above. Moran (2001), for example, argues that normally when we are prompted to think about what we believe, desire, or intend (and he limits his account primarily to these three mental states), we reflect on the (outward) phenomena in question and make up our minds about what to believe, desire, or do. Rather than attempting to detect a pre-existing state, we open or re-open the matter and come to a resolution. Since we normally do believe, desire, and intend what we resolve to believe, desire, and do, we can therefore accurately self-ascribe those attitudes. Coliva (2016) argues that the self-ascription “I believe that P” is like a performative statement in that it constitutes a commitment to the belief that P or to the truth of P. (See also Wright 1989; Falvey 2000; Heal 2002; Boyle 2009, 2024; Singh forthcoming.)

2.3.3 Expressivism

Wittgenstein writes:

[H]ow does a human being learn the meaning of the names of sensations?—of the word “pain” for example. Here is one possibility: words are connected with the primitive, the natural, expressions of the sensation and used in their place. A child has hurt himself and he cries; and then adults talk to him and teach him exclamations and, later, sentences. They teach the child new pain-behaviour.
“So you are saying that the word ‘pain’ really means crying?”—On the contrary: the verbal expression of pain replaces crying and does not describe it. (1953/1968, sec. 244)

And

“It can’t be said of me at all (except perhaps as a joke) that I know I am in pain. What is it supposed to mean—except perhaps that I am in pain?” (1953/1968, sec. 246).

On Wittgenstein’s view, it is both true that I am in pain and that I say of myself that I am in pain, but the utterance in no way emerges from a process of detecting one’s pain.

A simple expressivist view—sometimes attributed to Wittgenstein on the basis of these and related passages—denies that the expressive utterances (e.g., “that hurts!”) genuinely ascribe mental states to the individuals uttering them. Such a view faces serious difficulties accommodating the evident semantics of self-ascriptive utterances, including their use in inference and the apparent symmetries between present-tense and past-tense uses and between first-person and third-person uses (Wright 1998; Bar-On 2004). Consequently, Bar-On advocates, instead, what she calls a neo-expressivist view, according to which expressive utterances can share logical and semantic structure with non-expressive utterances, despite the epistemic differences between them.

Expressivists have not always been clear about exactly the range of target mental states expressible in this way, but it seems plausible that at least in principle some true (or apt) self-ascriptions could arise in this manner, with no intervening introspective self-detection. The question would then be whether this is how we generally arrive at true self-ascriptions for some particular class of mental states, or whether some more archetypically introspective process is also available. (For a more detailed treatment of expressivism, consult the section on the expressivist model of self-knowledge in the entry self-knowledge.)

2.3.4 Transparency

Evans writes:

[I]n making a self-ascription of belief, one’s eyes are, so to speak, or occasionally literally, directed outward—upon the world. If someone asks me, “Do you think there is going to be a third world war?”, I must attend, in answering him, to precisely the same outward phenomena as I would attend to if I were answering the question “Will there be a third world war?” I get myself into the position to answer the question whether I believe that p by putting into operation whatever procedure I have for answering the question whether p. (1982, 225)

Transparency approaches to self-knowledge, like Evans’, emphasize cases in which it seems that one arrives at an accurate self-ascription not by means of attending to, or thinking about, one’s own mental states, but rather by means of attending to or thinking about the external states of the world that the target mental states are about. Note that this claim has both a negative and a positive aspect: We do not learn about our minds by, as it were, gazing inward; and we do learn about our minds by reflecting on the aspects of the world that our mental states are about. The positive and negative theses are separable: A pluralist might accept the positive thesis without the negative one; an advocate of a self/other parity theory or an expressivist account of self-knowledge (with respect to a certain class of target states) might accept the negative thesis without the positive. (N.B.: In the philosophical literature on self-knowledge, “transparency” is also sometimes used to mean something like self-intimation in the sense of Section 4.1.1 below, for example in Wright 1998 and Bilgrami 2006. This is a completely different usage, not to be confused with the present one.) Because transparency accounts emphasize the outward focus of our thought in arriving at self-ascriptions, calling such accounts accounts of “introspection” strains against the etymology of the term. Nonetheless, some prominent advocates of transparency accounts, such as Dretske (1995) and Tye (2000), offer them explicitly as accounts of introspection.

The range of target states to which transparency applies is a matter of some dispute. Among philosophers who accept something like transparency, belief is generally regarded as transparent (Gordon 1995, 2007; Gallois 1996; Moran 2001; Fernández 2003; Byrne 2018). Perceptual states or perceptual experiences are also often regarded as transparent in the relevant sense. Harman’s example is the most cited:

When Eloise sees a tree before her, the colors she experiences are all experienced as features of the tree and its surroundings. None of them are experienced as intrinsic features of her experience. Nor does she experience any features of anything as intrinsic features of her experiences. And that is true of you too. There is nothing special about Eloise’s visual experience. When you see a tree, you do not experience any features as intrinsic features of your experience. Look at a tree and try to turn your attention to intrinsic features of your visual experience. I predict you will find that the only features there to turn your attention to will be features of the presented tree. (Harman 1990, 667)

Harman’s emphasis here is on the negative thesis, which goes back at least to Moore (1903; though Moore does not unambiguously endorse it). The view that it is impossible to attend directly to perceptual experience has been especially stressed by Tye (1995, 2000, 2002; see also Evans 1982; Van Gulick 1993; Shoemaker 1994a; Dretske 1995; Martin 2002; Stoljar 2004), and it directly conflicts with accounts according to which we learn about our sensory experience primarily by directing introspective attention to it (e.g., Goldman 2006; Petitmengin 2006; Hill 2009; Siewert 2012; and back at least to Wundt 1888 and Titchener 1908 [1973]).

Gordon (2007) argues (contra Nichols and Stich 2003 and Goldman 2006) that Evans-like ascent routines (ascending from “p” to “I believe that p”) can drive the accurate self-ascription of all the attitudes, not just belief. He makes his case by wedding the transparency thesis to something like an expressive account of self-ascription: To answer a question about what I want—for example, which flavor of ice cream do I want?—I think not about my desires but rather about the different flavors available, and then I express the resulting attitude self-ascriptively. Similarly for hopes, fears, wishes, intentions, regrets, etc. Gordon points out that from a very early age, before they likely have any self-ascriptive intent, children learn to express their attitudes self-ascriptively, for example with simple phrases like “[I] want banana!” (see also Bar-On 2004).

Commissive accounts of self-knowledge (see Section 2.3.2 above) also generally affirm transparency: Reflecting on the world generates commitment to a belief, desire, or intention, which one thereby also knows or self-ascribes (Falvey 2000; Moran 2001; Coliva 2016; Boyle 2024).

The transparency thesis is in fact consistent not just with expressivism and commissive accounts but with any of the four non-detection-based self-ascription procedures described at the beginning of this section (and indeed Aydede and Güzeldere 2005 attempt to reconcile aspects of the transparency view with a broadly detection-like approach to introspection). This manifold compatibility flows from the fact that, by itself, the transparency thesis does not go far toward a positive view of the mechanisms of self-knowledge.

Byrne (2018), Dretske (1995), and Roche (2016, 2023) bring together transparency and something like a derivational model of self-knowledge—a model on which I derive the conclusion that I believe that P directly from P itself, or the conclusion that I am representing x as F from the fact that x is F—a fact which must of course, to serve as a premise in the derivation, be represented (or believed) by me. Byrne argues that just as one might abide by the following epistemic rule:

DOORBELL: If the doorbell rings, believe that there is someone at the door.

so also might someone abide by the rule:

BEL: If P, believe that you believe that P.

To determine whether you believe that P, first determine whether P is the case, then follow the rule BEL. Byrne (2018) offers similar accounts of self-knowledge of intention, thinking, seeing, and desire.

Dretske analogizes introspection to ordinary cases of “displaced perception”—cases in which one perceives that something is the case by way of directly perceiving some other thing (e.g., hearing that the mail carrier has arrived by hearing the dog’s barking; seeing that you weigh 110 pounds by seeing the dial on the bathroom scale): One perceives that one represents x as F by way of perceiving the F-ness of x. Dretske notes, however, two points of disanalogy between the cases. In the case of hearing that the mail carrier has arrived by hearing the dog’s bark, the conclusion (that the mail carrier has arrived) is only established if the premise about the dog’s barking is true, and furthermore it depends on a defeasible connecting belief that the dog’s barking is a reliable indicator of the mail’s arrival. In the introspective case, however, the inference, if it is an inference, does not require the truth of the premise about x’s being F. Even if x is not F, the conclusion that I’m representing x as F is supported. Nor does there seem to be any sort of defeasible connecting belief.

Tye also emphasizes transparency in his account of introspection, though he limits his remarks to the introspection of conscious experience or “phenomenal character”. In his 2000 book, Tye develops a view like Dretske’s, analogizing introspection to displaced perception, though Tye, unlike Dretske, explicitly denies that inference is involved, instead proposing a mechanism similar to the sort envisioned by simple monitoring accounts like those of Nichols and Stich (2003; see Section 2.2.1 above): a reliable process that, in the case of perceptual self-awareness, takes awareness of external things as its input and yields as its output awareness of phenomenal character.[1] However, in his 2009 book, Tye rejects the displaced perception model in favor of a version of the transparency view that identifies phenomenal character with external qualities in the world, so that perceiving features of the world just is perceiving phenomenal character—a view that, he recognizes, is then charged with the difficult task of explaining how phenomenal character is a property (or “quality”) of external objects rather than, as is generally assumed, a property only of experiences of those objects.

Several authors have challenged the idea that sensory experience necessarily eludes attention—that is, they have denied the central claim of transparency theories about sensory experience. Block (1996), Kind (2003), and Smith (2008) have argued that phosphenes—those little lights you see when you press on your eyes—and visual blurriness are aspects of sensory experience that can be directly attended (though see Gow 2019 for objections to this line of reasoning). Siewert (2004) has argued that what’s intuitively appealing in the transparency view is primarily the observation that in reflecting on sensory experience one does not withdraw attention from the objects sensed; but, he argues, this is compatible with also devoting a certain sort of attention to the sensory experience itself. In early discussions of attention, perceptual attention was sometimes distinguished from “intellectual attention” (James 1890 [1981]; Baldwin 1901–1905; see also Peacocke 1998; Mole 2011), that is, from the kind of attention we can devote to purely imagined word puzzles or to philosophical issues. If non-sensory forms of attention are possible, then the transparency thesis for sensory experience will require restatement: Is it only sensory attention to sensory experience that is impossible? Or is it any kind of attention whatsoever? Simply to say we don’t attend sensorily to our mental states is to make only a modest claim, akin to the claim that we see objects rather than seeing our visual experiences of objects; but to say that we cannot attend to our mental states even intellectually appears extreme. In light of this, it remains unclear how to cast the transparency intuition so as to better bring out the core idea meant to be conveyed by the slogan that introspecting sensory experience is not a matter of attending to one’s own mind (see also Weksler, Jacobson, and Bronfman 2019).

2.4 Introspective Pluralism

Philosophers discussing self-knowledge often write as if approaches highlighting one of these non-self-detection methods of generating self-ascriptions conflict with approaches that highlight others of these methods, and also as if approaches of this general sort conflict with self-detection approaches (Section 2.2 above). While conflicts will certainly exist between different accounts intended to serve as exhaustive approaches to self-knowledge, it is implausible that any one, or even any few, of these approaches to self-knowledge is exhaustive. Plausibly, all of the non-self-detection approaches described above can lead, at least occasionally, to accurate self-ascriptions. Enthusiasts for other models needn’t deny this. It also seems hard to deny that we at least sometimes reach conclusions about our mental lives based on the kind of theoretical inference or self-interpretation emphasized by advocates of self/other parity accounts (Section 2.1 above). Finally, even philosophers concerned about strong or oversimple self-scanning views might wish to grant that the mind can do some sort of tracking of its own present or recently past states—for example, when we trace back a stream of recently past thoughts that presumably can’t (because past) be self-ascribed by self-fulfillment, self-shaping, self-expression, or transparency methods.

Schwitzgebel (2012) elevates this pluralism into a negative account of introspection. Introspective judgments, he says, arise from a shifting confluence of many processes, recruited opportunistically, none of which can be called introspection proper. Just as there is no single, unified faculty of poster-taking-in that one employs when trying to take in a poster at a psychological conference or science fair, there is, on Schwitzgebel’s view, no single, unified faculty of introspection or one underlying core process, nor even a few dedicated mechanisms or processes. Instead, the introspector, like the poster-viewer, brings to bear a diverse range of cognitive resources as suits the occasion. A process wouldn’t be worth calling “introspective”, he says, unless the introspector aimed to reach a judgment about their current or very recently past conscious experience, using at least some resources specific to the first-person case that exhibit some relatively direct sensitivity to the target state; but this limitation does not imply the existence of any dedicated introspective processes. Defenders of less extreme versions of pluralism, compatible with the existence of several dedicated introspective processes, include Prinz (2004), Hill (2009), Coliva (2016), Samoilova (2016), and Spener (2024).

3. The Role of Introspection in Scientific Psychology

3.1 The Rise of Introspective Psychology as a Science

Philosophers have long made introspective claims about the human mind—or, to speak more cautiously, they’ve made claims seemingly at least in part introspectively grounded. Aristotle (4th c. BCE/1961) asserts that thought does not occur without imagery. Mengzi (4th c. BCE/2008) argues that our hearts are pleased by moral goodness and revolted by evil, even if the pleasure and revulsion are not evident in our outward behavior. Berkeley finds in himself no “abstract ideas” like that of a triangle that is, in Locke’s terms, “neither oblique, nor rectangle, neither equilateral, equicrural, nor scalenon, but all and none of these at once” (Berkeley 1710/1965, 12; Locke 1689/1975, 596). James Mill (1829 [1878]) attempts a catalog of the varieties of sense experience.

Although a number of early modern philosophers had aimed to initiate the scientific study of the mind, it wasn’t until the middle of the 19th century—with the appearance of quantitative introspective methods, especially regarding sensory consciousness—that the study of the mind took shape as a progressive, mathematical, laboratory-based science. Early quantitative psychologists such as Helmholtz (1856/1962), Fechner (1860 [1964]), and Wundt (1896 [1902]) sought quantitative answers to questions like: By how much must two physical stimuli differ for the experiences of them to differ noticeably? How weak a stimulus can still be consciously perceived? What is the mathematical relationship between stimulus intensity and the intensity of the resulting sensation? (The Weber-Fechner law holds that the relationship is logarithmic.) Along what dimensions, exactly, can sense experience vary? (The “color solid” [see the link to the Munsell solid in Other Internet Resources, below], for example, characterizes color experience by appeal to just three dimensions of variation: hue, saturation, and lightness or brightness.) Although from very early on, psychologists also employed non-introspective methods (e.g., performance on memory tests, reaction times), most early characterizations of the field stood introspection at the center. James, for example, wrote that “introspective observation is what we have to rely on first and foremost and always” (1890 [1981, 185]).
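For readers unfamiliar with it, the Weber-Fechner law can be stated compactly. What follows is a standard modern formulation, not notation used in this entry; the symbols are conventional:

```latex
% Weber's empirical observation: the just-noticeable difference
% \Delta I in a stimulus is a constant fraction of its intensity I.
\frac{\Delta I}{I} = k_w \qquad \text{(Weber's law)}

% Fechner's extrapolation from Weber's law: sensed intensity S
% grows logarithmically with stimulus intensity, where I_0 is the
% absolute threshold (the weakest stimulus that can be perceived).
S = k \,\ln\!\left(\frac{I}{I_0}\right) \qquad \text{(Fechner's law)}
```

On this formulation, equal ratios of stimulus intensity correspond to equal increments of sensation, which is why, for instance, doubling the physical intensity of a light or sound does not double its experienced intensity.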

In contrast with the dominant philosophical tradition that has, since Descartes, stressed the special privilege or at least high accuracy of introspective judgments about consciousness (see Section 4.1 below), many early introspective psychologists held that the introspection of currently ongoing or recently past conscious experience is difficult and prone to error if the introspective observer is insufficiently trained. Wundt, for example, reportedly did not credit the introspective reports of people with fewer than 50,000 trials of practice in observing their conscious experience (Boring 1953). Titchener, a leading American introspective psychologist, wrote a 1600-page introspective training manual for students, arguing that introspective observation is at least as difficult as observation in the physical sciences (Titchener 1901–1905; see also Wundt 1874 [1908]; Müller 1904; for recent discussions of introspective training see Varela 1996; Vermersch 1999; Nahmias 2002; Schwitzgebel 2011b). This difference in optimism about untrained introspection may partly reflect differences in the types of judgments foregrounded in the two disciplines. Philosophers stressing privilege tend to focus on coarse and (seemingly) simple judgments such as “I’m having a visual experience of redness” or “I believe it’s raining”. The projects of interest to introspective psychologists often required much finer judgments—such as determining with mathematical precision whether one visual sensation has twice the “intensity” of another or determining along what dimensions emotional experience can vary.

3.2 Early Skepticism about Introspective Observation

Early introspective psychologists’ theoretical discussions of the nature of introspection were often framed in reaction to skepticism about the scientific viability of introspection, especially the concern that the introspective act interferes with or destroys the mental state or process that is its target.[2] The most influential formulation of this concern was Comte’s:

But as for observing in the same way intellectual phenomena at the time of their actual presence, that is a manifest impossibility. The thinker cannot divide himself into two, of whom one reasons whilst the other observes him reason. The organ observed and the organ observing being, in this case, identical, how could observation take place? This pretended psychological method is then radically null and void (1830, using the translation of James 1890 [1981, 188]).

Introspective psychologists tended to react to this concern in one of three ways. The most concessive approach—recommended, for example, by James (1890 [1981]; see also Mill 1865 [1961]; Lyons 1986)—was to grant Comte’s point for concurrent introspection, that is, introspection simultaneous with the target state or process, and to emphasize in contrast immediate retrospection, that is, reflecting on or attending to the target process (usually a conscious experience) very shortly after it occurs. Since the scientific observation occurs only after the target process is complete, it does not interfere with that process; but of course the delay between the process and the observation must be as brief as possible to ensure that the process is accurately remembered.

Brentano (1874 [1973]) responded to Comte’s concern by distinguishing between “inner observation” [innere Beobachtung] and “inner perception” [innere Wahrnehmung]. Observation, as Brentano characterizes it, involves dedicating full attention to a phenomenon, with the aim of apprehending it accurately. This dedication of attention necessarily interferes with the process to be observed if the process is a mental one; therefore, he says, inner observation is problematic as a scientific psychological method. Inner perception, in contrast, according to Brentano, does not involve attention to our mental lives and thus does not objectionably disturb them. While our “attention is turned toward a different object … we are able to perceive, incidentally, the mental processes which are directed toward that object” (1874 [1973, 30]). Brentano concedes that inner perception necessarily lacks the advantages of attentive observation, so he recommends conjoining it with retrospective methods.

Wundt (1888) agrees with Comte and Brentano that observation necessarily involves attention and so often interferes with the process to be observed, if that process is an inner, psychological one. To a much greater extent than Brentano, however, Wundt emphasizes the importance to scientific psychology of direct attention to experience, including planful and controlled variation. The psychological method of “inner perception” is, for Wundt, the method of holding and attentively manipulating a memory image or reproduction of a past psychological process. Although Wundt sees some value in this retrospective method, he thinks it has two crucial shortcomings: First, one can only work with what one remembers of the process in question—the manipulation of a memory-image cannot discover new elements. And second, foreign elements may be unintentionally introduced through association—one might confuse one’s memory of a process with one’s memory of another associated process or object.

Therefore, Wundt suggests, the science of psychology must depend upon perception or observation of mental processes as they occur. It is too pessimistic to think that the target mental processes are necessarily distorted when a well-trained scientist is performing the introspective task. A subclass of mental processes remains relatively unperturbed by introspection—the “simpler” mental processes, especially perception (1896/1902, 27–28). The experience of seeing red, Wundt claims, is more or less the same whether or not one is introspecting the psychological fact that one is experiencing redness. Wundt also suggests that the basic processes of memory, feeling, and volition can be systematically introspected without excessive disruption. These alone, he thinks, can be studied by introspective psychology (see also Wundt 1874 [1904]; 1896 [1902]; 1907; for a detailed treatment of the history of Comte’s objection and the distinction between self-observation and inner perception, see Spener 2024). Other aspects of our psychology must be approached through non-introspective methods such as the observation of language, mythology, culture, and human and animal development.

3.3 The Decline of Scientific Introspection

Although introspective psychologists were able to build scientific consensus on some issues concerning sense experience—issues such as the limits of sensory perception in various modalities and some of the contours of variation in sensory experience—by the early 20th century it was becoming clear that on many issues consensus was elusive. The most famous dispute concerned the existence of “imageless thought” (see Humphrey 1951; Kusch 1999); but other topics proved similarly resistant, such as the structure of emotion or “feeling” (James 1890 [1981]; Külpe 1893 [1895]; Wundt 1896 [1902]; Titchener 1908 [1973]) and the experiential changes brought about by shifts in attention (Wundt 1896 [1902]; Pillsbury 1908; Titchener 1908 [1973]; Chapman 1933).

By the 1910s, behaviorism (which focused simply on the relationship between outward stimuli and behavioral response) had declared war on introspective psychology, portraying it as bogged down in irresolvable disputes between differing introspective “experts”, and also rebuking the introspectivists’ passive taxonomizing of experience, recommending that psychology focus instead on socially useful paradigms for modifying behavior (e.g., Watson 1913). In the 1920s and 1930s, introspective studies were increasingly marginalized. Although strict behaviorism declined in the 1960s and 1970s, its main replacement, cognitivist functionalism (which treats functionally defined internal cognitive processes as central to psychological inquiry), generally continued to share behaviorism’s disdain for introspective methods.

Psychophysics (the study of the relationship between physical sensory input and consequent psychological state or response), where the introspective psychologists had found their greatest success, underwent a subtle shift in this period from a focus on subjective methods—methods that involve asking subjects to report on their experiences or percepts—to a focus on objective methods such as asking participants to report on states of the outside world, including insisting that participants guess even when they feel they don’t know or have no relevant conscious experience (especially with the rise of signal detection theory in psychophysics: Green and Swets 1966; Cheesman and Merikle 1986; Macmillan and Creelman 1991; Merikle, Smilek, and Eastwood 2001). Perhaps in accord with transparency views of introspection (Section 2.3.4 above), the two types of instruction seem very similar (compare the subjective “tell me if you visually experience a flash of light” with the objective “tell me if the light flashes”). On the other hand, perhaps in tension with transparency views, subjective and objective instructions seem sometimes to differ importantly, especially in cases of known illusion, Gestalt effects such as perceived grouping, stimuli near the limits of perceivability, and the experience of ambiguous figures (Boring 1921; Merikle, Smilek, and Eastwood 2001; Siewert 2004).

3.4 The Re-Emergence of Scientific Introspection

Introspective methods were never entirely abandoned by psychologists, and in the last few decades, they have made something of a comeback, especially with the rise of the interdisciplinary field of “consciousness studies” (see, e.g., Jack and Roepstorff, eds., 2003, 2004). Ericsson and Simon (1984/1993; to be discussed further in Section 4.2.3 below) have advocated the use of “think-aloud protocols” and immediately retrospective reports in the study of problem solving. Other researchers have emphasized introspective methods in the study of imagery (Marks 1985; Kosslyn, Reisberg, and Behrmann 2006) and emotion (Lambie and Marcel 2002; Barrett et al. 2007; LeDoux and Brown 2017).

Beeper methodologies have been developed to facilitate immediate retrospection, especially by Hurlburt (1990, 2011; Hurlburt and Heavey 2006; Hurlburt and Schwitzgebel 2007) and Csikszentmihalyi (Larson and Csikszentmihalyi 1983; Csikszentmihalyi 2014). Traditional immediately retrospective methods required the introspective observer in the laboratory somehow to intentionally refrain from introspecting the target experience as it occurs, arguably a difficult task. Hurlburt and Csikszentmihalyi, in contrast, give participants beepers to wear during ordinary, everyday activity. The beepers are timed to sound only at long intervals, surprising participants and triggering an immediately retrospective assessment of their “inner experience”, emotion, or thoughts in the moment before the beep.

Introspective or subjective reports of conscious experience have also played an important role in the search for the “neural correlates of consciousness” (as reviewed in Rees and Frith 2007; Prinz 2012; Koch et al. 2016; see also Varela 1996). One paradigm is for researchers to present ambiguous sensory stimuli, holding them constant over an extended period, noting what neural changes correlate with changes in subjective reports of experience. For example, in “binocular rivalry” methods, two different images (e.g., a face and a house) are presented, one to each eye. Participants typically say that only one image is visible at a time, with the visible image switching every few seconds. Researchers have sometimes reported finding evidence that activity in “early” visual areas (such as V1) is not temporally coupled with reported changes in visual experience, while changes in conscious percept are better temporally coupled with activity in parietal and maybe also frontal areas further downstream and with large-scale changes in neural synchronization or oscillation; however, the evidence is disputed (Lumer, Friston, and Rees 1998; Polonsky et al. 2000; Tong, Meng, and Blake 2006; Frässle et al. 2014; Tsuchiya et al. 2015; Brascamp et al. 2018; Block 2020; Hesse and Tsao 2020; Bock et al. 2023).

Another version of the ambiguous sensory stimuli paradigm involves presenting the participant with an ambiguous figure such as the Rubin faces-vase figure.

Using this paradigm, researchers have found neuronal changes both in early visual areas and in later areas, as well as changes in widespread neuronal synchrony, that correspond temporally with subjective reports of flipping between one way and another of seeing the ambiguous figure (Kleinschmidt et al. 1998; Rodriguez et al. 1999; Parkkonen et al. 2008; de Graaf et al. 2011; Megumi et al. 2015; Brascamp et al. 2018; Zhu, Hardstone, and Le 2022).

In masking paradigms, stimuli are briefly presented and then followed by a “mask”. On some trials, participants report seeing the stimuli, while on others they don’t. In trials in which the participant reports that the stimulus was visually experienced, researchers have tended to find higher levels of activity through at least some of the downstream visual pathways, spontaneous electrical oscillations near 40 Hz, and a negative amplitude EEG response in “early” posterior brain regions about 200 ms after the stimulus (Dehaene et al. 2001; Summerfield, Jack, and Burgess 2002; Del Cul, Baillet, and Dehaene 2007; Quiroga et al. 2008; Salti et al. 2015; Förster, Koivisto, and Revonsuo 2020). However, it remains contentious how properly to interpret such attempts to find neural correlates of consciousness (Noë and Thompson 2004; Dehaene and Changeux 2011; Aru et al. 2012; de Graaf, Hsieh, and Sack 2012; Koch et al. 2016; Phillips 2018; Fink, Kob, and Lyre 2021; Andersen et al. 2022).

If we report our attitudes by introspecting upon them, then much of survey research is also introspective, though psychologists have not generally explicitly described it as such. As with subjective vs. objective methods in psychophysics, there appears to be only a slight difference between subjectively phrased questions (“Do you approve of the President’s handling of the war?”, “Do you think marijuana should be legalized?”) and objectively phrased questions (“Has the President handled the war well?”, “Should marijuana be legalized?”). This would seem to support the observation at the core of transparency theory (discussed in Section 2.3.4 above) that questions about the mind and questions about the outside world often call for the same type of reflection.

4. The Accuracy of Introspection

4.1 Varieties of Privilege

It’s plausible to suppose that people have some sort of privileged access to at least some of their own mental states or processes: You know about your own mind, or at least some aspects of it, in a different way and better than you know about other people’s minds, and maybe also in a different way and better than you know about the outside world. Consider pain. It seems you know your own pains differently and better than you know mine, differently and (perhaps) better than you know about the coffee cup in your hand. If so, perhaps that special “first-person” privileged knowledge arises through something like introspection, in one or more of the senses described in Section 2 above.

Just as there is a diversity of methods for acquiring knowledge of or reaching judgments about one’s own mental states and processes, to which the label “introspection” applies with more or less or disputable accuracy, so also is there a diversity of forms of “privileged access”, with different kinds of privilege and to which the idea of access applies with more or less or disputable accuracy. And as one might expect, the different introspective methods do not all align equally well with the different varieties of privilege.

4.1.1 Varieties of Perfection: Infallibility, Indubitability, Incorrigibility, and Self-Intimation

A substantial philosophical tradition, going back at least to Descartes (1637/1985; 1641/1984; also Augustine c. 420 C.E./1998), ascribes a kind of epistemic perfection to at least some of our judgments (or thoughts or beliefs or knowledge) about our own minds—infallibility, indubitability, incorrigibility, or self-intimation. Consider the judgment (thought, belief, etc.) that P, where P is a proposition self-ascribing a mental state or process (for example P might be I am in pain, or I believe that it is snowing, or I am thinking of a dachshund). The judgment that P is infallible just in case, if I make that judgment, it is not possible that P is false. It is indubitable just in case, if I make the judgment, it is not possible for me to doubt the truth of P. It is incorrigible just in case, if I make the judgment, it is not possible for anyone else to show that P is false. And it is self-intimating if it is not possible for P to be true without my reaching the judgment (thought, belief, etc.) that it is true. Note that the direction of implication for the last of these is the reverse of the first three. Infallibility, indubitability, and incorrigibility all have the form: “If I judge (think, believe, etc.) that P, then …”, while self-intimation has the form “If P, then I judge (think, believe, etc.) that P”. All four theses also admit of weakening by adding conditions to the antecedent “if” clause (e.g., “If I judge that P as a result of normal introspective processes, then …”). (See Alston 1971 for a helpful dissection of these distinctions. Also note that some philosophers [e.g., Ayer 1936/1946; Armstrong 1963; Chalmers 2003; Tye 2009] use “incorrigibility” to mean infallibility as defined here, while others [e.g., Ayer 1963; Alston 1971; Rorty 1970; Dennett 2000] use it with the more etymologically specific meaning of [something like] “incapable of correction”.)
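The four theses and their directions of implication can be displayed schematically. This is one natural formal rendering, not notation from the tradition itself: J(P) abbreviates "I judge that P", D(P) "I doubt that P", S(¬P) "someone else shows that P is false", and the diamond "it is possible that":

```latex
\begin{align*}
\textit{Infallibility:}   &\quad J(P) \rightarrow \neg\Diamond\,\neg P\\
\textit{Indubitability:}  &\quad J(P) \rightarrow \neg\Diamond\, D(P)\\
\textit{Incorrigibility:} &\quad J(P) \rightarrow \neg\Diamond\, S(\neg P)\\
\textit{Self-intimation:} &\quad P \rightarrow J(P)
\end{align*}
```

The reversed position of P in the last line is the reversal of implication noted in the text; the weakened variants add further conjuncts to the antecedents (e.g., "J(P) as a result of normal introspective processes").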

Descartes (1641/1984) famously endorsed the indubitability of “I think”, which he extends also to such mental states as doubting, understanding, affirming, and seeming to have sensory perceptions. He also appears to claim that the thought or affirmation that I am in such states is infallibly true, at least if that thought is clear and distinct. He was followed in this—especially in his infallibilism—by Locke (1690 [1975]), Hume (1739 [1978]), twentieth-century thinkers such as Husserl (1913 [1982]), Ayer (1936 [1946], 1963), Lewis (1946), the early Shoemaker (1963), and many others. Historical arguments for indubitability and infallibility have tended to center on intuitive appeals to the apparent impossibility of doubting or going wrong about such matters as whether one is having a thought with a certain content or is experiencing pain or having a visual experience as of seeing red.

Recent infallibilists have added to this intuitive appeal structural arguments based on self-fulfillment accounts of introspection or self-knowledge (see Section 2.3.1 above)—generally while also narrowing the scope of infallibility, for example to thoughts about thoughts (Burge 1988, 1996), or to “pure” phenomenal judgments about consciousness (Chalmers 2003; see also Wright 1998; Gertler 2001; Horgan, Tienson, and Graham 2006; Horgan and Kriegel 2007; Tye 2009; with important predecessors in Brentano 1874 [1973]; Husserl 1913 [1982]), or to beliefs as “commitments” (Coliva 2016). The intuitive idea behind most of these structural arguments is that somehow the self-ascriptive thought or judgment contains the mental state or process self-ascribed: the thought that I am thinking of a pink elephant contains the thought of a pink elephant; the judgment that I am having a visual experience of redness contains the red experience itself.

In contrast, self/other parity (Section 2.1) and self-detection (Section 2.2) accounts of introspection or self-knowledge appear to stand in tension with infallibilism. If introspection or self-knowledge involves a causal process from a mental state to an ontologically distinct self-ascription of that state, it appears that, however reliable such a process may generally be, there is inevitably room in principle for interference and error. Minimally, it seems, stroke, quantum accident, or clever neurosurgery could break otherwise generally reliable relationships between target mental states and the self-ascriptions of those states. Similar considerations apply to self-shaping (Section 2.3.2) and expressivist (Section 2.3.3) accounts, to the extent that these are interpreted causally rather than constitutively.

Introspective incorrigibility, as opposed to either infallibility or indubitability, was held by Rorty (1970) to be “the mark of the mental”—and thus as applying to a wide range of mental states. Dennett (2000, 2002) defends a similar view for conscious experiences. The idea behind incorrigibility, recall, is that no one else could show your self-ascriptions to be false; or we might say, more qualifiedly and a bit differently, that if you arrive at the right kind of self-ascriptive judgment (perhaps an introspectively based judgment about a currently ongoing conscious process that survives critical reflection), then no one else, perhaps not even you in the future, aware of this, can rationally hold that judgment to be mistaken. If I judge that right now I am in severe pain, and I do so as a result of considering introspectively whether I am indeed in such pain (as opposed to, say, merely inferring that I am in pain based on outward behavior), and if I pause to think carefully about whether I really am in pain and conclude that I indeed am, then no one else who is aware of this can rationally believe that I’m not in pain, regardless of what my outward behavior might be (say, calm and relaxed) or what shows up in the course of brain imaging (say, no activation in brain centers normally associated with pain).

Incorrigibility does not imply infallibility: I may not actually be in pain, even if no one could show that I’m not. Consequently, incorrigibility is compatible with a broader array of sources of self-knowledge than is infallibility. Neither Rorty nor Dennett, for example, appears to defend incorrigibility by appeal to self-fulfillment accounts of introspection (though in both cases, interpreting their positive accounts is difficult). Causal accounts of self-knowledge may be compatible with incorrigibility if the causal connections underwriting the incorrigible judgments are vastly more trustworthy than judgments obtained without the benefit of this sort of privileged access. Of course, unless one embraces a strict self-fulfillment account, with its attendant infallibilism, one will want to rule out abnormal cases such as quantum accident; hence the need for qualifications.

Self-intimating mental states are those such that, if a person (or at least a person with the right background capacities) has them, they necessarily believe or judge or know that they do. Conscious states are often held to be in some sense self-intimating, in that the mere having of them involves, requires, or implies some sort of representation or awareness of those states. Brentano argues that consciousness, for example, of an outward stimulus like a sound, “clearly occurs together with consciousness of this consciousness”, that is, the consciousness is “of the whole mental act in which the sound is presented and in which the consciousness itself exists concomitantly” (1874 [1995, 129]; see also phenomenological approaches to self-consciousness). “Higher order” and “same order” theories of consciousness (Armstrong 1968; Rosenthal 1990, 2005; Gennaro 1996; Lycan 1996; Carruthers 2005; Kriegel 2009; Montague 2016; Lau and Brown 2019; see also higher-order theories of consciousness) explain consciousness in terms of some thought, perception, or representation of the mental state that is conscious—the presence of that thought, perception, or representation being what makes the target state conscious. (On same order theories, the target mental state, or an aspect of it, represents itself, with no need for a distinct higher order state.) Thus, Horgan, Kriegel, and others have described consciousness as “self-presenting” (Horgan, Tienson, and Graham 2005; Horgan and Kriegel 2007). Shoemaker (1995, 2012) argues that beliefs—as long as they are “available” (i.e., readily deployed in inference, assent, practical reasoning, etc.), which needn’t require that they are occurrently conscious—are self-intimating for individuals with sufficient cognitive capacity. Shoemaker’s idea is that if the belief that P is available in the relevant sense, then one is disposed to do things like say “I believe P”, and such dispositions are themselves constitutive of believing that one believes that P.

Self-intimation claims (unlike infallibility, indubitability, and incorrigibility claims) are not usually cast as claims about “introspection”. This may be because knowledge acquired through self-intimation would appear to be constant and automatic, thus violating the effort condition on introspection (condition 6 in Section 1.1 above).

4.1.2 Weaker Guarantees

A number of philosophers have argued for forms of first-person privilege involving some sort of epistemic guarantee—not just conditional accuracy as a matter of empirical fact, but something more robust than that—without embracing infallibility, indubitability, incorrigibility, or self-intimation in the senses described in Section 4.1.1 above.

Shoemaker (1968), for example, argues that self-knowledge of certain psychological facts such as “I am waving my arm” or “I see a canary”, when arrived at “in the ordinary way (without the aid of mirrors, etc.)”, is immune to error through misidentification relative to the first-person pronoun (see also Campbell 1999; Pryor 1999; Bar-On 2004; Hamilton 2008; Langland-Hassan 2015). That is, although one may be wrong about waving one’s arm (perhaps the nerves to your arm were recently severed unbeknownst to you) or about seeing a canary (perhaps it’s a goldfinch), one cannot be wrong due to mistakenly identifying the person waving the arm or seeing the canary as you, when in fact it is someone else. This immunity arises, Shoemaker argues, because there is no need for identification in the first place, and thus no opportunity for mis-identification. In this respect, Shoemaker argues, knowledge that a particular arm that is moving is your arm (not immune to misidentification since maybe it’s someone else’s arm, misidentified in the mirror) is different from the knowledge that you are moving your arm—knowledge, that is, of what Searle (1983) calls an “intention in action”.

Shoemaker has also argued for the conceptual impossibility of introspective self-blindness with respect to one’s beliefs, desires, and intentions, and, for somewhat different reasons, one’s pains (1988, 1994b). A self-blind creature, by Shoemaker’s definition, would be a rational creature with a conception of the relevant mental states, and who can entertain the thought that they have this or that belief, desire, intention, or pain, but who nonetheless utterly lacks introspective access to the type of mental state in question. A hypothetical self-blind creature could still gain “third person” knowledge of the mental states in question, through observing their own behavior, reading textbooks, and the like. (Thus, strict self/other parity accounts of self-knowledge of the sort described in Section 2.1 are accounts according to which one is self-blind in Shoemaker’s sense.) Shoemaker’s case against self-blindness with respect to belief turns on the dilemma of whether the self-blind creature can avoid “Moore-paradoxical” sentences (see Moore 1942, 1944 [1993]; Shoemaker 1995) like “it’s raining but I don’t believe that it’s raining”, in which the subject asserts both P and that they don’t believe that P. If the subject is truly self-blind, Shoemaker suggests, there should be cases in which their best evidence is both that P and that they don’t believe that P (the latter, perhaps, based on misleading facts about their behavior). But if the subject asserts “P but I don’t believe that P” in such cases, they do not (contra the initial supposition) really have a rational command of the nature of belief and assertion. Alternatively, if they can reliably avoid such Moore-paradoxical sentences, self-attributing belief in an apparently normal way, it seems that they are indistinguishable from normal people in thought and behavior and hence not self-blind. Shoemaker develops similar anti-self-blindness arguments for desire, intention, and pain.
Shoemaker uses his case against self-blindness as part of his argument against self-detection accounts of introspection (described in Section 2.2 above): If introspection were a matter of detecting the presence of states that exist independently of the introspective judgment or belief, then it ought to be possible for the faculty enabling the detection to break down entirely, as in the case of blindness, deafness, etc., in outward perception (see also Nichols and Stich 2003, who argue that schizophrenia provides such a case).

Transcendental arguments for the accuracy of certain sorts of self-knowledge offer a different sort of epistemic guarantee—“transcendental arguments” being arguments that assume the existence of some sort of experience or capacity, then develop insights about the background conditions necessary for that experience or capacity, and finally conclude that those background conditions must in fact be met. Burge (1996; see also Shoemaker 1988) argues that to be capable of “critical reasoning” one must be able to recognize one’s own attitudes, knowledgeably evaluating, identifying, and reviewing one’s beliefs, desires, commitments, suppositions, etc., where these mental states are known to be the states they are. Since we are (by assumption, for the sake of transcendental argument) capable of critical reasoning, we must have some knowledge of our attitudes. Bilgrami (2006) argues that we can only be held responsible for actions if we know the beliefs and desires that “rationalize” our actions; since we can (by assumption) sometimes be held responsible, we must sometimes know our beliefs and desires. Wright (1989) argues that the “language game” of ascribing “intentional states” such as belief and desire to oneself and others requires as a background condition that self-ascriptions have special authority within that game. Given that we successfully play this language game, we must indeed have the special authority that we assume and others grant us in the context of the game.

4.1.3 Privilege Without Guarantee

Developing an analogy from Wright (1998), if it’s your turn with the kaleidoscope, you have a type of privileged perspective on the shapes and colors it presents. If someone else in the room wants to know what color dominates, for example, the most straightforward course would be to ask you. But this type of privileged access comes with no guarantee. At least in principle, you might be quite wrong about the tumbling shapes. You might be dazzled by afterimages, or momentarily confused, or hallucinating, or (unbeknownst to you) colorblind. (Yes, people often don’t know they are colorblind, a point stressed by Kornblith 1998.) It is also at least in principle possible that others may know better than you, perhaps even systematically so, what is transpiring in the kaleidoscope. You might think the figure shows octagonal symmetry, but the rest of us, familiar with the kaleidoscope’s design, might know that the symmetry is hexagonal. A brilliant engineer may invent a kaleidoscope state detector that can dependably reveal from outside the shape, color, and position of the tumbling chunks.

Wright raises this analogy to suggest that people’s privilege with respect to certain aspects of their mental lives must be different from that of the person with the kaleidoscope; but other philosophers, especially those who embrace self-detection accounts of introspection, should find the analogy at least somewhat apt: Introspective privilege is akin to the privilege of having a unique and advantageous sensory perspective on something. Metaphorically speaking, we are the only ones who can gaze directly at our attitudes or our stream of experience, while others must rely on us or on outward signs. Less metaphorically, in generating introspective judgments (or beliefs or knowledge) about one’s own mentality one employs a detection process available to no one else. It is then an empirical question how accurate the deliverances of this process are; but on the assumption that the deliverances are in a broad range of conditions at least somewhat accurate and more accurate than the typical judgments other people make about those same aspects of your mind, you have a “privileged” perspective. Typically, advocates of self-detection models of introspection regard the mechanism or cognitive process generating introspective judgments or beliefs as reliable in roughly this way, but not infallible, and not immune to correction by other people (Armstrong 1968; Hill 1981, 2009; Lycan 1996; Nichols and Stich 2003; Goldman 2000, 2006; Morales forthcoming).

4.2 Empirical Evidence on the Accuracy of Introspection

The arguments of the previous section are a priori in at least the broad sense of that term (the psychologists’ sense): They depend on general conceptual considerations and armchair folk psychology rather than on empirical research. To these might be added the argument, due to Boghossian (1989), that “externalism” about the content of our attitudes (the view that our attitudes depend constitutively not just on what is going on internally but also on facts about our environment; Putnam 1975; Burge 1979) seems to problematize introspective self-knowledge of those attitudes. This issue will not be treated here, since it is amply covered in the entries on externalism about mental content and externalism and self-knowledge.

Now we turn to empirical research on our self-knowledge of those aspects of our minds often thought to be accessible to introspection. Since character traits are not generally regarded as introspectible aspects of our mentality, we’ll skip the large literature on the accuracy or inaccuracy of our judgments about them (e.g., Taylor and Brown 1988; Funder 1999; Vazire 2010; see also Haybron’s 2008 skeptical perspective on our knowledge of how happy we are); nor will we discuss self-knowledge of subpersonal, nonconscious mental processes, such as the processes underlying visual recognition of color and shape.

As a general matter, while a priori accounts of the epistemology of introspection have tended to stress its privilege and accuracy, empirical accounts have tended to stress its failures.

4.2.1 Of the Causes of Attitudes and Behavior

Perhaps the most famous argument in the psychological literature on introspection and self-knowledge is Nisbett and Wilson’s argument that we have remarkably poor knowledge of the causes of, and processes underlying, our behavior and attitudes (Nisbett and Wilson 1977; Nisbett and Ross 1980; Wilson 2002). Section 2.1 above briefly mentioned their emblematic finding that people in a shopping mall were often ignorant of a major factor—position—influencing their judgments about the quality of pairs of stockings. In Nisbett and Bellows (1977), also briefly mentioned above, participants were asked to assess the influence of various factors on their judgments about features of a supposed job applicant. As in Nisbett and Wilson’s stocking study, participants denied the influence of some factors that were in fact influential; for example, they denied that the information that they would meet the applicant influenced their judgments about the applicant’s flexibility. (It actually had a major influence, as assessed by comparing the judgments of participants who were told and not told that they would meet the applicant.) Participants also attributed influence to factors that were not in fact influential; for example, they falsely reported that the information that the applicant accidentally knocked over a cup of coffee during the interview influenced “how sympathetic the person seems” to them. Nisbett and Bellows found that ordinary observers’ hypothetical ratings of the influence of the various factors on the various judgments closely paralleled the participants’ own ratings of the factors influencing them—a finding used by Nisbett to argue that people have no special access to causal influences on their judgments and instead rely on the same sorts of theoretical considerations outside observers rely on (the self/other parity view described in Section 2.1).
Despite some objections and methodological concerns (e.g., Newell and Shanks 2014), both psychologists and philosophers now tend to accept Nisbett and Wilson’s view that any first-person advantage in assessing the factors influencing our judgments and behavior is more modest than non-psychologists generally assume.

In a series of experiments, Gazzaniga (1995) presented commissurotomy patients (people with a severed corpus callosum) with different visual stimuli to each hemisphere of the brain. With cross-hemispheric communication severely impaired due to the commissurotomy, the left hemisphere, controlling speech, had information about one part of the visual stimulus, while the right hemisphere, controlling some aspects of movement (especially of the left hand), had information about a different part. Gazzaniga reported finding that when these “split brain” patients were asked to explain why they did something, when that action was clearly caused by input to the right, non-verbal hemisphere, the left hemisphere would sometimes fluently confabulate an explanation. For example, Gazzaniga reports presenting an instruction like “laugh” to the right hemisphere, making the patient laugh. When asked why he laughed, the patient would say something like “You guys come up and test us every month. What a way to make a living!” (1393). When a chicken claw was shown to the left hemisphere and a snow scene to the right, and the patient was asked to select an appropriate picture from an array, the right hand would point to a chicken and the left hand to a snow shovel; when asked why they selected those two things, the patient would say something like “Oh, that’s simple. The chicken claw goes with the chicken and you need a shovel to clean out the chicken shed” (ibid.; for a detailed discussion of such cases, see Schechter 2018). Similar confabulation about motives is sometimes (but not always) seen in people whose behavior is, unbeknownst to them, driven by post-hypnotic suggestion (Richet 1884; Moll 1889 [1911]), and in disorders such as hemineglect (anosognosia), blindness denial (Anton’s syndrome), and Korsakoff’s syndrome (Hirstein 2005).

In a normal population, Johansson and collaborators (Johansson et al. 2005; Johansson et al. 2006) manually displayed to participants pairs of pictures of women’s faces. On each trial, the participant was to point to the face they found more attractive. The picture of that face was then centered before the participant while the other face was hidden. On some trials, participants were asked to explain the reasons for their choices while continuing to look at the selected face. On a few key trials, the experimenters used sleight-of-hand to present to the participant the face that was not selected as though it had been the face selected. Strikingly, the switch was noticed only 28% of the time. What’s more, when the change was not detected, participants actually gave explanations for their choice that appealed to specific features of the unselected face that were not possessed by the selected face 13% of the time. For example, one participant claimed to have chosen the face before him “because I love blondes” when in fact he had chosen a dark-haired face (Johansson et al. 2006, 690). Johansson and colleagues failed to find any systematic differences in the explanations of choice between the manipulated and non-manipulated trials, using a wide variety of measures. They found, for example, no difference in linguistic markers of confidence (including pauses in speech), emotionality, specificity of detail, complexity or length of description, or general position in semantic space. These results, like Nisbett and Wilson’s, suggest that at least some of the time when people think they are explaining the bases of their decisions, they are instead merely theorizing or confabulating.

The literature on “cognitive dissonance” is replete with cases in which participants’ attitudes appear to change for reasons they do, or would, deny. According to cognitive dissonance theory, when people behave or appear to behave counternormatively (e.g., incompetently, foolishly, immorally), they will tend to adjust their attitudes so as to make the behavior seem less counternormative or “dissonant” (Festinger 1957; Cooper and Fazio 1984; Stone and Cooper 2001; Harmon-Jones 2019). For example, people induced to falsely describe as enjoyable a monotonous task they’ve just completed will tend, later, to report having a more positive attitude toward the task than those not induced to lie (though much less so if they were handsomely paid to lie, in which case the behavior is not clearly counternormative; Festinger and Carlsmith 1959). Presumably, if such attitude changes were known to the person, they would generally fail to have their dissonance-reducing effect. Research psychologists have also confirmed such familiar phenomena as “sour grapes” (Elster 1983/2016; Lyubomirsky and Ross 1999; Kay, Jimenez, and Jost 2002) and “self-deception” (Mele 2001), which presumably also involve ignorance of the factors driving the relevant judgments and actions (for a general review of these and other sources of distortion, see Pronin 2009). And of course the Freudian psychoanalytic tradition has also long held that people often have only poor knowledge of their motives and the influences on their attitudes (Wollheim 1981; Cavell 2006).

In light of this empirical research, no major philosopher now holds (perhaps no major philosopher ever held) that we have infallible, indubitable, incorrigible, or self-intimating knowledge of the causes of our judgments, decisions, and behavior. Perhaps weaker forms of privilege also come under threat. But the question arises: Whatever failures there may be in assessing the causes of our attitudes and behavior, are those failures failures of introspection, properly construed? Psychologists tend to cast these results as failures of “introspection”, but if it turns out that a very different and more trustworthy process underwrites our knowledge of some other aspects of our minds—such as what our present attitudes are (however caused) or our currently ongoing or recently past conscious experience—then perhaps we can call only that process introspection, thereby retaining some robust form of introspective privilege while acceding to the psychological consensus regarding (what we would now call non-introspective) first-person knowledge of causes. Indeed, few contemporary philosophical accounts of introspection or privileged self-knowledge highlight, as the primary locus of privilege, the causes of our attitudes and behavior (Bilgrami 2006 is a notable exception). Thus, the literature reviewed in this section can be interpreted as suggesting that the causes of our behavior are not, after all, the sorts of things to which we have introspective access.

4.2.2 Of Attitudes

Research psychologists have generally not been as skeptical of our knowledge of our attitudes as they have been of our knowledge of the causes of our attitudes (Section 4.2.1 above). In fact, many of the same experiments that purport to show inaccurate knowledge of the causes of our attitudes nonetheless rely unguardedly on self-report for assessment of the attitudes themselves—a feature of those experiments criticized by Bem (1967). Attitudinal surveys in psychology and social science generally rely on participants’ self-report as the principal source of evidence about attitudes (de Vaus 1985/2014; Nardi 2002/2018). However, as in the case of motives and causes, there’s a long tradition in clinical psychology skeptical of our self-knowledge of our attitudes.

A key challenge in assessing the accuracy of people’s beliefs or judgments about their attitudes is the difficulty of accurately measuring attitudes independently of self-report. There is at present no tractable measure of attitude that is generally seen by philosophers as overriding individuals’ own reports about their attitudes. However, in the psychological literature, “implicit” measures of attitudes—measures of attitudes that do not rely on self-report—have recently gained considerable attention (see Wittenbrink and Schwarz, eds., 2007; Petty, Fazio, and Briñol, eds., 2009; Gawronski, De Houwer, and Sherman 2020). Such measures are sometimes thought capable of revealing unconscious attitudes or implicit attitudes either unavailable to introspection or erroneously introspected (Wilson, Lindsey, and Schooler 2000; Lane et al. 2007; though see Hahn et al. 2014; Brownstein, Madva, and Gawronski 2019).

Research on implicit attitude measures originated to a substantial extent with attempts to measure racism in North America and Europe, in accord with the view that racist attitudes, though common, are considered socially undesirable and therefore often not self-ascribed even when present. For example, Campbell, Kruskal, and Wallace (1966) explored the use of seating distance as an index of racial attitudes, noting that racially Black and White students tended to aggregate in classroom seating arrangements. Using facial electromyography (EMG), Vanman et al. (1997) found White participants to display facial responses indicative of negative affect more frequently when asked to imagine co-operative activity with Black than with White partners—results interpreted as indicative of racist attitudes. Cunningham et al. (2004) showed White and Black faces to White participants while participants were undergoing fMRI brain imaging. They found less amygdala activation when participants looked at faces from their own group than when participants looked at other faces; and since amygdala activation is generally associated with negative emotion, they interpreted this tendency as suggesting a negative attitude toward outgroup members (see also Hart et al. 1990; and for discussion Ito and Cacioppo 2007).

Much of the recent implicit attitude research has focused on response priming and interference in speeded tasks. In priming research, a stimulus (the “prime”) is briefly displayed, followed by a mask that hides it, and then a second stimulus (the “target”) is displayed. The participant’s task is to respond as swiftly as possible to the target, typically with a classification judgment. In evaluative priming, for example, the participant is primed with a positively or negatively valenced word or picture (e.g., snake), then asked to make a swift judgment about whether the subsequently presented target word (e.g., “disgusting”) is good or bad, or has some other feature (e.g., belongs to a particular category). Generally, negative primes will speed response for negative targets while delaying response for positive targets, and positive primes will do the reverse. Researchers have found that photographs of Black faces tend to facilitate the categorization of negative targets and delay the categorization of positive targets for White participants—a result widely interpreted as revealing racist attitudes (Fazio et al. 1995; Dovidio et al. 1997; Wittenbrink, Judd, and Park 1997). In the Implicit Association Test, respondents are asked to respond disjunctively to combined categories, giving for example one response if they see either a dark-skinned face or a positively valenced word and a different response if they see either a light-skinned face or a negatively valenced word. As in evaluative priming tasks, White respondents tend to respond more slowly when asked to pair dark-skinned faces with positively valenced words than with negatively valenced words, which is interpreted as revealing a negative attitude or association (Greenwald, McGhee, and Schwartz 1998; Lane et al. 2007; Jost 2019).
However, it should be noted that despite its prominence, the Implicit Association Test has been criticized as having poor test-retest reliability, low predictive validity, and weak correlations with other measures of racism (Oswald, Mitchell, Blanton, Jaccard, and Tetlock 2013; Gawronski, Morrison, Phills, and Galdi 2017; Payne, Vuletich, and Lundberg 2017; Machery 2022).

As mentioned above, such implicit measures are often interpreted as revealing attitudes to which people have poor or no introspective access. The evidence that people lack introspective knowledge of such attitudes generally turns on the low correlations between such implicit measures of racism and more explicit measures such as self-report—though due to the recognized social undesirability of racial prejudice, it is difficult to disentangle self-presentational from self-knowledge factors in self-reports (Fazio et al. 1995; Wilson, Lindsey, and Schooler 2000; Greenwald and Nosek 2009). People who appear racist by implicit measures might disavow racism and inhibit racist patterns of response on explicit measures (such as when asked to rate the attractiveness of faces of different races) because they don’t want to be seen as racist—a motivation that might drive them whether or not they have accurate self-knowledge of their racist attitudes. Still, it seems prima facie plausible that people have at best limited knowledge of the patterns of association that drive their responses on priming and other implicit measures.

But what do such tests really measure? In philosophy, Zimmerman (2018) and Gendler (2008a, 2008b) have argued that measures like the Implicit Association Test do not measure actual racist beliefs but rather something else, something under less rational control (Gendler calls them “aliefs”). Schwitzgebel (2010) argues that people who are implicitly prejudiced but explicitly egalitarian are “in between” believing and failing to believe the egalitarian propositions that they sincerely accept (see also Levy 2015). Machery (2016) argues that implicit measures reveal multi-track dispositions, rather than attitudes. Gawronski and Bodenhausen (2006) advance a model according to which there is a substantial difference between implicit attitudes, defined in terms of associative processes, and explicit attitudes, which have a propositional structure and are guided by standards of truth and consistency (see also Wilson, Lindsey, and Schooler 2000; Greenwald and Nosek 2009). Mandelbaum (2016) and Borgoni (2016) also endorse views on which people who are implicitly prejudiced but explicitly egalitarian have contradictory attitudes, though they argue that both the implicit and the explicit attitudes are propositionally structured and to some extent subject to norms of rationality. Payne, Vuletich, and Lundberg (2017) argue that implicit measures capture the situationally-variable accessibility of culturally given concepts.

How one answers this question about the relation between implicit bias and belief or other attitudes bears on the question of the accuracy of introspection of the belief or other attitude in question. On the assumption that people are unaware of the extent of their bias, or at least have no direct introspective access to their bias, that failure of introspection of bias constitutes a failure of introspection with respect to the attitude in question. On the other hand, if what is at stake is merely an association or trait-like disposition, rather than an attitude, failure of introspectibility is unsurprising and does not bear on the general question of the introspection of attitudes.

The issue generalizes beyond implicit bias. To the extent attitudes are held to be reflected in, or even defined by, our explicit judgments about the matter in question and also, differently but perhaps not wholly separably (see Section 2.3.4 above), our explicit judgments about our attitudes toward the matter in question, our self-knowledge would seem to be correspondingly secure and implicit measures beside the point. To the extent attitudes are held to crucially involve swift and automatic, or unreflective, patterns of reaction and association, our self-knowledge of them would appear to be correspondingly problematic, corrigible by data from implicit measures (Bohner and Dickel 2011; Schwitzgebel 2011a, 2021).

Similarly, Carruthers (2011; see also Bem 1967, 1972; Rosenthal 2001; Cassam 2014) argues that evidence of the sort described in Section 4.2.1 above shows that people confabulate not just in reporting the causes of their attitudes but also in reporting the attitudes themselves. For example, Carruthers suggests that if someone in Nisbett and Wilson’s famous 1977 study confabulates “I thought this pair was softest” as an explanation of their choice of the rightmost pair of stockings, they err not only about the cause of their choice but also in self-ascribing the judgment that the pair was softest. On this basis, Carruthers adopts a self/other parity view (see Section 2.1 above) of our self-knowledge of our attitudes, holding that we can only introspect, in the strict sense, conscious experiences like those that arise in perception and imagery.

4.2.3 Of Conscious Experience

Currently ongoing conscious experience—or maybe immediately past conscious experience (if we hold that introspective judgment must temporally follow the state or process introspected)—is both the most universally acknowledged target of the introspective process and the target most commonly thought to be known with a high degree of privilege. Infallibility, indubitability, incorrigibility, and self-intimation claims (see Section 4.1.1) are most commonly made for self-knowledge of states such as being in pain or having a visual experience as of the color red, where these states are construed as qualitative states, or subjective experiences, or aspects of our phenomenology or consciousness. (All these terms are intended interchangeably to refer to what Block [1995], Chalmers [1996], and other contemporary philosophers call “phenomenal consciousness”.) If attitudes are sometimes conscious, then we might also be capable of introspecting those attitudes as part of our capacity to introspect conscious experience generally (Goldman 2006; Hill 2009; Smithies forthcoming).

It’s difficult to study the accuracy of self-ascriptions of conscious experience for the same reasons it’s difficult to study the accuracy of our self-ascriptions of attitudes (Section 4.2.2): There’s no widely accepted measure to trump or confirm self-report. In the medical literature on pain, for example, no behavioral or physiological measure of pain is generally thought capable of overriding self-report of current pain, despite the fact that scaling issues remain a problem within and especially between subjects (Williams, Davies, and Chadury 2000), as does retrospective assessment (Redelmeier and Kahneman 1996). When physiological markers of pain and self-report dissociate, it’s by no means clear that the physiological marker should be taken as the more accurate index (for methodological recommendations, see Price and Aydede 2005). Corresponding remarks apply to the case of pleasure (Haybron 2008).

As mentioned in Section 3.3 above, early introspective psychologists asserted the difficulty of accurately introspecting conscious experience and achieved only mixed success in their attempts to obtain scientifically replicable (and thus presumably accurate) data through the use of trained introspectors. In some domains they achieved considerable success and replicability, such as in the construction of the “color solid” (a representation of the three primary dimensions of variation in color experience: hue, saturation, and lightness or brightness), the mapping of the size of “just noticeable differences” between sensations and the “liminal” threshold below which a stimulus is too faint to be experienced, and the (at least roughly) logarithmic relationship between the intensity of a sensory stimulus and the intensity of the resulting experience (the “Weber-Fechner law”). Contemporary psychophysics—the study of the relation between physical stimuli and the resulting sense experiences or percepts—is rooted in these early introspective studies. However, other sorts of phenomena proved resistant to cross-laboratory introspective consensus—such as the possibility or not of imageless thought (see the entry on mental imagery), the structure of emotion, and the experiential aspects of attention. Perhaps these facts about the range of early introspective agreement and apparently intractable disagreement cast light on the range over which careful and well-trained introspection is and is not reliable.
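The logarithmic relationship just mentioned can be stated in its standard modern textbook form (the notation below is the usual contemporary one, not that of the early introspective psychologists):

```latex
% Weber's law: the just noticeable difference \Delta I is a roughly
% constant fraction k of the stimulus intensity I.
\frac{\Delta I}{I} = k

% Fechner's law, obtained by integrating Weber's law: the intensity S of
% the resulting experience grows logarithmically with stimulus intensity,
% where I_0 is the liminal (threshold) intensity below which the stimulus
% is too faint to be experienced.
S = c \, \ln\!\left(\frac{I}{I_0}\right)
```

Here k and c are empirically determined constants that differ across sensory modalities. One consequence: doubling the physical intensity of a stimulus adds a fixed increment to the intensity of the experience rather than doubling it.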

Ericsson and Simon (1984/1993; Ericsson 2003) discuss and review relationships between the participant’s performance on various problem-solving tasks, their concurrent verbalizations of conscious thoughts (“think aloud protocols”), and their immediately retrospective verbalizations. The existence of good relationships in the predicted directions in many problem-solving tasks lends empirical support to the view that people’s reports about their stream of thoughts often accurately reflect those thoughts. For example, Ericsson and Simon find that think-aloud and retrospective reports of thought processes correlate with predicted patterns of eye movement and response latency. Ericsson and Simon also cite studies like that of Hamilton and Sanford (1978), who asked participants to make yes or no judgments about whether pairs of letters were in alphabetical order (like MO) or not (like RP) and then to describe retrospectively their method for arriving at the judgments. When participants retrospectively reported knowing the answer “automatically” without an intervening conscious process, reaction times were swift and did not depend on the distance between the letters. When participants retrospectively reported “running through” a sequential series of letters (such as “LMNO” when prompted with “MO”) reaction times correlated nicely with reported length of run-through. On the other hand, Flavell, Green, and Flavell (1995) report gross and widespread introspective error about recently past and even current (conscious) thought in young children; and Smallwood and Schooler (2006) review literature that suggests that people are not especially good at detecting when their mind is wandering.

In the 20th century, philosophers arguing against infallibilism often devised hypothetical examples in which they suggested it was plausible to attribute introspective error; but even if such examples succeed, they are generally confined to far-fetched scenarios, pathological cases, or very minor or very brief mistakes (e.g., Armstrong 1963; Churchland 1988). In the 21st century, philosophical critics of the accuracy of introspective judgments about consciousness shifted their focus to cases of widespread disagreement or (putative) error, either among ordinary people or among research specialists. Dennett (1991), Blackmore (2002), and Schwitzgebel (2011b), for example, argue that most people are badly mistaken about the nature of the experience of peripheral vision. These authors argue that people experience visual clarity only in a small and rapidly moving region of about 1–2 degrees of visual arc, contrary to the (they say) widespread impression most people have that they experience a substantially broader range of stable clarity in the visual field. Other recent arguments against the accuracy of introspective judgments about conscious experience turn on citing the widespread disagreement about whether there is a “phenomenology of thinking” beyond that of imagery and emotion, about whether sensory experience as a whole is “rich” (including for example constant tactile experience of one’s feet in one’s shoes) or “thin” (limited mostly just to what is in attention at any one time), and about the nature of visual imagery experience (Hurlburt and Schwitzgebel 2007; Bayne and Spener 2010; Schwitzgebel 2011b; though see Hohwy 2011).

Irvine (2013, 2021) has argued that the methodological problems in this area are so severe that the term “consciousness” should be eliminated from scientific discourse as impossible to effectively operationalize or measure. Feest (2014), Timmermans and Cleeremans (2015), and Spener (2024) similarly highlight the substantial methodological challenges of using introspective reports in the science of consciousness, though without being quite as pessimistic as Irvine. In light of such concerns, Pauen and Haynes (2021) emphasize the value of complementing introspective with “extroceptive” measures.

Bibliography

  • Alston, William P., 1971, “Varieties of privilegedaccess”,American Philosophical Quarterly, 8:223–241.
  • Amedi, Amir, Rafael Malach, and Alvaro Pascual-Leone, 2005,“Negative BOLD differentiates visual imagery andperception”,Neuron, 48: 859–872.
  • Andersen, Lau M., Mikkel C. Vinding, Kristian Sandberg, and MortenOvergaard, 2022, “Task requirements affect the neural correlatesof consciousness”,European Journal of Neuroscience,56: 5810–5822.
  • Aristotle, 3rd c. BCE/1961,De Anima, W.D. Ross (ed.),Oxford: Oxford University Press.
  • Armstrong, David M., 1963, “Is introspective knowledgeincorrigible?”,Philosophical Review, 72:417–432.
  • –––, 1968,A materialist theory of themind, London: Routledge.
  • –––, 1981,The nature of mind and otheressays, Ithaca, NY: Cornell University Press.
  • –––, 1999,The mind-body problem,Boulder, CO: Westview.
  • Aru, Jaan, Talis Bachmann, Wolf Singer, and Lucia Melloni, 2012,“Distilling the neural correlates of consciousness”,Neuroscience and Biobehavioral Reviews, 36:737–746.
  • Augustinus, Aurelius, c. 420 C.E./1998,The city of Godagainst the pagans, R.W. Dyson (tr.), Cambridge: CambridgeUniversity Press.
  • Aydede, Murat, and Güven Güzeldere, 2005,“Cognitive architecture, concepts, and introspection: Aninformation-theoretic solution to the problem of phenomenalconsciousness”,Noûs, 39: 197–255.
  • Ayer, A.J., 1936 [1946],Language, truth, and logic, 2nded., London: Gollancz.
  • –––, 1963,The concept of a person, NewYork: St. Martin’s.
  • Baldwin, James Mark, 1901–1905,Dictionary of philosophyand psychology, New York: Macmillan.
  • Balog, Katalin, 2012, “Acquaintance and the mind-body problem”, in New Perspectives on Type Identity, Simone Gozzano and Christopher S. Hill (eds.), Cambridge: Cambridge University Press.
  • Bar-On, Dorit, 2004, Speaking my mind, Oxford: Oxford University Press.
  • Barrett, Lisa Feldman, Batja Mesquita, Kevin N. Ochsner, and James J. Gross, 2007, “The experience of emotion”, Annual Review of Psychology, 58: 373–403.
  • Bayne, Tim, and Michelle Montague (eds.), 2011, Cognitive phenomenology, Oxford: Oxford University Press.
  • Bayne, Tim, and Maja Spener, 2010, “Introspective humility”, Philosophical Issues, 20: 1–22.
  • Bem, Daryl J., 1967, “Self-perception: An alternative interpretation of cognitive dissonance phenomena”, Psychological Review, 74: 183–200.
  • –––, 1972, “Self-perception theory”, Advances in Experimental Social Psychology, 6: 1–62.
  • Berkeley, George, 1710/1965, A Treatise Concerning the Principles of Human Knowledge, in Principles, Dialogues, and Philosophical Correspondence, Colin M. Turbayne (ed.), New York: Macmillan, 3–101.
  • Bilgrami, Akeel, 2006, Self-knowledge and resentment, Cambridge, MA: Harvard University Press.
  • Blackmore, Susan, 2002, “There is no stream of consciousness”, Journal of Consciousness Studies, 9(5–6): 17–28.
  • Block, Ned, 1995, “On a confusion about a function of consciousness”, Behavioral and Brain Sciences, 18: 227–247.
  • –––, 1996, “Mental paint and mental latex”, Philosophical Issues, 7: 19–49.
  • –––, 2020, “Finessing the bored monkey problem”, Trends in Cognitive Sciences, 24: 167–168.
  • Bock, Elizabeth A., Jeremy D. Fesi, Jason Da Silva Castenheira, Sylvain Baillet, and Janine D. Mendola, 2023, “Distinct dorsal and ventral streams for binocular rivalry dominance and suppression”, European Journal of Neuroscience, 57: 1317–1334.
  • Boghossian, Paul, 1989, “Content and self-knowledge”, Philosophical Topics, 17: 5–26.
  • Bohner, Gerd, and Nina Dickel, 2011, “Attitudes and attitude change”, Annual Review of Psychology, 62: 391–417.
  • Borgoni, Cristina, 2016, “Dissonance and irrationality”, Pacific Philosophical Quarterly, 97: 48–57.
  • Boring, Edwin G., 1921, “The stimulus-error”, American Journal of Psychology, 32: 449–471.
  • –––, 1953, “A history of introspection”, Psychological Bulletin, 50: 169–189.
  • Boyle, Matthew, 2009, “Two kinds of self-knowledge”, Philosophy and Phenomenological Research, 78: 133–164.
  • –––, 2024, Transparency and reflection, Oxford: Oxford University Press.
  • Brascamp, Jan, Philipp Sterzer, Randolph Blake, and Tomas Knapen, 2018, “Multistable perception and the role of the frontoparietal cortex in perceptual inference”, Annual Review of Psychology, 69: 77–103.
  • Brentano, Franz, 1874 [1995], Psychology from an empirical standpoint, 2nd English edition, Antos C. Rancurello, D.B. Terrell, and Linda L. McAlister (trans.), New York: Routledge.
  • Brownstein, Michael, Alex Madva, and Bertram Gawronski, 2019, “What do implicit measures measure?”, WIREs Cognitive Science, 10: e1501.
  • Burge, Tyler, 1979, “Individualism and the mental”, Midwest Studies in Philosophy, 4: 73–121.
  • –––, 1988, “Individualism and self-knowledge”, Journal of Philosophy, 85: 649–663.
  • –––, 1996, “Our entitlement to self-knowledge”, Proceedings of the Aristotelian Society, 96: 91–116.
  • –––, 1998, “Reason and the first person”, in Knowing our own minds, Crispin Wright, Barry C. Smith, and Cynthia Macdonald (eds.), Oxford: Oxford University Press, 243–270.
  • Byrne, Alex, 2018, Transparency and self-knowledge, Oxford: Oxford University Press.
  • Campbell, Donald T., William H. Kruskal, and William P. Wallace, 1966, “Seating aggregation as an index of attitude”, Sociometry, 29: 1–15.
  • Campbell, John, 1999, “Immunity to error through misidentification and the meaning of a referring term”, Philosophical Topics, 25(1–2): 89–104.
  • Carruthers, Peter, 2005, Consciousness: Essays from a higher-order perspective, Oxford: Oxford University Press.
  • –––, 2011, The opacity of mind, Oxford: Oxford University Press.
  • Cassam, Quassim, 2014, Self-knowledge for humans, Oxford: Oxford University Press.
  • Cavell, Marcia, 2006, Becoming a subject, Oxford: Oxford University Press.
  • Chalmers, David J., 1996, The conscious mind, New York: Oxford University Press.
  • –––, 2003, “The content and epistemology of phenomenal belief”, in Consciousness: New philosophical perspectives, Quentin Smith and Aleksandar Jokic (eds.), Oxford: Oxford University Press, 220–272.
  • Chapman, Dwight W., 1933, “Attensity, clearness, and attention”, American Journal of Psychology, 45: 156–165.
  • Cheesman, Jim, and Philip M. Merikle, 1986, “Distinguishing conscious from unconscious perceptual processes”, Canadian Journal of Psychology, 40: 343–367.
  • Churchland, Paul M., 1988, Matter and consciousness, rev. ed., Cambridge, MA: MIT Press.
  • Coliva, Annalisa, 2016, Varieties of self-knowledge, London: Palgrave.
  • Comte, Auguste, 1830, Cours de philosophie positive, volume 1, Paris: Bachelier, Libraire pour les Mathématiques.
  • Cooper, Joel, and Russell H. Fazio, 1984, “A new look at dissonance theory”, Advances in Experimental Social Psychology, 17: 229–266.
  • Csikszentmihalyi, Mihaly, 2014, Flow and the Foundations of Positive Psychology, Dordrecht: Springer.
  • Cui, Xu, Cameron B. Jeter, Dongni Yang, P. Read Montague, and David M. Eagleman, 2007, “Vividness of mental imagery: Individual variability can be measured objectively”, Vision Research, 47: 474–478.
  • Cunningham, William A., et al., 2004, “Separable neural components in the processing of Black and White faces”, Psychological Science, 15: 806–813.
  • de Graaf, Tom A., Maartje C. de Jong, Rainer Goebel, Raymond van Ee, and Alexander T. Sack, 2011, “On the functional relevance of frontal cortex for passive and voluntarily controlled bistable vision”, Cerebral Cortex, 21: 2322–2331.
  • de Graaf, Tom A., Po-Jang Hsieh, and Alexander T. Sack, 2012, “The ‘correlates’ in neural correlates of consciousness”, Neuroscience and Biobehavioral Reviews, 36: 191–197.
  • De Vaus, David, 1985/2014, Surveys in social research, 6th ed., London: Routledge.
  • Dehaene, Stanislas, et al., 2001, “Cerebral mechanisms of word masking and unconscious repetition priming”, Nature Neuroscience, 4: 752–758.
  • Dehaene, Stanislas, and Jean-Pierre Changeux, 2011, “Experimental and theoretical approaches to conscious processing”, Neuron, 70: 200–227.
  • Del Cul, Antoine, Sylvain Baillet, and Stanislas Dehaene, 2007, “Brain dynamics underlying the nonlinear threshold for access to consciousness”, PLoS Biology, 5(10): e260.
  • Dennett, Daniel C., 1987, The intentional stance, Cambridge, MA: MIT Press.
  • –––, 1991, Consciousness explained, Boston: Little, Brown, and Co.
  • –––, 2000, “The case for rorts”, in Rorty and his critics, R.B. Brandom (ed.), Malden, MA: Blackwell, 91–101.
  • –––, 2002, “How could I be wrong? How wrong could I be?”, Journal of Consciousness Studies, 9(5–6): 13–16.
  • Descartes, René, 1637/1985, Discourse on the method, in The philosophical writings of Descartes, vol. 1, John Cottingham, Robert Stoothoff, and Dugald Murdoch (eds. and trans.), Cambridge: Cambridge University Press, 111–151.
  • –––, 1641/1984, Meditations on first philosophy, in The philosophical writings of Descartes, vol. 2, John Cottingham, Robert Stoothoff, and Dugald Murdoch (eds. and trans.), Cambridge: Cambridge University Press, 1–62.
  • Dovidio, John F., Kerry Kawakami, Craig Johnson, Brenda Johnson, and Adaiah Howard, 1997, “On the nature of prejudice: Automatic and controlled processes”, Journal of Experimental Social Psychology, 33: 510–540.
  • Dretske, Fred, 1995, Naturalizing the mind, Cambridge, MA: MIT Press.
  • Ebbinghaus, Hermann, 1885/1913, Memory: A contribution to experimental psychology, Henry A. Ruger and Clara E. Bussenius (trans.), New York: Columbia.
  • Elster, Jon, 1983/2016, Sour grapes, Cambridge: Cambridge University Press.
  • Ericsson, K. Anders, 2003, “Valid and non-reactive verbalization of thoughts during performance of tasks: Towards a solution to the central problems of introspection as a source of scientific data”, Journal of Consciousness Studies, 10(9–10): 1–18.
  • Ericsson, K. Anders, and Herbert A. Simon, 1984/1993, Protocol analysis, revised edition, Cambridge, MA: MIT Press.
  • Evans, Gareth, 1982, The varieties of reference, John McDowell (ed.), Oxford: Clarendon; New York: Oxford University Press.
  • Fazio, Russell H., Joni R. Jackson, Bridget C. Dunton, and Carol J. Williams, 1995, “Variability in automatic activation as an unobtrusive measure of racial attitudes: A bona fide pipeline?”, Journal of Personality and Social Psychology, 69(6): 1013–1027.
  • Fechner, Gustav, 1860 [1964], Elements of psychophysics, Helmut E. Adler, Davis H. Howes, and Edwin G. Boring (ed. and trans.), New York: Holt, Rinehart, and Winston.
  • Feest, Uljana, 2014, “Phenomenal experiences, first-person methods, and the artificiality of experimental data”, Philosophy of Science, 81: 927–939.
  • Fernández, Jordi, 2003, “Privileged access naturalized”, Philosophical Quarterly, 53: 352–372.
  • Festinger, Leon, 1957, A theory of cognitive dissonance, Stanford, CA: Stanford University Press.
  • Festinger, Leon, and James M. Carlsmith, 1959, “Cognitive consequences of forced compliance”, Journal of Abnormal and Social Psychology, 58: 203–210.
  • Fink, Sascha Benjamin, Lukas Kob, and Holger Lyre, 2021, “A structural constraint on neural correlates of consciousness”, Philosophy and the Mind Sciences, 2(7). doi:10.33735/phimisci.2021.79
  • Flavell, John H., Frances L. Green, and Eleanor R. Flavell, 1995, “Young children’s knowledge about thinking”, Monographs of the Society for Research in Child Development, 60(1).
  • Fodor, Jerry A., 1983, The modularity of mind, Cambridge, MA: MIT Press.
  • –––, 1998, Concepts: Where cognitive science went wrong, Oxford: Oxford University Press.
  • Förster, Jona, Mika Koivisto, and Antti Revonsuo, 2020, “ERP and MEG correlates of visual consciousness: The second decade”, Consciousness and Cognition, 80: 102917. doi:10.1016/j.concog.2020.102917
  • Frässle, Stefan, Jens Sommer, Andreas Jansen, Marnix Naber, and Wolfgang Einhäuser, 2014, “Binocular rivalry: Frontal activity relates to introspection and action but not to perception”, Journal of Neuroscience, 34: 1738–1747.
  • Funder, David C., 1999, Personality judgment, London: Academic Press.
  • Gallois, Andre, 1996, The world without, the mind within, Cambridge: Cambridge University Press.
  • Galton, Francis, 1869/1891, Hereditary genius, rev. ed., New York: Appleton.
  • Gardner, Sebastian, 1993, Irrationality and the philosophy of psychoanalysis, Cambridge: Cambridge University Press.
  • Gawronski, Bertram, and Galen V. Bodenhausen, 2006, “Associative and propositional processes in evaluation: An integrative review of implicit and explicit attitude change”, Psychological Bulletin, 132: 692–731.
  • Gawronski, Bertram, Mike Morrison, Curtis E. Phills, and Silvia Galdi, 2017, “Temporal stability of implicit and explicit measures: A longitudinal analysis”, Personality and Social Psychology Bulletin, 43: 300–312.
  • Gawronski, Bertram, Jan De Houwer, and Jeffrey W. Sherman, 2020, “Twenty-five years of research using implicit measures”, Social Cognition, 38: S1–S25.
  • Gazzaniga, Michael S., 1995, “Consciousness and the cerebral hemispheres”, in The Cognitive Neurosciences, Michael S. Gazzaniga (ed.), Cambridge, MA: MIT Press, 1391–1400.
  • Gendler, Tamar Szabó, 2008a, “Alief and belief”, Journal of Philosophy, 105: 634–663.
  • –––, 2008b, “Alief in action, and reaction”, Mind & Language, 23: 552–585.
  • Gennaro, Rocco J., 1996, Consciousness and Self-Consciousness, Amsterdam: John Benjamins.
  • Gertler, Brie, 2000, “The mechanics of self-knowledge”, Philosophical Topics, 28: 125–146.
  • –––, 2001, “Introspecting phenomenal states”, Philosophy and Phenomenological Research, 63: 305–328.
  • –––, 2011, “Self-knowledge and the transparency of belief”, in Self-knowledge, Anthony Hatzimoysis (ed.), Oxford: Oxford University Press.
  • Giustina, Anna, 2021, “Introspective acquaintance: An integration account”, European Journal of Philosophy, 31: 380–397.
  • –––, 2022, “Introspective knowledge by acquaintance”, Synthese, 200(128): 1–23.
  • Goldman, Alvin I., 1989, “Interpretation psychologized”, Mind & Language, 4: 161–185.
  • –––, 2000, “Can science know when you’re conscious?”, Journal of Consciousness Studies, 7(5): 3–22.
  • –––, 2006, Simulating minds, Oxford: Oxford University Press.
  • Gopnik, Alison, 1993a, “How we know our minds: The illusion of first-person knowledge of intentionality”, Behavioral and Brain Sciences, 16: 1–14.
  • –––, 1993b, “Psychopsychology”, Consciousness and Cognition, 2: 264–280.
  • Gopnik, Alison, and Andrew N. Meltzoff, 1994, “Minds, bodies and persons: Young children’s understanding of the self and others as reflected in imitation and ‘theory of mind’ research”, in Self-awareness in animals and humans, Sue Taylor Parker, Robert W. Mitchell, and Maria L. Boccia (eds.), New York: Cambridge University Press, 166–186.
  • Gordon, Robert M., 1995, “Simulation without introspection or inference from me to you”, in Mental simulation, Martin Davies and Tony Stone (eds.), Oxford: Blackwell.
  • –––, 2007, “Ascent routines for propositional attitudes”, Synthese, 159: 151–165.
  • Gow, Laura, 2019, “Everything is clear: All perceptual experiences are transparent”, European Journal of Philosophy, 27: 412–425.
  • Green, David M., and John A. Swets, 1966, Signal detection theory and psychophysics, Oxford: Wiley.
  • Greenwald, Anthony G., Debbie E. McGhee, and Jordan L.K. Schwartz, 1998, “Measuring individual differences in implicit cognition: The Implicit Association Test”, Journal of Personality and Social Psychology, 74: 1464–1480.
  • Greenwald, Anthony G., and Brian A. Nosek, 2009, “Attitudinal dissociation: What does it mean?”, in Attitudes: Insights from the New Implicit Measures, Richard E. Petty, Russell H. Fazio, and Pablo Briñol (eds.), New York: Taylor and Francis, 65–82.
  • Hahn, Adam, Charles M. Judd, Holen K. Hirsh, and Irene V. Blair, 2014, “Awareness of implicit attitudes”, Journal of Experimental Psychology: General, 143: 1369–1392.
  • Hamilton, Andy, 2007, “Memory and self-consciousness: Immunity to error through misidentification”, Synthese, 171: 409–417.
  • Hamilton, J.M.E., and A.J. Sanford, 1978, “The symbolic distance effect for alphabetic order judgements: A subjective report and reaction time analysis”, Quarterly Journal of Experimental Psychology, 30: 33–43.
  • Harman, Gilbert, 1990, “The intrinsic quality of experience”, in Philosophical Perspectives, 4, James Tomberlin (ed.), Atascadero, CA: Ridgeview, 31–52.
  • Harmon-Jones, Eddie, 2019, Cognitive Dissonance, 2nd ed., Washington, DC: American Psychological Association.
  • Hart, Allen J., Paul J. Whalen, Lisa M. Shin, Sean C. McInerney, Hakan Fischer, and Scott L. Rauch, 2000, “Differential response in the human amygdala to racial outgroup vs ingroup face stimuli”, NeuroReport, 11: 2351–2355.
  • Haybron, Daniel M., 2008, The pursuit of unhappiness, Oxford: Oxford University Press.
  • Heal, Jane, 2002, “On first-person authority”, Proceedings of the Aristotelian Society, 102: 1–19.
  • Heil, John, 1988, “Privileged access”, Mind, 97: 238–251.
  • Helmholtz, Hermann, 1856/1962, Helmholtz’s Treatise on Physiological Optics, James P.C. Southall (ed.), New York: Dover. [Translation based on the 1924 edition.]
  • Hesse, Janis Karan, and Doris Y. Tsao, 2020, “A new no-report paradigm reveals that face cells encode both consciously perceived and suppressed stimuli”, eLife, 9: e58360.
  • Hill, Christopher S., 1991, Sensations: A defense of type materialism, Cambridge: Cambridge University Press.
  • –––, 2009, Consciousness, Cambridge: Cambridge University Press.
  • Hirstein, William, 2005, Brain fiction, Cambridge, MA: MIT Press.
  • Hohwy, Jakob, 2011, “Phenomenal variability and introspective reliability”, Mind & Language, 26: 261–286.
  • Horgan, Terence, John L. Tienson, and George Graham, 2006, “Internal-world skepticism and mental self-presentation”, in Self-representational approaches to consciousness, Uriah Kriegel and Kenneth Williford (eds.), Cambridge, MA: MIT Press, 191–207.
  • Horgan, Terence, and Uriah Kriegel, 2007, “Phenomenal epistemology: What is consciousness that we may know it so well?”, Philosophical Issues, 17(1): 123–144.
  • Hume, David, 1739 [1978], A treatise of human nature, L.A. Selby-Bigge and P.H. Nidditch (eds.), Oxford: Clarendon.
  • –––, 1748/1975, An enquiry concerning human understanding, in Enquiries concerning human understanding and concerning the principles of morals, L.A. Selby-Bigge and P.H. Nidditch (eds.), Oxford: Clarendon, 1–165.
  • Humphrey, George, 1951, Thinking: An introduction to its experimental psychology, London: Methuen.
  • Hurlburt, Russell T., 1990, Sampling normal and schizophrenic inner experience, New York: Plenum.
  • –––, 2011, Investigating pristine inner experience, Cambridge: Cambridge University Press.
  • Hurlburt, Russell T., and Christopher L. Heavey, 2006, Exploring inner experience, Amsterdam: John Benjamins.
  • Hurlburt, Russell T., and Eric Schwitzgebel, 2007, Describing inner experience? Proponent meets skeptic, Cambridge, MA: MIT Press.
  • Husserl, Edmund, 1913 [1982], Ideas, Book I, T.E. Klein and W.E. Pohl (trans.), Dordrecht: Kluwer.
  • Irvine, Elizabeth, 2013, Consciousness as a scientific concept, Dordrecht: Springer.
  • –––, 2021, “Developing dark pessimism towards the justificatory role of introspective reports”, Erkenntnis, 86: 1319–1344.
  • Ito, Tiffany A., and John T. Cacioppo, 2007, “Attitudes as mental and neural states of readiness”, in Implicit measures of attitudes, Bernd Wittenbrink and Norbert Schwarz (eds.), New York: Guilford, 125–158.
  • Jack, Anthony, and Andreas Roepstorff, 2003, Trusting the subject, vol. 1, special issue of the Journal of Consciousness Studies, 10(9–10).
  • –––, 2004, Trusting the subject, vol. 2, special issue of the Journal of Consciousness Studies, 11(7–8).
  • James, William, 1890 [1981], The principles of psychology, Cambridge, MA: Harvard University Press.
  • Jaynes, Julian, 1976, The origin of consciousness in the breakdown of the bicameral mind, New York: Houghton Mifflin.
  • Johansson, Petter, Lars Hall, Sverker Sikström, and Andreas Olsson, 2005, “Failure to detect mismatches between intention and outcome in a simple decision task”, Science, 310: 116–119.
  • Johansson, Petter, Lars Hall, Sverker Sikström, Betty Tärning, and Andreas Lind, 2006, “How something can be said about telling more than we can know: On choice blindness and introspection”, Consciousness and Cognition, 15: 673–692.
  • Jost, John T., 2019, “The IAT is dead, long live the IAT: Context-sensitive measures of implicit attitudes are indispensable to social and political psychology”, Current Directions in Psychological Science, 28: 10–19.
  • Kant, Immanuel, 1781/1997, The critique of pure reason, Paul Guyer and Allen W. Wood (eds. and trans.), Cambridge: Cambridge University Press.
  • Kay, Aaron C., Maria C. Jimenez, and John T. Jost, 2002, “Sour grapes, sweet lemons, and the anticipatory rationalization of the status quo”, Personality and Social Psychology Bulletin, 28: 1300–1312.
  • Kihlstrom, John F., “Implicit methods in social psychology”, in The SAGE handbook of methods in social psychology, Carol Sansone, Carolyn C. Morf, and A.T. Panter (eds.), Thousand Oaks, CA: Sage, 195–212.
  • Kind, Amy, 2003, “What’s so transparent about transparency?”, Philosophical Studies, 115: 225–244.
  • Kleinschmidt, A., C. Büchel, S. Zeki, and R.S.J. Frackowiak, 1998, “Human brain activity during spontaneously reversing perception of ambiguous figures”, Proceedings of the Royal Society B, 265: 2427–2433.
  • Knapen, Tomas, Jan Brascamp, Joel Pearson, Raymond van Ee, and Randolph Blake, 2011, “The role of frontal and parietal areas in bistable perception”, Journal of Neuroscience, 31: 10293–10301.
  • Koch, Christof, Marcello Massimini, Melanie Boly, and Giulio Tononi, 2016, “Neural correlates of consciousness: Progress and problems”, Nature Reviews Neuroscience, 17: 307–321.
  • Kornblith, Hilary, 1998, “What is it like to be me?”, Australasian Journal of Philosophy, 76: 48–60.
  • Kosslyn, Stephen M., Daniel Reisberg, and Marlene Behrmann, 2006, “Introspection and mechanism in mental imagery”, in The Dalai Lama at MIT, Anne Harrington and Arthur Zajonc (eds.), Cambridge, MA: Harvard University Press, 79–90.
  • Kriegel, Uriah, 2009, Subjective consciousness, Oxford: Oxford University Press.
  • –––, forthcoming, “A new perceptual theory of introspection”, in The Routledge Handbook of Introspection, A. Giustina (ed.), London: Routledge.
  • Külpe, Oswald, 1893 [1895], Outlines of psychology, Edward Titchener (trans.), London: George Allen & Unwin.
  • Kusch, Martin, 1999, Psychological knowledge, London: Routledge.
  • Lambie, John A., and Anthony J. Marcel, 2002, “Consciousness and the varieties of emotion experience: A theoretical framework”, Psychological Review, 109: 219–259.
  • Lane, Kristin A., Mahzarin R. Banaji, Brian A. Nosek, and Anthony G. Greenwald, 2007, “Understanding and using the Implicit Association Test: IV”, in Implicit measures of attitudes, Bernd Wittenbrink and Norbert Schwarz (eds.), New York: Guilford, 59–102.
  • Langland-Hassan, Peter, 2015, “Introspective misidentification”, Philosophical Studies, 172: 1737–1758.
  • Larson, Reed, and Mihaly Csikszentmihalyi, 1983, “The Experience Sampling Method”, in Naturalistic approaches to studying social interaction, Harry T. Reis (ed.), San Francisco: Jossey-Bass, 41–56.
  • Lau, Hakwan, and Richard Brown, 2019, “The emperor’s new phenomenology? The empirical case for conscious experience without first-order representations”, in Blockheads! Essays on Ned Block’s Philosophy of Mind and Consciousness, A. Pautz and D. Stoljar (eds.), Cambridge, MA: MIT Press, 171–197.
  • Lear, Jonathan, 1998, Open-minded, Cambridge, MA: Harvard University Press.
  • LeDoux, Joseph E., and Richard Brown, 2017, “A higher-order theory of emotional consciousness”, PNAS, 114: E2016–E2025.
  • Levy, Neil, 2015, “Neither fish nor fowl: Implicit attitudes as patchy endorsements”, Noûs, 49: 800–823.
  • Lewis, C.I., 1946, An analysis of knowledge and valuation, La Salle, IL: Open Court.
  • Locke, John, 1690 [1975], An essay concerning human understanding, Peter H. Nidditch (ed.), Oxford: Oxford University Press.
  • Lumer, Erik D., Karl J. Friston, and Geraint Rees, 1998, “Neural correlates of perceptual rivalry in the human brain”, Science, 280: 1930–1934.
  • Lycan, William G., 1996, Consciousness and experience, Cambridge, MA: MIT Press.
  • Lyons, William, 1986, The disappearance of introspection, Cambridge, MA: MIT Press.
  • Lyubomirsky, Sonja, and Lee Ross, 1999, “Changes in attractiveness of elected, rejected, and precluded alternatives: A comparison of happy and unhappy individuals”, Journal of Personality and Social Psychology, 76: 988–1007.
  • Machery, Edouard, 2016, “De-Freuding implicit attitudes”, in Implicit bias and philosophy, volume 1: Metaphysics and epistemology, Michael Brownstein and Jennifer Saul (eds.), Oxford: Oxford University Press.
  • Macmillan, Neil A., and C. Douglas Creelman, 1991, Detection theory, Cambridge: Cambridge University Press.
  • Mandelbaum, Eric, 2016, “Attitude, inference, association: On the propositional structure of implicit bias”, Noûs, 50: 629–658.
  • Marks, David F., 1985, “Imagery paradigms and methodology”, Journal of Mental Imagery, 9: 93–105.
  • Marr, David, 1983, Vision, New York: Freeman.
  • Martin, Michael G.F., 2002, “The transparency of experience”, Mind & Language, 17: 376–425.
  • Maudsley, Henry, 1867 [1977], Physiology and pathology of the mind, Daniel N. Robinson (ed.), Washington, DC: University Publications of America.
  • McGeer, Victoria, 1996, “Is ‘self-knowledge’ an empirical problem? Renegotiating the space of philosophical explanation”, Journal of Philosophy, 93: 483–515.
  • –––, 2008, “The moral development of first-person authority”, European Journal of Philosophy, 16: 81–108.
  • McGeer, Victoria, and Philip Pettit, 2002, “The self-regulating mind”, Language and Communication, 22: 281–299.
  • Megumi, Fukuda, Bahador Bahrami, Ryota Kanai, and Geraint Rees, 2015, “Brain activity dynamics in human parietal regions during spontaneous switches in bistable perception”, NeuroImage, 107: 190–197.
  • Mele, Alfred, 2001, Self-deception unmasked, Princeton, NJ: Princeton University Press.
  • Mengzi, 3rd c. BCE [2008], Mengzi, B.W. Van Norden (tr.), Indianapolis: Hackett.
  • Merikle, Philip M., Daniel Smilek, and John D. Eastwood, 2001, “Perception without awareness: Perspectives from cognitive psychology”, Cognition, 79: 115–134.
  • Mill, James, 1829 [1878], Analysis of the Phenomena of the Human Mind, John Stuart Mill (ed.), London: Longmans, Green, Reader, and Dyer.
  • Mill, John Stuart, 1865 [1961], Auguste Comte and positivism, Ann Arbor, MI: University of Michigan.
  • Mole, Christopher, 2011, Attention is cognitive unison, Oxford: Oxford University Press.
  • Moll, Albert, 1889 [1911], Hypnotism, Arthur F. Hopkirk (ed.), New York: Charles Scribner’s Sons.
  • Montague, Michelle, 2016, The given: Experience and its content, Oxford: Oxford University Press.
  • Moore, George Edward, 1903, “The refutation of idealism”, Mind, 12: 433–453.
  • –––, 1942, “A reply to my critics”, in The philosophy of G.E. Moore, P.A. Schilpp (ed.), New York: Tudor, 535–677.
  • –––, 1944 [1993], “Moore’s paradox”, in G.E. Moore, Selected writings, Thomas Baldwin (ed.), London: Routledge, 207–212.
  • Morales, Jorge, forthcoming, “Introspection is signal detection”, British Journal for the Philosophy of Science.
  • Moran, Richard, 2001, Authority and estrangement, Princeton: Princeton University Press.
  • Müller, G.E., 1904, Die Gesichtspunkte und die Tatsachen der psychophysischen Methodik, Wiesbaden: J.F. Bergmann.
  • Nahmias, Eddy, 2002, “Verbal reports on the contents of consciousness: Reconsidering introspectionist methodology”, Psyche, 8(21).
  • Nardi, Peter M., 2002/2018, Doing survey research, 4th ed., New York: Routledge.
  • Newell, Ben R., and David R. Shanks, 2014, “Unconscious influences on decision making: A critical review”, Behavioral and Brain Sciences, 37: 1–61.
  • Nichols, Shaun, and Stephen P. Stich, 2003, Mindreading, Oxford: Oxford University Press.
  • Nisbett, Richard E., and Nancy Bellows, 1977, “Verbal reports about causal influences on social judgments: Private access versus public theories”, Journal of Personality and Social Psychology, 35: 613–624.
  • Nisbett, Richard E., and Lee Ross, 1980, Human inference, Englewood Cliffs, NJ: Prentice-Hall.
  • Nisbett, Richard E., and Timothy DeCamp Wilson, 1977, “Telling more than we can know: Verbal reports on mental processes”, Psychological Review, 84: 231–259.
  • Noë, Alva, 2004, Action in perception, Cambridge, MA: MIT Press.
  • Noë, Alva, and Evan Thompson, 2004, “Are there neural correlates of consciousness?”, Journal of Consciousness Studies, 11(1): 3–28.
  • Oswald, Frederick, Gregory Mitchell, Hart Blanton, James Jaccard, and Philip Tetlock, 2013, “Predicting ethnic and racial discrimination: A meta-analysis of IAT criterion studies”, Journal of Personality and Social Psychology, 105: 171–192.
  • Papineau, David, 2002, Thinking about consciousness, Oxford: Oxford University Press.
  • Parkkonen, Lauri, Jesper Andersson, Matti Hämäläinen, and Riitta Hari, 2008, “Early visual brain areas reflect the percept of an ambiguous scene”, Proceedings of the National Academy of Sciences, 105: 20500–20504.
  • Pauen, Michael, and John-Dylan Haynes, 2021, “Measuring the mental”, Consciousness and Cognition, 80: 103106.
  • Payne, B. Keith, Heidi A. Vuletich, and Kristjen B. Lundberg, 2017, “The bias of crowds: How implicit bias bridges personal and systemic prejudice”, Psychological Inquiry, 28: 233–248.
  • Peacocke, Christopher, 1998, “Conscious attitudes, attention, and self-knowledge”, in Knowing our own minds, Crispin Wright, Barry C. Smith, and Cynthia Macdonald (eds.), Oxford: Oxford University Press, 63–99.
  • Petitmengin, Claire, 2006, “Describing one’s subjective experience in the second person: An interview method for the science of consciousness”, Phenomenology and the Cognitive Sciences, 5: 229–269.
  • Petty, Richard E., Russell H. Fazio, and Pablo Briñol (eds.), 2009, Attitudes: Insights from the new implicit measures, New York: Taylor and Francis.
  • Phillips, Ian, 2018, “The methodological puzzle of phenomenal consciousness”, Philosophical Transactions of the Royal Society B, 373: 20170347.
  • Pillsbury, W.B., 1908, Attention, London: Swan Sonnenschein.
  • Price, Donald D., and Murat Aydede, 2005, “The experimental use of introspection in the scientific study of pain and its integration with third-person methodologies: The experiential-phenomenological approach”, in Pain: New essays on its nature and the methodology of its study, Murat Aydede (ed.), Cambridge, MA: MIT Press, 243–273.
  • Prinz, Jesse, 2004, “The fractionation of introspection”, Journal of Consciousness Studies, 11(7–8): 40–57.
  • –––, 2007, “Mental pointing: Phenomenal knowledge without concepts”, Journal of Consciousness Studies, 14(9–10): 184–211.
  • –––, 2012, The conscious brain, Oxford: Oxford University Press.
  • Pronin, Emily, 2009, “The introspection illusion”, Advances in Experimental Social Psychology, 41: 1–67.
  • Pryor, James, 1999, “Immunity to error through misidentification”, Philosophical Topics, 26(1–2): 271–304.
  • Putnam, Hilary, 1975, “The meaning of ‘meaning’”, in Hilary Putnam, Philosophical papers, vol. 2, Cambridge: Cambridge University Press, 215–271.
  • Quiroga, R. Quian, R. Mukamel, E.A. Isham, and I. Fried, 2008, “Human single-neuron responses at the threshold of conscious recognition”, Proceedings of the National Academy of Sciences, 105: 3599–3604.
  • Redelmeier, Donald A., and Daniel Kahneman, 1996, “Patients’ memories of painful medical treatments: Real-time and retrospective evaluations of two minimally invasive procedures”, Pain, 66: 3–8.
  • Rees, Geraint, and Chris Frith, 2007, “Methodologies for identifying the neural correlates of consciousness”, in The Blackwell Companion to Consciousness, Max Velmans and Susan Schneider (eds.), Malden, MA: Blackwell, 553–566.
  • Richet, Charles, 1884, L’homme et l’intelligence, Paris: F. Alcan.
  • Roche, Michael, 2016, “Knowing what one believes – in defense of a dispositional reliabilist extrospective account”, American Philosophical Quarterly, 53: 365–379.
  • –––, 2023, “Introspection, transparency, and desire”, Journal of Consciousness Studies, 30(3): 132–154.
  • Rodriguez, Eugenio, Nathalie George, Jean-Philippe Lachaux, Jacques Martinerie, Bernard Renault, and Francisco J. Varela, 1999, “Perception’s shadow: Long-distance synchronization of human brain activity”, Nature, 397: 430–433.
  • Rorty, Richard, 1970, “Incorrigibility as the mark of the mental”, Journal of Philosophy, 67: 399–424.
  • Rosenthal, David M., 1990, “Two concepts of consciousness”, Philosophical Studies, 49: 329–359.
  • –––, 2001, “Introspection and self-interpretation”, Philosophical Topics, 28(2): 201–233.
  • –––, 2005, Consciousness and Mind, Oxford: Oxford University Press.
  • Ryle, Gilbert, 1949, The concept of mind, New York: Barnes and Noble.
  • Salti, Moti, et al., 2015, “Distinct cortical codes and temporal dynamics for conscious and unconscious percepts”, eLife, 4: e05652.
  • Samoilova, Kateryna, 2016, “Transparency and introspective unification”, Synthese, 193: 3363–3381.
  • Schechter, Elizabeth, 2018, Self-consciousness and “split” brains, Oxford: Oxford University Press.
  • Schwitzgebel, Eric, 2002, “A phenomenal, dispositional account of belief”, Noûs, 36: 249–275.
  • –––, 2005, “Difference tone training”, Psyche, 11(6).
  • –––, 2007, “No unchallengeable epistemic authority, of any sort, regarding our own conscious experience—contra Dennett?”, Phenomenology and the Cognitive Sciences, 6: 107–113.
  • –––, 2010, “Acting contrary to our professed beliefs, or the gulf between occurrent judgment and dispositional belief”, Pacific Philosophical Quarterly, 91: 531–553.
  • –––, 2011a, “Knowing your ownbeliefs”,Canadian Journal of Philosophy, 35(Supplement: Belief and Agency, ed. D. Hunter): 41–62.
  • –––, 2011b,Perplexities ofconsciousness, Cambridge, MA: MIT.
  • –––, 2012, “Introspection, what?”,inIntrospection and consciousness, Declan Smithies andDaniel Stoljar (eds.), Oxford: Oxford University Press.
  • –––, 2021, “The pragmatic metaphysics ofbelief”, inThe fragmented mind, Cristina Borgoni, DirkKindermann, and Andrea Onofri (eds.), Oxford: Oxford UniversityPress.
  • Scollon, Christie Napa, Ed Diener, Shigehiro Oishi, and Robert Biswas-Diener, 2005, “An experience-sampling and cross-cultural investigation of the relation between pleasant and unpleasant affect”, Cognition and Emotion, 19: 27–52.
  • Searle, John R., 1983, Intentionality, Cambridge: Cambridge University Press.
  • –––, 1992, The rediscovery of the mind, Cambridge, MA: MIT Press.
  • Shoemaker, Sydney, 1963, Self-knowledge and self-identity, Ithaca, NY: Cornell University Press.
  • –––, 1968, “Self-reference and self-awareness”, Journal of Philosophy, 65: 555–567.
  • –––, 1988, “On knowing one’s own mind”, Philosophical Perspectives, 2: 183–209.
  • –––, 1994a, “Self-knowledge and ‘inner sense’. Lecture I: The object perception model”, Philosophy and Phenomenological Research, 54: 249–269.
  • –––, 1994b, “Self-knowledge and ‘inner sense’. Lecture II: The broad perceptual model”, Philosophy and Phenomenological Research, 54: 271–290.
  • –––, 1994c, “Self-knowledge and ‘inner sense’. Lecture III: The phenomenal character of experience”, Philosophy and Phenomenological Research, 54: 291–314.
  • –––, 1995, “Moore’s paradox and self-knowledge”, Philosophical Studies, 77: 211–228.
  • –––, 2012, “Self-intimation and second-order belief”, in Introspection and consciousness, Declan Smithies and Daniel Stoljar (eds.), Oxford: Oxford University Press.
  • Siewert, Charles, 2004, “Is experience transparent?”, Philosophical Studies, 117: 15–41.
  • –––, 2012, “On the phenomenology of introspection”, in Introspection and consciousness, Declan Smithies and Daniel Stoljar (eds.), Oxford: Oxford University Press.
  • Singal, Jesse, 2017, “Psychology’s favorite tool for measuring racism isn’t up to the job”, The Cut, available online.
  • Singh, Keshav, forthcoming, “Belief as commitment to truth”, in The Nature of Belief, Jonathan Jong and Eric Schwitzgebel (eds.), Oxford: Oxford University Press.
  • Smallwood, Jonathan, and Jonathan W. Schooler, 2006, “The restless mind”, Psychological Bulletin, 132: 946–958.
  • Smith, A.D., 2008, “Translucent experiences”, Philosophical Studies, 140: 197–212.
  • Smithies, Declan, forthcoming, “Belief as a feeling of conviction”, in The Nature of Belief, Jonathan Jong and Eric Schwitzgebel (eds.), Oxford: Oxford University Press.
  • Spener, Maja, 2024, Introspection: First-person access in science and agency, Oxford: Oxford University Press.
  • Stoljar, Daniel, 2004, “The argument from diaphanousness”, in New essays in the philosophy of language and mind, Maite Ezcurdia, Robert J. Stainton, and Christopher Viger (eds.), Calgary: University of Calgary, 341–390.
  • Stone, Jeff, and Joel Cooper, 2001, “A self-standards model of cognitive dissonance”, Journal of Experimental Social Psychology, 37: 228–243.
  • Summerfield, Christopher, Anthony Ian Jack, and Adrian Philip Burgess, 2002, “Induced gamma activity is associated with conscious awareness of pattern masked nouns”, International Journal of Psychophysiology, 44: 93–100.
  • Taylor, Shelley E., and Jonathon D. Brown, 1988, “Illusion and well-being: A social psychological perspective on mental health”, Psychological Bulletin, 103: 193–210.
  • Thomas, Nigel, 1999, “Are theories of imagery theories of imagination?”, Cognitive Science, 23: 207–245.
  • Timmermans, Bert, and Alex Cleeremans, 2015, “How can we measure awareness? An overview of current methods”, in Behavioural methods in consciousness research, Morten Overgaard (ed.), Oxford: Oxford University Press.
  • Titchener, E.B., 1901–1905, Experimental psychology, New York: Macmillan.
  • –––, 1908 [1973], Lectures on the elementary psychology of feeling and attention, New York: Arno.
  • –––, 1912a, “Prolegomena to a study of introspection”, American Journal of Psychology, 23: 427–448.
  • –––, 1912b, “The schema of introspection”, American Journal of Psychology, 23: 485–508.
  • Tong, Frank, Ming Meng, and Randolph Blake, 2006, “Neural bases of binocular rivalry”, Trends in Cognitive Sciences, 10: 502–511.
  • Tononi, Giulio, and Christof Koch, 2008, “The neural correlates of consciousness: An update”, Annals of the New York Academy of Sciences: The Year in Cognitive Neuroscience 2008, 1124: 239–261.
  • Tsuchiya, Naotsugu, Melanie Wilke, Stefan Frässle, and Victor A.F. Lamme, 2015, “No-report paradigms: Extracting the true neural correlates of consciousness”, Trends in Cognitive Sciences, 19: 757–770.
  • Tye, Michael, 1995, Ten problems about consciousness, Cambridge, MA: MIT Press.
  • –––, 2000, Consciousness, color, and content, Cambridge, MA: MIT Press.
  • –––, 2002, “Representationalism and the transparency of experience”, Noûs, 36: 137–151.
  • –––, 2009, Consciousness revisited, Cambridge, MA: MIT Press.
  • Van Gulick, Robert, 1993, “Understanding the phenomenal mind: Are we all just armadillos?”, in Consciousness: Psychological and philosophical essays, Martin Davies and Glyn W. Humphreys (eds.), Oxford: Blackwell, 134–154.
  • Vanman, Eric J., Brenda Y. Paul, Tiffany A. Ito, and Norman Miller, 1997, “The modern face of prejudice and structural features that moderate the effect of cooperation on affect”, Journal of Personality and Social Psychology, 73: 941–959.
  • Varela, Francisco J., 1996, “Neurophenomenology: A methodological remedy for the hard problem”, Journal of Consciousness Studies, 3(4): 330–349.
  • Vazire, Simine, 2010, “Who knows what about a person? The Self-Other Knowledge Asymmetry (SOKA) model”, Journal of Personality and Social Psychology, 98: 281–300.
  • Velleman, J. David, 2000, The possibility of practical reason, Oxford: Oxford University Press.
  • Vermersch, Pierre, 1999, “Introspection as practice”, Journal of Consciousness Studies, 6(2–3): 17–42.
  • Watson, John B., 1913, “Psychology as the behaviorist views it”, Psychological Review, 20: 158–177.
  • Weksler, Assaf, Hilla Jacobson, and Zohar Z. Bronfman, 2019, “The transparency of experience and the neuroscience of attention”, Synthese, 198: 4709–4730.
  • Williams, Amanda C. de C., Huw Talfryn Oakley Davies, and Yasmin Chadury, 2000, “Simple pain rating scales hide complex idiosyncratic meanings”, Pain, 85: 457–463.
  • Wilson, Timothy D., 2002, Strangers to ourselves, Cambridge, MA: Harvard University Press.
  • Wilson, Timothy D., Samuel Lindsey, and Tonya T. Schooler, 2000, “A model of dual attitudes”, Psychological Review, 107: 101–126.
  • Wittenbrink, Bernd, Charles M. Judd, and Bernadette Park, 1997, “Evidence for racial prejudice at the implicit level and its relationship with questionnaire measures”, Journal of Personality and Social Psychology, 72: 262–274.
  • Wittenbrink, Bernd, and Norbert Schwarz (eds.), 2007, Implicit measures of attitudes, New York: Guilford.
  • Wittgenstein, Ludwig, 1953 [1968], Philosophical investigations, 3rd edition, G.E.M. Anscombe (trans.), New York: Macmillan.
  • Wollheim, Richard, 1981, Sigmund Freud, New York: Cambridge University Press.
  • –––, 2003, “On the Freudian unconscious”, Proceedings and Addresses of the American Philosophical Association, 77(2): 23–35.
  • Wright, Crispin, 1989, “Wittgenstein’s later philosophy of mind: Sensation, privacy, and intention”, Journal of Philosophy, 86: 622–634.
  • –––, 1998, “Self-knowledge: The Wittgensteinian legacy”, in Knowing our own minds, Crispin Wright, Barry C. Smith, and Cynthia Macdonald (eds.), Oxford: Oxford University Press.
  • Wundt, Wilhelm, 1874 [1908], Grundzüge der physiologischen Psychologie (6th ed.), Leipzig: Wilhelm Engelmann.
  • –––, 1888, “Selbstbeobachtung und innere Wahrnehmung”, Philosophische Studien, 4: 292–309.
  • –––, 1896 [1902], Outlines of psychology (4th ed.), 2nd English ed., Charles Hubbard Judd (trans.), Leipzig: Wilhelm Engelmann.
  • –––, 1907, “Über Ausfrageexperimente und über die Methoden zur Psychologie des Denkens”, Psychologische Studien, 3: 301–360.
  • Zawidzki, Tad, 2016, “Mindshaping and self-interpretation”, in The Routledge Handbook of the Philosophy of the Social Mind, Julian Kiverstein (ed.), New York: Routledge, 495–513.
  • Zhu, Michael, Richard Hardstone, and Biyu J. He, 2022, “Neural oscillations promoting perceptual stability and perceptual memory during bistable perception”, Scientific Reports, 12: 2760.
  • Zimmerman, Aaron, 2018, Belief: A pragmatic picture, Oxford: Oxford University Press.

Related Entries

behaviorism | belief | bias, implicit | Brentano, Franz | consciousness | consciousness: and intentionality | consciousness: higher-order theories | consciousness: representational theories of | consciousness: unity of | delusion | Descartes, René: epistemology | externalism: and self-knowledge | externalism about the mind | folk psychology: as a theory | folk psychology: as mental simulation | functionalism | inner speech | intentionality: phenomenal | James, William | Kant, Immanuel: view of mind and consciousness of self | mental content: narrow | mental imagery | mental representation | pain | perception: the problem of | phenomenology | propositional attitude reports | qualia | Ryle, Gilbert | self-consciousness | self-consciousness: phenomenological approaches to | self-deception | self-knowledge | Wundt, Wilhelm Maximilian

Copyright © 2024 by
Eric Schwitzgebel <eschwitz@ucr.edu>

Open access to the SEP is made possible by a world-wide funding initiative.


The Stanford Encyclopedia of Philosophy is copyright © 2024 by The Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

