Higher-order theories of consciousness try to explain the difference between unconscious and conscious mental states in terms of a relation obtaining between the conscious state in question and a higher-order representation of some sort (either a higher-order perception of that state, or a higher-order thought about it). The most challenging properties to explain are those involved in phenomenal consciousness—the sort of state that has a subjective dimension, that has ‘feel’, or that it is like something to undergo. These properties will form the focus of this article.
One of the advances made in the last few decades has been to distinguish between different questions concerning consciousness (see particularly: Rosenthal 1986; Dretske 1993; Block 1995; Lycan 1996). Not everyone agrees on quite which distinctions need to be drawn. But all are agreed that we should distinguish creature consciousness from mental-state consciousness. It is one thing to say of an individual person or organism that it is conscious (either in general or of something in particular); and it is quite another thing to say of one of the mental states of a creature that it is conscious.
It is also agreed that within creature-consciousness itself we should distinguish between intransitive and transitive variants. To say of an organism that it is conscious simpliciter (intransitive) is to say just that it is awake, as opposed to asleep or comatose. There don’t appear to be any deep philosophical difficulties lurking here (or at least, they aren’t difficulties specific to the topic of consciousness, as opposed to mentality in general). But to say of an organism that it is conscious of such-and-such (transitive) is normally to say at least that it is perceiving such-and-such, or aware of such-and-such. So we say of the mouse that it is conscious of the cat outside its hole, in explaining why it doesn’t come out; meaning that it perceives the cat’s presence. To provide an account of transitive creature-consciousness would thus be to attempt a theory of perception.
There is a choice to be made concerning transitive creature-consciousness, failure to notice which may be a potential source of confusion. For we have to decide whether the perceptual state in virtue of which an organism may be said to be transitively-conscious of something must itself be a conscious one (state-conscious—see below). If we say ‘Yes’ then we shall need to know more about the mouse than merely that it perceives the cat if we are to be assured that it is conscious of the cat—we shall need to establish that its percept of the cat is itself conscious. If we say ‘No’, on the other hand, then the mouse’s perception of the cat will be sufficient for the mouse to count as conscious of the cat; but we may have to say that although it is conscious of the cat, the mental state in virtue of which it is so conscious is not itself a conscious one! It may be best to by-pass any danger of confusion here by avoiding the language of transitive creature-consciousness altogether. Nothing of importance would be lost to us by doing this.
Turning now to the notion of mental-state consciousness, the major distinction here is between phenomenal consciousness, on the one hand—which is a property of states that it is like something to be in, that have a distinctive ‘feel’ (Nagel 1974)—and various functionally-definable forms of access consciousness, on the other (Block 1995). Most theorists believe that there are mental states—such as occurrent thoughts or judgments—that are access-conscious (in whatever is the correct functionally-definable sense), but that are not phenomenally conscious. In contrast, there is considerable dispute as to whether mental states can be phenomenally conscious without also being conscious in the functionally-definable sense—and even more dispute about whether phenomenal consciousness can be reductively explained in functional and/or representational terms.
It is plain that there is nothing deeply problematic about functionally-definable notions of mental-state consciousness, from a naturalistic perspective. For mental functions and mental representations are the staple fare of naturalistic accounts of the mind. But this leaves plenty of room for dispute about the form that the correct functional account should take. Some claim that for a state to be conscious in the relevant sense is for it to be poised to have an impact on the organism’s decision-making processes (Kirk 1994; Dretske 1995; Tye 1995, 2000), perhaps also with the additional requirement that those processes should be distinctively rational ones (Block 1995). Others think that the relevant requirement for access-consciousness is that the state should be suitably related to higher-order representations—experiences and/or thoughts—of that very state (Armstrong 1968, 1984; Rosenthal 1986, 1993, 2005; Dennett 1978a, 1991; Carruthers 1996, 2000, 2005; Lycan 1987, 1996; Gennaro 2012).
What is often thought to be naturalistically problematic, in contrast, is phenomenal consciousness (Nagel 1974, 1984; Jackson 1982, 1986; McGinn 1991; Block 1995; Chalmers 1996). And what is really and deeply controversial is whether phenomenal consciousness can be explained in terms of some or other functionally-definable notion. Cognitive (or representational) theories maintain that it can. Higher-order cognitive theories maintain that phenomenal consciousness can be reductively explained in terms of representations (either experiences or thoughts) that are higher-order. It is such theories that concern us here.
Higher-order theories, like cognitive/representational theories in general, assume that the right level at which to seek an explanation of phenomenal consciousness is a cognitive one, providing an explanation in terms of some combination of causal role and intentional content. All such theories claim that phenomenal consciousness consists in a certain kind of intentional or representational content (analog or ‘fine-grained’ in comparison with any concepts we possess) figuring in a certain distinctive position in the causal architecture of the mind. They must therefore maintain that these latter sorts of mental property don’t already implicate or presuppose phenomenal consciousness. In fact, all cognitive accounts are united in rejecting the thesis that the very properties of mind or mentality already presuppose phenomenal consciousness, as proposed by Searle (1992, 1997) for example. The higher-order approach does not attempt to reduce consciousness directly to neurophysiology; rather, the reduction is in mentalistic terms, that is, by using such notions as thought and awareness.
The major divide amongst representational theories of phenomenal consciousness in general is between accounts that are provided in purely first-order terms and those that implicate higher-order representations of one sort or another (see below). These higher-order theorists will allow that first-order accounts—of the sort defended by Dretske (1995) and Tye (1995), for example—can already make some progress with the problem of consciousness. According to first-order views, phenomenal consciousness consists in analog or fine-grained contents that are available to the first-order processes that guide thought and action. So a phenomenally conscious percept of red, for example, consists in a state with the analog content red which is tokened in such a way as to feed into thoughts about red, or into actions that are in one way or another guided by redness. Now, the point to note in favor of such an account is that it can explain the natural temptation to think that phenomenal consciousness is in some sense ineffable, or indescribable. This will be because such states have fine-grained contents that can slip through the mesh of any conceptual net. We can always distinguish many more shades of red than we have concepts for, or could describe in language (other than indexically—e.g., ‘That shade’).
The main motivation behind higher-order theories of consciousness, in contrast, derives from the belief that all (or at least most) mental-state types admit of both conscious and unconscious varieties. Almost everyone now accepts, for example (post-Freud), that beliefs and desires can be activated unconsciously. (Think, here, of the way in which problems can apparently become resolved during sleep, or while one’s attention is directed to other tasks. Notice, too, that appeals to unconscious intentional states are now routine in cognitive science.) And then if we ask what makes the difference between a conscious and an unconscious mental state, one natural answer is that conscious states are states that we are aware of. And if awareness is thought to be a form of creature-consciousness (see section 1 above), then this will translate into the view that conscious states are states of which the subject is aware, or states of which the subject is creature-conscious. That is to say, these are states that are the objects of some sort of higher-order representation—whether a higher-order perception or experience, or a higher-order thought. This is similar to the widely referenced Transitivity Principle (TP), which says that a conscious state is a state whose subject is, in some way, aware of being in it. On the other hand, a mental state of which a subject is completely unaware is clearly an unconscious state. (See also Lycan’s (2001b) ‘related simple argument’ for a higher-order representation account of consciousness.)
One crucial question, then, is whether perceptual states as well as intentional states admit of both conscious and unconscious varieties. Can there be, for example, such a thing as an unconscious visual perceptual state? Higher-order theorists are united in thinking that there can. Armstrong (1968) uses the example of absent-minded driving to make the point. Most of us at some time have had the rather unnerving experience of ‘coming to’ after having been driving on ‘automatic pilot’ while our attention was directed elsewhere—perhaps having been day-dreaming or engaged in intense conversation with a passenger. We were apparently not consciously aware of much of the route we had recently taken, nor of any of the obstacles we avoided on the way. Yet we must surely have been seeing, or we would have crashed the car. Others have used the example of blindsight (Carruthers 1989, 1996). This is a condition in which subjects have had a portion of their primary visual cortex destroyed, and apparently become blind in a region of their visual field as a result. But it has now been known for some time that if subjects are asked to guess at the properties of their ‘blind’ field (e.g. whether it contains a horizontal or vertical grating, or whether it contains an ‘X’ or an ‘O’), they prove remarkably accurate. Subjects can also reach out and grasp objects in their ‘blind’ field with something like 80% or more of normal accuracy, and can catch a ball thrown from their ‘blind’ side, all without conscious awareness. (See Weiskrantz 1986, 1997, for details and discussion.)
A powerful case for the existence of unconscious visual experience has also been generated by the two-systems theory of vision proposed and defended by Milner and Goodale (1995; see also Jacob and Jeannerod 2003; Glover 2004). They review a wide variety of kinds of neurological and neuro-psychological evidence for the substantial independence of two distinct visual systems, instantiated in the temporal and parietal lobes respectively. They conclude that the parietal lobes provide a set of specialized semi-independent modules for the on-line visual control of action; whereas the temporal lobes are primarily concerned with more off-line functions such as visual learning and object recognition. And only the perceptions generated by the temporal-lobe system are phenomenally conscious, on their account. (Note that this isn’t the familiar distinction between what and where visual systems, but is rather a successor to it. For the temporal-lobe system is supposed to have access both to property information and to spatial information. Instead, it is a distinction between a combined what-where system located in the temporal lobes and a how-to or action-guiding system located in the parietal lobes.)
To get the flavor of Milner and Goodale’s hypothesis, consider just one strand from the wealth of evidence that they provide. This is a neurological syndrome called visual form agnosia, which results from damage localized to both temporal lobes, leaving primary visual cortex and the parietal lobes intact. (Visual form agnosia is normally caused by carbon monoxide poisoning, for reasons that are little understood.) Such patients cannot recognize objects or shapes, and may be capable of little conscious visual experience; but their sensorimotor abilities remain largely intact.
One particular patient—D.F.—has now been examined in considerable detail. While D.F. is severely agnosic, she isn’t completely lacking in conscious visual experience. Her capacities to perceive colors and textures are almost completely preserved. (Why just these sub-modules in her temporal cortex should have been spared isn’t known.) As a result, she can sometimes guess the identity of a presented object—recognizing a banana, say, from its yellow color and the distinctive texture of its surface. But she is unable to perceive the shape of the banana (whether straight or curved, say); nor its orientation (upright or horizontal; pointing towards her or across). Yet many of her sensorimotor abilities are close to normal—she would be able to reach out and grasp the banana, orienting her hand and wrist appropriately for its position and orientation, and using a normal and appropriate finger grip. Under experimental conditions it turns out that although D.F. is at chance when identifying the orientation of a broad line or letter-box, she is almost normal when posting a letter through a similarly-shaped slot oriented at random angles. In the same way, although she is at chance when trying to discriminate between rectangular blocks of very different sizes, her reaching and grasping behaviors when asked to pick up such a block are virtually indistinguishable from those of normal controls. It is very hard to make sense of these data without supposing that the sensorimotor perceptual system is functionally and anatomically distinct from the object-recognition/conscious system.
But what implications does this have for phenomenal consciousness? Must these unconscious percepts also be lacking in phenomenal properties? Most people think so. While it may be possible to get oneself to believe that the perceptions of the absent-minded car driver can remain phenomenally conscious (perhaps lying outside of the focus of attention, or being instantly forgotten), it is very hard to believe that either blindsight percepts or D.F.’s sensorimotor perceptual states might be phenomenally conscious ones. For these perceptions are ones to which the subjects of those states are blind, and of which they cannot be aware. And the question, then, is what makes the relevant difference? What is it about a conscious perception that renders it phenomenal, that a blindsight perceptual state would correspondingly lack? Higher-order theorists are united in thinking that the relevant difference consists in the presence of something higher-order in the first case that is absent in the second. The same would go for the difference between unconscious and conscious desires, emotions, pains, and so on. The core intuition, again, is that a phenomenally conscious state will be a state of which the subject is aware.
What options does a first-order theorist have to resist this conclusion? One is to deny that the data are as problematic as they appear (as does Dretske 1995). It can be said that the unconscious states in question lack the kind of fineness of grain and richness of content necessary to count as genuinely perceptual states. On this view, the contrast discussed above isn’t really a difference between conscious and unconscious perceptions, but rather between conscious perceptions, on the one hand, and unconscious belief-like states, on the other. Another option is to accept the distinction between conscious and unconscious perceptions, and then to explain that distinction in first-order terms. It might be said, for example, that conscious perceptions are those that are available to belief and thought, whereas unconscious ones are those that are available to guide movement (Kirk 1994). A final option is to bite the bullet, and insist that blindsight and sensorimotor perceptual states are indeed phenomenally conscious while not being access-conscious. (See Block 1995; Tye 1995; and Nelkin 1996; all of whom defend versions of this view.) On this account, blindsight percepts are phenomenally conscious states to which the subjects of those states are blind. Higher-order theorists will argue, of course, that none of these alternatives is acceptable (see, e.g., Carruthers 2000; Rosenthal 2005).
Further, Lau and Rosenthal (2011) survey the empirical evidence pertaining to the difference between higher-order theories and first-order ones. While much is equivocal, and many questions are left unanswered, they point to a pair of studies that support a higher-order account. One is Lau and Passingham (2006), who are able to demonstrate using carefully controlled stimuli that there are circumstances in which people’s subjective reports of visual experience are impaired while their first-order discrimination abilities remain fully intact. They also find that visual consciousness in these conditions is specifically associated with activity in a region of dorsolateral prefrontal cortex. Then in a follow-up study Rounis et al. (2010) find that transcranial magnetic stimulation directed at this region of cortex, thereby disrupting its activity, also has a significant impact on people’s meta-visual awareness, but again without impairing first-order task performance. The degree to which the prefrontal cortex is required for having a conscious state, and the view that the prefrontal cortex is the likely site of all higher-order thoughts, are also the subject of vigorous continuing debate (Block 1995; Gennaro 2012, chapter nine; Kozuch 2014; Odegaard, Knight, and Lau 2017).
Most generally, then, higher-order theories of phenomenal consciousness claim the following:
Higher Order Theory (In General):
A phenomenally conscious mental state is a mental state (of a certainsort—see below) that either is, or is disposed to be, the objectof a higher-order representation of a certain sort (see below).
Higher-order theorists do agree that one must normally become aware of the lower-order state non-inferentially, since mental states can sometimes become targets of higher-order representation via conscious inference without being phenomenally conscious. For example, if I become aware of my unconscious desire to kill my boss because I have consciously inferred it from a session with my psychiatrist, then the characteristic phenomenal feel of such a conscious desire may be absent.
Still, there are then two main dimensions along which higher-order theorists disagree amongst themselves. One concerns whether the higher-order states in question are perception-like, on the one hand, or thought-like, on the other. A thought is composed of or constituted by concepts. Those taking the former option are higher-order perception (often called ‘inner-sense’) theorists, and those taking the latter option are higher-order thought theorists. The two theories are therefore often abbreviated as HOP (higher-order perception) and HOT (higher-order thought) theory. The other general disagreement is internal to higher-order thought approaches, and concerns whether the relevant relation between the first-order state and the higher-order thought is one of availability or not. That is, the question is whether a state is conscious by virtue of being disposed to give rise to a higher-order thought, or rather by virtue of being the actual target of such a thought. These are the three main options that will now concern us. (A fourth will be considered in section 6.)
According to this view, humans not only have first-order non-conceptual and/or analog perceptions of states of their environments and bodies, they also have second-order non-conceptual and/or analog perceptions of their first-order states of perception. And the most popular version of higher-order perception (HOP) theory holds, in addition, that humans (and perhaps other animals) not only have sense-organs that scan the environment/body to produce fine-grained representations, but they also have inner senses which scan the first-order senses (i.e. perceptual experiences) to produce equally fine-grained, but higher-order, representations of those outputs. A version of this view was first proposed by the British Empiricist philosopher John Locke (1690). In our own time it has been defended especially by Armstrong (1968, 1984) and by Lycan (1996, 2004).
A terminological point: ‘inner-sense theory’ should more strictly be called ‘higher-order-sense theory’, since we of course have senses that are physically ‘inner’, such as pain-perception and internal touch-perception, that aren’t intended to fall under its scope. For these are first-order senses on a par with vision and hearing, differing only in that their purpose is to detect properties of the body, rather than of the external world (Hill 2004). According to the sort of higher-order theory that is presently under discussion, these senses, too, will need to have their outputs scanned to produce higher-order analog contents in order for those outputs to become phenomenally conscious. In what follows, however, the term ‘inner sense’ will be used to mean, more strictly, ‘higher-order sense’.
We therefore have the following proposal to consider:
Inner-Sense Theory:
A phenomenally conscious mental state is a state withanalog/non-conceptual intentional content, which is in turn the targetof a higher-order analog/non-conceptual intentional state, via theoperations of a faculty of ‘inner sense’.
On this account, the difference between a phenomenally conscious percept of red and the sort of unconscious percepts of red that guide the guesses of a blindsighter and the activity of the sensorimotor system, is as follows. The former is scanned by our inner senses to produce a higher-order analog state with the content experience of red or seems red, whereas the latter states aren’t—they remain merely first-order states with the analog content red; and in so remaining, they lack any dimension of seeming or subjectivity. According to inner-sense theory, it is the higher-order perceptual contents produced by the operations of our inner senses that make some mental states with analog contents, but not others, available to their subjects.
One of the main advantages of inner-sense theory is that it can explain how it is possible for us to acquire purely recognitional concepts of experience. For if we possess higher-order perceptual contents, then it should be possible for us to learn to recognize the occurrence of our own perceptual states immediately, grounded in those higher-order analog contents. (Compare the way in which first-order perceptual contents representing color and sound enable us to acquire first-order recognitional concepts for colors and sounds.) And this should be possible without those recognitional concepts thereby having any conceptual connections with our beliefs about the content of the states recognized, nor with any of our surrounding mental concepts. This is then how inner-sense theory will claim to explain the familiar philosophical thought-experiments concerning one’s own experiences, which are supposed to cause such problems for physicalist/naturalistic accounts of the mind (Kripke 1972; Chalmers 1996). (For discussion of this ‘phenomenal concept strategy’ see Carruthers and Veillet 2007.)
For example, I can think, ‘R [an experience as of red] might have occurred in me, or might normally occur in others, in the absence of any of its actual causes and effects.’ So on any view of intentional content that sees content as tied to normal causes (i.e. to information carried) and/or to normal effects (i.e. to teleological or inferential role), experience of type R might occur without representing red. Likewise I can think, ‘R might occur in someone without occupying the role of experience, but rather (say) of belief.’ In the same sort of way, I shall be able to think, ‘P [an experience of pain] might have occurred in me, or might occur in others, in the absence of any of the usual causes and effects of pain. There could be someone in whom P experiences occur but who isn’t bothered by them, and where those experiences are never caused by tissue damage or other forms of bodily insult. And conversely, there could be someone who behaves and acts just as I do when in pain, and in response to the same physical causes, but who is never subject to P types of experience.’ If we possess purely recognitional concepts of experience such as R and P, then the thinkability of such thoughts is unthreatening to a naturalistic approach to the mind.
Inner-sense theorists are thus well placed to respond to those who claim that there is an unbridgeable explanatory gap between all physical, functional, and intentional facts, on the one hand, and the facts of phenomenal consciousness, on the other (Levine 1983; Chalmers 1996). And likewise they can explain the conceivability of zombies without becoming committed to the existence of any non-physical properties of experience (contra Chalmers 1996). It is the conceptual isolation of our higher-order recognitional concepts of experience that explains how there can be no a priori entailment between physical, functional, and intentional facts and the occurrence of states of type R or P (where R and P express purely recognitional concepts).
Inner-sense theory does face a number of difficulties, however. One objection is as follows (see Dretske 1995; Güzeldere 1995). If inner-sense theory were true, then how is it that there isn’t any phenomenology distinctive of inner sense, in the way that there is a phenomenology associated with each outer sense? Since each of the outer senses gives rise to a distinctive set of phenomenological properties, one might expect that if there were such a thing as inner sense, then there would also be a phenomenology distinctive of its operation. But there doesn’t appear to be any.
This point turns on the so-called ‘transparency’ of our perceptual experience (Harman 1990). Concentrate as hard as you like on your ‘outer’ (first-order) experiences—you won’t find any further phenomenological properties arising out of the attention you pay to them, beyond those already belonging to the contents of the experiences themselves. Paying close attention to your experience of the color of the red rose, for example, just produces attention to the redness—a property of the rose. Put like this, however, the objection just seems to beg the question in favor of first-order theories of phenomenal consciousness. It assumes that first-order—‘outer’—perceptions already have a phenomenology independently of their targeting by inner sense. But this is just what an inner-sense theorist will deny. And then in order to explain the absence of any kind of higher-order phenomenology, an inner-sense theorist only needs to maintain that our higher-order perceptions are never themselves targeted by an inner-sense-organ which might produce third-order analog representations of them in turn.
Another objection to inner-sense theory is as follows (see Sturgeon 2000). If there really were an organ of inner sense, then it ought to be possible for it to malfunction, just as our first-order senses sometimes do. And in that case, it ought to be possible for someone to have a first-order percept with the analog content red causing a higher-order percept with the analog content seems-orange. Someone in this situation would be disposed to judge, ‘It’s red’, immediately and non-inferentially (i.e. not influenced by beliefs about the object’s normal color or their own physical state). But at the same time they would be disposed to judge, ‘It seems orange’. Not only does this sort of thing never apparently occur, but the idea that it might do so conflicts with a powerful intuition. This is that our awareness of our own experiences is immediate, in such a way that to think that you are undergoing an experience of a certain sort is to be undergoing an experience of that sort. But if inner-sense theory is correct, then it ought to be possible for someone to believe that they are in a state of seeming-orange when they are actually in a state of seeming-red. (The problem of misrepresentation will be addressed further below, in sections 6 and 7.)
A different sort of objection to inner-sense theory is developed by Carruthers (2000). It starts from the fact that the internal monitors postulated by such theories would need to have considerable computational complexity in order to generate the requisite higher-order experiences. In order to perceive an experience, the organism would need to have mechanisms to generate a set of internal representations with an analog or non-conceptual content representing the content of that experience, in all its richness and fine-grained detail. And notice that any inner scanner would have to be a physical device (just as the visual system itself is) which depends upon the detection of those physical events in the brain that are the outputs of the various sensory systems (just as the visual system is a physical device that depends upon detection of physical properties of surfaces via the reflection of light). For it is hard to see how any inner scanner could detect the presence of an experience qua experience. Rather, it would have to detect the physical realizations of experiences in the brain, and construct the requisite higher-order representation of the experiences that those physical events realize, on the basis of that physical-information input. This makes it seem inevitable that the scanning device that supposedly generates higher-order experiences of our first-order visual experience would have to be almost as sophisticated and complex as the visual system itself.
Given this complexity in the operations of our organs of inner sense, there should be some plausible story to tell about the evolutionary pressures that led to their construction (Pinker 1994, 1997). But there would seem to be no such stories on the market. The most plausible suggestion is that inner sense might have evolved to subserve our capacity to think about the mental states of conspecifics, thus enabling us to predict their actions and manipulate their responses. (This is the so-called ‘Machiavellian hypothesis’ to explain the evolution of intelligence in the great-ape lineage. See Byrne and Whiten 1988, 1998; and see Goldman 2006, for a view of inner sense of this sort.) But this suggestion presupposes that the organism must already have some capacity for higher-order thought, since it is such thoughts that inner sense is supposed to subserve. And yet as we shall see shortly (in section 5), some higher-order theories can claim all of the advantages of inner-sense theory as an explanation of phenomenal consciousness, but without the need to postulate any ‘inner scanners’.
Lycan no longer holds HOP theory (Sauret and Lycan 2014), mainly because he now thinks that some sort of attention to first-order states is sufficient for an account of conscious states, and that there is little reason to suppose that the attentional mechanism in question is a higher-order representational state (see also Prinz 2012).
Actualist higher-order thought (HOT) theory is a proposal about the nature of state-consciousness in general, of which phenomenal consciousness is but one species. Its main proponent has been Rosenthal (1986, 1993, 2005). The proposal is this: a conscious mental state M, of mine, is a state that is actually causing an activated thought (generally a non-conscious one) that I have M, and causing it non-inferentially. (The qualification concerning non-inferential causation will be discussed in a moment.) An account of phenomenal consciousness can then be generated by stipulating that the mental state M should have some causal role and/or content of a certain distinctive sort in order to count as an experience (e.g., with an analog content, perhaps), and that when M is an experience (or a mental image, bodily sensation, or emotional feeling), it will be phenomenally conscious when (and only when) suitably targeted. The HOT is typically of the form: ‘I am in mental state M.’
We therefore have the following proposal to consider:
Actualist Higher-Order Thought Theory:
A phenomenally conscious mental state is a state of a certain sort (e.g. with analog/non-conceptual intentional content, perhaps) which is the object of a higher-order thought, and which causes that thought non-inferentially.
As noted earlier, Rosenthal interprets the non-inferential requirement as ruling out only conscious inferences in the generation of a consciousness-making higher-order thought. This enables him to avoid having to say that my unconscious motives become conscious when I learn of them under psychoanalysis, or that my jealousy is conscious when I learn of it by noticing and interpreting my own behavior. But Rosenthal (2005) thinks that unconscious self-interpretation is acceptable as a source of the conscious status of the states thereby attributed. So if I arrive at the thought that I am feeling cheerful by unconsciously noticing the spring in my own step and the smile on my own face, and drawing an unconscious inference, my cheerfulness will thereby have been rendered conscious. This aspect of Rosenthal’s actualist form of HOT theory would appear to be optional for a HOT theorist, however.
In addition, and more controversially, Rosenthal (2005) thinks that the occurrence of a suitably caused HOT is sufficient for consciousness, even in the absence of any targeted first-order state (usually called ‘targetless’ or ‘empty’ HOTs). So I am undergoing a conscious experience of red provided that I think that I am undergoing an experience of red, even if I am actually in no first-order perceptual state whatever. This aspect of Rosenthal’s view, too, appears optional for an actualist HOT theorist. Such a theorist can—and perhaps should—insist that phenomenally conscious experience occurs when and only when a first-order perceptual state causes a higher-order thought about the existence of that state in a way that doesn’t depend upon self-interpretation. In recent years, the twin problems of misrepresentation between HOTs and their first-order targets, and of targetless HOTs, have led to significant disagreement among HOT theorists (see section 7 below).
The actualist HOT account avoids some of the difficulties inherent in inner-sense theory, while retaining the latter’s ability to explain the distinction between conscious and unconscious perceptions. (Conscious perceptions will be analog states that are targeted by a HOT, whereas perceptions such as those involved in blindsight or subliminal perception will be unconscious by virtue of not being so targeted.) In particular, it is easy to see a function for HOTs, in general, and to tell a story about their likely evolution. A capacity to entertain HOTs about experiences would enable a creature to negotiate the is/seems distinction, perhaps learning not to trust its own experiences in certain circumstances, and also to induce appearances in others, by deceit. And a capacity to entertain HOTs about mental states (such as beliefs and desires) would enable a creature to reflect on, and to alter, its own beliefs and patterns of reasoning, as well as to predict and manipulate the thoughts and behaviors of others. Indeed, it can plausibly be claimed that it is our capacity to target higher-order thoughts on our own mental states that underlies our status as rational agents (Burge 1996; Sperber 1996; Rolls 2004).
A common initial objection to HOT theory (and also to HOP theory) is that it leads to an infinite regress. It might seem that an infinite regress results because a conscious mental state (M) must be accompanied by a HOT, which, in turn, must be accompanied by another HOT, and so on. However, the standard and widely accepted reply is that when M is conscious, the HOT is not itself conscious (Rosenthal 1986, 2005). M is a first-order world-directed conscious state, such as a desire or perception, accompanied by an unconscious HOT. But when the HOT is itself conscious, there is yet another higher-order (or third-order) thought directed at the conscious HOT. This would be a case of introspection, according to HOT theory, in which one’s attention is directed inward at M (as when I introspect my desire). When this crucial distinction is overlooked, it can lead to misguided objections, such as supposing that, according to HOT theory, having any conscious state (even for animals and infants) requires the ability to introspect.
One objection to HOT theory is due to Dretske (1993). We are asked to imagine a case in which we carefully examine two line-drawings, say (or in Dretske’s example, two patterns of differently-sized spots). These drawings are similar in almost all respects, but differ in just one aspect—in Dretske’s example, one of the pictures contains a black spot that the other lacks. It is surely plausible that, in the course of examining these two pictures, one will have enjoyed a conscious visual experience of the respect in which they differ—e.g. of the offending spot. But, as is familiar, one can be in this position while not knowing that the two pictures are different, or in what way they are different. In which case, since one can have a conscious experience (e.g. of the spot) without being aware that one is having it, consciousness cannot require higher-order awareness.
Replies to this objection have been made by Seager (1994), Byrne (1997), and Rosenthal (2005), among others. They point out that it is one thing to have a conscious experience of the aspect that differentiates the two pictures, and quite another to consciously experience that the two pictures are differentiated by that aspect. That is, consciously seeing the extra spot in one picture needn’t mean seeing that this is the difference between the two pictures. So while scanning the two pictures one will enjoy conscious experience of the extra spot. A HOT theorist will say that this means undergoing a percept with the content spot here that forms the target of a HOT that one is undergoing a perception with that content. But this can perfectly well be true without one undergoing a percept with the content spot here in this picture but absent here in that one. And it can also be true without one forming any HOT to the effect that one is undergoing a perception with the content spot here when looking at a given picture but not when looking at the other. In which case the purported counter-example isn’t really a counter-example.
Another objection to actualist HOT theory is epistemological, and is due to Goldman (2000). It turns crucially on the fact that the consciousness-making higher-order thoughts postulated by the theory are, themselves, characteristically unconscious. The objection goes like this. When I undergo a conscious mental state M, I generally know, or have good reason to believe, that M is conscious. But how can this be, if what makes M conscious is the existence of an unconscious HOT targeted on M? Since I don’t know that this thought exists, it seems that I shouldn’t be able to know that M is conscious, either. As Goldman himself acknowledges, however, this argument can only really work on the assumption that actualist HOT theory is supposed to be some sort of analytic or logical truth. Rosenthal has always made clear that the theory isn’t intended to be a piece of conceptual analysis, but is rather an account of the properties that constitute the property of being conscious (see Rosenthal 1986, as well as his 2005). And the epistemological argument gets no traction against this sort of view.
A different sort of problem with the actualist version of higher-order thought theory relates to the huge number of thoughts that would have to be caused by any given phenomenally conscious experience. (This is the analogue of the ‘computational complexity’ objection to inner-sense theory, sketched in section 3 above.) Consider just how rich and detailed a conscious experience can be. It would seem that there can be an immense amount of which we can be consciously aware at any one time. Imagine looking down on a city from a window high up in a tower-block, for example. In such a case you can have phenomenally conscious percepts of a complex distribution of trees, roads, and buildings; colors on the ground and in the sky above; moving cars and pedestrians; and so on. And you can—it seems—be conscious of all of this simultaneously. According to actualist HOT theory, then, it seems you would need to have a distinct activated HOT for each distinct aspect of your experience—either that, or just a few such thoughts with immensely complex contents. Either way, the objection is the same. For it seems implausible that all of this higher-order activity should be taking place (albeit non-consciously) every time someone is the subject of a complex conscious experience. What would be the point? And think of the amount of cognitive/neural space that these thoughts would take up! (In contrast, we know that neural tissue and activity are expensive; see Aiello and Wheeler 1995; and we also know that as a result of such constraints, the wiring diagram for the brain is about as efficient as it is possible for it to be; see Cherniak et al. 2004.)
This objection to actualist forms of HOT theory is considered at some length in Carruthers (2000), where a variety of possible replies are discussed and evaluated. Perhaps the most plausible and challenging such reply would be to deny the main premise lying behind the objection, concerning the rich nature of phenomenally conscious experience. The theory could align itself with Dennett’s (1991) conception of consciousness as highly fragmented, with multiple streams of perceptual content being processed in parallel in different regions of the brain, and with no stage at which all of these contents are routinely integrated into a phenomenally conscious perceptual manifold. Rather, contents become conscious on a piecemeal basis, as a result of internal or external probing that gives rise to a HOT about the content in question. This serves to convey to us the mere illusion of riches, because wherever we direct our attention, there we find a conscious perceptual content. (For a related reply, see Gennaro 2012, chapter six.)
It is difficult to know whether this sort of ‘fragmentist’ account can really explain the phenomenology of our experience, however. For it still faces the objection that the objects of attention can be immensely rich and varied at any given moment, hence requiring there to be an equally rich and varied repertoire of HOTs tokened at the same time. Think of immersing yourself in the colors and textures of a Van Gogh painting, for example, or the scene as you look out at your garden—it would seem that one can be phenomenally conscious of a highly complex set of properties, which one could not even begin to describe or conceptualize in any detail. However, since the issues here are large and controversial, it cannot yet be concluded that actualist forms of HOT theory have been refuted. This is particularly the case when one considers such phenomena as change blindness and inattentional blindness, where subjects often do not even notice somewhat significant changes occurring in an image or video, even within their focal visual field (Simons 2000; Simons and Chabris 1999).
Another difficulty for actualist forms of HOT theory takes the form of a puzzle: how can the targeting of a perceptual state by a HOT make the former ‘light up’, and acquire the properties of ‘feel’ or what-it-is-like-ness? Suppose, for example, that I am undergoing an unconscious perception of red. How could such a percept then acquire the properties distinctive of phenomenal consciousness merely by virtue of my coming to think (in non-inferential fashion) that I am undergoing an experience of red?
Rosenthal (2005) replies to this objection by pointing to cases in which (he says) the acquisition and application of novel higher-order concepts to our experience transforms the phenomenal properties of the latter. Thus a course in wine-tasting can lead me to have experiences of the wine that are phenomenally quite distinct from any that I enjoyed previously (see also Siegel 2010; Gennaro 2012, chapter six). And a course in classical music appreciation might lead to changes in my experience of the sound of the orchestra, perhaps distinguishing between the sounds of the oboes and the clarinets for the first time. Since changes in higher-order concepts can lead to changes in phenomenal consciousness, Rosenthal thinks, it is plausible that it is the presence of higher-order thoughts targeting our perceptual states that is responsible for the latter’s phenomenal properties tout court.
In response, an opponent of the theory might observe that some of the concepts that one acquires in such cases do not appear to be higher-order ones at all. Thus the concepts oaky and tanniny that one acquires when wine-tasting pick out secondary qualities of the wine (which are first-order), not higher-order properties of our experience of the wine. And likewise the concept oboe when applied in an experience is a first-order concept of a sound type, not a higher-order concept of one’s experience of sound. The phenomenon here is quite general: acquiring and applying new concepts in one’s perception can transform the similarity spaces and organization of one’s perceptual states. (Think here of the familiar duck/rabbit.) But it appears to be a first-order phenomenon, not a higher-order one. At any rate, there is considerable work for a HOT theorist to do here in making out the case to the contrary.
According to the dispositionalist HOT theory, the conscious status of a perceptual state consists in its availability to higher-order thought (Dennett 1978a; Carruthers 1996, 2000, 2005). As with the non-dispositionalist version of the theory, in its simplest form we have here a quite general proposal concerning the conscious status of any type of occurrent mental state, which becomes an account of phenomenal consciousness when the states in question are experiences (or images, emotions, etc.) with analog content. The proposal is this: a conscious mental event M, of mine, is one that is disposed to cause an activated thought (generally a non-conscious one) that I have M, and to cause it non-inferentially.
The proposal before us is therefore as follows:
Dispositionalist Higher-Order Thought Theory:
A phenomenally conscious mental state is a state of a certain sort (perhaps with analog/non-conceptual intentional content, and perhaps held in a special-purpose short-term memory store) which is available to cause (non-inferentially) higher-order thoughts about itself (or perhaps about any of the contents of the memory store).
In contrast with the actualist form of the theory, the higher-order thoughts that render a percept conscious are not necessarily actual, but potential. So the objection now disappears, that an unbelievable amount of cognitive space would have to be taken up with every conscious experience. (There need not actually be any HOT occurring, in order for a given perceptual state to count as phenomenally conscious, on this view.) So we might be able to retain our belief in the rich and integrated nature of phenomenally conscious experience—we just have to suppose that all of the contents in question are simultaneously available to higher-order thought. (Such availability might be realized by the ‘global broadcast’ of perceptual representations to a wide range of conceptual systems in the brain, for drawing inferences, for forming memories, and for planning, as well as for forming higher-order beliefs. See Baars 1988, 1997, 2002.) Nor will there be any problem in explaining why our faculty of higher-order thought should have evolved, nor why it should have access to perceptual contents in the first place—this can be the standard sort of story in terms of Machiavellian intelligence.
It might well be wondered how their mere availability to higher-order thoughts could confer on our perceptual states the positive properties distinctive of phenomenal consciousness—that is, of states having a subjective dimension, or a distinctive subjective feel. The answer may lie in the theory of content. Suppose that one agrees with Millikan (1984) that the representational content of a state depends, in part, upon the powers of the systems that consume that state. That is, suppose one thinks that what a state represents will depend, in part, on the kinds of inferences that the cognitive system is prepared to make in the presence of that state, or on the kinds of behavioral control that it can exert. In that case the presence of first-order perceptual representations to a consumer-system that can deploy a ‘theory of mind’, and that is capable of recognitional applications of theoretically-embedded concepts of experience, may be sufficient to render those representations at the same time higher-order ones. This would be what confers on our phenomenally conscious experiences the dimension of subjectivity. Each experience would at the same time (while also representing some state of the world, or of our own bodies) be a representation that we are undergoing just such an experience, by virtue of the powers of the ‘theory of mind’ system. Each percept of green, for example, would at one and the same time be an analog representation of green and an analog (non-conceptual) representation of seems green or experience of green. (Consumer semantics embraces not only a number of different varieties of teleosemantics, but also various forms of inferential role semantics. For the former, see Millikan 1984, 1986, 1989; and Papineau 1987, 1993. For the latter, see Loar 1981, 1982; McGinn 1982, 1989; Block 1986; and Peacocke 1986, 1992.)
As an independent illustration of how consumer systems can transform perceptual contents, consider prosthetic vision (Bach-y-Rita 1995; Bach-y-Rita and Kercel 2003). Blind subjects can be fitted with a device that transduces the output from a hand-held or head-mounted video-camera into patterns of electrically-induced tactile stimulation across the subject’s back or tongue. Initially, of course, the subjects just feel patterns of gentle tickling sensations spreading over the area in question, while the camera scans what is in front of them. But provided that they are allowed to control the movements of the camera themselves, their experiences after a time acquire three-dimensional distal intentional contents, representing the positions and movements of objects in space. (Note that the patterns of tactile stimulation themselves become imbued with spatial content. The subjects in question say that it has come to seem to them that there is a spherical object moving towards them, for example.) Here everything on the input side remains the same as it was when subjects first began to wear the device; but the planning and action-controlling systems have learned to interpret those states differently. And as a result, the subjects’ first-order intentional perceptual contents have become quite different. Likewise, according to dispositional HOT theory, when the ‘theory of mind’ system has learned to interpret the subject’s perceptual states as perceptual states, they all acquire a dimension of seeming or subjectivity.
Proponents of this account hold that it achieves all of the benefits of inner-sense theory, but without the associated costs. (Some potential drawbacks will be noted in a moment.) In particular, we can endorse the claim that phenomenal consciousness consists in a set of higher-order perceptions. This enables us to explain, not only the difference between conscious and unconscious perception, but also how analog states come to acquire a subjective dimension or ‘feel’. And we can also explain how it can be possible for us to acquire some purely recognitional concepts of experience (thus explaining the standard philosophical thought-experiments concerning zombies and such-like). But we don’t have to appeal to the existence of any ‘inner scanners’ or organs of inner sense (together with their associated problems) in order to do this. Moreover, it should also be obvious why there can be no question of our higher-order contents misrepresenting their first-order counterparts, in such a way that one might be disposed to make recognitional judgments of red and seems orange at the same time. This is because the content of the higher-order experience is parasitic on the content of the first-order one. Carruthers, therefore, also refers to this view as dual-content theory.
On the downside, the account isn’t neutral on questions of semantic theory. On the contrary, it requires us to reject any form of pure input semantics, in favor of some sort of consumer semantics. We cannot then accept that intentional content reduces to informational content, nor that it can be explicated purely in terms of causal co-variance relations to the environment. So anyone who finds such views attractive will think that the account is a hard one to swallow. (For discussion of various different versions of input semantics, see Dretske 1981, 1986; Fodor 1987, 1990; and Loewer and Rey 1991.)
Moreover, Rosenthal (2005) has objected that dispositional HOT theory can’t account for our actual awareness of our conscious mental states, since mere dispositions to entertain thoughts don’t make us aware of anything. Two replies can be made (see Carruthers 2000, 2005). One is that, in virtue of our disposition to entertain higher-order thoughts about it, a perceptual state will already possess an analog higher-order content. It is this content that makes us aware of the experience in question. But the second reply is that there does, in any case, seem to be a perfectly good dispositional sense of ‘know’ and ‘aware’. As Dennett pointed out long ago (1978b), I can be said to know, or to be aware, that zebras in the wild don’t wear overcoats, even though I have never actually considered the matter, because I am disposed to assent to that proposition in light of what I occurrently know.
In addition, Rowlands (2001) and Jehle and Kriegel (2006) have objected that dispositional HOT theory can’t explain the sense in which the phenomenal properties of experience are categorical. For the higher-order analog intentional contents that our conscious perceptual states possess—and that are identified with the ‘feel’ of experience—are said to be constituted by the dispositional property that such states have, of giving rise to HOTs about themselves. This objection, however, appears to beg the question in favor of irreducible and intrinsic qualia as an account of the distinctive properties of phenomenally conscious states. In any case it doesn’t seem to be an objection against dispositional HOT theory as such, since it will count equally against any representationalist theory of consciousness. (For example, Tye 1995 explains consciousness in terms of the poisedness of perceptual states to have an impact on belief and reasoning, which is a dispositional notion.) Any theory that proposes to reductively explain phenomenal consciousness in terms of some combination of intentional content and causal role will be explaining consciousness in terms that are at least partly dispositional.
A well-known objection to dispositionalist higher-order thought theory, however, is that it may have to deny phenomenal consciousness to most species of non-human animal. This objection will be discussed, among others, in section 7, since it can be raised against any form of higher-order theory.
Carruthers no longer holds dispositional HOT theory or, for that matter, any form of higher-order theory, and actually defends a version of first-order representationalism instead (Carruthers 2017). He responds to his own previous two main lines of argument against first-order representationalism, and then finds it unnecessary to propose a higher-order theory in order to explain the difference between unconscious and conscious states. Still, Carruthers thinks that dispositional HOT theory is preferable to actualist HOT theory.
The two most familiar forms of higher-order theory postulate the existence of a pair of distinct mental states: a first-order perceptual or quasi-perceptual state with a given content, and a HOT or HOP representing the presence of that first-order state, thereby rendering it conscious. Either one of these states can occur without the other, although there may be a reliable causal relation between them, such that certain types of first-order perception (e.g. attended outputs of the temporal-lobe visual system) regularly cause higher-order representations of themselves to be formed. In recent years, however, a cluster of different proposals have been made that would reject this independent-state assumption. Rather, the relationship between the conscious state in question and the higher-order state is said to be constitutive, or internal. To some extent, this view is inspired by Brentano (1874/1973) and the phenomenological tradition, including Sartre (1956). (See Kriegel 2006, 2018; Kriegel and Williford 2006; Zahavi 2004; Miguens et al. 2016.) We can refer to these as ‘self-representational’ higher-order theories. (Kriegel initially coined the term ‘same-order monitoring theory’, but this was potentially misleading.)
We therefore have the following proposal to consider:
Self-Representational Theory:
A phenomenally conscious mental state is a state of a certain sort (perhaps with analog/non-conceptual intentional content) which also, at the same time, possesses an intentional content, thereby in some sense representing itself to the person who is the subject of that state.
There are two basic types of self-representational theory, depending on whether the constitutive relation between the conscious state and the higher-order state is one of identity, on the one hand, or part-whole, on the other. According to the former type of account, it is one and the same perceptual state that is both first-order (representing the world to us) and higher-order (presenting itself to us). (Caston 2002 argues that Aristotle had a theory of conscious perception of this sort.) Kriegel (2006) claims that such accounts are rather mysterious from a naturalistic perspective, but Carruthers (2000, 2005) and perhaps also Van Gulick (2001, 2004) purport to provide naturalistic explanations of just this sort of view. According to Carruthers, a first-order perceptual state with analog content acquires, at the same time, a higher-order analog content by virtue of its availability to a ‘theory of mind’ faculty, together with the truth of some suitable form of consumer semantics (as explained in section 5 above). Van Gulick can be interpreted as defending a similar view, which likewise relies on a form of consumer semantics/functional role semantics, and which he labels a ‘Higher-Order Global State (HOGS) theory’. On this account, globally broadcast first-order perceptual states acquire at the same time a higher-order seeming dimension through their availability to, and incorporation into, higher-order models of the self and its relation to the perceived environment. (What isn’t entirely clear is whether Van Gulick thinks that the resulting perceptual state is the HOGS, or is rather a component part of the HOGS—in which case he would be advocating a kind of part-whole self-representational account.)
Kriegel’s (2009) eventual view emphasizes and argues for the claim that there is a ubiquitous conscious (but inattentive or peripheral) self-awareness which accompanies all first-order (attentive and outer-directed) conscious states. Gennaro (2012, chapter five) rejects this view by, among other things, arguing that it is difficult to make sense of such alleged pervasive peripheral self-awareness, especially when one is focused on outer-directed tasks. It is at least not as clearly present as, say, outer-directed peripheral vision in normal visual perception. At minimum, it is notoriously difficult to settle these sorts of disagreements between competing phenomenological claims.
Some varieties of part-whole self-representational theory take the same general form as actualist kinds of HOT theory, in which a first-order perceptual state with the content analog-red (as it might be) gives rise to a higher-order thought that one is experiencing red. But rather than claiming that it is the first-order perception that becomes phenomenally conscious because of the presence of the higher-order thought, what is said is that the complex state made up of both the first-order perception and the higher-order thought becomes conscious. Gennaro (1996, 2008, 2012) defends such a view, which he calls the wide intrinsicality view, on which the HOT is better thought of as belonging to the same overall complex state as its target. It is, however, not always clear how this theory could offer any substantive benefits not already obtainable from actualist HOT theory. Rather, the claim is merely that a conscious state is one that contains two parts, one of which is an awareness of the other. Kriegel himself (2003, 2006, 2009) and (as Kriegel interprets him) Van Gulick (2001, 2004) emphasize that the first-order perception and the higher-order judgment need to be integrated with one another in order for the resulting complex state to be phenomenally conscious. Kriegel argues that there needs to be a kind of integration resulting from a psychologically real process (as opposed to a theorist’s definition) in order for the resulting state to have causal powers that differ from those of the first-order state/higher-order state pair.
Kriegel and Van Gulick do not give fully developed accounts of just why the integration of first-order perceptions with higher-order judgments should give rise to the properties that are distinctive of phenomenal consciousness. But one plausible reconstruction is as follows, modeled on the way that the conceptualization of analog (non-conceptual) first-order perceptual content can transform the latter’s properties. Consider, for example, the familiar duck/rabbit. When someone sees this figure for the first time she may just experience a complex of curved lines, representing nothing. But when she comes to see it as a rabbit, those lines take on a certain distinctive organization (the figure now has both a front and a back, for example), thereby transforming the represented properties of the figure. Arguably what happens in such cases is that the conceptual systems succeed in deploying a recognitional template for the concept rabbit, finding a ‘best match’ with the incoming non-conceptual representations. Indeed, there is reason to think that just such a process routinely takes place in perception, with conceptual systems seeking matches against incoming data, and with the resulting states possessing contents that integrate both conceptual and non-conceptual (analog) representations (Kosslyn 1994; Carruthers 2000). The result is a single perceptual state that represents both a particular analog shape and a rabbit. Now suppose that when such states are globally broadcast and are made available to the systems responsible for higher-order thought, a similar process takes place. Those systems bring to bear the concept experience or the concept seeing to produce a further integrated perceptual state. This single state will not only have first-order contents representing the lines on the page, and representing a rabbit; it will also have a higher-order content representing that one is experiencing something rabbit-like.
Hence the perceptual state in question becomes ‘self-presenting’, and acquires, as part of its content, a dimension of seeming or subjectivity. (See also Gennaro 2005, 2012, for a related line of argument in response to this sort of challenge.)
Picciuto (2011) points out, however, that Kriegel’s form of self-representational theory still permits a mismatch between the first-order and higher-order components of the integrated state. For there seems to be nothing in the structure of the account to rule out the possibility of a first-order analog content green becoming integrated with the higher-order judgment I am experiencing yellow, for example. In order to avoid this difficulty, Picciuto (2011) proposes an alternative form of part-whole self-representational theory. (See also Coleman 2015; Timpe 2015.) He does so by appropriating, and deploying for a novel purpose, the idea of a quotational phenomenal concept, originally introduced by Papineau (2002) and Balog (2009) as part of their defense of physicalism against the arguments of Chalmers (1996) and others. Picciuto’s idea is that the relevant sort of complex self-representational state will consist of a first-order perceptual content combined with a higher-order concept like experience that embeds, or ‘quotes’, that very perceptual content. Given this structure, it will be impossible that there should be a mismatch between the two. For the higher-order component of the complex state is not a judgment about the experience component (which would permit them to be mismatched), but rather a concept that quotes the experience component.
All part-whole self-representational accounts differ from the dual-content theory of Carruthers (2000, 2005) in the following way, however: on Carruthers’ account, the end-product can be entirely non-conceptual. And in particular, the higher-order content possessed by a conscious percept is a non-conceptual one, representing a seeming of the first-order content of the state by virtue of its availability to higher-order consumer systems. On all of the part-whole accounts sketched above, in contrast, a conscious perception is always partially conceptual, containing the higher-order concept experience (or something similar) as part of its content. There are probably multiple dimensions along which these two sorts of theory could be compared, and each may have its own advantages.
There have, of course, been a whole host of objections raised against higher-order theories of phenomenal consciousness over the years. (See, e.g., Aquila 1990; Jamieson and Bekoff 1992; Dretske 1993, 1995; Goldman 1993, 2000; Güzeldere 1995; Tye 1995; Chalmers 1996; Byrne 1997; Siewert 1998; Levine 2001; Rowlands 2001; Seager 2004; Block 2011.) Many of these objections, although perhaps intended as objections to higher-order theories as such, are often framed in terms of one or another particular version of such a theory. A general moral to be taken away from the present discussion should then be this: the different versions of a higher-order theory of phenomenal consciousness need to be kept distinct from one another, and critics should take care either to state which version of the approach is under attack, or to frame objections that turn merely on the higher-order character of all of these approaches. I shall discuss a few ‘local’ objections first, before discussing some generic ones.
A good many objections against specific versions of higher-order theory have already been discussed above. Thus in section 3 we discussed Dretske’s (1995) ‘lack of any higher-order phenomenology’ objection to inner sense theory (which only targets inner sense theory). And in section 4 we discussed Dretske’s (1993) ‘spot’ objection to actualist higher-order thought theory, as well as Goldman’s (2000) epistemological objection, each of which appears to apply only to HOT theories. Of course some of the objections discussed above target more than one version of higher-order theory, while still not being fully general in scope. Thus the cognitive/computational complexity objections discussed in sections 3 and 4 apply to inner sense theories and to actualist HOT theories, but not to dispositionalist HOT or to some self-representational theories.
Another ‘local’ objection (which is actually a generalization of a variant of the misrepresentation problem discussed in connection with inner sense theory in section 3 above) is the targetless higher-order representation problem (Byrne 1997; Neander 1998; Levine 2001). This is confronted by both inner sense theory and actualist HOT theory (but not by either dispositionalist HOT or self-representational theories, according to which the relevant higher-order state can’t exist in the absence of the targeted state). For in each case it seems that a higher-order experience of a perception of red, say, or a HOT about a perception of red, might exist in the absence of any such perception occurring. So it would seem to the subject that she is experiencing red, or she might think that she is experiencing red, in the absence of any such experience. (Note that the point isn’t just that she might undergo such a seeming in the absence of anything really red. Rather, the point is that she might not really be undergoing any sort of visual experience as of red at all.) In which case, does the subject have a phenomenally conscious experience as of red, or not?
Both Lycan (1996) and Rosenthal (2005) are sanguine in the face of this objection. Each allows that targetless higher-order representations are a possibility (albeit rare, perhaps), and each opts to say that the subject in such a case is phenomenally conscious. But each denies that this is a problem for their account. Lycan, for example, insists that it is surely possible that it might seem to someone that she is feeling pain when really no relevant first-order representation of pain is present. (He suggests that the effects of morphine, which leaves patients saying that their pain feels just as it was, but that they no longer care, might constitute such a case.) And surely such a person would have a phenomenally conscious experience as of pain. Rosenthal, likewise, uses pain as an example. He points to cases of dental patients who initially experience pain in the dentist’s chair despite the fact that the relevant nerves are completely destroyed. It seems that their fear, combined with the noise and vibration of the drill, causes them to mistakenly think that they are feeling pain. (When the drilling is stopped, and their dead nerves are explained to them, they thereafter experience only the sound and the vibration.) So this would be a case in which a HOT about experiencing pain is alone sufficient to induce a phenomenally conscious experience as of being in pain. A critic, however, might respond that the illusion is caused, instead, by a vivid imagining of pain, rather than by a HOT about feeling pain. Alternatively, if a HOT is causally involved it might be that a top-down expectation of pain causes a first-order experience of pain, as opposed to being constitutive of the feeling of pain. It might be that introspective anticipation of pain causes the pain in the first place. (Note that this seems perfectly possible, since it is the opposite of the well-known placebo effects of expectation in reducing pain.)
The targetless HOT problem has recently become a very significant topic of debate among HOT theorists as well as some critics (Block 2011; Rosenthal 2005, 2011; Weisberg 2011; Gennaro 2012, chapter four; Wilberg 2010; Berger 2014, 2017; Brown 2015; Lau and Brown 2019), and has led some to advocate for other variants of HOT theory or to clarify their own theories. Gennaro (2012) argues, for example, that since the HOTs in question are typically themselves unconscious, it makes little sense to suppose that these HOTs are phenomenally conscious in the context of HOT theory, especially since a conscious HOT would be an introspective state instead. Maintaining that an unconscious HOT would yield the same subjective experience without any target state seems to run counter to a central initial motivation of HOT theory, namely, to explain what makes a first-order state conscious. Thus, the first-order state must exist first in order to be rendered conscious by an appropriate and accompanying HOT with some sort of corresponding conceptual content. If both aren’t present, then no relevant conscious state occurs (see also Wilberg 2010). Berger (2014), however, argues that consciousness is not a property of states at all; instead, it is a property of individual persons (that is, how my mental states appear to me). And Brown (2015) challenges the very assumption that HOT theory is best interpreted as a relational theory at all; instead, it is better construed as a HOROR theory, that is, higher-order representation of a representation, regardless of whether or not the target representation exists. In this sense, HOT theory is better understood as non-relational, which also seems to be Rosenthal’s considered view in recent decades. The debate continues (Rosenthal 2018).
In response specifically to Block (2011), Rosenthal (2011) and Weisberg (2011) stress that the mere seeming of, say, being in pain (provided by the HOT that one is in pain in the absence of first-order pain) is sufficient for phenomenally conscious pain. Consciousness is about mental appearance. It is not clear that this fully addresses Block’s point, which is that one would not expect the mere thought that one feels pain to matter to us in all the ways that pain matters. Block develops this point with respect to the awfulness of pain. It would be remarkable (indeed, mysterious) if a higher-order thought should have all of the causal powers of the mental state that the thought is about. And in particular, there is no reason to expect that a HOT that one is in pain should possess the negative valence and high-arousal properties of pain itself. But the latter are surely crucial components of phenomenally conscious pain. If so, then a HOT that one feels pain in the absence of first-order pain will not be sufficient for the conscious feeling of pain. It is also again important not to conflate introspection (conscious HOTs) with mere unconscious HOTs. (See also Shepherd 2013.)
Yet another ‘local’ objection is targeted against higher-order thought theories in particular (whether actualist or dispositionalist). It presents such theories with a dilemma: either they are attempts to explicate the concept of consciousness, in which case they are circular; or they are attempts to provide a reductive explanation of the property of being conscious, in which case they generate a vicious regress (Rowlands 2001). The first horn can be swiftly dismissed. For as Rosenthal (2005) and many others have made clear, higher-order theories aren’t in the business of conceptual analysis. Rather, their goal is to provide a reductive explanation of what it is for a state to be phenomenally conscious. Our discussion will therefore focus upon the second horn.
Rowlands thinks that HOT theories face a vicious regress because they explain state-consciousness in terms of HOT, and because (Rowlands claims) only conscious thoughts make us aware of the things that those thoughts concern. He gives the example of coming to believe that his dog is seriously ill. If he (Rowlands) thinks and behaves in ways that are best explained by attributing to him the thought that his dog is ill, but if that thought isn’t entertained consciously, then surely this won’t be a case in which he is aware that his dog is ill. So if we are to become aware of our conscious states by entertaining higher-order thoughts about them, then these thoughts will have to be conscious ones, requiring us to be aware of them, in turn, via further higher-order thoughts that are also conscious; and so on.
HOT theorists might respond in several ways. One is to challenge the intuition that only conscious thoughts make us aware of things. Thus it seems that Rowlands, when reflecting back on his dog-nurturing behavior of recent days, could surely conclude something along the lines of, ‘It seems that I have been aware of my dog’s illness all along; that is why I have been behaving as I have.’ Another response would be to allow that there is a way of understanding the concept of awareness such that a person only counts as aware of something if the mental state in virtue of which they are aware of that thing is itself a conscious one, but to deny that this is the relevant sense of ‘awareness’ which is put to work in HOT theories. A third option would be to stress the distinction between phenomenal consciousness and state consciousness more generally, claiming that there need be no regress involved in explaining the former in terms of the latter, provided that some separate account can be provided for the latter.
One generic objection, which can probably be recast in such a way as to apply to any higher-order theory (although it is most easily expressed against inner sense theory or actualist HOT theory), is the so-called ‘rock’ objection (Goldman 1993; Stubenberg 1998). We don’t think that when we become aware of a rock (either perceiving it, or entertaining a thought about it) the rock thereby becomes conscious. So why should our higher-order awareness of a mental state render that mental state conscious? Thinking about a rock doesn’t make the rock ‘light up’ and become phenomenally conscious. So why should thinking about my perception of the rock make the latter phenomenally conscious, either?
An initial reply to this objection involves pointing out that my perception of the rock is a mental state, whereas the rock itself isn’t (Lycan 1996). Since phenomenal consciousness is a property that (some) mental states possess, we can then say that the reason why the rock isn’t rendered phenomenally conscious by my awareness of it is that it isn’t the right sort of thing to be phenomenally conscious, whereas my perception of the rock is. This reply may be apt to strike the objector as trite. But perhaps more can be said from the perspective of inner sense theory, at least. Notice that my perception of the rock does, in one sense, confer on the latter a subjective aspect. For example, the rock is represented from one particular spatial perspective, and only some of its properties (e.g. color) and not others (e.g. mass) are represented. Likewise, then, with my perception of the rock when it is itself the target of inner sense.
Similar replies to the rock objection are given by Van Gulick (2001) and Gennaro (2005). Both point out that a rock isn’t the kind of thing that can be incorporated into a complex mental state that involves higher-order representations, in the sort of way required by a self-representational or HOT theory. In contrast, whether actualist HOT theory can reply adequately to the rock objection will depend on whether or not there is an adequate reply to the problem considered in section 4, which challenges the actualist HOT theorist to say why targeting a mental state with a HOT about that state should cause the latter to ‘light up’ and acquire a subjective dimension or feel.
Another generic objection is that higher-order theories, when combined with plausible empirical claims about the mental abilities of non-human animals, will conflict with our common-sense intuition that such animals enjoy phenomenally conscious experience (Jamieson and Bekoff 1992; Dretske 1995; Tye 1995; Seager 2004). This objection can be pressed most forcefully against higher-order thought theories, of either variety, and against self-representational theories; but it is also faced by inner-sense theory (depending on what account can be offered of the evolutionary function of organs of inner sense). Are cats and dogs really capable of having such apparently complex HOTs, which presumably contain mental state concepts? Since there has been considerable dispute as to whether even chimpanzees (and other primates) have the kind of sophisticated ‘theory of mind’ to enable them to entertain thoughts about experiential states as such (Byrne and Whiten 1988, 1998; Povinelli 2000), it seems implausible that many other species of mammal (let alone reptiles, birds, and fish) would qualify as phenomenally conscious, on these accounts (Carruthers 2000, 2005). Yet the intuition that such creatures enjoy phenomenally conscious experiences is a powerful one, for many people. (Witness Nagel’s classic 1974 paper, which argues that there must be something that it is like to be a bat.)
Many higher-order theorists have attempted to resist the claim that their theory has any such entailment (e.g. Gennaro 1996, 2004; Van Gulick 2006). In each case, a common strategy is to claim that the relevant higher-order representations are somehow simpler than those tested for by those who do comparative ‘theory of mind’ research, hence leaving it open that these simpler representations might be widespread in the animal kingdom. Gennaro (1996), for example, suggests that although animals might lack the concept experience, they can nevertheless be capable of higher-order indexical thoughts of the form ‘this is different from that’ (where ‘this’ and ‘that’ might refer to experiences of red and of green, respectively). The trouble here, however, is to explain what makes these indexicals higher-order in content without attributing concepts like experience of green to the animal.
Gennaro (2004) takes a somewhat different tack. While allowing that animals lack the concept experience of green, he thinks that they might nevertheless possess the (simpler) concept seeing green. But here he faces a dilemma. There is, indeed, a simpler concept of seeing, grounded in the capacity to track eye-direction and line of sight. But this isn’t necessarily a higher-order concept. To say, in this sense, that someone sees green is just to say that there is some green in the line in which their eyes are pointed—no mental state needs to be attributed. In contrast, it appears that any concept of seeing that is genuinely higher-order will be one that it would be less plausible to attribute to most species of animal (given the comparative evidence). Perhaps a first-order explanation of experimental observations of animals is virtually always possible (see, for example, Carruthers 2008). But Gennaro (2012, chapter eight) ultimately argues that there is plenty of evidence that many animals are capable of metacognition (thinking about their own mental states) as well as mindreading (thinking about other minds). For example, in the case of mindreading, rhesus monkeys seem to attribute visual and auditory perceptions to others in more competitive paradigms (Flombaum and Santos 2005), and crows and scrub jays return alone to caches seen by other animals and recache them in new places (Emery and Clayton 2001). Any evidence of deception or empathy in animals would also seem to indicate some kind of mindreading ability. In addition, many animals seem capable of metacognition (including possessing self-concepts), as evidenced by the presence of episodic memory, for example (Dere et al. 2006; see also the essays in Terrace and Metcalfe 2005; Hurley and Nudds 2006). A related debate takes place with respect to infant consciousness and the capacity of infants to have metacognitive and mindreading abilities (see Gennaro 2012, chapter seven, for some discussion).
Van Gulick (2006), in contrast, suggests that all of the higher-order representing sufficient to render an experience phenomenally conscious can be left merely implicit in the way that the experience enters into relationships with other mental states and the control of behavior. So animals that lack the sorts of explicit higher-order concepts tested for in comparative ‘theory of mind’ research can nevertheless be phenomenally conscious. The difficulty here, however, is to flesh out the relevant notion of implicitness in such a way that not every mental state, possessed by every creature (no matter how simple), will count as phenomenally conscious. For since mental states can’t occur singly, but are always part of a network of other related states, mental states will always carry information about others, thus implicitly representing them. It is implicit in the behavior of any creature that drinks, for example, that it is thirsty; so the drinking behavior implicitly represents the occurrence of the mental state of thirst.
Of course, the basis for the common-sense intuition that animals possess phenomenally conscious states can even be challenged. (How, after all, are we supposed to know whether it is like something to be a bat?) And that intuition can perhaps be explained away as a mere by-product of imaginative identification with the animal. (Since our images of their experiences are phenomenally conscious, we may naturally assume that the experiences imaged are similarly conscious; Carruthers 1999, 2000.) But there is no doubt that one major source of resistance to higher-order theories will lie here, for many people, especially given various moral considerations about animal pain and suffering. (For one set of attempts to defuse this resistance, arguing that a higher-order account need have few if any implications for our moral practices or for comparative psychology, see Carruthers 2005, chapter nine; 2019, chapter eight.) Of course, some will point out that there are also enough neurophysiological similarities between (at least some parts of) human and animal brains to justify attributions of, say, pains, desires, and basic perceptual states. It is worth emphasizing here that HOT theory does not say that having conscious states requires having introspective states, that is, having conscious HOTs. Conflating introspection with having mere unconscious HOTs (and therefore simply having first-order conscious states) may lead some to put forth a misguided straw man argument against HOT theory.
A third generic objection is that higher-order approaches cannot really explain the distinctive properties of phenomenal consciousness (Chalmers 1996; Siewert 1998; Levine 2006). Whereas the argument from animals is that higher-order representations aren’t necessary for phenomenal consciousness, the argument here is that such representations aren’t sufficient. It is claimed, for example, that we can easily conceive of creatures who enjoy the postulated kinds of higher-order representation, related in the right sort of way to their first-order perceptual states, but where those creatures are wholly lacking in phenomenal consciousness.
In response to this objection, higher-order theorists will join forces with first-order theorists and others in claiming that these objectors pitch the standards for explaining phenomenal consciousness too high (Block and Stalnaker 1999; Tye 1999; Carruthers 2000, 2005; Lycan 2001). They will insist that a reductive explanation of something—and of phenomenal consciousness in particular—doesn’t have to be such that we cannot conceive of the explanandum (that which is being explained) in the absence of the explanans (that which does the explaining). (Indeed, we can also explain why no such explanation can be forthcoming, in terms of our possession of purely recognitional concepts of experience.) Rather, we just need to have good reason to think that the explained properties are constituted by the explaining ones, in such a way that nothing else needed to be added to the world, once the explaining properties were present, in order for the world to contain the target phenomenon. But this is hotly contested territory. And it is on this ground that the battle for phenomenal consciousness may ultimately be won or lost.
Before we close, it is worth considering a variant of the third generic objection that we have just been discussing, which need involve no commitment to the latter’s demanding standards of explanation. For it might be said that self-representing mental states (or indeed any of the theoretically-relevant kinds of pairing of first-order with higher-order representations) might occur within the unconscious mind, without (of course) thereby becoming conscious (Rey 2008). Suppose that some version of Freudian theory is true, for example. Might there not be higher-order thoughts about the subject’s experiences occurring within the unconscious mind, formed while the latter tries to figure out how to get its messages past the ‘censor’ and expressed in speech? So here again, just as with the third generic objection, the claim is that the occurrence of the sorts of representations postulated by higher-order theories isn’t sufficient for phenomenal consciousness.
One sort of response to this objection would be to deny that such purely unconscious higher-order cognition ever actually occurs. Indeed, one might deny that it is even possible, given the constraints provided by the evolution of cognitively demanding mental functions (Carruthers 2000). But note that this reply will be unavailable to any higher-order theorist who has opted to downplay the cognitive demands of the capacity for higher-order representation in response to the problem of animal consciousness. For if higher-order representation is easily evolved, and is rife within the animal kingdom, then there doesn’t appear to be any reason why it shouldn’t evolve within unconscious sub-systems of the mind as well. And in any case it is doubtful whether the mere natural impossibility of higher-order representing within the unconscious mind would be enough to rebut the objection. Since higher-order theories claim that phenomenal consciousness is to be identified with, or is constituted by, the relevant sorts of higher-order representing, we would need to show that the imagined occurrence of the latter within the unconscious mind is metaphysically impossible, not just that it is naturally so.
Other responses to the objection remain (Carruthers 2000, 2005). One would be to allow that unconscious phenomenal consciousness is possible, and to appeal to the distinction between phenomenal consciousness and access consciousness to explain away the seeming contradiction involved. Remember, phenomenally conscious states are those that it is like something to be in, and that possess a subjective feel; whereas access-conscious states are those that are available to interact with some specified cognitive processes (for example, they might be those that are reportable in speech). So all we would be saying is that states with feel can occur in ways that aren’t (for example) reportable by the subject. There is no contradiction here. An alternative possible response would be to extend the higher-order theory in question to include the relevant sort of access-consciousness as a further component. A dispositional HOT theorist, for example, might say that a phenomenally conscious state is one that both possesses the right sort of dual content and that is reportable by the subject. I shall not attempt to adjudicate between these possibilities here.
(Objections have also been raised as to how (or if) HOT theory andself-representational theories can account for various pathologies ofself-awareness or ‘depersonalization disorders,’ such assomatoparaphrenia and thought insertion in schizophrenia. See theessays in Gennaro 2015 for some discussion.)
The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054