The notion of a “mental representation” is, arguably, in the first instance a theoretical construct of cognitive science. As such, it is a basic concept of the Computational Theory of Mind, according to which cognitive states and processes are constituted by the occurrence, transformation and storage (in the mind/brain) of information-bearing structures (representations) of one kind or another.
However, on the assumption that a representation is an object with semantic properties (content, reference, truth-conditions, truth-value, etc.), a mental representation may be more broadly construed as a mental object with semantic properties. As such, mental representations (and the states and processes that involve them) need not be understood only in cognitive/computational terms. On this broader construal, mental representation is a philosophical topic with roots in antiquity and a rich history and literature predating the recent “cognitive revolution,” and which continues to be of interest in pure philosophy. Though most contemporary philosophers of mind acknowledge the relevance and importance of cognitive science, they vary in their degree of engagement with its literature, methods and results; and there remain, for many, issues concerning the representational properties of the mind that can be addressed independently of the computational hypothesis.
Though the term ‘Representational Theory of Mind’ is sometimes used almost interchangeably with ‘Computational Theory of Mind’, I will use it here to refer to any theory that postulates the existence of semantically evaluable mental objects, including philosophy’s stock-in-trade mentalia – thoughts, concepts, percepts, ideas, impressions, notions, rules, schemas, images, phantasms, etc. – as well as the various sorts of “subpersonal” representations postulated by cognitive science. Representational theories may thus be contrasted with theories, such as those of Baker (1995), Collins (1987), Dennett (1987), Gibson (1966, 1979), Reid (1764/1997), Stich (1983) and Thau (2002), which deny the existence of such things.
The Representational Theory of Mind (RTM) (which goes back at least to Aristotle) takes as its starting point commonsense mental states, such as thoughts, beliefs, desires, perceptions and imagings. Such states are said to have “intentionality” – they are about or refer to things, and may be evaluated with respect to properties like consistency, truth, appropriateness and accuracy. (For example, the thought that cousins are not related is inconsistent, the belief that Elvis is dead is true, the desire to eat the moon is inappropriate, a visual experience of a ripe strawberry as red is accurate, an imaging of George Washington with dreadlocks is inaccurate.)
RTM defines such intentional mental states as relations to mental representations, and explains the intentionality of the former in terms of the semantic properties of the latter. For example, to believe that Elvis is dead is to be appropriately related to a mental representation whose propositional content is that Elvis is dead. (The desire that Elvis be dead, the fear that he is dead, the regret that he is dead, etc., involve different relations to the same mental representation.) To perceive a strawberry is, on the representational view, to have a sensory experience of some kind which is appropriately related to (e.g., caused by) the strawberry.
RTM also understands mental processes such as thinking, reasoning and imagining as sequences of intentional mental states. For example, to imagine the moon rising over a mountain is, inter alia, to entertain a series of mental images of the moon (and a mountain). To infer a proposition q from the propositions p and if p then q is (inter alia) to have a sequence of thoughts of the form p, if p then q, q.
Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized – i.e., that all mental facts have explanations in the terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.
In philosophy, recent debates about mental representation have centered around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focused on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.
Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other’s behavior (often collectively referred to as “folk psychology”) are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, etc. is a highly reliable indicator of what that person will do; and we have no other way of making sense of each other’s behavior than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)
Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behavior. Churchland (1981) denies that the generalizations of commonsense propositional-attitude psychology are true. He argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don’t exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court. See, e.g., Churchland 1989.)
Dennett (1987a) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system’s behavior is merely to adopt the “intentional stance” toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behavior (on the assumption that it is rational – i.e., that it behaves as it should, given the propositional attitudes it should have, given its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this. (See Dennett 1987a: 29.)
Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a “moderate” realist about propositional attitudes, since he believes that the patterns in the behavior and behavioral dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the individual believes (1987b, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.
(Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.)
Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behavior and cognition, and the causal powers of a mental state are determined by its intrinsic “structural” or “syntactic” properties. The semantic properties of a mental state, however, are determined by its extrinsic properties – e.g., its history, environmental or intramental relations. Hence, such properties cannot figure in causal-scientific explanations of behavior. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role. (Stich has since changed his views on a number of these issues. See Stich 1996.)
It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (cf. Boghossian 1995). There are those, such as thoughts, that are composed of concepts and have no phenomenal (“what-it’s-like”) features (“qualia”), and those, such as sensations, which have phenomenal features but no conceptual constituents. (Nonconceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless have.[1]) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps, photographs or movies. Perceptual states, such as seeing that something is blue, are sometimes thought of as hybrid states, consisting of, for example, a non-conceptual sensory experience and a belief, or some more integrated compound of conceptual and non-conceptual elements. (There is an extensive literature on the representational content of perceptual experience. See the entry on the contents of perception.)
Disagreement over non-conceptual representation concerns the existence and nature of phenomenal properties, the role they play in determining the contents of sensory representations, and which kinds of properties can be represented by non-conceptual states. Dennett (1988), for example, denies that there are such things as qualia at all (as they are standardly construed); while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible. (See the discussion in the next section.) A further debate concerns the non-conceptual representability of high-level properties such as kind properties and moral properties. (See, e.g., Dretske 1995 and Siegel 2010, and the entry on the contents of perception.)
Some historical discussions of the representational properties of mind (e.g., Aristotle, De Anima, Locke 1689/1975, Hume 1739/1978) seem to assume that nonconceptual representations – percepts (“impressions”), images (“ideas”) and the like – are the only (or at least the main) kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their sensory phenomenal features. Powerful arguments, however, focusing on the lack of generality (Berkeley, Principles of Human Knowledge), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981d) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frege 1918/1997, Geach 1957) or mathematical (Frege 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only nonconceptual representations construed in this way. (For more discussion, see the entry on nonconceptual mental content.)
There has also been dissent from the traditional claim that conceptual representations (thoughts, beliefs) lack phenomenology. Chalmers (1996), Flanagan (1992), Goldman (1993), Horgan and Tienson (2002), Jackendoff (1987), Levine (1993, 1995, 2001), McGinn (1991a), Pitt (2004, 2009, 2011, 2013), Searle (1992), Siewert (1998, 2011) and Strawson (1994, 2010) claim that purely conceptual (conscious) representational states themselves have a proprietary kind of phenomenology. This view – bread and butter, it should be said, among historical and contemporary Phenomenologists – has been gaining momentum of late among analytic philosophers of mind. (See, e.g., the essays in Bayne and Montague 2011 and Kriegel 2013, and Chudnoff 2015, Farkas 2008a, Kriegel 2011, Mendelovici 2018, Montague 2016.) If this claim is correct, the question of what role phenomenology plays in the determination of representational content re-arises for conceptual representation; and the eliminativist ambitions of Sellars, Brandom, Rey, et al. would meet a new obstacle. It would also raise prima facie problems for reductive representationalism, as well as for reductive naturalistic theories of intentional content, and externalism in general.
The view that there is a proprietary phenomenology of conscious thought – a cognitive (conceptual, propositional) phenomenology – claims that there is something it’s like to occurrently, consciously think a thought (entertain a propositional content), which is as different from other kinds of phenomenology (visual, auditory, etc.) as they are from each other. Opinions diverge, however, with respect to the role such phenomenology plays in determining the contents of conceptual/propositional representations. Some (e.g., Siewert) claim that it plays no such role. Others (e.g., Horgan and Tienson, Strawson) hold that it determines only “narrow” contents, with further, “broad” contents determined by extrinsic relations to represented objects and properties. Still others (e.g., Farkas 2008b, Pitt) argue that it is the only kind of conceptual content, insisting on a sharp distinction between content (sense) and reference. There is also disagreement about whether cognitive phenomenology determines but is distinct from conceptual/propositional content (e.g., Pitt 2004) or is identical to it (e.g., Pitt 2009).
Outstanding challenges for this thesis include unconscious thought (which seems to entail the existence of unconscious phenomenology, on this view), indexical concepts (whose content is standardly taken to be referentially individuated; see Pitt 2013 for an attempt to address this challenge), and nominal concepts (concepts expressed by utterances of names, likewise standardly referentially individuated).
See the entries on consciousness and intentionality and phenomenal intentionality for further discussion.
Among realists about non-conceptual representations, the central division is between representationalists (also called “representationists” and “intentionalists”) – e.g., Dretske (1995), Harman (1990), Leeds (1993), Lycan (1987, 1996), Rey (1991), Thau (2002), Tye (1995, 2000, 2009) – and phenomenalists (also called “phenomenists”) – e.g., Block (1996, 2003), Chalmers (1996, 2004), Evans (1982), Loar (2003a, 2003b), Peacocke (1983, 1989, 1992, 2001), Raffman (1995), Shoemaker (1990). Representationalists claim that the phenomenal content of a non-conceptual representation – i.e., its phenomenal character – is reducible to a kind of intentional content, naturalistically construed (à la Dretske). On this view, phenomenal contents are extrinsic properties represented by non-conceptual representations. In contrast, phenomenalists claim that the phenomenal content of a non-conceptual mental representation is identical to its intrinsic phenomenal properties.
The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term ‘representationalism’ is most often used for the reductive claim; see Chalmers 2004a). As a reductive claim, it means that the phenomenal content of an experience, the properties that characterize what it is like to have it (i.e., qualia), are certain extrinsic properties it represents. For example, the blueness one might mention in describing one’s experience (perceptual representation) of a clear sky at noon is a property of the sky, not of one’s experience of it. Blueness is relevant to the characterization of one’s experience because one’s experience represents it, not because one’s experience instantiates it. An experience of the sky no more instantiates blueness than a thought that snow is cold instantiates coldness. On this view, the phenomenal content of sensory experience is explained as its representation of extrinsic properties. (See Byrne and Tye 2006, Dretske 1995, Harman 1990, Lycan 1987, 1996 and Tye 2014, 2015 for elaboration and defense of this “qualia externalism.” See Thompson 2008 and Pitt 2017 for objections to this account.) (See also the entry on representational theories of consciousness.)
As a non-reductive claim, it means that the phenomenal content of an experience is its intrinsic subjective phenomenal properties, which are themselves representational. One’s experience of the sky represents its color by instantiating phenomenal blueness. Among phenomenalists there is disagreement over whether non-conceptual representation requires complex structuring of phenomenal properties (Block and Peacocke, op. cit., Robinson 1994) or not (Loar 2003b). So-called “Ganzfeld” experiences, in which, for example, the visual field is completely taken up with a uniform experience of a single color, are a standard test case: Do Ganzfeld experiences represent anything? (It may be that doubts about the representationality of such experiences are simply a consequence of the fact that (outside of the laboratory) we never encounter things that would produce them. Supposing we routinely did (and especially if we had names for them), it seems unlikely such skepticism would arise.)
Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality (see the next section) is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, many phenomenalists are just as eager to naturalize the phenomenal – though not in the same way.)
The main argument for representationalism appeals to the transparency of experience (cf. Tye 2000: 45–51). The properties that characterize what it’s like to have a sensory experience are presented in experience as properties of objects perceived: in attempting to attend to an experience, one seems to “see through it” to the objects and properties it is an experience of.[2] They are not presented as properties of the experience itself. If nonetheless they were properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not. On this view, introspection is indirect perception: one comes to know what phenomenal features one’s experience has by coming to know what objective features it represents. (Cf. also Dretske 1996, 1999.)
In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to structural or functional properties. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property P is a state of a system whose evolved function is to indicate the presence of P in the environment; a thought representing the property P, on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences’ constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of “symbol-filled arrays.” (Cf. the account of mental images in Tye 1991.)
Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; but they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences – qualia themselves – that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual “scenario” (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is “correct” (a semantic property) if in the corresponding “scene” (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.
Another sort of representation appealed to by some phenomenalists (e.g., Chalmers (2003), Block (2003)) is what Chalmers calls a “pure phenomenal concept.” A phenomenal concept in general is a concept whose denotation is a phenomenal property, and it may be discursive (‘the color of ripe bananas’), demonstrative (‘this color’; Loar 1990/96), or even more direct. On Chalmers’s view, a pure phenomenal concept is (something like) a conceptual/phenomenal hybrid consisting of a phenomenological “sample” (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component (see also Balog 1999 and Papineau 2002). Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991b) puts it, “you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties.” One cannot have a phenomenal concept of a phenomenal property P, and, hence, phenomenal beliefs about P, without having experience of P, because P itself is (in some way) constitutive of the concept of P. (Cf. Jackson 1982, 1986 and Nagel 1974.) (The so-called “phenomenal concept strategy” puts pure phenomenal concepts to use in defending the Knowledge Argument against physicalism. See Loar 1990/96, Chalmers 2004a. Alter and Walter 2007 is an excellent collection of essays on phenomenal concepts. See Conee 1994 and Pitt 2019 for skeptical responses to this strategy.)
Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. (Tye 1991 and McGinn 2004 are notable recent exceptions.) In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects’ response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained kindled a lively debate on the nature of imagery and imagination.
Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties – i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981a, 1981b, 2003), argue that the empirical facts can be explained in terms exclusively of discursive, or propositional, representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)
The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery (see, e.g., Kosslyn and Pomerantz 1977). The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focused on visual imagery – hence the designation ‘pictorial’; though of course there may be imagery in other modalities – auditory, olfactory, etc. – as well. See O’Callaghan 2007 for discussion of auditory imagery.)
The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn’t have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes (Block 1983).) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not), would be digital.
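The analog/digital contrast just described can be put schematically in code. The following is a toy illustration only; the class names and properties are invented for the example and are not drawn from the literature under discussion.

```python
# A toy sketch of the analog/digital contrast for representations.
# All names here are invented for illustration.

from dataclasses import dataclass


@dataclass
class ImagisticRepresentation:
    # "Analog": representational properties take values on a continuum,
    # like being more or less bright or vivid.
    brightness: float  # any value in [0.0, 1.0]
    vividness: float   # any value in [0.0, 1.0]


@dataclass
class ConceptualRepresentation:
    # "Digital": a representation either has the property or lacks it.
    # A thought cannot be more or less about Elvis.
    about_elvis: bool


image = ImagisticRepresentation(brightness=0.73, vividness=0.4)
thought = ConceptualRepresentation(about_elvis=True)
```

The structural point is just that the first type’s representational properties admit of continuous variation, while the second’s are all-or-nothing, mirroring the construal attributed to Dretske (1981) above.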
It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal/non-phenomenal distinction, but it is not obvious that this is the case. For one thing, there may be non-phenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is “quasi-pictorial” when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially – for example, in terms of the number of discrete computational steps required to combine stored information about them. (Cf. Rey 1981.)
Tye (1991) proposes a view of images on which they are hybrid representations, consisting of both pictorial and discursive elements. On Tye’s account, images are “(labeled) interpreted symbol-filled arrays.” The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each “cell” in the array represents a specific viewer-centered 2-D location on the surface of the imagined object).
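The hybrid structure described above can be given a minimal data-structure sketch. Everything below (the scene, the labels, and the helper function) is invented for illustration and is not Tye’s actual formalism; the sketch only shows how a cell’s position can carry spatial (pictorial) information while its contents remain discursive symbols.

```python
# A minimal sketch of a symbol-filled array (hypothetical example,
# loosely inspired by the hybrid view; not Tye's actual formalism).

# Cell position encodes a viewer-centered 2-D location (pictorial);
# cell contents are discursive symbols (labels).
image_array = [
    ["sky",  "sky",  "sky"],
    ["sky",  "peak", "sky"],
    ["rock", "rock", "rock"],
]


def location_of(symbol, array):
    """Spatial information is read off from where a symbol sits,
    not from any sentence-like description of its position."""
    for row, cells in enumerate(array):
        for col, cell in enumerate(cells):
            if cell == symbol:
                return (row, col)
    return None


# The array preserves relative spatial relations: "peak" sits above "rock".
assert location_of("peak", image_array)[0] < location_of("rock", image_array)[0]
```

The design point is that no stored sentence says where the peak is; its location is recoverable only from the array’s geometry, while each label still represents discursively.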
See the entry on mental imagery for further discussion.
The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can’t be naturalized), but, rather, how to specify naturalistic content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.[3]
Causal-informational theories (Dretske 1981, 1988, 1995) hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990a) cause it to occur.[4] There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. A mental representation can be caused by something it does not represent, and can represent something that has not caused it, whereas nothing can carry information about something that does not cause it.
The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories (e.g., Fodor 1987, 1990a, 1994) and Teleological Theories (Dretske 1988, 1995, Fodor 1990b, Millikan 1984, Neander 2017, Papineau 1987). The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses (or the property horse).
According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.
See the entries on teleological theories of mental content and causal theories of mental content.
Functional theories (Block 1986, Harman 1973) hold that the content of a mental representation is determined, at least in part, by its (causal, computational, inferential) relations to other mental representations. They differ on whether the relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The non-functional view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).
(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (non-conceptual) content of experiential states. They thus tend to be externalists (see the next section) about phenomenological as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenally-based approach to conceptual content (Horgan and Tienson, Kriegel, Loar, Pitt, Searle, Siewert) also seem to be committed to internalist individuation of the content (if not the reference) of such states.
Persistent indeterminacy problems with causal-informational-teleological theories of content determination have motivated a growing number of (analytic) philosophers to seek a different approach, grounded not in external relations of representational states but in their intrinsic phenomenal properties. This approach has come to be known as the “Phenomenal Intentionality Research Program” (Kriegel 2013), or, simply, “Phenomenal Intentionality.” These philosophers (including Bourget, Kriegel, Loar, Mendelovici, Montague, Pitt, Searle, Smithies (2012, 2013a and b, 2019), Strawson and Siewert) argue that causal-informational-teleological relations cannot yield the fine-grained, determinate content conceptual and perceptual representations possess, and that such content can only be delivered by phenomenal character. The cognitive phenomenology thesis (discussed above) is an important component of this overall approach.
Generally, those who, like informational theorists, think relations to one’s (natural or social) environment are (at least partially) determinative of the content of mental representations are externalists, or anti-individualists (e.g., Burge 1979, 1986b, 2010, McGinn 1977), whereas those who, like some proponents of functional theories, think representational content is determined by an individual’s intrinsic properties alone, are internalists (or individualists; cf. Putnam 1975, Fodor 1981c).[5]
This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviors they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one’s getting up and going to the kitchen.) If, however, a mental representation’s having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic (see Stich 1983, Fodor 1982, 1987, 1994). Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both “narrow” content (determined by intrinsic factors) and “wide” or “broad” content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology. See von Eckardt 1993: 189.)
Narrow content has been variously construed. Putnam (1975), Fodor (1982: 114; 1994: 39ff), and Block (1986: 627ff), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow contents are characterized as functions from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor, such as its syntactic structure or its intramental computational or inferential role.
Burge (1986b) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that a scientific psychology might not need narrow content in order to supply naturalistic (causal) explanations of human cognition and action, since the sorts of cases they were introduced to handle, viz., Twin-Earth cases and Frege cases, are either nomologically impossible or dismissible as exceptions to non-strict psychological laws.
On the most common versions of externalism, though intentional contents are externally determined, mental representations themselves, and the states they partly constitute, remain “in the head.” More radical versions are possible. One might maintain that since thoughts are individuated by their contents, and some thought contents are partially constituted by objects external to the mind, then some thoughts are partly constituted by objects external to the mind. On such a view, a singular thought – i.e., a thought about a particular object – literally contains the object it is about. It is “object-involving.” Such a thought (and the mind that thinks it) thus extends beyond the boundaries of the skull. (This appears to be the view articulated in McDowell 1986, on which there is “interpenetration” between the mind and the world.)
See the entries on externalism about mental content and narrow mental content.
Clark and Chalmers (1998) and Clark (2001, 2005, 2008) have argued that mental representations may exist entirely “outside the head.” On their view, which they call “active externalism,” cognitive processes (e.g., calculation) may be realized in external media (e.g., a calculator or pen and paper), and the “coupled system” of the individual mind and the external workspace ought to count as a cognitive system – a mind – in its own right. Symbolic representations on external media would thus count as mental representations.
Clark and Chalmers’s paper has inspired a burgeoning literature on extended, embodied and interactive cognition. (Menary 2010 is a recent collection of essays. See also the entry on embodied cognition.)
The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind (CTM), claims that the brain is a kind of computer and that mental processes are computations. According to CTM, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states.
CTM develops RTM by attempting to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and other animal cognition, and developing models of cognitive processes implementable in artificial information processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be mental relata of commonsense psychological states, some – so-called “subpersonal” or “sub-doxastic” representations – are not. Though many philosophers believe that CTM can provide the best scientific explanations of cognition and behavior, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of prescientific RTM.
According to Stich’s (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental (Fodor 1981a, Pylyshyn 1984, Von Eckardt 1993). That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content, are central tenets of mainstream cognitive science.
Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the “mental models” of Johnson-Laird 1983, the “retinal arrays,” “primal sketches” and “2½-D sketches” of Marr 1982, the “frames” of Minsky 1974, the “sub-symbolic” structures of Smolensky 1989, the “quasi-pictures” of Kosslyn 1980, and the “interpreted symbol-filled arrays” of Tye 1991 – in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief (Fodor 1975, 2008, Field 1978), visual perception (Marr 1982, Osherson, et al. 1990), rationality (Newell and Simon 1972, Fodor 1975, Johnson-Laird and Wason 1977), language learning and use (Chomsky 1965, Pinker 1989), and musical comprehension (Lerdahl and Jackendoff 1983).
A fundamental disagreement among proponents of CTM concerns the realization of personal-level representations (e.g., thoughts) and processes (e.g., inferences) in the brain. The central debate here is between proponents of Classical Architectures and proponents of Connectionist Architectures.
The classicists (e.g., Turing 1950, Fodor 1975, 2000, 2003, 2008, Fodor and Pylyshyn 1988, Marr 1982, Newell and Simon 1976) hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists (e.g., McCulloch & Pitts 1943, Rumelhart 1989, Rumelhart and McClelland 1986, Smolensky 1988) hold that mental representations are realized by patterns of activation in a network of simple processors (“nodes”) and that mental processes consist of the spreading activation of such patterns. The nodes themselves are, typically, not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are versions of Connectionism – “localist” versions – on which individual nodes are taken to have semantic properties (e.g., Ballard 1986). It is arguable, however, that localist theories are neither definitive nor representative of the connectionist program (Smolensky 1988, 1991, Chalmers 1993).)
Classicists are motivated (in part) by properties thought seems to share with language. Fodor’s Language of Thought Hypothesis (LOTH) (Fodor 1975, 1987, 2008), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. (Cf. also Marr 1982 for an application of the classical approach in scientific psychology.) According to the LOTH, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the properties of productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the LOTH explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration. (See, e.g., Fodor and Lepore 2002.)
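The compositionality claim can be given a toy illustration (the symbols, contents, and combination rule below are invented for the purpose and drawn from no particular theory): the content of a complex representation is computed recursively from the contents of its constituents and their structural configuration, so one finite rule covers every recombination (systematicity) and arbitrarily nested complexes (productivity).

```python
# Toy model of compositional semantics. The finite stock of primitive
# symbols and their contents (all invented for illustration):
PRIMITIVE_CONTENT = {
    "FIDO": "Fido",
    "FELIX": "Felix",
    "LOVES": "loves",
    "JOHN": "John",
    "BELIEVES": "believes that",
}

def content(rep):
    """Content of a representation, computed recursively from the
    contents of its constituents and their structural configuration."""
    if isinstance(rep, str):              # primitive symbol
        return PRIMITIVE_CONTENT[rep]
    subj, relation, obj = rep             # complex: (subject, relation, object)
    return f"{content(subj)} {content(relation)} {content(obj)}"

# Systematicity: a system that can token one structure can token its
# recombinations, with content fixed by the same rule.
print(content(("FIDO", "LOVES", "FELIX")))   # Fido loves Felix
print(content(("FELIX", "LOVES", "FIDO")))   # Felix loves Fido

# Productivity: the recursive rule handles arbitrarily nested complexes.
print(content(("JOHN", "BELIEVES", ("FIDO", "LOVES", "FELIX"))))
```

The point of the sketch is only structural: nothing about the individual contents needs to change for the system to evaluate a new combination, which is what the appeal to constituent structure is meant to explain.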
Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic, and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)
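The local/distributed contrast can be made concrete with a small sketch (the activation vectors below are invented toy encodings, not drawn from any model): in a local scheme each content gets a dedicated, computationally atomic unit, whereas in a distributed scheme each content is a pattern over all the units, no proper part of which is semantically evaluable on its own.

```python
import math

# Local encoding: one dedicated unit per content; each representation
# is computationally atomic. (Toy vectors, invented for illustration.)
local = {
    "dog": [1.0, 0.0, 0.0],
    "cat": [0.0, 1.0, 0.0],
    "cup": [0.0, 0.0, 1.0],
}

# Distributed encoding: each content is a pattern over the same three
# units; no single unit means "dog" by itself.
distributed = {
    "dog": [0.9, 0.2, 0.7],
    "cat": [0.8, 0.3, 0.6],
    "cup": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Similarity of two activation patterns (cosine of their angle)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# A distributed code lets similarity of content show up as overlap of
# pattern; a local code treats all distinct contents as equally unalike.
assert cosine(distributed["dog"], distributed["cat"]) > cosine(distributed["dog"], distributed["cup"])
assert cosine(local["dog"], local["cat"]) == cosine(local["dog"], local["cup"]) == 0.0
```

This is one reason distributed representation is often said to support similarity-based generalization in a way purely local schemes do not.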
Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981c), on the connectionist model it is a matter of evolving distribution of “weights” (strengths) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The connectionist network is “trained up” by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well. (Cf. the sonar example in Churchland 1989.)
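This picture of learning, in which connection weights are nudged slightly on each exposure and no hypothesis about the objects is ever formulated, can be sketched with a minimal single-unit network (the input patterns, labels, and parameters below are invented for illustration):

```python
# Minimal connectionist sketch: one unit learns to sort input patterns
# into two classes purely by adjusting its connection weights under
# repeated exposure. Patterns and labels are invented for illustration.
patterns = [([1.0, 0.0, 1.0], 1), ([1.0, 1.0, 0.0], 1),
            ([0.0, 1.0, 1.0], 0), ([0.0, 0.0, 1.0], 0)]

weights = [0.0, 0.0, 0.0]
bias = 0.0
rate = 0.1          # how far each exposure nudges the weights

def output(x):
    """The unit fires (1) if its weighted input exceeds threshold."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# "Training up": many exposures, each adjusting the weights a little
# in proportion to the error -- no hypotheses are formulated or tested.
for _ in range(100):
    for x, target in patterns:
        error = target - output(x)
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

print([output(x) for x, _ in patterns])   # [1, 1, 0, 0] once trained
```

Note that the many-exposures feature mentioned in the text shows up even here: the weights settle only after repeated passes over the same small set of patterns.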
Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition – situations in which classical systems are relatively “brittle” or “fragile.”
Some philosophers have maintained that connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others (e.g., Fodor & Pylyshyn 1988, Heil 1991, Horgan and Tienson 1996) argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures. (MacDonald & MacDonald 1995 collects the central contemporary papers in the classicist/connectionist debate, and provides useful introductory material as well. See also Von Eckardt 2005.)
Whereas Stich (1983) accepts that mental processes are computational but denies that computations are sequences of mental representations, others accept the notion of mental representation but deny that CTM provides the correct account of mental states and processes.
Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the systems’ components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters. (See also Port and Van Gelder 1995; Clark 1997a, 1997b, 2008.)
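The contrast with discrete symbol manipulation can be made vivid with a toy dynamical model (the equations and parameters below are invented for illustration and are not drawn from Van Gelder): the "cognitive state" is a pair of continuous, mutually determining state variables evolving in time, here integrated by Euler's method.

```python
# Toy dynamical-system sketch: the total state is not a sequence of
# discrete symbolic configurations but continuous state variables,
# each of whose rate of change depends on the other. The equations
# and parameters are invented for illustration.

def step(x, y, dt=0.01):
    """One Euler step of two coupled variables, each damped and driven
    by the other: dx/dt = -x + 0.5*y, dy/dt = -y + 0.5*x + 1."""
    dx = -x + 0.5 * y
    dy = -y + 0.5 * x + 1.0
    return x + dt * dx, y + dt * dy

x, y = 0.0, 0.0
for _ in range(5000):      # evolve the total state through time
    x, y = step(x, y)

# Rather than halting at a final symbol structure, the system settles
# toward a fixed point: solving -x + 0.5*y = 0 and -y + 0.5*x + 1 = 0
# gives (x, y) = (2/3, 4/3).
print(round(x, 2), round(y, 2))
```

On the dynamicist picture, explanation proceeds by characterizing such trajectories and attractors, not by parsing the system's states into rule-governed symbol manipulations.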
Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. CTM attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So CTM involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.
See the entries on the computational theory of mind and connectionism.
To say that a mental object has semantic properties is, paradigmatically, to say that it is about, or true or false of, an object or objects, or that it is true or false simpliciter. Suppose I think that democracy is dying. I am thinking about democracy, and if what I think of it (that it is dying) is true of it, then my thought is true. According to RTM such states are to be explained as relations between agents and mental representations. To think that democracy is dying is to token in some way a mental representation whose content is that democracy is dying. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.
Linguistic acts seem to share such properties with mental states. Suppose I say that democracy is dying. I am talking about democracy, and if what I say of it (that it is dying) is true of it, then my utterance is true. Now, to say that democracy is dying is (in part) to utter a sentence that means that democracy is dying. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express (Grice 1957, Fodor 1978, Schiffer 1972/1988, Searle 1983). On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express. Fodor has famously argued that these states themselves have a language-like structure. (See the entry on the language of thought hypothesis.)
(Others, however, e.g., Davidson (1975, 1982), have suggested that the kind of thought human beings are capable of is not possible without language, so that the dependency might be reversed, or somehow mutual (see also Sellars 1956). (But see Martin 1987 for a defense of the claim that thought is possible without language. See also Chisholm and Sellars 1958.) Schiffer (1987) subsequently despaired of the success of what he calls “Intention-Based Semantics.”)
It is also widely held that in addition to having such properties as reference, truth-conditions and truth – so-called extensional properties – expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions – i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.
artificial intelligence: logic-based | cognition: embodied | cognitive science | concepts | connectionism | consciousness: and intentionality | consciousness: representational theories of | externalism about the mind | folk psychology: as mental simulation | information: semantic conceptions of | intentionality | intentionality: phenomenal | language of thought hypothesis | materialism: eliminative | mental content: causal theories of | mental content: narrow | mental content: teleological theories of | mental imagery | mental representation: in medieval philosophy | mind: computational theory of | neuroscience, philosophy of | perception: the contents of | perception: the problem of | qualia | reference
Thanks to Brad Armour-Garb, Mark Balaguer, Dave Chalmers, Jim Garson,John Heil, Jeff Poland, Bill Robinson, Galen Strawson, Adam Vinuezaand (especially) Barbara Von Eckardt for comments on earlier versionsof this entry.
The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054