The language of thought hypothesis (LOTH) proposes that thinking occurs in a mental language. Often called Mentalese, the mental language resembles spoken language in several key respects: it contains words that can combine into sentences; the words and sentences are meaningful; and each sentence’s meaning depends in a systematic way upon the meanings of its component words and the way those words are combined. For example, there is a Mentalese word whale that denotes whales, and there is a Mentalese word mammal that denotes mammals. These words can combine into a Mentalese sentence whales are mammals, which means that whales are mammals. To believe that whales are mammals is to bear an appropriate psychological relation to this sentence. During a prototypical deductive inference, I might transform the Mentalese sentence whales are mammals and the Mentalese sentence Moby Dick is a whale into the Mentalese sentence Moby Dick is a mammal. As I execute the inference, I enter into a succession of mental states that instantiate those sentences.
LOTH emerged gradually through the writings of Augustine, Boethius, Thomas Aquinas, John Duns Scotus, and many others. William of Ockham offered the first systematic treatment in his Summa Logicae (c. 1323), which meticulously analyzed the meaning and structure of Mentalese expressions. LOTH was quite popular during the late medieval era, but it slipped from view in the sixteenth and seventeenth centuries. From that point through the mid-twentieth century, it played little serious role within theorizing about the mind.
In the 1970s, LOTH underwent a dramatic revival. The watershed was publication of Jerry Fodor’s The Language of Thought (1975). Fodor argued abductively: our current best scientific theories of psychological activity postulate Mentalese; we therefore have good reason to accept that Mentalese exists. Fodor’s analysis exerted tremendous impact. LOTH once again became a focus of discussion, some supportive and some critical. Debates over the existence and nature of Mentalese continue to figure prominently within philosophy and cognitive science. These debates have pivotal importance for our understanding of how the mind works.
What does it mean to posit a mental language? Or to say that thinking occurs in this language? Just how “language-like” is Mentalese supposed to be? To address these questions, we will isolate some core commitments that are widely shared among LOT theorists.
Folk psychology routinely explains and predicts behavior by citing mental states, including beliefs, desires, intentions, fears, hopes, and so on. To explain why Mary walked to the refrigerator, we might note that she believed there was orange juice in the refrigerator and wanted to drink orange juice. Mental states such as belief and desire are called propositional attitudes. They can be specified using locutions of the form
X believes that p.
X desires that p.
X intends that p.
X fears that p.
etc.
By replacing “p” with a sentence, we specify the content of X’s mental state. Propositional attitudes have intentionality or aboutness: they are about a subject matter. For that reason, they are often called intentional states.
The term “propositional attitude” originates with Russell (1918–1919 [1985]) and reflects his own preferred analysis: that propositional attitudes are relations to propositions. A proposition is an abstract entity that determines a truth-condition. To illustrate, suppose John believes that Paris is north of London. Then John’s belief is a relation to the proposition that Paris is north of London, and this proposition is true iff Paris is north of London. Beyond the thesis that propositions determine truth-conditions, there is little agreement about what propositions are like. The literature offers many options, mainly derived from theories of Frege (1892 [1997]), Russell (1918–1919 [1985]), and Wittgenstein (1921 [1922]).
Fodor (1981: 177–203; 1987: 16–26) proposes a theory of propositional attitudes that assigns a central role to mental representations. A mental representation is a mental item with semantic properties (such as a denotation, or a meaning, or a truth-condition, etc.). To believe that p, or hope that p, or intend that p, is to bear an appropriate relation to a mental representation whose meaning is that p. For example, there is a relation belief* between thinkers and mental representations, where the following biconditional is true no matter what English sentence one substitutes for “p”:
X believes that p iff there is a mental representation S such that X believes* S and S means that p.
More generally:

(1) X As that p iff there is a mental representation S such that X bears A* to S and S means that p,

where “A” is replaced by a propositional attitude verb (believes, desires, intends, etc.) and A* is the corresponding psychological relation between thinkers and mental representations.
On this analysis, mental representations are the most direct objects of propositional attitudes. A propositional attitude inherits its semantic properties, including its truth-condition, from the mental representation that is its object.
Proponents of (1) typically invoke functionalism to analyze A*. Each psychological relation A* is associated with a distinctive functional role: a role that S plays within your mental activity just in case you bear A* to S. When specifying what it is to believe* S, for example, we might mention how S serves as a basis for inferential reasoning, how it interacts with desires to produce actions, and so on. Precise functional roles are to be discovered by scientific psychology. Following Schiffer (1981), it is common to use the term “belief-box” as a placeholder for the functional role corresponding to belief*: to believe* S is to place S in your belief box. Similarly for “desire-box”, etc.
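The belief-box metaphor can be made vivid with a toy sketch (purely illustrative; the class and method names below are invented for this example, and real functional roles would be far richer than set membership): a propositional attitude is modeled as a representation’s occupying an attitude-specific box.

```python
# Toy sketch of Schiffer's "belief-box" placeholder (illustrative names,
# not from the literature): to believe* a representation S is to place S
# in one's belief box; similarly for desire. The attitude type is fixed
# by which box (which functional role) the representation occupies.

class Thinker:
    def __init__(self):
        self.boxes = {"belief": set(), "desire": set()}

    def place(self, attitude, representation):
        """Token the representation in the box associated with A*."""
        self.boxes[attitude].add(representation)

    def bears(self, attitude, representation):
        """X As that p iff a representation meaning that p is in A's box."""
        return representation in self.boxes[attitude]

mary = Thinker()
mary.place("belief", "there is orange juice in the refrigerator")
mary.place("desire", "I drink orange juice")

assert mary.bears("belief", "there is orange juice in the refrigerator")
assert not mary.bears("belief", "I drink orange juice")
```

On this toy picture, believing* and desiring* differ not in the representations involved but in which functional role (which box) the representation occupies.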
(1) is compatible with the view that propositional attitudes are relations to propositions. One might analyze the locution “S means that p” as involving a relation between S and a proposition expressed by S. It would then follow that someone who believes* S stands in a psychologically important relation to the proposition expressed by S. Fodor (1987: 17) adopts this approach. He combines a commitment to mental representations with a commitment to propositions. In contrast, Field (2001: 30–82) declines to postulate propositions when analyzing “S means that p”. He posits mental representations with semantic properties, but he does not posit propositions expressed by the mental representations.
The distinction between types and tokens is crucial for understanding (1). A mental representation is a repeatable type that can be instantiated on different occasions. In the current literature, it is generally assumed that a mental representation’s tokens are neurological. For present purposes, the key point is that mental representations are instantiated by mental events. Here we construe the category of events broadly so as to include both occurrences (e.g., I form an intention to drink orange juice) and enduring states (e.g., my longstanding belief that Abraham Lincoln was president of the United States). When mental event e instantiates representation S, we say that S is tokened and that e is a tokening of S. For example, if I believe that whales are mammals, then my belief (a mental event) is a tokening of a mental representation whose meaning is that whales are mammals.
According to Fodor (1987: 17), thinking consists in chains of mental events that instantiate mental representations:

(2) Thought processes consist in causal sequences of mental events during which mental representations are tokened.
A paradigm example is deductive inference: I transition from believing* the premises to believing* the conclusion. The first mental event (my belief* in the premises) causes the second (my belief* in the conclusion).
(1) and (2) fit together naturally as a package that one might call the representational theory of thought (RTT). RTT postulates mental representations that serve as the objects of propositional attitudes and that constitute the domain of thought processes.[1]
RTT as stated requires qualification. There is a clear sense in which you believe that there are no elephants on Jupiter. However, you probably never considered the question until now. It is not plausible that your belief box previously contained a mental representation with the meaning that there are no elephants on Jupiter. Fodor (1987: 20–26) responds to this sort of example by restricting (1) to core cases. Core cases are those where the propositional attitude figures as a causally efficacious episode in a mental process. Your tacit belief that there are no elephants on Jupiter does not figure in your reasoning or decision-making, although it can come to do so if the question becomes salient and you consciously judge that there are no elephants on Jupiter. So long as the belief remains tacit, (1) need not apply. In general, Fodor says, an intentional mental state that is causally efficacious must involve explicit tokening of an appropriate mental representation. In a slogan: “No Intentional Causation without Explicit Representation” (Fodor 1987: 25). Thus, we should not construe (1) as an attempt at faithfully analyzing informal discourse about propositional attitudes. Fodor does not seek to replicate folk psychological categories. He aims to identify mental states that resemble the propositional attitudes adduced within folk psychology, that play roughly similar roles in mental activity, and that can support systematic theorizing.
Dennett’s (1977 [1981]) review of The Language of Thought raises a widely cited objection to RTT:
In a recent conversation with the designer of a chess-playing program I heard the following criticism of a rival program: “it thinks it should get its queen out early”. This ascribes a propositional attitude to the program in a very useful and predictive way, for as the designer went on to say, one can usefully count on chasing that queen around the board. But for all the many levels of explicit representation to be found in that program, nowhere is anything roughly synonymous with “I should get my queen out early” explicitly tokened. The level of analysis to which the designer’s remark belongs describes features of the program that are, in an entirely innocent way, emergent properties of the computational processes that have “engineering reality”. I see no reason to believe that the relation between belief-talk and psychological talk will be any more direct.
In Dennett’s example, the chess-playing machine does not explicitly represent that it should get the queen out early, yet in some sense it acts upon a belief that it should do so. Analogous examples arise for human cognition. For example, we often follow rules of deductive inference without explicitly representing the rules.
To assess Dennett’s objection, we must distinguish sharply between mental representations and rules governing the manipulation of mental representations (Fodor 1987: 25). RTT does not require that every such rule be explicitly represented. Some rules may be explicitly represented—we can imagine a reasoning system that explicitly represents deductive inference rules to which it conforms. But the rules need not be explicitly represented. They may merely be implicit in the system’s operations. Only when consultation of a rule figures as a causally efficacious episode in mental activity does RTT require that the rule be explicitly represented. Dennett’s chess machine explicitly represents chess board configurations and perhaps some rules for manipulating chess pieces. It never consults any rule akin to Get the Queen out early. For that reason, we should not expect that the machine explicitly represents this rule even if the rule is in some sense built into the machine’s programming. Similarly, typical thinkers do not consult inference rules when engaging in deductive inference. So RTT does not demand that a typical thinker explicitly represent inference rules, even if she conforms to them and in some sense tacitly believes that she should conform to them.
Natural language is compositional: complex linguistic expressions are built from simpler linguistic expressions, and the meaning of a complex expression is a function of the meanings of its constituents together with the way those constituents are combined. Compositional semantics describes in a systematic way how semantic properties of a complex expression depend upon semantic properties of its constituents and the way those constituents are combined. For example, the truth-condition of a conjunction is determined as follows: the conjunction is true iff both conjuncts are true.
Historical and contemporary LOT theorists universally agree that Mentalese is compositional:
Compositionality of mental representations (COMP): Mental representations have a compositional semantics: complex representations are composed of simple constituents, and the meaning of a complex representation depends upon the meanings of its constituents together with the constituency structure into which those constituents are arranged.
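The idea of compositional determination can be illustrated with a small sketch (an illustrative toy, not a model of Mentalese): complex expressions are built from simpler ones, and a single recursive procedure computes the truth-value of any complex expression from the values of its atomic constituents plus the way they are combined, e.g., a conjunction is true iff both conjuncts are true.

```python
# Toy compositional semantics: expressions are either atomic sentence
# letters (strings) or tuples whose first element names the mode of
# combination. The truth-value of a complex expression is determined by
# the truth-values of its constituents and its constituency structure.

def evaluate(expr, assignment):
    """Return the truth-value of expr under an assignment of
    truth-values to the atomic sentences."""
    if isinstance(expr, str):          # atomic sentence: look up its value
        return assignment[expr]
    op, *parts = expr
    if op == "and":                    # true iff both conjuncts are true
        return evaluate(parts[0], assignment) and evaluate(parts[1], assignment)
    if op == "or":                     # true iff at least one disjunct is true
        return evaluate(parts[0], assignment) or evaluate(parts[1], assignment)
    if op == "not":                    # true iff the negated sentence is false
        return not evaluate(parts[0], assignment)
    raise ValueError(f"unknown connective: {op}")

v = {"P": True, "Q": False}
assert evaluate(("and", "P", ("not", "Q")), v) is True
assert evaluate(("and", "P", "Q"), v) is False
```

The point of the sketch is structural: once the values of the constituents and the mode of combination are fixed, the value of the whole is fixed, which is just what COMP claims about meaning.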
Clearly, mental language and natural language must differ in many important respects. For example, Mentalese surely does not have a phonology. It may not have a morphology either. Nevertheless, COMP articulates a fundamental point of similarity. Just like natural language, Mentalese contains complex symbols amenable to semantic analysis.
What is it for one representation to be a “constituent” of another? According to Fodor (2008: 108), “constituent structure is a species of the part/whole relation”. Not all parts of a linguistic expression are constituents: “John ran” is a constituent of “John ran and Mary jumped”, but “ran and Mary” is not a constituent because it is not semantically interpretable. The important point for our purposes is that all constituents are parts. When a complex representation is tokened, so are its parts. For example,
intending that \(P \amp Q\) requires having a sentence in your intention box… one of whose parts is a token of the very same type that’s in the intention box when you intend that \(P\), and another of whose parts is a token of the very same type that’s in the intention box when you intend that \(Q\). (Fodor 1987: 139)
More generally: mental event \(e\) instantiates a complex mental representation only if \(e\) instantiates all of the representation’s constituent parts. In that sense, \(e\) itself has internal complexity.
The complexity of mental events figures crucially here, as highlighted by Fodor in the following passage (1987: 136):
Practically everybody thinks that the objects of intentional states are in some way complex… [For example], what you believe when you believe that \(P \amp Q\) is… something composite, whose elements are—as it might be—the proposition that P and the proposition that Q. But the (putative) complexity of the intentional object of a mental state does not, of course, entail the complexity of the mental state itself… LOT claims that mental states—and not just their propositional objects—typically have constituent structure.
Many philosophers, including Frege and Russell, regard propositions as structured entities. These philosophers apply a part/whole model to propositions but not necessarily to mental events during which thinkers entertain propositions. LOTH as developed by Fodor applies the part/whole model to the mental events themselves:
what’s at issue here is the complexity of mental events and not merely the complexity of the propositions that are their intentional objects. (Fodor 1987: 142)
On this approach, a key element of LOTH is the thesis that mental events have semantically relevant complexity.
Contemporary proponents of LOTH endorse RTT+COMP. Historical proponents also believed something in the vicinity (Normore 1990, 2009; Panaccio 1999 [2017]), although of course they did not use modern terminology to formulate their views. We may regard RTT+COMP as a minimalist formulation of LOTH, bearing in mind that many philosophers have used the phrase “language of thought hypothesis” to denote one of the stronger theses discussed below. As befits a minimalist formulation, RTT+COMP leaves unresolved numerous questions about the nature, structure, and psychological role of Mentalese expressions.
In practice, LOT theorists usually adopt a more specific view of the compositional semantics for Mentalese. They claim that Mentalese expressions have logical form (Fodor 2008: 21). More specifically, they claim that Mentalese contains analogues to the familiar logical connectives (and, or, not, if-then, some, all, the). Iterative application of logical connectives generates complex expressions from simpler expressions. The meaning of a logically complex expression depends upon the meanings of its parts and upon its logical structure. Thus, LOT theorists usually endorse a doctrine along the following lines:
Logically structured mental representations (LOGIC): Some mental representations have logical structure. The compositional semantics for these mental representations resembles the compositional semantics for logically structured natural language expressions.
Medieval LOT theorists used syllogistic and propositional logic to analyze the semantics of Mentalese (King 2005; Normore 1990). Contemporary proponents instead use the predicate calculus, which was discovered by Frege (1879 [1967]) and whose semantics was first systematically articulated by Tarski (1933 [1983]). The view is that Mentalese contains primitive words—including predicates, singular terms, and logical connectives—and that these words combine to form complex sentences governed by something like the semantics of the predicate calculus.
The notion of a Mentalese word corresponds roughly to the intuitive notion of a concept. In fact, Fodor (1998: 70) construes a concept as a Mentalese word together with its denotation. For example, a thinker has the concept of a cat only if she has in her repertoire a Mentalese word that denotes cats.
Logical structure is just one possible paradigm for the structure of mental representations. Human society employs a wide range of non-sentential representations, including pictures, maps, diagrams, and graphs. Non-sentential representations typically contain parts arranged into a compositionally significant structure. In many cases, it is not obvious that the resulting complex representations have logical structure. For example, maps do not seem to contain logical connectives (Fodor 1991: 295; Millikan 1993: 302; Pylyshyn 2003: 424–5). Nor is it evident that they contain predicates (Camp 2018; Rescorla 2009c), although some philosophers contend that they do (Blumson 2012; Casati & Varzi 1999; Kulvicki 2015).
Theorists often posit mental representations that conform to COMP but that lack logical structure. The British empiricists postulated ideas, which they characterized in broadly imagistic terms. They emphasized that simple ideas can combine to form complex ideas. They held that the representational import of a complex idea depends upon the representational import of its parts and the way those parts are combined. So they accepted COMP or something close to it (depending on what exactly “constituency” amounts to).[2] They did not say in much detail how compounding of ideas was supposed to work, but imagistic structure seems to be the paradigm in at least some passages. LOGIC plays no significant role in their writings.[3] Partly inspired by the British empiricists, Prinz (2002) and Barsalou (1999) analyze cognition in terms of image-like representations derived from perception. Armstrong (1973) and Braddon-Mitchell and Jackson (2007) propose that propositional attitudes are relations not to mental sentences but to mental maps analogous in important respects to ordinary concrete maps.
One problem facing imagistic and cartographic theories of thought is that propositional attitudes are often logically complex (e.g., John believes that if Plácido Domingo does not sing then either Gustavo Dudamel will conduct or the concert will be cancelled). Images and maps do not seem to support logical operations: the negation of a map is not a map; the disjunction of two maps is not a map; similarly for other logical operations; and similarly for images. Given that images and maps do not support logical operations, theories that analyze thought in exclusively imagistic or cartographic terms will struggle to explain logically complex propositional attitudes.[4]
There is room here for a pluralist position that allows mental representations of different kinds: some with logical structure, some more analogous to pictures, or maps, or diagrams, and so on. The pluralist position is widespread within cognitive science, which posits a range of formats for mental representation (Block 1983; Camp 2009; Johnson-Laird 2004: 187; Kosslyn 1980; Mandelbaum et al. 2022; McDermott 2001: 69; Pinker 2005: 7; Sloman 1978: 144–76). Fodor himself (1975: 184–195) suggests a view on which imagistic mental representations co-exist alongside, and interact with, logically structured Mentalese expressions.
Given the prominent role played by logical structure within historical and contemporary discussion of Mentalese, one might take LOGIC to be definitive of LOTH. One might insist that mental representations comprise a mental language only if they have logical structure. We need not evaluate the merits of this terminological choice.
RTT concerns propositional attitudes and the mental processes in which they figure, such as deductive inference, reasoning, decision-making, and planning. It does not address perception, motor control, imagination, dreaming, pattern recognition, linguistic processing, or any other mental activity distinct from high-level cognition. Hence the emphasis upon a language of thought: a system of mental representations that underlie thinking, as opposed to perceiving, imagining, etc. Nevertheless, talk about a mental language generalizes naturally from high-level cognition to other mental phenomena.
Perception is a good example. The perceptual system transforms proximal sensory stimulations (e.g., retinal stimulations) into perceptual estimates of environmental conditions (e.g., estimates of shapes, sizes, colors, locations, etc.). Helmholtz (1867 [1925]) proposed that the transition from proximal sensory input to perceptual estimates features an unconscious inference, similar in key respects to high-level conscious inference yet inaccessible to consciousness. Helmholtz’s proposal is foundational to contemporary perceptual psychology, which constructs detailed mathematical models of unconscious perceptual inference (Knill & Richards 1996; Rescorla 2015). Fodor (1975: 44–55) argues that this scientific research program presupposes mental representations. The representations participate in unconscious inferences or inference-like transitions executed by the perceptual system.[5]
Navigation is another good example. Tolman (1948) hypothesized that rats navigate using cognitive maps: mental representations that represent the layout of the spatial environment. The cognitive map hypothesis, advanced during the heyday of behaviorism, initially encountered great scorn. It remained a fringe position well into the 1970s, long after the demise of behaviorism. Eventually, mounting behavioral and neurophysiological evidence won it many converts (Gallistel 1990; Gallistel & Matzel 2013; Jacobs & Menzel 2014; O’Keefe & Nadel 1978; Weiner et al. 2011). Although a few researchers remain skeptical (Mackintosh 2002), there is now a broad consensus that mammals (and possibly even some insects) navigate using mental representations of spatial layout. Rescorla (2017b) summarizes the case for cognitive maps and reviews some of their core properties.
To what extent should we expect perceptual representations and cognitive maps to resemble the mental representations that figure in high-level human thought? It is generally agreed that all these mental representations have compositional structure. For example, the perceptual system can bind together a representation of shape and a representation of size to form a complex representation that an object has a certain shape and size; the representational import of the complex representation depends in a systematic way upon the representational import of the component representations. On the other hand, it is not clear that perceptual representations have logical structure (Block 2023: 182–190; Burge 2022: 190–201), including even predicative structure (Burge 2010: 540–544; Burge 2022: 44–45; Fodor 2008: 169–195). Nor is it evident that cognitive maps contain logical connectives or predicates (Rescorla 2009a, 2009b). Perceptual processing and non-human navigation certainly do not seem to instantiate mental processes that would exploit putative logical structure. In particular, they do not seem to instantiate deductive inference.
These observations provide ammunition for pluralism about representational format. Pluralists can posit one system of compositionally structured mental representations for perception, another for navigation, another for high-level cognition, and so on. Different representational systems potentially feature different compositional mechanisms. As indicated in section 1.3, pluralism figures prominently in contemporary cognitive science. Pluralists face some pressing questions. Which compositional mechanisms figure in which psychological domains? Which representational formats support which mental operations? How do different representational formats interface with each other? Further research bridging philosophy and cognitive science is needed to address such questions.
Modern proponents of LOTH typically endorse the computational theory of mind (CTM), which claims that the mind is a computational system. Some authors use the phrase “language of thought hypothesis” so that it definitionally includes CTM as one component.
In a seminal contribution, Turing (1936) introduced what is now called the Turing machine: an abstract model of an idealized computing device. A Turing machine contains a central processor, governed by precise mechanical rules, that manipulates symbols inscribed along a linear array of memory locations. Impressed by the enormous power of the Turing machine formalism, many researchers seek to construct computational models of core mental processes, including reasoning, decision-making, and problem solving. This enterprise bifurcates into two main branches. The first branch is artificial intelligence (AI), which aims to build “thinking machines”. Here the goal is primarily an engineering one—to build a system that instantiates or at least simulates thought—without any pretense at capturing how the human mind works. The second branch, computational psychology, aims to construct computational models of human mental activity. AI and computational psychology both emerged in the 1960s as crucial elements in the new interdisciplinary initiative cognitive science, which studies the mind by drawing upon psychology, computer science (especially AI), linguistics, philosophy, economics (especially game theory and behavioral economics), anthropology, and neuroscience.
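For concreteness, here is a minimal sketch of Turing’s abstract model (a toy rendering with invented helper names, not a full formalization): a finite transition table dictates, for each state and scanned symbol, what to write, which way to move the read/write head, and which state to enter next.

```python
# Minimal Turing-machine sketch: a central processor (state + transition
# table) manipulates symbols on a linear array of memory locations.
# The example machine appends a "1" to a unary numeral, i.e., it
# computes the successor function on unary-encoded numbers.

def run(tape, rules, state="start", head=0, blank="_"):
    """Execute the transition table until the machine halts; return
    the non-blank contents of the tape."""
    cells = dict(enumerate(tape))            # sparse tape: position -> symbol
    while state != "halt":
        symbol = cells.get(head, blank)      # scan the current cell
        state, write, move = rules[(state, symbol)]
        cells[head] = write                  # write, then move one cell
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

successor = {
    ("start", "1"): ("start", "1", "R"),     # scan rightward past the 1s
    ("start", "_"): ("halt", "1", "R"),      # write one more 1, then halt
}
assert run("111", successor) == "1111"
```

Even this trivial machine displays the feature Fodor emphasizes: its behavior is fixed entirely by precise mechanical rules operating over symbols.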
From the 1960s to the early 1980s, computational models offered within psychology were mainly Turing-style models. These models embody a viewpoint known as the classical computational theory of mind (CCTM). According to CCTM, the mind is a computational system similar in important respects to a Turing machine, and certain core mental processes are computations similar in important respects to computations executed by a Turing machine.
CCTM fits together nicely with RTT+COMP. Turing-style computation operates over symbols, so any Turing-style mental computations must operate over mental symbols. The essence of RTT+COMP is postulation of mental symbols. Fodor (1975, 1981) advocates RTT+COMP+CCTM. He holds that certain core mental processes are Turing-style computations over Mentalese expressions.
One can endorse RTT+COMP without endorsing CCTM. By positing a system of compositionally structured mental representations, one does not commit oneself to saying that operations over the representations are computational. Historical LOT theorists could not even formulate CCTM, for the simple reason that the Turing formalism had not been discovered. In the modern era, Harman (1973) and Sellars (1975) endorse something like RTT+COMP but not CCTM. Horgan and Tienson (1996) endorse RTT+COMP+CTM but not CCTM, i.e., classical CTM. They favor a version of CTM grounded in connectionism, an alternative computational framework that differs quite significantly from Turing’s approach. Thus, proponents of RTT+COMP need not accept that mental activity instantiates Turing-style computation.
Fodor (1981) combines RTT+COMP+CCTM with a view that one might call the formal-syntactic conception of computation (FSC). According to FSC, computation manipulates symbols in virtue of their formal syntactic properties but not their semantic properties.
FSC draws inspiration from modern logic, which emphasizes the formalization of deductive reasoning. To formalize, we specify a formal language whose component linguistic expressions are individuated non-semantically (e.g., by their geometric shapes). We describe the expressions as pieces of formal syntax, without considering what if anything the expressions mean. We then specify inference rules in syntactic, non-semantic terms. Well-chosen inference rules will carry true premises to true conclusions. By combining formalization with Turing-style computation, we can build a physical machine that manipulates symbols based solely on the formal syntax of the symbols. If we program the machine to implement appropriate inference rules, then its syntactic manipulations will transform true premises into true conclusions.
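The formal-syntactic strategy can be illustrated with a toy sketch (the function and sentence formats below are invented for this example): an inference rule such as modus ponens is stated entirely in terms of the shapes of strings, yet, being sound, it carries true premises to true conclusions without the mechanism ever consulting what the symbols mean.

```python
# A purely syntactic inference rule: sentences are strings, and the
# rule fires whenever the premises have the right shapes. Nothing in
# the mechanism refers to truth or meaning, yet a sound rule like
# modus ponens preserves truth from premises to conclusion.

def modus_ponens(premise1, premise2):
    """From 'if A then B' and 'A', derive 'B' -- by string shape alone."""
    if premise1.startswith("if ") and " then " in premise1:
        antecedent, consequent = premise1[3:].split(" then ", 1)
        if premise2 == antecedent:
            return consequent
    return None               # the premises do not match the rule's shape

assert modus_ponens("if it rains then the street is wet",
                    "it rains") == "the street is wet"
assert modus_ponens("if it rains then the street is wet",
                    "it snows") is None
```

This is the sense in which a syntax-driven machine can respect semantic relations: the rule is defined over formal shapes, but well-chosen shapes track truth.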
CCTM+FSC says that the mind is a formal syntactic computing system: mental activity consists in computation over symbols with formal syntactic properties; computational transitions are sensitive to the symbols’ formal syntactic properties but not their semantic properties. The key term “sensitive” is rather imprecise, allowing some latitude as to the precise import of CCTM+FSC. Intuitively, the picture is that a mental symbol’s formal syntax rather than its semantics determines how mental computation manipulates it. The mind is a “syntactic engine”.
Fodor (1987: 18–20) argues that CCTM+FSC helps illuminate a crucial feature of cognition: semantic coherence. For the most part, our thinking does not move randomly from thought to thought. Rather, thoughts are causally connected in a way that respects their semantics. For example, deductive inference carries true beliefs to true beliefs. More generally, thinking tends to respect epistemic properties such as warrant and degree of confirmation. In some sense, then, our thinking tends to cohere with semantic relations among thoughts. How is semantic coherence achieved? How does our thinking manage to track semantic properties? CCTM+FSC gives one possible answer. It shows how a physical system operating in accord with physical laws can execute computations that coherently track semantic properties. By treating the mind as a syntax-driven machine, we explain how mental activity achieves semantic coherence. We thereby answer the question: How is rationality mechanically possible?
Fodor’s argument convinced many researchers that CCTM+FSC decisively advances our understanding of the mind’s relation to the physical world. But not everyone agrees that CCTM+FSC adequately integrates semantics into the causal order. A common worry is that the formal syntactic picture veers dangerously close to epiphenomenalism (Block 1990; Kazez 1994). Pre-theoretically, semantic properties of mental states seem highly relevant to mental and behavioral outcomes. For example, if I form an intention to walk to the grocery store, then the fact that my intention concerns the grocery store rather than the post office helps explain why I walk to the grocery store rather than the post office. Burge (2010) and Peacocke (1994) argue that cognitive science theorizing likewise assigns causal and explanatory importance to semantic properties. The worry is that CCTM+FSC cannot accommodate the causal and explanatory importance of semantic properties because it depicts them as causally irrelevant: formal syntax, not semantics, drives mental computation forward. Semantics looks epiphenomenal, with syntax doing all the work (Stich 1983).
Fodor (1990, 1994) expends considerable energy trying to allay epiphenomenalist worries. Advancing a detailed theory of the relation between Mentalese syntax and Mentalese semantics, he insists that FSC can honor the causal and explanatory relevance of semantic properties. Fodor’s treatment is widely regarded as problematic (Arjo 1996; Aydede 1997b, 1998; Aydede & Robbins 2001; Perry 1998; Prinz 2011; Wakefield 2002), although Rupert (2008) and Schneider (2005) espouse somewhat similar positions.
Partly in response to epiphenomenalist worries, some authors recommend that we replace FSC with an alternative semantic conception of computation (Block 1990; Burge 2010: 95–101; Figdor 2009; O’Brien & Opie 2006; Peacocke 1994, 1999; Rescorla 2012a). Semantic computationalists claim that computational transitions are sometimes sensitive to semantic properties, perhaps in addition to syntactic properties. More specifically, semantic computationalists insist that mental computation is sometimes sensitive to semantics. Thus, they reject any suggestion that the mind is a “syntactic engine” or that mental computation is sensitive only to formal syntax.[6] To illustrate, consider Mentalese conjunction. This mental symbol expresses the truth-table for conjunction. According to semantic computationalists, the symbol’s meaning is relevant (both causally and explanatorily) to mechanical operations over it. That the symbol expresses the truth-table for conjunction rather than, say, disjunction influences the course of computation. We should therefore reject any suggestion that mental computation is sensitive to the symbol’s syntactic properties rather than its semantic properties. The claim is not that mental computation explicitly represents semantic properties of mental symbols. All parties agree that, in general, it does not. There is no homunculus inside your head interpreting your mental language. The claim is rather that semantic properties influence how mental computation proceeds. (Compare: the momentum of a baseball thrown at a window causally influences whether the window breaks, even though the window does not explicitly represent the baseball’s momentum.)
Proponents of the semantic conception differ as to how exactly they gloss the core claim that some computations are “sensitive” to semantic properties. They also differ in their stance towards CCTM. Block (1990) and Rescorla (2014a) focus upon CCTM. They argue that a symbol’s semantic properties can impact mechanical operations executed by a Turing-style computational system. In contrast, O’Brien and Opie (2006) favor connectionism over CCTM.
Theorists who reject FSC must reject Fodor’s explanation of semantic coherence. What alternative explanation might they offer? So far, the question has received relatively little attention. Rescorla (2017a) argues that semantic computationalists can explain semantic coherence and simultaneously avoid epiphenomenalist worries by invoking neural implementation of semantically-sensitive mental computations.
Fodor’s exposition sometimes suggests that CTM, CCTM, or CCTM+FSC is definitive of LOTH (1981: 26). Yet not everyone who endorses RTT+COMP endorses CTM, CCTM, or FSC. One can postulate a mental language without agreeing that mental activity is computational, and one can postulate mental computations over a mental language without agreeing that the computations are sensitive only to syntactic properties. For most purposes, it is not important whether we regard CTM, CCTM, or CCTM+FSC as definitive of LOTH. More important is that we track the distinctions among the doctrines.
The literature offers many arguments for LOTH. This section introduces four influential arguments, each of which supports LOTH abductively by citing its explanatory benefits. Section 5 discusses some prominent objections to the four arguments.
Fodor (1975) defends RTT+COMP+CCTM by appealing to scientific practice: our best cognitive science postulates Turing-style mental computations over Mentalese expressions; therefore, we should accept that mental computation operates over Mentalese expressions. Fodor develops his argument by examining detailed case studies, including perception, decision-making, and linguistic comprehension. He argues that, in each case, computation over mental representations plays a central explanatory role. Fodor’s argument was widely heralded as a compelling analysis of then-current cognitive science. The argument from cognitive science practice has subsequently been developed and updated both by Fodor and by other authors, such as Quilty-Dunn, Porot, and Mandelbaum (forthcoming).
When evaluating cognitive science support for LOTH, it is crucial to specify what version of LOTH one has in mind. Specifically, establishing that certain mental processes operate over mental representations is not enough to establish RTT. For example, one might accept that mental representations figure in perception and animal navigation but not in high-level human cognition. Gallistel and King (2009) defend COMP+CCTM+FSC through a number of (mainly non-human) empirical case studies, but they do not endorse RTT. They focus on relatively low-level phenomena, such as animal navigation, without discussing human decision-making, deductive inference, problem solving, or other high-level cognitive phenomena.
During your lifetime, you will only entertain a finite number of thoughts. In principle, though, there are infinitely many thoughts you might entertain. Consider:
Mary gave the test tube to John’s daughter.
Mary gave the test tube to John’s daughter’s daughter.
Mary gave the test tube to John’s daughter’s daughter’s daughter.
⋮
The moral usually drawn is that you have the competence to entertain a potential infinity of thoughts, even though your performance is bounded by biological limits upon memory, attention, processing capacity, and so on. In a slogan: thought is productive.
RTT+COMP straightforwardly explains productivity. We postulate a finite base of primitive Mentalese symbols, along with operations for combining simple expressions into complex expressions. Iterative application of the compounding operations generates an infinite array of mental sentences, each in principle within your cognitive repertoire. By tokening a mental sentence, you entertain the thought expressed by it. This explanation leverages the recursive nature of compositional mechanisms to generate infinitely many expressions from a finite base. It thereby illuminates how finite creatures such as ourselves are able to entertain a potential infinity of thoughts.
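The recursive explanation can be sketched computationally. In the toy Python example below (the compounding operation and symbols are hypothetical, chosen to mirror the test-tube sentences above), a finite base plus one iterable combining operation yields ever-new expressions without bound:

```python
def possessive(expr):
    """A compounding operation: maps an expression to a more complex one."""
    return f"{expr}'s daughter"

def generate(base, depth):
    """Iterate the compounding operation, yielding one new expression per step."""
    results = [base]
    for _ in range(depth):
        results.append(possessive(results[-1]))
    return results

# A finite base ("John") supports arbitrarily many distinct expressions:
print(generate("John", 3))
```

Each further iteration produces a new, longer expression, so the generated set is unbounded even though the base symbols and the combining operations are finite.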
Fodor and Pylyshyn (1988) argue that, since RTT+COMP provides a satisfying explanation for productivity, we have good reason to accept RTT+COMP. A potential worry about this argument is that it rests upon an infinitary competence never manifested within actual performance. One might dismiss the supposed infinitary competence as an idealization that, while perhaps convenient for certain purposes, does not stand in need of explanation.
There are systematic interrelations among the thoughts a thinker can entertain. For example, if you can entertain the thought that John loves Mary, then you can also entertain the thought that Mary loves John. Systematicity looks like a crucial property of human thought and so demands a principled explanation.
RTT+COMP gives a compelling explanation. According to RTT+COMP, your ability to entertain the thought that p hinges upon your ability to bear appropriate psychological relations to a Mentalese sentence S whose meaning is that p. If you are able to think that John loves Mary, then your internal system of mental representations includes a mental sentence John loves Mary, composed of mental words John, loves, and Mary combined in the right way. If you have the capacity to stand in psychological relation A* to John loves Mary, then you also have the capacity to stand in relation A* to a distinct mental sentence Mary loves John. The constituent words John, loves, and Mary make the same semantic contribution to both mental sentences (John denotes John, loves denotes the loving relation, and Mary denotes Mary), but the words are arranged in different constituency structures, so that the sentences have different meanings. Whereas John loves Mary means that John loves Mary, Mary loves John means that Mary loves John. By standing in relation A* to the sentence Mary loves John, you entertain the thought that Mary loves John. Thus, an ability to think that John loves Mary entails an ability to think that Mary loves John. By comparison, an ability to think that John loves Mary does not entail an ability to think that whales are mammals or an ability to think that \(56 + 138 = 194\).
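The compositional point can be made concrete with a toy model (the lexicon and the miniature “world” below are invented for illustration): each word makes the same semantic contribution wherever it occurs, and the arrangement of the words determines the truth conditions.

```python
LOVES = {("john", "mary")}                     # in this toy world, John loves Mary

DENOTATION = {"John": "john", "Mary": "mary"}  # each word's fixed contribution

def true_in_toy_world(sentence):
    """Truth conditions fixed by word denotations plus word arrangement."""
    subject, verb, obj = sentence
    assert verb == "loves"
    return (DENOTATION[subject], DENOTATION[obj]) in LOVES

# Same constituent words, different constituency structures, different meanings:
print(true_in_toy_world(("John", "loves", "Mary")))
print(true_in_toy_world(("Mary", "loves", "John")))
```

Because the lexicon assigns each word a single contribution, any system able to evaluate the first sentence thereby has everything needed to evaluate the second.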
Fodor (1987: 148–153) supports RTT+COMP by citing its ability to explain systematicity. In contrast with the productivity argument, the systematicity argument does not depend upon infinitary idealizations that outstrip finite performance. Note that neither argument provides any direct support for CTM. Neither argument even mentions computation.
There are systematic interrelations among which inferences a thinker can draw. For example, if you can infer p from p and q, then you can also infer m from m and n. The systematicity of thinking requires explanation. Why is it that thinkers who can infer p from p and q can also infer m from m and n?
RTT+COMP+CCTM gives a compelling explanation. During an inference from p and q to p, you transit from believing* mental sentence \(S_1 \amp S_2\) (which means that p and q) to believing* mental sentence \(S_{1}\) (which means that p). According to CCTM, the transition involves symbol manipulation. A mechanical operation detaches the conjunct \(S_{1}\) from the conjunction \(S_1 \amp S_2\). The same mechanical operation is applicable to a conjunction \(S_{3} \amp S_{4}\) (which means that m and n), corresponding to the inference from m and n to m. An ability to execute the first inference entails an ability to execute the second, because drawing the inference in either case corresponds to executing a single uniform mechanical operation. More generally, logical inference deploys mechanical operations over structured symbols, and the mechanical operation corresponding to a given inference pattern (e.g., conjunction introduction, disjunction elimination, etc.) is applicable to any premises with the right logical structure. The uniform applicability of a single mechanical operation across diverse symbols explains inferential systematicity. Fodor and Pylyshyn (1988) conclude that inferential systematicity provides reason to accept RTT+COMP+CCTM.
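A minimal sketch of the uniformity claim (the tuple encoding of Mentalese sentences is an illustrative assumption, not part of CCTM itself): one mechanical operation, defined over symbol structure alone, applies to every conjunction.

```python
def detach_first_conjunct(sentence):
    """Conjunction elimination as a structure-sensitive mechanical operation."""
    connective, left, right = sentence
    assert connective == "AND"
    return left

s1_and_s2 = ("AND", "S1", "S2")   # means: p and q
s3_and_s4 = ("AND", "S3", "S4")   # means: m and n

# One and the same operation handles both inferences:
print(detach_first_conjunct(s1_and_s2))
print(detach_first_conjunct(s3_and_s4))
```

Since a single operation covers every premise with the right logical structure, the ability to draw one conjunction-elimination inference carries with it the ability to draw them all.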
Fodor and Pylyshyn (1988) endorse an additional thesis about the mechanical operations corresponding to logical transitions. In keeping with FSC, they claim that the operations are sensitive to formal syntactic properties but not semantic properties. For example, conjunction elimination responds to Mentalese conjunction as a piece of pure formal syntax, much as a computer manipulates items in a formal language without considering what those items mean.
Semantic computationalists reject FSC. They claim that mental computation is sometimes sensitive to semantic properties. Semantic computationalists can agree that drawing an inference involves executing a mechanical operation over structured symbols, and they can agree that the same mechanical operation uniformly applies to any premises with appropriate logical structure. So they can still explain inferential systematicity. However, they can also say that the postulated mechanical operation is sensitive to semantic properties. For example, they can say that conjunction elimination is sensitive to the meaning of Mentalese conjunction.
In assessing the debate between FSC and semantic computationalism, one must distinguish between logical and non-logical symbols. For present purposes, it is common ground that the meanings of non-logical symbols do not inform logical inference. The inference from \(S_1 \amp S_2\) to \(S_{1}\) features the same mechanical operation as the inference from \(S_{3} \amp S_{4}\) to \(S_{3}\), and this mechanical operation is not sensitive to the meanings of the conjuncts \(S_{1}\), \(S_{2}\), \(S_{3}\), or \(S_{4}\). It does not follow that the mechanical operation is insensitive to the meaning of Mentalese conjunction. The meaning of conjunction might influence how the logical inference proceeds, even though the meanings of the conjuncts do not.
In the 1960s and 1970s, cognitive scientists almost universally modeled mental activity as rule-governed symbol manipulation. In the 1980s, connectionism gained currency as an alternative computational framework. Connectionists employ computational models, called neural networks, that differ quite significantly from Turing-style models. There is no central processor. There are no memory locations for symbols to be inscribed. Instead, there is a network of nodes bearing weighted connections to one another. During computation, waves of activation spread through the network. A node’s activation level depends upon the weighted activations of the nodes to which it is connected. Nodes function somewhat analogously to neurons, and connections between nodes function somewhat analogously to synapses. One should receive the neurophysiological analogy cautiously, as there are numerous important differences between neural networks and actual neural configurations in the brain (Bechtel & Abrahamsen 2002: 341–343; Bermúdez 2010: 237–239; Clark 2014: 87–89; Harnish 2002: 359–362).
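The activation rule just described can be sketched as follows (the sigmoid squashing function and the particular weights are illustrative choices; connectionist models vary on both):

```python
import math

def update_node(upstream_activations, weights):
    """A node's activation: squash the weighted sum of upstream activations."""
    weighted_sum = sum(a * w for a, w in zip(upstream_activations, weights))
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid squashing function

# One downstream node receiving three weighted connections:
print(update_node([0.9, 0.2, 0.5], [1.5, -0.7, 0.3]))
```

Computation in a network consists of many such local updates propagating in waves; there is no central processor and no memory location in which a symbol is inscribed.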
Connectionists raise many objections to the classical computational paradigm (Rumelhart, McClelland, & the PDP Research Group 1986; Horgan & Tienson 1996; McLaughlin & Warfield 1994; Bechtel & Abrahamsen 2002), such as that classical systems are not biologically realistic or that they are unable to model certain psychological tasks. Classicists in turn launch various arguments against connectionism. The most famous arguments showcase productivity, systematicity of thought, and systematicity of thinking. Fodor and Pylyshyn (1988) argue that these phenomena support classical CTM over connectionist CTM.
Fodor and Pylyshyn’s argument hinges on the distinction between eliminative connectionism and implementationist connectionism (cf. Pinker & Prince 1988). Eliminative connectionists advance neural networks as a replacement for the Turing-style formalism. They deny that mental computation consists in rule-governed symbol manipulation. Implementationist connectionists allow that, in some cases, mental computation may instantiate rule-governed symbol manipulation. They advance neural networks not to replace classical computations but rather to model how classical computations are implemented in the brain. The hope is that, because neural network computation more closely resembles actual brain activity, it can illuminate the physical realization of rule-governed symbol manipulation.
Building on Aydede’s (2015) discussion, we may reconstruct Fodor and Pylyshyn’s argument like so:
The argument does not say that neural networks are unable to model systematicity. One can certainly build a neural network that is systematic. For example, one might build a neural network that can represent that John loves Mary only if it can represent that Mary loves John. The problem is that one might just as well build a neural network that can represent that John loves Mary but cannot represent that Mary loves John. Hence, nothing about the connectionist framework per se guarantees systematicity. For that reason, the framework does not explain the nomic necessity of systematicity. It does not explain why all the minds we find are systematic. In contrast, the classical framework mandates systematicity, and so it explains the nomic necessity of systematicity. The only apparent recourse for connectionists is to adopt the classical explanation, thereby becoming implementationist rather than eliminative connectionists.
Fodor and Pylyshyn’s argument has spawned a massive literature, including too many rebuttals to survey here. The most popular responses fall into six categories:
We focus here on (vi).
As discussed in section 1.2, Fodor elucidates constituency structure in terms of part/whole relations. A complex representation’s constituents are literal parts of it. One consequence is that, whenever the complex representation is tokened, so are its constituents. Fodor takes this consequence to be definitive of classical computation. As Fodor and McLaughlin (1990: 186) put it:
for a pair of expression types E1, E2, the first is a Classical constituent of the second only if the first is tokened whenever the second is tokened.
Thus, structured representations have a concatenative structure: each token of a structured representation involves a concatenation of tokens of the constituent representations. Connectionists who deny (vi) espouse a non-concatenative conception of constituency structure, according to which structure is encoded by a suitable distributed representation. Developments of the non-concatenative conception are usually quite technical (Elman 1989; Hinton 1990; Pollack 1990; Smolensky 1990, 1991, 1995; Touretzky 1990). Most models use vector or tensor algebra to define operations over connectionist representations, which are codified by activity vectors across nodes in a neural network. The representations are said to have implicit constituency structure: the constituents are not literal parts of the complex representation, but they can be extracted from the complex representation through suitable computational operations over it.
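A simplified sketch of the tensor-algebra approach (in the spirit of Smolensky’s tensor product proposals, though the tiny orthonormal vectors here are invented for illustration): a sentence is encoded as a sum of role-filler outer products, so the constituents are not literal parts of the code, yet each can be recovered by an unbinding operation.

```python
import numpy as np

# Orthonormal role vectors and filler vectors (illustrative toy codes):
AGENT, PATIENT = np.array([1.0, 0.0]), np.array([0.0, 1.0])
JOHN, MARY = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])

def encode(bindings):
    """Superimpose role-filler outer products into one distributed code."""
    return sum(np.outer(role, filler) for role, filler in bindings)

def unbind(representation, role):
    """Extract the filler bound to a role (exact when roles are orthonormal)."""
    return role @ representation

john_loves_mary = encode([(AGENT, JOHN), (PATIENT, MARY)])

# JOHN is not a literal part of the resulting matrix, yet a computational
# operation over the whole representation recovers it:
print(unbind(john_loves_mary, AGENT))
```

Tokening the complex representation does not involve tokening the constituents, which is precisely why this structure is non-concatenative in Fodor and McLaughlin’s sense.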
Fodor and McLaughlin (1990) grant that distributed representations may have constituency structure “in an extended sense”. But they insist that distributed representations are ill-suited to explain systematicity. They focus especially on the systematicity of thinking, the classical explanation for which postulates mechanical operations that respond to constituency structure. Fodor and McLaughlin argue that the non-concatenative conception cannot replicate the classical explanation and offers no satisfactory substitute for it. Chalmers (1993) and Niklasson and van Gelder (1994) disagree. They contend that a neural network can execute structure-sensitive computations over representations that have non-concatenative constituency structure. They conclude that connectionists can explain productivity and systematicity without retreating to implementationist connectionism.
Aydede (1995, 1997a) agrees that there is a legitimate notion of non-concatenative constituency structure, but he questions whether the resulting models are non-classical. He denies that we should regard concatenative structure as integral to LOTH. According to Aydede, concatenative structure is just one possible physical realization of constituency structure. Non-concatenative structure is another possible realization. We can accept RTT+COMP without glossing constituency structure in concatenative terms. On this view, a neural network whose operations are sensitive to non-concatenative constituency structure may still count as broadly classical and in particular as manipulating Mentalese expressions.
The debate between classical and connectionist CTM is still active, although not as active as during the 1990s. Recent anti-connectionist arguments tend to have a more empirical flavor. For example, Gallistel and King (2009) defend CCTM by canvassing a range of non-human empirical case studies. According to Gallistel and King, the case studies manifest a kind of productivity that CCTM can easily explain but eliminative connectionism cannot.
LOTH has elicited too many objections to cover in a single encyclopedia entry. We will discuss two objections, both alleging that LOTH generates a vicious regress. The first objection emphasizes language learning. The second emphasizes language understanding.
Like many cognitive scientists, Fodor holds that children learn a natural language via hypothesis formation and testing. Children formulate, test, and confirm hypotheses about the denotations of words. For example, a child learning English will confirm the hypothesis that “cat” denotes cats. According to Fodor, denotations are represented in Mentalese. To formulate the hypothesis that “cat” denotes cats, the child uses a Mentalese word cat that denotes cats. It may seem that a regress is now in the offing, sparked by the question: How does the child learn Mentalese? Suppose we extend the hypothesis formation and testing model (henceforth HF) to Mentalese. Then we must posit a meta-language to express hypotheses about denotations of Mentalese words, a meta-meta-language to express hypotheses about denotations of meta-language words, and so on ad infinitum (Atherton and Schwartz 1974: 163).
Fodor responds to the threatened regress by denying that we should apply HF to Mentalese (1975: 65). Children do not test hypotheses about the denotations of Mentalese words. They do not learn Mentalese at all. The mental language is innate.
The doctrine that some concepts are innate was a focal point in the clash between rationalism and empiricism. Rationalists defended the innateness of certain fundamental ideas, such as god and cause, while empiricists held that all ideas derive from sensory experience. A major theme in the 1960s cognitive science revolution was the revival of a nativist picture, inspired by the rationalists, on which many key elements of cognition are innate. Most famously, Chomsky (1965) explained language acquisition by positing innate knowledge about possible human languages. Fodor’s innateness thesis was widely perceived as going far beyond all precedent, verging on the preposterous (P.S. Churchland 1986; Putnam 1988). How could we have an innate ability to represent all the denotations we mentally represent? For example, how could we innately possess a Mentalese word carburetor that represents carburetors?
In evaluating these issues, it is vital to distinguish between learning a concept and acquiring a concept. When Fodor says that a concept is innate, he does not mean to deny that we acquire the concept or even that certain kinds of experience are needed to acquire it. Fodor fully grants that we cannot mentally represent carburetors at birth and that we come to represent them only by undergoing appropriate experiences. He agrees that most concepts are acquired. He denies that they are learned. In effect, he uses “innate” as a synonym for “unlearned” (1975: 96). One might reasonably challenge Fodor’s usage. One might resist classifying a concept as innate simply because it is unlearned. However, that is how Fodor uses the word “innate”. Properly understood, then, Fodor’s position is not as far-fetched as it may sound.[7]
Fodor gives a simple but striking argument that concepts are unlearned. The argument begins from the premise that HF is the only potentially viable model of concept learning. Fodor then argues that HF is not a viable model of concept learning, from which he concludes that concepts are unlearned. He offers various formulations and refinements of the argument over his career. Here is a relatively recent rendition (2008: 139):
Now, according to HF, the process by which one learns C must include the inductive evaluation of some such hypothesis as “The C things are the ones that are green or triangular”. But the inductive evaluation of that hypothesis itself requires (inter alia) bringing the property green or triangular before the mind as such… Quite generally, you can’t represent anything as such and such unless you already have the concept such and such. All that being so, it follows, on pain of circularity, that “concept learning” as HF understands it can’t be a way of acquiring concept C… Conclusion: If concept learning is as HF understands it, there can be no such thing. This conclusion is entirely general; it doesn’t matter whether the target concept is primitive (like green) or complex (like green or triangular).
Fodor’s argument does not presuppose RTT, COMP, or CTM. To the extent that the argument works, it applies to any view on which people have concepts.
If concepts are not learned, then how are they acquired? Fodor offers some preliminary remarks (2008: 144–168), but by his own admission the remarks are sketchy and leave numerous questions unanswered (2008: 144–145). Prinz (2011) critiques Fodor’s positive treatment of concept acquisition.
The most common rejoinder to Fodor’s innateness argument is to deny that HF is the only viable model of concept learning. The rejoinder acknowledges that concepts are not learned through hypothesis testing but insists that they are learned through other means. Three examples:
A lot depends here upon what counts as “learning” and what does not, a question that seems difficult to adjudicate. A closely connected question is whether concept acquisition is a rational process or a mere causal process. To the extent that acquiring some concept is a rational achievement, we will want to say that one learned the concept. To the extent that acquiring the concept is a mere causal process (more like catching a cold than confirming a hypothesis), we will feel less inclined to say that genuine learning took place (Fodor 1981: 275).
These issues lie at the frontier of psychological and philosophical research. The key point for present purposes is that there are two options for halting the regress of language learning: we can say that thinkers acquire concepts but do not learn them; or we can say that thinkers learn concepts through some means other than hypothesis testing. Of course, it is not enough just to note that the two options exist. Ultimately, one must develop one’s favored option into a compelling theory. But there is no reason to think that doing so would reinitiate the regress. In any event, explaining concept acquisition is an important task facing any theorist who accepts that we have concepts, whether or not the theorist accepts LOTH. Thus, the learning regress objection is best regarded not as posing a challenge specific to LOTH but rather as highlighting a more widely shared theoretical obligation: the obligation to explain how we acquire concepts.
For further discussion, see the entry on innateness. See also the exchange between Cowie (1999) and Fodor (2001).
What is it to understand a natural language word? On a popular picture, understanding a word requires that you mentally represent the word’s denotation. For example, understanding the word “cat” requires representing that it denotes cats. LOT theorists will say that you use Mentalese words to represent denotations. The question now arises what it is to understand a Mentalese word. If understanding the Mentalese word requires representing that it has a certain denotation, then we face an infinite regress of meta-languages (Blackburn 1984: 43–44).
The standard response is to deny that ordinary thinkers represent Mentalese words as having denotations (Bach 1987; Fodor 1975: 66–79). Mentalese is not an instrument of communication. Thinking is not “talking to oneself” in Mentalese. A typical thinker does not represent, perceive, interpret, or reflect upon Mentalese expressions. Mentalese serves as a medium within which her thought occurs, not an object of interpretation. We should not say that she “understands” Mentalese in the same way that she understands a natural language.
There is perhaps another sense in which the thinker “understands” Mentalese: her mental activity coheres with the meanings of Mentalese words. For example, her deductive reasoning coheres with the truth-tables expressed by Mentalese logical connectives. More generally, her mental activity is semantically coherent. To say that the thinker “understands” Mentalese in this sense is not to say that she represents Mentalese denotations. Nor is there any evident reason to suspect that explaining semantic coherence will ultimately require us to posit mental representation of Mentalese denotations. So there is no regress of understanding.
For further criticism of this regress argument, see the discussions of Knowles (1998) and Laurence and Margolis (1997).[8]
Naturalism is a movement that seeks to ground philosophical theorizing in the scientific enterprise. As so often in philosophy, different authors use the term “naturalism” in different ways. Usage within philosophy of mind typically connotes an effort to depict mental states and processes as denizens of the physical world, with no irreducibly mental entities or properties allowed. In the modern era, philosophers have often recruited LOTH to advance naturalism. Indeed, LOTH’s supposed contribution to naturalism is frequently cited as a significant consideration in its favor. One example is Fodor’s use of CCTM+FSC to explain semantic coherence. The other main example turns upon the problem of intentionality.
How does intentionality arise? How do mental states come to be about anything, or to have semantic properties? Brentano (1874 [1973: 97]) maintained that intentionality is a hallmark of the mental as opposed to the physical: “The reference to something as an object is a distinguishing characteristic of all mental phenomena. No physical phenomenon exhibits anything similar”. In response, contemporary naturalists seek to naturalize intentionality. They want to explain in naturalistically acceptable terms what makes it the case that mental states have semantic properties. In effect, the goal is to reduce the intentional to the non-intentional. Beginning in the 1980s, philosophers have offered various proposals about how to naturalize intentionality. Most proposals emphasize causal or nomic links between mind and world (Aydede & Güzeldere 2005; Dretske 1981; Fodor 1987, 1990; Stalnaker 1984), sometimes also invoking teleological factors (Millikan 1984, 1993; Neander 2017; Papineau 1987; Dretske 1988) or historical lineages of mental states (Devitt 1995; Field 2001). Another approach, functional role semantics, emphasizes the functional role of a mental state: the cluster of causal or inferential relations that the state bears to other mental states. The idea is that meaning emerges at least partly through these causal and inferential relations. Some functional role theories cite causal relations to the external world (Block 1987; Loar 1982), and others do not (Cummins 1989).
Even the best developed attempts at naturalizing intentionality, such as Fodor’s (1990) version of the nomic strategy, face serious problems that no one knows how to solve (M. Greenberg 2014; Loewer 1997). Partly for that reason, the flurry of naturalizing attempts abated in the 2000s. Burge (2010: 298) reckons that the naturalizing project is not promising and that current proposals are “hopeless”. He agrees that we should try to illuminate representationality by limning its connections to the physical, the causal, the biological, and the teleological. But he insists that illumination need not yield a reduction of the intentional to the non-intentional.
LOTH is neutral as to the naturalization of intentionality. An LOT theorist might attempt to reduce the intentional to the non-intentional. Alternatively, she might dismiss the reductive project as impossible or pointless. Assuming she chooses the reductive route, LOTH provides guidance regarding how she might proceed. According to RTT,
X A’s that p iff there is a mental representation S such that X bears A* to S and S means that p.
The task of elucidating “X A’s that p” in naturalistically acceptable terms factors into two sub-tasks (Field 2001: 33):
As we have seen, functionalism helps with (a). Moreover, COMP provides a blueprint for tackling (b). We can first delineate a compositional semantics describing how S’s meaning depends upon semantic properties of its component words and upon the compositional import of the constituency structure into which those words are arranged. We can then explain in naturalistically acceptable terms why the component words have the semantic properties that they have and why the constituency structure has the compositional import that it has.
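The two-step structure of sub-task (b) can be illustrated with a toy compositional semantics (the lexicon and miniature “world” are invented): the compositional clause says how a sentence’s truth conditions derive from its words’ denotations plus its subject-predicate structure, while the lexicon’s entries are exactly what a further naturalizing story would have to explain.

```python
WORLD = {"whale": {"moby"}, "mammal": {"moby", "rex"}}

LEXICON = {
    "Moby": "moby",             # names denote individuals
    "Rex": "rex",
    "whale": WORLD["whale"],    # predicates denote sets of individuals
    "mammal": WORLD["mammal"],
}

def evaluate(sentence):
    """Compositional clause: a subject-predicate sentence is true iff the
    individual the subject denotes belongs to the set the predicate denotes."""
    subject, predicate = sentence
    return LEXICON[subject] in LEXICON[predicate]

print(evaluate(("Moby", "whale")))
print(evaluate(("Rex", "whale")))
```

The clause explains complex meanings in terms of word meanings and structure; it is silent on why the primitive entries denote what they do, which is the residue the naturalizing strategies discussed below must address.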
How much does LOTH advance the naturalization of intentionality? Our compositional semantics for Mentalese may illuminate how the semantic properties of a complex expression depend upon the semantic properties of primitive expressions, but it says nothing about how primitive expressions get their semantic properties in the first place. Brentano’s challenge (How could intentionality arise from purely physical entities and processes?) remains unanswered. To meet the challenge, we must invoke naturalizing strategies that go well beyond LOTH itself, such as the causal or nomic strategies mentioned above. Those naturalizing strategies are not specifically linked to LOTH and can usually be tailored to semantic properties of neural states rather than semantic properties of Mentalese expressions. Thus, it is debatable how much LOTH ultimately helps us naturalize intentionality. Naturalizing strategies orthogonal to LOTH seem to do the heavy lifting.
How are Mentalese expressions individuated? Since Mentalese expressions are types, answering this question requires us to consider the type/token relation for Mentalese. We want to fill in the schema
e and e* are tokens of the same Mentalese type iff R(e, e*).
What should we substitute for R(e, e*)? The literature typically focuses on primitive symbol types, and we will follow suit here.
It is almost universally agreed among contemporary LOT theorists that Mentalese tokens are neurophysiological entities of some sort. One might therefore hope to individuate Mentalese types by citing neural properties of the tokens. Drawing R(e, e*) from the language of neuroscience induces a theory along the following lines:
Neural individuation: e and e* are tokens of the same primitive Mentalese type iff e and e* are tokens of the same neural type.
This schema leaves open how neural types are individuated. We may bypass that question here, because neural individuation of Mentalese types finds no proponents in the contemporary literature. The main reason is that it conflicts with multiple realizability: the doctrine that a single mental state type can be realized by physical systems that are wildly heterogeneous when described in physical, biological, or neuroscientific terms. Putnam (1967) introduced multiple realizability as evidence against the mind/brain identity theory, which asserts that mental state types are brain state types. Fodor (1975: 13–25) further developed the multiple realizability argument, presenting it as foundational to LOTH. Although the multiple realizability argument has subsequently been challenged (Polger 2004), LOT theorists widely agree that we should not individuate Mentalese types in neural terms.
The most popular strategy is to individuate Mentalese types functionally:
Functional individuation: e and e* are tokens of the same primitive Mentalese type iff e and e* have the same functional role.
Field (2001: 56–67), Fodor (1994: 105–109), and Stich (1983: 149–151) pursue functional individuation. They specify functional roles using a Turing-style computational formalism, so that "functional role" becomes something like "computational role", i.e., role within mental computation.
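The contrast between neural and functional individuation can be pictured as a choice of the relation R in the schema above: tokens are same-typed iff R holds between them. In the following sketch, each candidate R is induced by a key function, and the token attributes and their values are invented for illustration; the third token is a stand-in for a distinct physical realizer of the same computational role (multiple realizability).

```python
from itertools import groupby

# Invented tokens: t3 has a different neural realizer but the same
# computational role as t1 and t2.
tokens = [
    {"id": "t1", "neural": "n17", "role": "conj-intro/elim"},
    {"id": "t2", "neural": "n17", "role": "conj-intro/elim"},
    {"id": "t3", "neural": "n42", "role": "conj-intro/elim"},
]

def types_under(key, toks):
    """Partition tokens into types under R(e, e*) iff key(e) == key(e*)."""
    toks = sorted(toks, key=key)
    return [[t["id"] for t in group] for _, group in groupby(toks, key=key)]

# Neural individuation: t3's distinct realizer puts it in a separate type.
print(types_under(lambda t: t["neural"], tokens))  # [['t1', 't2'], ['t3']]

# Functional individuation: all three share a role, hence a single type.
print(types_under(lambda t: t["role"], tokens))    # [['t1', 't2', 't3']]
```

The same tokens, partitioned under different substitutions for R, yield different Mentalese types; that is why the choice of R carries theoretical weight.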
Functional role theories divide into two categories: molecular and holist. Molecular theories isolate privileged canonical relations that a symbol bears to other symbols. Canonical relations individuate the symbol, but non-canonical relations do not. For example, one might individuate Mentalese conjunction solely through the introduction and elimination rules governing conjunction while ignoring any other computational rules. If we say that a symbol's "canonical functional role" is constituted by its canonical relations to other symbols, then we can offer the following theory:
Molecular functional individuation: e and e* are tokens of the same primitive Mentalese type iff e and e* have the same canonical functional role.
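The conjunction example can be made concrete. On the molecular view, whatever symbol is governed by exactly the following two rules counts as Mentalese conjunction, regardless of its other computational relations. The tuple encoding of sentences is an invented toy representation.

```python
def conj_intro(p, q):
    """Introduction rule: from p and from q, infer (p AND q)."""
    return ("AND", p, q)

def conj_elim(s):
    """Elimination rule: from (p AND q), infer p and infer q."""
    tag, p, q = s
    assert tag == "AND"
    return p, q

# Canonical role: these two rules alone individuate the symbol "AND".
s = conj_intro("whales_are_mammals", "Moby_Dick_is_a_whale")
print(conj_elim(s))  # ('whales_are_mammals', 'Moby_Dick_is_a_whale')
```

For logical connectives the canonical rules are easy to state; the demarcation problem discussed next is that most symbols have no comparably privileged subset of rules.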
One problem facing molecular individuation is that, aside from logical connectives and a few other special cases, it is difficult to draw any principled demarcation between canonical and non-canonical relations (Schneider 2011: 106). Which relations are canonical for SOFA?[9] Citing the demarcation problem, Schneider espouses a holist approach that individuates mental symbols through total functional role, i.e., every single aspect of the role that a symbol plays within mental activity:
Holist functional individuation: e and e* are tokens of the same primitive Mentalese type iff e and e* have the same total functional role.
Holist individuation is very fine-grained: the slightest difference in total functional role entails that different types are tokened. Since different thinkers will always differ somewhat in their mental computations, it now looks like two thinkers will never share the same mental language. This consequence is worrisome, for two reasons emphasized by Aydede (1998). First, it violates the plausible publicity constraint that propositional attitudes are in principle shareable. Second, it apparently precludes interpersonal psychological explanations that cite Mentalese expressions. Schneider (2011: 111–158) addresses both concerns, arguing that they are misdirected.
A crucial consideration when individuating mental symbols is what role to assign to semantic properties. Here we may usefully compare Mentalese with natural language. It is widely agreed that natural language words do not have their denotations essentially. The English word "cat" denotes cats, but it could just as well have denoted dogs, or the number 27, or anything else, or nothing at all, if our linguistic conventions had been different. Virtually all contemporary LOT theorists hold that a Mentalese word likewise does not have its denotation essentially. The Mentalese word CAT denotes cats, but it could have had a different denotation had it borne different causal relations to the external world or had it occupied a different role in mental activity. In that sense, CAT is a piece of formal syntax. Fodor's early view (1981: 225–253) was that a Mentalese word could have had a different denotation but not an arbitrarily different denotation: CAT could not have denoted just anything (it could not have denoted the number 27), but it could have denoted some other animal species had the thinker suitably interacted with that species rather than with cats. Fodor eventually (1994, 2008) embraces the stronger thesis that a Mentalese word bears an arbitrary relation to its denotation: CAT could have had any arbitrarily different denotation. Most contemporary theorists agree (Egan 1992: 446; Field 2001: 58; Harnad 1994: 386; Haugeland 1985: 91, 117–123; Pylyshyn 1984: 50).
The historical literature on LOTH suggests an alternative semantically permeated view: Mentalese words are individuated partly through their denotations. The Mentalese word CAT is not a piece of formal syntax subject to reinterpretation. It could not have denoted another species, or the number 27, or anything else. It denotes cats by its inherent nature. From a semantically permeated viewpoint, a Mentalese word has its denotation essentially. Thus, there is a profound difference between natural language and mental language. Mental words, unlike natural language words, bring with them one fixed semantic interpretation. The semantically permeated approach is present in Ockham, among other medieval LOT theorists (Normore 2003, 2009). In light of the problems facing neural and functional individuation, Aydede (2005) recommends that we consider taking semantics into account when individuating Mentalese expressions. Rescorla (2012b) concurs, defending a semantically permeated approach as applied to at least some mental representations. He proposes that certain mental computations operate over mental symbols with essential semantic properties, and he argues that the proposal fits well with many sectors of cognitive science.[10]
A recurring complaint about the semantically permeated approach is that inherently meaningful mental representations seem like highly suspect entities (Putnam 1988: 21). How could a mental word have one fixed denotation by its inherent nature? What magic ensures the necessary connection between the word and the denotation? These worries diminish in force if one keeps firmly in mind that Mentalese words are types. Types are abstract entities corresponding to a scheme for classifying, or type-identifying, tokens. To ascribe a type to a token is to type-identify the token as belonging to some category. Semantically permeated types correspond to a classificatory scheme that takes semantics into account when categorizing tokens. As Burge emphasizes (2007: 302), there is nothing magical about semantically-based classification. On the contrary, both folk psychology and cognitive science routinely classify mental events based at least partly upon their semantic properties.
A simplistic implementation of the semantically permeated approach individuates symbol tokens solely through their denotations:
Denotational individuation: e and e* are tokens of the same primitive Mentalese type iff e and e* have the same denotation.
As Aydede (2000) and Schneider (2011) emphasize, denotational individuation is unsatisfying. Co-referring words may play significantly different roles in mental activity. Frege's (1892 [1997]) famous Hesperus-Phosphorus example illustrates: one can believe that Hesperus is Hesperus without believing that Hesperus is Phosphorus. As Frege put it, one can think about the same denotation "in different ways", or "under different modes of presentation". Different modes of presentation have different roles within mental activity, implicating different psychological explanations. Thus, a semantically permeated individuative scheme adequate for psychological explanation must be finer-grained than denotational individuation allows. It must take mode of presentation into account. But what is it to think about a denotation "under the same mode of presentation"? How are "modes of presentation" individuated? Ultimately, semantically permeated theorists must grapple with these questions. Rescorla (2020) offers some suggestions about how to proceed.[11]
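The Frege case shows concretely why denotational individuation is too coarse. In the following sketch, the denotation table and the thinker's acceptance profile (a crude stand-in for the different roles the two words play in her mental activity) are invented for illustration.

```python
# HESPERUS and PHOSPHORUS co-refer: both denote Venus.
denotation = {"HESPERUS": "venus", "PHOSPHORUS": "venus"}

def same_type_denotational(w1, w2):
    """Denotational individuation: same type iff same denotation."""
    return denotation[w1] == denotation[w2]

# A thinker may accept "Hesperus is Hesperus" while rejecting
# "Hesperus is Phosphorus" -- the two words differ in functional role.
accepts = {
    ("HESPERUS", "HESPERUS"): True,
    ("HESPERUS", "PHOSPHORUS"): False,
}

print(same_type_denotational("HESPERUS", "PHOSPHORUS"))  # True: typed together
print(accepts[("HESPERUS", "HESPERUS")],
      accepts[("HESPERUS", "PHOSPHORUS")])               # True False: roles differ
```

Denotational individuation collapses the two words into one type, yet psychological explanation must distinguish them; hence the need for a finer-grained scheme sensitive to mode of presentation.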
Chalmers (2012) complains that semantically permeated individuation sacrifices significant virtues that made LOTH attractive in the first place. LOTH promised to advance naturalism by grounding cognitive science in non-representational computational models. Representationally-specified computational models seem like a significant retrenchment from these naturalistic ambitions. For example, semantically permeated theorists cannot accept the FSC explanation of semantic coherence, because they do not postulate formal syntactic types manipulated during mental computation.
How compelling one finds naturalistic worries about semantically permeated individuation will depend on how impressive one finds the naturalistic contributions made by formal mental syntax. We saw earlier that FSC arguably engenders a worrisome epiphenomenalism. Moreover, the semantically permeated approach in no way precludes a naturalistic reduction of intentionality. It merely precludes invoking formal syntactic Mentalese types while executing such a reduction. For example, proponents of the semantically permeated approach can still pursue the causal or nomic naturalizing strategies discussed in section 7. Nothing about either strategy presupposes formal syntactic Mentalese types. Thus, it is not clear that replacing a formal syntactic individuative scheme with a semantically permeated scheme significantly impedes the naturalistic endeavor.
No one has yet provided an individuative scheme for Mentalese that commands widespread assent. The topic demands continued investigation, because LOTH remains highly schematic until its proponents clarify sameness and difference of Mentalese types.
artificial intelligence | belief | Church-Turing Thesis | cognitive science | computation: in physical systems | concepts | connectionism | consciousness: representational theories of | folk psychology: as a theory | functionalism | intentionality | mental content: causal theories of | mental imagery | mental representation | mind: computational theory of | naturalism | physicalism | propositional attitude reports | qualia | reasoning: automated | Turing, Alan | Turing machines
I owe a profound debt to Murat Aydede, author of the previous entry on the same topic. His exposition hugely influenced my work on the entry, figuring indispensably as a springboard, a reference, and a standard of excellence. Some of my formulations in the introduction and in sections 1.1, 2, 3, 4.3, 5, 6.1, and 7 closely track formulations from the previous entry. Section 5's discussion of connectionism is directly based on the previous entry's treatment. I also thank Calvin Normore, Melanie Schoenberg, and the Stanford Encyclopedia editors for helpful comments.
Copyright © 2023 by
Michael Rescorla <rescorla@ucla.edu>
The Stanford Encyclopedia of Philosophy is copyright © 2024 by The Metaphysics Research Lab, Department of Philosophy, Stanford University