Stanford Encyclopedia of Philosophy

Causal Theories of Mental Content

First published Thu Feb 4, 2010; substantive revision Thu Aug 12, 2021

Causal theories of mental content attempt to explain how thoughts can be about things. They attempt to explain how one can think about, for example, dogs. These theories begin with the idea that there are mental representations and that thoughts are meaningful in virtue of a causal connection between a mental representation and some part of the world that is represented. In other words, the point of departure for these theories is that thoughts of dogs are about dogs because dogs cause the mental representations of dogs.


1. Introduction

Content is what is said, asserted, thought, believed, desired, hoped for, etc. Mental content is the content had by mental states and processes. Causal theories of mental content attempt to explain what gives thoughts, beliefs, desires, and so forth their contents. They attempt to explain how thoughts can be about things.[1]

2. Some Historical and Theoretical Context

Although one might find precursors to causal theories of mental content scattered throughout the history of philosophy, the current interest in the topic was spurred, in part, by perceived inadequacies in “similarity” or “picture” theories of mental representation. Where meaning and representation are asymmetric relations—that is, a syntactic item “X” might mean or represent X, but X does not (typically) mean or represent “X”—similarity and resemblance are symmetric relations. Dennis Stampe (1977), who played an important role in initiating contemporary interest in causal theories, drew attention to related problems. Consider a photograph of one of two identical twins. What makes it a photo of Judy, rather than her identical twin Trudy? By assumption, it cannot be the similarity of the photo to one twin rather than the other, since the twins are identical. Moreover, one can have a photo of Judy even though the photo happens not to look very much like her at all. What apparently makes a photo of Judy a photo of Judy is that she was causally implicated, in the right way, in the production of the photo. Reinforcing the hunch that causation could be relevant to meaning and representation is the observation that there is a sense in which the number of rings in a tree stump represents the age of the tree when it died and that the presence of smoke means fire. The history of contemporary developments of causal theories of mental content consists largely of specifying what it is for something to be causally implicated in the right way in the production of meaning and refining the sense in which smoke represents fire to the sense in which a person’s thoughts, sometimes at least, represent the world.

If one wanted to trace a simple historical arc for recent causal theories, one would have to begin with the seminal 1977 paper by Dennis Stampe, “Toward a Causal Theory of Linguistic Representation.” Among the many important features of this paper is its having set much of the conceptual and theoretical stage to be described in greater detail below. It drew a contrast between causal theories and “picture theories” that try to explain representational content by appeal to some form of similarity between a representation and the thing represented. It also drew attention to the problem of distinguishing the content-determining causes of a representation from adventitious non-content-determining causes. So, for example, one will want “X” to mean dog because dogs cause “X”s, but one does not want “X” to mean blow-to-the-head, even though blows to the head might cause the occurrence of an “X”. (Much more of this will be described below.) Finally, it also provided some attempts to address this problem, such as an appeal to the function a thing might have.

Fred Dretske’s 1981 Knowledge and the Flow of Information offered a much expanded treatment of a type of causal theory. Rather than basing semantic content on a causal connection per se, Dretske began with a type of informational connection derived from the mathematical theory of information. This has led some to refer to Dretske’s theory as “information semantics”. Dretske also appealed to the notion of function in an attempt to distinguish content-determining causes from adventitious non-content-determining causes. This has led some to refer to Dretske’s theory as a “teleoinformational” theory or a “teleosemantic” theory. Dretske’s 1988 book, Explaining Behavior, further refined his earlier treatment.

Jerry Fodor’s 1984 “Semantics, Wisconsin Style” gave the problem of distinguishing content-determining causes from non-content-determining causes its best-known guise as “the disjunction problem”. How can a causal theory of content say that “X” has the non-disjunctive content dog, rather than the disjunctive content dog-or-blow-to-the-head, when both dogs and blows to the head cause instances of “X”? By 1987, in Psychosemantics, Fodor published his first attempt at an alternative method of solving the disjunction problem, the Asymmetric (Causal) Dependency Theory. This theory was further refined for the title essay in Fodor’s 1990 book A Theory of Content and Other Essays.

Although these causal theories have subsequently spawned a significant critical literature, other related causal theories have also been advanced. Two of these are teleosemantic theories that are sometimes contrasted with causal theories. (Cf., e.g., Papineau (1984), Millikan (1989), and the entry on teleological theories of mental content.) Other more purely causal theories are Dan Lloyd’s (1987, 1989) Dialectical Theory of Representation, Robert Rupert’s (1999) Best Test Theory (see section 3.5 below), Marius Usher’s (2001) Statistical Referential Theory, and Dan Ryder’s (2004) SINBAD neurosemantics.

Causal theories of mental content are typically developed in the context of four principal assumptions. First, they typically presuppose that there is a difference between derived and underived meaning.[2] Normal humans can use one thing, such as “%”, to mean percent. They can use certain large red octagons to mean that one is to stop at an intersection. In such cases, there are collective arrangements that confer relatively specific meanings on relatively specific objects. In the case of human minds, however, it is proposed that thoughts can have the meanings or contents they do without recourse to collective arrangements. It is possible to think about percentages or ways of negotiating intersections prior to collective social arrangements. It, therefore, appears that our thoughts do not acquire their content in the way that “%” and certain large red octagons do. Causal theories of mental content presuppose that mental contents are underived, hence attempt to explain how underived meaning arises.

Second, causal theories of mental content distinguish what has come to be known as natural meaning and non-natural meaning.[3] Cases where an object or event X has natural meaning are those in which, given certain background conditions, the existence or occurrence of X “entails” the existence or occurrence of some state of affairs. If smoke in the unspoiled forest naturally means fire then, given the presence of smoke, there was fire. Under the relevant background conditions, the effect indicates or naturally means the cause. An important feature of natural meaning is that it does not generate falsity. If smoke naturally means fire, then there must really be a fire. By contrast, many non-naturally meaningful things can be false. Sentences, for example, can be meaningful and false. The utterance “Colleen currently has measles” means that Colleen currently has measles but does not entail that Colleen currently has measles in the way that Colleen’s spots do entail that she has measles. Like sentences, thoughts are also meaningful, but often false. Thus, it is generally supposed that mental content must be a form of non-natural underived meaning.[4]

Third, these theories assume that it is possible to explain the origin of underived content without appeal to other semantic or contentful notions. So, it is assumed that there is more to the project than simply saying that one’s thoughts mean that Colleen currently has the measles because one’s thoughts are about Colleen currently having the measles. Explicating meaning in terms of aboutness, or aboutness in terms of meaning, or either in terms of some still further semantic notion, does not go as far as is commonly desired by those who develop causal theories of mental content. To note some additional terminology, it is often said that causal theories of mental content attempt to naturalize non-natural, underived meaning. To put the matter less technically, one might say that causal theories of mental content presuppose that it is possible for a purely physical system to bear underived content. Thus, they presuppose that if one were to build a genuinely thinking robot or computer, one would have to design it in such a way that some of its internal components would bear non-natural, underived content in virtue of purely physical conditions. To get a feel for the difference between a naturalized theory and an unnaturalized theory of content, one might note the theory developed by Grice (1948). Grice developed an unnaturalized theory. Speaking of linguistic items, Grice held that ‘Speaker S non-naturally means something by “X”’ is roughly equivalent to ‘S intended the utterance of “X” to produce some effect in an audience by means of the recognition of this intention.’ Grice did not explicate the origin of the mental content of a speaker’s intentions or an audience’s recognition, hence he did not attempt to naturalize the meaning of linguistic items.

Fourth, it is commonly presupposed that naturalistic analyses of non-natural, underived meanings will apply, in the first instance, to the contents of thought. The physical items “X” that are supposed to be bearers of causally determined content will, therefore, be something like the firings of a particular neuron or set of neurons. These contents of thoughts are said to be captured in what is sometimes called a “language of thought” or “mentalese.” The contents of items in natural languages, such as English, Japanese, and French, will then be given a separate analysis, presumably in terms of a naturalistic account of non-natural derived meanings. It is, of course, possible to suppose that it is natural language, or some other system of communication, that first develops content, which can then serve as a basis upon which to provide an account of mental content. Among the reasons that threaten this order of dependency is the fact that cognitive agents appear to have evolved before systems of communication. Another reason is that human infants at least appear to have some sophisticated cognitive capacities involving mental representation before they speak or understand natural languages. Yet another reason is that, although some social animals may have systems of communication complex enough to support the genesis of mental content, other non-social cognizing animals may not.

It is worth noting that, in recent years, this last presupposition has sometimes been abandoned by philosophers attempting to understand animal signaling or animal communication, as when toads emit mating calls or vervet monkeys cry out when seeing a cheetah, eagle, or snake. See, for example, Stegmann, 2005, 2009, Skyrms, 2008, 2010a, b, 2012, and Birch, 2014. In other words, there have been efforts to use the sorts of apparatus originally developed for theories of mental content, plus or minus a bit, as apparatus for handling animal signaling. These approaches seem to allow that there are mental representations in the brains of the signaling/communicating animals, but do not rely on the content of those mental representations to provide the representational contents of the signals. In this way, the contents of the signals are not derived from the contents of the mental representations.

3. Specific Causal Theories of Mental Content

The unifying inspiration for causal theories of mental content is that some syntactic item “X” means X because “X”s are caused by Xs.[5] Matters cannot be this simple, however, since in general one expects that some causes of “X” are not among the content-specifying causes of “X”s. There are numerous examples illustrating this point, each illustrating a kind of cause that must not typically be among the content-determining causes of “X”:

  1. Suppose there is some syntactic item “X” that is a putative mental representation of a dog. Dogs will presumably cause tokens of “X”, but so might foxes at odd angles, with some obstructions, at a distance, or under poor lighting conditions. The causal theorist will need some principle that allows her to say that the causal links between dogs and “X”s will be content-determining, where the causal links between, say, foxes and “X”s will not. Mice and shrews, mules and donkeys, German Shepherds and wolves, dogs and paper mâché dogs, dogs and stuffed dogs, and any number of confusable groups would do to make this point.
  2. A syntactic item “X” with the putative content of dog might also be caused by a dose of LSD, a set of strategically placed and activated microelectrodes, a brain tumor, or quantum mechanical fluctuations. Who knows what mental representations might be triggered by these things? LSD, microelectrodes, etc., should (typically) not be among the content-determining causes of most mental representations.
  3. Upon hearing the question “What kind of animal is named ‘Fido’?” a person might token the syntactic item “X”. One will want at least some cases in which this “X” means dog, but to get this result the causal theorist will not want the question to be among the content-determining causes of “X”.
  4. In seeing a dog, there is a causal pathway from the dog through the visual system (and perhaps beyond) to a token of “X”. What in this causal pathway from the dog to “X” constitutes the content-determining element? In virtue of what is it the case that “X” means dog, rather than retinal projection of a dog, or any number of other possible points along the pathway? Clearly there is a similar problem for other sense modalities. In hearing a dog, there is a causal pathway from the dog through the auditory system (and perhaps beyond) to a token of “X”. What makes “X” mean dog, rather than sound of a dog (barking?) or eardrum vibration or motion in the stapes bone of the middle ear? One might press essentially the same point by asking what makes “X” mean dog, rather than some complex function of all the diverse causal intermediaries between dogs and “X”.

The foregoing problem cases are generally developed under the rubric of “false beliefs” or “the disjunction problem” in the following way and can be traced to Fodor (1984). No one is perfect, so a theory of content should be able to explicate what is going on when a person makes a mistake, such as mistaking a fox for a dog. The first thought is that this happens when a fox (at a distance or in poor lighting conditions) causes the occurrence of a token of “X” and, since “X” means dog, one has mistaken a fox for a dog. The problem with this first thought arises with the invocation of the idea that “X” means dog. Why say that “X” means dog, rather than dog or fox? On a causal account, we need some principled reason to say that the content of “X” is dog, hence that the token of “X” is falsely tokened by the fox, rather than that the content of “X” is dog or fox, hence that the token of “X” is truly tokened by the fox. What basis is there for saying that “X” means dog, rather than dog or fox? Because there appears always to be this option of making the content of a term some disjunction of items, the problem has been called “the disjunction problem”.[6]

As was noted above, what unifies causal theories of mental content is some version of the idea that “X”s being causally connected to Xs makes “X”s mean Xs. What divides causal theories of mental content, most notably, is the different approaches they take to separating the content-determining causes from the non-content-determining causes. Some of these different theories appeal to normal conditions, others to functions generated by natural selection, others to functions acquired ontogenetically, and still others to dependencies among laws. At present there is no approach that is commonly agreed to correctly separate the content-determining causes from the non-content-determining causes while at the same time respecting the need not to invoke existing semantic concepts. Although each attempt may have technical problems of its own, the recurring problem is that the attempts to separate content-determining from non-content-determining causes threaten to smuggle in semantic elements.

In this section, we will review the internal problematic of causal theories by examining how each theory fares on our battery of test cases (I)–(IV), along with other objections from time to time. This provides a simple, readily understood organization of the project of developing a causal theory of mental content, but it does this at a price. The primary literature is not arranged exactly in this way. The positive theories found in the primary literature are typically more nuanced than what we present here. Moreover, the criticisms are not arranged into the kind of test battery we have with cases (I)–(IV). One paper might bring forward cases (I) and (III) against theory A, where another paper might bring forward cases (I) and (II) against theory B. Nor are the examples in our test battery exactly the ones developed in the primary literature. In other words, the price one pays for this simplicity of organization is that we have something less like a literature review and more like a theoretical and conceptual toolbox for understanding causal theories.

3.1 Normal Conditions

Trees usually grow a certain way. Each year, there is the passage of the four seasons, with a tree growing more quickly at some times and more slowly at others. As a result, each year a tree adds a “ring” to its girth in such a way that one might say that each ring means a year of growth. If we find a tree stump that has twelve rings, then that means that the tree was twelve years old when it died. But it is not an entirely inviolable law that a tree grows a ring each year. Such a law, if it is one, is at most a ceteris paribus law. It holds only given certain background conditions, such as that weather conditions are normal. If the weather conditions are especially bad one season, then perhaps the tree will not grow enough to produce a new ring. One might, therefore, propose that if conditions are normal, then n rings means that the tree was n years old when it died. This idea makes its first appearance when Stampe (1977) invokes it as part of his theory of “fidelity conditions.”

An appeal to normal conditions would seem to be an obvious way in which to bracket at least some non-content-determining causes of a would-be mental representation “X”. It is only the causes that operate under normal conditions that are content-determining. So, when it comes to human brains, under normal conditions one is not under the influence of hallucinogens, nor is one’s head being invaded by an elaborate configuration of microelectrodes. So, even though LSD and microelectrodes would, counterfactually speaking, cause a token neural event “X”, these causes would not be among the content-determining causes of “X”. Moreover, one can take normal conditions of viewing to include good lighting, a particular perspective, a particular viewing distance, a lack of (seriously) occluding objects, and so forth, so that foxes in dim light, viewed from the bottom up, at a remove of a mile, or through a dense fog, would not be among the content-determining causes of “X”. Under normal viewing conditions, one does not confuse a fox with a dog, so foxes are not to be counted as part of the content of “X”. Moreover, if one does confuse a fox with a dog under normal viewing conditions, then perhaps one does not really have a mental representation of a dog, but maybe only a mental representation of a member of the taxonomic family Canidae.

Although an appeal to normal conditions initially appears promising, it does not seem to be sufficient to rule out the causal intermediaries between objects in the environment and “X”. Even under normal conditions of viewing that include good lighting, a particular perspective, a particular viewing distance, a lack of (seriously) occluding objects, and so forth, it is still the case that both dogs and, say, retinal projections of dogs lead to tokens of “X”. Why does the content of “X” not include retinal projections of dogs or any of the other causal intermediaries? Nor do normal conditions suffice to keep questions from getting in among the content-determining causes. What abnormal conditions are there when the question, “What kind of animal is named ‘Fido’?,” leads to a tokening of an “X” with the putative meaning of dog? Suppose there are instances of quantum mechanical fluctuations in the nervous system, wherein spontaneous changes in neurons lead to tokens of “X”. Do normal conditions block these out? So, there are problem cases in which appeals to normal conditions do not seem to work. Fodor (1990b) discusses this problem with proximal stimulations in connection with his asymmetric dependency theory, but it is one that clearly challenges the causal theory plus normal conditions approach.

Next, suppose that we tightly construe normal conditions to eliminate the kinds of problem cases described above. So, when completely fleshed out, under normal conditions only dogs cause “X”s. What one intuitively wants is to be able to say that, under normal conditions of good lighting, proper viewing distance, etc., “X” means dog. But another possibility is that in such a situation “X” does not mean dog, but dog-under-normal-conditions-of-good-lighting, proper-viewing-distance, etc. Why take one interpretation over another? One needs a principled basis for distinguishing the cause of “X” from the many causally contributing factors. In other words, we still have the problem of bracketing non-content-determining causes, only in a slightly reformulated manner. This sort of objection may be found in Fodor (1984).

Now set the preceding problem aside. There is still another problem, developed in Fodor (1984). Suppose that “X” does mean dog under conditions of good lighting, lack of serious occlusions, etc. Do not merely suppose that “X” is caused by dogs under conditions of good light, lack of serious occlusions, etc.; grant that “X” really does mean dog under these conditions. Even then, why does “X”, the firing of the neuronal circuit, still mean dog when those conditions do not hold? Why does “X” still mean dog under, say, degraded lighting conditions? After all, we could abide by another apparently true conditional regarding these other conditions, namely, if the lighting conditions were not so good, there were no serious occlusions, etc., then the neuronal circuit’s firing would mean dog or fox. Even if “X” means X under one set of conditions C1, why doesn’t “X” mean Y under a different set of conditions C2? It looks as though one could say that C1 provides normal conditions under which “X” means X and C2 provides normal conditions under which “X” means Y. We need some non-semantic notions to enable us to fix on one interpretation, rather than the other. At this point, one might look to a notion of functions to solve these problems.[7]

3.2 Evolutionary Functions

Many physical objects have functions. (Stampe (1977) was the first to note this as a fact that might help causal theories of content.) A familiar mercury thermometer has the function of indicating temperature. But such a thermometer works against a set of background conditions, which include the atmospheric pressure. The atmospheric pressure influences the volume of the vacuum that forms above the column of mercury in the glass tube. So, the height of the column of mercury is the product of two causally relevant features, the ambient atmospheric temperature and the ambient atmospheric pressure. This suggests that one and the same physical device with the same causal dependencies can be used in different ways. A column of mercury in a glass tube can be used to measure temperature, but it is possible to put it to use as a pressure gauge. Which thing a column of mercury measures is determined by its function.

This observation suggests a way to specify which causes of “X” determine its content. The content of “X”, say, the firing of some neurons, is determined by dogs, and not foxes, because it is the function of those neurons to register the presence of dogs, but not foxes. Further, the content of “X” does not include LSD, microelectrodes, or quantum mechanical fluctuations, because it is not the function of “X” to fire in response to LSD, microelectrodes, or quantum mechanical fluctuations in the brain. Similarly, the content of “X” does not include proximal sensory projections of dogs, because the function of the neurons is to register the presence of the dogs, not the sensory stimulations. It is the objective features of the world that matter to an organism, not its sensory states. Finally, it is the function of “X” to register the presence of dogs, but not the presence of questions, such as ‘What kind of animal is named “Fido”?’, that leads to “X” meaning dogs. Functions thus provide a prima facie attractive means of properly winnowing down the causes of “X” to those that are genuinely content-determining.

In addition, the theory of evolution by natural selection apparently provides a non-semantic, non-intentional basis upon which to explicate functions and, in turn, semantic content. Individual organisms vary in their characteristics, such as how their neurons respond to features of the environment. Some of these differences in how neurons respond make a difference to an organism’s survival and reproduction. Finally, some of these very differences may be heritable. Natural selection, commonly understood as this differential reproduction of heritable variation, is purely causal. Suppose that there is a population of rabbits. Further suppose that, either by a genetic mutation or by the recombination of existing genes, some of these rabbits develop neurons that are wired into their visual systems in such a way that they fire (more or less reliably) in the presence of dogs. Further, the firing of these neurons is wired into a freezing behavior in these rabbits. Because of this configuration, the rabbits with the “dog neurons” are less likely to be detected by dogs, hence more likely to survive and reproduce. Finally, because the genes for these neurons are heritable, the offspring of these dog-sensitive rabbits will themselves be dog-sensitive. Over time, the number of the dog-sensitive rabbits will increase, thereby displacing the dog-insensitive rabbits. So, natural selection will, in such a scenario, give rise to mental representations of dogs. Insofar as such a story is plausible, there is hope that natural selection and the genesis of functions can provide a naturalistically acceptable means of delimiting content-determining causes.

3.2.1 Objections to Evolutionary Functions

There is no doubt that individual variation, differential reproduction, and inheritance can be understood in a purely causal manner. Yet there remains skepticism about how naturalistically one can describe what natural selection can select for. There are doubts about the extent to which the objects of selection really can be specified without illicit importation of intentional notions. Fodor (1989, 1990a) gives voice to some of this skepticism. Prima facie, it makes sense to say that the neurons in our hypothetical rabbits fire in response to the presence of dogs, hence that there is selection for dog representations. But it makes just as much sense, one might worry, to say that it is sensitivity to dog-look-alikes that leads to the greater fitness of the rabbits with the new neurons.[8] There are genes for the dog-look-alike neurons and these genes are heritable. Moreover, those rabbits that freeze in response to dog-look-alikes are more likely to survive and reproduce than are those that do not so freeze, hence one might say that the freezing is in response to dog-look-alikes. So, our ability to say that the meaning of the rabbits’ mental representation “X” is dog, rather than dog-look-alike, depends on our ability to say that it is the dog-sensitivity of “X”, rather than the dog-look-alike-sensitivity of “X”, that keeps the rabbits alive longer. Of course, being dog-sensitive and being dog-look-alike-sensitive are connected, but the problem here is that both being dog-look-alike-sensitive and being dog-sensitive can increase fitness in ways that lead to the fixation of a genotype. And it can well be that it is avoidance of dogs that keeps a rabbit alive, but one still needs some principled basis for saying that the rabbits avoid dogs by being sensitive to dogs, rather than by being sensitive to dog-look-alikes. The latter appears to be good enough for the differential reproduction of heritable variation to do its work.
Where we risk importing semantic notions into the mix is in understanding selection intentionally, rather than purely causally. We need a notion of “selection for” that is both general enough to work for all the mental contents causal theorists aspire to address and that does not tacitly import semantic notions.

In response to this sort of objection, it has been proposed that the correct explanation of a rabbit’s evolutionary success with, say, “X”, is not that this enables the rabbit to avoid dog-look-alikes, but that it enables it to avoid dogs. It is dogs, but not mere dog-look-alikes, that prey on rabbits. (This sort of response is developed in Millikan (1991) and Neander (1995).) Yet the rejoinder is that if we really want to get at the correct explanation of a rabbit-cum-“X” system, then we should not suppose that “X” means dog. Instead, we should say that it is in virtue of the fact that “X” picks up on something like, say, predator of such and such characteristics that the “X” alarm system increases the chance of a rabbit’s survival. (This sort of rejoinder may be found in Agar (1993).)

This problem aside, there is also some concern about the extent to which it is plausible to suppose that natural selection could act on the fine details of the operation of the brain, such as the firing of neurons in the presence of dogs. (This is an objection raised in Fodor (1990c).) Natural selection might operate to increase the size of the brain so there is more cortical mass for cognitive processing. Natural selection might also operate to increase the folding of the brain so as to maximize the cortical surface area that can be contained within the brain. Natural selection might also lead to compartmentalization of the brain, so that one particular region could be dedicated to visual processing, another to auditory processing, and still another to face processing. Yet many would take it to be implausible to suppose that natural selection works at the level of individual mental representations. The brain is too plastic and there is too much individual variation in the brains of mammals to admit of selection acting in this way. Moreover, such far-reaching effects of natural selection would lead to innate ideas not merely of colors and shapes, but of dogs, cats, cars, skyscrapers, and movie stars. Rather than supposing that functions are determined by natural selection across multiple generations, many philosophers contend that it is more plausible that the functions that underlie mental representations are acquired through cognitive development.

3.3 Developmental Functions

Hypothesizing that certain activities or events within the brain mean what they do, in part, because of some function that develops over the course of an individual’s lifetime shares many of the attractive features of the hypothesis that these same activities or events mean what they do, in part, because of some evolutionarily acquired function. One again can say that it is not the function of “X” to register the presence of LSD, microelectrodes, foxes, stuffed dogs, paper mâché dogs, or questions, but that it is its function to report on dogs. Moreover, this hypothesis does not invoke dubious suppositions about an intimate connection between natural selection and the precise details of neuronal hardware and its operation. A functional account based on ontogenetic function acquisition or learning seems to be an improvement. This is the core of the approach taken in Dretske (1981; 1988).

The function acquisition story proposes that, during development, an organism is trained to discriminate real flesh and blood dogs from questions, foxes, stuffed dogs, and paper mâché dogs, under conditions of good lighting and without occlusions or distractions. A teacher ensures that training proceeds according to plan. Once “X” has acquired the function to respond to dogs, the training is over. Thereafter, any instances in which “X” is triggered by foxes, stuffed dogs, paper mâché dogs, LSD, microelectrodes, etc., are false tokenings and figure into false beliefs.

3.3.1 Objections to Developmental Functions

Among the most familiar objections to this proposal is that there is no principled distinction between when a creature is learning and when it is done learning. Instances in which a creature entertains the hypothesis that “X” means X, instances in which the creature entertains the hypothesis that “X” means Y, instances in which the creature straightforwardly uses “X” to mean X, and instances in which the creature straightforwardly uses “X” to mean Y are thoroughly intermingled. The problem is perhaps more clearly illustrated with tokens of natural language, where children struggle through correct and incorrect uses of a word before (perhaps) finally settling on a correct usage. There seems to be no principled way to specify whether learning has stopped or whether there is instead “lifelong learning”. This is among the objections to be found in Fodor (1984).

This, however, is a relatively technical objection. Further reflection suggests that there may be an underlying appeal to the intentions of the teacher. Let us revisit the learning story. Suppose that during the learning period the subject is trained to use “X” as a mental representation of dogs. Now, let the student graduate from “X”-using school and immediately thereafter see a fox. Seeing this fox causes a token of “X”, and one would like to say that this is an instance of mistaking a fox for a dog, hence a false tokening. But, consider the situation counterfactually. If the student had seen the fox during the training period just before graduation, the fox would have triggered a token of “X”. This suggests that we might just as well say that the student learned that “X” means fox or dog as that the student learned that “X” means dog. Thus, we might just as well say that, after training, the graduate does not falsely think of a dog, but truly thinks of a fox or a dog. The threat of running afoul of naturalist scruples comes if one attempts to say, in one way or another, that it is because the teacher meant for the student to learn that “X” means dog, rather than that “X” means fox or dog. The threatened violation of naturalism comes in invoking the teacher’s intentions. This, too, is an objection to be found in Fodor (1984).

3.4 Asymmetric Dependency Theory

The preceding attempts to distinguish the content-determining causes from non-content-determining causes focused on the background or boundary conditions under which the distinct types of causes may be thought to act. Fodor’s Asymmetric Dependency Theory (ADT), however, represents a bold alternative to these approaches. Although Fodor (1987, 1990a, b, 1994) contain numerous variations on the details of the theory, the core idea is that the content-determining cause is in an important sense fundamental, whereas the non-content-determining causes are non-fundamental. The sense of being fundamental is that the non-content-determining causes depend on the content-determining cause; the non-content-determining causes would not exist if not for the content-determining cause. Put a bit more technically, there are numerous laws such as ‘Y1 causes “X”,’ ‘Y2 causes “X”,’ etc., but none of these laws would exist were it not a law that X causes “X”. The fact that the ‘X causes “X”’ law does not in the same way depend on any of the Y1, Y2, …, Yn laws makes the dependence asymmetric. Hence, there is an asymmetric dependency between the laws. The intuition here is that the question, ‘What kind of animal is called “Fido”?’ will cause an occurrence of the representation “X” only because of the fact that dogs cause “X”. Instances of foxes cause instances of “X” only because foxes are mistaken for dogs and dogs cause instances of “X”.

Causation is typically understood to have a temporal dimension. First there is event C, and this event C subsequently leads to event E. Thus, when the ADT is sometimes referred to as the “Asymmetric Causal Dependency Theory,” the term “causal” might suggest a diachronic picture in which there is, first, an X-“X” law which subsequently gives rise to the various Y-“X” laws. Such a diachronic interpretation, however, would lead to counterexamples for the ADT approach. Fodor (1987) discusses this possibility. Consider Pavlovian conditioning. Food causes salivation in a dog. Then a bell causes salivation in the dog. It is likely that the bell causes salivation only because the food causes it. Yet, salivation hardly means food. It may well naturally mean that food is present, but salivation is not a thought or thought content, and it is not ripe for false semantic tokening. Or take a more exotic kind of case. Suppose that one comes to apply “X” to dogs, but only by means of observations of foxes. This would be a weird case of “learning”, but if things were to go this way, one would not want “X” to mean fox. To block this kind of objection, the theory maintains that the dependency between the fundamental X-“X” law and the non-fundamental Y-“X” laws is synchronic. The dependency is such that if one were to break the X-“X” law at time t, then one would thereby instantaneously break all the Y-“X” laws at that time.

The core of ADT, therefore, comes down to this. “X” means X if

  1. ‘Xs cause “X”s’ is a law,
  2. For all Ys that are not Xs, if Ys qua Ys actually cause “X”s, then the Ys causing “X”s is asymmetrically dependent on the Xs causing “X”s,
  3. The dependence in (2) is synchronic (not diachronic).

This seems to get a number of cases right. The reason that questions like “What kind of animal is named ‘Fido’?” or “What is a Sheltie?” trigger “X”, meaning dog, is that dogs are able to trigger “X”s. Foxes only trigger “X”s, meaning dog, because dogs are able to trigger them. Moreover, it appears to solve the disjunction problem. Suppose we have a ‘dogs cause “X”s’ law and a ‘dogs or foxes cause “X”s’ law. If one breaks the ‘dogs cause “X”s’ law, then one thereby breaks the ‘dogs or foxes cause “X”s’ law, since the only reason either dogs or foxes cause “X”s is because dogs do. Moreover, if one breaks the ‘dogs or foxes cause “X”s’ law, one does not thereby break the ‘dogs cause “X”s’ law, since dogs alone might suffice to cause “X”s. So, the ‘dogs or foxes cause “X”s’ law depends on the ‘dogs cause “X”s’ law, but not vice versa. Asymmetric dependency of laws gives the right results.[9]
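To make the shape of these conditions vivid, here is a toy sketch of our own, not Fodor’s formal apparatus: each ‘Y causes “X”’ law is an entry in a dictionary recording which other laws its holding depends on, and we check conditions (1) and (2); condition (3), synchrony, is simply assumed. The law names and dependency sets below are illustrative assumptions.

```python
def means(laws, candidate):
    """Check ADT conditions (1) and (2) for a candidate content.

    `laws` maps each kind figuring in a 'kind causes "X"' law to the
    set of laws it depends on (laws whose breaking would break it).
    Condition (3), synchrony, is assumed rather than modeled.
    """
    if candidate not in laws:              # (1) there is an X-"X" law
        return False
    for y, deps in laws.items():
        if y == candidate:
            continue
        # (2) each Y-"X" law depends on the X-"X" law, not vice versa
        if candidate not in deps or y in laws[candidate]:
            return False
    return True

# Dogs cause "X"s; foxes (and the disjunctive kind) do so only because dogs do.
laws = {"dog": set(), "fox": {"dog"}, "dog-or-fox": {"dog"}}
print(means(laws, "dog"))  # prints True: "X" means dog
print(means(laws, "fox"))  # prints False: the fox law depends on the dog law
```

On this toy rendering, the disjunctive ‘dog-or-fox’ law also fails the test, mirroring the resolution of the disjunction problem described above.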

3.4.1 Objections to ADT

Adams and Aizawa (1994) mention an important class of causes that the ADT does not appear to handle, namely, the “non-psychological interventions”. We have all along assumed that “X” is some sort of brain event, such as the firing of some neurons. But, it is plausible that some interventions, such as a dose of hallucinogen or maybe some carefully placed microelectrodes, could trigger such brain events, quite apart from the connection of those brain events to other events in the external world. If essentially all brain events are so artificially inducible, then it would appear that for all putative mental representations, there will be some laws, such as ‘microelectrodes cause “X”s,’ that do not depend on laws such as ‘dogs cause “X”s.’ If this is the case, then the second condition of the ADT would rarely or never be satisfied, so that the theory would have little relevance to actual cognitive scientific practice.

Fodor (1990a) discusses challenges that arise from the fact that the perception of objects involves causal intermediaries. Suppose that there is a dog-“X” law that is mediated entirely by sensory mechanisms. In fact, suppose unrealistically that the dog-“X” law is mediated by a single visual sensory projection. In other words, let the dog-“X” law be mediated by the combination of a dog-dogsp law and a dogsp-“X” law. Under these conditions, it appears that “X” means dogsp, rather than dog. Condition (1) is satisfied, since there is a dogsp-“X” law. Condition (2) is satisfied, since if one were to break the dogsp-“X” law one would thereby break the dog-“X” law (i.e., there is a dependence of one law on the other) and breaking the dog-“X” law would not necessarily break the dogsp-“X” law (i.e., the dependence is not symmetric). The dependence is asymmetric, because one can break the dog-“X” law by breaking the dog-dogsp law (by changing the way dogs look) without thereby breaking the dogsp-“X” law. Finally, condition (3) is satisfied, since the dependence of the dog-“X” law on the dogsp-“X” law is synchronic.

The foregoing version of the sensory projections problem relies on what was noted to be the unrealistic assumption that the dog-“X” law is mediated by a single visual sensory projection. Relaxing the assumption does not so much solve the problem as transform it. So, adopt the more realistic assumption that the dog-“X” law is sustained by a combination of a large set of dog-sensory projection laws and a large set of dogsp-“X” laws. In the first set, we have laws connecting dogs to particular patterns of retinal stimulation, laws connecting dogs to particular patterns of acoustic stimulation, etc. In the second set, we have certain psychological laws connecting particular patterns of retinal stimulation to “X”, certain psychological laws connecting particular patterns of acoustic stimulation to “X”, etc. In this sort of situation, there threatens to be no “fundamental” law, no law on which all other laws asymmetrically depend. If one breaks the dog-“X” law one does not thereby break any of the sensory projection-“X” laws, since the former can be broken by dissolving all of the dog-sensory projection laws. If, however, one breaks any one of the particular dogsp-“X” laws, e.g. one connecting a particular doggish visual appearance to “X”, one does not thereby break the dog-“X” law. The other sensory projections might sustain the dog-“X” law. Moreover, breaking the law connecting a particular doggish look to “X” will not thereby break a law connecting a particular doggish sound to “X”. Without a “fundamental” law, there is no meaning in virtue of the conditions of the ADT. Further, the applicability of the ADT appears to be dramatically reduced insofar as connections between mental representations and properties in the world are mediated by sensory projections. (See Neander, 2013, Schulte, 2018, and Artiga & Sebastián, 2020, for discussion of the distality problem for other causal theories.)

Another problem arises with items or kinds that are indistinguishable. Adams and Aizawa (1994) and, implicitly, McLaughlin (1991), among others, have discussed this problem. As one example, consider the time at which the two minerals, jadeite and nephrite, were chemically indistinguishable and were both thought to be jade. As another, one might appeal to H2O and XYZ (the stuff of philosophical thought experiments, the water look-alike substance found on twin-earth). Let X = jadeite and Y = nephrite, and let there be laws ‘jadeite causes “X”’ and ‘nephrite causes “X”’. Can “X” mean jadeite? No. Condition (1) is satisfied, since it is a law that ‘jadeite causes “X”’. Condition (3) is satisfied, since breaking the jadeite-“X” law will immediately break the nephrite-“X” law. If jadeite cannot trigger an “X”, then neither can nephrite, since the two are indistinguishable. That is, there is a synchronic dependence of the ‘nephrite causes “X”’ law on the ‘jadeite causes “X”’ law. The problem arises with condition (2). Breaking the jadeite-“X” law will thereby break the nephrite-“X” law, but breaking the nephrite-“X” law will also thereby break the jadeite-“X” law. Condition (2) cannot be satisfied, since there is a symmetric dependence between the jadeite-“X” law and the nephrite-“X” law. By parity of reasoning, “X” cannot mean nephrite. So, can “X” mean jade? No. As before, conditions (1) and (3) could be satisfied, since there could be a jade-“X” law, and the jadeite-“X” law and the nephrite-“X” law could synchronically depend on it. The problem is, again, with condition (2). Presumably, breaking the jade-“X” law would break the jadeite-“X” and nephrite-“X” laws, but breaking either of them would break the jade-“X” law. The problem is, again, with symmetric dependencies.

Here is a problem that we earlier found in conjunction with other causal theories. Despite the bold new idea underlying the ADT method of partitioning off non-content-determining causes, it too appears to sneak in naturalistically unacceptable assumptions. As with all causal theories of mental content, the asymmetric causal dependencies are supposed to be the basis upon which meaning is created; the dependencies are not themselves supposed to be a product, or byproduct, of meaning. Yet, ADT appears to violate this naturalistic pre-condition for causal theories. (This kind of objection may be found in Seager (1993), Adams & Aizawa (1994), (1994b), Wallis (1995), and Gibson (1996).) Ys are supposed to cause “X”s only because Xs do, and this must not be because of any semantic facts about “X”s. So, what sort of mechanism would bring about such asymmetric dependencies among things connected to the syntactic item “X”? In fact, why wouldn’t lots of things be able to cause “X”s besides Xs, quite independently of the fact that Xs do? The instantiation of “X”s in the brain is, say, some set of neurochemical events. There should be natural causes capable of producing such events in one’s brain under a variety of circumstances. Why on earth would foxes be able to cause the neurochemical “X” events in us only because dogs can? One might be tempted to observe that “X” means dog, “Y” means fox, we associate foxes with dogs, and that is why foxes cause “X”s only because dogs cause “X”s. We would not associate foxes with “X”s unless we associated “X”s with dogs and foxes with dogs. This answer, however, involves deriving the asymmetric causal dependencies from meanings, which violates the background assumption of the naturalization project. Unless there is a better explanation of such asymmetrical dependencies, it may well be that the theory is misguided in attempting to rest meaning upon them.

3.5 Best Test Theory

A relatively more recent causal theory is Robert Rupert’s (1999) Best Test Theory (BTT) for the meanings of natural kind terms. Unlike most causal theories, this one is restricted in scope to just natural kinds and terms for natural kinds. To mark this restriction, we will let represented kinds be denoted by K’s, rather than our usual X’s.

Best Test Theory: If a subject S bears no extension-fixing intentions toward “X” and “X” is an atomic natural kind term in S’s language of thought (i.e., not a compound of two or more other natural kind terms), then “X” has as its extension the members of natural kind K if and only if members of K are more efficient in their causing of “X” in S than are the members of any other natural kind.

To put the idea succinctly, “X” means, or refers to, those things that are the most powerful stimulants of “X”. That said, we need an account of what it is for a member of a natural kind to be more efficient in causing “X”s than are members of other natural kinds. We need an account of how to measure the power of a stimulus. This might be explained in terms of a kind of biography.

[Figure 1 is a spreadsheet with a column for each of “X1”–“X5” and a row for each encounter: six encounters with K1, three with K2, and four with K3. A “1” in a cell marks that the encounter tokened that representation. In the “X1” column, four of the six K1 rows, all three K2 rows, and one of the four K3 rows contain a 1.]

Figure 1. A spreadsheet biography

Consider an organism S that (a) causally interacts with three different natural kinds, K1-K3, in its environment and (b) has a language of thought with five terms “X1”-“X5”. Further, suppose that each time S interacts with an individual of kind Ki this causes an occurrence of one or more of “X1”-“X5”. We can then create a kind of “spreadsheet biography” or “log of mental activity” for S in which there is a column for each of “X1”-“X5” and a row for each instance in which a member of K1-K3 causes one or more instances of “X1”-“X5”. Each mental representation “Xi” that Ki triggers receives a “1” in its column. Thus, a single spreadsheet biography might look like that shown in Figure 1.

To determine what a given term “Xi” means, we find the kind Ki that is most effective at causing “Xi”. This can be computed from S’s biography. For each Ki and “Xi”, we compute the frequency with which Ki triggers “Xi”. “X1” is tokened four out of six times that K1 is encountered, three out of three times that K2 is encountered, and one out of four times that K3 is encountered. “Xi” means the Ki that has the highest sample frequency. Thus, in this case, “X1” means K2. Just to be clear, when BTT claims that “Xi” means the Ki that is the most powerful stimulant of “X”, this is not to say that “X” means the most common stimulant of “X”. In our spreadsheet biography, K1 is the most common stimulant of “X1”, since it triggers “X1” four times, whereas K2 triggers it only three times, and K3 triggers it only one time. This is why, according to BTT, “X1” means K2, rather than K1 or K3.
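The computation just described can be sketched in a few lines. This is our own illustration, not Rupert’s formalism; the encounter data below simply mirrors the frequencies reported for Figure 1.

```python
from collections import defaultdict

def btt_content(biography, symbol):
    """Return the kind with the highest sample frequency of tokening `symbol`.

    `biography` is a list of (kind, tokened_symbols) pairs, one per
    encounter, mirroring the rows of the spreadsheet biography.
    """
    encounters = defaultdict(int)  # encounters per kind
    hits = defaultdict(int)        # encounters per kind that tokened `symbol`
    for kind, tokens in biography:
        encounters[kind] += 1
        if symbol in tokens:
            hits[kind] += 1
    # Highest frequency (hits per encounter), not highest raw hit count.
    return max(encounters, key=lambda k: hits[k] / encounters[k])

# K1 tokens "X1" on 4 of 6 encounters; K2 on 3 of 3; K3 on 1 of 4.
bio = ([("K1", {"X1"})] * 4 + [("K1", set())] * 2
       + [("K2", {"X1"})] * 3
       + [("K3", {"X1"})] + [("K3", set())] * 3)
print(btt_content(bio, "X1"))  # prints K2: frequency 3/3 beats K1's 4/6
```

The `max` over frequencies, rather than over raw counts, is what makes K2 rather than K1 the content assignment.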

How does the BTT handle our range of test cases? Consider, first, the standard form of the disjunction problem, the case of “X” meaning dog, rather than dog or fox-on-a-dark-night-at-a-distance. Since the latter is not apparently a natural kind, “X” cannot mean that.[10] Moreover, “X” means dog, rather than fox, because the only times the many foxes that S encounters can trigger “X1”s is on dark nights at a distance, whereas dogs trigger “X”s more consistently under a wider range of conditions.

How does the BTT address the apparent problem of “brain interventions,” such as LSD, microelectrodes, or brain tumors? The answer is multi-faceted. The quickest method for taking much of the sting out of these cases is to note that they generally do not arise for most individuals. The Best Test Theory relies on personal biographies in which only actual instances of kinds triggering mental representations are used to specify causal efficiency. The counterfactual truth that, were a stimulating microelectrode to be applied to, say, a particular neuron, it would perfectly reliably produce an “X” token simply does not matter for the theory. So, for all those individuals who do not take LSD, do not have microelectrodes inserted in their brains, do not have brain tumors, etc., these sorts of counterfactual possibilities are irrelevant. A second line of defense against “brain interventions” appeals to the limitation to natural kinds. The BTT might set aside microelectrodes, since they do not constitute a natural kind. Maybe brain tumors do; maybe not. Unfortunately, however, LSD is a very strong candidate for a chemical natural kind. Still, the BTT is not without a third line of defense for handling these cases. One might suppose that LSD and brain tumors act on the brain in a rather diffuse manner. Sometimes a dose of LSD triggers “Xi”, another time it triggers “Xj”, and another time it triggers “Xk”. One might then propose that, if one counts all these episodes with LSD, none of these will act often enough on, say, “Xi” to get it to mean LSD, rather than, say, dog. This is the sort of strategy that Rupert invokes to keep mental symbols from meaning omnipresent but non-specific causes such as the heart. The heart might causally contribute to “X1”, but it also contributes to so many other “Xi”s that the heart will turn out not to be the most efficient cause of “X1”.

What about questions? Presumably questions as a category will count as an instance of a linguistic natural kind. Moreover, particular sentences will also count. So, the restriction of the BTT to natural kinds is of little use here. So, what of causal efficiency? Many sentences appear to provoke a wide range of possible responses. In response to “I went to the zoo last week,” S could think of lions, tigers, bears, giraffes, monkeys, and any number of other natural kinds. But, the question, “What animal goes ‘oink, oink’?”—perhaps uttered in “Motherese” in a clear, deliberate fashion so that it is readily comprehensible to a child—will be rather efficient in generating thoughts of a pig. Moreover, it could be more efficient than actual pigs, since a child might have more experience with the question than with actual pigs, often not figuring out that actual pigs are pigs. In such situations, “pig” would turn out to mean “What animal goes ‘oink, oink’?,” rather than pig. So, there appear to be cases in which BTT could make prima facie incorrect content assignments.

What, finally, of proximal projections of natural kinds? One plausible line might be to maintain that proximal projections of natural kinds are not themselves natural kinds, hence that they are automatically excluded from the scope of the theory. This plausible line, however, might be the only available line. Presumably, in the course of S’s life, the only way dogs can cause “X”s is by way of causal mediators between the dogs and the “X”s. Thus, each episode in which a dog causes an “X” is also an episode in which a sensory projection of a dog causes an “X”. So, dog efficiency for “X” can be no higher than the efficiency of dog sensory projections. And, if it is possible for there to be a sensory projection of a dog without there being an actual dog, then the efficiency of the projections would be greater than the efficiency of the dogs. So, “X” could not mean dog. But, this problem is not necessarily damaging to BTT.

Since the BTT has not received a critical response in the literature, we will not devote a section to objections to it. Instead, we will leave well enough alone with our somewhat speculative treatment of how BTT might handle our familiar test cases. The general upshot is that the combination of actual causal efficiency over the course of an individual’s lifetime, along with the restriction to natural kinds, provides a surprisingly rich means of addressing some long-standing problems.

4. General Objections to Causal Theories of Mental Content

In the preceding section, we surveyed issues that face the philosopher attempting to work out the details of a causal theory of mental content. These issues are, therefore, one might say, internal to causal theories. In this section, however, we shall review some of the objections that have been brought forward against the very idea of a causal theory of mental content. As such, these objections might be construed as external to the project of developing a causal theory of mental content. Some of these are coeval with causal theories and have been addressed in the literature, but some are relatively recent and have not been discussed in the literature. The first objections, discussed in subsections 4.1–4.4, in one way or another push against the idea that all content could be explained by appeal to a causal theory, but leave open the possibility that one or another causal theory might provide sufficiency conditions for meaning. The last objections, those discussed in subsections 4.5–4.6, challenge the ability of causal theories to provide even sufficiency conditions for mental content.

4.1 Causal Theories do not Work for Logical and Mathematical Relations

One might think that the meanings of terms that denote mathematical or logical relations could not be handled by a causal theory. How could a mental version of the symbol “+” be causally connected to the addition function? How could a mental version of the logical symbol “¬” be causally connected to the negation truth function? The addition function and the negation function are abstract objects. To avoid this problem, causal theories typically acquiesce and maintain that their conditions are merely sufficient conditions on meaning. If an object meets the conditions, then that object bears meaning. But, the conditions are not necessary for meaning, so that representations of abstract objects get their meaning in some other way. Perhaps conceptual role semantics, wherein the meanings of terms are defined in terms of the meanings of other terms, could be made to work for these other cases.

4.2 Causal Theories do not Work for Vacuous Terms

Another class of potential problem cases is vacuous terms. So, for example, people can think about unicorns, fountains of youth, or the planet Vulcan. Cases such as these are discussed in Stampe (1977) and Fodor (1990a), among other places. These things would be physical objects were they to exist, but they do not, so one cannot causally interact with them. In principle, one could say that thoughts about such things are not counterexamples to causal theories, since causal theories are meant only to offer sufficiency conditions for meaning. But, this in-principle reply appears to be ad hoc. It is not warranted, for example, by the fact that these excluded meanings involve abstract objects. There are, however, a number of options that might be explored here.

One strategy would be to turn to the basic ontology of one’s causal theory of mental content. This is where a theory based on nomological relations might be superior to a version that is based on causal relations between individuals. One might say that there can be a unicorn-“unicorn” law, even if there are no actual unicorns. This story, however, would break down for mental representations of individuals, such as the putative planet Vulcan. There is no law that connects a mental representation to an individual; laws are relations among properties.

Another strategy would be to propose that some thought symbols are complex and can decompose into meaningful primitive constituents. One could then allow that “X” is a kind of abbreviation for, or logical construction of, or defined in terms of “Y1,” “Y2,” and “Y3,” and that a causal theory applies to “Y1,” “Y2,” and “Y3.” So, for example, one might have a thought of a unicorn, but rather than having a single unicorn mental representation there is another representation made up of a representation of a horse, a representation of a horn, and a representation of the relationship between the horse and the horn. “Horse”, “horn”, and “possession” may then have instantiated properties as their contents.

4.3 Causal Theories do not Work for Phenomenal Intentionality

Horgan and Tienson (2002) object to what they describe as “strong externalist theories” that maintain that causal connections are necessary for content. They argue, first, that mental life involves a lot of intentional content that is constituted by phenomenology alone. Perceptual states, such as seeing a red apple, are intentional. They are about apples. Believing that there are more than 10 Mersenne primes and hoping to discover a new Mersenne prime are also intentional states, in this case about Mersenne primes. But, all these intentional states have a phenomenology—something it is like to be in these states. There is something it is like to see a red apple, something different that it is like to believe there are more than 10 Mersenne primes, and something different still that it is like to hope to discover a new Mersenne prime. Horgan and Tienson propose that there can be phenomenological duplicates—two individuals with exactly the same phenomenology. Assume nothing about these duplicates other than that they are phenomenological duplicates. In such a situation, one can be neutral regarding how much of their phenomenological experience is veridical and how much illusory. So, one can be neutral on whether or not a duplicate sees a red apple or whether there really are more than 10 Mersenne primes. This suggests that there is a kind of intentionality—that shared by the duplicates—that is purely phenomenological. Second, Horgan and Tienson argue that phenomenology constitutively depends only on narrow factors. They observe that one’s experiences are often caused or triggered by events in the environment, but that these environmental causes are only parts of causal chains that lead to the phenomenology itself. They do not constitute that phenomenology. The states that constitute, or provide the supervenience base for, the phenomenology are not the elements of the causal chain leading back into the environment.
If we combine the conclusions of these two arguments, we get Horgan and Tienson’s principal argument against any causal theory that would maintain that causal connections are necessary for content.

P1. There is intentional content that is constituted by phenomenology alone.

P2. Phenomenology is constituted only by narrow factors.

Therefore,

C. There is intentional content that is constituted only by narrow factors.
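For what it is worth, the argument is deductively valid. A toy formalization of our own, with predicates standing in for “has intentional content”, “is constituted by phenomenology”, and “is constituted by narrow factors”, can be machine-checked in Lean:

```lean
-- P1: some state has content constituted by phenomenology.
-- P2: whatever is constituted by phenomenology is constituted by narrow factors.
-- C:  some state has content constituted by narrow factors.
variable {State : Type}
variable (HasContent PhenConstituted NarrowConstituted : State → Prop)

theorem horgan_tienson
    (P1 : ∃ s, HasContent s ∧ PhenConstituted s)
    (P2 : ∀ s, PhenConstituted s → NarrowConstituted s) :
    ∃ s, HasContent s ∧ NarrowConstituted s :=
  P1.elim (fun s h => ⟨s, h.1, P2 s h.2⟩)
```

Validity, of course, leaves open whether the premises are true, which is where the dispute over P2 below takes hold.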

Thus, versions of causal theories that suppose that all content must be based on causal connections are fundamentally mistaken. For those versions of causal theories that offer only sufficiency conditions on semantic content, however, Horgan and Tienson’s argument may be taken to provide a specific limitation on the scope of causal theories, namely, that causal theories do not work for intentional content that is constituted by phenomenology alone.

A relatively familiar challenge to this argument may be found in certain representational theories of phenomenological properties. (See, for example, Dretske (1988) and Tye (1997).) According to these views, the phenomenology of a mental state derives from that state’s representational properties, but the representational properties are determined by external factors, such as the environment in which an organism finds itself. Thus, such representationalist theories challenge premise P2 of Horgan and Tienson’s argument.

4.4 Causal Theories do not Work for Certain Reflexive Thoughts

Buras (2009) presents another argument that is perhaps best thought of as providing a novel reason to think that causal theories of mental representation only offer sufficiency conditions on meaning. This argument begins with the premise that some mental states are about themselves. To motivate this claim, Buras notes that some sentences are about themselves. So, by analogy with “This sentence is false,” which is about itself, one might think that there is a thought, “This thought is false,” that is also about itself. Or, how about “This thought is realized in brain tissue” or “This thought was caused by LSD”? These appear to be about themselves. Buras’ second premise is that nothing is a cause of itself. So, “This thought is false” is about itself, but could not be caused by itself. So, the thought “This thought is false” could not mean that it itself is false in virtue of the fact that “This thought is false” was caused by its being false. So, “This thought is false” must get its meaning in some other way. It must get its meaning in virtue of some other conditions of meaning acquisition.

This is not, however, exactly the way Buras develops his argument. In the first place, he treats causal theories of mental content as maintaining that, if “X” means X, then X causes “X”. (Cf. Buras, 2009, p. 118). He cites Stampe (1977), Dretske (1988), and Fodor (1987) as maintaining this. Yet, Stampe, Dretske, and Fodor explicitly formulate their theories in terms of sufficiency conditions, so that (roughly) “X” means X, if Xs cause “X”s, etc. (See, for example, Stampe (1977), pp. 82–3, Dretske (1988), p. 52, and Fodor (1987), p. 100). In the second place, Buras seems to draw a conclusion that is orthogonal to the truth or falsity of causal theories of mental content. He begins his paper with an impressively succinct statement of his argument.

Some mental states are about themselves. Nothing is a cause of itself. So some mental states are not about their causes; they are about things distinct from their causes (Buras, 2009, p. 117).

The causal theorist can admit that some mental states are not about their causes, since some states are thoughts, and thoughts mean what they do in virtue of, say, the meanings of mental sentences. These mental sentences might mean what they do in virtue of the meanings of primitive mental representations (which may or may not mean what they do in virtue of a causal theory of meaning) and the way in which those primitive mental representations are put together. As was mentioned in section 2, such a syntactically and semantically combinatorial language of thought is a familiar background assumption for causal theories. The conclusion that Buras may want, instead, is that there are some thoughts that do not mean what they do in virtue of what causes them. So, through some slight amendments, one can understand Buras to be presenting a clarification of the scope of causal theories of mental content or as a challenge to a particularly strong version of causal theories, a version that takes them as offering a necessary condition on meaning.

4.5 Causal Theories Do Not Work for Reliable Misrepresentations

As noted above, one of the central challenges for causal theories of mental content has been to discriminate between a “core” content-determining causal connection, as between cows and “cow”s, and “peripheral” non-content-determining causal connections, as between horses and “cow”s. Cases of reliable misrepresentation are representations which always misrepresent in the same way. In such cases, there is supposed to be no “core” content-determining causal connection; there are no Xs to which “X”s are causally connected. Instead, there are only “peripheral” causal connections. Mendelovici (2013), following a discussion by Hohman (2002), suggests that color representations may be like this.[11] Color anti-realism, according to which there are no colors in the world, seems to be committed to the view that color representations are not caused by colors in the world. Color representations may be reliably tokened by something in the world, but not by colors that are in the world.

In some instances, reliable misrepresentations provide another take on some of the familiar content-determination problems. So, take attempts to use normal conditions to distinguish between content-determining causes and non-content-determining causes. Even in normal conditions, color representations are not caused by colors, but by, say, surface reflectances under certain conditions of illumination, just in the way that, even in normal conditions, cow representations are sometimes not caused by cows, but by, say, a question such as, “What kind of animal is sometimes named ‘Bessie’?” Take a version of the asymmetric dependency theory. On this theory applied to color terms, it might seem that there is no red-to-“red” law on which all the other laws depend, in much the same way that it might seem there is no unicorn-to-“unicorn” law on which all other laws depend. (Cf. Fodor (1987, pp. 163–4) and (1990, pp. 100–1).)

Unlike the more familiar cases, Mendelovici (2013) does not argue that there actually are such problematic cases. The argument is not that there are actual cases of reliable misrepresentation, but merely that reliable misrepresentations are possible and that this is enough to create trouble for causal theories of mental representation. One sort of trouble stems from the need for a pattern of psychological explanation. Let a mental representation “X” mean intrinsically-heavy. Such a representation is a misrepresentation, since there is no such property of being intrinsically heavy. Such a misrepresentation is, nonetheless, reliable (i.e., consistent), since it is consistently tokened by all the same sort of things on earth. But, one can see how an agent using “X” could make a reasonable, yet mistaken, inference to the conclusion that an object that causes a tokening of “X” on earth would be hard to lift on the moon. To allow such a pattern of explanation, Mendelovici argues, a causal theorist must allow for reliable misrepresentation. A theory of what mental representations are should not preclude such patterns of explanation. Another sort of trouble stems from the idea that if a theory of meaning does not allow for reliable misrepresentation, but requires that there be a connection between “X”s and Xs, then this would constitute a commitment to a realist metaphysics for Xs. While there can be good reasons for realism, the needs of a theory of content would not seem to be a proper source for them.

Artiga (2013) provides a defense of teleosemantic theories in the face of Mendelovici’s examples of reliable misrepresentation. Some of Artiga’s arguments might also be used by advocates of causal theories of mental content. Mendelovici (2016) replies to Artiga (2013) by providing refinements and a further defense of the view that reliable misrepresentations are a problem for causal theories of mental content.

4.6 Causal Theories Conflict with the Theory-Mediation of Perception

Cummins (1997) argues that causal theories of mental content are incompatible with the fact that one’s perception of objects in the physical environment is typically mediated by a theory. His argument proceeds in two stages. In one stage, he argues that, on a causal theory, for each primitive “X” there must be some bit of machinery or mechanism that is responsible for detecting Xs. But, since a finite device, such as the human brain, contains only a finite amount of material, it can only generate a finite number of primitive representations. Next, he observes that thought is productive—that it can, in principle, generate an unbounded number of semantically distinct representations. This means that to generate the stock of mental representations corresponding to each of these distinct thoughts, one must have a syntactically and semantically combinatorial system of mental representation of the sort found in a language of thought (LOT). More explicitly, this scheme of mental representation must have the following properties:

  1. It has a finite number of semantically primitive expressions.
  2. Every expression is a concatenation of one or more primitive expressions.
  3. The content of any complex expression is a function of the contents of the primitives and the way those primitives are concatenated into the whole expression.
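Properties 1–3 can be illustrated with a toy model. The following sketch is purely illustrative (the primitive stock, the `+` concatenation marker, and the conjunctive content function are our inventions, not any author’s actual proposal); it shows how a finite set of primitives, plus a compositional content function, yields unboundedly many semantically distinct complex expressions.

```python
# Property 1: a finite stock of semantically primitive expressions,
# each paired with its (hypothetical) content.
PRIMITIVE_CONTENT = {
    "dog": "DOG",
    "brown": "BROWN",
    "tail": "TAIL",
}

def content(expression):
    """Property 3: the content of a complex expression is a function of
    the contents of its primitives and how they are concatenated."""
    # Property 2: every expression is a concatenation of primitives,
    # written here with '+' as the concatenation marker.
    primitives = expression.split("+")
    return "(" + " & ".join(PRIMITIVE_CONTENT[p] for p in primitives) + ")"

print(content("dog"))          # (DOG)
print(content("brown+dog"))    # (BROWN & DOG)
```

Because expressions may be arbitrarily long concatenations, a mind with only finitely many primitive detectors can still token unboundedly many distinct contents, which is the productivity point Cummins exploits.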

The conclusion of this first stage is, therefore, that a causal theory of mental representation requires a LOT. In the other stage of his argument, Cummins observes that, for a wide range of objects, their perception is mediated by a body of theory. Thus, to perceive dogs—for dogs to cause “dog”s—one has to know things such as that dogs have tails, dogs have fur, and dogs have four legs. But, to know that dogs have tails, fur, and four legs, one needs a set of mental representations, such as “tail”, “fur”, “four”, and “legs”. Now the problem fully emerges. According to causal theories, having a “dog” representation requires the ability to detect dogs. But, the ability to detect dogs requires a theory of dogs. But, having a theory of dogs requires already having a LOT—a system of mental representation. One cannot generate mental representations without already having them.[12]

4.7 Causal Theories Conflict with the Implementation of Psychological Laws

Jason Bridges (2006) argues that the core hypothesis of informational semantics conflicts with the idea that psychological laws are non-basic. As we have just observed, causal theories are often taken to offer mere sufficiency conditions for meaning. Suppose, therefore, that we suitably restrict the scope of a causal theory and understand its core hypothesis as asserting that all “X”s with the content X are reliably caused by Xs. (Nothing in the logic of Bridges’ argument depends on any additional conditions on a putative causal theory of mental content, so for simplicity we can follow Bridges in restricting attention to this simple version.) Bridges proposes that this core claim of a causal theory of mental content is a constitution thesis. It specifies what constitutes the meaning relation (at least in some restricted domain). Thus, if one were to ask, “Why is it that all ‘X’s with content X are reliably caused by Xs?,” the answer is roughly, “That’s just what it is for ‘X’ to have the content X”. Being caused in that way is what constitutes having that meaning. So, when a theory invokes this kind of constitutive relation, there is this kind of constitutive explanation. So, the first premise of Bridges’ argument is that causal theories specify a constitutive relation between meaning and reliable causal connection.

Bridges next observes that causal theorists typically maintain that the putative fact that all “X”s are reliably caused by Xs is mediated by underlying mechanisms of one sort or another. So, “X”s might be reliably caused by dogs in part through the mediation of a person’s visual system or auditory system. One’s visual apparatus might causally connect particular patterns of color and luminance produced by dogs to “X”s. One might put the point somewhat differently by saying that a causal theorist’s hypothetical “Xs cause ‘X’s” law is not a basic or fundamental law of nature, but an implemented law.

Bridges’ third premise is a principle that he takes to be nearly self-evident, once understood. We can develop a better first-pass understanding of Bridges’ argument if, at the risk of distorting the argument, we consider a slightly simplified version of this principle:

(S) If it is a true constitutive claim that all fs are gs, then it’s not an implemented law that all fs are gs.

To illustrate the principle, suppose we say that gold is identical to the element with atomic number 79, i.e., that all gold has atomic number 79. Then suppose one were to ask, “Why is it that all gold has the atomic number 79?” The answer would be, “Gold just is the element with atomic number 79.” This would be a constitutive explanation. According to (S), however, this constitutive explanation precludes giving a further mechanistic explanation of why gold has atomic number 79. There is no mechanism by which gold gets atomic number 79. Having atomic number 79 just is what makes gold gold.

So, here is the argument:

P1. It is a true constitutive claim that all “X”s with content X are reliably caused by Xs.
P2. If it is a true constitutive claim that all “X”s with content X are reliably caused by Xs, then it is not an implemented law that all “X”s with content X are reliably caused by Xs.

Therefore, by modus ponens on P1 and P2,

C1. It is not an implemented law that all “X”s with content X are reliably caused by Xs.

But, C1 contradicts the common assumption

P3. It is an implemented law that all “X”s with content X are reliably caused by Xs.[13]
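The logical shape of the argument can be checked mechanically. The following propositional rendering is our own illustration (the atom names are ours): P1 and P2 yield C1 by modus ponens, and C1 together with P3 is a contradiction.

```lean
-- Hypothetical atoms:
-- Constitutive: it is a true constitutive claim that all "X"s with
--   content X are reliably caused by Xs.
-- Implemented: it is an implemented law that all "X"s with content X
--   are reliably caused by Xs.
variable (Constitutive Implemented : Prop)

-- P2 encodes principle (S) applied to the causal theorist's core claim.
example (P1 : Constitutive)
        (P2 : Constitutive → ¬ Implemented)
        (P3 : Implemented) : False :=
  (P2 P1) P3
```

Since the argument is valid, responses must reject a premise; Rupert’s reply below targets P1.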

Rupert (2008) challenges the first premise of Bridges’ argument on two scores. First, he notes that claims about constitutive natures have modal implications which at least some naturalistic philosophers have found objectionable. Second, he claims that natural scientists do not appeal to constitutive natures, so that one need not develop a theory of mental content that invokes them.

4.8 Causal Theories Do Not Provide a Metasemantic Theory

In discussing informational variants of causal theories, Artiga and Sebastián (2020) have proposed a new “metasemantic” problem. They correctly note that there is a difference between explaining why “X” means X, rather than Y, and explaining why “X” has a meaning at all. Moreover, they correctly note that having an answer to this first question does not necessarily answer the second question. But, it is unclear that the metasemantic problem is serious. If one’s theory is that “X” means X if A, B, and C, then that would seem to provide an answer to the metasemantic question. Why does “X” mean anything at all? Because “X” meets sufficiency conditions A, B, and C. In other words, insofar as there is a question to be answered, it appears that all theories provide one.

5. Concluding Remarks

Although philosophers and cognitive scientists frequently propose to dispense with (one or another sort of) mental representation (cf., e.g., Stich, 1983, Brooks, 1991, van Gelder, 1995, Haugeland, 1999, Johnson, 2007, Chemero, 2009), this is universally accepted to be a revolutionary shift in thinking about minds. Short of taking on board such radical views, one will naturally want some explanation of how mental representations arise. In attempting such explanations, causal theories have been widely perceived to have numerous attractive features. If, for example, one use for mental representations is to help one keep track of events in the world, then some causal connection between mind and world makes sense. This attractiveness has been enough to motivate new causal theories (e.g., Rupert, 1999, Usher, 2001, and Ryder, 2004), despite the widespread recognition of serious challenges to an earlier generation of theories developed by Stampe, Dretske, Fodor, and others.

Bibliography

  • Adams, F., 1979, “A Goal-State Theory of Function Attribution,” Canadian Journal of Philosophy, 9: 493–518.
  • –––, 2003a, “Thoughts and their Contents: Naturalized Semantics,” in S. Stich and T. Warfield (eds.), The Blackwell Guide to Philosophy of Mind, Oxford: Basil Blackwell, pp. 143–171.
  • –––, 2003b, “The Informational Turn in Philosophy,” Minds and Machines, 13: 471–501.
  • Adams, F. and Aizawa, K., 1992, “‘X’ Means X: Semantics Fodor-Style,” Minds and Machines, 2: 175–183.
  • –––, 1994a, “Fodorian Semantics,” in S. Stich and T. Warfield (eds.), Mental Representations, Oxford: Basil Blackwell, pp. 223–242.
  • –––, 1994b, “‘X’ Means X: Fodor/Warfield Semantics,” Minds and Machines, 4: 215–231.
  • Adams, F., Drebushenko, D., Fuller, G., and Stecker, R., 1990, “Narrow Content: Fodor’s Folly,” Mind & Language, 5: 213–229.
  • Adams, F. and Dietrich, L., 2004, “What’s in a(n Empty) Name?,” Pacific Philosophical Quarterly, 85: 125–148.
  • Adams, F. and Enc, B., 1988, “Not Quite by Accident,” Dialogue, 27: 287–297.
  • Adams, F. and Stecker, R., 1994, “Vacuous Singular Terms,” Mind & Language, 71: 1–12.
  • Agar, N., 1993, “What do frogs really believe?,” Australasian Journal of Philosophy, 9: 387–401.
  • Aizawa, K., 1994, “Lloyd’s Dialectical Theory of Representation,” Mind & Language, 9: 1–24.
  • Antony, L. and Levine, J., 1991, “The Nomic and the Robust,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics, Oxford: Basil Blackwell, pp. 1–16.
  • Artiga, M. and Sebastián, M. A., 2020, “Informational Theories of Content and Mental Representation,” Review of Philosophy and Psychology, 11: 613–27.
  • Baker, L., 1989, “On a Causal Theory of Content,” Philosophical Perspectives, 3: 165–186.
  • –––, 1991, “Has Content Been Naturalized?,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics, Oxford: Basil Blackwell, pp. 17–32.
  • Bar-On, D., 1995, “‘Meaning’ Reconstructed: Grice and the Naturalizing of Semantics,” Pacific Philosophical Quarterly, 76: 83–116.
  • Boghossian, P., 1991, “Naturalizing Content,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics, Oxford: Basil Blackwell, pp. 65–86.
  • Bridges, J., 2006, “Does Informational Semantics Commit Euthyphro’s Fallacy?,” Noûs, 40: 522–547.
  • Brooks, R., 1991, “Intelligence without Representation,” Artificial Intelligence, 47: 139–159.
  • Buras, T., 2009, “An Argument against Causal Theories of Mental Content,” American Philosophical Quarterly, 46: 117–129.
  • Cain, M. J., 2009, “Fodor’s Attempt to Naturalize Mental Content,” The Philosophical Quarterly, 49: 520–526.
  • Chemero, A., 2009, Radical Embodied Cognitive Science, Cambridge, MA: The MIT Press.
  • Cummins, R., 1989, Meaning and Mental Representation, Cambridge, MA: MIT/Bradford.
  • –––, 1997, “The LOT of the Causal Theory of Mental Content,” Journal of Philosophy, 94: 535–542.
  • Dennett, D., 1987, “Review of J. Fodor’s Psychosemantics,” Journal of Philosophy, 85: 384–389.
  • Dretske, F., 1981, Knowledge and the Flow of Information, Cambridge, MA: MIT/Bradford Press.
  • –––, 1983, “Precis of Knowledge and the Flow of Information,” Behavioral and Brain Sciences, 6: 55–63.
  • –––, 1986, “Misrepresentation,” in R. Bogdan (ed.), Belief, Oxford: Oxford University Press, pp. 17–36.
  • –––, 1988, Explaining Behavior: Reasons in a World of Causes, Cambridge, MA: MIT/Bradford.
  • –––, 1999, Naturalizing the Mind, Cambridge, MA: MIT Press.
  • Enç, B., 1982, “Intentional States of Mechanical Devices,” Mind, 91: 161–182.
  • Enç, B. and Adams, F., 1998, “Functions and Goal-Directedness,” in C. Allen, M. Bekoff and G. Lauder (eds.), Nature’s Purposes, Cambridge, MA: MIT/Bradford, pp. 371–394.
  • Fodor, J., 1984, “Semantics, Wisconsin Style,” Synthese, 59: 231–250. (Reprinted in Fodor, 1990a.)
  • –––, 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind, Cambridge, MA: MIT/Bradford.
  • –––, 1990a, A Theory of Content and Other Essays, Cambridge, MA: MIT/Bradford Press.
  • –––, 1990b, “Information and Representation,” in P. Hanson (ed.), Information, Language, and Cognition, Vancouver: University of British Columbia Press, pp. 175–190.
  • –––, 1990c, “Psychosemantics or Where do Truth Conditions come from?,” in W. Lycan (ed.), Mind and Cognition, Oxford: Basil Blackwell, pp. 312–337.
  • –––, 1991, “Replies,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics, Oxford: Basil Blackwell, pp. 255–319.
  • –––, 1994, The Elm and the Expert, Cambridge, MA: MIT/Bradford.
  • –––, 1998a, Concepts: Where Cognitive Science Went Wrong, Oxford: Oxford University Press.
  • –––, 1998b, In Critical Condition: Polemical Essays on Cognitive Science and the Philosophy of Mind, Cambridge, MA: MIT/Bradford Press.
  • Gibson, M., 1996, “Asymmetric Dependencies, Ideal Conditions, and Meaning,” Philosophical Psychology, 9: 235–259.
  • Godfrey-Smith, P., 1989, “Misinformation,” Canadian Journal of Philosophy, 19: 533–550.
  • –––, 1992, “Indication and Adaptation,” Synthese, 92: 283–312.
  • Grice, H., 1957, “Meaning,” The Philosophical Review, 66: 377–88.
  • –––, 1989, Studies in the Way of Words, Cambridge: Harvard University Press.
  • Haugeland, J., 1999, “Mind Embodied and Embedded,” in J. Haugeland (ed.), Having Thought, pp. 207–237.
  • Horgan, T., and Tienson, J., 2002, “The Intentionality of Phenomenology and the Phenomenology of Intentionality,” in D. Chalmers, Philosophy of Mind: Classical and Contemporary Readings, Oxford: Oxford University Press, pp. 520–933.
  • Johnson, M., 2007, The Meaning of the Body: Aesthetics of Human Understanding, Chicago, IL: University of Chicago Press.
  • Jones, T., Mulaire, E., and Stich, S., 1991, “Staving off Catastrophe: A Critical Notice of Jerry Fodor’s Psychosemantics,” Mind & Language, 6: 58–82.
  • Lloyd, D., 1987, “Mental Representation from the Bottom up,” Synthese, 70: 23–78.
  • –––, 1989, Simple Minds, Cambridge, MA: The MIT Press.
  • Loar, B., 1991, “Can We Explain Intentionality?,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics, Oxford: Basil Blackwell, pp. 119–135.
  • Loewer, B., 1987, “From Information to Intentionality,” Synthese, 70: 287–317.
  • Maloney, C., 1990, “Mental Representation,” Philosophy of Science, 57: 445–458.
  • Maloney, J., 1994, “Content: Covariation, Control and Contingency,” Synthese, 100: 241–290.
  • Manfredi, P. and Summerfield, D., 1992, “Robustness without Asymmetry: A Flaw in Fodor’s Theory of Content,” Philosophical Studies, 66: 261–283.
  • McLaughlin, B. P., 1991, “Belief individuation and Dretske on naturalizing content,” in B. P. McLaughlin (ed.), Dretske and His Critics, Oxford: Basil Blackwell, pp. 157–79.
  • –––, 2016, “The Skewed View From Here: Normal Geometrical Misperception,” Philosophical Topics, 44: 231–99.
  • Mendelovici, A., 2013, “Reliable misrepresentation and tracking theories of mental representation,” Philosophical Studies, 165: 421–443.
  • –––, 2016, “Why tracking theories should allow for clean cases of reliable misrepresentation,” Disputatio, 8: 57–92.
  • Millikan, R., 1984, Language, Thought and Other Biological Categories, Cambridge, MA: MIT Press.
  • –––, 1989, “Biosemantics,” Journal of Philosophy, 86: 281–97.
  • –––, 2001, “What Has Natural Information to Do with Intentional Representation?,” in D. M. Walsh (ed.), Naturalism, Evolution and Mind, Cambridge: Cambridge University Press, pp. 105–125.
  • Neander, K., 1995, “Misrepresenting and Malfunctioning,” Philosophical Studies, 79: 109–141.
  • –––, 1996, “Dretske’s Innate Modesty,” Australasian Journal of Philosophy, 74: 258–274.
  • Papineau, D., 1984, “Representation and Explanation,” Philosophy of Science, 51: 550–72.
  • –––, 1998, “Teleosemantics and Indeterminacy,” Australasian Journal of Philosophy, 76: 1–14.
  • Pineda, D., 1998, “Information and Content,” Philosophical Issues, 9: 381–387.
  • Possin, K., 1988, “Sticky Problems with Stampe on Representations,” Australasian Journal of Philosophy, 66: 75–82.
  • Price, C., 1998, “Determinate functions,” Noûs, 32: 54–75.
  • Rupert, R., 1999, “The Best Test Theory of Extension: First Principle(s),” Mind & Language, 14: 321–355.
  • –––, 2001, “Coining Terms in the Language of Thought: Innateness, Emergence, and the Lot of Cummins’s Argument against the Causal Theory of Mental Content,” Journal of Philosophy, 98: 499–530.
  • –––, 2008, “Causal Theories of Mental Content,” Philosophy Compass, 3: 353–80.
  • Ryder, D., 2004, “SINBAD Neurosemantics: A Theory of Mental Representation,” Mind & Language, 19: 211–240.
  • Schulte, P., 2012, “How Frogs See the World: Putting Millikan’s Teleosemantics to the Test,” Philosophia, 40: 483–96.
  • –––, 2015, “Perceptual Representations: A Teleosemantic Answer to the Breadth-of-Application Problem,” Biology & Philosophy, 30: 119–36.
  • –––, 2018, “Perceiving the world outside: How to solve the distality problem for informational teleosemantics,” The Philosophical Quarterly, 68: 349–69.
  • Skyrms, B., 2008, “Signals,” Philosophy of Science, 75: 489–500.
  • –––, 2010a, Signals: Evolution, Learning, and Information, Oxford: Oxford University Press.
  • –––, 2010b, “The flow of information in signaling games,” Philosophical Studies, 147: 155–65.
  • –––, 2012, “Learning to signal with probe and adjust,” Episteme, 9: 139–50.
  • Stampe, D., 1975, “Show and Tell,” in B. Freed, A. Marras, and P. Maynard (eds.), Forms of Representation, Amsterdam: North-Holland, pp. 221–245.
  • –––, 1977, “Toward a Causal Theory of Linguistic Representation,” in P. French, H. K. Wettstein, and T. E. Uehling (eds.), Midwest Studies in Philosophy, vol. 2, Minneapolis: University of Minnesota Press, pp. 42–63.
  • –––, 1986, “Verification and a Causal Account of Meaning,” Synthese, 69: 107–137.
  • –––, 1990, “Content, Context, and Explanation,” in E. Villanueva (ed.), Information, Semantics, and Epistemology, Oxford: Blackwell, pp. 134–152.
  • Stegmann, U. E., 2005, “John Maynard Smith’s notion of animal signals,” Biology and Philosophy, 20: 1011–25.
  • –––, 2009, “A consumer-based teleosemantics for animal signals,” Philosophy of Science, 76: 864–75.
  • Sterelny, K., 1990, The Representational Theory of Mind, Oxford: Blackwell.
  • Stich, S., 1983, From Folk Psychology to Cognitive Science, Cambridge, MA: The MIT Press.
  • Sturdee, D., 1997, “The Semantic Shuffle: Shifting Emphasis in Dretske’s Account of Representational Content,” Erkenntnis, 47: 89–103.
  • Tye, M., 1995, Ten Problems of Consciousness: A Representational Theory of Mind, Cambridge, MA: MIT Press.
  • Usher, M., 2001, “A Statistical Referential Theory of Content: Using Information Theory to Account for Misrepresentation,” Mind and Language, 16: 311–334.
  • –––, 2004, “Comment on Ryder’s SINBAD Neurosemantics: Is Teleofunction Isomorphism the Way to Understand Representations?,” Mind and Language, 19: 241–248.
  • Van Gelder, T., 1995, “What Might Cognition Be, If not Computation?,” The Journal of Philosophy, 91: 345–381.
  • Wallis, C., 1994, “Representation and the Imperfect Ideal,” Philosophy of Science, 61: 407–428.
  • –––, 1995, “Asymmetrical Dependence, Representation, and Cognitive Science,” The Southern Journal of Philosophy, 33: 373–401.
  • Warfield, T., 1994, “Fodorian Semantics: A Reply to Adams and Aizawa,” Minds and Machines, 4: 205–214.
  • Wright, L., 1973, “Functions,” Philosophical Review, 82: 139–168.

Other Internet Resources

[Please contact the authors with suggestions.]

Copyright © 2021 by
Fred Adams
Ken Aizawa


The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

