Anglophone philosophers of mind generally use the term “belief” to refer to the attitude we have, roughly, whenever we take something to be the case or regard it as true. To believe something, in this sense, needn’t involve actively reflecting on it: Of the vast number of things ordinary adults believe, only a few can be at the fore of the mind at any single time. Nor does the term “belief”, in standard philosophical usage, imply any uncertainty or any extended reflection about the matter in question (as it sometimes does in ordinary English usage). Many of the things we believe, in the relevant sense, are quite mundane: that we have heads, that it’s the 21st century, that a coffee mug is on the desk. Forming beliefs is thus one of the most basic and important features of the mind, and the concept of belief plays a crucial role in both philosophy of mind and epistemology. The “mind-body problem”, for example, so central to philosophy of mind, is in part the question of whether and how a purely physical organism can have beliefs. Much of epistemology revolves around questions about when and how our beliefs are justified or qualify as knowledge.
Most contemporary philosophers characterize belief as a “propositional attitude”. Propositions are generally taken to be whatever it is that sentences express (see the entry on propositions). For example, if two sentences mean the same thing (e.g., “snow is white” in English, “Schnee ist weiss” in German), they express the same proposition, and if two sentences differ in meaning, they express different propositions. (Here we are setting aside some complications that might arise concerning indexicals; see the entry on indexicals.) A propositional attitude, then, is the mental state of having some attitude, stance, take, or opinion about a proposition or about the potential state of affairs in which that proposition is true—a mental state of the sort canonically expressible in the form “S A that P”, where S picks out the individual possessing the mental state, A picks out the attitude, and P is a sentence expressing a proposition. For example: Ahmed [the subject] hopes [the attitude] that Alpha Centauri hosts intelligent life [the proposition], or Yifeng [the subject] doubts [the attitude] that New York City will exist in four hundred years. What one person doubts or hopes, another might fear, or believe, or desire, or intend—different attitudes, all toward the same proposition. Discussions of belief are often embedded in more general discussions of the propositional attitudes; and treatments of the propositional attitudes often take belief as the first and foremost example.
It is common to think of believing as involving entities—beliefs—that are in some sense contained in the mind. When someone learns a particular fact, for example, when Kai reads that garden snails are hermaphrodites, they acquire a new belief (in this case, the belief that garden snails are hermaphrodites). The fact in question—or more accurately, a representation, symbol, or marker of that fact—may be stored in memory and accessed or recalled when necessary. In one way of speaking, the belief just is the fact or proposition represented, or the particular stored token of that fact or proposition; in another way of speaking, the belief is the state of having such a fact or representation stored.
It is also common to suppose that beliefs play a causal role in the production of behavior. Continuing the example, we might imagine that after learning about garden snail mating, Kai naturally turns their attention elsewhere, not consciously considering the matter for several days, until they and their ten-year-old daughter start watching an internet video about molluscs. Involuntarily, Kai’s new knowledge about the hermaphroditism of garden snails is called up from memory. Kai says to her, “Did you know that garden snails have both male and female organs at the same time?” It seems plausible to say that Kai’s belief about garden snails, or their possession of that belief, caused, or figured in a causal explanation of, their utterance.
Various elements of this intuitive characterization of belief have been challenged by philosophers, but it is probably fair to say that the majority of contemporary philosophers of mind accept the bulk of this picture, which embodies the core ideas of the representational approach to belief, according to which central cases of belief involve someone’s having in their head or mind a representation with the same propositional content as the belief. (But see §2.2, below, for some caveats, and see the entry on mental representation.) As discussed below, representationalists may diverge in their accounts of the nature of representation, and they need not agree about what further conditions, besides possessing such a representation, are necessary if a being is to qualify as having a belief. Among the more prominent advocates of a representational approach to belief are Fodor (1975, 1981, 1987, 1990), Millikan (1984, 1993), Dretske (1988), Burge (2010), Mandelbaum (2016; Quilty-Dunn and Mandelbaum 2018), and Zimmerman (2018).
One strand of representationalism, endorsed by Fodor, takes mental representations to be sentences in an internal language of thought. To get a sense of what this view amounts to, it is helpful to start with an analogy. Computers are sometimes characterized as operating by manipulating sentences in “machine language” in accordance with certain rules. Consider a simplified description of what happens as one enters numbers into a spreadsheet. Inputs from the keyboard cause the computer, depending on the programs it is running and its internal state, to instantiate or “token” a sentence (in machine language) with the content (translated into English) of, for example, “numerical value 4 in cell A1”. In accordance with certain rules, the machine then displays the shape “4” in a certain location on the monitor, and perhaps, if it is implementing the rule “the values of column B are to be twice the values of column A”, it tokens the sentence “numerical value 8 in cell B1” and displays the shape “8” in another location on the monitor. If we someday construct a robot whose behavior resembles that of a human being, we might imagine it to operate along broadly the lines described above—that is, by manipulating machine-language sentences in accordance with rules, in connection with various potential inputs and outputs. Such a robot might somewhere store the machine-language sentence whose English translation is “the chemical formula for water is H2O”. We might suppose this robot is able to act as does a human who possesses this belief because it is disposed to access this sentence appropriately on relevant occasions: When asked “of what chemical elements is water compounded?”, the robot accesses the water sentence and manipulates it and other relevant sentences in such a way that it produces an appropriate response.
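The spreadsheet description can be rendered as a brief sketch. The code below is an illustration of my own, in Python rather than machine language; the function and variable names are invented for the example, and the point is only that the system tokens explicit, sentence-like representations and manipulates them by rule:

```python
# Toy "spreadsheet": entering a value tokens an explicit representation
# (a cell-value pairing), and a rule manipulates such representations.

def enter_value(store, cell, value):
    """Token the representation 'numerical value <value> in cell <cell>'."""
    store[cell] = value
    # Rule: the values of column B are to be twice the values of column A.
    if cell.startswith("A"):
        row = cell[1:]
        store["B" + row] = 2 * value
    return store

store = {}
enter_value(store, "A1", 4)
print(store)  # {'A1': 4, 'B1': 8}
```

Nothing in the sketch depends on the particular data structure; what matters, on the analogy, is that there are discrete tokened representations and rules defined over them.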
According to the language of thought hypothesis (see the entry on the language of thought hypothesis), our cognition proceeds rather like such a robot’s. The formulae we manipulate are not in “machine language”, of course, but rather in a species-wide “language of thought”. A sentence in the language of thought with some particular propositional content P is a “representation” of P. On this view, a subject believes that P just in case they have a representation of P that plays the right kind of role—a “belief-like” role—in their cognition. That is, the representation must not merely be instantiated somewhere in the mind or brain, but it must be deployed, or apt to be deployed, in ways we regard as characteristic of belief. For example, it must be apt to be called up for use in theoretical inferences toward which it is relevant. It must be ready for appropriate deployment in deliberation about means to desired ends. It is sometimes said, in such a case, that the subject has the proposition P, or a representation of that proposition, tokened in their “belief box” (though of course it is not assumed that there is any literal box-like structure in the head).
Dretske’s view centers on the idea of representational systems as systems with the function of tracking features of the world (for similar views, see Millikan 1984, 2017; Neander 2017). Organisms, especially mobile ones, generally need to keep track of features of their environment to be evolutionarily successful. Consequently, they generally possess internal systems whose function it is to covary in certain ways with the environment. For example, certain marine bacteria contain internal magnets that align with the Earth’s magnetic field. In the northern hemisphere, these bacteria, guided by the magnets, propel themselves toward magnetic north. Since in the northern hemisphere magnetic north tends downward, they are thus carried toward deeper water and sediment, and away from toxic, oxygen-rich surface water. We might thus say that the magnetic system of these bacteria is a representational system that functions to indicate the direction of benign or oxygen-poor environments. In general, on Dretske’s view, an organism can be said to represent P just in case that organism contains a subsystem whose function it is to enter state A only if P holds, and that subsystem is in state A.
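Dretske’s condition has a simple two-part logical form, which can be made explicit in a schematic sketch. This is a hypothetical rendering of my own, not Dretske’s formalization; the class and attribute names are illustrative:

```python
# Schematic form of the condition: the organism represents P just in case
# (i) it contains a subsystem whose function is to enter state A only if P
# holds, and (ii) that subsystem is currently in state A.

class Subsystem:
    def __init__(self, function_indicates, state):
        self.function_indicates = function_indicates  # the condition P it has the function of tracking
        self.state = state                            # current state, e.g. "A"

def represents(subsystems, P):
    """True if some subsystem functions to indicate P and is in its indicating state."""
    return any(s.function_indicates == P and s.state == "A" for s in subsystems)

# The magnetotactic bacterium's magnet, on this gloss:
magnet = Subsystem(function_indicates="oxygen-poor water this way", state="A")
print(represents([magnet], "oxygen-poor water this way"))  # True
```

Note that the sketch captures only the conjunction of the two clauses; what confers the “function” on a subsystem (evolutionary history, learning, design) is precisely what the surrounding theory has to supply.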
To have beliefs, Dretske suggests, is to have an integrated manifold of such representational systems, acquired in part through associative learning, poised to guide behavior. Given the lack of such a complex, and the lack of associative learning, magnetosome bacteria cannot, on Dretske’s view, rightly be regarded as literally possessing full-fledged beliefs. But exactly how rich an organism’s representational structure must be for it to have beliefs, and in what ways, Dretske regards as a terminological boundary dispute, rather than a matter of deep ontological significance. (For more on belief in non-human animals see §4 below.)
If one accepts a representational view of belief, it’s plausible to suppose that the relevant representations are structured in some way—that the belief that P & Q, for instance, shares something structurally in common with the belief that P. To say this is not merely to say that the belief that P & Q has the following property: It cannot be true unless the belief that P is true. Consider the following possible development of Dretske’s representational approach: An organism has developed a system that functions to detect whether P is or is not the case. It’s supposed to enter state alpha when P is true; its being in alpha has the function of indicating P. Also, the organism has developed a separate system for detecting whether P & Q is the case. It’s supposed to enter state beta when P & Q is true; its being in beta has the function of indicating P & Q. But alpha and beta have nothing important in common other than what, in the outside world, they are supposed to represent; they have no structural similarity; one is not compounded in part from the other. Conceivably, all our beliefs could be set up in this way, having as little in common as alpha and beta—one internally unstructured representational state after another. To say that mental representations are structured is in part to deny that our minds work like that.
Among the reasons to suppose that our representations are structured, Fodor argues, are the productivity and systematicity of thought (Fodor 1987; Fodor and Pylyshyn 1988; Aizawa 2003). Thought and belief are “productive” in the sense that we can potentially think or believe an indefinitely large number of things: that elephants despise bowling, that 245 + 382 = 627, that river bottoms are usually not composed of blue beads. If representations are unstructured, each of these different potential beliefs must, once believed, be an entirely new state, not constructed from representational elements previously available. Similarly, thought and belief are “systematic” in the sense that an organism who thinks or believes that Mengzi repudiated Gaozi will normally also have the capacity (if not necessarily the inclination) to think or believe that Gaozi repudiated Mengzi; an organism who thinks or believes that dogs are insipid and cats are resplendent will normally also have the capacity to think or believe that dogs are resplendent and cats are insipid. If representations are structured, if they have elements that can be shuffled and recombined, the productivity and systematicity of thought and belief seem naturally to follow. Conversely, someone who holds that representations are unstructured has, at least, some explaining to do to account for these features of thought. (So also, apparently, does someone who denies that belief is underwritten or implemented by a representational system of any sort.)
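The link between structure and systematicity can be sketched concretely. In the toy example below (my construction, not Fodor’s), the capacity to token “Mengzi repudiated Gaozi” from recombinable elements automatically brings with it the capacity to token “Gaozi repudiated Mengzi”, simply by rearranging the same parts:

```python
# If thoughts are built from recombinable elements, the set of thinkable
# subject-relation-object thoughts is generated by permuting those elements.
from itertools import permutations

def thinkable_sentences(names, relation):
    """All simple subject-relation-object thoughts built from the same parts."""
    return {f"{a} {relation} {b}" for a, b in permutations(names, 2)}

thoughts = thinkable_sentences(["Mengzi", "Gaozi"], "repudiated")
print(sorted(thoughts))
# ['Gaozi repudiated Mengzi', 'Mengzi repudiated Gaozi']
```

On an unstructured (“alpha/beta”) picture, by contrast, each such thought would be a wholly separate state, and no analogous generative story would be available.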
Supposing representations are structured, then, what kind of structure do they have? Fodor notes that productivity and systematicity are features not just of thought but also of language, and concludes that representational structure must be linguistic. He endorses the idea of an innate, species-wide language of thought (as discussed briefly in §1.1 above); others tie the structure more closely to the thinker’s own natural (learned) language (Harman 1973; Field 1978; Carruthers 1996). However, still others assert that the representational structure underwriting belief isn’t language-like at all.
A number of philosophers have argued that our cognitive representations have, or can have, a map-like rather than a linguistic structure (Lewis 1994; Braddon-Mitchell and Jackson 1996; Camp 2007, 2018; Rescorla 2009; though see Blumson 2012 and Johnson 2015 for concerns about whether map-like and language-like structures are importantly distinct). Map-like representational systems are both productive and systematic: By recombination and repetition of its elements, a map can represent indefinitely many potential states of affairs; and a map-like system that has the capacity, for example, to represent the river as north of the mountain will normally also have the capacity to represent, by a re-arrangement of its parts, the mountain as north of the river. Although maps may sometimes involve words or symbols, nothing linguistic seems to be essential to the nature of map-like representation: Some maps are purely pictorial or combine pictorial elements with symbolic elements, like coloration to represent altitude, that we don’t ordinarily think of as linguistic.
The maps view makes nice sense of the fact that when a person changes one belief, a multitude of other beliefs seem also to change simultaneously and effortlessly: If you shift a mountain farther north on a map, for example, you immediately and automatically change many other aspects of the representational system (the distance between the mountain and the north coast, the direction one must hike to go from the mountain to the oasis, etc.). In contrast, if you change the linguistic representation “the mountain peak is 15 km north of the river” to “the mountain peak is 20 km north of the river”, no other representation necessarily changes: It takes a certain amount of inferential work to ramify the consequences through the rest of the system. Since it doesn’t seem like we’re constantly making such a plethora of inferences, the maps view might have an advantage here. On the other hand, perhaps just because the linguistic view requires inference for what appears to happen automatically on the maps view, the linguistic view can more easily account for failures of rationality, in which not all the necessary changes are made and the subject ends up with an inconsistent view. Indeed, generally speaking, it’s unclear how the map view can accommodate inconsistent beliefs unless one allows a proliferation of maps, with the complications that ensue (like redundancy and mechanisms for relating the maps; Yalcin 2021). Certain sorts of indeterminacy may also be more difficult to accommodate in map-like than in language-like structures. A linguistic representation like “there are some lakes east of the mountain” can leave completely unspecified how many lakes, of what shape, and where; a map does not, it seems, as easily do this. One further point of apparent difference between the two views will be discussed in §2.2 below. Generally speaking, one might worry that the maps view overgenerates and overspecifies beliefs, while the linguistic view undergenerates and underspecifies them.
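The contrast between automatic and inference-driven updating can be put in data-structure terms. The following is an illustrative sketch of my own, not drawn from the literature: a map-like store holds locations, so derived facts (distances, directions) change the moment a location changes, whereas a sentence-like store holds independent formulae, so updating one sentence leaves the rest untouched until further inferences are explicitly drawn:

```python
# Map-like store: derived facts are computed from shared coordinates,
# so moving the mountain automatically "updates" every derived fact.
map_store = {"mountain": (0, 15), "river": (0, 0)}  # (east, north), in km

def km_north(store, a, b):
    return store[a][1] - store[b][1]

map_store["mountain"] = (0, 20)          # shift the mountain farther north...
print(km_north(map_store, "mountain", "river"))  # 20, with no further work

# Sentence-like store: each formula is an independent token.
sentence_store = {"the mountain peak is 15 km north of the river",
                  "the hike from the river to the peak takes 4 hours"}
sentence_store.discard("the mountain peak is 15 km north of the river")
sentence_store.add("the mountain peak is 20 km north of the river")
# The hiking-time sentence is unchanged; revising it requires inference.
```

The same feature cuts both ways, as noted above: the map-like store cannot simultaneously hold the mountain at 15 km and at 20 km, while the sentence-like store can, which is one way of modeling inconsistent belief.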
A third and very different way of thinking about representational structure arises from the perspective of connectionism, a position in cognitive science and computational theory. According to connectionism, cognition proceeds by activation streaming through a series of “nodes” connected by adjustable “connection weights”—somewhat as neural networks in the brain can have different levels of activation and different strengths of connection between each other. It is sometimes suggested (e.g., by van Gelder 1990; Smolensky 1995; Shea 2007) that the structure of connectionist networks is representational but non-linguistic or non-“compositional”; and perhaps so also is human representational structure. However, it would take us too far afield to enter this technical issue here. (For more on this topic see the entry on connectionism.)
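The basic connectionist picture of activation flowing through weighted connections can be conveyed in a few lines. This is a minimal sketch, with arbitrary illustrative weights and a simple threshold activation; real connectionist models are far larger and learn their weights:

```python
# Activation streaming through weighted connections to a single node:
# the node's activation is a thresholded weighted sum of its inputs.

def activate(inputs, weights, threshold=0.5):
    """Simple threshold unit: fires (1.0) if weighted input exceeds threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 if total > threshold else 0.0

print(activate([1.0, 0.0, 1.0], [0.4, 0.9, 0.3]))  # 1.0, since 0.4 + 0.3 > 0.5
```

Notice that nothing in such a network looks like a discrete sentence-sized token; any “representation” is distributed across the pattern of weights and activations, which is why connectionist structure is often described as non-compositional.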
It would perhaps be surprising if the representational structures of human cognition corresponded precisely to any familiar technology. A pluralistic, computational approach to representational structure would allow for multiple format types, each characterized by the computational vehicles’ allowable value ranges and the constraints on the relations among those values (Vernazzani and Mollo forthcoming).
While representationalists like Fodor, Dretske, and Mandelbaum contend that having the right internal, representational structure is essential to having beliefs, another group of philosophers treats the internal structure of the mind as of only incidental relevance to the question of whether a being is properly described as believing. One way to highlight the difference between this view and representationalism is this: Imagine that we discover an alien being, of unknown constitution and origin, whose behavior and overall behavioral dispositions are perfectly normal by human standards. “Rudolfo”, say, emerges from a spacecraft and integrates seamlessly into U.S. society, becoming a tax lawyer, football fan, and Democratic Party activist. Even if we know next to nothing about what is going on inside his head, it may seem natural to say that Rudolfo has beliefs much like ours—for example, that the 1040 is normally due April 15, that a field goal is worth 3 points, and that labor unions tend to support Democratic candidates. Perhaps we can coherently imagine that Rudolfo does not manipulate sentences in a language of thought or possess internal representational structures of the right sort. Perhaps it is conceptually, even if not physically, possible that he has no complex, internal, cognitive organ, no real brain. But even if it is granted that a creature must have human-like representations in order to behave thoroughly like a human being, one might still think that it is the pattern of actual and potential behavior that is fundamental in belief—that representations are essential to belief only because, and to the extent that, they ground such a pattern. Dispositionalists and interpretationists are drawn to this way of thinking.
Traditional dispositional views of belief assert that for someone to believe some proposition P is for that person to possess one or more particular behavioral dispositions pertaining to P. Often cited is the disposition to assent to utterances of P in the right sorts of circumstances (if one understands the language, wishes to reveal one’s true opinion, is not physically incapacitated, etc.). Other relevant dispositions might include the disposition to exhibit surprise should the falsity of P make itself evident, the disposition to assent to Q if one is shown that P implies Q, and the disposition to depend on P’s truth in executing one’s plans. Perhaps all such dispositions can be brought under a single heading, which is, most generally, being disposed to act as though P is the case. Such actions are normally taken to be at least pretty good prima facie evidence of belief in P; the question is whether being disposed, overall, so to act is tantamount to believing P, as the dispositionalist thinks, or whether it is merely an outward sign of belief. Braithwaite (1932–1933) and Marcus (1990) are prominent advocates of the traditional dispositional approach to belief (though Braithwaite emphasizes in his analysis another form of belief, rather like “occurrent” belief as described in §2.1 below).
There are two standard objections to traditional dispositional accounts of belief. The first, tracing back at least to Chisholm (1957), assumes that the dispositionalist’s aim is to reduce or analyze facts about belief entirely into facts about outward behavior, facts specifiable without reference to other beliefs, desires, inner feelings, and so forth (see the entry on philosophical behaviorism). Such a reduction or analysis appears impossible for the following reason: People with the same belief may behave very differently, depending on their other beliefs, desires, and so forth. For example, a person who believes that it will rain will only be disposed to take an umbrella if they also believe that the umbrella will ward off the water and if they don’t want to get wet. Change the surrounding beliefs and desires and very different behavior may result. A dispositionalist attempting to specify the particular behavioral dispositions associated with, for example, the belief that it’s raining will then either get it wrong about the dispositions of some people (such as those who like to get wet) or will be forced to incorporate into their dispositional analysis conditional antecedents invoking the very ideas they are trying to analyze or reduce away—saying, for example, that the person who believes that P will behave in such-and-such a way if they also believe X and desire Y—apparently dooming the reductionist project. (It may be possible to avoid this objection by invoking a “Ramsey”-like approach to the reduction [see the section on Functional States and Ramsey Sentences in the entry on functionalism, and Lewis 1972], but this type of analysis was not widely discussed until after traditional dispositional approaches to belief had gone largely out of fashion.)
The second standard objection to traditional dispositional accounts of belief is to note the loose connection between belief and behavior in some cases—for example, in a recently paralyzed person, or in someone who wants to keep a private opinion (e.g., a Muscovite who believes, in 1937, that Stalin’s purges are morally wrong), or in matters of very little practical relevance (e.g., an American homebody’s belief that there is at least one church in Nice). Again, the traditional dispositionalist seems faced with a choice between oversimplifying (and thus mischaracterizing some people’s dispositions) and loading the dispositions with potentially problematic or unwieldy conditional antecedents (e.g., they’d get the umbrella if their paralysis healed; they’d speak up if the political climate changed). On the other hand, however, the demand for an absolutely precise specification of the conditions under which a disposition will be manifested, without exception, may be excessive. As Cartwright (1983) has noted, even perfectly respectable claims in the physical sciences often hold only ceteris paribus or “all else being equal”.
In light of these concerns and others, most recent philosophers sympathetic with the view described in the first paragraph of this section have abandoned traditional dispositionalism. They divide into roughly two classes, which we may call liberal dispositionalists and interpretationists. Liberal dispositionalists avoid the first objection by abandoning the reductionist project associated with traditional dispositionalism. They permit appeal to other mental states in specifying the dispositions relevant to any particular belief—possibly including other beliefs and desires. They also broaden the range of dispositions considered relevant to the possession of a belief so as to include at least some dispositions to undergo private mental episodes that do not manifest in outwardly observable behavior—dispositions, for example, for the subject to feel (and not just exhibit) surprise should they discover the falsity of P, for them privately to draw conclusions from P, to feel confidence in the truth of P, to utter P silently in inner speech, and so forth. This appears also to mitigate the second objection to some extent (though see Moore and Botterill 2023): The Muscovite possesses their belief about Stalin’s purges at least as much in virtue of the things they say silently in inner speech and the disapproval they privately feel as in virtue of their disposition to express that opinion were the political climate to change. Advocates of views of this sort include Price (1969), Baker (1995), Schwitzgebel (2002, 2013), and arguably Ryle (1949) and Ramsey (1926 [1990], 1927–1929 [1991]; see Wright 2017). Smithies (forthcoming) sheds the behavioral focus of traditional dispositionalism entirely, arguing that to believe is to be disposed to feel (or to occurrently feel) conviction (see also Cohen 1992).
However, a philosopher approaching belief with the specific goal of defending physicalism or materialism—the view that everything in the world, including the mind, is wholly physical or material (see the entry on physicalism)—might have reason to be dissatisfied with liberal dispositionalism, for the very reason that it abandons the reductionist project. Although liberal dispositional accounts of belief are consistent with physicalism, they do not substantially advance that thesis, since they relate belief to other mental states that may or may not be seen as physical. The defense of physicalism was one of the driving forces in philosophy of mind in the period during which the most influential approaches to belief in contemporary analytic philosophy of mind were developed—the 1960s through the 1980s—and it was one of the principal reasons philosophers were interested in accounts of propositional attitudes such as belief. Consequently, the failure of liberal dispositionalism to advance the physicalist project might be seen as an important drawback.
Interpretationism shares with dispositionalism the emphasis on patterns of action and reaction, rather than internal representational structures, but retains the focus, abandoned by the liberal dispositionalist, on observable behavior—behavior interpretable by an outside observer. Since behavior is widely assumed to be physical, interpretationism can thus more easily be seen as advancing the physicalist project. The two most prominent interpretationists have been Dennett (1978, 1987, 1991) and Davidson (1984; see the entry on Donald Davidson; also see Lewis 1974; Mölder 2010; Curry 2020).
To gain a sense of Dennett’s view, consider three different methods we can use to predict the behavior of a human being. One method, which involves taking what Dennett calls the “physical stance”, is to apply our knowledge of physical law. We can predict that a diver will trace a parabolic trajectory to the water because we know how objects of that mass and size behave in free fall near the surface of the Earth. A second method, which involves taking the “design stance”, is to attribute functions to the system or its parts and to predict that the system will function properly. We can predict that a jogger’s pulse will increase as she heads up the hill because of what we know about exercise and the proper function of the circulatory system. A third method, which involves taking the “intentional stance”, is to attribute beliefs and desires to the person, and then to predict that they will behave rationally, given those beliefs and desires. Much of our prediction of human behavior appears to involve such attribution (though see Andrews 2012). Certainly, treating people as mere physical bodies or as biological machines will not, as a practical matter, get us very far in predicting what is important to us.
On Dennett’s view, a system with beliefs is a system whose behavior, while complex and difficult to predict when viewed from the physical or the design stance, falls into patterns that may be captured with relative simplicity and substantial if not perfect accuracy by means of the intentional stance. The system has the particular belief that P if its behavior conforms to a pattern that can be effectively captured by taking the intentional stance and attributing the belief that P. For example, we can say that Heddy believes that a hurricane may be coming because attributing to her that belief (along with other related beliefs and desires) helps reveal the pattern, invisible from the physical and design stances, behind her boarding up her windows, making certain phone calls, stocking up provisions, etc. All there is to having beliefs, according to Dennett, is embodying patterns of this sort. Dennett acknowledges that his view has the unintuitive consequence that a sufficiently sophisticated chess-playing machine would have beliefs if its behavior is very complicated from the design stance (which would involve appeal to its programmed strategies) but predictable with relative accuracy and simplicity from the intentional stance (attributing the desire to defend its queen, the belief that you won’t sacrifice a rook for a pawn, etc.).
Davidson also characterizes belief in terms of practices of belief attribution. He invites us to imagine encountering a being with a wholly unfamiliar language and then attempting the task of constructing, from observation of the being’s behavior in its environment, an understanding of that language (e.g., 1984, pp. 135–137). Success in this enterprise would necessarily involve attributing beliefs and desires to the being in question, in light of which its utterances make sense. An entity with beliefs is a being for whom such a project is practicable in principle—a being that emits, or is disposed to emit, a complex pattern of behavior that can productively be interpreted as linguistic, rational, and expressive of beliefs and desires.
Dennett and Davidson both endorse the “indeterminacy” of belief attributions: In at least some cases, multiple incompatible interpretive schemes may be equally good, and thus there may be no fact of the matter which of those schemes is “really” the correct one, and thus whether the subject “really” believes P, if belief that P is attributed by one scheme but not by the other.
Many philosophers identify themselves as functionalists (seefunctionalism) about mental states in general or belief in particular. Functionalismabout mental states is the view that what makes something a mentalstate of a particular type are its actual and potential, or itstypical, causal relations to sensory stimulations, behavior, and othermental states (seminal sources include Armstrong 1968; Fodor 1968;Lewis 1972, 1980; Putnam 1975; Block 1978). Functionalists generallycontrast their view with the view that what makes something a mentalstate of a particular type are facts about its internal structure. Tounderstand this distinction, it may be helpful to begin with somenon-mental examples. Arguably, what makes something a streptococcalbacterium, or a cube, is its shape or internal structure; its causalhistory or proneness to produce particular effects on particularoccasions is only secondarily relevant, if at all. In contrast,whether something is a hard drive or not is not principally a matterof internal structure. A hard drive could be made of plastic or steel,employ magnetic tape or lasers. What matters are the causalrelationships it’s prone to enter with a computer: Under certainpromptings, it enters states such that, under certain furtherpromptings, it will generate outputs of a certain sort. Internalstructure is relevant only secondarily, insofar as it grounds thesecausal capacities. Likewise, according to the functionalist, whatmakes a statepain is not its particular neuralconfiguration. People and animals with very different neuralconfigurations could all equally be in pain (even, conceivably, aMartian with an internal structure radically different from ours couldsuffer pain). 
What matters is that the subject is in a state that (roughly) is apt to be caused by tissue damage or tissue stress and that, in turn, is apt to cause signs of distress, withdrawal, future avoidance of the painful stimulus, and (in verbal subjects) thoughts and utterances like “that hurts!”.
Philosophers frequently endorse functionalism about belief without even briefly sketching out the various particular functional relationships that are supposed to be involved, though Loar (1981) is a notable exception to this tendency (see also Leitgeb 2017). However, among the causal relationships contemporary philosophers have often seen as characteristic of belief are the following (these are sketched here only roughly; they come in many versions differing in nuance):
(1) Reflection on propositions (e.g., [Q] and [if Q then P]) from which P straightforwardly follows, if one believes those propositions and is not antecedently committed to the falsity of P, typically causes the belief that P.
(2) Directing perceptual attention to the perceptible properties of things, events, or states of affairs, in conditions favorable to accurate perception, typically causes the belief that those things, events, or states of affairs have those properties (e.g., visually attending to a red shirt in good viewing conditions will typically cause the belief that the shirt is red).
(3) Believing that performing action A would lead to event or state of affairs E, conjoined with a desire for E and no overriding contrary desire, will typically cause an intention to do A.
(4) Believing that P, in conditions favoring sincere expression of that belief, will typically lead to an assertion of P.
Loar emphasizes versions of (2) and (3) over (1) and (4), but one sees conditions of this sort at least briefly alluded to by a number of functionalist philosophers, including Armstrong (1973), Dennett (1969, 1978), Stalnaker (1984), Fodor (1990), Pettit (1993), Shoemaker (2003), and Zimmerman (2018). For the functionalist, to believe just is to be in a state that plays this sort of causal role. The intimate connection, noted in (3), between belief and action is also historically rooted in the pragmatist tradition (Bain 1859/1876; Peirce 1878).
As the list of names of the previous paragraph suggests, functionalism is compatible with either a representationalist approach to belief (as in Fodor) or an interpretationist one (as in Dennett). (The interpretationist, of course, will have to treat the relevant functional states as posits of an interpretative theory or scheme.) Dispositional accounts of belief, too, can be functionalist. Indeed, dispositional accounts can be seen as a special or limiting case of functional accounts. To see this, it’s helpful to divide the causal relations appealed to by functionalism into the backward-looking and the forward-looking. Backward-looking causal relations pertain to what actually, potentially, or typically causes the state in question; forward-looking causal relations pertain to what effects the state in question actually, potentially, or typically has. Thus (1) and (2) above are backward-looking causal relations, while (3) and (4) are forward-looking. We might, then, see the dispositionalist as a functionalist who thinks only the forward-looking causal relations are definitive of belief: To believe is to be in a state apt to cause such-and-such behavioral (or other) manifestations. (This view is, of course, compatible with accepting the existence of regularities like (1) and (2), as long as they are not regarded as defining characteristics of belief.) Two caveats, however, should accompany this reduction of dispositionalism to functionalism: First, insofar as functionalism about belief requires a causal relationship between the belief state and its manifestations in behavior (or in other mental states), it will exclude dispositionalists like Ryle (1949) who don’t view the disposition-manifestation relationship causally (for discussion, see Section 6 (‘The causal efficacy of dispositions’) of the entry on dispositions).
Second, the liberal dispositionalist may wish to demur from the functionalist’s usual commitment to the reducibility of facts about functionally-definable mental states, en masse and in principle (allowing for the intricate network of interrelationships among them), to facts about sensory inputs and outward behavior.
The compatibility of functionalism and representationalism is not evident on its face, though a number of prominent philosophers appear to embrace both positions (e.g., Fodor 1968, 1975, 1981, 1990; Armstrong 1973; Harman 1973; Lycan 1981a, 1981b; Stalnaker 1984; Lewis 1994). As Millikan (1984), Papineau (1984), and others have suggested, it seems one thing to say that to believe is to be in a state that fills a particular causal role, and it seems quite another to say that beliefs are essentially states that represent how things stand in the world. How can something represent the world outside simply by virtue of playing a certain causal role in a cognitive system? Suppose, for example, that a state represents by virtue of having an indicator function of the sort described at the end of §1.1 above. The indicator function of an internal state or system would seem, at least sometimes and in part, to depend constitutively on the evolutionary history of that state or system, or its learning history, and not simply on the causal relationships it is currently disposed to enter. Despite the word “function” in “functionalism”, it’s not clear that standard functionalist accounts, limited as they are to appeal to a state’s actual, potential, or typical causal roles, can incorporate facts about a system’s evolutionary history or learning history: Conceivably, for example, two states in different individuals may have exactly analogous causal roles, yet differ in their (as Millikan says) “proper function” because of differences in the evolutionary or learning history of those systems.
Three escapes from this potential difficulty suggest themselves. One is to endorse a version of “conceptual [or functional] role semantics” according to which the representational status and content of a mental state is reducible just to facts about what is apt to cause and to be caused by the mental state in question—that is, to deny the relevance of remote evolutionary or learning history (e.g., Harman 1973, 1987). Another is to accept that causal role determines the representational status of a mental state (i.e., that it is a representation) but does not fully specify representational content (i.e., how that representation represents things as being); but this seems to involve abandoning full-blown functionalism. A third is to interpret more liberally what it is for a mental state to be “typically caused” (or perhaps “normally caused”) by some event or state of affairs: Perhaps it is enough that in the young organism, or its evolutionary ancestors, mental states of that sort were caused in a particular way, or the system was selected to be responsive to certain sorts of environmental factors. Such claims may be more easily reconcilable with certain canonical statements of functionalism (such as Lewis 1980) than with others (such as Putnam 1975). The issue has not been as fully discussed as it should be.
Some philosophers have denied the existence of beliefs altogether. Advocates of this view, generally known as eliminativism, include Churchland (1981), Stich (in his 1983 book; he subsequently moderated his opinion), and Jenson (2016). On this view, people’s everyday conception of the mind, their “folk psychology”, is a theory on par with folk theories about the origin of the universe or the nature of physical bodies. And just as our pre-scientific theories on the latter topics were shown to be radically wrong by scientific cosmology and physics, so also will folk psychology, which is essentially still pre-scientific, be overthrown by scientific psychology and neuroscience once they have advanced far enough. According to eliminativism, once folk psychology is overthrown, strict scientific usage will have no place for reference to most of the entities postulated by folk psychology, such as belief. Beliefs, then, like “celestial spheres” or “phlogiston”, will be judged not actually to exist, but rather to be the mistaken posits of a radically false theory. We may still find it convenient to speak of “belief” in informal contexts, if scientific usage is cumbersome, much as we still speak of “the sun going down”, but if the concept of belief does not map onto the categories described by a mature scientific understanding of the mind, then, literally speaking, no one believes anything. For further discussion of eliminativism and the considerations for and against it, see the entry on eliminative materialism.
Instrumentalists about belief regard belief attributions as useful for certain purposes, but hold that there are no definite underlying facts about what people really believe, or that beliefs are not robustly real, or that belief attributions are never in the strictest sense true. One sort of instrumentalism—what we might call hard instrumentalism—denies that beliefs exist in any sense. Hard instrumentalism is thus a form of eliminativism, conjoined with the thesis that belief-talk is nonetheless instrumentally useful (e.g., Quine 1960, p. 221 [but for a caveat see pp. 262–266]). Another type of instrumentalism, which we might call soft instrumentalism, grants that beliefs are real, but only in a less robust sense than is ordinarily thought. Dennett (1991) articulates a view of this sort. Consider as an analogy: Is the equator real? Well, not in the sense that there’s a red stripe running through the Congo; but saying that a country is on the equator says something true about its position relative to other countries and how it travels on the spinning Earth. Are beliefs real? Well, not perhaps in the sense of being representations stored somewhere in the mind; but attributing a belief to someone says something true about that person’s patterns of behavior and response. Beliefs are as real as equators, or centers of gravity, or the average Canadian. The soft instrumentalist holds that such things are not robustly real—not as real as mountains or masses or individual, actual Canadians. They are in some sense inventions that capture something useful in the structure of more robustly real phenomena. Soft instrumentalism in this sense comports naturally with approaches to belief such as dispositionalism and interpretationism, to the extent those positions treat belief attribution simply as a convenient means of pointing toward certain patterns in a subject’s real and hypothetical behavior (see also Poslajko 2022).
Similarly to instrumentalism, fictionalism treats belief attribution practices as potentially useful while downplaying or denying the real existence of the attributed beliefs (Demeter, Parent, and Toon, eds., 2022). Instrumentalism and fictionalism are not incompatible. However, fictionalism emphasizes the resemblance between belief attribution and fictional storytelling, while instrumentalism emphasizes the resemblance to devising a predictively successful scientific instrument or model.
Normativists hold that belief necessarily has a normative or evaluative dimension. That is, they emphasize the idea that it is central to a mental state’s being a belief that it is necessarily defective in a certain way if it is false, unjustified, or not rationally related to other attitudes. Shah and Velleman (2005) argue that conceiving of an attitude as a belief that P entails conceiving of it as governed by a norm of truth, that is, as an attitude that is correct if and only if P is true. Engel (2018) argues that among propositional attitudes, belief is the only one whose “correctness condition” is truth, distinguishing it from other closely related mental states, such as acceptances and epistemic feelings. (See also Wedgwood 2002; Gibbard 2005; Glüer and Wikforss 2013; McHugh and Whiting 2014.) Zangwill (2005) argues that part of the essence of belief is that if we believe that P and that P implies Q we should believe that Q (note the difference from the functionalist view that possessing those two beliefs typically causes belief that Q). Helton (2020) and Flores (forthcoming) argue that believing entails having the capacity to rationally update one’s beliefs.
Since normativism commits only to one necessary condition for a mental state to qualify as a belief, it is not by itself a full positive account of the nature of belief and is compatible with most of the approaches described above. Representationalist normativism, for example, starts from the idea that representational systems are functional systems of a certain sort (Millikan 1984; Dretske 1988), and function appears to be a normative concept, implying at least a contrast with malfunction. Burge (2010) argues that the “primary constitutive function” of believing is the production of veridical propositional representations. More broadly, belief has often been described as having a “direction of fit” in the sense that beliefs (unlike, for example, desires) ought to fit with, or get it right about, or match up to, the states of affairs they describe or represent (Anscombe 1957/1963; Searle 1983; Humberstone 1992; Frost 2014). If you believe that P and P is false, you have erred or made a mistake, whereas if you desire that P and P is false, you have not in the same way erred or made a mistake.
Philosophers often distinguish dispositional (alternatively, standing) from occurrent belief. This distinction depends on the more general distinction between dispositions and occurrences. Examples of dispositional statements include:
(1a) Corina runs a six-minute mile,
(1b) Leopold is excitable,
(1c) salt dissolves in water.
These statements can all be true even if, at the time they are uttered, Corina is asleep, Leopold is relaxed, and no salt is actually dissolved in any water. They thus contrast with statements about particular occurrences, such as:
(2a) Corina is running a six-minute mile,
(2b) Leopold is excited,
(2c) some salt is dissolving in water.
Although (1a-c) can be true while (2a-c) are false, (1a-c) cannot be true unless there are conditions under which (2a-c) would be true. We cannot say that Corina runs a six-minute mile unless there are conditions under which she would in fact do so. A dispositional claim is a claim, not about anything that is actually occurring at the time, but rather that some particular thing is prone to occur, under certain circumstances.
Suppose Harry thinks plaid ties are hideous. Only rarely does the thought or judgment that they are hideous actually come to the forefront of his mind. When it does, he possesses the belief occurrently. The rest of the time, Harry possesses the belief only dispositionally. The occurrent belief comes and goes, depending on whether circumstances elicit it; the dispositional belief endures. The common representationalist warehouse model of memory and belief suggests a way of thinking about this. A subject dispositionally believes P if a representation with the content P is stored in their memory or “belief box” (in the central, “explicit” case: see §2.2). When that representation is retrieved from memory for active deployment in reasoning or planning, the subject occurrently believes P. As soon as they move to the next topic, the occurrent belief ceases.
As the last paragraph suggests, one needn’t adopt a dispositional approach to belief in general to regard some beliefs as dispositional in the sense here described. In fact, a strict dispositionalism may entail the impossibility of occurrent belief: If to believe something is to embody a particular dispositional structure, then a thought or judgment might not belong to the right category of things to count as a belief. The thought or judgment, P, may be a manifestation of an overall dispositional structure characteristic of the belief that P, but it itself is not that structure.
Though the distinction between occurrent and dispositional belief is widely employed, it is rarely treated in detail. Important discussions include Price (1969), Armstrong (1973), Lycan (1986), Searle (1992), Audi (1994), and Bartlett (2018). David Hume (1740) famously offers an account of belief that treats beliefs principally as occurrences (see the section on Belief in Hume), in which he is partly followed by Braithwaite (1932–1933) and Gertler (2007).
It seems natural to say that you believe that the number of planets is less than 9, and also that the number of planets is less than 10, and also that the number of planets is less than 11, and so on, for any number greater than 8 that one cares to name. On a simplistic reading of the representational approach, this presents a difficulty. If each belief is stored individually in representational format somewhere in the mind, it would seem that we must have a huge number of stored representations relevant to the number of planets—more than it seems plausible or necessary to attribute to an ordinary human being. And of course this problem generalizes easily.
The advocate of the maps view of representational structure (see §1.1.1, above) can, perhaps, avoid this difficulty entirely, since it seems a map of the solar system does represent all these facts about the number of planets within a simple, tractable system. However, representationalists have more commonly responded to this issue by drawing a distinction between explicit and implicit belief. One believes P explicitly if a representation with that content is actually present in the mind in the right sort of way—for example, if a sentence with that content is inscribed in the “belief box” (see §1.1 above). One believes P implicitly (or tacitly) if one believes P, but the mind does not possess, in a belief-like way, a representation with that content. (Philosophers sometimes use the term dispositional to refer to beliefs that are implicit in the present sense—but this invites confusion with the occurrent-dispositional distinction discussed above (§2.1). Implicit beliefs are, perhaps, necessarily dispositional in the sense of the previous subsection, if occurrently deploying a belief requires explicitly tokening a representation of it; but explicit beliefs may plausibly be dispositional or occurrent.)
Perhaps all that’s required to implicitly believe something is that the relevant content be swiftly derivable from something one explicitly believes (Dennett 1978, 1987). Thus, in the planets case, we may say that you believe explicitly that the number of planets is 8 and only implicitly that the number of planets is less than 9, less than 10, etc. Of course, if swift derivability is the criterion, then although there may be a sharp line between explicit and implicit beliefs (depending on whether the representation is stored or not), there will not be a sharp line between what one believes implicitly and what, though derivable from one’s beliefs, one does not actually believe, since swiftness is a matter of degree (see Field 1978; Lycan 1986).
The representationalist may also grant the possibility of implicit belief, or belief without explicit representation, in cases of the following sort (discussed in Dennett 1978; Fodor 1987). A chess-playing computer is explicitly programmed with a large number of specific strategies, in consequence of which it almost always ends up trying to get its queen out early; but nowhere is there any explicitly programmed representation with the content “get the queen out early”, or any explicitly programmed representation from which “get the queen out early” is swiftly derivable. The pattern emerges as a product of various features of the hardware and software, despite its not being explicitly encoded. While most philosophers would not want to say that any currently existing chess-playing computer literally has the belief that it should get its queen out early, it is clear that an analogous possibility could arise in the human case and thus threaten representationalism, unless representationalism makes room for a kind of emergent, implicit belief that arises from more basic structural facts in this way. However, if the representationalist grants the presence of belief whenever there is a belief-like pattern of actual or potential behavior, regardless of underlying representational structure, then the position risks collapsing into dispositionalism or interpretationism. The issue of how to account for apparent cases of belief without explicit representation poses an underexplored challenge to representationalism (see Schwitzgebel forthcoming).
Empirical psychologists have drawn a contrast between implicit and explicit memory or knowledge, but this distinction does not map neatly onto the implicit/explicit belief distinction described in Section 2.2.1. In the psychologists’ sense, explicit memory involves the conscious recollection of previously presented information, while implicit memory involves the facilitation of a task or a change in performance as a result of previous exposure to information, without, or at least not as a result of, conscious recollection (Schacter 1987; Schacter and Tulving 1994; though see Squire 2004). For example, if a subject is asked to memorize a list of word pairs—bird/truck, stove/desk, etc.—and is then cued with one word and asked to provide the other, the subject’s explicit memory is being tested. If the subject is brought back two weeks later, and has no conscious recollection of most of the word pairs on the list, then they have no explicit memory of them. However, implicit memory of the word pairs would be revealed if they found it easier to learn the “forgotten” pairs a second time. Knowledge that is “implicit” in this sense will normally not be implicit in the sense of the previous subsection (if it were swiftly derivable from what one explicitly believes, presumably one could answer the test questions correctly); it’s also at least conceptually possible that some such psychologically implicit knowledge may be stored “explicitly” in the sense of the previous subsection.
A different empirical literature addresses the issue of “implicit attitudes”, for example implicit racism or sexism, which are often held to conflict with verbally or consciously espoused attitudes. Such implicit attitudes might be revealed by emotional reactions (e.g., more negative affect among White participants when assigned to a co-operative task with a Black person than with a White person) or by association or priming tasks (e.g., faster categorization responses when White participants are asked to pair negative words with dark-skinned faces and positive words with light-skinned faces than vice versa). However, it remains controversial to what extent tests of this sort reveal subjects’ (implicit) beliefs, as opposed to merely culturally-given associations or attitudes other than full-blown belief (Wilson, Lindsey, and Schooler 2000; Kihlstrom 2004; Lane et al. 2007; Hunter 2011; Tumulty 2014; Levy 2015; Machery 2016; Madva 2016; Zimmerman 2018; Brownstein, Madva, and Gawronski 2019). Gendler, for example, suggests that we regard such implicit attitudes as arational and automatic aliefs rather than genuine evidence-responsive beliefs (Gendler 2008a–b; for critique see Schwitzgebel 2010; Mandelbaum 2013).
Jessie believes that Stalin was originally a Tsarist mole among the Bolsheviks, that her son is at school, and that she is eating a tomato. She feels different degrees of confidence with respect to these different propositions. The first she recognizes to be a speculative historical conjecture; the second she takes for granted, though she knows it could be false; the third she regards as a near-certainty. Consequently, Jessie is more confident of the second proposition than the first and more confident of the third than the second. We might suppose that every subject holds each of their beliefs with some particular degree of confidence. In general, the greater the confidence one has in a proposition, the more willing one is to depend on it in one’s actions.
One common way of formalizing this idea is by means of a scale from 0 to 1, where 0 indicates absolute certainty in the falsity of a proposition, 1 indicates absolute certainty in its truth, and .5 indicates that the subject regards the proposition as just as likely to be true as false. This number then indicates one’s credence or degree of belief. Standard approaches equate degree of belief with the maximum amount the subject would, or alternatively should, be willing to wager on a bet that pays nothing if the proposition is false and 1 unit if the proposition is true. So, for example, if the subject thinks that the proposition “the restaurant is open” is three times more likely to be true than false, they should be willing to pay no more than $0.75 for a wager that pays nothing if the restaurant is closed and $1 if it is open. Consequently, the subject’s degree of belief is .75, or 75%. Such a formalized approach to degree of belief has proven useful in decision theory, game theory, and economics. Standard philosophical treatments of this topic include Jeffrey (1983) and Skyrms (2000).
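The betting arithmetic behind the restaurant example can be sketched in a few lines of code. This is only an illustration of the standard definition above; the function names are invented for the sketch and come from no particular source.

```python
def credence_from_odds(for_parts: float, against_parts: float) -> float:
    """Convert odds in favor of a proposition (e.g., 3:1) into a
    degree of belief on the 0-to-1 scale."""
    return for_parts / (for_parts + against_parts)

def fair_price(credence: float, payout: float = 1.0) -> float:
    """The maximum stake a bettor with this credence should pay for a
    bet that returns `payout` if the proposition is true and nothing
    otherwise. At exactly this price the bet's expected value is zero:
    credence * payout - price == 0."""
    return credence * payout

# "The restaurant is open" judged three times more likely than not:
p_open = credence_from_odds(3, 1)      # degree of belief 0.75
stake = fair_price(p_open, payout=1.0) # at most $0.75 for a $1 payout
print(p_open, stake)
```

At any price below the fair one the bet has positive expected value for the bettor, and above it negative, which is why the fair price is taken to measure the degree of belief.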
However, the phrase “degree of belief” may be misleading, because the relationship between confidence, betting behavior, and belief is not straightforward. The dispositionalist or interpretationist, for example, might regard exhibitions of confidence and attitudes toward risk as only part of the overall pattern underwriting belief ascription. Similarly, the representationalist might hold that readiness to deploy a representation in belief-like ways need not line up perfectly with betting behavior. Some people also find it intuitive to say that a rational person holding a ticket in a fair lottery may not actually believe that they will lose, but instead regard it as an open question, despite having a “degree of belief” of, say, .9999 that they will lose. If this person genuinely believes some other propositions, such as that their son is at school, with a “degree of belief” considerably less than .9999, then it appears to follow that a rational person may in some cases have a higher “degree of belief” in a proposition that they do not believe than in a proposition they do believe (see Harman 1986; Sturgeon 2008; Buchak 2014; Leitgeb 2017; Friedman 2019). This suggests a dissociation between belief and credence, raising the question of whether they are distinct attitudes, and if so whether one is more fundamental (see Jackson 2020 for a review).
Relatedly, Neil Van Leeuwen has argued for a functional distinction between “factual belief” and “religious credence” (Van Leeuwen 2014) or, alternatively, “mundane” and “groupish” belief (Van Leeuwen forthcoming). The first type of belief guides mundane action and tends to do so successfully when the belief is true. Typical contents might be: the light switch is to the left; class is at 2 p.m. The second type of belief, in contrast, is connected with group identity and works well if it effectively signals group membership, regardless of truth. Typical contents might be: God is a Trinity; Earth is flat. If Van Leeuwen is correct, mundane and groupish beliefs are sufficiently different in their causes and effects, or their functional roles, as to be worth distinguishing as distinct types of attitude.
Philosophers have sometimes drawn a distinction between acceptance and belief. Generally speaking, acceptance is held to be more under the voluntary control of the subject than belief and more directly tied to a particular practical action in a context. For example, a scientist, faced with evidence supporting a theory, evidence acknowledged not to be completely decisive, may choose to accept the theory or not to accept it. If the theory is accepted, the scientist ceases inquiring into its truth and becomes willing to ground their own research and interpretations in that theory; the contrary if the theory is not accepted. If one is about to use a ladder to climb to a height, one may check the stability of the ladder in various ways. At some point, one accepts that the ladder is stable and climbs it. In both of these examples, acceptance involves a decision to cease inquiry and to act as though the matter is settled. This does not, of course, rule out the possibility of re-opening the question if new evidence comes to light or new risks arise.
The distinction between acceptance and belief can be supported by appeal to cases in which one accepts a proposition without believing it and cases in which one believes a proposition without accepting it. Van Fraassen (1980) has argued that the former attitude is common in science: the scientist often does not think that some particular theory on which their work depends is the literal truth, and thus does not believe it, but nonetheless they accept it as an adequate basis for research. The ladder case, due to Bratman (1999), may involve belief without acceptance: One may genuinely believe, even before checking it, that the ladder is stable, but because so much depends on it and because it is good general policy, one nonetheless does not accept that the ladder is stable until one has checked it more carefully.
Important discussions of acceptance include van Fraassen (1980), Harman (1986), Cohen (1989, 1992), Lehrer (1990), Bratman (1999), Velleman (2000), and Frankish (2004).
The traditional analysis of knowledge, brought into contemporary discussion (and famously criticized) by Gettier (1963), takes propositional knowledge to be a species of belief—specifically, justified true belief. Most contemporary treatments of knowledge are modifications or qualifications of the traditional analysis and consequently also treat knowledge as a species of belief. (For a detailed treatment of this topic see the entry on the analysis of knowledge. For critique of the view that propositional knowledge entails belief, see Radford 1966; Murray, Sytsma, and Livengood 2013; Myers-Schulz and Schwitzgebel 2013.)
There may also be types of knowledge that are not types of belief, though they have received less attention from epistemologists. Ryle (1949), for example, emphasizes the distinction between knowing how to do something (e.g., ride a bicycle) and knowing that some particular proposition is true (e.g., that Seoul is the capital of Korea). In contemporary psychology, a similar distinction is sometimes drawn between procedural and declarative knowledge (see Squire 1987; Schacter, Wagner, and Buckner 2000; also the entry on memory). Although knowledge-that or declarative knowledge may plausibly be a kind of belief, it is not easy to see how procedural knowledge or knowledge-how could be so, unless one holds that people have a myriad of beliefs about minute and non-obvious procedural details. At least, there is no readily apparent relation between knowledge-how and “belief-how” that runs parallel to the relation epistemologists generally accept between knowledge-that and belief-that. (For an influential attempt to subsume knowledge-how under knowledge-that, see Stanley and Williamson 2001; Stanley 2011.)
The standard reference text in psychiatry, the Diagnostic and Statistical Manual of Mental Disorders (DSM-5-TR, 2022), characterizes delusions (e.g., persecutory delusions, delusions of grandiosity) as beliefs. However, delusions often do not appear to connect with behavior in the usual way. For example, a victim of Capgras delusion—a delusion in which the subject asserts that a family member or close friend has been replaced by an identical-looking imposter—may continue to live with the “imposter” and make little effort to find the supposedly missing loved one. Some philosophers have therefore suggested that delusions do not occupy quite the functional role characteristic of belief and thus are not, in fact, beliefs (e.g., Currie 2000; Stephens and Graham 2004; Gallagher 2009; Matthews 2013). Others have defended the view that delusions are beliefs (e.g., Campbell 2001; Bayne and Pacherie 2005; Bortolotti 2010, 2012) or in-between cases, with some features of belief but not other features (e.g., Egan 2009; Tumulty 2011). See the entry on delusion, especially §4.2 Are Delusions Beliefs?
Philosophers generally say that the belief that P has the (propositional) content P. A variety of issues arise about how to characterize those contents and what determines them.
The standard view that the contents of beliefs are propositions gives rise to a debate about belief contents parallel to, and closely related to, a debate about the metaphysics of propositions. One standard view of propositions takes propositions to be sets of possible worlds; another takes propositions to have something more closely resembling a linguistic logical structure (see the entry on structured propositions for a detailed exposition of this issue).
Stalnaker (1984) endorses the possible-worlds view of propositions and imports it directly into his discussion of belief content: He contends that the content of a belief is specified by the set of “possible worlds” at which that belief is true (see Lewis 1979 for a similar approach). The structure of belief content is thus the structure of set theory. Among the advantages Stalnaker claims for this view is its smooth accommodation of gradual change and of what might, from the point of view of a discrete linguistic structure, be seen as problematically indeterminate belief contents. Developing an example from Dennett (1969), he describes the gradual transition from a child’s learning to say (without really understanding) that “Daddy is a doctor” to having a full, adult appreciation of the fact that their father is a doctor. At some point, Stalnaker suggests, it’s best to say that the child “sort of” or “half” believes the proposition in question. It’s not clear how to characterize such gradual shifts by means of a linguistic or quasi-linguistic propositional structure (1984, pp. 64–65; see also Schwitzgebel 2001). On Stalnaker’s view, the child’s half-belief is handled by attributing to the child the capacity to rule out some but not all of the possibilities incompatible with Daddy’s being a doctor: As their knowledge grows, so does their sense of the excluded possibilities.
The possible-worlds approach to belief content is sometimes referred to as a “coarse-grained” approach because it implies that any two beliefs that would be true in exactly the same set of possible worlds have the same content—as opposed to a “fine-grained” approach on which beliefs that would be true at exactly the same set of possible worlds may nonetheless differ in content. The difference between these two approaches is brought out most starkly by considering mathematical propositions. On standard accounts of possibility, all mathematically true propositions are true in exactly the same set of possible worlds—every world. It seems to follow, on the coarse-grained view, that the belief that 1 + 1 = 2 has exactly the same content as the belief that the cosine of 0 is 1, and thus that anyone who believes (or fails to believe) the one accordingly believes (or fails to believe) the other. And that seems absurd.
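The coarse-grained view, and the problematic consequence just described, can be stated compactly in set-theoretic notation. (The notation below is ours, for illustration only; Stalnaker does not present the point this way.) Write W for the set of all possible worlds and ⟦P⟧ for the set of worlds at which P is true:

```latex
% Content of a belief, on the coarse-grained possible-worlds view:
% the set of worlds at which the believed proposition is true.
[\![P]\!] \;=\; \{\, w \in W : P \text{ is true at } w \,\}

% Mathematical truths hold at every possible world,
% so their contents collapse into one:
[\![\,1 + 1 = 2\,]\!] \;=\; [\![\,\cos 0 = 1\,]\!] \;=\; W

% Hence, if belief content just is this set, believing the one
% proposition amounts to believing the other:
\mathrm{Bel}(S,\; 1 + 1 = 2) \;\longleftrightarrow\; \mathrm{Bel}(S,\; \cos 0 = 1)
```

The absurdity noted above is the final line: on the coarse-grained view, no subject can believe one mathematical truth while failing to believe another.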
Stalnaker attempts to escape this difficulty by characterizing mathematical belief as belief about sentences: The belief that the sentence “1 + 1 = 2” expresses a truth and the belief that the sentence “the cosine of 0 is 1” expresses a truth have different contents and may differ in truth value between possible worlds (due simply to possible variations in the meanings of terms, if nothing else). However, it’s probably fair to say that few philosophers follow Stalnaker in this view (see discussion in Robbins 2004; and Rayo 2013 for a recent view similar to Stalnaker’s). The apparent difficulty of sustaining such a view of belief is often held to reflect badly on the coarse-grained possible-worlds view of propositions in general, since it’s generally thought that one of the principal metaphysical functions of propositions is to serve as the contents of belief and other “propositional attitudes” (e.g., Field 1978; Soames 1987).
Ani believes that salmon are fish; not knowing that whales are mammals, she also believes that whales are fish. Sanjay, like Ani, believes that salmon are fish, but he denies that whales are fish. Do Ani and Sanjay share exactly the same belief about salmon—namely, that they are fish—or is the content of their belief somehow subtly different in virtue of their different attitudes toward whales? With certain caveats, the atomist will say the former, the holist the latter. In general, the atomist holds that the content of one’s beliefs does not depend in any general way on one’s related beliefs (though it may depend on the contents of a few specially related beliefs, such as definitions) and thus, consequently, that people who sincerely and comprehendingly accept the same sentence normally have exactly the same belief. Holism is the contrary view that the content of every belief depends to a large degree on a broad range of one’s related beliefs, and consequently that two people will rarely believe exactly the same thing.
Holism may be defended by a slippery-slope argument. It seems that we can imagine Sanjay’s and Ani’s beliefs about the nature of fish and the members of the class of fish slowly diverging. At some point, it will seem plainly correct to say that even though they may both say “salmon are fish”, they are not expressing the same belief by that sentence. As an extreme case, we might imagine Ani to be so benighted as to hold that to be a “fish” is neither more nor less than to be an Earthly animal in regular contact with Martians, and that only salmon, whales, leopards, and banana slugs are in such contact. But if we deny, in the extreme case, that Ani and Sanjay share the same belief, expressed by the sentence “salmon are fish”, it seems artificial to draw a sharp line anywhere in the progression of divergence, on one side of which they share exactly the same belief about salmon and on the other side of which they have divergent beliefs. One is thus led to the conclusion that similarity in belief is a matter of degree, and it may then be difficult to avoid accepting that even a relatively small divergence in surrounding beliefs may be sufficient to generate subtle differences between two beliefs expressed in the same words. Similar slippery-slope arguments can be constructed that emphasize gradual belief change in concept acquisition (“Leibniz was a metaphysician” agreed to before and after learning philosophy) or gradual change in surrounding theory or in the meaning of a term (“electrons have orbits” as uttered by Niels Bohr in 1913 and as uttered by Richard Feynman in 1980). (This argument is similar in some ways to Stalnaker’s argument for a possible-worlds analysis of the propositional contents of belief—see §3.1 above—and indeed Stalnaker takes himself, there, to be committed to holism.)
Dispositional and interpretational approaches to belief tend to be holist. On these views, recall, to believe is to be disposed to exhibit patterns of behavior interpretable or classifiable by means of various belief attributions (see §1.2 and §1.3 above). It is plausible to suppose that a subject’s match to the relevant patterns will generally be a matter of degree. There may be few actual cases in which two subjects exactly match in their behavioral patterns regarding P, even if it gets matters approximately right to attribute to each of them the belief that P. Since behavioral dispositions are interlaced in a complex way, divergence in any of a variety of attitudes related to P may be sufficient to ensure divergence in the patterns relevant to P itself. As Ani’s associated beliefs grow stranger, her overall behavioral pattern, or dispositional structure, begins to look less and less like one that we would associate with believing that salmon are fish.
It is sometimes objected to holism that, intuitively, both Shakespeare and contemporary physicians believe that blood is red, while on the holist view it is hard to see how their beliefs could even be similar, given that they have so many different surrounding beliefs about both blood and redness. Although in principle a holist could respond to this objection by describing what sorts of differences in surrounding belief create only minor divergences and what differences create major ones, there have been no influential attempts at such a project.
Holism appears to be incompatible with a certain variety of representationalism about belief. If beliefs, or the representations underlying them, are stored symbols in the mind, somewhat like sentences on a chalkboard or objects in a box (to use standard Fodorian metaphors), then it is natural to suppose that those beliefs can, in principle, exist independently of each other. Whether one believes P depends on whether a representation with the content “P” is present in the right sort of way in the mind, which would not seem to be directly affected by whether Q or not-Q, or R or not-R, is also represented. If there is, in addition, an innate language of thought of the sort advocated by Fodor and others, then the basic terms of that language may also be exactly the same from person to person. If a view of this sort about the mind can be sufficiently well supported, holism would have to be rejected. Conversely, if holism is plausible, it cuts against the more atomistic forms of representationalism.
Fodor and Lepore (1992) contains an excellent if dated review and critique of arguments for holism. The foremost defenders of holism are probably Quine (1951) and Davidson (1984).
Quine (1956) introduced contemporary philosophy of mind to the distinction between de re and de dicto belief attributions by means of examples like the following. Ralph sees a suspicious-looking man in a trenchcoat and concludes that that man is a spy. Unbeknownst to him, however, the man in the trenchcoat is the newly elected mayor, Bernard J. Ortcutt, and Ralph would sincerely deny the claim that “the mayor is a spy”. So does Ralph believe that the mayor is a spy? There appears to be a sense in which he does and a sense in which he does not. Philosophers have attempted to characterize the difference between these two senses by saying that Ralph believes de re, of that man (the man in the trenchcoat who happens also to be the mayor), that “he is a spy”, while he does not believe de dicto that “the mayor is a spy”.
The standard test for distinguishing de re from de dicto attributions is referential transparency or opacity. A sentence, or more accurately a position in a sentence, is held to be referentially transparent if terms or phrases in that position that refer to the same object can be freely substituted without altering the truth of the sentence. The (non-belief-attributing) sentence “Jill kicked X” is naturally read as referentially transparent in this sense. If “Jill kicked the ball” is true, then so also is any sentence in which “the ball” is replaced by a term or phrase that refers to that same ball, e.g., “Jill kicked Davy’s favorite birthday present”, “Jill kicked the thing we bought at Walmart on August 26”. Sentences, or positions, are referentially opaque just in case they are not transparent, that is, if the substitution of co-referring terms or phrases could potentially alter their truth value. De dicto belief attribution is held to be referentially opaque in this sense. On the de dicto reading of belief, “Ralph believes that the man in the trenchcoat is a spy” may be true while “Ralph believes that the mayor is a spy” is false. Likewise, on a de dicto reading, “Lois Lane believes that Superman is strong” may be true while “Lois believes that Clark Kent is strong” is false, even if Superman and Clark Kent are, unbeknownst to Lois, one and the same person. (Regarding the Lois example, however, see also §3.5, on Frege’s Puzzle, below.)
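The substitution test can be put schematically. (Again, the notation here is ours, for illustration.) Let φ(X) be a sentence frame with a position X; that position is referentially transparent just in case co-referring terms can be exchanged salva veritate:

```latex
% Referential transparency: substituting co-referring terms
% never changes the truth value of the containing sentence.
a = b \ \text{ and } \ \varphi(a) \text{ is true}
  \;\Longrightarrow\; \varphi(b) \text{ is true}

% De dicto belief frames fail this test. With the frame
%   \varphi(X) = \text{``Lois believes that } X \text{ is strong''},
% we may have Superman = Clark Kent and \varphi(\mathrm{Superman}) true
% while \varphi(\mathrm{Clark\ Kent}) is false.
```

The “Jill kicked X” frame passes the test, which is why it counts as transparent; the de dicto belief frame provides a counterexample to the implication and so counts as opaque.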
In some contexts, the liberal substitution of co-referential terms or phrases seems permissible in ascribing belief. Shifting examples, suppose Davy is a preschooler who has just met a new teacher, Mrs. Sanchez, who is Mexican, and he finds her too strict. Davy’s mother, in reporting this fact to his father, might say “Davy thinks Mrs. Sanchez is too strict” or “Davy thinks the new Mexican teacher is too strict”, even though Davy does not know the teacher’s name or that she is Mexican. Similarly, if Ralph eventually discovers that the man in the trenchcoat was Ortcutt, he might, in recounting the incident to his friends later, laughingly say, “For a moment, I thought the mayor was a spy!” or “For a moment, I thought Ortcutt was a spy”. In a de re mood, then, we can say that Davy believes, of X, that she is too strict and Ralph believes, of Y, that he is a spy, where X is replaced by any term or phrase that picks out Mrs. Sanchez and Y is replaced by any term or phrase that picks out Ortcutt—though of course, depending on the situation, pragmatic considerations will favor the use of some terms or phrases over others. In a strict de re sense, perhaps we can even say that Lois believes, of Clark Kent, that he is strong (though she may also simultaneously believe of him that he is not strong).
The standard view, then, takes belief-attributing sentences to be systematically ambiguous between a referentially opaque, de dicto structure and a referentially transparent, de re structure. Sometimes this view is conjoined with the view that de re but not de dicto belief requires some kind of direct acquaintance with the object of belief.
The majority of the literature on the de re / de dicto distinction since at least the 1980s has challenged this standard view in one way or another. The challenges are sufficiently diverse that they resist brief classification, except perhaps to remark that a number of them invoke pragmatics or conversational context, instead of an ambiguity in the term “belief” or in the structure of belief ascriptions, to explain the fact that it seems in some way appropriate and in some way inappropriate to say that Ralph believes the mayor is a spy.
Among the more important discussions of the de re / de dicto distinction are Quine (1956), Kaplan (1968), Burge (1977), Lewis (1979), Stich (1983), Dennett (1987), Crimmins (1992), Brandom (1994), Jeshion (2002), Taylor (2002), and Keshet (2010). See also the supplement on the De Re / De Dicto Distinction in the entry on propositional attitude reports.
A number of philosophers have suggested that the content of one’s beliefs depends entirely on things going on inside one’s head, and not at all on the external world, except via the effects of the latter on one’s brain. Consequently, if a genius neuroscientist were to create a molecule-for-molecule duplicate of your brain and maintain it in a vat, stimulating it artificially so that it underwent exactly the same sequence of electrical and chemical events as your actual brain, that brain would have exactly the same beliefs as you. Those who accept this position are internalists about belief content. Those who reject it are externalists.
Several arguments against internalism have prompted considerable debate in philosophy of mind. Here is a condensed version of one argument, due to Putnam (1975; though it should be said that Putnam’s original emphasis was on linguistic meaning, not on belief). Suppose that in 1750, in a far-off region of the universe, there existed a planet that was physically identical to Earth, molecule-for-molecule, in every respect but one: Where Earth had water, composed of H2O, Twin Earth had something else instead, “twater”, coming down as rain and filling streams, behaving identically to water by all the chemical tests then available, but having a different atomic formula, XYZ. Intuitively, it seems that the inhabitants of Earth in 1750 would have beliefs about water and no beliefs about twater, while the inhabitants of Twin Earth would have beliefs about twater and no beliefs about water. By hypothesis, however, each inhabitant of Earth will have a molecularly identical counterpart on Twin Earth with exactly the same brain structures (except, of course, that their brains will contain XYZ instead of H2O, but reflection on analogous examples regarding chemicals not contained in the brain suggests that this fact is irrelevant). Consequently, the argument goes, the contents of one’s beliefs do not depend entirely on internal properties of one’s brain.
For further detail on the debate between internalists and externalists, see the entries on externalism about the mind and narrow mental content.
Recall that in the de dicto sense (see §3.3 above) it seemed plausible to say that Lois Lane, who does not know that Clark Kent is Superman, believes that Superman is strong but does not believe that Clark Kent is strong. Despite the intuitive appeal of this view, some widely accepted “Russellian” views in the philosophy of language appear committed to attributing to Lois exactly the same beliefs about Clark Kent as she has about Superman. On such views, the semantic content of a name, or the contribution it makes to the meaning or truth conditions of a sentence, depends only on the individual picked out by that name. Since the names “Superman” and “Clark Kent” pick out the same individual, it follows that the sentence “Lois believes Superman is strong” could not have a different meaning or truth value from the sentence “Lois believes Clark Kent is strong”. Philosophers of language have discussed this issue, known as “Frege’s Puzzle”, extensively since the 1970s. Although the issues here arise for all the propositional attitudes (at least), generally the puzzle is framed and discussed in terms of belief. See the entry on propositional attitude reports.
Some philosophers have argued that beings without language, notably human infants and non-human animals, cannot have beliefs. The most influential case for this view has been Davidson’s (1982, 1984; see also Heil 1992). Three primary arguments in favor of the necessity of language for belief can be extracted from Davidson.
The first starts from the observation that if we are to ascribe a belief to a being without language—a dog, say, who is barking up a tree into which a squirrel has just run—we must ascribe a belief with some particular content. At first blush, it seems natural to say that, in the case described, the dog believes that the squirrel is in the tree. However, on reflection, that attribution may seem to be not quite right. The dog does not really have the concept of a squirrel or a tree in the human sense. The dog may not know, for instance, that trees have roots and require water to grow. Consequently, according to Davidson, it is not really accurate to say that the dog believes that the squirrel is in the tree (at least in the de dicto sense: see §3.3 above). However, Davidson argues, neither does the dog have any other particular belief. Embracing holism (see §3.2 above), Davidson asserts that for a creature to have a belief with a specific content, that belief must be embedded in a rich network of other beliefs with specific contents, but a dog’s cognitive life is not complex enough to support such a network. “Belief” talk thus cannot get traction (cf. Dennett 1969; Stich 1979, 1983).
Several philosophers (e.g., Routley 1981; Smith 1982; Allen 1992; Glock 2010) have objected to this argument on the grounds that the dog’s cognition about things such as trees, while perhaps not much like ours, is nonetheless relatively rich, involving a number of elements relatively neglected by us, such as their scent and their use in marking territory. The dog’s understanding of a tree may be at least as rich as the human understanding of some objects about which we seem to have beliefs. For example, it seems that a chemically untrained person may believe that boron is a chemical element without knowing very much about boron apart from that fact. Since we have no language for doggy concepts, our belief ascriptions to dogs can only be approximate—but if one accepts holism, then belief ascription to other human beings may be similarly approximate.
Davidson also argues that to have a belief one must have the concept of belief, which involves the ability to recognize that beliefs can be false or that there is a mind-independent reality beyond one’s beliefs; and that these abilities require language. However, Davidson offers little explicit support for this claim. Furthermore, many developmental psychologists have suggested that children do not understand the appearance-reality distinction and do not recognize that beliefs can be false until they are at least three years old (Perner 1991; Wellman, Cross, and Watson 2001; though see Southgate, Senju, and Csibra 2007; Scott and Baillargeon 2017). Davidson’s view thus requires him either to reject this empirical thesis or to embrace the seemingly implausible view that two-year-olds have no beliefs (see also Andrews 2002).
The view that belief requires language is a natural consequence of the view that belief attribution is inextricably intertwined with the interpretation of a subject’s linguistic utterances. Davidson, as described above (§1.3), argues that the interpretation of a creature’s beliefs, desires, and language must come together as a package. This provides a third Davidsonian reason for rejecting belief without language (a reason that, however, remains largely implicit in Davidson): Creatures without language are missing part of what is essential to a behavioral pattern of the sort that can underwrite proper belief ascription (and recall that on an interpretational view, all there is to having a belief is having a pattern of behavior that is interpretable in that way by an outside observer). Any view that ties belief attribution and the subject’s language as closely together as Davidson’s does—Sellars (1956, 1969), Brandom (1994), and Wettstein (2004) also offer views of this sort—will have difficulty accommodating the possibility of belief in creatures without language. Thus, whatever draws us to such views will also provide reason to deny belief (or at least robust, full-blown belief) to languageless creatures.
Positive arguments for attributing beliefs to (at least) human infants and non-linguistic mammals have tended to focus on the general biological and behavioral similarity between adult human beings, human infants, and non-human mammals; the naturalness of describing the behavior of infants and non-linguistic mammals in terms of their beliefs and desires; and the difficulty of usefully characterizing their mental lives without relying on the ascription of propositional attitudes (e.g., Routley 1981; Marcus 1995; Allen and Bekoff 1997; Zimmerman 2018; Curry forthcoming).
animal: cognition | behaviorism | belief, ethics of | belief, formal representations of | bias, implicit | cognitive science | compositionality | connectionism | consciousness | consciousness: and intentionality | Davidson, Donald | delusion | desire | dispositions | externalism about the mind | fictionalism | folk psychology: as a theory | functionalism | intentionality | intentionality: phenomenal | knowledge: analysis of | language of thought hypothesis | logic: of belief revision | materialism: eliminative | meaning: normativity of | memory | mental causation | mental content: causal theories of | mental content: narrow | mental content: nonconceptual | mental content: teleological theories of | mental representation | mind: computational theory of | physicalism | propositional attitude reports | propositions | propositions: singular | propositions: structured
The Stanford Encyclopedia of Philosophy is copyright © 2024 by The Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054