
The Computational Theory of Mind

First published Fri Oct 16, 2015; substantive revision Wed Dec 18, 2024

Could a machine think? Could the mind itself be a thinking machine? The computer revolution transformed discussion of these questions, offering our best prospects yet for machines that emulate reasoning, decision-making, problem solving, perception, linguistic comprehension, and other mental processes. Advances in computing raise the prospect that the mind itself is a computational system—a position known as the computational theory of mind (CTM). Computationalists are researchers who endorse CTM, at least as applied to certain important mental processes. CTM played a central role within cognitive science during the 1960s and 1970s. For many years, it enjoyed orthodox status. More recently, it has come under pressure from various rival paradigms. A key task facing computationalists is to explain what one means when one says that the mind “computes”. A second task is to argue that the mind “computes” in the relevant sense. A third task is to elucidate how computational description relates to other common types of description, especially neurophysiological description (which cites neurophysiological properties of the organism’s brain or body) and intentional description (which cites representational properties of mental states).


1. Turing machines

The intuitive notions of computation and algorithm are central to mathematics. Roughly speaking, an algorithm is an explicit, step-by-step procedure for answering some question or solving some problem. An algorithm provides routine mechanical instructions dictating how to proceed at each step. Obeying the instructions requires no special ingenuity or creativity. For example, the familiar grade-school algorithms describe how to compute addition, multiplication, and division. Until the early twentieth century, mathematicians relied upon informal notions of computation and algorithm without attempting anything like a formal analysis. Developments in the foundations of mathematics eventually impelled logicians to pursue a more systematic treatment. Alan Turing’s landmark paper “On Computable Numbers, With an Application to the Entscheidungsproblem” (Turing 1936) offered the analysis that has proved most influential.

A Turing machine is an abstract model of an idealized computing device with unlimited time and storage space at its disposal. The device manipulates symbols, much as a human computing agent manipulates pencil marks on paper during arithmetical computation. Turing says very little about the nature of symbols. He assumes that primitive symbols are drawn from a finite alphabet. He also assumes that symbols can be inscribed or erased at “memory locations”. Turing’s model works as follows:

  • There are infinitely many memory locations, arrayed in a linear structure. Metaphorically, these memory locations are “cells” on an infinitely long “paper tape”. More literally, the memory locations might be physically realized in various media (e.g., silicon chips).
  • There is a central processor, which can access one memory location at a time. Metaphorically, the central processor is a “scanner” that moves along the paper tape one “cell” at a time.
  • The central processor can enter into finitely many machine states.
  • The central processor can perform four elementary operations: write a symbol at a memory location; erase a symbol from a memory location; access the next memory location in the linear array (“move to the right on the tape”); access the previous memory location in the linear array (“move to the left on the tape”).
  • Which elementary operation the central processor performs depends entirely upon two facts: which symbol is currently inscribed at the present memory location; and the scanner’s own current machine state.
  • A machine table dictates which elementary operation the central processor performs, given its current machine state and the symbol it is currently accessing. The machine table also dictates how the central processor’s machine state changes given those same factors. Thus, the machine table enshrines a finite set of routine mechanical instructions governing computation.

Turing translates this informal description into a rigorous mathematical model. For more details, see the entry on Turing machines.
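
To make the machine table idea concrete, here is a minimal sketch in Python (an illustration, not Turing’s exact formalism; the example table, which inverts a binary string and then halts, is invented for the purpose):

```python
# A minimal Turing machine sketch. The machine table maps
# (state, symbol) pairs to (new symbol, head movement, new state).
from collections import defaultdict

# Hypothetical example table: invert a binary string, then halt.
TABLE = {
    ("scan", "0"): ("1", +1, "scan"),   # write 1, move right
    ("scan", "1"): ("0", +1, "scan"),   # write 0, move right
    ("scan", " "): (" ",  0, "halt"),   # blank cell: stop
}

def run(tape_string, state="scan"):
    # A dict models the unbounded tape: unwritten locations are blank.
    tape = defaultdict(lambda: " ", enumerate(tape_string))
    head = 0
    while state != "halt":
        symbol, move, state = TABLE[(state, tape[head])]
        tape[head] = symbol   # write at the current memory location
        head += move          # move the scanner left or right
    return "".join(tape[i] for i in sorted(tape)).strip()

print(run("0110"))  # -> "1001"
```

The four elementary operations and the finite machine table appear directly in the sketch; a universal Turing machine differs mainly in taking a coded machine table as part of its symbolic input.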

Turing motivates his approach by reflecting on idealized human computing agents. Citing finitary limits on our perceptual and cognitive apparatus, he argues that any symbolic algorithm executed by a human can be replicated by a suitable Turing machine. He concludes that the Turing machine formalism, despite its extreme simplicity, is powerful enough to capture all humanly executable mechanical procedures over symbolic configurations. Subsequent discussants have almost universally agreed.

Turing computation is often described as digital rather than analog. What this means is not always so clear, but the basic idea is usually that computation operates over discrete configurations. By comparison, many historically important algorithms operate over continuously variable configurations. For example, Euclidean geometry assigns a large role to ruler-and-compass constructions, which manipulate geometric shapes. For any shape, one can find another that differs to an arbitrarily small extent. Symbolic configurations manipulated by a Turing machine do not differ to arbitrarily small extent. Turing machines operate over discrete strings of elements (digits) drawn from a finite alphabet. One recurring controversy concerns whether the digital paradigm is well-suited to model mental activity or whether an analog paradigm would instead be more fitting (MacLennan 2012; Piccinini and Bahar 2013).[1]

Besides introducing Turing machines, Turing (1936) proved several seminal mathematical results involving them. In particular, he proved the existence of a universal Turing machine (UTM). Roughly speaking, a UTM is a Turing machine that can mimic any other Turing machine. One provides the UTM with a symbolic input that codes the machine table for Turing machine M. The UTM replicates M’s behavior, executing instructions enshrined by M’s machine table. In that sense, the UTM is a programmable general purpose computer. To a first approximation, all personal computers are also general purpose: they can mimic any Turing machine, when suitably programmed. The main caveat is that physical computers have finite memory, whereas a Turing machine has unlimited memory. More accurately, then, a personal computer can mimic any Turing machine until it exhausts its limited memory supply.

Turing’s discussion helped lay the foundations for computer science, which seeks to design, build, and understand computing systems. As we know, computer scientists can now build extremely sophisticated computing machines. All these machines implement something resembling Turing computation, although the details differ from Turing’s simplified model.

2. Artificial intelligence

Rapid progress in computer science prompted many, including Turing, to contemplate whether we could build a computer capable of thought. Artificial intelligence (AI) aims to construct “thinking machinery”. More precisely, it aims to construct computing machines that execute core mental tasks such as reasoning, decision-making, problem solving, and so on. During the 1950s and 1960s, this goal came to seem increasingly realistic (Haugeland 1985). A famous early success was the Logic Theorist computer program (Newell and Simon 1956), which proved 38 of the first 52 theorems from Principia Mathematica (Whitehead and Russell 1925). In one case, it discovered a simpler proof than Principia’s. Initial achievements of this kind stimulated enormous interest inside and outside the academy. Many researchers predicted that intelligent machines were only a few years away. When confident predictions of thinking machines proved too optimistic, many observers lost interest or concluded that AI was a fool’s errand. Nevertheless, the decades have witnessed gradual progress, including some striking recent advances. A few milestones:

  • IBM’s Deep Blue defeated chess champion Garry Kasparov in 1997 (Campbell 1999).
  • The driverless car Stanley completed a 132-mile course in the Mojave Desert, winning the 2005 Defense Advanced Research Projects Agency (DARPA) Grand Challenge (Thrun, Montemerlo, Dahlkamp, et al. 2006).
  • In 2012, AlexNet dramatically surpassed all previous computational models in a standard image classification task (Krizhevsky, Sutskever, and Hinton 2012).
  • DeepMind’s AlphaGo defeated Lee Sedol, one of the top Go players in the world, in 2016 (Silver, Schrittwieser, Simonyan, et al. 2017).
  • In 2020, OpenAI released GPT-3, which generates uncannily human-like text in response to written prompts (Brown, Mann, Ryder, et al. 2020). An improved version, ChatGPT, was released in 2022 and attracted widespread societal attention.

These and other recent advances have sparked intense renewed focus upon AI, including numerous commercial applications.

Some philosophers insist that computers, no matter how sophisticated they become, will at best mimic rather than replicate thought. A computer simulation of the weather does not really rain. A computer simulation of flight does not really fly. Even if a computing system could simulate mental activity, why suspect that it would constitute the genuine article?

Turing (1950) anticipated these worries and tried to defuse them. He proposed a scenario, now called the Turing Test, where one evaluates whether an unseen interlocutor is a computer or a human. A computer passes the Turing test if one cannot determine that it is a computer. Turing proposed that we abandon the question “Could a computer think?” as hopelessly vague, replacing it with the question “Could a computer pass the Turing test?”. Turing’s discussion has received considerable attention, proving especially influential within AI. Ned Block (1981) offers an influential critique. He argues that certain possible machines pass the Turing test even though these machines do not come close to genuine thought or intelligence. See the entry the Turing test for discussion of Block’s objection and other issues surrounding the Turing Test. For discussion of the Turing test in relation to ChatGPT and similar models, see (Bayne and Williams 2023; Floridi and Chiriatti 2020; Mahowald, Ivanova, Blank, et al. 2024).

For more on AI, see the entry logic and artificial intelligence. For much more detail, see Russell and Norvig (2022).

3. The classical computational theory of mind

Warren McCulloch and Walter Pitts (1943) first suggested that something resembling the Turing machine might provide a good model for the mind. In the 1960s, Turing computation became central to the emerging interdisciplinary initiative cognitive science, which studies the mind by drawing upon psychology, computer science (especially AI), linguistics, philosophy, economics (especially game theory and behavioral economics), anthropology, and neuroscience. The label classical computational theory of mind (which we will abbreviate as CCTM) is now fairly standard. According to CCTM, the mind is a computational system similar in important respects to a Turing machine, and core mental processes (e.g., reasoning, decision-making, and problem solving) are computations similar in important respects to computations executed by a Turing machine. These formulations are imprecise. CCTM is best seen as a family of views, rather than a single well-defined view.[2]

It is common to describe CCTM as embodying “the computer metaphor”. This description is doubly misleading.

First, CCTM is better formulated by describing the mind as a “computing system” or a “computational system” rather than a “computer”. As David Chalmers (2011) notes, describing a system as a “computer” strongly suggests that the system is programmable. As Chalmers also notes, one need not claim that the mind is programmable simply because one regards it as a Turing-style computational system. (Most Turing machines are not programmable.) Thus, the phrase “computer metaphor” strongly suggests theoretical commitments that are inessential to CCTM. The point here is not just terminological. Critics of CCTM often object that the mind is not a programmable general purpose computer (Churchland, Koch, and Sejnowski 1990). Since classical computationalists need not claim (and usually do not claim) that the mind is a programmable general purpose computer, the objection is misdirected.

Second, CCTM is not intended metaphorically. CCTM does not simply hold that the mind is like a computing system. CCTM holds that the mind literally is a computing system. Of course, the most familiar artificial computing systems are made from silicon chips or similar materials, whereas the human body is made from flesh and blood. But CCTM holds that this difference disguises a more fundamental similarity, which we can capture through a Turing-style computational model. In offering such a model, we prescind from physical details. We attain an abstract computational description that could be physically implemented in diverse ways (e.g., through silicon chips, or neurons, or pulleys and levers). CCTM holds that a suitable abstract computational model offers a literally true description of core mental processes.

It is common to summarize CCTM through the slogan “the mind is a Turing machine”. This slogan is also somewhat misleading, because no one regards Turing’s precise formalism as a plausible model of mental activity. The formalism seems too restrictive in several ways:

  • Turing machines execute pure symbolic computation. The inputs and outputs are symbols inscribed in memory locations. In contrast, the mind receives sensory input (e.g., retinal stimulations) and produces motor output (e.g., muscle activations). A complete theory must describe how mental computation interfaces with sensory inputs and motor outputs.
  • A Turing machine has infinite discrete memory capacity. Ordinary biological systems have finite memory capacity. A plausible psychological model must replace the infinite memory store with a large but finite memory store.
  • Modern computers have random access memory: addressable memory locations that the central processor can directly access. Turing machine memory is not addressable. The central processor can access a location only by sequentially accessing intermediate locations. Computation without addressable memory is hopelessly inefficient. For that reason, C.R. Gallistel and Adam King (2009) argue that addressable memory gives a better model of the mind than non-addressable memory.
  • A Turing machine has a central processor that operates serially, executing one instruction at a time. Other computational formalisms relax this assumption, allowing multiple processing units that operate in parallel. Classical computationalists can allow parallel computations (Fodor and Pylyshyn 1988; Gallistel and King 2009: 174). See Gandy (1980) and Sieg (2009) for general mathematical treatments that encompass both serial and parallel computation.
  • Turing computation is deterministic: total computational state determines subsequent computational state. One might instead allow stochastic computations. In a stochastic model, current state does not dictate a unique next state. Rather, there is a certain probability that the machine will transition from one state to another.

CCTM claims that mental activity is “Turing-style computation”, allowing these and other departures from Turing’s own formalism.

3.1 Machine functionalism

Hilary Putnam (1967) introduced CCTM into philosophy. He contrasted his position with logical behaviorism and type-identity theory. Each position purports to reveal the nature of mental states, including propositional attitudes (e.g., beliefs), sensations (e.g., pains), and emotions (e.g., fear). According to logical behaviorism, mental states are behavioral dispositions. According to type-identity theory, mental states are brain states. Putnam advances an opposing functionalist view, on which mental states are functional states. According to functionalism, a system has a mind when the system has a suitable functional organization. Mental states are states that play appropriate roles in the system’s functional organization. Each mental state is individuated by its interactions with sensory input, motor output, and other mental states.

Functionalism offers notable advantages over logical behaviorism and type-identity theory:

  • Behaviorists want to associate each mental state with a characteristic pattern of behavior—a hopeless task, because individual mental states do not usually have characteristic behavioral effects. Behavior almost always results from distinct mental states operating together (e.g., a belief and a desire). Functionalism avoids this difficulty by individuating mental states through characteristic relations not only to sensory input and behavior but also to one another.
  • Type-identity theorists want to associate each mental state with a characteristic physical or neurophysiological state. Putnam casts this project into doubt by arguing that mental states are multiply realizable: the same mental state can be realized by diverse physical systems, including not only terrestrial creatures but also hypothetical creatures (e.g., a silicon-based Martian). Functionalism is tailor-made to accommodate multiple realizability. According to functionalism, what matters for mentality is a pattern of organization, which could be physically realized in many different ways. See the entry multiple realizability for further discussion of this argument.

Putnam defends a brand of functionalism now called machine functionalism. He emphasizes probabilistic automata, which are similar to Turing machines except that transitions between computational states are stochastic. He proposes that mental activity implements a probabilistic automaton and that particular mental states are machine states of the automaton’s central processor. The machine table specifies an appropriate functional organization, and it also specifies the role that individual mental states play within that functional organization. In this way, Putnam combines functionalism with CCTM.
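
The stochastic transitions that distinguish probabilistic automata from deterministic Turing machines can be conveyed with a toy sketch; the states, inputs, and probabilities below are invented for illustration and carry no theoretical weight:

```python
# A toy probabilistic automaton: given the current state and input,
# the next state is drawn from a probability distribution rather
# than being fixed deterministically.
import random

# Hypothetical machine table: (state, input) -> [(next_state, prob), ...]
TRANSITIONS = {
    ("calm", "insult"):    [("annoyed", 0.8), ("calm", 0.2)],
    ("calm", "praise"):    [("calm", 1.0)],
    ("annoyed", "insult"): [("annoyed", 1.0)],
    ("annoyed", "praise"): [("calm", 0.6), ("annoyed", 0.4)],
}

def step(state, input_symbol):
    outcomes = TRANSITIONS[(state, input_symbol)]
    states = [s for s, _ in outcomes]
    probs = [p for _, p in outcomes]
    # Draw the next state with the given probabilities.
    return random.choices(states, weights=probs)[0]

state = "calm"
for inp in ["insult", "insult", "praise"]:
    state = step(state, inp)
    print(inp, "->", state)
```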

Machine functionalism faces several problems. One problem, highlighted by Ned Block and Jerry Fodor (1972), concerns the productivity of thought. A normal human can entertain a potential infinity of propositions. Machine functionalism identifies mental states with machine states of a probabilistic automaton. Since there are only finitely many machine states, there are not enough machine states to pair one-one with possible mental states of a normal human. Of course, an actual human will only ever entertain finitely many propositions. However, Block and Fodor contend that this limitation reflects limits on lifespan and memory, rather than (say) some psychological law that restricts the class of humanly entertainable propositions. A probabilistic automaton is endowed with unlimited time and memory capacity, yet it still has only finitely many machine states. Apparently, then, machine functionalism mislocates the finitary limits upon human cognition.

Another problem for machine functionalism, also highlighted by Block and Fodor (1972), concerns the systematicity of thought. An ability to entertain one proposition is correlated with an ability to think other propositions. For example, someone who can entertain the thought that John loves Mary can also entertain the thought that Mary loves John. Thus, there seem to be systematic relations between mental states. A good theory should reflect those systematic relations. Yet machine functionalism identifies mental states with unstructured machine states, which lack the requisite systematic relations to one another. For that reason, machine functionalism does not explain systematicity. In response to this objection, machine functionalists might deny that they are obligated to explain systematicity. Nevertheless, the objection suggests that machine functionalism neglects essential features of human mentality. A better theory would explain those features in a principled way.

While the productivity and systematicity objections to machine functionalism are perhaps not decisive, they provide strong impetus to pursue an improved version of CCTM. See Block (1978) for additional problems facing machine functionalism and functionalism more generally.

3.2 The representational theory of mind

Fodor (1975, 1981, 1987, 1990, 1994, 2008) advocates a version of CCTM that accommodates systematicity and productivity much more satisfactorily. He shifts attention to the symbols manipulated during Turing-style computation.

An old view, stretching back at least to William of Ockham’s Summa Logicae, holds that thinking occurs in a language of thought (sometimes called Mentalese). Fodor revives this view. He postulates a system of mental representations, including both primitive representations and complex representations formed from primitive representations. For example, the primitive Mentalese words JOHN, MARY, and LOVES can combine to form the Mentalese sentence JOHN LOVES MARY. Mentalese is compositional: the meaning of a complex Mentalese expression is a function of the meanings of its parts and the way those parts are combined. Propositional attitudes are relations to Mentalese symbols. Fodor calls this view the representational theory of mind (RTM). Combining RTM with CCTM, he argues that mental activity involves Turing-style computation over the language of thought. Mental computation stores Mentalese symbols in memory locations, manipulating those symbols in accord with mechanical rules.
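
A toy sketch can convey the compositional idea (the encoding below is invented for illustration, not a serious proposal about Mentalese): a complex representation is a structured combination of primitives, and its meaning is computed from the meanings of its parts and their mode of combination.

```python
# A toy compositional encoding (invented for illustration). Complex
# representations are structured combinations of a finite stock of
# primitives; meaning is a function of the parts and their combination.
LEXICON = {
    "JOHN": "John",
    "MARY": "Mary",
    "LOVES": lambda subj, obj: f"{subj} loves {obj}",
}

def sentence(subj, verb, obj):
    # A complex representation built from primitive symbols.
    return (verb, subj, obj)

def interpret(expr):
    # The meaning of the whole is computed from the meanings of the parts.
    verb, subj, obj = expr
    return LEXICON[verb](LEXICON[subj], LEXICON[obj])

print(interpret(sentence("JOHN", "LOVES", "MARY")))  # John loves Mary
print(interpret(sentence("MARY", "LOVES", "JOHN")))  # Mary loves John
```

Note that the same finite stock of primitives yields both JOHN LOVES MARY and MARY LOVES JOHN, anticipating the productivity and systematicity points developed below.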

A prime virtue of RTM is how readily it accommodates productivity and systematicity:

Productivity: RTM postulates a finite set of primitive Mentalese expressions, combinable into a potential infinity of complex Mentalese expressions. A thinker with access to primitive Mentalese vocabulary and Mentalese compounding devices has the potential to entertain an infinity of Mentalese expressions. She therefore has the potential to instantiate infinitely many propositional attitudes (neglecting limits on time and memory).

Systematicity: According to RTM, there are systematic relations between which propositional attitudes a thinker can entertain. For example, suppose I can think that John loves Mary. According to RTM, my doing so involves my standing in some relation R to a Mentalese sentence JOHN LOVES MARY, composed of Mentalese words JOHN, LOVES, and MARY combined in the right way. If I have this capacity, then I also have the capacity to stand in relation R to the distinct Mentalese sentence MARY LOVES JOHN, thereby thinking that Mary loves John. So the capacity to think that John loves Mary is systematically related to the capacity to think that Mary loves John.

By treating propositional attitudes as relations to complex mental symbols, RTM explains both productivity and systematicity.

CCTM+RTM differs from machine functionalism in several other respects. First, machine functionalism is a theory of mental states in general, while RTM is only a theory of propositional attitudes. Second, proponents of CCTM+RTM need not say that propositional attitudes are individuated functionally. As Fodor (2000: 105, fn. 4) notes, we must distinguish computationalism (mental processes are computational) from functionalism (mental states are functional states). Machine functionalism endorses both doctrines. CCTM+RTM endorses only the first. Unfortunately, many philosophers still mistakenly assume that computationalism entails a functionalist approach to propositional attitudes (see Piccinini 2004 for discussion).

Philosophical discussion of RTM tends to focus mainly on high-level human thought, especially belief and desire. However, CCTM+RTM is applicable to a much wider range of mental states and processes. Many cognitive scientists apply it to non-human animals. For example, Gallistel and King (2009) apply it to certain invertebrate phenomena (e.g., honeybee navigation). Even confining attention to humans, one can apply CCTM+RTM to subpersonal processing. Fodor (1983) argues that perception involves a subpersonal “module” that converts retinal input into Mentalese symbols and then performs computations over those symbols. Thus, talk about a language of thought is potentially misleading, since it suggests a non-existent restriction to higher-level mental activity.

Also potentially misleading is the description of Mentalese as a language, which suggests that all Mentalese symbols resemble expressions in a natural language. Many philosophers, including Fodor, sometimes seem to endorse that position. However, there are possible non-propositional formats for mental representations. Proponents of CCTM+RTM can adopt a pluralistic line, allowing mental computation to operate over items akin to images, maps, diagrams, or other non-propositional representations (Johnson-Laird 2004: 187; McDermott 2001: 69; Pinker 2005: 7; Sloman 1978: 144–176). The pluralistic line seems especially plausible as applied to subpersonal processes (such as perception) and non-human animals. Michael Rescorla (2009a, 2009b) surveys research on cognitive maps (Tolman 1948; O’Keefe and Nadel 1978; Gallistel 1990), suggesting that some animals may navigate by computing over mental representations more similar to maps than sentences. Elisabeth Camp (2009), citing research on baboon social interaction (Cheney and Seyfarth 2007), argues that baboons may encode social dominance relations through non-sentential tree-structured representations.

CCTM+RTM is schematic. To fill in the schema, one must provide detailed computational models of specific mental processes. A complete model will:

  • describe the mental representations manipulated by the process;
  • isolate elementary operations that manipulate the representations (e.g., inscribing a symbol in a memory location); and
  • delineate mechanical rules governing application of elementary operations.

By providing a detailed computational model, we decompose a complex mental process into a series of elementary operations governed by precise, routine instructions.

CCTM+RTM remains neutral in the traditional debate between physicalism and substance dualism. A Turing-style model proceeds at a very abstract level, not saying whether mental computations are implemented by physical stuff or Cartesian soul-stuff (Block 1983: 522). In practice, all proponents of CCTM+RTM embrace a broadly physicalist outlook. They hold that mental computations are implemented not by soul-stuff but rather by the brain. On this view, mental representations are realized by neural states, and computational operations over mental representations are realized by neural processes. Ultimately, physicalist proponents of CCTM+RTM must produce empirically well-confirmed theories that explain how exactly neural activity implements Turing-style computation. As Gallistel and King (2009) emphasize, we do not currently have such theories—though see Zylberberg, Dehaene, Roelfsema, and Sigman (2011) and Akhlaghpour (2022) for some speculations.

Fodor (1975) advances CCTM+RTM as a foundation for cognitive science. He discusses mental phenomena such as decision-making, perception, and linguistic processing. In each case, he maintains, our best scientific theories postulate Turing-style computation over mental representations. In fact, he argues that our only viable theories have this form. He concludes that CCTM+RTM is “the only game in town”. Many cognitive scientists argue along similar lines. C.R. Gallistel and Adam King (2009), Philip Johnson-Laird (1988), Allen Newell and Herbert Simon (1976), and Zenon Pylyshyn (1984) all recommend Turing-style computation over mental symbols as the best foundation for scientific theorizing about the mind.

4. Neural networks

In the 1980s, connectionism emerged as a prominent rival to classical computationalism. Connectionists draw inspiration from neurophysiology rather than logic and computer science. They employ computational models, neural networks, that differ significantly from Turing-style models. A neural network is a collection of interconnected nodes. Nodes fall into three categories: input nodes, output nodes, and hidden nodes (which mediate between input and output nodes). Nodes have activation values, given by real numbers. One node can bear a weighted connection to another node, also given by a real number. Activations of input nodes are determined exogenously: these are the inputs to computation. Total input activation of a hidden or output node is a weighted sum of the activations of nodes feeding into it. Activation of a hidden or output node is a function of its total input activation; the particular function varies with the network. During neural network computation, waves of activation propagate from input nodes to output nodes, as determined by weighted connections between nodes.
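
A minimal feedforward pass may help fix ideas; the layer sizes, weights, and sigmoid activation function below are arbitrary illustrative choices:

```python
# A minimal feedforward network with one hidden layer. Each hidden or
# output node applies an activation function to a weighted sum of the
# activations of the nodes feeding into it.
import math

def sigmoid(x):
    # A commonly used continuous activation function.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # Each row of `weights` feeds one node in the next layer.
    return [sigmoid(sum(w * a for w, a in zip(row, inputs)))
            for row in weights]

# 2 input nodes -> 2 hidden nodes -> 1 output node.
W_HIDDEN = [[0.5, -0.6],
            [0.3,  0.8]]
W_OUTPUT = [[1.0, -1.0]]

inputs = [0.9, 0.1]               # input activations, set exogenously
hidden = layer(inputs, W_HIDDEN)  # activation propagates forward
output = layer(hidden, W_OUTPUT)
print(output)
```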

In a feedforward network, weighted connections flow only in one direction. Recurrent networks have feedback loops, in which connections emanating from hidden units circle back to hidden units. Recurrent networks are less mathematically tractable than feedforward networks. However, they figure crucially in psychological modeling of various phenomena, such as phenomena that involve some kind of memory (Elman 1990).

Weights in a neural network are typically mutable, evolving in accord with a learning algorithm. The literature offers various learning algorithms, but the basic idea is usually to adjust weights so that actual outputs gradually move closer to the target outputs one would expect for the relevant inputs. The backpropagation algorithm is a widely used algorithm of this kind (Rumelhart, Hinton, and Williams 1986).
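
The basic idea behind such learning algorithms can be shown for a single sigmoid node (a deliberate simplification; full backpropagation extends this gradient step through layers of hidden nodes):

```python
# Weight adjustment for a single sigmoid node: repeatedly nudge the
# weights so that the actual output moves toward the target output.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(weights, inputs):
    return sigmoid(sum(w * a for w, a in zip(weights, inputs)))

weights = [0.1, -0.2]             # initial weights (arbitrary)
rate = 0.5                        # learning rate
inputs, target = [1.0, 0.6], 1.0  # one training example

print("before:", predict(weights, inputs))
for _ in range(50):
    out = predict(weights, inputs)
    # Gradient of squared error for a sigmoid unit.
    grad = (target - out) * out * (1.0 - out)
    weights = [w + rate * grad * a for w, a in zip(weights, inputs)]
print("after: ", predict(weights, inputs))  # closer to the target
```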

Connectionism traces back to McCulloch and Pitts (1943), who studied networks of interconnected logic gates (e.g., AND-gates and OR-gates). One can view a network of logic gates as a neural network, with activations confined to two values (0 and 1) and activation functions given by the usual truth-functions. McCulloch and Pitts advanced logic gates as idealized models of individual neurons. Their discussion exerted a profound influence on computer science (von Neumann 1945). Modern digital computers are simply networks of logic gates. Within cognitive science, however, researchers usually focus upon networks whose elements are more “neuron-like” than logic gates. In particular, modern-day connectionists typically emphasize analog neural networks whose nodes take continuous rather than discrete activation values. Some authors even use the phrase “neural network” so that it exclusively denotes such networks.
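
A brief sketch shows how a logic gate can be viewed as a neural network node with activations confined to 0 and 1 (the weights and thresholds are chosen for illustration, in the spirit of McCulloch and Pitts):

```python
# A logic gate as a neural network node: activations are 0 or 1, and
# the node "fires" when its weighted input meets a threshold.
def mp_node(inputs, weights, threshold):
    total = sum(w * a for w, a in zip(weights, inputs))
    return 1 if total >= threshold else 0

def AND(a, b):
    return mp_node([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mp_node([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```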

Neural networks received relatively scant attention from cognitive scientists during the 1960s and 1970s, when Turing-style models dominated. The 1980s witnessed a huge resurgence of interest in neural networks, especially analog neural networks, with the two-volume Parallel Distributed Processing (Rumelhart, McClelland, and the PDP research group, 1986; McClelland, Rumelhart, and the PDP research group, 1987) serving as a manifesto. Researchers constructed connectionist models of diverse phenomena: object recognition, speech perception, sentence comprehension, cognitive development, and so on. Impressed by connectionism, many researchers concluded that CCTM+RTM was no longer “the only game in town”.

In the 2010s, a class of computational models known as deep neural networks became quite popular (Krizhevsky, Sutskever, and Hinton 2012; LeCun, Bengio, and Hinton 2015). These models are neural networks with multiple layers of hidden nodes (sometimes hundreds of such layers). Deep neural networks—trained on large data sets through one or another learning algorithm (usually backpropagation)—have achieved great success in many areas of AI, including image classification (AlexNet), strategic game-playing (AlphaGo), and natural language processing (ChatGPT). Deep neural networks are now widely deployed in commercial applications, and they are the focus of extensive ongoing investigation within both academia and industry. Researchers have also used them to model the mind (e.g. Kriegeskorte 2015; Marblestone, Wayne, and Kording 2016; Storrs, Kietzmann, Walther, et al. 2021; Zhuang, Yan, Nayebi, et al. 2021), although how much success this modeling enterprise has thus far achieved is a controversial question (Bowers, Malhotra, Dujmović, et al. 2023).

For a detailed overview of neural networks, see Haykin (2008). For a user-friendly introduction, with an emphasis on psychological applications, see Marcus (2001). For a philosophically oriented introduction to deep neural networks, see Buckner (2019). For connections between deep neural networks and the history of philosophy, see Buckner (2024).

4.1 Relation between neural networks and classical computation

Neural networks have a very different “feel” than classical (i.e., Turing-style) models. Yet classical computation and neural network computation are not mutually exclusive:

  • One can implement a neural network in a classical model. Indeed, every neural network ever physically constructed has been implemented on a digital computer.
  • One can implement a classical model in a neural network. Modern digital computers implement Turing-style computation in networks of logic gates. Alternatively, one can implement Turing-style computation using an analog recurrent neural network whose nodes take continuous activation values (Graves, Wayne, and Danihelka 2014, Other Internet Resources; Siegelmann and Sontag 1991; Siegelmann and Sontag 1995).

Although some researchers suggest a fundamental opposition between classical computation and neural network computation, it seems more accurate to identify two modeling traditions that overlap in certain cases but not others (cf. Boden 1991; Piccinini 2008b). In this connection, it is also worth noting that classical computationalism and connectionist computationalism have their common origin in the work of McCulloch and Pitts.

Philosophers often say that classical computation involves “rule-governed symbol manipulation” while neural network computation is non-symbolic. The intuitive picture is that “information” in neural networks is globally distributed across the weights and activations, rather than concentrated in localized symbols. However, the notion of “symbol” itself requires explication, so it is often unclear what theorists mean by describing computation as symbolic versus non-symbolic. As mentioned in §1, the Turing formalism places very few conditions on “symbols”. Regarding primitive symbols, Turing assumes just that there are finitely many of them and that they can be inscribed in read/write memory locations. Neural networks can also manipulate symbols satisfying these two conditions: as just noted, one can implement a Turing-style model in a neural network.

Many discussions of the symbolic/non-symbolic dichotomy employ a more robust notion of “symbol”. On the more robust approach, a symbol is the sort of thing that represents a subject matter. Thus, something is a symbol only if it has semantic or representational properties. If we employ this more robust notion of symbol, then the symbolic/non-symbolic distinction cross-cuts the distinction between Turing-style computation and neural network computation. A Turing machine need not employ symbols in the more robust sense. As far as the Turing formalism goes, symbols manipulated during Turing computation need not have representational properties (Chalmers 2011). Conversely, a neural network can manipulate symbols with representational properties. Indeed, an analog neural network can manipulate symbols that have a combinatorial syntax and semantics (Horgan and Tienson 1996; Marcus 2001).

Following Steven Pinker and Alan Prince (1988), we may distinguish between eliminative connectionism and implementationist connectionism.

Eliminative connectionists advance connectionism as a rival to classical computationalism. They argue that the Turing formalism is irrelevant to psychological explanation. Often, though not always, they seek to revive the associationist tradition in psychology, a tradition that CCTM had forcefully challenged. Often, though not always, they attack the mentalist, nativist linguistics pioneered by Noam Chomsky (1965). Often, though not always, they manifest overt hostility to the very notion of mental representation. But the defining feature of eliminative connectionism is that it uses neural networks as replacements for Turing-style models. Eliminative connectionists view the mind as a computing system of a radically different kind than the Turing machine. A few authors explicitly espouse eliminative connectionism (Churchland 1989; Rumelhart and McClelland 1986; Horgan and Tienson 1996), and many others incline towards it.

Implementationist connectionism is a more ecumenical position. It allows a potentially valuable role for both Turing-style models and neural networks, operating harmoniously at different levels of description (Marcus 2001; Smolensky 1988). A Turing-style model is higher-level, whereas a neural network model is lower-level. The neural network illuminates how the brain implements the Turing-style model, just as a description in terms of logic gates illuminates how a personal computer executes a program in a high-level programming language.

4.2 Arguments for connectionism

Connectionism excites many researchers because of the analogy between neural networks and the brain. Nodes resemble neurons, while connections between nodes resemble synapses. Connectionist modeling therefore seems more “biologically plausible” than classical modeling. A connectionist model of a psychological phenomenon apparently captures (in an idealized way) how interconnected neurons might generate the phenomenon.

When evaluating the argument from biological plausibility, one should recognize that neural networks vary widely in how closely they match actual brain activity. Many networks that figure prominently in connectionist writings are not so biologically plausible (Bechtel and Abrahamsen 2002: 341–343; Bermúdez 2010: 237–239; Clark 2014: 87–89; Harnish 2002: 359–362). A few examples:

  • Real neurons are much more heterogeneous than the interchangeable nodes that figure in typical connectionist networks.
  • Real neurons emit discrete spikes (action potentials) as outputs. But the nodes that figure in many prominent neural networks, including the best known deep neural networks, instead have continuous outputs.
  • The backpropagation algorithm requires that weights between nodes can vary between excitatory and inhibitory, yet actual synapses cannot so vary (Crick and Asanuma 1986). Moreover, traditional applications of the algorithm assume target outputs supplied exogenously by modelers who know the desired answer. In that sense, learning is supervised. Very little learning in actual biological systems involves anything resembling supervised training.

On the other hand, some neural networks are more biologically plausible (Buckner and Garson 2019; Illing, Gerstner, and Brea 2019). For example, there are neural networks whose nodes output discrete spikes roughly akin to those emitted by real neurons in the brain (Maass 1996; Buesing, Bill, Nessler, and Maass 2011). Furthermore, a large literature seeks to articulate biologically realistic connectionist learning algorithms, sometimes by approximating backpropagation (e.g. Lillicrap et al. 2016; Whittington and Bogacz, 2017), sometimes by replacing it with an alternative approach (e.g. Krotov and Hopfield 2019). Lillicrap et al. (2020) argue at length that backpropagation can be developed in a biologically plausible way. They note in particular that, although backpropagation was traditionally combined with supervised learning, it can instead be combined with unsupervised learning (e.g. Kingma and Welling 2019) or with reinforcement learning (e.g. Silver et al. 2016).

Even when a neural network is not biologically plausible, it may still be more biologically plausible than classical models. Neural networks certainly seem closer than Turing-style models, in both details and spirit, to neurophysiological description. Many cognitive scientists worry that CCTM reflects a misguided attempt at imposing the architecture of digital computers onto the brain. Some doubt that the brain implements anything resembling digital computation, i.e., computation over discrete configurations of digits (Piccinini and Bahar 2013). Others doubt that brains display clean Turing-style separation between central processor and read/write memory (Dayan 2009). Neural networks fare better on both scores: they do not require computation over discrete configurations of digits, and they do not postulate a clean separation between central processor and read/write memory.

Classical computationalists typically reply that it is premature to draw firm conclusions based upon biological plausibility, given how little we understand about the relation between neural, computational, and cognitive levels of description (Gallistel and King 2009; Marcus 2001). Using measurement techniques such as cell recordings and functional magnetic resonance imaging (fMRI), and drawing upon disciplines as diverse as physics, biology, AI, information theory, statistics, graph theory, and dynamical systems theory, neuroscientists have accumulated substantial knowledge about the brain at varying levels of granularity (Zednik 2019). We now know quite a lot about individual neurons, about how neurons interact within neural populations, about the localization of mental activity in cortical regions (e.g. the visual cortex), and about interactions among cortical regions. Yet we still have a tremendous amount to learn about how neural tissue accomplishes the tasks that it surely accomplishes: perception, reasoning, decision-making, language acquisition, and so on. Given our present state of relative ignorance, it would be rash to insist that the brain does not implement anything resembling Turing computation.

Connectionists offer numerous further arguments that we should employ connectionist models instead of, or in addition to, classical models. See the entry connectionism for an overview. For purposes of this entry, we mention two additional arguments.

The first argument emphasizes learning (Bechtel and Abrahamsen 2002: 51). A vast range of cognitive phenomena involve learning from experience. Many connectionist models are explicitly designed to model learning, through backpropagation or some other algorithm that modifies the weights between nodes. By contrast, connectionists often complain that there are no good classical models of learning. Classical computationalists can respond by citing perceived defects of connectionist learning algorithms. Classical computationalists can also cite Bayesian decision theory, a mathematical model of inference and decision-making under uncertainty. In the Bayesian framework, uncertainty is codified through probability. Precise rules dictate how to update probabilities in light of new evidence and how to select actions in light of probabilities and utilities. (See the entries Bayes’s theorem and normative theories of rational choice: expected utility for details.) Bayesian cognitive science uses Bayesian decision theory to construct mathematical models of mental activity (Ma 2019; Ma, Kording, and Goldreich 2023). Over the past few decades, Bayesian cognitive science has accrued many explanatory successes. This impressive track record suggests that some mental processes are Bayesian or approximately Bayesian (Rescorla forthcoming). Moreover, classical computing systems can execute or at least approximately execute Bayesian updating in various realistic scenarios (Murphy 2023; Thrun, Burgard, and Fox 2005). Arguably, then, classical computation can model many important cases of learning.
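
For concreteness, here is the core Bayesian update on a toy example (the hypotheses and numbers are invented for illustration): the posterior probability of each hypothesis is its likelihood times its prior, normalized by the total probability of the evidence.

```python
# Bayes' theorem on a toy example: P(h | e) = P(e | h) * P(h) / P(e).
prior = {"rain": 0.3, "no_rain": 0.7}       # P(h)
likelihood = {"rain": 0.9, "no_rain": 0.2}  # P(e | h), e = wet grass

evidence = sum(likelihood[h] * prior[h] for h in prior)  # P(e)
posterior = {h: likelihood[h] * prior[h] / evidence for h in prior}

print(posterior)  # probabilities updated in light of the evidence
# -> {'rain': 0.658..., 'no_rain': 0.341...}
```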

The second argument emphasizes speed of computation. Neurons are much slower than silicon-based components of digital computers. For this reason, neurons could not execute serial computation quickly enough to match rapid human performance in perception, linguistic comprehension, decision-making, etc. Connectionists maintain that the only viable solution is to replace serial computation with a “massively parallel” computational architecture—precisely what neural networks provide (Feldman and Ballard 1982; Rumelhart 1989). However, this argument is only effective against classical computationalists who insist upon serial processing. As noted in §3, some Turing-style models involve parallel processing. Many classical computationalists are happy to allow “massively parallel” mental computation, and the argument gains no traction against these researchers. That being said, the argument highlights an important question that any computationalist—whether classical, connectionist, or otherwise—must address: How does a brain built from relatively slow neurons execute sophisticated computations so quickly? Neither classical nor connectionist computationalists have answered this question satisfactorily (Gallistel and King 2009: 174 and 265).

4.3 Systematicity and productivity

Fodor and Pylyshyn (1988) offer a widely discussed critique of eliminativist connectionism. They argue that systematicity and productivity fail in connectionist models, except when the connectionist model implements a classical model. Hence, connectionism does not furnish a viable alternative to CCTM. At best, it supplies a low-level description that helps bridge the gap between Turing-style computation and neuroscientific description.

This argument has elicited numerous replies and counter-replies. Some argue that neural networks can exhibit systematicity without implementing anything like classical computational architecture (Horgan and Tienson 1996; Chalmers 1990; Smolensky 1991; van Gelder 1990). Some argue that Fodor and Pylyshyn vastly exaggerate systematicity (Johnson 2004) or productivity (Rumelhart and McClelland 1986), especially for non-human animals (Dennett 1991). These issues, and many others raised by Fodor and Pylyshyn’s argument, have been thoroughly investigated in the literature. For further discussion, see Bechtel and Abrahamsen (2002: 156–199), Bermúdez (2005: 244–278), Chalmers (1993), Clark (2014: 84–86), and the encyclopedia entries on the language of thought hypothesis and on connectionism.

Gallistel and King (2009) advance a related but distinct productivity argument. They emphasize productivity of mental computation, as opposed to productivity of mental states. Through detailed empirical case studies, they argue that many non-human animals can extract, store, and retrieve detailed records of the surrounding environment. For example, the Western scrub jay records where it cached food, what kind of food it cached in each location, when it cached the food, and whether it has depleted a given cache (Clayton, Emery, and Dickinson 2006). The jay can access these records and exploit them in diverse computations: computing whether a food item stored in some cache is likely to have decayed; computing a route from one location to another; and so on. The number of possible computations a jay can execute is, for all practical purposes, infinite.

CCTM explains the productivity of mental computation by positing a central processor that stores and retrieves symbols in addressable read/write memory. When needed, the central processor can retrieve arbitrary, unpredicted combinations of symbols from memory. In contrast, Gallistel and King argue, connectionism has difficulty accommodating the productivity of mental computation. Although Gallistel and King do not carefully distinguish between eliminativist and implementationist connectionism, we may summarize their argument as follows:

  • Eliminativist connectionism cannot explain how organisms combine stored memories (e.g., cache locations) for computational purposes (e.g., computing a route from one cache to another). There is a virtual infinity of possible combinations that might be useful, with no predicting in advance which pieces of information must be combined in future computations. The only computationally tractable solution is symbol storage in readily accessible read/write memory locations—a solution that eliminativist connectionists reject.
  • Implementationist connectionists can postulate symbol storage in read/write memory, as implemented by a neural network. However, the mechanisms that connectionists usually propose for implementing memory are not plausible. Existing proposals are mainly variants upon a single idea: a recurrent neural network that allows reverberating activity to travel around a loop (Elman 1990). There are many reasons why the reverberatory loop model is hopeless as a theory of long-term memory. For example, noise in the nervous system ensures that signals would rapidly degrade in a few minutes. Implementationist connectionists have thus far offered no plausible model of read/write memory.[3]

Gallistel and King conclude that CCTM is much better suited than either eliminativist or implementationist connectionism to explain a vast range of cognitive phenomena.

Critics attack this new productivity argument from various angles, focusing mainly on the empirical case studies adduced by Gallistel and King. Peter Dayan (2009), John Donahoe (2010), and Christopher Mole (2014) argue that biologically plausible neural network models can accommodate at least some of the case studies. Dayan and Donahoe argue that empirically adequate neural network models can dispense with anything resembling read/write memory. Mole argues that, in certain cases, empirically adequate neural network models can implement the read/write memory mechanisms posited by Gallistel and King. Debate on these fundamental issues seems poised to continue well into the future.

4.4 Computational neuroscience

Computational neuroscience describes the nervous system through computational models (Trappenberg 2010; Miller 2018). Although computational neuroscience is grounded in mathematical modeling of individual neurons, its distinctive focus is systems of interconnected neurons. Computational neuroscientists typically model these systems as neural networks. This research may be seen as a variant, off-shoot, or descendant of connectionism. However, most computational neuroscientists do not self-identify as connectionists. There are several differences between connectionism and computational neuroscience:

  • Neural networks employed by computational neuroscientists are much more biologically realistic than those employed by connectionists. The computational neuroscience literature is filled with talk about firing rates, action potentials, tuning curves, etc. These notions play at best a limited role in connectionist research, such as most of the research canvassed in (Rogers and McClelland 2014).
  • Computational neuroscience is driven in large measure by knowledge about the brain, and it assigns huge importance to neurophysiological data (e.g., cell recordings). Connectionists place much less emphasis upon such data. Their research is primarily driven by behavioral data (although more recent connectionist writings cite neurophysiological data with somewhat greater frequency).
  • Computational neuroscientists usually regard individual nodes in neural networks as idealized descriptions of actual neurons. Connectionists usually instead regard nodes as neuron-like processing units (Rogers and McClelland 2014) while remaining neutral about how exactly these units map onto actual neurophysiological entities.

One might say that computational neuroscience is concerned mainly with neural computation (computation by systems of neurons), whereas connectionism is concerned mainly with abstract computational models inspired by neural computation. But the boundaries between connectionism and computational neuroscience are admittedly somewhat porous. Doerig, Sommers, Seeliger, et al. (2023) propose the label neuroconnectionism for a research program that thoroughly integrates neuroscience with neural network modeling.

Serious philosophical engagement with neuroscience dates back at least to Patricia Churchland’s Neurophilosophy (1986). As computational neuroscience matured, Churchland became one of its main philosophical champions (Churchland, Koch, and Sejnowski 1990; Churchland and Sejnowski 1992). She was joined by Paul Churchland (1995, 2007) and others (Eliasmith 2013; Eliasmith and Anderson 2003; Piccinini and Bahar 2013; Piccinini and Shagrir 2014). All these authors hold that theorizing about mental computation should begin with the brain, not with Turing machines or other inappropriate tools drawn from logic and computer science. They also hold that neural network modeling should strive for greater biological realism than connectionist models typically attain. Chris Eliasmith (2013) develops this neurocomputational viewpoint through the Neural Engineering Framework, which supplements computational neuroscience with tools drawn from control theory (Brogan 1990). He aims to “reverse engineer” the brain, building large-scale, biologically plausible neural network models of cognitive phenomena.

Computational neuroscience differs in a crucial respect from CCTM and connectionism: it abandons multiple realizability. Computational neuroscientists cite specific neurophysiological properties and processes, so their models do not apply equally well to (say) a sufficiently different silicon-based creature. Thus, computational neuroscience sacrifices a key feature that originally attracted philosophers to CTM. Computational neuroscientists will respond that this sacrifice is worth the resultant insight into neurophysiological underpinnings. But many computationalists worry that, by focusing too much on neural underpinnings, we risk losing sight of the cognitive forest for the neuronal trees. Neurophysiological details are important, but don’t we also need an additional abstract level of computational description that prescinds from such details? Gallistel and King (2009) argue that a myopic fixation upon what we currently know about the brain has led computational neuroscience to shortchange core cognitive phenomena such as navigation, spatial and temporal learning, and so on. Similarly, Edelman (2014) complains that the Neural Engineering Framework substitutes a blizzard of neurophysiological details for satisfying psychological explanations.

Partly in response to such worries, some researchers propose an integrated cognitive computational neuroscience that connects psychological theories with neural implementation mechanisms (Naselaris et al. 2018; Kriegeskorte and Douglas 2018). The basic idea is to use neural network models to illuminate how mental processes are instantiated in the brain, thereby grounding multiply realizable cognitive description in the neurophysiological. A good example is recent work on neural implementation of Bayesian inference (Pouget et al. 2013; Orhan and Ma 2017; Aitchison and Lengyel 2016). Researchers articulate (multiply realizable) Bayesian models of various mental processes; they construct biologically plausible neural networks that execute or approximately execute the posited Bayesian computations; and they evaluate how well these neural network models fit with neurophysiological data.

Despite the differences between connectionism and computational neuroscience, these two movements raise many similar issues. In particular, the dialectic from §4.3 regarding systematicity and productivity arises in similar form.

5. Computation and representation

Philosophers and cognitive scientists use the term “representation” in diverse ways. Within philosophy, the most dominant usage ties representation to intentionality, i.e., the “aboutness” of mental states. Contemporary philosophers usually elucidate intentionality by invoking representational content. A representational mental state has a content that represents the world as being a certain way, so we can ask whether the world is indeed that way. Thus, representationally contentful mental states are semantically evaluable with respect to properties such as truth, accuracy, fulfillment, and so on. To illustrate:

  • Beliefs are the sorts of things that can be true or false. My belief that Emmanuel Macron is French is true if Emmanuel Macron is French, false if he is not.
  • Perceptual states are the sorts of things that can be accurate or inaccurate. My perceptual experience as of a red sphere is accurate only if a red sphere is before me.
  • Desires are the sorts of things that can be fulfilled or thwarted. My desire to eat chocolate is fulfilled if I eat chocolate, thwarted if I do not eat chocolate.

Beliefs have truth-conditions (conditions under which they are true), perceptual states have accuracy-conditions (conditions under which they are accurate), and desires have fulfillment-conditions (conditions under which they are fulfilled).

In ordinary life, we frequently predict and explain behavior by invoking beliefs, desires, and other representationally contentful mental states. We identify these states through their representational properties. When we say “Frank believes that Emmanuel Macron is French”, we specify the condition under which Frank’s belief is true (namely, that Emmanuel Macron is French). When we say “Frank wants to eat chocolate”, we specify the condition under which Frank’s desire is fulfilled (namely, that Frank eats chocolate). So folk psychology assigns a central role to intentional descriptions, i.e., descriptions that identify mental states through their representational properties. Whether scientific psychology should likewise employ intentional descriptions is a contested issue within contemporary philosophy of mind.

Intentional realism is realism regarding representation. At a minimum, this position holds that representational properties are genuine aspects of mentality. Usually, it is also taken to hold that scientific psychology should freely employ intentional descriptions when appropriate. Intentional realism is a popular position, advocated by Tyler Burge (2010a), Jerry Fodor (1987), Christopher Peacocke (1992, 1994), and many others. One prominent argument for intentional realism cites cognitive science practice. The argument maintains that intentional description figures centrally in many core areas of cognitive science, such as perceptual psychology and linguistics. For example, perceptual psychology describes how perceptual activity transforms sensory inputs (e.g., retinal stimulations) into representations of the distal environment (e.g., perceptual representations of distal shapes, sizes, and colors). The science identifies perceptual states by citing representational properties (e.g., representational relations to specific distal shapes, sizes, colors). Assuming a broadly scientific realist perspective, the explanatory achievements of perceptual psychology support a realist posture towards intentionality.

Eliminativism is a strong form of anti-realism about intentionality. Eliminativists dismiss intentional description as vague, context-sensitive, interest-relative, explanatorily superficial, or otherwise problematic. They recommend that scientific psychology jettison representational content. An early example is W.V. Quine’s Word and Object (1960), which seeks to replace intentional psychology with behaviorist stimulus-response psychology. Paul Churchland (1981), another prominent eliminativist, wants to replace intentional psychology with neuroscience.

Between intentional realism and eliminativism lie various intermediate positions. Daniel Dennett (1971, 1987) acknowledges that intentional discourse is predictively useful, but he questions whether mental states really have representational properties. According to Dennett, theorists who employ intentional descriptions are not literally asserting that mental states have representational properties. They are merely adopting the “intentional stance”. Donald Davidson (1980) espouses a neighboring interpretivist position. He emphasizes the central role that intentional ascription plays within ordinary interpretive practice, i.e., our practice of interpreting one another’s mental states and speech acts. At the same time, he questions whether intentional psychology will find a place within mature scientific theorizing. Davidson and Dennett both profess realism about intentional mental states. Nevertheless, both philosophers are customarily read as intentional anti-realists. (In particular, Dennett is frequently read as a kind of instrumentalist about intentionality.) One source of this customary reading involves indeterminacy of interpretation. Suppose that behavioral evidence allows two conflicting interpretations of a thinker’s mental states. Following Quine, Davidson and Dennett both say there is then “no fact of the matter” regarding which interpretation is correct. This diagnosis indicates a less than fully realist attitude towards intentionality.

Debates over intentionality figure prominently in philosophical discussion of CTM. Let us survey some highlights.

5.1 Computation as formal

Classical computationalists typically assume what one might call the formal-syntactic conception of computation (FSC). The intuitive idea is that computation manipulates symbols in virtue of their formal syntactic properties rather than their semantic properties.

FSC stems from innovations in mathematical logic during the late 19th and early 20th centuries, especially seminal contributions by George Boole and Gottlob Frege. In his Begriffsschrift (1879/1967), Frege effected a thoroughgoing formalization of deductive reasoning. To formalize, we specify a formal language whose component linguistic expressions are individuated non-semantically (e.g., by their geometric shapes). We may have some intended interpretation in mind, but elements of the formal language are purely syntactic entities that we can discuss without invoking semantic properties such as reference or truth-conditions. In particular, we can specify inference rules in formal syntactic terms. If we choose our inference rules wisely, then they will cohere with our intended interpretation: they will carry true premises to true conclusions. Through formalization, Frege invested logic with unprecedented rigor. He thereby laid the groundwork for numerous subsequent mathematical and philosophical developments.

Formalization plays a significant foundational role within computer science. We can program a Turing-style computer that manipulates linguistic expressions drawn from a formal language. If we program the computer wisely, then its syntactic machinations will cohere with our intended semantic interpretation. For example, we can program the computer so that it carries true premises only to true conclusions, or so that it updates probabilities as dictated by Bayesian decision theory.
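The point can be illustrated with a toy sketch in Python (devised for this entry, with the string notation invented for the example): an inference rule defined entirely over the shapes of strings, which nonetheless carries true premises to true conclusions under the intended reading of “->” as the material conditional:

    # Modus ponens, defined purely syntactically: from "A" and "(A -> B)",
    # derive "B". The rule inspects only string shape, never meaning.
    def modus_ponens(premise1: str, premise2: str):
        if premise2.startswith("(") and premise2.endswith(")"):
            antecedent, arrow, consequent = premise2[1:-1].partition(" -> ")
            if arrow and antecedent == premise1:
                return consequent
        return None

    # If the premises are true under the intended interpretation, so is
    # the conclusion: syntactic manipulation coheres with semantics.
    print(modus_ponens("it_rains", "(it_rains -> streets_wet)"))  # streets_wet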

FSC holds that all computation manipulates formal syntactic items, without regard to any semantic properties those items may have. Precise formulations of FSC vary. Computation is said to be “sensitive” to syntax but not semantics, or to have “access” only to syntactic properties, or to operate “in virtue” of syntactic rather than semantic properties, or to be impacted by semantic properties only as “mediated” by syntactic properties. It is not always so clear what these formulations mean or whether they are equivalent to one another. But the intuitive picture is that syntactic properties have causal/explanatory primacy over semantic properties in driving computation forward.

Fodor’s article “Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology” (1980) offers an early statement. Fodor combines FSC with CCTM+RTM. He analogizes Mentalese to formal languages studied by logicians: it contains simple and complex items individuated non-semantically, just as typical formal languages contain simple and complex expressions individuated by their shapes. Mentalese symbols have a semantic interpretation, but this interpretation does not (directly) impact mental computation. A symbol’s formal properties, rather than its semantic properties, determine how computation manipulates the symbol. In that sense, the mind is a “syntactic engine”. Virtually all classical computationalists follow Fodor in endorsing FSC.

Connectionists often deny that neural networks manipulate syntactically structured items. For that reason, many connectionists would hesitate to accept FSC. Nevertheless, most connectionists endorse a generalized formality thesis: computation is insensitive to semantic properties. The generalized formality thesis raises many of the same philosophical issues raised by FSC. We focus here on FSC, which has received the most philosophical discussion.

Fodor combines CCTM+RTM+FSC with intentional realism. He holds that CCTM+RTM+FSC vindicates folk psychology by helping us convert common sense intentional discourse into rigorous science. He motivates his position with a famous abductive argument for CCTM+RTM+FSC (1987: 18–20). Strikingly, mental activity tracks semantic properties in a coherent way. For example, deductive inference carries premises to conclusions that are true if the premises are true. How can we explain this crucial aspect of mental activity? Formalization shows that syntactic manipulations can track semantic properties, and computer science shows how to build physical machines that execute desired syntactic manipulations. If we treat the mind as a syntax-driven machine, then we can explain why mental activity tracks semantic properties in a coherent way. Moreover, our explanation does not posit causal mechanisms radically different from those posited within the physical sciences. We thereby answer the pivotal question: How is rationality mechanically possible?

Stephen Stich (1983) and Hartry Field (2001) combine CCTM+FSC with eliminativism. They recommend that cognitive science model the mind in formal syntactic terms, eschewing intentionality altogether. They grant that mental states have representational properties, but they ask what explanatory value scientific psychology gains by invoking those properties. Why supplement formal syntactic description with intentional description? If the mind is a syntax-driven machine, then doesn’t representational content drop out as explanatorily irrelevant?

At one point in his career, Putnam (1983: 139–154) combined CCTM+FSC with a Davidson-tinged interpretivism. Cognitive science should proceed along the lines suggested by Stich and Field, delineating purely formal syntactic computational models. Formal syntactic modeling co-exists with ordinary interpretive practice, in which we ascribe intentional contents to one another’s mental states and speech acts. Interpretive practice is governed by holistic and heuristic constraints, which stymie attempts at converting intentional discourse into rigorous science. For Putnam, as for Field and Stich, the scientific action occurs at the formal syntactic level rather than the intentional level.

CTM+FSC comes under attack from various directions. One criticism targets the causal relevance of representational content (Block 1990; Figdor 2009; Kazez 1995). Intuitively speaking, the contents of mental states are causally relevant to mental activity and behavior. For example, my desire to drink water rather than orange juice causes me to walk to the sink rather than the refrigerator. The content of my desire (that I drink water) seems to play an important causal role in shaping my behavior. According to Fodor (1990: 137–159), CCTM+RTM+FSC accommodates such intuitions. Formal syntactic activity implements intentional mental activity, thereby ensuring that intentional mental states causally interact in accord with their contents. However, it is not so clear that this analysis secures the causal relevance of content. FSC says that computation is “sensitive” to syntax but not semantics. Depending on how one glosses the key term “sensitive”, it can look like representational content is causally irrelevant, with formal syntax doing all the causal work. Here is an analogy to illustrate the worry. When a car drives along a road, there are stable patterns involving the car’s shadow. Nevertheless, shadow position at one time does not influence shadow position at a later time. Similarly, CCTM+RTM+FSC may explain how mental activity instantiates stable patterns described in intentional terms, but this is not enough to ensure the causal relevance of content. If the mind is a syntax-driven machine, then causal efficacy seems to reside at the syntactic rather than the semantic level. Semantics is just “along for the ride”. Apparently, then, CTM+FSC encourages the conclusion that representational properties are causally inert. The conclusion may not trouble eliminativists, but intentional realists usually want to avoid it.

A second criticism dismisses the formal-syntactic picture as speculation ungrounded in scientific practice. Tyler Burge (2010a,b, 2013: 479–480) contends that formal syntactic description of mental activity plays no significant role within large areas of cognitive science, including the study of theoretical reasoning, practical reasoning, and perception. In each case, Burge argues, the science employs intentional description rather than formal syntactic description. For example, perceptual psychology individuates perceptual states not through formal syntactic properties but through representational relations to distal shapes, sizes, colors, and so on. To understand this criticism, we must distinguish formal syntactic description and neurophysiological description. Everyone agrees that a complete scientific psychology will assign prime importance to neurophysiological description. However, neurophysiological description is distinct from formal syntactic description, because formal syntactic description is supposed to be multiply realizable in the neurophysiological. The issue here is whether scientific psychology should supplement intentional descriptions and neurophysiological descriptions with multiply realizable, non-intentional formal syntactic descriptions.

5.2 Externalism about mental content

Putnam’s landmark article “The Meaning of ‘Meaning’” (1975: 215–271) introduced the Twin Earth thought experiment, which postulates a world just like our own except that H2O is replaced by a qualitatively similar substance XYZ with different chemical composition. Putnam argues that XYZ is not water and that speakers on Twin Earth use the word “water” to refer to XYZ rather than to water. Burge (1982) extends this conclusion from linguistic reference to mental content. He argues that Twin Earthlings instantiate mental states with different contents. For example, if Oscar on Earth thinks that water is thirst-quenching, then his duplicate on Twin Earth thinks a thought with a different content, which we might gloss as that twin-water is thirst-quenching. Burge concludes that mental content does not supervene upon internal neurophysiology. Mental content is individuated partly by factors outside the thinker’s skin, including causal relations to the environment. This position is externalism about mental content.

Formal syntactic properties of mental states are widely taken to supervene upon internal neurophysiology. For example, Oscar and Twin Oscar instantiate the same formal syntactic manipulations. Assuming content externalism, it follows that there is a huge gulf between ordinary intentional description and formal syntactic description.

Content externalism raises serious questions about the explanatory utility of representational content for scientific psychology:

Argument from Causation (Fodor 1987, 1991): How can mental content exert any causal influence except as manifested within internal neurophysiology? There is no “psychological action at a distance”. Differences in the physical environment impact behavior only by inducing differences in local brain states. So the only causally relevant factors are those that supervene upon internal neurophysiology. Externally individuated content is causally irrelevant.

Argument from Explanation (Stich 1983): Rigorous scientific explanation should not take into account factors outside the subject’s skin. Folk psychology may taxonomize mental states through relations to the external environment, but scientific psychology should taxonomize mental states entirely through factors that supervene upon internal neurophysiology. It should treat Oscar and Twin Oscar as psychological duplicates.[4]

Some authors pursue the two arguments in conjunction with one another. Both arguments reach the same conclusion: externally individuated mental content finds no legitimate place within causal explanations provided by scientific psychology. Stich (1983) argues along these lines to motivate his formal-syntactic eliminativism.

Many philosophers respond to such worries by promoting content internalism. Whereas content externalists favor wide content (content that does not supervene upon internal neurophysiology), content internalists favor narrow content (content that does so supervene). Narrow content is what remains of mental content when one factors out all external elements. At one point in his career, Fodor (1981, 1987) pursued internalism as a strategy for integrating intentional psychology with CCTM+RTM+FSC. While conceding that wide content should not figure in scientific psychology, he maintained that narrow content should play a central explanatory role.

Radical internalists insist that all content is narrow. A typical analysis holds that Oscar is thinking not about water but about some more general category of substance that subsumes XYZ, so that Oscar and Twin Oscar entertain mental states with the same contents. Tim Crane (1991) and Gabriel Segal (2000) endorse such an analysis. They hold that folk psychology always individuates propositional attitudes narrowly. A less radical internalism recommends that we recognize narrow content in addition to wide content. Folk psychology may sometimes individuate propositional attitudes widely, but we can also delineate a viable notion of narrow content that advances important philosophical or scientific goals. Internalists have proposed various candidate notions of narrow content (Block 1986; Chalmers 2002; Cummins 1989; Fodor 1987; Lewis 1994; Loar 1988; Mendola 2008). See the entry narrow mental content for an overview of prominent candidates.

Externalists complain that existing theories of narrow content are sketchy, implausible, useless for psychological explanation, or otherwise objectionable (Burge 2007; Sawyer 2000; Stalnaker 1999). Externalists also question internalist arguments that scientific psychology requires narrow content:

Argument from Causation: Externalists insist that wide content can be causally relevant. The details vary among externalists, and discussion often becomes intertwined with complex issues surrounding causation, counterfactuals, and the metaphysics of mind. See the entry mental causation for an introductory overview, and see Burge (2007), Rescorla (2014), and Yablo (1997, 2003) for representative externalist discussion.

Argument from Explanation: Externalists claim that psychological explanation can legitimately taxonomize mental states through factors that outstrip internal neurophysiology (Peacocke 1993; Shea 2018). Burge observes that non-psychological sciences often individuate explanatory kinds relationally, i.e., through relations to external factors. For example, whether an entity counts as a heart depends (roughly) upon whether its biological function in its normal environment is to pump blood. So physiology individuates organ kinds relationally. Why can’t psychology likewise individuate mental states relationally? For a notable exchange on these issues, see Burge (1986, 1989, 1995) and Fodor (1987, 1991).

Externalists doubt that we have any good reason to replace or supplement wide content with narrow content. They dismiss the search for narrow content as a wild goose chase.

Burge (2007, 2010a) defends externalism by analyzing current cognitive science. He argues that many branches of scientific psychology (especially perceptual psychology) individuate mental content through causal relations to the external environment. He concludes that scientific practice embodies an externalist perspective. By contrast, he maintains, narrow content is a philosophical fantasy ungrounded in current science.

Suppose we abandon the search for narrow content. What are the prospects for combining CTM+FSC with externalist intentional psychology? The most promising option emphasizes levels of explanation. We can say that intentional psychology occupies one level of explanation, while formal-syntactic computational psychology occupies a different level. Fodor advocates this approach in his later work (1994, 2008). He comes to reject narrow content as otiose. He suggests that formal syntactic mechanisms implement externalist psychological laws. Mental computation manipulates Mentalese expressions in accord with their formal syntactic properties, and these formal syntactic manipulations ensure that mental activity instantiates appropriate law-like patterns defined over wide contents.

In light of the internalism/externalism distinction, let us revisit the eliminativist challenge raised in §5.1: what explanatory value does intentional description add to formal-syntactic description? Internalists can respond that suitable formal syntactic manipulations determine and maybe even constitute narrow contents, so that internalist intentional description is already implicit in suitable formal syntactic description (cf. Field 2001: 75). Perhaps this response vindicates intentional realism, perhaps not. Crucially, though, no such response is available to content externalists. Externalist intentional description is not implicit in formal syntactic description, because one can hold formal syntax fixed while varying wide content. Thus, content externalists who espouse CTM+FSC must say what we gain by supplementing formal-syntactic explanations with intentional explanations. Once we accept that mental computation is sensitive to syntax but not semantics, it is far from clear that any useful explanatory work remains for wide content. Fodor addresses this challenge at various points, offering his most systematic treatment in The Elm and the Expert (1994). See Arjo (1996), Aydede (1998), Aydede and Robbins (2001), Perry (1998), and Wakefield (2002) for criticism. See Rupert (2008) and Schneider (2005) for positions close to Fodor’s. Dretske (1993) and Shea (2018, pp. 197–226) pursue alternative strategies for vindicating the explanatory relevance of wide content.

5.3 Content-involving computation

The perceived gulf between computational description and intentional description animates many writings on CTM. A few philosophers try to bridge the gulf using computational descriptions that individuate computational states in representational terms. These descriptions are content-involving, to use Christopher Peacocke’s (1994) terminology. On the content-involving approach, there is no rigid demarcation between computational and intentional description. In particular, certain scientifically valuable descriptions of mental activity are both computational and intentional. Call this position content-involving computationalism.

Content-involving computationalists need not say that all computational description is intentional. To illustrate, suppose we describe a simple Turing machine that manipulates symbols individuated by their geometric shapes. Then the resulting computational description is not plausibly content-involving. Accordingly, content-involving computationalists do not usually advance content-involving computation as a general theory of computation. They claim only that some important computational descriptions are content-involving.

One can develop content-involving computationalism in an internalist or externalist direction. Internalist content-involving computationalists hold that some computational descriptions identify mental states partly through their narrow contents. Murat Aydede (2005) recommends a position along these lines. Externalist content-involving computationalism holds that certain computational descriptions identify mental states partly through their wide contents. Tyler Burge (2010a: 95–101), Christopher Peacocke (1994, 1999), and Mark Sprevak (2010) espouse this position. Oron Shagrir (2001, 2020, 2022) advocates a content-involving computationalism that is neutral between internalism and externalism.

Externalist content-involving computationalists typically cite cognitive science practice as a motivating factor. For example, perceptual psychology describes the perceptual system as computing an estimate of some object’s size from retinal stimulations and from an estimate of the object’s depth. Perceptual “estimates” are identified representationally, as representations of specific distal sizes and depths. Quite plausibly, representational relations to specific distal sizes and depths do not supervene on internal neurophysiology. Quite plausibly, then, perceptual psychology type-identifies perceptual computations through wide contents. So externalist content-involving computationalism seems to harmonize well with current cognitive science.

A major challenge facing content-involving computationalism concerns the interface with standard computational formalisms, such as the Turing machine. How exactly do content-involving descriptions relate to the computational models found in logic and computer science? Philosophers usually assume that these models offer non-intentional descriptions. If so, that would be a major and perhaps decisive blow to content-involving computationalism.

Arguably, though, many familiar computational formalisms allow a content-involving rather than formal syntactic construal. To illustrate, consider the Turing machine. One can individuate the “symbols” comprising the Turing machine alphabet non-semantically, through factors akin to geometric shape. But does Turing’s formalism require a non-semantic individuative scheme? Arguably, the formalism allows us to individuate symbols partly through their contents. Of course, the machine table for a Turing machine does not explicitly cite semantic properties of symbols (e.g., denotations or truth-conditions). Nevertheless, the machine table can encode mechanical rules that describe how to manipulate symbols, where those symbols are type-identified in content-involving terms. In this way, the machine table dictates transitions among content-involving states without explicitly mentioning semantic properties. Aydede (2005) suggests an internalist version of this view, with symbols type-identified through their narrow contents.[5] Rescorla (2017a) develops the view in an externalist direction, with symbols type-identified through their wide contents. He argues that some Turing-style models describe computational operations over externalistically individuated Mentalese symbols.[6]

In principle, one might embrace both externalist content-involving computational description and formal syntactic description. One might say that these two kinds of description occupy distinct levels of explanation. Peacocke suggests such a view. Other content-involving computationalists regard formal syntactic descriptions of the mind more skeptically. For example, Burge questions what explanatory value formal syntactic description contributes to certain areas of scientific psychology (such as perceptual psychology). From this viewpoint, the eliminativist challenge posed in §5.1 has matters backwards. We should not assume that formal syntactic descriptions are explanatorily valuable and then ask what value intentional descriptions contribute. We should instead embrace the externalist intentional descriptions offered by current cognitive science and then ask what value formal syntactic description contributes.

Proponents of formal syntactic description often respond by citing implementation mechanisms. Externalist description of mental activity presupposes that suitable causal-historical relations between the mind and the external physical environment are in place. But surely we want a “local” description that ignores external causal-historical relations, a description that reveals underlying causal mechanisms. Fodor (1987, 1994) argues in this way to motivate the formal syntactic picture. For possible externalist responses to the argument from implementation mechanisms, see Burge (2010b), Rescorla (2017b), Shea (2013), and Sprevak (2010). For an argument that current cognitive science practice does indeed assign an important explanatory role to formal syntax, see Calzavarini and Paternoster (2022). Debate over the explanatory value of formal syntax, and more generally over the relation between computation and representation, seems likely to continue into the indefinite future.

6. Alternative conceptions of computation

The literature offers several alternative conceptions, usually advanced as foundations for CTM. In many cases, these conceptions overlap with one another or with the conceptions considered above.

6.1 Information-processing

It is common for cognitive scientists to describe computation as “information-processing”. It is less common for proponents to clarify what they mean by “information” or “processing”. Lacking clarification, the description is little more than an empty slogan.

Claude Shannon introduced a scientifically important notion of “information” in his 1948 article “A Mathematical Theory of Communication”. The intuitive idea is that information measures reduction in uncertainty, where reduced uncertainty manifests as an altered probability distribution over possible states. Shannon codified this idea within a rigorous mathematical framework, laying the foundation for information theory (Cover and Thomas 2006). Shannon information is fundamental to modern engineering. It finds fruitful application within cognitive science, especially cognitive neuroscience. Does it support a convincing analysis of computation as “information-processing”? Consider an old-fashioned tape machine that records messages received over a wireless radio. Using Shannon’s framework, one can measure how much information is carried by some recorded message. There is a sense in which the tape machine “processes” Shannon information whenever we replay a recorded message. Still, the machine does not seem to implement a non-trivial computational model.[7] Certainly, neither the Turing machine formalism nor the neural network formalism offers much insight into the machine’s operations. Arguably, then, a system can process Shannon information without executing computations in any interesting sense.
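Shannon’s measure can be stated compactly: for a distribution over possible states, the entropy \(H = -\sum_i p_i \log_2 p_i\) quantifies uncertainty in bits, and information received is the resulting reduction in entropy. A minimal sketch in Python (the coin example is chosen for illustration):

    import math

    def entropy(probs):
        """Shannon entropy in bits: H = -sum of p * log2(p)."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # A fair coin carries one bit of uncertainty; a heavily biased coin
    # carries less, so observing its outcome conveys less information
    # on average.
    print(entropy([0.5, 0.5]))  # 1.0
    print(entropy([0.9, 0.1]))  # approximately 0.469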

Confronted with such examples, one might try to isolate a more demanding notion of “processing”, so that the tape machine does not “process” Shannon information. Alternatively, one might insist that the tape machine executes non-trivial computations. Piccinini and Scarantino (2010) advance a highly general notion of computation—which they dub generic computation—with that consequence.

A second prominent notion of information derives from Paul Grice’s (1989) influential discussion of natural meaning. Natural meaning involves reliable, counterfactual-supporting correlations. For example, tree rings correlate with the age of the tree, and pox correlate with chickenpox. We colloquially describe tree rings as carrying information about tree age, pox as carrying information about chickenpox, and so on. Such descriptions suggest a conception that ties information to reliable, counterfactual-supporting correlations. Fred Dretske (1981) develops this conception into a systematic theory, as do various subsequent philosophers. Does Dretske-style information subserve a plausible analysis of computation as “information-processing”? Consider an old-fashioned bimetallic strip thermostat. Two metals are joined together into a strip. Differential expansion of the metals causes the strip to bend, thereby activating or deactivating a heating unit. Strip state reliably correlates with current ambient temperature, and the thermostat “processes” this information-bearing state when activating or deactivating the heater. Yet the thermostat does not seem to implement any non-trivial computational model. One would not ordinarily regard the thermostat as computing. Arguably, then, a system can process Dretske-style information without executing computations in any interesting sense. Of course, one might try to handle such examples through maneuvers parallel to those from the previous paragraph.

A third prominent notion of information is semantic information, i.e., representational content.[8] Some philosophers hold that a physical system computes only if the system’s states have representational properties (Dietrich 1989; Fodor 1998: 10; Ladyman 2009; Shagrir 2006; Sprevak 2010). In that sense, information-processing is necessary for computation. As Fodor memorably puts it, “no computation without representation” (1975: 34). However, this position is debatable. Chalmers (2011) and Piccinini (2008a) contend that a Turing machine might execute computations even though symbols manipulated by the machine have no semantic interpretation. The machine’s computations are purely syntactic in nature, lacking anything like semantic properties. On this view, representational content is not necessary for a physical system to count as computational.

It remains unclear whether the slogan “computation is information-processing” provides much insight. Nevertheless, the slogan seems unlikely to disappear from the literature anytime soon. For further discussion of possible connections between computation and information, see Gallistel and King (2009: 1–26), Lizier, Flecker, and Williams (2013), Miłkowski (2013), Piccinini and Scarantino (2010), and Sprevak (2020).

6.2 Function evaluation

In a widely cited passage, the perceptual psychologist David Marr (1982) distinguishes three levels at which one can describe an “information-processing device”:

Computational theory: “[t]he device is characterized as a mapping from one kind of information to another, the abstract properties of this mapping are defined precisely, and its appropriateness and adequacy for the task at hand are demonstrated” (p. 24).

Representation and algorithm: “the choice of representation for the input and output and the algorithm to be used to transform one into the other” (pp. 24–25).

Hardware implementation: “the details of how the algorithm and representation are realized physically” (p. 25).

Marr’s three levels have attracted intense philosophical scrutiny. For our purposes, the key point is that Marr’s “computational level” describes a mapping from inputs to outputs, without describing intermediate steps. Marr illustrates his approach by providing “computational level” theories of various perceptual processes, such as edge detection.

Marr’s discussion suggests a functional conception of computation, on which computation is a matter of transforming inputs into appropriate outputs. Frances Egan elaborates the functional conception over a series of articles (1991, 1992, 1999, 2003, 2010, 2014, 2019, 2020). Like Marr, she treats computational description as description of input-output relations. She also claims that computational models characterize a purely mathematical function: that is, a mapping from mathematical inputs to mathematical outputs. She illustrates by considering a visual mechanism (called “Visua”) that computes an object’s depth from retinal disparity. She imagines a neurophysiological duplicate (“Twin Visua”) embedded so differently in the physical environment that it does not represent depth. Visua and Twin Visua instantiate perceptual states with different representational properties. Nevertheless, Egan says, vision science treats Visua and Twin Visua as computational duplicates. Visua and Twin Visua compute the same mathematical function, even though the computations have different representational import in the two cases. Egan concludes that computational modeling of the mind yields an “abstract mathematical description” consistent with many alternative possible representational descriptions. Intentional attribution is just a heuristic gloss upon underlying computational description.
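Egan’s point may be illustrated schematically (the formula below is the standard stereo triangulation rule, but the parameter values are invented placeholders; this is not Egan’s own worked example): a depth-from-disparity computation, described purely mathematically, is just a mapping from numbers to numbers, and nothing in the mapping itself fixes what, if anything, the numbers represent:

    # Standard stereo triangulation: depth = focal_length * baseline / disparity.
    # The numerical parameter values are arbitrary placeholders.
    def depth_from_disparity(disparity, focal_length=0.017, baseline=0.065):
        return focal_length * baseline / disparity

    # On Egan's view, Visua and Twin Visua are computational duplicates
    # because both compute this mathematical mapping, whatever (if
    # anything) their states represent.
    print(depth_from_disparity(0.0005))  # 2.21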

Chalmers (2012) argues that the functional conception neglects important features of computation. As he notes, computational models usually describe more than just input-output relations. They describe intermediate steps through which inputs are transformed into outputs. These intermediate steps, which Marr consigns to the “algorithmic” level, figure prominently in computational models offered by logicians and computer scientists. Restricting the term “computation” to input-output description does not capture standard computational practice.

An additional worry faces functional theories, such as Egan’s, that exclusively emphasize mathematical inputs and outputs. Critics complain that Egan mistakenly elevates mathematical functions, at the expense of intentional explanations routinely offered by cognitive science (Burge 2005; Rescorla 2015; Silverberg 2006; Sprevak 2010). To illustrate, suppose perceptual psychology describes the perceptual system as estimating that some object’s depth is 5 meters. The perceptual depth-estimate has a representational content: it is accurate only if the object’s depth is 5 meters. We cite the number 5 to identify the depth-estimate. But our choice of this number depends upon our arbitrary choice of measurement units. Critics contend that the content of the depth-estimate, not the arbitrarily chosen number through which we theorists specify that content, is what matters for psychological explanation. Egan’s theory places the number rather than the content at explanatory center stage. According to Egan, computational explanation should describe the visual system as computing a particular mathematical function that carries particular mathematical inputs into particular mathematical outputs. Those particular mathematical inputs and outputs depend upon our arbitrary choice of measurement units, so they arguably lack the explanatory significance that Egan assigns to them.

We should distinguish the functional approach, as pursued by Marr and Egan, from the functional programming paradigm in computer science. The functional programming paradigm models evaluation of a complex function as successive evaluation of simpler functions. To take a simple example, one might evaluate \(f(x,y) = (x^{2}+y)\) by first evaluating the squaring function and then evaluating the addition function. Functional programming differs from the “computational level” descriptions emphasized by Marr, because it specifies intermediate computational stages. The functional programming paradigm stretches back to Alonzo Church’s (1936) lambda calculus, continuing with programming languages such as PCF and LISP. It plays an important role in AI and theoretical computer science. Some authors suggest that it offers special insight into mental computation (Klein 2012; Piantadosi, Tenenbaum, and Goodman 2012). However, many computational formalisms do not conform to the functional paradigm: Turing machines; imperative programming languages, such as C; logic programming languages, such as Prolog; and so on. Even though the functional paradigm describes numerous important computations (possibly including mental computations), it does not plausibly capture computation in general.
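A sketch of the textbook example in Python makes the paradigm vivid: the complex function is evaluated by successively evaluating simpler functions, with no appeal to mutable state:

    def square(x):
        return x * x

    def add(x, y):
        return x + y

    # f(x, y) = x^2 + y, built by composing the simpler functions.
    def f(x, y):
        return add(square(x), y)

    print(f(3, 4))  # 13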

6.3 Structuralism

Many philosophical discussions embody a structuralist conception of computation: a computational model describes an abstract causal structure, without taking into account particular physical states that instantiate the structure. This conception traces back at least to Putnam’s original treatment (1967). Chalmers (1995, 1996a, 2011, 2012) develops it in detail. He introduces the combinatorial-state automaton (CSA) formalism, which subsumes most familiar models of computation (including Turing machines and neural networks). A CSA provides an abstract description of a physical system’s causal topology: the pattern of causal interaction among the system’s parts, independent of the nature of those parts or the causal mechanisms through which they interact. Computational description specifies a causal topology.

Chalmers deploys structuralism to delineate a very general version of CTM. He assumes the functionalist view that psychological states are individuated by their roles in a pattern of causal organization. Psychological description specifies causal roles, abstracted away from physical states that realize those roles. So psychological properties are organizationally invariant, in that they supervene upon causal topology. Since computational description characterizes a causal topology, satisfying a suitable computational description suffices for instantiating appropriate mental properties. It also follows that psychological description is a species of computational description, so that computational description should play a central role within psychological explanation. Thus, structuralist computation provides a solid foundation for cognitive science. Mentality is grounded in causal patterns, which are precisely what computational models articulate.

Structuralism comes packaged with an attractive account of the implementation relation between abstract computational models and physical systems. Under what conditions does a physical system implement a computational model? Structuralists say that a physical system implements a model just in case the system’s causal structure is “isomorphic” to the model’s formal structure. A computational model describes a physical system by articulating a formal structure that mirrors some relevant causal topology. Chalmers elaborates this intuitive idea, providing detailed necessary and sufficient conditions for physical realization of CSAs. Few if any alternative conceptions of computation can provide so substantive an account of the implementation relation.
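The intuitive idea can be conveyed with a toy sketch (the two-state model and the hypothetical “voltage” device below are invented for the example): a physical system implements a model when a mapping from physical states to formal states commutes with the dynamics:

    # An abstract model: formal states plus a transition function.
    model_transition = {"s0": "s1", "s1": "s0"}

    # A hypothetical physical device and its causal dynamics.
    physical_dynamics = {"5v": "0v", "0v": "5v"}

    # A realization mapping from physical states to formal states.
    realization = {"5v": "s0", "0v": "s1"}

    # Implementation check: the causal structure mirrors the formal
    # structure just in case mapping-then-transitioning agrees with
    # transitioning-then-mapping for every physical state.
    implements = all(
        realization[physical_dynamics[p]] == model_transition[realization[p]]
        for p in physical_dynamics
    )
    print(implements)  # True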

We may instructively compare structuralist computationalism with some other theories discussed above:

Machine functionalism. Structuralist computationalism embraces the core idea behind machine functionalism: mental states are functional states describable through a suitable computational formalism. Putnam advances CTM as an empirical hypothesis, and he defends functionalism on that basis. In contrast, Chalmers follows David Lewis (1972) by grounding functionalism in the conceptual analysis of mentalistic discourse. Whereas Putnam defends functionalism by defending computationalism, Chalmers defends computationalism by assuming functionalism.

Classical computationalism, connectionism, and computational neuroscience. Structuralist computationalism emphasizes organizationally invariant descriptions, which are multiply realizable. In that respect, it diverges from computational neuroscience. Structuralism is compatible with both classical and connectionist computationalism, but it differs in spirit from those views. Classicists and connectionists present their rival positions as bold, substantive hypotheses. Chalmers advances structuralist computationalism as a relatively minimalist position unlikely to be disconfirmed.

Intentional realism and eliminativism. Structuralist computationalism is compatible with both positions. CSA description does not explicitly mention semantic properties such as reference, truth-conditions, representational content, and so on. Structuralist computationalists need not assign representational content any important role within scientific psychology. On the other hand, structuralist computationalism does not preclude an important role for representational content.

The formal-syntactic conception of computation. Wide content depends on causal-historical relations to the external environment, relations that outstrip causal topology. Thus, CSA description leaves wide content underdetermined. Narrow content presumably supervenes upon causal topology, but CSA description does not explicitly mention narrow contents. Overall, then, structuralist computationalism prioritizes a level of formal, non-semantic computational description. In that respect, it resembles FSC. On the other hand, structuralist computationalists need not say that computation is “insensitive” to semantic properties, so they need not endorse all aspects of FSC.

Although structuralist computationalism is distinct from CTM+FSC, it raises some similar issues. For example, Rescorla (2012) denies that causal topology plays the central explanatory role within cognitive science that structuralist computationalism dictates. He suggests that externalist intentional description rather than organizationally invariant description enjoys explanatory primacy. Coming from a different direction, computational neuroscientists will recommend that we forego organizationally invariant descriptions and instead employ more neurally specific computational models. In response to such objections, Chalmers (2012) argues that organizationally invariant computational description yields explanatory benefits that neither intentional description nor neurophysiological description replicate: it reveals the underlying mechanisms of cognition (unlike intentional description); and it abstracts away from neural implementation details that are irrelevant for many explanatory purposes.

6.4 Mechanistic theories

The mechanistic nature of computation is a recurring theme in logic, philosophy, and cognitive science. Several authors develop this theme into a mechanistic conception of computing systems (Coelho Mollo 2017; Dewhurst 2016; Fresco 2014, 2017; Miłkowski 2013; Piccinini 2007, 2012, 2015). On Gualtiero Piccinini’s (2015) influential development, a functional mechanism is a system of interconnected components, where each component performs some function within the overall system. Mechanistic explanation proceeds by decomposing the system into parts, describing how the parts are organized into the larger system, and isolating the function performed by each part. A computing system is a functional mechanism of a particular kind: it is a mechanism whose components are functionally organized to process vehicles in accord with rules. Echoing Putnam’s discussion of multiple realizability, Piccinini demands that the rules be medium-independent, in that they abstract away from the specific physical implementations of the vehicles. Computational explanation decomposes the system into parts and describes how each part helps the system process the relevant vehicles. If the system processes discretely structured vehicles, then the computation is digital. If the system processes continuous vehicles, then the computation is analog. Marcin Miłkowski’s (2013) version of the mechanistic approach is similar. He differs from Piccinini by pursuing an “information-processing” gloss, so that computational mechanisms operate over information-bearing states. Miłkowski and Piccinini deploy their respective mechanistic theories to defend computationalism. Piccinini (2020) focuses especially upon neural computation, drawing extensive connections with cognitive neuroscience.

Mechanistic computationalists typically individuate computational states non-semantically. They therefore encounter worries about the explanatory role of representational content, similar to worries encountered by FSC and structuralism. Critics protest that mechanistic computationalism does not accommodate cognitive science explanations that are simultaneously computational and representational (Rescorla 2016; Shagrir 2014; Shagrir 2022). The perceived force of this criticism will depend upon one’s sympathy for content-involving computationalism. To defuse the criticism, Miłkowski (2017) retorts that mechanistic computationalists can assign a central theoretical role to representational content by attributing representational functions to certain computing mechanisms.

6.5 Pluralism

We have surveyed various contrasting and sometimes overlapping conceptions of computation: classical computation, connectionist computation, neural computation, formal-syntactic computation, content-involving computation, information-processing computation, functional computation, structuralist computation, and mechanistic computation. Each conception yields a different form of computationalism. Each conception has its own strengths and weaknesses. One might adopt a pluralistic stance that recognizes distinct legitimate conceptions. Rather than elevate one conception above the others, pluralists happily employ whichever conception seems useful in a given explanatory context. Edelman (2008) takes a pluralistic line, as does Chalmers (2012) in his most recent discussion.

The pluralistic line raises some natural questions. Can we provide a general analysis that encompasses all or most types of computation? Do all computations share certain characteristic marks with one another? Are they perhaps instead united by something like family resemblance? Deeper understanding of computation requires us to grapple with these questions.

7. Arguments against computationalism

CTM has attracted numerous objections. In many cases, the objections apply only to specific versions of CTM (such as classical computationalism or connectionist computationalism). Here are a few prominent objections. See also the entry on the Chinese room argument for a widely discussed objection to classical computationalism advanced by John Searle (1980).

7.1 Triviality arguments

A recurring worry is that CTM is trivial, because we can describe almost any physical system as executing computations. Searle (1990) claims that a wall implements any computer program, since we can discern some pattern of molecular movements in the wall that is isomorphic to the formal structure of the program. Putnam (1988: 121–125) defends a less extreme but still very strong triviality thesis along the same lines. Triviality arguments play a large role in the philosophical literature. Anti-computationalists deploy triviality arguments against computationalism, while computationalists seek to avoid triviality.

Computationalists usually rebut triviality arguments by insisting that the arguments overlook constraints upon computational implementation, constraints that bar trivializing implementations. The constraints may be counterfactual, causal, semantic, or otherwise, depending on one’s favored theory of computation. For example, David Chalmers (1995, 1996a) and B. Jack Copeland (1996) hold that Putnam’s triviality argument ignores counterfactual conditionals that a physical system must satisfy in order to implement a computational model. Other philosophers say that a physical system must have representational properties to implement a computational model (Fodor 1998: 11–12; Ladyman 2009; Sprevak 2010) or at least to implement a content-involving computational model (Rescorla 2013). The details here vary considerably, and computationalists debate amongst themselves exactly which types of computation can avoid which triviality arguments. But most computationalists agree that we can avoid any devastating triviality worries through a sufficiently robust theory of the implementation relation between computational models and physical systems.

Pancomputationalism holds that every physical system implements a computational model. This thesis is plausible, since any physical system arguably implements a sufficiently trivial computational model (e.g., a one-state finite state automaton). As Chalmers (2011) notes, pancomputationalism does not seem worrisome for computationalism. What would be worrisome is the much stronger triviality thesis that almost every physical system implements almost every computational model.

For further discussion of triviality arguments and computational implementation, see Sprevak (2019) and the entry computation in physical systems.

7.2 Gödel’s incompleteness theorem

According to some authors, Gödel’s incompleteness theorems show that human mathematical capacities outstrip the capacities of any Turing machine (Nagel and Newman 1958). J.R. Lucas (1961) develops this position into a famous critique of CCTM. Roger Penrose pursues the critique in The Emperor’s New Mind (1989) and subsequent writings. Various philosophers and logicians have answered the critique, arguing that existing formulations suffer from fallacies, question-begging assumptions, and even outright mathematical errors (Bowie 1982; Chalmers 1996b; Feferman 1996; Lewis 1969, 1979; Putnam 1975: 365–366, 1994; Shapiro 2003). There is a wide consensus that this criticism of CCTM lacks any force. It may turn out that certain human mental capacities outstrip Turing-computability, but Gödel’s incompleteness theorems provide no reason to anticipate that outcome.

7.3 Limits of computational modeling

Could a computer compose the Eroica symphony? Or discover general relativity? Or even replicate a child’s effortless ability to perceive the environment, tie her shoelaces, and discern the emotions of others? Intuitive, creative, or skillful human activity may seem to resist formalization by a computer program (Dreyfus 1972, 1992). More generally, one might worry that crucial aspects of human cognition elude computational modeling, especially classical computational modeling.

Ironically, Fodor promulgates a forceful version of this critique. Even in his earliest statements of CCTM, Fodor (1975: 197–205) expresses considerable skepticism that CCTM can handle all important cognitive phenomena. The pessimism becomes more pronounced in his later writings (1983, 2000), which focus especially on abductive reasoning as a mental phenomenon that potentially eludes computational modeling. His core argument may be summarized as follows:

(1)
Turing-style computation is sensitive only to “local” properties of a mental representation, which are exhausted by the identity and arrangement of the representation’s constituents.
(2)
Many mental processes, paradigmatically abduction, are sensitive to “nonlocal” properties such as relevance, simplicity, and conservatism.
(3)
Hence, we may have to abandon Turing-style modeling of the relevant processes.
(4)
Unfortunately, we currently have no idea what alternative theory might serve as a suitable replacement.

Some critics deny (1), arguing that suitable Turing-style computations can be sensitive to “nonlocal” properties (Schneider 2011; Wilson 2005). Some challenge (2), arguing that typical abductive inferences are sensitive only to “local” properties (Carruthers 2003; Ludwig and Schneider 2008; Sperber 2002). Some concede step (3) but dispute step (4), insisting that we have promising non-Turing-style models of the relevant mental processes (Pinker 2005). Partly spurred by such criticisms, Fodor elaborates his argument in considerable detail. To defend (2), he critiques theories that model abduction by deploying “local” heuristic algorithms (2005: 41–46; 2008: 115–126) or by positing a profusion of domain-specific cognitive modules (2005: 56–100). To defend (4), he critiques various theories that handle abduction through non-Turing-style models (2000: 46–53; 2008), such as connectionist networks.

The scope and limits of computational modeling remain controversial. We may expect this topic to remain an active focus of inquiry, pursued jointly with AI.

7.4 Temporal arguments

Mental activity unfolds in time. Moreover, the mind accomplishes sophisticated tasks (e.g., perceptual estimation) very quickly. Many critics worry that computationalism, especially classical computationalism, does not adequately accommodate temporal aspects of cognition. A Turing-style model makes no explicit mention of the time scale over which computation occurs. One could physically implement the same abstract Turing machine with a silicon-based device, or a slower vacuum-tube device, or an even slower pulley-and-lever device. Critics recommend that we reject CCTM in favor of some alternative framework that more directly incorporates temporal considerations. van Gelder and Port (1995) use this argument to promote a non-computational dynamical systems framework for modeling mental activity. Eliasmith (2003, 2013: 12–13) uses it to support his Neural Engineering Framework.

Computationalists respond that we can supplement an abstract computational model with temporal considerations (Piccinini 2010; Weiskopf 2004). For example, a Turing machine model presupposes discrete “stages of computation”, without describing how the stages relate to physical time. But we can supplement our model by describing how long each stage lasts, thereby converting our non-temporal Turing machine model into a theory that yields detailed temporal predictions. Many advocates of CTM employ supplementation along these lines to study temporal properties of cognition (Newell 1990). Similar supplementation figures prominently in computer science, whose practitioners are quite concerned to build machines with appropriate temporal properties. Computationalists conclude that a suitably supplemented version of CTM can adequately capture how cognition unfolds in time.
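A minimal sketch shows how straightforwardly such supplementation proceeds (the stage names and durations below are invented for the example):

    # Pair each abstract computation stage with a physical duration,
    # turning a non-temporal model into one that yields temporal predictions.
    duration_ms = {"read": 2.0, "compare": 1.5, "write": 2.5, "move": 1.0}

    def predicted_time(trace):
        """Total predicted duration (ms) for a sequence of stages."""
        return sum(duration_ms[stage] for stage in trace)

    # Predicted time for 100 iterations of a read-compare-write-move cycle.
    print(predicted_time(["read", "compare", "write", "move"] * 100))  # 700.0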

A second temporal objection highlights the contrast between discrete and continuous temporal evolution (van Gelder and Port 1995). Computation by a Turing machine unfolds in discrete stages, while mental activity unfolds in continuous time. Thus, there is a fundamental mismatch between the temporal properties of Turing-style computation and those of actual mental activity. We need a psychological theory that describes continuous temporal evolution.

Computationalists respond that this objection assumes what is to be shown: that cognitive activity does not fall into explanatorily significant discrete stages (Weiskopf 2004). Assuming that physical time is continuous, it follows that mental activity unfolds in continuous time. It does not follow that cognitive models must have continuous temporal structure. A personal computer operates in continuous time, and its physical state evolves continuously. A complete physical theory will reflect all those physical changes. But our computational model does not reflect every physical change to the computer. Our computational model has discrete temporal structure. Why assume that a good cognitive-level model of the mind must reflect every physical change to the brain? Even if there is a continuum of evolving physical states, why assume a continuum of evolving cognitive states? The mere fact of continuous temporal evolution does not militate against computational models with discrete temporal structure.

For discussion of how to reconcile CTM with a dynamical systems perspective, see Beer and Williams (2015), Phattanasri, Chiel, and Beer (2007), and Weinberger and Allen (2022).

7.5 Embodied cognition

Embodied cognition is a research program that draws inspiration from the continental philosopher Maurice Merleau-Ponty, the perceptual psychologist J.J. Gibson, and other assorted influences. It is a fairly heterogeneous movement, but the basic strategy is to emphasize links between cognition, bodily action, and the surrounding environment. See Varela, Thompson, and Rosch (1991) for an influential early statement. In many cases, proponents deploy tools of dynamical systems theory. Proponents typically present their approach as a radical alternative to computationalism (Chemero 2009; Kelso 1995; Thelen and Smith 1994). CTM, they complain, treats mental activity as static symbol manipulation detached from the embedding environment. It neglects myriad complex ways that the environment causally or constitutively shapes mental activity. We should replace CTM with a new picture that emphasizes continuous links between mind, body, and environment. Agent-environment dynamics, not internal mental computation, holds the key to understanding cognition. Often, a broadly eliminativist attitude towards intentionality propels this critique.

Computationalists respond that CTM allows due recognition of cognition’s embodiment. Computational models can take into account how mind, body, and environment continuously interact. After all, computational models can incorporate sensory inputs and motor outputs. There is no obvious reason why an emphasis upon agent-environment dynamics precludes a dual emphasis upon internal mental computation (Clark 2014: 140–165; Rupert 2009). Computationalists maintain that CTM can incorporate any legitimate insights offered by the embodied cognition movement. They also insist that CTM remains our best overall framework for explaining numerous core psychological phenomena.

Bibliography

  • Aitchison, L. and M. Lengyel, 2016, “The Hamiltonian Brain: Efficient Probabilistic Inference with Excitatory-Inhibitory Neural Circuit Dynamics”, PLoS Computational Biology, 12: e1005186.
  • Akhlaghpour, H., 2022, “An RNA-Based Theory of Natural Universal Computation”, Journal of Theoretical Biology, 537: 110984.
  • Arjo, D., 1996, “Sticking Up for Oedipus: Fodor on Intentional Generalizations and Broad Content”, Mind and Language, 11: 231–245.
  • Aydede, M., 1998, “Fodor on Concepts and Frege Puzzles”, Pacific Philosophical Quarterly, 79: 289–294.
  • –––, 2005, “Computationalism and Functionalism: Syntactic Theory of Mind Revisited”, in Turkish Studies in the History and Philosophy of Science, G. Irzik and G. Güzeldere (eds), Dordrecht: Springer.
  • Aydede, M. and P. Robbins, 2001, “Are Frege Cases Exceptions to Intentional Generalizations?”, Canadian Journal of Philosophy, 31: 1–22.
  • Bayne, T. and I. Williams, 2023, “The Turing Test is not a Good Benchmark for Thought in LLMs”, Nature Human Behaviour, 7: 1806–1807.
  • Bechtel, W. and A. Abrahamsen, 2002, Connectionism and the Mind, Malden: Blackwell.
  • Beer, R. and P. Williams, 2015, “Information Processing and Dynamics in Minimally Cognitive Agents”, Cognitive Science, 39: 1–38.
  • Bermúdez, J.L., 2005, Philosophy of Psychology: A Contemporary Introduction, New York: Routledge.
  • –––, 2010, Cognitive Science: An Introduction to the Science of the Mind, Cambridge: Cambridge University Press.
  • Block, N., 1978, “Troubles With Functionalism”, Minnesota Studies in the Philosophy of Science, 9: 261–325.
  • –––, 1981, “Psychologism and Behaviorism”, Philosophical Review, 90: 5–43.
  • –––, 1983, “Mental Pictures and Cognitive Science”, Philosophical Review, 92: 499–539.
  • –––, 1986, “Advertisement for a Semantics for Psychology”, Midwest Studies in Philosophy, 10: 615–678.
  • –––, 1990, “Can the Mind Change the World?”, in Meaning and Method: Essays in Honor of Hilary Putnam, G. Boolos (ed.), Cambridge: Cambridge University Press.
  • –––, 1995, “The Mind as the Software of the Brain”, in Invitation to Cognitive Science, vol. 3: Thinking, E. Smith and D. Osherson (eds), Cambridge, MA: MIT Press.
  • Block, N. and J. Fodor, 1972, “What Psychological States Are Not”, The Philosophical Review, 81: 159–181.
  • Boden, M., 1991, “Horses of a Different Color?”, in Ramsey et al. 1991: 3–19.
  • Bontly, T., 1998, “Individualism and the Nature of Syntactic States”, The British Journal for the Philosophy of Science, 49: 557–574.
  • Bowers, J., G. Malhotra, M. Dujmović, M. Llera Montero, C. Tsvetkov, V. Biscione, G. Puebla, F. Adolfi, J. Hummel, R. Heaton, B. Evans, J. Mitchell, and R. Blything, 2023, “Deep Problems with Neural Network Models of Human Vision”, Behavioral and Brain Sciences, 46: e386.
  • Bowie, G.L., 1982, “Lucas’s Number is Finally Up”, Journal of Philosophical Logic, 11: 279–285.
  • Brogan, W., 1990, Modern Control Theory, 3rd edition, Englewood Cliffs: Prentice Hall.
  • Brown, T., B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, 2020, “Language Models Are Few-Shot Learners”, Advances in Neural Information Processing Systems, 33: 1877–1901.
  • Buckner, C., 2019, “Deep Learning: A Philosophical Introduction”, Philosophy Compass, 14: e12625.
  • –––, 2024, From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence, Oxford: Oxford University Press.
  • Buckner, C. and J. Garson, 2019, “Connectionism and Post-Connectionist Models”, in Sprevak and Colombo 2019: 175–191.
  • Buesing, L., J. Bill, B. Nessler, and W. Maass, 2011, “Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons”, PLoS Computational Biology, 7: e1002211.
  • Burge, T., 1982, “Other Bodies”, in Thought and Object, A. Woodfield (ed.), Oxford: Oxford University Press. Reprinted in Burge 2007: 82–99.
  • –––, 1986, “Individualism and Psychology”, The Philosophical Review, 95: 3–45. Reprinted in Burge 2007: 221–253.
  • –––, 1989, “Individuation and Causation in Psychology”, Pacific Philosophical Quarterly, 70: 303–322. Reprinted in Burge 2007: 316–333.
  • –––, 1995, “Intentional Properties and Causation”, in Philosophy of Psychology, C. MacDonald and G. MacDonald (eds), Oxford: Blackwell. Reprinted in Burge 2007: 334–343.
  • –––, 2005, “Disjunctivism and Perceptual Psychology”, Philosophical Topics, 33: 1–78.
  • –––, 2007, Foundations of Mind, Oxford: Oxford University Press.
  • –––, 2010a, Origins of Objectivity, Oxford: Oxford University Press.
  • –––, 2010b, “Origins of Perception”, Disputatio, 4: 1–38.
  • –––, 2010c, “Steps Towards Origins of Propositional Thought”, Disputatio, 4: 39–67.
  • –––, 2013, Cognition through Understanding, Oxford: Oxford University Press.
  • Calzavarini, F. and A. Paternoster, 2022, “The Semantic View of Computation and the Argument from the Cognitive Science Practice”, Synthese, 200: 77. doi:10.1007/s11229-022-03542-z
  • Camp, E., 2009, “A Language of Baboon Thought?”, in The Philosophy of Animal Minds, R. Lurz (ed.), Cambridge: Cambridge University Press.
  • Campbell, M., 1999, “Knowledge Discovery in Deep Blue”, Communications of the ACM, 42: 65–67.
  • Carruthers, P., 2003, “On Fodor’s Problem”, Mind and Language, 18: 508–523.
  • Chalmers, D., 1990, “Syntactic Transformations on Distributed Representations”, Connection Science, 2: 53–62.
  • –––, 1993, “Why Fodor and Pylyshyn Were Wrong: The Simplest Refutation”, Philosophical Psychology, 6: 305–319.
  • –––, 1995, “On Implementing a Computation”, Minds and Machines, 4: 391–402.
  • –––, 1996a, “Does a Rock Implement Every Finite State Automaton?”, Synthese, 108: 309–333.
  • –––, 1996b, “Minds, Machines, and Mathematics”, Psyche, 2: 11–20.
  • –––, 2002, “The Components of Content”, in Philosophy of Mind: Classical and Contemporary Readings, D. Chalmers (ed.), Oxford: Oxford University Press.
  • –––, 2011, “A Computational Foundation for the Study of Cognition”, The Journal of Cognitive Science, 12: 323–357.
  • –––, 2012, “The Varieties of Computation: A Reply”, The Journal of Cognitive Science, 13: 213–248.
  • Chemero, A., 2009, Radical Embodied Cognitive Science, Cambridge, MA: MIT Press.
  • Cheney, D. and R. Seyfarth, 2007, Baboon Metaphysics: The Evolution of a Social Mind, Chicago: University of Chicago Press.
  • Chomsky, N., 1965, Aspects of the Theory of Syntax, Cambridge, MA: MIT Press.
  • Church, A., 1936, “An Unsolvable Problem of Elementary Number Theory”, American Journal of Mathematics, 58: 345–363.
  • Churchland, P.M., 1981, “Eliminative Materialism and the Propositional Attitudes”, Journal of Philosophy, 78: 67–90.
  • –––, 1989, A Neurocomputational Perspective: The Nature of Mind and the Structure of Science, Cambridge, MA: MIT Press.
  • –––, 1995, The Engine of Reason, the Seat of the Soul, Cambridge, MA: MIT Press.
  • –––, 2007, Neurophilosophy at Work, Cambridge: Cambridge University Press.
  • Churchland, P.S., 1986, Neurophilosophy, Cambridge, MA: MIT Press.
  • Churchland, P.S., C. Koch, and T. Sejnowski, 1990, “What Is Computational Neuroscience?”, in Computational Neuroscience, E. Schwartz (ed.), Cambridge, MA: MIT Press.
  • Churchland, P.S. and T. Sejnowski, 1992, The Computational Brain, Cambridge, MA: MIT Press.
  • Clark, A., 2014, Mindware: An Introduction to the Philosophy of Cognitive Science, Oxford: Oxford University Press.
  • Clayton, N., N. Emery, and A. Dickinson, 2006, “The Rationality of Animal Memory: Complex Caching Strategies of Western Scrub Jays”, in Rational Animals?, M. Nudds and S. Hurley (eds), Oxford: Oxford University Press.
  • Coelho Mollo, D., 2018, “Functional Individuation, Mechanistic Implementation: The Proper Way of Seeing the Mechanistic View of Concrete Computation”, Synthese, 195: 3477–3497.
  • Copeland, J., 1996, “What is Computation?”, Synthese, 108: 335–359.
  • Cover, T. and J. Thomas, 2006, Elements of Information Theory, Hoboken: Wiley.
  • Crane, T., 1991, “All the Difference in the World”, Philosophical Quarterly, 41: 1–25.
  • Crick, F. and C. Asanuma, 1986, “Certain Aspects of the Anatomy and Physiology of the Cerebral Cortex”, in McClelland et al. 1987: 333–371.
  • Cummins, R., 1989, Meaning and Mental Representation, Cambridge, MA: MIT Press.
  • Davidson, D., 1980, Essays on Actions and Events, Oxford: Clarendon Press.
  • Dayan, P., 2009, “A Neurocomputational Jeremiad”, Nature Neuroscience, 12: 1207.
  • Dennett, D., 1971, “Intentional Systems”, Journal of Philosophy, 68: 87–106.
  • –––, 1987, The Intentional Stance, Cambridge, MA: MIT Press.
  • –––, 1991, “Mother Nature versus the Walking Encyclopedia”, in Ramsey et al. 1991: 21–30.
  • Dewhurst, J., 2018, “Computing Mechanisms Without Proper Functions”, Minds and Machines, 28: 569–588.
  • Dietrich, E., 1989, “Semantics and the Computational Paradigm in Cognitive Psychology”, Synthese, 79: 119–141.
  • Doerig, A., R. Sommers, K. Seeliger, B. Richards, J. Ismael, G. Lindsay, K. Kording, T. Konkle, M. van Gerven, N. Kriegeskorte, and T. Kietzmann, 2023, “The Neuroconnectionist Research Programme”, Nature Reviews Neuroscience, 24: 431–450.
  • Donahoe, J., 2010, “Man as Machine: A Review of Memory and the Computational Brain, by C.R. Gallistel and A.P. King”, Behavior and Philosophy, 38: 83–101.
  • Dretske, F., 1981, Knowledge and the Flow of Information, Oxford: Blackwell.
  • –––, 1993, “Mental Events as Structuring Causes of Behavior”, in Mental Causation, J. Heil and A. Mele (eds), Oxford: Clarendon Press.
  • Dreyfus, H., 1972, What Computers Can’t Do, Cambridge, MA: MIT Press.
  • –––, 1992, What Computers Still Can’t Do, Cambridge, MA: MIT Press.
  • Edelman, S., 2008, Computing the Mind, Oxford: Oxford University Press.
  • –––, 2014, “How to Write a ‘How to Build a Brain’ Book”, Trends in Cognitive Sciences, 18: 118–119.
  • Egan, F., 1991, “Must Psychology be Individualistic?”, Philosophical Review, 100: 179–203.
  • –––, 1992, “Individualism, Computation, and Perceptual Content”, Mind, 101: 443–459.
  • –––, 1999, “In Defense of Narrow Mindedness”, Mind and Language, 14: 177–194.
  • –––, 2003, “Naturalistic Inquiry: Where Does Mental Representation Fit In?”, in Chomsky and His Critics, L. Antony and N. Hornstein (eds), Malden: Blackwell.
  • –––, 2010, “A Modest Role for Content”, Studies in History and Philosophy of Science, 41: 253–259.
  • –––, 2014, “How to Think About Mental Content”, Philosophical Studies, 170: 115–135.
  • –––, 2019, “The Nature and Function of Content in Computational Models”, in Sprevak and Colombo 2019: 247–258.
  • –––, 2020, “A Deflationary Account of Mental Representation”, in What Are Mental Representations?, J. Smortchkova, K. Dołęga, and T. Schlicht (eds), Oxford: Oxford University Press.
  • Eliasmith, C., 2003, “Moving Beyond Metaphors: Understanding the Mind for What It Is”, Journal of Philosophy, 100: 493–520.
  • –––, 2013, How to Build a Brain, Oxford: Oxford University Press.
  • Eliasmith, C. and C.H. Anderson, 2003, Neural Engineering: Computation, Representation and Dynamics in Neurobiological Systems, Cambridge, MA: MIT Press.
  • Elman, J., 1990, “Finding Structure in Time”, Cognitive Science, 14: 179–211.
  • Feferman, S., 1996, “Penrose’s Gödelian Argument”, Psyche, 2: 21–32.
  • Feldman, J. and D. Ballard, 1982, “Connectionist Models and their Properties”, Cognitive Science, 6: 205–254.
  • Field, H., 2001, Truth and the Absence of Fact, Oxford: Clarendon Press.
  • Figdor, C., 2009, “Semantic Externalism and the Mechanics of Thought”, Minds and Machines, 19: 1–24.
  • Floridi, L. and M. Chiriatti, 2020, “GPT-3: Its Nature, Scope, Limits, and Consequences”, Minds and Machines, 30: 681–694.
  • Fodor, J., 1975, The Language of Thought, New York: Thomas Y. Crowell.
  • –––, 1980, “Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology”, Behavioral and Brain Sciences, 3: 63–73. Reprinted in Fodor 1981: 225–253.
  • –––, 1981, Representations, Cambridge, MA: MIT Press.
  • –––, 1983, The Modularity of Mind, Cambridge, MA: MIT Press.
  • –––, 1987, Psychosemantics, Cambridge, MA: MIT Press.
  • –––, 1990, A Theory of Content and Other Essays, Cambridge, MA: MIT Press.
  • –––, 1991, “A Modal Argument for Narrow Content”, Journal of Philosophy, 88: 5–26.
  • –––, 1994, The Elm and the Expert, Cambridge, MA: MIT Press.
  • –––, 1998, Concepts, Oxford: Clarendon Press.
  • –––, 2000, The Mind Doesn’t Work That Way, Cambridge, MA: MIT Press.
  • –––, 2005, “Reply to Steven Pinker ‘So How Does the Mind Work?’”, Mind and Language, 20: 25–32.
  • –––, 2008, LOT2, Oxford: Clarendon Press.
  • Fodor, J. and Z. Pylyshyn, 1988, “Connectionism and Cognitive Architecture: A Critical Analysis”, Cognition, 28: 3–71.
  • Frege, G., 1879/1967, Begriffsschrift, eine der Arithmetischen Nachgebildete Formelsprache des Reinen Denkens. Reprinted as Concept Script, a Formal Language of Pure Thought Modeled upon that of Arithmetic, in From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931, J. van Heijenoort (ed.), S. Bauer-Mengelberg (trans.), Cambridge: Harvard University Press.
  • Fresco, N., 2014, Physical Computation and Cognitive Science, Berlin: Springer.
  • –––, 2021, “Long-Arm Functional Individuation of Computation”, Synthese, 199: 13993–14016.
  • Gallistel, C.R., 1990, The Organization of Learning, Cambridge, MA: MIT Press.
  • Gallistel, C.R. and A. King, 2009, Memory and the Computational Brain, Malden: Wiley-Blackwell.
  • Gandy, R., 1980, “Church’s Thesis and Principles for Mechanism”, in The Kleene Symposium, J. Barwise, H. Keisler, and K. Kunen (eds), Amsterdam: North Holland.
  • Gödel, K., 1936/65, “On Formally Undecidable Propositions of Principia Mathematica and Related Systems”, reprinted with a new Postscript in The Undecidable, M. Davis (ed.), New York: Raven Press Books.
  • Grice, P., 1989, Studies in the Way of Words, Cambridge: Harvard University Press.
  • Hadley, R., 2000, “Cognition and the Computational Power of Connectionist Networks”, Connection Science, 12: 95–110.
  • Harnish, R., 2002, Minds, Brains, Computers, Malden: Blackwell.
  • Haugeland, J., 1985, Artificial Intelligence: The Very Idea, Cambridge, MA: MIT Press.
  • Haykin, S., 2008, Neural Networks: A Comprehensive Foundation, New York: Prentice Hall.
  • Horgan, T. and J. Tienson, 1996, Connectionism and the Philosophy of Psychology, Cambridge, MA: MIT Press.
  • Horowitz, A., 2007, “Computation, External Factors, and Cognitive Explanations”, Philosophical Psychology, 20: 65–80.
  • Illing, B., W. Gerstner, and J. Brea, 2019, “Biologically Plausible Deep Learning—But How Far Can We Go with Shallow Networks?”, Neural Networks, 118: 90–101.
  • Johnson, K., 2004, “On the Systematicity of Language and Thought”, Journal of Philosophy, 101: 111–139.
  • Johnson-Laird, P., 1988, The Computer and the Mind, Cambridge: Harvard University Press.
  • –––, 2004, “The History of Mental Models”, in Psychology of Reasoning: Theoretical and Historical Perspectives, K. Manktelow and M.C. Chung (eds), New York: Psychology Press.
  • Kazez, J., 1995, “Computationalism and the Causal Role of Content”, Philosophical Studies, 75: 231–260.
  • Kelso, J., 1995, Dynamic Patterns, Cambridge, MA: MIT Press.
  • Kingma, D. and M. Welling, 2019, “An Introduction to Variational Autoencoders”, Foundations and Trends in Machine Learning, 12: 307–392.
  • Klein, C., 2012, “Two Paradigms for Individuating Implementations”, Journal of Cognitive Science, 13: 167–179.
  • Kriegeskorte, N., 2015, “Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing”, Annual Review of Vision Science, 1: 417–446.
  • Kriegeskorte, N. and P. Douglas, 2018, “Cognitive Computational Neuroscience”, Nature Neuroscience, 21: 1148–1160.
  • Krizhevsky, A., I. Sutskever, and G. Hinton, 2012, “ImageNet Classification with Deep Convolutional Neural Networks”, Advances in Neural Information Processing Systems, 25: 1097–1105.
  • Krotov, D. and J. Hopfield, 2019, “Unsupervised Learning by Competing Hidden Units”, Proceedings of the National Academy of Sciences, 116: 7723–7731.
  • Ladyman, J., 2009, “What Does it Mean to Say that a Physical System Implements a Computation?”, Theoretical Computer Science, 410: 376–383.
  • LeCun, Y., Y. Bengio, and G. Hinton, 2015, “Deep Learning”, Nature, 521: 436–444.
  • Lewis, D., 1969, “Lucas against Mechanism”, Philosophy, 44: 231–233.
  • –––, 1971, “Analog and Digital”, Noûs, 5: 321–327.
  • –––, 1972, “Psychophysical and Theoretical Identifications”, Australasian Journal of Philosophy, 50: 249–258.
  • –––, 1979, “Lucas Against Mechanism II”, Canadian Journal of Philosophy, 9: 373–376.
  • –––, 1994, “Reduction of Mind”, in A Companion to the Philosophy of Mind, S. Guttenplan (ed.), Oxford: Blackwell.
  • Lillicrap, T., D. Cownden, D. Tweed, and C. Akerman, 2016, “Random Synaptic Feedback Weights Support Error Backpropagation for Deep Learning”, Nature Communications, 7: 13276.
  • Lillicrap, T., A. Santoro, L. Marris, C. Akerman, and G. Hinton, 2020, “Backpropagation and the Brain”, Nature Reviews Neuroscience, 21: 335–346.
  • Lizier, J., B. Flecker, and P. Williams, 2013, “Towards a Synergy-based Account of Measuring Information Modification”, Proceedings of the 2013 IEEE Symposium on Artificial Life (ALIFE), Singapore: 43–51.
  • Lucas, J.R., 1961, “Minds, Machines, and Gödel”, Philosophy, 36: 112–137.
  • Ludwig, K. and S. Schneider, 2008, “Fodor’s Critique of the Classical Computational Theory of Mind”, Mind and Language, 23: 123–143.
  • Ma, W.J., 2019, “Bayesian Decision Models: A Primer”, Neuron, 104: 164–175.
  • Ma, W.J., K. Kording, and D. Goldreich, 2023, Bayesian Models of Perception and Action: An Introduction, Cambridge, MA: MIT Press.
  • Maass, W., 1997, “Networks of Spiking Neurons: The Next Generation of Neural Network Models”, Neural Networks, 10: 1659–1671.
  • MacLennan, B., 2012, “Analog Computation”, in Computational Complexity, R. Meyers (ed.), New York: Springer.
  • Mahowald, K., A. Ivanova, I. Blank, N. Kanwisher, J. Tenenbaum, and E. Fedorenko, 2024, “Dissociating Language and Thought in Large Language Models”, Trends in Cognitive Sciences, 28: 517–540.
  • Maley, C., 2011, “Analog and Digital, Continuous and Discrete”, Philosophical Studies, 155: 117–131.
  • –––, 2023, “Analogue Computation and Representation”, The British Journal for the Philosophy of Science, 74: 739–769.
  • Marblestone, A., G. Wayne, and K. Kording, 2016, “Toward an Integration of Deep Learning and Neuroscience”, Frontiers in Computational Neuroscience, 10: 1–41.
  • Marcus, G., 2001, The Algebraic Mind, Cambridge, MA: MIT Press.
  • Marr, D., 1982, Vision, San Francisco: W.H. Freeman.
  • McClelland, J., D. Rumelhart, and G. Hinton, 1986, “The Appeal of Parallel Distributed Processing”, in Rumelhart et al. 1986: 3–44.
  • McClelland, J., D. Rumelhart, and the PDP Research Group, 1987, Parallel Distributed Processing, vol. 2, Cambridge, MA: MIT Press.
  • McCulloch, W. and W. Pitts, 1943, “A Logical Calculus of the Ideas Immanent in Nervous Activity”, Bulletin of Mathematical Biophysics, 5: 115–133.
  • McDermott, D., 2001, Mind and Mechanism, Cambridge, MA: MIT Press.
  • Mendola, J., 2008, Anti-Externalism, Oxford: Oxford University Press.
  • Miłkowski, M., 2013, Explaining the Computational Mind, Cambridge, MA: MIT Press.
  • –––, 2017, “The False Dichotomy between Causal Realization and Semantic Computation”, Hybris. Internetowy Magazyn Filozoficzny, 38: 1–21.
  • Miller, P., 2018, An Introductory Course in Computational Neuroscience, Cambridge, MA: MIT Press.
  • Mole, C., 2014, “Dead Reckoning in the Desert Ant: A Defense of Connectionist Models”, Review of Philosophy and Psychology, 5: 277–290.
  • Murphy, K., 2023, Probabilistic Machine Learning: Advanced Topics, Cambridge, MA: MIT Press.
  • Nagel, E. and J.R. Newman, 1958, Gödel’s Proof, New York: New York University Press.
  • Naselaris, T., D. Bassett, A. Fletcher, K. Körding, N. Kriegeskorte, H. Nienborg, R. Poldrack, D. Shohamy, and K. Kay, 2018, “Cognitive Computational Neuroscience: A New Conference for an Emerging Discipline”, Trends in Cognitive Sciences, 22: 365–367.
  • Newell, A., 1990, Unified Theories of Cognition, Cambridge: Harvard University Press.
  • Newell, A. and H. Simon, 1956, “The Logic Theory Machine: A Complex Information Processing System”, IRE Transactions on Information Theory, IT-2, 3: 61–79.
  • –––, 1976, “Computer Science as Empirical Inquiry: Symbols and Search”, Communications of the ACM, 19: 113–126.
  • Ockham, W., 1957, Summa Logicae, in his Philosophical Writings, A Selection, P. Boehner (ed. and trans.), London: Nelson.
  • O’Keefe, J. and L. Nadel, 1978, The Hippocampus as a Cognitive Map, Oxford: Clarendon Press.
  • Orhan, A.E. and W.J. Ma, 2017, “Efficient Probabilistic Inference in Generic Neural Networks Trained with Non-probabilistic Feedback”, Nature Communications, 8: 1–14.
  • Peacocke, C., 1992, A Study of Concepts, Cambridge, MA: MIT Press.
  • –––, 1993, “Externalist Explanation”, Proceedings of the Aristotelian Society, 67: 203–230.
  • –––, 1994, “Content, Computation, and Externalism”, Mind and Language, 9: 303–335.
  • –––, 1999, “Computation as Involving Content: A Response to Egan”, Mind and Language, 14: 195–202.
  • Penrose, R., 1989, The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics, Oxford: Oxford University Press.
  • Perry, J., 1998, “Broadening the Mind”, Philosophy and Phenomenological Research, 58: 223–231.
  • Phattanasri, P., H. Chiel, and R. Beer, 2007, “The Dynamics of Associative Learning in Evolved Model Circuits”, Adaptive Behavior, 15: 377–396.
  • Piantadosi, S., J. Tenenbaum, and N. Goodman, 2012, “Bootstrapping in a Language of Thought”, Cognition, 123: 199–217.
  • Piccinini, G., 2004, “Functionalism, Computationalism, and Mental States”, Studies in History and Philosophy of Science, 35: 811–833.
  • –––, 2007, “Computing Mechanisms”, Philosophy of Science, 74: 501–526.
  • –––, 2008a, “Computation Without Representation”, Philosophical Studies, 137: 205–241.
  • –––, 2008b, “Some Neural Networks Compute, Others Don’t”, Neural Networks, 21: 311–321.
  • –––, 2010, “The Resilience of Computationalism”, Philosophy of Science, 77: 852–861.
  • –––, 2012, “Computationalism”, in The Oxford Handbook of Philosophy and Cognitive Science, E. Margolis, R. Samuels, and S. Stich (eds), Oxford: Oxford University Press.
  • –––, 2015, Physical Computation: A Mechanistic Account, Oxford: Oxford University Press.
  • –––, 2020, Neurocognitive Mechanisms: Explaining Biological Cognition, Oxford: Oxford University Press.
  • Piccinini, G. and A. Scarantino, 2010, “Computation vs. Information Processing: Why their Difference Matters to Cognitive Science”, Studies in History and Philosophy of Science, 41: 237–246.
  • Piccinini, G. and S. Bahar, 2013, “Neural Computation and the Computational Theory of Cognition”, Cognitive Science, 37: 453–488.
  • Piccinini, G. and O. Shagrir, 2014, “Foundations of Computational Neuroscience”, Current Opinion in Neurobiology, 25: 25–30.
  • Pinker, S., 2005, “So How Does the Mind Work?”, Mind and Language, 20: 1–24.
  • Pinker, S. and A. Prince, 1988, “On Language and Connectionism”, Cognition, 28: 73–193.
  • Pouget, A., J. Beck, W.J. Ma, and P. Latham, 2013, “Probabilistic Brains: Knowns and Unknowns”, Nature Neuroscience, 16: 1170–1178.
  • Putnam, H., 1967, “Psychophysical Predicates”, in Art, Mind, and Religion, W. Capitan and D. Merrill (eds), Pittsburgh: University of Pittsburgh Press. Reprinted in Putnam 1975 as “The Nature of Mental States”: 429–440.
  • –––, 1975, Mind, Language, and Reality: Philosophical Papers, vol. 2, Cambridge: Cambridge University Press.
  • –––, 1983, Realism and Reason: Philosophical Papers, vol. 3, Cambridge: Cambridge University Press.
  • –––, 1988, Representation and Reality, Cambridge, MA: MIT Press.
  • –––, 1994, “The Best of All Possible Brains?”, The New York Times, November 20, 1994: 7.
  • Pylyshyn, Z., 1984, Computation and Cognition, Cambridge, MA: MIT Press.
  • Quine, W.V.O., 1960, Word and Object, Cambridge, MA: MIT Press.
  • Ramsey, W., S. Stich, and D. Rumelhart (eds), 1991, Philosophy and Connectionist Theory, Hillsdale: Lawrence Erlbaum Associates.
  • Rescorla, M., 2009a, “Chrysippus’s Dog as a Case Study in Non-Linguistic Cognition”, in The Philosophy of Animal Minds, R. Lurz (ed.), Cambridge: Cambridge University Press.
  • –––, 2009b, “Cognitive Maps and the Language of Thought”, The British Journal for the Philosophy of Science, 60: 377–407.
  • –––, 2012, “How to Integrate Representation into Computational Modeling, and Why We Should”, Journal of Cognitive Science, 13: 1–38.
  • –––, 2013, “Against Structuralist Theories of Computational Implementation”, British Journal for the Philosophy of Science, 64: 681–707.
  • –––, 2014, “The Causal Relevance of Content to Computation”, Philosophy and Phenomenological Research, 88: 173–208.
  • –––, 2015, “Bayesian Perceptual Psychology”, in The Oxford Handbook of the Philosophy of Perception, M. Matthen (ed.), Oxford: Oxford University Press.
  • –––, 2016, “Review of Gualtiero Piccinini’s Physical Computation”, British Journal for the Philosophy of Science Review of Books. [Rescorla 2016 available online]
  • –––, 2017a, “From Ockham to Turing—and Back Again”, in Turing 100: Philosophical Explorations of the Legacy of Alan Turing (Boston Studies in the Philosophy and History of Science), A. Bokulich and J. Floyd (eds), Cham: Springer.
  • –––, 2017b, “Levels of Computational Explanation”, in Philosophy and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and Ethics, T. Powers (ed.), Cham: Springer.
  • –––, forthcoming, Bayesian Models of the Mind, Cambridge: Cambridge University Press.
  • Rogers, T. and J. McClelland, 2014, “Parallel Distributed Processing at 25: Further Explorations of the Microstructure of Cognition”, Cognitive Science, 38: 1024–1077.
  • Rumelhart, D., 1989, “The Architecture of Mind: A Connectionist Approach”, in Foundations of Cognitive Science, M. Posner (ed.), Cambridge, MA: MIT Press.
  • Rumelhart, D., G. Hinton, and R. Williams, 1986, “Learning Representations by Back-propagating Errors”, Nature, 323: 533–536.
  • Rumelhart, D. and J. McClelland, 1986, “PDP Models and General Issues in Cognitive Science”, in Rumelhart et al. 1986: 110–146.
  • Rumelhart, D., J. McClelland, and the PDP Research Group, 1986, Parallel Distributed Processing, vol. 1, Cambridge, MA: MIT Press.
  • Rupert, R., 2008, “Frege’s Puzzle and Frege Cases: Defending a Quasi-Syntactic Solution”, Cognitive Systems Research, 9: 76–91.
  • –––, 2009, Cognitive Systems and the Extended Mind, Oxford: Oxford University Press.
  • Russell, S. and P. Norvig, 2022, Artificial Intelligence: A Modern Approach, 4th edition (Global edition), Harlow: Pearson.
  • Sawyer, S., 2000, “There Is No Viable Notion of Narrow Content”, in Contemporary Debates in Philosophy of Mind, B. McLaughlin and J. Cohen (eds), Malden: Blackwell.
  • Schneider, S., 2005, “Direct Reference, Psychological Explanation, and Frege Cases”, Mind and Language, 20: 423–447.
  • –––, 2011, The Language of Thought: A New Philosophical Direction, Cambridge, MA: MIT Press.
  • Searle, J., 1980, “Minds, Brains, and Programs”, Behavioral and Brain Sciences, 3: 417–457.
  • –––, 1990, “Is the Brain a Digital Computer?”, Proceedings and Addresses of the American Philosophical Association, 64: 21–37.
  • Segal, G., 2000, A Slim Book About Narrow Content, Cambridge, MA: MIT Press.
  • Shagrir, O., 2001, “Content, Computation, and Externalism”, Mind, 110: 369–400.
  • –––, 2006, “Why We View the Brain as a Computer”, Synthese, 153: 393–416.
  • –––, 2014, “Review of Explaining the Computational Mind, by Marcin Miłkowski”, Notre Dame Philosophical Reviews, January 2014.
  • –––, 2020, “In Defense of the Semantic View of Computation”, Synthese, 197: 4083–4108.
  • –––, 2022, The Nature of Physical Computation, Oxford: Oxford University Press.
  • Shannon, C., 1948, “A Mathematical Theory of Communication”, Bell System Technical Journal, 27: 379–423, 623–656.
  • Shapiro, S., 2003, “Truth, Mechanism, and Penrose’s New Argument”, Journal of Philosophical Logic, 32: 19–42.
  • Shea, N., 2013, “Naturalizing Representational Content”, Philosophy Compass, 8: 496–509.
  • –––, 2018, Representation in Cognitive Science, Oxford: Oxford University Press.
  • Sieg, W., 2009, “On Computability”, in Philosophy of Mathematics, A. Irvine (ed.), Burlington: Elsevier.
  • Siegelmann, H. and E. Sontag, 1991, “Turing Computability with Neural Nets”, Applied Mathematics Letters, 4: 77–80.
  • –––, 1995, “On the Computational Power of Neural Nets”, Journal of Computer and System Sciences, 50: 132–150.
  • Silver, D., J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, 2016, “Mastering the Game of Go with Deep Neural Networks and Tree Search”, Nature, 529: 484–489.
  • Silverberg, A., 2006, “Chomsky and Egan on Computational Theories of Vision”, Minds and Machines, 16: 495–524.
  • Sloman, A., 1978, The Computer Revolution in Philosophy, Hassocks: The Harvester Press.
  • Smolensky, P., 1988, “On the Proper Treatment of Connectionism”, Behavioral and Brain Sciences, 11: 1–74.
  • –––, 1991, “Connectionism, Constituency, and the Language of Thought”, in Meaning in Mind: Fodor and His Critics, B. Loewer and G. Rey (eds), Cambridge: Blackwell.
  • Sperber, D., 2002, “In Defense of Massive Modularity”, in Language, Brain, and Cognitive Development: Essays in Honor of Jacques Mehler, E. Dupoux (ed.), Cambridge, MA: MIT Press.
  • Sprevak, M., 2010, “Computation, Individuation, and the Received View on Representation”, Studies in History and Philosophy of Science, 41: 260–270.
  • –––, 2019, “Triviality Arguments About Computational Implementation”, in Sprevak and Colombo 2019: 175–191.
  • –––, 2020, “Two Kinds of Information Processing in Cognition”, Review of Philosophy and Psychology, 11: 591–611.
  • Sprevak, M. and M. Colombo (eds), 2019, The Routledge Handbook of the Computational Mind, New York: Routledge.
  • Stalnaker, R., 1999, Context and Content, Oxford: Oxford University Press.
  • Stich, S., 1983, From Folk Psychology to Cognitive Science, Cambridge, MA: MIT Press.
  • Storrs, K., T. Kietzmann, A. Walther, J. Mehrer, and N. Kriegeskorte, 2021, “Diverse Deep Neural Networks All Predict Human Inferior Temporal Cortex Well, After Training and Fitting”, Journal of Cognitive Neuroscience, 33: 2044–2064.
  • Thelen, E. and L. Smith, 1994, A Dynamical Systems Approach to the Development of Cognition and Action, Cambridge, MA: MIT Press.
  • Thrun, S., W. Burgard, and D. Fox, 2005, Probabilistic Robotics, Cambridge, MA: MIT Press.
  • Thrun, S., M. Montemerlo, H. Dahlkamp, et al., 2006, “Stanley: The Robot That Won the DARPA Grand Challenge”, Journal of Field Robotics, 23: 661–692.
  • Tolman, E., 1948, “Cognitive Maps in Rats and Men”, Psychological Review, 55: 189–208.
  • Trappenberg, T., 2010, Fundamentals of Computational Neuroscience, Oxford: Oxford University Press.
  • Turing, A., 1936, “On Computable Numbers, with an Application to the Entscheidungsproblem”, Proceedings of the London Mathematical Society, 42: 230–265.
  • –––, 1950, “Computing Machinery and Intelligence”, Mind, 59: 433–460.
  • Ulmann, B., 2023, Analog Computation, 2nd edition, Boston: de Gruyter.
  • van Gelder, T., 1990, “Compositionality: A Connectionist Variation on a Classical Theme”, Cognitive Science, 14: 355–384.
  • van Gelder, T. and R. Port, 1995, “It’s About Time: An Overview of the Dynamical Approach to Cognition”, in Mind as Motion: Explorations in the Dynamics of Cognition, R. Port and T. van Gelder (eds), Cambridge, MA: MIT Press.
  • Varela, F., E. Thompson, and E. Rosch, 1991, The Embodied Mind: Cognitive Science and Human Experience, Cambridge, MA: MIT Press.
  • von Neumann, J., 1945, “First Draft of a Report on the EDVAC”, Moore School of Electrical Engineering, University of Pennsylvania, Philadelphia, PA.
  • Wakefield, J., 2002, “Broad versus Narrow Content in the Explanation of Action: Fodor on Frege Cases”, Philosophical Psychology, 15: 119–133.
  • Weinberger, N. and C. Allen, 2022, “Static-Dynamic Hybridity in Dynamical Models of Cognition”, Philosophy of Science, 89: 1–20.
  • Weiskopf, D., 2004, “The Place of Time in Cognition”, British Journal for the Philosophy of Science, 55: 87–105.
  • Whitehead, A.N. and B. Russell, 1925, Principia Mathematica, vol. 1, 2nd edition, Cambridge: Cambridge University Press.
  • Whittington, J. and R. Bogacz, 2017, “An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity”, Neural Computation, 29: 1229–1262.
  • Wilson, R., 2003, “Causal Relevance”, Philosophical Issues, 13: 316–327.
  • –––, 2005, “What Computers (Still, Still) Can’t Do”, in New Essays in Philosophy of Language and Mind, R. Stainton, M. Ezcurdia, and C.D. Viger (eds), Canadian Journal of Philosophy (supplementary issue 30): 407–425.
  • Zednik, C., 2019, “Computational Cognitive Neuroscience”, in Sprevak and Colombo 2019: 357–369.
  • Zhuang, C., S. Yan, A. Nayebi, M. Schrimpf, M. Frank, J. DiCarlo, and D. Yamins, 2021, “Unsupervised Neural Network Models of the Ventral Visual Stream”, Proceedings of the National Academy of Sciences, 118: e2014196118.
  • Zylberberg, A., S. Dehaene, P. Roelfsema, and M. Sigman, 2011, “The Human Turing Machine”, Trends in Cognitive Sciences, 15: 293–300.

Other Internet Resources

Related Entries

analogy and analogical reasoning | anomalous monism | causation: the metaphysics of | Chinese room argument | Church-Turing Thesis | cognitive science | computability and complexity | computation: in physical systems | computer science, philosophy of | computing: modern history of | connectionism | culture: and cognitive science | externalism about the mind | folk psychology: as mental simulation | frame problem | functionalism | Gödel, Kurt | Gödel, Kurt: incompleteness theorems | Hilbert, David: program in the foundations of mathematics | language of thought hypothesis | mental causation | mental content: causal theories of | mental content: narrow | mental content: teleological theories of | mental imagery | mental representation | mental representation: in medieval philosophy | mind/brain identity theory | models in science | multiple realizability | other minds | reasoning: automated | reasoning: defeasible | reduction, scientific | simulations in science | Turing, Alan | Turing machines | Turing test | zombies

Copyright © 2024 by
Michael Rescorla <rescorla@ucla.edu>
