Stanford Encyclopedia of Philosophy

Embodied Cognition

First published Fri Jun 25, 2021

Embodied Cognition is a wide-ranging research program drawing from and inspiring work in psychology, neuroscience, ethology, philosophy, linguistics, robotics, and artificial intelligence. Whereas traditional cognitive science also encompasses these disciplines, it finds common purpose in a conception of mind wedded to computationalism: mental processes are computational processes; the brain, qua computer, is the seat of cognition. In contrast, embodied cognition variously rejects or reformulates the computational commitments of cognitive science, emphasizing the significance of an agent’s physical body in cognitive abilities. Unifying investigators of embodied cognition is the idea that the body or the body’s interactions with the environment constitute or contribute to cognition in ways that require a new framework for its investigation. Mental processes are not, or not only, computational processes. The brain is not a computer, or not the seat of cognition.

Once a fringe movement, embodied cognition now enjoys a fair amount of prominence. Unlike, say, ecological psychology, which has faced an uphill battle for mainstream acceptance, embodied cognition has gained a substantial following. The appointment of researchers who take an embodied perspective on cognition would, today, raise few eyebrows. Embodied cognition has been the subject of numerous articles in popular outlets. Moreover, there is not an area of cognitive science—perception, language, learning, memory, categorization, problem solving, emotion, social cognition—that has not received an embodied “make-over.”

None of this is to say, of course, that embodied cognition does not face hard questions, or has escaped harsh criticism. The numerous and sometimes incompatible claims it makes about the body’s role in cognition and the myriad methods it employs for understanding this role make it ripe for philosophical reflection. Critics charge embodied cognition with embracing a depleted conception of cognition, or with not offering a genuine replacement for computational cognitive science, or with claiming that bodies play a constitutive role in cognition when in fact their role is merely causal. Proponents have responded to all of these objections. A welcome byproduct of these debates is a new perspective on some old philosophical questions concerning what minds are, what concepts are, and how to understand the nature and significance of representation.

1. The Foils and Inspirations for Embodied Cognition

The ontological and methodological commitments of traditional computational cognitive science, which have been in play since at least the mid-twentieth century, are by now well understood. Early or influential applications of computationalism to cognition include theories of language acquisition (Chomsky 1959), attention (Broadbent 1958), problem solving (Newell, Shaw, and Simon 1958), memory (Sternberg 1969), and perception (Marr 1982). Common to all computationally-oriented research is the idea that cognition involves a step-wise series of events, beginning with the transduction of stimulus energy into a symbolic expression, followed by transformations of this expression according to various rules, the result of which is a particular output—a grammatical linguistic utterance, isolation of one stream of words from another, a solution to a logic problem, the identification of a stimulus as being among a set of memorized stimuli, or a 3-D perception of the world.

The symbolic expressions over which cognitive processes operate, as well as the rules according to which these operations proceed, appear as representational states internal to the cognizing agent. They are individuated in terms of what they are about (phonemes, light intensity, edges, shapes, etc.). All of this cognitive activity takes place in the agent’s nervous system. It is in virtue of the activation of the nervous system that stimuli become encoded into a “mentalese” language of thought, akin to the programming languages found in ordinary computers; similarly, the rules dictating the manipulation of symbols in the language of thought are like the instructions that a CPU executes in the course of carrying out a task. Rather than running spreadsheets or displaying Tetris pieces, the computational brain produces language, or perceives the world, or retrieves items from memory. The methods of computational cognitive science reflect these ontological commitments. Experiments are designed to reveal the content of representational states or to uncover the steps by which mental algorithms transform input into output.

So pervasive has this computational conception of cognition been over the past decades that many cognitive scientists would be happy to identify cognition with computation, giving little thought to the possibility of alternatives. Certainly the great strides toward understanding cognition that the advent of computationalism has made possible invite the idea that computational cognitive science, if not the only game in town, is likely the best. However, ecological psychology, which J.J. Gibson (1966; 1979) began to develop around the time that computationalism came to dominate psychological practice, rejected nearly every plank of the information processing model of cognition that computational cognitive science epitomizes. More recently, connectionist cognitive science has challenged the symbolist commitments of computationalism even while conceding a role for computational processes. Both ecological psychology and connectionist psychology have played significant roles in the rise of embodied cognition, and so a brief discussion of their points of influence is necessary to understand the “embodied turn.” Likewise, some embodied cognition researchers draw on a very different source for inspiration—the phenomenological tradition, with special attention to Merleau-Ponty’s contributions. The next three subsections examine these various strands of influence.

1.1 Ecological Psychology

A primary disagreement between computational and ecological psychologists concerns the nature of the stimuli to which organisms are exposed. Computationalists largely regard these stimuli as, in Chomsky’s terminology, impoverished (Chomsky 1980). The linguistic utterances with which an infant comes in contact do not, on their own, contain sufficient information to indicate the grammar of a language. Similarly, the visual information present in the light that stimulates an organism’s retina does not, on its own, specify the layout of surfaces in the organism’s environment. Visual perception faces an “inverse optics” problem. For any pattern of light on a retina, there exists an infinite number of possible distal surfaces capable of producing that pattern. The visual system thus seems to confront an impossible task—while it is possible to calculate the pattern of light a reflecting surface will produce on the retina, the inverse of this problem appears to be unsolvable, and yet visual systems solve it all the time and, phenomenologically speaking, immediately.

Computationalists regard the inescapable poverty of stimuli as placing on cognitive systems a need to draw inferences. Just as background knowledge allows you to infer from the footprints in the snow that a deer has passed by, cognitive systems, according to computationalists, rely on sub-conscious background knowledge to infer what the world is like given the partial clues the stimuli offer. The perception of an object’s size, for instance, would, according to the computationalist, be inferred on the basis of the size of the retinal image of the object together with knowledge of the object’s distance from the viewer. Perception of an object’s shape, similarly, is inferred from the shape of the retinal image along with knowledge of the object’s orientation relative to the viewer.

Ecological psychologists, on the other hand, deny that organisms encounter impoverished stimuli (Michaels and Palatinus 2014). Such a view, they believe, falsely identifies whole sensory systems with their parts—with eyes, or with retinal images, or with brain activity. Visual perceptual processes, for instance, are not exclusive to the eye or even the brain, but involve the whole organism as it moves about its environment. The motions of an organism create an ever-changing pattern of stimulation in which invariant features surface. The detection of these invariants, according to the ecological psychologist, provides all the information necessary for perception. Perception of an object’s shape, for instance, becomes apparent as a result of detecting the kinds of transformations in the stimulus pattern that occur when approaching or moving around the object. The edges of a square, for instance, will create patterns of light quite different from those that a diamond would reflect as one moves toward or around a square, thus eliminating the need for rule-guided inferences, drawing upon background knowledge, to distinguish the square from a diamond. Insights like these have encouraged embodied cognition proponents to seek explanations of cognition that minimize or disavow entirely the role of inference and, hence, the need for computation. Just as perception, according to the ecological psychologist, is an extended process involving whole organisms in motion through their environments, the same may well be true for many other cognitive achievements.

1.2 Connectionism

Connectionist systems offer a means of computation that, in many cases, eschews the symbolist commitments of computational cognitive science. In contrast to the computer that operates on symbols on the basis of internally stored rules, a connectionist system consists in networks of nodes that excite or inhibit each other’s activity according to weighted connections between them. Different stimuli will affect input nodes differently, causing distinct patterns of activation in deeper layers of nodes depending on the values of activation that the input nodes send upstream. The result of this activity will reveal itself in the activation values of a final layer of nodes—the output nodes.

Connectionist networks thus compute—they transform input activation values into output activation values—but the imputation of symbolic structures within this computational process, as well as explicit rules by which a CPU executes operations upon these symbols, appears unfounded. As Hatfield (1991) describes connectionist networks, they are non-cognitive, in the sense that their operation involves none of the trappings of cognition upon which computationalists insist, and yet still computational, insofar as stimulation of their input nodes creates patterns of activation that lead to particular activation values in output nodes. For more on connectionism generally, see the entry on connectionism.
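The flow of activation just described can be sketched in a few lines of code. The sketch below is purely illustrative: the network shape, weights, and stimulus values are invented for the example and do not come from any particular connectionist model. What it shows is that nothing in the computation involves stored symbols or explicit rules—only weighted excitation and inhibition passed from layer to layer.

```python
import math

def sigmoid(x):
    # Squashing function: maps any weighted sum to an activation in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each node's activation is the squashed, weighted sum of the previous
    # layer's activations. Positive weights excite; negative weights inhibit.
    return [sigmoid(sum(w * a for w, a in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Arbitrary illustrative weights (two input nodes, two hidden, one output).
hidden_w = [[2.0, -1.0], [-1.5, 2.5]]
hidden_b = [0.0, -0.5]
output_w = [[1.0, 1.0]]
output_b = [-1.0]

stimulus = [0.8, 0.2]          # activation values arriving at the input nodes
hidden = layer(stimulus, hidden_w, hidden_b)
output = layer(hidden, output_w, output_b)
```

A different stimulus would produce a different pattern of hidden-layer activation and hence a different output value; at no point does the process consult a rule or manipulate a symbol, which is the sense in which Hatfield calls such networks non-cognitive yet still computational.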

Many embodied cognition researchers saw in connectionist networks a new way to conceptualize cognition and, accordingly, to explain cognitive processes. Non-symbolic explanations of cognition, despite the “only game in town” mantra of computationalists (Fodor 1987), might be possible after all. Adding momentum to the connectionist challenge was the realization that the mathematics of dynamical systems theory could often illuminate the unfolding patterns of activity in connectionist networks and could as well be extended to include within its explanatory scope the body-environment interactions between which connectionist networks reside. Consequently, some embodied cognition researchers have argued that dynamical systems theory offers the best framework from which to understand cognition.

1.3 Phenomenology

Another source of inspiration for embodied cognition is the phenomenological tradition. Phenomenology investigates the nature and structure of our conscious, lived experiences. Although the subjects of phenomenological analyses can vary widely, from perception to imagination, emotion, willing, and intentional physical movements, all phenomenological analyses aim to elucidate the intentional structure of consciousness. They do so by analyzing our conscious experiences in terms of temporal, spatial, attentional, kinesthetic, social, and self awareness. In contrast to computational accounts of the mind that model consciousness in terms of input, processing, and output, phenomenological accounts ground consciousness in a host of rich and varied attentional experiences, which with practice can be described and analyzed. For more on phenomenology, see the entry on phenomenology.

Some variations of embodied cognition are inspired by the works of phenomenologists like Martin Heidegger (1975), Edmund Husserl (1929), and Maurice Merleau-Ponty (1962), who emphasize the physical embodiment of our conscious cognitive experiences. These thinkers analyze the various ways in which our bodies shape our thoughts and how we experience our conscious activities. Some even argue that consciousness is constituted by embodiment. Merleau-Ponty, for example, argues that consciousness itself is embodied:

Insofar as, when I reflect on the essence of subjectivity, I find it bound up with that of the body and that of the world, this is because my existence as subjectivity [= consciousness] is merely one with my existence as a body and with the existence of the world, and because the subject that I am, when taken concretely, is inseparable from this body and this world. (Merleau-Ponty 1962, p. 408)

This phenomenological influence can be seen clearly in embodied cognition analyses of the relation between mind and body. These analyses reject the idea that mentality is fundamentally different and separate from physicality, and the corollary idea that others’ mentality is somehow hidden from view. Inspired by Husserl and other phenomenologists, embodied cognition proponents argue that Cartesian-style analyses of the mind and the body fundamentally misconstrue cognition (Gallagher and Zahavi 2008). Cognition is not purely or even typically an intellectual, solipsistic introspection in the way Descartes’ Meditations suggest. Rather, cognition is physically interactive, embedded in physical contexts, and manifested in physical bodies. Even contemporary philosophers and cognitive scientists who reject mind-body dualism may fall into the trap of intuitively regarding mental and physical as distinct, and thereby accept the idea that we must infer the existence and nature of other minds from indirect cues. From the perspective that phenomenologists favor, however, all cognition is embodied, interactive, and embedded in dynamically changing environments. Attention to the way in which our own conscious experiences are structured by our bodies and environments reveals that there is no substantial distinction between mind and body. The embodiment of cognition makes our own and others’ minds just as observable as any other feature of the world. In other words, phenomenological analysis of our conscious experiences reveals the Mind-Body Problem and the Problem of Other Minds to be merely illusory problems. This phenomenological analysis of the relation between mind and body and our relation to other minds deeply influenced proponents of embodied cognition such as Shaun Gallagher (2005), Dan Zahavi (2005), and Evan Thompson (2010).

2. Embodied Cognition: Themes and Close Relations

Unlike computational cognitive science, the commitments of which can be readily identified, embodied cognition is better characterized as a research program with no clear defining features other than the tenet that computational cognitive science has failed to appreciate the body’s significance in cognitive processing, and that to do so requires a dramatic re-conceptualization of the nature of cognition and how it must be investigated. Different researchers view the body’s significance for cognition as entailing different consequences for the subject matter and practice of cognitive science. Nevertheless, through this very broad diversity of views it is possible to extract three major themes around which discussion of embodied cognition can be organized (see Shapiro 2012; 2019a).

2.1 Three Themes of Embodied Cognition

The three themes of embodiment around which most of the following discussion will be organized are as follows.

Conceptualization: The properties of an organism’s body limit or constrain the concepts an organism can acquire. That is, the concepts by which an organism understands its environment depend on the nature of its body in such a way that differently embodied organisms would understand their environments differently.

Replacement: The array of computationally-inspired concepts, including symbol, representation, and inference, on which traditional cognitive science has drawn must be abandoned in favor of others that are better suited to the investigation of bodily-informed cognitive systems.

Constitution: The body (and, perhaps, parts of the world) does more than merely contribute causally to cognitive processes: it plays a constitutive role in cognition, literally as a part of a cognitive system. Thus, cognitive systems consist in more than just the nervous system and sensory organs.

The theses above are not intended to be mutually exclusive—embodied cognition research might show tendencies toward more than one at a time. Similarly, descriptions of embodied cognition might be organized around a larger number of narrower themes (M. Wilson 2002); however, efforts to broaden the themes, thereby reducing their number, risk generalizing the description of embodied cognition to the extent that its purported novelty is jeopardized.

Before examining how these themes receive expression, it is worth pausing to compare embodied cognition to some closely related research areas. Sometimes embodied cognition is distinguished from embedded cognition, as well as extended cognition and enactive cognition. However, despite the distinctions between the four “Es”—embodied, embedded, enactive, and extended—it is not uncommon to use the label “embodied” to include any or all of these “Es”. The E-fields share the view, after all, that the brain-centrism of traditional cognitive science, as well as its dependence on the computer for inspiration, stands in the way of a correct understanding of cognition.

2.2 Embedded Cognition

Embedded cognition assumes that cognitive tasks—dividing a number into fractions, navigating a large ship, retrieving the correct book from a shelf—require some quantity of cognitive effort. The cognitive “load” that a task requires can be reduced when the agent embeds herself within an appropriately designed physical or social environment. For instance, Martin and Schwartz (2005) found that children are more successful at calculating ¼ of 8 when allowed to manipulate pie pieces than if only viewing the pieces. The cognitive load required to navigate a large Navy vessel exceeds the capacity of any single individual, but can be distributed across a number of specialists, each with his or her own particular task (Hutchins 1996). Arranging books on a shelf alphabetically makes searching for a particular title much easier than it would be if the books were simply set randomly upon the shelf. In all of these cases, the cognitive capacities of an individual are enhanced when provided with the opportunity to interact with features of a suitably organized physical or social environment.

2.3 Extended Cognition

Close kin to embedded cognition, extended cognition moves from the claim that cognition is embedded to claim, additionally, that the environmental and social resources that enhance the cognitive capacities of an agent are in fact constituents of a larger cognitive system, rather than merely useful tools for a cognitive system that retains its traditional location wholly within an agent’s nervous system (Clark and Chalmers 1998; Menary 2008). Some interpret the thesis of extended cognition to mean that cognition actually takes place outside the nervous system—within the extra-cranial resources involved in the cognitive task (Adams and Aizawa 2001; 2008; 2009; 2010). Others interpret the thesis more modestly, as claiming that parts of an agent’s environment or body should be construed as parts of a cognitive system, even if cognition does not take place within these parts, thus extending cognitive systems beyond the agent’s nervous system (Clark and Chalmers 1998; Wilson 1994; Wilson and Clark 2001).

2.4 Enactive Cognition

Enactivism is the view that cognition emerges from or is constituted by sensorimotor activity. Currently, there are three distinct strands of enactivism (Ward, Silverman, and Villalobos 2017). Autopoietic Enactivism conceives of cognition in terms of the biodynamics of living systems (Varela, Thompson, and Rosch 2017; Di Paolo 2005). Just as a bacterium is created and maintained by processes that span the organism and environment, so too is cognition generated and specified through the operation of sensorimotor processes that crisscross the brain, body, and world. On this version of enactivism, there is no bright line between mental processes and non-mental biological processes. The former simply are an enriched version of the latter. Sensorimotor Enactivism is another strand of enactivism that focuses on explaining the intentionality and phenomenology of perceptual experiences in particular (O’Regan and Noë 2001; Noë 2004). This view holds that perception consists in active exploration of the environment, which establishes patterns of dependence between our movements, sensory states, and the world. Perceivers need not build and manipulate internal models of the external world. Instead, they need only skillfully exploit the sensorimotor dependences that their exploratory activities reveal. Finally, Radical Enactivism aims to replace all representational explanations of cognition with embodied, interactive explanations (Hutto and Myin 2013; Chemero 2011). The primary tactic guiding Radical Enactivism is to deconstruct and eliminate the notion of mental content in cognitive science. This tactic manifests in critiques of attempts to naturalize intentionality, redescriptions of cognitive processes studied in mainstream cognitive science, and challenges to concepts employed even by closely related views, such as Autopoietic Enactivism’s notion of sense-making (Chemero 2016). These three strands of enactivism vary in their target explanations and methodology. However, they share the commitment to the idea that cognition emerges from sensorimotor activity.

3. Conceptualization

Returning now to the three themes around which this discussion of embodied cognition is organized, the first is Conceptualization. According to Conceptualization, the concepts by which organisms recognize and categorize objects in the world, reason and draw inferences, and communicate with each other are heavily body-dependent. The morphological properties of an agent’s body will constrain and inform the meaning of its concepts. The claim that concepts are embodied in this way has been defended via quite distinct routes.

3.1 Metaphor and Basic Concepts

Lakoff and Johnson (1980; 1999) offered an early and influential defense of Conceptualization. Their argument begins with the plausible premise that human beings rely extensively upon metaphorical reasoning when learning or developing an understanding of unfamiliar concepts. Imagine, for instance, trying to explain to a child the meaning of election. Drawing a connection between election and a concept the child already understands, like foot race, makes the job easier. The “elections are races” metaphor provides a kind of scaffolding for introducing and explaining the content of the election concept. Candidates are like runners hoping to win the race. They will adopt various strategies. They must be careful not to start too fast or they might burn out before reaching the finish line. It’s about endurance through the long stretch—more a marathon than a sprint. Some will play dirty, trying to trip others up, knocking them off their stride. There will be sore losers but also graceful winners. Appeal to the content of a familiar concept—foot race—provides the child with a framework or stance for learning the unfamiliar concept—election.

The next step toward the embodiment of concepts proceeds with the observation that, on pain of regress, not all concepts can be acquired through metaphorical scaffolding. There must be a class of basic concepts that (if not innate) we learn some other way. Lakoff and Johnson argue that these basic concepts derive from the kinds of “direct physical experience” (1980: 57) that come from moving a human body through the environment. The concept up, for instance, is basic, emerging from possession of a body that stands erect, so that “[a]lmost every movement we make involves a motor program that either changes our up-down orientation, maintains it, presupposes it, or takes it into account in some way” (1980, 56). Lakoff and Johnson offer a similar account of how human beings come to possess concepts like front, back, pushing, pulling, and so on.

Basic concepts reflect the idiosyncrasies of particular kinds of bodies. Insofar as less-basic concepts depend upon metaphorical extensions of these most basic concepts, they will in turn reflect the idiosyncrasies of particular kinds of bodies. All concepts, Lakoff and Johnson appear to believe, are “stamped” with the body’s imprint as the characteristics of the body “trickle up” into more abstract concepts. They thus arrive at Conceptualization: “the peculiar nature of our bodies shapes our very possibilities for conceptualization and categorization” (1999, 19). Insofar as this is true, one should expect that differently-bodied organisms, equipped with a different class of basic concepts, would conceptualize and categorize their worlds in nonhuman ways.

Although Lakoff and Johnson see Conceptualization as incompatible with computational cognitive science, their grounds for doing so are tenuous. Metaphorical reasoning consists in applying aspects of one concept’s content to that of another. Because such reasoning is explicitly about content, and because computationalism is a theory about how to process mental states in virtue of their content, Lakoff and Johnson’s antagonism toward computationalism seems unwarranted. Additionally, their case for Conceptualization remains largely a priori. They claim that organisms morphologically distinct from human beings—spherical in shape, say—would be unable to develop some human concepts (1980, 57), but with no such beings available to test, this assertion is entirely speculative.

3.2 Embodied Concepts

A far more developed and empirically grounded case for Conceptualization comes from psychological and neurological studies that show a connection between a subject’s use of a concept and activity in the subject’s sensorimotor systems. Arising from these studies is a view of concepts as containing within their content facts about the sensorimotor particularities of their possessors. Because these particularities reflect the properties of an organism’s body—how, for instance, it moves its limbs when interacting with the world—the content of its concepts too will be constrained and informed by the nature of its body.

Central to the idea that concepts are embodied is the description of such concepts as modal. This label is intended to make stark the anti-computationalism that proponents of embodied concepts endorse. Symbols in a computer—strings of 1s and 0s—are amodal, in the sense that their relationship to their contents is arbitrary. Words too are amodal symbols. The symbol ‘lake’ means lake, but not in virtue of any resemblance or nomological connection it bears to lakes. There is no reason that ‘lake’, rather than some other symbol, should mean lake—as is obvious when thinking about words that mean lake in non-English languages. All mental symbols, from the perspective of computational cognitive science, are amodal in this sense.

Modal symbols, on the other hand, retain information about the sources of their origin. They are not just symbols but, in Barsalou’s (1999; Barsalou et al. 2003) terminology, perceptual symbols. Thoughts about a lake, for instance, consist in activation of the sensorimotor areas of the brain that had been activated during previous encounters with actual lakes. A lake thought re-activates areas of visual cortex that respond to visual information corresponding to lakes; areas of auditory cortex that respond to auditory information corresponding to lakes; areas of motor cortex that correspond to actions typically associated with lakes (although this activation is suppressed so that it does not lead to actual motion); and so on. The result is a lake concept that reflects the kinds of sensory and motor activities that are unique to human bodies and sensory systems. Lake means something like “thing that looks like this, sounds like this, smells like this, allows me to swim within it like this”. Moreover, because how things look and sound depends on the properties of sensory systems, and because the interactions something affords depend on the properties of motor systems, concepts will be body-specific.

Much of the evidence for the modality of concepts arises from demonstrations of an orientation-dependent spatial compatibility effect (OSC) (Symes, Ellis, and Tucker 2007). Tucker and Ellis (1998), for instance, asked subjects to judge whether a given object, e.g., a pan, was right-side-up or upside down. The object was oriented either rightwards or leftwards. So, for instance, the pan’s handle extended toward the right or left. Subjects would indicate whether the object was right-side-up or upside down by pressing a button to their right with their right index finger or to their left with their left index finger. Subjects’ reaction times were shorter when using a right finger to indicate a response when the object was oriented to the right than when oriented toward the left, and, mutatis mutandis, for left-finger responses when the object was oriented to the left. Despite the fact that subjects were not asked to consider the horizontal orientation of the stimulus object, this orientation influenced response times (for related work on the OSC, see Tucker and Ellis 2001; 2004).

Relatedly, Glenberg and Kaschak (2002) showed an action-sentence compatibility effect (ASC). Subjects were asked to judge the sensibility of sentences like “open the drawer” or “close the drawer.” Sentences of the first kind suggested actions that would require a motion of the hand toward the body, and sentences of the second kind suggested actions with motions away from the body. Subjects would indicate the sensibility of the sentence by pressing a button that required a hand motion either away from the body or toward the body. Glenberg and Kaschak found that reaction times were shorter when the response motion was compatible with the motion suggested by the action sentences.

Both the OSC and ASC effects have been taken to show that concepts are modal. Thoughts about pans, for instance, activate areas in motor cortex that would be activated when actually manipulating a pan. Subjects are slower to respond to a leftward-oriented pan with their right finger because seeing the pan’s orientation activates motor areas in the brain associated with grasping the pan with the left hand, priming a left-finger response while inhibiting a right-finger response. Similarly, the meanings of words like “open” and “close” include in their content the kinds of motor activity that would be involved in opening or closing motions. The meaning of object concepts thus contains information about how objects might be manipulated by bodies like ours; action concepts consist, in part, of information about how bodies like ours move.

Further evidence for the claim that concepts are packed with sensorimotor information comes from Edmiston and Lupyan (2017), who asked subjects questions that required for their answers either “encyclopedic” knowledge—“Does a swan lay eggs?”—or visual knowledge—“Does a swan have a beak?”. Interestingly, they found that visual interference during the task would diminish performance on questions requiring visual knowledge but not encyclopedic knowledge. They took this as evidence for the embodiment of concepts insofar as the effect of visual interference would be expected if concepts were modal—if, in this case, they involved the activation of vision centers in the brain—but not if concepts were amodal symbols, divorced from their sensorimotor origins.

A final source of evidence for embodied concepts comes from neurological investigations that reveal activation in the sensorimotor areas of the brain associated with particular actions. Reading a word like ‘kick’ or ‘punch’ causes activity in motor areas of the brain associated with kicking and punching (Pulvermüller 2005). Stimulation of these areas by transcranial magnetic stimulation can affect comprehension of such words (Pulvermüller 2005; Buccino et al. 2005). Again, results like these are precisely what an embodied theory of concepts would predict but would be unexpected on standard computational amodal theories of concepts. If the concept kick includes in its content motions distinctive of a human leg, as determined by activity in the motor cortex, then, as Conceptualization entails, it shows the imprint of a specific sort of embodiment.

Critics of embodied concepts have issued a number of challenges. Most basically, one might question whether empirical studies like those just mentioned are targeting concepts at all. Why think, for instance, that the meaning of the concept pan includes information about how pans must be grasped, and that the meaning of open includes information about how an arm should move? Claims like these seem inattentive to a distinction between a concept and a conception (Rey 1983; 1985; Shapiro 2019a). The meaning of the concept bachelor, for instance, is unmarried male. But apart from this concept is a conception of a bachelor, where a conception involves something like typical or representative features. A bachelor conception might include things like being a lothario, or being young, or participating in bro-culture. These concepts are associated with the concept bachelor, but not actually part of the meaning of the concept. Similarly, that pans might be grasped so, or that opening involves moving an arm like so, might not be part of the meaning of the concepts pan and open, but instead features of one’s conception of pans and one’s conception of how to open things.

Just as defenders of embodied concepts might not be investigating concepts after all, but instead only conceptions—only features associated with concepts—it may be that the motor activity that accompanies thoughts about concepts does not contribute to the meaning of a concept but is instead only associated with the concept. The psychologists Mahon and Caramazza (2008) argue for this way of interpreting the neurological studies taken to support embodied concepts. The finding that exposure to the word ‘kick’ causes activity in the motor areas of the brain responsible for kicking does not show that kick is a modal concept. Mahon and Caramazza suggest that linguistic processing of a word might create a cascade of activity that flows to areas of the brain that are associated with the meaning of the word. A thought about kick is associated with thoughts about moving one’s leg, which in turn causes activity in the motor system, but there is no motivation for regarding this activity as part of the kick concept—no motivation for seeing it as evidence for the modality of the concept (Mahon 2015). Thinking about a kick causes one to think about moving one’s leg, which causes activity in motor cortex, but the meaning of kick is independent of such activity.

Finally, even granting the modality of concepts like pan and kick, one might question whether all concepts are embodied, as some embodied cognition researchers suggest (Barsalou 1999). Of special concern are abstract concepts like democracy, justice, and morality (Dove 2009; 2016). Unlike pan, the meaning of which might involve information from sensory and motor systems, what sensory and motor activity might be included in the meaning of justice? Barsalou (2008) and Barsalou and Wiemer-Hastings (2005) offer an account of how abstract concepts might be analyzed in modal terms, but debate over the issue is far from settled.

4. Replacement

Many who take an embodied perspective on cognition believe that the commitments of traditional cognitive science must be jettisoned and replaced with something else. No more computation, no more representation, no more manipulation of symbols. Researchers who promote the complete replacement of traditional cognitive science tend to show the influence of ecological psychology. Less radical are arguments for abandoning some elements of traditional cognitive science, for instance the idea that cognition is a product of rule-guided inference, while retaining others, e.g., the idea that cognition still involves representational states. This position has roots in the connectionist alternative to computationalism discussed in §1.2. Support for Replacement arrives from several directions.

4.1 Robotics

Early ventures in robotics took on board the idea that cognition is computation over symbolic representations. The robot Shakey (1966–1972), for instance, created at the Artificial Intelligence Laboratory at what was then the Stanford Research Institute, was programmed to navigate through a room, avoiding or pushing blocks of various shapes and colors. Guiding Shakey’s behavior was a program, called STRIPS, which operated on symbolically encoded images of the blocks, combining them with stored descriptions of Shakey’s world. As the roboticist Brooks characterizes Shakey’s architecture, it cycles through iterations of sense-model-plan-act sequences (Brooks 1991a). A camera senses the environment, a computer builds a symbolic model of the environment from the camera images, and the STRIPS program combines the model with stored symbolic descriptions of the environment, creating plans for a course of action. Shakey’s progress was slow—some tasks would take days to complete—and heavily dependent on an environment carefully structured to make images easier to process.
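The control regime Brooks describes can be sketched, in highly simplified form, as a loop in which perception feeds a symbolic model, and a planner consults that model before any action is taken. The Python sketch below is purely illustrative: the names (build_model, make_plan, cycle) and the toy block-world details are invented here and are not drawn from STRIPS or Shakey’s actual code.

```python
# A toy sense-model-plan-act cycle in the spirit of Shakey's architecture.
# Perception never drives action directly; everything passes through a
# symbolic model and a planning step.

def build_model(image, stored_facts):
    """Model: combine sensed block positions with stored descriptions."""
    return {"blocks": image, **stored_facts}

def make_plan(model, goal):
    """Plan: a STRIPS-like rule -- push any block sitting on the goal cell,
    otherwise just advance."""
    pushes = [("push", b) for b in model["blocks"] if b == goal]
    return pushes or [("advance",)]

def cycle(camera, stored_facts, goal):
    image = camera()                           # sense
    model = build_model(image, stored_facts)   # model
    return make_plan(model, goal)              # plan (actions then executed)

# One iteration: a block sits at the goal cell, so the plan says to push it.
plan = cycle(lambda: [(2, 3)], {"room": "A"}, goal=(2, 3))
```

The point of the sketch is structural: action waits on modeling and planning, which is exactly the bottleneck Brooks’s alternative architecture removes.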

Brooks’s approach to robotics disavows the computational principles on which Shakey was designed, embracing instead a Gibsonian-inspired architecture. The result has been robots that exhibit far more versatility than Shakey ever displayed—robots that can roam cluttered environments, avoiding obstacles, setting goals for themselves, collecting soda cans for recycling, and more. Brooks’s “Creatures” run on what he calls a subsumption architecture. Rather than cycling through sense-model-plan-act sequences, Creatures contain arrays of sensors that are connected directly to behavior-generating mechanisms. For instance, the sensors on Brooks’s robot Allen were connected directly to three different kinds of behavior-generators: Avoid, Wander, and Explore. When sensors detected an object in Allen’s path, the Avoid mechanism would cause Allen to stop its forward motion, turn, and then proceed. The Wander generator would simply send Allen along a random heading, while the Explore generator would steer Allen toward a selected target. The three kinds of activity, or layers as Brooks called them, continually compete with each other. For instance, if Allen were Wandering and came across an obstacle, Avoid would step in and prevent Allen from a collision. Explore could inhibit Wander’s activation in order to keep Allen on course toward a target. From the competitive interactions of the three layers emerged unexpectedly flexible and seemingly goal-oriented behavior.
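A subsumption architecture can be sketched as a fixed priority ordering in which each layer maps sensor readings directly to behavior, and a higher-priority layer’s output suppresses the layers below it. The sketch below is a toy illustration with invented sensor names, assuming a simple suppression scheme; it is not Brooks’s actual implementation.

```python
# Each layer maps sensors directly to an action, or defers (returns None).
# There is no shared world model and no planner.

class Avoid:
    def propose(self, sensors):
        return "stop-and-turn" if sensors["obstacle_ahead"] else None

class Explore:
    def propose(self, sensors):
        return "steer-to-target" if sensors.get("target_bearing") is not None else None

class Wander:
    def propose(self, sensors):
        return "random-heading"   # always has a fallback behavior

def subsumption_step(layers, sensors):
    # Layers are ordered by priority; the first non-None proposal
    # suppresses ("subsumes") everything below it.
    for layer in layers:
        action = layer.propose(sensors)
        if action is not None:
            return action

layers = [Avoid(), Explore(), Wander()]
```

With an obstacle in view, Avoid wins regardless of what Explore or Wander would do; absent obstacles and targets, control falls through to Wander, mirroring how Allen’s flexible behavior emerges from competition rather than deliberation.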

According to Brooks, his Creatures have no need for representations. Implementing an idea from ecological psychology, Brooks says that the activity layers in his robots connect “perception to action directly” (1991b, 144). A robot designed in this way need not represent the world because it is able to “use the world as its own model” (1991b, 139). The robot’s behavior evolves through a continuous loop: the body moves, which changes the stimulation its sensors receive, which directly causes new movement, and so on. Because nothing stands “in between” the sensory signals and the robot’s behavior, there is no need for something that plays the standard intermediating role of a representational state. The robot does not require, for instance, a model of its environment in order to navigate through hallways. The move-sense loop does its job without one.

Despite the success of Brooks’s robots in comparison to their computational ancestors, and the impact Brooks’s ideas have had on industry (e.g., Roomba vacuum cleaners), whether Brooks’s insights pave the way for a radical, representation-free, cognitive science, as some enactivists like Chemero (2009) and Hutto and Myin (2013) believe, is far from certain. A first question concerns whether the behavior of Brooks’s Creatures really proceeds without the benefit of representational states. The sensors with which the Creatures are equipped, after all, send signals to the various activity layers so that the layers can respond to objects in the environment. Moreover, the various layers communicate with each other in order to modulate each other’s activities. They are, in effect, signaling each other with messages that seem to have a semantics: “go ahead,” or “stop!”.

Skeptics about representation, such as Chemero (2009) and Hutto and Myin (2013), focus on the continuous contact that Brooks’s Creatures bear to their environments as a reason to deny a role for representation. Because a Creature is in constant contact with the world, it does not need to represent the world. But constant contact does not always obviate a need for representation. Consider, for instance, that an organism might be in constant contact with many features of its environment—sunlight, humidity, oxygen, the gravitational pull of the moon, and so on. Yet, surely it will be sensitive to only some of these features—only some of these features will shape the organism’s behavior. A natural way to describe how some features make a difference to an organism while others do not might appeal to representation—an organism detects and represents some features and not others. Whether detection of this sort must involve representation will depend on the theory of representation that one adopts. One might therefore see the success of Brooks’s challenge to representation, and the enactivists’ embrace of the challenge, as hostage to a theory of representation, the details of which will no doubt themselves be controversial.

Another response to Brooks’s work doubts whether something like the subsumption architecture, even granting that it makes no use of representations, can “scale up”—can produce the more advanced sorts of behavior that cognitive scientists typically investigate (Shapiro 2007). Matthen (2014) argues that once we move just a little beyond the capabilities of Brooks’s Creatures, explanations of behavior will require an appeal to representations. For instance, imagine an organism that knows how to move from point A to point B, and from point A to point C, and on the basis of this knowledge, “figures out” how to move from point B to point C (Matthen 2014). It would seem that such an organism must possess a representation of the relations between points A, B, and C for such a calculation to be possible.

Clark and Toribio (1994) describe some cognitive tasks as “representation-hungry.” Examples include imagining or thinking about non-existent entities (e.g., unicorns) or counterfactual states of affairs (what would happen if I sawed through the tree in this direction?). Of necessity, an organism cannot be in constant contact with non-existents. That human beings so readily and often entertain such thoughts poses a difficulty for enactivists like Chemero and Hutto and Myin who see in Brooks’s “world as its own model” slogan a foundation for all or most cognition. Because the world contains no unicorns, using the world as a model cannot explain thoughts about unicorns.

4.2 Dynamical Systems Approaches to Cognition

Around the turn of the century, some cognitive scientists (Beer 2000; 2003; Kelso 1995; Thelen and Smith 1993; Thelen et al. 2001) and philosophers (Van Gelder 1995; 1998) began to advocate for dynamical systems approaches to cognition. Van Gelder (1995; 1998) argued that the computer, as the defining metaphor for cognitive systems, should be replaced with something more like Watt’s centrifugal governor. A centrifugal governor regulates the speed of a steam engine by modulating the opening of a steam valve. As the valve opens, a spindle to which flyballs are connected spins faster, causing the flyballs to rise, which then cause the steam valve to close, decreasing the speed at which the spindle spins, causing the flyballs to drop, thus opening the steam valve, and so on. Whereas a computational solution to maintaining engine speed might represent the engine’s current speed, compare it to a representation of the engine’s desired speed, and then calculate and correct for the difference, the centrifugal governor does its job without having to represent or calculate anything (although some have argued that representations are indeed present in the governor: Bechtel 1998; Prinz and Barsalou 2000).
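The governor’s feedback loop can be illustrated with a simple discrete-time simulation, using hypothetical constants and the simplifying assumption that flyball height tracks engine speed instantaneously. Speed settles at an equilibrium even though no component of the loop represents a target speed or computes a difference from one.

```python
# Toy governor loop: flyballs rise with speed, the valve closes as the
# flyballs rise, and steam through the valve drives the engine.
# All constants are illustrative, not measurements of any real governor.

def governor_step(speed, dt=0.1, c=1.0, friction=0.1, k=0.05):
    height = k * speed                  # flyballs rise with spindle speed
    valve = max(0.0, 1.0 - height)      # higher flyballs -> smaller opening
    return speed + dt * (c * valve - friction * speed)

speed = 0.0
for _ in range(5000):
    speed = governor_step(speed)
# speed settles where steam intake balances friction (here 20/3), with no
# component comparing current speed to a represented set point.
```

The equilibrium is fixed by the geometry of the loop (the constants c, friction, and k), which is the dynamicists’ point: regulation falls out of continuous mutual influence, not out of computation over representations.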

The centrifugal governor is an example of a dynamical system. Typical of dynamical systems is behavior that changes continuously through time—the height of the flyballs, the speed of the spindle, and the size of the steam valve opening all change continuously through time, and the rate of change in each affects the rate of change of the others. Dynamical systems theory provides the mathematical apparatus—differential and difference equations—to model dynamical systems. It is to these equations that dynamical cognitive science looks for explanations of cognition.

Among the most-cited examples of a dynamical explanation of cognition is the Haken-Kelso-Bunz (HKB) model of coordination dynamics (Haken, Kelso, and Bunz 1985; Kelso 1995). This model, consisting of a single differential equation, captures the dynamics of coordinated finger wagging. Subjects are asked to wag their right and left index fingers either in-phase, where the fingers move toward and away from each other, or out-of-phase, like windshield wipers. As the rate of finger wagging increases, out-of-phase motion will “flip” to in-phase motion, but motion that starts in-phase will remain in-phase. In dynamical terms, the coordination of finger wagging has two attractors, or regions of stability, at slower speeds (in-phase and out-of-phase) but only one attractor at a higher speed (in-phase). The HKB model makes a number of predictions borne out by observation, for instance that there are only two stable wagging patterns at lower speeds, that erratic fluctuations in coordination will occur near the critical threshold at which out-of-phase wagging transforms to in-phase, and that deviations from out-of-phase wagging will take longer to correct near the speed of transformation to in-phase (see Chemero 2001 for discussion).
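The HKB equation governs the relative phase φ between the fingers: dφ/dt = -a sin φ - 2b sin 2φ, where the ratio b/a falls as wagging speeds up. A minimal Euler integration (the parameter values below are illustrative, not fitted to data) reproduces the signature result: out-of-phase wagging (φ = π) is stable at slow speeds but collapses into in-phase wagging (φ = 0) at fast ones.

```python
import math

def hkb_step(phi, a, b, dt=0.01):
    """One Euler step of the HKB phase dynamics: dphi/dt = -a sin(phi) - 2b sin(2 phi)."""
    return phi + dt * (-a * math.sin(phi) - 2 * b * math.sin(2 * phi))

def settle(phi0, a, b, steps=20000):
    """Integrate from an initial relative phase until the dynamics settle."""
    phi = phi0
    for _ in range(steps):
        phi = hkb_step(phi, a, b)
    return phi

# Slow wagging (b/a large): phi = pi is an attractor, so a phase starting
# near pi stays there -- out-of-phase wagging persists.
slow = settle(3.0, a=1.0, b=1.0)
# Fast wagging (b/a small): the attractor at pi disappears, and the same
# starting phase is drawn to phi = 0 -- the "flip" to in-phase wagging.
fast = settle(3.0, a=1.0, b=0.1)
```

Linearizing at φ = π shows why: the fixed point there is stable only when b > a/4, so lowering b/a (speeding up) erases one attractor while leaving φ = 0 stable, exactly the asymmetry the experiments display.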

Other influential examples of dynamical explanations of cognition have focused on the coordination of infants’ legs for stepping behavior (Thelen and Smith 1993), perseverative reaching behavior in infants (Thelen et al. 2001), and categorization in a simulated agent (Beer 2003). Authors of these studies have been explicit in their belief that traditional cognitive science should be replaced with the commitments of dynamical cognitive science. Among these commitments is a rejection of representation as a necessary component of cognition as well as a view of cognition as “unfolding” from the continuous interactions between an organism’s brain, body, and environment rather than as emerging from discrete, rule-guided, algorithmic steps. This latter commitment returns us to the theme of embodiment. As Thelen et al. explain:

To say that cognition is embodied means that it arises from bodily interactions with the world. From this point of view, cognition depends on the kinds of experiences that come from having a body with particular perceptual and motor capabilities that are inseparably linked and that together form the matrix within which reasoning, memory, emotion, language, and all other aspects of mental life are meshed (2001, 1).

Of course, computational cognitive scientists can accept as well that cognition “arises from bodily interactions with the world,” in the sense that the inputs to cognitive processes often arise from bodily interactions with the world. Thelen et al. (2001) must then mean something more than that. Presumably, the idea is that the body is like a component in a centrifugal governor, and cognition arises from the continuous interactions between the body, the brain, and the world. Spivey, another prominent dynamical cognitive scientist, puts matters like this: “For the new psychology on the horizon, perhaps we are ready to discard the metaphor of the mind as computer…and replace it with a treatment of the mind as a natural continuous event” (2007, 29), much as, presumably, the regulation of a steam engine’s speed is the result of the continuous interactions of the components of a centrifugal governor.

One challenge facing dynamical approaches to cognition echoes that confronting roboticists like Brooks. Just as the principles underlying the subsumption architecture may not scale up in ways that can explain more advanced cognitive capacities, so too one might wonder whether dynamical approaches to such capacities will succeed. Perhaps finger wagging and infant stepping behavior are not instances of cognition in the first place, or are so only in an attenuated sense (Shapiro 2007; 2013), in which case any lessons learned from their investigation have little relevance to cognitive science.

Or perhaps as dynamical cognitive scientists examine more explicitly cognitive phenomena, they will find themselves in need of tools associated with standard cognitive science. Spivey, a pioneer of dynamical systems approaches, is on friendly terms with the idea of representations. Dietrich and Markman (2001) have argued that even behavior like coordinated finger wagging depends on representation, although perhaps not a conception of representation as “thick” as the one usually attributed to computationalism. Once again, it is evident that resolving some of the controversy surrounding the Replacement thesis hinges on the theory of representation that one adopts.

Another criticism of dynamical cognitive science questions whether the differential equations that are offered as explanations of cognitive phenomena are genuinely explanatory. Chemero (2001) and Beer (2003) insist that they are. The equations can be used to predict the behavior of organisms as well as to address counterfactuals about behavior (how would the organism have behaved if such and such had occurred?)—both hallmarks of explanation. Dietrich and Markman (2001), on the other hand, argue that the equations offer only descriptions of phenomena rather than explaining them (see also Eliasmith 1996; van Leeuwen 2005). Spivey, despite his devotion to dynamical cognitive science, shares this view. Dynamical systems theory, he thinks, does not explain cognition. Its utility consists in “modeling how the mind works” (2007, 33, his emphasis). He continues:

The emergence of mind takes place in the medium of patterns of activation across neuronal cell assemblies in conjunction with the interaction of their attached sensors (eyes, ears, etc.) and effectors (hands, speech apparatus, etc.) with the environment in which they are embedded. Make no mistake about it, that is the stuff of which human minds are made: brains, bodies, and environments. Trajectories through high-dimensional state spaces are merely convenient ways for scientists to describe, visualize, and model what is going on in those brains, bodies, and environments (2007, 33, his emphasis).

However, as Zednik (2011) has noted (see also Clark 1997 and Bechtel 1998), the differential equations on which dynamical explanations depend contain terms that permit interpretation. This is what turns a piece of pure mathematics into applied mathematics, which routinely is understood as describing causal processes (Sauer 2010). As an instance of applied (rather than pure) mathematics, the Lotka-Volterra equations do indeed explain the dynamics of predator-prey populations when their terms are taken to refer to predation rate and reproductive rate. The equations reveal how predation affects the size of the prey population, and how depletion in the prey population affects the size of the predator population, and how reproduction restores the prey population. So, Spivey may be right that the “stuff” of minds consists in brains, bodies, and environments, but this does not preclude the differential equations that describe these interactions from being explanatory. They are explanatory because they describe how brains, bodies, and environments interact and the consequences ensuing from these interactions.

5. Constitution

Baking powder is a constituent of a scone, and its presence causes the scone to rise when baked. A hot oven is also a cause of the scone’s rising, but it is not a constituent of the scone. You eat baking powder when you eat a scone, but you do not eat a hot oven. According to computational cognitive science, the constituents of a cognitive system are brain processes, where these processes are performing computations. The causes of cognition will be whatever causes these brain processes—stimulation to the body from the environment, for instance. Many embodied cognition theorists believe that this account of the constituents of cognition is incorrect. The constituents of a cognitive system extend beyond the brain, to include the body and the environment. A difficulty for this view is justifying the claim that the body and world are better construed as constituents of cognition rather than causes. Why are they more like baking powder than a hot oven?

5.1 Constitution Through Coupling

The previous discussion of dynamical cognitive science serves also to illustrate the Constitution theme. As the quotation above from Spivey indicated, dynamically-oriented cognitive scientists regard cognition as the product of interactions between brain, body, and world. The continuous interactions between these things, Chemero writes, explain why “dynamically-minded cognitive scientists do not assume that an animal must represent the world to interact with it. Instead, they think of the animal and the relevant parts of the environment as together comprising a single, coupled system” (2001, 142). Chemero continues this idea: “It is only for convenience (and from habit) that we think of the organism and environment as separate; in fact, they are best thought of as comprising just one system…the animal and environment are not separate to begin with” (2001, 142).

Chemero’s description of the animal and environment as coupled is ubiquitous in dynamical cognitive science. Coupling is a technical notion. The behaviors of objects are coupled when the differential equations that describe the behavior of one contain a term that refers to the behavior of the other. The equations that apply to the centrifugal governor, for instance, contain terms referring to the height of the flyballs and the size of the steam valve opening. The Lotka-Volterra equations contain terms that refer to the number of predators and the number of prey. The co-occurrence of terms in the equations that describe a dynamical system shows that the behaviors of the objects to which they refer are co-dependent. They are thus usefully construed as constituents of a single system—a system held together by the interactions of parts whose relationships are captured in coupled differential equations.
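The technical sense of coupling is easy to exhibit with the Lotka-Volterra equations: the prey equation contains a term referring to the predator population and vice versa, so neither behavior can be described in isolation. A minimal Euler-stepped sketch, with illustrative parameter values:

```python
# x: prey population, y: predator population.
# dx/dt = alpha*x - beta*x*y   (reproduction minus predation)
# dy/dt = delta*x*y - gamma*y  (predation-fed growth minus death)
# Each equation contains the other's variable -- the cross-terms beta*x*y
# and delta*x*y are what make the two populations a coupled system.

def lotka_volterra_step(x, y, alpha, beta, delta, gamma, dt=0.001):
    dx = alpha * x - beta * x * y
    dy = delta * x * y - gamma * y
    return x + dt * dx, y + dt * dy

x, y = 10.0, 5.0
for _ in range(5000):
    x, y = lotka_volterra_step(x, y, alpha=1.0, beta=0.1, delta=0.05, gamma=0.5)
```

Delete either cross-term and the populations evolve independently; with the cross-terms in place, each variable’s rate of change depends on the other at every instant, which is precisely the co-dependence that licenses treating predator and prey as one system.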

In addition to the technical sense of coupling, philosophers often appeal to a looser sense when defending Constitution. Clark, for instance, discusses the process of writing. When writing, “[i]t is not always that fully formed thoughts get committed to paper. Rather, the paper provides a medium in which, this time via some kind of coupled neural-scribbling-reading unfolding, we are enabled to explore ways of thinking that might otherwise be unavailable to us” (2008, 126). Clark’s idea is that the cognitive system that produces writing extends beyond a subject’s brain, to include among its constituents the paper on which words are written. The paper and the acts of reading and writing are literally parts of the cognitive process, no less than neural processes, because of the continuous interactions between all of these things. If it were possible to provide differential equations that describe the production of writing, they would include terms referring to the behaviors of each of these things. Thus, the reasoning that brings us to the conclusion that the components of a centrifugal governor are constituents of a single system, and that predator and prey are constituents of a single system, leads also to the conclusion that the constituents of many cognitive systems will include parts of the body and world.

The coupling concept underlies some arguments for extended cognition. When brain processes are coupled to processes in the body or world, either in the technical sense deriving from dynamical systems theory or in the less strict sense involving loops of dependency, the resulting “brain+” is itself a single cognitive system. It is a cognitive system that extends beyond the head because the constituents of the system are not brain-bound.

Adams and Aizawa (2008; 2009; 2010) have objected to coupling-inspired defenses of Constitution, and hence the idea of extended cognition, on the grounds that they commit a coupling-constitution fallacy: “The pattern of reasoning here involves moving from the observation that process X is in some way causally connected (coupled) to a process Y of type j to the conclusion that X is part of the process of type j” (2009, 81). They argue that this reasoning leads to absurd results. For instance, “[i]t is the interaction of the spinning bowling ball with the surface of the alley that leads to all the pins falling. Still, the process of the ball’s spinning does not extend into the surface of the alley or the pins” (2009, 83). Similarly, Adams and Aizawa would claim, the process of cognition does not extend into the paper and scribblings involved in writing.

This response is unlikely to impress supporters of coupling arguments for Constitution. First, coupling arguments require that process X be more than simply causally connected to process Y of type j for X to be part of the j process. Suppose that process Y of type j is the production of a written paragraph on a piece of paper. Let X be the sound of the pencil as it leaves graphite on the surface of the paper. The sound is causally connected to the production of writing, but defenders of Constitution need not regard it as a constituent in the system that results in the written paragraph. The sound does not contribute to the “loop”—neural events, scribbling, reading—from which the paragraph emerges. So, not just any causal connections suffice for constituency in a process.

Second, Clark and other defenders of Constitution would not claim that the writing process itself occurs in the constituents of the cognitive system that produces writing. Certainly the bowling ball’s spinning does not extend into the floor of the alley, and of course the writing process does not extend into a piece of paper. But the Constitution thesis is not committed to such claims (Shapiro 2019a). Just as one can say that a neuron is a constituent of a brain even if cognition does not take place in a neuron, it might make sense to say that the floor of the alley is a constituent in a system that results in the ball’s spinning even if spinning does not take place in the floor, and the paper is a constituent in a system that produces writing even if the writing process does not take place in the paper. Such conclusions, even if ultimately unwarranted, do not fail for the reasons Adams and Aizawa muster.

5.2 Constitution Through Parity and Wide Computationalism

Apart from coupling arguments, some philosophers, e.g., Clark and Chalmers (1998) and Clark (2008), have defended the idea that cognitive systems include constituents outside the brain by appeal to a parity principle, whereas Wilson (2004) invokes the idea of wide computationalism. The arguments are similar, both seeking to reveal how a functionalist commitment to mental states or processes licenses the possibility of cognitive processes that extend beyond the brain.

The parity principle says “[i]f, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is…part of the cognitive process” (Clark 2008, 222). As illustration, Clark and Chalmers (1998) compare the occurrent beliefs of Otto, who is afflicted with Alzheimer’s disease, to those of Inga, who has a normal biological brain. Otto keeps a notebook containing information of the sort that would be stored in the hippocampus of a normally functioning brain. When Inga wants to visit MoMA, she pulls from her biological memory the information that MoMA is on 53rd St., which prompts her to take a subway to the destination. When Otto has the same desire, he consults his notebook in which is written “MoMA is on 53rd St.”, which in turn induces his trip to that location. By stipulation, the representation of MoMA’s location in Otto’s notebook plays an identical functional role to the representation in Inga’s brain. Hence, by the parity principle, the notebook entry is a memory—an occurrent belief about the location of MoMA. The notebook is thus home to constituents of many of Otto’s cognitive processes.

In a similar vein, Wilson (2004) discusses a person who wishes to solve a multiplication problem involving two large numbers. Calculating the product “in the head” is a possibility, but solving the problem with the aid of pencil and paper would be much easier. In the latter case, Wilson claims that the brain “offloads” onto the paper some of the work that it would otherwise have to do on its own. Crucial to Wilson’s argument is the idea that solving the multiplication problem is a computational process and that computational processes are not confined to particular spatial regions. When the multiplication problem is solved “in the head” the computational processes occur within the brain alone. But some of the steps in the computation could as well take place outside the head, on a piece of paper, in which case a computational process might partly occur outside the head. There is, then, a parity in the two processes, whether the particular computations are internal or external to the agent. To the extent that this is plausible, one can find additional support for Constitution.

Most criticism of extended cognition has been aimed at Clark and Chalmers’s original proposal, although because Wilson’s position is similar, it is equally vulnerable to these criticisms insofar as they succeed. Among the most vocal critics are Adams and Aizawa (2001; 2008; 2009; 2010), who argue that extended cognitive systems like those involving Otto and his notebook or a person doing multiplication with paper and pencil cannot actually be cognitive because they fail to satisfy two “marks” of the cognitive. The first mark is that “cognitive states must involve intrinsic, non-derived content” (Adams and Aizawa 2001, 48). The second is that cognitive systems must display processes of sufficient uniformity to fall within the domain of a single science (Adams and Aizawa 2010).

The intrinsic content criterion assumes a distinction between content that is derived from human thought, as the content of the word ‘martini’ is derived from thoughts about martinis, and content that arises “on its own” without having to depend on some other contentful state for its origin. The thought martini, for instance, presumably does not (or need not) derive from other contentful states but arises from some naturalistic process involving relationships between brain states and martinis (relationships that it is the business of a naturalistic theory of content to specify). Words, maps, signs, and so on possess derived, non-intrinsic content whereas thoughts have intrinsic, non-derived, original content. Granting this distinction and its importance for identifying genuinely cognitive states and processes, Adams and Aizawa dismiss the plausibility of extended cognition on the grounds that things like notebook entries and numerals written on paper do not have intrinsic content.

Clark (2010) responds to this objection, in part pressing Adams and Aizawa to clarify how much intrinsic content must be present in a system for the system to qualify as cognitive. After all, brains, if anything, are cognitive systems, but not all activity occurring in a brain involves states or processes with intrinsic content. Accordingly, Clark wonders, why should the fact that some elements of the Otto+notebook system lack intrinsic content preclude the system from counting as cognitive?

Adams and Aizawa propose in response that “if you have a process that involves no intrinsic content, then the condition rules that the process is non-cognitive” (2010, 70). However, this response leaves open whether Otto+notebook constitutes a cognitive system. Because Otto’s brain does indeed contain states and processes that “involve” intrinsic content—states and processes by which the notebook entries are read, understood, and used to guide behavior—Clark can readily accept Adams and Aizawa’s stipulation. Some of the Otto+notebook system involves intrinsic content, some does not, and the cognitive system as a whole incorporates both these elements.

The second mark of the mental that Adams and Aizawa take to preclude systems like Otto+notebook from counting as cognitive raises issues concerning the identification of scientific domains. If one supposes, reasonably enough, that the objects, processes, properties, etc. that fall into the domain of a particular science do so in virtue of sharing particular features, then one should expect the same for the domain of cognitive science. The parts, properties, and activities taking place in brains do seem to share important features, features that explain how it is possible to identify brains in newly discovered species, how they differ from igneous rocks, and so on. But now suppose that cognitive systems can be extended in the ways that Clark, Chalmers, and Wilson have argued. Such systems would now contain constituents that could not possibly fit into the domain of a single science. Extended systems might include notebooks, or pencil and paper, or tools of just about any sort. “[F]or this reason,” Adams and Aizawa argue, “a would-be brain-tool science would have to cover more than just a multiplicity of causal processes. It would have to cover a genuine motley” (2010, 76).

Rupert (2004) raises a similar concern, noting that the processes by which Otto and Inga locate MoMA differ so considerably that it makes no sense to treat them as of a kind—as within the domain of a single science. Additionally, Rupert argues, there is no good reason to regard the various implements that combine with brain activity as constituents of a cognitive system rather than simply as tools that cognitive systems use to ease the processing they require to complete some task. Instead of insisting that cognitive systems extend, Rupert asks, why not regard them as seeking ways to embed themselves among tools that make their jobs easier? An axe does not become part of a person when she uses it to chop down a tree; why, then, does a notebook become part of a cognitive system when a brain uses it to locate MoMA? A sensible conservatism, Rupert thinks, speaks in favor of seeing cognitive systems as embedded in environments that allow ready use of tools to reduce their workloads, rather than as constituted, in part, by such tools. The hypothesis that cognitive systems use tools “is significantly less radical” (2004, 7) than the hypothesis that tools are constituents of cognitive systems, and it would seem to provide adequate explanations for all the phenomena that initially motivated the idea of extended cognition.

From Clark’s perspective, however, there is nothing motley, as Adams and Aizawa claim, about the brain+tool systems that he believes constitute a legitimate kind for scientific investigation. Moreover, the processes by which Otto and Inga locate MoMA are not, as Rupert insists, vastly different. Once one steps back from the physical particularities of the constituents of extended cognitive systems and focuses just on the functional, computational roles they play, such systems are identical, or very similar, to wholly brain-bound cognitive systems.

Similarly, Clark would deny Rupert’s claim that the hypothesis of embedded cognition can equally well save the phenomena that the hypothesis of extended cognition was intended to capture, and do so while requiring less revision of existing ideas about how cognitive systems operate. A brain, Clark claims, is “‘cognitively impartial’: It does not care how and where key operations are performed” (2008, 136). Rupert’s conservatism in fact reflects a misunderstanding—it conceives of brains as having the function of cognizing, which is true in a sense, but more accurate would be a description of the brain’s function as directing the construction of cognizing systems—some (many?) of which include constituents outside the brain proper (see also Wilson and Clark 2009).

Finally, Shapiro (2019b; 2019c) has suggested that the parity and wide-computationalist defenses of Constitution do not sit well with other commitments of embodied cognition. As mentioned, such defenses rest on a functionalist theory of cognition (for more on functionalism, see the entry on functionalism). Functionalism may well justify the claim that states and processes outside the brain can be identical to states and processes internal to a brain (can stand in a relation of parity towards them), which in turn grounds the possibility that cognitive systems can contain non-neural constituents. But, Shapiro argues, this strategy for defending extended cognition seems contrary to the central theme of embodied cognition. Motivating the embodied turn in cognitive science is the idea that bodies are somehow essential to cognition. But the parity and wide-computational arguments for extended cognition entail just the opposite—important for cognition are computational processes, and because computational processes are “hardware neutral”, one need not consider the specifics of bodies in order to describe them. Thus, it appears, arguments in favor of extended cognition succeed to the extent that bodies, qua bodies, do not matter to cognition.

6. The Reach of Embodied Cognition

In addition to the usual cognitive terrain—language, perception, memory, categorization—that embodied cognition encompasses, researchers have recruited the concepts and methods of embodied cognition for the purpose of investigating other psychological domains. In particular, embodied cognition finds application in the fields of social cognition and moral cognition.

6.1 Social Cognition

Social cognition is the ability to understand and interact with other agents. A wide variety of cognitive capacities are involved in social cognition, such as attention, memory, affective cognition, and metacognition (Fiske and Taylor 2013). Traditionally, however, the philosophical discussion of social cognition has narrowly conceived of it in terms of mentalizing (also called theory of mind or mindreading). Mentalizing refers to the attribution of mental states, often restricted to propositional attitudes, and typically for the purpose of explaining and predicting others’ behavior. Thus, although social cognition is enabled by and involves numerous and diverse cognitive processes, many philosophers have tended to think of it simply as involving the attribution of propositional attitudes in order to predict and explain behavior. For canonical expressions of this view of social cognition, see Davies and Stone (1995a) and (1995b). More recently, philosophers have begun to conceive of social cognition more broadly. See Andrews, Spaulding, and Westra (2020) for a survey of Pluralistic Folk Psychology.

Embodied cognition theorists have rejected this narrow construal of social cognition. Though they do not deny that neurotypical adult humans have the capacity to attribute beliefs and desires and to explain and predict behavior, they argue that this is a specialized and rarely used skill in our ordinary social interactions (Gallagher 2020; Gallagher 2008; Hutto and Ratcliffe 2007). Most social interactions require only basic underlying social cognitive capacities that are known as primary and secondary intersubjectivity (Trevarthen 1979).

Primary intersubjectivity is the pre-theoretical, non-conceptual, embodied understanding of others that underlies and supports the higher-level cognitive skills involved in mentalizing. It is “the innate or early developing capacity to interact with others manifested at the level of perceptual experience—we see or more generally perceive in the other person’s bodily movements, facial gestures, eye direction, and so on, what they intend and what they feel” (Gallagher 2005, 204). Primary intersubjectivity is present from birth, but it continues to serve as the basis for our social cognition in adulthood. It manifests as the capacity for facial imitation, and as the capacities to detect and track eye movement, to detect intentional behavior, and to “read” emotions from the actions and expressive movements of others. Primary intersubjectivity consists in informational sensitivity and appropriate responsiveness to specific features of one’s environment. It does not, embodied cognition theorists argue, involve representing and theorizing about those features. It simply requires certain practical abilities that have been shaped by selective pressures, e.g., sensitivity to certain bodily cues and facial expressions.

Around one year of age, neurotypical children develop the capacity for secondary intersubjectivity. This development enables a subject to move from one-on-one, immediate intersubjectivity to shared attention. At this stage, children learn to follow gazes, point, and communicate with others about objects of shared attention. According to embodied cognition, the cognitive skills acquired through secondary intersubjectivity are not rich, meta-cognitive representations about other minds. Rather, children learn practical skills when getting others to attend to an object and when learning to attend to objects others are attending to. This allows for a richer understanding of other agents, but it is still meant to be a behavioral, embodied understanding rather than a representation of others’ propositional attitudes (Gallagher 2005, 207).

Although primary and secondary intersubjectivity are described in developmental terms, according to embodied cognition these intersubjective practices constitute our primary mode of social cognition even as adults (Fuchs 2013; Gallagher 2008). For example, Hutto claims, “Our primary worldly engagements are nonrepresentational and do not take the form of intellectual activity” (2008, 51). One can see in Hutto’s description of social cognition a tendency toward the Replacement theme insofar as he seeks to minimize or reject completely a role for representation in the human capacity for understanding others’ behavior. Mentalizing, it is argued, is a late developing, rarely used, specialized skill. Primary and secondary intersubjectivity are fundamental insofar as they are sufficient for navigating most typical social interactions and insofar as they enable the development of higher-level social cognition, like mentalizing. See, though, Spaulding (2010) for a critique of these arguments.

Mirror neurons may be an important mechanism of social cognition on this kind of view. Mirror neurons are neurons that activate both endogenously, in producing a behavior, and exogenously, in observing that very same behavior. For instance, neurons in the premotor cortex and inferior parietal lobule activate when a subject uses, say, a whole-handed grasp to pick up a bottle. These very same neurons selectively activate when a subject observes a target using a whole-handed grasp to pick up an object. Neuroscientists have discovered similar patterns of activation in neurons in various parts of the brain, leading to the proposal that there are mirror neuron systems for action, fear, anger, pain, disgust, etc. Though the interpretation of these findings is subject to a great deal of controversy (Hickok 2009), many theorists propose that mirror neurons are a basic mechanism of social cognition (Gallese 2009; Goldman 2009; Goldman and de Vignemont 2009; Iacoboni 2009). The rationale is that mirror neurons explain how a subject understands a target’s mental states without needing complicated, high-level inferences about behavior and mental states. In observation mode, the subject’s brain activates as if the subject is doing, feeling, or experiencing what the target is doing, feeling, or experiencing. Thus, the observation of the target’s behavior is automatically meaningful to the subject. Mirror neurons are a possible mechanism for embodied social cognition. If the findings and interpretations are upheld, they substantiate the claim that we can understand and interact with others without engaging in mentalizing. For a survey of the reasons to be cautious about these interpretations of mirror neurons, see Spaulding (2011; 2013).

6.2 Moral Cognition

Embodied moral cognition takes moral sentimentalism as a starting point. Moral sentimentalism is the view that our emotions and desires are, in some way, fundamental to morality, moral knowledge, and moral judgments. A particular version of moral sentimentalism holds that emotions, moral attitudes, and moral judgments are generated by our “gut reactions,” and any moral reasoning that occurs is typically post-hoc rationalization of those gut reactions (Haidt 2001; Nichols 2004; Prinz 2004). Embodied moral cognition takes inspiration from this kind of moral sentimentalism. It holds that many of our moral judgments stem from our embodied, affective states rather than abstract reasoning.

Various sources of empirical evidence support this kind of view. Consider, for example, pathological cases, such as psychopaths or individuals with damage to the ventromedial prefrontal cortex (vmPFC). Such individuals are impaired in making moral judgments. Psychopaths feel little compunction about behaving immorally and sometimes have a hard time differentiating moral from conventional norms (Hare 1999). Individuals with damage to the vmPFC retain knowledge of abstract moral principles but struggle in making specific, everyday moral decisions (Damasio 1994). In both cases, individuals lack the physiological responses that accompany neurotypical moral decision-making. Lacking these “somatic markers” that guide moral judgments, these individuals behave in impulsive, selfish, and immoral ways (Damasio 1994). Embodied cognition would predict this connection between physiological responses (like increased heart rate and palm sweating) and moral decision-making.

Psychologists and neuroscientists have observed the influence of embodied cues on moral judgments in neurotypical individuals as well. For instance, experimentally manipulated perception of one’s heart rate seems to influence one’s moral judgments, with perceptions of faster heart rates leading to feelings of greater moral distress and more just moral judgments (Gu, Zhong, and Page-Gould 2013). Relatedly, there is some evidence that eliciting a feeling of disgust leads to harsher moral judgments (Schnall et al. 2008). Perceptions of cleanliness seem to lead to less severe moral judgments (Schnall, Benton, and Harvey 2008). In each of these cases, perception of embodied cues seems to mediate moral judgments. Moral sentimentalists have observed that many people have strong aversive reactions to harmless actions that violate taboos, such as consensual protected sex between adult siblings, cleaning a toilet with the national flag, eating one’s pet that had been run over, etc. In these cases, the strong negative affective response precedes the moral judgment, and often people have a difficult time articulating why they think these victimless, harmless actions are morally wrong (Strejcek and Zhong 2014; Haidt 2001; Haidt, Koller, and Dias 1993; Cushman, Young, and Hauser 2006). From the perspective of embodied cognition, this ordering confirms the notion that we make moral judgments on the basis of embodied cues.

Dual process theories of moral psychology reject the moral sentimentalist claim that all moral judgments are made in the same way. Dual process theories maintain that we have two systems of moral decision-making: a system for Utilitarian reasoning that is driven by affect-less, abstract deliberation, and a system for Deontological reasoning that is driven by automatic, intuitive, emotional heuristics like gut feelings (Greene 2014). Dual process theories are meant to explain the seemingly inconsistent moral intuitions ordinary folks have about moral dilemmas. For example, in a standard trolley case where an out-of-control trolley is heading toward five innocent, unaware individuals on the track ahead, most people have the intuition that we ought to throw the switch so that the trolley goes onto a spur of the track, thereby killing one person on the spur but saving five lives. However, in the footbridge variation of the trolley problem, where saving the five lives requires pushing an individual off a footbridge to derail the trolley, most people have the intuition that we should not do this even though the consequences are the same as in the standard trolley dilemma. The dual process theory holds that in the former case, our reasoning is guided by a System 2 type of abstract reasoning. However, in the latter case, our moral reasoning is guided by an aversive physiological response triggered by imagining pushing an individual off a footbridge. The dual process view partially vindicates the moral sentimentalist position insofar as it posits a distinctive System 1 type of moral reasoning that is based on embodied gut instincts. However, it maintains that there is a separate system, operating on different inputs and processes, for more abstract moral reasoning.

Recently, theorists have challenged dual process theories’ strict dichotomy between reason and emotion (Huebner 2015; Maibom 2010; Woodward 2016). On the one hand, brain areas that are associated with emotions like fear, anger, and disgust are implicated in complex learning and inferential processing. On the other hand, individuals who are clearly impaired in moral decision-making—psychopaths and those with vmPFC damage—also suffer deficits in other kinds of learning and inferential processing. Abstract reasoning is not, as it turns out, cut off from affective processes. Somatic markers, affective cues, and physiological responses are central to reasoning, learning, and decision-making. For the proponent of embodied moral cognition, this serves as further confirmation of the idea that all cognition, including moral cognition, is deeply shaped by embodied cues. See, though, May (2018), May and Kumar (2018), and Railton (2017) for a moral rationalist take on these findings.

7. Conclusion

This article aims to convey a sense of the breadth of topics that fall within the field of embodied cognition, as well as the numerous controversies that have been of special philosophical interest. As with any nascent research program, there remain questions about how embodied cognition relates to its forebears, in particular computational cognitive science and ecological psychology. Some of the hardest philosophical questions arising within embodied cognition, such as those concerning representation, explanation, and the very meaning of ‘mind’, are of a sort that any theory of mind must address. Apart from philosophical challenges to the conceptual integrity of embodied cognition, there loom psychological concerns about the replicability of some of the most-cited findings within embodied cognition; although, in fairness, worries about replicability have recently arisen in many areas of psychology (Goldhill 2019; Lakens 2014; Maxwell, Lau, and Howard 2015; Rabelo et al. 2015). Whatever the future of embodied cognition, careful study of its aims, methods, conceptual foundations, and motivations will doubtless enrich the philosophy of psychology.

Bibliography

  • Adams, Fred, and Ken Aizawa, 2001, “The Bounds of Cognition,” Philosophical Psychology, 14(1): 43–64. doi:10.1080/09515080120033571
  • –––, 2008, The Bounds of Cognition, Malden, MA: Blackwell.
  • –––, 2009, “Why the Mind Is Still in the Head,” in Philip Robbins and Murat Aydede (eds.), The Cambridge Handbook of Situated Cognition, 1st edition, Cambridge, New York: Cambridge University Press, pp. 78–95.
  • –––, 2010, “Defending the Bounds of Cognition,” in Richard Menary (ed.), The Extended Mind, Cambridge, MA: MIT Press, pp. 67–80.
  • Andrews, Kristin, Shannon Spaulding, and Evan Westra, 2020, “Introduction to Folk Psychology: Pluralistic Approaches,” Synthese, August, 1–16. doi:10.1007/s11229-020-02837-3
  • Baggs, Edward, and Anthony Chemero, 2018, “Radical Embodiment in Two Directions,” Synthese, 198 (Supplement 9): 2175–2190. doi:10.1007/s11229-018-02020-9
  • Barsalou, Lawrence W., 1999, “Perceptual Symbol Systems,” Behavioral and Brain Sciences, 22(4): 577–660. doi:10.1017/S0140525X99002149
  • –––, 2008, “Grounded Cognition,” Annual Review of Psychology, 59(1): 617–45. doi:10.1146/annurev.psych.59.103006.093639
  • Barsalou, Lawrence W., W. Kyle Simmons, Aron K. Barbey, and Christine D. Wilson, 2003, “Grounding Conceptual Knowledge in Modality-Specific Systems,” Trends in Cognitive Sciences, 7(2): 84–91. doi:10.1016/S1364-6613(02)00029-3
  • Barsalou, Lawrence W., and Katja Wiemer-Hastings, 2005, “Situating Abstract Concepts,” in Diane Pecher and Rolf A. Zwaan (eds.), Grounding Cognition (1st edition), Cambridge: Cambridge University Press, pp. 129–63. doi:10.1017/CBO9780511499968.007
  • Bechtel, William, 1998, “Representations and Cognitive Explanations: Assessing the Dynamicist’s Challenge in Cognitive Science,” Cognitive Science, 22(3): 295–318. doi:10.1207/s15516709cog2203_2
  • Beer, Randall D., 2000, “Dynamical Approaches to Cognitive Science,” Trends in Cognitive Sciences, 4(3): 91–99. doi:10.1016/S1364-6613(99)01440-0
  • –––, 2003, “The Dynamics of Active Categorical Perception in an Evolved Model Agent,” Adaptive Behavior, 11(4): 209–43. doi:10.1177/1059712303114001
  • Broadbent, Donald E., 1958, Perception and Communication, New York: Pergamon Press.
  • Brooks, Rodney A., 1991a, “New Approaches to Robotics,” Science, 253(5025): 1227–32. doi:10.1126/science.253.5025.1227
  • –––, 1991b, “Intelligence without Representation,” Artificial Intelligence, 47(1–3): 139–59. doi:10.1016/0004-3702(91)90053-M
  • Buccino, Giovanni, Lucia Riggio, Gabor Melli, Ferdinand Binkofski, Vittorio Gallese, and Giacomo Rizzolatti, 2005, “Listening to Action-Related Sentences Modulates the Activity of the Motor System: A Combined TMS and Behavioral Study,” Cognitive Brain Research, 24(3): 355–63. doi:10.1016/j.cogbrainres.2005.02.020
  • Chemero, Anthony, 2001, “Dynamical Explanation and Mental Representations,” Trends in Cognitive Sciences, 5(4): 141–42. doi:10.1016/S1364-6613(00)01627-2
  • –––, 2009, Radical Embodied Cognitive Science, Cambridge, MA: MIT Press.
  • –––, 2016, “Sensorimotor Empathy,” Journal of Consciousness Studies, 23(5–6): 138–52.
  • –––, 2021, “Epilogue: What Embodiment Is,” in Nancy Dess (ed.), A Multidisciplinary Approach to Embodiment: Understanding Human Being, New York: Routledge, pp. 133–40.
  • Chomsky, Noam, 1959, “On Certain Formal Properties of Grammars,” Information and Control, 2(2): 137–67. doi:10.1016/S0019-9958(59)90362-6
  • –––, 1980, “On Cognitive Structures and Their Development: A Reply to Piaget,” in Massimo Piattelli-Palmarini (ed.), Language and Learning: The Debate between Jean Piaget and Noam Chomsky, Cambridge, MA: Harvard University Press.
  • Clark, Andy, 1997, “The Dynamical Challenge,” Cognitive Science, 21(4): 461–81. doi:10.1207/s15516709cog2104_3
  • –––, 2008, Supersizing the Mind: Embodiment, Action, and Cognitive Extension, Oxford, New York: Oxford University Press.
  • –––, 2010, “Coupling, Constitution, and the Cognitive Kind: A Reply to Adams and Aizawa,” in Richard Menary (ed.), The Extended Mind, Cambridge, MA: MIT Press, pp. 81–100.
  • Clark, Andy, and David J. Chalmers, 1998, “The Extended Mind,” Analysis, 58(1): 7–19.
  • Clark, Andy, and Josefa Toribio, 1994, “Doing without Representing?” Synthese, 101(3): 401–31. doi:10.1007/BF01063896
  • Cushman, Fiery, Liane Young, and Marc Hauser, 2006, “The Role of Conscious Reasoning and Intuition in Moral Judgment: Testing Three Principles of Harm,” Psychological Science, 17(12): 1082–89.
  • Damasio, Antonio R., 1994, “Descartes’ Error and the Future of Human Life,” Scientific American, 271(4): 144.
  • Davies, Martin, and Tony Stone, 1995a, Folk Psychology: The Theory of Mind Debate, Oxford: Blackwell.
  • –––, 1995b, Mental Simulation: Evaluations and Applications (Volume 4), Oxford: Blackwell.
  • Dietrich, Eric, and Arthur B. Markman, 2001, “Dynamical Description versus Dynamical Modeling,” Trends in Cognitive Sciences, 5(8): 332. doi:10.1016/S1364-6613(00)01705-8
  • Di Paolo, Ezequiel A., 2005, “Autopoiesis, Adaptivity, Teleology, Agency,” Phenomenology and the Cognitive Sciences, 4: 429–452.
  • Dove, Guy, 2009, “Beyond Perceptual Symbols: A Call for Representational Pluralism,” Cognition, 110(3): 412–31. doi:10.1016/j.cognition.2008.11.016
  • –––, 2016, “Three Symbol Ungrounding Problems: Abstract Concepts and the Future of Embodied Cognition,” Psychonomic Bulletin & Review, 23(4): 1109–21.
  • Edmiston, Pierce, and Gary Lupyan, 2017, “Visual Interference Disrupts Visual Knowledge,” Journal of Memory and Language, 92 (February): 281–92. doi:10.1016/j.jml.2016.07.002
  • Eliasmith, Chris, 1996, “The Third Contender: A Critical Examination of the Dynamicist Theory of Cognition,” Philosophical Psychology, 9(4): 441–63. doi:10.1080/09515089608573194
  • Fiske, Susan T., and Shelley E. Taylor, 2013, Social Cognition: From Brains to Culture, London: Sage.
  • Fodor, Jerry A., 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind, Cambridge, MA: MIT Press.
  • Fuchs, Thomas, 2013, “The Phenomenology and Development of Social Perspectives,” Phenomenology and the Cognitive Sciences, 12(4): 655–683. doi:10.1007/s11097-012-9267-x
  • Gallagher, Shaun, 2005, How the Body Shapes the Mind, Oxford: Oxford University Press.
  • –––, 2008, “Inference or Interaction: Social Cognition without Precursors,” Philosophical Explorations, 11(3): 163–74.
  • –––, 2020, Action and Interaction, Oxford: Oxford University Press.
  • Gallagher, Shaun, and Daniel D. Hutto, 2008, “Understanding Others through Primary Interaction and Narrative Practice,” in Chris Sinha, Esa Itkonen, Jordan Zlatev, and Timothy P. Racine (eds.), The Shared Mind: Perspectives on Intersubjectivity, Amsterdam: John Benjamins, pp. 17–38.
  • Gallese, Vittorio, 2009, “Mirror Neurons and the Neural Exploitation Hypothesis: From Embodied Simulation to Social Cognition,” in Jaimie A. Pineda (ed.), Mirror Neuron Systems, New York: Humana, pp. 163–90.
  • Gibson, James J., 1966, The Senses Considered as Perceptual Systems, Boston: Houghton Mifflin.
  • –––, 1979, The Ecological Approach to Visual Perception, Boston: Houghton Mifflin.
  • Glenberg, Arthur M., and Michael P. Kaschak, 2002, “Grounding Language in Action,” Psychonomic Bulletin & Review, 9(3): 558–65. doi:10.3758/BF03196313
  • Goldhill, Olivia, 2019, “The Replication Crisis Is Killing Psychologists’ Theory of How the Body Influences the Mind,” Quartz, 16 January 2019, [Goldhill 2019 available online].
  • Goldman, Alvin I., 2009, “Mirroring, Mindreading, and Simulation,” in Jaimie A. Pineda (ed.), Mirror Neuron Systems, New York: Humana, pp. 311–30.
  • Goldman, Alvin I., and Frederique de Vignemont, 2009, “Is Social Cognition Embodied?” Trends in Cognitive Sciences, 13(4): 154–59.
  • Greene, Joshua D., 2014, “Beyond Point-and-Shoot Morality,” Ethics, 124(4): 695–726.
  • Gu, Jun, Chen-Bo Zhong, and Elizabeth Page-Gould, 2013, “Listen to Your Heart: When False Somatic Feedback Shapes Moral Behavior,” Journal of Experimental Psychology: General, 142(2): 307.
  • Haidt, Jonathan, 2001, “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment,” Psychological Review, 108(4): 814–834.
  • Haidt, Jonathan, Silvia Helena Koller, and Maria G. Dias, 1993, “Affect, Culture, and Morality, or Is It Wrong to Eat Your Dog?” Journal of Personality and Social Psychology, 65(4): 613–628.
  • Haken, Hermann, J. A. Scott Kelso, and Herbert Bunz, 1985, “A Theoretical Model of Phase Transitions in Human Hand Movements,” Biological Cybernetics, 51(5): 347–56. doi:10.1007/BF00336922
  • Hare, Robert D., 1999, Without Conscience: The Disturbing World of the Psychopaths among Us, New York: Guilford Press.
  • Hatfield, Gary, 1991, “Representation and Rule-Instantiation in Connectionist Systems,” in Terence Horgan and John Tienson (eds.), Connectionism and the Philosophy of Mind (Studies in Cognitive Systems), Dordrecht: Springer Netherlands, pp. 90–112. doi:10.1007/978-94-011-3524-5_5
  • Heidegger, Martin, 1975, The Basic Problems of Phenomenology, translated by Albert Hofstadter, 1988, Bloomington: Indiana University Press.
  • Hickok, Gregory, 2009, “Eight Problems for the Mirror Neuron Theory of Action Understanding in Monkeys and Humans,” Journal of Cognitive Neuroscience, 21(7): 1229–43. doi:10.1162/jocn.2009.21189
  • Huebner, Bryce, 2015, “Do Emotions Play a Constitutive Role in Moral Cognition?” Topoi, 34(2): 427–40.
  • Husserl, Edmund, 1929, Cartesian Meditations: An Introduction to Phenomenology, translated by Dorian Cairns, 2012, Dordrecht: Springer Science & Business Media.
  • Hutchins, Edwin, 1996, Cognition in the Wild (second printing), Cambridge, MA: MIT Press.
  • Hutto, Daniel D., 2008, Folk Psychological Narratives: The Sociocultural Basis of Understanding Reasons, Cambridge, MA: MIT Press.
  • Hutto, Daniel D., and Erik Myin, 2013, Radicalizing Enactivism: Basic Minds without Content, Cambridge, MA: MIT Press.
  • Hutto, Daniel D., and M. Ratcliffe, 2007, Folk Psychology Re-Assessed, Dordrecht; London: Springer.
  • Kelso, J. A. Scott, 1995, Dynamic Patterns: The Self-Organization of Brain and Behavior, Cambridge, MA: MIT Press.
  • Lakens, Daniël, 2014, “Grounding Social Embodiment,” Social Cognition, 32 (Supplement): 168–83. doi:10.1521/soco.2014.32.supp.168
  • Lakoff, George, and Mark Johnson, 1980, Metaphors We Live By, Chicago: University of Chicago Press.
  • –––, 1999, Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought, New York: Basic Books.
  • Leeuwen, Marco van, 2005, “Questions for the Dynamicist: The Use of Dynamical Systems Theory in the Philosophy of Cognition,” Minds and Machines, 15(3–4): 271–333. doi:10.1007/s11023-004-8339-2
  • Macrine, Shelia L., and Jennifer M. B. Fugate, 2022, Movement Matters: How Embodied Cognition Informs Teaching and Learning, Cambridge, MA: MIT Press.
  • Mahon, Bradford Z., 2015, “What Is Embodied about Cognition?” Language, Cognition and Neuroscience, 30(4): 420–29. doi:10.1080/23273798.2014.987791
  • Mahon, Bradford Z., and Alfonso Caramazza, 2008, “A Critical Look at the Embodied Cognition Hypothesis and a New Proposal for Grounding Conceptual Content,” Journal of Physiology-Paris (Links and Interactions Between Language and Motor Systems in the Brain), 102(1): 59–70. doi:10.1016/j.jphysparis.2008.03.004
  • Maibom, Heidi, 2010, “What Experimental Evidence Shows Us about the Role of Emotions in Moral Judgement,” Philosophy Compass, 5(11): 999–1012.
  • Marr, David, 1982, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, San Francisco: W. H. Freeman.
  • Martin, Taylor, and Daniel L. Schwartz, 2005, “Physically Distributed Learning: Adapting and Reinterpreting Physical Environments in the Development of Fraction Concepts,” Cognitive Science, 29(4): 587–625. doi:10.1207/s15516709cog0000_15
  • Matthen, Mohan, 2014, “Debunking Enactivism: A Critical Notice of Hutto and Myin’s Radicalizing Enactivism,” Canadian Journal of Philosophy, 44(1): 118–28. doi:10.1080/00455091.2014.905251
  • Maxwell, Scott E., Michael Y. Lau, and George S. Howard, 2015, “Is Psychology Suffering from a Replication Crisis? What Does ‘Failure to Replicate’ Really Mean?” American Psychologist, 70(6): 487–98. doi:10.1037/a0039400
  • May, Joshua, 2018, Regard for Reason in the Moral Mind, Oxford: Oxford University Press.
  • May, Joshua, and Victor Kumar, 2018, “Moral Reasoning and Emotion,” in Karen Jones, Mark Timmons and Aaron Zimmerman (eds.), Routledge Handbook on Moral Epistemology, London: Routledge, pp. 139–156.
  • Menary, Richard, 2008, Cognitive Integration: Mind and Cognition Unbounded, Basingstoke and New York: Palgrave Macmillan.
  • Merleau-Ponty, Maurice, 1962, Phenomenology of Perception, translated by Colin Smith, London: Routledge.
  • Michaels, Claire, and Zsolt Palatinus, 2014, “A Ten Commandments for Ecological Psychology,” in Lawrence Shapiro (ed.), The Routledge Handbook of Embodied Cognition, New York: Routledge, pp. 19–28.
  • Newell, Allen, John C. Shaw, and Herbert A. Simon, 1958, “Elements of a Theory of Human Problem Solving,” Psychological Review, 65(3): 151–66. doi:10.1037/h0048495
  • Nichols, Shaun, 2004, Sentimental Rules: On the Natural Foundations of Moral Judgment, Oxford: Oxford University Press.
  • Noë, Alva, 2004, Action in Perception, Cambridge, MA: MIT Press.
  • O’Regan, J. Kevin, and Alva Noë, 2001, “A Sensorimotor Account of Vision and Visual Consciousness,” Behavioral and Brain Sciences, 24(5): 939–73. doi:10.1017/S0140525X01000115
  • Pouw, Wim T. J. L., Tamara van Gog, and Fred Paas, 2014, “An Embedded and Embodied Cognition Review of Instructional Manipulatives,” Educational Psychology Review, 26(1): 51–72. doi:10.1007/s10648-014-9255-5
  • Prinz, Jesse J., 2004, Gut Reactions: A Perceptual Theory of Emotion, Oxford: Oxford University Press.
  • Prinz, Jesse J., and Lawrence W. Barsalou, 2000, “Steering a Course for Embodied Representation,” in Eric Dietrich and Arthur Markman (eds.), Cognitive Dynamics: Conceptual Change in Humans and Machines, Cambridge, MA: MIT Press, pp. 51–77.
  • Pulvermüller, Friedemann, 2005, “Brain Mechanisms Linking Language and Action,” Nature Reviews Neuroscience, 6(7): 576–82. doi:10.1038/nrn1706
  • Rabelo, André L. A., Victor N. Keller, Ronaldo Pilati, and Jelte M. Wicherts, 2015, “No Effect of Weight on Judgments of Importance in the Moral Domain and Evidence of Publication Bias from a Meta-Analysis,” PLoS ONE, 10(8). doi:10.1371/journal.pone.0134808
  • Railton, Peter, 2017, “Moral Learning: Conceptual Foundations and Normative Relevance,” Cognition, 167 (October): 172–90.
  • Rey, Georges, 1983, “Concepts and Stereotypes,” Cognition, 15(1): 237–62. doi:10.1016/0010-0277(83)90044-6
  • –––, 1985, “Concepts and Conceptions: A Reply to Smith, Medin and Rips,” Cognition, 19(3): 297–303. doi:10.1016/0010-0277(85)90037-X
  • Rupert, Robert D., 2004, “Challenges to the Hypothesis of Extended Cognition,” The Journal of Philosophy, 101(8): 389–428.
  • Sauer, Niko, 2010, “Causality and Causation: What We Learn from Mathematical Dynamic Systems Theory,” Transactions of the Royal Society of South Africa, 65(1): 65–68. doi:10.1080/00359191003680091
  • Schnall, Simone, Jennifer Benton, and Sophie Harvey, 2008, “With a Clean Conscience: Cleanliness Reduces the Severity of Moral Judgments,” Psychological Science, 19(12): 1219–22.
  • Schnall, Simone, Jonathan Haidt, Gerald L. Clore, and Alexander H. Jordan, 2008, “Disgust as Embodied Moral Judgment,” Personality and Social Psychology Bulletin, 34(8): 1096–1109.
  • Shapiro, Lawrence, 2007, “The Embodied Cognition Research Programme,” Philosophy Compass, 2(2): 338–46. doi:10.1111/j.1747-9991.2007.00064.x
  • –––, 2012, “Embodied Cognition,” in Eric Margolis, Richard Samuels and Stephen P. Stich (eds.), The Oxford Handbook of Philosophy of Cognitive Science, New York: Oxford University Press, pp. 118–147.
  • –––, 2013, “Dynamics and Cognition,” Minds and Machines, 23(3): 353–75. doi:10.1007/s11023-012-9290-2
  • –––, 2019a, Embodied Cognition, second edition, London and New York: Routledge.
  • –––, 2019b, “Matters of the Flesh: The Role(s) of Body in Cognition,” in Matteo Colombo, Elizabeth Irvine and Mog Stapleton (eds.), Andy Clark and His Critics, New York: Oxford University Press, pp. 69–80.
  • –––, 2019c, “Flesh Matters: The Body in Cognition,” Mind & Language, 34(1): 3–20. doi:10.1111/mila.12203
  • Spaulding, Shannon, 2010, “Embodied Cognition and Mindreading,” Mind & Language, 25(1): 119–40.
  • –––, 2011, “A Critique of Embodied Simulation,” Review of Philosophy and Psychology, 2(3): 579–99.
  • –––, 2013, “Mirror Neurons and Social Cognition,” Mind & Language, 28(2): 233–57.
  • Spivey, Michael J., 2007, The Continuity of Mind (Oxford Psychology Series), Oxford and New York: Oxford University Press.
  • Sternberg, Saul, 1969, “Memory-Scanning: Mental Processes Revealed by Reaction-Time Experiments,” American Scientist, 57(4): 421–57.
  • Symes, Ed, Rob Ellis, and Mike Tucker, 2007, “Visual Object Affordances: Object Orientation,” Acta Psychologica, 124(2): 238–55. doi:10.1016/j.actpsy.2006.03.005
  • Tettamanti, Marco, Giovanni Buccino, Maria Cristina Saccuman, Vittorio Gallese, Massimo Danna, Paola Scifo, Ferruccio Fazio, Giacomo Rizzolatti, Stefano F. Cappa, and Daniela Perani, 2005, “Listening to Action-Related Sentences Activates Fronto-Parietal Motor Circuits,” Journal of Cognitive Neuroscience, 17(2): 273–81. doi:10.1162/0898929053124965
  • Thelen, Esther, Gregor Schöner, Christian Scheier, and Linda B. Smith, 2001, “The Dynamics of Embodiment: A Field Theory of Infant Perseverative Reaching,” Behavioral and Brain Sciences, 24(1): 1–34. doi:10.1017/S0140525X01003910
  • Thelen, Esther, and Linda Smith (eds.), 1993, A Dynamic Systems Approach to Development: Applications, Cambridge, MA: MIT Press.
  • Thompson, Evan, 2010, Mind in Life, Cambridge, MA: Harvard University Press.
  • Trevarthen, Colwyn, 1979, “Communication and Cooperation in Early Infancy: A Description of Primary Intersubjectivity,” in Margaret Bullowa (ed.), Before Speech: The Beginning of Interpersonal Communication, Cambridge: Cambridge University Press, pp. 321–348.
  • Tucker, Mike, and Rob Ellis, 1998, “On the Relations between Seen Objects and Components of Potential Actions,” Journal of Experimental Psychology: Human Perception and Performance, 24(3): 830–46. doi:10.1037/0096-1523.24.3.830
  • –––, 2001, “The Potentiation of Grasp Types during Visual Object Categorization,” Visual Cognition, 8(6): 769–800. doi:10.1080/13506280042000144
  • –––, 2004, “Action Priming by Briefly Presented Objects,” Acta Psychologica, 116(2): 185–203. doi:10.1016/j.actpsy.2004.01.004
  • Van Gelder, Tim, 1995, “What Might Cognition Be, If Not Computation?” The Journal of Philosophy, 92(7): 345–81. doi:10.2307/2941061
  • –––, 1998, “The Dynamical Hypothesis in Cognitive Science,” Behavioral and Brain Sciences, 21(5): 615–28. doi:10.1017/S0140525X98001733
  • Varela, Francisco J., Evan Thompson, and Eleanor Rosch, 2017, The Embodied Mind: Cognitive Science and Human Experience, revised edition, Cambridge, MA: MIT Press.
  • Ward, Dave, David Silverman, and Mario Villalobos, 2017, “Introduction: The Varieties of Enactivism,” Topoi, 36(3): 365–75. doi:10.1007/s11245-017-9484-6
  • Wilson, Andrew D., and Sabrina Golonka, 2013, “Embodied Cognition Is Not What You Think It Is,” Frontiers in Psychology, 4, published online 12 February 2013. doi:10.3389/fpsyg.2013.00058
  • Wilson, Margaret, 2002, “Six Views of Embodied Cognition,” Psychonomic Bulletin & Review, 9(4): 625–36. doi:10.3758/BF03196322
  • Wilson, Robert A., 1994, “Wide Computationalism,” Mind, 103(411): 351–72. doi:10.1093/mind/103.411.351
  • Wilson, Robert A., and Andy Clark, 2001, “How to Situate Cognition: Letting Nature Take Its Course,” in Philip Robbins and Murat Aydede (eds.), The Cambridge Handbook of Situated Cognition, 1st ed., Cambridge: Cambridge University Press, pp. 55–77. doi:10.1017/CBO9780511816826.004
  • Woodward, James, 2016, “Emotion versus Cognition in Moral Decision-Making: A Dubious Dichotomy,” in S. Matthew Liao (ed.), Moral Brains: The Neuroscience of Morality, Oxford: Oxford University Press, pp. 87–116.
  • Zahavi, Dan, 2005, Subjectivity and Selfhood: Investigating the First-Person Perspective, Cambridge, MA: MIT Press.
  • Zednik, Carlos, 2011, “The Nature of Dynamical Explanation,” Philosophy of Science, 78(2): 238–63.

Copyright © 2021 by
Lawrence Shapiro<lshapiro@wisc.edu>
Shannon Spaulding<shannon.spaulding@okstate.edu>
