Stanford Encyclopedia of Philosophy

Folk Psychology as Mental Simulation

First published Mon Dec 8, 1997; substantive revision Tue Mar 28, 2017

The capacity for “mindreading” is understood in philosophy of mind and cognitive science as the capacity to represent, reason about, and respond to others’ mental states. Essentially the same capacity is also known as “folk psychology”, “Theory of Mind”, and “mentalizing”. An example of everyday mindreading: you notice that Tom’s fright embarrassed Mary and surprised Bill, who had believed that Tom wanted to try everything. Mindreading is of crucial importance for our social life: our ability to predict, explain, and/or coordinate with others’ actions on countless occasions crucially relies on representing their mental states. For instance, by attributing to Steve the desire for a banana and the belief that there are no more bananas at home but there are some left at the local grocery store, you can: (i) explain why Steve has just left home; (ii) predict where Steve is heading; and (iii) coordinate your behavior with his (meet him at the store, or prepare a surprise party while he is gone). Without mindreading, (i)–(iii) do not come easily—if they come at all. That much is fairly uncontroversial. What is controversial is how to explain mindreading. That is, how do people arrive at representing others’ mental states? This is the main question to which the Simulation (or, mental simulation) Theory (ST) of mindreading offers an answer.

Common sense has it that, in many circumstances, we arrive at representing others’ mental states by putting ourselves in their shoes, or taking their perspective. For example, I can try to figure out my chess opponent’s next decision by imagining what I would decide if I were in her place. (Although we may also speak of this as a kind of empathy, that term must be understood here without any implication of sympathy or benevolence.)

ST takes this commonsensical idea seriously and develops it into a fully-fledged theory. At the core of the theory, we find the thesis that mental simulation plays a central role in mindreading: we typically arrive at representing others’ mental states by simulating their mental states in our own mind. So, to figure out my chess opponent’s next decision, I mentally switch roles with her in the game. In doing this, I simulate her relevant beliefs and goals, and then feed these simulated mental states into my decision-making mechanism and let the mechanism produce a simulated decision. This decision is projected on or attributed to the opponent. In other words, the basic idea of ST is that if the resources our own brain uses to guide our own behavior can be modified to work as representations of other people’s mental states, then we have no need to store general information about what makes people tick: we just do the ticking for them. Accordingly, ST challenges the Theory-Theory of mindreading (TT), the view that a tacit psychological theory underlies the ability to represent and reason about others’ mental states. While TT maintains that mindreading is an information-rich and theory-driven process, ST sees it as informationally poor and process-driven (Goldman 1989).

This entry is organized as follows. In section 1 (The Origins and Varieties of ST), we briefly reconstruct ST’s history and elaborate further on ST’s main theoretical aims. We then go on to explain the very idea of mental simulation (section 2: What is Meant by “Mental Simulation”?). In section 3 (Two Types of Simulation Processes), we consider the cognitive architecture underlying mental simulation and introduce the distinction between high-level and low-level simulation processes. In section 4 (The Role of Mental Simulation in Mindreading), we discuss what role mental simulation is supposed to play in mindreading, according to ST. This discussion carries over to section 5 (Simulation Theory and Theory-Theory), where we contrast the accounts of mindreading given by ST and TT. Finally, section 6 (Simulation Theory: Pros and Cons) examines some of the main arguments in favour of and against ST as a theory of mindreading.

1. The Origins and Varieties of ST

The idea that we often arrive at representing other people’s mental states by mentally simulating those states in ourselves has a distinguished history in philosophy and the human sciences. Robert Gordon (1995) traces it back to David Hume (1739) and Adam Smith’s (1759) notion of sympathy; Jane Heal (2003) and Gordon (2000) find simulationist themes in the Verstehen approach to the philosophy of history (e.g., Dilthey 1894); Alvin Goldman (2006) considers Theodor Lipps’s (1903) account of empathy (Einfühlung) as a precursor of the notion of mental simulation.

In its modern guise, ST was established in 1986, with the publication of Robert Gordon’s “Folk Psychology as Simulation” and Jane Heal’s “Replication and Functionalism”. These two articles criticized the Theory-Theory and introduced ST as a better account of mindreading. In his article, Gordon discussed psychological findings concerning the development of the capacity to represent others’ false beliefs. This attracted the interest of developmental psychologists, especially Paul Harris (1989, 1992), who presented empirical support for ST, and Alison Gopnik (Gopnik & Wellman 1992) and Joseph Perner (Perner & Howes 1992), who argued against it—Perner has since come to defend a hybrid version of ST (Perner & Kühberger 2005).

Alvin Goldman was an early and influential defender of ST (1989) and has done much to give the theory its prominence. His work with the neuroscientist Vittorio Gallese (Gallese & Goldman 1998) was the first to posit an important connection between ST and the newly discovered mirror neurons. Goldman’s 2006 book Simulating Minds is the clearest and most comprehensive account to date of the relevant philosophical and empirical issues. Among other philosophical proponents of ST, Gregory Currie and Susan Hurley have been influential.

Since the late 1980s, ST has been one of the central players in the philosophical, psychological, and neuroscientific discussions of mindreading. It has however been argued that the fortunes of ST have had a notable negative consequence: the expression “mental simulation” has come to be used broadly and in a variety of ways, making “Simulation Theory” a blanket term lumping together many distinct approaches to mindreading. Stephen Stich and Shaun Nichols (1997) already urged dropping it in favor of a finer-grained terminology. There is some merit to this. ST is in fact better conceived of as a family of theories rather than a single theory. All the members of the family agree on the thesis that mental simulation, rather than a body of knowledge about other minds, plays a central role in mindreading. However, different members of the family can differ from one another in significant respects.

One fundamental area of disagreement among Simulation Theorists is the very nature of ST—what kind of theory ST is supposed to be—and what philosophers can contribute to it. Some Simulation Theorists take the question “How do people arrive at representing others’ mental states?” as a straightforward empirical question about the cognitive processes and mechanisms underlying mindreading (Goldman 2006; Hurley 2008). According to them, ST is thus a theory in cognitive science, to which philosophers can contribute exactly as theoretical physicists contribute to physics:

theorists specialize in creating and tweaking theoretical structures that comport with experimental data, whereas experimentalists have the primary job of generating the data. (Goldman 2006: 22)

Other philosophical defenders of ST, however, do not conceive of themselves as theoretical cognitive scientists at all. For example, Heal (1998) writes that:

it is commonly taken that the inquiry into … the extent of simulation in psychological understanding is empirical, and that scientific investigation is the way to tell whether ST … is correct. But this perception is confused. It is an a priori truth … that simulation must be given a substantial role in our personal-level account of psychological understanding. (Heal 1998: 477–478)

Adjudicating this meta-philosophical dispute goes well beyond the aim of this entry. To be as inclusive as we can, we shall adopt a “balanced diet” approach: we shall discuss the extent to which ST is supported by empirical findings from psychology and neuroscience, and, at the same time, we shall dwell on “purely philosophical” problems concerning ST. We leave to the reader the task of evaluating which aspects should be put at the centre of the inquiry.

Importantly, even those who agree on the general nature of ST might disagree on other crucial issues. We will focus on what are typically taken to be the three most important bones of contention among Simulation Theorists: what is meant by “mental simulation”? (section 2). What types of simulation processes are there? (section 3). What is the role of mental simulation in mindreading? (section 4). After having considered what keeps Simulation Theorists apart, we shall move to discuss what holds them together, i.e., the opposition to the Theory-Theory of mindreading (section 5 and section 6). This should give the reader a sense of the “unity amidst diversity” that characterizes ST.

2. What is Meant by “Mental Simulation”?

In common parlance, we talk of putting ourselves in others’ shoes, or empathizing with other people. This talk is typically understood as adopting someone else’s point of view, or perspective, in our imagination. For example, it is quite natural to interpret the request “Try to show some empathy for John!” as asking you to use your imaginative capacity to consider the world from John’s perspective. But what is it for someone to imaginatively adopt someone else’s perspective? To a first approximation, according to Simulation Theorists, it consists of mentally simulating, or re-creating, someone else’s mental states. Currie and Ravenscroft (2002) make this point quite nicely:

Imagination enables us to project ourselves into another situation and to see, or think about, the world from another perspective. These situations and perspectives … might be those of another actual person, [or] the perspective we would have on things if we believed something we actually don’t believe, [or] that of a fictional character. … Imagination recreates the mental states of others. (Currie & Ravenscroft 2002: 1, emphasis added).

Thus, according to ST, empathizing with John’s sadness consists of mentally simulating his sadness, and adopting Mary’s political point of view consists of mentally simulating her political beliefs. This is the intuitive and general sense of mental simulation that Simulation Theorists have in mind.

Needless to say, this intuitive characterization of “mental simulation” is loose. What exactly does it mean to say that a mental state is a mental simulation of another mental state? Clearly, we need a precise answer to this question if the notion of mental simulation is to be the fundamental building block of a theory. Simulation Theorists, however, differ over how to answer this question. The central divide concerns whether “mental simulation” should be defined in terms of resemblance (Heal 1986, 2003; Goldman 2006, 2008a) or in terms of reuse (Hurley 2004, 2008; Gallese & Sinigaglia 2011). We consider these two proposals in turn.

2.1 Mental Simulation as Resemblance

The simplest interpretation of “mental simulation” in terms of resemblance goes like this:

(RES-1) Token state M* is a mental simulation of token state M if and only if:

  1. Both M and M* are mental states
  2. M* resembles M in some significant respects

Two clarifications are in order. First, we will elaborate on the “significant respects” in which a mental state has to resemble another mental state in due course (see, in particular, section 3). For the moment, it will suffice to mention some relevant dimensions of resemblance: similar functional role; similar content; similar phenomenology; similar neural basis (an important discussion of this topic is Fisher 2006). Second, RES-1 defines “mental simulation” as a dyadic relation between mental states (the relation being a mental simulation of). However, the expression “mental simulation” is also often used to pick out a monadic property of mental states—the property being a simulated mental state (as will become clear soon, “simulated mental state” does not refer here to the state which is simulated, but to the state that does the simulating). For example, it is common to find in the literature sentences like “M* is a mental simulation”. To avoid ambiguities, we shall adopt the following terminological conventions:

  • We shall use the expression “mental simulation of” to express the relation being a mental simulation of.
  • We shall use the expression “simulated mental state” to express the property being a simulated mental state.
  • We shall use the expression “mental simulation” in a way that is deliberately ambiguous between “mental simulation of” and “simulated mental state”.

It follows from this that, strictly speaking, RES-1 is a definition of “mental simulation of”. Throughout this entry, we shall characterize “simulated mental state” in terms of “mental simulation of”: we shall say that if M* is a mental simulation of M, then M* is a simulated mental state.[1]

With these clarifications in place, we will consider the strengths and weaknesses of RES-1. Suppose that Lisa is seeing a yellow banana. At the present moment, there is no yellow banana in my own surroundings; thus, I cannot have that (type of) visual experience. Still, I can visualize what Lisa is seeing. Intuitively, my visual imagery of a yellow banana is a mental simulation of Lisa’s visual experience. RES-1 captures this, given that both my visual imagery and Lisa’s visual experience are mental states and the former resembles the latter.

RES-1, however, faces an obvious problem (Goldman 2006). The resemblance relation is symmetric: for any x and y, if x resembles y, then y resembles x. Accordingly, it follows from RES-1 that Lisa’s visual experience is a mental simulation of my visual imagery. But this is clearly wrong. There is no sense in which one person’s perceptual experience can be a mental simulation of another person’s mental imagery (see Ramsey 2010 for other difficulties with RES-1).

In order to solve this problem, Goldman (2006) proposes the following resemblance-based definition of “mental simulation of”:

(RES-2) Token state M* is a mental simulation of token state M if and only if:

  1. Both M and M* are mental states
  2. M* resembles M in some significant respects
  3. In resembling M, M* fulfils at least one of its functions

Under the plausible assumption that one of the functions of visual imagery is to resemble visual experiences, RES-2 correctly predicts that my visual imagery of a yellow banana counts as a mental simulation of Lisa’s visual experience. At the same time, since visual experiences do not have the function of resembling visual images, RES-2 does not run into the trouble of categorizing the former as a mental simulation of the latter.
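The contrast between RES-1 and RES-2 can be made vivid with a small illustrative sketch. The following Python snippet is not part of the entry’s own apparatus: all names (MentalState, resembles, the treatment of resemblance as shared content) are hypothetical stand-ins, chosen only to exhibit how the symmetry of resemblance defeats RES-1 and how RES-2’s function clause restores asymmetry.

```python
# Toy formalization of RES-1 and RES-2 as predicates over "mental state"
# records. Everything here is an illustrative assumption, not the entry's
# own machinery.

from dataclasses import dataclass

@dataclass(frozen=True)
class MentalState:
    kind: str                               # e.g., "visual experience"
    content: str                            # e.g., "yellow banana"
    functions: frozenset = frozenset()      # functions the state fulfils

def resembles(a: MentalState, b: MentalState) -> bool:
    # Toy stand-in for "resembles in some significant respects":
    # here, simply sharing content. So defined, resemblance is symmetric.
    return a.content == b.content

def res1(m_star: MentalState, m: MentalState) -> bool:
    # RES-1: both are mental states, and M* resembles M.
    return resembles(m_star, m)

def res2(m_star: MentalState, m: MentalState) -> bool:
    # RES-2 adds clause 3: in resembling M, M* fulfils one of its functions.
    return resembles(m_star, m) and "resemble experiences" in m_star.functions

lisa_seeing = MentalState("visual experience", "yellow banana")
my_imagery = MentalState("visual image", "yellow banana",
                         frozenset({"resemble experiences"}))

# RES-1's symmetry problem: it also counts Lisa's experience as a
# simulation of my imagery.
assert res1(my_imagery, lisa_seeing) and res1(lisa_seeing, my_imagery)

# RES-2 breaks the symmetry via the function clause.
assert res2(my_imagery, lisa_seeing) and not res2(lisa_seeing, my_imagery)
```

The asymmetry is carried entirely by the function clause: imagery plausibly has the function of resembling experience, while experience has no converse function.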

2.2 Mental Simulation as Reuse

Clearly, RES-2 is a better definition of “mental simulation of” than RES-1. Hurley (2008), however, argued that it won’t do either, since it fails to distinguish ST from its main competitor, i.e., the Theory-Theory (TT), according to which mindreading depends on a body of information about mental states and processes (section 5). The crux of Hurley’s argument is this. Suppose that a token visual image V* resembles a token visual experience V and, in doing so, fulfils one of its functions. In this case, RES-2 is satisfied. But now suppose further that visualization works like a computer simulation: it generates its outputs on the basis of a body of information about vision. On this assumption, RES-2 still categorizes V* as a mental simulation of V, even though V* has been generated by exactly the kind of process described by TT: a theory-driven and information-rich process.

According to Hurley (who follows here a suggestion by Currie & Ravenscroft 2002), the solution to this difficulty lies in the realization that “the fundamental … concept of simulation is reuse, not resemblance” (Hurley 2008: 758, emphasis added). Hurley’s reuse-based definition of “mental simulation of” can be articulated as follows:

(REU) Token state M* is a mental simulation of token state M if and only if:

  1. Both M and M* are mental states
  2. M is generated by token cognitive process P
  3. M* is generated by token cognitive process P*
  4. P is implemented by the use of a token cognitive mechanism of type C
  5. P* is implemented by the reuse of a token cognitive mechanism of type C

To have a full understanding of REU, we need to answer three questions: (a) What is a cognitive process? (b) What is a cognitive mechanism? (c) What is the difference between using and reusing a certain cognitive mechanism? Let’s do it!

It is a commonplace that explanation in cognitive science is structured into different levels. Given our aims, we can illustrate this idea through the classical tri-level hypothesis formulated by David Marr (1982). Suppose that one wants to explain a certain cognitive capacity, say, vision (or mindreading, or moral judgment). The first level of explanation, the most abstract one, consists in describing what the cognitive capacity does—what task it performs, what problem it solves, what function it computes. For example, the task performed by vision is roughly “to derive properties of the world from images of it” (Marr 1982: 23). The second level of analysis specifies how the task is accomplished: what algorithm our mind uses to compute the function. Importantly, this level of analysis abstracts from the particular physical structures that implement the algorithm in our head. It is only at the third level of analysis that the details of the physical implementation of the algorithm in our brain are spelled out.

With these distinctions at hand, we can answer questions (a) and (b). A cognitive process is a cognitive capacity considered as an information-processing activity and taken in abstraction from its physical implementation. Thus, cognitive processes are individuated in terms of what function they perform and/or in terms of what algorithms compute these functions (fair enough, the “and/or” is a very big deal, but it is something we can leave aside here). This means that the same (type of) cognitive process can be multiply realized in different physical structures. For example, parsing (roughly, the cognitive process that assigns a grammatical structure to a string of signs) can be implemented both by a human brain and a computer. On the contrary, cognitive mechanisms are particular (types of) physical structures—e.g., a certain part of the brain—implementing certain cognitive processes. More precisely, cognitive mechanisms are organized structures carrying out cognitive processes in virtue of how their constituent parts interact (Bechtel 2008; Craver 2007; Machamer et al. 2000).

We now turn to question (c), which concerns the distinction between use and reuse of a cognitive mechanism. At a first approximation, a cognitive mechanism is used when it performs its primary function, while it is reused when it is activated to perform a different, non-primary function. For example, one is using one’s visual mechanism when one employs it to see, while one is reusing it when one employs it to conjure up a visual image (see Anderson 2008, 2015 for further discussion of the notion of reuse). All this is a bit sketchy, but it will do.

Let’s now go back to REU. The main idea behind it is that whether a mental state is a mental simulation of another mental state depends on the cognitive processes generating these two mental states, and on the cognitive mechanisms implementing such cognitive processes. More precisely, in order for mental state M* to be a mental simulation of mental state M, it has to be the case that: (i) cognitive processes P* and P, which respectively generate M* and M, are both implemented by the same (type of) cognitive mechanism C; (ii) P is implemented by the use of C, while P* is implemented by the reuse of C.

Now that we know what REU means, we can consider whether it fares better than RES-2 in capturing the nature of the relation of mental simulation. It would seem so. Consider this hypothetical scenario. Lisa is seeing a yellow banana, and her visual experience has been generated by cognitive process V1, which has been implemented by the use of her visual mechanism. I am visualizing a yellow banana, and my visual image has been generated by cognitive process V2, which has been implemented by the reuse of my visual mechanism. Rosanna-the-Super-Reasoner is also visualizing a yellow banana, but her visual image has been generated by an information-rich cognitive process: a process drawing upon Rosanna’s detailed knowledge of vision and implemented by her incredibly powerful reasoning mechanism. REU correctly predicts that my visual image is a mental simulation of Lisa’s visual experience, but not vice versa. More importantly, it also predicts that Rosanna’s visual image does not count as a mental simulation of Lisa’s visual experience, given that Rosanna’s cognitive process was not implemented by the reuse of the visual mechanism. In this way, REU solves the problem faced by RES-2 in distinguishing ST from TT.

Should we then conclude that “mental simulation of” has to be defined in terms of reuse, rather than in terms of resemblance? Goldman (2008a) is still not convinced. Suppose that while Lisa is seeing a yellow banana, I am using my visual mechanism to visualize the Golden Gate Bridge. Now, even though Lisa’s visual experience and my visual image have been respectively generated by the use and the reuse of the visual mechanism, it would be bizarre to say that my mental state is a mental simulation of Lisa’s. Why? Because my mental state doesn’t resemble Lisa’s (she is seeing a yellow banana; I am visualizing the Golden Gate Bridge!) Thus—Goldman concludes—resemblance should be taken as the central feature of mental simulation.

2.3 Relations, States, and Processes

In order to overcome the difficulties faced by trying to define “mental simulation of” in terms of either resemblance or reuse alone, philosophers have built on the insights of both RES and REU and have proposed definitions that combine resemblance and reuse elements (Currie & Ravenscroft 2002; in recent years, Goldman himself seems to have favoured a mixed account; see Goldman 2012a). Here is one plausible definition:

(RES+REU) Token state M* is a mental simulation of token state M if and only if:

  1. Both M and M* are mental states
  2. M* resembles M in some significant respects
  3. M is generated by token cognitive process P
  4. M* is generated by token cognitive process P*
  5. P is implemented by the use of a token cognitive mechanism of type C
  6. P* is implemented by the reuse of a token cognitive mechanism of type C

RES+REU has at least three important virtues. The first is that it solves all the aforementioned problems for RES and REU—we leave to the reader the exercise of showing that this is indeed the case.

The second is that it fits nicely with an idea that loomed large in the simulationist literature: the idea that simulated mental states are “pretend” (“as if”, “quasi-”) states—imperfect copies of, surrogates for, the “genuine” states normally produced by a certain cognitive mechanism, obtained by taking this cognitive mechanism “off-line”. Consider the following case. Frank is in front of Central Café (and believes that he is there). He desires to drink a beer and believes that he can buy one at Central Café. When he feeds these mental states into his decision-making mechanism, the mechanism implements a decision-making process, which outputs the decision to enter the café. In this case, Frank’s decision-making mechanism was “on-line”—i.e., he used it; he employed it for its primary function. My situation is different. I don’t believe I am in front of Central Café, nor do I desire to drink a beer right now. Still, I can imagine believing and desiring so. When I feed these imagined states into my decision-making mechanism, I am not employing it for its primary function. Rather, I am taking it off-line (I am reusing it). As a result, the cognitive process implemented by my mechanism will output a merely imagined decision to enter the café. Now, it seems fair to say that my imagined decision resembles Frank’s decision (more on this in section 3). If you combine this with how these two mental states have been generated, the result is that my imagined decision is a mental simulation of Frank’s decision, and thus it is a simulated mental state. It is also clear why Frank’s decision is genuine, while my simulated mental state is just a pretend decision: all else being equal, Frank’s decision to enter Central Café will cause him to enter the café; on the contrary, no such behaviour will result from my simulated decision. I have not really decided so. Mine was just a quasi-decision—an imperfect copy of, a surrogate for, Frank’s genuine decision.

And here is RES+REU’s third virtue. So far, we have said that “mental simulation” can either pick out a dyadic relation between mental states or a monadic property of mental states. In fact, its ambiguity runs deeper than this, since philosophers and cognitive scientists also use “mental simulation” to refer to a monadic property of cognitive processes, namely, the property being a (mental) simulation process (or: “process of mental simulation”, “simulational process”, “simulative process”, etc.). As a first stab, a (mental) simulation process is a cognitive process generating simulated mental states. RES+REU has the resources to capture this usage of “mental simulation” too. Indeed, RES+REU implicitly contains the following definition of “simulation process”:

(PROC): Token process P* is a (mental) simulation process if and only if:

  1. P* generates token state M*
  2. M* resembles another token state, M, in some significant respects
  3. Both M and M* are mental states
  4. M is generated by token process P
  5. Both P and P* are cognitive processes
  6. P is implemented by the use of a token cognitive mechanism of type C
  7. P* is implemented by the reuse of a token cognitive mechanism of type C

Go back to the case in which Lisa was having a visual experience of a yellow banana, while I was having a visual image of a yellow banana. Our two mental states resembled one another, but different cognitive processes generated them: seeing in Lisa’s case, and visualizing in my case. Moreover, Lisa’s seeing was implemented by the use of the visual mechanism, while my visualizing was implemented by its reuse. According to PROC, the latter cognitive process, but not the former, was thus a simulation process.

To sum up, RES+REU captures many of the crucial features that Simulation Theorists ascribe to mental simulation. For this reason, we shall adopt it as our working definition of “mental simulation of”—consequently, we shall adopt PROC as a definition of “simulation process”.[2] We can put this into a diagram.

[a diagram consisting of a hexagon labelled 'C' with an arrow labeled 'use' pointing to a diamond labeled 'P' on the upper left, an arrow points from this diamond up to a rectangle labeled 'M'. From the hexagon is also an arrow labeled 're-use' pointing to a diamond labeled 'P*' on the upper right, an arrow points from this diamond up to a rectangle labeled 'M*'. The two rectangles are connected by a dashed double-headed arrow labeled 'resemblance']

Figure 1

The hexagon at the bottom depicts a cognitive mechanism C (it could be, say, the visual mechanism). When C is used (arrow on the left), it implements cognitive process P (say, seeing); when it is re-used (arrow on the right), it implements cognitive process P* (say, visualizing). P generates mental state M (say, a visual experience of a red tomato), while P* generates mental state M* (say, a visual image of a red tomato). These two mental states (M and M*) resemble one another. Given this: M* is a mental simulation of M; M* is a simulated mental state; and P* is a simulation process.[3]
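For readers who find it helpful, the structure depicted in Figure 1 can also be rendered as a toy formalization. Everything below (the Process and State records, the treatment of resemblance as shared content) is hypothetical scaffolding rather than part of the entry itself; it merely shows how RES+REU and PROC sort the diagram’s cases.

```python
# Illustrative sketch of RES+REU and PROC, mirroring Figure 1.
# All names are hypothetical stand-ins, not the entry's own apparatus.

from dataclasses import dataclass

@dataclass(frozen=True)
class Process:
    mechanism_type: str   # the type C of the implementing mechanism
    mode: str             # "use" or "reuse"

@dataclass(frozen=True)
class State:
    content: str
    process: Process      # the token process that generated the state

def resembles(a: State, b: State) -> bool:
    # Toy stand-in for resemblance "in some significant respects".
    return a.content == b.content

def res_reu(m_star: State, m: State) -> bool:
    # RES+REU: resemblance, plus same mechanism type, used vs. reused.
    return (resembles(m_star, m)
            and m.process.mechanism_type == m_star.process.mechanism_type
            and m.process.mode == "use"
            and m_star.process.mode == "reuse")

def proc(p_star: Process, m_star: State, m: State) -> bool:
    # PROC: P* is a simulation process iff it generated a state standing
    # in the RES+REU relation to some state M.
    return m_star.process == p_star and res_reu(m_star, m)

seeing = Process("visual", "use")           # P in the diagram
visualizing = Process("visual", "reuse")    # P*
m = State("red tomato", seeing)             # M: Lisa-style visual experience
m_star = State("red tomato", visualizing)   # M*: visual image

assert res_reu(m_star, m)             # M* is a mental simulation of M
assert not res_reu(m, m_star)         # ...but not vice versa
assert proc(visualizing, m_star, m)   # P* is a simulation process
assert not proc(seeing, m, m_star)    # P is not
```

The asymmetry between M and M* falls out of the use/reuse clauses, while the resemblance clause rules out Goldman’s Golden Gate case (no shared content, no simulation).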

2.4 Final Worries

In this section, we shall finally consider three worries raised for adopting RES+REU as a definition of “mental simulation of”. If you have already had enough of RES+REU, please feel free to move straight to section 3.

Heal (1994) pointed out a problem with committing ST to a particular account of the cognitive mechanisms that underlie it. Suppose that the human mind contains two distinct decision-making mechanisms: Mec1, which takes beliefs and desires as input, and generates decisions as output; and Mec2, which works by following exactly the same logical principles as Mec1, but takes imagined beliefs and imagined desires as input and generates imagined decisions as output. Consider again Frank’s decision to enter Central Café and my imagined decision to do so. According to the two mechanisms hypothesis, Frank desired to drink a beer and believed that he could buy one at Central Café, fed these mental states into Mec1, which generated the decision to enter the café. As for me, I fed the imagined desire to drink a beer and the imagined belief that I could buy one at Central Café into a distinct (type of) mechanism, i.e., Mec2, which generated the imagined decision to enter Central Café. Here is the question: does my imagined decision to enter Central Café count as a mental simulation of Frank’s decision to do so? If your answer is “Yes, it does”, then RES+REU is in trouble, since my imagined decision was not generated by reusing the same (type of) cognitive mechanism that Frank used to generate his decision; his decision was generated by Mec1, my imagined decision by Mec2. Thus, Heal concludes, a definition of “mental simulation of” should not contain any commitment about cognitive mechanisms—it should not make any implementation claim—but should be given at a more abstract level of description.
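Heal’s scenario can be made concrete with a toy sketch (all names below, including the res_reu predicate and the Mec1/Mec2 labels applied to records, are illustrative assumptions, not Heal’s own presentation): because RES+REU requires that the simulating state be generated by reusing a mechanism of the same type, the two-mechanisms hypothesis blocks the verdict that my imagined decision simulates Frank’s.

```python
# Toy rendering of Heal's two-mechanisms worry about RES+REU.
# Hypothetical names throughout; resemblance is modeled as shared content.

def res_reu(m_star: dict, m: dict) -> bool:
    # RES+REU in miniature: resemblance + same mechanism type,
    # with M produced by use and M* by reuse of that type.
    return (m_star["content"] == m["content"]
            and m["mech"] == m_star["mech"]
            and m["mode"] == "use"
            and m_star["mode"] == "reuse")

frank = {"content": "enter the café", "mech": "Mec1", "mode": "use"}
mine = {"content": "enter the café", "mech": "Mec2", "mode": "reuse"}

# Two-mechanisms hypothesis: distinct mechanism types block the
# verdict Heal's intuition demands.
assert not res_reu(mine, frank)

# One-mechanism hypothesis: my imagined decision reuses Mec1 itself,
# and the definition delivers the intuitive verdict.
mine_one = {"content": "enter the café", "mech": "Mec1", "mode": "reuse"}
assert res_reu(mine_one, frank)
```

This makes explicit why the dispute turns on an empirical question about cognitive architecture: the definition’s verdicts shift with the one-mechanism versus two-mechanisms hypotheses.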

In the face of this difficulty, a defender of RES+REU can say the following. First, she might reject the intuition that, in the two mechanisms scenario, my imagined decision counts as a mental simulation of Frank’s decision. At a minimum, she might say that this scenario does not elicit any robust intuition in one direction or the other: it is not clear whether these two mental states stand in the relation being a mental simulation of. Second, she might downplay the role of intuitions in the construction of a definition for “mental simulation of” and cognate notions. In particular, if she conceives of ST as an empirical theory in cognitive science, she will be happy to discount the evidential value of intuitions if countervailing theoretical considerations are available. This, e.g., is the position of Currie and Ravenscroft (2002), who write that

there are two reasons … why the Simulation Theorist should prefer [a one mechanism hypothesis]: … first, the postulation of two mechanisms is less economical than the postulation of one; second, … we have very good reasons to think that imagination-based decision making does not operate in isolation from the subject’s real beliefs and desires. … If imagination and belief operate under a system of inferential apartheid—as the two-mechanisms view has it—how could this happen? (Currie & Ravenscroft 2002: 67–68)

A second worry has to do with the fact that RES+REU appears to be too liberal. Take this case. Yesterday, Angelina had the visual experience of a red apple. On the night of June 15, 1815, Napoleon conjured up the visual image of a red apple. Angelina used her visual mechanism to see, while Napoleon reused his to imagine. If we add to this that Napoleon’s mental state resembled Angelina’s, RES+REU predicts that Napoleon’s (token) visual image was a mental simulation of Angelina’s (token) visual experience. This might strike one as utterly bizarre. In fact, not only did Napoleon not intend to simulate Angelina’s experience: he could not even have intended to do it. After all, Angelina was born roughly 150 years after Napoleon’s death. By the same token, it is also impossible that Napoleon’s visual image has been caused by Angelina’s visual experience. As a matter of fact, the visual image Napoleon had on the night of June 15, 1815 is entirely disconnected from the visual experience that Angelina had yesterday. Thus, how could the former be a mental simulation of the latter? If you think about it, the problem is even worse than this. RES+REU has it that Napoleon’s visual image of a red apple is a mental simulation of all the visual experiences of a red apple that have obtained in the past, that are currently obtaining, and that will obtain in the future. Isn’t that absurd?

Again, a defender of RES+REU can give a two-fold answer. First, she can develop an argument that this is not absurd at all. Intuitively, the following principle seems to be true:

(TYPE): the mental state type visual image of a red apple is a mental simulation of the mental state type visual experience of a red apple.

If TYPE is correct, then the following principle has to be true as well:

(TOKEN): Any token mental state of the type visual image of a red apple is a mental simulation of every token mental state of the type visual experience of a red apple.

But TOKEN entails that Napoleon’s (token) visual image of a red apple is a mental simulation of Angelina’s (token) visual experience of a red apple, which is exactly what RES+REU predicts. Thus, RES+REU’s prediction, rather than being absurd, independently follows from quite intuitive assumptions. Moreover, even though TOKEN and RES+REU make the same prediction about the Napoleon–Angelina case, TOKEN is not entailed by RES+REU, since the latter contains a restriction on how visual images have to be generated. Thus, if one finds TOKEN intuitively acceptable, it is hard to see how one can find RES+REU to be too liberal.

The second component of the answer echoes one of the answers given to Heal: for a Simulation Theorist who conceives of ST as a theory in cognitive science, intuitions have limited value in assessing a definition of “mental simulation of”. In fact, the main aim of this definition is not to capture folk intuitions, but rather to offer a clear enough picture of the relation of mental simulation on the basis of which an adequate theory of mindreading can be built. So, if the proposed definition fails, say, to help distinguish ST from TT, or is of limited use in theory-building, or is contradicted by certain important results from cognitive science, then one has a good reason to abandon it. By contrast, it should not be a cause for concern if RES+REU does not match the folk concept MENTAL SIMULATION OF. The notion “mental simulation of” is a term of art—like, say, the notions of I-Language or of Curved Space. These notions match the folk concepts of language and space poorly, but linguists and physicists do not take this to be a problem. The same applies to the notion of mental simulation.

And here is the third and final worry. RES+REU is supposed to be a definition of “mental simulation of” on the basis of which a theory of mindreading can be built. However, neither RES+REU nor PROC makes any reference to the idea of representing others’ mental states. Thus, how could these definitions help us to construct a Simulation Theory of mindreading? The answer is simple: they will help us exactly as a clear definition of “computation”, which has nothing to do with how the mind works, helped to develop the Computational Theory of Mind (see entry on computational theory of mind).

Here is another way to make the point. ST is made up of two distinct claims: the first is that mental simulation is psychologically real, i.e., that there are mental states and processes satisfying RES+REU and PROC. The second claim is that mental simulation plays a central role in mindreading. Clearly, the second claim cannot be true if the first is false. However, the second claim can be false even if the first claim is true: mental simulation could be psychologically real, but play no role in mindreading at all. Hence, Simulation Theorists have to do three things. First, they have to establish that mental simulation is psychologically real. We consider this issue in section 3. Second, they have to articulate ST as a theory of mindreading. That is, they have to spell out in some detail the crucial role that mental simulation is supposed to play in representing others’ mental states, and contrast the resulting theory with other accounts of mindreading. We dwell on this in sections 4 and 5. Finally, Simulation Theorists have to provide evidence in support of their theory of mindreading—that is, they have to give us good reasons to believe that mental simulation does play a crucial role in representing others’ mental states. We discuss this issue in section 6.

3. Two Types of Simulation Processes

Now that we have definitions of “mental simulation of” and cognate notions, it is time to consider which mental states and processes satisfy them, if any. Are there really simulated mental states? That is, are there mental states generated by the reuse of cognitive mechanisms? And do these mental states resemble the mental states generated by the use of such mechanisms? For example, is it truly the case that visual images are mental simulations of visual experiences? What about decisions, emotions, beliefs, desires, and bodily sensations? Can our minds generate simulated counterparts of all these types of mental states? In this section, we consider how Simulation Theorists have tackled these problems. We will do so by focusing on the following question: are there really simulation processes (as defined by PROC)? If the answer to this question is positive, it follows that there are mental states standing in the relation of mental simulation (as defined by RES+REU), and thus simulated mental states.

Following Goldman (2006), it has become customary among Simulation Theorists to argue for the existence of two types of simulation processes: high-level simulation processes and low-level simulation processes (see, however, de Vignemont 2009). By exploring this distinction, we begin to articulate the cognitive architecture that, according to ST, underlies mental simulation.

3.1 High-Level Simulation Processes

High-level simulation processes are cognitive processes with the following features: (a) they are typically conscious, under voluntary control, and stimulus-independent; (b) they satisfy PROC, that is, they are implemented by the reuse of a certain cognitive mechanism, C, and their output states resemble the output states generated by the use of C.[4] Here are some cognitive processes that, according to Simulation Theorists, qualify as high-level simulation processes. Visualizing: the cognitive process generating visual images (Currie 1995; Currie & Ravenscroft 2002; Goldman 2006); motor imagination: the cognitive process generating imagined bodily movements and actions (Currie & Ravenscroft 1997, 2002; Goldman 2006); imagining deciding: the cognitive process generating decision-like imaginings (Currie & Ravenscroft 2002); imagining believing: the cognitive process generating belief-like imaginings (Currie & Ravenscroft 2002); imagining desiring: the cognitive process generating desire-like imaginings (Currie 2002). In what follows, we shall consider a couple of them in some detail.

Visualizing first. It is not particularly hard to see why visualizing satisfies condition (a). Typically: one can decide to visualize (or stop visualizing) something; the process is not driven by perceptual stimuli; and at least some parts of the visualization process are conscious. There might be cases in which visualizing is not under voluntary control, is stimulus-driven and, maybe, even entirely unconscious. This, however, is not a problem, since we know that there are clear cases satisfying (a).

Unsurprisingly, the difficult task for Simulation Theorists is to establish that visualizing has feature (b), that is: it is implemented by the reuse of the visual mechanism; and its outputs (that is, visual images) resemble genuine visual experiences. Simulation Theorists maintain that they have strong empirical evidence supporting the claim that visualizing satisfies PROC. Here is a sample (this and further evidence is extensively discussed in Currie 1995, Currie & Ravenscroft 2002, and Goldman 2006):

  1. visualizing recruits some of the brain areas involved in vision (Kosslyn et al. 1999);
  2. left-neglect patients have the same deficit in both seeing and visualizing—i.e., they do not have perceptual experience of the left half of the visual space and they also fail to imagine the left half of the imagined space (Bisiach & Luzzatti 1978);
  3. ocular movements occurring during visualizing approximate those happening during seeing (Spivey et al. 2000);
  4. some patients systematically mistake visual images for perceptual states (Goldenberg et al. 1995);
  5. visual perception and visualizing exhibit similar patterns of information-processing (facilitations, constraints, illusions) (Decety & Michel 1989; Kosslyn et al. 1999).

On this basis, Simulation Theorists conclude that visualizing is indeed implemented by the reuse of the visual mechanism (evidence i and ii) and that its outputs, i.e., visual images, do resemble visual experiences (evidence iii, iv, and v). Thus, visualizing is a process that qualifies as high-level simulation, and visual images are simulated mental states.

Visual images are mental simulations of perceptual states. Are there high-level simulation processes whose outputs are instead mental simulations of propositional attitudes? (If you think that visual experiences are propositional attitudes, you can rephrase the question as follows: are there high-level simulation processes whose outputs are mental simulations of non-sensory states?) Three candidate processes have received a fair amount of attention in the simulationist literature: imagining desiring, imagining deciding, and imagining believing. The claims made by Simulation Theorists about these cognitive processes and their output states have generated an intense debate (Doggett & Egan 2007; Funkhouser & Spaulding 2009; Kieran & Lopes 2003; Nichols 2006a, 2006b; Nichols & Stich 2003; Velleman 2000). We do not have space to review it here (two good entry points are the introduction to Nichols 2006a and the entry on imagination). Rather, we shall confine ourselves to briefly illustrating the simulationist case in favour of the thesis that imagining believing is a high-level simulation process.

I don’t believe that Rome is in France, but I can imagine believing it. Imagining believing typically is a conscious, stimulus-independent process under voluntary control. Thus, imagining believing satisfies condition (a). In order for it to count as an instance of a high-level simulation process, it also needs to have feature (b), that is: (b.i) its outputs (i.e., belief-like imaginings) have to resemble genuine beliefs in some significant respects; (b.ii) it has to be implemented by the reuse of the cognitive mechanism (whose use implements the cognitive process) that generates genuine beliefs—let us call it “the belief-forming mechanism”. Does imagining believing satisfy (b)? Currie and Ravenscroft (2002) argue in favour of (b.i). Beliefs are individuated in terms of their content and functional role. Belief-like imaginings—Currie and Ravenscroft say—have the same content and a similar functional role to their genuine counterparts. For example, the belief that Rome is in France and the belief-like imagining that Rome is in France have exactly the same propositional content: that Rome is in France. Moreover, belief-like imaginings mirror the inferential role of genuine beliefs. If one believes both that Rome is in France and that French is the language spoken in France, one can infer the belief that French is the language spoken in Rome. Analogously, from the belief-like imagining that Rome is in France and the genuine belief that French is the language spoken in France, one can infer the belief-like imagining that French is the language spoken in Rome. So far, so good (but see Nichols 2006b).
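The parallel between the inferential role of beliefs and that of belief-like imaginings can be put schematically: a single inference rule operates on the content, carrying the attitude tag along unchanged. The following toy sketch is our own illustration, not Currie and Ravenscroft’s formalism; attitudes are modeled, purely for display, as (attitude, content) pairs.

```python
# Toy illustration (invented here): one inference rule serves both genuine
# beliefs and belief-like imaginings, preserving the attitude "tag".

def infer_city_language(attitude_state, background_belief):
    """From an attitude toward 'Rome is in France', plus the genuine belief
    'French is the language spoken in France', derive the same attitude
    toward 'French is the language spoken in Rome'."""
    attitude, content = attitude_state
    if (content == "Rome is in France"
            and background_belief == "French is the language spoken in France"):
        return (attitude, "French is the language spoken in Rome")
    return None

# The same rule, applied to a belief and to a belief-like imagining:
belief_out = infer_city_language(
    ("belief", "Rome is in France"),
    "French is the language spoken in France")
imagining_out = infer_city_language(
    ("belief-like imagining", "Rome is in France"),
    "French is the language spoken in France")
```

The point of the sketch is only that the inferential transition is attitude-neutral: what varies between the two cases is the tag, not the rule.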

What about (b.ii)? Direct evidence bearing on it is scarce. However, Simulation Theorists can give an argument along the following lines. First, one owes an explanation of why belief-like imaginings are, well, belief-like—as we have said above, it seems that they have the same type of content as, and a functional role similar to, genuine beliefs. A possible explanation is that both types of mental states are generated by (cognitive processes implemented by) the same cognitive mechanism. Second, it goes without saying that our mind contains a mechanism for generating beliefs (the belief-forming mechanism), and that there must be some mechanism or other in charge of generating belief-like imaginings. It is also well known that cognitive mechanisms are evolutionarily costly to build and maintain. Thus, evolution might have adopted the parsimonious strategy of redeploying a pre-existing mechanism (the belief-forming mechanism) for a non-primary function, i.e., generating belief-like imaginings—in general, this hypothesis is also supported by the idea that neural reuse is one of the fundamental organizational principles of the brain (Anderson 2008). If one puts these two strands of reasoning together, one gets a prima facie case for the claim that imagining believing is implemented by the reuse of the belief-forming mechanism—that is, a prima facie case for the conclusion that imagining believing satisfies (b.ii). Since imagining believing appears also to satisfy (b.i) and (a), lacking evidence to the contrary, Simulation Theorists are justified in considering it a high-level simulation process.

Let’s take stock. We have examined a few suggested instances of high-level simulation processes. If Simulation Theorists are correct, they exhibit the following commonalities: they satisfy PROC (this is why they are simulation processes); they are typically conscious, under voluntary control, and stimulus-independent (this is why they are high-level). Do they have some other important features in common? Yes, they do—Simulation Theorists say. They are all under the control of a single cognitive mechanism: imagination (more precisely, Currie & Ravenscroft (2002) talk of Re-Creative Imagination, while Goldman (2006, 2009) uses the expression “Enactment Imagination”). The following passage will give you the basic gist of the proposal:

What is distinctive to high-level simulation is the psychological mechanism … that produces it, the mechanism of imagination. This psychological system is capable of producing a wide variety of simulational events: simulated seeings (i.e., visual imagery), … simulated motor actions (motor imagery), simulated beliefs, … and so forth. … In producing simulational outputs, imagination does not operate all by itself. … For example, it recruits parts of the visual system to produce visual imagery…. Nonetheless, imagination “takes the lead” in directing or controlling the other systems it enlists for its project. (Goldman 2009: 484–85)

Here is another way to make the point. We already know that, according to ST, visualizing is implemented by the reuse of the visual mechanism. In the above passage, Goldman adds that the reuse of the visual mechanism is initiated, guided, and controlled by imagination. The same applies, mutatis mutandis, to all cases of high-level simulation processes. For example, in imagining hearing, imagination “gets in control” of the auditory mechanism, takes it off-line, and (re)uses it to generate simulated auditory experiences. Goldman (2012b; Goldman & Jordan 2013) supports this claim by referring to neuroscientific data indicating that the same core brain network, the so-called “default network”, subserves all the following self-projections: prospection (projecting oneself into one’s future); episodic memory (projecting oneself into one’s past); perspective taking (projecting oneself into other minds); and navigation (projecting oneself into other places) (see Buckner & Carroll 2007 for a review). These different self-projections presumably involve different high-level simulation processes. However, they all have something in common: they all involve imagination-based perspectival shifts. Therefore, the fact that there is one brain network common to all these self-projections lends some support to the claim that there is one common cognitive mechanism, i.e., imagination, which initiates, guides, and controls all high-level simulation processes.

If Goldman is right, and all high-level simulation processes are guided by imagination, we can then explain why, in our common parlance, we tend to describe high-level simulation processes and outputs in terms of imaginings, images, imagery, etc. More importantly, we can also explain why high-level simulation processes are conscious, under voluntary control, and stimulus-independent. These are, after all, typical properties of imaginative processes. However, there are simulation processes that typically are neither conscious, nor under voluntary control, nor stimulus-independent. This indicates that they are not imagination-based. It is to this other type of simulation process that we now turn.

3.2 Low-Level Simulation Processes

Low-level simulation processes are cognitive processes with these features: (a*) they are typically unconscious, automatic, and stimulus-driven; (b) they satisfy PROC, that is, they are implemented by the reuse of a certain cognitive mechanism, C, and their output states resemble the output states generated by the use of C. What cognitive processes are, according to ST, instances of low-level simulation? The answer can be given in two words: mirroring processes. Clarifying what these two words mean, however, will take some time.

The story begins at the end of the 1980s in Parma, Italy, where the neuroscientist Giacomo Rizzolatti and his team were investigating the properties of neurons in the macaque monkey ventral premotor cortex. Through single-cell recording experiments, they discovered that the activity of neurons in area F5 is correlated with goal-directed motor actions and not with particular movements (Rizzolatti et al. 1988). For example, some F5 neurons fire when the monkey grasps an object, regardless of whether the monkey uses the left or the right hand. A plausible interpretation of these results is that neurons in monkey area F5 encode motor intentions (i.e., those intentions causing and guiding actions like reaching, grasping, holding, etc.) and not mere kinematic instructions (i.e., those representations specifying the fine-grained motor details of an action). (In-depth philosophical analyses of the notion of motor intention can be found in: Brozzo forthcoming; Butterfill & Sinigaglia 2014; Pacherie 2000.) This was already an interesting result, but it was not what the Parma group became famous for. Rather, their striking discovery happened a few years later, helped by serendipity. Researchers were recording the activity of F5 neurons in a macaque monkey performing an object-retrieval task. In between trials, the monkey stood still and watched an experimenter setting up the new trial, with microelectrodes still measuring the monkey’s brain activity. Surprisingly, some of the F5 neurons turned out to fire when the monkey saw the experimenter grasping and placing objects. This almost immediately led to new experiments, which revealed that a portion of F5 neurons fire not only when the monkey performs a certain goal-directed motor action (say, bringing a piece of food to the mouth), but also when it sees another agent performing the same (type of) action (di Pellegrino et al. 1992; Gallese et al. 1996; Rizzolatti et al. 1996).
For this reason, these neurons were aptly called “mirror neurons”, and it was proposed that they encode motor intentions both during action execution and action observation (Rizzolatti & Sinigaglia 2007, forthcoming). Later studies found mirror neurons also in the macaque monkey inferior parietal lobule (Gallese et al. 2002), which together with the ventral premotor cortex constitutes the monkey cortical mirror neuron circuit (Rizzolatti & Craighero 2004).

Subsequent evidence suggested that an action mirror mechanism—that is, a cognitive mechanism that gets activated both when an individual performs a certain goal-directed motor action and when she sees another agent performing the same action—also exists in the human brain (for reviews, see Rizzolatti & Craighero 2004, and Rizzolatti & Sinigaglia forthcoming). In fact, it appears that there are mirror mechanisms in the human brain outside the action domain as well: a mirror mechanism for disgust (Wicker et al. 2003), one for pain (Singer et al. 2004; Avenanti et al. 2005), and one for touch (Blakemore et al. 2005). Given the variety of mirror mechanisms, it is not easy to give a definition that fits them all. Goldman (2008b) has quite a good one, though, and we will draw on it: a cognitive mechanism is a mirror mechanism if and only if it gets activated both when an individual undergoes a certain mental event endogenously and when she perceives a sign that another individual is undergoing the same (type of) mental event. For example, the pain mirror mechanism gets activated both when individuals experience “a painful stimulus and … when they observe a signal indicating that [someone else] is receiving a similar pain stimulus” (Singer et al. 2004: 1157).

Having introduced the notions of mirror neuron and mirror mechanism, we can define the crucial notion of this section: mirroring process. We have seen that mirror mechanisms can get activated in two distinct modes: (i) endogenously; (ii) in the perception mode. For example, my action mirror mechanism gets endogenously activated when I grasp a mug, while it gets activated in the perception mode when I see you grasping a mug. Following Goldman (2008b) again, let us say that a cognitive process is a mirroring process if and only if it is constituted by the activation of a mirror mechanism in the perception mode. For example, what goes on in my brain when I see you grasping a mug counts as a mirroring process.
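Since the definition turns entirely on which of the two activation modes is in play, it can be displayed as a simple toy sketch. The class and function names below are our own illustrative inventions; the only substantive point is the Goldman-style criterion that a mirroring process just is an activation of a mirror mechanism in the perception mode.

```python
# Toy sketch (illustrative only) of the two activation modes of a mirror
# mechanism, and of the definition of "mirroring process".

class MirrorMechanism:
    def __init__(self, domain: str):
        self.domain = domain  # e.g., "action", "disgust", "pain", "touch"

    def activate_endogenously(self) -> dict:
        # e.g., my action mirror mechanism when I myself grasp a mug
        return {"domain": self.domain, "mode": "endogenous"}

    def activate_in_perception_mode(self, perceived_sign: str) -> dict:
        # e.g., the same mechanism when I see you grasping a mug
        return {"domain": self.domain, "mode": "perception",
                "sign": perceived_sign}

def is_mirroring_process(activation: dict) -> bool:
    # Definition: mirroring process = activation in the perception mode
    return activation["mode"] == "perception"

action_mechanism = MirrorMechanism("action")
assert not is_mirroring_process(action_mechanism.activate_endogenously())
assert is_mirroring_process(
    action_mechanism.activate_in_perception_mode("you grasping a mug"))
```

Nothing here models how mirror mechanisms work neurally, of course; the sketch only fixes the terminology used in what follows.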

Now that we know what mirroring processes are, we can return to our initial problem—i.e., whether they are low-level simulation processes (remember that a cognitive process is a low-level simulation process if and only if: (a*) it is typically unconscious, automatic, and stimulus-driven; (b) it satisfies PROC). For reasons of space, we will focus on disgust mirroring only.

Wicker et al. (2003) carried out an fMRI study in which participants first observed videos of disgusted facial expressions and subsequently underwent a disgust experience via inhaling foul odorants. It turned out that the same neural area—the left anterior insula—that was preferentially activated during the experience of disgust was also preferentially activated during the observation of the disgusted facial expressions. These results indicate the existence of a disgust mirror mechanism. Is disgust mirroring (the activation of the disgust mirror mechanism in the perception mode) a low-level simulation process? Simulation Theorists answer in the affirmative.

Here is why disgust mirroring satisfies (a*): the process is stimulus-driven: it is sensitive to certain perceptual stimuli (disgusted facial expressions); it is automatic; and it is typically unconscious (even though its output, i.e., “mirrored disgust”, is sometimes conscious). What about condition (b)? Presumably, the primary (evolutionary) function of the disgust mechanism is to produce a disgust response to spoiled food, germs, parasites, etc. (Rozin et al. 2008). In the course of evolution, this mechanism could have been subsequently co-opted to also get activated by the perception (of a sign) that someone else is experiencing disgust, in order to facilitate social learning of food preferences (Gariépy et al. 2014). If this is correct, then disgust mirroring is implemented by the reuse of the disgust mechanism (by employing this mechanism for a function different from its primary one). Moreover, the output of disgust mirroring resembles the genuine experience of disgust in at least two significant respects: first, both mental states have the same neural basis; second, when conscious, they share a similar phenomenology. Accordingly, (b) is satisfied. By putting all this together, Simulation Theorists conclude that disgust mirroring is a low-level simulation process, and mirrored disgust is a simulated mental state (Goldman 2008b; Barlassina 2013).

4. The Role of Mental Simulation in Mindreading

In the previous section, we examined the case for the psychological reality of mental simulation. We now turn to ST as a theory of mindreading. We will tackle two main issues: the extent to which mindreading is simulation-based, and how simulation-based mindreading works.

4.1 The Centrality of Mental Simulation in Mindreading

ST proposes that mental simulation plays a central role in mindreading, i.e., in the capacity to represent and reason about others’ mental states. What does “central” mean here? Does it mean the central role, with other contributors to mindreading being merely peripheral? This is an important question, since in recent years hybrid models have been proposed according to which both mental simulation and theorizing play important roles in mindreading (see section 5.2).

A possible interpretation of the claim that mental simulation plays a central role in representing others’ mental states is that mindreading events are always simulation-based, even if they sometimes also involve theory. Some Simulation Theorists, however, reject this interpretation, since they maintain that there are mindreading events in which mental simulation plays no role at all (Currie & Ravenscroft 2002). For example, if I know that Little Jimmy is happy every time he finds a dollar, and I also know that he has just found a dollar, I do not need to undergo any simulation process to conclude that Little Jimmy is happy right now. I just need to carry out a simple logical inference.

However, generalizations like “Little Jimmy is happy every time he finds a dollar” are ceteris paribus rules. People readily recognize exceptions: for example, we recognize situations in which Jimmy would probably not be happy even if he found a dollar, including some in which finding a dollar might actually make him unhappy. Rather than applying some additional or more complex rules that cover such situations, it is arguable that putting ourselves in Jimmy’s situation and using “good common sense” alerts us to these exceptions and overrides the rule. If that is correct, then simulation is acting as an overseer or governor even when people appear to be simply applying rules.

Goldman (2006) suggests that we cash out the central role of mental simulation in representing others’ mental states as follows: mindreading is often simulation-based. Goldman’s suggestion, however, turns out to be empty, since he explicitly refuses to specify what “often” means in this context:

How often is often? Every Tuesday, Thursday, and Saturday? Precisely what claim does ST mean to make? It is unreasonable to demand a precise answer at this time. (Goldman 2006: 42; see also Goldman 2002; Jeannerod & Pacherie 2004)

Perhaps a better way to go is to characterize the centrality of mental simulation for mindreading not in terms of frequency of use, but in terms of importance. Currie and Ravenscroft make the very plausible suggestion that “one way to see how important a faculty is for performing a certain task is to examine what happens when the faculty is lacking or damaged” (Currie & Ravenscroft 2002: 51). On this basis, one could say that mental simulation plays a central role in mindreading if and only if: if one’s simulational capacity (i.e., the capacity to undergo simulation processes/simulated mental states) were impaired, then one’s mindreading capacity would be significantly impaired.

An elaboration of this line of thought comes from Gordon (2005)—see also Gordon (1986, 1996) and Peacocke (2005)—who argues that someone lacking the capacity for mental simulation would not be able to represent mental states as such, since she would be incapable of representing anyone as having a mind in the first place. Gordon’s argument is essentially as follows:

We represent something as having a mind, as having mental states and processes, only if we represent it as a subject (“subject of experience”, in formulations of “the hard problem of consciousness”), where “a subject” is understood as a generic “I”. This distinguishes it from a “mere object” (and is also a necessary condition for a more benevolent sort of empathy).

To represent something as another “I” is to represent it as a possible target of self-projection: as something one might (with varying degrees of success) imaginatively put oneself in the place of. (Of course, one can fancifully put oneself in the place of just about anything—a suspension bridge, even; but that is not a reductio ad absurdum, because one can also fancifully represent just about anything as having a mind.)

It is not clear, however, what consequences Gordon’s conceptual argument would have for mindreading, if any. Even if a capacity to self-project were needed for representing mental states as such, would lack of this capacity necessarily impair mindreading? That is, couldn’t one explain, predict, and coordinate behavior using a theory of internal states, without conceptualizing these as states of an I or subject? As a more general point, Simulation Theorists have never provided a principled account of what would constitute a “significant impairment” of mindreading capacity.

To cut a long story short, ST claims that mental simulation plays a central role in mindreading, but at the present stage its proponents do not agree on what this centrality exactly amounts to. We will come back to this issue in section 5, when we shall discuss the respective contributions of mental simulation and theorizing in mindreading.

We now turn to a different problem: how does mental simulation contribute to mindreading when it does? That is, how does simulation-based mindreading work? Here again, Simulation Theorists disagree about what the right answer is. In what follows, we explore some dimensions of disagreement.

4.2 Constitution or Causation?

Some Simulation Theorists defend a strong view of simulation-based mindreading (Gordon 1986, 1995, 1996; Gallese et al. 2004; Gallese & Sinigaglia 2011). They maintain that many simulation-based mindreading events are (entirely) constituted by mental simulation events (where mental simulation events are simulated mental states or simulation processes). In other words, some Simulation Theorists claim that, on many occasions, the fact that a subject S is representing someone else’s mental states is nothing over and above the fact that S is undergoing a mental simulation event: the former fact reduces to the latter. For example, Lisa’s undergoing a mirrored disgust experience as a result of observing John’s disgusted face would count as a mindreading event: Lisa’s simulated mental state would represent John’s disgust (Gallese et al. 2004). Let us call this “the Constitution View”.

We shall elaborate on the details of the Constitution View in section 4.3. Before doing that, we consider an argument that has been directed against it over and over again, and which is supposed to show that the Constitution View is a non-starter (Fuller 1995; Heal 1995; Goldman 2008b; Jacob 2008, 2012). Lacking a better name, we will call it “the Anti-Constitution argument”. Here it is. By definition, a mindreading event is a mental event in which a subject, S, represents another subject, Q, as having a certain mental state M. Now—the argument continues—the only way in which S can represent Q as having M is this: S has to employ the concept of that mental state and form the judgment, or the belief, that Q is in M. Therefore, a mindreading event is identical to an event of judging that someone else has a certain mental state (where this entails the application of mentalistic concepts). It follows from this that mental simulation events cannot be constitutive of mindreading events, since the former events are not events of judging that someone else has a certain mental state. An example should clarify the matter. Consider Lisa again, who is undergoing a mirrored disgust experience as a result of observing John’s disgusted face. Clearly, undergoing such a simulated disgust experience is a different mental event from judging that John is experiencing disgust. Therefore, Lisa’s mental simulation does not constitute a mindreading event.

In section 4.3, we will discuss how the defenders of the Constitution View have responded to this argument. Suppose for the moment that the Anti-Constitution argument is sound. What alternative pictures of simulation-based mindreading are available? Those Simulation Theorists who reject the Constitution View tend to endorse the Causation View, according to which mental simulation events never constitute mindreading events, but only causally contribute to them. The best-developed version of this view is Goldman’s (2006) Three-Stage Model (again, this is our label, not his), whose basic structure is as follows:

STAGE 1. Mental simulation: Subject S undergoes a simulation process, which outputs a token simulated mental state m*.

STAGE 2. Introspection: S introspects m* and categorizes/conceptualizes it as (a state of type) M.

STAGE 3. Judgment: S attributes (a state of type) M to another subject, Q, through the judgment Q is in M.

(The causal relations among these stages are such that: STAGE 1 causes STAGE 2, and STAGE 2 in turn causes STAGE 3. See Spaulding 2012 for a discussion of the notion of causation in this context.)
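For readers who find it helpful, the causal structure of the three stages can be rendered as a toy program. The sketch is purely illustrative and ours, not Goldman’s: every function name and data structure below is a hypothetical stand-in for a cognitive mechanism, not a claim about how the mind is implemented.

```python
# Toy sketch of the Three-Stage Model (illustrative only; all names are ours).

def simulate(target_situation):
    """STAGE 1: run one's own machinery 'off-line' on the target's inputs,
    yielding a token simulated mental state m*."""
    return {"kind": "disgust", "content": target_situation}

def introspect(m_star):
    """STAGE 2: introspect m* and categorize it as a state of type M."""
    return m_star["kind"]

def judge(target, state_type):
    """STAGE 3: attribute a state of type M to the target via a judgment."""
    return f"{target} is in a state of {state_type}"

def mindread(target, target_situation):
    m_star = simulate(target_situation)   # STAGE 1 causes STAGE 2
    state_type = introspect(m_star)       # STAGE 2 causes STAGE 3
    return judge(target, state_type)

print(mindread("John", "rotten food"))    # -> John is in a state of disgust
```

On the Causation View, only the final `judge` step counts as the mindreading event proper; the first two steps merely cause it.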

Here is our trite example. On the basis of observing John’s disgusted facial expression, Lisa comes to judge that John is having a disgust experience. How did she arrive at the formation of this judgment? Goldman’s answer is as follows. The observation of John’s disgusted facial expression triggered a disgust mirroring process in Lisa, resulting in Lisa’s undergoing a mirrored disgust experience (STAGE 1). This caused Lisa to introspect her simulated disgust experience and to categorize it as a disgust experience (STAGE 2) (the technical notion of introspection used by Goldman will be discussed in section 4.4). This, in turn, brought about the formation of the judgment John is having a disgust experience (STAGE 3). Given that, according to Goldman, mindreading events are identical to events of judging that someone else has a certain mental state, it is only this last stage of Lisa’s cognitive process that constitutes a mindreading event. The previous two stages, on the other hand, were merely causal contributors to it. And mental simulation took place entirely at STAGE 1. This is why the Three-Stage Model is a version of the Causation View: according to the model, mental simulation events causally contribute to, but do not constitute, mindreading events.

4.3 Mindreading without Judgment

The main strategy adopted by the advocates of the Constitution View in responding to the Anti-Constitution argument consists in impugning the identification of mindreading events with events of judging that someone else has a certain mental state. A prominent version of this position is Gordon’s (1995, 1996) Radical Simulationism, according to which representing someone else’s mental states does not require the formation of judgments involving the application of mentalistic concepts. Rather, Gordon proposes that the main bulk of mindreading events are non-conceptual representations of others’ mental states, where these non-conceptual representations are constituted by mental simulation events. If this is true, many mindreading events are constituted by mental simulation events, and thus the Constitution View is correct.

The following case should help to get Radical Simulationism across. Suppose that I want to represent the mental state that an individual—call him “Mr Tees”—is in right now. According to Gordon, there is a false assumption behind the idea that, in order to do so, I need to form a judgment with the content Mr Tees is in M (where “M” is a placeholder for a mentalistic concept). The false assumption is that the only thing that I can do is to simulate myself in Mr Tees’s situation. As Gordon points out, it is also possible for me to simulate Mr Tees in his situation. And if I do so, my very simulation of Mr Tees constitutes a representation of his mental state, without the need of forming any judgment. This is how Gordon makes his point:

To simulate Mr Tees in his situation requires an egocentric shift, a recentering of my egocentric map on Mr Tees. He becomes in my imagination the referent of the first person pronoun “I”. … Such recentering is the prelude to transforming myself in imagination into Mr Tees as much as actors become the characters they play. … But once a personal transformation has been accomplished, … I am already representing him as being in a certain state of mind. (Gordon 1995: 55–56)

It is important to stress the dramatic difference between Gordon’s Radical Simulationism and Goldman’s Three-Stage Model. According to the latter, mental simulation events causally contribute to representing other people’s mental states, but the mindreading event proper is always constituted by a judgment (or a belief). Moreover, Goldman maintains that the ability to form such judgments requires both the capacity to introspect one’s own mental states (more on this in section 4.4) and the possession of mentalistic concepts. None of this is true of Radical Simulationism. Rather, Gordon proposes that, in the large majority of cases, it is the very mental simulation event itself that constitutes a representation of someone else’s mental states. Furthermore, since such mental simulation events require neither the capacity for introspection nor the possession of mentalistic concepts, Radical Simulationism entails the surprising conclusion that these two features play at best a very minor role in mindreading. A testable corollary is that social interaction often relies on an understanding of others that does not require the explicit application of mental state concepts.

4.4 Mindreading and Introspection

From what we have said so far, one might expect Gordon to agree with Goldman on at least one point. Clearly, Gordon has to admit that there are some cases of mindreading in which a subject attributes a mental state to someone else through a judgment involving the application of mentalistic concepts. Surely, Gordon cannot deny that there are occasions on which we think things like Mary believes that John is late or Pat desires to visit Lisbon. Being a Simulation Theorist, Gordon will also presumably be eager to maintain that many such mindreading events are based on mental simulation events. But if Gordon admits that much, should he not also concede that Goldman’s Three-Stage Model is the right account of at least those simulation-based mindreading events? Surprising as it may be, Gordon still disagrees.

Gordon (1995) accepts that there are occasions on which a subject arrives at a judgment about someone else’s mental state on the basis of some mental simulation event. He might also concede to Goldman that such a judgment involves mentalistic concepts (but see Gordon’s 1995 distinction between comprehending and uncomprehending ascriptions). Contra Goldman, however, Gordon argues that introspection plays no role at all in the generation of these judgments. Focusing on a specific example will help us to clarify this further disagreement between Goldman and Gordon.

Suppose that I know that Tom believes that (1) and (2):

  1. Fido is a dog
  2. All dogs enjoy watching TV

On this basis, I attribute to Tom the further belief that (3):

  3. Fido enjoys watching TV

Goldman’s Three-Stage Model explains this mindreading act in the following way. FIRST STAGE: I imagine believing what Tom believes (i.e., I imagine believing that (1) and (2)); I then feed those belief-like imaginings into my reasoning mechanism (in the off-line mode); as a result, my reasoning mechanism outputs the imagined belief that (3). The SECOND STAGE of the process consists in introspecting this simulated belief and categorizing it as a belief. Crucially, in Goldman’s model, “introspection” does not merely refer to the capacity to self-ascribe mental states. Rather, it picks out a distinctive cognitive method for self-ascription, a method which is typically described as non-inferential and quasi-perceptual (see the section on inner sense accounts in the entry on self-knowledge). In particular, Goldman (2006) characterizes introspection as a transduction process that takes the neural properties of a mental state token as input and outputs a categorization of the type of state. In the case that we are considering, my introspective mechanism takes the neural properties of my token simulated belief as input and categorizes it as a belief as output. After all this, the THIRD STAGE occurs: I project the categorized belief onto Tom, through the judgment Tom believes that Fido enjoys watching TV. (You might wonder where the content of Tom’s belief comes from. Goldman (2006) has a story about that too, but we will leave this aside.)

What about Gordon? How does he explain, in a simulationist fashion but without resorting to introspection, the passage from knowing that Tom believes that (1) and (2) to judging that Tom believes that (3)? According to Gordon, the first step in the process is, of course, imagining to be Tom—thus believing, in the context of the simulation, that (1) and (2). This results (again in the context of the simulation) in the formation of the belief that (3). But how do I now go about discovering that *I*, Tom, believe that (3)? How can one perform such a self-ascription if not via introspection? A suggestion given by Gareth Evans will show us how—Gordon thinks.

Evans (1982) famously argued that we answer the question “Do I believe that p?” by answering another question, namely “Is it the case that p?” In other words, according to Evans, we ascribe beliefs to ourselves not by introspecting, or by “looking inside”, but by looking “outside” and trying to ascertain how the world is. If, e.g., I want to know whether I believe that Manchester is bigger than Sheffield, I just ask myself “Is Manchester bigger than Sheffield?” If I answer in the affirmative, then I believe that Manchester is bigger than Sheffield. If I answer in the negative, then I believe that Manchester is not bigger than Sheffield. If I do not know what to answer, then I do not have any belief with regard to this subject matter.

Gordon (1986, 1995) maintains that this self-ascription strategy—which he labels “the ascent routine” (Gordon 2007)—is also the strategy that we employ, in the context of a simulation, to determine the mental states of the simulated agent:

In a simulation of O, I settle the question of whether O believes that p by simply asking … whether it is the case that p. That is, I simply concern myself with the world—O’s world, the world from O’s perspective. … Reporting O’s beliefs is just reporting what is there. (Gordon 1995: 60)

So, this is how, in Gordon’s story, I come to judge that Tom has the belief that Fido enjoys watching TV. In the context of the simulation, *I* asked *myself* (where both “*I*” and “*myself*” in fact refer to Tom) whether *I* believe that Fido enjoys watching TV. And *I* answered this question by answering another question, namely, whether it is the case that Fido enjoys watching TV. Given that, from *my* perspective, Fido enjoys watching TV (after all, from *my* perspective, Fido is a dog and all dogs enjoy watching TV), *I* expressed my belief by saying: “Yes, *I*, Tom, believe that Fido enjoys watching TV”. As you can see, in such a story, introspection does no work at all. (We will come back to the role of introspection in mindreading in section 6.2.)
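The ascent routine can also be put in toy computational form. Again, the sketch and all its names are ours, purely for illustration: the point it makes is that the belief ascription is settled by answering a question about the world from the target’s perspective, with no introspective step anywhere in the procedure.

```python
# Toy sketch of Gordon's ascent routine (illustrative; names and world model
# are ours). "Do I believe that p?" is answered by answering "Is it the case
# that p?" against the world as represented from the simulated perspective.

# Tom's perspective, represented as a set of accepted premises.
toms_world = {
    "Fido is a dog",
    "All dogs enjoy watching TV",
}

def holds_from_perspective(p, world):
    """Answer 'Is it the case that p?' from within the simulation."""
    if p in world:
        return True
    # One hard-coded inference step, just for the running example:
    if p == "Fido enjoys watching TV":
        return ("Fido is a dog" in world
                and "All dogs enjoy watching TV" in world)
    return False

def ascent_routine(target, p, world):
    """Ascribe a belief by 'looking outside' at the target's world,
    not by introspecting any inner state."""
    if holds_from_perspective(p, world):
        return f"{target} believes that {p}"
    return f"{target} does not believe that {p}"

print(ascent_routine("Tom", "Fido enjoys watching TV", toms_world))
```

Note the contrast with the earlier Three-Stage sketch: nothing here inspects a token mental state; the routine only ever consults the (simulated) world.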

4.5 Summary

In sections 2, 3, and 4 we dwelt upon the “internal” disagreements among Simulation Theorists. It goes without saying that such disagreements are both wide and deep. In fact, different Simulation Theorists give different answers to such fundamental questions as: “What is mental simulation?”, “How does mental simulation contribute to mindreading?”, and “What is the role of introspection in mindreading?” In light of such differences of opinion in the simulationist camp, one might conclude that, after all, Stich and Nichols (1997) were right when saying that there is no such thing as the Simulation Theory. However, if one considers what is shared among Simulation Theorists, one will realize that there is unity amidst this diversity. A good way to reveal the commonalities among different versions of ST is by contrasting ST with its arch-enemy, i.e., the Theory-Theory of mindreading. This is what we do in the next section.

5. Simulation Theory and Theory-Theory

ST is only one of several accounts of mindreading on the market. A rough-and-ready list of the alternatives should at least include: the Intentional Stance Theory (Dennett 1987; Gergely & Csibra 2003; Gergely et al. 1995); Interactionism (Gallagher 2001; Gallagher & Hutto 2008; De Jaegher et al. 2010); and the Theory-Theory (Gopnik & Wellman 1992; Gopnik & Meltzoff 1997; Leslie 1994; Scholl & Leslie 1999). In this entry, we will discuss the Theory-Theory (TT) only, given that the TT-ST controversy has constituted the focal point of the debate on mindreading during the last 30 years or so.

5.1 The Theory-Theory

As suggested by its name, the Theory-Theory proposes that mindreading is grounded in the possession of a Theory of Mind (a “folk psychology”)—i.e., it is based on tacit knowledge of the following body of information: a number of “folk” laws or principles connecting mental states with sensory stimuli, behavioural responses, and other mental states. Here are a couple of putative examples:

Law of sight: If S is in front of object O, S directs her eye-gaze to O, S’s visual system is properly functioning, and the environmental conditions are optimal, then ceteris paribus S will see O.

Law of the practical syllogism: If S desires a certain outcome G and S believes that by performing a certain action A she will obtain G, then ceteris paribus S will decide to perform A.

The main divide among Theory-Theorists concerns how the Theory of Mind is acquired—i.e., it concerns where this body of knowledge comes from. According to the Child-Scientist Theory-Theory (Gopnik & Wellman 1992; Gopnik & Meltzoff 1997), a child constructs a Theory of Mind exactly as a scientist constructs a scientific theory: she collects evidence, formulates explanatory hypotheses, and revises these hypotheses in the light of further evidence. In other words, “folk” laws and principles are obtained through hypothesis testing and revision—a process that, according to proponents of this view, is guided by a general-purpose, Bayesian learning mechanism (Gopnik & Wellman 2012). By contrast, the Nativist Theory-Theory (Carruthers 2013; Scholl & Leslie 1999) argues that a significant part of the Theory of Mind is innate, rather than learned. More precisely, Nativists typically regard the core of the Theory of Mind as resulting from the maturation of a cognitive module specifically dedicated to representing mental states.

These disagreements notwithstanding, the main tenet of TT is clear enough: attributions of mental states to other people are guided by the possession of a Theory of Mind. For example, if I know that you desire to buy a copy of The New York Times and I know that you believe that if you go to News & Booze you can buy a copy, then I can use the Law of the Practical Syllogism to infer that you will decide to go to News & Booze.
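The information-rich character of TT can be made vivid with a toy sketch in which the Law of the Practical Syllogism is an explicitly stored rule applied to known mental states. The encoding is ours and purely illustrative; nothing in TT requires this particular representation.

```python
# Toy sketch of TT-style prediction via the Law of the Practical Syllogism
# (illustrative only; the rule encoding is ours). The attributor applies a
# stored folk-psychological law to the target's known desires and beliefs.

def practical_syllogism(desires, beliefs):
    """If S desires outcome G and believes that performing action A will
    obtain G, then, ceteris paribus, S will decide to perform A."""
    decisions = []
    for goal in desires:
        for (action, outcome) in beliefs:
            if outcome == goal:
                decisions.append(action)
    return decisions

# The target's known mental states, as (explicitly stored) information:
your_desires = ["have a copy of The New York Times"]
your_beliefs = [("go to News & Booze", "have a copy of The New York Times")]

print(practical_syllogism(your_desires, your_beliefs))
# -> ['go to News & Booze']
```

The contrast with ST is that here the prediction comes from a stored general law plus information about the target, rather than from running one’s own decision-making mechanism off-line.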

TT has been so popular among philosophers and cognitive scientists that the explanation it proposes has ended up naming the very phenomenon to be explained: on many occasions, scholars use the expression “Theory of Mind” as a synonym of “mindreading”. Simulation Theorists, however, have never been particularly impressed by this. According to them, there is no need to invoke tacit knowledge of a Theory of Mind to account for mindreading, since a more parsimonious explanation is available: we reuse our own cognitive mechanisms to mentally simulate others’ mental states. For example, why do I need to know the Law of the Practical Syllogism, if I can employ my own decision-making mechanism (which I have anyway) to simulate your decision? It is uneconomical—Simulation Theorists say—to resort to an information-rich strategy if an information-poor strategy will do equally well.

The difference between TT and ST can be further illustrated through a nice example given by Stich and Nichols (1992). Suppose that you want to predict the behavior of an airplane in certain atmospheric conditions. You can collect the specifications of the airplane and infer, on the basis of aerodynamic theory, how the airplane will behave. Alternatively, you can build a model of the airplane and run a simulation. The former scenario approximates the way in which TT describes our capacity to represent others’ mental states, while the latter approximates ST. Two points need to be stressed, though. First, while knowledge of aerodynamic theory is explicit, TT says that our knowledge of the Theory of Mind is typically implicit (or tacit). That is, someone who knows aerodynamic theory is aware of the theory’s laws and principles and is able to report them correctly, while the laws and principles constituting one’s Theory of Mind typically lie outside awareness and reportability. Second, when we run a simulation of someone else’s mental states, we do not need to build a model: we are the model—that is, we use our own mind as a model of others’ minds.

Simulation Theorists maintain that the default state for the “model” is one in which the simulator simply makes no adjustments when simulating another individual. That is, ST has it that we are automatically disposed to attribute to a target mental states no different from our own current states. This would often serve adequately in social interaction between people who are cooperating or competing in what is, for practical purposes, the same situation. We tend to depart from this default when we perceive relevant differences between others’ situations and our own. In such cases, we might find ourselves adjusting for situational differences by putting ourselves imaginatively in what we take the other’s situation to be.

We might also make adjustments for individual differences. An acquaintance will soon be choosing between candidate a and candidate b in an upcoming election. To us, projecting ourselves imaginatively into that voting situation, the choice is glaringly obvious: candidate a, by any reasonable criteria. But then we may wonder whether this imaginative projection into the voting situation adequately represents our acquaintance in that situation. We might recall things the person has said, or peculiarities of dress style, diet, or entertainment, that might seem relevant. Internalizing such behavior ourselves, trying to “get behind” it as an actor might get behind a scripted role, we might then put, as it were, a different person into the voting situation, one who might choose candidate b.

Such a transformation would require quarantining some of our own mental states, preferences, and dispositions, inhibiting them so that they do not contaminate our off-line decision-making in the role of the other. Such inhibition of one’s own mental states would be cognitively demanding. For that reason, ST predicts that mindreading will be subject to egocentric errors—that is, it predicts that we will often attribute to a target the mental state that we would have if we were in the target’s situation, rather than the state the target is actually in (Goldman 2006). In section 6.2, we shall discuss whether this prediction is borne out by the data.

5.2 Collapse or Cooperation?

On the face of it, ST and TT could not be more different from one another. Some philosophers, however, have argued that, on closer inspection, ST collapses into TT, thus revealing itself as a form of TT in disguise. The collapse argument was originally formulated by Daniel Dennett (1987):

If I make believe I am a suspension bridge and wonder what I will do when the wind blows, what “comes to my mind” in my make-believe state depends on … my knowledge of physics … Why should my making believe I have your beliefs be any different? In both cases, knowledge of the imitated object is needed to drive the … “simulation”, and the knowledge must be … something like a theory. (Dennett 1987: 100–101, emphasis added)

Dennett’s point is clear. If I imagine being, say, a bridge, what I imagine will depend on my theory of bridges. Suppose that I have a folk theory of bridges that contains the following principle: “A bridge cannot sustain a weight superior to its own weight”. In this case, if I imagine an elephant weighing three tons walking over a bridge weighing two tons, I will imagine the bridge collapsing. Since my “bridge-simulation” is entirely theory-driven, “simulation” is a misnomer. The same carries over to “simulating other people’s mental states”, Dennett says. If I try to imagine your mental states, what I imagine will depend entirely on my Theory of Mind. Therefore, the label “mental simulation” is misleading.

Heal (1986) and Goldman (1989) promptly replied to Dennett. Fair enough, if a system S tries to simulate the state of a radically different system Q (e.g., if a human being tries to simulate the state of a bridge), then S’s simulation must be guided by a theory. However, if a system S tries to simulate the state of a relevantly similar system S*, then S’s simulation can be entirely process-driven: to simulate the state which S* is in, S simply has to run in itself a process similar to the one S* underwent. Given that, for all intents and purposes, human beings are relevantly similar to each other, a human being can mentally simulate what follows from having another human being’s mental states without resorting to a body of theoretical knowledge about the mind’s inner workings. She will just need to reuse her own cognitive mechanisms to implement a simulation process.

This reply invited the following response (Jackson 1999). If the possibility of process-driven simulation is grounded in the similarity between the simulator and the simulated, then I have to assume that you are relevantly similar to me when I mentally simulate your mental states. This particular assumption, in turn, will be derived from a general principle—something like “Human beings are psychologically similar”. Therefore, mental simulation is grounded in the possession of a theory. The threat of collapse is back! One reply to Jackson’s argument is as follows (for other replies see Goldman 2006): the fact that process-driven simulation is grounded in the similarity among human beings does not entail that, in order to run a simulation, a simulator must know (or believe, or assume) that such similarity obtains; no more, indeed, than the fact that the solubility of salt is grounded in the molecular structure of salt entails that a pinch of salt needs to know chemistry in order to dissolve in water.

Granting that ST and TT are distinct theories, we can now ask a different question: are the theories better off individually, or should they join forces somehow? Let us be more explicit. Can ST on its own offer an adequate account of mindreading (or at least of the great majority of its episodes)? And what about TT? A good number of theorists now believe that neither ST nor TT alone will do. Rather, many would agree that these two theories need to cooperate if they are to reach a satisfactory explanation of mindreading. Some authors have put forward TT-ST hybrid models, i.e., models in which tacit knowledge of a Theory of Mind is the central aspect of mindreading, but is in many cases supplemented by simulation processes (Botterill & Carruthers 1999; Nichols & Stich 2003). Other authors have instead defended ST-TT hybrid models, namely, accounts of mindreading in which pride of place is given to mental simulation, but in which the possession of a Theory of Mind nonetheless plays some non-negligible role (Currie & Ravenscroft 2002; Goldman 2006; Heal 2003). Since this entry is dedicated to ST, we will briefly touch upon one instance of the latter variety of hybrid account.

Heal (2003) suggested that the domain of ST is restricted to those mental processes involving rational transitions among contentful mental states. To wit, Heal maintains that mental simulation is the cognitive routine that we employ to represent other people’s rational processes, i.e., those cognitive processes which are sensitive to the semantic content of the mental states involved. On the other hand,

when starting point and/or outcome are [states] without content, and/or the connection is not [rationally] intelligible, there is no reason … to suppose that the process … can be simulated. (Heal 2003: 77)

An example will clarify the matter. Suppose that I know that you desire to eat sushi, and that you believe that you can order sushi by calling Yama Sushi. To reach the conclusion that you will decide to call Yama Sushi, I only need to imagine desiring and believing what you desire and believe, and to run a simulated decision-making process in myself. No further knowledge is required to predict your decision: simulation alone will do the job. Consider, on the other hand, the situation in which I know that you took a certain drug and I want to figure out what your mental states will be. In this case—Heal says—my prediction cannot be based on mental simulation. Rather, I need to resort to a body of information about the likely psychological effects of that drug, i.e., I have to resort to a Theory of Mind (fair enough, I can also take the drug myself, but this will not count as mental simulation). This, according to Heal, generalizes to all cases in which a mental state is the input or the output of a mere causal process. In those cases, mental simulation is ineffective and should be replaced by theorizing. Still, those cases do not constitute the central part of mindreading. In fact, many philosophers and cognitive scientists would agree that the crucial component of human mindreading is the ability to reason about others’ propositional attitudes. And this is exactly the ability that, according to Heal, should be explained in terms of mental simulation. This is why Heal’s proposal counts as an ST-TT hybrid, rather than the other way around.

6. Simulation Theory: Pros and Cons

ST has sparked a lively debate, which has been going on since the end of the 1980s. This debate has dealt with a great number of theoretical and empirical issues. On the theoretical side, we have seen philosophical discussions of the relation between ST and functionalism (Gordon 1986; Goldman 1989; Heal 2003; Stich & Ravenscroft 1992), and of the role of tacit knowledge in cognitive explanations (Davies 1987; Heal 1994; Davies & Stone 2001), just to name a few. Examples of empirical debates are: how to account for mindreading deficits in Autism Spectrum Disorders (Baron-Cohen 2000; Currie & Ravenscroft 2002), or how to explain the evolution of mindreading (Carruthers 2009; Lurz 2011). It goes without saying that discussing all these bones of contention would require an entire book (most probably, a series of books). In the last section of this entry, we confine ourselves to briefly introducing the reader to a small sample of the main open issues concerning ST.

6.1 The Mirror Neurons Controversy

We wrote that ST proposes that mirroring processes (i.e., activations of mirror mechanisms in the perception mode): (A) are (low-level) simulation processes, and (B) contribute (either constitutively or causally) to mindreading (Gallese et al. 2004; Gallese & Goldman 1998; Goldman 2006, 2008b; Hurley 2005). Both (A) and (B) have been vehemently contested by ST’s opponents.

Beginning with (A), it has been argued that mirroring processes do not qualify as simulation processes, because they fail to satisfy the definition of “simulation process” (Gallagher 2007; Herschbach 2012; Jacob 2008; Spaulding 2012) and/or because they are better characterized in different terms, e.g., as enactive perceptual processes (Gallagher 2007) or as elements in an information-rich process (Spaulding 2012). As for (B), the main worry runs as follows. Granting that mirroring processes are simulation processes, what evidence do we have for the claim that they contribute to mindreading? This, in particular, has been asked with respect to the role of mirroring processes in “action understanding” (i.e., the interpretation of an agent’s behavior in terms of the agent’s intentions, goals, etc.). After all, the neuroscientific evidence just indicates that action mirroring correlates with episodes of action understanding, but correlation is not causation, let alone constitution. In fact, there are no studies examining whether disruption of the monkey mirror neuron circuit results in action understanding deficits, and the evidence on human action understanding following damage to the action mirror mechanism is inconclusive at best (Hickok 2009). In this regard, some authors have suggested that the most plausible hypothesis is instead that action mirroring follows (rather than causes or constitutes) the understanding of others’ mental states (Csibra 2007; Jacob 2008). For example, Jacob (2008) proposes that the job of mirroring processes in the action domain is just that of computing a representation of the observed agent’s next movement, on the basis of a previous representation of the agent’s intention. Similar deflationary accounts of the action mirror mechanism have been given by Brass et al. (2007), Hickok (2014), and Vannuscorps and Caramazza (2015)—these accounts typically take the STS (superior temporal sulcus, a brain region lacking mirror neurons) to be the critical neural area for action understanding.

There are various ways to respond to these criticisms. A strong response argues that they are based on a misunderstanding of the relevant empirical findings, as well as on a mischaracterization of the role that ST attributes to the action mirror mechanism in action understanding (Rizzolatti & Sinigaglia 2010, 2014). A weaker response holds that the focus on action understanding is a bit of a red herring, given that the most robust evidence in support of the central role played by mirroring processes in mindreading comes from the emotion domain (Goldman 2008b). We will consider the weaker response here.

Goldman and Sripada (2005) discuss a series of paired deficits in emotion production and face-based emotion mindreading. These deficits—they maintain—are best explained by the hypothesis that one attributes emotions to someone else by simulating these emotions in oneself: when the ability to undergo the emotion breaks down, the mindreading capacity breaks down as well. Barlassina (2013) elaborates on this idea by considering Huntington’s Disease (HD), a neurodegenerative disorder resulting in, among other things, damage to the disgust mirror mechanism. As predicted by ST, the difficulties individuals with HD have in experiencing disgust co-occur with an impairment in attributing disgust to someone else on the basis of observing her facial expression—despite perceptual abilities and knowledge about disgust being preserved in this clinical population. Individuals suffering from HD, however, exhibit an intact capacity for disgust mindreading on the basis of non-facial visual stimuli. For this reason, Barlassina concludes by putting forward an ST-TT hybrid model of disgust mindreading on the basis of visual stimuli.

6.2 Self and Others

ST’s central claim is that we reuse our own cognitive mechanisms to arrive at a representation of other people’s mental states. This claim raises a number of issues concerning how ST conceptualizes the self-other relation. We will discuss a couple of them.

Gallagher (2007: 355) writes that

given the large diversity of motives, beliefs, desires, and behaviours in the world, it is not clear how a simulation process … can give me a reliable sense of what is going on in the other person’s mind.

There are two ways of interpreting Gallagher’s worry. First, it can be read as saying that if mindreading is based on mental simulation, then it is hard to see how mental state attributions could be epistemically justified. This criticism, however, misses the mark entirely, since ST is not concerned with whether mental state attributions count as knowledge, but only with how, as a matter of fact, we go about forming such attributions. A second way to understand Gallagher’s remarks is this: as a matter of fact, we are pretty successful in understanding other minds; however, given the differences among individual minds, this pattern of successes cannot be explained in terms of mental simulation.

ST has a two-tier answer to the second reading of Gallagher’s challenge. First, human beings are very similar with regard to cognitive processes such as perception, theoretical reasoning, practical reasoning, etc. For example, there is a very high probability that if both you and I look at the same scene, we will have the same visual experience. This explains why, in the large majority of cases, I can reuse my visual mechanism to successfully simulate your visual experiences. Second, even though we are quite good at recognizing others’ mental states, we are nonetheless prone to egocentric errors, i.e., we tend to attribute to a target the mental state that we would undergo if we were in the target’s situation, rather than the actual mental state the target is in (Goldman 2006). A standard example is the curse of knowledge bias, where we take for granted that other people know what we know (Birch & Bloom 2007). ST has a straightforward explanation of such egocentric errors (Gordon 1995; Goldman 2006): if we arrive at attributing mental states via mental simulation, the accuracy of the attribution will depend on our capacity to “quarantine” our genuine mental states, when they do not match the target’s, and to replace them with more appropriate simulated mental states. This “adjustment” process, however, is a demanding one, because our genuine mental states exert a powerful pull. Thus, Gallagher is right when he says that, on some occasions, “if I project the results of my own simulation onto the other, I understand only myself in that other’s situation, but I don’t understand the other” (Gallagher 2007: 355). However, given how widespread egocentric errors are, this counts as a point in favour of ST, rather than as an argument against it (but see de Vignemont & Mercier 2016, and Saxe 2005).

Carruthers (1996, 2009, 2011) raises a different problem for ST: no version of ST can adequately account for self-attributions of mental states. Recall that, according to Goldman (2006), simulation-based mindreading is a three-stage process in which we first mentally simulate a target’s mental state, we then introspect and categorize the simulated mental state, and we finally attribute the categorized state to the target. Since Goldman’s model has it that attributions of mental states to others asymmetrically depend on the ability to introspect one’s own mental states, it predicts that: (A) introspection is (ontogenetically and phylogenetically) prior to the ability to represent others’ mental states; (B) there are cases in which introspection works just fine, but the ability to represent others’ mental states is impaired (presumably because the mechanism responsible for projecting one’s mental states onto the target is damaged). Carruthers (2009) argues that neither (A) nor (B) is borne out by the data: the former, because there are no creatures that have introspective capacities but at the same time lack the ability to represent others’ mental states; the latter, because there are no dissociation cases in which an intact capacity for introspection is paired with an impairment in the ability to represent others’ mental states.

How might a Simulation Theorist respond to this objection? As we said in section 4, Gordon’s (1986, 1995, 1996) Radical Simulationism does not assign any role to introspection in mindreading. Rather, Gordon proposes that self-ascriptions are guided by ascent routines through which we answer the question “Do I believe that p?” by answering the lower-order question “Is it the case that p?” Carruthers (1996, 2011) thinks that this won’t do either. Here is one of the many problems that Carruthers raises for this suggestion—we can call it “The Scope Problem”:

this suggestion appears to have only a limited range of application. For even if it works for the case of belief, it is very hard to see how one might extend it to account for our knowledge of our own goals, decisions, or intentions—let alone our knowledge of our own attitudes of wondering, supposing, fearing, and so on. (Carruthers 2011: 81)
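The asymmetry Carruthers points to can be caricatured in a few lines of code. This is our illustrative sketch, not a formal rendering of either Gordon’s or Carruthers’ views (the predicate names and the world-model dictionary are assumptions): the ascent routine reduces a question about belief to a first-order question about the world, but there is no analogous first-order question that settles whether one intends, wonders, or fears that p.

```python
# Caricature of Gordon's ascent routine; illustrative assumptions throughout.
world = {"the_ball_is_in_the_basket": True}  # the agent's first-order take on the world

def believe(p):
    # "Do I believe that p?" is answered by asking the lower-order
    # question "Is it the case that p?" against one's own world-model.
    return world.get(p, False)

def intend(p):
    # No comparable routine is available: whether p is the case does not
    # settle whether I intend that p. (This is Carruthers' Scope Problem.)
    raise NotImplementedError("no first-order question to descend to")

print(believe("the_ball_is_in_the_basket"))  # True: "I believe the ball is in the basket"
```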

Carruthers’ objections are important and deserve to be taken seriously. To discuss them, however, we would need to introduce a lot of further empirical evidence and many complex philosophical ideas about self-knowledge. This is not a task that we can take up here (the interested reader is encouraged to read, in addition to Gordon (2007) and Goldman (2009), the SEP entries on self-knowledge and on introspection). The take-home message should be clear enough nonetheless: anybody who puts forward an account of mindreading should remember that such an account has to cohere with a plausible story about the cognitive mechanisms underlying self-attribution.

6.3 Developmental Findings

The development of mindreading capacities in children has been one of the central areas of empirical investigation. In particular, developmental psychologists have put a lot of effort into detailing how the ability to attribute false beliefs to others develops. Until 2005, the central experimental paradigm for testing this ability was the verbal false belief task (Wimmer & Perner 1983). Here is a classic version of it. A subject is introduced to two dolls, Sally and Anne, and three objects: Sally’s ball, a basket, and a box. Sally puts her ball in the basket and leaves the scene. While Sally is away, Anne takes the ball out of the basket and puts it into the box. Sally then returns. The subject is asked where she thinks Sally will look for the ball. The correct answer, of course, is that Sally will look inside the basket. To give this answer, the subject has to attribute to Sally the false belief that the ball is in the basket. A number of experiments have found that while four-year-old children pass this task, three-year-old children fail it (for a review, see Wellman et al. 2001). For a long time, the mainstream interpretation of these findings was that children acquire the ability to attribute false beliefs only around their fourth birthday (but see Clements & Perner 1994 and Bloom & German 2000).
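The logic of the task can be sketched as a tiny belief-tracking model. This is an illustrative assumption of ours (the event encoding and function name are invented for exposition, and no claim is made about how children actually compute the answer): passing the task requires tracking what Sally witnessed, whereas answering with the ball’s true location is the characteristic three-year-old (egocentric) error.

```python
# Minimal model of the Sally-Anne task; representation is illustrative only.

def sally_anne(events):
    """Track the ball's true location and Sally's belief about it.
    Sally's belief is updated only by the moves she witnesses."""
    true_loc = None
    sally_belief = None
    sally_present = True
    for event in events:
        if event == "sally_leaves":
            sally_present = False
        elif event == "sally_returns":
            sally_present = True
        else:  # a move event: ("move", location)
            _, loc = event
            true_loc = loc
            if sally_present:
                sally_belief = loc
    return true_loc, sally_belief

events = [("move", "basket"), "sally_leaves", ("move", "box"), "sally_returns"]
true_loc, sally_belief = sally_anne(events)
print(sally_belief)  # "basket": where Sally will look (the false-belief answer)
print(true_loc)      # "box": the egocentric answer young children tend to give
```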

In 2005, this developmental timeline was called into question. Kristine Onishi and Renée Baillargeon (2005) published the results of a non-verbal version of the false belief task, which they administered to 15-month-old infants. The experiment involves three steps. First, the infants see a toy between two boxes, one yellow and one green, and then an actor hiding the toy inside the green box. Next, the infants see the toy sliding out of the green box and hiding inside the yellow box. In the true belief condition (TB), the actor notices that the toy changes location, while in the false belief condition (FB) she does not. Finally, half of the infants see the actor reaching into the green box, while the other half sees the actor reaching into the yellow box. According to the violation-of-expectation paradigm, infants reliably look for a longer time at unexpected events. Therefore, if the infants expected the actor to search for the toy on the basis of the actor’s belief about its location, then when the actor truly believed that the toy was in the yellow box, the infants should look longer when she reached into the green box instead. Conversely, when the actor falsely believed that the toy was still in the green box, the infants should look longer when she reached into the yellow box, where the toy actually was. Strikingly, these predictions were confirmed in both the (TB) and (FB) conditions. On this basis, Onishi and Baillargeon (2005) concluded that children of 15 months possess the capacity to represent others’ false beliefs.
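The predicted looking-time pattern can be written out as a small function. This is only a sketch of the design’s logic as described above (the condition labels and function names are our assumptions): the actor should reach where she believes the toy to be, and infants are predicted to look longer whenever the reach diverges from that belief-based prediction.

```python
# Predicted pattern in Onishi & Baillargeon's (2005) design; sketch of the logic only.

def expected_reach(condition):
    """Where the actor should reach if she acts on her belief about the toy.
    The toy ends up in the yellow box; in TB the actor saw the move, in FB she did not."""
    return "yellow" if condition == "TB" else "green"

def infant_looks_longer(condition, reach):
    """Violation-of-expectation: longer looking is predicted exactly when
    the actor's reach does not match the belief-based prediction."""
    return reach != expected_reach(condition)

for condition in ("TB", "FB"):
    for reach in ("green", "yellow"):
        print(condition, reach, infant_looks_longer(condition, reach))
```

Note the signature of the belief-based prediction: in the FB condition, longer looking is predicted when the actor reaches where the toy actually is, which is what a purely reality-based (non-mentalistic) expectation would treat as unsurprising.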

This and subsequent versions of non-verbal false belief tasks attracted a huge amount of interest (at the current stage of research, there is evidence that sensitivity to others’ false beliefs is present in infants as young as 7 months—for a review, see Baillargeon et al. 2016). Above all, the following two questions have been widely discussed: why do children pass the non-verbal false belief task at such an early age, but do not pass the verbal version before the age of 4? And does passing the non-verbal false belief task really indicate the capacity to represent others’ false beliefs? (Perner & Ruffman 2005; Apperly & Butterfill 2009; Baillargeon et al. 2010; Carruthers 2013; Helming et al. 2014).

Goldman and Jordan (2013) maintain that ST has a good answer to both questions. To begin with, they argue that it is implausible to attribute to infants such sophisticated meta-representational abilities as the ability to represent others’ false beliefs. Thus, Goldman and Jordan favour a deflationary view, according to which infants are sensitive to others’ false beliefs, but do not represent them as such. In particular, they propose that rather than believing that another subject S (falsely) believes that p, infants simply imagine how the world is from S’s perspective—that is, they simply imagine that p is the case. This—Goldman and Jordan say—is a more primitive psychological competence than mindreading, since it does not involve forming a judgment about others’ mental states. This brings us to Goldman and Jordan’s answer to the question “why do children pass the verbal false belief task only at four?” Passing this task requires fully-fledged mindreading abilities and executive functions such as inhibitory control. It takes quite a lot of time—around 3 to 4 years—before these functions and abilities come online.

7. Conclusion

Since the late 1980s, ST has received a great amount of attention from philosophers, psychologists, and neuroscientists. This is not surprising. Mindreading is a central human cognitive capacity, and ST challenges some basic assumptions about the cognitive processes and neural mechanisms underlying human social behavior. Moreover, ST touches upon a number of major philosophical problems, such as the relation between self-knowledge and knowledge of other minds, and the nature of mental concepts, including the concept of mind itself. In this entry, we have considered some of the fundamental empirical and philosophical issues surrounding ST. Many of them remain open. In particular, while the consensus view is now that both mental simulation and theorizing play important roles in mindreading, the currently available evidence falls short of establishing what their respective roles are. In other words, it is likely that we shall end up adopting a hybrid model of mindreading that combines ST and TT, but, at the present stage, it is very difficult to predict what this hybrid model will look like. Hopefully, the joint work of philosophers and cognitive scientists will help to settle the matter.

Bibliography

  • Anderson, Michael L., 2008, “Neural Reuse: A Fundamental Organizational Principle of the Brain”, Behavioral and Brain Sciences, 20(4): 239–313. doi:10.1017/S0140525X10000853
  • –––, 2015,After Phrenology: Neural Reuseand the Interactive Brain, Cambridge, MA: MIT Press.
  • Apperly, Ian A. and Stephen A. Butterfill, 2009, “Do HumansHave Two Systems to Track Beliefs and Belief-Like States?”,Psychological Review, 116(4): 953–70.doi:10.1037/a0016923
  • Avenanti, Alessio, Domenica Bueti, Gaspare Galati, & SalvatoreM. Aglioti, 2005, “Transcranial Magnetic Stimulation Highlightsthe Sensorimotor Side of Empathy for Pain”,NatureNeuroscience, 8(7): 955–960. doi:10.1038/nn1481
  • Baillargeon, Renée, Rose M. Scott, and Zijing He, 2010,“False-Belief Understanding in Infants”,Trends inCognitive Sciences, 14(3): 110–118.doi:10.1016/j.tics.2009.12.006
  • Baillargeon, Renée, Rose M. Scott, and Lin Bian, 2016,“Psychological Reasoning in Infancy”,Annual Review ofPsychology, 67: 159–186.doi:10.1146/annurev-psych-010213-115033
  • Barlassina, Luca, 2013, “Simulation is not Enough: A HybridModel of Disgust Attribution on the Basis of Visual Stimuli”,Philosophical Psychology, 26(3): 401–419.doi:10.1080/09515089.2012.659167
  • Baron-Cohen, Simon, 2000, “Theory of Mind and Autism: AFifteen Year Review”, in Simon Baron-Cohen, HelenTager-Flusberg, and Donald J. Cohen (eds.);Understanding OtherMinds: Perspectives from Developmental Cognitive Neuroscience(2nd edition), New York: Oxford University Press, pp. 3–20.
  • Bechtel, William, 2008,Mental Mechanisms: PhilosophicalPerspectives on Cognitive Neuroscience, New York: Taylor andFrancis.
  • Birch, Susan A. and Paul Bloom, 2007, “The Curse ofKnowledge in Reasoning About False Beliefs”,PsychologicalScience, 18(5): 382–386.doi:10.1111/j.1467-9280.2007.01909.x
  • Bisiach, Edoardo and Claudio Luzzatti, 1978, “UnilateralNeglect of Representational Space”,Cortex, 14(1):129–133. doi:10.1016/S0010-9452(78)80016-1
  • Blakemore, S.-J., D. Bristow, G. Bird, C. Frith, and J. Ward,2005, “Somatosensory Activations During the Observation of Touchand a Case of Vision-Touch Synaesthesia”,Brain,128(7): 1571–1583. doi:10.1093/brain/awh500
  • Bloom, Paul and Tim P. German, 2000, “Two Reasons to Abandonthe False Belief Task as a Test of Theory of Mind”,Cognition, 77(1): B25–31.doi:10.1016/S0010-0277(00)00096-2
  • Botterill, George and Peter Carruthers, 1999,The Philosophyof Psychology, Cambridge: Cambridge University Press.
  • Brass, Marcel, Ruth M. Schmitt, Stephanie Spengler, andGyörgy Gergely, 2007, “Investigating Action Understanding:Inferential Processes versus Action Simulation”,CurrentBiology, 17(24): 2117–2121.doi:10.1016/j.cub.2007.11.057
  • Brozzo, Chiara, forthcoming, “Motor Intentions: HowIntentions and Motor Representations Come Together”,Mind& Language.
  • Buckner, Randy L. and Daniel C. Carroll, 2007,“Self-Projection and the Brain”,Trends in CognitiveScience, 11(2): 49–57. doi:10.1016/j.tics.2006.11.004
  • Butterfill, Stephen A. and Corrado Sinigaglia, 2014,“Intention and Motor Representation in Purposive Action”,Philosophy and Phenomenological Research, 88(1):119–145. doi:10.1111/j.1933-1592.2012.00604.x
  • Carruthers, Peter, 1996, “Simulation and Self-Knowledge: ADefense of Theory-Theory”, in Carruthers and Smith 1996:22–38. doi:10.1017/CBO9780511597985.004
  • –––, 2009, “How we Know Our Own Minds: TheRelationship between Mindreading and Metacognition”,Behavioral and Brain Sciences, 32(2): 121–138.doi:10.1017/S0140525X09000545
  • –––, 2011,The Opacity of Mind: AnIntegrative Theory of Self-Knowledge, Oxford: Oxford UniversityPress. doi:10.1093/acprof:oso/9780199596195.001.0001
  • –––, 2013, “Mindreading in Infancy”,Mind and Language, 28(2): 141–172.doi:10.1111/mila.12014
  • Carruthers, Peter and Peter K. Smith (eds.), 1996,Theories ofTheories of Mind, Cambridge: Cambridge University Press.doi:10.1017/CBO9780511597985
  • Clements, Wendy A. and Josef Perner, 1994, “ImplicitUnderstanding of Belief”,Cognitive Development, 9(4):377–395. doi:10.1016/0885-2014(94)90012-4
  • Craver, Carl F., 2007,Explaining the Brain. Mechanisms andthe Mosaic Unity of Neuroscience, Oxford: Oxford UniversityPress. doi:10.1093/acprof:oso/9780199299317.001.0001
  • Csibra, Gergely, 2007, “Action Mirroring and ActionUnderstanding: An Alternative Account”, in Patrick Haggard, YvesRosetti, and Mitsuo Kawato (eds.)Sensorimotor Foundations ofHigher Cognition. Attention and Performance XII, OxfordUniversity Press, Oxford, pp. 453–459.doi:10.1093/acprof:oso/9780199231447.003.0020
  • Currie, Gregory, 1995, “Visual Imagery as the Simulation ofVision”,Mind and Language, 10(1–2): 25–44.doi:10.1111/j.1468-0017.1995.tb00004.x
  • –––, 2002, “Desire in Imagination”,in Tamar Szabo Gendler and John Hawthorne (eds.),Conceivabilityand Possibility, Oxford: Oxford University Press, pp.201–221.
  • Currie, Gregory and Ian Ravenscroft, 1997, “MentalSimulation and Motor Imagery”,Philosophy of Science,64(1): 161–80. doi:10.1086/392541
  • –––, 2002,Recreative Minds: Imagination inPhilosophy and Psychology, Oxford: Oxford University Press.doi:10.1093/acprof:oso/9780198238089.001.0001
  • Davies, Martin, 1987, “Tacit Knowledge and Semantic Theory:Can a Five per Cent Difference Matter?”Mind, 96(384):441–462. doi:10.1093/mind/XCVI.384.441
  • Davies, Martin and Tony Stone (eds.), 1995a,Folk Psychology:The Theory of Mind Debate, Oxford: Blackwell Publishers.
  • ––– (eds.), 1995b,Mental Simulation:Evaluations and Applications—Reading in Mind and Language,Oxford: Blackwell Publishers.
  • –––, 2001, “Mental Simulation, TacitTheory, and the Threat of Collapse”,PhilosophicalTopics, 29(1/2): 127–173.doi:10.5840/philtopics2001291/212
  • Decety, Jean and François Michel, 1989, “ComparativeAnalysis of Actual and Mental Movement Times in Two GraphicTasks”,Brain and Cognition, 11(1): 87–97.doi:10.1016/0278-2626(89)90007-9
  • De Jaegher, Hanne, Ezequiel Di Paolo, and Shaun Gallagher, 2010,“Can Social Interaction Constitute SocialCognition?”Trends in Cognitive Sciences, 14(10):441–447. doi:10.1016/j.tics.2010.06.009
  • Dennett, Daniel C., 1987,The Intentional Stance,Cambridge, MA: MIT Press.
  • de Vignemont, Frédérique, 2009, “Drawing theBoundary Between Low-Level and High-Level Mindreading”,Philosophical Studies, 144(3): 457–466.doi:10.1007/s11098-009-9354-1
  • de Vignemont, Frédérique and Hugo Mercier, 2016,“Under Influence: Is Altercentric Bias Compatible withSimulation Theory?” in Brian P. McLaughlin and Hilary Kornblith(eds.),Goldman and his Critics, Oxford: Blackwell.doi:10.1002/9781118609378.ch13
  • Dilthey, Wilhelm, [1894] 1977,Descriptive Psychology andHistorical Understanding, Richard M. Zaner and Kenneth L. Heiges(trans.), with an introduction by Rudolf A. Makkreel, The Hague:Martinus Nijhof. doi:10.1007/978-94-009-9658-8
  • di Pellegrino, G., L. Fadiga, L. Fogassi, V. Gallese, and G.Rizzolatti, 1992, “Understanding Motor Events: ANeuropsychological Study”,Experimental Brain Research,91(1): 176–180. doi:10.1007/BF00230027
  • Doggett, Tyler and Andy Egan, 2007, “Wanting Things YouDon’t Want: The Case for an Imaginative Analogue ofDesire”,Philosophers' Imprint, 7(9). [Doggett and Egan 2007 available online]
  • Evans, Gareth, 1982,The Varieties of Reference, Oxford:Oxford University Press.
  • Fisher, Justin C., 2006, “Does Simulation Theory ReallyInvolve Simulation?”Philosophical Psychology, 19(4):417–432. doi:10.1080/09515080600726377
  • Fuller, Gary, 1995, “Simulation and PsychologicalConcepts”, in Davies and Stone 1995b: chapter 1, pp.19–32
  • Funkhouser, Eric and Shannon Spaulding, 2009, “Imaginationand Other Scripts”,Philosophical Studies, 143(3):291–314. doi:10.1007/s11098-009-9348-z
  • Gallagher, Shaun, 2001, “The Practice of Mind: Theory,Simulation, or Primary Interaction?”Journal ofConsciousness Studies, 8(5–7): 83–108.
  • –––, 2007, “Simulation Trouble”,Social Neuroscience, 2(3–4): 353–365.doi:10.1080/17470910601183549
  • Gallagher, Shaun and Daniel D. Hutto, 2008, “UnderstandingOthers Through Primary Interaction and Narrative Practice”, inJordan Zlatev, Timothy P. Racine, Chris Sinha, & Esa Itkonen(eds.),The Shared Mind: Perspectives on Intersubjectivity,Amsterdam: John Benjamins, pp. 17–38.doi:10.1075/celcr.12.04gal
  • Gallese, Vittorio, 2001, “The ‘Shared Manifold’Hypothesis: From Mirror Neurons to Empathy”,Journal ofConsciousness Studies, 8(5–7): 33–50.
  • –––, 2007, “Before and Below ‘Theoryof Mind’: Embodied Simulation and the Neural Correlates ofSocial Cognition”,Philosophical Transactions of the RoyalSociety B, 362: 659–669. doi:10.1098/rstb.2006.2002
  • Gallese, Vittorio and Alvin Goldman, 1998, “Mirror Neuronsand the Simulation Theory of Mind-reading”,Trends inCognitive Sciences, 2(12): 493–501.doi:10.1016/S1364-6613(98)01262-5
  • Gallese, Vittorio and Corrado Sinigaglia, 2011, “What is soSpecial about Embodied Simulation?”Trends in CognitiveScience, 15(11): 512–9. doi:10.1016/j.tics.2011.09.003
  • Gallese, Vittorio, Luciano Fadiga, Leonardo Fogassi, and GiacomoRizzolatti, 1996, “Action Recognition in the PremotorCortex”,Brain, 119(2): 593–609.doi:10.1093/brain/119.2.593
  • Gallese, Vittorio, Leonardo Fogassi, Luciano Fadiga, and GiacomoRizzolatti, 2002, “Action Representation and the InferiorParietal Lobule”, in Wolfgang Prinz and Bernhard Hommel (eds.),Common Mechanisms in Perception and Action (Attention andPerformance XIX), Oxford: Oxford University Press, pp.247–266.
  • Gallese, Vittorio, Christian Keysers, and Giacomo Rizzolatti,2004, “A Unifying View of the Basis of Social Cognition”,Trends in Cognitive Sciences: 8(9): 396–403.doi:10.1016/j.tics.2004.07.002
  • Gariépy, Jean-François, Karli K. Watson, Emily Du,Diana L. Xie, Joshua Erb, Dianna Amasino, and Michael L. Platt, 2014,“Social Learning in Humans and Other Animals”,Frontiers in Neuroscience, 31 March 2014,doi:10.3389/fnins.2014.00058.
  • Gergely, György and Gergely Csibra, 2003, “TeleologicalReasoning in Infancy: The Naïve Theory of Rational Action”,Trends in Cognitive Sciences, 7(7): 287–292.doi:10.1016/S1364-6613(03)00128-1
  • Gergely, György, Zoltán Nádasdy, GergelyCsibra, and Szilvia Bíró, 1995, “Taking theIntentional Stance at 12 Months of Age”,Cognition,56(2): 165–93. doi:10.1016/0010-0277(95)00661-H
  • Goldenberg, Georg, Wolf Müllbacher, and Andreas Nowak, 1995,“Imagery without Perception: A Case Study of Anosognosia forCortical Blindness”,Neuropsychologia, 33(11):1373–1382. doi:10.1016/0028-3932(95)00070-J
  • Goldman, Alvin I., 1989, “InterpretationPsychologized”,Mind and Language, 4(3): 161–185;reprinted in Davies and Stone 1995a, pp. 74–99.doi:10.1111/j.1468-0017.1989.tb00249.x
  • –––, 2002, “Simulation Theory and MentalConcepts”, in Jérôme Dokic & Joëlle Proust(eds.),Simulation and Knowledge of Action, Amsterdam ;Philadelphia: John Benjamins, 35–71.
  • –––, 2006,Simulating Minds: The Philosophy,Psychology, and Neuroscience of Mindreading, Oxford: OxfordUniversity Press. doi:10.1093/0195138929.001.0001
  • –––, 2008a, “Hurley on Simulation”,Philosophy and Phenomenological Research, 77(3):775–788. doi:10.1111/j.1933-1592.2008.00221.x
  • –––, 2008b, “Mirroring, Mindreading, andSimulation”, in Jaime A. Pineda (ed.),Mirror NeuronSystems: The Role of Mirroring Processes in Social Cognition, NewYork: Humana Press, pp. 311–330.doi:10.1007/978-1-59745-479-7_14
  • –––, 2009, “Précis of Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading” and “Replies to Perner and Brandl, Saxe, Vignemont, and Carruthers”, Philosophical Studies, 144(3): 431–434, 477–491. doi:10.1007/s11098-009-9355-0 and doi:10.1007/s11098-009-9358-x
  • –––, 2012a, “A Moderate Approach toEmbodied Cognitive Science”,Review of Philosophy andPsychology, 3(1): 71–88. doi:10.1007/s13164-012-0089-0
  • –––, 2012b, “Theory of Mind”, inEric Margolis, Richard Samuels, and Stephen P. Stich (eds.),TheOxford Handbook of Philosophy of Cognitive Science, Oxford:Oxford University Press, 402–424.doi:10.1093/oxfordhb/9780195309799.013.0017
  • Goldman, Alvin I. and Lucy C. Jordan, 2013, “Mindreading bySimulation: The Roles of Imagination and Mirroring”, in SimonBaron-Cohen, Michael Lombardo, and Helen Tager-Flusberg (eds.),Understanding Other Minds: Perspectives From Developmental SocialNeuroscience, Oxford: Oxford University Press, 448–466.doi:10.1093/acprof:oso/9780199692972.003.0025
  • Goldman, Alvin I. and Chandra Sekhar Sripada, 2005,“Simulationist Models of Face-Based EmotionRecognition”,Cognition, 94(3): 193–213.doi:10.1016/j.cognition.2004.01.005
  • Gopnik, Alison and Andrew N. Meltzoff, 1997,Words, Thoughts,and Theories, Cambridge, MA: Bradford Books/MIT Press.
  • Gopnik, Alison and Henry M. Wellman, 1992, “Why the Child'sTheory of Mind Really Is a Theory”,Mind and Language,7(1–2): 145–71: reprinted in Davies and Stone 1995a, pp.232–258. doi:10.1111/j.1468-0017.1992.tb00202.x
  • –––, 2012, “Reconstructing Constructivism: Causal Models, Bayesian Learning Mechanisms, and the Theory-Theory”, Psychological Bulletin, 138(6): 1085–1108. doi:10.1037/a0028044
  • Gordon, Robert M., 1986, “Folk Psychology asSimulation”,Mind and Language, 1(2): 158–171;reprinted in Davies and Stone 1995a, pp. 60–73.doi:10.1111/j.1468-0017.1986.tb00324.x
  • –––, 1995, “Simulation WithoutIntrospection or Inference From Me to You”, in Davies &Stone 1995b: 53–67.
  • –––, 1996, “‘Radical’Simulationism”, in Carruthers & Smith 1996: 11–21.doi:10.1017/CBO9780511597985.003
  • –––, 2000, “Sellars’s RyleanRevisited”,Protosoziologie, 14: 102–114.
  • –––, 2005, “Intentional Agents Like Myself”, in Perspectives on Imitation: From Mirror Neurons to Memes, S. Hurley & N. Chater (eds.), Cambridge, MA: MIT Press.
  • –––, 2007, “Ascent Routines forPropositional Attitudes”,Synthese, 159 (2):151–165. doi:10.1007/s11229-007-9202-9
  • Harris, Paul L., 1989,Children and Emotion, Oxford:Blackwell Publishers.
  • –––, 1992, “From Simulation to FolkPsychology: The Case for Development”,Mind andLanguage, 7(1–2): 120–144; reprinted in Davies andStone 1995a, pp. 207–231.doi:10.1111/j.1468-0017.1992.tb00201.x
  • Heal, Jane, 1986, “Replication and Functionalism”, inLanguage, Mind, and Logic, J. Butterfield (ed.), Cambridge:Cambridge University Press; reprinted in Davies and Stone 1995a, pp.45–59.
  • –––, 1994, “Simulation vs Theory-Theory:What is at Issue?” in Christopher Peacocke (ed.),Objectivity, Simulation, and the Unity of Consciousness: CurrentIssues in the Philosophy of Mind (Proceedings of the BritishAcademy, 83), Oxford: Oxford University Press, pp. 129–144. [Heal 1994 available online]
  • –––, 1995, “How to Think AboutThinking”, in Davies and Stone 1995b: chapter 2, pp.33–52.
  • –––, 1998, “Co-Cognition and Off-LineSimulation: Two Ways of Understanding the Simulation Approach”,Mind and Language, 13(4): 477–498.doi:10.1111/1468-0017.00088
  • –––, 2003,Mind, Reason andImagination, Cambridge: Cambridge University Press.
  • Helming, Katharina A., Brent Strickland, and Pierre Jacob, 2014,“Making Sense of Early False-Belief Understanding”,Trends in Cognitive Sciences, 18(4): 167–170.doi:10.1016/j.tics.2014.01.005
  • Herschbach, Mitchell, 2012, “Mirroring Versus Simulation: Onthe Representational Function of Simulation”,Synthese,189(3): 483–51. doi:10.1007/s11229-011-9969-6
  • Hickok, Gregory, 2009, “Eight Problems for the Mirror Neuron Theory of Action Understanding in Monkeys and Humans”, Journal of Cognitive Neuroscience, 21(7): 1229–1243. doi:10.1162/jocn.2009.21189
  • –––, 2014,The Myth of Mirror Neurons: TheReal Neuroscience of Communication and Cognition, New York:Norton.
  • Hume, David, 1739,A Treatise of Human Nature, edited byL.A. Selby-Bigge, 2nd edition, revised by P.H. Nidditch,Oxford: Clarendon Press, 1975
  • Hurley, Susan, 2005, “The Shared Circuits Hypothesis: AUnified Functional Architecture for Control, Imitation, andSimulation”, inPerspectives on Imitation: From Neuroscienceto Social Science, Volume 1: Mechanisms of Imitation and Imitation inAnimals, Susan Hurley & Nick Chater (eds.), Cambridge, MA:MIT Press, pp. 177–193.
  • –––, 2008, “UnderstandingSimulation”,Philosophy and Phenomenological Research,77(3): 755–774. doi:10.1111/j.1933-1592.2008.00220.x
  • Jackson, Frank, 1999, “All That Can Be at Issue in theTheory-Theory Simulation Debate”,Philosophical Papers,28(2): 77–95. doi:10.1080/05568649909506593
  • Jacob, Pierre, 2008, “What do Mirror Neurons Contribute toHuman Social Cognition?”,Mind and Language, 23(2):190–223. doi:10.1111/j.1468-0017.2007.00337.x
  • –––, 2012, “Sharing and AscribingGoals”,Mind and Language, 27(2): 200–227.doi:10.1111/j.1468-0017.2012.01441.x
  • Jeannerod, Marc and Elisabeth Pacherie, 2004, “Agency,Simulation and Self-Identification”,Mind and Language19(2): 113–146. doi:10.1111/j.1468-0017.2004.00251.x
  • Kieran, Matthew and Dominic McIver Lopes (eds.), 2003,Imagination, Philosophy, and the Arts, London:Routledge.
  • Kosslyn, S.M., A. Pascual-Leone, O. Felician, S. Camposano, J.P.Keenan, W.L. Thompson, G. Ganis, K.E. Sukel, and N.M. Alpert, 1999,“The Role of Area 17 in Visual Imagery: Convergent Evidence fromPET and rTMS”,Science, 284(5411): 167–170.doi:10.1126/science.284.5411.167
  • Leslie, Alan M., 1994, “Pretending and Believing: Issues in the Theory of ToMM”, Cognition, 50(1–3): 211–238. doi:10.1016/0010-0277(94)90029-9
  • Lipps, Theodor, 1903, “Einfühlung, Innere Nachahmungund Organempfindung”,Archiv für gesamtePsychologie, 1: 465–519. Translated as “Empathy,Inner Imitation and Sense-Feelings”, inA Modern Book ofEsthetics, New York: Holt, Rinehart and Winston, 1979, pp.374–382.
  • Lurz, Robert W., 2011,Mindreading Animals, Cambridge,MA: MIT Press. doi:10.7551/mitpress/9780262016056.001.0001
  • Machamer, Peter, Lindley Darden, and Carl F. Craver, 2000,“Thinking about Mechanisms”,Philosophy ofscience, 67(1): 1–25. doi:10.1086/392759
  • Marr, David, 1982, Vision, San Francisco: Freeman Press.
  • Nichols, Shaun (ed.), 2006a,The Architecture of theImagination: New Essays on Pretense, Possibility, and Fiction,Oxford: Oxford University Press.doi:10.1093/acprof:oso/9780199275731.001.0001
  • –––, 2006b, “Just the Imagination: WhyImagining Doesn't Behave Like Believing”,Mind &Language, 21(4): 459–474.doi:10.1111/j.1468-0017.2006.00286.x
  • Nichols, Shaun and Stephen P. Stich, 2003,Mindreading: AnIntegrated Account of Pretence, Self-Awareness, and Understanding ofOther Minds, Oxford: Oxford University Press.doi:10.1093/0198236107.001.0001
  • Onishi, Kristine H. and Renée Baillargeon, 2005, “Do15-Month-Old Infants Understand False Beliefs?”Science, 308(5719): 255–258.doi:10.1126/science.1107621
  • Pacherie, Elisabeth, 2000, “The Content ofIntentions”,Mind and Language, 15(4): 400–432.doi:10.1111/1468-0017.00142
  • Peacocke, Christopher, 2005, “Another I: Representing Conscious States, Perception, and Others”, in J.L. Bermúdez (ed.), Thought, Reference, and Experience: Themes From the Philosophy of Gareth Evans, Oxford: Clarendon Press.
  • Perner, Josef and Deborah Howes, 1992, “‘He Thinks heKnows’ and more Developmental Evidence Against the Simulation(Role-Taking) Theory”,Mind and Language, 7(1–2):72–86; reprinted in Davies and Stone 1995a, pp. 159–173.doi:10.1111/j.1468-0017.1992.tb00197.x
  • Perner Josef and Anton Kühberger, 2005, “MentalSimulation: Royal Road to Other Minds?”, in Bertram F. Malle andSara D. Hodges (eds.),Other Minds: How Humans Bridge the DivideBetween Self and Others, New York: Guilford Press, pp.174–187.
  • Perner, Josef and Ted Ruffman, 2005, “Infants’ Insight into the Mind: How Deep?” Science, 308(5719): 214–216. doi:10.1126/science.1111656
  • Ramsey, William M., 2010, “How Not to Build a Hybrid:Simulation vs. Fact-finding”,Philosophical Psychology,23(6): 775–795. doi:10.1080/09515089.2010.529047
  • Rizzolatti, Giacomo and Laila Craighero, 2004, “TheMirror-Neuron System”,Annual Review of Neuroscience,27: 169–92. doi:10.1146/annurev.neuro.27.070203.144230
  • Rizzolatti, Giacomo & Corrado Sinigaglia, 2007, “Mirrorneurons and motor intentionality”,FunctionalNeurology, 22(4): 205–210
  • –––, 2010, “The Functional Role of theParieto-Frontal Mirror Circuit: Interpretations andMisinterpretations”,Nature Reviews Neuroscience 11:264–274. doi:10.1038/nrn2805
  • –––, 2014, “Review: A Curious Book onMirror Neurons and Their Myth”,The American Journal ofPsychology, 128(4): 527–533.doi:10.5406/amerjpsyc.128.4.0527
  • –––, 2016, “The Mirror Mechanism: a Basic Principle of Brain Function”, Nature Reviews Neuroscience, 17: 757–765. doi:10.1038/nrn.2016.135
  • Rizzolatti, G., R. Camarda, L. Fogassi, M. Gentilucci, G. Luppino,and M. Matelli, 1988, “Functional Organization of Inferior Area6 in the Macaque Monkey”,Experimental Brain Research,71(1): 491–507. doi:10.1007/BF00248742
  • Rizzolatti, Giacomo, Luciano Fadiga, Vittorio Gallese, andLeonardo Fogassi, 1996, “Premotor Cortex and the Recognition ofMotor Actions”,Cognitive Brain Research, 3(2):131–141. doi:10.1016/0926-6410(95)00038-0
  • Rozin, Paul, Jonathan Haidt, and Clark R. McCauley, 2008,“Disgust”, in Michael Lewis, Jeannette M.Haviland–Jones & Lisa Feldman Barrett (eds.),Handbookof Emotions (3rd edition), New York: Guilford Press, pp.757–776.
  • Saxe, Rebecca, 2005, “Against Simulation: The Argument from Error”, Trends in Cognitive Sciences, 9(4): 174–179. doi:10.1016/j.tics.2005.01.012
  • Scholl, Brian J. and Alan M. Leslie, 1999, “Modularity,Development and Theory of Mind”,Mind and Language,14(1): 131–153. doi:10.1111/1468-0017.00106
  • Singer, Tania, Ben Seymour, John O’Doherty, Holger Kaube, Raymond J. Dolan, and Chris D. Frith, 2004, “Empathy for Pain Involves the Affective but not Sensory Components of Pain”, Science, 303(5661): 1157–1162. doi:10.1126/science.1093535
  • Smith, Adam, 1759,The Theory of Moral Sentiments, D.D.Raphael and A.L. Macfie (eds.), Oxford: Oxford University Press,1976.
  • Spaulding, Shannon, 2012, “Mirror Neurons are not Evidencefor the Simulation Theory”,Synthese, 189(3):515–534. doi:10.1007/s11229-012-0086-y
  • Spivey, Michael J., Daniel C. Richardson, Melinda J. Tyler, andEzekiel E. Young, 2000, “Eye movements During Comprehension ofSpoken Scene Descriptions”, inProceedings of the22nd Annual Conference of the Cognitive ScienceSociety, Mahwah, NJ: Erlbaum, pp. 487–492.
  • Stich, Stephen and Shaun Nichols, 1992, “Folk Psychology:Simulation or Tacit Theory?”,Mind and Language,7(1–2): 35–71; reprinted in Davies and Stone 1995a, pp.123–158. doi:10.1111/j.1468-0017.1992.tb00196.x
  • –––, 1997, “Cognitive Penetrability,Rationality, and Restricted Simulation”,Mind andLanguage, 12(3–4): 297–326.doi:10.1111/j.1468-0017.1997.tb00076.x
  • Stich, Stephen and Ian Ravenscroft, 1992, “What is Folk Psychology?” Cognition, 50(1–3): 447–68. doi:10.1016/0010-0277(94)90040-X
  • Velleman, J. David, 2000, “The Aim of Belief”, inThe Possibility of Practical Reason, Oxford: OxfordUniversity Press, pp. 244–282
  • Vannuscorps, Gilles and Alfonso Caramazza, 2015, “TypicalAction Perception and Interpretation without Motor Simulation”,Proceedings of the National Academy of Sciences, 113(1):1–6. doi:10.1073/pnas.1516978112
  • Wellman, Henry M., David Cross, and Julanne Watson, 2001,“Meta-Analysis of Theory-of-Mind Development: The Truth aboutFalse Belief”,Child Development, 72(3): 655–684.doi:10.1111/1467-8624.00304
  • Wicker, Bruno, Christian Keysers, Jane Plailly, Jean-Pierre Royet,Vittorio Gallese, and Giacomo Rizzolatti, 2003, “Both of usDisgusted inMy Insula: The Common Neural Basis of Seeing andFeeling Disgust”,Neuron, 40(3): 655–664.doi:10.1016/S0896-6273(03)00679-2
  • Wimmer, Heinz and Josef Perner, 1983, “Beliefs AboutBeliefs: Representation and Constraint Function of Wrong Beliefs inYoung Children’s Understanding of Deception”,Cognition, 13(1): 103–128.doi:10.1016/0010-0277(83)90004-5

Other Internet Resources

[Please contact the authors with suggestions.]

Acknowledgments

The authors would like to thank Tom Cochrane, Jeremy Dunham, SteveLaurence, and an anonymous referee for comments on earlier drafts ofthis entry.

Copyright © 2017 by
Luca Barlassina <l.barlassina@sheffield.ac.uk>
Robert M. Gordon



The Stanford Encyclopedia of Philosophy is copyright © 2024 by The Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

