Folk Psychology as Mental Simulation

First published Mon Dec 8, 1997; substantive revision Tue Mar 28, 2017

The capacity for “mindreading” is understood in philosophy of mind and cognitive science as the capacity to represent, reason about, and respond to others’ mental states. Essentially the same capacity is also known as “folk psychology”, “Theory of Mind”, and “mentalizing”. An example of everyday mindreading: you notice that Tom’s fright embarrassed Mary and surprised Bill, who had believed that Tom wanted to try everything. Mindreading is of crucial importance for our social life: our ability to predict, explain, and/or coordinate with others’ actions on countless occasions relies on representing their mental states. For instance, by attributing to Steve the desire for a banana and the belief that there are no more bananas at home but there are some left at the local grocery store, you can: (i) explain why Steve has just left home; (ii) predict where Steve is heading; and (iii) coordinate your behavior with his (meet him at the store, or prepare a surprise party while he is gone). Without mindreading, (i)–(iii) do not come easily—if they come at all. That much is fairly uncontroversial. What is controversial is how to explain mindreading. That is, how do people arrive at representing others’ mental states? This is the main question to which the Simulation (or, mental simulation) Theory (ST) of mindreading offers an answer.

Common sense has it that, in many circumstances, we arrive at representing others’ mental states by putting ourselves in their shoes, or taking their perspective. For example, I can try to figure out my chess opponent’s next decision by imagining what I would decide if I were in her place. (Although we may also speak of this as a kind of empathy, that term must be understood here without any implication of sympathy or benevolence.)

ST takes this commonsensical idea seriously and develops it into a fully-fledged theory. At the core of the theory, we find the thesis that mental simulation plays a central role in mindreading: we typically arrive at representing others’ mental states by simulating their mental states in our own mind. So, to figure out my chess opponent’s next decision, I mentally switch roles with her in the game. In doing this, I simulate her relevant beliefs and goals, and then feed these simulated mental states into my decision-making mechanism and let the mechanism produce a simulated decision. This decision is then projected onto, or attributed to, the opponent. In other words, the basic idea of ST is that if the resources our own brain uses to guide our own behavior can be modified to work as representations of other people’s mental states, then we have no need to store general information about what makes people tick: we just do the ticking for them. Accordingly, ST challenges the Theory-Theory of mindreading (TT), the view that a tacit psychological theory underlies the ability to represent and reason about others’ mental states. While TT maintains that mindreading is an information-rich and theory-driven process, ST sees it as informationally poor and process-driven (Goldman 1989).
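The contrast can be made vivid with a minimal Python sketch. This is only an illustration under our own simplifying assumptions: the names (my_decision_mechanism, predict_by_simulation, predict_by_theory) and the toy belief/desire representation are invented for this example and are not drawn from the simulationist literature.

```python
# Toy contrast between simulation-based and theory-based prediction.
# All names and data structures here are illustrative inventions.

def my_decision_mechanism(beliefs, desires):
    """Stand-in for the mindreader's own decision-making routine."""
    # Pick the first action that, according to the beliefs, satisfies a desire.
    for desire in desires:
        for belief in beliefs:
            if belief.get("achieves") == desire:
                return belief["action"]
    return None

def predict_by_simulation(pretend_beliefs, pretend_desires):
    # ST: feed pretend inputs (the target's states, simulated in oneself)
    # into one's OWN mechanism, run it off-line, and attribute the output.
    return my_decision_mechanism(pretend_beliefs, pretend_desires)

def predict_by_theory(target_desires, folk_psychological_laws):
    # TT: consult a stored body of generalizations about how agents behave.
    for law in folk_psychological_laws:
        if law["if_desire"] in target_desires:
            return law["then_do"]
    return None

# Steve and the bananas (see the introduction above):
steves_beliefs = [{"achieves": "get a banana", "action": "go to the grocery store"}]
steves_desires = ["get a banana"]
print(predict_by_simulation(steves_beliefs, steves_desires))
# -> "go to the grocery store"
```

On the ST picture, the work is done by running one's own mechanism on borrowed inputs; on the TT picture, it is done by the stored generalizations.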

This entry is organized as follows. In section 1 (The Origins and Varieties of ST), we briefly reconstruct ST’s history and elaborate further on ST’s main theoretical aims. We then go on to explain the very idea of mental simulation (section 2: What is Meant by “Mental Simulation”?). In section 3 (Two Types of Simulation Processes), we consider the cognitive architecture underlying mental simulation and introduce the distinction between high-level and low-level simulation processes. In section 4 (The Role of Mental Simulation in Mindreading), we discuss what role mental simulation is supposed to play in mindreading, according to ST. This discussion carries over to section 5 (Simulation Theory and Theory-Theory), where we contrast the accounts of mindreading given by ST and TT. Finally, section 6 (Simulation Theory: Pros and Cons) examines some of the main arguments in favour of and against ST as a theory of mindreading.

1. The Origins and Varieties of ST

The idea that we often arrive at representing other people’s mental states by mentally simulating those states in ourselves has a distinguished history in philosophy and the human sciences. Robert Gordon (1995) traces it back to David Hume (1739) and Adam Smith’s (1759) notion of sympathy; Jane Heal (2003) and Gordon (2000) find simulationist themes in the Verstehen approach to the philosophy of history (e.g., Dilthey 1894); Alvin Goldman (2006) considers Theodor Lipps’s (1903) account of empathy (Einfühlung) as a precursor of the notion of mental simulation.

In its modern guise, ST was established in 1986, with the publication of Robert Gordon’s “Folk Psychology as Simulation” and Jane Heal’s “Replication and Functionalism”. These two articles criticized the Theory-Theory and introduced ST as a better account of mindreading. In his article, Gordon discussed psychological findings concerning the development of the capacity to represent others’ false beliefs. This attracted the interest of developmental psychologists, especially Paul Harris (1989, 1992), who presented empirical support for ST, and Alison Gopnik (Gopnik & Wellman 1992) and Joseph Perner (Perner & Howes 1992), who argued against it—Perner has since come to defend a hybrid version of ST (Perner & Kühberger 2005).

Alvin Goldman was an early and influential defender of ST (1989) and has done much to give the theory its prominence. His work with the neuroscientist Vittorio Gallese (Gallese & Goldman 1998) was the first to posit an important connection between ST and the newly discovered mirror neurons. Goldman’s 2006 book Simulating Minds is the clearest and most comprehensive account to date of the relevant philosophical and empirical issues. Among other philosophical proponents of ST, Gregory Currie and Susan Hurley have been influential.

Since the late 1980s, ST has been one of the central players in the philosophical, psychological, and neuroscientific discussions of mindreading. It has, however, been argued that the fortunes of ST have had a notable negative consequence: the expression “mental simulation” has come to be used broadly and in a variety of ways, making “Simulation Theory” a blanket term lumping together many distinct approaches to mindreading. Stephen Stich and Shaun Nichols (1997) already urged dropping the expression in favor of a finer-grained terminology. There is some merit to this. ST is in fact better conceived of as a family of theories rather than a single theory. All the members of the family agree on the thesis that mental simulation, rather than a body of knowledge about other minds, plays a central role in mindreading. However, different members of the family can differ from one another in significant respects.

One fundamental area of disagreement among Simulation Theorists is the very nature of ST—what kind of theory ST is supposed to be—and what philosophers can contribute to it. Some Simulation Theorists take the question “How do people arrive at representing others’ mental states?” as a straightforward empirical question about the cognitive processes and mechanisms underlying mindreading (Goldman 2006; Hurley 2008). According to them, ST is thus a theory in cognitive science, to which philosophers can contribute exactly as theoretical physicists contribute to physics:

theorists specialize in creating and tweaking theoretical structures that comport with experimental data, whereas experimentalists have the primary job of generating the data. (Goldman 2006: 22)

Other philosophical defenders of ST, however, do not conceive of themselves as theoretical cognitive scientists at all. For example, Heal (1998) writes that:

it is commonly taken that the inquiry into … the extent of simulation in psychological understanding is empirical, and that scientific investigation is the way to tell whether ST … is correct. But this perception is confused. It is an a priori truth … that simulation must be given a substantial role in our personal-level account of psychological understanding. (Heal 1998: 477–478)

Adjudicating this meta-philosophical dispute goes well beyond the aim of this entry. To be as inclusive as we can, we shall adopt a “balanced diet” approach: we shall discuss the extent to which ST is supported by empirical findings from psychology and neuroscience, and, at the same time, we shall dwell on “purely philosophical” problems concerning ST. We leave to the reader the task of evaluating which aspects should be put at the centre of the inquiry.

Importantly, even those who agree on the general nature of ST might disagree on other crucial issues. We will focus on what are typically taken to be the three most important bones of contention among Simulation Theorists: what is meant by “mental simulation”? (section 2). What types of simulation processes are there? (section 3). What is the role of mental simulation in mindreading? (section 4). After having considered what keeps Simulation Theorists apart, we shall move on to discuss what holds them together, i.e., the opposition to the Theory-Theory of mindreading (section 5 and section 6). This should give the reader a sense of the “unity amidst diversity” that characterizes ST.

2. What is Meant by “Mental Simulation”?

In common parlance, we talk of putting ourselves in others’ shoes, or empathizing with other people. This talk is typically understood as adopting someone else’s point of view, or perspective, in our imagination. For example, it is quite natural to interpret the request “Try to show some empathy for John!” as asking you to use your imaginative capacity to consider the world from John’s perspective. But what is it for someone to imaginatively adopt someone else’s perspective? To a first approximation, according to Simulation Theorists, it consists of mentally simulating, or re-creating, someone else’s mental states. Currie and Ravenscroft (2002) make this point quite nicely:

Imagination enables us to project ourselves into another situation and to see, or think about, the world from another perspective. These situations and perspectives … might be those of another actual person, [or] the perspective we would have on things if we believed something we actually don’t believe, [or] that of a fictional character. … Imagination recreates the mental states of others. (Currie & Ravenscroft 2002: 1, emphasis added).

Thus, according to ST, empathizing with John’s sadness consists of mentally simulating his sadness, and adopting Mary’s political point of view consists of mentally simulating her political beliefs. This is the intuitive and general sense of mental simulation that Simulation Theorists have in mind.

Needless to say, this intuitive characterization of “mental simulation” is loose. What exactly does it mean to say that a mental state is a mental simulation of another mental state? Clearly, we need a precise answer to this question, if the notion of mental simulation is to be the fundamental building block of a theory. Simulation Theorists, however, differ over how to answer this question. The central divide concerns whether “mental simulation” should be defined in terms of resemblance (Heal 1986, 2003; Goldman 2006, 2008a) or in terms of reuse (Hurley 2004, 2008; Gallese & Sinigaglia 2011). We consider these two proposals in turn.

2.1 Mental Simulation as Resemblance

The simplest interpretation of “mental simulation” in terms of resemblance goes like this:

(RES-1) Token state M* is a mental simulation of token state M if and only if:

  1. Both M and M* are mental states
  2. M* resembles M in some significant respects

Two clarifications are in order. First, we will elaborate on the “significant respects” in which a mental state has to resemble another mental state in due course (see, in particular, section 3). For the moment, it will suffice to mention some relevant dimensions of resemblance: similar functional role; similar content; similar phenomenology; similar neural basis (an important discussion of this topic is Fisher 2006). Second, RES-1 defines “mental simulation” as a dyadic relation between mental states (the relation being a mental simulation of). However, the expression “mental simulation” is also often used to pick out a monadic property of mental states—the property being a simulated mental state (as will become clear soon, “simulated mental state” does not refer here to the state which is simulated, but to the state that does the simulating). For example, it is common to find in the literature sentences like “M* is a mental simulation”. To avoid ambiguities, we shall adopt the following terminological conventions:

  • We shall use the expression “mental simulation of” to express the relation being a mental simulation of.
  • We shall use the expression “simulated mental state” to express the property being a simulated mental state.
  • We shall use the expression “mental simulation” in a way that is deliberately ambiguous between “mental simulation of” and “simulated mental state”.

It follows from this that, strictly speaking, RES-1 is a definition of “mental simulation of”. Throughout this entry, we shall characterize “simulated mental state” in terms of “mental simulation of”: we shall say that if M* is a mental simulation of M, then M* is a simulated mental state.[1]

With these clarifications in place, we will consider the strengths and weaknesses of RES-1. Suppose that Lisa is seeing a yellow banana. At the present moment, there is no yellow banana in my own surroundings; thus, I cannot have that (type of) visual experience. Still, I can visualize what Lisa is seeing. Intuitively, my visual imagery of a yellow banana is a mental simulation of Lisa’s visual experience. RES-1 captures this, given that both my visual imagery and Lisa’s visual experience are mental states and the former resembles the latter.

RES-1, however, faces an obvious problem (Goldman 2006). The resemblance relation is symmetric: for any x and y, if x resembles y, then y resembles x. Accordingly, it follows from RES-1 that Lisa’s visual experience is a mental simulation of my visual imagery. But this is clearly wrong. There is no sense in which one person’s perceptual experience can be a mental simulation of another person’s mental imagery (see Ramsey 2010 for other difficulties with RES-1).

In order to solve this problem, Goldman (2006) proposes the following resemblance-based definition of “mental simulation of”:

(RES-2) Token state M* is a mental simulation of token state M if and only if:

  1. Both M and M* are mental states
  2. M* resembles M in some significant respects
  3. In resembling M, M* fulfils at least one of its functions

Under the plausible assumption that one of the functions of visual imagery is to resemble visual experiences, RES-2 correctly predicts that my visual imagery of a yellow banana counts as a mental simulation of Lisa’s visual experience. At the same time, since visual experiences do not have the function of resembling visual images, RES-2 does not run into the trouble of categorizing the former as a mental simulation of the latter.

2.2 Mental Simulation as Reuse

Clearly, RES-2 is a better definition of “mental simulation of” than RES-1. Hurley (2008), however, argued that it won’t do either, since it fails to distinguish ST from its main competitor, i.e., the Theory-Theory (TT), according to which mindreading depends on a body of information about mental states and processes (section 5). The crux of Hurley’s argument is this. Suppose that a token visual image V* resembles a token visual experience V and, in doing so, fulfils one of its functions. In this case, RES-2 is satisfied. But now suppose further that visualization works like a computer simulation: it generates its outputs on the basis of a body of information about vision. On this assumption, RES-2 still categorizes V* as a mental simulation of V, even though V* has been generated by exactly the kind of process described by TT: a theory-driven and information-rich process.

According to Hurley (who follows here a suggestion by Currie & Ravenscroft 2002), the solution to this difficulty lies in the realization that “the fundamental … concept of simulation is reuse, not resemblance” (Hurley 2008: 758, emphasis added). Hurley’s reuse-based definition of “mental simulation of” can be articulated as follows:

(REU) Token state M* is a mental simulation of token state M if and only if:

  1. Both M and M* are mental states
  2. M is generated by token cognitive process P
  3. M* is generated by token cognitive process P*
  4. P is implemented by the use of a token cognitive mechanism of type C
  5. P* is implemented by the reuse of a token cognitive mechanism of type C

To have a full understanding of REU, we need to answer three questions: (a) What is a cognitive process? (b) What is a cognitive mechanism? (c) What is the difference between using and reusing a certain cognitive mechanism? Let’s do it!

It is a commonplace that explanation in cognitive science is structured into different levels. Given our aims, we can illustrate this idea through the classical tri-level hypothesis formulated by David Marr (1982). Suppose that one wants to explain a certain cognitive capacity, say, vision (or mindreading, or moral judgment). The first level of explanation, the most abstract one, consists in describing what the cognitive capacity does—what task it performs, what problem it solves, what function it computes. For example, the task performed by vision is roughly “to derive properties of the world from images of it” (Marr 1982: 23). The second level of analysis specifies how the task is accomplished: what algorithm our mind uses to compute the function. Importantly, this level of analysis abstracts from the particular physical structures that implement the algorithm in our head. It is only at the third level of analysis that the details of the physical implementation of the algorithm in our brain are spelled out.

With these distinctions at hand, we can answer questions (a) and (b). A cognitive process is a cognitive capacity considered as an information-processing activity and taken in abstraction from its physical implementation. Thus, cognitive processes are individuated in terms of what function they perform and/or in terms of what algorithms compute these functions (fair enough, the “and/or” is a very big deal, but it is something we can leave aside here). This means that the same (type of) cognitive process can be multiply realized in different physical structures. For example, parsing (roughly, the cognitive process that assigns a grammatical structure to a string of signs) can be implemented both by a human brain and by a computer. Cognitive mechanisms, by contrast, are particular (types of) physical structures—e.g., a certain part of the brain—implementing certain cognitive processes. More precisely, cognitive mechanisms are organized structures carrying out cognitive processes in virtue of how their constituent parts interact (Bechtel 2008; Craver 2007; Machamer et al. 2000).

We now turn to question (c), which concerns the distinction between use and reuse of a cognitive mechanism. At a first approximation, a cognitive mechanism is used when it performs its primary function, while it is reused when it is activated to perform a different, non-primary function. For example, one is using one’s visual mechanism when one employs it to see, while one is reusing it when one employs it to conjure up a visual image (see Anderson 2008, 2015 for further discussion of the notion of reuse). All this is a bit sketchy, but it will do.

Let’s now go back to REU. The main idea behind it is that whether a mental state is a mental simulation of another mental state depends on the cognitive processes generating these two mental states, and on the cognitive mechanisms implementing such cognitive processes. More precisely, in order for mental state M* to be a mental simulation of mental state M, it has to be the case that: (i) cognitive processes P* and P, which respectively generate M* and M, are both implemented by the same (type of) cognitive mechanism C; (ii) P is implemented by the use of C, while P* is implemented by the reuse of C.

Now that we know what REU means, we can consider whether it fares better than RES-2 in capturing the nature of the relation of mental simulation. It would seem so. Consider this hypothetical scenario. Lisa is seeing a yellow banana, and her visual experience has been generated by cognitive process V1, which has been implemented by the use of her visual mechanism. I am visualizing a yellow banana, and my visual image has been generated by cognitive process V2, which has been implemented by the reuse of my visual mechanism. Rosanna-the-Super-Reasoner is also visualizing a yellow banana, but her visual image has been generated by an information-rich cognitive process: a process drawing upon Rosanna’s detailed knowledge of vision and implemented by her incredibly powerful reasoning mechanism. REU correctly predicts that my visual image is a mental simulation of Lisa’s visual experience, but not vice versa. More importantly, it also predicts that Rosanna’s visual image does not count as a mental simulation of Lisa’s visual experience, given that Rosanna’s cognitive process was not implemented by the reuse of the visual mechanism. In this way, REU solves the problem faced by RES-2 in distinguishing ST from TT.

Should we then conclude that “mental simulation of” has to be defined in terms of reuse, rather than in terms of resemblance? Goldman (2008a) is still not convinced. Suppose that while Lisa is seeing a yellow banana, I am using my visual mechanism to visualize the Golden Gate Bridge. Now, even though Lisa’s visual experience and my visual image have been respectively generated by the use and the reuse of the visual mechanism, it would be bizarre to say that my mental state is a mental simulation of Lisa’s. Why? Because my mental state doesn’t resemble Lisa’s (she is seeing a yellow banana; I am visualizing the Golden Gate Bridge!). Thus—Goldman concludes—resemblance should be taken as the central feature of mental simulation.

2.3 Relations, States, and Processes

In order to overcome the difficulties faced by trying to define “mental simulation of” in terms of either resemblance or reuse, philosophers have built on the insights of both RES and REU and have proposed definitions that combine resemblance and reuse elements (Currie & Ravenscroft 2002; in recent years, Goldman himself seems to have favoured a mixed account; see Goldman 2012a). Here is one plausible definition:

(RES+REU) Token state M* is a mental simulation of token state M if and only if:

  1. Both M and M* are mental states
  2. M* resembles M in some significant respects
  3. M is generated by token cognitive process P
  4. M* is generated by token cognitive process P*
  5. P is implemented by the use of a token cognitive mechanism of type C
  6. P* is implemented by the reuse of a token cognitive mechanism of type C

RES+REU has at least three important virtues. The first is that it solves all the aforementioned problems for RES and REU—we leave to the reader the exercise of showing that this is indeed the case.

The second is that it fits nicely with an idea that loomed large in the simulationist literature: the idea that simulated mental states are “pretend” (“as if”, “quasi-”) states—imperfect copies of, surrogates for, the “genuine” states normally produced by a certain cognitive mechanism, obtained by taking this cognitive mechanism “off-line”. Consider the following case. Frank is in front of Central Café (and believes that he is there). He desires to drink a beer and believes that he can buy one at Central Café. When he feeds these mental states into his decision-making mechanism, the mechanism implements a decision-making process, which outputs the decision to enter the café. In this case, Frank’s decision-making mechanism was “on-line”—i.e., he used it; he employed it for its primary function. My situation is different. I don’t believe I am in front of Central Café, nor do I desire to drink a beer right now. Still, I can imagine believing and desiring so. When I feed these imagined states into my decision-making mechanism, I am not employing it for its primary function. Rather, I am taking it off-line (I am reusing it). As a result, the cognitive process implemented by my mechanism will output a merely imagined decision to enter the café. Now, it seems fair to say that my imagined decision resembles Frank’s decision (more on this in section 3). If you combine this with how these two mental states have been generated, the result is that my imagined decision is a mental simulation of Frank’s decision, and thus it is a simulated mental state. It is also clear why Frank’s decision is genuine, while my simulated mental state is just a pretend decision: all else being equal, Frank’s decision to enter Central Café will cause him to enter the café; by contrast, no such behaviour will result from my simulated decision. I have not really decided to enter the café. Mine was just a quasi-decision—an imperfect copy of, a surrogate for, Frank’s genuine decision.
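A rough sketch, again with invented names, may help fix the on-line/off-line idea: the very same decision routine is run twice, but only the on-line run is connected to the action system, so the off-line output remains a mere quasi-decision.

```python
# Illustrative only: one decision routine, used on-line by Frank and
# reused off-line by the mindreader.

def decision_mechanism(believes_cafe_in_front, desires_beer):
    """Shared decision-making routine (a stand-in)."""
    return "enter Central Café" if believes_cafe_in_front and desires_beer else None

def run_online(believes, desires, act):
    # Primary function: the output is passed on and causes behaviour.
    decision = decision_mechanism(believes, desires)
    if decision is not None:
        act(decision)              # Frank actually walks into the café
    return decision

def run_offline(imagined_belief, imagined_desire):
    # Reuse on imagined inputs: the output is quarantined from action.
    decision = decision_mechanism(imagined_belief, imagined_desire)
    return ("pretend", decision)   # my quasi-decision; no behaviour follows

franks_decision = run_online(True, True, act=print)   # prints "enter Central Café"
my_quasi_decision = run_offline(True, True)           # ("pretend", "enter Central Café")
```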

And here is RES+REU’s third virtue. So far, we have said that “mental simulation” can either pick out a dyadic relation between mental states or a monadic property of mental states. In fact, its ambiguity runs deeper than this, since philosophers and cognitive scientists also use “mental simulation” to refer to a monadic property of cognitive processes, namely, the property being a (mental) simulation process (or: “process of mental simulation”, “simulational process”, “simulative process”, etc.). As a first stab, a (mental) simulation process is a cognitive process generating simulated mental states. RES+REU has the resources to capture this usage of “mental simulation” too. Indeed, RES+REU implicitly contains the following definition of “simulation process”:

(PROC): Token process P* is a (mental) simulation process if and only if:

  1. P* generates token state M*
  2. M* resembles another token state, M, in some significant respects
  3. Both M and M* are mental states
  4. M is generated by token process P
  5. Both P and P* are cognitive processes
  6. P is implemented by the use of a token cognitive mechanism of type C
  7. P* is implemented by the reuse of a token cognitive mechanism of type C

Go back to the case in which Lisa was having a visual experience of a yellow banana, while I was having a visual image of a yellow banana. Our two mental states resembled one another, but different cognitive processes generated them: seeing in Lisa’s case, and visualizing in my case. Moreover, Lisa’s seeing was implemented by the use of the visual mechanism, while my visualizing was implemented by its reuse. According to PROC, the latter cognitive process, but not the former, was thus a simulation process.

To sum up, RES+REU captures many of the crucial features that Simulation Theorists ascribe to mental simulation. For this reason, we shall adopt it as our working definition of “mental simulation of”—consequently, we shall adopt PROC as our definition of “simulation process”.[2] We can put this into a diagram.

[a diagram consisting of a hexagon labelled 'C' with an arrow labeled 'use' pointing to a diamond labeled 'P' on the upper left, an arrow points from this diamond up to a rectangle labeled 'M'. From the hexagon is also an arrow labeled 're-use' pointing to a diamond labeled 'P*' on the upper right, an arrow points from this diamond up to a rectangle labeled 'M*'. The two rectangles are connected by a dashed double-headed arrow labeled 'resemblance']

Figure 1

The hexagon at the bottom depicts a cognitive mechanism C (it could be, say, the visual mechanism). When C is used (arrow on the left), it implements cognitive process P (say, seeing); when it is re-used (arrow on the right), it implements cognitive process P* (say, visualizing). P generates mental state M (say, a visual experience of a red tomato), while P* generates mental state M* (say, a visual image of a red tomato). These two mental states (M and M*) resemble one another. Given this: M* is a mental simulation of M; M* is a simulated mental state; and P* is a simulation process.[3]
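Since RES+REU and PROC are stated as explicit conditions, they can be rendered as simple predicates. The following Python sketch is our own encoding of Figure 1, under the crude assumption that resemblance "in significant respects" can be modelled as sameness of content; none of these names comes from the literature.

```python
# Our own toy encoding of RES+REU and PROC (Figure 1), not a published model.
from dataclasses import dataclass

@dataclass
class Process:
    mechanism_type: str   # e.g. "visual mechanism" (the hexagon C)
    mode: str             # "use" or "reuse"

@dataclass
class MentalState:
    produced_by: Process
    content: str          # crude stand-in for the respects of resemblance

def resembles(m_star, m):
    # Placeholder for "resembles in some significant respects".
    return m_star.content == m.content

def is_mental_simulation_of(m_star, m):
    """RES+REU: resemblance plus use/reuse of the same type of mechanism."""
    return (resembles(m_star, m)
            and m.produced_by.mode == "use"
            and m_star.produced_by.mode == "reuse"
            and m_star.produced_by.mechanism_type == m.produced_by.mechanism_type)

def is_simulation_process(p_star, m_star, m):
    """PROC: a process whose output is a mental simulation of some state."""
    return m_star.produced_by is p_star and is_mental_simulation_of(m_star, m)

# Lisa's seeing vs my visualizing:
seeing = Process("visual mechanism", "use")
visualizing = Process("visual mechanism", "reuse")
lisas_experience = MentalState(seeing, "yellow banana")
my_image = MentalState(visualizing, "yellow banana")

assert is_mental_simulation_of(my_image, lisas_experience)        # holds
assert not is_mental_simulation_of(lisas_experience, my_image)    # asymmetry preserved
assert is_simulation_process(visualizing, my_image, lisas_experience)
```

The asymmetry that defeated RES-1 is preserved because only the state produced in the "reuse" mode can stand on the left-hand side of the relation.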

2.4 Final Worries

In this section, we shall finally consider three worries about adopting RES+REU as a definition of “mental simulation of”. If you have already had enough of RES+REU, please feel free to move straight to section 3.

Heal (1994) pointed out a problem with committing ST to a particular account of the cognitive mechanisms that underlie it. Suppose that the human mind contains two distinct decision-making mechanisms: Mec1, which takes beliefs and desires as input, and generates decisions as output; and Mec2, which works by following exactly the same logical principles as Mec1, but takes imagined beliefs and imagined desires as input and generates imagined decisions as output. Consider again Frank’s decision to enter Central Café and my imagined decision to do so. According to the two-mechanisms hypothesis, Frank desired to drink a beer and believed that he could buy one at Central Café, and fed these mental states into Mec1, which generated the decision to enter the café. As for me, I fed the imagined desire to drink a beer and the imagined belief that I could buy one at Central Café into a distinct (type of) mechanism, i.e., Mec2, which generated the imagined decision to enter Central Café. Here is the question: does my imagined decision to enter Central Café count as a mental simulation of Frank’s decision to do so? If your answer is “Yes, it does”, then RES+REU is in trouble, since my imagined decision was not generated by reusing the same (type of) cognitive mechanism that Frank used to generate his decision; his decision was generated by Mec1, my imagined decision by Mec2. Thus, Heal concludes, a definition of “mental simulation of” should not contain any commitment about cognitive mechanisms—it should not make any implementation claim—but should be given at a more abstract level of description.

In the face of this difficulty, a defender of RES+REU can say the following. First, she might reject the intuition that, in the two-mechanisms scenario, my imagined decision counts as a mental simulation of Frank’s decision. At a minimum, she might say that this scenario does not elicit any robust intuition in one direction or the other: it is not clear whether these two mental states stand in the relation being a mental simulation of. Second, she might downplay the role of intuitions in the construction of a definition for “mental simulation of” and cognate notions. In particular, if she conceives of ST as an empirical theory in cognitive science, she will be happy to discount the evidential value of intuitions if countervailing theoretical considerations are available. This, e.g., is Currie and Ravenscroft’s (2002) position; they write that

there are two reasons … why the Simulation Theorist should prefer [a one mechanism hypothesis]: … first, the postulation of two mechanisms is less economical than the postulation of one; second, … we have very good reasons to think that imagination-based decision making does not operate in isolation from the subject’s real beliefs and desires. … If imagination and belief operate under a system of inferential apartheid—as the two-mechanisms view has it—how could this happen? (Currie & Ravenscroft 2002: 67–68)

A second worry has to do with the fact that RES+REU appears to be too liberal. Take this case. Yesterday, Angelina had the visual experience of a red apple. On the night of June 15, 1815, Napoleon conjured up the visual image of a red apple. Angelina used her visual mechanism to see, while Napoleon reused his to imagine. If we add to this that Napoleon’s mental state resembled Angelina’s, RES+REU predicts that Napoleon’s (token) visual image was a mental simulation of Angelina’s (token) visual experience. This might strike one as utterly bizarre. In fact, not only did Napoleon not intend to simulate Angelina’s experience: he could not even have intended to do it. After all, Angelina was born roughly 150 years after Napoleon’s death. By the same token, it is also impossible that Napoleon’s visual image has been caused by Angelina’s visual experience. As a matter of fact, the visual image Napoleon had on the night of June 15, 1815 is entirely disconnected from the visual experience that Angelina had yesterday. Thus, how could the former be a mental simulation of the latter? If you think about it, the problem is even worse than this. RES+REU has it that Napoleon’s visual image of a red apple is a mental simulation of all the visual experiences of a red apple that have obtained in the past, that are currently obtaining, and that will obtain in the future. Isn’t that absurd?

Again, a defender of RES+REU can give a two-fold answer. First, she can develop an argument that this is not absurd at all. Intuitively, the following principle seems to be true:

(TYPE): the mental state type visual image of a red apple is a mental simulation of the mental state type visual experience of a red apple.

If TYPE is correct, then the following principle has to be true as well:

(TOKEN): Any token mental state of the type visual image of a red apple is a mental simulation of every token mental state of the type visual experience of a red apple.

But TOKEN entails that Napoleon’s (token) visual image of a red apple is a mental simulation of Angelina’s (token) visual experience of a red apple, which is exactly what RES+REU predicts. Thus, RES+REU’s prediction, rather than being absurd, independently follows from quite intuitive assumptions. Moreover, even though TOKEN and RES+REU make the same prediction about the Napoleon-Angelina case, TOKEN is not entailed by RES+REU, since the latter contains a restriction on how visual images have to be generated. Thus, if one finds TOKEN intuitively acceptable, it is hard to see how one can find RES+REU to be too liberal.

The second component of the answer echoes one of the answers given to Heal: for a Simulation Theorist who conceives of ST as a theory in cognitive science, intuitions have a limited value in assessing a definition of “mental simulation of”. In fact, the main aim of this definition is not that of capturing folk intuitions, but rather that of offering a clear enough picture of the relation of mental simulation on the basis of which an adequate theory of mindreading can be built. So, if the proposed definition fails, say, to help distinguish ST from TT, or is of limited use in theory-building, or is contradicted by certain important results from cognitive science, then one has a good reason to abandon it. By contrast, it should not be a cause for concern if RES+REU does not match the folk concept MENTAL SIMULATION OF. The notion “mental simulation of” is a term of art—like, say, the notions of I-Language or of Curved Space. These notions match the folk concepts of language and space only poorly, but linguists and physicists do not take this to be a problem. The same applies to the notion of mental simulation.

And here is the third and final worry. RES+REU is supposed to be a definition of “mental simulation of” on the basis of which a theory of mindreading can be built. However, neither RES+REU nor PROC makes any reference to the idea of representing others’ mental states. Thus, how could these definitions help us to construct a Simulation Theory of mindreading? The answer is simple: they will help us exactly as a clear definition of “computation”, which has nothing to do with how the mind works, helped to develop the Computational Theory of Mind (see entry on computational theory of mind).

Here is another way to make the point. ST is made up of two distinct claims: the first is that mental simulation is psychologically real, i.e., that there are mental states and processes satisfying RES+REU and PROC. The second claim is that mental simulation plays a central role in mindreading. Clearly, the second claim cannot be true if the first is false. However, the second claim can be false even if the first claim is true: mental simulation could be psychologically real, but play no role in mindreading at all. Hence, Simulation Theorists have to do three things. First, they have to establish that mental simulation is psychologically real. We consider this issue in section 3. Second, they have to articulate ST as a theory of mindreading. That is, they have to spell out in some detail the crucial role that mental simulation is supposed to play in representing others’ mental states, and contrast the resulting theory with other accounts of mindreading. We dwell on this in sections 4 and 5. Finally, Simulation Theorists have to provide evidence in support of their theory of mindreading—that is, they have to give us good reasons to believe that mental simulation does play a crucial role in representing others’ mental states. We discuss this issue in section 6.

3. Two Types of Simulation Processes

Now that we have definitions of “mental simulation of” and cognate notions, it is time to consider which mental states and processes satisfy them, if any. Are there really simulated mental states? That is, are there mental states generated by the reuse of cognitive mechanisms? And do these mental states resemble the mental states generated by the use of such mechanisms? For example, is it truly the case that visual images are mental simulations of visual experiences? What about decisions, emotions, beliefs, desires, and bodily sensations? Can our minds generate simulated counterparts of all these types of mental states? In this section, we consider how Simulation Theorists have tackled these problems. We will do so by focusing on the following question: are there really simulation processes (as defined by PROC)? If the answer to this question is positive, it follows that there are mental states standing in the relation of mental simulation (as defined by RES+REU), and thus simulated mental states.

Following Goldman (2006), it has become customary among Simulation Theorists to argue for the existence of two types of simulation processes: high-level simulation processes and low-level simulation processes (see, however, de Vignemont 2009). By exploring this distinction, we begin to articulate the cognitive architecture underlying mental simulation proposed by ST.

3.1 High-Level Simulation Processes

High-level simulation processes are cognitive processes with the following features: (a) they are typically conscious, under voluntary control, and stimulus-independent; (b) they satisfy PROC, that is, they are implemented by the reuse of a certain cognitive mechanism, C, and their output states resemble the output states generated by the use of C.[4] Here are some cognitive processes that, according to Simulation Theorists, qualify as high-level simulation processes. Visualizing: the cognitive process generating visual images (Currie 1995; Currie & Ravenscroft 2002; Goldman 2006); motor imagination: the cognitive process generating imagined bodily movements and actions (Currie & Ravenscroft 1997, 2002; Goldman 2006); imagining deciding: the cognitive process generating decision-like imaginings (Currie & Ravenscroft 2002); imagining believing: the cognitive process generating belief-like imaginings (Currie & Ravenscroft 2002); imagining desiring: the cognitive process generating desire-like imaginings (Currie 2002). In what follows, we shall consider a couple of them in some detail.

Visualizing first. It is not particularly hard to see why visualizing satisfies condition (a). Typically: one can decide to visualize (or stop visualizing) something; the process is not driven by perceptual stimuli; and at least some parts of the visualization process are conscious. There might be cases in which visualizing is not under voluntary control, is stimulus-driven and, maybe, even entirely unconscious. This, however, is not a problem, since we know that there are clear cases satisfying (a).

Unsurprisingly, the difficult task for Simulation Theorists is to establish that visualizing has feature (b), that is: it is implemented by the reuse of the visual mechanism; and its outputs (that is, visual images) resemble genuine visual experiences. Simulation Theorists maintain that they have strong empirical evidence supporting the claim that visualizing satisfies PROC. Here is a sample (this and further evidence is extensively discussed in Currie 1995, Currie & Ravenscroft 2002, and in Goldman 2006):

  1. visualizing recruits some of the brain areas involved in vision (Kosslyn et al. 1999);
  2. left-neglect patients have the same deficit in both seeing and visualizing—i.e., they do not have perceptual experience of the left half of the visual space and they also fail to imagine the left half of the imagined space (Bisiach & Luzzatti 1978);
  3. ocular movements occurring during visualizing approximate those happening during seeing (Spivey et al. 2000);
  4. some patients systematically mistake visual images for perceptual states (Goldenberg et al. 1995);
  5. visual perception and visualizing exhibit similar patterns of information-processing (facilitations, constraints, illusions) (Decety & Michel 1989; Kosslyn et al. 1999).

On this basis, Simulation Theorists conclude that visualizing is indeed implemented by the reuse of the visual mechanism (evidence i and ii) and that its outputs, i.e., visual images, do resemble visual experiences (evidence iii, iv, and v). Thus, visualizing is a process that qualifies as high-level simulation, and visual images are simulated mental states.

Visual images are mental simulations of perceptual states. Are there high-level simulation processes whose outputs instead are mental simulations of propositional attitudes? (If you think that visual experiences are propositional attitudes, you can rephrase the question as follows: are there high-level simulation processes whose outputs are mental simulations of non-sensory states?) Three candidate processes have received a fair amount of attention in the simulationist literature: imagining desiring, imagining deciding, and imagining believing. The claims made by Simulation Theorists about these cognitive processes and their output states have generated an intense debate (Doggett & Egan 2007; Funkhouser & Spaulding 2009; Kieran & Lopes 2003; Nichols 2006a, 2006b; Nichols & Stich 2003; Velleman 2000). We do not have space to review it here (two good entry points are the introduction to Nichols 2006a and the entry on imagination). Rather, we shall confine ourselves to briefly illustrating the simulationist case in favour of the thesis that imagining believing is a high-level simulation process.

I don’t believe that Rome is in France, but I can imagine believing it. Imagining believing typically is a conscious, stimulus-independent process, under voluntary control. Thus, imagining believing satisfies condition (a). In order for it to count as an instance of high-level simulation process, it also needs to have feature (b), that is: (b.i) its outputs (i.e., belief-like imaginings) have to resemble genuine beliefs in some significant respects; (b.ii) it has to be implemented by the reuse of the cognitive mechanism (whose use implements the cognitive process) that generates genuine beliefs—let us call it “the belief-forming mechanism”. Does imagining believing satisfy (b)? Currie and Ravenscroft (2002) argue in favour of (b.i). Beliefs are individuated in terms of their content and functional role. Belief-like imaginings—Currie and Ravenscroft say—have the same content and a similar functional role to their genuine counterparts. For example, the belief that Rome is in France and the belief-like imagining that Rome is in France have exactly the same propositional content: that Rome is in France. Moreover, belief-like imaginings mirror the inferential role of genuine beliefs. If one believes both that Rome is in France and that French is the language spoken in France, one can infer the belief that French is the language spoken in Rome. Analogously, from the belief-like imagining that Rome is in France and the genuine belief that French is the language spoken in France, one can infer the belief-like imagining that French is the language spoken in Rome. So far, so good (but see Nichols 2006b).
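The inferential-mirroring point lends itself to a small sketch. The representation below (tagging states as "believed" or "imagined") is our own illustrative device, not Currie and Ravenscroft's formalism; the point it encodes is that the same inference step operates on both kinds of state, and that a conclusion drawn partly from an imagining stays an imagining.

```python
# Illustrative only: one inference rule shared by beliefs and belief-like imaginings.

def infer_city_language(city_in_country, language_of_country):
    """From (city is in country) and (language is spoken in country),
    infer (language is spoken in city), preserving the 'imagined' tag."""
    tag1, city, country1 = city_in_country
    tag2, country2, language = language_of_country
    if country1 != country2:
        return None
    # The conclusion is a genuine belief only if every premise is one.
    tag = "believed" if (tag1 == "believed" and tag2 == "believed") else "imagined"
    return (tag, f"{language} is spoken in {city}")

# Belief-like imagining + genuine belief -> belief-like imagining:
print(infer_city_language(("imagined", "Rome", "France"),
                          ("believed", "France", "French")))
# -> ('imagined', 'French is spoken in Rome')
```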

What about (b.ii)? Direct evidence bearing on it is scarce. However, Simulation Theorists can give an argument along the following lines. First, one owes an explanation of why belief-like imaginings are, well, belief-like—as we have said above, it seems that they have the same type of content as, and a functional role similar to, genuine beliefs. A possible explanation for this is that both types of mental states are generated by (cognitive processes implemented by) the same cognitive mechanism. Second, it goes without saying that our mind contains a mechanism for generating beliefs (the belief-forming mechanism), and that there must be some mechanism or other in charge of generating belief-like imaginings. It is also well known that cognitive mechanisms are evolutionarily costly to build and maintain. Thus, evolution might have adopted the parsimonious strategy of redeploying a pre-existing mechanism (the belief-forming mechanism) for a non-primary function, i.e., generating belief-like imaginings—in general, this hypothesis is also supported by the idea that neural reuse is one of the fundamental organizational principles of the brain (Anderson 2008). If one puts these two strands of reasoning together, one gets a prima facie case for the claim that imagining believing is implemented by the reuse of the belief-forming mechanism—that is, a prima facie case for the conclusion that imagining believing satisfies (b.ii). Since imagining believing appears also to satisfy (b.i) and (a), lacking evidence to the contrary, Simulation Theorists are justified in considering it to be a high-level simulation process.

Let’s take stock. We have examined a few suggested instances of high-level simulation processes. If Simulation Theorists are correct, they exhibit the following commonalities: they satisfy PROC (this is why they are simulation processes); they are typically conscious, under voluntary control, and stimulus-independent (this is why they are high-level). Do they have some other important features in common? Yes, they do—Simulation Theorists say. They are all under the control of a single cognitive mechanism: imagination (more precisely, Currie & Ravenscroft (2002) talk of Re-Creative Imagination, while Goldman (2006, 2009) uses the expression “Enactment Imagination”). The following passage will give you the basic gist of the proposal:

What is distinctive to high-level simulation is the psychological mechanism … that produces it, the mechanism of imagination. This psychological system is capable of producing a wide variety of simulational events: simulated seeings (i.e., visual imagery), … simulated motor actions (motor imagery), simulated beliefs, … and so forth. … In producing simulational outputs, imagination does not operate all by itself. … For example, it recruits parts of the visual system to produce visual imagery …. Nonetheless, imagination “takes the lead” in directing or controlling the other systems it enlists for its project. (Goldman 2009: 484–85)

Here is another way to make the point. We already know that, according to ST, visualizing is implemented by the reuse of the visual mechanism. In the above passage, Goldman adds that the reuse of the visual mechanism is initiated, guided, and controlled by imagination. The same applies, mutatis mutandis, to all cases of high-level simulation processes. For example, in imagining hearing, imagination assumes control of the auditory mechanism, takes it off-line, and (re)uses it to generate simulated auditory experiences. Goldman (2012b, Goldman & Jordan 2013) supports this claim by making reference to neuroscientific data indicating that the same core brain network, the so-called “default network”, subserves all the following self-projections: prospection (projecting oneself into one’s future); episodic memory (projecting oneself into one’s past); perspective taking (projecting oneself into other minds); and navigation (projecting oneself into other places) (see Buckner & Carroll 2007 for a review). These different self-projections presumably involve different high-level simulation processes. However, they all have something in common: they all involve imagination-based perspectival shifts. Therefore, the fact that there is one brain network common to all these self-projections lends some support to the claim that there is one common cognitive mechanism, i.e., imagination, which initiates, guides, and controls all high-level simulation processes.

If Goldman is right, and all high-level simulation processes are guided by imagination, we can then explain why, in our common parlance, we tend to describe high-level simulation processes and outputs in terms of imaginings, images, imagery, etc. More importantly, we can also explain why high-level simulation processes are conscious, under voluntary control, and stimulus-independent. These are, after all, typical properties of imaginative processes. However, there are simulation processes that typically are neither conscious, nor under voluntary control, nor stimulus-independent. This indicates that they are not imagination-based. It is to this other type of simulation process that we now turn.

3.2 Low-Level Simulation Processes

Low-level simulation processes are cognitive processes with these features: (a*) they are typically unconscious, automatic, and stimulus-driven; (b) they satisfy PROC, that is, they are implemented by the reuse of a certain cognitive mechanism, C, and their output states resemble the output states generated by the use of C. What cognitive processes are, according to ST, instances of low-level simulation? The answer can be given in two words: mirroring processes. Clarifying what these two words mean, however, will take some time.

The story begins at the end of the 1980s in Parma, Italy, where the neuroscientist Giacomo Rizzolatti and his team were investigating the properties of the neurons in the macaque monkey ventral premotor cortex. Through single-cell recording experiments, they discovered that the activity of neurons in the area F5 is correlated with goal-directed motor actions and not with particular movements (Rizzolatti et al. 1988). For example, some F5 neurons fire when the monkey grasps an object, regardless of whether the monkey uses the left or the right hand. A plausible interpretation of these results is that neurons in monkey area F5 encode motor intentions (i.e., those intentions causing and guiding actions like reaching, grasping, holding, etc.) and not mere kinematic instructions (i.e., those representations specifying the fine-grained motor details of an action). (In-depth philosophical analyses of the notion of motor intention can be found in: Brozzo forthcoming; Butterfill & Sinigaglia 2014; Pacherie 2000). This was an already interesting result, but it was not what the Parma group became famous for. Rather, their striking discovery happened a few years later, helped by serendipity. Researchers were recording the activity of F5 neurons in a macaque monkey performing an object-retrieval task. In between trials, the monkey stood still and watched an experimenter setting up the new trial, with microelectrodes still measuring the monkey’s brain activity. Surprisingly, some of the F5 neurons turned out to fire when the monkey saw the experimenter grasping and placing objects. This almost immediately led to new experiments, which revealed that a portion of F5 neurons not only fire when the monkey performs a certain goal-directed motor action (say, bringing a piece of food to the mouth), but also when it sees another agent performing the same (type of) action (di Pellegrino et al. 1992; Gallese et al. 1996; Rizzolatti et al. 1996). For this reason, these neurons were aptly called “mirror neurons”, and it was proposed that they encode motor intentions both during action execution and action observation (Rizzolatti & Sinigaglia 2007, forthcoming). Later studies found mirror neurons also in the macaque monkey inferior parietal lobule (Gallese et al. 2002), which together with the ventral premotor cortex constitutes the monkey cortical mirror neuron circuit (Rizzolatti & Craighero 2004).

Subsequent evidence suggested that an action mirror mechanism—that is, a cognitive mechanism that gets activated both when an individual performs a certain goal-directed motor action and when she sees another agent performing the same action—also exists in the human brain (for reviews, see Rizzolatti & Craighero 2004, and Rizzolatti & Sinigaglia forthcoming). In fact, it appears that there are mirror mechanisms in the human brain outside the action domain as well: a mirror mechanism for disgust (Wicker et al. 2003), one for pain (Singer et al. 2004; Avenanti et al. 2005), and one for touch (Blakemore et al. 2005). Given the variety of mirror mechanisms, it is not easy to give a definition that fits them all. Goldman (2008b) has quite a good one though, and we will draw from it: a cognitive mechanism is a mirror mechanism if and only if it gets activated both when an individual undergoes a certain mental event endogenously and when she perceives a sign that another individual is undergoing the same (type of) mental event. For example, the pain mirror mechanism gets activated both when individuals experience “a painful stimulus and … when they observe a signal indicating that [someone else] is receiving a similar pain stimulus” (Singer et al. 2004: 1157).

Having introduced the notions of mirror neuron and mirror mechanism, we can define the crucial notion of this section: mirroring process. We have seen that mirror mechanisms can get activated in two distinct modes: (i) endogenously; (ii) in the perception mode. For example, my action mirror mechanism gets endogenously activated when I grasp a mug, while it gets activated in the perception mode when I see you grasping a mug. Following again Goldman (2008b), let us say that a cognitive process is a mirroring process if and only if it is constituted by the activation of a mirror mechanism in the perception mode. For example, what goes on in my brain when I see you grasping a mug counts as a mirroring process.

Now that we know what mirroring processes are, we can return to our initial problem—i.e., whether they are low-level simulation processes (remember that a cognitive process is a low-level simulation process if and only if: (a*) it is typically unconscious, automatic, and stimulus-driven; (b) it satisfies PROC). For reasons of space, we will focus on disgust mirroring only.

Wicker et al. (2003) carried out an fMRI study in which participants first observed videos of disgusted facial expressions and subsequently underwent a disgust experience via inhaling foul odorants. It turned out that the same neural area—the left anterior insula—that was preferentially activated during the experience of disgust was also preferentially activated during the observation of the disgusted facial expressions. These results indicate the existence of a disgust mirror mechanism. Is disgust mirroring (the activation of the disgust mirror mechanism in the perception mode) a low-level simulation process? Simulation Theorists answer in the affirmative.

Here is why disgust mirroring satisfies (a*): the process is stimulus-driven (it is sensitive to certain perceptual stimuli, namely disgusted facial expressions); it is automatic; and it is typically unconscious (even though its output, i.e., “mirrored disgust”, is sometimes conscious). What about condition (b)? Presumably, the primary (evolutionary) function of the disgust mechanism is to produce a disgust response to spoiled food, germs, parasites, etc. (Rozin et al. 2008). In the course of evolution, this mechanism could have been subsequently co-opted to also get activated by the perception (of a sign) that someone else is experiencing disgust, in order to facilitate social learning of food preferences (Gariépy et al. 2014). If this is correct, then disgust mirroring is implemented by the reuse of the disgust mechanism (by employing this mechanism for a function different from its primary one). Moreover, the output of disgust mirroring resembles the genuine experience of disgust in at least two significant respects: first, both mental states have the same neural basis; second, when conscious, they share a similar phenomenology. Accordingly, (b) is satisfied. By putting all this together, Simulation Theorists conclude that disgust mirroring is a low-level simulation process, and mirrored disgust is a simulated mental state (Goldman 2008b; Barlassina 2013).

4. The Role of Mental Simulation in Mindreading

In the previous section, we examined the case for the psychological reality of mental simulation. We now turn to ST as a theory of mindreading. We will tackle two main issues: the extent to which mindreading is simulation-based, and how simulation-based mindreading works.

4.1 The Centrality of Mental Simulation in Mindreading

ST proposes that mental simulation plays a central role in mindreading, i.e., it plays a central role in the capacity to represent and reason about others’ mental states. What does “central” mean here? Does it mean the central role, with other contributors to mindreading being merely peripheral? This is an important question, since in recent years hybrid models have been proposed according to which both mental simulation and theorizing play important roles in mindreading (see section 5.2).

A possible interpretation of the claim that mental simulation plays a central role in representing others’ mental states is that mindreading events are always simulation-based, even if they sometimes also involve theory. Some Simulation Theorists, however, reject this interpretation, since they maintain that there are mindreading events in which mental simulation plays no role at all (Currie & Ravenscroft 2002). For example, if I know that Little Jimmy is happy every time he finds a dollar, and I also know that he has just found a dollar, I do not need to undergo any simulation process to conclude that Little Jimmy is happy right now. I just need to carry out a simple logical inference.

However, generalizations like, “Little Jimmy is happy every time he finds a dollar,” are ceteris paribus rules. People readily recognize exceptions: for example, we recognize situations in which Jimmy would probably not be happy even if he found a dollar, including some in which finding a dollar might actually make him unhappy. Rather than applying some additional or more complex rules that cover such situations, it is arguable that putting ourselves in Jimmy’s situation and using “good common sense” alerts us to these exceptions and overrides the rule. If that is correct, then simulation is acting as an overseer or governor even when people appear to be simply applying rules.

Goldman (2006) suggests that we cash out the central role of mental simulation in representing others’ mental states as follows: mindreading is often simulation-based. Goldman’s suggestion, however, turns out to be empty, since he explicitly refuses to specify what “often” means in this context:

How often is often? Every Tuesday, Thursday, and Saturday? Precisely what claim does ST mean to make? It is unreasonable to demand a precise answer at this time. (Goldman 2006: 42; see also Goldman 2002; Jeannerod & Pacherie 2004)

Perhaps a better way to go is to characterize the centrality of mental simulation for mindreading not in terms of frequency of use, but in terms of importance. Currie and Ravenscroft make the very plausible suggestion that “one way to see how important a faculty is for performing a certain task is to examine what happens when the faculty is lacking or damaged” (Currie & Ravenscroft 2002: 51). On this basis, one could say that mental simulation plays a central role in mindreading if and only if: if one’s simulational capacity (i.e., the capacity to undergo simulation processes/simulated mental states) were impaired, then one’s mindreading capacity would be significantly impaired.

An elaboration of this line of thought comes from Gordon (2005)—see also Gordon (1986, 1996) and Peacocke (2005)—who argues that someone lacking the capacity for mental simulation would not be able to represent mental states as such, since she would be incapable of representing anyone as having a mind in the first place. Gordon’s argument is essentially as follows:

We represent something as having a mind, as having mental states and processes, only if we represent it as a subject (“subject of experience,” in formulations of “the hard problem of consciousness”), where “a subject” is understood as a generic “I”. This distinguishes it from a “mere object” (and also is a necessary condition for a more benevolent sort of empathy).

To represent something as another “I” is to represent it as a possible target of self-projection: as something one might (with varying degrees of success) imaginatively put oneself in the place of. (Of course, one can fancifully put oneself in the place of just about anything—a suspension bridge, even; but that is not a reductio ad absurdum, because one can also fancifully represent just about anything as having a mind.)

It is not clear, however, what consequences Gordon’s conceptual argument would have for mindreading, if any. Even if a capacity to self-project were needed for representing mental states as such, would lack of this capacity necessarily impair mindreading? That is, couldn’t one explain, predict, and coordinate behavior using a theory of internal states, without conceptualizing these as states of an I or subject? As a more general point, Simulation Theorists have never provided a principled account of what would constitute a “significant impairment” of mindreading capacity.

To cut a long story short, ST claims that mental simulation plays a central role in mindreading, but at the present stage its proponents do not agree on what this centrality exactly amounts to. We will come back to this issue in section 5, when we shall discuss the respective contributions of mental simulation and theorizing in mindreading.

We now turn to a different problem: how does mental simulation contribute to mindreading when it does? That is, how does simulation-based mindreading work? Here again, Simulation Theorists disagree about what the right answer is. In what follows, we explore some dimensions of disagreement.

4.2 Constitution or Causation?

Some Simulation Theorists defend a strong view of simulation-based mindreading (Gordon 1986, 1995, 1996; Gallese et al. 2004; Gallese & Sinigaglia 2011). They maintain that many simulation-based mindreading events are (entirely) constituted by mental simulation events (where mental simulation events are simulated mental states or simulation processes). In other words, some Simulation Theorists claim that, on many occasions, the fact that a subject S is representing someone else’s mental states is nothing over and above the fact that S is undergoing a mental simulation event: the former fact reduces to the latter. For example, Lisa’s undergoing a mirrored disgust experience as a result of observing John’s disgusted face would count as a mindreading event: Lisa’s simulated mental state would represent John’s disgust (Gallese et al. 2004). Let us call this “the Constitution View”.

We shall elaborate on the details of the Constitution View in section 4.3. Before doing that, we consider an argument that has been repeatedly directed against it, and which is supposed to show that the Constitution View is a non-starter (Fuller 1995; Heal 1995; Goldman 2008b; Jacob 2008, 2012). For lack of a better name, we will call it “the Anti-Constitution argument”. Here it is. By definition, a mindreading event is a mental event in which a subject, S, represents another subject, Q, as having a certain mental state M. Now—the argument continues—the only way in which S can represent Q as having M is this: S has to employ the concept of that mental state and form the judgment, or the belief, that Q is in M. Therefore, a mindreading event is identical to an event of judging that someone else has a certain mental state (where this entails the application of mentalistic concepts). It follows from this that mental simulation events cannot be constitutive of mindreading events, since the former events are not events of judging that someone else has a certain mental state. An example should clarify the matter. Consider Lisa again, who is undergoing a mirrored disgust experience as a result of observing John’s disgusted face. Clearly, undergoing such a simulated disgust experience is a different mental event from judging that John is experiencing disgust. Therefore, Lisa’s mental simulation does not constitute a mindreading event.

In section 4.3, we will discuss how the defenders of the Constitution View have responded to this argument. Suppose for the moment that the Anti-Constitution argument is sound. What alternative pictures of simulation-based mindreading are available? Those Simulation Theorists who reject the Constitution View tend to endorse the Causation View, according to which mental simulation events never constitute mindreading events, but only causally contribute to them. The best developed version of this view is Goldman’s (2006) Three-Stage Model (again, this is our label, not his), whose basic structure is as follows:

STAGE 1. Mental simulation: Subject S undergoes a simulation process, which outputs a token simulated mental state m*.

STAGE 2. Introspection: S introspects m* and categorizes/conceptualizes it as (a state of type) M.

STAGE 3. Judgment: S attributes (a state of type) M to another subject, Q, through the judgment Q is in M.

(The causal relations among these stages are such that: STAGE 1 causes STAGE 2, and STAGE 2 in turn causes STAGE 3. See Spaulding 2012 for a discussion of the notion of causation in this context.)

Here again is our familiar example. On the basis of observing John’s disgusted facial expression, Lisa comes to judge that John is having a disgust experience. How did she arrive at the formation of this judgment? Goldman’s answer is as follows. The observation of John’s disgusted facial expression triggered a disgust mirroring process in Lisa, resulting in Lisa’s undergoing a mirrored disgust experience (STAGE 1). This caused Lisa to introspect her simulated disgust experience and to categorize it as a disgust experience (STAGE 2) (the technical notion of introspection used by Goldman will be discussed in section 4.4). This, in turn, brought about the formation of the judgment John is having a disgust experience (STAGE 3). Given that, according to Goldman, mindreading events are identical to events of judging that someone else has a certain mental state, it is only this last stage of Lisa’s cognitive process that constitutes a mindreading event. The previous two stages, on the other hand, were merely causal contributors to it. But the mental simulation took place entirely at STAGE 1. This is why the Three-Stage Model is a version of the Causation View: according to the model, mental simulation events causally contribute to, but do not constitute, mindreading events.
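
The causal structure of the Three-Stage Model can be made vivid with a minimal, purely illustrative sketch in Python. The function names (simulate, introspect, attribute) and the toy categorization rule are our own labels, not Goldman’s; the sketch only displays the ordering of the three stages, each of which causes the next.

```python
# A minimal, purely illustrative sketch of simulation-based mindreading as a
# three-stage pipeline. All names and the toy categorization rule are ours,
# not Goldman's; only the causal ordering matters here.

def simulate(perceived_cue: str) -> dict:
    """STAGE 1: a low-level simulation (mirroring) process outputs a token simulated state m*."""
    # e.g., observing a disgusted face triggers mirrored disgust in the observer
    return {"neural_profile": "insula-like", "valence": "negative"}

def introspect(simulated_state: dict) -> str:
    """STAGE 2: introspection categorizes the token state m* as a state of type M."""
    if simulated_state["neural_profile"] == "insula-like":
        return "disgust"
    return "unclassified"

def attribute(state_type: str, target: str) -> str:
    """STAGE 3: the categorized state is projected onto the target via a judgment."""
    return f"{target} is experiencing {state_type}"

# Lisa observes John's disgusted facial expression:
m_star = simulate("disgusted facial expression")   # STAGE 1 causes STAGE 2 ...
state_type = introspect(m_star)                    # ... which in turn causes STAGE 3
print(attribute(state_type, "John"))               # only this judgment is the mindreading event
```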

4.3 Mindreading without Judgement

The main strategy adopted by the advocates of the Constitution View in responding to the Anti-Constitution argument consists in impugning the identification of mindreading events with events of judging that someone else has a certain mental state. A prominent version of this position is Gordon’s (1995, 1996) Radical Simulationism, according to which representing someone else’s mental states does not require the formation of judgments involving the application of mentalistic concepts. Rather, Gordon proposes that the main bulk of mindreading events are non-conceptual representations of others’ mental states, where these non-conceptual representations are constituted by mental simulation events. If this is true, many mindreading events are constituted by mental simulation events, and thus the Constitution View is correct.

The following case should help to get Radical Simulationism across. Suppose that I want to represent the mental state that an individual—call him “Mr Tees”—is in right now. According to Gordon, there is a false assumption behind the idea that, in order to do so, I need to form a judgment with the content Mr Tees is in M (where “M” is a placeholder for a mentalistic concept). The false assumption is that the only thing that I can do is to simulate myself in Mr Tees’s situation. As Gordon points out, it is also possible for me to simulate Mr Tees in his situation. And if I do so, my very simulation of Mr Tees constitutes a representation of his mental state, without the need of forming any judgment. This is how Gordon makes his point:

To simulate Mr Tees in his situation requires an egocentric shift, a recentering of my egocentric map on Mr Tees. He becomes in my imagination the referent of the first person pronoun “I”. … Such recentering is the prelude to transforming myself in imagination into Mr Tees as much as actors become the characters they play. … But once a personal transformation has been accomplished, … I am already representing him as being in a certain state of mind. (Gordon 1995: 55–56)

It is important to stress the dramatic difference between Gordon’s Radical Simulationism and Goldman’s Three-Stage Model. According to the latter, mental simulation events causally contribute to representing other people’s mental states, but the mindreading event proper is always constituted by a judgment (or a belief). Moreover, Goldman maintains that the ability to form such judgments requires both the capacity to introspect one’s own mental states (more on this in section 4.4) and possession of mentalistic concepts. None of this is true of Radical Simulationism. Rather, Gordon proposes that, in the large majority of cases, it is the very mental simulation event itself that constitutes a representation of someone else’s mental states. Furthermore, since such mental simulation events require neither the capacity for introspection nor possession of mentalistic concepts, Radical Simulationism entails the surprising conclusion that these two features play at best a very minor role in mindreading. A testable corollary is that social interaction often relies on an understanding of others that does not require the explicit application of mental state concepts.

4.4 Mindreading and Introspection

From what we have said so far, one could expect that Gordon should agree with Goldman on at least one point. Clearly, Gordon has to admit that there are some cases of mindreading in which a subject attributes a mental state to someone else through a judgment involving the application of mentalistic concepts. Surely, Gordon cannot deny that there are occasions in which we think things like Mary believes that John is late or Pat desires to visit Lisbon. Being a Simulation Theorist, Gordon will also presumably be eager to maintain that many such mindreading events are based on mental simulation events. But if Gordon admits that much, should he not also concede that Goldman’s Three-Stage Model is the right account of at least those simulation-based mindreading events? Surprising as it may be, Gordon still disagrees.

Gordon (1995) accepts that there are occasions in which a subject arrives at a judgment about someone else’s mental state on the basis of some mental simulation event. He might also concede to Goldman that such a judgment involves mentalistic concepts (but see Gordon’s 1995 distinction between comprehending and uncomprehending ascriptions). Contra Goldman, however, Gordon argues that introspection plays no role at all in the generation of these judgments. Focusing on a specific example will help us to clarify this further disagreement between Goldman and Gordon.

Suppose that I know that Tom believes that (1) and (2):

  1. Fido is a dog
  2. All dogs enjoy watching TV

On this basis, I attribute to Tom the further belief that (3):

  3. Fido enjoys watching TV

Goldman’s Three-Stage Model explains this mindreading act in the following way. FIRST STAGE: I imagine believing what Tom believes (i.e., I imagine believing that (1) and (2)); I then feed those belief-like imaginings into my reasoning mechanism (in the off-line mode); as a result, my reasoning mechanism outputs the imagined belief that (3). The SECOND STAGE of the process consists in introspecting this simulated belief and categorizing it as a belief. Crucially, in Goldman’s model, “introspection” does not merely refer to the capacity to self-ascribe mental states. Rather, it picks out a distinctive cognitive method for self-ascription, a method which is typically described as non-inferential and quasi-perceptual (see the section Inner sense accounts in the entry on self-knowledge). In particular, Goldman (2006) characterizes introspection as a transduction process that takes the neural properties of a mental state token as input and outputs a categorization of the type of state. In the case that we are considering, my introspective mechanism takes the neural properties of my token simulated belief as input and categorizes it as a belief as output. After all this, the THIRD STAGE occurs: I project the categorized belief onto Tom, through the judgment Tom believes that Fido enjoys watching TV. (You might wonder where the content of Tom’s belief comes from. Goldman (2006) has a story about that too, but we will leave this aside).

What about Gordon? How does he explain, in a simulationist fashion but without resorting to introspection, the passage from knowing that Tom believes that (1) and (2) to judging that Tom believes that (3)? According to Gordon, the first step in the process is, of course, imagining being Tom—thus believing, in the context of the simulation, that (1) and (2). This results (again in the context of the simulation) in the formation of the belief that (3). But how do I now go about discovering that *I*, Tom, believe that (3)? How can one perform such a self-ascription if not via introspection? A suggestion made by Gareth Evans will, Gordon thinks, show us how.

Evans (1982) famously argued that we answer the question “Do I believe that p?” by answering another question, namely “Is it the case that p?” In other words, according to Evans, we ascribe beliefs to ourselves not by introspecting, or by “looking inside”, but by looking “outside” and trying to ascertain how the world is. If, e.g., I want to know whether I believe that Manchester is bigger than Sheffield, I just ask myself “Is Manchester bigger than Sheffield?” If I answer in the affirmative, then I believe that Manchester is bigger than Sheffield. If I answer in the negative, then I believe that Manchester is not bigger than Sheffield. If I do not know what to answer, then I do not have any belief with regard to this subject matter.

Gordon (1986, 1995) maintains that this self-ascription strategy—which he labels “the ascent routine” (Gordon 2007)—is also the strategy that we employ, in the context of a simulation, to determine the mental states of the simulated agent:

In a simulation of O, I settle the question of whether O believes that p by simply asking … whether it is the case that p. That is, I simply concern myself with the world—O’s world, the world from O’s perspective. … Reporting O’s beliefs is just reporting what is there. (Gordon 1995: 60)

So, this is how, in Gordon’s story, I come to judge that Tom has the belief that Fido enjoys watching TV. In the context of the simulation, *I* asked *myself* (where both “*I*” and “*myself*” in fact refer to Tom) whether *I* believe that Fido enjoys watching TV. And *I* answered this question by answering another question, namely, whether it is the case that Fido enjoys watching TV. Given that, from *my* perspective, Fido enjoys watching TV (after all, from *my* perspective, Fido is a dog and all dogs enjoy watching TV), *I* expressed my belief by saying: “Yes, *I*, Tom, believe that Fido enjoys watching TV”. As you can see, in such a story, introspection does not do anything. (We will come back to the role of introspection in mindreading in section 6.2).
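
The contrast with Goldman can be brought out with a toy sketch of the ascent routine, under the simplifying (and entirely our own) assumption that the simulated perspective can be modelled as a small set of propositions plus a one-step inference. The point of the sketch is that the question “Do I believe that p?” is settled by checking whether p holds from the simulated perspective, without any operation that inspects the simulator’s own states.

```python
# A toy sketch of Gordon's ascent routine. The simulated perspective is modelled,
# for illustration only, as a set of propositions plus one hard-coded inference step.
# Self-ascription inside the simulation never inspects the simulator's own states.

perspective_of_tom = {"Fido is a dog", "All dogs enjoy watching TV"}

def holds_from_perspective(p: str, perspective: set) -> bool:
    """'Is it the case that p?', answered from within the simulated perspective."""
    if p in perspective:
        return True
    # toy inference: the universal generalization applied to a known dog
    if p == "Fido enjoys watching TV":
        return {"Fido is a dog", "All dogs enjoy watching TV"} <= perspective
    return False

def ascent_routine(p: str, perspective: set) -> str:
    """Answer 'Do I believe that p?' by answering 'Is it the case that p?'."""
    if holds_from_perspective(p, perspective):
        return f"Yes, I believe that {p}"
    return f"I have no belief as to whether {p}"

# In the context of simulating Tom:
print(ascent_routine("Fido enjoys watching TV", perspective_of_tom))
```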

4.5 Summary

In sections 2, 3, and 4 we dwelt upon the “internal” disagreements among Simulation Theorists. It goes without saying that such disagreements are both wide and deep. In fact, different Simulation Theorists give different answers to such fundamental questions as: “What is mental simulation?”, “How does mental simulation contribute to mindreading?”, and “What is the role of introspection in mindreading?” In light of such differences of opinion in the simulationist camp, one might conclude that, after all, Stich and Nichols (1997) were right when saying that there is no such thing as the Simulation Theory. However, if one considers what is shared among Simulation Theorists, one will realize that there is unity amidst this diversity. A good way to reveal the commonalities among different versions of ST is by contrasting ST with its arch-enemy, i.e., the Theory-Theory of mindreading. This is what we do in the next section.

5. Simulation Theory and Theory-Theory

ST is only one of several accounts of mindreading on the market. A rough-and-ready list of the alternatives should at least include: the Intentional Stance Theory (Dennett 1987; Gergely & Csibra 2003; Gergely et al. 1995); Interactionism (Gallagher 2001; Gallagher & Hutto 2008; De Jaegher et al. 2010); and the Theory-Theory (Gopnik & Wellman 1992; Gopnik & Meltzoff 1997; Leslie 1994; Scholl & Leslie 1999). In this entry, we will discuss the Theory-Theory (TT) only, given that the TT-ST controversy has constituted the focal point of the debate on mindreading during the last 30 years or so.

5.1 The Theory-Theory

As suggested by its name, the Theory-Theory proposes that mindreading is grounded in the possession of a Theory of Mind (“a folk psychology”)—i.e., it is based on tacit knowledge of a body of information comprising a number of “folk” laws or principles connecting mental states with sensory stimuli, behavioural responses, and other mental states. Here are a couple of putative examples:

Law of sight: If S is in front of object O, S directs her eye-gaze to O, S’s visual system is properly functioning, and the environmental conditions are optimal, then ceteris paribus S will see O.

Law of the practical syllogism: If S desires a certain outcome G and S believes that by performing a certain action A she will obtain G, then ceteris paribus S will decide to perform A.

The main divide among Theory-Theorists concerns how the Theory of Mind is acquired—i.e., it concerns where this body of knowledge comes from. According to the Child-Scientist Theory-Theory (Gopnik & Wellman 1992; Gopnik & Meltzoff 1997), a child constructs a Theory of Mind exactly as a scientist constructs a scientific theory: she collects evidence, formulates explanatory hypotheses, and revises these hypotheses in the light of further evidence. In other words, “folk” laws and principles are obtained through hypothesis testing and revision—a process that, according to proponents of this view, is guided by a general-purpose, Bayesian learning mechanism (Gopnik & Wellman 2012). By contrast, the Nativist Theory-Theory (Carruthers 2013; Scholl & Leslie 1999) argues that a significant part of the Theory of Mind is innate, rather than learned. More precisely, Nativists typically consider the core of the Theory of Mind as resulting from the maturation of a cognitive module specifically dedicated to representing mental states.

These disagreements notwithstanding, the main tenet of TT is clear enough: attributions of mental states to other people are guided by the possession of a Theory of Mind. For example, if I know that you desire to buy a copy of The New York Times and I know that you believe that if you go to News & Booze you can buy a copy, then I can use the Law of the Practical Syllogism to infer that you will decide to go to News & Booze.
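
For illustration, here is a schematic Python sketch of such a theory-driven prediction. The encoding of the Law of the practical syllogism and the particular data structures are our own simplifications; the point is only that the prediction falls out of a stored generalization applied to explicitly attributed desires and beliefs, rather than out of running one’s own decision-making off-line.

```python
# A schematic sketch of a theory-driven (TT-style) prediction. The stored "law"
# is a stand-in for one tacitly known folk-psychological principle; the target's
# desires and beliefs are supplied as explicit premises rather than simulated.

def practical_syllogism(desires: set, beliefs: dict) -> list:
    """If S desires outcome G and believes that action A yields G, predict that S decides to do A."""
    decisions = []
    for goal in desires:
        for action, outcome in beliefs.items():
            if outcome == goal:
                decisions.append(action)
    return decisions

your_desires = {"have a copy of The New York Times"}
your_beliefs = {"go to News & Booze": "have a copy of The New York Times"}

print(practical_syllogism(your_desires, your_beliefs))
# ['go to News & Booze']: the prediction falls out of the stored generalization,
# not out of running one's own decision-making mechanism off-line.
```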

TT has been so popular among philosophers and cognitive scientists that the explanation it proposes has ended up being the name of the very phenomenon to be explained: on many occasions, scholars use the expression “Theory of Mind” as a synonym of “mindreading”. Simulation Theorists, however, have never been particularly impressed by this. According to them, there is no need to invoke the tacit knowledge of a Theory of Mind to account for mindreading, since a more parsimonious explanation is available: we reuse our own cognitive mechanisms to mentally simulate others’ mental states. For example, why do I need to know the Law of the Practical Syllogism, if I can employ my own decision-making mechanism (which I have anyway) to simulate your decision? It is uneconomical—Simulation Theorists say—to resort to an information-rich strategy, if an information-poor strategy will do equally well.

The difference between TT and ST can be further illustrated through a nice example given by Stich and Nichols (1992). Suppose that you want to predict the behavior of an airplane in certain atmospheric conditions. You can collect the specifications of the airplane and infer, on the basis of aerodynamic theory, how the airplane will behave. Alternatively, you can build a model of the airplane and run a simulation. The former scenario approximates the way in which TT describes our capacity to represent others’ mental states, while the latter approximates ST. Two points need to be stressed, though. First, while knowledge of aerodynamic theory is explicit, TT says that our knowledge of the Theory of Mind is typically implicit (or tacit). That is, someone who knows aerodynamic theory is aware of the theory’s laws and principles and is able to report them correctly, while the laws and principles constituting one’s Theory of Mind typically lie outside awareness and reportability. Second, when we run a simulation of someone else’s mental states, we do not need to build a model: we are the model—that is, we use our own mind as a model of others’ minds.

Simulation Theorists maintain that the default state for the “model” is one in which the simulator simply makes no adjustments when simulating another individual. That is, ST has it that we are automatically disposed to attribute to a target mental states no different from our own current states. This would often serve adequately in social interaction between people who are cooperating or competing in what is for practical purposes the same situation. We tend to depart from this default when we perceive relevant differences between others’ situations and our own. In such cases, we might find ourselves adjusting for situational differences by putting ourselves imaginatively in what we consider the other’s situation to be.

We might also make adjustments for individual differences. An acquaintance will soon be choosing between candidate a and candidate b in an upcoming election. To us, projecting ourselves imaginatively into that voting situation, the choice is glaringly obvious: candidate a, by any reasonable criteria. But then we may wonder whether this imaginative projection into the voting situation adequately represents our acquaintance in that situation. We might recall things the person has said, or peculiarities of dress style, diet, or entertainment, that might seem relevant. Internalizing such behavior ourselves, trying to “get behind” it as an actor might get behind a scripted role, we might then put, as it were, a different person into the voting situation, one who might choose candidate b.

Such a transformation would require quarantining some of our own mental states, preferences, and dispositions, inhibiting them so that they do not contaminate our off-line decision-making in the role of the other. Such inhibition of one's own mental states would be cognitively demanding. For that reason, ST predicts that mindreading will be subject to egocentric errors—that is, it predicts that we will often attribute to a target the mental state that we would have if we were in the target’s situation, rather than the state the target is actually in (Goldman 2006). In section 6.2, we shall discuss whether this prediction is borne out by the data.
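
The default-and-adjust picture, together with its predicted failure mode, can be summarized in a schematic sketch. The names and the binary “quarantine” flag below are our own simplifications rather than anything proposed by Simulation Theorists.

```python
# A schematic sketch of ST's default-and-adjust picture. The binary
# quarantine_success flag stands in for the cognitively demanding inhibition of
# one's own states; when it fails, the simulator's own state leaks through,
# producing an egocentric error.

def attribute_state(own_state, target_specific_state, noticed_difference, quarantine_success):
    # Default: project one's own current state onto the target.
    if not noticed_difference or target_specific_state is None:
        return own_state
    # Adjustment: quarantine one's own state and simulate the target's instead.
    return target_specific_state if quarantine_success else own_state

# I know the answer to a quiz question; the target does not.
print(attribute_state(own_state="knows the answer",
                      target_specific_state="does not know the answer",
                      noticed_difference=True,
                      quarantine_success=False))
# -> 'knows the answer': a curse-of-knowledge-style egocentric error.
```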

5.2 Collapse or Cooperation?

On the face of it, ST and TT could not be more different from one another. Some philosophers, however, have argued that, on closer inspection, ST collapses into TT, thus revealing itself as a form of TT in disguise. The collapse argument was originally formulated by Daniel Dennett (1987):

If I make believe I am a suspension bridge and wonder what I will do when the wind blows, what “comes to my mind” in my make-believe state depends on… my knowledge of physics… Why should my making believe I have your beliefs be any different? In both cases, knowledge of the imitated object is needed to drive the… “simulation”, and the knowledge must be… something like a theory. (Dennett 1987: 100–101, emphasis added)

Dennett’s point is clear. If I imagine being, say, a bridge, what I imagine will depend on my theory of bridges. Suppose that I have a folk theory of bridges that contains the following principle: “A bridge cannot sustain a weight superior to its own weight”. In this case, if I imagine an elephant weighing three tons walking over a bridge weighing two tons, I will imagine the bridge collapsing. Since my “bridge-simulation” is entirely theory-driven, “simulation” is a misnomer. The same carries over to “simulating other people’s mental states”, Dennett says. If I try to imagine your mental states, what I imagine will depend entirely on my Theory of Mind. Therefore, the label “mental simulation” is misleading.

Heal (1986) and Goldman (1989) promptly replied to Dennett. Fair enough, if a system S tries to simulate the state of a radically different system Q (e.g., if a human being tries to simulate the state of a bridge), then S’s simulation must be guided by a theory. However, if a system S tries to simulate the state of a relevantly similar system S*, then S’s simulation can be entirely process-driven: to simulate the state which S* is in, S simply has to run in itself a process similar to the one S* underwent. Given that, for all intents and purposes, human beings are relevantly similar to each other, a human being can mentally simulate what follows from having another human being’s mental states without resorting to a body of theoretical knowledge about the mind’s inner workings. She will just need to reuse her own cognitive mechanisms to implement a simulation process.

This reply invited the following response (Jackson 1999). If the possibility of process-driven simulation is grounded in the similarity between the simulator and the simulated, then I have to assume that you are relevantly similar to me, when I mentally simulate your mental states. This particular assumption, in turn, will be derived from a general principle—something like “Human beings are psychologically similar”. Therefore, mental simulation is grounded in the possession of a theory. The threat of collapse is back! One reply to Jackson’s arguments is as follows (for other replies see Goldman 2006): the fact that process-driven simulation is grounded in the similarity among human beings does not entail that, in order to run a simulation, a simulator must know (or believe, or assume) that such similarity obtains; no more, indeed, than the fact that the solubility of salt is grounded in the molecular structure of salt entails that a pinch of salt needs to know chemistry to dissolve in water.

Granting that ST and TT are distinct theories, we can now ask a different question: are the theories better off individually or should they join forces somehow? Let us be more explicit. Can ST on its own offer an adequate account of mindreading (or at least of the great majority of its episodes)? And what about TT? A good number of theorists now believe that neither ST nor TT alone will do. Rather, many would agree that these two theories need to cooperate, if they want to reach a satisfactory explanation of mindreading. Some authors have put forward TT-ST hybrid models, i.e., models in which the tacit knowledge of a Theory of Mind is the central aspect of mindreading, but it is in many cases supplemented by simulation processes (Botterill & Carruthers 1999; Nichols & Stich 2003). Other authors have instead defended ST-TT hybrid models, namely, accounts of mindreading where the pride of place is given to mental simulation, but where the possession of a Theory of Mind plays some non-negligible role nonetheless (Currie & Ravenscroft 2002; Goldman 2006; Heal 2003). Since this entry is dedicated to ST, we will briefly touch upon one instance of the latter variety of hybrid account.

Heal (2003) suggested that the domain of ST is restricted to those mental processes involving rational transitions among contentful mental states. To wit, Heal maintains that mental simulation is the cognitive routine that we employ to represent other people’s rational processes, i.e., those cognitive processes which are sensitive to the semantic content of the mental states involved. On the other hand,

when starting point and/or outcome are [states] without content, and/or the connection is not [rationally] intelligible, there is no reason … to suppose that the process … can be simulated. (Heal 2003: 77)

An example will clarify the matter. Suppose that I know that you desire to eat sushi, and that you believe that you can order sushi by calling Yama Sushi. To reach the conclusion that you will decide to call Yama Sushi, I only need to imagine desiring and believing what you desire and believe, and to run a simulated decision-making process in myself. No further knowledge is required to predict your decision: simulation alone will do the job. Consider, on the other hand, the situation in which I know that you took a certain drug and I want to figure out what your mental states will be. In this case—Heal says—my prediction cannot be based on mental simulation. Rather, I need to resort to a body of information about the likely psychological effects of that drug, i.e., I have to resort to a Theory of Mind (fair enough, I can also take the drug myself, but this will not count as mental simulation). This, according to Heal, generalizes to all cases in which a mental state is the input or the output of a mere causal process. In those cases, mental simulation is ineffective and should be replaced by theorizing. Still, those cases do not constitute the central part of mindreading. In fact, many philosophers and cognitive scientists would agree that the crucial component of human mindreading is the ability to reason about others’ propositional attitudes. And this is exactly the ability that, according to Heal, should be explained in terms of mental simulation. This is why Heal’s proposal counts as an ST-TT hybrid, rather than the other way around.

6. Simulation Theory: Pros and Cons

ST has sparked a lively debate, which has been going on since the end of the 1980s. This debate has dealt with a great number of theoretical and empirical issues. On the theoretical side, we have seen philosophical discussions of the relation between ST and functionalism (Gordon 1986; Goldman 1989; Heal 2003; Stich & Ravenscroft 1992), and of the role of tacit knowledge in cognitive explanations (Davies 1987; Heal 1994; Davies & Stone 2001), just to name a few. Examples of empirical debates are: how to account for mindreading deficits in Autism Spectrum Disorders (Baron-Cohen 2000; Currie & Ravenscroft 2002), or how to explain the evolution of mindreading (Carruthers 2009; Lurz 2011). It goes without saying that discussing all these bones of contention would require an entire book (most probably, a series of books). In the last section of this entry, we confine ourselves to briefly introducing the reader to a small sample of the main open issues concerning ST.

6.1 The Mirror Neurons Controversy

We wrote that ST proposes that mirroring processes (i.e., activations of mirror mechanisms in the perception mode): (A) are (low-level) simulation processes, and (B) contribute (either constitutively or causally) to mindreading (Gallese et al. 2004; Gallese & Goldman 1998; Goldman 2006, 2008b; Hurley 2005). Both (A) and (B) have been vehemently contested by ST’s opponents.

Beginning with (A), it has been argued that mirroring processes do not qualify as simulation processes, because they fail to satisfy the definition of “simulation process” (Gallagher 2007; Herschbach 2012; Jacob 2008; Spaulding 2012) and/or because they are better characterized in different terms, e.g., as enactive perceptual processes (Gallagher 2007) or as elements in an information-rich process (Spaulding 2012). As for (B), the main worry runs as follows. Granting that mirroring processes are simulation processes, what evidence do we have for the claim that they contribute to mindreading? This, in particular, has been asked with respect to the role of mirroring processes in “action understanding” (i.e., the interpretation of an agent’s behavior in terms of the agent’s intentions, goals, etc.). After all, the neuroscientific evidence just indicates that action mirroring correlates with episodes of action understanding, but correlation is not causation, let alone constitution. In fact, there are no studies examining whether disruption of the monkey mirror neuron circuit results in action understanding deficits, and the evidence on human action understanding following damage to the action mirror mechanism is inconclusive at best (Hickok 2009). In this regard, some authors have suggested that the most plausible hypothesis is instead that action mirroring follows (rather than causes or constitutes) the understanding of others’ mental states (Csibra 2007; Jacob 2008). For example, Jacob (2008) proposes that the job of mirroring processes in the action domain is just that of computing a representation of the observed agent’s next movement, on the basis of a previous representation of the agent’s intention. Similar deflationary accounts of the action mirror mechanism have been given by Brass et al. (2007), Hickok (2014), and Vannuscorps and Caramazza (2015)—these accounts typically take the STS (superior temporal sulcus, a brain region lacking mirror neurons) to be the critical neural area for action understanding.

There are various ways to respond to these criticisms. A strong response argues that they are based on a misunderstanding of the relevant empirical findings, as well as on a mischaracterization of the role that ST attributes to the action mirror mechanism in action understanding (Rizzolatti & Sinigaglia 2010, 2014). A weaker response holds that the focus on action understanding is a bit of a red herring, given that the most robust evidence in support of the central role played by mirroring processes in mindreading comes from the emotion domain (Goldman 2008b). We will consider the weaker response here.

Goldman and Sripada (2005) discuss a series of paired deficits in emotion production and face-based emotion mindreading. These deficits—they maintain—are best explained by the hypothesis that one attributes emotions to someone else through simulating these emotions in oneself: when the ability to undergo the emotion breaks down, the mindreading capacity breaks down as well. Barlassina (2013) elaborates on this idea by considering Huntington’s Disease (HD), a neurodegenerative disorder resulting in, among other things, damage to the disgust mirror mechanism. As predicted by ST, the difficulties individuals with HD have in experiencing disgust co-occur with an impairment in attributing disgust to someone else on the basis of observing her facial expression—despite perceptual abilities and knowledge about disgust being preserved in this clinical population. Individuals suffering from HD, however, exhibit an intact capacity for disgust mindreading on the basis of non-facial visual stimuli. For this reason, Barlassina concludes by putting forward an ST-TT hybrid model of disgust mindreading on the basis of visual stimuli.

6.2 Self and Others

ST’s central claim is that we reuse our own cognitive mechanisms to arrive at a representation of other people’s mental states. This claim raises a number of issues concerning how ST conceptualizes the self-other relation. We will discuss a couple of them.

Gallagher (2007: 355) writes that

given the large diversity of motives, beliefs, desires, and behaviours in the world, it is not clear how a simulation process … can give me a reliable sense of what is going on in the other person’s mind.

There are two ways of interpreting Gallagher’s worry. First, it can be read as saying that if mindreading is based on mental simulation, then it is hard to see how mental state attributions could be epistemically justified. This criticism, however, misses the mark entirely, since ST is not concerned with whether mental state attributions count as knowledge, but only with how, as a matter of fact, we go about forming such attributions. A second way to understand Gallagher’s remarks is this: as a matter of fact, we are pretty successful in understanding other minds; however, given the difference among individual minds, this pattern of successes cannot be explained in terms of mental simulation.

ST has a two-tier answer to the second reading of Gallagher’s challenge. First, human beings are very similar with regard to cognitive processes such as perception, theoretical reasoning, practical reasoning, etc. For example, there is a very high probability that if both you and I look at the same scene, we will have the same visual experience. This explains why, in the large majority of cases, I can reuse my visual mechanism to successfully simulate your visual experiences. Second, even though we are quite good at recognizing others’ mental states, we are nonetheless prone to egocentric errors, i.e., we tend to attribute to a target the mental state that we would undergo if we were in the target’s situation, rather than the actual mental state the target is in (Goldman 2006). A standard example is the curse of knowledge bias, where we take for granted that other people know what we know (Birch & Bloom 2007). ST has a straightforward explanation of such egocentric errors (Gordon 1995; Goldman 2006): if we arrive at attributing mental states via mental simulation, the attribution accuracy will depend on our capacity to “quarantine” our genuine mental states, when they do not match the target’s, and to replace them with more appropriate simulated mental states. This “adjustment” process, however, is a demanding one, because our genuine mental states exert a powerful influence. Thus, Gallagher is right when he says that, on some occasions, “if I project the results of my own simulation onto the other, I understand only myself in that other’s situation, but I don’t understand the other” (Gallagher 2007: 355). However, given how widespread egocentric errors are, this counts as a point in favour of ST, rather than as an argument against it (but see de Vignemont & Mercier 2016, and Saxe 2005).

Carruthers (1996, 2009, 2011) raises a different problem for ST: no version of ST can adequately account for self-attributions of mental states. Recall that, according to Goldman (2006), simulation-based mindreading is a three-stage process in which we first mentally simulate a target’s mental state, we then introspect and categorize the simulated mental state, and we finally attribute the categorized state to the target. Since Goldman’s model has it that attributions of mental states to others asymmetrically depend on the ability to introspect one’s own mental states, it predicts that: (A) introspection is (ontogenetically and phylogenetically) prior to the ability to represent others’ mental states; (B) there are cases in which introspection works just fine, but where the ability to represent others’ mental states is impaired (presumably, because the mechanism responsible for projecting one’s mental states to the target is damaged). Carruthers (2009) argues that neither (A) nor (B) is borne out by the data. The former because there are no creatures that have introspective capacities but at the same time lack the ability to represent others’ mental states; the latter because there are no dissociation cases in which an intact capacity for introspection is paired with an impairment in the ability to represent others’ mental states.

How might a Simulation Theorist respond to this objection? As we said in section 4, Gordon’s (1986, 1995, 1996) Radical Simulationism does not assign any role to introspection in mindreading. Rather, Gordon proposes that self-ascriptions are guided by ascent routines through which we answer the question “Do I believe that p?” by answering the lower-order question “Is it the case that p?” Carruthers (1996, 2011) thinks that this won’t do either. Here is one of the many problems that Carruthers raises for this suggestion—we can call it “The Scope Problem”:

this suggestion appears to have only a limited range of application. For even if it works for the case of belief, it is very hard to see how one might extend it to account for our knowledge of our own goals, decisions, or intentions—let alone our knowledge of our own attitudes of wondering, supposing, fearing, and so on. (Carruthers 2011: 81)

Carruthers’ objections are important and deserve to be taken seriously. To discuss them, however, we would need to introduce a lot of further empirical evidence and many complex philosophical ideas about self-knowledge. This is not a task that we can take up here (the interested reader is encouraged to read, in addition to Gordon (2007) and Goldman (2009), the SEP entries on self-knowledge and on introspection). The take-home message should be clear enough nonetheless: anybody who puts forward an account of mindreading should remember that such an account has to cohere with a plausible story about the cognitive mechanisms underlying self-attribution.

6.3 Developmental Findings

The development of mindreading capacities in children has been one of the central areas of empirical investigation. In particular, developmental psychologists have put a lot of effort into detailing how the ability to attribute false beliefs to others develops. Until 2005, the central experimental paradigm to test this ability was the verbal false belief task (Wimmer & Perner 1983). Here is a classic version of it. A subject is introduced to two dolls, Sally and Anne, and three objects: Sally’s ball, a basket, and a box. Sally puts her ball in the basket and leaves the scene. While Sally is away, Anne takes the ball out of the basket and puts it into the box. Sally then returns. The subject is asked where she thinks Sally will look for the ball. The correct answer, of course, is that Sally will look inside the basket. To give this answer, the subject has to attribute to Sally the false belief that the ball is in the basket. A number of experiments have found that while four-year-old children pass this task, three-year-old children fail it (for a review, see Wellman et al. 2001). For a long time, the mainstream interpretation of these findings has been that children acquire the ability to attribute false beliefs only around their fourth birthday (but see Clements & Perner 1994 and Bloom & German 2000).

In 2005, this developmental timeline was called into question. Kristine Onishi and Renée Baillargeon (2005) published the results of a non-verbal version of the false belief task, which they administered to 15-month-old infants. The experiment involves three steps. First, the infants see a toy between two boxes, one yellow and one green, and then an actor hiding the toy inside the green box. Next, the infants see the toy sliding out of the green box and hiding inside the yellow box. In the true belief condition (TB), the actor notices that the toy changes location, while in the false belief condition (FB) she does not. Finally, half of the infants see the actor reaching into the green box, while the other half sees the actor reaching into the yellow box. According to the violation-of-expectation paradigm, infants reliably look for a longer time at unexpected events. Therefore, if the infants expected the actor to search for the toy on the basis of the actor’s belief about its location, then when the actor had a true belief that the toy was hidden in one box, the infants should look longer when the actor reached into the other box instead. Conversely, the infants should look longer at one box when the actor falsely believed that the toy was hidden in the other box. Strikingly, these predictions were confirmed in both the (TB) and (FB) conditions. On this basis, Onishi and Baillargeon (2005) concluded that children of 15 months possess the capacity to represent others’ false beliefs.
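
The logic of the prediction can be restated in a small sketch (a simplification of ours, not Onishi and Baillargeon’s own analysis): if infants track the actor’s belief, the expected reach is the believed location of the toy, and looking times should be longer whenever the observed reach mismatches it.

```python
# A toy rendering of the violation-of-expectation logic (our simplification).
# If the infant tracks the actor's belief, the expected reach is the believed
# location of the toy, not its actual location.

def expected_reach(believed_location: str) -> str:
    return believed_location

def predicts_longer_looking(believed_location: str, actual_reach: str) -> bool:
    """Longer looking is predicted when the observed reach violates the expectation."""
    return actual_reach != expected_reach(believed_location)

# False-belief condition: the toy moved to the yellow box while the actor was away,
# so the actor still believes it is in the green box.
print(predicts_longer_looking(believed_location="green box", actual_reach="yellow box"))  # True
print(predicts_longer_looking(believed_location="green box", actual_reach="green box"))   # False
```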

This and subsequent versions of non-verbal false belief tasks attracted a huge amount of interest (at the current stage of research, there is evidence that sensitivity to others’ false beliefs is present in infants as young as 7 months—for a review, see Baillargeon et al. 2016). Above all, the following two questions have been widely discussed: why do children pass the non-verbal false belief task at such an early age, but do not pass the verbal version before the age of 4? Does passing the non-verbal false belief task really indicate the capacity to represent others’ false beliefs? (Perner & Ruffman 2005; Apperly & Butterfill 2009; Baillargeon et al. 2010; Carruthers 2013; Helming et al. 2014).

Goldman and Jordan (2013) maintain that ST has a good answer to both questions. To begin with, they argue that it is implausible to attribute to infants such sophisticated meta-representational abilities as the ability to represent others’ false beliefs. Thus, Goldman and Jordan favour a deflationary view, according to which infants are sensitive to others’ false beliefs, but do not represent them as such. In particular, they propose that rather than believing that another subject S (falsely) believes that p, infants simply imagine how the world is from S’s perspective—that is, they simply imagine that p is the case. This—Goldman and Jordan say—is a more primitive psychological competence than mindreading, since it does not involve forming a judgment about others’ mental states. This brings us to Goldman and Jordan’s answer to the question “why do children pass the verbal false belief task only at four?” Passing this task requires fully-fledged mindreading abilities and executive functions such as inhibitory control. It takes quite a lot of time—around 3 to 4 years—before these functions and abilities come online.

7. Conclusion

Since the late 1980s, ST has received a great amount of attention from philosophers, psychologists, and neuroscientists. This is not surprising. Mindreading is a central human cognitive capacity, and ST challenges some basic assumptions about the cognitive processes and neural mechanisms underlying human social behavior. Moreover, ST touches upon a number of major philosophical problems, such as the relation between self-knowledge and knowledge of other minds, and the nature of mental concepts, including the concept of mind itself. In this entry, we have considered some of the fundamental empirical and philosophical issues surrounding ST. Many of them remain open. In particular, while the consensus view is now that both mental simulation and theorizing play important roles in mindreading, the currently available evidence falls short of establishing what their respective roles are. In other words, it is likely that we shall end up adopting a hybrid model of mindreading that combines ST and TT, but, at the present stage, it is very difficult to predict what this hybrid model will look like. Hopefully, the joint work of philosophers and cognitive scientists will help to settle the matter.

Bibliography

  • Anderson, Michael L., 2008, “Neural Reuse: A Fundamental Organizational Principle of the Brain”, Behavioral and Brain Sciences, 20(4): 239–313. doi:10.1017/S0140525X10000853
  • –––, 2015, After Phrenology: Neural Reuse and the Interactive Brain, Cambridge, MA: MIT Press.
  • Apperly, Ian A. and Stephen A. Butterfill, 2009, “Do Humans Have Two Systems to Track Beliefs and Belief-Like States?”, Psychological Review, 116(4): 953–70. doi:10.1037/a0016923
  • Avenanti, Alessio, Domenica Bueti, Gaspare Galati, & Salvatore M. Aglioti, 2005, “Transcranial Magnetic Stimulation Highlights the Sensorimotor Side of Empathy for Pain”, Nature Neuroscience, 8(7): 955–960. doi:10.1038/nn1481
  • Baillargeon, Renée, Rose M. Scott, and Zijing He, 2010, “False-Belief Understanding in Infants”, Trends in Cognitive Sciences, 14(3): 110–118. doi:10.1016/j.tics.2009.12.006
  • Baillargeon, Renée, Rose M. Scott, and Lin Bian, 2016, “Psychological Reasoning in Infancy”, Annual Review of Psychology, 67: 159–186. doi:10.1146/annurev-psych-010213-115033
  • Barlassina, Luca, 2013, “Simulation is not Enough: A Hybrid Model of Disgust Attribution on the Basis of Visual Stimuli”, Philosophical Psychology, 26(3): 401–419. doi:10.1080/09515089.2012.659167
  • Baron-Cohen, Simon, 2000, “Theory of Mind and Autism: A Fifteen Year Review”, in Simon Baron-Cohen, Helen Tager-Flusberg, and Donald J. Cohen (eds.), Understanding Other Minds: Perspectives from Developmental Cognitive Neuroscience (2nd edition), New York: Oxford University Press, pp. 3–20.
  • Bechtel, William, 2008, Mental Mechanisms: Philosophical Perspectives on Cognitive Neuroscience, New York: Taylor and Francis.
  • Birch, Susan A. and Paul Bloom, 2007, “The Curse of Knowledge in Reasoning About False Beliefs”, Psychological Science, 18(5): 382–386. doi:10.1111/j.1467-9280.2007.01909.x
  • Bisiach, Edoardo and Claudio Luzzatti, 1978, “Unilateral Neglect of Representational Space”, Cortex, 14(1): 129–133. doi:10.1016/S0010-9452(78)80016-1
  • Blakemore, S.-J., D. Bristow, G. Bird, C. Frith, and J. Ward, 2005, “Somatosensory Activations During the Observation of Touch and a Case of Vision-Touch Synaesthesia”, Brain, 128(7): 1571–1583. doi:10.1093/brain/awh500
  • Bloom, Paul and Tim P. German, 2000, “Two Reasons to Abandon the False Belief Task as a Test of Theory of Mind”, Cognition, 77(1): B25–31. doi:10.1016/S0010-0277(00)00096-2
  • Botterill, George and Peter Carruthers, 1999, The Philosophy of Psychology, Cambridge: Cambridge University Press.
  • Brass, Marcel, Ruth M. Schmitt, Stephanie Spengler, and György Gergely, 2007, “Investigating Action Understanding: Inferential Processes versus Action Simulation”, Current Biology, 17(24): 2117–2121. doi:10.1016/j.cub.2007.11.057
  • Brozzo, Chiara, forthcoming, “Motor Intentions: How Intentions and Motor Representations Come Together”, Mind & Language.
  • Buckner, Randy L. and Daniel C. Carroll, 2007, “Self-Projection and the Brain”, Trends in Cognitive Science, 11(2): 49–57. doi:10.1016/j.tics.2006.11.004
  • Butterfill, Stephen A. and Corrado Sinigaglia, 2014, “Intention and Motor Representation in Purposive Action”, Philosophy and Phenomenological Research, 88(1): 119–145. doi:10.1111/j.1933-1592.2012.00604.x
  • Carruthers, Peter, 1996, “Simulation and Self-Knowledge: A Defense of Theory-Theory”, in Carruthers and Smith 1996: 22–38. doi:10.1017/CBO9780511597985.004
  • –––, 2009, “How we Know Our Own Minds: The Relationship between Mindreading and Metacognition”, Behavioral and Brain Sciences, 32(2): 121–138. doi:10.1017/S0140525X09000545
  • –––, 2011, The Opacity of Mind: An Integrative Theory of Self-Knowledge, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199596195.001.0001
  • –––, 2013, “Mindreading in Infancy”, Mind and Language, 28(2): 141–172. doi:10.1111/mila.12014
  • Carruthers, Peter and Peter K. Smith (eds.), 1996, Theories of Theories of Mind, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511597985
  • Clements, Wendy A. and Josef Perner, 1994, “Implicit Understanding of Belief”, Cognitive Development, 9(4): 377–395. doi:10.1016/0885-2014(94)90012-4
  • Craver, Carl F., 2007, Explaining the Brain. Mechanisms and the Mosaic Unity of Neuroscience, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199299317.001.0001
  • Csibra, Gergely, 2007, “Action Mirroring and Action Understanding: An Alternative Account”, in Patrick Haggard, Yves Rosetti, and Mitsuo Kawato (eds.) Sensorimotor Foundations of Higher Cognition. Attention and Performance XII, Oxford: Oxford University Press, pp. 453–459. doi:10.1093/acprof:oso/9780199231447.003.0020
  • Currie, Gregory, 1995, “Visual Imagery as the Simulation of Vision”, Mind and Language, 10(1–2): 25–44. doi:10.1111/j.1468-0017.1995.tb00004.x
  • –––, 2002, “Desire in Imagination”, in Tamar Szabo Gendler and John Hawthorne (eds.), Conceivability and Possibility, Oxford: Oxford University Press, pp. 201–221.
  • Currie, Gregory and Ian Ravenscroft, 1997, “Mental Simulation and Motor Imagery”, Philosophy of Science, 64(1): 161–80. doi:10.1086/392541
  • –––, 2002, Recreative Minds: Imagination in Philosophy and Psychology, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198238089.001.0001
  • Davies, Martin, 1987, “Tacit Knowledge and Semantic Theory: Can a Five per Cent Difference Matter?” Mind, 96(384): 441–462. doi:10.1093/mind/XCVI.384.441
  • Davies, Martin and Tony Stone (eds.), 1995a, Folk Psychology: The Theory of Mind Debate, Oxford: Blackwell Publishers.
  • ––– (eds.), 1995b, Mental Simulation: Evaluations and Applications—Reading in Mind and Language, Oxford: Blackwell Publishers.
  • –––, 2001, “Mental Simulation, Tacit Theory, and the Threat of Collapse”, Philosophical Topics, 29(1/2): 127–173. doi:10.5840/philtopics2001291/212
  • Decety, Jean and François Michel, 1989, “Comparative Analysis of Actual and Mental Movement Times in Two Graphic Tasks”, Brain and Cognition, 11(1): 87–97. doi:10.1016/0278-2626(89)90007-9
  • De Jaegher, Hanne, Ezequiel Di Paolo, and Shaun Gallagher, 2010, “Can Social Interaction Constitute Social Cognition?” Trends in Cognitive Sciences, 14(10): 441–447. doi:10.1016/j.tics.2010.06.009
  • Dennett, Daniel C., 1987, The Intentional Stance, Cambridge, MA: MIT Press.
  • de Vignemont, Frédérique, 2009, “Drawing the Boundary Between Low-Level and High-Level Mindreading”, Philosophical Studies, 144(3): 457–466. doi:10.1007/s11098-009-9354-1
  • de Vignemont, Frédérique and Hugo Mercier, 2016, “Under Influence: Is Altercentric Bias Compatible with Simulation Theory?” in Brian P. McLaughlin and Hilary Kornblith (eds.), Goldman and his Critics, Oxford: Blackwell. doi:10.1002/9781118609378.ch13
  • Dilthey, Wilhelm, [1894] 1977, Descriptive Psychology and Historical Understanding, Richard M. Zaner and Kenneth L. Heiges (trans.), with an introduction by Rudolf A. Makkreel, The Hague: Martinus Nijhoff. doi:10.1007/978-94-009-9658-8
  • di Pellegrino, G., L. Fadiga, L. Fogassi, V. Gallese, and G. Rizzolatti, 1992, “Understanding Motor Events: A Neurophysiological Study”, Experimental Brain Research, 91(1): 176–180. doi:10.1007/BF00230027
  • Doggett, Tyler and Andy Egan, 2007, “Wanting Things You Don’t Want: The Case for an Imaginative Analogue of Desire”, Philosophers' Imprint, 7(9). [Doggett and Egan 2007 available online]
  • Evans, Gareth, 1982, The Varieties of Reference, Oxford: Oxford University Press.
  • Fisher, Justin C., 2006, “Does Simulation Theory Really Involve Simulation?” Philosophical Psychology, 19(4): 417–432. doi:10.1080/09515080600726377
  • Fuller, Gary, 1995, “Simulation and Psychological Concepts”, in Davies and Stone 1995b: chapter 1, pp. 19–32
  • Funkhouser, Eric and Shannon Spaulding, 2009, “Imagination and Other Scripts”, Philosophical Studies, 143(3): 291–314. doi:10.1007/s11098-009-9348-z
  • Gallagher, Shaun, 2001, “The Practice of Mind: Theory, Simulation, or Primary Interaction?” Journal of Consciousness Studies, 8(5–7): 83–108.
  • –––, 2007, “Simulation Trouble”, Social Neuroscience, 2(3–4): 353–365. doi:10.1080/17470910601183549
  • Gallagher, Shaun and Daniel D. Hutto, 2008, “Understanding Others Through Primary Interaction and Narrative Practice”, in Jordan Zlatev, Timothy P. Racine, Chris Sinha, and Esa Itkonen (eds.), The Shared Mind: Perspectives on Intersubjectivity, Amsterdam: John Benjamins, pp. 17–38. doi:10.1075/celcr.12.04gal
  • Gallese, Vittorio, 2001, “The ‘Shared Manifold’ Hypothesis: From Mirror Neurons to Empathy”, Journal of Consciousness Studies, 8(5–7): 33–50.
  • –––, 2007, “Before and Below ‘Theory of Mind’: Embodied Simulation and the Neural Correlates of Social Cognition”, Philosophical Transactions of the Royal Society B, 362: 659–669. doi:10.1098/rstb.2006.2002
  • Gallese, Vittorio and Alvin Goldman, 1998, “Mirror Neurons and the Simulation Theory of Mind-reading”, Trends in Cognitive Sciences, 2(12): 493–501. doi:10.1016/S1364-6613(98)01262-5
  • Gallese, Vittorio and Corrado Sinigaglia, 2011, “What is so Special about Embodied Simulation?” Trends in Cognitive Science, 15(11): 512–9. doi:10.1016/j.tics.2011.09.003
  • Gallese, Vittorio, Luciano Fadiga, Leonardo Fogassi, and Giacomo Rizzolatti, 1996, “Action Recognition in the Premotor Cortex”, Brain, 119(2): 593–609. doi:10.1093/brain/119.2.593
  • Gallese, Vittorio, Leonardo Fogassi, Luciano Fadiga, and Giacomo Rizzolatti, 2002, “Action Representation and the Inferior Parietal Lobule”, in Wolfgang Prinz and Bernhard Hommel (eds.), Common Mechanisms in Perception and Action (Attention and Performance XIX), Oxford: Oxford University Press, pp. 247–266.
  • Gallese, Vittorio, Christian Keysers, and Giacomo Rizzolatti, 2004, “A Unifying View of the Basis of Social Cognition”, Trends in Cognitive Sciences, 8(9): 396–403. doi:10.1016/j.tics.2004.07.002
  • Gariépy, Jean-François, Karli K. Watson, Emily Du, Diana L. Xie, Joshua Erb, Dianna Amasino, and Michael L. Platt, 2014, “Social Learning in Humans and Other Animals”, Frontiers in Neuroscience, 31 March 2014, doi:10.3389/fnins.2014.00058.
  • Gergely, György and Gergely Csibra, 2003, “Teleological Reasoning in Infancy: The Naïve Theory of Rational Action”, Trends in Cognitive Sciences, 7(7): 287–292. doi:10.1016/S1364-6613(03)00128-1
  • Gergely, György, Zoltán Nádasdy, Gergely Csibra, and Szilvia Bíró, 1995, “Taking the Intentional Stance at 12 Months of Age”, Cognition, 56(2): 165–93. doi:10.1016/0010-0277(95)00661-H
  • Goldenberg, Georg, Wolf Müllbacher, and Andreas Nowak, 1995, “Imagery without Perception: A Case Study of Anosognosia for Cortical Blindness”, Neuropsychologia, 33(11): 1373–1382. doi:10.1016/0028-3932(95)00070-J
  • Goldman, Alvin I., 1989, “Interpretation Psychologized”, Mind and Language, 4(3): 161–185; reprinted in Davies and Stone 1995a, pp. 74–99. doi:10.1111/j.1468-0017.1989.tb00249.x
  • –––, 2002, “Simulation Theory and Mental Concepts”, in Jérôme Dokic and Joëlle Proust (eds.), Simulation and Knowledge of Action, Amsterdam: John Benjamins, pp. 35–71.
  • –––, 2006, Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading, Oxford: Oxford University Press. doi:10.1093/0195138929.001.0001
  • –––, 2008a, “Hurley on Simulation”, Philosophy and Phenomenological Research, 77(3): 775–788. doi:10.1111/j.1933-1592.2008.00221.x
  • –––, 2008b, “Mirroring, Mindreading, and Simulation”, in Jaime A. Pineda (ed.), Mirror Neuron Systems: The Role of Mirroring Processes in Social Cognition, New York: Humana Press, pp. 311–330. doi:10.1007/978-1-59745-479-7_14
  • –––, 2009, “Précis of Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading” and “Replies to Perner and Brandl, Saxe, Vignemont, and Carruthers”, Philosophical Studies, 144(3): 431–434, 477–491. doi:10.1007/s11098-009-9355-0 and doi:10.1007/s11098-009-9358-x
  • –––, 2012a, “A Moderate Approach to Embodied Cognitive Science”, Review of Philosophy and Psychology, 3(1): 71–88. doi:10.1007/s13164-012-0089-0
  • –––, 2012b, “Theory of Mind”, in Eric Margolis, Richard Samuels, and Stephen P. Stich (eds.), The Oxford Handbook of Philosophy of Cognitive Science, Oxford: Oxford University Press, 402–424. doi:10.1093/oxfordhb/9780195309799.013.0017
  • Goldman, Alvin I. and Lucy C. Jordan, 2013, “Mindreading by Simulation: The Roles of Imagination and Mirroring”, in Simon Baron-Cohen, Michael Lombardo, and Helen Tager-Flusberg (eds.), Understanding Other Minds: Perspectives From Developmental Social Neuroscience, Oxford: Oxford University Press, 448–466. doi:10.1093/acprof:oso/9780199692972.003.0025
  • Goldman, Alvin I. and Chandra Sekhar Sripada, 2005, “Simulationist Models of Face-Based Emotion Recognition”, Cognition, 94(3): 193–213. doi:10.1016/j.cognition.2004.01.005
  • Gopnik, Alison and Andrew N. Meltzoff, 1997, Words, Thoughts, and Theories, Cambridge, MA: Bradford Books/MIT Press.
  • Gopnik, Alison and Henry M. Wellman, 1992, “Why the Child's Theory of Mind Really Is a Theory”, Mind and Language, 7(1–2): 145–171; reprinted in Davies and Stone 1995a, pp. 232–258. doi:10.1111/j.1468-0017.1992.tb00202.x
  • –––, 2012, “Reconstructing Constructivism: Causal Models, Bayesian Learning Mechanisms, and the Theory-Theory”, Psychological Bulletin, 138(6): 1085–1108. doi:10.1037/a0028044
  • Gordon, Robert M., 1986, “Folk Psychology as Simulation”, Mind and Language, 1(2): 158–171; reprinted in Davies and Stone 1995a, pp. 60–73. doi:10.1111/j.1468-0017.1986.tb00324.x
  • –––, 1995, “Simulation Without Introspection or Inference From Me to You”, in Davies & Stone 1995b: 53–67.
  • –––, 1996, “‘Radical’ Simulationism”, in Carruthers & Smith 1996: 11–21. doi:10.1017/CBO9780511597985.003
  • –––, 2000, “Sellars’s Rylean Revisited”, Protosoziologie, 14: 102–114.
  • –––, 2005, “Intentional Agents Like Myself”, in Susan Hurley and Nick Chater (eds.), Perspectives on Imitation: From Mirror Neurons to Memes, Cambridge, MA: MIT Press.
  • –––, 2007, “Ascent Routines for Propositional Attitudes”, Synthese, 159(2): 151–165. doi:10.1007/s11229-007-9202-9
  • Harris, Paul L., 1989, Children and Emotion, Oxford: Blackwell Publishers.
  • –––, 1992, “From Simulation to Folk Psychology: The Case for Development”, Mind and Language, 7(1–2): 120–144; reprinted in Davies and Stone 1995a, pp. 207–231. doi:10.1111/j.1468-0017.1992.tb00201.x
  • Heal, Jane, 1986, “Replication and Functionalism”, in Language, Mind, and Logic, J. Butterfield (ed.), Cambridge: Cambridge University Press; reprinted in Davies and Stone 1995a, pp. 45–59.
  • –––, 1994, “Simulation vs Theory-Theory: What is at Issue?” in Christopher Peacocke (ed.), Objectivity, Simulation, and the Unity of Consciousness: Current Issues in the Philosophy of Mind (Proceedings of the British Academy, 83), Oxford: Oxford University Press, pp. 129–144. [Heal 1994 available online]
  • –––, 1995, “How to Think About Thinking”, in Davies and Stone 1995b: chapter 2, pp. 33–52.
  • –––, 1998, “Co-Cognition and Off-Line Simulation: Two Ways of Understanding the Simulation Approach”, Mind and Language, 13(4): 477–498. doi:10.1111/1468-0017.00088
  • –––, 2003, Mind, Reason and Imagination, Cambridge: Cambridge University Press.
  • Helming, Katharina A., Brent Strickland, and Pierre Jacob, 2014, “Making Sense of Early False-Belief Understanding”, Trends in Cognitive Sciences, 18(4): 167–170. doi:10.1016/j.tics.2014.01.005
  • Herschbach, Mitchell, 2012, “Mirroring Versus Simulation: On the Representational Function of Simulation”, Synthese, 189(3): 483–513. doi:10.1007/s11229-011-9969-6
  • Hickok, Gregory, 2009, “Eight Problems for the Mirror Neuron Theory of Action Understanding in Monkeys and Humans”, Journal of Cognitive Neuroscience, 21(7): 1229–1243. doi:10.1162/jocn.2009.21189
  • –––, 2014, The Myth of Mirror Neurons: The Real Neuroscience of Communication and Cognition, New York: Norton.
  • Hume, David, 1739, A Treatise of Human Nature, edited by L.A. Selby-Bigge, 2nd edition, revised by P.H. Nidditch, Oxford: Clarendon Press, 1975.
  • Hurley, Susan, 2005, “The Shared Circuits Hypothesis: A Unified Functional Architecture for Control, Imitation, and Simulation”, in Perspectives on Imitation: From Neuroscience to Social Science, Volume 1: Mechanisms of Imitation and Imitation in Animals, Susan Hurley & Nick Chater (eds.), Cambridge, MA: MIT Press, pp. 177–193.
  • –––, 2008, “Understanding Simulation”, Philosophy and Phenomenological Research, 77(3): 755–774. doi:10.1111/j.1933-1592.2008.00220.x
  • Jackson, Frank, 1999, “All That Can Be at Issue in the Theory-Theory Simulation Debate”, Philosophical Papers, 28(2): 77–95. doi:10.1080/05568649909506593
  • Jacob, Pierre, 2008, “What do Mirror Neurons Contribute to Human Social Cognition?”, Mind and Language, 23(2): 190–223. doi:10.1111/j.1468-0017.2007.00337.x
  • –––, 2012, “Sharing and Ascribing Goals”, Mind and Language, 27(2): 200–227. doi:10.1111/j.1468-0017.2012.01441.x
  • Jeannerod, Marc and Elisabeth Pacherie, 2004, “Agency, Simulation and Self-Identification”, Mind and Language 19(2): 113–146. doi:10.1111/j.1468-0017.2004.00251.x
  • Kieran, Matthew and Dominic McIver Lopes (eds.), 2003, Imagination, Philosophy, and the Arts, London: Routledge.
  • Kosslyn, S.M., A. Pascual-Leone, O. Felician, S. Camposano, J.P. Keenan, W.L. Thompson, G. Ganis, K.E. Sukel, and N.M. Alpert, 1999, “The Role of Area 17 in Visual Imagery: Convergent Evidence from PET and rTMS”, Science, 284(5411): 167–170. doi:10.1126/science.284.5411.167
  • Leslie, Alan M., 1994, “Pretending and Believing: Issues in the Theory of ToMM”, Cognition, 50(1–3): 211–238. doi:10.1016/0010-0277(94)90029-9
  • Lipps, Theodor, 1903, “Einfühlung, Innere Nachahmung und Organempfindung”, Archiv für die gesamte Psychologie, 1: 465–519. Translated as “Empathy, Inner Imitation and Sense-Feelings”, in A Modern Book of Esthetics, New York: Holt, Rinehart and Winston, 1979, pp. 374–382.
  • Lurz, Robert W., 2011, Mindreading Animals, Cambridge, MA: MIT Press. doi:10.7551/mitpress/9780262016056.001.0001
  • Machamer, Peter, Lindley Darden, and Carl F. Craver, 2000, “Thinking about Mechanisms”, Philosophy of science, 67(1): 1–25. doi:10.1086/392759
  • Marr, David, 1982, Vision, San Francisco: W.H. Freeman.
  • Nichols, Shaun (ed.), 2006a, The Architecture of the Imagination: New Essays on Pretense, Possibility, and Fiction, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199275731.001.0001
  • –––, 2006b, “Just the Imagination: Why Imagining Doesn't Behave Like Believing”, Mind & Language, 21(4): 459–474. doi:10.1111/j.1468-0017.2006.00286.x
  • Nichols, Shaun and Stephen P. Stich, 2003, Mindreading: An Integrated Account of Pretence, Self-Awareness, and Understanding of Other Minds, Oxford: Oxford University Press. doi:10.1093/0198236107.001.0001
  • Onishi, Kristine H. and Renée Baillargeon, 2005, “Do 15-Month-Old Infants Understand False Beliefs?” Science, 308(5719): 255–258. doi:10.1126/science.1107621
  • Pacherie, Elisabeth, 2000, “The Content of Intentions”, Mind and Language, 15(4): 400–432. doi:10.1111/1468-0017.00142
  • Peacocke, Christopher, 2005, “Another I: Representing Conscious States, Perception, and Others”, in J. L. Bermúdez (ed.), Thought, Reference, and Experience: Themes From the Philosophy of Gareth Evans, Oxford: Clarendon Press.
  • Perner, Josef and Deborah Howes, 1992, “‘He Thinks he Knows’ and more Developmental Evidence Against the Simulation (Role-Taking) Theory”, Mind and Language, 7(1–2): 72–86; reprinted in Davies and Stone 1995a, pp. 159–173. doi:10.1111/j.1468-0017.1992.tb00197.x
  • Perner, Josef and Anton Kühberger, 2005, “Mental Simulation: Royal Road to Other Minds?”, in Bertram F. Malle and Sara D. Hodges (eds.), Other Minds: How Humans Bridge the Divide Between Self and Others, New York: Guilford Press, pp. 174–187.
  • Perner, Josef and Ted Ruffman, 2005, “Infants’ Insight into the Mind: How Deep?” Science, 308(5719): 214–216. doi:10.1126/science.1111656
  • Ramsey, William M., 2010, “How Not to Build a Hybrid: Simulation vs. Fact-finding”, Philosophical Psychology, 23(6): 775–795. doi:10.1080/09515089.2010.529047
  • Rizzolatti, Giacomo and Laila Craighero, 2004, “The Mirror-Neuron System”, Annual Review of Neuroscience, 27: 169–92. doi:10.1146/annurev.neuro.27.070203.144230
  • Rizzolatti, Giacomo and Corrado Sinigaglia, 2007, “Mirror Neurons and Motor Intentionality”, Functional Neurology, 22(4): 205–210.
  • –––, 2010, “The Functional Role of the Parieto-Frontal Mirror Circuit: Interpretations and Misinterpretations”, Nature Reviews Neuroscience 11: 264–274. doi:10.1038/nrn2805
  • –––, 2015, “Review: A Curious Book on Mirror Neurons and Their Myth”, The American Journal of Psychology, 128(4): 527–533. doi:10.5406/amerjpsyc.128.4.0527
  • –––, 2016, “The Mirror Mechanism: A Basic Principle of Brain Function”, Nature Reviews Neuroscience, 17: 757–765. doi:10.1038/nrn.2016.135
  • Rizzolatti, G., R. Camarda, L. Fogassi, M. Gentilucci, G. Luppino, and M. Matelli, 1988, “Functional Organization of Inferior Area 6 in the Macaque Monkey”, Experimental Brain Research, 71(1): 491–507. doi:10.1007/BF00248742
  • Rizzolatti, Giacomo, Luciano Fadiga, Vittorio Gallese, and Leonardo Fogassi, 1996, “Premotor Cortex and the Recognition of Motor Actions”, Cognitive Brain Research, 3(2): 131–141. doi:10.1016/0926-6410(95)00038-0
  • Rozin, Paul, Jonathan Haidt, and Clark R. McCauley, 2008, “Disgust”, in Michael Lewis, Jeannette M. Haviland-Jones, and Lisa Feldman Barrett (eds.), Handbook of Emotions (3rd edition), New York: Guilford Press, pp. 757–776.
  • Saxe, Rebecca, 2005, “Against Simulation: The Argument from Error”, Trends in Cognitive Sciences, 9(4): 174–179. doi:10.1016/j.tics.2005.01.012
  • Scholl, Brian J. and Alan M. Leslie, 1999, “Modularity, Development and Theory of Mind”, Mind and Language, 14(1): 131–153. doi:10.1111/1468-0017.00106
  • Singer, Tania, Ben Seymour, John O’Doherty, Holger Kaube, Raymond J. Dolan, and Chris D. Frith, 2004, “Empathy for Pain Involves the Affective but not Sensory Components of Pain”, Science, 303(5661): 1157–1162. doi:10.1126/science.1093535
  • Smith, Adam, 1759, The Theory of Moral Sentiments, D.D. Raphael and A.L. Macfie (eds.), Oxford: Oxford University Press, 1976.
  • Spaulding, Shannon, 2012, “Mirror Neurons are not Evidence for the Simulation Theory”, Synthese, 189(3): 515–534. doi:10.1007/s11229-012-0086-y
  • Spivey, Michael J., Daniel C. Richardson, Melinda J. Tyler, and Ezekiel E. Young, 2000, “Eye Movements During Comprehension of Spoken Scene Descriptions”, in Proceedings of the 22nd Annual Conference of the Cognitive Science Society, Mahwah, NJ: Erlbaum, pp. 487–492.
  • Stich, Stephen and Shaun Nichols, 1992, “Folk Psychology: Simulation or Tacit Theory?”, Mind and Language, 7(1–2): 35–71; reprinted in Davies and Stone 1995a, pp. 123–158. doi:10.1111/j.1468-0017.1992.tb00196.x
  • –––, 1997, “Cognitive Penetrability, Rationality, and Restricted Simulation”, Mind and Language, 12(3–4): 297–326. doi:10.1111/j.1468-0017.1997.tb00076.x
  • Stich, Stephen and Ian Ravenscroft, 1994, “What is Folk Psychology?” Cognition, 50(1–3): 447–468. doi:10.1016/0010-0277(94)90040-X
  • Vannuscorps, Gilles and Alfonso Caramazza, 2015, “Typical Action Perception and Interpretation without Motor Simulation”, Proceedings of the National Academy of Sciences, 113(1): 1–6. doi:10.1073/pnas.1516978112
  • Velleman, J. David, 2000, “The Aim of Belief”, in The Possibility of Practical Reason, Oxford: Oxford University Press, pp. 244–282.
  • Wellman, Henry M., David Cross, and Julanne Watson, 2001, “Meta-Analysis of Theory-of-Mind Development: The Truth about False Belief”, Child Development, 72(3): 655–684. doi:10.1111/1467-8624.00304
  • Wicker, Bruno, Christian Keysers, Jane Plailly, Jean-Pierre Royet, Vittorio Gallese, and Giacomo Rizzolatti, 2003, “Both of us Disgusted in My Insula: The Common Neural Basis of Seeing and Feeling Disgust”, Neuron, 40(3): 655–664. doi:10.1016/S0896-6273(03)00679-2
  • Wimmer, Heinz and Josef Perner, 1983, “Beliefs About Beliefs: Representation and Constraining Function of Wrong Beliefs in Young Children’s Understanding of Deception”, Cognition, 13(1): 103–128. doi:10.1016/0010-0277(83)90004-5

Other Internet Resources

[Please contact the authors with suggestions.]

Acknowledgments

The authors would like to thank Tom Cochrane, Jeremy Dunham, Steve Laurence, and an anonymous referee for comments on earlier drafts of this entry.

Copyright © 2017 by
Luca Barlassina <l.barlassina@sheffield.ac.uk>
Robert M. Gordon <gordon@umsl.edu>
