Functionalism in the philosophy of mind is the doctrine that what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part. This doctrine is rooted in Aristotle's conception of the soul, and has antecedents in Hobbes's conception of the mind as a “calculating machine”, but it has become fully articulated (and popularly endorsed) only in the last third of the 20th century. Though the term ‘functionalism’ is used to designate a variety of positions in a variety of other disciplines, including psychology, sociology, economics, and architecture, this entry focuses exclusively on functionalism as a philosophical thesis about the nature of mental states.
The following sections will trace the intellectual antecedents of contemporary functionalism, sketch the different types of functionalist theories, and discuss the most serious objections to them.
- 1. What is Functionalism?
- 2. Antecedents of Functionalism
- 3. Varieties of Functionalism
- 4. Constructing Plausible Functional Theories
- 5. Objections to Functionalism
- 5.1 Functionalism and Holism
- 5.2 Functionalism and Mental Causation
- 5.3 Functionalism and Introspective Belief
- 5.4 Functionalism and the Norms of Reason
- 5.5 Functionalism and the Problem of Qualia
- 6. The Future of Functionalism
- Academic Tools
- Other Internet Resources
- Related Entries
Functionalism is the doctrine that what makes something a thought, desire, pain (or any other type of mental state) depends not on its internal constitution, but solely on its function, or the role it plays, in the cognitive system of which it is a part. More precisely, functionalist theories take the identity of a mental state to be determined by its causal relations to sensory stimulations, other mental states, and behavior.
For (an avowedly simplistic) example, a functionalist theory might characterize pain as a state that tends to be caused by bodily injury, to produce the belief that something is wrong with the body and the desire to be out of that state, to produce anxiety, and, in the absence of any stronger, conflicting desires, to cause wincing or moaning. According to this theory, all and only creatures with internal states that meet these conditions, or play these roles, are capable of being in pain.
Suppose that, in humans, there is some distinctive kind of neural activity (C-fiber stimulation, for example) that meets these conditions. If so, then according to this functionalist theory, humans can be in pain simply by undergoing C-fiber stimulation. But the theory permits creatures with very different physical constitutions to have mental states as well: if there are silicon-based states of hypothetical Martians or inorganic states of hypothetical androids that also meet these conditions, then these creatures, too, can be in pain. As functionalists often put it, pain can be realized by different types of physical states in different kinds of creatures, or multiply realized. (See entry on Multiple Realization.) Indeed, since descriptions that make explicit reference only to a state's causal relations to stimulations, behavior, and other mental states are “topic-neutral” (Smart 1959) — that is, they impose no logical restrictions on the nature of the items that satisfy them — it's also logically possible for non-physical states to play the relevant roles, and thus realize mental states, in some systems as well. So functionalism is compatible with the sort of dualism that takes mental states to cause, and be caused by, physical states.
Still, though functionalism is officially neutral between materialism and dualism, it has been particularly attractive to materialists, since many materialists believe (or argue; see Lewis 1966) that it is overwhelmingly likely that any states capable of playing the roles in question will be physical states. If so, then functionalism can stand as a materialistic alternative to the Psycho-Physical Identity Thesis (introduced in Place 1956, Feigl 1958, and Smart 1959, and defended more recently in Hill 1991 and Polger 2011), which holds that each type of mental state is identical with a particular type of neural state. This thesis seems to entail that no creatures with brains unlike ours can share our sensations, beliefs, and desires, no matter how similar their behavior and internal organization may be to our own. Functionalism, with its claim that mental states can be multiply realized, has thus been regarded as providing a more inclusive, less “(species-)chauvinistic” (Block 1980b) theory of the mind that is compatible with materialism. (More recently, however, some philosophers have contended that the identity thesis may be more inclusive than functionalists assume; see Section 6 for further discussion.)
Within this broad characterization of functionalism, however, a number of distinctions can be made. One of particular importance is the distinction between theories in which the functional characterizations of mental states purport to provide analyses of the meanings of our mental state terms (or otherwise restrict themselves to a priori information), and theories that permit functional characterizations of mental states to appeal to information deriving from scientific experimentation (or speculation). (See Shoemaker 1984c, and Rey 1997, for further discussion and more fine-grained distinctions.) There are other important differences among functionalist theories as well. These (sometimes orthogonal) differences, and the motivations for them, can best be appreciated by examining the origins of functionalism and tracing its evolution in response both to explicit criticisms of the thesis and changing views about the nature of psychological explanation.
Although functionalism attained its greatest prominence as a theory of mental states in the last third of the 20th century, it has antecedents in both modern and ancient philosophy, as well as in early theories of computation and artificial intelligence.
The earliest view that can be considered an ancestor of functionalism is Aristotle's theory of the soul (350 BCE). In contrast to Plato's claim that the soul can exist apart from the body, Aristotle argued (De Anima Bk. II, Ch. 1) that the (human) soul is the form of a natural, organized human body — the set of powers or capacities that enable it to express its “essential whatness”, which for Aristotle is a matter of fulfilling the function or purpose that defines it as the kind of thing it is. Just as the form of an axe is whatever enables it to cut, and the form of an eye is whatever enables it to see, the (human) soul is to be identified with whichever powers and capacities enable a natural, organized human body to fulfill its defining function, which, according to Aristotle, is to survive and flourish as a living, acting, perceiving, and reasoning being. So, Aristotle argues, the soul is inseparable from the body, and comprises whichever capacities are required for a body to live, perceive, reason, and act.
A second, relatively early, ancestor of contemporary functionalism is Hobbes's (1651) account of reasoning as a kind of computation that proceeds by mechanistic principles comparable to the rules of arithmetic. Reasoning, he argues, is “nothing but reckoning, that is adding and subtracting, of the consequences of general names agreed upon for the marking and signifying of our thoughts.” (Leviathan, Ch. 5) In addition, Hobbes suggests that reasoning — along with imagining, sensing, and deliberating about action, all of which proceed according to mechanistic principles — can be performed by systems of various physical types. As he puts it in his Introduction to Leviathan, where he likens a commonwealth to an individual human, “why may we not say that all automata (engines that move themselves by springs and wheels…) have an artificial life? For what is the heart but a spring; and the nerves but so many strings, and the joints but so many wheels…”. It was not until the middle of the 20th century, however, that it became common to speculate that thinking may be nothing more than rule-governed computation that can be carried out by creatures of various physical types.
In a seminal paper (Turing 1950), A.M. Turing proposed that the question, “Can machines think?” can be replaced by the question, “Is it theoretically possible for a finite state digital computer, provided with a large but finite table of instructions, or program, to provide responses to questions that would fool an unknowing interrogator into thinking it is a human being?” Now, in deference to its author, this question is most often expressed as “Is it theoretically possible for a finite state digital computer (appropriately programmed) to pass the Turing Test?” (See Turing Test entry.)
In arguing that this question is a legitimate replacement for the original (and speculating that its answer is “yes”), Turing identifies thoughts with states of a system defined solely by their roles in producing further internal states and verbal outputs, a view that has much in common with contemporary functionalist theories. Indeed, Turing's work was explicitly invoked by many theorists during the beginning stages of 20th century functionalism, and was the avowed inspiration for a class of theories, the “machine state” theories most firmly associated with Hilary Putnam (1960, 1967) that had an important role in the early development of the doctrine.
Other important recent antecedents of functionalism are the behaviorist theories that emerged in the early-to-mid twentieth century. These include both the empirical psychological theories associated primarily with Watson and Skinner, and the “logical” or “analytical” behaviorism of philosophers such as Malcolm (1968) and Ryle (1949) (and, arguably, Wittgenstein 1953). Though functionalism is significantly different from behaviorism in that the latter attempts to explain behavior without any reference whatsoever to mental states and processes, the development of two important strains of functionalism, “psychofunctionalism” and “analytical” functionalism, can both be profitably viewed as attempts to rectify the difficulties, respectively, of empirical and logical behaviorism, while retaining certain important insights of those theories.
As an empirical psychological theory, behaviorism holds that the behavior of humans (and other animals) can be explained by appealing solely to behavioral dispositions, that is, to the lawlike tendencies of organisms to behave in certain ways, given certain environmental stimulations. Behavioral dispositions, unlike thoughts, feelings, and other internal states that can be directly observed only by introspection, are objectively observable and are indisputably part of the natural world. Thus they seemed to be fit entities to figure centrally in the emerging science of psychology. Also, behaviorist theories promised to avoid a potential regress that appeared to threaten psychological explanations invoking internal representations, namely, that to specify how such representations produce the behaviors in question, one must appeal to an internal intelligent agent (a “homunculus”) who interprets the representations, and whose skills would themselves have to be explained.
The promise of behaviorism lay in its conviction that there could be a science of human behavior as objective and explanatory as other “higher-level” sciences such as chemistry and biology. Behaviorism indeed had some early successes, especially in the domain of animal learning, and its principles are still used, at least for heuristic purposes, in various areas of psychology. But as many psychologists (and others, e.g. Chomsky 1959) have argued, the successes of behaviorism seem to depend upon the experimenters' implicit control of certain variables which, when made explicit, involve ineliminable reference to organisms' other mental states. For example, rats are typically placed into an experimental situation at a certain fraction of their normal body weight — and thus can be assumed to feel hunger and to want the food rewards contingent upon behaving in certain ways. Similarly, it is assumed that humans, in analogous experimental situations, want to cooperate with the experimenters, and understand and know how to follow the instructions. It seemed to the critics of behaviorism, therefore, that theories that explicitly appeal to an organism's beliefs, desires, and other mental states, as well as to stimulations and behavior, would provide a fuller and more accurate account of why organisms behave as they do. They could do so, moreover, without compromising the objectivity of psychology, as long as the mental states to which these theories appeal are introduced as states that together play a role in the production of behavior, rather than as states identifiable solely by introspection. Thus work was begun on a range of “cognitive” psychological theories which reflected these presumptions, and an important strain of contemporary functionalism, “psycho-functionalism” (Fodor 1968, Block and Fodor 1972), can be seen as a philosophical endorsement of these new cognitive theories of mind.
Logical behaviorism, in contrast to behaviorism as a psychological theory, is a thesis about the meanings of our mental state terms or concepts. According to logical behaviorism, all statements about mental states and processes are equivalent in meaning to statements about behavioral dispositions. So, for (again, an overly simplified) example, “Henry has a toothache” would be equivalent in meaning to a statement such as “Henry is disposed (all things being equal) to cry out or moan and to rub his jaw”. And “Amelia is thirsty” would be equivalent to a statement such as “If Amelia is offered some water, she will be disposed (all things being equal) to drink it.” These candidate translations, like all behavioristic statements, eschew reference to any internal states of the organism, and thus do not threaten to denote, or otherwise induce commitment to, properties or processes (directly) observable only by introspection. In addition, logical behaviorists argued that if statements about mental states were equivalent in meaning to statements about behavioral dispositions, there could be an unproblematic account of how mental state terms could be applied both to oneself and others, and how they could be taught and learned.
However, as many philosophers have pointed out (Chisholm 1957; Geach 1957), logical behaviorism provides an implausible account of the meanings of our mental state terms, since, intuitively, a subject can have the mental states in question without the relevant behavioral dispositions — and vice versa. For example, Gene may believe that it's going to rain even if he's not disposed to wear a raincoat and take an umbrella when leaving the house (or to perform any other cluster of rain-avoiding behaviors), if Gene doesn't mind, or actively enjoys, getting wet. And subjects with the requisite motivation can suppress their tendencies to pain behavior even in the presence of excruciating pain, while skilled actors can perfect the lawlike disposition to produce pain behavior under certain conditions, even if they don't actually feel pain (Putnam 1965). The problem, these philosophers argued, is that no mental state, by itself, can be plausibly assumed to give rise to any particular behavior unless one also assumes that the subject possesses additional mental states of various types. And so, it seemed, it was not in fact possible to give meaning-preserving translations of statements invoking pains, beliefs, and desires in purely behavioristic terms. Nonetheless, the idea that our common sense concepts of mental states reveal an essential tie between mental states and their typical behavioral expressions is retained, and elaborated, in contemporary “analytic” functionalist theories.
Given this history, it is helpful to think of functionalist theories as belonging to one of three major strains — “machine functionalism”, “psychofunctionalism” and “analytic functionalism” — and to see them as emerging, respectively, from early AI theories, empirical behaviorism, and logical behaviorism. It's important to recognize, however, that there is at least some overlap in the bloodlines of these different strains of functionalism, and also that there are functionalist theories, both earlier and more recent, that fall somewhere in between. For example, Wilfrid Sellars's (1956) account of mental states as “theoretical entities” is widely regarded as an important early version of functionalism, but it takes the proper characterization of thoughts and experiences to depend partly on their role in providing a scientific explanation of behavior, and partly on what he calls the “logic”, or the a priori interrelations, of the relevant concepts. Still, it is instructive to give separate treatment to the three major strains of the doctrine, as long as these caveats are kept in mind.
The early functionalist theories of Putnam (1960, 1967) can be seen as a response to the difficulties facing behaviorism as a scientific psychological theory, and as an endorsement of the (new) computational theories of mind which were becoming increasingly significant rivals to it. (But see Putnam 1988 for subsequent doubts about machine functionalism, Chalmers 1996b for a response, and Shagrir 2005 for a comprehensive account of the evolution of Putnam's views on the subject.)
According to Putnam's machine state functionalism, any creature with a mind can be regarded as a Turing machine (an idealized finite state digital computer), whose operation can be fully specified by a set of instructions (a “machine table” or program) each having the form:
If the machine is in state Si, and receives input Ij, it will go into state Sk and produce output Ol (for a finite number of states, inputs and outputs).
A machine table of this sort describes the operation of a deterministic automaton, but most machine state functionalists (e.g. Putnam 1967) take the proper model for the mind to be that of a probabilistic automaton: one in which the program specifies, for each state and set of inputs, the probability with which the machine will enter some subsequent state and produce some particular output.
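A probabilistic machine table of this kind can be sketched in a few lines of code. This is only an illustrative toy, not anything drawn from Putnam's own examples: the state names (S1, S2, S3), inputs, outputs, and probabilities below are all invented for the sketch.

```python
import random

# Sketch of a probabilistic machine table: for each (state, input) pair,
# the program specifies a probability distribution over (next state,
# output) pairs. All names and probabilities here are invented.
PROB_TABLE = {
    ("S1", "I1"): [(("S2", "O1"), 0.8), (("S3", "O2"), 0.2)],
}

def step(state, stimulus, rng=random.random):
    """Sample one transition: state S_i with input I_j yields some S_k and O_l."""
    outcomes = PROB_TABLE[(state, stimulus)]
    r = rng()
    cumulative = 0.0
    for (next_state, output), prob in outcomes:
        cumulative += prob
        if r < cumulative:
            return next_state, output
    return outcomes[-1][0]  # guard against floating-point rounding
```

Setting every probability in such a table to 1 recovers the deterministic automaton described in the previous paragraph.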
On either model, however, the mental states of a creature are to be identified with such “machine table states” (S1,…,Sn). These states are not mere behavioral dispositions, since they are specified in terms of their relations not only to inputs and outputs, but also to the state of the machine at the time. For example, if believing it will rain is regarded as a machine state, it will not be regarded as a disposition to take one's umbrella after looking at the weather report, but rather as a disposition to take one's umbrella if one looks at the weather report and is in the state of wanting to stay dry. So machine state functionalism can avoid what many have thought to be a fatal difficulty for behaviorism. In addition, machines of this sort provide at least a simple model of how internal states whose effects on output occur by means of mechanical processes can be viewed as representations (though the question of what, exactly, they represent has been an ongoing topic of discussion; see sections 4.4–5). Finally, machine table states are not tied to any particular physical (or other) realization; the same program, after all, can be run on different sorts of computer hardware.
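The umbrella example can be made concrete with a toy deterministic table in which the same stimulus produces different outputs depending on the machine's total state. The state, input, and output names below are invented for this sketch, not part of any actual functionalist theory.

```python
# Toy deterministic machine table: maps (total state, input) pairs to
# (next state, output) pairs. The output depends on the machine's total
# internal state, not on the stimulus alone, which is what distinguishes
# machine table states from bare behavioral dispositions.
TABLE = {
    (("believes-rain", "wants-to-stay-dry"), "sees-weather-report"):
        (("believes-rain", "wants-to-stay-dry"), "takes-umbrella"),
    (("believes-rain", "indifferent-to-wet"), "sees-weather-report"):
        (("believes-rain", "indifferent-to-wet"), "leaves-umbrella"),
}

def transition(state, stimulus):
    """One transition: in state S_i with input I_j, go to S_k and output O_l."""
    return TABLE[(state, stimulus)]

# The same stimulus yields different outputs in different total states:
_, out_dry = transition(("believes-rain", "wants-to-stay-dry"), "sees-weather-report")
_, out_wet = transition(("believes-rain", "indifferent-to-wet"), "sees-weather-report")
```

A behaviorist translation, by contrast, would have to map the stimulus to an output directly, with no internal state to consult.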
It's easy to see, therefore, why Turing machines provided a fruitful model for early functionalist theories. However, because machine table states are total states of a system, they cannot capture the many distinct internal states that can be simultaneously realized in a human (or other) subject, and so the early functionalist equation of mental states with machine table states faded in importance (Block and Fodor 1972; Putnam 1973). Nonetheless, the idea that internal states can be fully described in terms of their relations to input, output, and one another, and can figure in lawlike descriptions, and predictions, of a system's output, was a rich and important idea that is retained by contemporary functionalist theories. And many functionalists (e.g. Rey 1997) argue that mental states are best regarded as computational states (but see Piccinini 2004 for dissent).
A second strain of functionalism, psycho-functionalism, derives primarily from reflection upon the goals and methodology of “cognitive” psychological theories. In contrast to the behaviorists' insistence that the laws of psychology appeal only to behavioral dispositions, cognitive psychologists argue that the best empirical theories of behavior take it to be the result of a complex of mental states and processes, introduced and individuated in terms of the roles they play in producing the behavior to be explained. For example (Fodor's, in his 1968, Ch. 3), a psychologist may begin to construct a theory of memory by postulating the existence of “memory trace” decay, a process whose occurrence or absence is responsible for effects such as memory loss and retention, and which is affected by stress or emotion in certain distinctive ways.
On a theory of this sort, what makes some neural process an instance of memory trace decay is a matter of how it functions, or the role it plays, in a cognitive system; its neural or chemical properties are relevant only insofar as they enable that process to do what trace decay is hypothesized to do. And similarly for all mental states and processes invoked by cognitive psychological theories. Cognitive psychology, that is, is intended by its proponents to be a “higher-level” science like biology, and thus to have autonomy from lower-level sciences such as neurophysiology: just as, in biology, physically disparate entities can all be hearts as long as they function to circulate blood in a living organism, and physically disparate entities can all be eyes as long as they enable an organism to see, disparate physical structures or processes can be instances of memory trace decay — or more familiar phenomena such as thoughts, sensations, and desires — as long as they play the roles described by the relevant cognitive theory.
Psycho-functionalism, therefore, can be seen as straightforwardly adopting the methodology of cognitive psychology in its characterization of mental states and processes as entities defined by their role in a cognitive psychological theory. All versions of functionalism, however, can be regarded as characterizing mental states in terms of their roles in some psychological theory or other. (A more formal account of this will be given in Section 4.1 below.) What is distinctive about psycho-functionalism is its claim that mental states and processes are just those entities, with just those properties, postulated by the best scientific explanation of human behavior. This means, first, that the form of the theory can diverge from the “machine table” specifications of machine state functionalism. It also means that the information used in the functional characterization of mental states and processes needn't be restricted to what is considered common knowledge or common sense, but can include information available only by careful laboratory observation and experimentation. For example, a psychofunctional theory might be able to distinguish phenomena such as depression from sadness or listlessness even though the distinctive causes and effects of these syndromes are difficult to untangle solely by consulting intuitions or appealing to common sense. And psychofunctional theories will not include characterizations of mental states for which there is no scientific evidence, such as buyer's regret or hysteria, even if common sense affirms the existence and efficacy of such states.
This may seem to be an unmitigated advantage, since psycho-functional theories can avail themselves of all the tools of inquiry available to scientific psychology, and will presumably make all, and only, the distinctions that are scientifically sound. This methodology, however, leaves psycho-functionalism open to the charge that it, like the Psycho-Physical Identity Thesis, may be overly “chauvinistic” (Block 1980b), since creatures whose internal states share the rough, but not fine-grained, causal patterns of ours wouldn't count as sharing our mental states. Many psycho-functionalists may not regard this as an unhappy consequence, and argue that it's appropriate to treat only those who are psychologically similar as having the same mental states. But there is a more serious worry about the thesis, namely, that if the laws of the best empirical psychological theories diverge from even the broad contours of our “folk psychology” — that is, our common sense beliefs about the causal roles of our thoughts, sensations, and perceptions — it will be hard to take psycho-functional theories as providing an account of our mental states, rather than merely changing the subject (Loar 1981, Stich 1983, Greenwood 1991). Many theorists, however (Horgan and Woodward 1985), argue that it's likely that future psychological theories will be recognizably close to “folk psychology”, though this question has been the subject of debate (Churchland 1981).
But there is another important strain of functionalism, “analytic” functionalism, that takes there to be reason for a stronger restriction on the defining theory: not merely to generalizations sufficiently close to those that “the folk” take to hold between mental states, environmental stimulations, and behavior, but to a priori information about these relations. (See Smart 1959, Armstrong 1968, Shoemaker 1984a,b,c, Lewis 1972, and Braddon-Mitchell and Jackson 1996/2007.) This is because, for analytic functionalists, there are equally important goals that require strictly a priori characterizations of mental states.
Like the logical behaviorism from which it emerged, analytic functionalism aims to provide “topic-neutral” translations, or analyses, of our ordinary mental state terms or concepts. Analytic functionalism, of course, has richer resources than logical behaviorism for such translations, since it permits reference to the causal relations that a mental state has to stimulations, behavior, and other mental states. Thus the statement “Blanca wants some coffee” need not be rendered, as logical behaviorism requires, in terms such as “Blanca is disposed to order coffee when it is offered”, but rather as “Blanca is disposed to order coffee when it is offered, if she has no stronger desire to avoid coffee”. But this requires any functional “theory” acceptable to analytic functionalists to include only generalizations about mental states, their environmental causes, and their joint effects on behavior that are so widely known and “platitudinous” as to count as analyzing our ordinary concepts of the mental states in question.
A good way to see why analytic functionalists insist that functional characterizations provide meaning analyses is to revisit a debate that occurred in the early days of the Psycho-Physical Identity Theory, the thesis that each type of mental state can be identified with some type of brain state or neural activity. For example, early identity theorists (e.g. Smart 1959) argued that it makes perfect sense (and may well be true) to identify pain with C-fiber stimulation. The terms ‘pain’ and ‘C-fiber stimulation’, they acknowledged, do not have the same meaning, but nonetheless they can denote the same state; the fact that an identity statement is not a priori, they argued, does not mean that it is not true. And just because I need not consult some sort of brain scanner when reporting that I'm in pain doesn't mean that the pain I report is not a neural state that a brain scanner could (in principle) detect.
An important — and enduring — objection to this argument, however, was raised early on by Max Black (reported in Smart 1959). Black argued, following Frege (1892), that the only way that terms with different meanings can denote the same state is to express different properties, or “modes of presentation” of that state. But this implies, he argued, that if terms like ‘pain’, ‘thought’ and ‘desire’ are not equivalent in meaning to any physicalistic descriptions, they can denote physical states only by expressing irreducibly mental properties of them. Thus, even if ‘pain’ and ‘C-fiber stimulation’ pick out a single type of neural state, this state must have two types of properties, physical and mental, by means of which the identification can be made. This argument has come to be known as the “Distinct Property Argument”, and is taken by its proponents to undermine a thorough-going materialistic theory of the mind. (See White 1986, 2002 (in the Other Internet Resources), and 2007, for more recent versions of this argument, and Block 2007, for a response.)
The appeal of meaning-preserving functional characterizations, therefore, is that in providing topic-neutral equivalents of our mental state terms and concepts, they blunt the anti-materialistic force of the Distinct Property Argument. True enough, analytic functionalists can acknowledge, terms like ‘pain’, ‘thought’, and ‘desire’ are not equivalent to any descriptions expressed in the language of physics, chemistry, or neurophysiology. But if there are functional descriptions that preserve the meanings of these terms, then a creature's mental states can be identified simply by determining which of that creature's internal states and processes play the relevant functional roles (see Lewis 1966). And since the capacity to play these roles is merely a matter of having certain causal relations to stimulations, behavior, and one another, the possession of these properties is compatible with a materialistic theory of the mind.
A major question, of course, is whether a theory that limits itself to a priori information about the causal relations between stimulations, mental states, and behavior can make the right distinctions among mental states. This question will be pursued further in Section 4.
There is yet another distinction between kinds of functional theory — one that crosscuts the distinctions described so far — that is important to note. This is the distinction between what has come to be known as “role” functionalism and “realizer” (or “filler”) functionalism (McLaughlin 2006). To see the difference between these types of theory, consider—once again—the (avowedly simplistic) example of a functional theory of pain introduced in the first section.
Pain is the state that tends to be caused by bodily injury, to produce the belief that something is wrong with the body and the desire to be out of that state, to produce anxiety, and, in the absence of any stronger, conflicting desires, to cause wincing or moaning.
As noted earlier, if in humans this functional role is played by C-fiber stimulation, then, according to this functionalist theory, humans can be in pain simply by undergoing C-fiber stimulation. But there is a further question to be answered, namely, what is the property of pain itself? Is it the higher-level relational property of being in some state or other that plays the “pain role” in the theory, or the C-fiber stimulation that actually plays this role?
Role functionalists identify pain with that higher-level relational property. Realizer functionalists, however, take a functional theory merely to provide definite descriptions of whichever lower-level properties satisfy the functional characterizations. On these views (also called “functional specification” theories), if the property that occupies the causal role of pain in human beings is C-fiber stimulation, then pain (or at least pain-in-humans) would be C-fiber stimulation, rather than the higher-level property of having some lower-level state that plays the relevant role. (This is not to suggest that there is a difference in kind between higher-level “role” properties and the lower-level “realizations” of those roles, since it may be that, relative to even lower-level descriptions, those realizations can be characterized as functional states themselves (Lycan 1987).)
Some of the earliest versions of analytic functionalism (Lewis 1966, Armstrong 1968 — but see Lewis 1980, for a modification) were presented as functional specification theories, as topic-neutral “translations” of mental state terms that could pave the way for a psycho-physical identity theory by defusing the Distinct Property Argument (see section 3.3). However, if there are differences in the physical states that satisfy the functional definitions in different (actual or hypothetical) creatures, such theories—like most versions of the identity theory—would violate a key motivation for functionalism, namely, that creatures with states that play the same role in the production of other mental states and behavior possess, literally, the same mental states.
It may be that there are some important, more general, physical similarities between the neural states of seemingly disparate creatures that satisfy a given functional characterization (see Bechtel and Mundale 1999 and Churchland 2005—but see Aizawa 2009 for dissent; this issue will be discussed further in Section 6). However, even if this is so, it is unlikely that these similarities hold of all the creatures, including Martians and other hypothetical beings, who could share our functional organization, and thus our theory of mental states would remain, in Block's (1980) terms, overly “chauvinistic”. One could counter the charge of chauvinism, of course, by suggesting that all creatures with lower-level states that satisfy a given functional characterization possess a common (lower-level) disjunctive state or property. Or one could suggest that, even if all creatures possessing states that occupy (for example) the pain role are not literally in the same mental state, they nonetheless share a closely related higher-level property (call it, following Lewis 1966 (note 6), “the attribute of having pain”). But neither alternative, for many functionalists, goes far enough to preserve the basic functionalist intuition that functional commonality trumps physical diversity in determining whether creatures can possess the same mental states. Thus many functionalists—both a priori and empirical—advocate role functionalism, which, in addition to avoiding chauvinism, permits mental state terms to be rigid designators (Kripke 1972), denoting the same items—those higher-level “role” properties—in all possible worlds.
On the other hand, some functionalists—here, too, both a priori and empirical—consider realizer functionalism to be in a better position than role functionalism to explain the causal efficacy of the mental. If I stub my toe and wince, we believe that my toe stubbing causes my pain, which in turn causes my wincing. But, some have argued (Malcolm 1968; Kim 1989, 1998), if pain is realized in me by some neural event-type, then insofar as there are purely physical law-like generalizations linking events of that type with wincings, one can give a complete causal explanation of my wincing by citing the occurrence of that neural event (and the properties by virtue of which it figures in those laws). And thus it seems that the higher-level role properties of that event are causally irrelevant. This is known as the “causal exclusion problem”, which is claimed to arise not just for functional role properties, but for dispositional properties in general (Prior, Pargetter, and Jackson 1982)—and indeed for any sort of mental states or properties not type-identical to those invoked in physical laws. This problem will be discussed further in Section 5.2.
So far, the discussion of how to provide functional characterizations of individual mental states has been vague, and the examples avowedly simplistic. Is it possible to do better, and, if so, which version of functionalism is likely to have the greatest success? These questions will be the focus of this section, and separate treatment will be given to experiential states, such as perceptions and bodily sensations, which have a distinctive qualitative character or “feel”, and intentional states, such as thoughts, beliefs, and desires, which purport to represent the world in various ways (though there is increasing consensus that experiential states have representational contents and intentional states have qualitative character, and thus that these two groups may not be mutually exclusive).
First, however, it is important to get more precise about how exactly functional definition is supposed to work. This can be done by focusing on a general method for constructing functional definitions introduced by David Lewis (1972; building on an idea of Frank Ramsey's), which has become standard practice for functionalists of all varieties. Articulating this method will help in evaluating the strengths and weaknesses of the different varieties of functionalism—while displaying some further challenges that arise for them all.
The key feature of this now-canonical method is to treat mental states and processes as being implicitly defined by the Ramsey sentence of one or another psychological theory — common sense, scientific, or something in between. (Analogous steps, of course, can be taken to produce the Ramsey-sentence of any theory, psychological or otherwise). For (a still simplistic) example, consider the sort of generalizations about pain introduced before: pain tends to be caused by bodily injury; pain tends to produce the belief that something is wrong with the body and the desire to be out of that state; pain tends to produce anxiety; pain tends to produce wincing or moaning.
To construct the Ramsey-sentence of this “theory”, the first step is to conjoin these generalizations, then replace all names of different types of mental states with different variables, and then existentially quantify those variables, as follows:
∃x∃y∃z∃w( x tends to be caused by bodily injury & x tends to produce states y, z, and w & x tends to produce wincing or moaning).
Such a statement is free of any mental state terms. It includes only quantifiers that range over mental states, terms that denote stimulations and behavior, and terms that specify various causal relations among them. It can thus be regarded as providing implicit definitions of the mental state terms of the theory. An individual will have those mental states just in case it possesses a family of first-order states that interact in the ways specified by the theory. (Though functionalists of course acknowledge that the first-order states that satisfy the functional definitions may vary from species to species — or even from individual to individual — they specify that, for each individual, the functional definitions be uniquely satisfied.)
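The passage from the Ramsey sentence to definitions of individual mental state terms can be displayed a bit more explicitly. Writing T(x, y, z, w) as shorthand for the conjunction of generalizations above (a notational convenience, not Lewis's own formulation), a Lewis-style definition of ‘is in pain’ runs roughly as follows:

```latex
% The Ramsey sentence of the theory: the names of mental states
% are replaced by variables, which are existentially quantified.
\exists x\,\exists y\,\exists z\,\exists w\; T(x, y, z, w)

% Implicit definition of pain: an individual n is in pain just in
% case n has the state occupying the first role in the theory
% (assuming, with Lewis, that the roles are uniquely occupied).
n \text{ is in pain} \iff
  \exists x\,\exists y\,\exists z\,\exists w\,
  [\,T(x, y, z, w) \wedge n \text{ has } x\,]
```

Parallel definitions, quantifying on the other variables, yield the belief, desire, and anxiety roles, so the mental states are characterized jointly rather than one by one.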
A helpful way to think of the Ramsey sentence of a psychological theory is to regard it as defining a system's mental states “all at once” as states that interact with stimulations in various ways to produce behavior (see Lewis 1972; see also Field 1980 for a more technical elaboration of Lewis's method, and an account of some crucial differences between this kind of characterization and the one Lewis initially proposed). This makes it clear that, in the classic formulations of functional theories, mental states are intended to be characterized in terms of their relations to stimulations, behavior, and all the other states that may be permissibly invoked by the theory in question, and thus certain functional theories may have more resources for individuating mental states than suggested by the crude definitions used as examples. The next three sections will discuss the potential of various sorts of functionalist theory for giving adequate characterizations of experiential and intentional states—and also for specifying the inputs and outputs of the system.
The common strategy in the most successful treatments of perceptual experiences and bodily sensations (Shoemaker 1984a, Clark 1993; adumbrated in Sellars 1956) is to individuate experiences of various general types (color experiences, experiences of sounds, feelings of temperature) in part by appeal to their positions in the “quality spaces” associated with the relevant sense modalities — that is, the (perhaps multidimensional) matrices determined by judgments about the relative similarities and differences among the experiences in question. So, for example, the experience of a very reddish-orange could be (partially) characterized as the state produced by the viewing of a color swatch within some particular range, which tends to produce the judgment or belief that the state just experienced is more similar to the experience of red than of orange. (Analogous characterizations, of course, will have to be given of these other color experiences.) The judgments or beliefs in question will themselves be (partially) characterized in terms of their tendencies to produce sorting or categorization behavior of certain specified kinds.
This strategy may seem unavailable to analytic functionalism, which restricts itself to the use of a priori information to distinguish among mental states, since it's not clear that the information needed to distinguish among experiences such as color perceptions will result from conceptual analysis of our mental state terms or concepts. However, this problem may not be as dire as it seems. For example, if sensations and perceptual experiences are characterized in terms of their places in a “quality space” determined by a person's pre-theoretical judgments of similarity and dissimilarity (and perhaps also in terms of their tendencies to produce various emotional effects), then these characterizations may qualify as a priori, even though they would have to be elicited by a kind of “Socratic questioning”.
There are limits to this strategy, however (see Section 5.5.1 on the “inverted spectrum” problem), which seem to leave two options for analytic functionalists: fight — that is, deny that it's coherent to suppose that there are the distinctions that the critics suggest, or switch — that is, embrace another version of functionalism in which the characterizations of mental states, though not conceptual truths, can provide information rich enough to individuate the states in question. To switch, however, would be to give up the benefits (if any) of a theory that offers meaning-preserving translations of our mental state terms.
There has been significant skepticism, however, about whether any functionalist theory — analytic or scientific — can capture what seem to be the intrinsic characters of experiential states such as color perceptions, pains, and other bodily sensations; these questions will be addressed in section 5.5 below.
On the other hand, intentional states such as beliefs, thoughts, and desires (sometimes called “propositional attitudes”) are often taken to be easier to specify in functional terms (but not always: see Searle 1992, G. Strawson 1994, Horgan and Tienson 2002, Kriegel 2003, and Pitt 2008, who suggest that intentional states have qualitative character as well). It's not hard to see how to begin: beliefs are (among other things) states produced in certain ways by sense-perception or inference from other beliefs, and which tend to interact with certain desires to produce behavior; desires are states with certain causal or counterfactual relations to the system's goals and needs, and which tend to interact with certain beliefs to produce behavior. But more must be said about what makes a state a particular belief or desire, for example, the belief — or desire — that it will snow tomorrow. Most functional theories treat such states as relations (or “attitudes”) toward states of affairs or propositions: the belief that it will snow tomorrow and the desire that it will snow tomorrow are different attitudes toward the same proposition, while the belief that it will snow tomorrow and the belief that it will rain tomorrow are the same attitude toward different propositions. This permits differences and similarities in the contents of intentional states to be construed as differences and similarities in the propositions to which these states are related. But what makes a mental state a relation to, or attitude toward, some proposition P? And can these relations be captured solely by appeal to the functional roles of the states in question?
The development of conceptual role semantics may seem to provide an answer to these questions: what it is for Julian to believe that P is for Julian to be in a state that has causal and counterfactual relations to other beliefs and desires that mirror certain inferential, evidential, and practical (action-directed) relations among propositions with those formal structures (Field 1980; Loar 1981; Block 1986). This proposal raises a number of important questions. One is whether states capable of entering into such interrelations can (must?) be construed as comprising, or including elements of, a “language of thought” (Fodor 1975; Harman 1973; Field 1980; Loar 1981). Another is whether idiosyncrasies in the inferential or practical proclivities of different individuals make for differences in (or incommensurabilities between) their intentional states. (This question springs from a more general worry about the holism of functional specification, which will be discussed further in Section 5.1.)
Yet another challenge for functionalism comes from the widespread intuitions that support “externalism”, the thesis that what an individual's mental states represent, or are about, cannot be characterized without appeal to certain features of the environment in which that individual is embedded. Thus, if one individual's environment differs from another's, they may count as having different intentional states, even though they reason in the same ways, and have exactly the same “take” on those environments from their own points of view.
The “Twin Earth” scenarios introduced by Putnam (1975) are often invoked to support an externalist individuation of beliefs about natural kinds such as water, gold, or tigers. Twin Earth, as Putnam presents it, is a (hypothetical) planet on which things look, taste, smell and feel exactly the way they do on Earth, but have different underlying microscopic structures; for example, the stuff that fills the streams and comes out of the faucets, though it looks and tastes like water, has molecular structure XYZ rather than H2O. Many theorists find it intuitive to think that we thereby mean something different by our term ‘water’ than our Twin Earth counterparts mean by theirs, and thus that the beliefs we describe as beliefs about water are different from those that our Twin Earth counterparts would describe in the same way. Similar conclusions, they contend, can be drawn for all cases of belief (and other intentional states) regarding natural kinds.
The same problem, moreover, appears to arise for other sorts of belief as well. Tyler Burge (1979) presents cases in which it seems intuitive that a person, Oscar, and his functionally equivalent counterpart have different beliefs about various syndromes (such as arthritis) and artifacts (such as sofas) because the usage of these terms by their linguistic communities differs. For example, in Oscar's community, the term ‘arthritis’ is used as we use it, whereas in his counterpart's community ‘arthritis’ denotes inflammation of the joints and also various maladies of the thigh. Burge's contention is that even if Oscar and his counterpart both complain about the ‘arthritis’ in their thighs and make exactly the same inferences involving ‘arthritis’, they mean different things by their terms and must be regarded as having different beliefs. If these cases are convincing, then there are differences among types of intentional states that can only be captured by characterizations of these states that make reference to the practices of an individual's linguistic community. These, along with the Twin Earth cases, suggest that if functionalist theories cannot make reference to an individual's environment, then capturing the representational content of (at least some) intentional states is beyond the scope of functionalism. (See Section 4.4 for further discussion, and Searle 1980 for related arguments against “computational” theories of intentional states.)
On the other hand, the externalist individuation of intentional states may fail to capture some important psychological commonalities between ourselves and our counterparts that are relevant to the explanation of behavior. If my Twin Earth counterpart and I have both come in from a long hike, declare that we're thirsty, say “I want some water” and head to the kitchen, it seems that our behavior can be explained by citing a common desire and belief. Some theorists, therefore, have suggested that functional theories should attempt merely to capture what has been called the “narrow content” of beliefs and desires — that is, whichever representational features individuals share with their various Twin Earth counterparts. There is no consensus, however, about just how functionalist theories should treat these “narrow” representational features (Block 1986; Loar 1987), and some philosophers have expressed skepticism about whether such features should be construed as representations at all (Fodor 1994; also see entry on Narrow Content). Even if a generally acceptable account of narrow representational content can be developed, however, if the intuitions inspired by “Twin Earth” scenarios remain stable, then one must conclude that the full representational content of intentional states (and qualitative states, if they too have representational content) cannot be captured by “narrow” functional characterizations alone.
Considerations about whether certain sorts of beliefs are to be externally individuated raise the related question about how best to characterize the stimulations and behaviors that serve as inputs and outputs to a system. Should they be construed as events involving objects in a system's environment (such as fire trucks, water and lemons), or rather as events in that system's sensory and motor systems? Theories of the first type are often called “long-arm” functional theories (Block 1990), since they characterize inputs and outputs — and consequently the states they are produced by and produce — by reaching out into the world. Adopting a “long-arm” theory would prevent our Twin Earth counterparts from sharing our beliefs and desires, and may thus honor intuitions that support an externalist individuation of intentional states (though further questions may remain about what Quine has called the “inscrutability of reference”; see Putnam 1988).
If functional characterizations of intentional states are intended to capture their “narrow contents”, however, then the inputs and outputs of the system will have to be specified in a way that permits individuals in different environments to be in the same intentional state. On this view inputs and outputs may be better characterized as activity in specific sensory receptors and motor neurons. But this (“short-arm”) option also restricts the range of individuals that can share our beliefs and desires, since creatures with different neural structures will be prevented from sharing our mental states, even if they share all our behavioral and inferential dispositions. (In addition, this option would not be open to analytic functionalist theories, since generalizations that link mental states to neurally specified inputs and outputs would not, presumably, have the status of conceptual truths.)
Perhaps there is a way to specify sensory stimulations that abstracts from the specifics of human neural structure enough to include any possible creature that intuitively seems to share our mental states, but is sufficiently concrete to rule out entities that are clearly not cognitive systems (such as the economy of Bolivia; see Block 1980b). If there is no such formulation, however, then functionalists will either have to dispel intuitions to the effect that certain systems can't have beliefs and desires, or concede that their theories may be more “chauvinistic” than initially hoped.
Clearly, the issues here mirror the issues regarding the individuation of intentional states discussed in the previous section. More work is needed to develop the “long-arm” and “short-arm” alternatives, and to assess the merits and deficiencies of both.
The previous sections were by and large devoted to the presentation of the different varieties of functionalism and the evaluation of their relative strengths and weaknesses. There have been many objections to functionalism, however, that apply to all versions of the theory. Some of these have already been introduced in earlier discussions, but they, and many others, will be addressed in more detail here.
One difficulty for every version of the theory is that functional characterization is holistic. Functionalists hold that mental states are to be characterized in terms of their roles in a psychological theory—be it common sense, scientific, or something in between—but all such theories incorporate information about a large number and variety of mental states. Thus if pain is interdefined with certain highly articulated beliefs and desires, then animals who don't have internal states that play the roles of our articulated beliefs and desires can't share our pains, and humans without the capacity to feel pain can't share certain (or perhaps any) of our beliefs and desires. In addition, differences in the ways people reason, the ways their beliefs are fixed, or the ways their desires affect their beliefs — due either to cultural or individual idiosyncrasies — might make it impossible for them to share the same mental states. These are regarded as serious worries for all versions of functionalism (see Stich 1983, Putnam 1988).
Some functionalists, however (e.g. Shoemaker 1984c), have suggested that if a creature has states that approximately realize our functional theories, or realize some more specific defining subset of the theory particularly relevant to the specification of those states, then they can qualify as being mental states of the same types as our own. The problem, of course, is to specify more precisely what it is to be an approximate realization of a theory, or what exactly a “defining” subset of a theory is intended to include, and these are not easy questions. (They have particular bite, moreover, for analytic functionalist theories, since specifying what belongs inside and outside the “defining” subset of a functional characterization raises the question of which features of a mental state are conceptually essential and which merely collateral, and thus raises serious questions about the feasibility of (something like) an analytic-synthetic distinction (Quine 1953, Rey 1997).)
Another worry for functionalism is the “causal exclusion problem”, introduced in section 3.4: the worry about whether role functionalism can account for what we take to be the causal efficacy of our mental states (Malcolm 1968, Kim 1989, 1998). For example, if pain is realized in me by some neural state-type, then insofar as there are purely physical law-like generalizations linking states of that type with pain behavior, one can give a complete causal explanation of my behavior by citing the occurrence of that neural state (and the properties by virtue of which it figures in those laws). And thus, some have argued, the higher-level role properties of that state—its being a pain—are causally irrelevant.
There have been a number of different responses to this problem. Some (e.g. Loewer 2002, 2007, Antony and Levine 1997, Burge 1997, Baker 1997) suggest that it arises from an overly restrictive account of causation, in which a cause must “generate” or “produce” its effect, a view which would count the macroscopic properties of other special sciences as causally irrelevant as well. Instead, some argue, causation should be regarded as a special sort of counterfactual dependence between states of certain types (Loewer 2002, 2007, Fodor 1990, Block 1997), or as a special sort of regularity that holds between them (Melnyk 2003). If this is correct, then functional role properties (along with the other macroscopic properties of the special sciences) could count as causally efficacious (but see Ney 2012 for dissent). However, the plausibility of these accounts of causation depends on their prospects for distinguishing bona-fide causal relations from those that are clearly epiphenomenal, and some have expressed skepticism about whether they can do the job, among them Crane 1995, Kim 2007, Jackson 1995, Ludwig 1998, and McLaughlin 2006, forthcoming. (On the other hand, see Lyons (2006) for an argument that if functional properties are causally inefficacious, this can be viewed as a benefit of the theory.)
Yet other philosophers argue that causation is best regarded as a relation between types of events that must be invoked to provide sufficiently general explanations of behavior (Antony and Levine 1997, Burge 1997, Baker 1997). Though many who are moved by the exclusion problem (e.g. Kim, Jackson) maintain that there is a difference between generalizations that are truly causal and those that contribute in some other (merely epistemic) way to our understanding of the world, theorists who advocate this response to the problem charge that this objection, once again, depends on a restrictive view of causation that would rule out too much.
Another problem with views like the ones sketched above, some argue (Kim 1989, 1998), is that mental and physical causes would thereby overdetermine their effects, since each would be causally sufficient for their production. And, though some theorists argue that overdetermination is widespread and unproblematic (see Loewer 2002, and also Shaffer 2003 and Sider 2003 for a more general discussion of overdetermination), others contend that there is a special relation between role and realizer that provides an intuitive explanation of how both can be causally efficacious without counting as overdetermining causes. For example, Yablo (1992) suggests that mental and physical properties stand in the relation of determinable and determinate (just as red stands to scarlet), and argues that our conviction that a cause should be commensurate with its effects permits us to take the determinable, rather than the determinate, property to count as causally efficacious in psychological explanation. (See also Macdonald and Macdonald 1995, and Bennett 2003, for related views.)
There has been substantial recent work on the causal exclusion problem, which, as noted earlier, arises for any non-reductive theory of mental states. (See the entry on Mental Causation, as well as Bennett 2007, and Funkhouser 2007, for further discussion and extensive bibliographies.) But it is worth discussing a related worry about causation that arises exclusively for role-functionalism (and other dispositional theories), namely, the problem of “metaphysically necessitated effects” (Rupert 2006, Bennett 2007). If pain is functionally defined (either by an a priori or an empirical theory) as the state of being in some lower-level state or other that, in certain circumstances, causes wincing, then it seems that the generalization that pain causes wincing (in those circumstances) is at best uninformative, since the state in question would not be pain if it didn't. And, on the Humean view of causation as a contingent relation, the causal claim would be false. Davidson (1980b) once responded to a similar argument by noting that even if a mental state M is defined in terms of its production of an action A, it can often be redescribed in other terms P such that 'P caused A' is not a logical truth. But it's unclear whether any such redescriptions are available to role (vs. realizer) functionalists.
Some theorists (e.g. Antony and Levine 1997) have responded by suggesting that, though mental states may be defined in terms of some of their effects, they have other effects that do not follow from those definitions which can figure into causal generalizations that are contingent, informative, and true. For example, even if it follows from a functional definition that pain causes wincing (and thus that the relation between pain and wincing cannot be truly causal), psychologists may discover, say, that pain produces resilience (or submissive behavior) in human beings. One might worry, nonetheless, that functional definitions threaten to leave too many commonly cited generalizations outside the realm of contingency, and thus causal explanation: surely, we may think, we want to affirm claims such as “pain causes wincing”. Such claims could be affirmed, however, if (as seems likely) the most plausible functional theories define sensations such as pain in terms of a small subset of their distinctive psychological, rather than behavioral, effects (see section 4.2).
A different line of response to this worry (Shoemaker 1984d, 2001) is to deny the Humean account of causation altogether, and contend that causal relations are themselves metaphysically necessary, but this remains a minority view.
Another important question concerns the beliefs that we have about our own “occurrent” (as opposed to dispositional) mental states such as thoughts, sensations, and perceptions. We seem to have immediately available, non-inferential beliefs about these states, and the question is how this is to be explained if mental states are identical with functional properties.
The answer depends on what one takes these introspective beliefs to involve. Broadly speaking, there are two dominant views of the matter (but see Peacocke 1999, Ch. 5 for further alternatives). One popular account of introspection — the “inner sense” model on which introspection is taken to be a kind of “internal scanning” of the contents of one's mind (Armstrong 1968) — has been taken to be unfriendly to functionalism, on the grounds that it's hard to see how the objects of such scanning could be second-order relational properties of one's neural states (Goldman 1993). Some theorists, however, have maintained that functionalism can accommodate the special features of introspective belief on the “inner sense” model, since it would be only one of many domains in which it's plausible to think that we have immediate, non-inferential knowledge of causal or dispositional properties (Armstrong 1993; Kobes 1993; Sterelny 1993). A full discussion of these questions goes beyond the scope of this entry, but the articles cited above are just three among many helpful pieces in the Open Peer Commentary following Goldman (1993), which provides a good introduction to the debate about this issue.
Another account of introspection, identified most closely with Shoemaker (1996a,b,c,d), is that the immediacy of introspective belief follows from the fact that occurrent mental states and our introspective beliefs about them are functionally interdefined. For example, one satisfies the definition of being in pain only if one is in a state that tends to cause (in creatures with the requisite concepts who are considering the question) the belief that one is in pain, and one believes that one is in pain only if one is in a state that plays the belief role, and is caused directly by the pain itself. On this account of introspection, the immediacy and non-inferential nature of introspective belief is not merely compatible with functionalism, but required by it.
But there is an objection, most recently expressed by George Bealer (1997; see also Hill 1993), that, on this model an introspective belief can only be defined in one of two unsatisfactory ways: either as a belief produced by a (second-order) functional state specified (in part) by its tendency to produce that very type of belief — which would be circular — or as a belief about the first-order realization of the functional state, rather than that state itself. Functionalists have suggested, however (Shoemaker 2001, McCullagh 2000, Tooley 2001), that there is a way of understanding the conditions under which beliefs can be caused by, and thus be about, one's second-order functional states that permits mental states and introspective beliefs about them to be non-circularly defined (but see Bealer 2001, for a skeptical response). A full treatment of this objection involves the more general question of whether second-order properties can have causal efficacy, and is thus beyond the scope of this discussion (see section 5.2 and Mental Causation entry). But even if this objection can ultimately be deflected, it suggests that special attention must be paid to the functional characterizations of “self-directed” mental states.
Yet another objection to functionalism raises the question of whether any “theory” of the mind that invokes beliefs, desires, and other intentional states could ever be, or even approximate, an empirical theory. Whereas even analytic functionalists hold that mental states are implicitly defined in terms of their (causal or probabilistic) roles in producing behavior, these critics take mental states, or at least intentional states, to be implicitly defined in terms of their roles in rationalizing, or making sense of, behavior. This is a different enterprise, they claim, since rationalization, unlike causal explanation, requires showing how an individual's beliefs, desires, and behavior conform to, or at least approximate, certain a priori norms or ideals of theoretical and practical reasoning — prescriptions about how we should reason, or what, given our beliefs and desires, we ought to do (Davidson 1980c, Dennett 1978, McDowell 1985). Thus the defining (“constitutive”) normative or rational relations among intentional states expressed by these principles cannot be expected to correspond to empirical relations among our internal states, sensory stimulations, and behavior, since rationalization comprises a kind of explanation with sources of evidence and standards of correctness that differ from those of empirical theories (Davidson 1980c). One can't, that is, extract facts from values.
Thus, although attributions of mental states can in some sense explain behavior, by permitting an observer to “interpret” it as making sense, they should not be expected to denote entities that figure in empirical laws. (This is not to say, these theorists stress, that there are no causes of, or empirical laws governing, behavior. These, however, will be expressible only in the vocabularies of the neurosciences, or other lower-level sciences, and not as relations among beliefs, desires, and behavior.)
Functionalists have replied to these worries in different ways. Many simply deny the intuition behind the objection, and maintain that even the strictest conceptual analyses of our intentional terms and concepts purport to define them in terms of their bona fide causal roles, and that any norms they reflect are explanatory rather than prescriptive. They argue, that is, that if these generalizations are idealizations, they are the sort of idealizations that occur in any scientific theory: just as Boyle's Law depicts the relations between the temperature, pressure, and volume of a gas under certain ideal experimental conditions, our a priori theory of the mind consists of descriptions of what normal humans would do under (physically specifiable) ideal conditions, not prescriptions as to what they should do, or are rationally required to do.
Other functionalists agree that we may advert to various norms of inference and action in attributing beliefs and desires to others, but deny that there is any in-principle incompatibility between normative and empirical explanations. They argue that if there are causal relations among beliefs, desires, and behavior that even approximately mirror the norms of rationality, then attributions of intentional states can be empirically confirmed (Fodor 1990; Rey 1997). In addition, many who hold this view suggest that the principles of rationality that intentional states must meet are quite minimal, and comprise at most a set of constraints on the contours of our theory of mind, such as that people can't, in general, hold (obviously) contradictory beliefs, or act against their (sincerely avowed) strongest desires (Loar 1981). Still others suggest that the intuition that we attribute beliefs and desires to others according to rational norms is based on a fundamental mistake; these states are attributed not on the basis of whether they rationalize the behavior in question, but on the basis of whether those subjects can be seen as using principles of inference and action sufficiently like our own — be they rational, like Modus Ponens, or irrational, like the Gambler's Fallacy. (See Stich 1981 and Levin 1988 for commentary.)
But, though many functionalists argue that the considerations discussed above show that there is no in-principle bar to a functionalist theory with empirical force, these worries about the normativity of intentional ascription continue to fuel skepticism about functionalism (and, for that matter, any scientific theory of the mind that uses intentional notions). For a recent debate about the matter, see Rey 2007 and Wedgwood 2007.
In addition to these general worries about functionalism, there are particular questions that arise for the prospects of giving a functional characterization of qualitative states. These will be discussed in the following section.
Even for those generally sympathetic to functionalism, there is one category of mental states that seems particularly resistant to functional characterization. Functionalist theories of all varieties — whether analytic or empirical, FSIT or functional specification — attempt to characterize mental states exclusively in relational, specifically causal, terms. A common and persistent objection, however, is that no such characterizations can capture the qualitative character, or “qualia”, of experiential states such as perceptions, emotions, and bodily sensations, since they would leave out certain of their essential properties, namely, “what it's like” (Nagel 1974) to have them. The next three sections will present the most serious worries about the ability of functionalist theories to give an adequate characterization of these states. (These worries, of course, will extend to intentional states, if, as some philosophers have argued (Searle 1992, G. Strawson 1986, Kriegel 2003, Pitt 2008), “what it's like” to have them is among their essential properties as well. See also entry on Mental Representation.)
The first to be considered are the “absent” and “inverted” qualia objections most closely associated with Ned Block (1980b; see also Block and Fodor 1972). The “inverted qualia” objection to functionalism maintains that there could be an individual who (for example) satisfies the functional definition of our experience of red, but is experiencing green instead. It is a descendant of the claim, discussed by philosophers from Locke to Wittgenstein, that there could be an individual with an “inverted spectrum” who is behaviorally indistinguishable from someone with normal color vision; both objections trade on the contention that purely relational characterizations cannot make distinctions among distinct experiences with isomorphic causal patterns. (Even if inverted qualia are not really a possibility for human beings, given certain asymmetries in our “quality space” for color, and differences in the relations of color experiences to other mental states such as emotions (Hardin 1988), it seems possible that there are creatures with perfectly symmetrical color quality spaces for whom a purely functional characterization of color experience would fail.)
A related objection, the “absent qualia” objection, maintains that there could be creatures functionally equivalent to normal humans whose mental states have no qualitative character at all. In his well-known “Chinese nation” thought-experiment, Block (1980b) imagines that the population of China (chosen because its size approximates the number of neurons in a typical human brain) is recruited to duplicate his functional organization for a period of time, receiving the equivalent of sensory input from an artificial body and passing messages back and forth via satellite. Block argues that such a “homunculi-headed” system — or “Blockhead”, as it has come to be called — would not have mental states with any qualitative character (other than the qualia possessed by the individuals themselves), and thus that states functionally equivalent to sensations or perceptions may lack their characteristic “feels”. Conversely, it has also been argued that functional role is not necessary for qualitative character: for example, the argument goes, people may have mild, but distinctive, twinges that have no typical causes or characteristic effects.
All these objections purport to have characterized a creature with the functional organization of normal human beings, but without any, or the right sort of, qualia (or vice versa), and thus to have produced a counterexample to functional theories of qualia. In response, some theorists (Dennett 1978; Levin 1985; Van Gulick 1989) argue that these characterizations provide clear-cut counterexamples only to crude functional theories, and that attention to the subtleties of more sophisticated characterizations will undermine the intuition that functional duplicates of ourselves with absent or inverted qualia are possible (or, conversely, that there are qualitative states without distinctive functional roles). The plausibility of this line of defense is often questioned, however, since there is tension between the goal of increasing the sophistication (and thus the individuative powers) of the functional definitions, and the goal (for analytic functionalists) of keeping these definitions within the bounds of the a priori (though see Section 4.2), or (for psycho-functionalists) of keeping them broad enough to be instantiable by creatures other than human beings. Another response to the “absent qualia” objection is to suggest that the intuition that creatures such as Blockheads lack mental states with qualitative character is based on prejudice — against creatures with unfamiliar shapes and extended reaction times (Dennett 1978), or creatures with parts widely distributed in space (Schwitzgebel 2013, in Other Internet Resources, and commentary).
Yet another line of response, initially advanced by Sydney Shoemaker (1994b), attributes a different sort of incoherence to the “absent qualia” scenarios. Shoemaker argues that although functional duplicates of ourselves with inverted qualia may be possible, duplicates with absent qualia are not, since their possibility leads to untenable skepticism about the qualitative character of one's own mental states. This argument has been challenged (Block 1980b; see also Shoemaker's response in 1994d, and Balog 1999), but successful or not, it raises questions about the nature of introspection and the conditions under which, if functionalism is true, we can have knowledge of our own mental states. These questions are discussed further in section 5.3.
The “inverted” and “absent” qualia objections were initially presented as challenges exclusively to functionalist theories, both conceptual and empirical, and not generally to physicalistic theories of experiential states; the main concern was that the purely relational resources of functional description were incapable of capturing the intrinsic qualitative character of states such as feeling pain, or seeing red. (Indeed, Block (1980b, p. 291) suggests that qualitative states may best be construed as “composite state[s] whose components are a quale and a [functional state],” and adds, in an endnote (note 22), that the quale “might be identified with a physico-chemical state”.) But these objections are now generally regarded as special cases of the “conceivability argument” against physicalism, advanced by (among others) Kripke (1972) and Chalmers (1996a), which derives from Descartes's well-known argument in the Sixth Meditation (1641) that since he can clearly and distinctly conceive of himself existing apart from his body (and vice versa), and since the ability to clearly and distinctly conceive of things as existing apart guarantees that they are in fact distinct, he must be distinct from his body.
Chalmers's version of the argument (1996a, 2002), known as the “Zombie Argument”, has been particularly influential. The first premise of this argument is that it is conceivable, in a special, robust, “positive” sense, that there are molecule-for-molecule duplicates of oneself with no qualia (call them “zombies”, following Chalmers 1996a). The second premise is that scenarios that are “positively” conceivable in this way represent real metaphysical possibilities. Thus, he concludes, zombies are possible, and functionalism — or, more broadly, physicalism — is false. The force of the Zombie Argument is due in large part to the way Chalmers defends its two premises; he provides a detailed account of just what is required for zombies to be conceivable, and also an argument as to why the conceivability of zombies entails their possibility (see also Chalmers 2002, 2006, 2010, Ch. 6, and Chalmers and Jackson 2002). This account, which rests on “two-dimensional semantics”, a more comprehensive theory of how we can have knowledge or justified belief about possibility and necessity, reflects an increasingly popular way of thinking about these matters, but remains controversial. (For alternative ways of explaining conceivability, see Kripke (1986) and Hart (1988); for criticism of the argument from two-dimensional semantics, see Yablo 2000, 2002, Bealer 2002, Stalnaker 2002, Soames 2004, Byrne and Pryor 2006; but see also Chalmers 2006.)
In a related challenge, Joseph Levine (1983, 1993, 2001) argues that, even if the conceivability of zombies doesn't entail that functionalism (or, more broadly, physicalism) is false, it opens an “explanatory gap” not encountered in other cases of inter-theoretical reduction, since the qualitative character of an experience cannot be deduced from any physical or functional description of it. Zombie scenarios thus pose, at the very least, a unique epistemological problem for functionalist (or physicalist) reductions of qualitative states.
In response to these objections, analytic functionalists contend, as they did with the inverted and absent qualia objections, that sufficient attention to what is required for a creature to duplicate our functional organization would reveal that zombies are not really conceivable, and thus there is no threat to functionalism and no explanatory gap. A related suggestion is that, while zombies may now seem conceivable, we will eventually find them inconceivable, given the growth of empirical knowledge, just as we now find it inconceivable that there could be H2O without water (Yablo 1999). Alternatively, some suggest that the inconceivability of zombies awaits the development of new concepts that can provide a link between our current phenomenal and physical concepts (Nagel 1975, 2000), while others (McGinn 1989) agree, but deny that humans are capable of forming such concepts.
However, an increasingly popular strategy for defending functionalism (and physicalism) against these objections is to concede that there can be no conceptual analyses of qualitative concepts (such as what it's like to see red or what it's like to feel pain) in purely functional terms, and focus instead on developing arguments to show that the conceivability of zombies neither implies that such creatures are possible nor opens up an explanatory gap.
One line of argument (Block and Stalnaker 1999; Yablo 2000) contends that the conceivability of (alleged) counterexamples to psycho-physical or psycho-functional identity statements, such as zombies, has analogues in other cases of successful inter-theoretical reduction, in which the lack of conceptual analyses of the terms to be reduced makes it conceivable, though not possible, that the identities are false. However, the argument continues, if these cases routinely occur in what are generally regarded as successful reductions in the sciences, then it's reasonable to conclude that the conceivability of a situation does not entail its possibility.
A different line of argument (Horgan 1994; Loar 1990; Lycan 1990; Hill 1997, Hill and McLaughlin 1999, Balog 1999) maintains that, while generally the conceivability of a scenario entails its possibility, scenarios involving zombies stand as important exceptions. The difference is that the qualitative, or “what it's like”, concepts used to describe the properties of experience that we conceive zombies to lack are significantly different from the third-personal, discursive concepts of our common sense and scientific theories such as mass, force or charge; they comprise a special class of non-discursive, first-personal, perspectival representations of those properties. Whereas conceptually independent third-personal concepts x and y may reasonably be taken to express metaphysically independent properties, or modes of presentation, no such metaphysical conclusions can be drawn when one of the concepts in question is third-personal and the other is qualitative, since these concepts may merely be picking out the same properties in different ways. Thus, the conceivability of zombies, dependent as it is on our use of qualitative concepts, provides no evidence of their metaphysical possibility.
Key to this line of defense is the claim that these special qualitative concepts can denote functional (or physical) properties without expressing some irreducibly qualitative modes of presentation of them, for otherwise it couldn't be held that these concepts do in fact apply to our functional (or physical) duplicates, even though it's conceivable that they don't. This, not surprisingly, has been disputed (See White 2002 in the Other Internet Resources; Chalmers 1999, but see the responses of Loar 1999, and Hill and McLaughlin 1999; also see Levin 2002, 2008, for a hybrid view), and there is currently much discussion in the literature about the plausibility of this claim. If this line of defense is successful, however, it can also provide a response to the “Distinct Property Argument”, discussed in section 3.3.
In another important, related, challenge to functionalism (and, more generally, physicalism), Thomas Nagel (1974) and Frank Jackson (1982) argue that a person could know all the physical and functional facts about a certain type of experience and still not “know what it's like” to have it. This is known as the “Knowledge Argument”, and its conclusion is that there are certain properties of experiences — the “what it's like” to see red, feel pain, or sense the world through echolocation — which cannot be identified with functional (or physical) properties. Though neither Nagel (2000) nor Jackson (1998) now endorses this argument, many philosophers contend that it raises special problems for any physicalistic view (see Alter 2007, and, in response, Jackson 2007).
An early line of defense against these arguments, endorsed primarily but not exclusively by a priori functionalists, is known as the “Ability Hypothesis” (Nemirow 1990, 2007; Lewis 1990; Levin 1986). “Ability” theorists suggest that knowing what it's like to see red or feel pain is merely a sort of practical knowledge, a “knowing how” (to imagine, remember, or re-identify a certain type of experience) rather than a knowledge of propositions or facts. (See Tye 2000 for a summary of the pros and cons of this position.) An alternative, and currently more widespread, view among contemporary functionalists is that coming to know what it's like to see red or feel pain is indeed to acquire propositional knowledge uniquely afforded by experience, expressed in terms of first-personal concepts of those experiences. But, the argument continues, this provides no problem for functionalism (or physicalism), since these special first-personal concepts need not denote, or introduce as “modes of presentation”, any irreducibly qualitative properties. This view, of course, shares the strengths and weaknesses of the analogous response to the conceivability arguments discussed above. For further discussion, see the essays in Ludlow, Nagasawa, and Stoljar 2004.
There is one final strategy for defending a functionalist account of qualitative states against all of these objections, namely, eliminativism (Dennett 1988; Rey 1997). One can, that is, deny that there are any such things as irreducible qualia, and maintain that the conviction that such things do, or perhaps even could, exist is due to illusion — or confusion.
In the last part of the 20th century, functionalism stood as the dominant theory of mental states. Like behaviorism, functionalism takes mental states out of the realm of the “private” or subjective, and gives them status as entities open to scientific investigation. But, in contrast to behaviorism, functionalism's characterization of mental states in terms of their roles in the production of behavior grants them the causal efficacy that common sense takes them to have. And in permitting mental states to be multiply realized, functionalism seems to offer an account of mental states that is compatible with materialism, without limiting the class of those with minds to creatures with brains like ours.
More recently, however, there has been a resurgence of interest in the psycho-physical (type-) identity thesis, fueled in part by the contention that, in the actual practice of neuroscience, neural states are type-individuated more coarsely than early identity theorists such as Place, Feigl, and Smart assumed. If this is so, then it may well be that creatures that differ from us in their fine-grained neurophysiological make-up can nonetheless share our neural states, and thus that the psycho-physical identity thesis can claim some of the scope once thought to be exclusive to functionalism. The plausibility of this thesis depends, first, on whether such creatures would in fact be our functional equivalents, and, if so, on whether their underlying similarities would in fact be (coarse-grained) neural similarities, and not (finer-grained) psycho-functional similarities. (See Bechtel 2012, Bickle 2012, McCauley 2012, and Shapiro and Polger 2012, and the essays in De Joong and Shouten 2012, for further discussion; see also the entry on Multiple Realizability.) Even so, it seems that there could be creatures, both biological and non-biological, that are functionally equivalent to us but do not possess even our coarse-grained neural properties. If so, and if these creatures can plausibly be regarded as sharing our mental states—to be sure, a controversial thesis—then even if neural states can be individuated more coarsely, functionalism will retain its claim to greater universality than the identity thesis.
There remain other, more recently posed, questions about functionalism, among them whether there can be plausible functional characterizations of non-standard perceptual experiences, such as synaesthesia (see Gray et al. 2002 for an articulation of these questions, and Gray 2004 for a response), and whether functional theories can accommodate non-standard views about the location of mental states, such as the hypothesis of extended cognition, which maintains that certain mental states such as memories—and not just their representational contents—can reside outside the head. (See Clark and Chalmers 2002, Adams and Aizawa 2008, Rupert 2009, and Sprevak 2009, for further discussion.)
In general, the sophistication of functionalist theories has increased since their introduction, but so has the sophistication of the objections to functionalism, especially to functionalist accounts of mental causation (section 5.2), introspective knowledge (Section 5.3), and the qualitative character of experiential states (Section 5.5). For those unconvinced of the plausibility of dualism, however, and unwilling to restrict mental states to creatures physically like ourselves, the initial attractions of functionalism remain. The primary challenge for future functionalists, therefore, will be to meet these objections to the doctrine, either by articulating a functionalist theory in increasingly convincing detail, or by showing how the intuitions that fuel these objections can be explained away.
- Adams, F. and K. Aizawa, 2008. The Bounds of Cognition, Oxford: Wiley-Blackwell.
- Aizawa, K., 2008. “Neuroscience and Multiple Realization: A Reply to Bechtel and Mundale”, Synthese, 167: 495–510.
- Alter, T., 2007. “Does Representationalism Undermine the Knowledge Argument?”, in Alter and Walter (2007), 65–76.
- Alter, T. and S. Walter, 2007. Phenomenal Concepts and Phenomenal Knowledge, New York: Oxford University Press.
- Antony, L. and J. Levine, 1997. “Reduction With Autonomy”, Philosophical Perspectives, 11: 83–105.
- Armstrong, D., 1968. A Materialist Theory of the Mind, London: RKP.
- –––, 1981. The Nature of Mind, Brisbane: University of Queensland Press.
- –––, 1993. “Causes are perceived and introspected”, Behavioral and Brain Sciences, 16(1): 29.
- Baker, L. R., 1995. “Metaphysics and Mental Causation”, in Heil and Mele 1995, 75–96.
- Balog, K., 1999. “Conceivability, Possibility, and the Mind-Body Problem”, Philosophical Review, 108(4): 497–528.
- Bealer, G., 1997. “Self-Consciousness”, Philosophical Review, 106: 69–117.
- –––, 2001. “The self-consciousness argument: Why Tooley's criticisms fail”, Philosophical Studies, 105(3): 281–307.
- –––, 2002. “Modal Epistemology and the Rationalist Renaissance”, in Gendler and Hawthorne 2002, 71–126.
- Bechtel, W., 2012. “Identity, reduction, and conserved mechanisms: perspectives from circadian rhythm research”, in Gozzano and Hill 2012, 88–110.
- Bennett, K., 2003. “Why the Exclusion Problem Seems Intractable, and How, Just Maybe, to Tract It”, Noûs, 37(3): 471–497.
- –––, 2007. “Mental Causation”, Philosophy Compass, 2(2): 316–337.
- Bickle, J., 2012. “A brief history of neuroscience's actual influences on mind-brain reductionism”, in Gozzano and Hill 2012, 43–65.
- Block, N., 1980a. Readings in the Philosophy of Psychology, Volumes 1 and 2, Cambridge, MA: Harvard University Press.
- –––, 1980b. “Troubles With Functionalism”, in Block 1980a, 268–305.
- –––, 1980c. “Are Absent Qualia Impossible?”, Philosophical Review, 89: 257–274.
- –––, 1986. “Advertisement for a Semantics for Psychology”, in P. French, T. Uehling, and H. Wettstein (eds.), Midwest Studies in Philosophy, 10: 615–678.
- –––, 1990. “Inverted Earth”, in J. Tomberlin (ed.), Philosophical Perspectives, 4, Atascadero, CA: Ridgeview Press, 52–79.
- –––, 1997. “Anti-Reductionism Slaps Back”, Philosophical Perspectives, 11, Atascadero, CA: Ridgeview Press, 107–132.
- Block, N., and O. Flanagan and G. Guzeldere (eds.), 1997. The Nature of Consciousness, Cambridge, MA: MIT Press.
- Block, N. and J. Fodor, 1972. “What Psychological States Are Not”, Philosophical Review, 81: 159–181.
- Block, N. and R. Stalnaker, 1999. “Conceptual Analysis, Dualism, and the Explanatory Gap”, Philosophical Review, 108(1): 1–46.
- Braddon-Mitchell, D. and F. Jackson, 1996/2007. Philosophy of Mind and Cognition, Malden, MA: Blackwell Publishing.
- Burge, T., 1979. “Individualism and the Mental”, Midwest Studies in Philosophy, 4: 73–121.
- –––, 1995. “Mind-Body Causation and Explanatory Practice”, in Heil and Mele 1995, 97–120.
- Byrne, A. and J. Pryor, 2006. “Bad Intensions”, in M. Garcia-Carpintero and J. Macia (eds.), Two-Dimensional Semantics, Oxford: Oxford University Press, 38–54.
- Chalmers, D., 1996a. The Conscious Mind, Oxford: Oxford University Press.
- –––, 1996b. “Does a Rock Implement Every Finite State Automaton?”, Synthese, 108: 309–333.
- –––, 1999. “Materialism and the Metaphysics of Modality”, Philosophy and Phenomenological Research, 59(2): 473–496.
- –––, 2002. “Does Conceivability Entail Possibility?”, in Gendler and Hawthorne 2002, 145–200.
- –––, 2006. “The Foundations of Two-Dimensional Semantics”, in M. Garcia-Carpintero and J. Macia (eds.), Two-Dimensional Semantics, Oxford: Oxford University Press, 55–140.
- –––, 2010. The Character of Consciousness, New York: Oxford University Press.
- Chisholm, R., 1957. Perceiving, Ithaca: Cornell University Press.
- Chomsky, N., 1959. “Review of Skinner's Verbal Behavior”, Language, 35: 26–58; reprinted in Block 1980a, 48–63.
- Churchland, P., 1981. “Eliminative Materialism and Propositional Attitudes”, Journal of Philosophy, 78: 67–90.
- –––, 2005. “Functionalism at Forty: A Critical Retrospective”, Journal of Philosophy, 102: 33–50.
- Clark, A. and D. Chalmers, 2002. “The Extended Mind”, in D. Chalmers (ed.), Philosophy of Mind, New York: Oxford University Press, 643–651.
- Crane, T., 1995. “The Mental Causation Debate”, Proceedings of the Aristotelian Society, 69 (Supplement): 211–236.
- Davidson, D., 1980a. Essays on Actions and Events, New York: Oxford University Press.
- –––, 1980b. “Actions, Reasons, and Causes”, in Davidson 1980a.
- –––, 1980c. “Mental Events”, in Davidson 1980a.
- De Joong, H.L. and M. Shouten (eds.), 2012. Rethinking Reduction, Oxford: Blackwell Publishers.
- Dennett, D., 1978a. “Intentional Systems”, in Dennett 1978c, 3–22.
- –––, 1978b. “Towards a Cognitive Theory of Consciousness”, in Dennett 1978c, 149–173.
- –––, 1978c. Brainstorms, Montgomery, VT: Bradford Books.
- –––, 1988. “Quining Qualia”, in A. Marcel and E. Bisiach (eds.), Consciousness in Contemporary Science, New York: Oxford University Press, 43–77.
- Feigl, H., 1958. “The ‘Mental’ and the ‘Physical’”, in H. Feigl, M. Scriven, and G. Maxwell (eds.), Minnesota Studies in the Philosophy of Science: Concepts, Theories, and the Mind-Body Problem, Minneapolis: University of Minnesota Press, 370–409.
- Field, H., 1980. “Mental Representation”, in Block 1980a, 78–114.
- Fodor, J., 1968. Psychological Explanation, New York: Random House.
- –––, 1975. The Language of Thought, New York: Crowell.
- –––, 1990a. “Fodor's Guide to Mental Representation”, in J. Fodor, A Theory of Content and Other Essays, Cambridge, MA: MIT Press, 3–29.
- –––, 1990b. “Making Mind Matter More”, in J. Fodor, A Theory of Content and Other Essays, Cambridge, MA: MIT Press, 137–159.
- –––, 1994. The Elm and the Expert, Cambridge, MA: MIT Press.
- Frege, G., 1892/1952. “On Sense and Reference”, in P. Geach and M. Black (eds.), Translations from the Work of Gottlob Frege, Oxford: Blackwell.
- Funkhouser, E., 2007. “Multiple Realizability”, Philosophy Compass, 2(2): 303–315.
- Geach, P., 1957. Mental Acts, London: RKP.
- Gendler, T. and J. Hawthorne (eds.), 2002. Conceivability and Possibility, Oxford: Oxford University Press.
- Goldman, A., 1993. “The psychology of folk psychology”, Behavioral and Brain Sciences, 16(1): 15–28.
- Gozzano, S. and C. Hill, 2012. New Perspectives on Type Identity, New York: Cambridge University Press.
- Gray, J.A., S. Chopping, J. Nunn, D. Parslow, L. Gregory, S. Williams, M.J. Brammer, and S. Baron-Cohen, 2002. “Implications of Synaesthesia for Functionalism”, Journal of Consciousness Studies, 9(12): 5–31.
- Hardin, C.L., 1988. Color for Philosophers: Unweaving the Rainbow, Indianapolis: Hackett.
- Harman, G., 1973. Thought, Princeton, NJ: Princeton University Press.
- Hart, W., 1988. The Engines of the Soul, Cambridge: Cambridge University Press.
- Hill, C., 1991. Sensations, Cambridge: Cambridge University Press.
- –––, 1993. “Qualitative characteristics, type materialism, and the circularity of analytic functionalism”, Behavioral and Brain Sciences, 16(1): 50–51.
- –––, 1997. “Imaginability, Conceivability, Possibility and the Mind-Body Problem”, Philosophical Studies, 87: 61–85.
- Hill, C. and B. McLaughlin, 1999. “There are Fewer Things in Reality than Dreamt of in Chalmers' Philosophy”, Philosophy and Phenomenological Research, 59(2): 445–454.
- Horgan, T., 1984. “Jackson on Physical Information and Qualia”, Philosophical Quarterly, 34: 147–183.
- Horgan, T. and J. Tienson, 2002. “The Intentionality of Phenomenology and the Phenomenology of Intentionality”, in D. Chalmers (ed.), Philosophy of Mind, New York: Oxford University Press, 520–533.
- Horgan, T. and J. Woodward, 1985. “Folk Psychology is Here to Stay”, Philosophical Review, 94: 197–226.
- Jackson, F., 1982. “Epiphenomenal Qualia”, Philosophical Quarterly, 32: 127–136.
- –––, 1996. “Mental Causation”, Mind, 105(419): 377–413.
- –––, 1998. “Postscript on Qualia”, in F. Jackson, Mind, Method, and Conditionals: Selected Essays, London: Routledge, 76–79.
- –––, 2007. “The Knowledge Argument, Diaphanousness, Representationalism”, in Alter and Walter 2007, 52–64.
- Kim, J., 1989. “Mechanism, Purpose, and Explanatory Exclusion”, in J. Kim, Supervenience and Mind, Cambridge: Cambridge University Press.
- –––, 1998. Mind in a Physical World, Cambridge, MA: Bradford.
- –––, 2007. “Causation and Mental Causation”, in McLaughlin and Cohen 2007, 227–242.
- Kobes, B., 1993. “Self-attributions help constitute mental types”, Behavioral and Brain Sciences, 16(1): 54–56.
- Kriegel, U., 2003. “Is intentionality dependent on consciousness?”, Philosophical Studies, 116(3): 271–307.
- Kripke, S., 1980. Naming and Necessity, Cambridge, MA: Harvard University Press.
- Levin, J., 1985. “Functionalism and the Argument from Conceivability”, Canadian Journal of Philosophy, 11 (Supplement): 85–104.
- –––, 1986. “Could Love be Like a Heatwave?”, Philosophical Studies, 49: 245–261.
- –––, 1988. “Must Reasons be Rational?”, Philosophy of Science, 55: 199–217.
- –––, 2002. “Is Conceptual Analysis Needed for the Reduction of Qualitative States?”, Philosophy and Phenomenological Research, 64(3): 571–591.
- –––, 2008. “Taking Type-B Materialism Seriously”, Mind and Language, 23(4): 402–425.
- Levine, J., 1983. “Materialism and Qualia: The Explanatory Gap”, Pacific Philosophical Quarterly, 64: 354–361.
- –––, 1993. “On Leaving Out What it's Like”, in M. Davies and G. Humphries (eds.), Consciousness, Oxford: Blackwell Publishers, 121–136.
- –––, 2001. Purple Haze, New York: Oxford University Press.
- Lewis, D., 1966. “An Argument for the Identity Theory”, Journal of Philosophy, 63: 17–25.
- –––, 1972. “Psychophysical and Theoretical Identifications”, in Block 1980, 207–215.
- –––, 1980. “Mad Pain and Martian Pain”, in Block 1980, 216–222.
- –––, 1990. “What Experience Teaches”, in Lycan 1990a, 499–519.
- Loar, B., 1981. Mind and Meaning, Cambridge: Cambridge University Press.
- –––, 1987. “Social Content and Psychological Content”, in P. Grimm and D. Merrill (eds.), Contents of Thought, Tucson: University of Arizona Press, 99–110.
- –––, 1997. “Phenomenal States” (revised version), in Block, et al., 1997, 597–616.
- –––, 1999. “David Chalmers's The Conscious Mind”, Philosophy and Phenomenological Research, 59(2): 439–444.
- Loewer, B., 2002. “Comments on Jaegwon Kim's Mind in a Physical World”, Philosophy and Phenomenological Research, 65(3): 655–662.
- –––, 2007. “Mental Causation, or Something Near Enough”, in McLaughlin and Cohen 2007, 243–264.
- Ludlow, P., Y. Nagasawa, and D. Stoljar (eds.), 2004. There's Something About Mary: Essays on Phenomenal Knowledge and Frank Jackson's Knowledge Argument, Cambridge, MA: MIT Press.
- Ludwig, K., 1998. “Functionalism, Causation, and Causal Relevance”, Psyche, 4(3) (March).
- Lycan, W., 1987. Consciousness, Cambridge MA: MIT Press.
- –––, 1990a. Mind and Cognition, Oxford: Blackwell Publishers.
- –––, 1990b. “The Continuity of Levels of Nature”, in Lycan 1990a, 77–96.
- Lyons, J., 2006. “In Defense of Epiphenomenalism”, Philosophical Psychology, 19(6): 767–794.
- Macdonald, C. and G. Macdonald, 1995. “How to be Psychologically Relevant”, in C. Macdonald and G. Macdonald (eds.), Philosophy of Psychology: Debates on Psychological Explanation (Volume 1), Oxford: Blackwell, 60–77.
- Malcolm, N., 1968. “The Conceivability of Mechanism”, Philosophical Review, 77: 45–72.
- McCauley, R., 2012. “About face: philosophical naturalism, the heuristic identity theory, and recent findings about prosopagnosia”, in Gozzano and Hill 2012, 186–206.
- McCullagh, M., 2000. “Functionalism and Self-Consciousness”, Mind and Language, 15(5): 481–499.
- McDowell, J., 1985. “Functionalism and Anomalous Monism”, in E. LePore and B. McLaughlin (eds.), Actions and Events: Perspectives on the Philosophy of Donald Davidson, Oxford: Blackwell Publishers, 387–398.
- McGinn, C., 1989. “Can We Solve the Mind-Body Problem?”, Mind, 98: 349–66.
- McLaughlin, B., 2006. “Is Role-Functionalism Committed to Epiphenomenalism?”, Journal of Consciousness Studies, 13(1–2): 39–66.
- –––, forthcoming. “Does Mental Causation Require Psychophysical Identities?”, in T. Horgan, M. Sabates, and D. Sosa (eds.), Qualia and Causation in a Physical World: Themes From the Work of Jaegwon Kim, Cambridge: Cambridge University Press.
- McLaughlin, B. and J. Cohen (eds.), 2007. Contemporary Debates in the Philosophy of Mind, Malden, MA: Blackwell Publishing.
- Melnyk, A., 2003. A Physicalist Manifesto: Thoroughly Modern Materialism, Cambridge: Cambridge University Press.
- Mumford, S., 1998. Dispositions, Oxford: Oxford University Press.
- Nagel, T., 1974. “What Is It Like To Be a Bat?”, Philosophical Review, 83: 435–450.
- –––, 2000. “The Psycho-physical Nexus”, in P. Boghossian and C. Peacocke (eds.), New Essays on the A Priori, New York: Oxford University Press, 434–471.
- Nemirow, L., 1990. “Physicalism and the Cognitive Role of Acquaintance”, in Lycan 1990a, 490–498.
- –––, 2007. “So This is What It's Like: A Defense of the Ability Hypothesis”, in Alter and Walter 2007, 32–51.
- Ney, A., 2012. “The causal contribution of mental events”, in Gozzano and Hill 2012, 230–250.
- Peacocke, C., 1999. Being Known, Oxford: Oxford University Press.
- Piccinini, G., 2004. “Functionalism, computationalism, and mental states”, Studies in History and Philosophy of Science, 35: 811–833.
- Pitt, D., 2004. “The phenomenology of cognition, or, what is it like to think that P?”, Philosophy and Phenomenological Research, 69(1): 1–36.
- Place, U.T., 1956. “Is Consciousness a Brain Process?”, British Journal of Psychology, 47: 44–50.
- Polger, T., 2011. “Are Sensations Still Brain Processes?”, Philosophical Psychology, 24(1): 1–21.
- Prior, E., R. Pargetter, and F. Jackson, 1982. “Three Theses About Dispositions”, American Philosophical Quarterly, 19(3): 251–257.
- Putnam, H., 1960. “Minds and Machines”, reprinted in Putnam 1975b, 362–385.
- –––, 1963. “Brains and Behavior”, reprinted in Putnam 1975b, 325–341.
- –––, 1967. “The Nature of Mental States”, reprinted in Putnam 1975b, 429–440.
- –––, 1973. “Philosophy and our Mental Life”, reprinted in Putnam 1975b, 291–303.
- –––, 1975a. “The Meaning of ‘Meaning’ ”, reprinted in Putnam 1975b, 215–271.
- –––, 1975b. Mind, Language, and Reality, Cambridge: Cambridge University Press.
- –––, 1988. Representation and Reality, Cambridge, MA: MIT Press.
- Quine, W.V., 1953. “Two Dogmas of Empiricism”, in W. V. Quine, From a Logical Point of View, New York: Harper and Row.
- Rey, G., 1997. Contemporary Philosophy of Mind, Cambridge, MA: Blackwell.
- –––, 2007. “Resisting Normativism in Psychology”, in McLaughlin and Cohen 2007, 69–84.
- Rupert, R., 2006. “Functionalism, Mental Causation, and the Problem of Metaphysically Necessitated Effects”, Noûs, 40: 256–283.
- –––, 2009. Cognitive Systems and the Extended Mind, New York: Oxford University Press.
- Ryle, G., 1949. The Concept of Mind, London: Hutchinson.
- Schaffer, J., 2003. “Overdetermining Causes”, Philosophical Studies, 114: 23–45.
- Searle, J., 1980. “Minds, Brains and Programs”, Behavioral and Brain Sciences, 3: 417–457.
- –––, 1992. The Rediscovery of Mind, Cambridge, MA: MIT Press.
- Sellars, W., 1956. “Empiricism and the Philosophy of Mind”, in M. Scriven, P. Feyerabend and G. Maxwell (eds.), Minnesota Studies in the Philosophy of Science, Volume 1, Minneapolis: University of Minnesota Press.
- Shagrir, O., 2005. “The Rise and Fall of Computational Functionalism”, in Y. Ben-Menahem (ed.), Hilary Putnam, Cambridge: Cambridge University Press, 220–250.
- Shapiro, L. and T. Polger, 2012. “Identity, variability, and multiple realization in the special sciences”, in Gozzano and Hill 2012, 264–287.
- Shoemaker, S., 1984. Identity, Cause, and Mind, Cambridge: Cambridge University Press.
- –––, 1984a. “Phenomenal similarity”, in Shoemaker 1984, 159–183.
- –––, 1984b. “Functionalism and qualia”, in Shoemaker 1984, 184–205.
- –––, 1984c. “Some varieties of functionalism”, in Shoemaker 1984, 261–286.
- –––, 1996. The first-person perspective and other essays, Cambridge: Cambridge University Press.
- –––, 1996a. “Introspection and the self”, in Shoemaker 1996, 3–24.
- –––, 1996b. “On knowing one's own mind”, in Shoemaker 1996, 25–49.
- –––, 1996c. “First-person access”, in Shoemaker 1996, 50–73.
- –––, 1996d. “Self-knowledge and ‘inner sense’: Lecture I”, in Shoemaker 1996, 201–223.
- –––, 2001. “Realization and Mental Causation”, in C. Gillett and B. Loewer (eds.), Physicalism and Its Discontents, Cambridge: Cambridge University Press, 74–98.
- Sider, T., 2003. “What's so Bad About Overdetermination?”, Philosophy and Phenomenological Research, 67: 716–726.
- Smart, J.J.C., 1959. “Sensations and Brain Processes”, Philosophical Review, 68: 141–156.
- Soames, S., 2004. Reference and Description, Princeton, NJ: Princeton University Press.
- Sprevak, M., 2009. “Extended cognition and functionalism”, Journal of Philosophy, 106(9): 503–527.
- Stalnaker, R., 2002. “What is it Like to Be a Zombie?”, in Gendler and Hawthorne 2002, 385–400.
- Sterelny, K., 1993. “Categories, categorisation and development: Introspective knowledge is no threat to functionalism”, Behavioral and Brain Sciences, 16(1): 81–83.
- Stich, S., 1981. “Dennett on Intentional Systems”, Philosophical Topics, 12: 39–62.
- –––, 1983. From Folk Psychology to Cognitive Science, Cambridge, MA: MIT Press.
- Strawson, G., 1994. Mental Reality, Cambridge, MA: MIT Press.
- Tooley, M., 2001. “Functional concepts, referentially opaque contexts, causal relations, and the definition of theoretical terms”, Philosophical Studies, 105(3): 251–279.
- Tye, M., 2000. Consciousness, Color, and Content, Cambridge, MA: MIT Press.
- Van Gulick, R., 1989. “What Difference Does Consciousness Make?” Philosophical Topics, 17: 211–230.
- Wedgwood, R., 2007. “Normativism Defended”, in McLaughlin and Cohen 2007, 85–101.
- White, S., 1986. “Curse of the Qualia”, Synthese, 68: 333–368.
- White, S., 2007. “Property Dualism, Phenomenal Concepts, and the Semantic Premise”, in Alter and Walter 2007, 210–248.
- Wittgenstein, L., 1953. Philosophical Investigations, New York: Macmillan.
- Yablo, S., 1992. “Mental Causation”, Philosophical Review, 101: 245–280.
- –––, 2000. “Textbook Kripkeanism and the Open Texture of Concepts”, Pacific Philosophical Quarterly, 81(1): 98–122.
- –––, 2002. “Coulda, Shoulds, Woulda”, in Gendler and Hawthorne 2002, 441–492.
- Schwitzgebel, E., 2013. “If Materialism is True, the United States is Probably Conscious”, presented at the Consciousness Online conference, 2013.
- White, S., 2002. “Why the Property Dualism Argument Won't Go Away”, unpublished.
- Online Papers on Consciousness, authored by various philosophers, maintained, and continually updated, by David Chalmers.
behaviorism | belief | dualism | mental causation | mental representation | mind/brain identity theory | mind: computational theory of | multiple realizability | physicalism | qualia | qualia: knowledge argument | Turing machines