Mental Representation

First published Thu Mar 30, 2000; substantive revision Tue Dec 11, 2012

The notion of a “mental representation” is, arguably, in the first instance a theoretical construct of cognitive science. As such, it is a basic concept of the Computational Theory of Mind, according to which cognitive states and processes are constituted by the occurrence, transformation and storage (in the mind/brain) of information-bearing structures (representations) of one kind or another.

However, on the assumption that a representation is an object with semantic properties (content, reference, truth-conditions, truth-value, etc.), a mental representation may be more broadly construed as a mental object with semantic properties. As such, mental representations (and the states and processes that involve them) need not be understood only in computational terms. On this broader construal, mental representation is a philosophical topic with roots in antiquity and a rich history and literature predating the recent “cognitive revolution,” and which continues to be of interest in pure philosophy. Though most contemporary philosophers of mind acknowledge the relevance and importance of cognitive science, they vary in their degree of engagement with its literature, methods and results; and there remain, for many, issues concerning the representational properties of the mind that can be addressed independently of the computational hypothesis.

Though the term ‘Representational Theory of Mind’ is sometimes used almost interchangeably with ‘Computational Theory of Mind’, I will use it here to refer to any theory that postulates the existence of semantically evaluable mental objects, including philosophy's stock in trade mentalia — thoughts, concepts, percepts, ideas, impressions, notions, rules, schemas, images, phantasms, etc. — as well as the various sorts of “subpersonal” representations postulated by cognitive science. Representational theories may thus be contrasted with theories, such as those of Baker (1995), Collins (1987), Dennett (1987), Gibson (1966, 1979), Reid (1764/1997), Stich (1983) and Thau (2002), which deny the existence of such things.

1. The Representational Theory of Mind

The Representational Theory of Mind (RTM) (which goes back at least to Aristotle) takes as its starting point commonsense mental states, such as thoughts, beliefs, desires, perceptions and imagings. Such states are said to have “intentionality” — they are about or refer to things, and may be evaluated with respect to properties like consistency, truth, appropriateness and accuracy. (For example, the thought that cousins are not related is inconsistent, the belief that Elvis is dead is true, the desire to eat the moon is inappropriate, a visual experience of a ripe strawberry as red is accurate, an imaging of George W. Bush with dreadlocks is inaccurate.)

RTM defines such intentional mental states as relations to mental representations, and explains the intentionality of the former in terms of the semantic properties of the latter. For example, to believe that Elvis is dead is to be appropriately related to a mental representation whose propositional content is that Elvis is dead. (The desire that Elvis be dead, the fear that he is dead, the regret that he is dead, etc., involve different relations to the same mental representation.) To perceive a strawberry is, on the representational view, to have a sensory experience of some kind which is appropriately related to (e.g., caused by) the strawberry.

RTM also understands mental processes such as thinking, reasoning and imagining as sequences of intentional mental states. For example, to imagine the moon rising over a mountain is, inter alia, to entertain a series of mental images of the moon (and a mountain). To infer a proposition q from the propositions p and if p then q is (inter alia) to have a sequence of thoughts of the form p, if p then q, q.

Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized — i.e., that all mental facts have explanations in the terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.

In philosophy, recent debates about mental representation have centered around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focused on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.

2. Propositional Attitudes

Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other's behavior (often collectively referred to as “folk psychology”) are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, etc. is a highly reliable indicator of what that person will do; and we have no other way of making sense of each other's behavior than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)

Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behavior. Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don't exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court. See, e.g., Churchland 1989.)

Dennett (1987a) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system's behavior is merely to adopt the “intentional stance” toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behavior (on the assumption that it is rational — i.e., that it behaves as it should, given the propositional attitudes it should have, given its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this. (See Dennett 1987a: 29.)

Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a “moderate” realist about propositional attitudes, since he believes that the patterns in the behavior and behavioral dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the individual believes (1987b, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.

(Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.)

Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behavior and cognition, and the causal powers of a mental state are determined by its intrinsic “structural” or “syntactic” properties. The semantic properties of a mental state, however, are determined by its extrinsic properties — e.g., its history, environmental or intramental relations. Hence, such properties cannot figure in causal-scientific explanations of behavior. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role. (Stich has since changed his views on a number of these issues. See Stich 1996.)

3. Conceptual and Non-Conceptual Representation

It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (cf. Boghossian 1995). There are those, such as thoughts, which are composed of concepts and have no phenomenal (“what-it's-like”) features (“qualia”), and those, such as sensations, which have phenomenal features but no conceptual constituents. (Nonconceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless enjoy.[1]) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps, photographs or movies. Perceptual states, such as seeing that something is blue, are sometimes thought of as hybrid states, consisting of, for example, a non-conceptual sensory experience and a belief, or some more integrated compound of conceptual and nonconceptual elements. (There is an extensive literature on the representational content of perceptual experience. See the entry on the contents of perception.)

Disagreement over nonconceptual representation concerns the existence and nature of phenomenal properties and the role they play in determining the content of sensory experience. Dennett (1988), for example, denies that there are such things as qualia at all (as they are standardly construed); while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible. (See the discussion in the next section.)

Some historical discussions of the representational properties of mind (e.g., Aristotle De Anima, Locke 1689/1975, Hume 1739/1978) seem to assume that nonconceptual representations — percepts (“impressions”), images (“ideas”) and the like — are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their phenomenal features. Powerful arguments, however, focusing on the lack of generality (Berkeley Principles of Human Knowledge), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981c) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frege 1918/1997, Geach 1957) or mathematical (Frege 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only nonconceptual representations construed in this way.

There has also been dissent from the traditional claim that conceptual representations (thoughts, beliefs) lack phenomenology. Chalmers (1996), Flanagan (1992), Goldman (1993), Horgan and Tienson (2002), Jackendoff (1987), Levine (1993, 1995, 2001), McGinn (1991a), Pitt (2004, 2009, 2011, Forthcoming), Searle (1992), Siewert (1998) and Strawson (1994), claim that purely conceptual (conscious) representational states themselves have a (perhaps proprietary) phenomenology. (This view — bread and butter, it should be said, among historical and contemporary Phenomenologists — has been gaining momentum of late among analytic philosophers of mind. See, e.g., the essays in Bayne and Montague 2011 and in Kriegel Forthcoming, Farkas 2008 and Kriegel 2011.) If this claim is correct, the question of what role phenomenology plays in the determination of content rearises for conceptual representation; and the eliminativist ambitions of Sellars, Brandom, Rey, et al. would meet a new obstacle. (It would also raise prima facie problems for reductivist representationalism (see the next section), as well as for reductive naturalistic theories of intentional content.)

4. Representationalism and Phenomenalism

Among realists about phenomenal properties, the central division is between representationalists (also called “representationists” and “intentionalists”) — e.g., Dretske (1995), Harman (1990), Leeds (1993), Lycan (1987, 1996), Rey (1991), Thau (2002), Tye (1995, 2000, 2009) — and phenomenalists (also called “phenomenists”) — e.g., Block (1996, 2003), Chalmers (1996, 2004), Evans (1982), Loar (2003a, 2003b), Peacocke (1983, 1989, 1992, 2001), Raffman (1995), Shoemaker (1990). Representationalists claim that the phenomenal character of a mental state is reducible to a kind of intentional content, naturalistically construed (à la Dretske). Phenomenalists claim that the phenomenal character of a mental state is not so reducible.

The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term ‘representationalism’ is most often used for the reductive claim). (See Chalmers 2004a.) On one hand, it could mean that the phenomenal content of an experience is a kind of intentional content (i.e., the objective qualitative properties it represents). On the other, it could mean that the intrinsic, subjective phenomenal properties of an experience determine an intentional content. Representationalists such as Dretske, Lycan and Tye would assent to the former claim, whereas phenomenalists such as Block, Chalmers, Loar and Peacocke would assent to the latter. (Among phenomenalists, there is further disagreement about whether qualia are intrinsically representational (Loar) or not (Block, Peacocke).) (So-called “Ganzfeld” experiences, in which, for example, the visual field is completely taken up with a uniform experience of a single color, are a standard test case: Do Ganzfeld experiences represent anything? It may be that doubts about the representationality of such experiences are simply a consequence of the fact that (outside the laboratory) we never encounter things that would produce them. Supposing we routinely did (and especially if we had names for them), it seems unlikely such skepticism would arise.)

Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality (see the next section) is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, most phenomenalists (Chalmers is the major exception) are just as eager to naturalize the phenomenal — though not in the same way.)

The main argument for representationalism appeals to the transparency of experience (cf. Tye 2000: 45–51). The properties that characterize what it's like to have a perceptual experience are presented in experience as properties of objects perceived: in attending to an experience, one seems to “see through it” to the objects and properties it is an experience of.[2] They are not presented as properties of the experience itself. If they were nonetheless properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. According to the representationalist, the phenomenal character of an experience is due to its representing objective, non-experiential properties. (In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not.) On this view, introspection is indirect perception: one comes to know what phenomenal features one's experience has by coming to know what objective features it represents. (Cf. also Dretske 1996, 1999.)

In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to structural or functional properties. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property P is a state of a system whose evolved function is to indicate the presence of P in the environment; a thought representing the property P, on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences' constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of “symbol-filled arrays.” (Cf. the account of mental images in Tye 1991.)

Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; but they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences — qualia themselves — that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual “scenario” (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is “correct” (a semantic property) if in the corresponding “scene” (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.

Another sort of representation appealed to by some phenomenalists (e.g., Chalmers (2003), Block (2003)) is what Chalmers calls a “pure phenomenal concept.” A phenomenal concept in general is a concept whose denotation is a phenomenal property, and it may be discursive (‘the color of ripe bananas’), demonstrative (‘this color’; Loar 1996), or even more direct. On Chalmers's view, a pure phenomenal concept is (something like) a conceptual/phenomenal hybrid consisting of a phenomenological “sample” (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component. Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991b) puts it, “you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties.” One cannot have a phenomenal concept of a phenomenal property P, and, hence, phenomenal beliefs about P, without having experience of P, because P itself is (in some way) constitutive of the concept of P. (Cf. Jackson 1982, 1986 and Nagel 1974.) (Chalmers (2004b) puts pure phenomenal concepts to use in defending the Knowledge Argument against physicalism. Alter and Walter 2007 is an excellent collection of essays on phenomenal concepts.)

5. Imagery

Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. (McGinn 2004 is a notable recent exception.) In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained kindled a lively debate on the nature of imagery and imagination.
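The shape of these results can be caricatured in a toy linear model. The intercept and slope below are invented placeholders, not fitted values from any of the cited studies; the sketch only illustrates the qualitative finding that response time grows in proportion to a spatial property such as angular disparity.

```python
# Toy linear response-time model for mental-rotation tasks. The base
# time and per-degree rate are illustrative placeholders, not data.

def predicted_response_time_ms(angle_deg, base_ms=500.0, ms_per_degree=17.0):
    """RT = base + rate * angle: larger rotations take proportionally longer."""
    return base_ms + ms_per_degree * angle_deg

# Greater angular disparity, longer predicted response time:
for angle in (0, 60, 120, 180):
    print(angle, predicted_response_time_ms(angle))
```

The debate below is over what this proportionality shows about the underlying representations, not over the linear pattern itself.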

Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties — i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981a, 1981b, 2003), argue that the empirical facts can be explained in terms exclusively of discursive, or propositional representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)

The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery (see, e.g., Kosslyn and Pomerantz 1977). The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focused on visual imagery — hence the designation ‘pictorial’; though of course there may be imagery in other modalities — auditory, olfactory, etc. — as well. See O'Callaghan 2007 for discussion of auditory imagery.)

The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes; see Block 1983.) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not) would be digital.
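The continuous/discrete gloss on the analog/digital distinction can be pictured in a minimal sketch. The vehicle types and their properties here are invented for illustration; no theorist's official model is being implemented.

```python
# Toy illustration of Dretske's continuous/discrete construal of the
# analog/digital distinction. Brightness varies continuously, so an
# image-like vehicle can represent it in an analog way; being about
# Elvis is all-or-nothing, so a thought-like vehicle is digital.

from dataclasses import dataclass

@dataclass
class AnalogImage:
    brightness: float    # any value in [0.0, 1.0]: continuously variable

@dataclass
class DigitalThought:
    about_elvis: bool    # either True or False: no intermediate values

dim = AnalogImage(brightness=0.31)
slightly_brighter = AnalogImage(brightness=0.312)  # arbitrarily fine gradations
t = DigitalThought(about_elvis=True)               # cannot be 0.312 about Elvis
```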

It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal/nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is “quasi-pictorial” when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially — for example, in terms of the number of discrete computational steps required to combine stored information about them. (Cf. Rey 1981.)

Tye (1991) proposes a view of images on which they are hybrid representations, consisting both of pictorial and discursive elements. On Tye's account, images are “(labeled) interpreted symbol-filled arrays.” The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each “cell” in the array represents a specific viewer-centered 2-D location on the surface of the imagined object).
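A minimal sketch, loosely inspired by the array idea just described, may help. The grid contents and feature labels below are hypothetical: the point is only that the position of a cell carries spatial information non-discursively, while the symbol stored in the cell represents discursively.

```python
# Sketch of a (labeled) symbol-filled array: each cell's (row, col)
# position stands for a viewer-centered 2-D location, while the symbol
# in the cell is a discursive feature label. Labels are invented.

image = [
    ["sky",  "sky",  "sky"],
    ["leaf", "stem", "leaf"],
    ["soil", "soil", "soil"],
]

def feature_at(img, row, col):
    """The cell's position is itself representationally significant:
    (row, col) stands for a location on the imaged surface."""
    return img[row][col]

print(feature_at(image, 1, 1))  # the symbol at the represented central location
```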

6. Content Determination

The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to specify naturalistic content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.[3]

Causal-informational theories (Dretske 1981, 1988, 1995) hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990a) cause it to occur.[4] There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.

The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories (e.g., Fodor 1987, 1990a, 1994) and Teleological Theories (Fodor 1990b, Millikan 1984, Papineau 1987, Dretske 1988, 1995). The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses (or the property horse).
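The asymmetry in the horse/cow example can be rendered schematically. The dependency relations below are simply stipulated for illustration; the sketch only makes explicit the structure the theory appeals to, not a procedure for discovering content.

```python
# Schematic rendering of asymmetric dependence. Horses and
# cows-on-dark-nights both cause HORSE tokens, but the cow link holds
# only because the horse link does, and not vice versa. Dependencies
# are stipulated here, not derived.

depends_on = {
    "cow-on-dark-night": {"horse"},  # this link breaks if the horse link breaks
    "horse": set(),                  # this link depends on no other link
}

def is_content_determining(cause, dependency_map):
    """A cause determines content if its link depends on no other link,
    while every other link depends on it."""
    others = set(dependency_map) - {cause}
    return (not dependency_map[cause]
            and all(cause in dependency_map[o] for o in others))

print(is_content_determining("horse", depends_on))             # True
print(is_content_determining("cow-on-dark-night", depends_on)) # False
```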

According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.

Functional theories (Block 1986, Harman 1973), hold that the content of a mental representation is determined, at least in part, by its (causal, computational, inferential) relations to other mental representations. They differ on whether relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view it is determined by relations to only some other mental states is localism (or molecularism). (The non-functional view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).

(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (non-conceptual) content of experiential states. They thus tend to be externalists (see the next section) about phenomenological as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenally-based approach to conceptual content (Horgan and Tienson, Kriegel, Loar, Pitt, Searle, Siewert) also seem to be committed to internalist individuation of the content (if not the reference) of such states.

7. Internalism and Externalism

Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are externalists, or anti-individualists (e.g., Burge 1979, 1986b, 2010, McGinn 1977), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone, are internalists (or individualists; cf. Putnam 1975, Fodor 1981b).[5]

This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviors they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic (see Stich 1983, Fodor 1982, 1987, 1994). Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both “narrow” content (determined by intrinsic factors) and “wide” or “broad” content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology. See von Eckardt 1993: 189.)

Narrow content has been variously construed. Putnam (1975), Fodor (1982: 114; 1994: 39ff), and Block (1986: 627ff), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow contents are characterized as functions from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor, such as its syntactic structure or its intramental computational or inferential role.
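The "function from context to wide content" construal can be sketched with the classic Twin-Earth case. The contexts and the mapping below are invented for illustration: the same intrinsically individuated state, evaluated in different environments, yields different wide contents.

```python
# Sketch of narrow content as a function from context to wide content
# (Twin-Earth style). Contexts and mapping are illustrative only.

def water_narrow_content(context):
    """One narrow content, shared by a thinker and her Twin: it maps
    the local environment to the wide content of 'water' thoughts."""
    return {"Earth": "H2O", "Twin Earth": "XYZ"}[context]

earth_wide = water_narrow_content("Earth")        # wide content on Earth
twin_wide = water_narrow_content("Twin Earth")    # wide content on Twin Earth
```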

Burge (1986b) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that a scientific psychology might not need narrow content in order to supply naturalistic (causal) explanations of human cognition and action, since the sorts of cases they were introduced to handle, viz., Twin-Earth cases and Frege cases, are either nomologically impossible or dismissible as exceptions to non-strict psychological laws.

On the most common versions of externalism, though intentional contents are externally determined, mental representations themselves, and the states they partly constitute, remain “in the head.” More radical versions are possible. One might maintain that since thoughts are individuated by their contents, and some thought contents are partially constituted by objects external to the mind, then some thoughts are partly constituted by objects external to the mind. On such a view, a singular thought — i.e., a thought about a particular object — literally contains the object it is about. It is “object-involving.” Such a thought (and the mind that thinks it) thus extend beyond the boundaries of the skull. (This appears to be the view articulated in McDowell 1986, on which there is “interpenetration” between the mind and the world.)

Clark and Chalmers (1998) and Clark (2001, 2005, 2008) have argued that mental representations may exist entirely “outside the head.” On their view, which they call “active externalism,” cognitive processes (e.g., calculation) may be realized in external media (e.g., a calculator or pen and paper), and the “coupled system” of the individual mind and the external workspace ought to count as a cognitive system — a mind — in its own right. Symbolic representations on external media would thus count as mental representations.

Clark and Chalmers's paper has inspired a burgeoning literature on extended, embodied and interactive cognition. (Menary 2010 is a recent collection of essays. See also the entry on embodied cognition.)

8. The Computational Theory of Mind

The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind (CTM), claims that the brain is a kind of computer and that mental processes are computations. According to CTM, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states.

CTM develops RTM by attempting to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and other animal cognition, and developing models of cognitive processes implementable in artificial information processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be mental relata of commonsense psychological states, some — so-called “subpersonal” or “sub-doxastic” representations — are not. Though many philosophers believe that CTM can provide the best scientific explanations of cognition and behavior, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of prescientific RTM.

According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental (Fodor 1981a, Pylyshyn 1984, Von Eckardt 1993). That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content, are central tenets of mainstream cognitive science.

Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the “mental models” of Johnson-Laird 1983, the “retinal arrays,” “primal sketches” and “2½-D sketches” of Marr 1982, the “frames” of Minsky 1974, the “sub-symbolic” structures of Smolensky 1989, the “quasi-pictures” of Kosslyn 1980, and the “interpreted symbol-filled arrays” of Tye 1991 — in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief (Fodor 1975, 2008, Field 1978), visual perception (Marr 1982, Osherson, et al. 1990), rationality (Newell and Simon 1972, Fodor 1975, Johnson-Laird and Wason 1977), language learning and use (Chomsky 1965, Pinker 1989), and musical comprehension (Lerdahl and Jackendoff 1983).

A fundamental disagreement among proponents of CTM concerns the realization of personal-level representations (e.g., thoughts) and processes (e.g., inferences) in the brain. The central debate here is between proponents of Classical Architectures and proponents of Connectionist Architectures.

The classicists (e.g., Turing 1950, Fodor 1975, 2000, 2003, 2008, Fodor and Pylyshyn 1988, Marr 1982, Newell and Simon 1976) hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists (e.g., McCulloch & Pitts 1943, Rumelhart 1989, Rumelhart and McClelland 1986, Smolensky 1988) hold that mental representations are realized by patterns of activation in a network of simple processors (“nodes”) and that mental processes consist of the spreading activation of such patterns. The nodes themselves are, typically, not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are “localist” versions of Connectionism, on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984). It is arguable, however, that localist theories are neither definitive nor representative of the connectionist program; see Smolensky 1988, 1991, Chalmers 1993.)

Classicists are motivated (in part) by properties thought seems to share with language. Fodor's Language of Thought Hypothesis (LOTH) (Fodor 1975, 1987, 2008), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. (Cf. also Marr 1982 for an application of the classical approach in scientific psychology.) According to the LOTH, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the properties of productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the LOTH explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration. (See, e.g., Fodor and Lepore 2002.)
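The productivity and compositionality claims can be made vivid with a small sketch. The code below is my illustration only — the symbol names and string-valued "contents" are hypothetical, and nothing here is Fodor's own formalism. A finite stock of primitives plus a recursive formation rule generates arbitrarily many complex representations, whose content is computed from the contents of their constituents and their structural configuration:

```python
# A minimal sketch (an illustration, not Fodor's formalism) of how a finite
# stock of primitives plus recursive formation rules yields a potential
# infinity of complex representations with compositionally determined content.

PRIMITIVES = {"OCELOT": "ocelot", "SNUFF": "snuff"}  # hypothetical symbols

def combine(relation, *parts):
    """Formation rule: build a complex representation from constituents."""
    return (relation, *parts)

def content(rep):
    """Compositional semantics: the content of a complex representation is
    a function of the contents of its constituents and their configuration."""
    if isinstance(rep, str):               # a primitive symbol
        return PRIMITIVES[rep]
    relation, *parts = rep                 # a complex: recurse on the parts
    return f"{relation}({', '.join(content(p) for p in parts)})"

thought = combine("TAKES", "OCELOT", "SNUFF")
print(content(thought))  # TAKES(ocelot, snuff)
```

Because `combine` can be applied to its own outputs, the same two functions assign contents to unboundedly many nested structures, which is the point of the productivity claim.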

Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic, and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)
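The local/distributed contrast can be pictured with a toy encoding. In this sketch (my framing; the items and activation values are made up), a local scheme assigns each item its own computationally basic unit, while a distributed scheme represents each item as a pattern of activation over shared units, no one of which is semantically evaluable on its own:

```python
# A toy contrast (illustrative only) between "local" and "distributed"
# coding of the same items.

items = ["cat", "dog", "mat"]

# Local: one dedicated unit per item; the unit itself does the representing.
local = {"cat": [1, 0, 0], "dog": [0, 1, 0], "mat": [0, 0, 1]}

# Distributed: each item is a pattern of activation across shared units.
distributed = {"cat": [0.9, 0.2, 0.7],
               "dog": [0.8, 0.3, 0.1],
               "mat": [0.1, 0.9, 0.6]}

# In the distributed scheme, the activity of unit 0 alone tells you little:
# it is high for both "cat" and "dog". Only the whole pattern represents.
for item in items:
    print(item, distributed[item])
```

Nothing hangs on the particular numbers; the point is only that in the second scheme no single unit is dedicated to, or evaluable as being about, any one item.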

Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981c), on the connectionist model it is a matter of evolving distribution of “weights” (strengths) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The connectionist network is “trained up” by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well. (Cf. the sonar example in Churchland 1989.)
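The kind of learning described here can be sketched with a minimal perceptron-style learner (my choice of toy model; connectionist networks in the literature are far richer). On this picture, learning consists entirely in the gradual adjustment of connection weights under repeated exposure to examples; no hypotheses are formulated or tested:

```python
# Sketch of weight-based learning (a perceptron, chosen for brevity).
# "Learning" is nothing but repeated small adjustments to weights;
# the network never formulates or tests a hypothesis.

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

def classify(x):
    """The network's current response to an input pattern."""
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

# Repeated exposure to labeled examples (here, a toy AND-like distinction):
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for _ in range(50):                      # many exposures, as the text notes
    for x, target in data:
        error = target - classify(x)
        weights[0] += rate * error * x[0]   # the weight updates are the
        weights[1] += rate * error * x[1]   # whole of "learning" here
        bias += rate * error

print([classify(x) for x, _ in data])  # [0, 0, 0, 1]
```

The network is "trained up" by exposure alone, and, as the text observes of human-scale cases, it typically needs many passes over the examples before its responses settle.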

Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition — situations in which classical systems are relatively “brittle” or “fragile.”

Some philosophers have maintained that connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others (e.g., Fodor & Pylyshyn 1988, Heil 1991, Horgan and Tienson 1996) argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures. (MacDonald & MacDonald 1995 collects the central contemporary papers in the classicist/connectionist debate, and provides useful introductory material as well. See also Von Eckardt 2005.)

Stich (1983) accepts that mental processes are computational, but denies that computations are sequences of mental representations; others accept the notion of mental representation, but deny that CTM provides the correct account of mental states and processes.

Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the systems' components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters. (See also Port and Van Gelder 1995; Clark 1997a, 1997b, 2008.)
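The dynamicist picture can be illustrated with a toy system (my example; the equations and coefficients are arbitrary). Cognitive states are quantitative states of coupled variables whose rates of change simultaneously and mutually determine one another, and the "process" is the continuous trajectory, not a sequence of discrete symbolic states:

```python
# A toy sketch (illustrative only) of the dynamicist picture: a cognitive
# process as the continuous, mutually determining evolution of quantitative
# state variables, approximated here by small Euler steps.

def step(x, y, dt=0.01):
    """One step of two coupled state variables: each variable's rate of
    change depends simultaneously on the other's current value."""
    dx = -0.5 * x + 1.0 * y   # coefficients are arbitrary, for illustration
    dy = -1.0 * x - 0.5 * y
    return x + dx * dt, y + dy * dt

x, y = 1.0, 0.0
for _ in range(1000):          # the trajectory itself, not any symbol
    x, y = step(x, y)          # sequence, is the "process" on this view

print(round(x, 3), round(y, 3))  # the system spirals toward equilibrium
```

Nothing in the evolution rule operates on semantically evaluable constituents; the state variables simply carry information, as the text's information-theoretic gloss suggests.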

Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. CTM attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So CTM involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.

9. Thought and Language

To say that a mental object has semantic properties is, paradigmatically, to say that it is about, or true or false of, an object or objects, or that it is true or false simpliciter. Suppose I think that ocelots take snuff. I am thinking about ocelots, and if what I think of them (that they take snuff) is true of them, then my thought is true. According to RTM such states are to be explained as relations between agents and mental representations. To think that ocelots take snuff is to token in some way a mental representation whose content is that ocelots take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.

Linguistic acts seem to share such properties with mental states. Suppose I say that ocelots take snuff. I am talking about ocelots, and if what I say of them (that they take snuff) is true of them, then my utterance is true. Now, to say that ocelots take snuff is (in part) to utter a sentence that means that ocelots take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express (Grice 1957, Fodor 1978, Schiffer 1972/1988, Searle 1983). On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.

(Others, however, e.g., Davidson (1975, 1982), have suggested that the kind of thought human beings are capable of is not possible without language, so that the dependency might be reversed, or somehow mutual (see also Sellars 1956; but see Martin 1987 for a defense of the claim that thought is possible without language, and Chisholm and Sellars 1958). Schiffer (1987) subsequently despaired of the success of what he calls “Intention Based Semantics.”)

It is also widely held that in addition to having such properties as reference, truth-conditions and truth — so-called extensional properties — expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions — i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.
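The extension/intension distinction has a standard formal gloss, sketched below with toy "worlds" and descriptions of my own devising: an intension is modeled as a function from a world to an extension, so two expressions can agree in extension at the actual world while their intensions differ, i.e., diverge at some other world:

```python
# A standard formal gloss (sketched with made-up "worlds"): an intension
# as a function from a world to an extension. Two expressions can agree
# in extension at a world yet differ in intension.

def ext_morning_star(world):
    return world["brightest object in the morning sky"]

def ext_evening_star(world):
    return world["brightest object in the evening sky"]

actual = {"brightest object in the morning sky": "Venus",
          "brightest object in the evening sky": "Venus"}
counterfactual = {"brightest object in the morning sky": "Venus",
                  "brightest object in the evening sky": "Mars"}

# Same extension at the actual world...
print(ext_morning_star(actual) == ext_evening_star(actual))
# ...but the intensions (the functions themselves) come apart elsewhere:
print(ext_morning_star(counterfactual) == ext_evening_star(counterfactual))
```

If mental representations inherit (or bestow) such properties, the same function-from-world-to-extension treatment would apply to concepts as well as to natural-language expressions.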

Bibliography

  • Almog, J., Perry, J. and Wettstein, H. (eds.), (1989), Themes from Kaplan, New York: Oxford University Press.
  • Alter, T. and Walter, S. (2007), Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism, Oxford: Oxford University Press.
  • Aristotle, De Anima, in The Complete Works of Aristotle: The Revised Oxford Translation, Oxford: Oxford University Press, 1984.
  • Baker, L. R. (1995), Explaining Attitudes: A Practical Approach to the Mind, Cambridge: Cambridge University Press.
  • Ballard, D.H. (1986), “Cortical Connections and Parallel Processing: Structure and Function,” The Behavioral and Brain Sciences, 9: 67–120.
  • Ballard, D.H and Hayes, P.J. (1984), “Parallel Logical Inference,” Proceedings of the Sixth Annual Conference of the Cognitive Science Society, Rochester, NY.
  • Bayne, T. and Montague, M. (eds.), (2011), Cognitive Phenomenology, Oxford: Oxford University Press.
  • Beaney, M. (ed.) (1997), The Frege Reader, Oxford: Blackwell Publishers.
  • Berkeley, G. Principles of Human Knowledge, in M.R. Ayers (ed.), Berkeley: Philosophical Writings, London: Dent, 1975.
  • Block, N. (1983), “Mental Pictures and Cognitive Science,” Philosophical Review, 93: 499–542.
  • ––– (1986), “Advertisement for a Semantics for Psychology,” in P.A. French, T.E. Uehling and H.K. Wettstein (eds.), Midwest Studies in Philosophy, Vol. X, Minneapolis: University of Minnesota Press: 615–678.
  • ––– (1996), “Mental Paint and Mental Latex,” in E. Villanueva (ed.), Philosophical Issues, 7: Perception: 19–49.
  • ––– (2003), “Mental Paint,” in M. Hahn and B. Ramberg (eds.), Reflections and Replies: Essays on the Philosophy of Tyler Burge, Cambridge, Mass.: The MIT Press.
  • Block, N. (ed.) (1981), Readings in Philosophy of Psychology, Vol. 2, Cambridge, Mass.: Harvard University Press.
  • ––– (ed.) (1982), Imagery, Cambridge, Mass.: The MIT Press.
  • Boghossian, P. A. (1995), “Content,” in J. Kim and E. Sosa (eds.), A Companion to Metaphysics, Oxford: Blackwell, 94–96.
  • Brandom, R. (2002), “Non-inferential Knowledge, Perceptual Experience, and Secondary Qualities: Placing McDowell's Empiricism,” in N.H. Smith (ed.), Reading McDowell: On Mind and World, London: Routledge.
  • Burge, T. (1979), “Individualism and the Mental,” in P.A. French, T.E. Uehling and H.K.Wettstein (eds.), Midwest Studies in Philosophy, Vol. IV, Minneapolis: University of Minnesota Press: 73–121. (Reprinted, with Postscript, in Burge 2007.)
  • ––– (1986a), “Individualism and Psychology,” Philosophical Review, 95: 3–45.
  • ––– (1986b), “Intellectual Norms and Foundations of Mind,” The Journal of Philosophy, 83: 697–720.
  • ––– (2007), Foundations of Mind: Philosophical Essays, Volume 2, Oxford: Oxford University Press.
  • ––– (2010), Origins of Objectivity, Oxford: Oxford University Press.
  • Chalmers, D. (1993), “Connectionism and Compositionality: Why Fodor and Pylyshyn Were Wrong,” Philosophical Psychology, 6: 305–319.
  • ––– (1996), The Conscious Mind, New York: Oxford University Press.
  • ––– (2003), “The Content and Epistemology of Phenomenal Belief,” in Q. Smith & A. Jokic (eds.), Consciousness: New Philosophical Perspectives, Oxford: Oxford University Press: 220–272.
  • ––– (2004a), “The Representational Character of Experience,” in B. Leiter (ed.), The Future for Philosophy, Oxford: Oxford University Press: 153–181.
  • ––– (2004b), “Phenomenal Concepts and the Knowledge Argument,” in P. Ludlow, Y. Nagasawa and D. Stoljar (eds.), There's Something About Mary: Essays on Phenomenal Consciousness and Frank Jackson's Knowledge Argument, Cambridge, Mass.: The MIT Press.
  • Chisholm, R. and Sellars, W. (1958), “The Chisholm-Sellars Correspondence on Intentionality,” in H. Feigl, M. Scriven and G. Maxwell (eds.), Minnesota Studies in the Philosophy of Science, Vol. II, Minneapolis: University of Minnesota Press: 529–539.
  • Chomsky, N. (1965), Aspects of the Theory of Syntax, Cambridge, Mass.: The MIT Press.
  • Churchland, P.M. (1981), “Eliminative Materialism and the Propositional Attitudes,” Journal of Philosophy, 78: 67–90.
  • ––– (1989), “On the Nature of Theories: A Neurocomputational Perspective,” in W. Savage (ed.), Scientific Theories: Minnesota Studies in the Philosophy of Science, Vol. 14, Minneapolis: University of Minnesota Press: 59–101.
  • Clark, A. (1997a), “The Dynamical Challenge,” Cognitive Science, 21: 461–481.
  • ––– (1997b), Being There: Putting Brain, Body and World Together Again, Cambridge, MA: The MIT Press.
  • ––– (2001), “Reasons, Robots and the Extended Mind,” Mind and Language, 16: 121–145.
  • ––– (2005), “Intrinsic Content, Active Memory, and the Extended Mind,” Analysis, 65: 1–11.
  • ––– (2008). Supersizing the Mind, Oxford: Oxford University Press.
  • Clark, A., and Chalmers, D. (1998), “The Extended Mind,” Analysis, 58: 7–19.
  • Collins, A. (1987), The Nature of Mental Things, Notre Dame: Notre Dame University Press.
  • Crane, T. (1995), The Mechanical Mind, London: Penguin Books Ltd.
  • Davidson, D. (1973), “Radical Interpretation,” Dialectica 27: 313–328.
  • ––– (1974), “Belief and the Basis of Meaning,” Synthese, 27: 309–323.
  • ––– (1975), “Thought and Talk,” in S. Guttenplan (ed.), Mind and Language, Oxford: Clarendon Press: 7–23.
  • ––– (1982), “Rational Animals,” Dialectica, 4: 317–327.
  • Dennett, D. (1969), Content and Consciousness, London: Routledge & Kegan Paul.
  • ––– (1981), “The Nature of Images and the Introspective Trap,” pages 132–141 of Dennett 1969, reprinted in Block 1981: 128–134.
  • ––– (1987), The Intentional Stance, Cambridge, Mass.: The MIT Press.
  • ––– (1987a), “True Believers: The Intentional Strategy and Why it Works,” in Dennett 1987: 13–35.
  • ––– (1987b), “Reflections: Real Patterns, Deeper Facts, and Empty Questions,” in Dennett 1987: 37–42.
  • ––– (1988), “Quining Qualia,” in A.J. Marcel and E. Bisiach (eds.), Consciousness in Contemporary Science, Oxford: Clarendon Press: 42–77.
  • ––– (1991), “Real Patterns,” The Journal of Philosophy, 87: 27–51.
  • Devitt, M. (1996), Coming to Our Senses: A Naturalistic Program for Semantic Localism, Cambridge: Cambridge University Press.
  • Dretske, F. (1969), Seeing and Knowing, Chicago: The University of Chicago Press.
  • ––– (1981), Knowledge and the Flow of Information, Cambridge, Mass.: The MIT Press.
  • ––– (1988), Explaining Behavior: Reasons in a World of Causes, Cambridge, Mass.: The MIT Press.
  • ––– (1995), Naturalizing the Mind, Cambridge, Mass.: The MIT Press.
  • ––– (1996), “Phenomenal Externalism, or If Meanings Ain't in the Head, Where are Qualia?”, in E. Villanueva (ed.), Philosophical Issues 7: Perception: 143–158.
  • ––– (1998), “Minds, Machines, and Money: What Really Explains Behavior,” in J. Bransen and S. Cuypers (eds.), Human Action, Deliberation and Causation, Philosophical Studies Series 77, Dordrecht: Kluwer Academic Publishers. Reprinted in Dretske 2000.
  • ––– (1999), “The Mind's Awareness of Itself,” Philosophical Studies, 95: 103–124.
  • ––– (2000), Perception, Knowledge and Belief, Cambridge: Cambridge University Press.
  • Evans, G. (1982), The Varieties of Reference, Oxford: Oxford University Press.
  • Farkas, K. (2008), The Subject's Point of View, Oxford: Oxford University Press.
  • Field, H. (1978), “Mental representation,” Erkenntnis, 13: 9–61.
  • Flanagan, O. (1992), Consciousness Reconsidered, Cambridge, Mass.: The MIT Press.
  • Fodor, J.A. (1975), The Language of Thought, Cambridge, Mass.: Harvard University Press.
  • ––– (1978), “Propositional Attitudes,” The Monist 61: 501–523.
  • ––– (1981), Representations, Cambridge, Mass.: The MIT Press.
  • ––– (1981a), “Introduction,” in Fodor 1981: 1–31.
  • ––– (1981b), “Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology,” in Fodor 1981: 225–253.
  • ––– (1981c), “The Present Status of the Innateness Controversy,” in Fodor 1981: 257–316.
  • ––– (1982), “Cognitive Science and the Twin-Earth Problem,” Notre Dame Journal of Formal Logic, 23: 98–118.
  • ––– (1987), Psychosemantics, Cambridge, Mass.: The MIT Press.
  • ––– (1990a), A Theory of Content and Other Essays, Cambridge, Mass.: The MIT Press.
  • ––– (1990b), “Psychosemantics or: Where Do Truth Conditions Come From?” in W.G. Lycan (ed.), Mind and Cognition: A Reader, Oxford: Blackwell Publishers: 312–337.
  • ––– (1994), The Elm and the Expert, Cambridge, Mass.: The MIT Press.
  • ––– (1998), Concepts: Where Cognitive Science Went Wrong, Oxford: Oxford University Press.
  • ––– (2000), The Mind Doesn't Work that Way: The Scope and Limits of Computational Psychology, Cambridge, Mass.: The MIT Press.
  • ––– (2003), Hume Variations, Oxford: Clarendon Press.
  • ––– (2008), LOT 2: The Language of Thought Revisited, Oxford: Clarendon Press.
  • Fodor, J.A. and Lepore, E. (2002), The Compositionality Papers, Oxford: Clarendon Press.
  • Fodor, J.A. and Pylyshyn, Z. (1981), “How Direct is Visual Perception?: Some Reflections on Gibson's ‘Ecological Approach’,” Cognition, 9: 207–246.
  • ––– (1988), “Connectionism and Cognitive Architecture: A Critical Analysis,” Cognition, 28: 3–71.
  • Frege, G. (1884), The Foundations of Arithmetic, trans. J.L. Austin, New York: Philosophical Library (1954).
  • ––– (1892), “On Sinn and Bedeutung”, in Beaney 1997: 151–171.
  • ––– (1918), “Thought”, in Beaney 1997: 325–345.
  • Geach, P. (1957), Mental Acts: Their Content and Their Objects, London: Routledge & Kegan Paul.
  • Gibson, J.J. (1966), The senses considered as perceptual systems, Boston: Houghton Mifflin.
  • ––– (1979), The ecological approach to visual perception, Boston: Houghton Mifflin.
  • Goldman, A. (1993), “The Psychology of Folk Psychology,” Behavioral and Brain Sciences, 16: 15–28.
  • Goodman, N. (1976), Languages of Art, 2nd ed., Indianapolis: Hackett.
  • Grice, H.P. (1957), “Meaning,” Philosophical Review, 66: 377–388; reprinted in Studies in the Way of Words, Cambridge, Mass.: Harvard University Press (1989): 213–223.
  • Gunther, Y.H. (ed.) (2003), Essays on Nonconceptual Content, Cambridge, Mass.: The MIT Press.
  • Harman, G. (1973), Thought, Princeton: Princeton University Press.
  • ––– (1987), “(Non-Solipsistic) Conceptual Role Semantics,” in E. Lepore (ed.), New Directions in Semantics, London: Academic Press: 55–81.
  • ––– (1990), “The Intrinsic Quality of Experience,” in J. Tomberlin (ed.), Philosophical Perspectives 4: Action Theory and Philosophy of Mind, Atascadero: Ridgeview Publishing Company: 31–52.
  • Harnish, R. (2002), Minds, Brains, Computers, Malden, Mass.: Blackwell Publishers Inc.
  • Haugeland, J. (1981), “Analog and analog,” Philosophical Topics, 12: 213–226.
  • Heil, J. (1991), “Being Indiscrete,” in J. Greenwood (ed.), The Future of Folk Psychology, Cambridge: Cambridge University Press: 120–134.
  • Horgan, T. and Tienson, J. (1996), Connectionism and the Philosophy of Psychology, Cambridge, Mass: The MIT Press.
  • ––– (2002), “The Intentionality of Phenomenology and the Phenomenology of Intentionality,” in D.J. Chalmers (ed.), Philosophy of Mind, Oxford: Oxford University Press.
  • Horst, S. (1996), Symbols, Computation, and Intentionality, Berkeley: University of California Press.
  • Hume, D. (1739), A Treatise of Human Nature, L.A. Selby-Bigge (ed.), rev. P.H. Nidditch, Oxford: Oxford University Press (1978).
  • Jackendoff, R. (1987), Consciousness and the Computational Mind, Cambridge, Mass.: The MIT Press.
  • Jackson, F. (1982), “Epiphenomenal Qualia,” Philosophical Quarterly, 32: 127–136.
  • ––– (1986), “What Mary Didn't Know,” Journal of Philosophy, 83: 291–295.
  • Johnson-Laird, P.N. (1983), Mental Models, Cambridge, Mass.: Harvard University Press.
  • Johnson-Laird, P.N. and Wason, P.C. (1977), Thinking: Readings in Cognitive Science, Cambridge: Cambridge University Press.
  • Kaplan, D. (1989), “Demonstratives,” in Almog, Perry and Wettstein 1989: 481–614.
  • Kosslyn, S.M. (1980), Image and Mind, Cambridge, Mass.: Harvard University Press.
  • ––– (1982), “The Medium and the Message in Mental Imagery,” in Block 1982: 207–246.
  • ––– (1983), Ghosts in the Mind's Machine, New York: W.W. Norton & Co.
  • Kosslyn, S.M. and Pomerantz, J.R. (1977), “Imagery, Propositions, and the Form of Internal Representations,” Cognitive Psychology, 9: 52–76.
  • Kriegel, U. (2011), The Sources of Intentionality, Oxford: Oxford University Press.
  • Kriegel, U. (ed.) forthcoming, Phenomenal Intentionality: New Essays, Oxford: Oxford University Press.
  • Leeds, S. (1993), “Qualia, Awareness, Sellars,” Noûs, 27: 303–329.
  • Lerdahl, F. and Jackendoff, R. (1983), A Generative Theory of Tonal Music, Cambridge, Mass.: The MIT Press.
  • Levine, J. (1993), “On Leaving Out What It's Like,” in M. Davies and G. Humphreys (eds.), Consciousness, Oxford: Blackwell Publishers: 121–136.
  • ––– (1995), “On What It Is Like to Grasp a Concept,” in E. Villanueva (ed.), Philosophical Issues 6: Content, Atascadero: Ridgeview Publishing Company: 38–43.
  • ––– (2001), Purple Haze, Oxford: Oxford University Press.
  • Lewis, D. (1971), “Analog and Digital,” Noûs, 5: 321–328.
  • ––– (1974), “Radical Interpretation,” Synthese, 27: 331–344.
  • Loar, B. (1981), Mind and Meaning, Cambridge: Cambridge University Press.
  • ––– (1996), “Phenomenal States” (Revised Version), in N. Block, O. Flanagan and G. Güzeldere (eds.), The Nature of Consciousness, Cambridge, Mass.: The MIT Press: 597–616.
  • ––– (2003a), “Transparent Experience and the Availability of Qualia,” in Q. Smith and A. Jokic (eds.), Consciousness: New Philosophical Perspectives, Oxford: Clarendon Press: 77–96.
  • ––– (2003b), “Phenomenal Intentionality as the Basis of Mental Content,” in M. Hahn and B. Ramberg (eds.), Reflections and Replies: Essays on the Philosophy of Tyler Burge, Cambridge, Mass.: The MIT Press.
  • Locke, J. (1689), An Essay Concerning Human Understanding, P.H. Nidditch (ed.), Oxford: Oxford University Press (1975).
  • Lycan, W.G. (1987), Consciousness, Cambridge, Mass.: The MIT Press.
  • ––– (1996), Consciousness and Experience, Cambridge, Mass.: The MIT Press.
  • MacDonald, C. and MacDonald, G. (1995), Connectionism: Debates on Psychological Explanation, Oxford: Blackwell Publishers.
  • Marr, D. (1982), Vision, New York: W.H. Freeman and Company.
  • Martin, C.B. (1987), “Proto-Language,” Australasian Journal of Philosophy, 65: 277–289.
  • McCulloch, W.S. and Pitts, W. (1943), “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics, 5: 115–33.
  • McDowell, J. (1986), “Singular Thought and the Extent of Inner Space,” in P. Pettit and J. McDowell (eds.), Subject, Thought, and Context, Oxford: Clarendon Press: 137–168.
  • ––– (1994), Mind and World, Cambridge, Mass.: Harvard University Press.
  • McGinn, C. (1977), “Charity, Interpretation, and Belief,” Journal of Philosophy, 74: 521–535.
  • ––– (1982), “The Structure of Content,” in A. Woodfield (ed.), Thought and Content, Oxford: Oxford University Press: 207–258.
  • ––– (1989), Mental Content, Oxford: Blackwell Publishers.
  • ––– (1991), The Problem of Consciousness, Oxford: Blackwell Publishers.
  • ––– (1991a), “Content and Consciousness,” in McGinn 1991: 23–43.
  • ––– (1991b), “Can We Solve the Mind-Body Problem?” in McGinn 1991: 1–22.
  • ––– (2004), Mindsight: Image, Dream, Meaning, Cambridge, Mass.: Harvard University Press.
  • Millikan, R. (1984), Language, Thought and other Biological Categories, Cambridge, Mass.: The MIT Press.
  • Menary, R. (ed.) (2010), The Extended Mind, Cambridge, Mass.: The MIT Press.
  • Minsky, M. (1974), “A Framework for Representing Knowledge,” MIT-AI Laboratory Memo 306, June 1974. (A shorter version appears in J. Haugeland (ed.), Mind Design II, Cambridge, Mass.: The MIT Press (1997).)
  • Nagel, T. (1974), “What Is It Like to Be a Bat?” Philosophical Review, 83: 435–450.
  • Newell, A. and Simon, H.A. (1972), Human Problem Solving, New York: Prentice-Hall.
  • ––– (1976), “Computer Science as Empirical Inquiry: Symbols and Search,” Communications of the Association for Computing Machinery, 19: 113–126.
  • O'Callaghan, C. (2007), Sounds, Oxford: Oxford University Press.
  • Osherson, D.N., Kosslyn, S.M. and Hollerbach, J.M. (1990), Visual Cognition and Action: An Invitation to Cognitive Science, Vol. 2, Cambridge, Mass.: The MIT Press.
  • Papineau, D. (1987), Reality and Representation, Oxford: Blackwell Publishers.
  • Peacocke, C. (1983), Sense and Content, Oxford: Clarendon Press.
  • ––– (1989), “Perceptual Content,” in Almog, Perry and Wettstein 1989: 297–329.
  • ––– (1992), “Scenarios, Concepts and Perception,” in T. Crane (ed.), The Contents of Experience, Cambridge: Cambridge University Press: 105–35.
  • ––– (2001), “Does Perception Have a Nonconceptual Content?” Journal of Philosophy, 98: 239–264.
  • Pinker, S. (1989), Learnability and Cognition, Cambridge, Mass.: The MIT Press.
  • Pitt, D. (2004), “The Phenomenology of Cognition, Or, What Is it Like to Think That P?” Philosophy and Phenomenological Research, 69: 1–36.
  • ––– (2009), “Intentional Psychologism” Philosophical Studies, 146: 117–138.
  • ––– (2011), “Introspection, Phenomenality and the Availability of Intentional Content,” in Bayne and Montague 2011.
  • ––– (forthcoming), “Indexical Thought,” in Kriegel (ed.) forthcoming.
  • Port, R. and Van Gelder, T. (eds.) (1995), Mind as Motion: Explorations in the Dynamics of Cognition, Cambridge, Mass.: The MIT Press.
  • Putnam, H. (1975), “The Meaning of ‘Meaning’,” in Philosophical Papers, Vol. 2, Cambridge: Cambridge University Press: 215–71.
  • Pylyshyn, Z. (1979), “The Rate of ‘Mental Rotation’ of Images: A Test of a Holistic Analogue Hypothesis,” Memory and Cognition, 7: 19–28.
  • ––– (1981a), “Imagery and Artificial Intelligence,” in Block 1981: 170–194.
  • ––– (1981b), “The Imagery Debate: Analog Media versus Tacit Knowledge,” Psychological Review, 88: 16–45.
  • ––– (1984), Computation and Cognition, Cambridge, Mass.: The MIT Press.
  • ––– (2003), Seeing and Visualizing: It's Not What You Think, Cambridge, Mass.: The MIT Press.
  • Raffman, D. (1995), “The Persistence of Phenomenology,” in T. Metzinger (ed.), Conscious Experience, Paderborn: Schönigh/Imprint Academic: 293–308.
  • Ramsey, W., Stich, S. and Garon, J. (1990), “Connectionism, Eliminativism and the Future of Folk Psychology,” Philosophical Perspectives, 4: 499–533.
  • Reid, T. (1764), An Inquiry into the Human Mind, D.R. Brookes (ed.), Edinburgh: Edinburgh University Press (1997).
  • Rey, G. (1981), “Introduction: What Are Mental Images?” in Block 1981: 117–127.
  • ––– (1991), “Sensations in a Language of Thought,” in E. Villaneuva (ed.), Philosophical Issues 1: Consciousness, Atascadero: Ridgeview Publishing Company: 73–112.
  • Rumelhart, D.E. (1989), “The Architecture of the Mind: A Connectionist Approach,” in M.I. Posner (ed.), Foundations of Cognitive Science, Cambridge, Mass.: The MIT Press: 133–159.
  • Rumelhart, D.E. and McClelland, J.L. (1986), Parallel Distributed Processing, Vol. I, Cambridge, Mass.: The MIT Press.
  • Schiffer, S. (1972), “Introduction” (Paperback Edition), in Meaning, Oxford: Clarendon Press (1972/1988): xi–xxix.
  • ––– (1987), Remnants of Meaning, Cambridge, Mass.: The MIT Press.
  • Searle, J.R. (1980), “Minds, Brains, and Programs,” Behavioral and Brain Sciences, 3: 417–424.
  • ––– (1983), Intentionality, Cambridge: Cambridge University Press.
  • ––– (1984), Minds, Brains, and Science, Cambridge, Mass.: Harvard University Press.
  • ––– (1992), The Rediscovery of the Mind, Cambridge, Mass.: The MIT Press.
  • Sellars, W. (1956), “Empiricism and the Philosophy of Mind,” in H. Feigl and M. Scriven (eds.), Minnesota Studies in the Philosophy of Science, Vol. I, Minneapolis: University of Minnesota Press: 253–329.
  • Shepard, R.N. and Cooper, L. (1982), Mental Images and their Transformations, Cambridge, Mass.: The MIT Press.
  • Shoemaker, S. (1990), “Qualities and Qualia: What's in the Mind?” Philosophy and Phenomenological Research, 50: 109–31.
  • Siewert, C. (1998), The Significance of Consciousness, Princeton: Princeton University Press.
  • Smolensky, P. (1988), “On the Proper Treatment of Connectionism,” Behavioral and Brain Sciences, 11: 1–74.
  • ––– (1989), “Connectionist Modeling: Neural Computation/Mental Connections,” in L. Nadel, L.A. Cooper, P. Culicover and R.M. Harnish (eds.), Neural Connections, Mental Computation, Cambridge, Mass.: The MIT Press: 49–67.
  • ––– (1991), “Connectionism and the Language of Thought,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics, Oxford: Basil Blackwell Ltd.: 201–227.
  • Sterelny, K. (1989), “Fodor's Nativism,” Philosophical Studies, 55: 119–141.
  • Stich, S. (1983), From Folk Psychology to Cognitive Science, Cambridge, Mass.: The MIT Press.
  • ––– (1996), Deconstructing the Mind, New York: Oxford University Press.
  • Strawson, G. (1994), Mental Reality, Cambridge, Mass.: The MIT Press.
  • ––– (2008), Real Materialism and Other Essays, Oxford: Oxford University Press.
  • Thau, M. (2002), Consciousness and Cognition, Oxford: Oxford University Press.
  • Turing, A. (1950), “Computing Machinery and Intelligence,” Mind, 59: 433–60.
  • Tye, M. (1991), The Imagery Debate, Cambridge, Mass.: The MIT Press.
  • ––– (1995), Ten Problems of Consciousness, Cambridge, Mass.: The MIT Press.
  • ––– (2000), Consciousness, Color, and Content, Cambridge, Mass.: The MIT Press.
  • ––– (2009), Consciousness Revisited, Cambridge, Mass.: The MIT Press.
  • Van Gelder, T. (1995), “What Might Cognition Be, if not Computation?” Journal of Philosophy, 91: 345–381.
  • Von Eckardt, B. (1993), What Is Cognitive Science?, Cambridge, Mass.: The MIT Press.
  • ––– (2005), “Connectionism and the Propositional Attitudes,” in C.E. Erneling and D.M. Johnson (eds.), The Mind as a Scientific Object: Between Brain and Culture, Oxford: Oxford University Press.
  • Wittgenstein, L. (1953), Philosophical Investigations, trans. G.E.M. Anscombe, Oxford: Blackwell Publishers.

Acknowledgments

Thanks to Brad Armour-Garb, Mark Balaguer, Dave Chalmers, Jim Garson, John Heil, Jeff Poland, Bill Robinson, Galen Strawson, Adam Vinueza and (especially) Barbara Von Eckardt for comments on earlier versions of this entry.

Copyright © 2012 by
David Pitt <dalanpitt@yahoo.com>
