Narrow Mental Content
Narrow mental content is a kind of mental content that does not depend on an individual's environment. Narrow content contrasts with “broad” or “wide” content, which depends on features of the individual's environment as well as on features of the individual. It is controversial whether there is any such thing as narrow content. Assuming that there is, it is also controversial what sort of content it is, what its relation to ordinary or “broad” content is, and how it is determined by the individual's intrinsic properties.
- 1. Introduction
- 2. Arguments for Broad Content
- 3. Arguments for Narrow Content
- 4. Conceptions of Narrow Content
- 5. Strategies for Determining Narrow Content
- 6. Further Issues
- 7. Conclusion
- Bibliography
- Academic Tools
- Other Internet Resources
- Related Entries
1. Introduction
What is narrow mental content? Mental content simply means the content of a mental state such as a thought, a belief, a desire, a fear, an intention, or a wish. Content is a deliberately vague term; it is a rough synonym of another vague term, ‘meaning’. A state with content is a state that represents some part or aspect of the world; its content is the way it represents the world as being. For example, consider my belief that water is a liquid at room temperature. The content of this belief is what it says about the world, namely that a certain substance, water, has a certain property, being a liquid, under specified conditions, namely being at room temperature. Whether a belief is true or false depends on its content: it is true if the world really is the way the belief represents it as being; otherwise it is false.
A narrow content of a particular state is a content of that state that is completely determined by the individual's intrinsic properties. An intrinsic property of an individual is a property that does not depend at all on the individual's environment. For example, having a certain shape is, arguably, an intrinsic property of a particular penny; being in my pocket is not an intrinsic property of the penny. This is because the penny's shape depends only on internal properties of the penny, whereas the fact that it is in my pocket depends on where it happens to be, which is an extrinsic property. The shape of the penny could not be different unless the penny itself were different in some way, but the penny could be exactly the way it is even if it were not in my pocket. Again, there could not be an exact duplicate of the penny that did not share its shape, but there could be an exact duplicate that was not in my pocket. Similarly, a narrow content of a belief or other mental state is a content that could not be different unless the subject who has the state were different in some intrinsic respect: no matter how different the individual's environment were, the belief would have the same content it actually does. Again, a narrow content of an individual's belief is a content that must be shared by any exact duplicate of the individual. (If some form of dualism is true, then the intrinsic properties of an individual may include properties that are not completely determined by the individual's physical properties. In that case an “exact duplicate” must be understood to be an individual who shares all intrinsic nonphysical properties as well as physical ones.)
(The notion of an intrinsic property turns out to be surprisingly difficult to define precisely. A good guide to the various approaches that have been taken, and their difficulties and refinements, is the entry on intrinsic vs. extrinsic properties.)
On first encounter, it may seem strange that the idea of narrow content should be controversial, or even that we should need a special term for it. Most people, if they were ever to explicitly consider the issue of whether mental content is narrow or broad, would probably hold that all mental content is narrow, i.e. that all of the contents of our mental states are entirely determined by our intrinsic properties. It seems conceivable, for example, as Descartes argued in his First Meditation, that our perceptual states and beliefs could be exactly as they are even if the world were nothing like we think it is. This seems to presuppose that no difference in our environment, however radical, could make a difference to the contents of our beliefs so long as our intrinsic properties remained the same.
Why, then, have philosophers believed they need to define narrow content and argue for its existence? The reason is that many philosophers have been convinced by some influential arguments that, in the most ordinary or typical sense of ‘content’, most or even all of the contents of our mental states are broad rather than narrow. If this conclusion is correct, if ordinary content is broad, then it requires some work to define an alternative, narrow conception of content, and it requires arguments to show that there is any such thing. To understand the issues about narrow content, then, it is essential to first understand the arguments that most ordinary content is broad.
2. Arguments for Broad Content
2.1 Putnam's Argument: Twin Earth and Natural Kinds
Among the earliest and most influential arguments for broad content were Hilary Putnam's arguments in such essays as “The Meaning of ‘Meaning’” (1975). Putnam's arguments were not designed specifically with mental content in mind. They applied in the first instance to linguistic content, more specifically to the reference of terms in a natural language. However, they have been widely applied to mental content. I will first discuss Putnam's argument concerning linguistic content, and then note how it can be extended to mental content.
Putnam's most famous examples involve “Twin Earth,” an imaginary planet which is molecule-for-molecule identical to Earth, including having exact duplicates of Earth's inhabitants, except for a systematic change in certain parts of the natural environment. In a particularly well-known version of this example, we consider Earth as it was around 1750, before the chemical structure of water was discovered, and we consider an inhabitant of Earth named “Oscar” who is a competent user of the term ‘water’. We then imagine a Twin Earth which is exactly like Earth in every way, including having an exact duplicate of Oscar, with one exception: for every place on Earth that contains H2O, the Twin Earthly duplicate of that place instead contains XYZ, a substance with a different microstructure from water but with similar observable properties. On Twin Earth, it is XYZ, not H2O, that falls from the skies and fills the lakes and oceans.
Putnam argues that the stuff that falls from the skies and fills the lakes on Twin Earth is not water. According to Putnam, when people used the term ‘water’, even in 1750, they intended to refer to a natural kind, a kind of thing whose instances share a common nature, not directly observable, which explains the observable properties of instances of the kind. They identified water by observable characteristics like colorlessness and odorlessness, but they also assumed that there was a microstructure which explains these observable properties. Since 1750, we have learned what this microstructure is, namely that water consists of molecules of H2O. But water was H2O even in 1750, before we learned this. (Other natural kind terms work in the same way. For instance, we identify diseases by their symptoms, but we assume that there is an underlying cause of these symptoms, for example a particular microorganism, and that even before we know what this underlying cause is, it makes the disease what it is.)
Now Twin Oscar, being an exact duplicate of Earth's Oscar, will have many of the same properties Oscar has. For instance, he will be disposed to accept a sentence of Twin English that is written and pronounced exactly like the English sentence “Water is wet.” However, Putnam argues, Twin Oscar's word ‘water’ does not refer to water. There is no water on Twin Earth, only XYZ; Twin Oscar has never seen water, talked about water, or interacted with water in any way. So it seems that he cannot possibly refer to water.
(Two points should be made parenthetically about this example. (1) The observable properties of XYZ do not need to be identical to those of water; all that is needed is that Oscar and Twin Oscar have not observed the differences. (2) In some ways it is unfortunate that the water/XYZ example has become Putnam's best known example, because it has a failing that many of his other examples lack, namely that most of the human body consists of water. This has the consequence that Twin Oscar cannot be an exact duplicate of Oscar unless Twin Oscar also consists largely of water. Other examples Putnam considers involve switching aluminum and molybdenum, beeches and elms, diseases, and so on, and these examples do not suffer from the same problem.)
It is important for this example that, although 1750 residents of Earth did not yet realize this, all water was in fact H2O. Had it turned out instead that some of the stuff called “water” was H2O while other stuff called “water” was XYZ, then water would not have turned out to be a single “natural kind.” In that case the English word “water” would refer to anything that was either H2O or XYZ, and we could say that the Earthly and Twin-Earthly ‘water’-words referred to the same thing. Again, if it had turned out that there were a huge number of different microstructures that produced the observable properties of water, then water would not have been a natural kind at all. In that case we would probably say that anything with the right observable properties was water, and in that case again we could say both ‘water’-words were coreferential. In fact, though, neither of these possibilities obtains. Water is a natural kind whose essential nature is that it has the chemical structure H2O; since there is no H2O on Twin Earth, there is no water there. Twin Earthlings never had occasion to give a label to water, since there is none on their planet, so their word “water” does not refer to water.
Since Oscar and Twin Oscar have exactly the same intrinsic properties, yet refer to different substances when they use their ‘water’-words, their intrinsic properties cannot suffice to determine what they refer to. If the meaning of a word suffices to determine its reference, then meaning cannot be determined by intrinsic properties either. As Putnam famously puts it, “‘meanings’ just ain't in the head!” (1975, p. 227).
Although the argument as presented so far concerns the reference of ‘water’ and other natural kind terms, it is natural to extend it to mental content as well (McGinn 1977; Burge 1979, note 2). Not only does Twin Oscar not refer to water when he uses the term ‘water’, he does not have beliefs about water either. To be sure, he has beliefs that play the same role in his mental life that Oscar's water-beliefs play in his. But in Twin Oscar's case, those beliefs are not about water. In particular, while Oscar believes that water is wet, Twin Oscar does not. Since Oscar and Twin Oscar have identical intrinsic properties, yet Oscar believes that water is wet while Twin Oscar does not, mental content cannot be determined solely by intrinsic properties.
2.2 Burge's Argument: Semantic Deference
The second main source of arguments that ordinary mental content is broad is a series of influential articles by Tyler Burge, including “Individualism and the Mental” (1979). Burge has offered several lines of argument for the externalist view he calls “anti-individualism.” These are helpfully distinguished and described in the Introduction to Burge 2007. One line of argument, closely related to Putnam's water example, emphasizes the role of the environment in thought about natural kinds. Another line of argument defends anti-individualism about perceptual content. I will consider a third much-discussed line of argument, which relies on the fact that in many cases we intend what we are thinking or talking about to depend to some extent on the beliefs of others in our community, especially those more expert than we.
Burge's most famous example involves the concept of arthritis. He considers an individual who is unaware that arthritis is a disease specifically of the joints. His subject believes that he has arthritis in his thigh. This belief is false, since one cannot have arthritis in the thigh. However, Burge argues, in a world in which all the intrinsic facts about the subject were exactly the same as they actually are, but in which the term ‘arthritis’ is generally used in the subject's community to refer to rheumatoid ailments in general, the subject's belief would have a different content. It would be a belief that the subject had a rheumatoid ailment in the thigh, and this is a belief which could possibly be true. Burge offers a wide variety of other examples making the same point, involving beliefs about such things as sofas and contracts. These latter examples are important because, if successful, they show that broad content extends far beyond beliefs about natural kinds.
The idea that mental content is broad is the idea that it is not determined entirely by an individual's intrinsic properties, but is determined in part by features of the individual's environment. But if the content of my beliefs is not determined entirely by my internal states, what else could determine it? How could anything other than my intrinsic properties determine what I think and believe? The examples just discussed point to two different sorts of environmental factor. Putnam's example of Oscar and his Twin Earth duplicate focuses on the contribution of the natural environment. The crucial idea here is that when we have thoughts or beliefs about natural kinds, we often do not know what the essential features of those kinds are, even though we assume that there are such essential features. In such cases, what we are thinking about depends not only on internally available factors, but also on facts about the physical, chemical, or biological makeup of the kinds we are thinking about. Burge's arthritis example, by contrast, focuses on the contribution of the social environment. In our thoughts about many kinds of things, including natural kinds but also including kinds invented by humans, such as furniture or contracts, we assume that others may have more expertise than we do about what is and what is not included in the kind in question. Thus, what we are thinking about depends not only on our intrinsic properties, but also on expert opinion. We defer to the experts with regard to what exactly we are thinking about. For this reason, this sort of contribution of the social environment is sometimes referred to as “semantic deference.”
The phenomenon of semantic deference is closely related to what Putnam memorably termed “the division of linguistic labor.” Putnam's idea was that as long as there are experts on what certain words refer to, we do not all need to have that specialized knowledge; we can rely on the knowledge of the experts. In Burge's treatment, however, the phenomenon is not merely linguistic: it is not just that we defer to the experts on the meaning of the word ‘arthritis’; we also defer to the experts on the nature of the disease arthritis. Thus, for Burge, the phenomenon affects not only what we mean by the words we use, but also the very contents of our thoughts.
2.3 Responses to the Arguments
We can distinguish between three broad categories of response to the examples of Putnam and Burge. On one extreme, to use the terminology of Segal (2000), we have the unqualified acceptance of the extreme externalist. Many philosophers have been persuaded by examples like those of Putnam and Burge that all or nearly all mental content is broad. Such philosophers are highly skeptical about the usefulness of any notion of narrow content. Burge himself is a noteworthy proponent of extreme externalism; other extreme externalists include Robert Stalnaker (1989, 1990, 2008) and Robert A. Wilson (1995).
A second response takes us to the opposite extreme, extreme internalism. According to this response, Putnam's and Burge's examples do not succeed in showing that any content is broad. This position has been defended by, among others, Kent Bach (1987), Tim Crane (1991), and Gabriel Segal (2000). These authors question the externalist interpretation of the examples we have discussed. For example, in response to Putnam, Segal points out that we have some empty natural kind concepts — that is, we have concepts which we intend to be natural kind concepts, but which in fact do not succeed in referring to a real kind. Possible examples include the concepts of witches, ghosts, and phlogiston. In these examples, the environment cannot make the sort of contribution discussed by Putnam, because the environment contains no relevant kind. Nevertheless, people who have these concepts use them in their reasoning, and these concepts partly explain their behavior. If so, then we can have natural kind concepts that do not have an environmental component. Now, it seems that with respect to explanations of our reasoning and action, it does not make a difference whether the kinds we think we are reasoning about actually exist: so long as we think they exist, we will make the same inferences and perform the same actions regardless of whether we are correct or not. This may lead us to suspect that even in the case of non-empty natural kind concepts, our reasoning and action are best explained in terms of concepts whose content is not environmentally determined — in short, in terms of concepts whose content is narrow.
With respect to Burge's examples, Segal suggests that it is odd to regard someone who thinks it possible to have arthritis in the thigh as having the concept of arthritis at all. Arthritis just is an inflammation of the joints; it seems peculiar to say that someone who does not realize this has the concept of arthritis. Instead, we should say that the subject in Burge's example has a different concept, a concept he mistakenly associates with the English word ‘arthritis’. Thus we might want to deny that Burge's subject really believes he has arthritis in his thigh. What he really believes is something it is hard to express in English, since we do not have a word that applies to all and only the cases he would regard as cases of arthritis.
(For a careful presentation and critique of this sort of “dual concepts” objection to externalism, see Frances 2016.) (Segal takes these arguments to undermine the idea that Putnam's and Burge's examples establish that thoughts and beliefs about kinds have only broad contents. He does not take them to refute “two-factor” theories according to which beliefs have both broad and narrow contents, although he does offer additional arguments specifically targeted against two-factor theories.)
Although extreme internalists advocate narrow content, they also hold that ordinary content is already narrow, so that we do not need a special or technical notion of narrow content. Thus, much of the literature in favor of narrow content is written by those who accept the third response to Putnam's and Burge's arguments, moderate internalism (which we could equally well call “moderate externalism,” since it is a compromise between the two extreme views). This is the view that, while Putnam's and Burge's examples do show that ordinary content is broad, there are also contents that are narrow. On a moderate internalist view, many beliefs have both broad and narrow contents. Since, on this view, ordinary content is often broad, we need a distinctive, specialized notion of narrow content as different in some way from ordinary content.
3. Arguments for Narrow Content
Why do moderate internalists believe that, despite the success of arguments that ordinary content is often or always broad, we nevertheless need a notion of narrow content? There are four main kinds of arguments they have found persuasive.
3.1 Causal Arguments
One influential argument for narrow content (Fodor 1987; a recent defense of this kind of argument, with replies to criticisms, is Gaukroger, forthcoming) appeals to considerations involving causal explanation. We might outline the argument like this. A first premise is that mental states causally explain behavior by virtue of the content they have. Although this has been denied by some, it certainly seems to be a central part of commonsense psychology. Our behavior seems to be a causal consequence of our beliefs and desires; moreover, the content of those beliefs and desires seems to be centrally involved in the causation of behavior. We behave the way we do because of what we want and what we believe, and this seems to be just another way of saying that we behave as we do because of the contents of our beliefs and desires. A second premise is that the causal powers of an entity, its capacity to produce effects, must be intrinsic features of the entity. Thus twins, who share all their intrinsic properties, must share their causal powers. This premise seems plausible for at least two reasons. First, causation is local. It seems that features of the environment can affect an individual's actions only by way of effects on the individual's intrinsic properties. Second, causal powers should be evaluated across contexts. If an astronaut on the Moon can easily lift a one-hundred-kilogram weight and I, on Earth, cannot, this does not mean that the astronaut is stronger; the crucial issue is whether the astronaut can lift more than I can in the same environments. This appears to show that my Twin Earth counterpart and I have the same causal powers even though I can obtain water by turning on the faucet and he cannot, since our parallel actions will achieve parallel results provided that our environments are the same. A third and final premise is that broad content does not characterize intrinsic features, at least not essentially; thus twins need not share broad contents. According to the first premise, mental states must have a kind of content that causally explains behavior. Taken together, the second and third premises show that broad content cannot fulfill this role. The conclusion of the argument, then, is that mental states must have narrow contents, contents that are shared between twins.
Externalists have attacked this argument at its second premise, the premise that causal powers must be intrinsic properties. Against the argument that causal powers must be intrinsic because causation is local, Burge (1986, 1989) has argued that local causation is entirely compatible with broad individuation. Against the argument that the cross-context test for sameness of causal powers shows that they are intrinsic, Burge (1989) suggests that causal powers are typically identified relative to a normal environment; thus, for example, it would be reasonable to distinguish between a heart and a similar organ whose function is to pump waste even if one of these organs could be successfully surgically replaced by the other. Burge (1986) also argues that actual psychological theories, such as Marr's theory of vision, do not satisfy internalist constraints. (For criticisms of Burge's interpretation of Marr, see Segal 1989 and Egan 1991. Burge 2010 is a book-length defense of the claim that perceptual psychology is anti-individualistic.)
In a later essay (Fodor 1991), Fodor defends a weaker and more complicated version of the second premise. He suggests that there are some extrinsic properties, such as being a planet, that affect causal powers, and others, like being part of a universe in which a certain coin toss comes out heads, that are irrelevant to causal powers. He then offers a criterion for distinguishing between causally relevant extrinsic properties and causally irrelevant extrinsic properties: roughly, an extrinsic property is causally irrelevant to outcomes that it is logically connected to. He then argues that broad content does not satisfy the criterion for being a causally relevant extrinsic property. (It should be noted that in still more recent work (Fodor 1994), Fodor has abandoned the idea that narrow content is important for psychology.)
3.2 Arguments from Introspective Access
A somewhat different motivation for narrow content (Loar 1988) appeals to the idea that we have introspective access to the contents of our own thoughts. In particular, it seems that we should be able to determine introspectively whether two of our thoughts have the same content or not. But the kind of difference in content that distinguishes Oscar's thoughts from Twin Oscar's thoughts seems to be the sort of difference that they could not in principle be introspectively aware of. From the inside, so to speak, there is no way for Oscar and Twin Oscar to tell whether they are thinking XYZ-thoughts or H2O-thoughts. (Recall that they are unaware of what the microstructure of the substance they call “water” is, although they assume that it has one.) The difference in broad content between the beliefs of Oscar and Twin Oscar seems to be a difference to which they themselves have no access.
It is difficult to formulate this point precisely, however. For instance, Oscar can think that water is wet and then think the meta-level thought, “that thought I just had was about water!”, referring thereby to H2O and thus expressing the very aspect of his original thought which distinguishes his content from Twin Oscar's. Since neither Oscar nor Twin Oscar has thoughts about the substance his twin has thoughts about, it is not clear what it means to say that they cannot introspectively distinguish between these different thoughts. One way to try to clarify and reinforce the argument is to consider the phenomenon of “slow switching” (introduced in Burge 1988, and used by Boghossian 1989 to pose difficulties for self-knowledge). Suppose Oscar moves to Twin Earth. Initially his water-thoughts will continue to be about water, but it seems that gradually, the longer he interacts with XYZ and the longer he is out of touch with H2O, his thoughts will come to be about XYZ rather than H2O. If this is correct, then his ‘water’-thoughts will have come over time to have a different broad content than they previously had. However, this change in content will be completely invisible to Oscar himself. From his own subjective point of view, his thoughts appear to have exactly the same content as before. If there is a kind of mental content to which we have introspective access, and if introspective access must include the ability to recognize when contents are the same or different, then the sort of content to which we have introspective access cannot be broad content. This suggests that we need a concept of narrow content to capture the kind of content that we are immediately aware of.
Burge's response to this sort of argument is to accept that we have introspective knowledge of the contents of our own thoughts, but deny that this entails that we can tell introspectively whether two contents are the same or different (Burge 1988). In response, some suggest that knowing that my thought is about water requires ruling out relevant alternative possibilities, and that in slow switching cases the possibility that my thought is instead about XYZ is in fact a relevant alternative that we cannot rule out. (For further dialectical twists and turns, see Ludlow and Martin 1998, and Nuccetelli 2003.)
3.3 Arguments Concerning Rationality
A related issue is that describing a subject's beliefs in terms of broad content can make them appear irrational even though they are not: when beliefs are described in terms of broad content, they can be inconsistent with one another even though the inconsistency is in principle not discoverable by the subject. A famous example is due to Saul Kripke (1979). In Kripke's example, Pierre, a Frenchman, grows up with a belief he expresses by saying “Londres est jolie.” This belief has the (broad) content that London is pretty. Later he moves to England, where he learns English by immersion rather than by translation. He comes to have a second belief, which he expresses in English by saying “London is not pretty.” This belief has the broad content that London is not pretty. Pierre never realizes that the city he thinks of as Londres and the city he thinks of as London are in fact the same city. His two beliefs directly contradict one another, and yet he is not guilty of any sort of failure of rationality; it is impossible for him to ascertain that the two beliefs are contradictory. Kripke himself does not offer a solution to his puzzle and does not discuss narrow content. But a natural response to the example is to suppose that, while the belief Pierre accepts and the one he rejects have the same broad content, they have different narrow contents. If so, then the sort of content most relevant to determining whether someone's beliefs and inferences are rational is not broad content but narrow content.
One response to this sort of argument is offered by Stalnaker (1990) in a critique of Loar (1988). Stalnaker agrees that examples like that of Pierre require us to distinguish between the world as it is according to Pierre, on the one hand, and, on the other hand, the propositions ordinarily expressed by the sentences we use to describe those beliefs, e.g. the proposition that London is pretty. However, in his view it does not follow that an accurate description of the world according to Pierre must be narrow: “I don't think the belief states themselves — the ways the world is according to the thinker — are any less causally and socially infected than the language in which beliefs are ascribed” (p. 203). Jackson (2003) responds to Stalnaker's reasons for skepticism about narrow content.
3.4 Argument from Phenomenal Intentionality
A recent argument for the existence of narrow content is an argument from phenomenal intentionality (Loar 2003; Horgan and Tienson 2002; Horgan, Tienson and Graham 2004; Kriegel 2013). To understand this argument, we first need to understand what its proponents mean by “phenomenal intentionality.” Philosophers of mind have traditionally drawn a sharp distinction between two sorts of properties of mental states, phenomenal properties and intentional properties. Phenomenal properties have to do with the felt character of conscious experience, with “what it's like,” in Thomas Nagel's famous phrase (Nagel 1974). Intentional properties have to do with the representational character of mental states, i.e. with their content. One view of the relation between phenomenal and intentional properties, called “separatism” by Horgan and Tienson (2002), is that they are independent of one another: any given phenomenal character could be accompanied by any intentional properties (or none), and vice versa. According to Lycan (2008 §9), this view was “the standard attitude among philosophers of mind between the 1950s and the 1980s.” On another view of the relation between the intentional and the phenomenal, known as representationalism, the phenomenal character of experience is completely determined by its intentional nature. The key thesis of phenomenal intentionality is that, while representationalism is correct that there is an intimate connection between phenomenology and intentionality, the determination runs in the opposite direction: there is a kind of intentional content, phenomenal intentionality, which is entirely constitutively determined by the phenomenal character of a mental state.
This thesis is one premise of the argument from phenomenal intentionality to narrow content. The other premise is that the phenomenal character of experience is itself narrow. Putting the two premises together, we get the following argument for the existence of narrow content: “(1) There is pervasive intentional content that constitutively depends on phenomenology alone. (2) Phenomenology constitutively depends only on narrow factors. So, (3) There is pervasive intentional content that constitutively depends only on narrow factors” (Horgan and Tienson 2002, p. 527; cf. Horgan, Tienson and Graham 2004, p. 300).
Both premises of this argument are controversial. The second premise, that phenomenology is narrow, is rejected by phenomenal externalists (Lycan 2008, §14), while the controversial character of the first premise, that there is intentional content that is entirely determined by phenomenology, can be seen in the fact that its proponents have advanced it as a departure from orthodoxy. Defenders of phenomenal intentionality have supported both premises by appeal to brain-in-vat scenarios. Suppose that alien beings synthesize a structure identical to your own brain, and connect it to a computer-controlled apparatus that provides inputs to this brain-like object which maintain its similarity to your brain over a substantial period of time. (See Horgan, Tienson and Graham 2004 for further details. This fairly elaborate brain-in-vat scenario avoids some of Burge's objections (Burge 2003, pp. 443–445) to Loar's less-fully-described version (Loar 2003).) Defenders of phenomenal intentionality find it intuitively plausible that by virtue of its physical similarity with your brain, the brain-like object will also share your phenomenology, supporting the narrowness of phenomenology; and that the brain-like object will, by virtue of sharing your phenomenology, also share many of the contents of your mental states, supporting the existence of phenomenally-determined intentionality. (For these two uses of the brain-in-vat scenario, see Horgan, Tienson and Graham 2004, p. 302. For additional arguments for the existence of phenomenal intentionality, see Kriegel 2013, §2.1. Criticisms of phenomenal intentionality may be found in Bailey and Richards 2014 and Werner 2015.)
The most immediately plausible purported example of phenomenal intentionality is the content of perceptual experience. If perceptual experience is a genuine example of phenomenally determined intentionality, but also the only example, then the argument from phenomenal intentionality would show the existence of narrow contents of perceptual states, but would be silent on whether other mental states such as beliefs and desires have narrow contents. However, some defenders of phenomenal intentionality (e.g. Horgan, Tienson and Graham 2004 and several of the contributors to Bayne and Montague 2011) would go further, arguing that there are distinctive phenomenologies of agency and of propositional attitudes including beliefs and desires; that the phenomenal properties of these mental states also constitutively determine intentional properties; and moreover that all intentionality either is identical with, or is derived from, phenomenal intentionality. If these bolder theses are correct, then the argument from phenomenal intentionality would give reason to think that all of the propositional attitudes have narrow contents, and that their wide contents, if any, are derived from these narrow contents.
4. Conceptions of Narrow Content
Supposing that there is a sort of content of at least some mental states that is narrow, how should we conceive of it? What sort of thing is narrow content? There are many different proposals in the literature (although in some cases the differences between them may not be as great as they first appear). We consider several.
4.1 Descriptive Content
Perhaps the most obvious suggestion is that the narrow content of a particular belief can be understood as a more detailed description of what is believed. More specifically, the idea is that the narrow content of a particular concept is a description of what the concept expresses or refers to.
An example will make this idea clearer. Consider Oscar, who believes that water is wet. This belief involves the concept of water, and arguments like Putnam's appear to show that the ordinary content of this concept is broad. The proposal we are considering is that there is a more detailed description that captures the narrow content, for Oscar, of the concept of water. This description might be something like “clear, colorless, odorless liquid that falls from the sky and fills the lakes.” Oscar and his Twin may share this descriptive content even though their ‘water’-concepts do not have the same broad content. The suggestion, then, is that, when Oscar thinks the thought that he would express by saying “water is wet,” and when Twin Oscar thinks the thought that he would express by saying, in Twin English, “water is wet,” both of them are expressing a thought with the descriptive content “the clear, colorless, odorless liquid that falls from the sky and fills the lakes is wet.” This narrow content, the proposal continues, determines, on Twin Earth, the broad content that XYZ is wet, while on Earth it determines the broad content that water (i.e. H2O) is wet.
Notice that if this description is to succeed in determining the appropriate broad content on Earth and on Twin Earth, then it must have a subtle “indexical” component. An indexical is a term, like ‘I’ or ‘now’, whose referent is context-relative. The referent of ‘I’ depends on who utters or thinks it; the referent of ‘now’ depends on the time at which it is uttered or thought. The supposedly narrow content “colorless odorless liquid that falls from the sky and fills the lakes” must have an implicit indexical component: the content in question is really something like “the colorless, odorless liquid that falls from the sky and fills the lakes around here” (Putnam 1975, p. 234).
There is an obvious and serious problem with the proposal that narrow content is descriptive content, however. The problem is simply that the description which is intended to give the narrow content of a concept such as Oscar's ‘water’-concept may itself be broad (Lepore and Loewer 1986; Taylor 1989). In my example, several of the concepts involved in the descriptive content arguably have broad contents. The notion of a liquid has a technical meaning that need not correspond to the observable properties we associate with it. And perhaps concepts like those of sky, lake, and color are also broad.
This objection need not be entirely crushing, but it certainly makes the descriptive content approach more difficult to spell out in detail. If we specify the description that is supposed to capture a narrow content in ordinary language, then we will need to use only ordinary-language terms that do not have broad contents. If the moral of the arguments for broad content is as sweeping as philosophers like Burge believe, it may be difficult or impossible to find enough ordinary-language expressions that satisfy this requirement. We could regard the description we have been discussing as a first step; the second step would be to replace the expressions ‘liquid’, ‘sky’, ‘lake’, and so on with their own descriptive contents. But these descriptions in turn might well contain expressions with broad contents, which would then need to be replaced with still further descriptions. It is not clear that we will be able to find enough purely narrow expressions to do all the descriptive work we need. (Mendola 2008 is a recent attempt to develop a detailed version of the descriptive approach that can overcome such worries.)
4.2. Conceptual Role
A second approach to narrow content identifies narrow contents with “conceptual roles.” This approach was laid out in a programmatic way by Ned Block in his essay “Advertisement for a Semantics for Psychology” (Block 1986). The general idea is that the conceptual role of a particular state is a matter of its causal relations to other states. As Block puts it, conceptual role “is a matter of the causal role of the expression in reasoning and deliberation and, in general, in the way the expression combines and interacts with other expressions so as to mediate between sensory inputs and behavioral outputs” (p. 93). However, conceptual role should not be understood to include all the causal relations between a given state and other states: “Conceptual role abstracts away from all causal relations except the ones that mediate inferences, inductive or deductive, decision making, and the like” (p. 94).
The easiest way to get a feeling for conceptual role semantics is to consider the kind of example that it seems to fit most naturally. Suppose that we have a mental representation we will symbolize as ‘*’. Suppose further that ‘*’ stands in the following causal relations with other mental representations.
- If the subject bears the belief relation to sentential mental representations P and Q, then the subject is likely to also acquire the belief relation to the compound representation P*Q.
- If the subject bears the belief relation to P*Q, then the subject is likely to also acquire the belief relation to P.
- If the subject bears the belief relation to P*Q, then the subject is likely to also acquire the belief relation to Q.
If the representation ‘*’ is related in these ways to other mental representations, it seems reasonable to say that it expresses the relation of conjunction, i.e. that ‘*’ should be interpreted as ‘and’. In fact, we might want to go so far as to say that satisfying the conditions above constitutes meaning conjunction. It is worth noticing that these three conditions closely resemble the rules that typically characterize conjunction in natural deduction systems of propositional logic. Reflecting on this similarity may suggest some potential problems for conceptual role semantics.
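For reference, the natural deduction rules in question are the standard introduction and elimination rules for conjunction (a textbook reminder rather than anything specific to Block's account); the three conditions above are, in effect, causal-dispositional analogues of them:

\[
\frac{P \qquad Q}{P \land Q}\ (\land\text{I})
\qquad\qquad
\frac{P \land Q}{P}\ (\land\text{E}_1)
\qquad\qquad
\frac{P \land Q}{Q}\ (\land\text{E}_2)
\]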
These potential problems include the following. (1) Rules of inference are normative rather than descriptive. They tell us what inferences are permissible; they do not purport to provide an empirical account of what inferences people actually make. It is not clear how a description of the causal interactions between mental states can capture this normative element (Williams 1990; for a critique of the idea that mental content is normative, see Glüer and Wikforss 2009). (2) One might wonder whether there is something backward about the view that conceptual role determines meaning. In the case of propositional logic, a standard view is that the meaning of logical connectives such as conjunction is given by a truth table, which shows how the truth or falsity of a compound sentence is determined by the truth values of its component sentences. The adequacy of a system of inference rules is then determined by whether it permits derivations of all and only those arguments that are semantically valid. Similarly, perhaps the causal roles of mental states should be explained in part by their semantics, instead of the other way around. (3) Conceptual role semantics seems more plausible for logical connectives than for other sorts of representations. It is one thing to regard the meaning of a mental symbol for conjunction as determined by the inferences a subject will make between mental representations that contain the symbol and those that do not. After all, that is what conjunction is for. It is another and much bolder thing to regard more empirical mental representations as having their meanings determined in this way.
Two additional problems, already identified in Block's essay, have been much discussed since. First, conceptual role semantics seems to lead to a very extreme holism. If all or nearly all of the inferential relations between mental representations are included in their conceptual role, then it seems that a change in the meaning of any representation will also change the meanings of all or nearly all the others, and also that nearly any change in belief will result in a change in the meaning of one's representations. We ordinarily think that there is an important difference between changes of belief and changes of meaning, but it is hard to see how to capture this difference within conceptual role semantics. Second, conceptual role, as understood by Block and others, may seem too “syntactic” to constitute a conception of content at all. In particular, conceptual role does not naturally give rise to an account of truth conditions. As Block puts it, “is narrow content really content?” Block himself regards conceptual role as a determinant of content, and leaves it an open question whether it is also a kind of content. But if conceptual role is not actually a kind of content, then it does not satisfy all of the original motivations for introducing a notion of narrow content.
4.3 The Mapping Conception
White (1982) and Fodor (1987) have offered a rather different, and highly influential, way of thinking about narrow content. This conception focuses on what narrow contents are supposed to accomplish. A narrow content is supposed to be something that Oscar and Twin Oscar share, and by virtue of which Oscar believes that water is wet and Twin Oscar believes that XYZ is wet. Similarly, it should be something that Burge's arthritis patient (call him “Art”) shares between his actual environment and the envisioned counterfactual environment, and by virtue of which he believes, in his actual environment, that he has arthritis in his thigh, and believes, in the counterfactual environment, that he has a different and broader disease in his thigh. So one approach to narrow content is simply to declare that a narrow content is something that, given a particular environment, determines a particular broad content. Block (1991) calls this the “mapping theory,” since on this account a narrow content maps environments into broad contents.
Some care is required in determining what the relevant environments are. What matters is not only the environment the subject is currently in, but also the environment in which the subject acquired the relevant beliefs and other mental states. If we zipped Oscar to Twin Earth and Twin Oscar to Earth, we would not thereby change what their thoughts are about (at least not immediately). Oscar would still be thinking about water, and would probably misidentify XYZ as water; Twin Oscar would still be thinking about XYZ, and would probably misidentify water as XYZ. What determines the broad content of their thoughts is not merely the environment they are in at the moment, but also the environment in which they first acquired their thoughts and beliefs about watery stuff. Provided we understand “context” to include both sorts of facts, we can describe the mapping conception as a conception on which narrow content is a function from contexts to broad contents.
(White (1982) actually distinguishes between “contexts of acquisition” and “contexts of occurrence”, and defines a notion of “partial character” as a higher-order function which takes a context of acquisition as argument and yields as the resulting value a function from contexts of occurrence to broad contents. The relation between White's view and Fodor's is easier to see, however, if we employ a more inclusive conception of context that includes both one's current environment and one's history of acquisition of the relevant concept. If we do, we can collapse both levels of White's higher-order function into one, yielding a lower-order function like Fodor's. The broad content determined by a Fodorian narrow content applied to a particular context is the same as the broad content determined by applying White's partial character to that context, and then applying the resulting function to the very same context.)
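Put schematically (an illustrative formalization of the point just made, not notation that White or Fodor themselves use), write C_acq for contexts of acquisition, C_occ for contexts of occurrence, and C for inclusive contexts that settle both:

\[
W : C_{\mathrm{acq}} \to (C_{\mathrm{occ}} \to \text{broad contents}),
\qquad
N : C \to \text{broad contents},
\qquad
N(c) = W(c)(c).
\]

Feeding the same inclusive context c in at both levels of White's partial character W yields the single-level Fodorian function N, as described above.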
The mapping theory faces a number of difficulties. (1) As Fodor notes, on the mapping view narrow content seems to be ineffable. We would like to be able to say what the narrow content of a particular thought or belief is, but on Fodor's view this cannot be done. To express a narrow content we would presumably need to find an English expression that is synonymous with it. But the content of English expressions is broad, not narrow, so this seems to be impossible.
(Valerie Walker (1990) and Stephen Stich (1991) propose that narrow contents could be expressed if English were supplemented with a “bracketing” notation. For example, an expression of the form ‘___ has the (narrow) content that [p]’ is said to have as its extension “in any possible world the class of brain state tokens whose (broad) content is p, along with physically identical tokens in all doppelgangers of people who harbor tokens whose broad content is p” (Stich 1991, p. 247). The chief difficulty with this proposal is that it has the consequence that every token with a given broad content has the same narrow content. But mental states with a particular broad content can be very unlike one another: compare Oscar's belief that water is wet with that of an expert in chemistry, or the very different ways in which Pierre is related to the proposition that London is pretty. If narrow content is to be useful in explaining behavior and rational inference, it must be the case not only that Twins share their narrow contents despite their different broad contents, but also that individuals with the same broad content may have different narrow contents (Brown 1993).)
(2) A second difficulty noted by Fodor is that, like conceptual role semantics, the mapping conception may not deserve to be called “content,” because the narrow contents it yields do not suffice to determine truth conditions. A central characteristic of broad content is that a thought or belief with broad content thereby has truth conditions: in some possible circumstances it is true, and in others it is false. On the mapping conception, narrow content does not suffice to determine truth conditions in this sense. To determine truth conditions, one needs to fix not only a narrow content but also a context. For instance, given the narrow content shared by Oscar and Twin Oscar when they think, “Lake Superior is full of water,” we do not have enough information to say whether that thought is true or false in a particular situation. Suppose Lake Superior is full of XYZ. Then Twin Oscar's thought is true but Oscar's thought is false, even though both thoughts have the same narrow content. So it seems that narrow content by itself is not enough to determine what truth conditions a thought has. (However, see the following section on “Diagonal Propositions.”)
(3) Finally, although the mapping conception gives us an abstract, formal conception of narrow content, it does not give us an algorithm for finding the narrow content of a particular state. Although apparently any function from contexts to contents would count as a “narrow content” in Fodor's sense, some of these functions could not really be the content of a mental state. To use a computational analogy, we are really interested only in “computable” functions from context to content, functions that can be implemented somehow in a human mind, and this suggests that it is not the function itself that is of interest but rather the algorithm by means of which it is computed.
4.4 Diagonal Propositions
The complaint that on the mapping conception, narrow contents are not truth conditional, and hence perhaps should not be called “contents” at all, can be met by an interesting twist on the mapping conception. Instead of considering a function from contexts of acquisition to broad contents in its full generality, we can focus our attention on a narrower function, the “diagonal proposition” determined by a Fodorian narrow content. The idea, and the term “diagonal proposition,” were originally introduced by Robert Stalnaker in a different context (Stalnaker 1999, especially papers 4, 6, and 7), but it turns out to be useful here. It is not clear that anyone has actually proposed this idea as an account of narrow content, but Stalnaker has suggested it as an interpretation of Loar's view (Stalnaker 1990, discussing Loar 1988), and the view of Chalmers (1996) has sometimes been understood in this way (e.g. by Block and Stalnaker 1999).
Recall that on the mapping conception, a narrow content is a function from environments or contexts of acquisition to broad contents. Broad contents in turn are thought of as determining truth conditions; that is, a broad content will be true in some situations and false in others. How should we think of the environments or contexts that determine broad content, and the situations in which broad contents are true or false? One reasonably natural suggestion is the following. Oscar and Twin Oscar both actually exist (in Putnam's fantasy), but they have different environments, different contexts of acquisition. We can think of their contexts as including all the objective or nonperspectival facts about the actual world, plus a bit more, namely information about their locations in that world. This may be more information than we need, but it gives us a simple way to characterize contexts, and it is guaranteed to include everything relevant to the contribution of the natural and social environment to the contents of their beliefs. And of course, in addition to the actual contexts of Oscar and Twin Oscar, we can consider other possible contexts, other environments that they might have inhabited. In general, we can say that a context of an individual at a time is a centered world, a possible world that we regard as centered on the relevant individual and time.
With this background, we can consider a way to visualize the mapping conception. We will consider how the account applies to an example similar to that of Oscar and Twin Oscar. To keep things simple, we will change the example slightly. Instead of regarding Earth and Twin Earth as two different planets both of which exist in the actual world, we will consider them as different ways things could have turned out to be on our actual planet. In the actual world, the watery stuff on Earth is H2O; in a possible counterfactual world, it is XYZ instead. We will envision Oscar looking at a beaker full of a colorless, odorless liquid and thinking a thought that he would express by saying “that beaker contains water.”
Now, we consider three possible environments or contexts, which we are thinking of as centered worlds. These are:
- context 1: Oscar's ‘water’-thoughts have been acquired in an environment in which the colorless, odorless liquid that falls from the sky and fills the lakes is XYZ. For short, let us say that Oscar's ‘water’-thoughts are anchored in XYZ. Moreover, the substance in the beaker in front of him is in fact XYZ.
- context 2: Oscar's ‘water’-thoughts are anchored in H2O. Moreover, the substance in the beaker in front of him is in fact H2O.
- context 3: As in context 2, Oscar's ‘water’-thoughts are anchored in H2O. However, in context 3, the substance in the beaker is neither H2O nor XYZ, but sulfuric acid, H2SO4.
In context 2 and context 3, Oscar's ‘water’-thoughts are about water, i.e. H2O, while in context 1 they are about XYZ. Whether Oscar's thought about the substance in the beaker is true or not depends on two things: what his thought means, i.e. its broad content, and a certain fact about the world, namely what substance is in the beaker. Since a context includes all the objective facts about a world plus additional information about the “center” of the world, each context determines a unique possible world. For example, if we take context 1 and subtract the information about the time and individual on which the context is centered, we obtain a possible world we could call w(context 1).
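In the notation used below (a standard way of representing centered worlds, not specific to any one author), a context is a possible world centered on an individual at a time, and w(·) simply discards the center:

\[
c = \langle w, i, t \rangle, \qquad w(c) = w.
\]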
We can summarize the situation in the following table:
| context | w(context 1): substance XYZ | w(context 2): substance H2O | w(context 3): substance H2SO4 |
| --- | --- | --- | --- |
| context 1 (anchor: XYZ; substance: XYZ) | T | F | F |
| context 2 (anchor: H2O; substance: H2O) | F | T | F |
| context 3 (anchor: H2O; substance: H2SO4) | F | T | F |
The items in the left-hand column of our table are contexts. The horizontal row to the right of each context represents the truth conditions or broad content Oscar's thought has if it originated in the indicated context. For instance, in context 1, Oscar's thought has the broad content that there is XYZ in the beaker. This thought is true in the world of context 1, false in the world of context 2 (since the beaker contains H2O in that world), and false in the world of context 3. A suitably extended version of our table, then, could be regarded as visually representing the narrow content of Oscar's thought about the substance in the beaker, on the mapping conception of narrow content, since it would illustrate, for each context, the broad content associated with Oscar's thought by that context.
Now we can say what the “diagonal proposition” is. It is simply the proposition represented by the diagonal from the upper left to the lower right of the above table. This represents the truth value Oscar's belief has, for any context, in the world of that context. The truth conditions this gives us are different from any of the three broad contents Oscar's belief might have depending on his context. But arguably they also give a better account of his narrow content than any of the horizontal propositions does. Unaware as he is of the chemical structure of water, Oscar has no direct access to which possible context is his actual context, and thus in a sense does not know what broad content his thoughts have. He also does not know for certain what liquid the beaker contains. What he does know is that if his ‘water’-thoughts are anchored in XYZ and the substance in the beaker is also XYZ, then his belief is true; and if his ‘water’-thoughts are anchored in H2O and the substance in the beaker is also H2O, then his belief is true; and if his ‘water’-thoughts are anchored in H2O and the substance in the beaker is H2SO4, then his belief is false; and so on. In short, although his internal state does not suffice to determine any of the horizontal propositions, it does suffice to determine the diagonal proposition, which therefore seems to do a better job of capturing Oscar's state of mind than the horizontal propositions do.
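To make the two-dimensional structure concrete, here is a minimal sketch in Python (purely illustrative; the representation of a context by an "anchor" and a "substance" field is a simplification introduced for this example, not anything in the literature). It models a mapping-style narrow content as a function from contexts to broad contents, computes the horizontal propositions of the table above, and then extracts the diagonal.

```python
# A context (centered world) is summarized here by the two features that matter
# for the example: what Oscar's 'water'-thoughts are anchored in, and what
# substance the beaker contains in the world of that context.
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    anchor: str      # the stuff Oscar's 'water'-thoughts were acquired in contact with
    substance: str   # what the beaker contains in the world of this context

contexts = [
    Context(anchor="XYZ", substance="XYZ"),     # context 1
    Context(anchor="H2O", substance="H2O"),     # context 2
    Context(anchor="H2O", substance="H2SO4"),   # context 3
]

def narrow_content(context):
    """Mapping conception: a narrow content takes a context of acquisition and
    returns a broad content, i.e. a truth condition on worlds."""
    referent = context.anchor  # what 'water' refers to, given that context
    def broad_content(world):
        # "That beaker contains water" is true at a world iff the beaker there
        # contains the stuff the thought is about.
        return world.substance == referent
    return broad_content

# Horizontal propositions: evaluate each context's broad content at every world
# (here the world of a context is simply read off the context itself).
for c in contexts:
    print([narrow_content(c)(w) for w in contexts])
# [True, False, False]
# [False, True, False]
# [False, True, False]

# Diagonal proposition: evaluate each context's broad content at that context's own world.
print([narrow_content(c)(c) for c in contexts])
# [True, True, False]
```

The printed diagonal, [True, True, False], matches the verdicts described in the paragraph above.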
The diagonal proposition view seems to avoid many of the difficulties of other approaches to narrow content. In particular, it does provide truth conditions, and hence seems clearly to be a kind of content. (However, Kriegel (2008) points out that, while two-dimensional accounts like this one and the epistemic account to be discussed next can provide truth conditions for “the mental analog of a sentence” and thus explain how it “puts us in cognitive contact with a state of affairs that constitutes its potential truth maker” (305), it is not so clear how the account can associate the mental analogs of subsentential expressions with worldly things and properties. Kriegel offers an account according to which concepts, the mental analogs of predicate terms, denote response-dependent properties.)
The main difficulties for the diagonal proposition account concern (1) how to apply it in order to determine the narrow content of a given mental state, and (2) the fact that it gives a way of defining the truth conditions of a mental state only in centered worlds in which the state actually exists. In the example above, for instance, Oscar is presumed to be in the same mental state in each context, although differences in how he came to be in that state affect its content. But this raises the problem of what we need to “hold constant” in considering various counterfactual contexts in order to be considering a context in which Oscar is in the relevant state. These issues are considered briefly in section 5.1.
4.5 Sets of Maximal Epistemic Possibilities
A final view about the nature of narrow content has some striking structural resemblances to the idea of a diagonal proposition, but is motivated very differently. This is the view of David Chalmers (1996, 2002); a related view has been defended by David Lewis (1979, 1994). In a nutshell, the view construes narrow contents as sets of maximal epistemic possibilities, or scenarios. (Scenarios closely resemble the centered worlds used to define diagonal propositions. Whether there is a centered world for every scenario, and vice versa, are debated issues: see Chalmers 2006, section 3.4.) The motivation for this view requires some development.
Narrow content is intended to capture a subject's perspective on the world, the way the world is according to the subject. A very natural way to think of this is to consider the narrow content of a belief or other thought to be a way of dividing up the ways things could conceivably be into those that are compatible with the thought and those that are ruled out by it.
Of course, broad contents also produce a kind of partitioning of possibilities. Any sort of content that determines truth conditions will rule in some possibilities and rule out others. But broad content does not provide the kind of partitioning needed for narrow content. Twin Earth is ruled out by the broad content of my belief that the lakes are full of water, since the lakes on Twin Earth do not contain water. But narrow content was introduced precisely in order to have a kind of content that my Twin and I share, so the narrow content of my thought that the lakes contain water should come out true in a Twin Earth environment centered on my Twin, just as my Twin's parallel thought comes out true there. A related way to see why Twin Earth is not ruled out by the narrow content of my thought is to notice that I can imagine finding out that all the watery stuff in my actual environment is XYZ. In that case I would not conclude that the lakes do not contain water; instead I would conclude that water is XYZ. So the narrow content of my thought that the lakes contain water does not rule out Twin Earth, even though its broad content does.
Chalmers develops this line of thought with the help of the following apparatus. A thought is said to be epistemically possible if it cannot be ruled out a priori, i.e. if its negation cannot be conclusively established without any appeal to experience. Such a thought corresponds to an epistemic possibility, a way the world could be for all one can tell a priori. A scenario is then defined to be a maximally specific epistemic possibility, an epistemic possibility with no detail left unspecified. Epistemic space is the set of all such scenarios. Any thought carves out a particular region of epistemic space by endorsing some scenarios and excluding others. A thought endorses a scenario when, if we accept that the scenario is actual, we should accept the thought as true. For instance, if we accept as actual a scenario in which the liquid that falls from the skies and fills the lakes is XYZ, we should accept as true the thought that water is XYZ. We can then think of the narrow content of a thought as constituted by the way the thought divides epistemic space into those scenarios it endorses and those it excludes. More specifically, we can think of the narrow content of a thought as a function from scenarios to truth values, or (equivalently if we have only two truth values) simply as a set of scenarios, namely those endorsed by the thought. (This paragraph closely follows Chalmers 2002, p. 610. Chalmers gives related but somewhat more detailed expositions in Chalmers 2003, especially pp. 47, 54, and in Chalmers 2006, especially pp. 76ff. Note that a thought endorses a scenario iff the scenario verifies the thought: Chalmers 2006 uses the latter terminology but not the former.)
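The proposal can be stated compactly as follows; this is only a schematic sketch, and the symbols are introduced here rather than taken from Chalmers.

```latex
% Narrow content of a thought T as a division of epistemic space (sketch; notation assumed).
%   S : the set of scenarios (maximal epistemic possibilities)
%   "s verifies T" : accepting that scenario s is actual rationally requires accepting T
\[
  NC_T : S \to \{\text{true}, \text{false}\},
  \qquad
  NC_T(s) = \text{true} \ \text{iff}\ s \ \text{verifies}\ T.
\]
% Equivalently, given only two truth values, the narrow content is a set of scenarios:
\[
  NC_T = \{\, s \in S : s \ \text{verifies}\ T \,\}.
\]
```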
Scenarios clearly have much in common with centered worlds. Indeed, it may be possible to simply identify scenarios with centered worlds. (Chalmers is cautious about this, noting reasons that some might reject this identification, but makes use of it for some purposes.) If scenarios are thought of as centered worlds, then the idea that narrow content is a function from scenarios to truth values is obviously a close cousin of the idea that narrow contents are diagonal propositions, which can also be thought of as functions from centered worlds to truth values. The differences between the two accounts should not be underestimated, however. An immediate formal difference is that narrow contents on Chalmers' approach are defined over a larger class of centered worlds than are diagonal propositions. On the diagonal approach, the centered worlds with respect to which a thought is evaluated must include a token of that very thought at the center, while this is not the case on the approach we are now considering. Another substantive difference between the two views is that they lead to very different strategies for determining narrow contents, as will emerge in sections 5.1 and 5.4.
4.6 Phenomenal Intentionality and Conceptions of Narrow Content
Recent work on phenomenal intentionality (see section 3.4) does not seem to have introduced any fundamentally new conceptions of narrow content. Some writing on the topic suggests the descriptive conception: “I suspect that in phenomenal intentionality the referential connection to the world works roughly as suggested in the descriptive theory of linguistic reference, rather than as suggested by direct-reference theories” (Kriegel 2013, p. 19). Loar's view of phenomenal intentionality (Loar 2003) has been interpreted as a version of the mapping conception (Lycan 2008 §13; Burge 2003, p. 448). Horgan and Tienson suggest that their conception of phenomenal intentionality is similar to “the approach of so-called two-dimensional modal semantics” (Horgan and Tienson 2002, note 26; cf. Horgan, Tienson and Graham 2004, note 13), of which the maximal epistemic possibilities conception described in section 4.5 is a version. Chalmers has explicitly proposed a way of extending the epistemic possibilities approach to the content of perceptual experience (Chalmers 2010, especially pp. 376–377 of the Afterword on “The Two-Dimensional Content of Perception”).
5. Strategies for Determining Narrow Content
We have seen that there are various sorts of thing a narrow content could be: a description, a conceptual role, a function from contexts to broad contents, a diagonal proposition, or a set of maximal epistemic possibilities. It is a further question which items of the relevant sort (which diagonal propositions or epistemic possibilities, for example) constitute the narrow content of a particular state of a particular subject.
How can we find out what the narrow content of a mental state is? Even more centrally, what is it about a mental state that makes it appropriate to describe it as having a particular narrow content? In the remainder of this section, I consider several strategies for determining narrow content. I do not address the issue of whether these strategies should be regarded as giving the essential nature of narrow content, or merely as heuristic devices for approximating it in practice.
Arguably, it is these differences over the appropriate strategy for determining narrow contents that are the most important differences between rival views of narrow content. Although we have considered several different views about the sort of semantic entities narrow contents might be, all these views, with the exception of conceptual role semantics, are close cousins of the view that narrow contents are sets of centered worlds. The most substantive differences between rival views concern how to determine which centered worlds are included in the narrow content of a particular state of a particular subject.
5.1 Diagonalization Strategy
A first strategy fits neatly with the view of narrow content as a diagonal proposition. If we want to know the narrow content of a particular mental state, we simply construct the diagonal proposition. That is, we first envision a variety of situations or environments in which the mental state could be embedded, i.e. a set of contexts or centered worlds that contain, at the center, the very mental state whose content we are interested in. For each of these contexts, we use our knowledge of broad content and how it is determined to discover the broad content that the mental state would have in that context. And then we determine whether, in the world of that context, a belief with that broad content would be true.
There are three main problems with this strategy. First, it treats broad content as fundamental, and narrow content as derivative. However, for many advocates of narrow content (e.g. Chalmers 2002), narrow content is at least as fundamental as broad content. In fact, it is tempting to regard broad content as determined by narrow content in conjunction with facts about context. But the strategy we are considering can only be applied to determine narrow content if we already have an independent way of determining broad content.
A second problem for the diagonalization strategy is a problem of scope (Chalmers, 2002). Although the diagonalization strategy yields a truth-conditional notion of content, the only centered worlds at which the diagonal proposition is evaluated will be worlds that contain at their center the mental state we are interested in. In effect this means that every mental state represents itself as existing. But it is puzzling why I could not have mental states whose content has nothing to do with their own existence. Chalmers offers these examples (Chalmers 2002, p. 625): it seems that my thought that I am a philosopher should be true in worlds centered on a philosopher even if he is not currently thinking that he is a philosopher. Again, it seems that the thought that someone is thinking should be false, not undefined, at centered worlds that do not contain a thinking person.
Third, there is the problem of what to “hold constant” in determining which possible contexts to consider (Block and Stalnaker 1999). The strategy requires us to consider contexts that include the mental state whose content we are interested in. But exactly what counts as a context that includes a particular mental state? And how closely, and in which respects, must the version of the state in the other worlds resemble the version in the actual world?
Block and Stalnaker argue in some detail that the likely candidates for what to hold constant all give the wrong results. Consider how we might find the diagonal proposition associated with Oscar's belief that water is wet. Suppose that a belief is, or is associated with, a mental analog of a sentence. We will suppose that, like a sentence, a mental token can be identified separately from its meaning. Then Oscar's mental token, like the sentence “Water is wet,” could, in some possible mental language, mean that dogs have fur. Now if in diagonalizing we consider all possible worlds centered on someone who possesses the same mental sentence as Oscar's water-sentence, regardless of its meaning, we get a diagonal proposition that is much too unconstrained to serve as a narrow content. We surely do not want to say that the narrow content of Oscar's belief that water is wet has the value True in a world that contains no remotely watery substance, but in which dogs are furry and the mental token in question means that dogs are furry.
So it is not sufficient to hold a syntactically identified mental token constant in deciding which worlds to include in the diagonal proposition. We must somehow consider worlds in which the token carries the same meaning it carries in the actual world. However, if we consider only worlds in which the token has the broad meaning that water is wet, the diagonal proposition will be too constrained to play the role of narrow content: it will be false, not true, in a world centered on Twin Oscar.
Still another possibility is to hold constant, not the broad content of the mental token, but its narrow content. This will give the results we want, but at the cost of making the account completely circular; diagonalizing cannot be a useful strategy for discovering narrow contents if we must already know the narrow content of a mental token in order to apply the strategy.
5.2 Subtraction Strategy
The subtraction strategy (Brown 1992) is an attempt to identify the narrow contents of a subject's beliefs by considering all of the ordinary contents of the subject's beliefs and subtracting those that are not narrow. The contents that remain must be narrow. More precisely, if an ordinary content of my belief is something I believe, then a narrow content of my belief is something that I believe and that is believed by every possible duplicate of me (possibly within some restricted class of worlds). I say “ordinary content” rather than “broad content” here, since the subtraction strategy presupposes that not all ordinary contents are broad.
To see why this strategy might be appealing, we can consider an analogy with perception. (A similar analogy with action is also possible.) Consider the perceptual state of someone looking at an apple. We can characterize this state in terms of what the person sees, just as we can characterize Oscar's belief state in terms of what he believes. In this case, our subject sees an apple, so one way to characterize her perceptual state is as “seeing an apple,” just as one way to describe Oscar's belief state is as “believing that water is wet.” However, we can easily construct scenarios in which our perceiver is in exactly the same narrow state, but does not see an apple — perhaps because everything except the apple's facing surface has been cut away. We can easily find a content of the subject's perception that is a content of the very same perceptual state but which characterizes it more narrowly: the subject sees the facing surface of the apple, and it is by virtue of seeing the facing surface that she sees the apple as a whole. Characterizing the perceptual state as “seeing the facing surface of an apple” is a narrower characterization in a very simple sense: if we consider alternative situations in which the subject is intrinsically exactly the same, but her environment is different, we will observe that in every situation in which she sees an apple, she also sees its facing surface, but that there are additional situations in which she sees the facing surface but does not thereby see an entire apple. Similarly, if we consider Oscar's belief that water is wet, we notice that a Twin Earth-like scenario provides a situation in which he is in the very same cognitive state but does not believe that water is wet. So we look for other contents of his belief, other things he believes, that characterize his state more narrowly. In this case we notice that in every situation in which he is in exactly the same intrinsic state and believes that water is wet, he also believes that the colorless, odorless liquid called ‘water’ is wet, but not conversely. So this latter belief seems to characterize his intrinsic state more narrowly than does the content “water is wet.”
It is important to notice that neither the perceptual content of seeing the facing surface of an apple, nor the belief content of believing that the colorless, odorless liquid called ‘water’ is wet, is completely narrow. We can find still more remote possibilities in which the subject's perceptual or belief state is exactly the same, but he or she does not have this perceptual or belief content. (The “apple” could be a wax imitation or a holographic projection; Oscar's Twin could live in an environment in which the word ‘liquid’ refers to very finely granulated solids.) To find truly narrow contents we will need to press our subtraction strategy still further, and appeal to objects that are very different from the ordinary objects of perception or belief — perhaps colored shapes or sense-data in the case of perception; perhaps beliefs about the subject's perceptual inputs and behavioral outputs in the case of belief (cf. McDermott 1986).
This suggests an important point that is rarely mentioned (but see Recanati 1994 for a related observation). Narrowness need not be construed as an all-or-nothing property. We can understand it instead as a matter of degree: one content of a mental state is narrower than another if we must travel further from the actual world in logical space to find a world in which the subject's intrinsic properties are the same but the state lacks the first content than to find one in which it lacks the second. (Alternatively, we could relativize the notion of narrow content to a set of possibilities; the more possibilities the set includes, the fewer contents will count as narrow. For many purposes we never consider Twin-Earth possibilities; for such purposes the proposition that water is wet may count as a narrow content of a subject's belief.) On this way of thinking of things, narrowness as it is usually defined is a limiting case: narrowness relative to the set of all metaphysically possible worlds. The concept of narrowness may be useful even if the limiting case never occurs, just as the concept of flatness is useful even though in this world the limiting case of absolute flatness never occurs.
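One way to make the relativized notion precise is sketched below; the formulation is offered only as an illustration under the assumptions just stated, not as one found in the literature.

```latex
% Narrowness of a content relative to a set W of possible worlds (illustrative sketch).
%   p : a content of subject x's mental state m
%   W : the set of worlds (possibilities) relative to which narrowness is assessed
\[
  \mathrm{Narrow}_W(p, m, x)
  \quad\text{iff}\quad
  \forall w \in W:\ \text{every intrinsic duplicate of } x \text{ in } w
  \ \text{is in a state corresponding to } m \ \text{that has content } p.
\]
% Narrowness in the usual, absolute sense is the limiting case in which W contains
% all metaphysically possible worlds.
```

On this formulation, enlarging W can only disqualify contents from counting as narrow, which is the sense in which narrowness comes in degrees.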
Possible problems for the subtraction strategy include the following. (1) The strategy presupposes that all the narrow contents of our beliefs are included in the ordinary contents of belief, so that once we have subtracted the non-narrow contents away the narrow contents will remain. On many conceptions of narrow content, however, narrow content is a more specialized and technical notion than this, and we cannot suppose that the ordinary contents of belief will include narrow contents. (2) The conception of narrow content with which the subtraction strategy fits most naturally is the descriptive content conception discussed in section 4.1. It inherits the principal objection to that view, namely that it is not clear that ordinary language can offer a narrow vocabulary sufficient to describe the narrow contents of our thoughts. Two points should be made in response to this worry. First, while the subtraction strategy assumes that the narrow contents of belief are a subset of the ordinary contents of belief, it need not be committed to the view that all of these ordinary contents are describable in natural language. Second, as noted above, we can think of completely narrow content as a limiting ideal case. The subtraction strategy offers a way of relating broad beliefs to the narrower beliefs on which they depend. This may be useful even if the process does not terminate in beliefs which are absolutely narrow. (3) Although the subtraction strategy offers a way of determining one's total narrow content, it is not clear whether or how it could be applied to more specific belief states. (4) The subtraction strategy also shares with the diagonalization strategy the problem that it gives us a method of identifying narrow contents only if we already have an independent method of identifying contents in general.
5.3 Ideal Environment Strategy
This strategy is proposed by Dennett (1982). The idea is that a (centered) world is included in one's narrow content if and only if it is a world to which one is ideally suited. Place a subject in some environments, and everything will work out extremely well: the subject's attempts to satisfy his or her desires will succeed every time. Other environments will be much less friendly; somehow the subject's actions will never turn out to have quite the desired effects. Dennett's thought is roughly that we can capture the way the world is from the subject's point of view by taking the set of centered worlds to which the subject is ideally adapted. One attraction of this strategy is that it does not make narrow content parasitic on broad content; another is that it does not require the subject to be able to answer questions or reflect on the content of his or her thoughts, so that it could easily be applied to cats and dogs as well as to humans.
A possible problem for the ideal environment strategy is that, while it may give us a way to determine a subject's total view of the world, it does not provide a way of parceling out narrow contents to more specific states.
A second problem is that the strategy does not seem to properly discriminate cognitive content from other sorts of information a subject's body may carry. A baby is better adapted to worlds in which extreme heat can damage its body than to worlds in which it cannot. When the baby touches something hot it automatically jerks away. This action has a useful purpose in a world in which heat is damaging, but would be pointless in a world in which it was not. But it does not follow that the baby believes that extreme heat is damaging. (See Stalnaker 1989, White 1991.)
A third problem is that, in some cases in which an individual's states do seem contentful, the ideal environment strategy, as stated above, seems to yield the wrong content. In the most obvious sense, I am better suited to worlds that do not contain a homicidal maniac who wants to kill me than I am to worlds that do contain such a maniac, even if I believe that such a maniac exists. So it seems that the ideal environment strategy will not correctly include the content of this belief among those it attributes to me. (Related examples are offered by Stalnaker 1989, White 1991, and Chalmers 2002.) As Stalnaker notes (1999: 182–183), Dennett is better understood to mean, not that the worlds I am best suited to are those in which I would do best, but rather that they are those with which I am best prepared to cope. But refining this account is a challenging task. (For instance, martial arts training might prepare me to cope with dangers that I do not believe to exist, raising the worry that the ideal environment strategy on this interpretation will attribute to me beliefs I do not in fact have.)
5.4 Epistemic Strategy
The epistemic strategy is recommended by Chalmers (2002, 2003, 2006). The framework that gives rise to this strategy was presented in section 4.5. Narrow contents are to be thought of as effecting a partition of scenarios, which are similar to the centered worlds employed by the diagonalization strategy, into those endorsed by the thought and those excluded by it. But how exactly are we to determine which scenarios are which? On the diagonalization strategy, we make use of our preexisting grasp of ordinary content to determine what ordinary content the thought would express if it were located at the center of a particular centered world, and then determine whether that ordinary content is true at that centered world. The epistemic strategy is radically different, and treats narrow content as at least as fundamental as ordinary content.
So, what function from scenarios to truth values constitutes the narrow content of my thought that the lakes contain water? Put slightly differently, which scenarios does this narrow content include and which does it exclude? To find out whether the narrow content of the thought that the lakes contain water includes a given scenario, I consider the hypothesis that the scenario is actual. For example, if I consider the hypothesis that a scenario in which the oceans and lakes around me contain H2O is actual, then I will be led by a priori reasoning to the conclusion that the lakes contain water; hence, the narrow content of my thought that the lakes contain water includes this Earthly scenario. Similarly, if I consider the hypothesis that a scenario in which the oceans and lakes contain XYZ is actual, I will still conclude that the lakes contain water, since, if I accept that scenario as actual, I should conclude that water simply is XYZ, and hence that the lakes contain water. So the Twin Earthly scenario is also included in the narrow content of my thought that the lakes contain water. By contrast, the narrow content of my thought that water is H2O will separate these two scenarios. If I consider the hypothesis that an Earthly scenario is actual, I will conclude that water is H2O, so the narrow content of the thought that water is H2O includes Earthly scenarios. However, if I consider the hypothesis that a Twin Earthly scenario is actual, I will conclude that water is not H2O (rather, it is XYZ), so the narrow content of my thought that water is H2O excludes Twin Earthly scenarios.
It is crucial that when I consider the hypothesis that the Twin-Earthly (or any other) scenario is actual, and ask whether, in that case, my thought that the lakes contain water is true, I am not asking whether, had a Twin-Earthly world obtained, the lakes would have contained water. The answer to that question is “no,” but it is a different question. When I ask this latter question, I am considering the Twin-Earthly world as counterfactual. Presupposing that the world is not actually that way, I ask what would be true if it were that way. Such questions, in which we consider alternative worlds as counterfactual, are the appropriate way to determine issues of metaphysical possibility. The sort of question relevant to epistemic possibility is different. It involves considering scenarios as actual, not as counterfactual: seeing what is the case if the world is that way, not seeing what would be the case if the world were that way. Questions about epistemic possibility, in which we consider scenarios as actual, are naturally posed as indicative conditionals: if the substance in the lakes is XYZ, is it water?
A full account must say much more than this about precisely what it is to consider a scenario as actual, and what it is for a scenario to be endorsed by a particular belief. In order to consider a scenario, we must have a complete description of some sort. On the other hand, there must be restrictions on the vocabulary in which the description is expressed. In particular, it cannot contain expressions like ‘water’, for which Twin Earth examples can be constructed. Chalmers offers a detailed account that addresses such questions. (This is presented briefly in Chalmers 2002 and Chalmers 2003, and in much more detail in Chalmers 2006.) Here is the short version (Chalmers 2002, p. 611):
To consider a scenario W as actual is to consider the hypothesis that D is the case, where D is a canonical description of W. When scenarios are understood as centered worlds, a canonical description will conjoin an objective description of the character of W (including its mental and physical character, for example), with an indexical description of the center's location within W. The objective description will be restricted to semantically neutral terms: roughly, terms that are not themselves vulnerable to Twin Earth thought experiments (thus excluding most names, natural kind terms, indexicals, and terms used with semantic deference). The indexical description will allow in addition indexical terms such as ‘I’ and ‘now’, to specify the center's location. We can then say that W verifies a thought T when a hypothesis that D is the case implies T. Or equivalently, where S is a linguistic expression of T, W verifies T when a material conditional ‘if D, then S’ is a priori.
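The verification clause at the end of this passage can be written out schematically; what follows simply restates the quoted definition in symbols.

```latex
% W : a scenario;  D : a canonical description of W (semantically neutral vocabulary,
%     plus indexicals such as 'I' and 'now' locating the center);
% T : a thought;   S : a linguistic expression of T.
\[
  W \ \text{verifies}\ T
  \quad\text{iff}\quad
  \text{the material conditional } (D \rightarrow S) \ \text{is a priori.}
\]
```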
Unlike the diagonalization strategy, the epistemic strategy does not depend on a prior determination of the broad content of the expression or state. Moreover, it does not require that narrow contents be evaluated only in scenarios that contain a token of the mental state at their center.
Potential problems for the epistemic strategy include: (1) whether it can be applied to nonhuman animals, many of whom presumably also have contentful mental states; (2) whether a canonical language that satisfies the necessary constraints is possible (see e.g. Schroeter 2004; Soames 2005, pp. 216–218; Sawyer 2007); and (3) whether a version of the “what is held constant” problem for the diagonalization strategy also poses problems for the epistemic strategy. (This last point is discussed a bit further in section 6.2.)
6. Further Issues
This section addresses some more technical issues that have been sidestepped or ignored in earlier sections.
6.1 Type or Token?
We have frequently considered the narrow content of Oscar's beliefs about water, for instance his belief that water is wet. Is the mental state we are concerned with here a type or a token? That is, are we concerned with a particular instance of a mental state, which occurs at a particular place and time, namely Oscar's belief on this particular occasion that water is wet, or are we concerned with a general kind of belief, the belief that water is wet, which many different people could have on many different occasions?
It seems that it cannot be the general type of belief that we are interested in here, at least not if the type in question is “belief that water is wet,” since different people (or even the same person on different occasions) could have beliefs of this type which had different narrow contents. Oscar is ignorant of the molecular structure of water; he identifies water as a clear, odorless, colorless substance that falls from the skies and fills the lakes. As we have already seen, Twin Oscar shares Oscar's narrow content, but in his environment this narrow content determines that he believes that XYZ is wet, not that water is wet. However, an earthly expert chemist who has done years of laboratory research on water, and is well aware of its molecular structure, will have a different water-concept than Oscar does, and his belief that water is wet will have a different narrow content than Oscar's does. The chemist's Twin Earth duplicate does not believe that XYZ is wet, since he is aware of the chemical composition of water and his beliefs explicitly concern H2O. He believes that water is wet (though he also believes, falsely, that the substance that fills the lakes and emerges from the taps is water).
Although Oscar and the chemist share the (broad) belief that water is wet, they have different narrow contents associated with this belief. So when we say that it is a belief that has a particular narrow content, we cannot be speaking of a general type of belief, at least not if the type is determined by the broad content of the belief. Rather, we must be speaking of a particular token belief, in this case Oscar's belief on a particular occasion that water is wet.
6.2 What Sort of Token?
We cannot completely evade issues about the nature of the token mental states we are considering, however. Even if the object of our concern is a particular token, we need to know how to identify the particular token we are interested in. Compare: suppose we decided we wanted to know the weight of a certain animal. A first question would be whether we are talking about a type of animal or a token animal. In this case we almost certainly intend to refer to the token rather than the type. (If the token belongs to a type of animal all or most of whose tokens have weights that fall in a fairly narrow range, we may later decide that we can also assign a weight to the type, but it is the weight of the tokens that is primary.) In this case there seems to be no problem: we can easily determine the weight of this animal, and even determine what its weight would be in other environments, without deciding whether the relevant type might be Pomeranians, or dogs, or canines, for example.
However, our ability to determine the weight of a token animal depends on the fact that we already know what animals are and how to identify them. If someone told us to find the weight of that thing over there, we would need a further specification of the thing in question before we could find its weight. Which thing? The dog? The dog's front leg? The dog's fur? The cat over there next to the dog? Or possibly even the disjoint thing consisting of both the cat and the dog? It is not clear that asking about the narrow content of Oscar's belief that water is wet is a much more clearly defined task than asking about the weight of that thing over there. Do we really have a means of picking out the mental state in question in a way that distinguishes it from other beliefs in the vicinity? What properties does it have? For example, does it have a syntactic structure? Is it an intrinsic state? Does it have a particular location in the brain? Is it entirely distinct from Oscar's beliefs that water is a liquid, that water can form droplets, and that water feels a certain way to the touch?
The problem of identifying the bearer of narrow content is obviously closely related to the problem of what to hold constant when employing the diagonalization strategy. But the problem may also affect views on which we do not need to require that a token state be present in a counterfactual situation in order to determine whether its narrow content would be true in that situation. It still seems that in order to know exactly what question we are asking, we need to know what it is whose content we are evaluating in the counterfactual situations. We need to know what the token state is in the actual world whether or not we insist on its presence in the counterfactual one.
6.3 Holism vs. Particularism
The discussion so far has presupposed that the mental states that have narrow contents are what we might call local mental states. For instance, we might want to know the narrow content of Oscar's belief that water is wet without wanting to know the narrow content of the rest of his beliefs. However, an alternative possibility is that narrow contents cannot be parceled out belief-by-belief in this way. It could be that the best we can do is to find the narrow content of a subject's total belief state, the subject's complete understanding of what the world is like. It could still make sense to discuss narrow contents less all-encompassing than one's total narrow belief content: we might say that any necessary consequence of the subject's total narrow content is also a narrow content of the subject's belief. On a holistic view, though, there need not be an identifiable distinct belief state by virtue of which the subject has that narrow content, whereas on a particularist view there will be.
Whether holism or particularism is correct may depend on the correct view of the nature of mental representations. On one extremely influential view (Fodor 1987, and many other writings), cognitive states are best thought of as relations to internal representations. These representations are thought of as similar to expressions in a natural language; indeed, Fodor describes these mental representations as a “language of thought.” On this view, Oscar's belief that water is wet will be understood as a relation to a sentence-like internal representation. This view may permit a particularist understanding of narrow content (although it is also possible to combine particularism about mental representations with holism about their content; see Block 1991).
On the other hand, Frank Jackson has suggested that we might represent the world by means of something more like a map than like a collection of sentences (Jackson 1996; Braddon-Mitchell and Jackson 1996). If this is the right understanding of representational mental states, then we would expect holism rather than particularism to be true. As Braddon-Mitchell and Jackson note, a map “says which island is the largest by saying something about the size of all the islands, and it says something about the size of any particular island in the course of saying something about where it is and what shape it is” (p. 183). Although the map does convey particular bits of information, we cannot neatly identify these bits of information with particular pieces of the map.
6.4 The Subject
Whether the content of a particular state is narrow or not depends on whether it would be shared by the corresponding state of every duplicate of the subject who has the state. But this means that whether the content of a state is narrow depends on how we individuate the subject who has the state. A content that is broad relative to one subject might be narrow relative to another, more inclusive subject. I might have an internal state that represents the condition of a particular cone in my retina. If we take the subject in this case to be me — all of me — this content might be narrow, while if we construe the subject more narrowly as, say, my brain, then the very same content of the very same state will be broad rather than narrow (since a duplicate state could have a different content if hooked up to a different kind of eye). For at least some purposes, for instance discussing brain-in-a-vat skeptical scenarios, we will no doubt want to construe the subject very narrowly, while for other purposes we might want to include such external objects as a notebook or PDA as part of the subject's memory (Clark and Chalmers 1998).
7. Conclusion
It should be evident that the idea of narrow content is highly controversial. Many thinkers reject the very idea of narrow content, while to many others it seems an attractive way to think about the kind or aspect of mental content that most closely captures the subject's perspective on the world, the nature of rational belief and inference, and the nature and extent of a priori knowledge. Even among its advocates, however, there is substantial disagreement on the precise form a theory of narrow content should take. There is much work left to be done on this topic: to develop the various approaches more fully; to determine to what extent they are compatible with one another; and, to the extent that they are not, to compare their advantages and disadvantages.
Bibliography
- Bach, Kent, 1987, Thought and Reference, Oxford: Oxford University Press.
- –––, 1998, “Content: Wide and Narrow,” Routledge Encyclopedia of Philosophy (Version 1.0), London: Routledge.
- Bailey, Andrew and Bradley Richards, 2014, “Horgan and Tienson on phenomenology and intentionality,” Philosophical Studies, 167: 313–326.
- Bayne, Tim and Michelle Montague (eds.), 2011, Cognitive Phenomenology, Oxford: Oxford University Press.
- Block, Ned, 1986, “Advertisement for a Semantics for Psychology,” Midwest Studies in Philosophy, 10: 615–678. Reprinted in Stephen P. Stich and Ted A. Warfield, eds., Mental Representation: A Reader, Oxford: Blackwell, 1994.
- –––, 1991, “What Narrow Content is Not,” in Loewer and Rey (eds.) 1991.
- Block, Ned, and Stalnaker, Robert, 1999, “Conceptual Analysis, Dualism, and the Explanatory Gap,” Philosophical Review, 108: 1–46.
- Boghossian, Paul, 1989, “Content and self-knowledge,” Philosophical Topics, 17: 5–26.
- Braddon-Mitchell, David, and Jackson, Frank, 1996, Philosophy of Mind and Cognition, Oxford: Blackwell.
- Brown, Curtis, 1992, “Direct and Indirect Belief,” Philosophy and Phenomenological Research, 52: 289–316.
- –––, 1993, “Belief States and Narrow Content,” Mind and Language, 8: 343–67.
- Burge, Tyler, 1979, “Individualism and the Mental,” Midwest Studies in Philosophy, 4: 73–121; reprinted in Burge 2007.
- –––, 1986, “Individualism and Psychology,” Philosophical Review, 95: 3–45; reprinted in Burge 2007.
- –––, 1988, “Individualism and Self-Knowledge,” Journal of Philosophy, 85: 649–65.
- –––, 1989, “Individuation and Causation in Psychology,” Pacific Philosophical Quarterly, 70: 303–322; reprinted in Burge 2007.
- –––, 2003, “Phenomenality and Reference: Reply to Loar,” in Martin Hahn and Bjørn Ramberg (eds.), Reflections and Replies: Essays on the Philosophy of Tyler Burge, Cambridge, MA: MIT Press.
- –––, 2007, Foundations of Mind: Philosophical Essays, Volume 2, Oxford: Oxford University Press.
- –––, 2010, Origins of Objectivity, Oxford: Oxford University Press.
- Chalmers, David, 1996, The Conscious Mind, Oxford: Oxford University Press.
- –––, 2002, “The Components of Content,” in D. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings, Oxford: Oxford University Press.
- –––, 2003, “The Nature of Narrow Content,” Philosophical Issues, 13: 46–66.
- –––, 2006, “The Foundations of Two-Dimensional Semantics,” in Garcia-Carpintero and Macia (eds.), Two-Dimensional Semantics, Oxford: Oxford University Press.
- –––, 2010, “The Representational Character of Experience,” in D. Chalmers, The Character of Consciousness, Oxford: Oxford University Press.
- Clark, Andy, and David Chalmers, 1998, “The Extended Mind,” Analysis, 58: 7–19.
- Crane, Tim, 1991, “All the Difference in the World,” Philosophical Quarterly, 41: 1–25.
- Dennett, Daniel, 1982, “Beyond Belief,” in Andrew Woodfield (ed.), Thought and Object: Essays on Intentionality, Oxford: Oxford University Press, 1982; reprinted in D. Dennett, The Intentional Stance, Cambridge: MIT Press, 1987.
- Egan, Frances, 1991, “Must Psychology Be Individualistic?” Philosophical Review, 100: 179–203.
- Fodor, Jerry, 1987, Psychosemantics, Cambridge, MA: MIT Press.
- –––, 1991a, “A Modal Argument for Narrow Content,” Journal of Philosophy, 88: 5–26.
- –––, 1991b, “Replies,” in Loewer and Rey (eds.) 1991.
- –––, 1995, The Elm and the Expert: Mentalese and its Semantics, Cambridge, MA: MIT Press.
- Frances, Bryan, 2016, “The Dual Concepts Objection to Content Externalism,” American Philosophical Quarterly, 53: 123–138.
- Gaukroger, Cressida, forthcoming, “Why Broad Content Can't Influence Behavior,” Synthese.
- Glüer, Kathrin and Åsa Wikforss, 2009, “Against Content Normativity,” Mind, 118: 31–70.
- Horgan, Terence, and John Tienson, 2002, “The Intentionality of Phenomenology and the Phenomenology of Intentionality,” in D. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings, Oxford: Oxford University Press.
- Horgan, Terence, John Tienson, and George Graham, 2004, “Phenomenal Intentionality and the Brain in a Vat,” in Richard Schanz (ed.), The Externalist Challenge, Berlin: Walter de Gruyter.
- Jackson, Frank, 1996, “Mental Causation,” Mind, 105: 377–413.
- –––, 2003, “Representation and Narrow Belief,” Philosophical Issues, 13: 99–112.
- –––, 2003, “Narrow Content and Representation, or Twin Earth Revisited,” Proceedings and Addresses of the American Philosophical Association, 77 (2): 55–70.
- Kriegel, Uriah, 2008, “Real Narrow Content,” Mind and Language, 23: 305–328.
- –––, 2013, “The Phenomenal Intentionality Research Program,” in Uriah Kriegel (ed.), Phenomenal Intentionality, Oxford: Oxford University Press.
- Kripke, Saul, 1979, “A Puzzle About Belief,” in A. Margalit (ed.), Meaning and Use, Dordrecht: D. Reidel, 239–283.
- Lepore, Ernest, and Barry Loewer, 1986, “Solipsist Semantics,” Midwest Studies in Philosophy, 10: 595–614.
- Lewis, David, 1979, “Attitudes De Dicto and De Se,” Philosophical Review, 88: 513–543; reprinted in D. Lewis, Philosophical Papers (Volume 1), Oxford: Oxford University Press, 1983.
- –––, 1994, “Reduction of Mind,” in Samuel Guttenplan (ed.), A Companion to the Philosophy of Mind, Oxford: Blackwell.
- Loar, Brian, 1988, “Social Content and Psychological Content,” in R. Grimm and D. Merrill (eds.), Contents of Thought, Tucson: University of Arizona Press.
- –––, 2003, “Phenomenal Intentionality as the Basis of Mental Content,” in Martin Hahn and Bjørn Ramberg (eds.), Reflections and Replies: Essays on the Philosophy of Tyler Burge, Cambridge, MA: MIT Press.
- Loewer, Barry and Georges Rey (eds.), 1991, Meaning in Mind: Fodor and his Critics, Oxford: Blackwell.
- Ludlow, Peter and Norah Martin (eds.), 1998, Externalism and Self-Knowledge, Stanford: CSLI Publications.
- Lycan, William G., 2008, “Phenomenal Intentionalities,” American Philosophical Quarterly, 45: 233–252.
- McDermott, Michael, 1986, “Narrow Content,” Australasian Journal of Philosophy, 64: 277–288.
- McGinn, Colin, 1977, “Charity, Interpretation, and Belief,” Journal of Philosophy, 74: 521–535.
- Mendola, Joseph, 2008, Anti-Externalism, Oxford: Oxford University Press.
- Nagel, Thomas, 1974, “What Is It Like to Be a Bat?” Philosophical Review, 83: 435–450.
- Nuccetelli, Susana (ed.), 2003, New Essays on Semantic Externalism and Self-Knowledge, Cambridge, MA: MIT Press.
- Putnam, Hilary, 1975, “The Meaning of ‘Meaning’,” in Keith Gunderson (ed.), Language, Mind and Knowledge (Minnesota Studies in the Philosophy of Science, Volume VII), Minneapolis: University of Minnesota Press, 1975; reprinted in H. Putnam, Mind, Language and Reality (Philosophical Papers, Volume 2), Cambridge: Cambridge University Press, 1975.
- Recanati, Francois, 1994, “How Narrow is Narrow Content?” Dialectica, 48: 209–229.
- Sawyer, Sarah, 2007, “There Is No Viable Notion of Narrow Content,” in Brian P. McLaughlin and Jonathan Cohen (eds.), Contemporary Debates in Philosophy of Mind, Oxford: Blackwell.
- Schroeter, Laura, 2004, “The Rationalist Foundations of Chalmers's 2D Semantics,” Philosophical Studies, 118: 227–255.
- Segal, Gabriel, 1989, “Seeing What Is Not There,” Philosophical Review, 98: 189–214.
- –––, 2000, A Slim Book about Narrow Content, Cambridge, MA: MIT Press.
- –––, 2003, “Ignorance of Meaning,” in A. Barber (ed.), Epistemology of Language, Oxford: Oxford University Press.
- Soames, Scott, 2005, Reference and Description: The Case Against Two-Dimensionalism, Princeton: Princeton University Press.
- Stalnaker, Robert C., 1989, “On What's in the Head,” Philosophical Perspectives, 3: 287–316; reprinted in Stalnaker 1999.
- –––, 1990, “Narrow Content,” in C. Anthony Anderson and Joseph Owens (eds.), Propositional Attitudes: The Role of Content in Logic, Language, and Mind, Stanford: CSLI; reprinted in Stalnaker 1999.
- –––, 1999, Context and Content, Oxford: Oxford University Press.
- –––, 2006, “Assertion Revisited,” in Garcia-Carpintero and Macia (eds.), Two-Dimensional Semantics, Oxford: Oxford University Press.
- –––, 2008, Our Knowledge of the Internal World, Oxford: Oxford University Press.
- Stich, Stephen P., 1991, “Narrow Content Meets Fat Syntax,” in Loewer and Rey (eds.) 1991.
- Taylor, Kenneth A., 1989, “Narrow Content Functionalism and the Mind-Body Problem,” Noûs, 23: 355–372.
- Walker, Valerie, 1990, “In Defense of a Different Taxonomy: A Reply to Owens,” Philosophical Review, 99: 425–431.
- Werner, Preston J., 2015, “Character (alone) doesn't count: phenomenal character and narrow intentional content,” American Philosophical Quarterly, 52: 261–271.
- White, Stephen, 1982, “Partial Character and the Language of Thought,” Pacific Philosophical Quarterly, 63: 347–365.
- –––, 1991, “Narrow Content and Narrow Interpretation,” in S. White, The Unity of the Self, Cambridge, MA: MIT Press, 1991.
- Williams, Meredith, 1990, “Social Norms and Narrow Content,” Midwest Studies in Philosophy, 15: 425–462.
- Wilson, Robert A., 1995, Cartesian Psychology and Physical Minds: Individualism and the Sciences of Mind, New York: Cambridge University Press.
Academic Tools
- How to cite this entry.
- Preview the PDF version of this entry at the Friends of the SEP Society.
- Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO).
- Enhanced bibliography for this entry at PhilPapers, with links to its database.
Other Internet Resources
- PhilPapers listing of papers on Narrow Content.