Belief

First published Mon Aug 14, 2006; substantive revision Wed Nov 15, 2023

Anglophone philosophers of mind generally use the term “belief” to refer to the attitude we have, roughly, whenever we take something to be the case or regard it as true. To believe something, in this sense, needn’t involve actively reflecting on it: Of the vast number of things ordinary adults believe, only a few can be at the fore of the mind at any single time. Nor does the term “belief”, in standard philosophical usage, imply any uncertainty or any extended reflection about the matter in question (as it sometimes does in ordinary English usage). Many of the things we believe, in the relevant sense, are quite mundane: that we have heads, that it’s the 21st century, that a coffee mug is on the desk. Forming beliefs is thus one of the most basic and important features of the mind, and the concept of belief plays a crucial role in both philosophy of mind and epistemology. The “mind-body problem”, for example, so central to philosophy of mind, is in part the question of whether and how a purely physical organism can have beliefs. Much of epistemology revolves around questions about when and how our beliefs are justified or qualify as knowledge.

Most contemporary philosophers characterize belief as a “propositional attitude”. Propositions are generally taken to be whatever it is that sentences express (see the entry on propositions). For example, if two sentences mean the same thing (e.g., “snow is white” in English, “Schnee ist weiss” in German), they express the same proposition, and if two sentences differ in meaning, they express different propositions. (Here we are setting aside some complications that might arise concerning indexicals; see the entry on indexicals.) A propositional attitude, then, is the mental state of having some attitude, stance, take, or opinion about a proposition or about the potential state of affairs in which that proposition is true—a mental state of the sort canonically expressible in the form “S A that P”, where S picks out the individual possessing the mental state, A picks out the attitude, and P is a sentence expressing a proposition. For example: Ahmed [the subject] hopes [the attitude] that Alpha Centauri hosts intelligent life [the proposition], or Yifeng [the subject] doubts [the attitude] that New York City will exist in four hundred years. What one person doubts or hopes, another might fear, or believe, or desire, or intend—different attitudes, all toward the same proposition. Discussions of belief are often embedded in more general discussions of the propositional attitudes; and treatments of the propositional attitudes often take belief as the first and foremost example.

1. What Is It to Believe?

1.1 Representationalism

It is common to think of believing as involving entities—beliefs—that are in some sense contained in the mind. When someone learns a particular fact, for example, when Kai reads that garden snails are hermaphrodites, they acquire a new belief (in this case, the belief that garden snails are hermaphrodites). The fact in question—or more accurately, a representation, symbol, or marker of that fact—may be stored in memory and accessed or recalled when necessary. In one way of speaking, the belief just is the fact or proposition represented, or the particular stored token of that fact or proposition; in another way of speaking, the belief is the state of having such a fact or representation stored.

It is also common to suppose that beliefs play a causal role in the production of behavior. Continuing the example, we might imagine that after learning about garden snail mating, Kai naturally turns their attention elsewhere, not consciously considering the matter for several days, until they and their ten-year-old daughter start watching an internet video about molluscs. Involuntarily, Kai’s new knowledge about the hermaphroditism of garden snails is called up from memory. Kai says to her, “Did you know that garden snails have both male and female organs at the same time?” It seems plausible to say that Kai’s belief about garden snails, or their possession of that belief, caused, or figured in a causal explanation of, their utterance.

Various elements of this intuitive characterization of belief have been challenged by philosophers, but it is probably fair to say that the majority of contemporary philosophers of mind accept the bulk of this picture, which embodies the core ideas of the representational approach to belief, according to which central cases of belief involve someone’s having in their head or mind a representation with the same propositional content as the belief. (But see §2.2, below, for some caveats, and see the entry on mental representation.) As discussed below, representationalists may diverge in their accounts of the nature of representation, and they need not agree about what further conditions, besides possessing such a representation, are necessary if a being is to qualify as having a belief. Among the more prominent advocates of a representational approach to belief are Fodor (1975, 1981, 1987, 1990), Millikan (1984, 1993), Dretske (1988), Burge (2010), Mandelbaum (2016; Quilty-Dunn and Mandelbaum 2018), and Zimmerman (2018).

One strand of representationalism, endorsed by Fodor, takes mental representations to be sentences in an internal language of thought. To get a sense of what this view amounts to, it is helpful to start with an analogy. Computers are sometimes characterized as operating by manipulating sentences in “machine language” in accordance with certain rules. Consider a simplified description of what happens as one enters numbers into a spreadsheet. Inputs from the keyboard cause the computer, depending on the programs it is running and its internal state, to instantiate or “token” a sentence (in machine language) with the content (translated into English) of, for example, “numerical value 4 in cell A1”. In accordance with certain rules, the machine then displays the shape “4” in a certain location on the monitor, and perhaps, if it is implementing the rule “the values of column B are to be twice the values of column A”, it tokens the sentence “numerical value 8 in cell B1” and displays the shape “8” in another location on the monitor. If we someday construct a robot whose behavior resembles that of a human being, we might imagine it to operate broadly along the lines described above—that is, by manipulating machine-language sentences in accordance with rules, in connection with various potential inputs and outputs. Such a robot might somewhere store the machine-language sentence whose English translation is “the chemical formula for water is H2O”. We might suppose this robot is able to act as does a human who possesses this belief because it is disposed to access this sentence appropriately on relevant occasions: When asked “of what chemical elements is water compounded?”, the robot accesses the water sentence and manipulates it and other relevant sentences in such a way that it produces an appropriate response.
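The rule-governed character of this process can be rendered schematically in a few lines of code (a toy sketch only; the store and the doubling rule below are expository stand-ins for machine-language tokens and program rules, not anything from the literature):

```python
# A toy model of the spreadsheet example: entering a value "tokens" an
# entry in a store, and a rule manipulates the store to token further entries.
cells = {}  # the store of tokened "sentences", e.g., value 4 in cell A1

def enter(cell, value):
    """Tokening a sentence such as 'numerical value 4 in cell A1'."""
    cells[cell] = value
    apply_rules()

def apply_rules():
    """The rule: the values of column B are to be twice the values of column A."""
    for name in list(cells):
        if name.startswith("A"):
            cells["B" + name[1:]] = 2 * cells[name]  # tokens 'value 8 in B1'

enter("A1", 4)
print(cells)  # {'A1': 4, 'B1': 8} -- the monitor then displays '4' and '8'
```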

According to the language of thought hypothesis (see the entry on the language of thought hypothesis), our cognition proceeds rather like such a robot’s. The formulae we manipulate are not in “machine language”, of course, but rather in a species-wide “language of thought”. A sentence in the language of thought with some particular propositional content P is a “representation” of P. On this view, a subject believes that P just in case they have a representation of P that plays the right kind of role—a “belief-like” role—in their cognition. That is, the representation must not merely be instantiated somewhere in the mind or brain, but it must be deployed, or apt to be deployed, in ways we regard as characteristic of belief. For example, it must be apt to be called up for use in theoretical inferences toward which it is relevant. It must be ready for appropriate deployment in deliberation about means to desired ends. It is sometimes said, in such a case, that the subject has the proposition P, or a representation of that proposition, tokened in their “belief box” (though of course it is not assumed that there is any literal box-like structure in the head).
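Schematically, and without any commitment about how such a role is neurally realized, the view might be caricatured as follows (the class and method names here are invented purely for exposition; real proposals say far more about what the "belief-like" role involves):

```python
# A schematic "belief box": stored sentence-like representations, plus a
# method for calling them up when relevant. No literal box is intended.
class Believer:
    def __init__(self):
        self.belief_box = set()  # tokened language-of-thought sentences

    def token(self, sentence):
        """Store a representation in the belief box."""
        self.belief_box.add(sentence)

    def deploy(self, sentence):
        """Call up a stored representation for use in inference or planning."""
        return sentence in self.belief_box

robot = Believer()
robot.token("the chemical formula for water is H2O")
print(robot.deploy("the chemical formula for water is H2O"))  # True
```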

Dretske’s view centers on the idea of representational systems as systems with the function of tracking features of the world (for similar views, see Millikan 1984, 2017; Neander 2017). Organisms, especially mobile ones, generally need to keep track of features of their environment to be evolutionarily successful. Consequently, they generally possess internal systems whose function it is to covary in certain ways with the environment. For example, certain marine bacteria contain internal magnets that align with the Earth’s magnetic field. In the northern hemisphere, these bacteria, guided by the magnets, propel themselves toward magnetic north. Since in the northern hemisphere magnetic north tends downward, they are thus carried toward deeper water and sediment, and away from toxic, oxygen-rich surface water. We might thus say that the magnetic system of these bacteria is a representational system that functions to indicate the direction of benign or oxygen-poor environments. In general, on Dretske’s view, an organism can be said to represent P just in case that organism contains a subsystem whose function it is to enter state A only if P holds, and that subsystem is in state A.
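Dretske's schema might be caricatured as follows (a toy rendering with invented names; a real indicator system is of course far more than a boolean flag, and its function depends on evolutionary or learning history):

```python
# A bare-bones rendering of the indicator schema: a subsystem whose
# function is to enter state A only if condition P holds; the organism
# represents P when that subsystem is in state A.
class IndicatorSubsystem:
    def __init__(self, indicated_condition):
        self.indicates = indicated_condition  # the condition P it has the function of tracking
        self.in_state_A = False               # whether state A is currently tokened

def represents(subsystem):
    """The organism represents P just in case the relevant subsystem is in state A."""
    return subsystem.in_state_A

magnetosome = IndicatorSubsystem("oxygen-poor water lies in this direction")
magnetosome.in_state_A = True  # the internal magnet aligns with magnetic north
if represents(magnetosome):
    print("represented:", magnetosome.indicates)
```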

To have beliefs, Dretske suggests, is to have an integrated manifold of such representational systems, acquired in part through associative learning, poised to guide behavior. Given the lack of such a complex, and the lack of associative learning, magnetosome bacteria cannot, on Dretske’s view, rightly be regarded as literally possessing full-fledged beliefs. But exactly how rich an organism’s representational structure must be for it to have beliefs, and in what ways, Dretske regards as a terminological boundary dispute, rather than a matter of deep ontological significance. (For more on belief in non-human animals see §4 below.)

1.1.1 Representational Structure

If one accepts a representational view of belief, it’s plausible to suppose that the relevant representations are structured in some way—that the belief that P & Q, for instance, shares something structurally in common with the belief that P. To say this is not merely to say that the belief that P & Q has the following property: It cannot be true unless the belief that P is true. Consider the following possible development of Dretske’s representational approach: An organism has developed a system that functions to detect whether P is or is not the case. It’s supposed to enter state alpha when P is true; its being in alpha has the function of indicating P. Also, the organism has developed a separate system for detecting whether P & Q is the case. It’s supposed to enter state beta when P & Q is true; its being in beta has the function of indicating P & Q. But alpha and beta have nothing important in common other than what, in the outside world, they are supposed to represent; they have no structural similarity; one is not compounded in part from the other. Conceivably, all our beliefs could be set up in this way, having as little in common as alpha and beta—one internally unstructured representational state after another. To say that mental representations are structured is in part to deny that our minds work like that.

Among the reasons to suppose that our representations are structured, Fodor argues, are the productivity and systematicity of thought (Fodor 1987; Fodor and Pylyshyn 1988; Aizawa 2003). Thought and belief are “productive” in the sense that we can potentially think or believe an indefinitely large number of things: that elephants despise bowling, that 245 + 382 = 627, that river bottoms are usually not composed of blue beads. If representations are unstructured, each of these different potential beliefs must, once believed, be an entirely new state, not constructed from representational elements previously available. Similarly, thought and belief are “systematic” in the sense that an organism who thinks or believes that Mengzi repudiated Gaozi will normally also have the capacity (if not necessarily the inclination) to think or believe that Gaozi repudiated Mengzi; an organism who thinks or believes that dogs are insipid and cats are resplendent will normally also have the capacity to think or believe that dogs are resplendent and cats are insipid. If representations are structured, if they have elements that can be shuffled and recombined, the productivity and systematicity of thought and belief seem naturally to follow. Conversely, someone who holds that representations are unstructured has, at least, some explaining to do to account for these features of thought. (So also, apparently, does someone who denies that belief is underwritten or implemented by a representational system of any sort.)
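The point about recombination can be illustrated with a toy example (expository only): a finite stock of constituents, recombined, yields both members of each systematically related pair, and indefinitely many contents overall.

```python
# Systematicity from recombinable parts: the capacity to token
# 'Mengzi repudiated Gaozi' brings with it the capacity to token
# 'Gaozi repudiated Mengzi', by rearranging the same constituents.
from itertools import permutations

names = ["Mengzi", "Gaozi"]
predicates = ["repudiated"]

thoughts = [f"{a} {verb} {b}"
            for a, b in permutations(names, 2)
            for verb in predicates]
print(thoughts)  # ['Mengzi repudiated Gaozi', 'Gaozi repudiated Mengzi']
```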

Supposing representations are structured, then, what kind of structure do they have? Fodor notes that productivity and systematicity are features not just of thought but also of language, and concludes that representational structure must be linguistic. He endorses the idea of an innate, species-wide language of thought (as discussed briefly in §1.1 above); others tie the structure more closely to the thinker’s own natural (learned) language (Harman 1973; Field 1978; Carruthers 1996). However, still others assert that the representational structure underwriting belief isn’t language-like at all.

A number of philosophers have argued that our cognitive representations have, or can have, a map-like rather than a linguistic structure (Lewis 1994; Braddon-Mitchell and Jackson 1996; Camp 2007, 2018; Rescorla 2009; though see Blumson 2012 and Johnson 2015 for concerns about whether map-like and language-like structures are importantly distinct). Map-like representational systems are both productive and systematic: By recombination and repetition of its elements, a map can represent indefinitely many potential states of affairs; and a map-like system that has the capacity, for example, to represent the river as north of the mountain will normally also have the capacity to represent, by a re-arrangement of its parts, the mountain as north of the river. Although maps may sometimes involve words or symbols, nothing linguistic seems to be essential to the nature of map-like representation: Some maps are purely pictorial or combine pictorial elements with symbolic elements, like coloration to represent altitude, that we don’t ordinarily think of as linguistic.

The maps view makes nice sense of the fact that when a person changes one belief, a multitude of other beliefs seem also to change simultaneously and effortlessly: If you shift a mountain farther north on a map, for example, you immediately and automatically change many other aspects of the representational system (the distance between the mountain and the north coast, the direction one must hike to go from the mountain to the oasis, etc.). In contrast, if you change the linguistic representation “the mountain peak is 15 km north of the river” to “the mountain peak is 20 km north of the river”, no other representation necessarily changes: It takes a certain amount of inferential work to ramify the consequences through the rest of the system. Since it doesn’t seem like we’re constantly making such a plethora of inferences, the maps view might have an advantage here. On the other hand, perhaps just because the linguistic view requires inference for what appears to happen automatically on the maps view, the linguistic view can more easily account for failures of rationality, in which not all the necessary changes are made and the subject ends up with an inconsistent view. Indeed, generally speaking, it’s unclear how the map view can accommodate inconsistent beliefs unless one allows a proliferation of maps, with the complications that ensue (like redundancy and mechanisms for relating the maps; Yalcin 2021). Certain sorts of indeterminacy may also be more difficult to accommodate in map-like than in language-like structures. A linguistic representation like “there are some lakes east of the mountain” can leave completely unspecified how many lakes, of what shape, and where; a map does not, it seems, as easily do this. One further point of apparent difference between the two views will be discussed in §2.2 below. Generally speaking, one might worry that the maps view overgenerates and overspecifies beliefs, while the linguistic view undergenerates and underspecifies them.
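The contrast can be rendered in a toy model (with invented landmark coordinates, purely for illustration): in map-like storage, distances and directions are derived from positions, so a single change ramifies automatically; in sentence-like storage, each fact is a separate token.

```python
# Map-like storage: locations are coordinates, and facts such as distances
# are derived from them, so moving one landmark "updates" them all at once.
import math

landmarks = {"mountain": (0.0, 15.0), "river": (0.0, 0.0), "oasis": (10.0, 5.0)}

def distance(a, b):
    (x1, y1), (x2, y2) = landmarks[a], landmarks[b]
    return math.hypot(x2 - x1, y2 - y1)

print(distance("mountain", "river"))  # 15.0
landmarks["mountain"] = (0.0, 20.0)   # shift the mountain farther north
print(distance("mountain", "river"))  # 20.0 -- no further inference needed

# Sentence-like storage holds each fact as a separate token; changing one
# sentence leaves the others untouched until inferential work is done:
sentences = {"the mountain peak is 15 km north of the river",
             "the oasis is southeast of the mountain"}
```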

A third and very different way of thinking about representational structure arises from the perspective of connectionism, a position in cognitive science and computational theory. According to connectionism, cognition proceeds by activation streaming through a series of “nodes” connected by adjustable “connection weights”—somewhat as neurons in the brain can have different levels of activation and different strengths of connection with one another. It is sometimes suggested (e.g., by van Gelder 1990; Smolensky 1995; Shea 2007) that the structure of connectionist networks is representational but non-linguistic or non-“compositional”; and perhaps so also is human representational structure. However, it would take us too far afield to pursue this technical issue here. (For more on this topic see the entry on connectionism.)
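For a rough flavor of the contrast (a toy sketch, not a serious model of any connectionist architecture): activation streams from input nodes to output nodes along adjustable weights, and no sentence-like representation appears anywhere in the computation.

```python
# A toy connectionist layer: each output node sums the weighted
# activations of the input nodes; the weights are what learning adjusts.
def layer(activations, weights):
    """One layer of activation streaming along connection weights."""
    return [sum(w * a for w, a in zip(row, activations)) for row in weights]

inputs = [1.0, 0.5]            # activation levels of two input nodes
weights = [[0.8, -0.3],        # connection weights into output node 1
           [0.2, 0.6]]         # connection weights into output node 2
print([round(v, 2) for v in layer(inputs, weights)])  # [0.65, 0.5]
```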

It would perhaps be surprising if the representational structures of human cognition corresponded precisely with any familiar technology. A pluralistic, computational approach to representational structure would allow for multiple format types, each characterized by the computational vehicles’ allowable value ranges and the constraints on the relations among those values (Vernazzani and Mollo forthcoming).

1.2 Dispositionalism

While representationalists like Fodor, Dretske, and Mandelbaum contend that having the right internal, representational structure is essential to having beliefs, another group of philosophers treats the internal structure of the mind as of only incidental relevance to the question of whether a being is properly described as believing. One way to highlight the difference between this view and representationalism is this: Imagine that we discover an alien being, of unknown constitution and origin, whose behavior and overall behavioral dispositions are perfectly normal by human standards. “Rudolfo”, say, emerges from a spacecraft and integrates seamlessly into U.S. society, becoming a tax lawyer, football fan, and Democratic Party activist. Even if we know next to nothing about what is going on inside his head, it may seem natural to say that Rudolfo has beliefs much like ours—for example, that the 1040 is normally due April 15, that a field goal is worth 3 points, and that labor unions tend to support Democratic candidates. Perhaps we can coherently imagine that Rudolfo does not manipulate sentences in a language of thought or possess internal representational structures of the right sort. Perhaps it is conceptually, even if not physically, possible that he has no complex, internal, cognitive organ, no real brain. But even if it is granted that a creature must have human-like representations in order to behave thoroughly like a human being, one might still think that it is the pattern of actual and potential behavior that is fundamental in belief—that representations are essential to belief only because, and to the extent that, they ground such a pattern. Dispositionalists and interpretationists are drawn to this way of thinking.

Traditional dispositional views of belief assert that for someone to believe some proposition P is for that person to possess one or more particular behavioral dispositions pertaining to P. Often cited is the disposition to assent to utterances of P in the right sorts of circumstances (if one understands the language, wishes to reveal one’s true opinion, is not physically incapacitated, etc.). Other relevant dispositions might include the disposition to exhibit surprise should the falsity of P make itself evident, the disposition to assent to Q if one is shown that P implies Q, and the disposition to depend on P’s truth in executing one’s plans. Perhaps all such dispositions can be brought under a single heading, which is, most generally, being disposed to act as though P is the case. Such actions are normally taken to be at least pretty good prima facie evidence of belief in P; the question is whether being disposed, overall, so to act is tantamount to believing P, as the dispositionalist thinks, or whether it is merely an outward sign of belief. Braithwaite (1932–1933) and Marcus (1990) are prominent advocates of the traditional dispositional approach to belief (though Braithwaite emphasizes in his analysis another form of belief, rather like “occurrent” belief as described in §2.1 below).

There are two standard objections to traditional dispositional accounts of belief. The first, tracing back at least to Chisholm (1957), assumes that the dispositionalist’s aim is to reduce or analyze facts about belief entirely into facts about outward behavior, facts specifiable without reference to other beliefs, desires, inner feelings, and so forth (see the entry on philosophical behaviorism). Such a reduction or analysis appears impossible for the following reason: People with the same belief may behave very differently, depending on their other beliefs, desires, and so forth. For example, a person who believes that it will rain will only be disposed to take an umbrella if they also believe that the umbrella will ward off the water and if they don’t want to get wet. Change the surrounding beliefs and desires and very different behavior may result. A dispositionalist attempting to specify the particular behavioral dispositions associated with, for example, the belief that it’s raining will then either get it wrong about the dispositions of some people (such as those who like to get wet) or will be forced to incorporate into their dispositional analysis conditional antecedents invoking the very ideas they are trying to analyze or reduce away—saying, for example, that the person who believes that P will behave in such-and-such a way if they also believe X and desire Y—apparently dooming the reductionist project. (It may be possible to avoid this objection by invoking a “Ramsey”-like approach to the reduction [see the section on Functional States and Ramsey Sentences in the entry on functionalism and Lewis 1972], but this type of analysis was not widely discussed until after traditional dispositional approaches to belief had gone largely out of fashion.)

The second standard objection to traditional dispositional accounts of belief is to note the loose connection between belief and behavior in some cases—for example, in a recently paralyzed person, or in someone who wants to keep an opinion private (e.g., a Muscovite who believes, in 1937, that Stalin’s purges are morally wrong), or in matters of very little practical relevance (e.g., an American homebody’s belief that there is at least one church in Nice). Again, the traditional dispositionalist seems faced with a choice between oversimplifying (and thus mischaracterizing some people’s dispositions) and loading the dispositions with potentially problematic or unwieldy conditional antecedents (e.g., they’d get the umbrella if their paralysis healed; they’d speak up if the political climate changed). On the other hand, however, the demand for an absolutely precise specification of the conditions under which a disposition will be manifested, without exception, may be excessive. As Cartwright (1983) has noted, even perfectly respectable claims in the physical sciences often hold only ceteris paribus or “all else being equal”.

In light of these concerns and others, most recent philosophers sympathetic with the view described in the first paragraph of this section have abandoned traditional dispositionalism. They divide into roughly two classes, which we may call liberal dispositionalists and interpretationists. Liberal dispositionalists avoid the first objection by abandoning the reductionist project associated with traditional dispositionalism. They permit appeal to other mental states in specifying the dispositions relevant to any particular belief—possibly including other beliefs and desires. They also broaden the range of dispositions considered relevant to the possession of a belief so as to include at least some dispositions to undergo private mental episodes that do not manifest in outwardly observable behavior—dispositions, for example, for the subject to feel (and not just exhibit) surprise should they discover the falsity of P, for them privately to draw conclusions from P, to feel confidence in the truth of P, to utter P silently in inner speech, and so forth. This appears also to mitigate the second objection to some extent (though see Moore and Botterill 2023): The Muscovite possesses their belief about Stalin’s purges at least as much in virtue of the things they say silently in inner speech and the disapproval they privately feel as in virtue of their disposition to express that opinion were the political climate to change. Advocates of views of this sort include Price (1969), Baker (1995), Schwitzgebel (2002, 2013), and arguably Ryle (1949) and Ramsey (1926 [1990], 1927–1929 [1991]; see Wright 2017). Smithies (forthcoming) sheds the behavioral focus of traditional dispositionalism entirely, arguing that to believe is to be disposed to feel (or to occurrently feel) conviction (see also Cohen 1992).

However, a philosopher approaching belief with the specific goal of defending physicalism or materialism—the view that everything in the world, including the mind, is wholly physical or material (see physicalism)—might have reason to be dissatisfied with liberal dispositionalism, for the very reason that it abandons the reductionist project. Although liberal dispositional accounts of belief are consistent with physicalism, they do not substantially advance that thesis, since they relate belief to other mental states that may or may not be seen as physical. The defense of physicalism was one of the driving forces in philosophy of mind in the period during which the most influential approaches to belief in contemporary analytic philosophy of mind were developed—the 1960s through the 1980s—and it was one of the principal reasons philosophers were interested in accounts of propositional attitudes such as belief. Consequently, the failure of liberal dispositionalism to advance the physicalist project might be seen as an important drawback.

1.3 Interpretationism

Interpretationism shares with dispositionalism the emphasis on patterns of action and reaction, rather than internal representational structures, but retains the focus, abandoned by the liberal dispositionalist, on observable behavior—behavior interpretable by an outside observer. Since behavior is widely assumed to be physical, interpretationism can thus more easily be seen as advancing the physicalist project. The two most prominent interpretationists have been Dennett (1978, 1987, 1991) and Davidson (1984; see Donald Davidson; also see Lewis 1974; Mölder 2010; Curry 2020).

To gain a sense of Dennett’s view, consider three different methods we can use to predict the behavior of a human being. One method, which involves taking what Dennett calls the “physical stance”, is to apply our knowledge of physical law. We can predict that a diver will trace a parabolic trajectory to the water because we know how objects of that mass and size behave in free fall near the surface of the Earth. A second method, which involves taking the “design stance”, is to attribute functions to the system or its parts and to predict that the system will function properly. We can predict that a jogger’s pulse will increase as she heads up the hill because of what we know about exercise and the proper function of the circulatory system. A third method, which involves taking the “intentional stance”, is to attribute beliefs and desires to the person, and then to predict that they will behave rationally, given those beliefs and desires. Much of our prediction of human behavior appears to involve such attribution (though see Andrews 2012). Certainly, treating people as mere physical bodies or as biological machines will not, as a practical matter, get us very far in predicting what is important to us.

On Dennett’s view, a system with beliefs is a system whose behavior, while complex and difficult to predict when viewed from the physical or the design stance, falls into patterns that may be captured with relative simplicity and substantial if not perfect accuracy by means of the intentional stance. The system has the particular belief that P if its behavior conforms to a pattern that can be effectively captured by taking the intentional stance and attributing the belief that P. For example, we can say that Heddy believes that a hurricane may be coming because attributing to her that belief (along with other related beliefs and desires) helps reveal the pattern, invisible from the physical and design stances, behind her boarding up her windows, making certain phone calls, stocking up on provisions, etc. All there is to having beliefs, according to Dennett, is embodying patterns of this sort. Dennett acknowledges that his view has the unintuitive consequence that a sufficiently sophisticated chess-playing machine would have beliefs if its behavior is very complicated from the design stance (which would involve appeal to its programmed strategies) but predictable with relative accuracy and simplicity from the intentional stance (attributing the desire to defend its queen, the belief that you won’t sacrifice a rook for a pawn, etc.).

Davidson also characterizes belief in terms of practices of belief attribution. He invites us to imagine encountering a being with a wholly unfamiliar language and then attempting the task of constructing, from observation of the being’s behavior in its environment, an understanding of that language (e.g., 1984, pp. 135–137). Success in this enterprise would necessarily involve attributing beliefs and desires to the being in question, in light of which its utterances make sense. An entity with beliefs is a being for whom such a project is practicable in principle—a being that emits, or is disposed to emit, a complex pattern of behavior that can productively be interpreted as linguistic, rational, and expressive of beliefs and desires.

Dennett and Davidson both endorse the “indeterminacy” of belief attributions: In at least some cases, multiple incompatible interpretive schemes may be equally good, and thus there may be no fact of the matter which of those schemes is “really” the correct one, and thus whether the subject “really” believes P, if belief that P is attributed by one scheme but not by the other.

1.4 Functionalism

Many philosophers identify themselves as functionalists (see functionalism) about mental states in general or belief in particular. Functionalism about mental states is the view that what makes something a mental state of a particular type are its actual and potential, or its typical, causal relations to sensory stimulations, behavior, and other mental states (seminal sources include Armstrong 1968; Fodor 1968; Lewis 1972, 1980; Putnam 1975; Block 1978). Functionalists generally contrast their view with the view that what makes something a mental state of a particular type are facts about its internal structure. To understand this distinction, it may be helpful to begin with some non-mental examples. Arguably, what makes something a streptococcal bacterium, or a cube, is its shape or internal structure; its causal history or proneness to produce particular effects on particular occasions is only secondarily relevant, if at all. In contrast, whether something is a hard drive or not is not principally a matter of internal structure. A hard drive could be made of plastic or steel, employ magnetic tape or lasers. What matters are the causal relationships it’s prone to enter with a computer: Under certain promptings, it enters states such that, under certain further promptings, it will generate outputs of a certain sort. Internal structure is relevant only secondarily, insofar as it grounds these causal capacities. Likewise, according to the functionalist, what makes a state pain is not its particular neural configuration. People and animals with very different neural configurations could all equally be in pain (even, conceivably, a Martian with an internal structure radically different from ours could suffer pain). What matters is that the subject is in a state that (roughly) is apt to be caused by tissue damage or tissue stress and that, in turn, is apt to cause signs of distress, withdrawal, future avoidance of the painful stimulus, and (in verbal subjects) thoughts and utterances like “that hurts!”.

Philosophers frequently endorse functionalism about belief without even briefly sketching out the various particular functional relationships that are supposed to be involved, though Loar (1981) is a notable exception to this tendency (see also Leitgeb 2017). However, among the causal relationships contemporary philosophers have often seen as characteristic of belief are the following (these are sketched here only roughly; they come in many versions differing in nuance):

(1) Reflection on propositions (e.g., [Q] and [if Q then P]) from which P straightforwardly follows, if one believes those propositions and is not antecedently committed to the falsity of P, typically causes the belief that P.

(2) Directing perceptual attention to the perceptible properties of things, events, or states of affairs, in conditions favorable to accurate perception, typically causes the belief that those things, events, or states of affairs have those properties (e.g., visually attending to a red shirt in good viewing conditions will typically cause the belief that the shirt is red).

(3) Believing that performing action A would lead to event or state of affairs E, conjoined with a desire for E and no overriding contrary desire, will typically cause an intention to do A.

(4) Believing that P, in conditions favoring sincere expression of that belief, will typically lead to an assertion of P.

Loar emphasizes versions of (2) and (3) over (1) and (4), but one sees conditions of this sort at least briefly alluded to by a number of functionalist philosophers, including Armstrong (1973), Dennett (1969, 1978), Stalnaker (1984), Fodor (1990), Pettit (1993), Shoemaker (2003), and Zimmerman (2018). For the functionalist, to believe just is to be in a state that plays this sort of causal role. The intimate connection, noted in (3), between belief and action is also historically rooted in the pragmatist tradition (Bain 1859/1876; Peirce 1878).
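Purely as a schematic illustration (this is no philosopher's official proposal, and propositions are crudely modeled as strings), conditions (1)–(4) might be caricatured as transition rules in a toy agent:

```python
# A caricature of conditions (1)-(4): belief as whatever state occupies
# these rough causal roles.
class ToyAgent:
    def __init__(self):
        self.beliefs = set()
        self.desires = set()
        self.intentions = set()

    def infer(self, p, q):
        """(1) Believing P and 'if P then Q' typically causes believing Q."""
        if p in self.beliefs and f"if {p} then {q}" in self.beliefs:
            self.beliefs.add(q)

    def perceive(self, p):
        """(2) Attending under favorable conditions typically causes belief."""
        self.beliefs.add(p)

    def deliberate(self, action, outcome):
        """(3) Belief that A leads to E, plus desire for E, causes intending A."""
        if f"doing {action} leads to {outcome}" in self.beliefs and outcome in self.desires:
            self.intentions.add(action)

    def assert_sincerely(self, p):
        """(4) Belief that P, in sincere conditions, leads to asserting P."""
        return p if p in self.beliefs else None

agent = ToyAgent()
agent.perceive("the shirt is red")
print(agent.assert_sincerely("the shirt is red"))  # 'the shirt is red'
```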

As the list of names of the previous paragraph suggests, functionalism is compatible with either a representationalist approach to belief (as in Fodor) or an interpretationist one (as in Dennett). (The interpretationist, of course, will have to treat the relevant functional states as posits of an interpretative theory or scheme.) Dispositional accounts of belief, too, can be functionalist. Indeed, dispositional accounts can be seen as a special or limiting case of functional accounts. To see this, it’s helpful to divide the causal relations appealed to by functionalism into the backward-looking and the forward-looking. Backward-looking causal relations pertain to what actually, potentially, or typically causes the state in question; forward-looking causal relations pertain to what effects the state in question actually, potentially, or typically has. Thus (1) and (2) above are backward-looking causal relations, while (3) and (4) are forward-looking. We might, then, see the dispositionalist as a functionalist who thinks only the forward-looking causal relations are definitive of belief: To believe is to be in a state apt to cause such-and-such behavioral (or other) manifestations. (This view is, of course, compatible with accepting the existence of regularities like (1) and (2), as long as they are not regarded as defining characteristics of belief.) Two caveats, however, should accompany this reduction of dispositionalism to functionalism: First, insofar as functionalism about belief requires a causal relationship between the belief state and its manifestations in behavior (or in other mental states), it will exclude dispositionalists like Ryle (1949) who don’t view the disposition-manifestation relationship causally (for discussion, see Section 6 (‘The causal efficacy of dispositions’) of the entry on dispositions). Second, the liberal dispositionalist may wish to demur from the functionalist’s usual commitment to the reducibility of facts about functionally-definable mental states, en masse and in principle (allowing for the intricate network of interrelationships among them), to facts about sensory inputs and outward behavior.

The compatibility of functionalism and representationalism is not evident on its face, though a number of prominent philosophers appear to embrace both positions (e.g., Fodor 1968, 1975, 1981, 1990; Armstrong 1973; Harman 1973; Lycan 1981a, 1981b; Stalnaker 1984; Lewis 1994). As Millikan (1984), Papineau (1984), and others have suggested, it seems one thing to say that to believe is to be in a state that fills a particular causal role, and it seems quite another to say that beliefs are essentially states that represent how things stand in the world. How can something represent the world outside simply by virtue of playing a certain causal role in a cognitive system? Suppose, for example, that a state represents by virtue of having an indicator function of the sort described at the end of §1.1 above. The indicator function of an internal state or system would seem, at least sometimes and in part, to depend constitutively on the evolutionary history of that state or system, or its learning history, and not simply on the causal relationships it is currently disposed to enter. Despite the word “function” in “functionalism”, it’s not clear that standard functionalist accounts, limited as they are to appeal to a state’s actual, potential, or typical causal roles, can incorporate facts about a system’s evolutionary history or learning history: Conceivably, for example, two states in different individuals may have exactly analogous causal roles, yet differ in their (as Millikan says) “proper function” because of differences in the evolutionary or learning history of those systems.

Three escapes from this potential difficulty suggest themselves. One is to endorse a version of “conceptual [or functional] role semantics” according to which the representational status and content of a mental state is reducible just to facts about what is apt to cause and to be caused by the mental state in question—that is, to deny the relevance of remote evolutionary or learning history (e.g., Harman 1973, 1987). Another is to accept that causal role determines the representational status of a mental state (i.e., that it is a representation) but does not fully specify representational content (i.e., how that representation represents things as being); but this seems to involve abandoning full-blown functionalism. A third is to interpret more liberally what it is for a mental state to be “typically caused” (or perhaps “normally caused”) by some event or state of affairs: Perhaps it is enough that in the young organism, or its evolutionary ancestors, mental states of that sort were caused in a particular way, or the system was selected to be responsive to certain sorts of environmental factors. Such claims may be more easily reconcilable with certain canonical statements of functionalism (such as Lewis 1980) than with others (such as Putnam 1975). The issue has not been as fully discussed as it should be.

1.5 Eliminativism, Instrumentalism, and Fictionalism

Some philosophers have denied the existence of beliefs altogether. Advocates of this view, generally known as eliminativism, include Churchland (1981), Stich (in his 1983 book; he subsequently moderated his opinion), and Jenson (2016). On this view, people’s everyday conception of the mind, their “folk psychology”, is a theory on a par with folk theories about the origin of the universe or the nature of physical bodies. And just as our pre-scientific theories on the latter topics were shown to be radically wrong by scientific cosmology and physics, so also will folk psychology, which is essentially still pre-scientific, be overthrown by scientific psychology and neuroscience once they have advanced far enough. According to eliminativism, once folk psychology is overthrown, strict scientific usage will have no place for reference to most of the entities postulated by folk psychology, such as belief. Beliefs, then, like “celestial spheres” or “phlogiston”, will be judged not actually to exist, but rather to be the mistaken posits of a radically false theory. We may still find it convenient to speak of “belief” in informal contexts, if scientific usage is cumbersome, much as we still speak of “the sun going down”, but if the concept of belief does not map onto the categories described by a mature scientific understanding of the mind, then, literally speaking, no one believes anything. For further discussion of eliminativism and the considerations for and against it, see the entry on eliminative materialism.

Instrumentalists about belief regard belief attributions as useful for certain purposes, but hold that there are no definite underlying facts about what people really believe, or that beliefs are not robustly real, or that belief attributions are never in the strictest sense true. One sort of instrumentalism—what we might call hard instrumentalism—denies that beliefs exist in any sense. Hard instrumentalism is thus a form of eliminativism, conjoined with the thesis that belief-talk is nonetheless instrumentally useful (e.g., Quine 1960, p. 221 [but for a caveat see pp. 262–266]). Another type of instrumentalism, which we might call soft instrumentalism, grants that beliefs are real, but only in a less robust sense than is ordinarily thought. Dennett (1991) articulates a view of this sort. Consider as an analogy: Is the equator real? Well, not in the sense that there’s a red stripe running through the Congo; but saying that a country is on the equator says something true about its position relative to other countries and how it travels on the spinning Earth. Are beliefs real? Well, not perhaps in the sense of being representations stored somewhere in the mind; but attributing a belief to someone says something true about that person’s patterns of behavior and response. Beliefs are as real as equators, or centers of gravity, or the average Canadian. The soft instrumentalist holds that such things are not robustly real—not as real as mountains or masses or individual, actual Canadians. They are in some sense inventions that capture something useful in the structure of more robustly real phenomena. Soft instrumentalism in this sense comports naturally with approaches to belief such as dispositionalism and interpretationism, to the extent those positions treat belief attribution simply as a convenient means of pointing toward certain patterns in a subject’s real and hypothetical behavior (see also Poslajko 2022).

Similarly to instrumentalism, fictionalism treats belief attribution practices as potentially useful while downplaying or denying the real existence of the attributed beliefs (Demeter, Parent, and Toon, eds., 2022). Instrumentalism and fictionalism are not incompatible. However, fictionalism emphasizes the resemblance between belief attribution and fictional storytelling, while instrumentalism emphasizes the resemblance to devising a predictively successful scientific instrument or model.

1.6 Normativism

Normativists hold that belief necessarily has a normative or evaluative dimension. That is, they emphasize the idea that it is central to a mental state’s being a belief that it is necessarily defective in a certain way if it is false, unjustified, or not rationally related to other attitudes. Shah and Velleman (2005) argue that conceiving of an attitude as a belief that P entails conceiving of it as governed by a norm of truth, that is, as an attitude that is correct if and only if P is true. Engel (2018) argues that among propositional attitudes, belief is the only one whose “correctness condition” is truth, distinguishing it from other closely related mental states, such as acceptances and epistemic feelings. (See also Wedgwood 2002; Gibbard 2005; Glüer and Wikforss 2013; McHugh and Whiting 2014.) Zangwill (2005) argues that part of the essence of belief is that if we believe that P and that P implies Q we should believe that Q (note the difference from the functionalist view that possessing those two beliefs typically causes belief that Q). Helton (2020) and Flores (forthcoming) argue that believing entails having the capacity to rationally update one’s beliefs.

Since normativism commits only to one necessary condition for a mental state to qualify as a belief, it is not by itself a full positive account of the nature of belief and is compatible with most of the approaches described above. Representationalist normativism, for example, starts from the idea that representational systems are functional systems of a certain sort (Millikan 1984; Dretske 1988), and function appears to be a normative concept, implying at least a contrast with malfunction. Burge (2010) argues that the “primary constitutive function” of believing is the production of veridical propositional representations. More broadly, belief has often been described as having a “direction of fit” in the sense that beliefs (unlike, for example, desires) ought to fit with, or get it right about, or match up to, the states of affairs they describe or represent (Anscombe 1957/1963; Searle 1983; Humberstone 1992; Frost 2014). If you believe that P and P is false, you have erred or made a mistake, whereas if you desire that P and P is false, you have not in the same way erred or made a mistake.

2. Types, Degrees, and Relatives of Belief

2.1 Occurrent Versus Dispositional Belief

Philosophers often distinguish dispositional (alternatively, standing) from occurrent belief. This distinction depends on the more general distinction between dispositions and occurrences. Examples of dispositional statements include:

(1a) Corina runs a six-minute mile,

(1b) Leopold is excitable,

(1c) salt dissolves in water.

These statements can all be true even if, at the time they are uttered, Corina is asleep, Leopold is relaxed, and no salt is actually dissolved in any water. They thus contrast with statements about particular occurrences, such as:

(2a) Corina is running a six-minute mile,

(2b) Leopold is excited,

(2c) some salt is dissolving in water.

Although (1a-c) can be true while (2a-c) are false, (1a-c) cannot be true unless there are conditions under which (2a-c) would be true. We cannot say that Corina runs a six-minute mile unless there are conditions under which she would in fact do so. A dispositional claim is thus a claim not about anything that is actually occurring at the time, but rather about what is prone to occur under certain circumstances.

Suppose Harry thinks plaid ties are hideous. Only rarely does the thought or judgment that they are hideous actually come to the forefront of his mind. When it does, he possesses the belief occurrently. The rest of the time, Harry possesses the belief only dispositionally. The occurrent belief comes and goes, depending on whether circumstances elicit it; the dispositional belief endures. The common representationalist warehouse model of memory and belief suggests a way of thinking about this. A subject dispositionally believes P if a representation with the content P is stored in their memory or “belief box” (in the central, “explicit” case: see §2.2). When that representation is retrieved from memory for active deployment in reasoning or planning, the subject occurrently believes P. As soon as they move to the next topic, the occurrent belief ceases.
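The warehouse model can be caricatured in a few lines (a toy sketch; nothing here is claimed about actual memory mechanisms, and the stored contents are modeled crudely as strings):

```python
# The warehouse picture: stored representations persist ("dispositional"
# belief); retrieval into active use makes a belief "occurrent"; moving
# on to another topic ends the occurrence but not the stored belief.
memory = {"plaid ties are hideous"}   # enduring, stored representations
active = set()                        # representations currently deployed

def retrieve(p):
    """Circumstances elicit the belief: it becomes occurrent."""
    if p in memory:
        active.add(p)

def move_to_next_topic():
    """The occurrent belief ceases; the stored belief endures."""
    active.clear()

retrieve("plaid ties are hideous")
print(active)   # occurrent: {'plaid ties are hideous'}
move_to_next_topic()
print(memory)   # still dispositionally believed: {'plaid ties are hideous'}
```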

As the last paragraph suggests, one needn’t adopt a dispositional approach to belief in general to regard some beliefs as dispositional in the sense here described. In fact, a strict dispositionalism may entail the impossibility of occurrent belief: If to believe something is to embody a particular dispositional structure, then a thought or judgment might not belong to the right category of things to count as a belief. The thought or judgment, P, may be a manifestation of an overall dispositional structure characteristic of the belief that P, but it itself is not that structure.

Though the distinction between occurrent and dispositional belief is widely employed, it is rarely treated in detail. Important discussions include Price (1969), Armstrong (1973), Lycan (1986), Searle (1992), Audi (1994), and Bartlett (2018). David Hume (1740) famously offers an account of belief that treats beliefs principally as occurrences (see the section on Belief in Hume), in which he is partly followed by Braithwaite (1932–1933) and Gertler (2007).

2.2 Varieties of Implicit Belief

2.2.1 Belief Without Explicit Representation

It seems natural to say that you believe that the number of planets is less than 9, and also that the number of planets is less than 10, and also that the number of planets is less than 11, and so on, for any number greater than 8 that one cares to name. On a simplistic reading of the representational approach, this presents a difficulty. If each belief is stored individually in representational format somewhere in the mind, it would seem that we must have a huge number of stored representations relevant to the number of planets—more than it seems plausible or necessary to attribute to an ordinary human being. And of course this problem generalizes easily.

The advocate of the maps view of representational structure (see §1.1.1, above) can, perhaps, avoid this difficulty entirely, since it seems a map of the solar system does represent all these facts about the number of planets within a simple, tractable system. However, representationalists have more commonly responded to this issue by drawing a distinction between explicit and implicit belief. One believes P explicitly if a representation with that content is actually present in the mind in the right sort of way—for example, if a sentence with that content is inscribed in the “belief box” (see §1.1 above). One believes P implicitly (or tacitly) if one believes P, but the mind does not possess, in a belief-like way, a representation with that content. (Philosophers sometimes use the term dispositional to refer to beliefs that are implicit in the present sense—but this invites confusion with the occurrent-dispositional distinction discussed above (§2.1). Implicit beliefs are, perhaps, necessarily dispositional in the sense of the previous subsection, if occurrently deploying a belief requires explicitly tokening a representation of it; but explicit beliefs may plausibly be dispositional or occurrent.)

Perhaps all that’s required to implicitly believe something is that the relevant content be swiftly derivable from something one explicitly believes (Dennett 1978, 1987). Thus, in the planets case, we may say that you believe explicitly that the number of planets is 8 and only implicitly that the number of planets is less than 9, less than 10, etc. Of course, if swift derivability is the criterion, then although there may be a sharp line between explicit and implicit beliefs (depending on whether the representation is stored or not), there will not be a sharp line between what one believes implicitly and what, though derivable from one’s beliefs, one does not actually believe, since swiftness is a matter of degree (see Field 1978; Lycan 1986).
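A toy illustration of swift derivability (expository only): one explicitly stored content settles indefinitely many implicit ones on demand, with no additional storage.

```python
# Implicit belief as swift derivability: the single explicitly stored
# content below settles 'fewer than 9', 'fewer than 10', and so on,
# each derived when needed rather than stored.
EXPLICIT_BELIEF = ("the number of planets is", 8)  # the stored representation

def implicitly_believes_fewer_than(n):
    """Derived on demand, not stored: believed only implicitly."""
    return EXPLICIT_BELIEF[1] < n

print(implicitly_believes_fewer_than(9))     # True
print(implicitly_believes_fewer_than(1000))  # True -- and so on, for any n > 8
```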

The representationalist may also grant the possibility of implicit belief, or belief without explicit representation, in cases of the following sort (discussed in Dennett 1978; Fodor 1987). A chess-playing computer is explicitly programmed with a large number of specific strategies, in consequence of which it almost always ends up trying to get its queen out early; but nowhere is there any explicitly programmed representation with the content “get the queen out early”, or any explicitly programmed representation from which “get the queen out early” is swiftly derivable. The pattern emerges as a product of various features of the hardware and software, despite its not being explicitly encoded. While most philosophers would not want to say that any currently existing chess-playing computer literally has the belief that it should get its queen out early, it is clear that an analogous possibility could arise in the human case and thus threaten representationalism, unless representationalism makes room for a kind of emergent, implicit belief that arises from more basic structural facts in this way. However, if the representationalist grants the presence of belief whenever there is a belief-like pattern of actual or potential behavior, regardless of underlying representational structure, then the position risks collapsing into dispositionalism or interpretationism. The issue of how to account for apparent cases of belief without explicit representation poses an underexplored challenge to representationalism (see Schwitzgebel forthcoming).

2.2.2 Belief Without Conscious Endorsement

Empirical psychologists have drawn a contrast between implicit and explicit memory or knowledge, but this distinction does not map neatly onto the implicit/explicit belief distinction described in Section 2.2.1. In the psychologists’ sense, explicit memory involves the conscious recollection of previously presented information, while implicit memory involves the facilitation of a task or a change in performance as a result of previous exposure to information, without, or at least not as a result of, conscious recollection (Schacter 1987; Schacter and Tulving 1994; though see Squire 2004). For example, if a subject is asked to memorize a list of word pairs—bird/truck, stove/desk, etc.—and is then cued with one word and asked to provide the other, the subject’s explicit memory is being tested. If the subject is brought back two weeks later, and has no conscious recollection of most of the word pairs on the list, then they have no explicit memory of them. However, implicit memory of the word-pairs would be revealed if they found it easier to learn the “forgotten” pairs a second time. Knowledge that is “implicit” in this sense will normally not be implicit in the sense of the previous subsection (if it were swiftly derivable from what one explicitly believes, presumably one could answer the test questions correctly); it’s also at least conceptually possible that some such psychologically implicit knowledge may be stored “explicitly” in the sense of the previous subsection.

A different empirical literature addresses the issue of “implicit attitudes”, for example implicit racism or sexism, which are often held to conflict with verbally or consciously espoused attitudes. Such implicit attitudes might be revealed by emotional reactions (e.g., more negative affect among White participants when assigned to a co-operative task with a Black person than with a White person) or by association or priming tasks (e.g., faster categorization responses when White participants are asked to pair negative words with dark-skinned faces and positive words with light-skinned faces than vice versa). However, it remains controversial to what extent tests of this sort reveal subjects’ (implicit) beliefs, as opposed to merely culturally-given associations or attitudes other than full-blown belief (Wilson, Lindsey, and Schooler 2000; Kihlstrom 2004; Lane et al. 2007; Hunter 2011; Tumulty 2014; Levy 2015; Machery 2016; Madva 2016; Zimmerman 2018; Brownstein, Madva, and Gawronski 2019). Gendler, for example, suggests that we regard such implicit attitudes as arational and automatic aliefs rather than genuine evidence-responsive beliefs (Gendler 2008a–b; for critique see Schwitzgebel 2010; Mandelbaum 2013).

2.3 Degree of Belief

Jessie believes that Stalin was originally a Tsarist mole among the Bolsheviks, that her son is at school, and that she is eating a tomato. She feels different degrees of confidence with respect to these different propositions. The first she recognizes to be a speculative historical conjecture; the second she takes for granted, though she knows it could be false; the third she regards as a near-certainty. Consequently, Jessie is more confident of the second proposition than the first and more confident of the third than the second. We might suppose that every subject holds each of their beliefs with some particular degree of confidence. In general, the greater the confidence one has in a proposition, the more willing one is to depend on it in one’s actions.

One common way of formalizing this idea is by means of a scale from 0 to 1, where 0 indicates absolute certainty in the falsity of a proposition, 1 indicates absolute certainty in its truth, and .5 indicates that the subject regards the proposition just as likely to be true as false. This number then indicates one’s credence or degree of belief. Standard approaches equate degree of belief with the maximum amount the subject would, or alternatively should, be willing to wager on a bet that pays nothing if the proposition is false and 1 unit if the proposition is true. So, for example, if the subject thinks that the proposition “the restaurant is open” is three times more likely to be true than false, they should be willing to pay no more than $0.75 for a wager that pays nothing if the restaurant is closed and $1 if it is open. Consequently, the subject’s degree of belief is .75, or 75%. Such a formalized approach to degree of belief has proven useful in decision theory, game theory, and economics. Standard philosophical treatments of this topic include Jeffrey (1983) and Skyrms (2000).
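To make the arithmetic of the restaurant example explicit (a schematic restatement of the calculation above, not a new result): let p be the subject’s degree of belief that the restaurant is open, and consider the bet that pays $1 if it is open and nothing if it is closed. The expected value of the bet is

\[
E \;=\; p \cdot \$1 \;+\; (1 - p) \cdot \$0 \;=\; p,
\]

so the maximum amount the subject should pay for the bet is p dollars. Regarding the proposition as three times more likely to be true than false fixes the odds at p / (1 − p) = 3, hence p = 3/4 = .75, yielding the maximum fair price of $0.75.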

However, the phrase “degree of belief” may be misleading, because the relationship between confidence, betting behavior, and belief is not straightforward. The dispositionalist or interpretationist, for example, might regard exhibitions of confidence and attitudes toward risk as only part of the overall pattern underwriting belief ascription. Similarly, the representationalist might hold that readiness to deploy a representation in belief-like ways need not line up perfectly with betting behavior. Some people also find it intuitive to say that a rational person holding a ticket in a fair lottery may not actually believe that they will lose, but instead regard it as an open question, despite having a “degree of belief” of, say, .9999 that they will lose. If this person genuinely believes some other propositions, such as that their son is at school, with a “degree of belief” considerably less than .9999, then it appears to follow that a rational person may in some cases have a higher “degree of belief” in a proposition that they do not believe than in a proposition they do believe (see Harman 1986; Sturgeon 2008; Buchak 2014; Leitgeb 2017; Friedman 2019). This suggests a dissociation between belief and credence, raising the question of whether they are distinct attitudes, and if so whether one is more fundamental (see Jackson 2020 for a review).

Relatedly, Neil Van Leeuwen has argued for a functional distinction between “factual belief” and “religious credence” (Van Leeuwen 2014) or, alternatively, “mundane” and “groupish” belief (Van Leeuwen forthcoming). The first type of belief guides mundane action and tends to do so successfully when the belief is true. Typical contents might be: the light switch is to the left; class is at 2 p.m. The second type of belief, in contrast, is connected with group identity and works well if it effectively signals group membership, regardless of truth. Typical contents might be: God is a Trinity; Earth is flat. If Van Leeuwen is correct, mundane and groupish beliefs are sufficiently different in their causes and effects, or their functional roles, as to be worth distinguishing as distinct types of attitude.

2.4 Belief and Acceptance

Philosophers have sometimes drawn a distinction between acceptance and belief. Generally speaking, acceptance is held to be more under the voluntary control of the subject than belief and more directly tied to a particular practical action in a context. For example, a scientist, faced with evidence supporting a theory, evidence acknowledged not to be completely decisive, may choose to accept the theory or not to accept it. If the theory is accepted, the scientist ceases inquiring into its truth and becomes willing to ground their own research and interpretations in that theory; if the theory is not accepted, the scientist does neither. If one is about to use a ladder to climb to a height, one may check the stability of the ladder in various ways. At some point, one accepts that the ladder is stable and climbs it. In both of these examples, acceptance involves a decision to cease inquiry and to act as though the matter is settled. This does not, of course, rule out the possibility of re-opening the question if new evidence comes to light or new risks arise.

The distinction between acceptance and belief can be supported by appeal to cases in which one accepts a proposition without believing it and cases in which one believes a proposition without accepting it. Van Fraassen (1980) has argued that the former attitude is common in science: the scientist often does not think that some particular theory on which their work depends is the literal truth, and thus does not believe it, but nonetheless they accept it as an adequate basis for research. The ladder case, due to Bratman (1999), may involve belief without acceptance: One may genuinely believe, even before checking it, that the ladder is stable, but because so much depends on it and because it is good general policy, one nonetheless does not accept that the ladder is stable until one has checked it more carefully.

Important discussions of acceptance include van Fraassen (1980), Harman (1986), Cohen (1989, 1992), Lehrer (1990), Bratman (1999), Velleman (2000), and Frankish (2004).

2.5 Belief and Knowledge

The traditional analysis of knowledge, brought into contemporary discussion (and famously criticized) by Gettier (1963), takes propositional knowledge to be a species of belief—specifically, justified true belief. Most contemporary treatments of knowledge are modifications or qualifications of the traditional analysis and consequently also treat knowledge as a species of belief. (For a detailed treatment of this topic see the entry on the analysis of knowledge. For critique of the view that propositional knowledge entails belief, see Radford 1966; Murray, Sytsma, and Livengood 2013; Myers-Schulz and Schwitzgebel 2013).

There may also be types of knowledge that are not types of belief, though they have received less attention from epistemologists. Ryle (1949), for example, emphasizes the distinction between knowing how to do something (e.g., ride a bicycle) and knowing that some particular proposition is true (e.g., that Seoul is the capital of South Korea). In contemporary psychology, a similar distinction is sometimes drawn between procedural and declarative knowledge (see Squire 1987; Schacter, Wagner, and Buckner 2000; also the entry on memory). Although knowledge-that or declarative knowledge may plausibly be a kind of belief, it is not easy to see how procedural knowledge or knowledge-how could be so, unless one holds that people have a myriad of beliefs about minute and non-obvious procedural details. At least, there is no readily apparent relation between knowledge-how and “belief-how” that runs parallel to the relation epistemologists generally accept between knowledge-that and belief-that. (For an influential attempt to subsume knowledge-how under knowledge-that, see Stanley and Williamson 2001; Stanley 2011.)

2.6 Belief and Delusion

The standard reference text in psychiatry, the Diagnostic and Statistical Manual of Mental Disorders (DSM-5-TR, 2022), characterizes delusions (e.g., persecutory delusions, delusions of grandiosity) as beliefs. However, delusions often do not appear to connect with behavior in the usual way. For example, a victim of Capgras delusion—a delusion in which the subject asserts that a family member or close friend has been replaced by an identical-looking imposter—may continue to live with the “imposter” and make little effort to find the supposedly missing loved one. Some philosophers have therefore suggested that delusions do not occupy quite the functional role characteristic of belief and thus are not, in fact, beliefs (e.g., Currie 2000; Stephens and Graham 2004; Gallagher 2009; Matthews 2013). Others have defended the view that delusions are beliefs (e.g., Campbell 2001; Bayne and Pacherie 2005; Bortolotti 2010, 2012) or in-between cases, with some features of belief but not other features (e.g., Egan 2009; Tumulty 2011). See the entry on delusion, especially §4.2 Are Delusions Beliefs?

3. The Content of Beliefs

Philosophers generally say that the belief that P has the (propositional) content P. A variety of issues arise about how to characterize those contents and what determines them.

3.1 Fine- or Coarse-Grained?

The standard view that the contents of beliefs are propositions gives rise to a debate about belief contents parallel to, and closely related to, a debate about the metaphysics of propositions. One standard view of propositions takes propositions to be sets of possible worlds; another takes propositions to have something more closely resembling a linguistic logical structure (see the entry on structured propositions for a detailed exposition of this issue).

Stalnaker (1984) endorses the possible-worlds view of propositions and imports it directly into his discussion of belief content: He contends that the content of a belief is specified by the set of “possible worlds” at which that belief is true (see Lewis 1979 for a similar approach). The structure of belief content is thus the structure of set theory. Among the advantages Stalnaker claims for this view is its smooth accommodation of gradual change and of what might, from the point of view of a discrete linguistic structure, be seen as problematically indeterminate belief contents. Developing an example from Dennett (1969), he describes the gradual transition from a child’s learning to say (without really understanding) that “Daddy is a doctor” to having a full, adult appreciation of the fact that their father is a doctor. At some point, Stalnaker suggests, it’s best to say that the child “sort of” or “half” believes the proposition in question. It’s not clear how to characterize such gradual shifts by means of a linguistic or quasi-linguistic propositional structure (1984, pp. 64–65; see also Schwitzgebel 2001). On Stalnaker’s view, the child’s half-belief is handled by attributing to the child the capacity to rule out some but not all of the possibilities incompatible with Daddy’s being a doctor: As their knowledge grows, so does their sense of the excluded possibilities.

The possible worlds approach to belief content is sometimes referred to as a “coarse-grained” approach because it implies that any two beliefs that would be true in exactly the same set of possible worlds have the same content—as opposed to a “fine-grained” approach on which beliefs that would be true at exactly the same set of possible worlds may nonetheless differ in content. The difference between these two approaches is brought out most starkly by considering mathematical propositions. On standard accounts of possibility, all mathematically true propositions are true in exactly the same set of possible worlds—every world. It seems to follow, on the coarse-grained view, that the belief that 1 + 1 = 2 has exactly the same content as the belief that the cosine of 0 is 1, and thus that anyone who believes (or fails to believe) the one accordingly believes (or fails to believe) the other. And that seems absurd.
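The coarse-grained criterion can be put schematically (a formalization of the reasoning just given, with W standing for the set of all possible worlds):

\[
\mathrm{content}(P) \;=\; \{\, w \in W : P \text{ is true at } w \,\}, \qquad P \sim Q \;\iff\; \mathrm{content}(P) = \mathrm{content}(Q),
\]

where P ∼ Q says that the beliefs that P and that Q have the same content. Since every mathematical truth is true at every world, content(1 + 1 = 2) = content(the cosine of 0 is 1) = W, so the two beliefs come out content-identical, which is exactly the result the paragraph above finds absurd.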

Stalnaker attempts to escape this difficulty by characterizing mathematical belief as belief about sentences: The belief that the sentence “1 + 1 = 2” expresses a truth and the belief that the sentence “the cosine of 0 is 1” expresses a truth have different contents and may differ in truth value between possible worlds (due simply to possible variations in the meanings of terms, if nothing else). However, it’s probably fair to say that few philosophers follow Stalnaker in this view (see discussion in Robbins 2004; and Rayo 2013 for a recent view similar to Stalnaker’s). The apparent difficulty of sustaining such a view of belief is often held to reflect badly on the coarse-grained possible-worlds view of propositions in general, since it’s generally thought that one of the principal metaphysical functions of propositions is to serve as the contents of belief and other “propositional attitudes” (e.g., Field 1978; Soames 1987).

3.2 Atomism Versus Holism

Ani believes that salmon are fish; not knowing that whales are mammals, she also believes that whales are fish. Sanjay, like Ani, believes that salmon are fish, but he denies that whales are fish. Do Ani and Sanjay share exactly the same belief about salmon—namely, that they are fish—or is the content of their belief somehow subtly different in virtue of their differing attitudes toward whales? With certain caveats, the atomist will say the former, the holist the latter. In general, the atomist holds that the content of one’s beliefs does not depend in any general way on one’s related beliefs (though it may depend on the contents of a few specially related beliefs such as definitions) and consequently that people who sincerely and comprehendingly accept the same sentence normally have exactly the same belief. Holism is the contrary view that the content of every belief depends to a large degree on a broad range of one’s related beliefs, and consequently that two people will rarely believe exactly the same thing.

Holism may be defended by a slippery-slope argument. It seems that we can imagine Sanjay’s and Ani’s beliefs about the nature of fish and the members of the class of fish slowly diverging. At some point, it will seem plainly correct to say that even though they may both say “salmon are fish”, they are not expressing the same belief by that sentence. As an extreme case, we might imagine Ani to be so benighted as to hold that to be a “fish” is neither more nor less than to be an Earthly animal in regular contact with Martians, and that only salmon, whales, leopards, and banana slugs are in such contact. But if we deny, in the extreme case, that Ani and Sanjay share the same belief, expressed by the sentence “salmon are fish”, it seems artificial to draw a sharp line anywhere in the progression of divergence, on one side of which they share exactly the same belief about salmon and on the other side of which they have divergent beliefs. One is thus led to the conclusion that similarity in belief is a matter of degree, and it may then be difficult to avoid accepting that even a relatively small divergence in surrounding beliefs may be sufficient to generate subtle differences between two beliefs expressed in the same words. Similar slippery-slope arguments can be constructed that emphasize gradual belief change in concept acquisition (“Leibniz was a metaphysician” agreed to before and after learning philosophy) or gradual change in surrounding theory or in the meaning of a term (“electrons have orbits” as uttered by Niels Bohr in 1913 and as uttered by Richard Feynman in 1980). (This argument is similar in some ways to Stalnaker’s argument for a possible-worlds analysis of the propositional contents of belief—see §3.1, above—and indeed Stalnaker takes himself, there, to be committed to holism.)

Dispositional and interpretational approaches to belief tend to be holist. On these views, recall, to believe is to be disposed to exhibit patterns of behavior interpretable or classifiable by means of various belief attributions (see §1.2 and §1.3 above). It is plausible to suppose that a subject’s match to the relevant patterns will generally be a matter of degree. There may be few actual cases in which two subjects exactly match in their behavioral patterns regarding P, even if it gets matters approximately right to attribute to each of them the belief that P. Since behavioral dispositions are interlaced in a complex way, divergence in any of a variety of attitudes related to P may be sufficient to ensure divergence in the patterns relevant to P itself. As Ani’s associated beliefs grow stranger, her overall behavioral pattern, or dispositional structure, begins to look less and less like one that we would associate with believing that salmon are fish.

It is sometimes objected to holism that, intuitively, both Shakespeare and contemporary physicians believe that blood is red, while on the holist view it is hard to see how their beliefs could even be similar, given that they have so many different surrounding beliefs about both blood and redness. Although in principle a holist could respond to this objection by describing what sort of differences in surrounding belief create only minor divergences and what differences create major ones, there have been no influential attempts at such a project.

Holism appears to be incompatible with a certain variety of representationalism about belief. If beliefs, or the representations underlying them, are stored symbols in the mind, somewhat like sentences on a chalkboard or objects in a box (to use standard Fodorian metaphors), then it is natural to suppose that those beliefs can, in principle, exist independently of each other. Whether one believes P depends on whether a representation with the content “P” is present in the right sort of way in the mind, which would not seem to be directly affected by whether Q or not-Q, or R or not-R, is also represented. If there is, in addition, an innate language of thought of the sort advocated by Fodor and others, then the basic terms of that language may also be exactly the same from person to person. If a view of this sort about the mind can be sufficiently well supported, holism would have to be rejected. Conversely, if holism is plausible, it cuts against the more atomistic forms of representationalism.

Fodor and Lepore (1992) contains an excellent if dated review and critique of arguments for holism. The foremost defenders of holism are probably Quine (1951) and Davidson (1984).

3.3 De Re Versus De Dicto Belief Attributions

Quine (1956) introduced contemporary philosophy of mind to the distinction between de re and de dicto belief attributions by means of examples like the following. Ralph sees a suspicious-looking man in a trenchcoat and concludes that that man is a spy. Unbeknownst to him, however, the man in the trenchcoat is the newly elected mayor, Bernard J. Ortcutt, and Ralph would sincerely deny the claim that “the mayor is a spy”. So does Ralph believe that the mayor is a spy? There appears to be a sense in which he does and a sense in which he does not. Philosophers have attempted to characterize the difference between these two senses by saying that Ralph believes de re, of that man (the man in the trenchcoat who happens also to be the mayor), that “he is a spy”, while he does not believe de dicto that “the mayor is a spy”.

The standard test for distinguishing de re from de dicto attributions is referential transparency or opacity. A sentence, or more accurately a position in a sentence, is held to be referentially transparent if terms or phrases in that position that refer to the same object can be freely substituted without altering the truth of the sentence. The (non-belief-attributing) sentence “Jill kicked X” is naturally read as referentially transparent in this sense. If “Jill kicked the ball” is true, then so also is any sentence in which “the ball” is replaced by a term or phrase that refers to that same ball, e.g., “Jill kicked Davy’s favorite birthday present”, “Jill kicked the thing we bought at Walmart on August 26”. Sentences, or positions, are referentially opaque just in case they are not transparent, that is, if the substitution of co-referring terms or phrases could potentially alter their truth value. De dicto belief attribution is held to be referentially opaque in this sense. On the de dicto reading of belief, “Ralph believes that the man in the trenchcoat is a spy” may be true while “Ralph believes that the mayor is a spy” is false. Likewise, on a de dicto reading, “Lois Lane believes that Superman is strong” may be true while “Lois believes that Clark Kent is strong” is false, even if Superman and Clark Kent are, unbeknownst to Lois, one and the same person. (Regarding the Lois example, however, see also §3.5, on Frege’s Puzzle, below.)
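The substitution test can be compressed into a schema (a restatement of the definitions just given, where a and b are co-referring terms and φ(x) is the remainder of the sentence):

\[
a = b,\ \varphi(a) \;\vdash\; \varphi(b) \quad (\text{transparent}); \qquad a = b,\ S \text{ believes that } \varphi(a) \;\nvdash\; S \text{ believes that } \varphi(b) \quad (\text{opaque}).
\]

Thus “Jill kicked a” licenses the substitution of any term co-referring with a, while on the de dicto reading “Ralph believes that a is a spy” does not.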

In some contexts, the liberal substitution of co-referential terms or phrases seems permissible in ascribing belief. Shifting examples, suppose Davy is a preschooler who has just met a new teacher, Mrs. Sanchez, who is Mexican, and he finds her too strict. Davy’s mother, in reporting this fact to his father, might say “Davy thinks Mrs. Sanchez is too strict” or “Davy thinks the new Mexican teacher is too strict”, even though Davy does not know the teacher’s name or that she is Mexican. Similarly, if Ralph eventually discovers that the man in the trenchcoat was Ortcutt, he might, in recounting the incident to his friends later, laughingly say, “For a moment, I thought the mayor was a spy!” or “For a moment, I thought Ortcutt was a spy”. In a de re mood, then, we can say that Davy believes, of X, that she is too strict and Ralph believes, of Y, that he is a spy, where X is replaced by any term or phrase that picks out Mrs. Sanchez and Y is replaced by any term or phrase that picks out Ortcutt—though of course, depending on the situation, pragmatic considerations will favor the use of some terms or phrases over others. In a strict de re sense, perhaps we can even say that Lois believes, of Clark Kent, that he is strong (though she may also simultaneously believe of him that he is not strong).

The standard view, then, takes belief-attributing sentences to be systematically ambiguous between a referentially opaque, de dicto structure and a referentially transparent, de re structure. Sometimes this view is conjoined with the view that de re but not de dicto belief requires some kind of direct acquaintance with the object of belief.

The majority of the literature on the de re / de dicto distinction since at least the 1980s has challenged this standard view in one way or another. The challenges are sufficiently diverse that they resist brief classification, except perhaps to remark that a number of them invoke pragmatics or conversational context, instead of an ambiguity in the term “belief”, or in the structure of belief ascriptions, to explain the fact that it seems in some way appropriate and in some way inappropriate to say that Ralph believes the mayor is a spy.

Among the more important discussions of the de re / de dicto distinction are Quine (1956), Kaplan (1968), Burge (1977), Lewis (1979), Stich (1983), Dennett (1987), Crimmins (1992), Brandom (1994), Jeshion (2002), Taylor (2002), and Keshet (2010). See also the supplement on the De Re/De Dicto Distinction in the entry on propositional attitude reports.

3.4 Internalism and Externalism

A number of philosophers have suggested that the content of one’s beliefs depends entirely on things going on inside one’s head, and not at all on the external world, except via the effects of the latter on one’s brain. Consequently, if a genius neuroscientist were to create a molecule-for-molecule duplicate of your brain and maintain it in a vat, stimulating it artificially so that it underwent exactly the same sequence of electrical and chemical events as your actual brain, that brain would have exactly the same beliefs as you. Those who accept this position are internalists about belief content. Those who reject it are externalists.

Several arguments against internalism have prompted considerable debate in philosophy of mind. Here is a condensed version of one argument, due to Putnam (1975; though it should be said that Putnam’s original emphasis was on linguistic meaning, not on belief). Suppose that in 1750, in a far-off region of the universe, there existed a planet that was physically identical to Earth, molecule-for-molecule, in every respect but one: Where Earth had water, composed of H2O, Twin Earth had something else instead, “twater”, coming down as rain and filling streams, behaving identically to water by all the chemical tests then available, but having a different chemical formula, XYZ. Intuitively, it seems that the inhabitants of Earth in 1750 would have beliefs about water and no beliefs about twater, while the inhabitants of Twin Earth would have beliefs about twater and no beliefs about water. By hypothesis, however, each inhabitant of Earth will have a molecularly identical counterpart on Twin Earth with exactly the same brain structures (except, of course, that their brains will contain XYZ instead of H2O, but reflection on analogous examples regarding chemicals not contained in the brain suggests that this fact is irrelevant). Consequently, the argument goes, the contents of one’s beliefs do not depend entirely on internal properties of one’s brain.

For further detail on the debate between internalists and externalists, see the entries on externalism about the mind and narrow mental content.

3.5 Frege’s Puzzle

Recall that in the de dicto sense (see §3.3 above) it seemed plausible to say that Lois Lane, who does not know that Clark Kent is Superman, believes that Superman is strong but does not believe that Clark Kent is strong. Despite the intuitive appeal of this view, some widely accepted “Russellian” views in the philosophy of language appear committed to attributing to Lois exactly the same beliefs about Clark Kent as she has about Superman. On such views, the semantic content of a name, or the contribution it makes to the meaning or truth conditions of a sentence, depends only on the individual picked out by that name. Since the names “Superman” and “Clark Kent” pick out the same individual, it follows that the sentence “Lois believes Superman is strong” could not have a different meaning or truth value from the sentence “Lois believes Clark Kent is strong”. Philosophers of language have discussed this issue, known as “Frege’s Puzzle”, extensively since the 1970s. Although the issues here arise for all the propositional attitudes (at least), generally the puzzle is framed and discussed in terms of belief. See the entry on propositional attitude reports.

4. Can There Be Belief Without Language?

Some philosophers have argued that beings without language, notably human infants and non-human animals, cannot have beliefs. The most influential case for this view has been Davidson’s (1982, 1984; see also Heil 1992). Three primary arguments in favor of the necessity of language for belief can be extracted from Davidson.

The first starts from the observation that if we are to ascribe a belief to a being without language—a dog, say, who is barking up a tree into which a squirrel has just run—we must ascribe a belief with some particular content. At first blush, it seems natural to say that, in the case described, the dog believes that the squirrel is in the tree. However, on reflection, that attribution may seem to be not quite right. The dog does not really have the concept of a squirrel or a tree in the human sense. The dog may not know, for instance, that trees have roots and require water to grow. Consequently, according to Davidson, it is not really accurate to say that the dog believes that the squirrel is in the tree (at least in the de dicto sense: see §3.3 above). However, Davidson argues, neither does the dog have any other particular belief. Embracing holism (see §3.2 above), Davidson asserts that a belief can have a specific content only if it is embedded in a rich network of other beliefs with specific contents, and that a dog’s cognitive life is not complex enough to support such a network. “Belief” talk thus cannot get traction (cf. Dennett 1969; Stich 1979, 1983).

Several philosophers (e.g., Routley 1981; Smith 1982; Allen 1992; Glock 2010) have objected to this argument on the grounds that the dog’s cognition about things such as trees, while perhaps not much like ours, is nonetheless relatively rich, involving a number of elements largely neglected by us, such as their scent and their use in marking territory. The dog’s understanding of a tree may be at least as rich as the human understanding of some objects about which we seem to have beliefs. For example, it seems that a chemically untrained person may believe that boron is a chemical element without knowing very much about boron apart from that fact. Since we have no language for doggy concepts, our belief ascriptions to dogs can only be approximate—but if one accepts holism, then belief ascription to other human beings may be similarly approximate.

Davidson also argues that to have a belief one must have the concept of belief, which involves the ability to recognize that beliefs can be false or that there is a mind-independent reality beyond one’s beliefs; and that these abilities require language. However, Davidson offers little explicit support for this claim. Furthermore, many developmental psychologists have suggested that children do not understand the appearance-reality distinction and do not recognize that beliefs can be false until they are at least three years old (Perner 1991; Wellman, Cross, and Watson 2001; though see Southgate, Senju, and Csibra 2007; Scott and Baillargeon 2017). Davidson’s view thus requires him either to reject this empirical thesis or to embrace the seemingly implausible view that two-year-olds have no beliefs (see also Andrews 2002).

The view that belief requires language is a natural consequence of the view that belief attribution is inextricably intertwined with the interpretation of a subject’s linguistic utterances. Davidson, as described above (§1.3), argues that the interpretation of a creature’s beliefs, desires, and language must come together as a package. This provides a third Davidsonian reason for rejecting belief without language (a reason that, however, remains largely implicit in Davidson): Creatures without language are missing part of what is essential to a behavioral pattern of the sort that can underwrite proper belief ascription (and recall that on an interpretational view, all there is to having a belief is having a pattern of behavior that is interpretable in that way by an outside observer). Any view that ties belief attribution and the subject’s language as closely together as Davidson’s does—Sellars (1956, 1969), Brandom (1994), and Wettstein (2004) also offer views of this sort—will have difficulty accommodating the possibility of belief in creatures without language. Thus, whatever draws us to such views will also provide reason to deny belief (or at least robust, full-blown belief) to languageless creatures.

Positive arguments for attributing beliefs to (at least) human infants and non-linguistic mammals have tended to focus on the general biological and behavioral similarity between adult human beings, human infants, and non-human mammals; the naturalness of describing the behavior of infants and non-linguistic mammals in terms of their beliefs and desires; and the difficulty of usefully characterizing their mental lives without relying on the ascription of propositional attitudes (e.g., Routley 1981; Marcus 1995; Allen and Bekoff 1997; Zimmerman 2018; Curry forthcoming).

Bibliography

  • Aizawa, Kenneth, 2003, The systematicity arguments, Dordrecht: Kluwer.
  • Allen, Colin, 1992, “Mental content”, British Journal for the Philosophy of Science, 43: 537–553.
  • Allen, Colin and Marc Bekoff, 1997, Species of mind, Cambridge, MA: MIT Press.
  • Andrews, Kristin, 2002, “Interpreting autism: A critique of Davidson on thought and language”, Philosophical Psychology, 15: 317–332.
  • –––, 2012, Do apes read minds?, Cambridge, MA: MIT Press.
  • Anscombe, G.E.M., 1957/1963, Intention, 2nd edition, Cambridge, MA: Harvard University Press.
  • Armstrong, D.M., 1968, A materialist theory of the mind, New York: Routledge & Kegan Paul.
  • –––, 1973, Belief, truth, and knowledge, Cambridge: Cambridge University Press.
  • Audi, Robert, 1994, “Dispositional beliefs and dispositions to believe”, Noûs, 28: 419–434.
  • Bain, Alexander, 1859/1876, The emotions and the will, New York: Appleton.
  • Baker, Lynne R., 1995, Explaining attitudes, Cambridge: Cambridge University Press.
  • Bartlett, Gary, 2018, “Occurrent states”, Canadian Journal of Philosophy, 48(1): 1–17.
  • Bayne, Tim, and Elisabeth Pacherie, 2005, “In defence of the doxastic conception of delusions”, Mind and Language, 20: 163–188.
  • Block, Ned, 1978, “Troubles with functionalism”, Minnesota Studies in the Philosophy of Science, 9: 261–325.
  • Blumson, Ben, 2012, “Mental maps”, Philosophy and Phenomenological Research, 85: 413–434.
  • Bortolotti, Lisa, 2010, Delusions and other irrational beliefs, Oxford: Oxford University Press.
  • –––, 2012, “In defence of modest doxasticism about delusions”, Neuroethics, 5: 39–53.
  • Braddon-Mitchell, David and Frank Jackson, 1996, The philosophy of mind and cognition, Oxford: Oxford University Press.
  • Braithwaite, R.B., 1932–1933, “The nature of believing”, Proceedings of the Aristotelian Society, 33: 129–146.
  • Brandom, Robert B., 1994, Making it explicit, Cambridge, MA: Harvard University Press.
  • Bratman, Michael, 1999, Faces of intention, Cambridge: Cambridge University Press.
  • Brownstein, Michael, Alex Madva, and Bertram Gawronski, 2019, “What do implicit measures measure?”, WIREs Cognitive Science, 10: e1501.
  • Buchak, Lara, 2014, “Belief, credence, and norms”, Philosophical Studies, 169: 285–311.
  • Burge, Tyler, 1977, “Belief de re”, Journal of Philosophy, 74: 338–362.
  • –––, 2010, Origins of objectivity, Oxford: Oxford University Press.
  • Camp, Elisabeth, 2007, “Thinking with maps”, Philosophical Perspectives, 21: 145–182.
  • –––, 2018, “Why maps are not propositional”, in A. Grzankowski and M. Montague (eds.), Non-propositional intentionality, Oxford: Oxford University Press, 19–45.
  • Campbell, John, 2001, “Rationality, meaning, and the analysis of delusion”, Philosophy, Psychiatry, and Psychology, 8: 89–100.
  • Carnap, Rudolf, 1956, Meaning and necessity, revised edition, Chicago: University of Chicago Press.
  • Carruthers, Peter, 1996, Language, thought, and consciousness, Cambridge: Cambridge University Press.
  • Cartwright, Nancy, 1983, How the laws of physics lie, Oxford: Oxford University Press.
  • Chan, Timothy (ed.), 2013, The aim of belief, Oxford: Oxford University Press.
  • Chisholm, Roderick M., 1957, Perceiving, Ithaca: Cornell University Press.
  • Churchland, Paul M., 1981, “Eliminative materialism and the propositional attitudes”, Journal of Philosophy, 78: 67–90.
  • Cohen, L. Jonathan, 1989, “Belief and acceptance”, Mind, 98: 367–389.
  • –––, 1992, An essay on belief and acceptance, Oxford: Oxford University Press.
  • Crimmins, Mark, 1992, Talk about beliefs, Cambridge, MA: MIT Press.
  • Currie, Gregory, 2000, “Imagination, delusion, and hallucinations”, Mind and Language, 15: 168–183.
  • Curry, Devin Sanchez, 2020, “Interpretivism and norms”, Philosophical Studies, 177: 905–930.
  • –––, forthcoming, “Morgan’s Quaker gun and the species of belief”, Philosophical Perspectives.
  • Davidson, Donald, 1982, “Rational animals”, Dialectica, 36: 317–327.
  • –––, 1984, Inquiries into truth and interpretation, Oxford: Clarendon.
  • Demeter, Tamás, T. Parent, and Adam Toon, eds., 2022, Mental fictionalism, London: Routledge.
  • Dennett, Daniel C., 1969, Content and consciousness, London: Routledge.
  • –––, 1978, Brainstorms, Cambridge, MA: MIT Press.
  • –––, 1987, The intentional stance, Cambridge, MA: MIT Press.
  • –––, 1991, “Real patterns”, Journal of Philosophy, 87: 27–51.
  • Diagnostic and Statistical Manual of Mental Disorders: DSM-5-TR, 2022, Washington, DC: American Psychiatric Association.
  • Dretske, Fred, 1988, Explaining behavior, Cambridge, MA: MIT Press.
  • Egan, Andy, 2009, “Imagination, delusion, and self-deception”, in T. Bayne and J. Fernández (eds.), Delusions, self-deception, and affective influences on belief-formation, Hove, Sussex: Psychology Press, 263–280.
  • Engel, Pascal, 2018, “The doxastic zoo”, in A. Coliva, P. Leonardi, and S. Moruzzi (eds.), Eva Picardi on language, analysis and history, Palgrave Macmillan.
  • Field, Hartry H., 1978, “Mental representation”, Erkenntnis, 13: 9–61.
  • Flores, Carolina, forthcoming, “Why think that belief is evidence-responsive?”, in J. Jong and E. Schwitzgebel (eds.), The nature of belief, Oxford: Oxford University Press.
  • Fodor, Jerry A., 1968, Psychological explanation, New York: Random House.
  • –––, 1975, The language of thought, New York: Cromwell.
  • –––, 1981, Representations, Cambridge, MA: MIT Press.
  • –––, 1987, Psychosemantics, Cambridge, MA: MIT Press.
  • –––, 1990, A theory of content, Cambridge, MA: MIT Press.
  • Fodor, Jerry and Ernest Lepore, 1992, Holism, Oxford: Blackwell.
  • Fodor, Jerry A. and Zenon W. Pylyshyn, 1988, “Connectionism and cognitive architecture: A critical analysis”, Cognition, 28: 3–71.
  • Frankish, Keith, 2004, Mind and Supermind, Cambridge: Cambridge University Press.
  • Friedman, Jane, 2019, “Inquiry and belief”, Noûs, 53: 296–315.
  • Frost, Kim, 2014, “On the very idea of direction of fit”, Philosophical Review, 123: 429–484.
  • Gallagher, Shaun, 2009, “Delusional realities”, in M. R. Broome and L. Bortolotti (eds.), Psychiatry as cognitive neuroscience, Oxford: Oxford University Press, 245–266.
  • Gendler, Tamar Szabó, 2008a, “Alief and belief”, Journal of Philosophy, 105: 634–663.
  • –––, 2008b, “Alief in action, and reaction”, Mind and Language, 23: 552–585.
  • Gertler, Brie, 2007, “Overextending the mind”, in B. Gertler and L. Shapiro (eds.), Arguing about the Mind, New York: Routledge.
  • Gettier, Edmund L., 1963, “Is justified true belief knowledge?”, Analysis, 23: 121–123.
  • Gibbard, Allan, 2005, “Truth and correct belief”, Philosophical Issues, 15: 338–350.
  • Glock, Hans-Johann, 2010, “Can animals judge?”, Dialectica, 64: 11–33.
  • Glüer, Kathrin, and Åsa Wikforss, 2013, “Against belief normativity”, in T. Chan, The aim of belief, Oxford: Oxford University Press, 121–146.
  • Harman, Gilbert, 1973, Thought, Princeton: Princeton University Press.
  • –––, 1986, Change in view, Cambridge: Cambridge University Press.
  • –––, 1987, “(Nonsolipsistic) conceptual role semantics”, in E. LePore (ed.), New directions in semantics, London: Academic, 55–81.
  • Heil, John, 1992, The nature of true minds, Cambridge, MA: MIT Press.
  • Helton, Grace, 2020, “If you can’t change what you believe, you don’t believe it”, Noûs, 54: 501–526.
  • Humberstone, I.L., 1992, “Direction of fit”, Mind, 101: 59–83.
  • Hume, David, 1740, Treatise of human nature, L.A. Selby-Bigge and P.H. Nidditch (eds.), Oxford: Oxford University Press, 1978.
  • Jackson, Elizabeth, 2020, “The relationship between belief and credence”, Philosophy Compass, 15: e12668.
  • Jeffrey, Richard C., 1983, The logic of decision, 2nd edition, Chicago: University of Chicago Press.
  • Jenson, J. Christopher, 2016, “The belief illusion”, British Journal for the Philosophy of Science, 67: 965–995.
  • Jeshion, Robin, 2002, “Acquaintanceless de re belief”, in J.K. Campbell, M. O’Rourke, and D. Shier (eds.), Meaning and truth, New York: Seven Bridges, 53–78.
  • Johnson, Kent, 2015, “Maps, languages, and manguages: Rival cognitive architectures?”, Philosophical Psychology, 28: 815–836.
  • Kaplan, David, 1968, “Quantifying in”, Synthese, 19: 178–214.
  • Keshet, Ezra, 2010, “Split intensionality: A new scope theory of de re and de dicto”, Linguistics and Philosophy, 33: 251–283.
  • Kihlstrom, John F., 2004, “Implicit methods in social psychology”, in The SAGE handbook of methods in social psychology, Carol Sansone, Carolyn C. Morf, and A.T. Panter (eds.), Thousand Oaks, CA: Sage Publications, 195–212.
  • Lane, Kristin A., Mahzarin R. Banaji, Brian A. Nosek, and Anthony G. Greenwald, 2007, “Understanding and using the Implicit Association Test: IV”, in Implicit measures of attitudes, Bernd Wittenbrink and Norbert Schwarz (eds.), New York: Guilford, 59–102.
  • Lehrer, Keith, 1990, Metamind, Oxford: Clarendon.
  • Leitgeb, Hannes, 2017, The stability of belief, Oxford: Oxford University Press.
  • Levy, Neil, 2015, “Neither fish nor fowl: Implicit attitudes as patchy endorsements”, Noûs, 49: 800–823.
  • Lewis, David, 1972, “Psychophysical and theoretical identifications”, Australasian Journal of Philosophy, 50: 249–258.
  • –––, 1974, “Radical interpretation”, Synthese, 23: 331–344.
  • –––, 1979, “Attitudes de dicto and de se”, Philosophical Review, 88: 513–543.
  • –––, 1980, “Mad pain and Martian pain”, in N. Block (ed.), Readings in the philosophy of psychology (Volume 1), Cambridge, MA: Harvard University Press, 216–222.
  • –––, 1994, “Reduction of mind”, in S. Guttenplan (ed.), A companion to the philosophy of mind, Oxford: Blackwell, 412–431.
  • Loar, Brian, 1981, Mind and meaning, Cambridge: Cambridge University Press.
  • Lycan, William G., 1981a, “Form, function, and feel”, Journal of Philosophy, 78: 24–50.
  • –––, 1981b, “Toward a homuncular theory of believing”, Cognition and Brain Theory, 4: 139–159.
  • –––, 1986, “Tacit belief”, in R.J. Bogdan (ed.), Belief: Form, content, and function, Oxford: Clarendon, 61–82.
  • Machery, Edouard, 2016, “De-Freuding implicit attitudes”, in M. Brownstein and J. Saul (eds.), Implicit bias and philosophy, Oxford: Oxford University Press, 104–129.
  • Madva, Alex, 2016, “Why implicit attitudes are (probably) not beliefs”, Synthese, 193: 2659–2684.
  • Mandelbaum, Eric, 2013, “Against alief”, Philosophical Studies, 165: 197–211.
  • –––, 2016, “Attitude, inference, association: On the propositional structure of implicit bias”, Noûs, 50: 629–658.
  • Marcus, Ruth B., 1990, “Some revisionary proposals about belief and believing”, Philosophy and Phenomenological Research, 50: 132–153.
  • –––, 1995, “The anti-naturalism of some language centered accounts of belief”, Dialectica, 49: 113–129.
  • Matthews, Robert J., 2013, “Belief and belief’s penumbra”, in N. Nottelmann (ed.), New essays on belief, New York: Palgrave Macmillan, 100–123.
  • McHugh, Conor, and Daniel Whiting, 2014, “The normativity of belief”, Analysis Reviews, 74: 698–713.
  • Millikan, Ruth G., 1984, Language, thought, and other biological categories, Cambridge, MA: MIT Press.
  • –––, 1993, White Queen psychology and other essays for Alice, Cambridge, MA: MIT Press.
  • –––, 2017, Beyond concepts, Oxford: Oxford University Press.
  • Mölder, Bruno, 2010, Mind ascribed, Amsterdam: John Benjamins.
  • Moore, Andrew Garford, and George Botterill, 2023, “Why beliefs are not dispositional stereotypes”, Theoria, 89: 483–494.
  • Murray, Dylan, Justin Sytsma, and Jonathan Livengood, 2013, “God knows (but does God believe?)”, Philosophical Studies, 166: 83–107.
  • Myers-Schulz, Blake, and Eric Schwitzgebel, 2013, “Knowing that P without believing that P”, Noûs, 47: 371–384.
  • Neander, Karen, 2017, A mark of the mental, Cambridge, MA: MIT Press.
  • Papineau, David, 1984, “Representation and explanation”, Philosophy of Science, 51: 550–572.
  • Peirce, C. S., 1878, “How to make our ideas clear”, Popular Science Monthly, 12: 286–302.
  • Perner, Josef, 1991, Understanding the representational mind, Cambridge, MA: MIT Press.
  • Pettit, Philip, 1993, The common mind, New York: Oxford University Press.
  • Poslajko, Krzysztof, 2022, “How to think about the debate over the reality of beliefs”, Review of Philosophy and Psychology, 13: 85–107.
  • Price, H. H., 1969, Belief, London: Allen & Unwin.
  • Putnam, Hilary, 1975, Mind, language, and reality, London: Cambridge University Press.
  • Quilty-Dunn, Jake, and Eric Mandelbaum, 2018, “Against dispositionalism: Belief in cognitive science”, Philosophical Studies, 175: 2353–2372.
  • Quine, W.V.O., 1951, “Two dogmas of empiricism”, Philosophical Review, 60: 20–43.
  • –––, 1956, “Quantifiers and propositional attitudes”, Journal of Philosophy, 53: 177–186.
  • –––, 1960, Word and object, Cambridge, MA: MIT Press.
  • Radford, Colin, 1966, “Knowledge – by examples”, Analysis, 27: 1–11.
  • Ramsey, Frank P., 1926 [1990], “Truth and probability”, in D.H. Mellor (ed.), Ramsey: Philosophical Papers, Cambridge: Cambridge University Press, 1990, 52–94.
  • –––, 1927–1929 [1991], On truth, N. Rescher and U. Majer (eds.), Dordrecht: Springer.
  • Rayo, Augustín, 2013, The construction of logical space, Oxford: Oxford University Press.
  • Rescorla, Michael, 2009, “Cognitive maps and the language of thought”, British Journal for the Philosophy of Science, 60: 377–407.
  • Robbins, Philip, 2004, “To structure, or not to structure?”, Synthese, 139: 55–80.
  • Routley, Richard, 1981, “Alleged problems in attributing beliefs and intentionality to animals”, Inquiry, 24: 385–417.
  • Ryle, Gilbert, 1949, The concept of mind, New York: Barnes & Noble.
  • Schacter, Daniel L., 1987, “Implicit memory: History and current status”, Journal of Experimental Psychology: Learning, Memory, and Cognition, 13: 501–518.
  • Schacter, Daniel L. and Endel Tulving, 1994, “What are the memory systems of 1994?”, in D.L. Schacter and E. Tulving (eds.), Memory systems 1994, Cambridge, MA: MIT Press, 1–38.
  • Schacter, Daniel L., Anthony D. Wagner, and Randy L. Buckner, 2000, “Memory systems of 1999”, in E. Tulving and F.I.M. Craik (eds.), The Oxford handbook of memory, Oxford: Oxford University Press, 627–643.
  • Schwitzgebel, Eric, 2001, “In-between believing”, Philosophical Quarterly, 51: 76–82.
  • –––, 2002, “A phenomenal, dispositional account of belief”, Noûs, 36: 249–275.
  • –––, 2010, “Acting contrary to our professed beliefs, or the gulf between occurrent judgment and dispositional belief”, Pacific Philosophical Quarterly, 91: 531–553.
  • –––, 2013, “A dispositional approach to attitudes: Thinking outside the belief box”, in N. Nottelmann (ed.), New essays on belief, New York: Palgrave Macmillan, 75–99.
  • –––, forthcoming, “Dispositionalism, yay! Representationalism, boo!”, in J. Jong and E. Schwitzgebel (eds.), The nature of belief, Oxford: Oxford University Press.
  • Scott, Rose M. and Renee Baillargeon, 2017, “Early false belief understanding”, Trends in Cognitive Sciences, 21: 237–249.
  • Searle, John R., 1983, Intentionality, Cambridge: Cambridge University Press.
  • –––, 1992, The rediscovery of the mind, Cambridge, MA: MIT Press.
  • Sellars, Wilfrid, 1956, “Empiricism and the philosophy of mind”, Minnesota Studies in the Philosophy of Science, 1: 253–329.
  • –––, 1969, “Language as thought and as communication”, Philosophy and Phenomenological Research, 29: 506–527.
  • Shea, Nicholas, 2007, “Content and its vehicles in connectionist systems”, Mind and Language, 22: 246–269.
  • Shoemaker, Sydney, 2003, Identity, cause, and mind, expanded edition, Oxford: Oxford University Press.
  • Skyrms, Brian, 2000, Choice and chance, 4th edition, Belmont, CA: Wadsworth/Thompson.
  • Smith, Peter, 1982, “On animal beliefs”, Southern Journal of Philosophy, 20: 503–512.
  • Smithies, Declan, forthcoming, “Belief as a feeling of conviction”, in J. Jong and E. Schwitzgebel (eds.), The nature of belief, Oxford: Oxford University Press.
  • Smolensky, Paul, 1995, “Connectionism, constituency, and the language of thought”, in C. Macdonald and G. Macdonald (eds.), Connectionism, Cambridge, MA: Blackwell, 164–198.
  • Soames, Scott, 1987, “Direct reference, propositional attitudes and semantic content”, Philosophical Topics, 15: 47–87.
  • Southgate, V., A. Senju, and G. Csibra, 2007, “Action anticipation through attribution of false belief by 2-year-olds”, Psychological Science, 18: 587–592.
  • Squire, Larry R., 1987, Memory and brain, New York: Oxford University Press.
  • –––, 2004, “Memory systems of the brain: A brief history and current perspective”, Neurobiology of Learning and Memory, 82: 171–177.
  • Stalnaker, Robert, 1984, Inquiry, Cambridge, MA: MIT Press.
  • Stanley, Jason, 2011, Know how, Oxford: Oxford University Press.
  • Stanley, Jason, and Timothy Williamson, 2001, “Knowing how”, Journal of Philosophy, 98: 411–444.
  • Stephens, G. Lynn, and George Graham, 2004, “Reconceiving delusions”, International Review of Psychiatry, 16: 236–241.
  • Stich, Stephen P., 1979, “Do animals have beliefs?”, Australasian Journal of Philosophy, 57: 15–28.
  • –––, 1983, From folk psychology to cognitive science, Cambridge, MA: MIT Press.
  • Sturgeon, Scott, 2008, “Reason and the grain of belief”, Noûs, 42: 139–165.
  • Taylor, Kenneth A., 2002, “De re and de dicto: Against the conventional wisdom”, Philosophical Perspectives, 16: 225–265.
  • Tumulty, Maura, 2011, “Delusions and dispositionalism about belief”, Mind and Language, 26: 596–628.
  • –––, 2014, “Managing mismatch between belief and behavior”, Pacific Philosophical Quarterly, 95: 261–292.
  • van Fraassen, Bas C., 1980, The scientific image, Oxford: Oxford University Press.
  • van Gelder, Tim, 1990, “Compositionality: A connectionist variation on a classical theme”, Cognitive Science, 14: 355–384.
  • Van Leeuwen, Neil, 2014, “Religious credence is not factual belief”, Cognition, 133(3): 698–715.
  • –––, forthcoming, “The Trinity and the light switch: Two faces of belief”, in J. Jong and E. Schwitzgebel (eds.), The nature of belief, Oxford: Oxford University Press.
  • Velleman, J. David, 2000, The possibility of practical reason, Oxford: Clarendon.
  • Vernazzani, Alfredo, and Dimitri Coelho Mollo, forthcoming, “The formats of cognitive representation: A computational account”, Philosophy of Science.
  • Wedgwood, Ralph, 2002, “The aim of belief”, Noûs, 36: 267–297.
  • Wellman, Henry M., David Cross, and Julanne Watson, 2001, “Meta-analysis of theory of mind development: The truth about false belief”, Child Development, 72: 655–684.
  • Wettstein, H., 2004, The magic prism, Oxford: Oxford University Press.
  • Wilson, Timothy D., Samuel Lindsey, and Tonya T. Schooler, 2000, “A model of dual attitudes”, Psychological Review, 107: 101–126.
  • Wright, Jessica, 2017, “Ramsey’s theory of belief and the problem of attitude divergence”, in S. Pihlström (ed.), Pragmatism and objectivity, New York: Routledge, 133–149.
  • Yalcin, Seth, 2021, “Fragmented but rational”, in C. Borgoni, D. Kindermann, and A. Onofri (eds.), The fragmented mind, Oxford: Oxford University Press.
  • Zangwill, Nick, 2005, “The normativity of the mental”, Philosophical Explorations, 8: 1–19.
  • Zimmerman, Aaron, 2018, Belief: A pragmatic picture, Oxford: Oxford University Press.

Copyright © 2023 by
Eric Schwitzgebel <eschwitz@ucr.edu>
