Causal Theories of Mental Content
Causal theories of mental content attempt to explain how thoughts can be about things. They attempt to explain how one can think about, for example, dogs. These theories begin with the idea that there are mental representations and that thoughts are meaningful in virtue of a causal connection between a mental representation and some part of the world that is represented. In other words, the point of departure for these theories is that thoughts of dogs are about dogs because dogs cause the mental representations of dogs.
- 1. Introduction
- 2. Some Historical and Theoretical Context
- 3. Specific Causal Theories of Mental Content
- 4. General Objections to Causal Theories of Mental Content
- 4.1 Causal Theories do not Work for Logical and Mathematical Relations
- 4.2 Causal Theories do not Work for Vacuous Terms
- 4.3 Causal Theories do not Work for Phenomenal Intentionality
- 4.4 Causal Theories do not Work for Certain Reflexive Thoughts
- 4.5 Causal Theories do not Work for Reliable Misrepresentations
- 4.6 Causal Theories Conflict with the Theory Mediation of Perception
- 4.7 Causal Theories Conflict with the Implementation of Psychological Laws
- 5. Concluding Remarks
- Bibliography
- Academic Tools
- Other Internet Resources
- Related Entries
1. Introduction
Content is what is said, asserted, thought, believed, desired, hoped for, etc. Mental content is the content had by mental states and processes. Causal theories of mental content attempt to explain what gives thoughts, beliefs, desires, and so forth their contents. They attempt to explain how thoughts can be about things.[1]
2. Some Historical and Theoretical Context
Although one might find precursors to causal theories of mental content scattered throughout the history of philosophy, the current interest in the topic was spurred, in part, by perceived inadequacies in “similarity” or “picture” theories of mental representation. Where meaning and representation are asymmetric relations—that is, a syntactic item “X” might mean or represent X, but X does not (typically) mean or represent “X”—similarity and resemblance are symmetric relations. Dennis Stampe (1977), who played an important role in initiating contemporary interest in causal theories, drew attention to related problems. Consider a photograph of one of two identical twins. What makes it a photo of Judy, rather than her identical twin Trudy? By assumption, it cannot be the similarity of the photo to one twin rather than the other, since the twins are identical. Moreover, one can have a photo of Judy even though the photo happens not to look very much like her at all. What apparently makes a photo of Judy a photo of Judy is that she was causally implicated, in the right way, in the production of the photo. Reinforcing the hunch that causation could be relevant to meaning and representation is the observation that there is a sense in which the number of rings in a tree stump represents the age of the tree when it died and that the presence of smoke means fire. The history of contemporary developments of causal theories of mental content consists largely of specifying what it is for something to be causally implicated in the right way in the production of meaning and refining the sense in which smoke represents fire to the sense in which a person’s thoughts, sometimes at least, represent the world.
If one wanted to trace a simple historical arc for recent causal theories, one would have to begin with the seminal 1977 paper by Dennis Stampe, “Toward a Causal Theory of Linguistic Representation.” Among the many important features of this paper is its having set much of the conceptual and theoretical stage to be described in greater detail below. It drew a contrast between causal theories and “picture theories” that try to explain representational content by appeal to some form of similarity between a representation and the thing represented. It also drew attention to the problem of distinguishing the content-determining causes of a representation from adventitious non-content-determining causes. So, for example, one will want “X” to mean dog because dogs cause “X”s, but one does not want “X” to mean blow-to-the-head, even though blows to the head might also cause the occurrence of an “X”. (Much more will be said about this below.) Finally, it also provided some attempts to address this problem, such as an appeal to the function a thing might have.
Fred Dretske’s 1981 Knowledge and the Flow of Information offered a much expanded treatment of a type of causal theory. Rather than basing semantic content on a causal connection per se, Dretske began with a type of informational connection derived from the mathematical theory of information. This has led some to refer to Dretske’s theory as “information semantics”. Dretske also appealed to the notion of function in an attempt to distinguish content determining causes from adventitious non-content determining causes. This has led some to refer to Dretske’s theory as a “teleoinformational” theory or a “teleosemantic” theory. Dretske’s 1988 book, Explaining Behavior, further refined his earlier treatment.
Jerry Fodor’s 1984 “Semantics, Wisconsin Style” gave the problem of distinguishing content-determining causes from non-content determining causes its best-known guise as “the disjunction problem”. How can a causal theory of content say that “X” has the non-disjunctive content dog, rather than the disjunctive content dog-or-blow-to-the-head, when both dogs and blows to the head cause instances of “X”? By 1987, in Psychosemantics, Fodor published his first attempt at an alternative method of solving the disjunction problem, the Asymmetric (Causal) Dependency Theory. This theory was further refined for the title essay in Fodor’s 1990 book A Theory of Content and Other Essays.
Although these causal theories have subsequently spawned a significant critical literature, other related causal theories have also been advanced. Two of these are teleosemantic theories that are sometimes contrasted with causal theories. (Cf., e.g., Papineau (1984), Millikan (1989), and Teleological Theories of Mental Content.) Other more purely causal theories are Dan Lloyd’s (1987, 1989) Dialectical Theory of Representation, Robert Rupert’s (1999) Best Test Theory (see section 3.5 below), Marius Usher’s (2001) Statistical Referential Theory, and Dan Ryder’s (2004) SINBAD neurosemantics.
Causal theories of mental content are typically developed in the context of four principal assumptions. First, they typically presuppose that there is a difference between derived and underived meaning.[2] Normal humans can use one thing, such as “%”, to mean percent. They can use certain large red octagons to mean that one is to stop at an intersection. In such cases, there are collective arrangements that confer relatively specific meanings on relatively specific objects. In the case of human minds, however, it is proposed that thoughts can have the meanings or contents they do without recourse to collective arrangements. It is possible to think about percentage or ways of negotiating intersections prior to collective social arrangements. It, therefore, appears that our thoughts do not acquire the contents they do in the way that “%” and certain large red octagons do. Causal theories of mental content presuppose that mental contents are underived, hence attempt to explain how underived meaning arises.
Second, causal theories of mental content distinguish what has come to be known as natural meaning and non-natural meaning.[3] Cases where an object or event X has natural meaning are those in which, given certain background conditions, the existence or occurrence of X “entails” the existence or occurrence of some state of affairs. If smoke in the unspoiled forest naturally means fire then, given the presence of smoke, there was fire. Under the relevant background conditions, the effect indicates or naturally means the cause. An important feature of natural meaning is that it does not generate falsity. If smoke naturally means fire, then there must really be a fire. By contrast, many non-naturally meaningful things can be false. Sentences, for example, can be meaningful and false. The utterance “Colleen currently has measles” means that Colleen currently has measles but does not entail that Colleen currently has measles in the way that Colleen’s spots do entail that she has measles. Like sentences, thoughts are also meaningful, but often false. Thus, it is generally supposed that mental content must be a form of non-natural, non-derived meaning.[4]
Third, these theories assume that it is possible to explain the origin of non-derived content without appeal to other semantic or contentful notions. So, it is assumed that there is more to the project than simply saying that one’s thoughts mean that Colleen currently has the measles because one’s thoughts are about Colleen currently having the measles. Explicating meaning in terms of aboutness, or aboutness in terms of meaning, or either in terms of some still further semantic notion, does not go as far as is commonly desired by those who develop causal theories of mental content. To note some additional terminology, it is often said that causal theories of mental content attempt to naturalize non-natural, non-derived meaning. To put the matter less technically, one might say that causal theories of mental content presuppose that it is possible for a purely physical system to bear non-derived content. Thus, they presuppose that if one were to build a genuinely thinking robot or computer, one would have to design it in such a way that some of its internal components would bear non-natural, non-derived content in virtue of purely physical conditions. To get a feel for the difference between a naturalized theory and an unnaturalized theory of content, one might note the theory developed by Grice (1948). Grice developed an unnaturalized theory. Speaking of linguistic items, Grice held that ‘Speaker S non-naturally means something by “X”’ is roughly equivalent to ‘S intended the utterance of “X” to produce some effect in an audience by means of the recognition of this intention.’ Grice did not explicate the origin of the mental content of speakers’ intentions or of audiences’ recognition, hence he did not attempt to naturalize the meaning of linguistic items.
Fourth, it is commonly presupposed that naturalistic analyses of non-natural, non-derived meanings will apply, in the first instance, to the contents of thought. The physical items “X” that are supposed to be bearers of causally determined content will, therefore, be something like the firings of a particular neuron or set of neurons. These contents of thoughts are said to be captured in what is sometimes called a “language of thought” or “mentalese.” The contents of items in natural languages, such as English, Japanese, and French, will then be given a separate analysis, presumably in terms of a naturalistic account of non-natural derived meanings. It is, of course, possible to suppose that it is natural language, or some other system of communication, that first develops content, which can then serve as a basis upon which to provide an account of mental content. Among the reasons that threaten this order of dependency is the fact that cognitive agents appear to have evolved before systems of communication. Another reason is that human infants at least appear to have some sophisticated cognitive capacities involving mental representation, before they speak or understand natural languages. Yet another reason is that, although some social animals may have systems of communication complex enough to support the genesis of mental content, other non-social cognizing animals may not.
It is worth noting that, in recent years, this last presupposition has sometimes been abandoned by philosophers attempting to understand animal signaling or animal communication, as when toads emit mating calls or vervet monkeys cry out when seeing a leopard, eagle, or snake. See, for example, Stegmann, 2005, 2009, Skyrms, 2008, 2010a, b, 2012, and Birch, 2014. In other words, there have been efforts to use the sorts of apparatus originally developed for theories of mental content, plus or minus a bit, as apparatus for handling animal signaling. These approaches seem to allow that there are mental representations in the brains of the signaling/communicating animals, but do not rely on the content of the mental representations to provide the representational contents of the signals. In this way, the contents of the signals are not derived from the contents of the mental representations.
3. Specific Causal Theories of Mental Content
The unifying inspiration for causal theories of mental content is that some syntactic item “X” means X because “X”s are caused by Xs.[5] Matters cannot be this simple, however, since in general one expects that some causes of “X” are not among the content-specifying causes of “X”s. There are numerous examples illustrating this point, each illustrating a kind of cause that must not typically be among the content-determining causes of “X”:
- (I) Suppose there is some syntactic item “X” that is a putative mental representation of a dog. Dogs will presumably cause tokens of “X”, but so might foxes at odd angles, with some obstructions, at a distance, or under poor lighting conditions. The causal theorist will need some principle that allows her to say that the causal links between dogs and “X”s will be content-determining, where the causal links between, say, foxes and “X”s will not. Mice and shrews, mules and donkeys, German Shepherds and wolves, dogs and papier mâché dogs, dogs and stuffed dogs, and any number of confusable groups would do to make this point.
- (II) A syntactic item “X” with the putative content of dog might also be caused by a dose of LSD, a set of strategically placed and activated microelectrodes, a brain tumor, or quantum mechanical fluctuations. Who knows what mental representations might be triggered by these things? LSD, microelectrodes, etc., should (typically) not be among the content-determining causes of most mental representations.
- (III) Upon hearing the question “What kind of animal is named ‘Fido’?”, a person might token the syntactic item “X”. One will want at least some cases in which this “X” means dog, but to get this result the causal theorist will not want the question to be among the content-determining causes of “X”.
- (IV) In seeing a dog, there is a causal pathway from the dog, through the visual system (and perhaps beyond), to a token of “X”. What in this causal pathway from the dog to “X” constitutes the content-determining element? In virtue of what is it the case that “X” means dog, rather than retinal projection of a dog, or any number of other possible points along the pathway? Clearly there is a similar problem for other sense modalities. In hearing a dog, there is a causal pathway from the dog, through the auditory system (and perhaps beyond), to a token of “X”. What makes “X” mean dog, rather than sound of a dog (barking?) or eardrum vibration or motion in the stapes bone of the middle ear? One might press essentially the same point by asking what makes “X” mean dog, rather than some complex function of all the diverse causal intermediaries between dogs and “X”.
The foregoing problem cases are generally developed under the rubric of “false beliefs” or “the disjunction problem” in the following way and can be traced to Fodor (1984). No one is perfect, so a theory of content should be able to explicate what is going on when a person makes a mistake, such as mistaking a fox for a dog. The first thought is that this happens when a fox (at a distance or in poor lighting conditions) causes the occurrence of a token of “X” and, since “X” means dog, one has mistaken a fox for a dog. The problem with this first thought arises with the invocation of the idea that “X” means dog. Why say that “X” means dog, rather than dog or fox? On a causal account, we need some principled reason to say that the content of “X” is dog, hence that the token of “X” is falsely tokened by the fox, rather than the content of “X” is dog or fox, hence that the token of “X” is truly tokened by the fox. What basis is there for saying that “X” means dog, rather than dog or fox? Because there appears always to be this option of making the content of a term some disjunction of items, the problem has been called “the disjunction problem”.[6]
As was noted above, what unifies causal theories of mental content is some version of the idea that “X”s being causally connected to Xs makes “X”s mean Xs. What divides causal theories of mental content, most notably, is the different approaches they take to separating the content-determining causes from the non-content-determining causes. Some of these different theories appeal to normal conditions, others to functions generated by natural selection, others to functions acquired ontogenetically, and still others to dependencies among laws. At present there is no approach that is commonly agreed to correctly separate the content-determining causes from the non-content determining causes while at the same time respecting the need not to invoke existing semantic concepts. Although each attempt may have technical problems of its own, the recurring problem is that the attempts to separate content-determining from non-content-determining causes threaten to smuggle in semantic elements.
In this section, we will review the internal problematic of causal theories by examining how each theory fares on our battery of test cases (I)–(IV), along with other objections from time to time. This provides a simple, readily understood organization of the project of developing a causal theory of mental content, but it does this at a price. The primary literature is not arranged exactly in this way. The positive theories found in the primary literature are typically more nuanced than what we present here. Moreover, the criticisms are not arranged into the kind of test battery we have with cases (I)–(IV). One paper might bring forward cases (I) and (III) against theory A, where another paper might bring forward cases (I) and (II) against theory B. Nor are the examples in our test battery exactly the ones developed in the primary literature. In other words, the price one pays for this simplicity of organization is that we have something less like a literature review and more like a theoretical and conceptual toolbox for understanding causal theories.
3.1 Normal Conditions
Trees usually grow a certain way. Each year, there is the passage of the four seasons with a tree growing more quickly at some times and more slowly at others. As a result, each year a tree adds a “ring” to its girth in such a way that one might say that each ring means a year of growth. If we find a tree stump that has twelve rings, then that means that the tree was twelve years old when it died. But, it is not an entirely inviolable law that a tree grows a ring each year. Such a law, if it is one, is at most a ceteris paribus law. It holds only given certain background conditions, such as that weather conditions are normal. If the weather conditions are especially bad one season, then perhaps the tree will not grow enough to produce a new ring. One might, therefore, propose that if conditions are normal, then n rings means that the tree was n years old when it died. This idea makes its first appearance when Stampe (1977) invokes it as part of his theory of “fidelity conditions.”
An appeal to normal conditions would seem to be an obvious way in which to bracket at least some non-content-determining causes of a would-be mental representation “X”. It is only the causes that operate under normal conditions that are content-determining. So, when it comes to human brains, under normal conditions one is not under the influence of hallucinogens, nor is one’s head being invaded by an elaborate configuration of microelectrodes. So, even though LSD and microelectrodes would, counterfactually speaking, cause a token neural event “X”, these causes would not be among the content-determining causes of “X”. Moreover, one can take normal conditions of viewing to include good lighting, a particular perspective, a particular viewing distance, a lack of (seriously) occluding objects, and so forth, so that foxes in dim light, viewed from the bottom up, at a remove of a mile, or through a dense fog, would not be among the content-determining causes of “X”. Under normal viewing conditions, one does not confuse a fox with a dog, so foxes are not to be counted as part of the content of “X”. Moreover, if one does confuse a fox with a dog under normal viewing conditions, then perhaps one does not really have a mental representation of a dog, but maybe only a mental representation of a member of the taxonomic family Canidae.
Although an appeal to normal conditions initially appears promising, it does not seem to be sufficient to rule out the causal intermediaries between objects in the environment and “X”. Even under normal conditions of viewing that include good lighting, a particular perspective, a particular viewing distance, a lack of (seriously) occluding objects, and so forth, it is still the case that both dogs and, say, retinal projections of dogs, lead to tokens of “X”. Why does the content of “X” not include retinal projections of dogs or any of the other causal intermediaries? Nor do normal conditions suffice to keep questions from getting in among the content-determining causes. What abnormal conditions are there when the question, “What kind of animal is named ‘Fido’?,” leads to a tokening of an “X” with the putative meaning of dog? Suppose there are instances of quantum mechanical fluctuations in the nervous system, wherein spontaneous changes in neurons lead to tokens of “X”. Do normal conditions block these out? So, there are problem cases in which appeals to normal conditions do not seem to work. Fodor (1990b) discusses this problem with proximal stimulations in connection with his asymmetric dependency theory, but it is one that clearly challenges the causal theory plus normal conditions approach.
Next, suppose that we tightly construe normal conditions to eliminate the kinds of problem cases described above. So, when completely fleshed out, under normal conditions only dogs cause “X”s. What one intuitively wants is to be able to say that, under normal conditions of good lighting, proper viewing distance, etc. “X” means dog. But, another possibility is that in such a situation “X” does not mean dog, but dog-under-normal-conditions-of-good-lighting, proper-viewing-distance, etc. Why take one interpretation over another? One needs a principled basis for distinguishing the cause of “X” from the many causally contributing factors. In other words, we still have the problem of bracketing non-content-determining causes, only in a slightly reformulated manner. This sort of objection may be found in Fodor (1984).
Now set the preceding problem aside. There is still another developed in Fodor (1984). Suppose that “X” does mean dog under conditions of good lighting, lack of serious occlusions, etc. Do not merely suppose that “X” is caused by dogs under conditions of good light, lack of serious occlusions, etc.; grant that “X” really does mean dog under these conditions. Even then, why does “X”, the firing of the neuronal circuit, still mean dog, when those conditions do not hold? Why does “X” still mean dog under, say, degraded lighting conditions? After all, we could abide by another apparently true conditional regarding these other conditions, namely, if the lighting conditions were not so good, there were no serious occlusions, etc., then the neuronal circuit’s firing would mean dog or fox. Even if “X” means X under one set of conditions C1, why doesn’t “X” mean Y under a different set of conditions C2? It looks as though one could say that C1 provides normal conditions under which “X” means X and C2 provides normal conditions under which “X” means Y. We need some non-semantic notions to enable us to fix on one interpretation, rather than the other. At this point, one might look to a notion of functions to solve these problems.[7]
3.2 Evolutionary Functions
Many physical objects have functions. (Stampe (1977) was the first to note this as a fact that might help causal theories of content.) A familiar mercury thermometer has the function of indicating temperature. But, such a thermometer works against a set of background conditions which include the atmospheric pressure. The atmospheric pressure influences the volume of the vacuum that forms above the column of mercury in the glass tube. So, the height of the column of mercury is the product of two causally relevant features, the ambient atmospheric temperature and the ambient atmospheric pressure. This suggests that one and the same physical device with the same causal dependencies can be used in different ways. A column of mercury in a glass tube can be used to measure temperature, but it is possible to put it to use as a pressure gauge. Which thing a column of mercury measures is determined by its function.
This observation suggests a way to specify which causes of “X” determine its content. The content of “X”, say, the firing of some neurons, is determined by dogs, and not foxes, because it is the function of those neurons to register the presence of dogs, but not foxes. Further, the content of “X” does not include LSD, microelectrodes, or quantum mechanical fluctuations, because it is not the function of “X” to fire in response to LSD, microelectrodes, or quantum mechanical fluctuations in the brain. Similarly, the content of “X” does not include proximal sensory projections of dogs, because the function of the neurons is to register the presence of the dogs, not the sensory stimulations. It is the objective features of the world that matter to an organism, not its sensory states. Finally, it is because it is the function of “X” to register the presence of dogs, and not the presence of questions such as ‘What kind of animal is named “Fido”?’, that “X” means dog. Functions, thus, provide a prima facie attractive means of properly winnowing down the causes of “X” to those that are genuinely content-determining.
In addition, the theory of evolution by natural selection apparently provides a non-semantic, non-intentional basis upon which to explicate functions and, in turn, semantic content. Individual organisms vary in their characteristics, such as how their neurons respond to features of the environment. Some of these differences in how neurons respond make a difference to an organism’s survival and reproduction. Finally, some of these very differences may be heritable. Natural selection, commonly understood as this differential reproduction of heritable variation, is purely causal. Suppose that there is a population of rabbits. Further suppose that either by a genetic mutation or by the recombination of existing genes, some of these rabbits develop neurons that are wired into their visual systems in such a way that they fire (more or less reliably) in the presence of dogs. Further, the firing of these neurons is wired into a freezing behavior in these rabbits. Because of this configuration, the rabbits with the “dog neurons” are less likely to be detected by dogs, hence more likely to survive and reproduce. Finally, because the genes for these neurons are heritable, the offspring of these dog-sensitive rabbits will themselves be dog-sensitive. Over time, the number of the dog-sensitive rabbits will increase, thereby displacing the dog-insensitive rabbits. So, natural selection will, in such a scenario, give rise to mental representations of dogs. Insofar as such a story is plausible, there is hope that natural selection and the genesis of functions can provide a naturalistically acceptable means of delimiting content-determining causes.
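To make vivid that nothing semantic is presupposed by this mechanism, here is a minimal, purely illustrative simulation of the rabbit scenario. All of the particulars (population size, survival probabilities, starting frequencies, trait labels) are invented for the example; the point is only that differential survival plus inheritance suffices to spread the heritable "dog-sensitive" trait.

```python
import random

# A toy illustration of the differential reproduction of heritable variation.
# Every parameter below is invented purely for illustration.
random.seed(0)
POP_SIZE, GENERATIONS = 1000, 30
SURVIVAL_PROB = {"dog-sensitive": 0.8, "dog-insensitive": 0.6}

# Start with a small minority of dog-sensitive rabbits.
population = ["dog-sensitive"] * 100 + ["dog-insensitive"] * 900

for _ in range(GENERATIONS):
    # Differential survival: dog-sensitive rabbits are less often caught.
    survivors = [rabbit for rabbit in population
                 if random.random() < SURVIVAL_PROB[rabbit]]
    # Inheritance: offspring share the parent's trait; population size is held fixed.
    population = [random.choice(survivors) for _ in range(POP_SIZE)]

share = population.count("dog-sensitive") / POP_SIZE
print(f"dog-sensitive share after {GENERATIONS} generations: {share:.2f}")
# The dog-sensitive trait approaches fixation without any appeal to meaning.
```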
3.2.1 Objections to Evolutionary Functions
There is no doubt that individual variation, differential reproduction, and inheritance can be understood in a purely causal manner. Yet, there remains skepticism about how naturalistically one can describe what natural selection can select for. There are doubts about the extent to which the objects of selection really can be specified without illicit importation of intentional notions. Fodor (1989, 1990a) gives voice to some of this skepticism. Prima facie, it makes sense to say that the neurons in our hypothetical rabbits fire in response to the presence of dogs, hence that there is selection for dog representations. But, it makes just as much sense, one might worry, to say that it is sensitivity to dog-look-alikes that leads to the greater fitness of the rabbits with the new neurons.[8] There are genes for the dog-look-alike neurons and these genes are heritable. Moreover, those rabbits that freeze in response to dog-look-alikes are more likely to survive and reproduce than are those that do not so freeze, hence one might say that the freezing is in response to dog-look-alikes. So, our ability to say that the meaning of the rabbits’ mental representation “X” is dog, rather than dog-look-alike, depends on our ability to say that it is the dog-sensitivity of “X”, rather than the dog-look-alike-sensitivity of “X”, that keeps the rabbits alive longer. Of course, being dog-sensitive and being dog-look-alike-sensitive are connected, but the problem here is that both being dog-look-alike-sensitive and being dog-sensitive can increase fitness in ways that lead to the fixation of a genotype. And it can well be that it is avoidance of dogs that keeps a rabbit alive, but one still needs some principled basis for saying that the rabbits avoid dogs by being sensitive to dogs, rather than by being sensitive to dog-look-alikes. The latter appears to be good enough for the differential reproduction of heritable variation to do its work. Where we risk importing semantic notions into the mix is in understanding selection intentionally, rather than purely causally. We need a notion of "selection for" that is both general enough to work for all the mental contents causal theorists aspire to address and that does not tacitly import semantic notions.
In response to this sort of objection, it has been proposed that the correct explanation of a rabbit’s evolutionary success with, say, “X”, is not that this enables the rabbit to avoid dog-look-alikes, but that it enables it to avoid dogs. It is dogs, but not mere dog-look-alikes, that prey on rabbits. (This sort of response is developed in Millikan (1991) and Neander (1995).) Yet, the rejoinder is that if we really want to get at the correct explanation of a rabbit-cum-“X” system, then we should not suppose that “X” means dog. Instead, we should say that it is in virtue of the fact that “X” picks up on something like, say, predator of such and such characteristics that the “X” alarm system increases the chance of a rabbit’s survival. (This sort of rejoinder may be found in Agar (1993).)
This problem aside, there is also some concern about the extent to which it is plausible to suppose that natural selection could act on the fine details of the operation of the brain, such as the firing of neurons in the presence of dogs. (This is an objection raised in Fodor (1990c)). Natural selection might operate to increase the size of the brain so there is more cortical mass for cognitive processing. Natural selection might also operate to increase the folding of the brain so as to maximize the cortical surface area that can be contained within the brain. Natural selection might also lead to compartmentalization of the brain, so that one particular region could be dedicated to visual processing, another to auditory processing, and still another to face processing. Yet, many would take it to be implausible to suppose that natural selection works at the level of individual mental representations. The brain is too plastic and there is too much individual variation in the brains of mammals to admit of selection acting in this way. Moreover, such far reaching effects of natural selection would lead to innate ideas not merely of colors and shapes, but of dogs, cats, cars, skyscrapers, and movie stars. Rather than supposing that functions are determined by natural selection across multiple generations, many philosophers contend that it is more plausible that the functions that underlie mental representations are acquired through cognitive development.
3.3 Developmental Functions
Hypothesizing that certain activities or events within the brain mean what they do, in part, because of some function that develops over the course of an individual’s lifetime shares many of the attractive features of the hypothesis that these same activities or events mean what they do, in part, because of some evolutionarily acquired function. One can again say that it is not the function of “X” to register the presence of LSD, microelectrodes, foxes, stuffed dogs, papier mâché dogs, or questions, but that it is its function to report on dogs. Moreover, it does not invoke dubious suppositions about an intimate connection between natural selection and the precise details of neuronal hardware and its operation. A functional account based on ontogenetic function acquisition or learning seems to be an improvement. This is the core of the approach taken in Dretske (1981; 1988).
The function acquisition story proposes that during development, an organism is trained to discriminate real flesh-and-blood dogs from questions, foxes, stuffed dogs, and papier mâché dogs, under conditions of good lighting and without occlusions or distractions. A teacher ensures that training proceeds according to plan. Once “X” has acquired the function to respond to dogs, the training is over. Thereafter, any instances in which “X” is triggered by foxes, stuffed dogs, papier mâché dogs, LSD, microelectrodes, etc., are false tokenings and figure into false beliefs.
3.3.1 Objections to Developmental Functions
Among the most familiar objections to this proposal is that there is no principled distinction between when a creature is learning and when it is done learning. Instances in which a creature entertains the hypothesis that “X” means X, instances in which the creature entertains the hypothesis that “X” means Y, instances in which the creature straightforwardly uses “X” to mean X, and instances in which the creature straightforwardly uses “X” to mean Y are thoroughly intermingled. The problem is perhaps more clearly illustrated with tokens of natural language, where children struggle through correct and incorrect uses of a word before (perhaps) finally settling on a correct usage. There seems to be no principled way to specify whether learning has stopped or whether there is instead “lifelong learning”. This is among the objections to be found in Fodor (1984).
This, however, is a relatively technical objection. Further reflection suggests that there may be an underlying appeal to the intentions of the teacher. Let us revisit the learning story. Suppose that during the learning period the subject is trained to use “X” as a mental representation of dogs. Now, let the student graduate from “X” using school and immediately thereafter see a fox. Seeing this fox causes a token of “X” and one would like to say that this is an instance of mistaking a fox for a dog, hence a false tokening. But, consider the situation counterfactually. If the student had seen the fox during the training period just before graduation, the fox would have triggered a token of “X”. This suggests that we might just as well say that the student learned that “X” means fox or dog as that the student learned that “X” means dog. Thus, we might just as well say that, after training, the graduate does not falsely think of a dog, but truly thinks of a fox or a dog. The threat of running afoul of naturalist scruples comes if one attempts to say, in one way or another, that it is because the teacher meant for the student to learn that “X” means dog, rather than “X” means fox or dog. The threatened violation of naturalism comes in invoking the teacher’s intentions. This, too, is an objection to be found in Fodor (1984).
3.4 Asymmetric Dependency Theory
The preceding attempts to distinguish the content-determining causes from non-content-determining causes focused on the background or boundary conditions under which the distinct types of causes may be thought to act. Fodor’s Asymmetric Dependency Theory (ADT), however, represents a bold alternative to these approaches. Although Fodor (1987, 1990a, b, 1994) contain numerous variations on the details of the theory, the core idea is that the content-determining cause is in an important sense fundamental, where the non-content-determining causes are non-fundamental. The sense of being fundamental is that the non-content-determining causes depend on the content-determining cause; the non-content-determining causes would not exist if not for the content-determining cause. Put a bit more technically, there are numerous laws such as ‘Y1 causes “X”,’ ‘Y2 causes “X”,’ etc., but none of these laws would exist were it not a law that X causes “X”. The fact that the ‘X causes “X”’ law does not in the same way depend on any of the Y1, Y2, …, Yn laws makes the dependence asymmetric. Hence, there is an asymmetric dependency between the laws. The intuition here is that the question, ‘What kind of animal is called “Fido”?’ will cause an occurrence of the representation “X” only because of the fact that dogs cause “X”. Instances of foxes cause instances of “X” only because foxes are mistaken for dogs and dogs cause instances of “X”.
Causation is typically understood to have a temporal dimension. First there is event C and this event C subsequently leads to event E. Thus, when the ADT is sometimes referred to as the “Asymmetric Causal Dependency Theory,” the term “causal” might suggest a diachronic picture in which there is, first, an X-“X” law which subsequently gives rise to the various Y-“X” laws. Such a diachronic interpretation, however, would lead to counterexamples for the ADT approach. Fodor (1987) discusses this possibility. Consider Pavlovian conditioning. Food causes salivation in a dog. Then a bell causes salivation in the dog. It is likely that the bell causes salivation only because the food causes it. Yet, salivation hardly means food. It may well naturally mean that food is present, but salivation is not a thought or thought content and it is not ripe for false semantic tokening. Or take a more exotic kind of case. Suppose that one comes to apply “X” to dogs, but only by means of observations of foxes. This would be a weird case of “learning”, but if things were to go this way, one would not want “X” to mean fox. To block this kind of objection, the theory maintains that the dependency between the fundamental X-“X” law and the non-fundamental Y-“X” laws is synchronic. The dependency is such that if one were to break the X-“X” law at time t, then one would thereby instantaneously break all the Y-“X” laws at that time.
The core of ADT, therefore, comes down to this. “X” means X if
- ‘Xs cause “X”s’ is a law,
- For all Ys that are not Xs, if Ys qua Ys actually cause “X”s, then the Ys’ causing “X”s is asymmetrically dependent on the Xs’ causing “X”s,
- The dependence in (2) is synchronic (not diachronic).
This seems to get a number of cases right. The reason that questions like “What kind of animal is named ‘Fido’?” or “What is a Sheltie?” trigger “X”, meaning dog, is that dogs are able to trigger “X”s. Foxes only trigger “X”s, meaning dog, because dogs are able to trigger them. Moreover, it appears to solve the disjunction problem. Suppose we have a ‘dogs cause “X”s’ law and a ‘dogs or foxes cause “X”s’ law. If one breaks the ‘dogs cause “X”s’ law, then one thereby breaks the ‘dogs or foxes cause “X”s’ law, since the only reason either dogs or foxes cause “X”s is because dogs do. Moreover, if one breaks the ‘dogs or foxes cause “X”s’ law, one does not thereby break the ‘dogs cause “X”s’ law, since dogs alone might suffice to cause “X”s. So, the ‘dogs or foxes cause “X”s’ law depends on the ‘dogs cause “X”s’ law, but not vice versa. Asymmetric dependency of laws gives the right results.[9]
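The pattern of conditions (1)–(3) can be made concrete with a small toy model. The sketch below is not Fodor's own apparatus: it simply represents laws as cause–symbol pairs and stipulates, by hand, a synchronic dependence relation among them, so that the asymmetric-dependence check can be read off mechanically. The particular laws, the depends_on table, and the adt_means function are all hypothetical bookkeeping introduced only for illustration.

```python
# A toy formalization of the ADT check, for illustration only.
# "Laws" are (cause, symbol) pairs; depends_on[L] lists the laws that
# L synchronically depends on (break them all and L breaks too).

laws = {("dog", '"X"'), ("fox", '"X"'), ("Fido-question", '"X"')}

depends_on = {
    ("fox", '"X"'): {("dog", '"X"')},            # foxes cause "X" only because dogs do
    ("Fido-question", '"X"'): {("dog", '"X"')},  # the question causes "X" only because dogs do
    ("dog", '"X"'): set(),                       # the dog law depends on no other law
}

def adt_means(symbol, candidate):
    """Toy check of ADT conditions (1) and (2) for '`symbol` means `candidate`';
    condition (3), synchrony, is built into how depends_on is read."""
    base = (candidate, symbol)
    if base not in laws:                          # condition (1): a candidate-symbol law
        return False
    for law in laws:
        cause, sym = law
        if sym != symbol or cause == candidate:
            continue
        # condition (2): every other law depends on the base law...
        if base not in depends_on.get(law, set()):
            return False
        # ...and not vice versa (the dependence is asymmetric).
        if law in depends_on.get(base, set()):
            return False
    return True

print(adt_means('"X"', "dog"))   # True: the other laws depend on the dog law
print(adt_means('"X"', "fox"))   # False: the dog law does not depend on the fox law
```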
3.4.1 Objections to ADT
Adams and Aizawa (1994) mention an important class of causes that the ADT does not appear to handle, namely, the “non-psychological interventions”. We have all along assumed that “X” is some sort of brain event, such as the firing of some neurons. But, it is plausible that some interventions, such as a dose of hallucinogen or maybe some carefully placed microelectrodes, could trigger such brain events, quite apart from the connection of those brain events to other events in the external world. If essentially all brain events are so artificially inducible, then it would appear that for all putative mental representations, there will be some laws, such as ‘microelectrodes cause “X”s,’ that do not depend on laws such as ‘dogs cause “X”s.’ If this is the case, then the second condition of the ADT would rarely or never be satisfied, so that the theory would have little relevance to actual cognitive scientific practice.
Fodor (1990a) discusses challenges that arise with the fact that the perception of objects involves causal intermediaries. Suppose that there is a dog-“X” law that is mediated entirely by sensory mechanisms. In fact, suppose unrealistically that the dog-“X” law is mediated by a single visual sensory projection. In other words, let the dog-“X” law be mediated by the combination of a dog-dogsp law and a dogsp-“X” law, where dogsp is the visual sensory projection of a dog. Under these conditions, it appears that “X” means dogsp, rather than dog. Condition (1) is satisfied, since there is a dogsp-“X” law. Condition (2) is satisfied, since if one were to break the dogsp-“X” law one would thereby break the dog-“X” law (i.e., there is a dependence of one law on the other) and breaking the dog-“X” law would not necessarily break the dogsp-“X” law (i.e., the dependence is not symmetric). The dependence is asymmetric, because one can break the dog-“X” law by breaking the dog-dogsp law (by changing the way dogs look) without thereby breaking the dogsp-“X” law. Finally, condition (3) is satisfied, since the dependence of the dog-“X” law on the dogsp-“X” law is synchronic.
The foregoing version of the sensory projections problem relies on what was noted to be the unrealistic assumption that the dog-“X” law is mediated by a single visual sensory projection. Relaxing the assumption does not so much solve the problem as transform it. So, adopt the more realistic assumption that the dog-“X” law is sustained by a combination of a large set of dog-sensory projection laws and a large set of dogsp-“X” laws. In the first set, we have laws connecting dogs to particular patterns of retinal stimulation, laws connecting dogs to particular patterns of acoustic stimulation, etc. In the second set, we have certain psychological laws connecting particular patterns of retinal stimulation to “X”, certain psychological laws connecting particular patterns of acoustic stimulation to “X”, etc. In this sort of situation, there threatens to be no “fundamental” law, no law on which all other laws asymmetrically depend. If one breaks the dog-“X” law one does not thereby break any of the sensory projection-“X” laws, since the former can be broken by dissolving all of the dog-sensory projection laws. If, however, one breaks any one of the particular dogsp-“X” laws, e.g. one connecting a particular doggish visual appearance to “X”, one does not thereby break the dog-“X” law. The other sensory projections might sustain the dog-“X” law. Moreover, breaking the law connecting a particular doggish look to “X” will not thereby break a law connecting a particular doggish sound to “X”. Without a “fundamental” law, there is no meaning in virtue of the conditions of the ADT. Further, the applicability of the ADT appears to be dramatically reduced insofar as connections between mental representations and properties in the world are mediated by sensory projections.
Another problem arises with items or kinds that are indistinguishable. Adams and Aizawa (1994) and, implicitly, McLaughlin (1991), among others, have discussed this problem. As one example, consider the time at which the two minerals, jadeite and nephrite, could not yet be chemically distinguished and were both thought to be jade. As another, one might appeal to H2O and XYZ (the stuff of philosophical thought experiments, the water look-alike substance found on Twin Earth). Let X = jadeite and Y = nephrite and let there be laws ‘jadeite causes “X”’ and ‘nephrite causes “X”’. Can “X” mean jadeite? No. Condition (1) is satisfied, since it is a law that ‘jadeite causes “X”’. Condition (3) is satisfied, since breaking the jadeite-“X” law will immediately break the nephrite-“X” law. If jadeite cannot trigger an “X”, then neither can nephrite, since the two are indistinguishable. That is, there is a synchronic dependence of the ‘nephrite causes “X”’ law on the ‘jadeite causes “X”’ law. The problem arises with condition (2). Breaking the jadeite-“X” law will thereby break the nephrite-“X” law, but breaking the nephrite-“X” law will also thereby break the jadeite-“X” law. Condition (2) cannot be satisfied, since there is a symmetric dependence between the jadeite-“X” law and the nephrite-“X” law. By parity of reasoning, “X” cannot mean nephrite. So, can “X” mean jade? No. As before, conditions (1) and (3) could be satisfied, since there could be a jade-“X” law and the jadeite-“X” law and the nephrite-“X” law could synchronically depend on it. The problem is, again, with condition (2). Presumably breaking the jade-“X” law would break the jadeite-“X” and nephrite-“X” laws, but breaking either of them would break the jade-“X” law. The problem is, again, with symmetric dependencies.
Here is a problem that we earlier found in conjunction with other causal theories. Despite the bold new idea underlying the ADT method of partitioning off non-content-determining causes, it too appears to sneak in naturalistically unacceptable assumptions. As with all causal theories of mental content, the asymmetric causal dependencies are supposed to be the basis upon which meaning is created; the dependencies are not themselves supposed to be a product, or byproduct, of meaning. Yet, ADT appears to violate this naturalistic pre-condition for causal theories. (This kind of objection may be found in Seager (1993), Adams & Aizawa (1994a, 1994b), Wallis (1995), and Gibson (1996).) Ys are supposed to cause “X”s only because Xs do and this must not be because of any semantic facts about “X”s. So, what sort of mechanism would bring about such asymmetric dependencies among things connected to the syntactic item “X”? In fact, why wouldn’t lots of things be able to cause “X”s besides Xs, quite independently of the fact that Xs do? The instantiation of “X”s in the brain is, say, some set of neurochemical events. There should be natural causes capable of producing such events in one’s brain under a variety of circumstances. Why on earth would foxes be able to cause the neurochemical “X” events in us only because dogs can? One might be tempted to observe that “X” means dog, that “Y” means fox, and that we associate foxes with dogs, and that is why foxes cause “X”s only because dogs cause “X”s. We would not associate foxes with “X”s unless we associated “X”s with dogs and foxes with dogs. This answer, however, involves deriving the asymmetric causal dependencies from meanings, which violates the background assumption of the naturalization project. Unless there is a better explanation of such asymmetrical dependencies, it may well be that the theory is misguided in attempting to rest meaning upon them.
3.5 Best Test Theory
A relatively more recent causal theory is Robert Rupert’s (1999) Best Test Theory (BTT) for the meanings of natural kind terms. Unlike most causal theories, this one is restricted in scope to just natural kinds and terms for natural kinds. To mark this restriction, we will let represented kinds be denoted by K’s, rather than our usual X’s.
Best Test Theory: If a subject S bears no extension-fixing intentions toward “X” and “X” is an atomic natural kind term in S’s language of thought (i.e., not a compound of two or more other natural kind terms), then “X” has as its extension the members of natural kind K if and only if members of K are more efficient in their causing of “X” in S than are the members of any other natural kind.
To put the idea succinctly, “X” means, or refers to, those things that are the most powerful stimulants of “X”. That said, we need an account of what it is for a member of a natural kind to be more efficient in causing “X”s than are other natural kinds. We need an account of how to measure the power of a stimulus. This might be explained in terms of a kind of biography.
|      | "X1" | "X2" | "X3" | "X4" | "X5" |
| ---- | ---- | ---- | ---- | ---- | ---- |
| K1   | 1    | 1    | 1    |      |      |
| K1   | 1    | 1    |      | 1    |      |
| K1   |      | 1    | 1    |      |      |
| K1   | 1    |      | 1    |      | 1    |
| K1   | 1    | 1    | 1    |      |      |
| K1   |      | 1    |      |      | 1    |
| K2   | 1    |      |      |      |      |
| K2   | 1    |      |      |      |      |
| K2   | 1    |      |      | 1    |      |
| K3   | 1    |      |      |      |      |
| K3   |      |      | 1    |      |      |
| K3   |      |      |      | 1    |      |
| K3   |      |      |      |      | 1    |
Figure 1. A spreadsheet biography
Consider an organism S that (a) causally interacts with three different natural kinds, K1-K3, in its environment and (b) has a language of thought with five terms “X1”-“X5”. Further, suppose that each time S interacts with an individual of kind Ki this causes an occurrence of one or more of “X1”-“X5”. We can then create a kind of “spreadsheet biography” or “log of mental activity” for S in which there is a column for each of “X1”-“X5” and a row for each instance in which a member of K1-K3 causes one or more instances of “X1”-“X5”. Each mental representation “Xi” that Ki triggers receives a “1” in its column. Thus, a single spreadsheet biography might look like that shown in Figure 1.
To determine what a given term “Xi” means, we find the kind Ki that is most effective at causing “Xi”. This can be computed from S’s biography. For each Ki and “Xi”, we compute the frequency with which Ki triggers “Xi”. “X1” is tokened four out of six times that K1 is encountered, three out of three times that K2 is encountered, and one out of four times that K3 is encountered. “Xi” means the Ki that has the highest sample frequency. Thus, in this case, “X1” means K2. Just to be clear, when BTT claims that “Xi” means the Ki that is the most powerful stimulant of “Xi”, this is not to say that “Xi” means the most common stimulant of “Xi”. In our spreadsheet biography, K1 is the most common stimulant of “X1”, since it triggers “X1” four times, where K2 triggers it only three times, and K3 triggers it only one time. Nonetheless, according to BTT, “X1” means K2, rather than K1 or K3: it is the highest sample frequency, not the greatest number of triggerings, that matters.
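The best-test computation itself is just a comparison of sample frequencies, as the following illustrative sketch shows. It is not Rupert's own formalism: the biography entries mirror Figure 1 (whose cells outside the "X1" column are merely illustrative), and best_test is a hypothetical helper introduced only to make the calculation explicit.

```python
from collections import defaultdict

# Each entry records one encounter with a member of a kind and the set of
# mentalese terms that the encounter triggered (mirroring Figure 1).
biography = [
    ("K1", {"X1", "X2", "X3"}), ("K1", {"X1", "X2", "X4"}), ("K1", {"X2", "X3"}),
    ("K1", {"X1", "X3", "X5"}), ("K1", {"X1", "X2", "X3"}), ("K1", {"X2", "X5"}),
    ("K2", {"X1"}), ("K2", {"X1"}), ("K2", {"X1", "X4"}),
    ("K3", {"X1"}), ("K3", {"X3"}), ("K3", {"X4"}), ("K3", {"X5"}),
]

def best_test(term, biography):
    """Return the kind whose members most efficiently cause `term`,
    i.e., the kind with the highest sample frequency of triggering it."""
    encounters = defaultdict(int)   # how often each kind is encountered
    triggerings = defaultdict(int)  # how often an encounter triggers `term`
    for kind, tokens in biography:
        encounters[kind] += 1
        if term in tokens:
            triggerings[kind] += 1
    frequencies = {k: triggerings[k] / encounters[k] for k in encounters}
    return max(frequencies, key=frequencies.get), frequencies

winner, freqs = best_test("X1", biography)
print(freqs)   # K1: 4/6 ≈ 0.67, K2: 3/3 = 1.0, K3: 1/4 = 0.25
print(winner)  # 'K2' — the most efficient, though not the most frequent, cause of "X1"
```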
How does the BTT handle our range of test cases? Consider, first, the standard form of the disjunction problem, the case of “X” meaning dog, rather than dog or fox-on-a-dark-night-at-a-distance. Since the latter is not apparently a natural kind, “X” cannot mean that.[10] Moreover, “X” means dog, rather than fox, because the only times the many foxes that S encounters can trigger “X”s are on dark nights at a distance, whereas dogs trigger “X”s more consistently under a wider range of conditions.
How does the BTT address the apparent problem of “brain interventions,” such as LSD, microelectrodes, or brain tumors? The answer is multi-faceted. The quickest method for taking much of the sting out of these cases is to note that they generally do not arise for most individuals. The Best Test Theory relies on personal biographies in which only actual instances of kinds triggering mental representations are used to specify causal efficiency. The counterfactual truth that, were a stimulating microelectrode to be applied to, say, a particular neuron, it would perfectly reliably produce an “X” token simply does not matter for the theory. So, for all those individuals who do not take LSD, do not have microelectrodes inserted in their brains, do not have brain tumors, etc., these sorts of counterfactual possibilities are irrelevant. A second line of defense against “brain interventions” appeals to the limitation to natural kinds. The BTT might set aside microelectrodes, since they do not constitute a natural kind. Maybe brain tumors constitute one; maybe not. Unfortunately, however, LSD is a very strong candidate for a chemical natural kind. Still, the BTT is not without a third line of defense for handling these cases. One might suppose that LSD and brain tumors act on the brain in a rather diffuse manner. Sometimes a dose of LSD triggers “Xi”, another time it triggers “Xj”, and another time it triggers “Xk”. One might then propose that, if one counts all these episodes with LSD, none of these will act often enough on, say, “Xi” to get it to mean LSD, rather than, say, dog. This is the sort of strategy that Rupert invokes to keep mental symbols from meaning omnipresent but non-specific causes, such as the heart. The heart might causally contribute to “X1”, but it also contributes to so many other “Xi”s that the heart will turn out not to be the most efficient cause of “X1”.
What about questions? Presumably questions as a category will count as an instance of a linguistic natural kind. Moreover, particular sentences will also count. So, the restriction of the BTT to natural kinds is of little use here. So, what of causal efficiency? Many sentences appear to provoke a wide range of possible responses. In response to “I went to the zoo last week,” S could think of lions, tigers, bears, giraffes, monkeys, and any number of other natural kinds. But, the question, “What animal goes ‘oink, oink’?”—perhaps uttered in “Motherese” in a clear, deliberate fashion so that it is readily comprehensible to a child—will be rather efficient in generating thoughts of a pig. Moreover, it could be more efficient than actual pigs, since a child might have more experience with the question than with actual pigs, often not figuring out that actual pigs are pigs. In such situations, “pig” would turn out to mean “What animal goes ‘oink, oink’?,” rather than pig. So, there appear to be cases in which BTT could make prima facie incorrect content assignments.
What, finally, of proximal projections of natural kinds? One plausible line might be to maintain that proximal projections of natural kinds are not themselves natural kinds, hence that they are automatically excluded from the scope of the theory. This plausible line, however, might be the only available line. Presumably, in the course of S’s life, the only way dogs can cause “X”s is by way of causal mediators between the dogs and the “X”s. Thus, each episode in which a dog causes an “X” is also an episode in which a sensory projection of a dog causes an “X”. So, dog efficiency for “X” can be no higher than the efficiency of dog sensory projections. And, if it is possible for there to be a sensory projection of a dog without there being an actual dog, then the efficiency of the projections would be greater than the efficiency of the dogs. So, “X” could not mean dog. But, so long as proximal projections are indeed excluded as non-natural kinds, this problem is not necessarily damaging to BTT.
Since the BTT has not received a critical response in the literature, we will not devote a section to objections to it. Instead, we will leave well enough alone with our somewhat speculative treatment of how BTT might handle our familiar test cases. The general upshot is that the combination of actual causal efficiency over the course of an individual’s lifetime along with the restriction to natural kinds provides a surprisingly rich means of addressing some long-standing problems.
4. General Objections to Causal Theories of Mental Content
In the preceding section, we surveyed issues that face the philosopher attempting to work out the details of a causal theory of mental content. These issues are, therefore, one might say, internal to causal theories. In this section, however, we shall review some of the objections that have been brought forward to the very idea of a causal theory of mental content. As such, these objections might be construed as external to the project of developing a causal theory of mental content. Some of these are coeval with causal theories and have been addressed in the literature, but some are relatively recent and have not been discussed in the literature. The first objections, discussed in subsections 4.1–4.4, in one way or another push against the idea that all content could be explained by appeal to a causal theory, but leave open the possibility that one or another causal theory might provide sufficiency conditions for meaning. The last objections, those discussed in subsections 4.5–4.7, challenge the ability of causal theories to provide even sufficiency conditions for mental content.
4.1 Causal Theories do not Work for Logical and Mathematical Relations
One might think that the meanings of terms that denote mathematical or logical relations could not be handled by a causal theory. How could a mental version of the symbol “+” be causally connected to the addition function? How could a mental version of the logical symbol “¬” be causally connected to the negation truth function? The addition function and the negation function are abstract objects. To avoid this problem, causal theories typically acquiesce and maintain that their conditions are merely sufficient conditions on meaning. If an object meets the conditions, then that object bears meaning. But, the conditions are not necessary for meaning, so that representations of abstract objects get their meaning in some other way. Perhaps conceptual role semantics, wherein the meanings of terms are defined in terms of the meanings of other terms, could be made to work for such terms.
4.2 Causal Theories do not Work for Vacuous Terms
Another class of potential problem cases involves vacuous terms. So, for example, people can think about unicorns, fountains of youth, or the planet Vulcan. Cases such as these are discussed in Stampe (1977) and Fodor (1990a), among other places. These things would be physical objects were they to exist, but they do not, so one cannot causally interact with them. In principle, one could say that thoughts about such things are not counterexamples to causal theories, since causal theories are meant only to offer sufficiency conditions for meaning. But, this in-principle reply appears to be ad hoc. It is not warranted, for example, by the fact that these excluded meanings involve abstract objects. There are, however, a number of options that might be explored here.
One strategy would be to turn to the basic ontology of one’s causal theory of mental content. This is where a theory based on nomological relations might be superior to a version that is based on causal relations between individuals. One might say that there can be a unicorn-“unicorn” law, even if there are no actual unicorns. This story, however, would break down for mental representations of individuals, such as the putative planet Vulcan. There is no law that connects a mental representation to an individual; laws are relations among properties.
Another strategy would be to propose that some thought symbols are complex and can decompose into meaningful primitive constituents. One could then allow that “X” is a kind of abbreviation for, or logical construction out of, or is defined in terms of “Y1,” “Y2,” and “Y3,” and that a causal theory applies to “Y1,” “Y2,” and “Y3.” So, for example, one might have a thought of a unicorn, but rather than having a single unicorn mental representation there is a complex representation made up of a representation of a horse, a representation of a horn, and a representation of the relationship between the horse and the horn. “Horse,” “horn,” and “possession” may then have instantiated properties as their contents.
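To illustrate the shape of this decompositional strategy, here is a minimal sketch (the representation names and their contents are invented for the example, not drawn from any particular theory):

```python
# Toy sketch of the decompositional strategy: there is no primitive
# "unicorn" representation; instead, a complex representation is built from
# primitives whose contents a causal theory could, in principle, supply.
# (All names and contents here are illustrative assumptions.)

PRIMITIVE_CONTENT = {
    "horse": "the property of being a horse",
    "horn": "the property of being a horn",
    "possession": "the relation of having",
}

def complex_content(parts: list[str]) -> str:
    """Content of a complex representation, fixed by the contents of its
    primitive constituents and their arrangement (here, simply their order)."""
    return " + ".join(PRIMITIVE_CONTENT[p] for p in parts)

# "unicorn" treated as an abbreviation for the complex expression below:
print(complex_content(["horse", "possession", "horn"]))
```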
4.3 Causal Theories do not Work for Phenomenal Intentionality
Horgan and Tienson (2002) object to what they describe as “strong externalist theories” that maintain that causal connections are necessary for content. They argue, first, that mental life involves a lot of intentional content that is constituted by phenomenology alone. Perceptual states, such as seeing a red apple, are intentional. They are about apples. Believing that there are more than 10 Mersenne primes and hoping to discover a new Mersenne prime are also intentional states, in this case about Mersenne primes. But, all these intentional states have a phenomenology—something it is like to be in these states. There is something it is like to see a red apple, something different that it is like to believe there are more than 10 Mersenne primes, and something different still that it is like to hope to discover a new Mersenne prime. Horgan and Tienson propose that there can be phenomenological duplicates—two individuals with exactly the same phenomenology. Assume nothing about these duplicates other than that they are phenomenological duplicates. In such a situation, one can be neutral regarding how much of their phenomenological experience is veridical and how much illusory. So, one can be neutral on whether or not a duplicate sees a red apple or whether there really are more than 10 Mersenne primes. This suggests that there is a kind of intentionality—that shared by the duplicates—that is purely phenomenological. Second, Horgan and Tienson argue that phenomenology constitutively depends only on narrow factors. They observe that one’s experiences are often caused or triggered by events in the environment, but that these environmental causes are only parts of causal chains that lead to the phenomenology itself. They do not constitute that phenomenology. The states that constitute, or provide the supervenience base for, the phenomenology are not the elements of the causal chain leading back into the environment. If we combine the conclusions of these two arguments, we get Horgan and Tienson’s principal argument against any causal theory that would maintain that causal connections are necessary for content.
P1. There is intentional content that is constituted by phenomenology alone.
P2. Phenomenology is constituted only by narrow factors.
Therefore,
C. There is intentional content that is constituted only by narrow factors.
Thus, versions of causal theories that suppose that all content must be based on causal connections are fundamentally mistaken. For those versions of causal theories that offer only sufficiency conditions on semantic content, however, Horgan and Tienson’s argument may be taken to provide a specific limitation on the scope of causal theories, namely, that causal theories do not work for intentional content that is constituted by phenomenology alone.
A relatively familiar challenge to this argument may be found in certain representational theories of phenomenological properties. (See, for example, Dretske (1988) and Tye (1995).) According to these views, the phenomenology of a mental state derives from that state’s representational properties, but the representational properties are determined by external factors, such as the environment in which an organism finds itself. Thus, such representationalist theories challenge premise P2 of Horgan and Tienson’s argument.
4.4 Causal Theories do not Work for Certain Reflexive Thoughts
Buras (2009) presents another argument that is perhaps best thought of as providing a novel reason to think that causal theories of mental representation only offer sufficiency conditions on meaning. This argument begins with the premise that some mental states are about themselves. To motivate this claim, Buras notes that some sentences are about themselves. So, by analogy with “This sentence is false,” which is about itself, one might think that there is a thought, “This thought is false,” that is also about itself. Or consider “This thought is realized in brain tissue” or “This thought was caused by LSD”. These also appear to be about themselves. Buras’ second premise is that nothing is a cause of itself. So, “This thought is false” is about itself, but could not be caused by itself. So, the sentence “This thought is false” could not mean that it itself is false in virtue of the fact that “This thought is false” was caused by its being false. So, “This thought is false” must get its meaning in some other way. It must get its meaning in virtue of some other conditions of meaning acquisition.
This is not, however, exactly the way Buras develops his argument. In the first place, he treats causal theories of mental content as maintaining that, if “X” means X, then X causes “X”. (Cf. Buras, 2009, p. 118). He cites Stampe (1977), Dretske (1988), and Fodor (1987) as maintaining this. Yet, Stampe, Dretske, and Fodor explicitly formulate their theories in terms of sufficiency conditions, so that (roughly) “X” means X, if Xs cause “X”s, etc. (See, for example, Stampe (1977), pp. 82–3, Dretske (1988), p. 52, and Fodor (1987), p. 100). In the second place, Buras seems to draw a conclusion that is orthogonal to the truth or falsity of causal theories of mental content. He begins his paper with an impressively succinct statement of his argument.
Some mental states are about themselves. Nothing is a cause of itself. So some mental states are not about their causes; they are about things distinct from their causes (Buras, 2009, p. 117).
The causal theorist can admit that some mental states are not about their causes, since some states are thoughts and thoughts mean what they do in virtue of, say, the meanings of mental sentences. These mental sentences might mean what they do in virtue of the meanings of primitive mental representations (which may or may not mean what they do in virtue of a causal theory of meaning) and the way in which those primitive mental representations are put together. As was mentioned in section 2, such a syntactically and semantically combinatorial language of thought is a familiar background assumption for causal theories. The conclusion that Buras may want, instead, is that there are some thoughts that do not mean what they do in virtue of what causes them. So, through some slight amendments, one can understand Buras to be presenting a clarification of the scope of causal theories of mental content or as a challenge to a particularly strong version of causal theories, a version that takes them as offering a necessary condition on meaning.
4.5 Causal Theories do not Work for Reliable Misrepresentations
As noted above, one of the central challenges for causal theories of mental content has been to discriminate between a “core” content-determining causal connection, as between cows and “cow”s, and “peripheral” non-content-determining causal connections, as between horses and “cow”s. Cases of reliable misrepresentation involve representations that always misrepresent in the same way. In such cases, there is supposed to be no “core” content-determining causal connection; there are no Xs to which “X”s are causally connected. Instead, there are only “peripheral” causal connections. Mendelovici (2013), following a discussion by Hohman (2002), suggests that color representations may be like this.[11] Color anti-realism, according to which there are no colors in the world, seems to be committed to the view that color representations are not caused by colors in the world. Color representations may be reliably tokened by something in the world, but not by colors that are in the world.
In some instances, reliable misrepresentations provide another take on some of the familiar content-determination problems. So, take attempts to use normal conditions to distinguish between content-determining causes and non-content-determining causes. Even in normal conditions, color representations are not caused by colors, but by, say, surface reflectances under certain conditions of illumination, just in the way that, even in normal conditions, cow representations are sometimes not caused by cows, but by, say, a question such as, “What kind of animal is sometimes named ‘Bessie’?” Take a version of the asymmetric dependency theory. On this theory applied to color terms, it might seem that there is no red-to-“red” law on which all the other laws depend, in much the same way that it might seem there is no unicorn-to-“unicorn” law on which all other laws depend. (Cf. Fodor (1987, pp. 163–4) and (1990, pp. 100–1)).
Unlike in the more familiar cases, Mendelovici (2013) does not argue that there actually are such problematic cases. The argument is not that there are actual cases of reliable misrepresentation, but merely that reliable misrepresentations are possible and that this is enough to create trouble for causal theories of mental representation. One sort of trouble stems from the need for a pattern of psychological explanation. Let a mental representation “X” mean intrinsically-heavy. Such a representation is a misrepresentation, since there is no such property of being intrinsically heavy. Such a misrepresentation is, nonetheless, reliable (i.e. consistent), since it is consistently tokened by all the same sorts of things on earth. But, one can see how an agent using “X” could make a reasonable, yet mistaken, inference to the conclusion that an object that causes a tokening of “X” on earth would be hard to lift on the moon. To allow such a pattern of explanation, Mendelovici argues, a causal theorist must allow for reliable misrepresentation. A theory of what mental representations are should not preclude such patterns of explanation. Another sort of trouble stems from the idea that if a theory of meaning does not allow for reliable misrepresentation, but requires that there be a connection between “X”s and Xs, then this would constitute a commitment to a realist metaphysics for Xs. While there can be good reasons for realism, the needs of a theory of content would not seem to be a proper source for them.
Artiga (2013) provides a defense of teleosemantic theories in the face of Mendelovici’s examples of reliable misrepresentation. Some of Artiga’s arguments might also be used by advocates of causal theories of mental content. Mendelovici (2016) replies to Artiga (2013) by providing refinements and a further defense of the view that reliable misrepresentations are a problem for causal theories of mental content.
4.6 Causal Theories Conflict with the Theory Mediation of Perception
Cummins (1997) argues that causal theories of mental content are incompatible with the fact that one’s perception of objects in the physical environment is typically mediated by a theory. His argument proceeds in two stages. In one stage, he argues that, on a causal theory, for each primitive “X” there must be some bit of machinery or mechanism that is responsible for detecting Xs. But, since a finite device, such as the human brain, contains only a finite amount of material, it can only generate a finite number of primitive representations. Next, he observes that thought is productive—that it can, in principle, generate an unbounded number of semantically distinct representations. This means that to generate the stock of mental representations corresponding to each of these distinct thoughts, one must have a syntactically and semantically combinatorial system of mental representation of the sort found in a language of thought (LOT). More explicitly, this scheme of mental representation must have the following properties:
- It has a finite number of semantically primitive expressions.
- Every expression is a concatenation of one or more primitive expressions.
- The content of any complex expression is a function of the contents of the primitives and the way those primitives are concatenated into the whole expression.
The conclusion of this first stage is, therefore, that a causal theory of mental representation requires a LOT. In the other stage of his argument, Cummins observes that, for a wide range of objects, their perception is mediated by a body of theory. Thus, to perceive dogs—for dogs to cause “dogs”—one has to know things such as that dogs have tails, dogs have fur, and dogs have four legs. But, to know that dogs have tails, fur, and four legs, one needs a set of mental representations, such as “tail”, “fur”, “four”, and “legs”. Now the problem fully emerges. According to causal theories, having a “dog” representation requires the ability to detect dogs. But, the ability to detect dogs requires a theory of dogs. But, having a theory of dogs requires already having a LOT—a system of mental representation. One cannot generate mental representations without already having them.[12]
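A minimal sketch of the kind of combinatorial scheme at issue (the primitives and the content function are invented placeholders) shows how a finite stock of primitives, plus concatenation and a compositional content rule, yields an unbounded supply of semantically distinct expressions:

```python
# Toy sketch of a scheme with the three properties listed above: finitely many
# primitives, expressions as concatenations of primitives, and the content of
# a complex expression fixed by the primitives' contents and their order.
# (Primitives and contents are illustrative assumptions.)

from itertools import product

PRIMITIVES = {"dog": "DOG", "fur": "FUR", "has": "HAS"}  # finite stock

def content(expr: tuple[str, ...]) -> str:
    """Compositional content: a function of the constituent primitives'
    contents and the way they are concatenated."""
    return "(" + " ".join(PRIMITIVES[p] for p in expr) + ")"

def expressions_up_to(max_len: int):
    """Productivity in miniature: longer and longer concatenations of the
    same finite primitives, each with its own compositionally fixed content."""
    for n in range(1, max_len + 1):
        yield from product(PRIMITIVES, repeat=n)

for e in list(expressions_up_to(2))[:6]:
    print(e, "->", content(e))
```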
4.7 Causal Theories Conflict with the Implementation of Psychological Laws
Jason Bridges (2006) argues that the core hypothesis of informational semantics conflicts with the idea that psychological laws are non-basic. As we have just observed, causal theories are often taken to offer mere sufficiency conditions for meaning. Suppose, therefore, that we suitably restrict the scope of a causal theory and understand its core hypothesis as asserting that all “X”s with the content X are reliably caused by Xs. (Nothing in the logic of Bridges’ argument depends on any additional conditions on a putative causal theory of mental content, so for simplicity we can follow Bridges in restricting attention to this simple version.) Bridges proposes that this core claim of a causal theory of mental content is a constitution thesis. It specifies what constitutes the meaning relation (at least in some restricted domain). Thus, if one were to ask, “Why is it that all ‘X’s with content X are reliably caused by Xs?,” the answer is roughly, “That’s just what it is for ‘X’ to have the content X”. Being caused in that way is what constitutes having that meaning. So, when a theory invokes this kind of constitutive relation, it licenses this kind of constitutive explanation. So, the first premise of Bridges’ argument is that causal theories specify a constitutive relation between meaning and reliable causal connection.
Bridges next observes that causal theorists typically maintain that the putative fact that all “X”s are reliably caused by Xs is mediated by underlying mechanisms of one sort or another. So, “X”s might be reliably caused by dogs in part through the mediation of a person’s visual system or auditory system. One’s visual apparatus might causally connect particular patterns of color and luminance produced by dogs to “X”s. One might put the point somewhat differently by saying that a causal theorist’s hypothetical “Xs cause ‘X’s” law is not a basic or fundamental law of nature, but an implemented law.
Bridges’ third premise is a principle that he takes to be nearly self-evident, once understood. We can develop a better first-pass understanding of Bridges’ argument if, at the risk of distorting the argument, we consider a slightly simplified version of this principle:
(S) If it is a true constitutive claim that all fs are gs, then it’s not an implemented law that all fs are gs.
To illustrate the principle, suppose we say that gold is identical to the element with atomic number 79, so that all gold has atomic number 79. Then suppose one were to ask, “Why is it that all gold has the atomic number 79?” The answer would be, “Gold just is the element with atomic number 79.” This would be a constitutive explanation. According to (S), however, this constitutive explanation precludes giving a further mechanistic explanation of why gold has atomic number 79. There is no mechanism by which gold gets atomic number 79. Having atomic number 79 just is what makes gold gold.
So, here is the argument:
P1. It is a true constitutive claim that all “X”s with content X are reliably caused by Xs.
P2. If it is a true constitutive claim that all “X”s with content X are reliably caused by Xs, then it is not an implemented law that all “X”s with content X are reliably caused by Xs.
Therefore, by modus ponens on P1 and P2,
C1. It is not an implemented law that all “X”s with content X are reliably caused by Xs.
But, C1 contradicts the common assumption
P3. It is an implemented law that all “X”s with content X are reliably caused by Xs.[13]
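The logical skeleton of the argument can be displayed in a few lines; the following is a minimal propositional rendering in Lean 4, where `Constitutive` and `Implemented` are bare placeholder propositions standing in for the claims in P1 and P3 (a sketch of the argument’s form, not an analysis of its content):

```lean
-- Bridges' argument in propositional miniature: P2 applied to P1 yields C1,
-- which then contradicts P3. `Constitutive` and `Implemented` are placeholders.
example (Constitutive Implemented : Prop)
    (p1 : Constitutive)                  -- P1
    (p2 : Constitutive → ¬ Implemented)  -- P2
    (p3 : Implemented)                   -- P3
    : False :=
  (p2 p1) p3                             -- C1 := p2 p1; contradiction with P3
```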
Rupert (2008) challenges the first premise of Bridges’ argument on two scores. First, he notes that claims about constitutive natures have modal implications which at least some naturalistic philosophers have found objectionable. Second, he claims that natural scientists do not appeal to constitutive natures, so that one need not develop a theory of mental content that invokes them.
5. Concluding Remarks
Although philosophers and cognitive scientists frequently propose to dispense with (one or another sort of) mental representation (cf., e.g., Stich, 1983, Brooks, 1991, van Gelder, 1995, Haugeland, 1999, Johnson, 2007, Chemero, 2009), this is widely regarded as a revolutionary shift in thinking about minds. Short of taking on board such radical views, one will naturally want some explanation of how mental representations arise. In attempting such explanations, causal theories have been widely perceived to have numerous attractive features. If, for example, one use for mental representations is to help one keep track of events in the world, then some causal connection between mind and world makes sense. This attractiveness has been enough to motivate new causal theories (e.g. Rupert, 1999, Usher, 2001, and Ryder, 2004), despite the widespread recognition of serious challenges to an earlier generation of theories developed by Stampe, Dretske, Fodor, and others.
Bibliography
- Adams, F., 1979, “A Goal-State Theory of Function Attribution,” Canadian Journal of Philosophy, 9: 493–518.
- –––, 2003a, “Thoughts and their Contents: Naturalized Semantics,” in S. Stich and T. Warfield (eds.), The Blackwell Guide to Philosophy of Mind, Oxford: Basil Blackwell, pp. 143–171.
- –––, 2003b, “The Informational Turn in Philosophy,” Minds and Machines, 13: 471–501.
- Adams, F. and Aizawa, K., 1992, “‘X’ Means X: Semantics Fodor-Style,” Minds and Machines, 2: 175–183.
- –––, 1994a, “Fodorian Semantics,” in S. Stich and T. Warfield (eds.), Mental Representations, Oxford: Basil Blackwell, pp. 223–242.
- –––, 1994b, “‘X’ Means X: Fodor/Warfield Semantics,” Minds and Machines, 4: 215–231.
- Adams, F., Drebushenko, D., Fuller, G., and Stecker, R., 1990, “Narrow Content: Fodor’s Folly,” Mind & Language, 5: 213–229.
- Adams, F. and Dietrich, L., 2004, “What’s in a(n Empty) Name?,” Pacific Philosophical Quarterly, 85: 125–148.
- Adams, F. and Enc, B., 1988, “Not Quite by Accident,” Dialogue, 27: 287–297.
- Adams, F. and Stecker, R., 1994, “Vacuous Singular Terms,” Mind & Language, 9: 387–401.
- Agar, N., 1993, “What do frogs really believe?,” Australasian Journal of Philosophy, 71: 1–12.
- Aizawa, K., 1994, “Lloyd’s Dialectical Theory of Representation,” Mind & Language, 9: 1–24.
- Antony, L. and Levine, J., 1991, “The Nomic and the Robust,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics, Oxford: Basil Blackwell, pp. 1–16.
- Baker, L., 1989, “On a Causal Theory of Content,” Philosophical Perspectives, 3: 165–186.
- –––, 1991, “Has Content Been Naturalized?,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics, Oxford: Basil Blackwell, pp. 17–32.
- Bar-On, D., 1995, “‘Meaning’ Reconstructed: Grice and the Naturalizing of Semantics,” Pacific Philosophical Quarterly, 76: 83–116.
- Boghossian, P., 1991, “Naturalizing Content,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics, Oxford: Basil Blackwell, pp. 65–86.
- Bridges, J., 2006, “Does Informational Semantics Commit Euthyphro’s Fallacy?,” Noûs, 40: 522–547.
- Brooks, R., 1991, “Intelligence without Representation,”Artificial Intelligence, 47: 139–159.
- Buras, T., 2009, “An Argument against Causal Theories of Mental Content,” American Philosophical Quarterly, 46: 117–129.
- Cain, M. J., 1999, “Fodor’s Attempt to Naturalize Mental Content,” The Philosophical Quarterly, 49: 520–526.
- Chemero, A., 2009, Radical Embodied Cognitive Science, Cambridge, MA: The MIT Press.
- Cummins, R., 1989, Meaning and Mental Representation, Cambridge, MA: MIT/Bradford.
- –––, 1997, “The LOT of the Causal Theory of Mental Content,” Journal of Philosophy, 94: 535–542.
- Dennett, D., 1988, “Review of J. Fodor’s Psychosemantics,” Journal of Philosophy, 85: 384–389.
- Dretske, F., 1981, Knowledge and the Flow of Information, Cambridge, MA: MIT/Bradford Press.
- –––, 1983, “Precis of Knowledge and the Flow of Information,” Behavioral and Brain Sciences, 6: 55–63.
- –––, 1986, “Misrepresentation,” in R. Bogdan (ed.), Belief, Oxford: Oxford University Press, pp. 17–36.
- –––, 1988, Explaining Behavior: Reasons in a World of Causes, Cambridge, MA: MIT/Bradford.
- –––, 1999, Naturalizing the Mind, Cambridge, MA: MIT Press.
- Enç, B., 1982, “Intentional States of Mechanical Devices,” Mind, 91: 161–182.
- Enç, B. and Adams, F., 1998, “Functions and Goal-Directedness,” in C. Allen, M. Bekoff and G. Lauder (eds.), Nature’s Purposes, Cambridge, MA: MIT/Bradford, pp. 371–394.
- Fodor, J., 1984, “Semantics, Wisconsin Style,” Synthese, 59: 231–250. (Reprinted in Fodor, 1990a).
- –––, 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind, Cambridge, MA: MIT/Bradford.
- –––, 1990a, A Theory of Content and Other Essays, Cambridge, MA: MIT/Bradford Press.
- –––, 1990b, “Information and Representation,” in P. Hanson (ed.), Information, Language, and Cognition, Vancouver: University of British Columbia Press, pp. 175–190.
- –––, 1990c, “Psychosemantics or Where do Truth Conditions come from?,” in W. Lycan (ed.), Mind and Cognition, Oxford: Basil Blackwell, pp. 312–337.
- –––, 1991, “Replies,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics, Oxford: Basil Blackwell, pp. 255–319.
- –––, 1994, The Elm and the Expert, Cambridge, MA: MIT/Bradford.
- –––, 1998a, Concepts: Where Cognitive Science Went Wrong, Oxford: Oxford University Press.
- –––, 1998b, In Critical Condition: Polemical Essays on Cognitive Science and the Philosophy of Mind, Cambridge, MA: MIT/Bradford Press.
- Gibson, M., 1996, “Asymmetric Dependencies, Ideal Conditions, and Meaning,” Philosophical Psychology, 9: 235–259.
- Godfrey-Smith, P., 1989, “Misinformation,” Canadian Journal of Philosophy, 19: 533–550.
- –––, 1992, “Indication and Adaptation,” Synthese, 92: 283–312.
- Grice, H., 1989, Studies in the Way of Words, Cambridge: Harvard University Press.
- Haugeland, J., 1999, “Mind Embodied and Embedded,” in J. Haugeland, Having Thought, Cambridge, MA: Harvard University Press, pp. 207–237.
- Horgan, T. and Tienson, J., 2002, “The Intentionality of Phenomenology and the Phenomenology of Intentionality,” in D. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings, Oxford: Oxford University Press, pp. 520–533.
- Johnson, M., 2007, The Meaning of the Body: Aesthetics of Human Understanding, Chicago, IL: University of Chicago Press.
- Jones, T., Mulaire, E., and Stich, S., 1991, “Staving off Catastrophe: A Critical Notice of Jerry Fodor’s Psychosemantics,” Mind & Language, 6: 58–82.
- Lloyd, D., 1987, “Mental Representation from the Bottom up,” Synthese, 70: 23–78.
- –––, 1989, Simple Minds, Cambridge, MA: The MIT Press.
- Loar, B., 1991, “Can We Explain Intentionality?,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics, Oxford: Basil Blackwell, pp. 119–135.
- Loewer, B., 1987, “From Information to Intentionality,” Synthese, 70: 287–317.
- Maloney, C., 1990, “Mental Representation,” Philosophy of Science, 57: 445–458.
- Maloney, J., 1994, “Content: Covariation, Control and Contingency,” Synthese, 100: 241–290.
- Manfredi, P. and Summerfield, D., 1992, “Robustness without Asymmetry: A Flaw in Fodor’s Theory of Content,” Philosophical Studies, 66: 261–283.
- McLaughlin, B. P., 1991, “Belief individuation and Dretske on naturalizing content,” in B. P. McLaughlin (ed.), Dretske and His Critics, Oxford: Basil Blackwell, pp. 157–79.
- –––, 2016, “The Skewed View From Here: Normal Geometrical Misperception,” Philosophical Topics, 44: 231–99.
- Mendelovici, A., 2013, “Reliable misrepresentation and tracking theories of mental representation,” Philosophical Studies, 165: 421–443.
- –––, 2016, “Why tracking theories should allow for clean cases of reliable misrepresentation,” Disputatio, 8: 57–92.
- Millikan, R., 1984, Language, Thought and Other Biological Categories, Cambridge, MA: MIT Press.
- –––, 1989, “Biosemantics,” Journal of Philosophy, 86: 281–97.
- –––, 2001, “What Has Natural Information to Do with Intentional Representation?,” in D. M. Walsh (ed.), Naturalism, Evolution and Mind, Cambridge: Cambridge University Press, pp. 105–125.
- Neander, K., 1995, “Misrepresenting and Malfunctioning,” Philosophical Studies, 79: 109–141.
- –––, 1996, “Dretske’s Innate Modesty,” Australasian Journal of Philosophy, 74: 258–274.
- Papineau, D., 1984, “Representation and Explanation,” Philosophy of Science, 51: 550–72.
- –––, 1998, “Teleosemantics and Indeterminacy,” Australasian Journal of Philosophy, 76: 1–14.
- Pineda, D., 1998, “Information and Content,” Philosophical Issues, 9: 381–387.
- Possin, K., 1988, “Sticky Problems with Stampe on Representations,” Australasian Journal of Philosophy, 66: 75–82.
- Price, C., 1998, “Determinate functions,” Noûs, 32: 54–75.
- Rupert, R., 1999, “The Best Test Theory of Extension: First Principle(s),” Mind & Language, 14: 321–355.
- –––, 2001, “Coining Terms in the Language of Thought: Innateness, Emergence, and the Lot of Cummins’s Argument against the Causal Theory of Mental Content,” Journal of Philosophy, 98: 499–530.
- –––, 2008, “Causal Theories of Mental Content,” Philosophy Compass, 3: 353–80.
- Ryder, D., 2004, “SINBAD Neurosemantics: A Theory of Mental Representation,” Mind & Language, 19: 211–240.
- Skyrms, B., 2008, “Signals,” Philosophy of Science, 75: 489–500.
- –––, 2010a, Signals: Evolution, Learning, and Information, Oxford: Oxford University Press
- –––, 2010b, “The flow of information in signaling games,” Philosophical Studies, 147: 155–65.
- –––, 2012, “Learning to signal with probe and adjust,” Episteme, 9: 139–50.
- Stampe, D., 1975, “Show and Tell,” in B. Freed, A. Marras, and P. Maynard (eds.), Forms of Representation, Amsterdam: North-Holland, pp. 221–245.
- –––, 1977, “Toward a Causal Theory of Linguistic Representation,” in P. French, H. K. Wettstein, and T. E. Uehling (eds.), Midwest Studies in Philosophy, vol. 2, Minneapolis: University of Minnesota Press, pp. 42–63.
- –––, 1986, “Verification and a Causal Account of Meaning,” Synthese, 69: 107–137.
- –––, 1990, “Content, Context, and Explanation,” in E. Villanueva (ed.), Information, Semantics, and Epistemology, Oxford: Blackwell, pp. 134–152.
- Stegmann, U. E., 2005, “John Maynard Smith’s notion of animal signals,” Biology and Philosophy, 20: 1011–25.
- –––, 2009, “A consumer-based teleosemantics for animal signals,” Philosophy of Science, 76: 864–75.
- Sterelny, K., 1990, The Representational Theory of Mind, Oxford: Blackwell.
- Stich, S., 1983, From Folk Psychology to Cognitive Science, Cambridge, MA: The MIT Press.
- Sturdee, D., 1997, “The Semantic Shuffle: Shifting Emphasis in Dretske’s Account of Representational Content,” Erkenntnis, 47: 89–103.
- Tye, M., 1995, Ten Problems of Consciousness: A Representational Theory of Mind, Cambridge, MA: MIT Press.
- Usher, M., 2001, “A Statistical Referential Theory of Content: Using Information Theory to Account for Misrepresentation,” Mind and Language, 16: 311–334.
- –––, 2004, “Comment on Ryder’s SINBAD Neurosemantics: Is Teleofunction Isomorphism the Way to Understand Representations?,” Mind and Language, 19: 241–248.
- Van Gelder, T., 1995, “What Might Cognition Be, If not Computation?,” The Journal of Philosophy, 92: 345–381.
- Wallis, C., 1994, “Representation and the Imperfect Ideal,” Philosophy of Science, 61: 407–428.
- –––, 1995, “Asymmetrical Dependence, Representation, and Cognitive Science,” The Southern Journal of Philosophy, 33: 373–401.
- Warfield, T., 1994, “Fodorian Semantics: A Reply to Adams and Aizawa,” Minds and Machines, 4: 205–214.
- Wright, L., 1973, “Functions,” Philosophical Review, 82: 139–168.
Other Internet Resources
- Fodor’s Asymmetrical Causal Dependency Theory of Meaning, entry in the Field Guide to the Philosophy of Mind.
- Teleological Theories of Mental Content, by Ruth Millikan.