Teleological Theories of Mental Content

First published Fri Jun 18, 2004; substantive revision Tue Jan 3, 2012

Teleological theories of mental content try to explain the contents of mental representations by appealing to a teleological notion of function. Take, for example, the thought that blossoms are forming. On a representational theory of thought, this thought involves a representation of blossoms forming. A theory of content aims among other things to tell us why this representation has that content; it aims to say why it is a thought about blossoms forming rather than about the sun shining or pigs flying or nothing at all. In general, a theory of content tries to say why a mental representation counts as representing what it represents.

According to teleological theories of content, what a representation represents depends on the functions of the systems that produce or use the representation. The relevant notion of function is said to be the one that is used in biology and neurobiology in attributing functions to components of organisms (as in “the function of the pineal gland is secreting melatonin” and “the function of brain area MT is processing information about motion”). Proponents of teleological theories of content generally understand such functions to be what the thing with the function was selected for, either by ordinary natural selection or by some other natural process of selection.

1. Broad Aims

Many (perhaps all) mental states are about things or are directed on to things in the way that a belief that spring is coming is about spring coming or in the way that a desire for chocolate is directed on to chocolate. The philosopher Franz Brentano (1838–1917) spoke of such mental states as involving presentations of the objects of our thoughts. The idea was that we couldn't desire chocolate unless chocolate was in some way presented to our minds. Nowadays, we would say that chocolate must be represented in our minds if it is chocolate that we desire. Teleological theories of content, like other theories of mental content, attempt to solve what is often referred to as Brentano's problem: the problem of explaining intentionality, explaining how mental states can be about things or be directed on to things in this way.

One version of the problem, often attributed to Brentano but perhaps more correctly attributed to Roderick Chisholm (1957), concerns thoughts about non-existent objects. Chisholm argued that the aboutness (or intentionality) of mental states cannot be a physical relation between a mental state and what it is about (its object) because in a physical relation each of the relata must exist, whereas the objects of mental states might not exist. If Andrew kisses Kate, both Andrew and Kate must exist, and if the sun shines on a garden, both the sun and the garden must exist. In contrast, Billy can love Santa and search for unicorns even if Santa does not exist and there are no unicorns.

Chisholm concluded that it is hard to see how intentionality can be a physical phenomenon, but those who offer teleological theories almost always adopt a physicalist framework to try to explain how intentionality is possible. They aim at what is often called a “naturalistic theory”: “naturalistic” because the aim is to give a theory that is consistent with the claim that the fundamental furniture of the universe is nothing but what the natural sciences describe. Within that framework, it is a working hypothesis that intentionality is not ontologically fundamental, so most teleological theories try to show that intentionality is part of the natural world by showing how it can be understood in terms of other natural things. In effect, those who propose teleological theories of mental content try to say why a mental representation, R, represents what it represents, C, by filling in the blank in, “R has the content C because (in virtue of) _______”, without making ineliminable use of intentional terms.

A naturalistic theory of content also needs to account for the normative nature of mental representation. Content is said to be normative because it legitimates certain evaluations. We evaluate beliefs as true or false, memories as accurate or inaccurate, perceptions as veridical or illusory, and so on. We also evaluate desires as satisfied or not satisfied and motor instructions as correctly or incorrectly executed. Content that is normative is sometimes described as truth-evaluable. Representational states count as true or false (etc.) by virtue of their content as well as the state of the world. For example, the truth of my belief that today is sunny depends on whether it is sunny but it also depends on its being a belief that today is sunny. If the content of the belief were different (e.g., if it were the belief that today is hot) its truth value might be different. The normative nature of content poses a problem for naturalistic theories but those who propose teleological theories of mental content think that this problem is tractable.

Much attention is paid to the possibility of misrepresentation. This is because the distinction between correct and incorrect representation is often regarded as a central normative distinction and because a capacity to misrepresent is often thought to be essential for representing: no possibility of misrepresentation, no representing. Consider a mental representation of a cat. If it is to have the content cat, so that all and only cats are in its extension, it must be that if it were used to label a non-cat (e.g., a dog) it would count as misrepresenting it. However, there may be exceptions to the general rule that all representations can misrepresent (e.g., a representation that has the content something or nothing). Misrepresentation is also not possible in every kind of mental context (e.g., in dreaming and, perhaps, desiring).

The possibility of misrepresentation also connects with Chisholm's concern with non-existent objects because a capacity to misrepresent amounts to a basic capacity to represent non-existent objects. Imagine a simple detection device that normally goes into a RED-state in response to red. If the RED-state has the content there is red then, if RED could be tokened sometimes when nothing red is present, a token RED could represent a non-existent instantiation of red. There is more to explaining our capacity to represent non-existent objects than explaining how misrepresentation is possible but explaining how misrepresentation is possible is a start.

Misrepresentation makes it clear that representing is often a three-place relation. Suppose, for example, that I see some crumpled newspaper blown by the wind as a cat slinking down the street. There are at least three things involved. First, there is the representation (or representational vehicle) that has the content. In us, it is presumably some sort of neurological state or event. Here, such mental representations are denoted by capitalized English expressions (e.g., CAT). Second, there is the thing that the representation is aimed at representing, which in this case is the newspaper. Cummins (1996) calls this the target of the representation. And third, there is the content of the representation. Since I represent the newspaper as a cat, the content of the representation in this case is cat. Misrepresentation has occurred in this case because the target of the representation is not in the extension of the representation; the newspaper is not a cat.

We can ask questions about each of these three places in the representation relation. First is the question of representational status: Why does CAT count as a representation? Or, more generally, what is the difference between natural states that are representational states and natural states that are not? Second is the question of target determination: What makes it the case that this token of CAT has the newspaper as its target? Or, more generally, what makes anything the target of any given representation? Third is the question of content determination: What makes it the case that CAT has the content cat? Or, more generally, in virtue of what does any representation have the content that it has? Teleological theories of mental content are primarily concerned with content determination, but a complete solution to Brentano's problem will need to give answers to all three.

A distinction is sometimes made between representation of and representation as. Whether or not teleological theories of content are concerned with representation as or of depends on how those locutions are used. In one sense, referring back to the previous example, my CAT-representation represents the newspaper as a cat, although it is a representation of the newspaper. On this way of speaking, teleological theories of content are theories of representation as. However, the words “as” and “of” are not always diagnostic of the contents/targets distinction. For example, we can also say that I used a representation of a cat to represent the newspaper.

The teleological theories that are currently on offer are generally theories of referential content (not theories of cognitive content or mode of presentation). Many philosophers would agree that referential content, which is normative in the aforementioned sense, is not narrow content. By definition, two individuals who are physical replicas at time t “from the skin in” must have the same narrow-content states at t. Proponents of teleological theories do not believe that referential content is narrow. This view is also shared by other philosophers who think that referential content supervenes (in part) on things that are external to individual thinkers, such as on features of their social and physical environment and/or their history (for the kinds of reasons raised by Putnam (1975) and Burge (1979, 1986)). In general, the proponents of teleological theories of content have shown little interest in the notion of narrow content, since they tend to reject the claim that cognitive science should restrict itself to using narrow notions. Still, a teleological theory of mental content could be combined with the view that cognitive science needs a narrow notion of content. A teleological theory of content tries to explain the nature of psycho-semantic norms (i.e., semantic norms insofar as they apply to mental representations). It is to some extent a separate question whether such norms play a role in cognitive science and whether a narrow notion is needed instead or in addition.

A further point about broad aims is that teleological theories of mental content are not usually intended as theories about how we grasp meanings or are conscious of them. To grasp a meaning is plausibly a sophisticated intentional state that involves representations of meanings and not just representations with meanings. To understand how we grasp meanings, we might turn to psychological theories of concept possession and introspective access to conceptual structures. Such theories presuppose that there are representations with content, whereas teleological theories of mental content try to explain the nature of intentionality at its most fundamental; they aim to say how we can, to begin with, have any representations with content.

A final point about broad aims is that teleological theories of mental content are usually intended as real nature theories. These theories do not try to describe the criteria that we use in everyday life to identify the beliefs and desires of people, the criteria used in folk psychological intentional ascriptions (though Price (2001) is an exception). Those who offer real nature theories of mental content think that our everyday ability to recognize intentional states does not make us experts on the fundamental nature of intentional states, any more than our everyday ability to recognize water makes us experts on the fundamental nature of water. The idea is that we can recognize instances of a kind on the basis of the superficial appearances of things of the kind, while remaining ignorant of their essential nature. So, most teleological theories of mental content do not entail that, if Bill thinks that Mavis knows that today is Tuesday, then Bill must be thinking about the teleological functions of Mavis's representation producing or using systems.

2. Teleological Functions

As noted in the previous section, a crucial feature of content is that it legitimates semantic evaluations. While teleological theories of mental content come in a variety of forms, they all share the idea that the norms that underwrite these evaluations depend, in part at least, on functions. The next section explains various ideas about the nature of this dependence. This section describes the notion of function that is employed. It is generally thought to be in some sense teleological and normative but both “teleological” and “normative” need qualifying. Let's take the first term first.

Talk of biological functions often has a teleological flavor. For example, when we say that it is the function of the heart to pump blood this seems equivalent to saying that hearts are for pumping blood or that hearts are there in order to pump blood. There is a closely related concept of an artifact's function that is purposive: for example, when we say that moving the cursor is the function of the computer's mouse or trackpad we seem to mean that this is what it is for, that it is there in order to do this, that this is what its designers designed it to do. Along analogous lines, when biologists say that pumping blood is the heart's function, they seem to mean that hearts were selected for, adapted for and in that sense designed for pumping blood. In the latter case, however, the selection is “natural” or, better, it is a non-intentional process.

Some who favor teleological theories of mental content claim that Mother Nature is intentional or purposive. In the case of Millikan (2002), it is unclear whether there is a genuine as opposed to terminological disagreement with the substance of the preceding paragraph. The transition from metaphor to dead metaphor to literal use of such terms as “design” and “purpose” is a matter of degree and Millikan seems to use “function” and “biological purpose” as synonyms. However, Dennett's (1988) claim is that there is no mind-independent determinate fact of the matter about meanings or functions and that the functions of artifacts, the functions of biological systems and the contents of the thoughts of people are all dependent on interpretation, on our adopting either the design stance or the intentional stance toward them. In Dennett's view, nature leaves functions and meanings similarly indeterminate.

There are some who would prefer to reserve the term “teleological” for genuinely purposive contexts in the most literal sense of “purposive” and to refer to biological functions as “teleonomic.” But, on a broader construal of what it means for a concept to be teleological, a concept might be counted as teleological if it concerns what something is for, and the notion of what something was selected for counts as teleological in that sense. This is the sense of the term “teleological” used in this entry.

Intuitively, the relevant concept of function seems to be normative as well, for biologists routinely talk about systems functioning normally or properly, as well as about malfunctioning, dysfunction, functional impairment and so on. Those who offer teleological theories of mental content agree that the relevant notion of function permits the possibility of malfunction; it allows that a token trait could have a function to do Z even if it lacks the disposition to do Z. For example, Joe's pineal gland could have the function to secrete melatonin even if it cannot secrete melatonin because it is malfunctioning. Whether it is appropriate to describe this as “normative” is more controversial, but among those offering teleological theories of mental content the disagreement is more terminological than substantial, since all that those who describe the notion as normative usually mean is that it permits the possibility of malfunction.

Some prefer to reserve the term “normative” for prescriptive contexts. On that way of speaking, a statement would count as normative only if it entails an ought-claim without the addition of further premises. Proponents of teleological theories of mental content can agree that no ought-claim follows from a function ascription without the addition of further premises (for discussion, see Jacob (2001)) and that functions are not prescriptive. Different practices dominate different discourses but talk of purely descriptive “norms” is well established in some contexts (e.g., in talk of statistical norms). If either psycho-semantic or functional norms were prescriptive, the attempt to naturalize them would seem to ignore Hume's warning against trying to derive ought-statements from is-statements, but those who offer teleological theories of mental content claim that the norms of both functions and content fall on the is side of any is/ought divide.

Those who favor teleological theories of content usually favor an etiological theory of functions, according to which an item's function is determined by its history of selection or by past selection of things of that type. Roughly, on an etiological theory of functions, an item's function is what it was selected for, or what things of that type were selected for.

Wright (1973, 1976) offered the first developed defense of an etiological theory of functions by a philosopher but earlier expressions of the idea can be found in the writings of some biologists (e.g., Ayala 1970). Wright's (1976, p.81) proposed analysis is as follows.

The function of X is Z if and only if,
  1. Z is a consequence (result) of X's being there,
  2. X is there because it does (results in) Z.

Wright intended this formula to work for a wide variety of function ascriptions; for artifacts as well as the parts of organisms, and for function ascriptions in Creationist as well as Evolutionary biology. For this reason, he intended the “does” of the second requirement to be tenseless. Thus the second requirement is intended to be read as requiring that X be there because it does, did or will do Z. The precise tense would depend on the nature of the relevant consequence etiology, the causal history that explains X being there because of its effect, Z.

The details of this formula are often regarded as problematic. For instance, it is unclear how it renders malfunction possible, given the first requirement. The second requirement is also too loose to capture only function-conferring consequence etiologies. As Wright (2010) concedes, if a man's gripping a pole were to cause electricity to run through him and this prevented him from letting go of the pole, then his original analysis entails that the function of the man's holding on to the pole is to allow electricity to run through him. In light of this problem, Wright has amended his analysis by proposing that what is needed is a particular kind of consequence etiology, a “virtue etiology” in which the consequence implicated in the etiology must be a “virtue”. In the case of functions derived from natural selection, he views a “virtuous” consequence as an adaptive consequence.

Others who offer etiological theories of function drop Wright's first requirement and speak more specifically of selection as the background process that is responsible for the consequence etiology. On this view, a/the function of an item is (roughly) what it (or items of the type) were selected for. Millikan (1989a) and Neander (1983, 1991) argue that the biological notion of function can be precisified by background theory and that their analyses need not be ordinary language analyses. Neander (1991, p. 74), for instance, gives the following definition for functions in physiology:

It is a/the proper function of an item (X) of an organism (O) to do that which items of X's type did to contribute to the inclusive fitness of O's ancestors, and which caused the genotype, of which X is the phenotypic expression, to be selected by natural selection.

Etiological theories of biological function need to allow for the fact that ancestral traits might have been selected for something other than the present function of descendent traits. For example, a penguin's flippers and an emu's vestigial wings no longer have the function of flight, even though ancestral forelimbs were selected for flight. Griffiths (1993) and Godfrey-Smith (1994) offer “modern history” versions of the etiological theory, according to which functions are determined by recent selection. Note that selection does not cease when traits “go to fixation” if on-going maintenance selection is still weeding out fresh harmful mutations as they arise. However, selection does require some variation and Schwartz (1999) suggests that a continuing usefulness supplement is needed, which kicks in if variation is absent for a time. In the absence of any variation, the trait retains its function if it is still adaptive.

A further issue is whether the etiological theory is circular (see e.g., Nanay (2011)). The worry is that, if a trait token is typed by its function and if a trait's function depends on the selection history that pertains to the relevant type, the analysis is circular. Neander and Rosenberg (forthcoming) respond that the function of a trait and its function-specific type co-supervene on the history of selection and that there is only a superficial appearance of circularity. To figure out if token trait X has the function to Z, they say, first identify the lineage of traits to which X belongs; a lineage of traits connects ancestral and descendent traits by the mechanisms responsible for inheritance. Then segment the lineage at those places where selection for Z stops and starts. X has the function to Z only if there was selection for X-ing in X's segment of the lineage. This procedure does not presuppose prior knowledge of X's function or prior knowledge of X's membership in a function-specific classification of traits. This is also an alternative proposal for handling vestigiality.

To play a role in a naturalistic account of mental content, the relevant selection process must be non-intentional but it need not be natural selection operating over an evolutionary span of time. Millikan (1984) offers an etiological theory of functions on which functions can also result from meme selection. Papineau (1984) speaks of learning and Dretske (1986) invokes functions that depend on recruitment by conditioning. Garson (2011) argues that the notion of selection should be loosened so that differential retention without differential replication could count as selection, in which case neural selection could count as a form of selection that could underwrite the functions that underwrite content. While the contents of sensory-perceptual representations might be determined by the functions that derive from natural selection operating over an evolutionary span of time, the role of learning in concept acquisition suggests that other kinds of functions that derive from other kinds of selection might be needed for the contents of learned concepts. There is, though, no established agreement about how best to more broadly define the relevant class of functions.

While etiological theories dominate the discussion of normative functions in philosophy of biology, the etiological theory is not uncontroversial. Some question whether teleology can be naturalized (e.g., Bedau (1991)). Others support other theories for other reasons. Perhaps the systemic theory is the most popular alternative (see esp. Cummins (1975)). Systemic theories of function emphasize the role of function ascriptions in functional analyses of systems. Functional analyses of systems conceptually decompose complex activities of whole systems into the activities of their contributing parts. The function of a part is its contribution to the complex activity of the system that is under analysis. Proponents of the etiological theory have no objection to the idea that biologists give functional analyses of systems but contend that the systemic analysis, on its own, fails to naturalize the normativity of functions, or at least fails to do so successfully. Some who support a systemic theory argue that biology has no need for a naturalistic notion of malfunction (e.g., Davies, 2001), while others argue that abnormal functioning is statistically atypical (Boorse (2002), Craver (2001), Lewens (2004)). (Readers who would like to read more on this and other theories of function could turn to several volumes of readings that have appeared: see esp. Allen, Bekoff & Lauder (1998), Buller (1999) and Ariew, Cummins & Perlman (2002).)

It is usual to note that etiological (teleological) functions are distinct from the causal-role functions involved in what is usually called “functionalism” in philosophy of mind. Causal-role functions are often defined as a select subset of a trait's actual causal dispositions, and functionalism is often defined as the view that mental states are individuated or classified into types on the basis of such dispositions (see, e.g., Block (1984)). If causal-role functions are a subset of dispositions actually possessed by token traits then they do not permit the possibility of malfunction because a trait cannot have the causal-role function to Z and at the same time lack the disposition to Z.

That said, the distinction between functionalism and what might be termed “teleo-functionalism” is less stark than might be thought. One reason is that formulations of classical functionalism often spoke of the characteristic or normal causal roles of mental states. Sometimes this was explicitly to allow for pathology (see, e.g., Lewis 1980). Another reason is that, although teleological functions are often said to be selected effects or effects for which traits were selected, such functions can also be described as selected dispositions or dispositions for which traits were selected. Both forms of functionalism also permit multiple physical realizability of traits that perform the same functions.

3. Teleosemantic Theories

What all teleological (or “teleosemantic”) theories of mental content have in common is the idea that psycho-semantic norms are ultimately derivable from functional norms. Beyond saying this, it is hard to give a neat definition of the group of theories that qualify.

Consider, for instance, some theories that are clearly intended as alternatives to teleosemantics, such as Fodor's (1990b) asymmetric dependency theory or theories that appeal to convergence under ideal epistemic conditions (see Rey 1997 for an outline). Elaboration of these theories is beyond the scope of this entry but we can note that they both seem to need a notion of normal or proper functioning. Fodor's theory adverts to the “intact” perceiver and thinker. Presumably this is someone whose perceptual and cognitive systems are functioning properly (this is covered under the ceteris paribus part of the laws to which Fodor's theory refers). The idea of convergence under ideal epistemic conditions also involves a notion of normal functioning, for epistemic conditions are not ideal if perceivers and thinkers are abnormal in certain respects, such as if they are blind or psychotic. If normal or proper functioning is analyzed in terms of an etiological theory, which says that a system functions normally or properly only if all of its parts possess the dispositions for which they were selected, then these theories would qualify as teleological theories of mental content under the characterization provided in the first paragraph of this section. Those who propose these theories might reject an etiological theory of functions, but they need some analysis of normal or proper functioning. There could anyway be etiological or teleological versions of theories of this sort.

An appeal to teleological functions can also be combined with a variety of other ideas about how content is determined. For example, there can be both isomorphic and informational versions of teleosemantics. In the former case, the proposal might be that the relevant isomorphism is one that cognitive systems were adapted to exploit. An alternative idea is that the isomorphism does not need to be specified given that the targets of representations are determined by teleological functions. This appears to be the view of Cummins (1996, see esp. p.120) although Cummins is generally critical of teleological functions in biology. A teleological version of an informational theory is given when content is said to depend on information carrying, storing or processing functions of mechanisms. The relevant notion of information is variously defined but (roughly speaking) a type of state (event, etc.) is said to carry natural information about some other state (event, etc.) when it is caused by it or corresponds to it.

It is sometimes said that the role of functions in a teleological theory of content is to explain how error is possible, rather than to explain how content is determined, but the two go hand in hand. To see this, it helps to start with the crude causal theory of content and to see how the problem of error arises for it. According to the crude causal theory, a mental representation represents whatever causes representations of the type; Rs represent Cs if and only if Cs cause Rs. One problem with this simple proposal is its failure to provide for the possibility of misrepresentation, as Fodor (1987, 101–104) points out. To see the problem, recall the occasion on which crumpled paper is seen as a cat. The crude causal theory does not permit this characterization of the event because, if crumpled paper caused a tokening of CAT then crumpled paper is in the extension of CAT, according to the crude causal theory. Since cats also sometimes cause CATs, cats are in the extension too. However, the problem is that crumpled paper is included in the extension as soon as it causes a CAT to be tokened and so, on this theory, there is no logical space for the possibility of error since candidate errors are transformed into non-errors by their very occurrence. Note that the problem is simultaneously one of ruling in the right causes without also ruling in the wrong ones. CAT cannot have the content cat unless non-cats (including crumpled paper) are excluded from its content. So explaining how content is determined and how the possibility of error is accommodated are not separate tasks.

The error problem is an aspect of what (after Fodor) is often called “the disjunction problem.” With respect to the crude causal theory, the name applies because the theory entails disjunctive contents when it should not. For example, it entails that CATs have the content cats or crumpled paper in the case just considered. The disjunction problem is larger than the problem of error, however, because it is not only in cases of error that mental representations are caused by things that are not in their extensions (Fodor, 1990c). Suppose, for example, that Mick's talking about his childhood pet dog reminds Scott of his childhood pet cat. In this case no misrepresentation is involved but the crude causal theory again entails inappropriate disjunctive contents. Now it entails that Scott's CATs have a content along the lines of cats or talk of pet dogs. This last aspect of the disjunction problem might be called the problem of representation in absentia: how do we explain our capacity to think about absent things? How do mental representations retain or obtain their contents outside of perceptual contexts?

Asking how to alter the crude causal theory to allow for error is one place to begin looking for a more adequate proposal. One approach would be to try to describe certain situations in which only the right causes can produce the representation in question and to maintain that the content of the representation is whatever can cause the representation in such situations. This is sometimes referred to as a “type 1 theory.” A type 1 theory distinguishes between two types of situations, ones in which only the right causes can cause a representation and ones in which other things can too. A type 1 theory says that the first type of situation is content-determining. A type 1 teleological theory might state, for example, that the content of a perceptual representation is whatever can cause it when the perceptual system is performing its proper function, or when conditions are optimal for the proper performance of its function. The content of representations in abstract thought, it might then be proposed, is derived from their role in perception. Not all teleological theories of content are type 1 theories, however. The theory described in the next section is arguably a variant of a type 1 theory but some of the theories described in later sections are not.
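Purely as an illustrative schema (the letters are introduced here for convenience and are not drawn from any particular author's formulation), a type 1 teleological theory might be displayed as follows:

  R has the content C if and only if Cs are what can cause tokenings of R in situations of type S,

where S is specified non-intentionally, for example as the situation in which the system that produces R is performing its proper function, or in which conditions are optimal for the proper performance of that function.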

The following sub-sections describe some key differences among teleological theories. It is not possible to describe all extant theories but some different approaches are sketched, along with a brief review of some of their strengths and weaknesses. General objections to teleological theories are discussed later, in section 4.

3.1 Indicator Semantics

Stampe (1977) was one of the first philosophers in modern times to suggest a theory of content according to which content is a matter of reliable causes. Dretske's book, Knowledge and the Flow of Information (1981), has also been very influential. The theory Dretske develops in that book is not a teleological theory of mental content but Dretske (1986, 1988, 1991) later offers a teleo-functional version of indicator-semantics. He begins with a notion of information-carrying, which he calls “indicating”, and suggests that a representation's content is what it has the function to indicate.

Dretske (1981) provides the most careful analysis of the indication relation and he often refers back to this in his later work. However, it adverts to background knowledge and, since knowledge is intentional, this aspect of it is omitted in his theory of content, at least as it applies to the simplest kinds of mental representations. The analysis of indication on which his theory of content relies is as follows: an event of type R indicates that a state of affairs of type C is the case if and only if the probability of C, given that R is instanced, is one (assuming that certain background or “channel conditions” obtain).
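Schematically, and letting “K” serve here merely as a label for the assumed background or channel conditions, the analysis can be displayed as follows:

  An event of type R indicates that C is the case if and only if Prob(C | R & K) = 1,

that is, if and only if the probability that C obtains, given that R is instanced and that the channel conditions obtain, is one.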

Although indication is often underwritten by a causal regularity such that Cs cause Rs, Dretske tells us that it is not a requirement that Cs cause Rs. Cs and Rs might have a common cause, for instance. He also tells us that it need not be a law that if R then C, though it cannot be merely coincidental. For local reasons, it could be that if there is an R then there is always a C. One of his examples is of doorbell ringings: if there is someone ringing the doorbell whenever the doorbell rings in his neighborhood, its ringing indicates that someone is at the door. But if squirrels start to ring doorbells because people start making them out of nuts, a ringing doorbell will no longer indicate that someone is at the door.

Dretske points out that representation is not equivalent to indication. “R indicates C” entails “if R then C.” But, since misrepresentation is possible, it must be the case that “R represents C” does not entail “if R then C.” (At any rate, “R indicates C” entails “if R then C” if the relevant channel conditions obtain.) So Dretske (1986) suggests that perceptual representations have the function of indicating. The starting idea is this: if something has the function of indicating something else then it is supposed to indicate it but, since items don't always perform their functions, room for error has been made. Dretske appears to rely on an etiological analysis of functions (see e.g., Dretske 1995, p. 7). He speaks of states acquiring a function to indicate by being selected or recruited for indicating. Roughly, Dretske suggests that Rs represent Cs iff Rs were recruited for indicating Cs and for causing a bodily movement, M.

Dretske (1995, p. 2) says, “[t]he fundamental idea is that a system, S, represents a property, F, if and only if S has the function of indicating (providing information about) the F of a certain domain of objects. The way S performs its function (when it performs it) is by occupying different states s1, s2, … sn corresponding to the different determinate values f1, f2, … fn, of F.” For example, part of the visual system might represent the orientation of lines in a region of the visual field. If so, it does so because it has the function of carrying information about the orientation of lines in that region and it performs this function (when it performs it) by entering into different states when different orientations of lines are present in that region.

This account of representation seems to make room for error, because it implies that representations need only indicate their contents during recruitment or in the environment and given the channel conditions in which recruitment took place, with error being possible after that time or in other environments or circumstances. However, Dretske (1986) sees a problem with this suggestion. He illustrates the problem with the case of ocean-dwelling anaerobic bacteria that have tiny magnets (magnetosomes) that are attracted to magnetic north, which serve to direct the bacteria downwards into the relatively oxygen-free sediment on the ocean floor. Plausibly, the function of the magnetosomes is to direct the bacteria to anaerobic conditions. If we “fool” the bacteria by holding a bar magnet nearby and lead the bacteria upward to their death, this looks like a case of natural misrepresentation. We were, in Dretske's words, looking for “nature's way of making a mistake” and we seem to have found it. The problem, says Dretske, is that it is indeterminate how we should describe the function of the magnetosomes. We can plausibly say that they have the function of indicating the oxygen-free sediment. But we can also plausibly say that they have the function of indicating geo-magnetic or even local magnetic north. If we say the latter, no misrepresentation has occurred. So Dretske's interim conclusion is that we cannot count this as an unambiguous case of error, on his theory as outlined so far.

A number of distinct problems go under the name of “the functional indeterminacy problem” (section 4.1) and the magnetosome example can be used to illustrate several of them. However, Dretske's response to the indeterminacy problem that he raised suggests that his main concern was with what is known as the problem of distal content. His problem, then, is this. Suppose that we have a simple system that has just one way of detecting the presence of some feature of the environment. We have just seen a case of this, for the anaerobic bacteria have just one way of detecting anaerobic conditions (via the local magnetic field). In such a case, if an inner state indicates the distal feature (anaerobic conditions) it will also indicate the more proximal feature (local magnetic north). Moreover, if there was selection for indicating the distal feature, there will also have been selection for indicating the more proximal feature (since it is by indicating the latter that it indicates the former). Dretske further points out that, even if a creature has several routes by which it can detect a given distal feature (e.g., even if the bacteria can detect anaerobic conditions by means of light sensors as well) there would still be a disjunction of more proximal features that the representation could count as representing, since it could still count as having the function of indicating the disjunction of more proximal features (i.e., local magnetic north or reduced light).

While we might be perfectly willing to allow that the magnetosomes in anaerobic bacteria do not represent or misrepresent, the problem of distal content generalizes. When you see a chair across the room as a chair across the room, you represent it as a solid 3D object at a distance from you and not as a stream of light reflected from it or as a pattern of firings in your retinas. Otherwise you would not try to walk to the chair and sit on it. An informational theory of content must therefore explain how mental representations represent distal features of the world, as opposed to the more proximal items that carry information about those distal features to the representations that represent them.

Dretske (1986) therefore modifies his proposal and maintains that a creature that is capable of representing determinate content must be capable of learning any number of new epistemic routes to the same distal feature. In that case, he says, there is no closed disjunction of more proximal stimuli that the representation could count as representing. He speaks of conditioning in this context. The relevant representation is recruited by conditioning to indicate the distal feature rather than the disjunction of more proximal features, because there is no finite time-invariant disjunction of more proximal stimuli that it has the function of indicating. Loewer (1987) points out that conditioning ends at death, at which point no further epistemic routes can be acquired. So, at the death of a creature, there will be a closed disjunction of proximal features that each of the creature's representations was recruited to indicate. (Loewer comments that Dretske might appeal to epistemic routes that could possibly be acquired by a creature but is unsure if this succeeds.)

In any case, the claim that misrepresentation is impossible without learning seems problematic, since it seems to preclude representations produced by innate input systems, such as innate sensory-perceptual systems. Some psychologists also claim that some core concepts are innate (e.g., see Carey 2009). Later, Dretske (1988) drops his conditioning requirement insofar as it is a requirement on content possession but he keeps it as a requirement for the kind of content that can explain behavior. (For discussion of Dretske's account of the causal efficacy of content, see the essays in McLaughlin (1991).) This re-raises the question of how representations produced by innate input analyzers have distal content.

Dretske's strict characterization of indication is thought by some to be troublesome. One reason is that there can be no non-intentional process of selection for something to do Z unless that thing, or things of that type at least, did do Z. Hearts cannot be selected for pumping blood by natural selection unless some hearts pump blood. Similarly, no mechanism can be selected for producing Rs because they indicate Cs unless some Rs indicate Cs. However, all Rs must indicate Cs in a region of space-time if any are to do so, given the strict characterization of indication (for if Rs indicate Cs in that region, then in that region it must be the case that C being the case, given an R-tokening, has a probability of one). Hence, where and while recruitment continues, Rs cannot occur without Cs. Fodor (1990b) questions whether this requirement would be met or met often enough, given that misrepresentation can occur later. Perhaps Dretske's appeal to channel conditions can help him out of this apparent difficulty. However, specifying channel conditions without being ad hoc or circular or adverting to intentional phenomena (such as that a perceiver is not distracted) could prove difficult.

There are some hints in Dretske's writings of a willingness to use a less strict notion of indication for he sometimes speaks of the content of a representation as the “maximally indicated state.” This suggests that there are more minimally indicated states, which would be an oxymoron on the strict interpretation. However, this looser interpretation is not developed in Dretske's writings and his (1981) offers several arguments against loosening the requirement.

A further argument against indicator-semantics involves the claim that something qualifies as a representation only if it is used as a representation. Millikan (1989, pp. 84–90) argues that a representation's content must therefore be determined by its use or else something could count as a representation without representing anything, which would be nonsense. The thought seems to be this: if representational status and representational content are determined separately, they could come apart and, if they could come apart, something could count as a representation by satisfying the requirement for representational status while at the same time failing to satisfy the requirement for representational content, and so without representing anything in particular.

However, there are ways to block this conclusion. Suppose that Dretske is right that something is a representation only if the mechanisms that produce it were in part recruited for producing it (a) because it indicates something and (b) because it plays a certain role (e.g.) in causing bodily movements. Note that (a) concerns production and is also most directly relevant to content determination on Dretske's theory. Note as well that (b) concerns use. Whether these two requirements are adequate to characterize representational status is debatable. But the point here is that even if Millikan is right that representational status is determined by use (as it is, in part, on Dretske's proposal) it does not follow that the production of representations is irrelevant to determining their content (and on Dretske's proposal it is not irrelevant). On Dretske's proposal, the production of a representation determines its content but something does not count as a representation unless it also has a use-related function.

Millikan (2004, ch.6) also argues that no system can have the function to produce states that carry correlational information, even if the correlation need not be one hundred percent reliable. On Millikan's view, although representation producing systems do produce representations that carry a form of natural information when they function properly, they do not have the function to do so. (Recall that, while hearts produce thumping sounds when they are functioning properly, they do not have the function to produce thumping sounds; it is a side effect of their proper functioning.) She points out that it cannot be the function of her visual system to ensure a general correlation between representations of a certain type (e.g., all REDs produced by human visual systems) and contents of a certain type (e.g., all red instantiations). Her visual system, for instance, cannot have the function to ensure that your visual system produces REDs only in the presence of red. If this objection succeeds, it still leaves open the possibility that an alternative notion of natural information, such as a causal notion, could be used (as discussed in section 3.5).

Despite some problems with the detailed articulation of Dretske's indicator semantics, his central insight seems important and appealing. It is plausible that sensory-perceptual systems have the function to produce representations that carry information and that this bears on their content. An alternative attempt to elaborate this insight is sketched later.

3.2 Consumer and Benefit-Based Theories

Millikan (1984) and Papineau (1984) were the first to offer non-informational, “benefit-based” or “consumer-based,” versions of teleological theories of mental content. Millikan's theory is described in this section and Papineau's in the next. Millikan's view is richly elaborated in her (1984); her (1989) provides a compressed version, while her (2004, part IV) falls somewhere between the two in terms of detail.

At least in her earlier work, Millikan's theory of content focused heavily on the “consumers” of representations, where the consumers of representations are the systems that have historically used the mapping between the representations and their contents to perform their (the consumers') proper functions. In her (1989) Millikan maintains that the production of mental representations is irrelevant to their contents. She has claimed that attention to the consumers is crucial for solving a certain functional indeterminacy problem, a claim to be discussed in section 4.1.

On Millikan's theory, when the relevant representation is used to communicate between creatures, the producer and the consumer of the representation are different creatures. One of Millikan's examples is of a beaver splash: the beaver that splashes its tail is the producer of the representation and the consumers are the nearby beavers that dive for cover, having been warned of danger. In the case of internal representations, it is less clear what counts as the producer and consumer. Millikan sometimes speaks as if they are different sub-systems and sometimes as if they are different time-slices of the same system, before and after the representation is tokened. In either case, a consumer is a system that Normally exploits the mapping between a representation and its represented in the performance of its proper function, where 'Normally' is understood in a teleological and not a statistical sense.

Consumers might or might not be cognitive systems; Millikan does not seem to require them to be cognitive systems. Consider the often mentioned case of the frog, which responds to anything appropriately small, dark and moving past its retinas by darting out its tongue. In this case, one relevant consumer of the frog's sensory-perceptual representation might be the frog's digestive system. The performance of its function of feeding the frog depends on and in that sense exploits the mapping between the frog's sensory-perceptual representation and its content, which is (Millikan says) frog food.

To find out the content of a representation, says Millikan, we look at the functions of its consumers, which are co-adapted with the producing systems. If a consumer system has a function then past systems of the type did something adaptive that contributed to the preservation or proliferation of such systems in the population. Ancestral frogs had ancestral digestive systems, for example, and these did things that contributed to the preservation and proliferation of such digestive systems in frogs. It is the explanation of this selection of the consumer system that most nearly concerns the content of the representation, says Millikan. To determine the content of a representation, we consider those past occasions on which consumer systems of the type contributed to selection of that type of system and we ask what mapping between the representation and the world was required for this contribution. According to Millikan, the frog's visual representation represents frog food, since it was only when there was frog food where the frog snapped that the frog was fed and so it was only then that the frog's digestive system contributed to the selection of systems of that type through the use of the representation. Millikan calls that which must have mapped on to the representation in this way the Normal condition for the performance of the proper function of the consumer (in the Normal way). The Normal condition is the content of the representation.

An issue worth considering is whether a multiplicity of consumers (e.g., the frog's motor control system employed in orienting toward the stimulus, the digestive system that digests the food, the circulatory system that circulates the digested nutrients and so on) for a given representation will lead to inappropriate content ambiguity. This will depend on whether different consumers have different Normal conditions for the use of the same representation. If the Normal conditions for the functions of various systems that consume a representation in an individual routinely coincide, one might wonder if the Normal conditions for the functions of producing systems will also coincide and, if so, why we need to focus on consumers in particular. This might be one reason why, in later writings, Millikan does not emphasize the consumer's functions over the producer's to the same extent.

Some argue that Millikan's theory has advantages in comparison with Dretske's indicator semantics (see e.g., Godfrey-Smith 1989 and Millikan 2004). On Millikan's theory, a representation, R, can represent some environmental feature, C, even if it was never entirely reliable that if there was an R then there was a C. It is enough, on her theory, that Rs mapped on to Cs often enough for the representation's consumers to have (so to speak) benefited from that mapping. There is no need to provide independently specifiable channel conditions or to distinguish between recruitment and post-recruitment environments.

It can also be argued that Millikan has solved the problem of distal content for innate as well as learned concepts. Neither retinal images nor light reflected from prey feed a frog. So it can be argued that the Normal condition for the performance of the proper function of the consumer of the frog's perceptual representation is frog food, not light reflected from the prey or retinal images. However, whether Millikan's solution to the problem of distal content survives closer scrutiny is not clear. A solution must exclude inappropriately proximal items, as well as include appropriately distal items. Food is included in the content of the frog's perceptual representation, on Millikan's theory, but the issue is whether the proximal items that carry information about the food to the frog are excluded. Frog food is of no use to a frog if the frog cannot detect it and a frog can only Normally detect its prey if light is reflected from it and an appropriate retinal image results. So a worry is whether the Normal condition includes the more proximal links in the causal chain as well.

Millikan considers a related objection to do with omnipresent beneficial background conditions, the prima facie worry being whether her theory excludes them. To stay with the same example, consider that other things besides frog food were required for a contribution to fitness on past occasions when the frog's perceptual representation was used (e.g., oxygen and gravity). Does her theory entail that the frog's perceptual representation means, not frog food, but something more like frog food in the presence of oxygen and gravity? Millikan excludes such background conditions on the grounds that they do not explain the success of the systems that consume the representation.

This entry refers to Millikan's theory as a “benefit-based” theory, since it links content to the benefit to the creatures (or to the consuming systems) that accrues from the use of a representation. That to which a representation refers is not necessarily beneficial; it might instead be its avoidance that is beneficial (e.g., the avoidance of danger, in the case of the beaver splash). While gravity is beneficial, being tied to Earth by gravity is not a benefit that accrues to frogs due to the use of their prey-representations. The ingestion of nutritional substances, on the other hand, is something that results from the use of the prey-representations. Benefit-based theories need not be consumer-based theories, however, since we could speak of benefits to producing systems or (when the relevant selection is natural selection operating over an evolutionary span of time) to the inclusive fitness of the creature as a whole.

One objection to Millikan's Normal conditions is that they are overly specific for plausible contents. Consider the fact that all sorts of circumstances could prevent a contribution to fitness: for example, an infected fly or a crow standing nearby could spell disease or death instead of nutrition for the frog (Hall, 1990). It has been argued that Millikan's theory has the unintended consequence that the frog's representation has the content food that is not infected, when no crow is standing by, and so on.

Pietroski (1992) also argues that Millikan's theory provides implausible intentional explanations. His tale of the kimu is intended to press the point. The kimu are color-blind creatures, until a mutation arises which results in a mechanism that produces a brain state, B, in response to red. Those who inherit this mechanism enjoy the sensation, which leads them to climb to the top of the nearest hill every morning (to see the rising sun or some flowers). The result is that they avoid the dawn-marauding predators, the snorf, who hunt in the valley below and, solely as a result of this, there is selection for the mutation. As Pietroski wants to describe the case, Bs have the content red (or there is some red) and the kimu enjoy the sight of red and seek out the sight of red things. The point of the story is that Millikan's theory does not allow the story to be told this way. On her theory, the kimu do not see a visual target as red or desire the sight of red, given that it was not the mapping between Bs and red but between Bs and snorf-free-space that was crucial for the fitness of the kimu (and so for the selection of any relevant consumers of the representation). On Millikan's theory, Bs mean snorf-free-space and there is no representation of red in a kimu's brain.

Pietroski argues that biting the bullet is radically revisionist in this case. Behavioral tests, he says, could support his claim. Plant a red flag among a crowd of snorf and the kimu will eagerly join them. It is consistent with his story that contemporary kimu might never have seen a snorf and might be unable to recognise one were it to stand smack in front of their faces. Intuitively, we want to say that they might know nothing of snorf, he says. Pietroski suggests that this might be a problem for all teleological theories of content. However, it is more specifically an objection to a benefit-based version (some other teleological theories of content imply that the kimu represent red; see section 3.5).

Millikan (2000, p. 149) agrees that her theory entails that the kimu's B-states represent fewer snorf this way. She argues that we need to distinguish between the properties represented and the properties that cause representations. How else, she asks, could a tortoise think chow this way, given that being nutritious is an invisible property and so could not cause a sensory-perceptual representation? Setting aside what a tortoise really thinks, the worry is how a causal theory of content can allow for the representation of that which lies behind the surface features of objects, or how it can account for concepts of natural kinds that have hidden or unknown “essences” (e.g., a concept of water, the stuff that is necessarily composed of H2O).

Price (2001) offers a detailed teleological theory that is similar to Millikan's. She defends Millikan's interpretation of the mind of the kimu on the ground that it better explains their behavior. She endorses the idea that the point of making content ascriptions is to rationalize behavior and her claim is that a desire to avoid snorf is a better reason to climb to the top of the hill than a desire to watch the sun rise or see red flowers. Several responses are possible. One is that a desire to watch a sunrise is reason enough to climb a hill. Another is that we are left without a rational explanation of why a kimu would be eager to enter snorf-infested space when the snorf are near red, other than that they are psychologically incapable of correctly representing the presence of snorf when snorf are near red. A further possible response is to question whether it is the role of content ascriptions to rationalize behavior (as famously claimed by Davidson (1985) and Dennett (1996)).

In relation to this last point, one can ask if some content ascriptions are suitable for some theoretical purposes and others for others. One might agree that folk psychological ascriptions of intentional mental states are meant to rationalize behavior but question whether this is their role in cognitive science. In the latter case, the aim is to explain the psychological capacities of humans and (in the case of cognitive neuroethology) other creatures. Thus a question to ask is what content ascriptions would serve the explanatory purposes of the mind and brain sciences, rather than our folk psychological intuitions. Neander (2006) and Schulte (forthcoming) argue that benefit-based theories generate the wrong contents for mainstream (information-processing) theories of perception in relation to the simple system cases discussed in the philosophy literature. A principle of such mainstream theories is that, in vision, the invisible properties of objects are only represented after the visible surface features of objects are first represented (see, e.g., Palmer 1999). The worry is that benefit-based theories can entail that it is only the invisible but beneficial properties that are represented in perception.

Further afield, Shapiro (1992) discusses the role of content ascriptions in foraging theory, which raises a different set of theoretical considerations.

In places, Millikan makes it clear that her theory is intended as a version of an isomorphism theory. According to an isomorphism theory, representation is a matter of mirroring the relations among the elements in the represented domain in the relations among elements in the representing domain. Since the relevant resemblances are relational, there is no requirement that representations share properties other than abstract relational properties with their representeds. This makes isomorphism theories more plausible than crude resemblance theories. However, this aspect of Millikan's theory is not much developed. (See Shea 2012 for discussion of the role of isomorphism in her theory.)

To a large extent, Millikan's theory has been responsible for the great interest, both positive and negative, that philosophers have shown in this general class of theories. Her writings on the topic are extensive and this section has only touched on the basics of her view.

3.3 Non-combinatorial Theories

A further way in which teleological theories of content can differ is with respect to the contents that they aim to explain. David Papineau's theory, developed at the same time as Millikan's, will help illustrate this point. Papineau (1984, 1987, 1990 and 1993) develops a theory that is top-down, or non-combinatorial, insofar as the representational states to which his theory most directly applies are whole propositional attitudes (e.g., beliefs and desires). In early writings, Millikan sometimes seems to hold a similar view and some objections initially raised against her theory are based on this interpretation of her view (see, e.g., Fodor 1990b, 64–69, where he raises some of the following points).

In Papineau's theory, the contents of desires are primary and those of beliefs are secondary in terms of their derivation. According to Papineau, a desire's “real satisfaction condition” is “… that effect which it is the desire's biological purpose to produce” (1993, 58–59), by which he means that “[s]ome past selection mechanism has favored that desire — or, more precisely, the ability to form that type of desire — in virtue of that desire producing that effect” (1993, 59). So desires have the function of causing us, in collaboration with our beliefs, to bring about certain conditions, conditions that enhanced the fitness of people in the past who had these desires. Desires, in general, were selected for causing us to bring about conditions that contributed to our fitness, and particular desires were selected for causing us to bring about particular conditions. These conditions are referred to as their satisfaction conditions and they are the contents of desires.

The “real truth condition” of a belief, Papineau tells us, is the condition that must obtain if the desire with which it collaborates in producing an action is to be satisfied by the condition brought about by that action. A desire that has the function of bringing it about that we have food has the content that we have food, since it was selected for bringing it about that we have food. If this desire collaborates with a belief to cause us to go to the fridge, then the content of that belief is that there is food in the fridge, since our desire for food will be satisfied by our going to the fridge only if it is true that there is food in the fridge (Papineau's example).

Papineau's approach thus seems to reject the Language of Thought hypothesis, according to which thought employs a combinatorial semantics. Language is combinatorial to the extent that the meaning of a sentence is a function of the meanings of the words in the sentence and their syntactic relations. “Rover attacked Fluff” has a combinatorial meaning if its meaning is a function of the meaning of “Rover”, the meaning of “attacked” and the meaning of “Fluff”, along with their syntactic relations (so that “Rover attacked Fluff” differs in meaning from “Fluff attacked Rover”). According to some philosophers (see esp. Fodor 1975) the content of propositional attitudes is combinatorial in an analogous sense. That is, for instance, the content of a belief is a function of the contents of the component concepts employed in the proposition believed, along with their syntactic relations. A teleological theory of content can be combinatorial, for it can maintain that the content of a representation that expresses a proposition is determined by the separate histories of the representations for the conceptual constituents of the proposition (and, perhaps, by the selection history of the syntactic rules that apply to their syntactic relations). Papineau's theory is not combinatorial, at least for some propositional attitudes. Instead, the proposal is that the contents of concepts are a function of their role in the beliefs and desires in which they participate.
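
To make the combinatorial idea vivid, here is a minimal sketch in Python (not drawn from any author discussed here; the lexicon entries and role labels are invented purely for illustration). It shows how the content of a whole could be computed from the contents of its parts together with their syntactic relations, so that “Rover attacked Fluff” and “Fluff attacked Rover” come out with different contents despite containing the same words.

    # Toy illustration only: word "contents" and a compositional rule.
    # The lexicon entries and role labels below are invented for this sketch.
    lexicon = {
        "Rover": "the dog Rover",
        "Fluff": "the cat Fluff",
        "attacked": "the attacking relation",
    }

    def sentence_content(subject, verb, obj):
        """Compose the content of 'subject verb object' from the contents
        of the words plus their syntactic roles (agent, relation, patient)."""
        return {
            "agent": lexicon[subject],
            "relation": lexicon[verb],
            "patient": lexicon[obj],
        }

    # Same word contents, different syntactic relations, different whole contents:
    print(sentence_content("Rover", "attacked", "Fluff"))
    print(sentence_content("Fluff", "attacked", "Rover"))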

Papineau's theory is a benefit-based theory, and some issues discussed in the previous sub-section are relevant to an assessment of it. For instance, it is unclear that what we desire is always what is beneficial to fitness. One might want sex, not babies or bonding, and yet it might be the babies and the bonding that are crucial for fitness. However, this section will not attempt an overview of the strengths and weaknesses of this theory but will focus on issues peculiar to non-combinatorial accounts.

Any non-combinatorial theory must face certain general objections, such as the objection that it cannot account for the productivity and systematicity of thought (Fodor 1981, 1987). This entry will not rehearse that argument (see the entry on the language of thought hypothesis), but special problems for a teleological version of a non-combinatorial theory need to be mentioned. Consider, for example, the desire to dance around a magnolia tree when the stars are bright, while wearing two carrots for horns and two half cabbages for breasts. Probably no one has wanted to do this. But now suppose that someone does develop this desire (to prove Papineau wrong, say) so that it is desired for the first time. We cannot characterize the situation in this way, according to a non-combinatorial teleological theory: since the desire has never occurred before, it has no history of selection and so no content on its first occurrence, on that style of theory. It is also a problem for this kind of theory that some desires do not or cannot contribute to their own satisfaction (e.g., the desire for rain tomorrow or the desire to be immortal) and that some desires that do contribute to their own satisfaction will not be selected for doing so (e.g., the desire to smoke or to kill one's children). In contrast, teleological theories that are combinatorial have no special problem with novel desires, desires that cannot contribute to bringing about their own satisfaction conditions, or desires whose satisfaction conditions do not enhance fitness, as long as their constitutive concepts have appropriate selection histories or are somehow built up from simpler concepts that have appropriate selection histories.

Papineau can respond by agreeing that some concessions to a combinatorial semantics have to be made. Once some desires and beliefs have content, the concepts involved acquire content from their role in these, and they can be used to produce further novel, self-destructive, or causally impotent desires. However, it needs to be shown that such a concession is not ad hoc. The problem is to justify the claim that the desire to blow up a plane with a shoe explosive is combinatorial, whereas the belief that there is food in the fridge is not.

3.4 More or Less Modest Combinatorial Theories

In contrast to Papineau's theory, some teleological theories are combinatorial. According to these theories, a teleological theory directly accounts for the contents of only the representational simples, while combinatorial processes are additionally involved in determining the contents of more complex representations.

There are two kinds of combinatorial process that might be involved. One operates at the level of a proposition, or at the level of entire map-like or pictorial representations. This type of combinatorial process is thought to play a role that is roughly analogous to the role of a grammar in a spoken language, or to the principles of map-formation in cartography or of pictorial composition in picturing. For example, it might allow us to combine the concepts CAT, ON and MAT to produce the thought (belief, desire, etc.) that the cat is on the mat.

A second kind of combinatorial process that might be involved operates at the level of single concepts and their associated conceptions. Some think that simpler concepts could be combined in conceptions to formulate more sophisticated concepts or to fix the reference of more sophisticated concepts that remain at roughly the grain of the lexemes of a language. Most simply, the concepts MALE, ADULT and NOT MARRIED might be combined to form the concept BACHELOR by means of a definitional conception. Or there might be other types of conceptions involved, such as Wittgensteinian family resemblance conceptions or prototype-style conceptions.
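
Purely for illustration, a definitional conception of the sort just mentioned could be sketched as follows (this toy is not from the source; the predicates and the adulthood threshold are invented, and nothing here is meant to settle whether BACHELOR really has a definition).

    # Toy illustration only: a "definitional conception" composing BACHELOR
    # out of simpler concepts. The predicates and the adult age threshold
    # are invented for this sketch.
    def MALE(x):
        return x.get("sex") == "male"

    def ADULT(x):
        return x.get("age", 0) >= 18

    def MARRIED(x):
        return x.get("married", False)

    def BACHELOR(x):
        """The composed concept: male, adult and not married."""
        return MALE(x) and ADULT(x) and not MARRIED(x)

    print(BACHELOR({"sex": "male", "age": 30, "married": False}))  # True
    print(BACHELOR({"sex": "male", "age": 30, "married": True}))   # False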

Teleological theories can be more or less modest in their scope. A modest theory only aims to directly account for the contents of representational simples. Dretske (1986) expresses a “modest” view when he gives voice to the hope that more sophisticated representations can be built out of the simple sensory-perceptual representations his theory accommodates. However, there is as yet no clear agreement among philosophers or psychologists as to what the representational simples are.

One modest view is that a teleological theory should directly apply to sensory-perceptual and motor representations and to innate concepts only (i.e., those that can be produced without learning). However, even this needs qualifying, since it is controversial which of our concepts are innate. On a radical nativist view, such as that of Fodor (1981), all or almost all of the concepts expressed by the lexical morphemes (the smallest meaningful components) of a language are innate (not learned, only triggered). If that were really so, a theory that aimed to account for the contents of all innate concepts would need to be quite ambitious. Those who propose genuinely modest teleological theories of content do not hold this view, for they claim that some mental representations that correspond to lexical morphemes are sophisticated, in the sense that they are somehow composed out of or acquired through the use of other representations.

Sterelny (1990) describes his teleological theory as “modest” because it only attempts to give an account of innate representations and he assumes these to be a relatively small subset of the complete set of our mental representations. As for giving an account of the human propositional attitudes, Sterelny maintains that a teleological theory of content will face “appalling difficulties.” He believes that a teleological theory for the representational simples will be part of the complete psycho-semantic theory but not the whole of it. This contrasts with Papineau's theory, which most directly applies to propositional attitudes. It also contrasts with Millikan's (1984) highly ambitious attempt to directly account, not only for the contents of all mental representations, but also for the meanings of all linguistic utterances via a teleological theory.

A modest teleological theory might claim some advantages. Most obviously, unless some concepts can be derived from other concepts, teleological theories would seem to have trouble accounting for empty concepts. For example, no unicorns were ever indicated by UNICORNs, the presence of a unicorn was never a Normal condition for the performance of the proper function of a consumer of UNICORNs, and the desire to find a unicorn has never been satisfied, so the conditions involved in the satisfaction of this desire could not have contributed to selection of the mechanisms that produce desires of the type. This problem is avoided by a teleological theory that aims to directly account for the contents of just the representational simples, on the assumption that no representational simple expresses an empty concept. (Rey (2010) questions that assumption.)

It is sometimes argued that the lack of unicorns as (e.g.) Normal conditions is unproblematic, since UNICORN does not refer (to anything actual); arguably, then, non-modest theories deliver the correct (empty) referential content. It is a further question whether a theory of referential content needs to determine the extension of a concept in all possible worlds. (If the reader's view is that there are no unicorns in any possible worlds because unicorns are essentially fictional, the reader should here substitute another example of an actually empty but possibly non-empty concept, such as a concept of phlogiston or of entelechies.) Some theories of referential content take on this task and some do not.

The greatest challenge to those offering modest theories will be to explain how complex concepts can be composed out of or derived from simpler concepts. It might fairly be said that it is not the task of a fundamental theory of mental content per se to explain how complex concepts can be composed out of simpler ones, but it is a problem for modest theories if no such explanation is available. Moreover, providing such an explanation is generally thought to be problematic. Some say that “modest” theories have some seriously immodest consequences. One is alleged to be that there must be a principled analytic/synthetic distinction. See, for instance, Fodor and Lepore (1992), who argue that we must choose between three options: defending a principled analytic/synthetic distinction, accepting meaning holism or accepting that virtually no concepts of roughly the grain of the lexemes of a language are composed out of simpler concepts. They further argue that the first two options are not viable. However, some psychologists maintain that we must somehow “bootstrap” up from simple to sophisticated concepts (see e.g., Carey (2009)). And some philosophers are anyway unconvinced by Fodor and Lepore's arguments. (Readers who would like to read more on concepts and conceptions might start with the introduction to and readings in Margolis and Laurence (1999) and the entries in this encyclopedia on concepts and on the analytic-synthetic distinction.)

3.5 Causal-Informational Theories

To round out this survey of views, we return to informational theories, to look at some more recent work that is broadly in the tradition of Stampe and Dretske. These theories take seriously the idea that mental representations have informational functions.

First, a response is offered to an argument that is intended to block all informational versions of teleosemantics. This argument is that, because functions are selected effects, any appeal to representational functions must be an appeal to the effects of representations and not their causes (Millikan (1989b, 85), Papineau (1998, 3)). One response is to accept this argument's conclusion but to maintain that an additional informational requirement can nonetheless be added to an appeal to functions; teleological theories of mental content can appeal to other things besides functions (Shea, 2007).

An alternative response rejects the argument. Neander (2012) claims that sensory-perceptual systems have what she calls “response functions,” where to respond to something is to be caused by it to do something else. For example, a visual system might be caused by a red instantiation to change into a RED state, and it might have been selected (in part) for being disposed to change into a RED state in response to red and have the function to do so.

On Neander's view, these state changes represent the causes to which the system is supposed to respond by producing the representation in question. They are, so to speak, the Normal causes of the representation's production, rather than the Normal conditions for the performance of the proper function of the representation's consumer. On this view, RED has the content red if the visual system that produces it has the function to produce it in response to red, or more specifically in response to red being instanced in the receptive field of the perceptual processing pathways responsible for the RED's production. This is the basic idea, though further complications are added. One is intended to solve the problem of distal content as follows:

A sensory-perceptual representation, R, in a sensory-perceptual system S, has the descriptive content C and not Prox-C if:
  1. S was selected for producing Rs in response to Cs and,
  2. if S was selected for producing Rs in response to both Cs and Prox-Cs, it was selected for producing Rs in response to Prox-Cs because this was a means to its producing Rs in response to Cs and not vice-versa.

The second requirement is intended to determine appropriately distal content and is to be applied only after the first requirement is applied. The first requirement on its own does not determine suitably distal content because there is a causal chain leading from C to R and, if the system had been selected for responding to Cs by producing Rs, it must also have been selected for responding to the proximal items in the causal chain (such as the light reflected from Cs toward the retina of the eye, in the case of visual perception). These more proximal items in the causal chain carry information about C to the system and through the system to the R. There is, however, an asymmetry, to which the second requirement appeals. The system was selected for its disposition to respond to the proximal items because by that means it responded to the more distal items, but the system was not selected for responding to the more distal item because by that means it responded to the more proximal items. (It does not respond to the more proximal items by means of its responding to the more distal items; that is not how the means-end analysis pans out).
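
The two requirements, and the means-end asymmetry the second one exploits, can be rendered schematically. The following toy sketch is not Neander's formalism; the labels for the links in the causal chain and the "selected for" and "as a means to" records are hypothetical stand-ins, used only to show how requirement (2) filters out the proximal links that requirement (1) lets in.

    # Toy rendering of the two requirements; not the author's formalism.
    # 'selected_for' records which candidates the system was selected for
    # responding to; 'as_means_to' maps a candidate to the more distal
    # candidate for whose sake the response to it was selected.
    def distal_content(candidates, selected_for, as_means_to):
        """Return the candidates satisfying both requirements:
        (1) the system was selected for responding to them, and
        (2) that selection was not merely a means to its responding to
            some other candidate that was also selected for."""
        results = []
        for c in candidates:
            if c not in selected_for:               # fails requirement (1)
                continue
            if as_means_to.get(c) in selected_for:  # fails requirement (2)
                continue
            results.append(c)
        return results

    # Hypothetical causal chain, proximal to distal, in the frog-style case:
    candidates = ["retinal image", "reflected light", "small dark moving thing"]
    selected_for = set(candidates)  # every link in the chain was selected for
    as_means_to = {
        "retinal image": "reflected light",
        "reflected light": "small dark moving thing",
    }
    print(distal_content(candidates, selected_for, as_means_to))
    # -> ['small dark moving thing']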

On this causal theory, a sensory-perceptual system need not have produced Rs only in the presence of Cs during selection of the system. There is no need to specify channel conditions or conditions in which representation is reliable. This is not a type-1 teleological theory of content. The idea that representations are reliably caused by or correlated with their contents in some conditions does not figure in the proposal.

The first requirement ensures content ascriptions different from those generated by benefit-based teleological theories. For example, consider again the kimu (see section 3.2). As stipulated by Pietroski, it is the presence of red and not the absence of snorf that causes the relevant mechanism in a kimu to produce a B-state. Mechanisms of the type were not selected for a disposition to be caused by an absence of snorf to produce B-states. They had no such disposition, so they could not have been selected for it. The relevant mechanisms in the kimu were selected for a disposition to be caused by red to produce a B-state, as well as for thereby further causing certain movements (morning hill climbing). They were selected for this because red correlated well enough with snorflessness in the kimu's habitat. However, on this proposal, that further fact becomes a background evolutionary fact that is not content constitutive. The candidate content fewer snorf this way fails to pass the first requirement.

Consider too the notorious case of the frog. Plausibly, the relevant visual pathways in the frog's brain were selected for their disposition to be caused by a certain configuration of visible features (roughly, something's being small, dark and moving) to produce the sensory-perceptual representation in question, as well as for their disposition to initiate orienting and so on thereby. They were plausibly selected for this preferential response to the configuration of visible features because things with these features were often enough nutritious for the frog. The visual pathways in the frog were not selected for a disposition to respond to the nutritional value of a stimulus, however. For the normal frog's visual system has no causal sensitivity to the nutritional value of the stimulus and cannot have been selected for a causal sensitivity it did not have. So, on this proposal, the visual content of the representation is something small, dark, moving (or something along these lines) rather than frog food. According to Neander (2006) the configuration of visible features is the right style of visual content to ascribe for the purpose of mainstream scientific explanations of an anuran's visual capacities.

Nor does this proposal seem to generate overly specific contents of the kind mentioned earlier in relation to benefit-based theories. On this informational theory, the frog does not represent the stimulus as not carrying an infectious disease, even if only those small, dark and moving things that were not carrying an infectious disease contributed to frog fitness when the frog was fed. Sensory-perceptual systems can only have been selected for causal dispositions which past systems of the type possessed. Since past systems had no disposition to respond preferentially to the absence of an infectious disease in visual stimuli that were small, dark and moving, the fact that contributions to fitness were made only on those occasions when an infectious disease was absent is, again, a background evolutionary fact that is not content-constitutive on this proposal.

One possible concern is whether sufficient room for misrepresentation has been made. Some early discussions of teleological theories of content assumed that the content of the frog's representation must be frog food or fly or else misrepresentation would be impossible: the frog would not be in error when it snapped at something small, dark and moving that was not frog food, or not a fly. However, misrepresentation is possible on this proposal. A representation that is supposed to be produced in response to something that is small, dark and moving, but that is instead produced in response to something large and looming, would count as misrepresenting, and a neurologically damaged frog (e.g., one with a damaged thalamus) will indeed attempt to catch all sorts of inappropriate things (e.g., an experimenter's hand or even the frog's own limbs). This informational theory also entails that a kimu's B-state will misrepresent if it is tokened in response to anything that is not red. More importantly, perhaps, it seems to entail that human REDs will misrepresent if tokened in response to something not-red, as could happen in red-green color blindness, in color contrast illusions or in unusual viewing conditions.

As Millikan (2012) and others have pointed out, there are representations that cannot be caused by their contents, such as TOMORROW. No tomorrow has ever caused a thought about tomorrow. However, TOMORROW is not a sensory-perceptual representation and so this is not an objection to this proposal per se. As with other modest theories, however, the challenge is explaining how to link this modest theory for some mental contents to a more comprehensive theory that accounts for all of the contents of all of our concepts (see section 3.4).

4. Problems for Teleosemantics

The preceding survey of teleological theories of content does not mention all of the extant teleological theories but it illustrates some of the commonalities and differences among them. Now we turn to some objections that have been raised against the general idea of teleosemantics. This section looks at the objections that have been most influential. Some have already been touched on in previous sections.

4.1 Functional Indeterminacy

There are several potential indeterminacy problems. Aside from the problem of distal content, which has already been discussed above in relation to the different theories that treat it in different ways, there are two other indeterminacy problems. One concerns the fact that natural selection is extensional (Fodor, 1990b) and the other concerns the fact that natural selection selects traits for complex causal roles (Neander, 1995). Both problems can perhaps be attributed to Dretske (1986), though Dretske did not distinguish them from the problem of distal content, the problem he seems primarily to have been interested in solving.

Fodor once devised a teleological theory of mental content (published years later, as Fodor 1990a). However, he quickly repudiated the idea and has since been one of the most vigorous critics of the approach. Initially, his main objection was that teleological theories leave content indeterminate because functions are indeterminate. Functional indeterminacy, according to Fodor (1990b), stems from the fact that natural selection is extensional in the following sense: if it is adaptive for an organism, O, to do something, M, in the presence of environmental feature, F, and F is reliably co-extensive with another feature, G, then it is equally adaptive for O to do M in the presence of G. Fodor argues that teleological theories therefore cannot distinguish between candidate contents that are co-extensional in the environment in which a creature evolved.

Fodor's example is the frog that snaps at anything that is suitably small, dark and moving and thereby feeds itself. According to Fodor, if it was adaptive for the frog to snap at flies then it was equally adaptive for it to snap at small, dark, moving things, on the simplifying assumption that flies and small, dark, moving things were reliably co-extensive in the frog's natural habitat. According to Fodor, we can equally well say that the function of the device is to detect flies and that its function is to detect small, dark, moving things. So, if we try to determine the content of the representation by reference to the function of the detection mechanism, the content remains indeterminate. We can choose to describe the function one way or another, but if the content depends on how we choose to describe the function, it is not a naturalized content. Note that the candidate contents fly, frog food and small, dark, moving thing each license different assessments concerning misrepresentation. If the frog is representing the stimulus as a fly, for instance, it misrepresents something that is small, dark and moving that is not a fly, using the relevant representation. If it represents the stimulus as small, dark and moving, it does not.

The standard response to this objection starts by pointing out that the function of a trait is what that type of trait was selected for and that the notion of selection for is a causal notion (Sterelny 1990, Millikan 1991). A trait is selected for its possession of a certain property only if that property causally contributed to selection of traits of the type (see Sober 1984). The heart was selected for circulating blood but not for making a thumping noise even though the two co-occurred. It was selected for pumping rather than thumping given that the pumping but not the thumping causally contributed to the inclusive fitness of ancestral creatures and thus causally contributed to the selection of hearts. Functions can therefore distinguish between two properties that reliably co-vary, as long as one but not the other caused the trait to be selected. This point has mostly been well-taken.

However, appeal to selection for does not suffice to disambiguate content (Griffiths & Goode 1995, Neander 1995). In the case of the frog's detection device, its responding to small, dark, moving things and its helping the frog to catch and swallow something nutritious both played a causal role in the selection of the relevant representation-producing or representation-consuming systems. It was by detecting small, dark, moving things that the frog got fed. So neither the detecting of something small, dark and moving, nor the eating of something nutritious, was a mere side-effect or mere piggyback trait. We return to this issue in a moment.

Fodor (1996) nonetheless continues to object that a residue of the problem remains along these lines, because content, he claims, is more fine-grained than selection histories can account for. He maintains that teleological theories cannot discriminate contents finely enough when there are properties that are logically or nomologically co-extensive. Being triangular (being a closed plane figure with three interior angles) and being trilateral (being a closed plane figure with three straight sides) are logically co-extensive properties. Being a renate (a creature with a kidney) and being a cordate (a creature with a heart) are (Fodor assumes) nomologically co-extensive. According to Fodor, we cannot distinguish between selection for adaptive responses in the presence of one versus the other of two such properties. We can represent each distinctly but, according to Fodor, selection histories are not sufficiently fine-grained to distinguish such contents.

Consider the two options: either the causal powers of two co-extensional properties F and G are distinct or they are not distinct. Suppose first that they are not distinct. On some plausible and medium-grained theories of property individuation, properties are individuated by their causal powers, so if there is no difference in the causal powers of F and G, they are the same property on such a theory. On this way of individuating properties, a representation that refers to the one must refer to the other too and so there is no problem here for a theory of referential content. On this way of thinking, if there is no distinction between the causal powers of triangularity and trilaterality, any difference in the mental representations TRIANGULAR and TRILATERAL must be a difference of a different sort. It might be a difference in representational vehicle, or in other words, the two might be different predicates denoting the same property. They might, consistent with this, differ in their cognitive roles. Alternatively, modest theories can maintain that these two representations are semantically complex, in which case there might be a difference (even a referential difference) in the constituent concepts out of which TRIANGULAR and TRILATERAL are composed (e.g., one mentions angles and one does not).

Suppose, on the other hand, that F and G do have distinct causal powers. Most would agree that this is in fact the case if F is the property of being a creature with a kidney and G is the property of being a creature with a heart. In that case, this version of the objection does not get off the ground. If the causal powers of the properties differ, they can play different roles in selection histories. Consider, for example, the proposal that the contents of sensory-perceptual representations are (so to speak) their Normal causes. A system can have a disposition to be caused by Fs to do M, without having a disposition to be caused by Gs to do M, if F and G have distinct causal powers, even if they are co-extensive. The system can be selected for the one disposition that it has but it cannot be selected for the disposition that it does not have.

Fodor's objection has evolved into a general objection to any adaptational explanation and to the very notion of selection for. It would take too much space to follow the trail further here. (See Fodor and Piatelli-Palmarini 2010 and see esp. Block and Kitcher 2010 and Sober 2011 (Other Internet Resources) for effective critical discussion).

We turn now to the second functional indeterminacy problem. It stems from the fact that organic systems are selected for complex causal roles, as indicated earlier. For example, a gene in an antelope might have been selected because it (i) altered the shape of hemoglobin, (ii) which increased oxygen uptake, (iii) which allowed the antelope to move to higher ground, (iv) which gave them access to richer pasture in summer, (v) and so improved their nutritional status, their immunity to disease, their vigor in avoiding predation, their attraction to mates and (vi) their chances of survival and reproduction (Neander, 1995). To determine the function of a trait, such as the altered shape of the hemoglobin, the etiological theory of functions tells us to ask, “what did past instances do that was adaptive and that caused traits of that type to be selected?”. In this case, the answer is (ii) through (vi). The altered shape of the hemoglobin did all of this, and all of this was adaptive, and all of this contributed to the selection of the trait (i.e., it was selected for all of this). So all of this would seem to be the trait's function. Its function is the complex causal role for which it was selected.

The problem for content can be seen when we consider mechanisms that produce or consume representations. For instance, the frog's detection device was selected because it (a) responded to small, dark, moving things and (b) that helped the frog catch these things, and (c) that provided the frog with nutrients and (d) that contributed to the frog's chances of survival and reproduction in various ways. Thus ancestral detection devices contributed to the selection of that type of device by way of a complex causal route in which the visible configuration of the stimulus and the nutritional properties of the stimulus both play a role. Note that this does not depend on these features of the environment being co-extensional. Even if not all small, dark and moving things were nutritious and not all nutritious things were small, dark and moving in the frog's natural habitat, this problem of complex causal roles would still remain. The problem is that the systems responsible for the production and the consumption of representations were selected for complex causal roles in which a number of environmental features were involved.

Agar (1993) supports the idea that the frog's representation represents small, dark, moving food, a content intended to incorporate all of the properties causally responsible for the selection. Price (1998, 2001) claims that, contrary to what has just been said, there is a unique, correct function ascription for each trait and she elaborates a number of principles to isolate the unique, correct function ascription. Enc (2002) endorses Price's claim that function ascriptions must be determinate if any teleological theory of content is to succeed but raises problems for her attempt to show that function ascriptions are suitably determinate.

However, teleological theories of content do not merely gesture toward functions and leave it at that. Consider again the causal theory discussed in the preceding section. The content of the frog's sensory-perceptual representation is not indeterminate between the configuration of visible features and something nutritious on that theory, since the frog's visual system was not selected for producing the relevant sensory-perceptual representation in response to the nutritional value of the stimulus. A frog's visual system is not causally sensitive to the presence or absence of nutrients and could not have been selected for a causal sensitivity it did not have. The general point here is that teleological theories of content appeal to functions in certain ways and one must examine the particular theory to see if the theory isolates a sufficiently determinate content.

In responding to the indeterminacy problem, Millikan (1991) might be thought to rely on the fact that, on her theory, it is the proper function of the consumer and not that of the producer of the representation that determines its content. For instance, in discussing Dretske's magnetosome example she says that, “[t]he mechanisms THAT USE the magnetosome's offerings don't care at all whether the magnet points to magnetic north, geomagnetic north or, say, to the North Star. The only one of the conditions Dretske mentioned that is necessary FOR THE USER'S PROPER FUNCTIONING is that the magnet point in the direction of lesser oxygen” (Millikan 1991, 163; original emphasis). However, it seems (to this author) that Millikan's emphasis here does not put the emphasis in the right place for her theory. Recall that one consumer of the frog's perceptual representation is the motor control system which controls the frog's orienting toward the stimulus. We can describe its function as controlling the frog's orienting toward frog food, but we could also describe it as controlling the frog's orienting toward small, dark, moving things. A mere appeal to consumers would seem to shift the problem without solving it. However, it does not follow that Millikan's theory leaves content indeterminate. It is Millikan's appeal to Normal conditions that does more work in disambiguating the content for her.

Finally, some proponents of teleological theories do not think that content is determinate in the cases used to illustrate the alleged problem. Dennett (1995) maintains that such content indeterminacy is unproblematic. Papineau (1997) maintains that content is indeterminate in the case of a creature that lacks a belief-desire psychological structure. Whether a creature lacks a belief-desire structure will in part depend on how we construe this requirement. It is not straightforward whether frogs lack a belief-desire psychological structure, given that they have both informational and motivational states. Nonetheless, Papineau is probably right that their informational and motivational states are not as distinct as ours, and he might also be right that content indeterminacy at this level is unproblematic. We will, however, need to resolve related content indeterminacy problems for human mental states.

4.2 Swampman

Another objection that has been influential is the Swampman objection. Swampman-style examples have been around for some time. Boorse (1976) imagines a population of rabbits accidentally coalescing into existence as a counter-example to Wright's etiological theory of functions. Boorse's claim was that we could ascribe functions to the rabbits' parts even if the rabbits lacked any selection-history. Swampman in particular was raised by Davidson (1987) as a potential objection to his own historical (but not teleological) theory of content. When Swampman comes into existence he is a synchronic (at a time, but not extended over time) physical replica of Davidson at a certain point in time (t). Swampman's history differs radically from Davidson's because he comes into existence as a result of a purely accidental collision of elementary particles. Crucially, he does not partake in our evolutionary history or have any other evolutionary history or any developmental history of his own. Nor is he created by God or copied from Davidson by a machine. The resemblance between Davidson and Swampman is nothing but a stupendous coincidence. Swampman's appearance of design is deceptive because he in no way derives from any design process, natural or intentional. Swampman's component parts have no functions according to an etiological theory of function and so his “brain” states have no contents according to a teleological theory of mental content.

Many people find these results highly counter-intuitive, especially the result that Swampman lacks all intentional states. Assuming physicalism, we could substitute Swampman for Davidson and no one, including his most intimate friends and family, would detect a difference. Swampman would make noises that his friends and family would interpret as witty, interesting and meaningful but, according to teleological theories (and Davidson's own theory of content) Swampman has no ideas about philosophy, no perceptions of his surroundings and no beliefs or desires about anything at all.

There are two broad strategies in responding to this objection. One is to try to loosen the grip of the intuition that Swampman has intentional states and the other is to argue that any intuitions that remain do not show that teleological theories are wrong. In either case, it is important to isolate the relevant intuition because, by all accounts, Swampman would have much that Davidson had at t. All of the chemical activity in Davidson's brain when he understood words, for example, would occur in Swampman's brain-analog and certain descriptions of this activity will apply to both equally: e.g., physical, chemical and formal descriptions of it. Further, it is trivial that Swampman has narrow content if “narrow content” is defined as whatever most closely approximates content that nonetheless supervenes on just the narrow physical states of an individual at a time and “from the skin in.” By definition, whatever narrow content Davidson's mental states had at t, Swampman's inner states had too, since Swampman is at t physically indistinguishable “from the skin in” from Davidson at t. What teleological theories entail is that Swampman, no matter what narrow content he has, lacks regular normative content. The intuition that conflicts with teleological theories, therefore, is that Swampman's inner states, which are narrowly identical to Davidson's, are true, false, accurate or inaccurate in the usual sense.

It is clear that, if Swampman's inner states do have truth-evaluable contents, they cannot always have the same truth values as Davidson's. Everyone will probably agree that, at t, Swampman cannot remember his past life since at most he could only have pseudo-memories of Davidson's. Everyone will also agree that Swampman cannot correctly think that he is returning home to his wife and sitting in his house, since the house and the wife are not his. Further, it should be kept in mind that many think that Putnam (1975) has shown that the contents of natural kind concepts do not supervene on just what is “in the head.” If Putnam-style twin cases can be constructed for other mental representations and their contents as well (see Burge 1979, 1986) then Swampman's lack of history might anyway be an issue even before considering the further complication of a teleological theory. It thus requires careful analysis with respect to controversial issues to determine just what intuitions about Swampman would tell against the externalism of teleological theories in particular.

Those who try to dislodge any remaining intuitions against teleological theories argue that an appearance of design can be misleading. (Recall that “design” here includes the mechanical design-work of natural selection.) Consider, for example, Boorse's swamprabbits. It might be intuitive to attribute functions to their eye-analogs. But in nature nothing so intricately organized as if for the performance of a function fails to be the result of a design process. It is argued that habits of thought, which usually take us from an appearance of design to a function ascription, lead to false ascriptions in purely hypothetical, unrealistic cases (Neander 1991). Dretske (1996) argues the case with another imaginary example. Twin-Tercel, a random replica of his old Tercel, comes about as the result of a freakish storm in a junkyard. It is molecule-for-molecule identical to his old Tercel, except that its “gas-gauge” does not move in relation to the amount of gas in its “tank”. We might be tempted to say that the thing is broken, but Dretske says that there is no basis for saying that it does not work, because to say that it does not work implies that it was designed to do something it cannot do, and it was not designed to do anything. If we should reform our intuitions in the one case, perhaps we should also reform them in the case of Swampman's intentionality, he says.

We might grant Dretske his claim about Twin-Tercel and yet resist the move from functions to intentionality. The problem for theories of content, as opposed to theories of function, is exacerbated by the relation between intentionality and consciousness. Many philosophers find it plausible that an individual's phenomenal consciousness at a time supervenes on just the inner physical properties of that individual at that time. If this narrow supervenience thesis is true, then Swampman will have phenomenal consciousness when he comes into existence, assuming Davidson did at t. However, it is hard to see how we can attribute phenomenal consciousness to Swampman without also attributing some intentional states to him. Suppose, for example, that Swampman has a red-sensation. Then presumably it will seem to him that he is seeing something red. But it seeming to him that he is seeing something red is presumably an intentional state.

Here we connect with another important issue that lies outside of the scope of this entry. However, a couple of points can be made. First, some proponents of teleological theories of content are not troubled by this line of argument because they reject the view that consciousness supervenes on narrow states and hold theories of phenomenal consciousness that deny consciousness to Swampman. According to some, phenomenal consciousness supervenes on (non-narrow) content, so if Swampman lacks content he must also lack phenomenal consciousness on this view (see esp. Dretske 1995).

If, though, any proponents of teleosemantics accept the narrow supervenience thesis for phenomenal consciousness, they cannot deny that Swampman would have phenomenal consciousness. In that case, the objection remains in force. Then there appear to be just two options. One is to maintain that Swampman can have a red sensation without it seeming to him that he sees something red. The other is to maintain that, although it seems to Swampman that he sees something red, this seeming is not truth-evaluable in the usual sense. This last option fits with the traditional idea that seemings have a special epistemic status; it fits with the idea that we cannot be mistaken about how things seem to us and that, in that context, misrepresentation is not possible. It does not, however, fit with the idea that a person is, in principle, always fallible with respect even to how things seem.

The second broad strategy is to argue that Swampman intuitions cannot show that teleological theories are incorrect because such intuitions are irrelevant. They are, it can be argued, not to the point if a teleological theory is offered as a real-nature theory (Millikan (1996), Neander (1996)). The analogy with an a posteriori analysis of the nature of water is thought to be helpful here. Recall that XYZ is an imaginary liquid that is superficially indistinguishable from water (H2O), although it has a different molecular constitution (dubbed “XYZ”). We can, it is argued, agree that “water” and WATER can refer to H2O exclusively, even if all of the members of the relevant community would classify XYZ as water were they to find some, given their ignorance of water's chemical composition. Following Kripke and Putnam, many have been persuaded that “water” and WATER might have referred to H2O exclusively, even before it was known that water is H2O, because there was deference to an unknown nature that explained the superficial properties by means of which we usually recognise instances of the liquid. On this view, it was (in 1700) an epistemological possibility that water was not H2O, but it was not a metaphysical possibility, given that water is in fact H2O. Along similar lines, it can be argued that it is only an epistemological and not a genuine metaphysical possibility that Swampman might have intentionality.

Note that this last claim is not the claim that it is merely an epistemological possibility that Swampman might exist. Rather, the crucial claim is that, even if he did exist, it would remain a mere epistemological possibility that he would have genuine intentionality. This parallels the claim regarding water and XYZ. Even if XYZ were to exist on Twin-Earth and Twin-Earth were in our universe, it would not be water. Superficial appearances would be on the side of Swampman's having intentionality, just as they would be on the side of XYZ's being water, but it may turn out that Swampman's “intentionality” is not intentionality, just as it would turn out that XYZ is not water (it is just twin-water). Intuitions about Swampman, it is claimed, cannot decide the issue of what the correct analysis of intentionality is. Rather, the decision about Swampman's intentionality should be driven by the theory of content that best accounts for the real kind. That in turn should be driven by other considerations, such as which theory delivers correct content ascriptions for us and other existing creatures.

Of course, in the case of intentionality, unlike the case of water, the hidden nature or essence cannot be an inner structure, if a teleological theory is correct. On such a theory, intentionality is alleged to be an historical kind, so the previously hidden nature is alleged to be a matter of history. As proponents of teleological theories point out, there is an apparent need for other historical kinds in biology (e.g., offspring, homologs and species). (Braddon-Mitchell and Jackson (1997) have argued that this “real nature” response is not available to proponents of teleological theories of content. See Papineau 2001 for a response.)

The Methodological Individualism debate is also relevant here, since it questions whether science should have any historical kinds. If those who favor methodological individualism are correct, teleological theories of content do not provide us with a good scientific way to individuate psychological states (Fodor 1991). One argument for methodological individualism involves the claim that science should individuate kinds on the basis of causal powers. In brief, the idea is that, since science is in the business of causal explanations and causal powers are what are relevant for causal explanations, science should classify items on the basis of similarities and differences in causal powers. Since there are no differences in causal powers between Davidson's kidney or beliefs at t and Swampman's kidney-analog and belief-analogs when he first pops into existence, Davidson's kidney and Swampman's kidney-analog should belong to all of the same scientific kinds, and Davidson's beliefs and Swampman's belief-analogs should belong to all of the same scientific kinds. (For discussion of this issue, see Heil & Mele eds. 1993.)

One problem with methodological individualism is that it is radically revisionary, for biology at least. Moreover, if we classify kidneys on the basis of actual causal powers, we include Swampman's kidney-analog at the cost of excluding many real kidneys, such as the kidneys of people on dialysis. While the arguments given in favor of methodological individualism may seem plausible, they are not usually accompanied by any attempt to understand the role that historical classifications play in biology or elsewhere. That being the case, we have reason to worry that the understanding of scientific classification that supports methodological individualism is too simple. Further, it must be kept in mind that the proponents of teleological theories claim that a historical theory of content is needed to capture psycho-semantic norms. Perhaps this is wrong. But if it is right, and if cognitive science needs such a normative notion, then methodological individualism must be wrong. Thus the debate must turn on the more specific issues of whether normative content involves history and whether cognitive science needs normative content.

4.3 Sophisticated Concepts and Capacities

The weightiest objection to teleological theories of content and the hardest to assess is that it is unclear how such theories could explain our most sophisticated concepts and cognitive capacities.

No naturalistic theory of content yet makes it perfectly clear how we think about democracy, virtue, quarks or perhaps even tomorrow, and so this is not a problem that is peculiar to teleo-functional theories. However, it is sometimes argued that teleological theories of content have a special problem in this respect (e.g., Peacocke (1992)). The thought is that they may have some hope of working for contents that concern things that impact on fitness — food, shelter, mates, etc. — but that they are, in principle, unable to deal with contents that cannot have impacted on fitness, or not in any suitably selective way. Some contents cannot have impacted on fitness because they concern things that belong to the future or are non-existent. Others cannot affect fitness in any suitably selective way because, although they have an impact, their impact is too non-specific: for example, quarks have an impact but, because they are omnipresent in our environment, they cannot qualify as the content of a representation by virtue of some simple selectional story.

This objection is hard to assess for a number of reasons. One is that there are many different kinds of sophisticated concepts and capacities and accounting for them all is a large task. Another is that, while the objection is posed as an objection to all teleosemantic theories, different versions will address it in different ways. Yet another is that we might allow that it is still early days with respect to the development of teleological (and other) naturalistic theories of mental content. It has really only been since the advent of cognitive science in the middle of the last century and the general acceptance of a broadly physicalist perspective on the mind in the decades that followed that philosophers of mind have devoted much effort to trying to give a naturalistic theory of mental content.

In view of all of this, the present section can do little more than offer a few remarks about how some versions of teleosemantics make some inroads on the issue. Most of the points that follow have been touched on in earlier sections.

It should be emphasized that those who favor teleosemantic theories rarely restrict the relevant functions to those that derive from natural selection operating over an evolutionary span of time. As remarked earlier, there might be non-intentional selection processes that operate over the span of a culture or over the span of an individual's own development or life. Meme selection, conditioning and other forms of learning, and neural selection are all regarded as relevant kinds of selection by some proponents of teleosemantics.

Those who favor modest teleo-functional theories would also emphasize that conceptual atomism is highly controversial. Conceptual atomism is the view that every concept of roughly the grain of a lexeme of a natural language derives its content, constitutively speaking, independently of every other such concept's content. Many psychologists and some philosophers believe that some complex concepts are somehow composed out of or are anyway learned through the use of simpler concepts. Crucially, to deny that conceptual atomism is true does not commit one to the view that complex concepts are simply defined in terms of simpler concepts (a fuller discussion of concepts and whether conceptions can play any role in determining reference is outside of the scope of this entry).

Millikan would in this context ask us to take note of her notions of derived and adapted proper functions. What Millikan refers to as a “direct proper function” belongs to a mechanism for which there has been selection. The mechanisms that produce camouflage patterns on the surface of the octopus have the direct proper function to do so. The patterns that the mechanisms produce by means of which they perform this function possess what Millikan calls a “derived proper function,” derived from the function of the mechanism to provide camouflage. Further, a pattern produced on a particular occasion has an “adapted derived proper function,” which is a relational function, in this case to provide camouflage in that particular setting in which the octopus is situated. Millikan makes use of these extended senses in which items may have functions to try to explain the contents of novel representations and representations that are produced as a result of learning. Learning mechanisms have certain functions and when they perform their functions in particular circumstances their products can have adapted derived proper functions in relation to those circumstances, whether or not the circumstances obtained during the history of our species.

Millikan (2000) gives an extensive treatment of concepts. In brief, her view is that conceptions play no role in determining the extensions of the concepts with which they are associated. Millikan's theory presupposes innate learning mechanisms that are tuned to identify substances of different sorts in accord with certain principles. The relevant sort of substance is the one that accounts for the past selective success of the learning mechanisms. For instance, some mental mechanisms might have been selected for recognizing the faces of individuals in accord with certain principles of operation, and others might have been selected for recognizing animals of different species in accord with other principles of operation. Such a mechanism can acquire the “purpose” of recognizing something more specific, such as a particular individual's face or the animals of a particular species, because it was selected for recognizing things in that domain (faces or animals) in accord with certain principles of operation and because, in accord with those principles, it is that particular face or that particular species that it now has the “purpose” to recognize. The extension of a substance concept, she tells us, is the substance it was selected to recognize.

Large issues relevant to assessing the different teleological theories of content remain to be settled. On a hopeful note, much good work has been done in exploring the possible range of such theories, in producing interesting in-principle objections and in responding to such objections in ways that have resulted in better developed or better defended versions. We should also keep in mind that serious work on naturalistic theories of content has only been going on for decades rather than centuries and that, on a philosophical timescale, that is quite a short time.

Bibliography

  • Agar, N., 1993, “What do Frogs Really Believe?”, in Australasian Journal of Philosophy, 71: 1–12.
  • Allen, C., Bekoff, M. & Lauder, G. (eds.), 1998, Nature's Purposes: Analyses of Function and Design in Biology, Cambridge, Mass: Bradford, MIT.
  • Ariew, A., Cummins, R., & Perlman, M., (eds.), 2002, Functions: New Readings in the Philosophy of Biology and Psychology, Oxford: Oxford University Press.
  • Ayala, F., 1970, “Teleological Explanations in Evolutionary Biology”, in Philosophy of Science, 37: 1–15.
  • Bedau, M., 1991, “Can Biological Teleology be Naturalized?”, in Journal of Philosophy, 88: 647–55.
  • Block, N., 1986, “Advertisement for a Semantics for Psychology”, in P. French, T. Uehling, and H. Wettstein (eds.), Studies in the Philosophy of Mind, Midwest Studies in Philosophy (Volume 10), Minneapolis: University of Minnesota Press.
  • Block, N., & Kitcher, P. 2010, “Misunderstanding Darwin: Natural Selection's Secular Critics Get it Wrong”, in Boston Review: March/April, 29–32.
  • Boorse, C., 1976, “Wright on Functions”, in The Philosophical Review, 85: 70–86.
  • ––– 2002, “A Rebuttal on Functions”, in A. Ariew, R. Cummins and M. Perlman (eds.), Functions: New Essays in the Philosophy of Biology, Oxford: Oxford University Press, 63–112.
  • Braddon-Mitchell, D., & Jackson, F., 1997, “The Teleological Theory of Content”, in Australasian Journal of Philosophy, 75: 474–89.
  • Buller, D., 1999, Function, Selection and Design, New York: State University of New York Press.
  • Burge, T., 1979, “Individualism and the Mental”, in P. French, T. Uehling Jr. and H. Wettstein (eds.), Contemporary Perspectives in the Philosophy of Language, Midwest Studies in Philosophy, 2, Minneapolis: University of Minnesota Press.
  • ––– 1986, “Individualism and Psychology”, in Philosophical Review, 95: 3–45.
  • Carey, S., 2009, The Origin of Concepts, New York: Oxford University Press.
  • Chisholm, R., 1957, Perceiving: A Philosophical Study, Ithaca, NY: Cornell University Press.
  • Craver, C., 2001, “Role Functions, Mechanism and Hierarchy,” in Philosophy of Science, 68: 31–55.
  • Cummins, R., 1975, “Functional Analysis”, in Journal of Philosophy, 72: 741–765.
  • ––– 1996, Representations, Targets and Attitudes, Cambridge, Mass: MIT Press.
  • Davidson, D., 1987, “Knowing One's Own Mind”, in Proceedings and Addresses of the American Philosophical Association, 60: 441–58.
  • Davies, P. S., 2001, Norms of Nature: Naturalism and the Nature of Functions, Cambridge, MA: MIT Press.
  • Dawkins, R., 1986, The Blind Watchmaker: Why the evidence of evolution reveals a universe without design, New York: Norton.
  • Dennett, D., 1988, “Evolution, Error and Intentionality,” in Y. Wilks and D. Partridge (eds.), Sourcebook on the Foundations of Artificial Intelligence, New Mexico: New Mexico University Press.
  • ––– 1995, Darwin's Dangerous Idea, New York: Simon & Schuster.
  • Devitt, M., 1996, Coming to Our Senses: A Naturalistic Program for Semantic Localism, Cambridge, UK: Cambridge University Press.
  • Dretske, F., 1981, Knowledge and the Flow of Information, Cambridge, MA: MIT Press.
  • ––– 1986, “Misrepresentation”, in Radu Bogdan (ed.), Belief: Form, Content and Function, New York: Oxford University Press, 17–36.
  • ––– 1988, Explaining Behavior, Cambridge, MA: Bradford, MIT.
  • ––– 1991, “Dretske's replies”, in B. McLaughlin (ed.), op. cit., 180–221.
  • ––– 1995, Naturalizing the Mind, Cambridge, MA: MIT Press.
  • ––– 1996, “Absent Qualia”, in Mind and Language, 11 (1): 70–130.
  • Enc, B., 2002, “Indeterminacy of Function Attributions”, in Ariew, A., Cummins, R., and Perlman, M., (eds.) op. cit.
  • Fodor, J. A., 1981, “The Current Status of the Innateness Controversy”, in Fodor, J., Representations, Cambridge, MA: MIT Press.
  • ––– 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind, Cambridge, MA: MIT Press, Bradford Books.
  • ––– 1990(a), “Psychosemantics, or: Where do truth conditions come from?”, in W. Lycan (ed.), Mind and Cognition: A Reader, Oxford: Basil Blackwell, 312–337.
  • ––– 1990(b), “A Theory of Content”, in A Theory of Content and Other Essays, Cambridge, MA: MIT Press, Bradford Book.
  • ––– 1990(c), “Information and Representation” in P. Hanson (ed.) Information, Language and Cognition, University of British Columbia Press.
  • ––– 1991, “A Modal Argument for Narrow Content”, in Journal of Philosophy, 88: 5–25.
  • ––– 1996, “Deconstructing Dennett's Darwin”, in Mind and Language 11: 246–262.
  • Fodor, J. & Lepore, E., 1992, Holism: A Shopper's Guide, Oxford: Blackwell.
  • Fodor, J. & Piattelli-Palmarini, M., 2010, What Darwin Got Wrong, New York: Farrar, Straus and Giroux.
  • Gallistel, R., 1990, The Organization of Learning, Cambridge, MA: Bradford, MIT, ch.2.
  • Garson, J., 2011, “Selected Effects Functions and Causal Role Functions in the Brain: The Case for an Etiological Approach to Neuroscience”, in Biology & Philosophy, 26: 547–565.
  • Gelman, S., & Wellman, H., 1999 “Insides and Essences: Early Understandings of the Non-obvious”, in Margolis and Laurence, (eds.) op. cit.: 613–637.
  • Godfrey-Smith, P., 1994, “A Modern History Theory of Functions”, in Noûs, 28 (3): 344–362.
  • Godfrey-Smith, P., 1989, “Misinformation”, in Canadian Journal of Philosophy, 19 (4): 533–550.
  • Goodman, N., 1976, Languages of Art, Indianapolis: Hackett.
  • Griffiths, P., 1993, “Functional Analysis and Proper Functions”, British Journal for the Philosophy of Science, 44, 409–422.
  • Griffiths, P. & Goode, P. E., 1995, “The Misuse of Sober's Selection for/Selection of Distinction”, in Biology and Philosophy, 10: 99–107.
  • Hall, R., 1990, “Does Representational Content Arise from Biological Function?”, in Philosophy of Science Association, 1: 193–199.
  • Heil, J. and Mele, A. (eds.), 1993, Mental Causation, Oxford: Oxford University Press.
  • Jacob, P., 1997, What Minds Can Do: Intentionality in a Non-Intentional World, Cambridge: Cambridge University Press.
  • ––– 2001, “Is Meaning Intrinsically Normative?”, in Proceedings, Meeting of the German Analytical Philosophy (GAP), Bielefeld, Germany.
  • Kingsbury, J., Ryder, D. & Williford, K. (eds.), forthcoming, Millikan and Her Critics, Oxford: Blackwell.
  • Kitcher, P., 1993, “Function and Design” in Midwest Studies in Philosophy, XVIII.
  • Lettvin, J., Maturana, H., McCulloch, W., & Pitts, W., 1959, “What the frog's eye tells the frog's brain”, in Proceedings of the IRE, Vol. 47.
  • Lewens, T., 2004, Organisms and Artifacts: Design in Nature and Elsewhere, Cambridge, MA: MIT Press.
  • Lewis, D., 1980, “Mad Pain and Martian Pain”, in Readings in the Philosophy of Psychology, vol. 1, N. Block (ed.), Cambridge, MA: Harvard University Press, 216–222.
  • Loewer, B., 1987, “From Information to Intentionality” in Synthese, 70: 287–317.
  • Margolis, E. & Laurence, S. (eds.), 1999, Concepts: Core Readings, Cambridge, MA: MIT Press.
  • McLaughlin, B., (ed.), 1991, Dretske and his Critics, Cambridge, MA: Blackwell.
  • Millikan, R., 1984, Language, Thought and Other Biological Categories, Cambridge, MA: MIT Press.
  • ––– 1989a, “In Defense of Proper Functions”, in Philosophy of Science, 56 (2): 288–302; reprinted in Millikan 1993, op. cit.
  • ––– 1989b, “Biosemantics”, in Journal of Philosophy, 86: 281–97.
  • ––– 1990, “Truth, Rules, Hoverflies and the Kripke-Wittgenstein Paradox” in Philosophical Review, 99: 232–53.
  • ––– 1991, “Speaking Up for Darwin” in Loewer, B. & Rey, G. (eds.) (1991) Meaning in Mind: Fodor and his critics, Cambridge, MA: Blackwell, 151–165.
  • ––– 1993, White Queen Psychology and Other Essays for Alice, Cambridge, MA: MIT Press.
  • ––– 1996, “On Swampkinds”, in Mind and Language, 11 (1): 70–130.
  • ––– 2000, On Clear and Confused Ideas: An Essay about Substance Concepts, Cambridge: Cambridge University Press.
  • ––– 2004, Varieties of Meaning, Cambridge, Mass: MIT Press.
  • Nanay, B., 2010, “A Modal Theory of Content,” in Journal of Philosophy, 107: 412–431.
  • Neander, K., 1983, Abnormal Psychobiology, Ph.D. thesis, La Trobe.
  • ––– 1991, “Functions as Selected Effects”, in Philosophy of Science, 58: 168–184.
  • ––– 1995, “Malfunctioning and Misrepresenting”, in Philosophical Studies, 79: 109–141.
  • ––– 1996, “Swampman Meets Swampcow”, in Mind and Language, 11 (1): 70–130.
  • ––– 2002, “Types of Traits: the importance of functional homologues”, in A. Ariew, R. Cummins & M. Perlman (eds.), op. cit., 390–415.
  • ––– 2006, “Content for Cognitive Science”, in G. Macdonald and D. Papineau (eds.), Teleosemantics, Oxford: Oxford University Press, 167–194.
  • ––– forthcoming, “Toward an Informational Teleosemantics”, in J. Kingsbury, D. Ryder and K. Williford (eds.) Millikan and Her Critics, Oxford: Blackwell.
  • Palmer, S., 1999, Vision Science: Photons to Phenomenology, Cambridge, MA: MIT Press.
  • Papineau, D., 1984, “Representation and Explanation”, in Philosophy of Science, 51: 550–72.
  • ––– 1987, Reality and Representation, Oxford: Basil Blackwell.
  • ––– 1990, “Truth and Teleology,” in D. Knowles (ed.), Explanation and its Limits, Cambridge: Cambridge University Press, 21–44.
  • ––– 1993, Philosophical Naturalism, Oxford: Blackwell.
  • ––– 1998, “Teleosemantics and Indeterminacy”, in Australasian Journal of Philosophy, 76: 1–14.
  • ––– 2001, “The Status of Teleosemantics, or How to Stop Worrying about Swampman”, in Australasian Journal of Philosophy, 79: 279–89.
  • ––– 2010, “Review of Fodor and Piattelli-Palmarini's What Darwin Got Wrong”, in Prospect, 168: 83–84.
  • Peacocke, C., 1992, A Study of Concepts, Cambridge, MA: MIT Press.
  • Pietroski, P., 1992, “Intentional and Teleological Error”, in Pacific Philosophical Quarterly, 73: 267–81.
  • Price, C., 1998, “Determinate Functions”, in Noûs, 32: 54–75.
  • ––– 2001, Functions in Mind: A Theory of Intentional Content, Oxford: Clarendon Press.
  • Prinz, J., 2002, Furnishing the Mind: Concepts and Their Perceptual Basis, Cambridge, MA: Bradford, MIT.
  • Putnam, H., 1975, “The Meaning of ‘Meaning’”, in K. Gunderson (ed.), Language, Mind and Knowledge, Minneapolis: University of Minnesota Press, 131–93; reprinted in H. Putnam, Philosophical Papers (Volume 2): Mind, Language and Reality, Cambridge, UK: Cambridge University Press.
  • Rey, G., 1997, Contemporary Philosophy of Mind, Cambridge, MA: Blackwell.
  • Schwartz, P., 1999, “Proper Function and Recent Selection,” in Philosophy of Science, 66 (3) (Supplement): S210–S222.
  • Shapiro, L., 1992, “Darwin and Disjunction: Foraging Theory and Univocal Assignments of Content,” in Proceedings of the 1992 Biennial Meeting of the Philosophy of Science Association, vol. 1, 469–480.
  • Shea, N., 2007, “Consumers Need Information: supplementing teleosemantics with an input condition” in Philosophy and Phenomenological Research, 75 (2): 404–435.
  • ––– forthcoming, “Millikan's Isomorphism Requirement” in J. Kingsbury, D. Ryder and K. Williford (eds.), Millikan and Her Critics, Oxford: Blackwell.
  • Sober, E., 1984, The Nature of Selection, Chicago: University of Chicago Press.
  • Stampe, D., 1977, “Toward a Causal Theory of Linguistic Representation”, in P. A. French, T. E. Uehling, Jr., and H. K. Wettstein (eds) Midwest Studies in Philosophy: Studies in the Philosophy of Language, vol. 2, Minneapolis: University of Minnesota Press, 81–102.
  • Sterelny, K., 1990, The Representational Theory of Mind: An Introduction, Cambridge, MA: Blackwell.
  • Wright, L., 1973, “Functions”, in The Philosophical Review, 82: 139–168.
  • ––– 1976, Teleological Explanation, Berkeley, CA: University of California Press.

Acknowledgments

Thanks to David Chalmers and Georges Rey for penetrating comments. The editors would also like to thank Christopher von Bülow for carefully reading this entry and calling numerous typographical errors to our attention.

Copyright © 2012 by
Karen Neander <kneander@duke.edu>
