Animal Consciousness

First published Sat Dec 23, 1995; substantive revision Fri Apr 11, 2014

Questions about animal consciousness — in particular, which animals have consciousness and what (if anything) that consciousness might be like — are both scientific and philosophical. They are scientific because answering them will require gathering information using scientific techniques — no amount of armchair pondering, conceptual analysis, logic, a priori theory-building, transcendental inference or introspection will tell us whether a platypus, an iguana, or a squid (to take a few examples) enjoys a life of subjective experience — at some point we'll have to learn something about the animals. Just what sort(s) of science can bear on these questions is a live question, but at the least this will include investigations of the behavior and neurophysiology of a wide taxonomic range of animals, as well as the phylogenetic relationships among taxa. But these questions are deeply philosophical as well, with epistemological, metaphysical, and phenomenological dimensions. Progress will therefore ultimately require interdisciplinary work by philosophers willing to engage with the empirical details of animal biology, as well as scientists who are sensitive to the philosophical complexities of the issue.

1. Motivations

There are many reasons for philosophical interest in nonhuman animal (hereafter “animal”) consciousness:

First, if philosophy often begins with questions about the place of humans in nature, one way humans have attempted to locate themselves is by comparison and contrast with those things in nature most similar to themselves, i.e., other animals. At least in the West, the traditional — and perhaps still intuitive to many people — way of thinking about consciousness is as primarily an innate endowment of humans, which other animals may or may not share in virtue of being sufficiently like us. Within the traditional Biblical cosmology, while all animals were said to have arisen through divine intentional creation, humans were the only ones created in the likeness of the deity, and thus enjoyed a special, privileged role in the intended workings of the cosmos — including, for example, access to an eternal afterlife not overpopulated with fleas, ants and snails. (See Lewis, 2009 Ch 9 for an in-depth treatment of the problem of animal consciousness in relation to Christian theology.) However, within a modern biological worldview, while humans may be unique in certain (perhaps quite important) respects, we are only one species of animal among many — one tip of one branch of the phylogenetic tree of life, and enjoy no particular special status.

From an evolutionary perspective, consciousness is a trait that some animals have (at least humans have it). Salient questions include: Is it a late evolved, narrowly distributed trait, or an older, more broadly shared trait? And did it evolve only once, or a number of times independently? From this viewpoint, the question “Are (non-human) animals conscious?” is rather strange, because, for example, it implicitly groups bats together with rabbits (as ‘nonhuman’ animals) in contrast to humans. In reality, rabbits are more closely related to humans than they are to bats (Nishihara et al. 2006), so framing the question this way embeds a false presupposition. Of course, it is consistent with an evolutionary perspective that humans are the only conscious animals. This would imply that consciousness was acquired through a recent evolutionary event that occurred since the split of our ancestral lineage from that of our closest non-human relatives, chimpanzees and bonobos (see section 6 for discussion of such hypotheses). But such a view requires support; though perhaps intuitive to some, its choice as a default position is arbitrary.

Second, there is a lot at stake morally in the question of whether animals are conscious beings or “mindless automata”. (See article on the Moral Status of Animals.) Many billions of animals are slaughtered every year for food, use in research, and other human purposes. Moreover, before their deaths, many — perhaps most — of these animals are subject to conditions of life that, if they are in fact experienced by the animals in anything like the way a human would experience them, amount to cruelty. Arguments that non-human animals are not conscious therefore effectively double as apologetics for our treatment of animals. When the question of animal consciousness is under consideration, our guilt or innocence as a civilization for an enormous body of cruelty may hang in the balance. However, some philosophers have argued that consciousness per se does not matter for the treatment of animals, and therefore either that a) even if animals are not conscious, they may deserve moral consideration, or b) even if animals are conscious, they may not deserve moral consideration. (For more discussion of the ethical issues, see Singer 1990 [1975]; Regan 1983; Rollin 1989; Varner 1998, 2012; Steiner 2008.)

Third, while theories of consciousness are frequently developed without special regard to questions about animal consciousness, the plausibility of such theories has sometimes been assessed against the results of their application to animal consciousness (and, similarly, to human infants). This raises questions about the relative epistemic weight of theoretical considerations (e.g. philosophical arguments for a given theory of consciousness) against particular case judgments or intuitions about whether a given creature is conscious. For example, Searle (1998) argues that our intuitive, commonsense attributions of intentional and emotional states to dogs carry more epistemic weight than philosophically motivated skeptical concerns. In contrast, Carruthers (1989) asserts that his own arguments that nonhuman animals (even dogs) lack consciousness are sufficiently weighty that we are morally obligated to eradicate or ignore our sympathetic feelings toward such creatures. Should our theories of consciousness be constrained by our intuitive attributions of consciousness to animals (or, e.g., babies), or should the former override the latter?

Fourth, the problem of determining whether animals are conscious stretches the limits of knowledge and scientific methodology (beyond the breaking point, according to some). The so-called “cognitive revolution” that took place during the latter half of the 20th century has led to many innovative experiments by comparative psychologists and ethologists probing the cognitive capacities of animals. The philosophical issues surrounding the interpretation of experiments to investigate perception, learning, categorization, memory, spatial cognition, numerosity, communication, language, social cognition, theory of mind, causal reasoning, and metacognition in animals are discussed in the entry on animal cognition. Despite this work on cognition, the topic of consciousness per se in animals has remained controversial, even taboo, among many scientists, while other scientists from a variety of disciplinary backgrounds (e.g. neuroscience, animal behavior, evolutionary biology) have developed novel ways of approaching the subject. The recent Cambridge Declaration on Consciousness indicates that many scientists agree that “the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness.” However, other scientists, including Marian Stamp Dawkins, who has been prominent in the science of animal welfare (Dawkins 1985, 1993), are not ready to endorse the claim, writing that, “The mystery of consciousness remains. The explanatory gap is as wide as ever and all the wanting in the world will not take us across it” (Dawkins 2012, pp. 171–172).

Many philosophers and scientists have either argued or assumed that consciousness is inherently private, and hence that one's own experience is unknowable to others. While language may allow humans to cross this supposed gap by communicating their experience to others, this is allegedly not possible for other animals. Despite the controversy in philosophical and scientific circles, it remains a matter of common sense to most people that some animals do have conscious experiences. Most people, if asked why they think familiar animals such as their pets are conscious, would point to similarities between the behavior of those animals and human behavior — for example, animals seem to visibly express pleasure and displeasure and a variety of emotions, their behavior seems to be motivated by seeking food, comfort, social contact, etc., they seem aware of their surroundings and able to learn from experience. Similarity arguments for animal consciousness thus have roots in common sense observations. But they may also be bolstered by scientific investigations of behavior and the comparative study of brain anatomy and physiology, as well as considerations of evolutionary continuity between species. Neurological similarities between humans and other animals have been taken to suggest commonality of conscious experience; all mammals share the same basic brain anatomy, and much is shared with vertebrates more generally. Even structurally different brains may be neurodynamically similar in ways that enable inferences about animal consciousness to be drawn (Seth et al. 2005).

As well as generic arguments about the connections among consciousness, neural activity, and behavior, a considerable amount of scientific research directed towards understanding particular conscious states uses animals as proxies for humans. The reactions of many animals, particularly other mammals, to bodily events that humans would report as painful are easily and automatically recognized by most people as pain responses. High-pitched vocalizations, fear responses, nursing of injuries, and learned avoidance are among the responses to noxious stimuli that are all part of the common mammalian heritage, and similar responses are also observable in organisms from a wide range of taxonomic groups (see section 7.1 below).

Much of the research that is of direct relevance to the treatment of human pain, including on the efficacy of analgesics and anesthetics, is conducted on rats and other animals. The validity of this research depends on the similarity of the mechanisms involved[1], and to many it seems arbitrary to deny that injured rats, who respond well to opiates for example, feel pain.[2] Likewise, much of the basic research that is of direct relevance to understanding human visual consciousness has been conducted on the very similar visual systems of monkeys. Monkeys whose primary visual cortex is damaged even show impairments analogous to those of human blindsight patients (Stoerig & Cowey 1997), suggesting that the visual consciousness of intact monkeys is similar to that of intact humans. Scientific demonstrations that members of other species, even of other phyla, are susceptible to the same visual illusions as we are (e.g., Fujita et al. 1991) suggest that their visual experiences are similar.

It is often argued that the use of animals to model neuropsychiatric disorders presupposes convergence of emotional and other conscious states and further refinements of those models may strengthen the argument for attributing such states to animals (Sufka et al. 2009). An interesting reversal of the modeling relationship can be found in the work of Temple Grandin, Professor of Animal Science at Colorado State University, who uses her experience as a so-called “high-functioning autistic” as the basis for her understanding of the nature of animal experience (Grandin 1995, 2004).

Such similarity arguments are, of course, inherently limited in that it is always open to critics to exploit some disanalogy between animals and humans to argue that the similarities don't entail the conclusion that both are sentient. Even when bolstered by evolutionary considerations of continuity between the species, the arguments are vulnerable, for the mere fact that humans have a trait does not entail that our closest relatives must have that trait too. It is not inconsistent with evolutionary continuity to maintain that only humans have the capacity to learn to play chess. Likewise for consciousness. Povinelli & Giambrone (2000) also argue that the argument from analogy fails because superficial observation of quite similar behaviors, even in closely related species, does not guarantee that the underlying cognitive principles are the same, a point that Povinelli believes is demonstrated by his research into how chimpanzees use cues to track visual attention (Povinelli 1996). (See Allen 2002 for criticism of their analysis of the argument by analogy.)

Perhaps a combination of behavioral, physiological and morphological similarities with evolutionary theory amounts to a stronger overall case[3]. However, a convincing argument will likely also require motivation in terms of a well developed theory of the structure and function of consciousness as a cognitive process — a route that many recent participants in the debate on animal consciousness have pursued (see section 6).

2. Concepts of Consciousness

The term “consciousness” is notoriously ambiguous and difficult to define. Having origins in folk psychology, “consciousness” has a multitude of uses that may not be resolvable into a single, coherent concept (Wilkes 1984). Nevertheless, several useful distinctions among different notions of consciousness have been made, and with the help of these distinctions it is possible to gain some clarity on the important questions that remain about animal consciousness.

Two ordinary senses of consciousness which are not in dispute when applied to animals are the sense of consciousness involved when a creature is awake rather than asleep[4], or in a coma, and the sense of consciousness implicated in the basic ability of organisms to perceive and thereby respond to selected features of their environments, thus making them conscious or aware of those features. Consciousness in both these senses is identifiable in organisms belonging to a wide variety of taxonomic groups (see, e.g., Mather 2008).

A third, more technical notion of consciousness, access consciousness, has been introduced by Block (1995) to capture the sense in which mental representations may be poised for use in rational control of action or speech. This “dispositional” account of access consciousness — the idea that the representational content is available for other systems to use — is amended by Block (2005) to include an occurrent aspect in which the content is “broadcast” in a “global workspace” (Baars 1997) which is then available for higher cognitive processing tasks such as categorization, reasoning, planning, and voluntary direction of attention. Block believes that many animals possess access consciousness (speech is not a requirement). Indeed, some of the neurological evidence cited by Block (2005) in support of the global workspace is derived from monkeys. But clearly an author such as Descartes, who, we will see, denied speech, language, and rationality to animals, would also deny access consciousness to them. Those who follow Davidson (1975) in denying intentional states to animals would likely concur.

There are two remaining senses of consciousness that cause more controversy when applied to animals: phenomenal consciousness and self-consciousness.

Phenomenal consciousness refers to the qualitative, subjective, experiential, or phenomenological aspects of conscious experience, sometimes identified with qualia. (In this article we also use the term “sentience” to refer to phenomenal consciousness.) To contemplate animal consciousness in this sense is to consider the possibility that, in Nagel's (1974) phrase, there might be “something it is like” to be a member of another species. Nagel disputes our capacity to know, imagine, or describe in scientific (objective) terms what it is like to be a bat, but he assumes that there is something it is like.

For many authors, Nagel's formulation of phenomenal consciousness as “what it's like” serves as a reference point for what's at stake in the debate on animal consciousness — in investigating whether a group of animals are conscious, the crucial question is whether there is ‘something it is like’ to be those animals, i.e. whether there is a subjective experience of life or being for them, a proprietary perspective that individuals have on their own perceptual, cognitive and emotive processes.

Though some authors (including Nagel himself) have argued that the very subjectivity of phenomenal consciousness makes it exceedingly difficult or even impossible to investigate scientifically, particularly in other species, others have proceeded by developing structural and/or functional theories of consciousness, and using these to argue for a particular hypothesis about the distribution of consciousness among animals. Such theories will be discussed below, in sections 5 and 6.

Self-consciousness refers to a subject's awareness of itself, but is also a notoriously ambiguous term — there are importantly distinct senses in which a subject can be self-aware (see for example the SEP article on Phenomenological Approaches to Self-Consciousness). These include: an awareness of one's body as a physical object, or as the medium of one's own perception and action (i.e. bodily self-awareness); awareness of one's own mental states (i.e. mental or experiential self-awareness); awareness of oneself as perceived by others, or as a member of a social group such as a family, team, or institution (i.e. social self-awareness); awareness of oneself as a persistent character in the narratives told by oneself and others (i.e. narrative self-awareness). This list is far from exhaustive, and further, each listed notion is subject to further disambiguation. Hence, although on many theories self-consciousness is tightly related to phenomenal consciousness, proposals to this effect can vary greatly in their meaning and their implications for which animals might be conscious.

The remainder of this article deals primarily with the attribution of consciousness in its phenomenal sense to animals, although there will be some discussion of access consciousness, self-consciousness and theory of mind in animals, especially where these have been related theoretically to phenomenal consciousness — as, for instance, in Carruthers' (1998a,b, 2000) argument that a particular sort of mental self-representation is required for phenomenal consciousness.

3. Historical Background

Questions about animal consciousness in the Western tradition have their roots in ancient discussions about the nature of human beings, as filtered through the “modern” philosophy of Descartes. It would be anachronistic to read ideas about consciousness from today back into the ancient literature. Nevertheless, because consciousness is sometimes thought to be a uniquely human mental phenomenon, it is important to understand the origins of the idea that humans are qualitatively (and “qualia-tatively”) different from animals.

Aristotle asserted that only humans had rational souls, while the locomotive souls shared by all animals, human and nonhuman, endowed animals with instincts suited to their successful reproduction and survival. Sorabji (1993) argues that the denial of reason to animals created a crisis for Greek thought, requiring a “wholesale reanalysis” (p. 7) of the nature of mental capacities, and a revision in thinking about “man and his place in nature above the animals” (ibid.). The argument about what reasoning is, and whether animals display it, remains with us 25 centuries later, as evidenced by the volume Rational Animals? (Hurley & Nudds 2006). The Great Chain of Being derived from early Christian interpretation of Aristotle's scale of nature (Lovejoy 1936) provides another Aristotelian influence on the debate about animal minds.

Two millennia after Aristotle, Descartes' mechanistic philosophy introduced the idea of a reflex to explain the behavior of nonhuman animals. Although his conception of animals treated them as reflex-driven machines, with no intellectual capacities, it is important to recognize that he took mechanistic explanation to be perfectly adequate for explaining sensation and perception — aspects of animal behavior that are nowadays often associated with consciousness. He drew the line only at rational thought and understanding. Given the Aristotelian division between instinct and reason and the Cartesian distinction between mechanical reflex and rational thought, it's tempting to map the one distinction onto the other. Nevertheless, Crowley & Allen (2008) argue that it would be a mistake to assimilate the two. First, a number of authors before and after Darwin have believed that conscious experience can accompany instinctive and reflexive actions. Second, the dependence of phenomenal consciousness on rational, self-reflective thought is a particularly strong and contentious claim (although it has current defenders, discussed below).

Although the roots of careful observation of and experimentation on the natural world go back to ancient times, the study of animal behavior remained largely anecdotal until long after the scientific revolution. Animals were, of course, widely used in pursuit of answers to anatomical, physiological, and embryological questions. Vivisection was carried out by such ancient luminaries as Galen, and there was a resurgence of the practice in early modern times (Bertoloni Meli 2012). Descartes himself practiced and advocated vivisection (Descartes, Letter to Plempius, Feb 15 1638), and wrote in correspondence that the mechanical understanding of animals absolved people of any guilt for killing and eating animals. Mechanists who followed him (e.g. Malebranche) used Descartes' denial of reason and a soul to animals as a rationale for their belief that animals were incapable of suffering or emotion, and did not deserve moral consideration — justifying vivisection and other brutal treatment (see Olson 1990, pp. 39–40, for support of this claim). The idea that animal behavior is purely reflexive may also have served to diminish interest in treating behavior as a target of careful study in its own right.

A few glimmers of experimental approaches to animal behavior can be seen in the late 18th century (e.g., Barrington 1773; White 1789), and soon thereafter Frédéric Cuvier worked from 1804 until his death in 1838 on the development of sexual and social behavior in captive mammals. By the mid 19th century Alfred Russel Wallace (1867) was arguing explicitly for an experimental approach to animal behavior, and Douglas Spalding's (1872) experiments on instinctual feeding behaviors in chicks were seminal. Still, the emergence of experimental approaches had very little to say about consciousness per se, though Spalding's work can be seen as a contribution to the discussion about instinct and reason.

In the same vein of instinct vs. reason, Darwin in the Descent of Man wrote, “It is a significant fact, that the more the habits of any particular animal are studied by a naturalist, the more he attributes to reason, and the less to unlearnt instinct” (1871, Book I, p. 46). He devoted considerable attention in both the Origin and in the Descent of Man to animal behavior, with the obvious goal of demonstrating mental continuity among the species. To make his case, Darwin relied heavily on anecdotes provided by his correspondents — a project infamously pursued after Darwin's death by his protégé George Romanes (1882). Darwin also carried out experiments and was a keen observer, however. In his final work he describes experiments on the flexibility of earthworm behavior in manipulating leaves, which he took to show considerable intelligence (Darwin 1881; see also Crist 2002).

The idea of behavioral flexibility is central to discussions of animal mind and consciousness. Descartes' conception of animals as automata seems to make phenomenal consciousness superfluous at best — a connection whose philosophical development was traced by T.H. Huxley (1874). Huxley reported a series of experiments on a frog, showing very similar reflexive behavior even when its spinal cord had been severed, or large portions of its brain removed. He argued that without a brain the frog could not be conscious, but since it could still do the same sorts of things it could do before, there is no need to assume consciousness even in the presence of the entire brain; consciousness, he concluded, is superfluous. (The argument is somewhat curious since it seems to show too much by making the brain itself superfluous to the frog's behavior!)

Still, for those (including Huxley) who became quickly convinced of the correctness of Darwin's theory of evolution, understanding and defending mental continuity between humans and animals loomed large. In his Principles of Psychology (1890), William James promoted the idea of differing intensities of conscious experience across the animal kingdom, an idea that was echoed by the leading British psychologist of his day, Conwy Lloyd Morgan in his 1894 textbook An Introduction to Comparative Psychology. Morgan had been very skeptical and critical of the anecdotal approach favored by Darwin and Romanes, but he came around to the Darwinian point of view about mental continuity if not about methodology. To address the methodological deficit he introduced his “double inductive” method for understanding the mental states of animals (Morgan 1894). The double induction consisted of inductive inferences based on observation of animal behavior combined with introspective knowledge of our own minds. At the same time, to counteract the anthropomorphic bias in the double inductive method, Lloyd Morgan introduced a principle now known as Morgan's canon: “in no case may we interpret an action as the outcome of the exercise of a higher psychical faculty, if it can be interpreted as the outcome of the exercise of one which stands lower in the psychological scale” (Lloyd Morgan 1894, p.53).

Lloyd Morgan's Double Induction Method from his 1894 textbook

Even though the double inductive method is now mainly of historical interest, Morgan's canon lives on. Questions about quite what the canon means and how to justify it are active topics of historical and philosophical investigation (e.g., Burghardt 1985; Sober 1998, 2005, 2012; Radick 2000; Thomas 2001; Fitzpatrick 2008). The questions include what Lloyd Morgan means by ‘higher’ and ‘lower’, to what extent the principle can or should be justified by evolutionary considerations, and whether the canon collapses to a principle of parsimony, a version of Ockham's razor, or some general principles of empirical justification. Despite current uncertainty about what it really means, Morgan's canon, interpreted (or, perhaps, misinterpreted; Thomas 2001) as a strong parsimony principle, served a central rhetorical role for behavioristic psychologists, who sought to eliminate any hint of Cartesian dualism from comparative psychology.

Behaviorism dominated American psychology in the early part of the 20th century, beginning with Thorndike's (1911) experiments on animals learning by trial and error to escape from the “puzzle boxes” that he had constructed. But even Thorndike's famous “law of effect” refers to the animal's “satisfaction or discomfort” (1911, p.244). It was with the radical anti-mentalism of John B. Watson (1928) and B.F. Skinner (1953), both of whom strongly rejected any attempts to explain animal behavior in terms of unobservable mental states, that American psychology became the science of behavior rather than, as the dictionary would have it, the science of mind and behavior.

At the same time, things were progressing rather differently in Europe, where ethological approaches to animal behavior were more dominant. Ethology is part natural history with an emphasis on fieldwork and part experimental science conducted on captive animals, reflecting the different styles of its two seminal figures, Konrad Lorenz and Niko Tinbergen (see Burkhardt 2005). Initially, “innate” behaviors were the central focus of Lorenz's work. According to Lorenz, it is the investigation of innate behaviors in related species that puts the study of animal behavior on a par with other branches of evolutionary biology, and he demonstrated that it was possible to derive the phylogenetic relations among species by comparing their instinctive behavioral repertoires (Lorenz 1971a). In pursuing this direction, Lorenz and Tinbergen explicitly sought to distance ethology from the purposive, mentalistic, animal psychology of Bierens de Haan and the lack of biological concern they detected in American comparative psychology (see Brigandt 2005). Like Lloyd Morgan, the ethologists rejected Romanes' anecdotal approach, but they also criticized Lloyd Morgan's subjectivist approach.

In the 1970s, Donald Griffin, who made his reputation taking careful physical measurements to prove that bats use echolocation, made a considerable splash with his plea for a return to questions about animal minds, especially animal consciousness. Griffin (1978) coined the term “cognitive ethology” to describe this research program, which is based in naturalistic observations of animal behavior and the attempt to understand animal minds in the context of evolution (see Allen 2004a). Fierce criticism of Griffin emerged both from psychologists and classically trained ethologists (Bekoff & Allen 1997). Griffin emphasized behavioral flexibility and versatility as the chief source of evidence for consciousness, which he defined as “the subjective state of feeling or thinking about objects and events” (Griffin & Speck 2004, p. 6). In seeing subjectivity, at least in simple forms, as a widespread phenomenon in the animal kingdom, Griffin's position also bears considerable resemblance to Lloyd Morgan's. Burghardt reports that “considerable discomfort with subjectivism” (Burghardt 1985, p. 907) arose during the Dahlem conference that Griffin convened in an early discipline-building exercise (Griffin 1981). Griffin's subjectivist position, and the suggestion that even insects such as honeybees are conscious, seemed to many scientists to represent a lamentable return to the anthropomorphic over-interpretation of anecdotes seen in Darwin and Romanes. This criticism may be partly unfair in that Griffin does not repeat the “friend-of-a-farmer” kinds of story collected by Romanes, but bases his interpretations on results from the more sophisticated scientific literature that had accumulated more than a century after Darwin (e.g., Giurfa et al. 2001). However, the charge of over-interpretation of those results may be harder to avoid. It is also important to note the role played by neurological evidence in his argument, when he concludes that the intensive search for neural correlates of consciousness has not revealed “any structure or process necessary for consciousness that is found only in human brains” (Griffin & Speck 2004). This view is widely although not universally shared by neuroscientists.

Griffin's behavior-based methodology for studying animal consciousness has also been dismissed as anthropomorphic (see Bekoff & Allen 1997 for a survey). But such criticisms may have overestimated the dangers of anthropomorphism (Fisher 1990) and many of the critics themselves rely on claims for which there are scant scientific data (e.g., Kennedy 1992, who claims that the “sin” of anthropomorphism may be programmed into humans genetically). At the same time, other scientists, whether or not they have explicitly endorsed Griffin's program, have sought to expand evolutionary investigation of animal consciousness to include the neurosciences and a broad range of functional considerations (e.g., Århem et al. 2002, and see section 6). Whatever the shortfalls of his specific proposals, Griffin played a crucial role in reintroducing explicit discussions of consciousness to the science of animal behavior and cognition, hence paving the way for modern investigations of the distribution and evolutionary origins of consciousness.

4. Epistemological and Metaphysical Issues

The topic of consciousness in nonhuman animals has been primarily of epistemological interest to philosophers of mind. Two central questions are:

  1. Can we know which animals beside humans are conscious? (The Distribution Question)[5]
  2. Can we know what, if anything, the experiences of animals are like? (The Phenomenological Question)

In his seminal paper “What is it like to be a bat?” Thomas Nagel (1974) simply assumes that there is something that it is like to be a bat, and focuses his attention on what he argues is the scientifically intractable problem of knowing what it is like. Nagel's confidence in the existence of conscious bat experiences would generally be held to be the commonsense view and, as the preceding section illustrates, a view that is increasingly taken for granted by many scientists too. But, as we shall see, it is subject to challenge and there are those who would argue that the Distribution Question is just as intractable as the Phenomenological Question.

4.1 The Problem of Other Minds

The two questions might be seen as special cases — or, alternatively, as generalized versions — of the skeptical “problem of other minds” — how can one know that others have mental states that are anything like one's own? Although there is no generally accepted solution to this problem, it is nevertheless generally ignored to good effect by psychologists, and indeed by most people, who in practice are willing to take for granted that others have mental states similar to theirs. However, it is often thought that knowledge of animal minds presents special methodological difficulties. First of all, nonhuman animals cannot describe their mental states using language. Although there have been attempts to teach human-like languages to members of other species, none has reached a level of conversational ability that would solve this problem directly (see Anderson 2004 for a review). Furthermore, except for some language-related work with parrots and dolphins, such approaches are generally limited to those animals most like ourselves, particularly the great apes. But there is great interest in possible forms of consciousness in a much wider variety of species than are suitable for such research. More generally, the problem of other minds is more acute when applied to nonhuman animals because the similarities between our behavior and bodies, and those of other animals (which form the basis for ‘analogical’ solutions to the problem of other minds), are less exact. As well, the perceptual access to other minds that some have argued defuses the problem of other minds is arguably weaker regarding the minds of other animals. (Sober 2000 discusses the problem of other minds within an evolutionary framework, and Farah 2008 provides a neuroscientist's perspective.)

4.2 The Epistemic Status of Intuitions and Perception of Mental States

For many people it seems obvious that familiar animals such as dogs and cats have conscious mental lives that include perceptual states, motivational and hedonic states, basic emotions and social attachments. David Hume, known for championing skepticism generally, wrote that “no truth appears to me more evident, than that beasts are endow'd with thought and reason as well as men” (1888, p. 176, reproduced in Jamieson 1998). Hume did not provide elaborate philosophical or empirical arguments to this effect — he thought it was clear from observation. As Searle (1998) puts it,

I do not infer that my dog is conscious, any more than, when I came into this room, I inferred that the people present are conscious. I simply respond to them as is appropriate to conscious beings. I just treat them as conscious beings and that is that.

What is the epistemic status of such pretheoretical intuitions (if that is indeed a fair way of describing them)?

Defenders of theories that deny consciousness to such animals must deny that such intuitions have any epistemic weight. For example, Dennett (who argues that consciousness is unique to humans) claims that intuitive attributions of mental states are “untrustworthy”, and points out that “it is, in fact, ridiculously easy to induce powerful intuitions of not just sentience but full-blown consciousness (ripe with malevolence or curiosity) by exposing people to quite simple robots made to move in familiar mammalian ways at mammalian speeds” (1995). (Emphasis in the original.)

Carruthers (1989) acknowledges that intuitive attributions of consciousness to animals are widespread, and go hand in hand with sympathetic attitudes toward animals (e.g. wanting to prevent suffering). However, he argues that these attitudes are incorrect, and we have a moral imperative to purge or at least override them:

In the case of brutes: since their experiences, including their pains, are nonconscious ones, their pains are of no immediate moral concern... Neither the pain of the broken leg, itself, nor its further effects upon the life of the dog, have any rational claim upon our sympathy... Are we really capable of suppressing our sympathy when we see an animal (especially a cuddly one) in severe pain? Not only is it possible that this should occur — after all, the history of mankind is replete with examples of those who have eradicated all feelings of sympathy even for members of other races, by telling themselves that they are not ‘really human’ — but it is a moral imperative that it ought to. Much time and money is presently spent on alleviating the pains of brutes which ought properly to be directed toward human beings, and many are now campaigning to reduce the efficiency of modern farming methods because of the pain caused to the animals involved. If the arguments presented here have been sound, such activities are not only morally unsupportable but morally objectionable (Carruthers 1989, p. 268).

It should be noted that while Carruthers continues to argue that only humans have consciousness, he has more recently amended his ethical view, holding that animals may deserve some moral concern despite lacking consciousness (1999).

A crucial point here is how trustworthy these pretheoretic intuitions about the minds of animals are. Call perceptualism the view that there is direct perception of (at least some) mental states of others. What this means is that, at least sometimes, when we observe another in a mental state (e.g. in joy, or in pain) their mental state is part of the content of our perception — we perceive the mental states of others. In contrast, call inferentialism the view that we perceive ‘mere behavior’ and must infer or reason our way to conclusions about mental states. An alternative way of framing the issue is in terms of whether or not behavior as we perceive it is laden with mental properties. According to perceptualism, we (at least sometimes) perceive states of mind in behavior — an action (e.g. walking across a room) can be perceivably angry or sad, purposeful or eager or hesitant, etc. Goals, desires, motivations, emotions, pain or pleasure, and many other mental states are manifested in modes of action — though they cannot be reduced to patterns of disposition to behave (this would amount to behaviorism), they are tightly linked to behavior by conceptual, constitutive or causal connections that ground perceptual access.

Jamieson (1998) argues for perceptualism, pointing out that our everyday practices of attributing mental states to nonhuman animals are deeply ingrained, automatic, conceptually unifying and empirically powerful. Strands of the same point of view can also be found in scientists writing about cognitive ethology (Allen 2004a) and in Wittgensteinian attitudes towards questions of animal mind (e.g., Gaita 2003). Perceptualism as a theory of social cognition (e.g. empathy) in philosophy of psychology has recently been defended by Zahavi (2011) and Gallagher (2008).

The perceptualism/inferentialism question is critical for deciding the epistemic value of common-sense attributions of mental states to nonhuman animals. If inferentialism is true, then when I see, e.g., a dog bounding around in front of me with a toy in its mouth, wagging its tail and looking at me, I may consider the possibility that the dog wants my attention, that it is feeling happy and playful — but this is only a hypothesis, for which I must provide a solid argument from justified premises if I am to justifiably believe it. On a perceptualist account, by contrast, I literally see (or at least, seem to see) that the dog wants my attention and feels happy and playful.

Perception is usually understood to ground defeasible epistemic warrant for belief — for example, if you look outside and it appears to be raining, you have some grounds to believe that it is raining. It is difficult to forgo this assumption without succumbing to radical global skepticism, since we base so many of our beliefs on perception. If seeming to perceive something to be the case provides defeasible epistemic warrant for believing it to be the case, then the fact that I seem to perceive a dog as being happy and playful warrants my belief that the dog is happy and playful, i.e. warrants my ascription of mental states to the dog. Given the prima facie epistemic support of seeming to perceive mental states in familiar animals like dogs, perceptualists would argue that only overwhelming evidence should overturn the common-sense, intuitive attribution of mental states to those animals. Whereas, as discussed above, Carruthers (1989) argues that because his theory denies consciousness to animals, we should strive to eradicate our intuitive attributions of consciousness, a perceptualist would respond that the evidence derived from our perceptual encounters with dogs is more convincing than his arguments (which hang on the plausibility of his higher-order thought theory of consciousness; see section 6.1).

Nevertheless, even if we have perceptual access to the mental states of other humans and familiar animals like our pet dogs, there are sharp limitations to how far this will get us toward solving the general problem of animal consciousness. First of all, our perceptual access may be limited to animals that are familiar, comfortable interacting with humans, and biologically very similar to humans (i.e. mammals). It may be much harder to ‘see’ what (if anything) a spider or a squid is thinking and feeling as she goes about her business — both because these animals may express their mental states differently, and more radically because they may have a very different repertoire of mental states.

Second, as argued by Dennett, there are examples where our perceptions of mental states can be deceived — so Dennett seems to embrace perceptualism, but to hold that perceptions of mental states are particularly unreliable. However, Dennett's favored example of the robot ‘Cog’, unlike nonhuman animals, was intentionally designed by human engineers to seem life-like, i.e., to mimic the dynamical properties of motion that trigger the perception of mindedness. Hence, there may be no more reason to fear that our seeming perceptions of mind in others are undermined by such examples than to fear that our perception of objects in space is undermined by the existence of photography — in both cases, human engineers can be characterized as having figured out ways of creating perceptual illusions. There are deep questions about how perception of bodies in motion might disclose the mental states of others — just as there are deep questions about how visual perception of objects in space is generally possible. But, pace Dennett, there is no clear reason why the existence of carefully crafted illusions undermines the general epistemic value of perception.

Third, even among scientists who are sympathetic to the idea of themselves as sensitive observers of animals with rich mental lives, there is the recognition that the scientific context requires them to provide a particular kind of empirical justification of mental state attributions. This demand requires those who would say that a tiger pacing in the zoo is “bored”, or that the hooked fish is in pain to define their terms, state empirical criteria for their application, and provide experimental or observational evidence for their claims. Even if perceptualism is a viable theory of folk practice with respect to attributing animal consciousness, it seems unlikely to make inroads against scientific epistemology.

4.3 Cognition and Consciousness

Many scientists and philosophers remain convinced that even if some questions about animal minds are empirically tractable, no amount of experimentation can provide access to phenomenal consciousness per se. This remains true even among those who are willing to invoke cognitive explanations of animal behavior that advert to internal representations. Opposition to dealing with consciousness can be understood in part as a legacy of behavioristic psychology first because of the behaviorists' rejection of terms for unobservables unless they could be formally defined in terms of observables, or otherwise operationalized experimentally, and second because of the strong association in many behaviorists' minds between the use of mentalistic terms and the twin bugaboos of Cartesian dualism and introspectionist psychology (Bekoff & Allen 1997). In some cases these scientists are even dualists themselves, but they are strongly committed to denying the possibility of scientifically investigating consciousness, and remain skeptical of all attempts to bring it into the scientific mainstream.

Also important has been a line of argumentation by philosophers that the subjective nature of consciousness makes it inherently difficult to study. Block's (1995) influential distinction between phenomenal and access consciousness was framed as part of a critique of treatments of consciousness in the psychological literature: by failing to properly distinguish between consciousness as experience (phenomenal consciousness) and consciousness as general availability of information (access consciousness), scientists were equivocating — drawing conclusions about consciousness in the phenomenal sense from premises about consciousness in the access sense. The conceptual distinction between access consciousness and phenomenal consciousness, and the difficulty of gaining empirical traction on the latter, has been seen as a major hurdle to empirical consciousness studies. Block himself has recently been more optimistic, even arguing that certain experiments can empirically tease apart phenomenal and access consciousness (Block 2011). But the distinction between access and phenomenal consciousness does raise special methodological hurdles for those who want to study the latter empirically.

Because consciousness is assumed to be private or subjective, it is often taken to be beyond the reach of objective scientific methods (e.g., Nagel 1974). This claim might be taken in either of two ways. On the one hand it might be taken to bear on the possibility of answering the Distribution Question, i.e., to reject the possibility of knowledge that a member of another taxonomic group (e.g., a bat) has conscious states. On the other hand it might be taken to bear on the possibility of answering the Phenomenological Question, i.e., to reject the possibility of knowledge of the phenomenological details of the mental states of a member of another taxonomic group. The difference between believing with justification that a bat is conscious and knowing what it is like to be a bat is important because, at best, the privacy of conscious experience supports a negative conclusion only about the latter. To support a negative conclusion about the former, one must also assume that consciousness has absolutely no measurable effects on behavior, i.e., one must accept epiphenomenalism. But if one rejects epiphenomenalism and maintains that consciousness does have effects on behavior then a strategy of inference to the best explanation may be used to support its attribution. Moreover, if particular conscious states have particular effects on behavior, then this strategy might be pursued to elucidate some specific features of the conscious experience of other animals, even if some aspects must remain out of reach because of our inability, as humans, to fully grasp what it would be like to experience them. More will be said about this in the next section.

If phenomenal consciousness is completely epiphenomenal, as some philosophers believe, then a search for the functions of consciousness is doomed to futility. In fact, if consciousness is completely epiphenomenal then it cannot have evolved by natural selection. On the assumption that phenomenal consciousness is an evolved characteristic of human minds, at least, and therefore that epiphenomenalism is false, then an attempt to understand the biological functions of consciousness may provide the best chance of identifying its occurrence in different species. (See Robinson 2007 for more discussion of this issue.)

4.4 Dualism and Physicalism

While epistemological and related methodological issues have been at the forefront of discussions about animal consciousness, philosophical attention to consciousness in the analytic tradition over the last several decades has focused on metaphysical questions about the nature of phenomenal consciousness and its fit (or lack thereof) within a naturalistic framework.

One might think that the question of what consciousness is (metaphysically) should be settled prior to tackling the Distribution Question — that ontology should drive the epistemology. However, the metaphysical questions that have occupied analytic philosophers over the last few decades are largely orthogonal to the distribution problem, which depends more on questions about the structure and function of consciousness, discussed below.

The traditional ‘Mind-Body Problem’ concerns the metaphysical status of mind in relation to the physical world (see the SEP article on dualism). Dualists argue that the mental and physical are fundamentally distinct, whereas physicalists hold that the mind is physical — and therefore not distinct, despite supposed appearances to the contrary. A third alternative is idealism, the view that the physical world is actually mental (and therefore that the two are not really distinct).

Dualistic theories of consciousness typically deny that it can be accounted for in the current terms of the natural sciences. Traditional dualists may argue that the reduction of consciousness to physically describable mechanisms is impossible on any concept of the physical. Others may hold that consciousness is an as-yet-undescribed fundamental constituent of the physical universe, not reducible to any known physical principles. Such accounts of consciousness (with the possible exception of those based in anthropocentric theology) provide no principled reasons, however, for doubting that animals are conscious.

Cartesian dualism is, of course, traditionally associated with the view that animals lack minds. Descartes' argument for this view was not based, however, on any ontological principles, but upon what he took to be the failure of animals to use language rationally, or to reason generally. On this basis he claimed that nothing in animal behavior requires a non-mechanistic, mental explanation; hence he saw no reason to attribute possession of mind to animals. In a sense, therefore, the Cartesian argument for the human-uniqueness of consciousness rests on the premise that material processes are insufficient to account for human capacities for language, rationality, and self-awareness (i.e. the awareness of oneself as, putatively, an essentially thinking thing) — and hence a non-material soul was posited to account for these phenomena. Few today would hold that material processes are incapable of producing complex phenomena such as language and rationality, and indeed our understanding of ‘the material’ has changed dramatically since Descartes' time. However, the subjective nature of consciousness continues to motivate some authors to argue that mental phenomena cannot be reduced to physical phenomena.

There is no conceptual reason why animal bodies are any less suitable vehicles for embodying a Cartesian soul, or any of the other putatively non-physical aspects of mind posited by proponents of dualism, than are human bodies. Hence, dualism does not preclude animal minds as a matter of conceptual necessity. The distribution of consciousness is a matter of empirical contingency on dualist theories just as it is on physicalist theories. For some dualists, this may come down to whether or not the animals in question possess specific cognitive capacities, although others may argue that the non-physical nature of the mental makes it difficult or impossible to investigate empirically.

Early physicalist accounts of consciousness explored the philosophical consequences of identifying consciousness with unspecified physical or physiological properties of neurons. In this generic form, such theories do not present any particular obstacles to attributing consciousness to animals, given that animals and humans are built upon the same biological, chemical, and physical principles. If it could be determined that phenomenal consciousness was identical to (or at least perfectly correlated with) some general property such as quantum coherence in the microtubules of neurons, or brain waves of a specific frequency, then settling the Distribution Question would be a straightforward matter of establishing whether or not members of other species possess the specified properties. Searle (1998), although he rejects the physicalist/dualist dialectic, also suggests that settling the Distribution Question for hard cases like insects will become trivial once neuroscientists have carried out the non-trivial task of determining the physiological basis of consciousness in animals for which no reasonable doubt of their consciousness can be entertained (i.e., mammals).

4.5 Neurofunctional accounts

Some philosophers have sought more specific grounding in the neurosciences for their accounts of consciousness. Block (2005) pursues a strategy of using tentative functional characterizations of phenomenal and access consciousness to interpret evidence from the search by neuroscientists for neural correlates of consciousness. He argues, on the basis of evidence from both humans and monkeys, that recurrent feedback activity in sensory cortex is the most plausible candidate for being the neural correlate of phenomenal consciousness in these species. Prinz (2005) also pursues a neurofunctional account, but identifies phenomenal consciousness with a different functional role than Block. He argues for identifying phenomenal consciousness with brain processes that are involved in attention to intermediate-level perceptual representations which feed into working memory via higher level, perspective-invariant representations. Since the evidence for such processes is at least partially derived from animals, including other primates and rats, his view is supportive of the idea that phenomenal consciousness is found in some nonhuman species (presumably most mammals). Nevertheless, he maintains that it may be impossible ever to answer the Distribution Question for more distantly related species; he mentions octopus, pigeons, bees, and slugs in this context.

4.6 Representationalist accounts

Representational theories of consciousness link phenomenal consciousness with the representational content of mental states, subject to some further functional criteria.

First-order representationalist accounts hold that if a particular state of the visual system of an organism represents some property of the world in a way that is functionally appropriate (e.g., not conceptually mediated, and operating as part of a sensory system), then the organism is phenomenally conscious of that property. First-order accounts are generally quite friendly to attributions of consciousness to animals, for it is relatively uncontroversial that animals have internal states that have the requisite functional and representational properties (insofar as mental representation itself is uncontroversial, that is). Such a view underlies Dretske's (1995) claim that phenomenal consciousness is inseparable from a creature's capacity to perceive and respond to features of its environment, i.e., one of the uncontroversial senses of consciousness identified above. On Dretske's view, phenomenal consciousness is therefore very widespread in the animal kingdom. Likewise, Tye (2000) argues, based upon his first-order representational account of phenomenal consciousness, that it extends even to honeybees.

Driven by a variety of allegedly counter-intuitive consequences of first-order theories of consciousness, including skepticism about the range of organisms they span, a number of philosophers have offered a variety of higher-order accounts of phenomenal consciousness. Such accounts invoke mental states directed towards other mental states to explain phenomenal consciousness. Carruthers' “higher-order thought” (HOT) theory holds that a mental state is phenomenally conscious for a subject just in case it is available to be thought about directly by that subject (Carruthers 1998a,b, 2000). The term “available” here makes this a “dispositionalist” account. The contrast is an “actualist” account, which requires the actual occurrence of the second-order thought for the subject to be conscious in the relevant sense. According to Carruthers, such higher-order thoughts are not possible unless a creature has a “theory of mind” to provide it with the concepts necessary for thought about mental states. Carruthers' view is of particular interest in the current context because he has used it explicitly to deny phenomenal consciousness to (almost) all nonhuman animals.

Carruthers argues that there is little, if any, scientific support for theory of mind in nonhuman animals, even among the great apes — with the possible exception of chimpanzees — from which he concludes that there is likewise little support for the view that any animals possess phenomenal consciousness. Further evaluation of this argument will be taken up below, but it is worth noting here that if (as experiments on the attribution of false beliefs suggest) young children before the age of four years typically lack a theory of mind, Carruthers' view entails that they are not sentient either — fear of needles notwithstanding! This is a bullet Carruthers bites, although for many it constitutes a reductio of his view (a response Carruthers would certainly regard as question-begging).

In contrast to Carruthers' higher-order thought account of sentience, other theorists such as Armstrong (1980) and Lycan (1996) have preferred a higher-order experience account, on which consciousness is explained in terms of inner perception of mental states, a view that can be traced back to Aristotle, and also to John Locke. Because such models do not require the ability to conceptualize mental states, proponents of higher-order experience theories have been slightly more inclined than higher-order thought theorists to allow that such abilities may be found in other animals[6]. Gennaro (2004) argues, however, that a higher-order thought theory is compatible with consciousness in nonhuman animals, arguing that Carruthers and others have overstated the requirements for the necessary mental concepts and that reentrant pathways in animal brains provide a structure in which higher- and lower-order representations could actually be combined into a unified conscious state.

4.7 Is consciousness binary?

One metaphysical question that is more directly relevant to the question of the phylogenetic distribution and evolution of consciousness is whether possessing it (i.e. being conscious) is binary (i.e. on/off, all-or-nothing), or admits of degrees. Several authors have, for quite different reasons, denied what they take to be a common but problematic assumption — that “Consciousness is an on/off switch; a system is either conscious or not,” as Searle, who endorses the thesis, puts it (quoted by Lycan 1996, who denies the thesis).

Lycan argues that consciousness can come in a wide spectrum of degrees of richness or fullness of consciousness, and that there is a meaningful sense in which a system with a minimal degree of consciousness is not “really” conscious (1996, p. 8). Admittedly, this sounds a bit paradoxical, but the point seems to be that it is counter-intuitive for us to consider very low degrees of consciousness, as it is hard to imagine the contents of very simple mental states. One reading of this is that Lycan is arguing that the predicate ‘conscious’ is vague, without committing himself to the view that consciousness is distributed according to a linear scale.

Dennett (1995) also argues that consciousness is not binary. He does so in the context of advocating a radically deflationary anti-realism about consciousness overall, on which consciousness is essentially an illusion created by language (1991/1995). On his view, “the very idea of there being a dividing line between those creatures ‘it is like something to be’ and those that are ‘mere automata’ (is) an artifact of our traditional assumptions.” (1995, p. 706)

Velmans (2012) distinguishes between ‘discontinuity theories’, which claim that there was a particular point at which consciousness originated, before which there was no consciousness (this applies both to the universe at large and to any particular conscious individual), and ‘continuity theories’, which conceptualize the evolution of consciousness in terms of “a gradual transition in consciousness from unrecognizable to recognizable.” He argues that continuity theories are more elegant, since any discontinuity is based on arbitrary criteria, and that discontinuity theories face “the hard problem” in a way that continuity theories do not. Velmans takes these arguments to weigh in favor of adopting not just a continuity theory but a form of panpsychism.

The three authors described just above deny that consciousness is binary for very different reasons, and each of their views is controversial. Further, none of them offers much in the way of tools or concepts for thinking about the putatively nonbinary nature of consciousness. Following up on Lycan's suggestion of degrees of richness or fullness, one might ask what graded dimensions or qualitative thresholds might be available to distinguish different kinds of minds? Various authors have distinguished between ‘primary’ and ‘higher order’ consciousness (Seth et al. 2005); ‘primary’, ‘secondary’, and ‘tertiary’ consciousness (Panksepp 2005); and ‘core’ and ‘extended’ consciousness (Damasio 1999). However, most of these authors seem to correlate phenomenal consciousness, i.e. having any subjective experience at all, with primary or core consciousness. The terms “secondary” and “tertiary” are supposed to pick out elaborated forms of consciousness. Hence it is not clear that any of these taxonomies are at odds with the idea that phenomenal consciousness itself is binary — either wholly present or wholly absent for a given system. However, the issue deserves more scrutiny, as it bears on the problems of the distribution and evolutionary origins of consciousness. If consciousness is non-binary, then the distribution of consciousness will not be sharply bounded, but will include gradations — some animals may be partially or incompletely conscious.

4.8 Limits of philosophical theories

Phenomenal consciousness is just one feature (some would say the defining feature) of mental states or events. Any theory of animal consciousness must be understood, however, in the context of a larger investigation of animal cognition that (among philosophers) will also be concerned with issues such as intentionality (in the sense described by the 19th C. German psychologist Franz Brentano) and mental content (Dennett 1983, 1987; Allen 1992a,b, 1995, 1997).

Philosophical opinion divides over the relation of consciousness to intentionality with some philosophers maintaining that they are strictly independent, others (particularly proponents of the functionalist theories of consciousness described in this section) arguing that intentionality is necessary for consciousness, and still others arguing that consciousness is necessary for genuine intentionality (see Allen 1997 for discussion). Many behavioral scientists accept cognitivist explanations of animal behavior that attribute representational states to their subjects. Yet they remain hesitant to attribute consciousness. If the representations invoked within cognitive science are intentional in Brentano's sense, then these scientists seem committed to denying that consciousness is necessary for intentionality.

There remains great uncertainty about the metaphysics of phenomenal consciousness and its precise relations to intentionality, to the brain, to behavior, etc. It is beyond the scope of this article to survey the strong attacks that have been mounted against the various accounts of consciousness in these terms, but it is safe to say that none of them seems secure enough to hang a decisive endorsement or denial of animal consciousness upon it. Accounts of consciousness in terms of basic neurophysiological properties, the quantum-mechanical properties of neurons, or sui generis properties of the universe are just as insecure as the various functionalist accounts. And even those accounts that are compatible with animal consciousness in their general outline are not specific enough to permit ready answers to the Distribution Question in its full generality. Hence no firm conclusions about the distribution of consciousness can be drawn on the basis of the philosophical theories of consciousness that have been discussed so far.

Where does this leave the epistemological questions about animal consciousness? While it may seem natural to think that we must have a theory of what consciousness is before we try to determine whether other animals have it, this may in fact be putting the conceptual cart before the empirical horse. In the early stages of the scientific investigation of any phenomenon, putative samples must be identified by rough rules of thumb (or working definitions) rather than complete theories. Early scientists identified gold by contingent characteristics rather than its atomic essence, knowledge of which had to await thorough investigation of many putative examples — some of which turned out to be gold and some not. Likewise, at this stage of the game, perhaps the study of animal consciousness would benefit from the identification of animal traits worthy of further investigation, with no firm commitment to the idea that all these examples will involve conscious experience.

Recall the Cartesian argument that animals do not use language conversationally or reason generally. This argument, based on the alleged failure of animals to display certain intellectual capacities, is illustrative of a general pattern of using certain dissimilarities between animals and humans to argue that animals lack consciousness.

A common refrain in response to such arguments is that, in situations of partial information, “absence of evidence is not evidence of absence”. Descartes dismissed parrots vocalizing human words because he thought it was merely meaningless repetition. This judgment may have been appropriate for the few parrots he encountered, but it was not based on a systematic, scientific investigation of the capacities of parrots. Nowadays many would argue that Pepperberg's study of the African Grey parrot “Alex” (Pepperberg 1999) should lay the Cartesian prejudice to rest. This study, along with several on the acquisition of a degree of communicative competence by chimpanzees and bonobos (e.g., Gardner et al. 1989; Savage-Rumbaugh 1996) would seem to undermine Descartes' assertions about lack of meaningful communication and general reasoning abilities in animals. (See, also, contributions to Hurley & Nudds 2006.)

Cartesians respond by pointing out the limitations shown by animals in such studies (they can't play a good game of chess, after all, let alone tell us what they are thinking about), and they join linguists in protesting that the subjects of animal-language studies have not fully mastered the recursive syntax of natural human languages.[7] But this kind of post hoc raising of the bar suggests to many scientists that the Cartesian position is not being held as a scientific hypothesis, but as a dogma to be defended by any means. Convinced by evidence of sophisticated cognitive abilities, most philosophers these days agree with Block that something like access consciousness is properly attributed to many animals. Nevertheless, when it comes to phenomenal consciousness, dissimilarity arguments are not entirely powerless to give some pause to defenders of animal sentience, for surely most would agree that, at some point, the dissimilarities between the capacities of humans and the members of another species (the common earthworm Lumbricus terrestris, for example) are so great that it is unlikely that such creatures are sentient. A grey area arises precisely because no one can say how much dissimilarity is enough to trigger the judgment that sentience is absent.

The aim of picking out, in a principled way, behavioral or neurophysiological characteristics that could serve as reliable indicators for consciousness motivates the structure and function oriented approach that many authors have pursued since the turn of the 21st century. Though sometimes pursued along with metaphysical questions about consciousness, this project promises empirical tractability even in the face of persistent uncertainty about the metaphysical questions of consciousness.

5. The Structure and Function of Consciousness

One strategy for bringing consciousness into the scientific fold is to try to articulate a theoretical basis for connecting the observable characteristics of animals (behavioral or neurological) to consciousness. What effects should consciousness have on behavior? What capacities and dispositions should we expect a conscious creature to have that might be absent in a nonconscious creature? What neurophysiological structures and processes might realize the dynamics or information processing required for consciousness?

Such an approach is nascent in Griffin's attempts to force ethologists to pay attention to questions about animal consciousness, in all its senses — including phenomenal consciousness. In a series of books, Griffin (who made his scientific reputation by carefully detailing the physical and physiological characteristics of echolocation by bats) provides examples of communicative and problem-solving behavior by animals, particularly under natural conditions, and argues that these are prime places for ethologists to begin their investigations of animal consciousness (Griffin 1976, 1984, 1992).

Although he thinks that the intelligence displayed by these examples suggests conscious thought, many critics have been disappointed by the lack of systematic connection between Griffin's examples and the attribution of consciousness (see Alcock 1992; Bekoff & Allen 1997; Allen & Bekoff 1997). Griffin's main positive proposal in this respect has been the rather implausible suggestion that consciousness might have the function of compensating for limited neural machinery. Thus Griffin is motivated to suggest that consciousness may be more important to honey bees than to humans.

If compensating for small sets of neurons is not a plausible function for consciousness, what might be? The commonsensical answer would be that consciousness “tells” the organism about events in the environment, or, in the case of pain and other proprioceptive sensations, about the state of the body. But this answer begs the question against opponents of attributing conscious states to animals, for it fails to respect the distinction between phenomenal consciousness and mere awareness (in the uncontroversial sense of detection) of environmental or bodily events. Opponents of attributing phenomenal consciousness to animals are not committed to denying this more general kind of awareness of various external and bodily events, so there is no logical entailment from awareness of things in the environment or the body to animal sentience.

Perhaps more sophisticated attempts to spell out the functions of consciousness are similarly doomed. But Allen & Bekoff (1997, ch. 8) suggest that progress might be made by investigating the capacities of animals to adjust to their own perceptual errors. Not all adjustments to error provide grounds for suspecting that consciousness is involved, but in cases where an organism can adjust to a perceptual error while retaining the capacity to exploit the content of the erroneous perception, then there may be a robust sense in which the animal internally distinguishes its own appearance states from other judgments about the world. (Humans, for instance, have conscious visual experiences that they know are misleading — i.e., visual illusions — yet they can exploit the erroneous content of these experiences for various purposes, such as deceiving others or answering questions about how things appear to them.) Given that there are theoretical grounds for identifying conscious experiences with “appearance states”, attempts to discover whether animals have such capacities might be a good place to start looking for animal consciousness. It is important, however, to emphasize that such capacities are not themselves intended to be definitive or in any way criterial for consciousness. Carruthers (2000) makes a similar suggestion about the function of consciousness, relating it to the general capacity for making an appearance-reality distinction; of course, he continues to maintain that this capacity depends upon having conceptual resources that are beyond the grasp of nonhuman animals.

The broad issue of function is closely related to questions about just what sort of mental process consciousness is. As we shall see in the next section, hypotheses in the modern literature on the distribution and evolution of consciousness are therefore generally advanced together with theories of its structure and function in the following senses:

Structure: what are the contents of consciousness (what information, representations, intentional contents, properties, processes, etc. does it include)? What (possibly unconscious or subconscious) information, representations, or other cognitive or intentional processes, entities and relations, are required for consciousness?

Function: how does consciousness relate to other (nonconscious) processes, in cognition, the body and the environment? How does possessing consciousness contribute to an animal's ability to navigate and respond adaptively to its environment, to survive and thrive?

Different views about what consciousness is, qua cognitive process, and how it relates to other biological processes such as behavior, development and ecological interaction, largely determine biologically oriented views about which animals have consciousness and when, how, and why it evolved. To illustrate this, we can start with a crude distinction between views that see consciousness as fundamental to the basic perceptual and cognitive processes involved in controlling an animal body, or as something that can be added on or plugged in to a system that is already sufficient for basic control of perception-guided action. The more fundamental consciousness is to basic animal functioning, the more widely distributed and ancient consciousness must be; if, however, consciousness is relatively modular, functionally narrow, and conceptually high level, then it should be narrowly distributed among animals and relatively recently evolved. The views surveyed in the following section all exploit the connections between function, structure, distribution and evolutionary origin.

One further point worth noting is that structural models of consciousness are usually justified in terms of phenomenological or introspective observations — i.e. observations about the nature of consciousness as it is experienced by the subject. Though the use of such first-person methods has been controversial in psychology and philosophy throughout the 20th and 21st centuries, there now seems to be broad acknowledgement that they have an indispensable role in the scientific study of consciousness: many authors of recent scientific theories of consciousness appeal to phenomenological premises in justifying their views (e.g. Seth, Edelman and Baars 2005a; Merker 2005; Tononi 2008; Cabanac et al. 2009).

6. Evolution and Distribution of Consciousness

A variety of hypotheses have been put forward by scientists and philosophers about which animals are conscious and which are not. These views span a huge range of possibilities, from the narrowest, on which only humans are conscious, to the broadest, on which almost all animals, even simple invertebrates, have a basic capacity to experience the world. Some authors have even argued that single-celled organisms (Margulis 2001) or plants (A. Nagel 1997) are conscious, and some have given arguments for versions of panpsychism, the view that consciousness is a property of fundamental physical entities, much in the same way that mass and charge are (Chalmers 2013). It is worth noting that neither the attribution of consciousness to single-celled organisms, nor to fundamental physical entities, implies that all animals are conscious. In the former case, it may be that the information processing complexity and integration of relatively complex single-celled organisms outstrips that of the simplest animals. In the latter case, while the version of panpsychism developed by Chalmers attributes ‘microexperience’ to ‘fundamental physical entities’, this does not imply that any particular macroscopic object (like an animal) has ‘macroexperience’ — i.e. “the sort of conscious experience had by human beings” (Chalmers 2013). This view is compatible with the possibility that a given animal has no conscious experience, although it is composed of microphysical entities which possess conscious microexperience. These issues will not be discussed further here, as they fall outside the scope of this entry.

The question of which lineages (species, or more inclusive groupings such as class or phylum) of animals are conscious, inevitably goes hand-in-hand with considerations of the evolutionary origin of consciousness. This is a logical implication of the broadly Darwinian view of life, on which modern organisms have evolved through descent, with modification, from a small number (perhaps one) of very ancient ancestors. If a trait is characteristic of a given species, it either arose in that species, or is derived from an ancestor — in which case, it will be present in other species derived from that ancestor, unless it has been secondarily lost in those species. Did consciousness first arise in humans, or in an earlier, nonhuman ancestor? If the latter, then what was this ancestor? Another possibility is that consciousness may have arisen multiple times, like winged flight, which evolved independently in insects, birds, bats, and pterosaurs.

6.1 Humans

As described above, the view that consciousness is unique to humans has a long history. It coheres with a religious view of humanity as the pinnacle of creation, and it may also be appealing insofar as it absolves us of any guilt for our treatment of animals. Religion aside, it may derive considerable intuitive support from the appeal of connecting consciousness to the problem of human uniqueness. If consciousness can be tied together with language, abstract reasoning, or some other mental characteristic that potentially could explain our apparent separateness from the natural world, this would solve two outstanding mysteries at once.

Dennett (1991, 1995) has been an outspoken advocate of the human-uniqueness of consciousness. Dennett argues for this position in connection with his anti-realist theory of consciousness, the upshot of which is that consciousness is a sort of “user illusion” (1995) or “fiction” (1991, p. 365, p. 429) constructed through people's narrative descriptions:

What there is, really, is just various events of content-fixation occurring in various places at various times in the brain... Some of these content-fixations have further effects, which eventually lead to the utterance of sentences — in a natural language — either public or merely internal. And so a heterophenomenological text is created... What about the actual phenomenology? There is no such thing. (1991, p. 365)

On Dennett's view, because consciousness is a sort of story telling, which requires language, and only (adult, normally enculturated and language-capable) humans have language, only these humans have consciousness.

Carruthers has championed the view that only humans (with the possible exception of chimpanzees) are conscious, although for different reasons than Dennett. Carruthers (1998a,b, 2000) has argued to this effect based on his ‘higher-order thought’ theory, according to which phenomenal consciousness requires the capacity to think about, and therefore conceptualize, one's own thoughts.[8] Such conceptualization requires, according to Carruthers, a theory of mind. And, Carruthers maintains, there is little basis for thinking that any nonhuman animals have a theory of mind, with the possible exception of chimpanzees (see Lurz 2011 and Andrews 2012 for in-depth discussion of theory of mind in nonhuman animals). This argument is, of course, no stronger than the higher-order thought account of consciousness upon which it is based. But setting that aside for the sake of argument, this challenge by Carruthers deserves further attention as perhaps the most empirically detailed case against animal consciousness to have been made in the philosophical literature.

Carruthers neither endorses nor outright rejects the conclusion that chimpanzees are sentient. His suspicion that even chimpanzees might lack theory of mind, and therefore (on his view) phenomenal consciousness, is based on some ingenious laboratory studies by Povinelli (1996) showing that in interactions with human food providers, chimpanzees apparently fail to understand the role of eyes in providing visual information to the humans, despite their outwardly similar behavior to humans in attending to cues such as facial orientation. The interpretation of Povinelli's work remains controversial. Hare et al. (2000) conducted experiments in which dominant and subordinate animals competed with each other for food, and concluded that “at least in some situations chimpanzees know what conspecifics do and do not see and, furthermore, that they use this knowledge to formulate their behavioral strategies in food competition situations.” They suggest that Povinelli's negative results may be due to the fact that his experiments involve less natural chimp-human interactions. Given the uncertainty, Carruthers is therefore well-advised in the tentative manner in which he puts forward his claims about chimpanzee sentience.

A full discussion of the controversy over theory of mind (e.g., Heyes 1998; Lurz 2011; Andrews 2012) deserves an entry of its own, but it is worth remarking here that the theory of mind debate has origins in the hypothesis that primate intelligence in general, and human intelligence in particular, is specially adapted for social cognition (see Byrne & Whiten 1988, especially the first two chapters, by Jolly and Humphrey). Consequently, it has been argued that evidence for the ability to attribute mental states in a wide range of species might be better sought in natural activities such as social play, rather than in laboratory designed experiments which place the animals in artificial situations (Allen & Bekoff 1997; see esp. chapter 6; see also Hare et al. 2000, Hare et al. 2001, and Hare & Wrangham 2002). Alternative approaches that have attempted to provide strong evidence of theory of mind in nonhuman animals under natural conditions have generally failed to produce such evidence (see, e.g., the conclusions about theory of mind in vervet monkeys and baboons by Cheney & Seyfarth 1990, 2007), although anecdotal evidence tantalizingly suggests that researchers still have not managed to devise the right experiments. Furthermore, theory of mind — and social cognition more broadly — are active areas of research, and it is quite possible that new research will reveal evidence of theory of mind in nonhuman animals.

On views such as Carruthers', consciousness is grounded in cognitive processes that are highly specific and modular — indeed, irrelevant for the perceptual, motivational and cognitive processes involved with all nonhuman animal behavior. Given that most of the cognitive processes (and corresponding brain-systems) involved with human activities are shared with nonhuman animals, this line of thinking implies that much of human activity is nonconscious as well. Thus, for example, Carruthers (1989, 1992) argued that all animal behavior can be assimilated to the non-conscious activities of humans, such as driving while distracted (“on autopilot”), or to the capacities of “blindsight” patients whose damage to visual cortex leaves them phenomenologically blind in a portion of their visual fields (a “scotoma”) but nonetheless able to identify things presented to the scotoma. (He refers to both of these as examples of “unconscious experiences”.)

This comparison of animal behavior to the unconscious capacities of humans can be criticized on the grounds that, like Descartes' pronouncements on parrots, it is based only on unsystematic observation of animal behavior. There are grounds for thinking that careful investigation would reveal that there is not a very close analogy between animal behavior and human behaviors associated with these putative cases of unconscious experience. For instance, it is notable that the unconscious experiences of automatic driving are not remembered by their subjects, whereas there is no evidence that animals are similarly unable to recall their allegedly unconscious experiences. Likewise, blindsight subjects do not spontaneously respond to things presented to their scotomas, but must be trained to make responses using a forced-response paradigm. There is no evidence that such limitations are normal for animals, or that animals behave like blindsight victims with respect to their visual experiences (Jamieson & Bekoff 1992).

Nevertheless, there are empirical grounds for concern that behavior suggesting consciousness in animals may be the product of unconscious processes. Allen et al. (2009) describe work on learning in spinal cords of rats that shows phenomena analogous to latent inhibition and overshadowing. In intact animals, these learning and memory related phenomena have been argued to involve attention. But their similarity to mechanisms in the spinal cord, assumed by most not to involve consciousness, calls into question their status as evidence for consciousness. There are, of course, differences between the learning capacities of spinal cords and the learning capacities of intact organisms, and there are prima facie reasons for thinking that sophisticated forms of learning are related to consciousness (Clark & Squire 1998; Allen 2004b; Ginsburg & Jablonka 2007b; Allen et al. 2009). But the current point is similar to that made about blindsight: a more fine-grained analysis of these similarities and differences is needed before conclusions about consciousness can be drawn.

6.2 Great Apes

Gallup (1970) developed an experimental test of mirror self-recognition that has become widely used as a test of self-awareness, although interpretation of the test remains controversial (see the section on self-consciousness and metacognition below). Gallup argues that the performance of chimpanzees in this test indicates that they are self-aware, and that animals that fail the test lack self-awareness. Further, foreshadowing Carruthers, Gallup argues that self-awareness — in the sense of being able to think about one's own mental states — is required for having a mind, and therefore that animals that ‘fail’ the mirror test have no minds (1982, 1985). Though there has been controversy over just which animals ‘pass’ the mirror test (including studies with elephants and dolphins) and over which versions of the test are valid, as of 2002 Gallup maintained that there was evidence that humans, common chimpanzees, bonobos and orangutans consistently pass the test, and strong evidence that a wide range of other primates consistently fail. He took this to support the claim that self-awareness is unique to great apes (Gallup et al. 2002). Combined with his earlier arguments that consciousness requires the sort of self-awareness measured by the mirror test, this would imply that consciousness is unique to the great apes.

Gallup's interpretation of the mirror results has not gone unchallenged (Mitchell 2002). Rochat and Zahavi (2010) challenge Gallup both on (a) the interpretation of chimps' mirror-oriented behavior as indicating a human-like experience of mirror self-recognition, and (b) the claim that mirror self-recognition is implied by consciousness.

As a side note, there has been a debate, ongoing since the early 1990s (Cavalieri and Singer 1994) about whether great apes deserve special legal protection amounting to ‘human rights’. The crux of the debate is not whether the great apes have consciousness per se (this seems to be assumed by most participants of the debate, on both sides), but whether they have personhood. Personhood is a vexed notion, but is generally thought to be related to certain forms of agency and self awareness, and is often thought to be tightly coupled to moral status, as reflected in this debate (DeGrazia 1997; SEP article on Moral Status of Animals; Varner 2012). Though not essential to phenomenal consciousness, personhood is often thought to presuppose consciousness, and so perhaps is best thought of as a level of elaboration or complexity of consciousness.

6.3 Mammals

A variety of theoretical and empirical arguments have been put forward to the effect that consciousness is shared across all mammals. Seth, Baars and Edelman (2005a) argue that the neural processes essential to human consciousness — widespread reentrant activity in the thalamo-cortical complex — involve anatomical systems that are shared among all mammals (and perhaps more widely). Panksepp (reviewed in 2005) takes a similar approach, although focusing on the neurophysiological systems involved in the ‘core emotions’. Although both proposals acknowledge that consciousness may extend beyond mammals, their authors argue that in the case of mammals the weight of evidence based on homology of the relevant neurophysiological systems is overwhelming, whereas outside of mammals the inference is more tenuous because of the biological differences of non-mammalian animals. Further, it should be kept in mind that all of the following proposals also imply that consciousness is widely shared among mammals. Hence, the position that all mammals are conscious is widely agreed upon among scientists who express views on the distribution of consciousness.

6.4 Amniotes (including birds and reptiles)

Questions about whether reptiles are conscious (and if so what their mental lives might be like) are especially interesting because birds are more closely related to reptiles than to mammals, yet birds display a variety of behaviors that intuitively suggest intelligence and emotion to human observers much more obviously than the behavior of scaly, so-called ‘cold-blooded’ animals like snakes and turtles. Do birds and mammals share mental features (consciousness, intelligence, emotion, social attachment) that are absent in reptiles? If so, this would represent independent, convergent evolution of these phenomena. Alternatively, are these features common to all of these animals, but less obvious in some than in others?

Cabanac et al. (2009) argue that consciousness is unique to, and shared by, all amniotes — the clade that includes all descendants of the common ancestor of living birds and mammals, including reptiles such as lizards, snakes and turtles, and extinct animals such as dinosaurs, pterosaurs and plesiosaurs. On this hypothesis, only these animals, and not amphibians, fish, or any invertebrates, possess consciousness. Cabanac's argument is based on an explicit structural and functional theory of consciousness as a unified representational space, “an abstract private model of reality with four dimensions: quality, intensity, hedonicity and duration” (2009, p. 268). Possessing this ability to model reality allows animals to simulate possible courses of action, using hedonicity (pleasure or pain) as a ‘common currency’ to evaluate and choose between actions based on expected consequences (which are based on prior experience).

Cabanac identifies a set of behavioral markers of consciousness, based on this structural and functional theory:

  • the ability to make motivational tradeoffs;
  • play;
  • navigational detouring (which requires an animal to pursue a series of nonrewarding intermediate goals in order to obtain an ultimate reward);
  • expression of emotion;
  • expression of sensory pleasure;
  • emotional fever (an increase in body temperature in response to a supposedly stressful situation — gentle handling, as operationalized in Cabanac's experiments);
  • taste aversion.

Based on supposed evidence of these phenomena in amniotes but not in non-amniotes, Cabanac argues that consciousness originated in the common ancestor of amniotes, and hence is present in all living amniotes but in no other animals. Cabanac and Cabanac (2009) also argue that a qualitative difference in the role of dopamine in motivational processes in the brains of amniotes compared to non-amniotes supports this distribution/origin hypothesis.

Cabanac and colleagues have documented the presence of some of these phenomena in amniotes, in contrast with their absence in at least a small number of non-amniote species, such as emotional fever (Cabanac and Bernieri 2000; Cabanac and Cabanac 2000; Cabanac and Cabanac 2004) and taste aversion (Paradis and Cabanac 2004).

However, the assertion that these aspects of behavior and cognition do not exist outside amniota is largely based on absence of evidence (and hence inherently limited). In particular, they do not offer direct support for their claims that non-amniotes are incapable of trading off punishments and rewards, play, or detouring. Indeed, some of these claims appear to be contradicted by existing studies — for example, documentation of detouring in jumping spiders (Jackson and Wilcox 2003) and work by Elwood and Appel (2009) showing motivational trade-off behavior in hermit crabs.

Cabanac's structural and functional theory of consciousness can be evaluated independently of the evidence that he marshals in support of his view of the distribution and origins of consciousness. Indeed, one might challenge his views on distribution and origins precisely by accepting his structural and functional theory, and arguing that the proposed indicators actually pick out a wider distribution (i.e. if motivational trade-offs, play, and detouring are present outside of amniotes). As we shall see, his structural and functional views of consciousness have much in common with those of other authors who argue for wider distributions of consciousness among animals.

6.5 Vertebrates

'Fish' is a folk-biological term that does not correspond precisely to any monophyletic taxonomic group. This can be appreciated by noting that a coelacanth is more closely related to a human than to a tuna, and that a tuna is more closely related to a human than it is to a shark. That is, some things that are intuitively fish are more closely related to non-fish than to other fish. Basically, the folk term 'fish' refers to all vertebrates other than tetrapods, although it is somewhat ambiguous with regard to animals such as sea-horses, eels, hagfish and sting-rays.

In any case, there has been a lively debate over fish consciousness, mostly focusing on the issue of whether fish can experience pain, stress and suffering (see below, section 7.1; see also Rose (2002) and Sneddon et al. (2003) for contrasting views of conscious pain in fish; Allen 2013 for a review and philosophical discussion; or Braithwaite 2010 for a book-length treatment). This is of special relevance in the context of welfare regulation in commercial aquaculture and recreational angling; accordingly, the fish consciousness literature has focused experimentally on salmonids (especially salmon and trout), a group of high commercial and sport-fishing importance, but only a tiny phylogenetic corner of the animals that colloquially count as fish.

Merker (2005) has proposed that consciousness originated early in vertebrate evolution, and is therefore both ancient and widespread. On this proposal, not only mammals and birds, but amphibians and all marine vertebrates are conscious. Merker begins his argument with the phenomenological observation that the contents of conscious experience are object- and goal-oriented, but exclude the fine-grained sensory and motor details represented in peripheral and low-level neural processing. Merker argues that consciousness is an integrated representational platform — what he refers to as a ‘synthesized reality space’ — that, for animals with complex bodies with many degrees of freedom of movement and multiple sensory modalities, solves a cluster of critical neural logistics problems. These problems include:

  • maintaining the stability of perceptual contents against interference from self-generated motion;
  • integrating interoceptive information about the body's movements and the body's self-regulative needs (e.g. temperature, thirst, hunger);
  • affording high-level decision making that abstracts away from the irrelevant sensorimotor details of implementation.

The neuroanatomical details of Merker's argument (2005) are beyond the scope of this article to review, but his conclusion is that the systems that solve the above problems — giving rise to consciousness — arose in an early vertebrate ancestor. Hence, consciousness is both ancient and widespread among living vertebrates.

6.6 Invertebrates

There are additional daunting challenges to addressing questions of consciousness outside the vertebrate lineage, given the radical differences between vertebrates and invertebrates with respect to anatomy and physiology. The strategy of identifying homologous and functionally analogous structures and processes, which underlies the confidence of researchers such as Cabanac (2009), Seth et al. (2005), Merker (2005), and Panksepp (2005) that consciousness is shared with other animals, is much more difficult to apply (Seth et al. 2005).

The vertebrate lineage represents just one of approximately 34 known phyla — ancient lineages of animals characterized by differences in fundamental anatomical organization and the developmental processes that generate it. Each of these phyla is derived from a relatively simple state (i.e. few tissue types and a minimal central nervous system with limited sensory capacities). Hence, those invertebrates that are complex enough to attract the attention of those interested in animal consciousness, such as cephalopod mollusks (e.g., octopuses and squids) and arthropods (e.g., crustaceans, insects and spiders), evolved their complexity independently from vertebrates, and, in the case of cephalopods and arthropods, independently from each other.

Today, only three of the phyla (vertebrates, arthropods, and mollusks) include animals with complex active bodies (Trestman 2013a), characterized by:

  • articulated and differentiated appendages;
  • many degrees of freedom of controlled motion;
  • distal senses (e.g., ‘‘true’’ eyes);
  • anatomical capability for active, distal-sense-guided mobility (fins, legs, jet propulsion, etc.);
  • anatomical capability for active object manipulation (e.g., chelipeds, hands, tentacles, mouth-parts with fine-motor control).

Trestman (2013a) argues that the evolution of complex active bodies requires a capacity for integrated, embodied spatial cognition, and that this capacity evolved independently in each of the three phyla in which it is currently found (vertebrates, arthropods and mollusks). If Merker (2005) is right that consciousness represents a solution to the neural-logistics problems posed by controlling a complex body in space, then these three lineages are likely suspects for possessing consciousness. This line of reasoning can be bolstered by considering the role of temporal integration of perceptual information in consciousness and in action-selection and object-oriented perception (Trestman 2013b). Perhaps each of these three lineages evolved consciousness independently during the transition from a relatively simple worm-like body morphology to having complex active bodies.

One group of invertebrate animals that has received attention in the context of questions about consciousness is the coleoid cephalopods — octopuses, squids and cuttlefish. These are large-brained, notoriously clever animals, well-known for their remarkable abilities to camouflage themselves, and for their flexible hunting strategies. Mather (2008) argues that cephalopods exhibit many behavioral indicators of consciousness, including complex learning and spatial memory as well as apparent play. Both Merker (2005) and Seth et al. (2005b, 2009) argue that a strong provisional case can be made for consciousness in cephalopods — although these authors emphasize the limitations on our understanding posed by the differences in anatomy and physiology between cephalopods and vertebrates.

The other phylum that has received particular attention is the arthropods, which includes insects, crustaceans, spiders, and many other less familiar animals. This is an ancient and tremendously diverse group of animals, so any generalizations should be made with caution. Arthropods were among the earliest animals to evolve complex active bodies — and correlatively to evolve brains capable of adaptively controlling such bodies (Trestman 2013a) — and so if the function of consciousness is to solve problems raised by the control of complex active bodies (cf. Merker 2005), it may have evolved early on in the arthropod lineage, in a common ancestor of all living arthropods.

Few studies have aimed directly at answering questions about consciousness in arthropods, but relevant empirical work includes:

  • Work by Elwood and Appel (2009) that seems to show that hermit crabs remember an aversive event (an electric shock), and can use that memory in later context-sensitive decision-making. This seems to satisfy Cabanac and Cabanac's (2009) definition of ‘motivational trade-off’ behavior, which those authors argue is an indicator of consciousness.
  • Work by Jackson and colleagues (reviewed in Jackson and Wilcox 1998) documenting a variety of impressive behaviors in jumping spiders, including:
    • detouring and other forms of apparent planning (noted as a potential indicator of consciousness by Cabanac et al. 2009)
    • flexible, context-sensitive adjustment of predatory behavior to prey behavior
    • deception and smokescreen tactics
  • Studies on bees, revealing pattern recognition, concepts of ‘same’ and ‘different’, navigation, communication, and visual working memory (reviewed in Srinivasan 2010), and on mood and cognitive bias (Mendl et al. 2011).

Another possibility is that consciousness evolved even earlier in animal history and is even more widely distributed among animals, and hence has a function that is even more fundamental to animal life. Ginsburg & Jablonka (2007a,b) attribute a primitive form of “overall sensation” to even the simplest nerve nets in animals, as a by-product of their activity. They argue that as these states became harnessed to learning and motivation, they acquired the functional properties of “basic consciousness”. If this is right, then consciousness may have arisen not independently in arthropods, mollusks and vertebrates, but only once in the common ancestor of these ancient groups, very early in animal evolution.

7. Special Topics in the Study of Animal Consciousness

With the gradual loosening of behaviorist strictures in psychology and ethology, and independent advances in neuroscience, there has been a considerable increase in the number of animal studies that have some bearing on animal consciousness. Some of these studies focus on specific kinds of experience, such as pain, while others focus on cognitive abilities such as self-awareness that seem to be strongly correlated with human consciousness. This section contains brief reviews of some of the main areas of investigation. The aim is to provide some quick entry points into the scientific literature.

7.1 Animal pain and suffering

Given the centrality of pain to most accounts of our ethical obligations towards animals, as well as the importance of animal models of pain in clinical medical research (see Mogil 2009 for a review), it is hardly surprising that there is a substantial (albeit controversial) scientific literature bearing on animal pain. Reports by the Nuffield Council on Bioethics in the U.K. (Nuffield Council 2005; see esp. chapter 4) and the U.S. National Academy of Sciences' Institute for Laboratory Animal Research (ILAR 2009) have recently covered the definition of pain and the physiological, neurological, and behavioral evidence for pain in nonhuman animals. These reviews also distinguish pain from distress and suffering (see also Bermond 2001), and the ILAR has divided what used to be a single report on recognition and alleviation of pain and distress into two separate reports, although the scientific investigation of distress is relatively rudimentary (but see Dawkins 1985; Farah 2008).

A proper understanding of neurological studies of animal pain begins with the distinction between nociception and pain. Nociception — the capacity to sense noxious stimuli — is one of the most primitive sensory capacities. Neurons functionally specialized for nociception have been described in invertebrates such as the medicinal leech and the marine snail Aplysia californica (Walters 1996). Because nociceptors are found in a very wide range of species, and are functionally effective even in decerebrate or spinally transected animals (Allen 2004b), their presence and activity in a species provides little or no direct evidence for phenomenally conscious pain experiences. The gate control theory of Melzack and Wall (1965) describes a mechanism by which “top-down” signals from the brain modulate “bottom-up” nociception, providing space for the distinction between felt pain and nociception.

Smith & Boyd (1991) assess the evidence for the pain-sensing capabilities of animals in the categories of whether nociceptors are connected to the central nervous system, whether endogenous opioids are present, whether analgesics affect responses, and whether the ensuing behavioral responses are analogous to those of humans (see table 2.3 in Varner 1998, p. 53, which updates the one presented by Smith & Boyd). On the basis of these criteria, Varner follows Smith & Boyd in concluding tentatively that the most obvious place to draw a line between pain-conscious organisms and those not capable of feeling pain consciously is between vertebrates and invertebrates. However, Elwood & Appel (2009) conducted an experiment on hermit crabs which they interpret as providing evidence that pain is experienced and remembered by these crustaceans. Varner also expressed some hesitation about the evidence for conscious pain in “lower” vertebrates: fish, reptiles and amphibians. Allen (2004b) argues, however, that subsequent research indicates that the direction of discovery seems uniformly towards identifying more similarities among diverse species belonging to different taxonomic classes, especially in the domains of anatomy and physiology of the nociceptive and pain systems.

It is generally accepted that the mammalian pain system has both a sensory and an affective pathway, and that these can be dissociated to some degree both pharmacologically (e.g., with morphine) and by surgical lesions. The anterior cingulate cortex (ACC) is a particularly important structure of the mammalian brain in this regard (Price 2000). Allen et al. (2005) and Shriver (2006) argue that this dissociability provides a route to empirical assessment of the affective component of animal consciousness, and Farah (2008) uses it to distinguish suffering from “mere pain”.

Detailed analysis of other taxonomic groups may, however, indicate important anatomical differences. Rose (2002) argues that because fish lack an ACC they may not be bothered by pain. This is in contrast to Sneddon et al. (2003), who argue that there is adequate behavioral and physiological evidence to support pain attributions to fish. (See also Chandroo et al. 2004 for a review.) While the ACC is important to mammals, there remains the possibility that other taxa may have functionally similar structures, such as the corticoidea dorsolateralis in birds (Atoji & Wild 2005; Dubbeldam 2009). Genetic knockout animals are also providing further clues about the affective aspects of pain (see Shriver 2009 for a review and application of these findings to animal welfare).

Finally, it is worth noting that a major shift in veterinary practice with regard to animal pain has occurred in the past decade. Whereas surgery on animals was once routinely practiced without analgesics or anesthetics, the vast majority of veterinary practitioners now accept the basic premise that veterinarians can be trained to recognize animal pain reliably, and that veterinary patients benefit from the same kinds of pain alleviation treatments that are delivered to humans. It has even been argued that animals possess the neurobiological mechanisms responsible for phantom limb pain and neuropathic pain (pain in the absence of obvious tissue damage or disease), and that these conditions may therefore be detectable and treatable in nonhuman animals (Mathews 2008).

7.2 Animal emotions

The idea of animal emotions is, of course, prominent in Darwin's work, notably his 1872 book The Expression of the Emotions in Man and Animals. Willingness to see animals as emotional beings (and humans, by contrast, as endowed with rationality that can override the emotions) goes back at least to Ancient Greek philosophy. Konrad Lorenz seems to have held a similar view, echoing Oskar Heinroth's statement that “animals are highly emotional people of very limited intelligence” (Lorenz 1971b, 334). These days it is more fashionable to regard emotions as an important component of intelligence. Regardless of the merits of that view, the scientific study of animal emotions has gained its own momentum. Early in the 20th century, although they were not arguing for or about animal consciousness, physiologists recognized the significance of emotion in animal behavior. Dror (1999) explains how the emotional state of animals was considered to be a source of noise in physiological experiments at that time, and researchers took steps to ensure that animals were calm before their experiments. According to Dror, although physiologists were forced to deal with the problem of emotional noise, attempts to treat emotion as a subject of study in its own right never crystallized to the extent of generating a journal or other institutional features (Dror 1999, 219).

More recently, Jaak Panksepp (2004, 2005) has been conducting a research program that he calls “affective neuroscience” and that encompasses direct study of animal emotions (2004), exemplified for example in the experimental investigation of rats “laughing” and seeking further contact in response to tickling by humans (Panksepp & Burgdorf 2003). Over several decades, his work (reviewed in Panksepp 2005) has elucidated the neuro- and molecular-physiological bases of several ‘core emotional systems’ including ‘seeking’, ‘fear’, ‘rage’, ‘lust’, ‘care’, ‘play’, and ‘panic’. Panksepp argues that these are shared by all mammals, and may be more widely shared among vertebrates.

Sufka et al. (2009) have proposed that animal models of neuropsychiatric disorders may also support the experimental investigation of animal emotions. Although relying on a more anecdotal, non-experimental approach, Smuts (2001) and Bekoff (2007) each defend the attribution of conscious emotions to animals from a scientist's perspective. Bekoff has made much of play behavior as an indicator of animal emotions, an idea that is also taken up by Cabanac et al. (2009).

Empathy in animals is also a topic of current investigation (e.g., Preston & de Waal 2002). Langford et al. (2002) argue for empathy in mice on the basis of experiments showing that mice who observe a cagemate given a noxious stimulus, or in pain, are more sensitive to painful stimuli than control mice who observe an unfamiliar mouse similarly treated. Byrne et al. (2008) argue for empathy in elephants as an inference to the best explanation of various capacities, such as diagnosing animacy and goal directedness, and assessing the physical abilities and emotional states of other elephants when these differ from their own.

It is worth noting that almost all of the work, both theoretical and empirical, on animal emotions has limited its scope to mammals, or at least amniotes (for a notable recent exception, see Mendl et al. 2011). This work has exploited certain deep homologies of, e.g., brain structure and molecular neurophysiology (focusing especially on hormones and neurotransmitters) in making arguments that animals in these taxonomic groups share our emotions because they share the mechanisms of our own emotions (as well as behaviors that tend to indicate emotions in us, and bear the same relations to the underlying physiological mechanisms). This approach is not available for animals that are very distantly related to us, i.e., invertebrates. The ancestors of vertebrates split off from the rest of the animal phyla at a time when all animals were still relatively simple in terms of structure, tissue types, number of neurons, and bodily capacities for locomotion and other forms of behavior. The elaboration of complex physiological systems occurred independently in the various phyla. Therefore the physiological systems underlying emotions in other phyla — if these exist — may be very different, and hence difficult to identify, either in terms of direct physiological observations or in terms of observations of behavioral expressions of emotion. It is also possible that the repertoire of emotions in other phyla might be different, further complicating the task of individuating non-vertebrate emotions. (Further discussion and additional scientific references for the topic of emotions and empathy in animals can be found in the section on emotions and empathy in the entry on animal cognition.)

7.3 Perceptual phenomenology

The idea that careful psychophysical work with animals could help us understand the nature of their subjective experiences of the world can be traced at least to Donald Griffin's experimental tests of the limits of bat echolocation. It is also behind the idea that knowing that horses have near 360° vision, or that raptors have two foveae on each retina, may tell us something about the nature of their experiences — how the world appears to them — if it is granted that they have such experiences. Neural investigation adds a further layer of analysis to scientific understanding of the nature of perception. For instance, Leopold & Logothetis (1996) used neural data to support inferences about the percepts of monkeys under conditions of binocular rivalry (see also Myerson et al. 1981; Rees et al. 2002). And Leopold et al. (2003) argue that neural recordings can be used to corroborate the non-verbal “reports” of monkeys shown ambiguous visual stimuli. (Think here of whether it is possible for the monkey to report that it is subject to Gestalt switches like those arising from ambiguous figures such as the duck-rabbit or figure-vase illusions.)

The phenomenon of blindsight, a form of unconscious visual perception that arises with damage to specific areas of primary visual cortex, has also been investigated in surgically lesioned monkeys (Stoerig & Cowey 1997), with a close correspondence between the monkeys' deficits and those of the human patients vis-à-vis parts of the visual field that can be processed only unconsciously, and those for which the patients retain consciousness. The non-verbal approach to assessing visual awareness has been further validated by Stoerig et al. (2002). Blindsight subjects, both human and monkey, do not spontaneously respond to things presented to their scotomas (areas where they are visually unaware), but must be trained to make responses using a forced-response paradigm (Stoerig & Cowey 1997).

The emphasis on visual perception in most of these examples no doubt reflects a primatocentric bias. We human primates are highly visual creatures, and, as Nagel (1974) argued, we face considerable hurdles in imagining (again, a visual metaphor) the subjective experiences of creatures in modalities in which humans are weak or completely unendowed. The examples of research also reflect an anthropocentric bias in that much of the animal experimentation is explicitly targeted at relieving human disorders. Although there is good work by neuroethologists on the psychophysics of echolocation by bats and dolphins, or on the sensory capacities of weakly electric fish, questions of subjective awareness are generally not broached in that literature.

As with the investigation of animal pain, fine-grained analysis of the neural correlates of consciousness may reveal subtle differences between different species. For instance, Rees et al. (2002) report that while the behavior of rhesus monkeys is rather similar to humans under conditions of binocular rivalry, “the firing of most cells in and around V1 primarily reflects stimulus properties rather than the conscious percept reported by the animal ... However, several neuroimaging studies in humans have presented evidence that argues for a stronger role of V1 in binocular rivalry and hence, by implication, visual awareness.” Nevertheless, it is noteworthy that they take the behavioral evidence to be unproblematically describable as a report of a conscious percept by the monkey.

7.4 Mental Time Travel

A debate has unfolded in the literature over whether other animals, like humans, are capable of thinking about past and future events. Suddendorf and Corballis (1997, 2007) have argued that so-called ‘Mental Time Travel’ is unique to humans, and indeed plays a major role in explaining what is cognitively unique about humans, including capacities for language and culture. Many mammals, birds and fish exhibit behavior such as food caching, nest building, tool use, or migration that seems to suggest foresight. For example, tayras — members of the weasel family found in Central and South America — hide bunches of bananas inside bromeliads, recovering them only when the bananas are ripe (Soley et al. 2011). Skeptics are quick to point out that many of these examples may be either a) ‘instinctive’ fixed-action patterns shaped by natural selection or b) the result of classical or operant conditioning, rather than behavior that is mediated by ‘cognitive’ processes such as episodic memory, insight, or understanding. However, the novelty, flexibility and situation-specific adjustment of many such behaviors call such dismissals into question. For example, tayras cache several species of bananas, and accurately judge, for each species, when a banana is mature enough to continue ripening once picked. This includes domestic bananas, to which tayras have not been exposed over evolutionary time-scales (Soley et al. 2011).

A variety of careful experimental work with animals shows impressive abilities for integrated what-where-when memory — the ability to recall details of an event together with its location and time. This work was pioneered by Clayton and colleagues with scrub jays, focusing on their caching behavior — wherein the birds bury food and later recover it (Clayton et al. 2003). For example, if scrub jays are prevented from recovering their caches for long enough, they will recover only nonperishable items (peanuts, in the study), ignoring their caches of otherwise preferred but perishable food (mealworms, in the study) (Clayton et al. 2003). Recent work has also documented what is referred to as ‘episodic-like memory’ in rats (Crystal 2009), and the apparent ability to plan for the future (including in novel ways that are not plausibly ruled out as ‘mere instinct’) in several animals, including nonhuman primates, birds, rats and other mammals (see Feeney et al. 2011 for an example of recent experimental work; Roberts 2012 for a review and discussion).

This debate has been somewhat complicated by the fact that proponents of the human-uniqueness of mental time travel tend to rely on descriptions of the ability that are laden with researchers' and subjects' phenomenological, introspective or intuitive descriptions of the way their own minds work (e.g. Tulving 1985; Suddendorf and Corballis 2007), whereas animal behavior researchers must rely on strict standards of behavioral evidence to support their claims. Animal behavior researchers are typically circumspect in their interpretations, limiting their claims to operationalizable terms such as ‘what-where-when’ memory, or ‘episodic-like’ memory, rather than making claims about the nature of the experience that may be involved in an animal's performing a task. The situation may therefore represent a double standard in the interpretation of evidence about human and nonhuman animal subjects, with researchers uncritically making liberal assumptions about human cognition that would not be permitted in animal research — an example of what Buckner (2013) has called ‘anthropofabulation’. The question of to what extent, and in what ways, humans' awareness of time differs from that of other animals remains an open one, and an active line of research.

7.5 Self-consciousness and metacognition

Systematic study of self-consciousness and theory of mind in nonhuman animals has roots in an approach to the study of self-consciousness pioneered by Gallup (1970). Gallup's rationale for linking mirror self-recognition to self-awareness has already been discussed above. The idea for the experiment came from observations well-known to comparative psychologists that chimpanzees would, after a period of adjustment, use mirrors to inspect their own images. Gallup used these observations to develop a widely-replicated protocol that appears to allow a scientific determination of whether it is merely the mirror image per se that is the object of interest to the animal inspecting it, or whether it is the image qua proxy for the animal itself that is the object of interest. Taking chimpanzees who had extensive prior familiarity with mirrors, Gallup anesthetized his subjects and marked their foreheads with a distinctive dye, or, in a control group, anesthetized them only. Upon waking, marked animals who were allowed to see themselves in a mirror touched their own foreheads in the region of the mark significantly more frequently than controls who were either unmarked or not allowed to look into a mirror.

Although it is typically reported that chimpanzees consistently “pass” the mirror-mark test, a survey of the scientific literature by Shumaker & Swartz (2002) indicates that of 163 chimpanzees tested, only 73 showed mark-touching behavior (although there was considerable variation in the age and mirror experience among these animals). Shumaker & Swartz also report mark-touching behavior in 5 of 6 tested orang utans and 6 of 23 gorillas. They suggest that the lower incidence of mark touching by gorillas may be due to avoidance of socially-significant direct eye contact.

For non-human primates outside the great apes, the evidence for mirror self-recognition has been sparse. Gallup himself regards it as a phenomenon restricted to the great apes only, and he was among the first to challenge Hauser's report that cotton top tamarins engaged in mirror-guided self-directed behaviors after their distinctive white tufts had been dyed neon colors, a stimulus that Hauser and coauthors argued was presumably more salient than the red dot used by Gallup (Hauser et al. 1995). Faced with Gallup's challenge, Hauser himself was unable to replicate his initial results (Hauser et al. 2001). However, the idea that Gallup's protocol uses a stimulus that is not particularly salient to monkeys continues to have some currency. For example, Rajala et al. (2010) have presented quantitative and videographic evidence that rhesus monkeys with surgical implants in their heads use mirrors to inspect the implants, as well as other parts of their bodies that they cannot usually see.

Modified versions of Gallup's experiment have also been conducted with non-primate species. Notoriously, Epstein et al. (1981) trained pigeons to peck at a mark on their own bodies that was visible only in a mirror, and they used this to call into question the attribution of “self-awareness” on the basis of the mirror-mark test, preferring an associative learning explanation. Gallup et al. (2002) reject the claimed equivalence, pointing out that chimpanzees were not trained to touch marks before the test was administered. Reiss & Marino (2001) have offered evidence of mirror self-recognition in bottlenose dolphins. Using a modified version of Gallup's procedure that involved no anesthesia, they inferred self-recognition from bodily contortions in front of the mirror (self-touching being anatomically impossible for dolphins). This evidence has been disputed (e.g. Wynne 2004). The mirror-mark test continues to be an area of active investigation in various species including elephants (Plotnik et al. 2006) and magpies (Prior et al. 2008). Various commentators have pointed out that the mirror test may not be entirely fair for species which depend more heavily on senses other than vision (Mitchell 2002; Bekoff and Burghardt 2002).

An intriguing line of research into animals' knowledge of their own mental states considers the performance of animals in situations of cognitive uncertainty. When primates and dolphins are given a “bailout” response allowing them to avoid making difficult discriminations, they have been shown to choose the bailout option in ways that are very similar to humans (Smith et al. 2003). The fact that animals who have no bailout option and are thus forced to respond to the difficult comparisons do worse than those who have the bailout option but choose to respond to the test has been used to argue for some kind of higher-order self understanding. The original experiments have attracted both philosophical criticism of the second-order interpretation (e.g. Carruthers 2008) and methodological criticism by psychologists (reviewed by Crystal & Foote 2009), although alternative approaches to establishing metacognition in non-linguistic animals may be capable of avoiding these criticisms (Terrace & Son 2009).

In the literature on human cognition, awareness of what one knows is called “metacognition”, and it is associated with a “feeling of knowing”. Smith and colleagues claim that investigating metacognition in animals could provide information about the relation of self-awareness to other-awareness (theory of mind), and that their results already show that “animals have functional features of or parallels to human conscious cognition” (Smith et al. 2003; Smith 2009). They also raise the question of what this might tell us about the phenomenal features of that cognition. Browne (2004) argues that the dolphin research cannot support the connection to theory of mind, but that it is nevertheless relevant to consciousness in dolphins, particularly within the theoretical framework provided by Lycan, described above. The notion of metacognition also seems relevant to questions about access consciousness. (For additional discussion of self-awareness and metacognition, readers are referred to the section on theory of mind and metacognition in the entry on animal cognition.)

8. Summary

An article such as this perhaps raises more questions than it answers, but the topic would be of little philosophical interest if it were otherwise.

It is clear that for many philosophers, the topic of animal consciousness is no longer only of peripheral interest. There is increasing interest in animal cognition from a range of philosophical perspectives, including ethics, philosophy of mind, and the philosophy of science. Philosophers working in all of these areas are increasingly attentive to the particular details of scientific theory, methods, and results. Many scientists and philosophers believe that the groundwork has been laid for addressing at least some of the questions about animal consciousness in a philosophically sophisticated yet empirically tractable way. Yet there remain critics from both sides: on the one hand are those who still think that subjective phenomena are beyond the pale of scientific research, and on the other are those who think that science and philosophy have not moved far enough or fast enough to recognize animal consciousness. The arguments on both sides are by no means exhausted.

Bibliography

  • Akins, K. A. (1993). A Bat Without Qualities. In M. Davies & G. Humphreys (eds.), Consciousness (pp. 258–273). Oxford: Blackwell.
  • Alcock, J. (1992). Review of Griffin 1992. Natural History, 101(9), 62–65.
  • Allen, C. (1992). Mental Content. British Journal for the Philosophy of Science, 43(4), 537–553.
  • ––– (1992). Mental Content and Evolutionary Explanation. Biology and Philosophy, 7(1), 1–12.
  • ––– (1995). Intentionality: Natural and Artificial. In J. Meyer & H. Roitblat (eds.), Comparative Approaches to Cognitive Science (pp. 93–110). Cambridge, MA: MIT Press.
  • ––– (1997). Animal Cognition and Animal Minds. In P. Machamer & M. Carrier (eds.), Philosophy and the Sciences of the Mind (Vol. 4) (pp. 227–243). Pittsburgh and Konstanz: Pittsburgh University Press and the Universitätsverlag Konstanz.
  • ––– (1997). The Discovery of Animal Consciousness: An Optimistic Assessment. Journal of Agricultural and Environmental Ethics, 10(3), 217–225.
  • ––– (2002). A skeptic's progress. Biology & Philosophy, 17, 695–702.
  • ––– (2004). Animal Pain. Noûs, 38(4), 617–643.
  • ––– (2004). Is Anyone a Cognitive Ethologist? Biology and Philosophy, 19(4), 589–607.
  • ––– (2013). Fish cognition and consciousness. Journal of Agricultural and Environmental Ethics, 26(1), 25–39.
  • Allen, C., & Bekoff, M. (1997). Species of Mind: The Philosophy and Biology of Cognitive Ethology. Cambridge, MA: MIT Press.
  • Allen, C., et al. (2005). Deciphering Animal Pain. In M. Aydede (ed.), Pain: New Essays on the Nature of Pain and the Methodology of its Study (pp. 352–366). Cambridge, MA: MIT Press.
  • Allen, C., & Grau, J., & Meagher, M. (2009). The Lower Bounds of Cognition: What Do Spinal Cords Reveal? In J. Bickle (ed.), The Oxford Handbook of Philosophy of Neuroscience (pp. 129–142). Oxford: Oxford University Press.
  • Allen, C., & Saidel, E. (1998). The Evolution of Reference. In D. Cummins & C. Allen (eds.), The Evolution of Mind, first edition (pp. 183–202). New York: Oxford University Press.
  • Anderson, S. R. (2004). Doctor Dolittle's Delusion. New Haven: Yale University Press.
  • Andrews, K. (1996). The first step in the case for great ape equality: the argument for other minds. Etica & Animali (Special issue devoted to The Great Ape Project), 8, 131–141.
  • ––– (2012). Do Apes Read Minds? Toward a New Folk Psychology. Cambridge, MA: MIT Press.
  • Århem, P., & Liljenström, H., & Lindahl, B. I. B. (2002). Evolution of Consciousness: Report of Agora Workshop in Sigtuna, Sweden, August 2001. Journal of Consciousness Studies, 9, 81–84.
  • Armstrong, D. M. (1980). The Nature of Mind and Other Essays. Ithaca, NY: Cornell University Press.
  • Atoji, Y., & Wild, J. M. (2005). Afferent and efferent connections of the dorsolateral corticoid area in comparison with connections of the temporoparieto-occipital area in the pigeon (Columba livia). Journal of Comparative Neurology, 485, 165–182.
  • Baars, B. J. (1997). In the Theatre of Consciousness: Global Workspace Theory, A Rigorous Scientific Theory of Consciousness. Journal of Consciousness Studies, 4, 292–309.
  • Barrington, D. (1773). Experiments and observations on the singing of birds. Philosophical Transactions of the Royal Society, 63, 249–91.
  • Bekoff, M. (2007). The Emotional Lives of Animals. Novato, CA: New World Library.
  • Bekoff, M., & Allen, C. (1997). Cognitive Ethology: Slayers, Skeptics, and Proponents. In R. Mitchell, N. Thompson & H. Miles (eds.), Anthropomorphism, Anecdotes, and Animals (pp. 313–334). Albany, NY: State University of New York Press.
  • Bekoff, M., & Allen, C., & Burghardt, G. M. (eds.). (2002). The Cognitive Animal. Cambridge, MA: MIT Press.
  • Bermond, B. (2001). A neuropsychological and evolutionary approach to animal consciousness and animal suffering. Animal Welfare Supplement, 10, 47–62.
  • Bertoloni Meli, D. (2013). Early Modern Experimentation on Live Animals. Journal of the History of Biology, 46(2), 199–226.
  • Block, N. (1995). On A Confusion About a Function of Consciousness. Behavioral and Brain Sciences, 18, 227–47.
  • Block, N. (2005). Two Neural Correlates of Consciousness. Trends in Cognitive Sciences, 9, 41–89.
  • ––– (2011). Perceptual consciousness overflows cognitive access. Trends in cognitive sciences, 15(12), 567–575.
  • Blumberg, M. S., & Wasserman, E. A. (1995). Animal Mind and the Argument from Design. American Psychologist, 50(3), 133–144.
  • Braithwaite, V. (2010). Do Fish Feel Pain? Oxford: Oxford University Press.
  • Brigandt, I. (2005). The instinct concept of the early Konrad Lorenz. Journal of the History of Biology, 38(3), 571–608.
  • Browne, D. (2004). Do dolphins know their own minds? Biology & Philosophy, 19, 633–653.
  • Buckner, C. (2013). Morgan's Canon, meet Hume's Dictum: avoiding anthropofabulation in cross-species comparisons. Biology & Philosophy, 28(5), 853–871.
  • Burghardt, G. (1985). Animal awareness: Current perceptions and historical perspective. American Psychologist, 40(8), 905–19.
  • Burkhardt, R. W. Jr. (1997). The founders of ethology and the problem of animal subjective experience. In M. Dol, et al., Animal Consciousness and Animal Ethics: Perspectives from the Netherlands (pp. 1–13). Assen, the Netherlands: van Gorcum.
  • ––– (2005). Patterns of Behavior: Konrad Lorenz, Niko Tinbergen and the Founding of Ethology. Chicago: University of Chicago Press.
  • Byrne, R. W., et al. (2008). Do Elephants Show Empathy? Journal of Consciousness Studies, 15(10–11), 204–225.
  • Byrne, R. W., & Whiten, A. (1988). Machiavellian Intelligence: social expertise and the evolution of intellect in monkeys, apes and humans. Oxford: Oxford University Press.
  • Cabanac, A., & Cabanac, M. (2000). Heart rate response to gentle handling of frog and lizard. Behavioural Processes, 52, 89–95.
  • ––– (2004). No emotional fever in toads. Journal of Thermal Biology, 29, 669–73.
  • Cabanac, M., & Bernieri, C. (2000). Behavioral rise in body temperature and tachycardia by handling of a turtle (Clemmys insculpta). Behavioural Processes, 49, 61–68.
  • Cabanac, M., & Cabanac, A. J., & Parent, A. (2009). The emergence of consciousness in phylogeny. Behavioural Brain Research, 198(2), 267–272.
  • Cabanac, M., & Gosselin, F. (1993). Emotional fever in the lizard Callopistes maculatus. Animal Behavior, 46, 200–202.
  • Cabanac, A. J., & Guillemette, M. (2001). Temperature and heart rate as stress indicators of handled common eider. Physiology and Behavior, 74, 475–9.
  • Carruthers, P. (1989). Brute experience. The Journal of Philosophy, 86(5), 258–269.
  • ––– (1992). The Animals Issue. Cambridge: Cambridge University Press.
  • ––– (1996). Language, Thought and Consciousness. Cambridge: Cambridge University Press.
  • ––– (1998). Animal Subjectivity. Psyche, 4(3), 2377.
  • ––– (1998). Natural Theories of Consciousness. European Journal of Philosophy, 6, 203–222.
  • ––– (1999). Sympathy and subjectivity. Australasian Journal of Philosophy, 77(4), 465–482.
  • ––– (2000). Phenomenal Consciousness: A naturalistic theory. Cambridge: Cambridge University Press.
  • ––– (2004). Suffering without Subjectivity. Philosophical Studies, 121(2), 99–125.
  • ––– (2008). Meta-cognition in Animals: A Skeptical Look. Mind & Language, 23, 58–89.
  • Cavalieri, P., & Singer, P. (eds.). (1994). The great ape project: Equality beyond humanity. New York: St. Martin's Press.
  • Chalmers, D. (in press, 2013). Panpsychism and panprotopsychism. In T. Alter & Y. Nagasawa (eds.), Russellian Monism. New York: Oxford University Press.
  • Chandroo, K. P., & Yue, S., & Moccia, R. D. (2004). An evaluation of current perspectives on consciousness and pain in fishes. Fish and Fisheries, 5, 281–295.
  • Cheney, D. L., & Seyfarth, R. M. (1990). How Monkeys See the World: Inside the Mind of Another Species. Chicago: University of Chicago Press.
  • ––– (2007). Baboon Metaphysics: The Evolution of a Social Mind. Chicago: University of Chicago Press.
  • Clark, R. E., & Squire, L. R. (1998). Classical Conditioning and Brain Systems: The Role of Awareness. Science, 280, 77–81.
  • Clayton, N. S., & Bussey, T., & Dickinson, A. (2003). Can Animals Recall the Past and Plan for the Future? Nature Reviews: Neuroscience, 4, 685–91.
  • Clayton, N. S., & Yu, K. S., & Dickinson, A. (2001). Scrub Jays (Aphelocoma coerulescens) form integrated memories of the multiple features of caching episodes. Journal of Experimental Psychology: Animal Behavior Processes, 1, 17–29.
  • Crist, E. (2002). The inner life of earthworms: Darwin's argument and its implications. In M. Bekoff, C. Allen & G. Burghardt (eds.), The Cognitive Animal (pp. 3–8). Cambridge, MA: MIT Press.
  • Crowley, S. J., & Allen, C. (2008). Animal Behavior: E pluribus unum? In M. Ruse (ed.), The Oxford Handbook of the Philosophy of Biology (pp. 327–348). Oxford: Oxford University Press.
  • Crystal, J. D. (2009). Elements of episodic-like memory in animal models. Behavioural Processes, 80(3), 269–277.
  • Crystal, J. D., & Foote, A. L. (2009). Metacognition in animals. Comparative Cognition & Behavior Reviews, 4, 1–16.
  • ––– (1999). The Feeling of What Happens. New York: Harcourt Brace.
  • Darwin, C. (1871). The Descent of Man and Selection in Relation to Sex. New York: Appleton.
  • ––– (1881). The formation of vegetable mould, through the action of worms, with observations on their habits. London: John Murray.
  • Davidson, D. (1975). Thought and Talk. In S. Guttenplan (ed.), Mind and Language (pp. 7–23). Oxford: Oxford University Press.
  • Dawkins, M. S. (1985). The scientific basis for assessing suffering in animals. In P. Singer (ed.), In Defense of Animals (pp. 27–50). New York: Blackwell.
  • Dawkins, M. S. (1993). Through Our Eyes Only? The Search for Animal Consciousness. New York: W. H. Freeman.
  • ––– (2012). Why Animals Matter: Animal Consciousness, Animal Welfare, and Human Well-being. New York: Oxford University Press.
  • DeGrazia, D. (1997). Great apes, dolphins, and the concept of personhood. The Southern journal of philosophy, 3, 301–320.
  • Dennett, D. C. (1969). Content and Consciousness. London: Routledge and Kegan Paul.
  • ––– (1983). Intentional systems in cognitive ethology: The ‘Panglossian paradigm’ defended. Behavioral and Brain Sciences, 6, 343–390.
  • ––– (1987). The Intentional Stance. Cambridge, MA: MIT Press.
  • ––– (1995). Animal consciousness and why it matters. Social Research, 62, 691–710.
  • ––– (1997). Kinds of Minds: Towards an Understanding of Consciousness. New York: Basic Books.
  • Dretske, F. (1995). Naturalizing the Mind. Cambridge, MA: MIT Press.
  • Dubbeldam, J. (2009). The Trigeminal System in Birds and Nociception. Central Nervous System Agents in Medicinal Chemistry, 9, 150–158.
  • Edelman, D. B., & Baars, B. J., & Seth, A. K. (2005). Identifying hallmarks of consciousness in non-mammalian species. Consciousness and cognition, 14(1), 169–187.
  • Edelman, D. B., & Seth, A. K. (2009). Animal Consciousness: A Synthetic Approach. Trends in Neuroscience, 32(9), 476–484.
  • Elwood, R. W., & Appel, M. (2009). Pain experience in hermit crabs? Animal Behaviour, 77, 1243–1246.
  • Epstein, R., & Lanza, R. P., & Skinner, B. F. (1981). Self-awareness in the pigeon. Science, 212, 695–696.
  • Farah, M. J. (2008). Neuroethics and the Problem of Other Minds: Implications of Neuroscience for the Moral Status of Brain-Damaged Patients and Nonhuman Animals. Neuroethics, 1, 9–18.
  • Feeney, M., & Roberts, W., & Sherry, D. (2011). Black-Capped Chickadees (Poecile atricapillus) Anticipate Future Outcomes of Foraging Choices. Journal of Experimental Psychology, 37(1), 30–40.
  • Feinberg, T. E., & Mallatt, J. (2013). The evolutionary and genetic origins of consciousness in the Cambrian Period over 500 million years ago. Frontiers in Psychology, 4, 00667.
  • Fisher, J. A. (1990). The Myth of Anthropomorphism. In M. Bekoff & D. Jamieson (eds.), Interpretation and explanation in the study of animal behavior: Interpretation, intentionality, and communication (Vol. 1) (pp. 96–116). Boulder: Westview Press.
  • Fitzpatrick, S. (2008). Doing Away with Morgan's Canon. Mind & Language, 23(2), 224–226.
  • Froese, T., & Gould, C., & Seth, A. K. (2011). Validating and calibrating first-and second-person methods in the science of consciousness. Journal of Consciousness Studies, 18(2), 38–64.
  • Fujita, K., & Blough, D. S., & Blough, P. M. (1991). Pigeons see the Ponzo illusion. Animal Learning and Behavior, 19, 283–293.
  • Gaita, R. (2003). The Philosopher's Dog: Friendships with Animals. London: Routledge.
  • Gallagher, S. (2008). Direct perception in the intersubjective context. Consciousness and Cognition, 2, 535–543.
  • Gallup, G. G. Jr. (1970). Chimpanzees: Self-Recognition. Science, 167(3914), 86–87.
  • ––– (1982). Self‐awareness and the emergence of mind in primates. American Journal of Primatology, 2(3), 237–248.
  • ––– (1986). Do minds exist in species other than our own? Neuroscience & Biobehavioral Reviews, 4, 631–641.
  • Gallup, G. G. Jr., & Anderson, J. R., & Shillito, D. J. (2002). The Mirror Test. In M. Bekoff, C. Allen & G. Burghardt (eds.), The Cognitive Animal (pp. 325–334). Cambridge, MA: MIT Press.
  • Gardner, R. A., & Gardner, B. T., & Van Cantfort, T. E. (1989). Teaching sign language to chimpanzees. Albany, NY: SUNY Press.
  • Gennaro, R. J. (2004). Higher-order thoughts, animal consciousness, and misrepresentation: A reply to Carruthers and Levine. In R. Gennaro (ed.), Higher-Order Theories of Consciousness: An Anthology (pp. 45–66). Amsterdam: John Benjamins.
  • Ginsburg, S., & Jablonka, E. (2007). The transition to experiencing: I. Limited learning and limited experiencing. Biological Theory, 2(3), 218–230.
  • Ginsburg, S., & Jablonka, E. (2007). The transition to experiencing: II. The evolution of associative learning based on feelings. Biological Theory, 2(3), 231–243.
  • Giurfa, M., et al. (2001). The concepts of ‘sameness’ and ‘difference’ in an insect. Nature, 410, 930–933.
  • Grandin, T. (1995). Thinking In Pictures: and Other Reports from My Life with Autism. New York: Doubleday.
  • ––– (2004). Animals in Translation: Using the Mysteries of Autism to Decode Animal Behavior. New York: Scribner.
  • Griffin, D. R. (1976). The Question of Animal Awareness: Evolutionary Continuity of Mental Experience. New York: Rockefeller University Press.
  • ––– (1978). Prospects for a cognitive ethology. Behavioral and Brain Sciences, 4, 527–38.
  • ––– (1984). Animal Thinking. Cambridge, MA: Harvard University Press.
  • ––– (1992). Animal Minds. Chicago: University of Chicago Press.
  • ––– (2002). What is it like? In M. Bekoff, C. Allen & G. Burghardt (eds.), The Cognitive Animal: Empirical and Theoretical Perspectives on Animal Cognition (pp. 471–474). Cambridge, MA: MIT Press.
  • Griffin, D. R. (ed.). (1982). Animal Mind – Human Mind. Berlin: Springer.
  • Griffin, D. R., & Speck, G. B. (2004). New evidence of animal consciousness. Animal Cognition, 7, 5–18.
  • Güzeldere, G. (1995). Is consciousness the perception of what passes in one's own mind? In T. Metzinger (ed.), Conscious Experience (pp. 335–357). Paderborn: Schöningh/Imprint Academic.
  • Hare, B., et al. (2000). Chimpanzees know what conspecifics do and do not see. Animal Behavior, 59, 771–785.
  • Hare, B., Call, J., & Tomasello, M. (2001). Do chimpanzees know what conspecifics know? Animal Behaviour, 63, 139–151.
  • Hare, B., & Wrangham, R. (2002). Integrating two evolutionary models for the study of social cognition. In M. Bekoff, C. Allen & G. Burghardt (eds.), The Cognitive Animal: Empirical and Theoretical Perspectives on Animal Cognition (pp. 363–370). Cambridge, MA: MIT Press.
  • Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002). The Faculty of Language: What is It, Who Has It, and How Did it Evolve? Science, 298(5598), 1569–1579.
  • Hauser, M. D., et al. (1995). Self-recognition in primates: phylogeny and the salience of species-typical features. Proceedings of the National Academy of Sciences, 92, 10811–10814.
  • Hauser, M. D., et al. (2001). Cotton-top tamarins (Saguinus oedipus) fail to show mirror-guided self-exploration. American Journal of Primatology, 53, 131–137.
  • Heyes, C. (1998). Theory of mind in nonhuman primates. Behavioral and Brain Sciences, 21, 101–148.
  • Hume, D. (1888). A Treatise of Human Nature, edited by L.A. Selby-Bigge. Oxford: Oxford University Press.
  • Hurley, S., & Nudds, M. (eds.). (2006). Rational Animals? Oxford: Oxford University Press.
  • Huxley, T. H. (1874). On the hypothesis that animals are automata, and its history. Fortnightly Review, 22, 199–245.
  • Institute for Laboratory Animal Research (2009). Recognition and Alleviation of Pain in Laboratory Animals, Committee Report. Washington, DC: National Research Council.
  • Jackson, R., & Wilcox, S. (1993). Observations in nature of detouring behavior by Portia fimbriata, a web-invading aggressive mimic jumping spider from Queensland. Journal of Zoology, 230, 135–139.
  • James, W. (1890). The Principles of Psychology. New York: Dover.
  • Jamieson, D. (1998). Science, knowledge, and animal minds. Proceedings of the Aristotelian Society, 98, 79–102.
  • Jamieson, D., & Bekoff, M. (1992). Carruthers on nonconscious experience. Analysis, 52, 23–28.
  • Kennedy, J. S. (1992). The new anthropomorphism. New York: Cambridge University Press.
  • Langford, D., et al. (2006). Social Modulation of Pain as Evidence for Empathy in Mice. Science, 312, 1967–1970.
  • Leopold, D. A., & Logothetis, N. K. (1996). Activity changes in early visual cortex reflect monkeys' percepts during binocular rivalry. Nature, 379, 549–553.
  • Leopold, D. A., Maier, A., & Logothetis, N. K. (2003). Measuring Subjective Visual Perception in the Nonhuman Primate. Journal of Consciousness Studies, 10, 115–130.
  • Lewis, C. S. (2009). The problem of pain. London: HarperCollins e-book.
  • Lorenz, K. (1971). Comparative studies of the motor patterns of Anatinae. In K. Lorenz (ed.), Studies in Animal and Human Behavior (Vol. 2) (pp. 14–114). Cambridge, MA: Harvard University Press.
  • ––– (1971). Do animals undergo subjective experiences? In K. Lorenz (ed.), Studies in Animal and Human Behavior (pp. 323–337). Cambridge, MA: Harvard University Press.
  • Lovejoy, A. O. (1936). The Great Chain of Being. Cambridge, MA: Harvard University Press.
  • Lurz, R. W. (2011). Mindreading Animals: The Debate Over What Animals Know About Other Minds. Cambridge, MA: MIT Press.
  • Lycan, W. G. (1995). Consciousness as Internal Monitoring, I: The Third Philosophical Perspectives Lecture. Philosophical Perspectives, 9, 1–14.
  • ––– (1996). Consciousness and Experience. Cambridge, MA: MIT Press.
  • Margulis, L. (2001). The conscious cell. Annals of the New York Academy of Sciences, 929, 55–70.
  • Mather, J. A. (2008). Cephalopod consciousness: behavioral evidence. Consciousness and Cognition, 17, 37–48.
  • Mathews, K. (2008). Neuropathic Pain in Dogs and Cats: If Only They Could Tell Us If They Hurt. In K. Mathews (ed.), Update on Management of Pain, An Issue of Veterinary Clinics: Small Animal Practice (pp. 1365–1414). Philadelphia, PA: Saunders.
  • Melzack, R., & Wall, P. (1965). Pain mechanisms: a new theory. Science, 150, 971–9.
  • Mendl, M., Paul, E. S., & Chittka, L. (2011). Animal behaviour: emotion in invertebrates? Current Biology, 12, R463-R465.
  • Merker, B. (2005). The liabilities of mobility: A selection pressure for the transition to consciousness in animal evolution. Conscious and Cognition, 14(1), 89–114.
  • Mitchell, R. W. (2002). Kinesthetic visual matching, imitation, and self-recognition. In M. Bekoff, C. Allen & G. Burghardt (eds.), The Cognitive Animal (pp. 345–351). Cambridge, MA: MIT Press.
  • Mogil, J. S. (2009). Animal models of pain: progress and challenges. Nature Reviews: Neuroscience, 10, 283–294.
  • Morgan, C. L. (1894). An Introduction to Comparative Psychology. New York: Scribner.
  • Myerson, J., Miezin, F. M., & Allman, J. M. (1981). Binocular rivalry in macaque monkeys and humans: a comparative study in perception. Behaviour Analysis Letters, 1, 149–159.
  • Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435–450.
  • Nagel, A. H. M. (1997). Are Plants Conscious? Journal of Consciousness Studies, 4(3), 215–230.
  • Nishihara, H., Hasegawa, M., & Okada, N. (2006). Pegasoferae, an unexpected mammalian clade revealed by tracking ancient retroposon insertions. Proceedings of the National Academy of Sciences, 26, 9929–9934.
  • Olson, R. (1990). Science Defied and Science Deified: The Historical Significance of Science in Western Culture. Berkeley and Los Angeles, California: University of California Press.
  • Panksepp, J. (2004). Affective Neuroscience: The Foundations of Human and Animal Emotions. New York: Oxford University Press.
  • ––– (2005). Affective consciousness: Core emotional feelings in animals and humans. Consciousness and Cognition, 14, 30–80.
  • Panksepp, J., & Burgdorf, J. (2003). ‘Laughing’ rats and the evolutionary antecedents of human joy? Physiology and Behavior, 79, 533–47.
  • Pepperberg, I. M. (1999). The Alex Studies: Cognitive and Communicative Abilities of Grey Parrots. Cambridge, MA: Harvard University Press.
  • Plotnik, J. M., de Waal, F. B. M., & Reiss, D. (2006). Self-Recognition in an Asian Elephant. Proceedings of the National Academy of Sciences, 103, 17053–17057.
  • Povinelli, D. J. (1996). Chimpanzee theory of mind?: the long road to strong inference. In P. Carruthers & P. Smith (eds.), Theories of Theories of Mind (pp. 293–329). Cambridge: Cambridge University Press.
  • Povinelli, D. J., & Giambrone, S. J. (2000). Inferring Other Minds: Failure of the Argument by Analogy. Philosophical Topics, 27, 161–201.
  • Preston, S. D., & de Waal, F. B. M. (2002). Empathy: Its ultimate and proximate bases. Behavioral and Brain Sciences, 25, 1–72.
  • Price, D. (2000). Psychological and neural mechanisms of the affective dimension of pain. Science, 288, 1769–72.
  • Prinz, J. (2005). A Neurofunctional Theory of Consciousness. In A. Brook & K. Akins (eds.), Cognition and the Brain: The Philosophy and Neuroscience Movement (pp. 381–96). New York: Cambridge University Press.
  • Prior, H., Schwartz, A., & Güntürkün, O. (2008). Mirror-Induced Behavior in the Magpie (Pica pica): Evidence of Self-Recognition. Public Library of Science/Biology, 6(8), e202.
  • Radick, G. (2000). Morgan's canon, Garner's phonograph, and the evolutionary origins of language and reason. British Journal for the History of Science, 33, 3–23.
  • Radner, D. (1994). Heterophenomenology: learning about the birds and the bees. Journal of Philosophy, 91, 389–403.
  • Radner, D., & Radner, M. (1986). Animal Consciousness. Amherst, NY: Prometheus Books.
  • Rajala, A. Z., et al. (2010). Rhesus Monkeys (Macaca mulatta) Do Recognize Themselves in the Mirror: Implications for the Evolution of Self-Recognition. PLoS ONE, 5(9), e12865.
  • Rees, G., Kreiman, G., & Koch, C. (2002). Neural correlates of consciousness in humans. Nature Reviews Neuroscience, 3, 261–270.
  • Regan, T. (1983). The Case for Animal Rights. Berkeley: University of California Press.
  • Reiss, D., & Marino, L. (2001). Mirror Self-Recognition in the Bottlenose Dolphin: A Case of Cognitive Convergence. Proceedings of the National Academy of Sciences, 98(10), 5937–5942.
  • Roberts, W. (2012). Evidence for future cognition in animals. Learning and Motivation, 43, 169–180.
  • Robinson, W. (2007). Evolution and epiphenomenalism. Journal of Consciousness Studies, 11, 27–42.
  • Rochat, P., & Zahavi, D. (2011). The uncanny mirror: A re-framing of mirror self-experience. Consciousness and cognition, 2, 204–213.
  • Rollin, B. E. (1989). The Unheeded Cry: Animal Consciousness, Animal Pain and Science. New York: Oxford University Press.
  • Romanes, G. (1882). Animal Intelligence. London: Routledge & Kegan Paul.
  • Rose, J. D. (2002). The neurobehavioral nature of fishes and the question of awareness and pain. Reviews in Fisheries Science, 10, 1–38.
  • Rosenthal, D. (1986). Two concepts of consciousness. Philosophical Studies, 49, 329–359.
  • ––– (1993). Thinking that one thinks. In M. Davies & G. Humphreys (eds.), Consciousness (pp. 197–223). Oxford: Blackwell.
  • Savage-Rumbaugh, S., & Lewin, R. (1996). Kanzi: The Ape at the Brink of the Human Mind. New York: John Wiley & Sons.
  • Searle, J. (1998). Animal minds. Etica & Animali, 9, 37–50.
  • Seth, A., Baars, B. J., & Edelman, D. B. (2005). Criteria for consciousness in humans and other mammals. Consciousness and Cognition, 14, 119–139.
  • Shriver, A. (2006). Minding Mammals. Philosophical Psychology, 19, 433–442.
  • ––– (2009). Knocking Out Pain in Livestock: Can Technology Succeed Where Morality has Stalled? Neuroethics, 2(3), 115–124.
  • Shumaker, R. W., & Swartz, B. (2002). When traditional methodologies fail: Cognitive studies of great apes. In M. Bekoff, C. Allen & G. Burghardt (eds.), The Cognitive Animal: Empirical and Theoretical Perspectives on Animal Cognition (pp. 335–43). Cambridge, MA: MIT Press.
  • Singer, P. (1990 [1975]). Animal Liberation. New York: Avon Books.
  • Skinner, B. F. (1953). Science and Human Behavior. New York: Macmillan.
  • Smith, J. D. (2009). The study of animal metacognition. Trends in Cognitive Sciences, 13(9), 389–396.
  • Smith, J., & Boyd, K. (eds.). (1991). Lives in the Balance: The Ethics of Using Animals in Biomedical Research. New York: Oxford University Press.
  • Smith, J. D., Shields, W. E., & Washburn, D. A. (2003). The comparative psychology of uncertainty monitoring and metacognition. Behavioral and Brain Sciences, 26, 317–373.
  • Sneddon, L. U., Braithwaite, V. A., & Gentle, M. J. (2003). Do fish have nociceptors: evidence for the evolution of a vertebrate sensory system. Proceedings of the Royal Society London B, 270, 1115–1121.
  • Sober, E. (1998). Morgan's Canon. In C. Allen & D. Cummins (eds.), The evolution of mind (pp. 224–242). Oxford: Oxford University Press.
  • ––– (2000). Evolution and the problem of other minds. Journal of Philosophy, 97, 365–386.
  • ––– (2005). Comparative psychology meets evolutionary biology: Morgan's canon and cladistic parsimony. In L. Datson & G. Mitman (eds.), Thinking with Animals: New Perspectives on Anthropomorphism (pp. 85–99). New York: Columbia University Press.
  • ––– (2012). Anthropomorphism, Parsimony, and Common Ancestry. Mind & Language, 27(3), 229–238.
  • Soley, F. G., & Alvarado-Díaz, I. (2011). Prospective thinking in a mustelid? Eira barbara (Carnivora) cache unripe fruits to consume them once ripened. Naturwissenschaften, 8, 693–698.
  • Sorabji, R. (1993). Animal Minds and Human Morals: the origins of the Western debate. Ithaca, NY: Cornell University Press.
  • Spalding, D. A. (1872). Instinct: With original observations on young animals. MacMillan's Magazine, 27, 283–93.
  • Srinivasan, M. V. (2010). Honey Bees as a Model for Vision, Perception, and Cognition. Annual Review of Entomology, 55, 267–284.
  • Steiner, G. (2008). Animals and the Moral Community: Mental Life, Moral Status, and Kinship. New York: Columbia University Press.
  • Stoerig, P., & Cowey, A. (1997). Blindsight in man and monkey. Brain, 120, 535–559.
  • Stoerig, P., Zontanou, A., & Cowey, A. (2002). Aware or Unaware: Assessment of Cortical Blindness in Four Men and a Monkey. Cerebral Cortex, 12(6), 565–574.
  • Suddendorf, T., & Corballis, M. C. (1997). Mental time travel and the evolution of the human mind. Genetic, social, and general psychology monographs, 123(2), 133–167.
  • ––– (2007). The evolution of foresight: What is mental time travel, and is it unique to humans? Behavioral and Brain Sciences, 3, 299–312.
  • Sufka, K. J., Weldon, M., & Allen, C. (2009). The Case for Animal Emotions: Modeling Neuropsychiatric Disorders. In J. Bickle (ed.), The Oxford Handbook of Philosophy of Neuroscience (pp. 522–536). New York: Oxford University Press.
  • Terrace, H. S., & Son, L. K. (2009). Comparative metacognition. Current Opinion in Neurobiology, 19(1), 67–74.
  • Thomas, R. K. (2001). Lloyd Morgan's canon: A history of misrepresentation. Retrieved June 8, 2006.
  • Thorndike, E. L. (1911). Animal Intelligence. Darien, CT: Hafner.
  • Tononi, G. (2008). Consciousness as integrated information: a provisional manifesto. The Biological Bulletin, 215(3), 216–242.
  • Trestman, M. (2013). The Cambrian explosion and the origins of embodied cognition. Biological Theory, 8(1), 80–92.
  • ––– (2013). The modal breadth of consciousness. Philosophical Psychology, online first, in press.
  • Trout, J. D. (2001). The biological basis of speech: What to infer from talking to the animals. Psychological Review, 108(3), 523–549.
  • Tulving, E. (1985). Memory and consciousness. Canadian Psychology/Psychologie Canadienne, 26(1), 1–12.
  • Tye, M. (2000). Consciousness, Color, and Content. Cambridge, MA: MIT Press.
  • Varner, G. (1998). In Nature's Interests? New York: Oxford University Press.
  • ––– (2012). Personhood, Ethics, and Animal Cognition: Situating Animals in Hare's Two Level Utilitarianism. New York: Oxford University Press.
  • Velmans, M. (2013). The evolution of consciousness. Contemporary Social Science, 7(2), 117–138.
  • Wallace, A. R. (1867). The philosophy of birds' nests. Intellectual Observer, 11, 413–420.
  • Walters, E. T. (1996). Comparative and Evolutionary Aspects of Nociceptor Function. In C. Belmonte & F. Cervero (eds.), Neurobiology of Nociceptors (pp. 92–114). New York: Oxford University Press.
  • Watson, J. B. (1928). The Ways of Behaviorism. New York: Harper.
  • White, G. (1789). The Natural History of Selborne. London and New York: Dent/Dutton.
  • Wilcox, R. S., & Jackson, R. R. (1998). Cognitive abilities of araneophagic jumping spiders. In C. Kamil (ed.), Animal cognition in nature: The convergence of psychology and biology in laboratory and field (pp. 411–434). San Diego: Academic Press.
  • Wilkes, K. (1984). Is consciousness important? British Journal for the Philosophy of Science, 35, 223–243.
  • Wilson, M. D. (1995). Animal ideas. Proceedings and Addresses of the American Philosophical Association, 69, 7–25.
  • Wynne, C. (2004). Do Animals Think? Princeton, NJ: Princeton University Press.
  • Zahavi, D. (2011). Empathy and direct social perception. Review of Philosophy and Psychology, 2(3), 541–558.

Academic Tools

How to cite this entry.
Preview the PDF version of this entry at the Friends of the SEP Society.
Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO).
Enhanced bibliography for this entry at PhilPapers, with links to its database.

Other Internet Resources

Related Entries

animals, moral status of | behaviorism | Brentano, Franz | cognition, animal | consciousness | consciousness: and intentionality | consciousness: higher-order theories | consciousness: representational theories of | dualism | epiphenomenalism | folk psychology: as a theory | functionalism | intentionality | materialism: eliminative | mental causation | mind/brain identity theory | naturalism | neutral monism | other minds | physicalism


Colin Allen would like to acknowledge the assistance of Ronak Shah in preparing the 2009 revision of this entry.

Copyright © 2014 by
Colin Allen
Michael Trestman

Open access to the SEP is made possible by a world-wide funding initiative.
Please Read How You Can Help Keep the Encyclopedia Free