The Philosophy of Neuroscience
Over the past four decades, philosophy of science has grown increasingly “local”. Concerns have shifted from general features of scientific practice to concepts, issues, and puzzles specific to particular disciplines. Philosophy of neuroscience is one natural result. This emerging area was also spurred by remarkable growth in the neurosciences themselves. Cognitive and computational neuroscience continues to encroach directly on issues traditionally addressed within the humanities, including the nature of consciousness, action, knowledge, and normativity. Cellular, molecular, and behavioral neuroscience using animal models increasingly encroaches on cognitive neuroscience’s domain. Empirical discoveries about brain structure and function suggest ways that “naturalistic” programs might develop in detail, beyond the abstract philosophical considerations in their favor.
The literature has distinguished “philosophy of neuroscience” from “neurophilosophy” for two decades. The former concerns foundational issues within the neurosciences. The latter concerns application of neuroscientific concepts to traditional philosophical questions. Exploring various concepts of representation employed in neuroscientific theories is an example of the former. Examining implications of neurological syndromes for the concept of a unified self is an example of the latter. In this entry, we will develop this distinction further and discuss examples of both. Just as has happened in the field’s history, work in both of these areas is scattered throughout almost all sections below. Throughout, we try to specify which area a given landmark work falls into whenever this isn’t obvious.
One exciting aspect about working in philosophy of neuroscience or neurophilosophy is the continual element of surprise. Both fields depend squarely on developments in neuroscience, and one simply has no inkling what’s coming down the pike in that incredibly fast-moving science. Last year’s speculative fiction is this year’s scientific reality. But this feature makes an encyclopedia entry updated once every half-decade difficult to manage. The scientific details philosophers were reflecting on at past updates can now read as woefully dated. Yet one also wants to capture some history of the ongoing fields. Our solution to this dilemma has been to keep previous discussions, to reflect that history, but to add more recent scientific and philosophical updates, not only to sections of this entry added at later times, but also peppered through the earlier discussions. It’s not always a perfect solution, but it does preserve something of the history of the philosophy of neuroscience and neurophilosophy against the continual advances in the sciences these philosophical fields depend upon.
- 1. Before and After Neurophilosophy
- 2. Eliminative Materialism and Philosophy Neuralized
- 3. Neuroscience and Psychosemantics
- 4. Consciousness Explained?
- 5. Locating Cognitive Functions: From Lesion Studies to Functional Neuroimaging
- 6. A Result of the Co-evolutionary Research Ideology: Philosophy’s Emphasis on Cognitive and Computational Neuroscience
- 7. Developments in the Philosophy of Neuroscience
- 8. Developments over the Second Decade of the Twenty-First Century
- Bibliography
- Academic Tools
- Other Internet Resources
- Related Entries
1. Before and After Neurophilosophy
Historically, neuroscientific discoveries exerted little influence on the details of materialist philosophies of mind. The “neuroscientific milieu” of the past half-century has made it harder for philosophers to adopt substantive dualisms about mind. But even the “type-type” or “central state” identity theories that rose to brief prominence in the late 1950s (Place 1956; Smart 1959) drew upon few actual details of the emerging neurosciences. Recall the favorite early example of a psychoneural identity claim: “pain is identical to C-fiber firing”. The “C-fibers” turned out to be related to only a single aspect of pain transmission (Hardcastle 1997). Early identity theorists did not defend specific psychoneural identity hypotheses in any detail. Their “neuro” terms were admittedly placeholders for concepts from future neuroscience. Their arguments and motivations were philosophical, even if the ultimate justification of the program was held to be empirical.
The apology offered by early identity theorists for ignoring scientific details was that the neuroscience at that time was too nascent to provide any plausible identities. But potential identities were afoot. David Hubel and Torsten Wiesel’s (1962) electrophysiological demonstrations of the receptive field properties of visual neurons had been reported with great fanfare. Using their techniques, neurophysiologists began discovering neurons throughout visual cortex responsive to increasingly abstract features of visual stimuli: from edges to motion direction to colors to properties of faces and hands. More notably, Donald Hebb had published The Organization of Behavior (1949) more than a decade earlier. He had offered detailed explanations of psychological phenomena in terms of neural mechanisms and anatomical circuits. His psychological explananda included features of perception, learning, memory, and even emotional disorders. He offered these explanations as potential identities. (See the Introduction to his 1949.) One philosopher who did take note of some available neuroscientific detail at the time was Barbara Von Eckardt Klein (1975). She discussed the identity theory with respect to sensations of touch and pressure, and incorporated then-current hypotheses about neural coding of sensation modality, intensity, duration, and location as theorized by Mountcastle, Libet, and Jasper. Yet she was a glaring exception. By and large, available neuroscience at the time was ignored by both philosophical friends and foes of early identity theories.
Philosophical indifference to neuroscientific detail became “principled” with the rise and prominence of functionalism in the 1970s. The functionalists’ favorite argument was based on multiple realizability: a given mental state or event can be realized in a wide variety of physical types (Putnam 1967; Fodor 1974). Consequently, a detailed understanding of one type of realizing physical system (e.g., brains) will not shed light on the fundamental nature of mind. Psychology is thus autonomous from any science of one of its possible physical realizers (see the entry on multiple realizability in this Encyclopedia). Instead of neuroscience, scientifically-minded philosophers influenced by functionalism sought evidence and inspiration from cognitive psychology and artificial intelligence. These disciplines abstract away from underlying physical mechanisms and emphasize the “information-bearing” properties and capacities of representations (Haugeland 1985). At the same time, however, neuroscience was delving directly into cognition, especially learning and memory. For example, Eric Kandel (1976) proposed presynaptic mechanisms governing transmitter release rate as a cell-biological explanation of simple forms of associative learning. With Robert Hawkins (Hawkins and Kandel 1984) he demonstrated how cognitivist aspects of associative learning (e.g., blocking, second-order conditioning, overshadowing) could be explained cell-biologically by sequences and combinations of these basic forms implemented in higher neural anatomies. Working on the post-synaptic side, neuroscientists began unraveling the cellular mechanisms of long-term potentiation (LTP; Bliss and Lomo 1973). Physiological psychologists quickly noted its explanatory potential for various forms of learning and memory.[1] Yet few “materialist” philosophers paid any attention. Why should they? Most were convinced functionalists. They believed that the “implementation level” details might be important to the clinician, but were irrelevant to the theorist of mind.
A major turning point in philosophers’ interest in neuroscience came with the publication of Patricia Churchland’s Neurophilosophy (1986). The Churchlands (Patricia and Paul) were already notorious for advocating eliminative materialism (see the next section). In her (1986) book, Churchland distilled eliminativist arguments of the past decade, unified the pieces of the philosophy of science underlying them, and sandwiched the philosophy between a five-chapter introduction to neuroscience and a 70-page chapter on three then-current theories of brain function. She was unapologetic about her intent. She was introducing philosophy of science to neuroscientists and neuroscience to philosophers. Nothing could be more obvious, she insisted, than the relevance of empirical facts about how the brain works to concerns in the philosophy of mind. Her term for this interdisciplinary method was “co-evolution” (borrowed from biology). This method seeks resources and ideas from anywhere on the theory hierarchy above or below the question at issue. Standing on the shoulders of philosophers like Quine and Sellars, Churchland insisted that specifying some point where neuroscience ends and philosophy of science begins is hopeless because the boundaries are poorly defined. Neurophilosophers would pick and choose resources from both disciplines as they saw fit.
Three themes dominated Churchland’s philosophical discussion: developing an alternative to the logical empiricist theory of intertheoretic reduction; responding to property-dualistic arguments based on subjectivity and sensory qualia; and responding to anti-reductionist multiple realizability arguments. These projects remained central to neurophilosophy for more than a decade after Churchland’s book appeared. John Bickle (1998) extended the principal insight of Clifford Hooker’s (1981a,b,c) post-empiricist theory of intertheoretic reduction. He quantified key notions using a model-theoretic account of theory structure adapted from the structuralist program in philosophy of science (Balzer, Moulines, and Sneed 1987). He also made explicit a form of argument to draw ontological conclusions (cross-theoretic identities, revisions, or eliminations) from the nature of the intertheoretic reduction relations obtaining in specific cases. For example, it is routinely concluded that visible light, a theoretical posit of optics, is electromagnetic radiation within specified wavelengths, a theoretical posit of electromagnetism; in this case, a cross-theoretic ontological identity. It is also routine to conclude that phlogiston does not exist: an elimination of a kind from our scientific ontology. Bickle explicated the nature of the reduction relation in a specific case using a semi-formal account of “intertheoretic approximation” inspired by structuralist results.
Paul Churchland (1996) carried on the attack on property-dualistic arguments for the irreducibility of conscious experience and sensory qualia. He argued that acquiring some knowledge of existing sensory neuroscience increases one’s ability to “imagine” or “conceive of” a comprehensive neurobiological explanation of consciousness. He defended this conclusion using a characteristically imaginative thought-experiment based on the history of optics and electromagnetism.
Finally, criticisms of the multiple realizability argument flourish—and are challenged—to the present day. Although the multiple realizability argument remains influential among nonreductive physicalists, it no longer commands the near-universal acceptance it once did. Replies to the multiple realizability argument based on neuroscientific details have appeared. For example, William Bechtel and Jennifer Mundale (1999) argue that neuroscientists use psychological criteria in brain mapping studies. This fact undercuts the likelihood that psychological kinds are multiply realized (for a review of recent developments see the entry on multiple realizability in this Encyclopedia).
2. Eliminative Materialism and Philosophy Neuralized
Eliminative materialism (EM), in the form advocated most aggressively by Paul and Patricia Churchland, is the conjunction of two claims. First, our common sense “belief-desire” conception of mental events and processes, our “folk psychology”, is a false and misleading account of the causes of human behavior. Second, like other false conceptual frameworks from both folk theory and the history of science, it will be replaced by, rather than smoothly reduced or incorporated into, a future neuroscience. The Churchlands characterized folk psychology as the collection of common homilies invoked (mostly implicitly) to explain human behavior causally. You ask why Marica is not accompanying me this evening. I reply that our grandson needed sitting. You nod sympathetically. You understand my explanation because you share with me a generalization that relates beliefs about taking care of grandchildren, desires to help daughters and to spend time with grandchildren compared to enjoying a night out, and so on. This is just one of a huge collection of homilies about the causes of human behavior that EM claims to be flawed beyond potential revision. Although this example involves only beliefs and desires, folk psychology contains an extensive repertoire of propositional attitudes in its explanatory nexus: hopes, intentions, fears, imaginings, and more. EMists predict that a future, genuinely scientific psychology or neuroscience will eventually eschew all of these, and replace them with incommensurable states and dynamics of neuro-cognition.
EM is physicalist in one traditional philosophical sense. It postulates that some future brain science will be ultimately the correct account of (human) behavior. It is eliminative in predicting the future rejection of folk psychological kinds from our post-neuroscientific ontology. EM proponents often employ scientific analogies (Feyerabend 1963; Paul Churchland 1981). Oxidative reactions as characterized within elemental chemistry bear no resemblance to phlogiston release. Even the “direction” of the two processes differs: oxygen is gained when an object burns (or rusts), while phlogiston was said to be lost. The result of this theoretical change was the elimination of phlogiston from our scientific ontology. There is no such thing. For the same reasons, according to EM, continuing development in neuroscience will reveal that there are no such things as beliefs, desires, and the rest of the propositional attitudes as characterized by common sense.
Here we focus only on the way that neuroscientific results have shaped the arguments for EM. Surprisingly, only one argument has been strongly influenced. (Most arguments for EM stress failures of folk psychology as an explanatory theory of behavior.) This argument is based on a development in cognitive and computational neuroscience that might provide a genuine alternative to the representations and computations implicit in folk psychological generalizations. Many eliminative materialists assume that folk psychology is committed to propositional representations and computations over their contents that mimic logical inferences (Paul Churchland 1981; Stich 1983; Patricia Churchland 1986).[2] Even though discovering an alternative to this view has been an eliminativist goal for some time, some eliminativists hold that neuroscience only began delivering this alternative over the past thirty years. Points in, and trajectories through, vector spaces, interpreted as synaptic events and neural activity patterns in biological and artificial neural networks, are the key features of this alternative. The differences between these notions of cognitive representation and transformation, and those of the propositional attitudes of folk psychology, provide the basis for one argument for EM (Paul Churchland 1987). However, this argument will be opaque to those with no background in cognitive and computational neuroscience, so we present a few details. With these details in place, we will return to this argument for EM below.
At one level of analysis, the basic computational element of a neural network, biological or artificial, is the nerve cell, or neuron. Mathematically, neurons can be represented as simple computational devices, transforming inputs into outputs. Both inputs and outputs reflect biological variables. For our discussion, we assume that neuronal inputs are frequencies of action potentials (neuronal “spikes”) in the axons whose terminal branches synapse onto the neuron in question, while neuronal output is the frequency of action potentials generated in its axon after processing the inputs. A neuron thereby computes its total input, usually treated mathematically as the sum, over all input lines, of the signal strength on each line times the synaptic weight on that line. It then computes a new activation state based on its total input and current activation state, and a new output state based on its new activation value. The neuron’s output state is transmitted as a signal strength to whatever neurons its axon synapses on. The output state reflects systematically the neuron’s new activation state.[3]
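To make the arithmetic concrete, here is a minimal sketch in Python of the neuron model just described. The tanh squashing function, the particular numbers, and the collapsing of the activation-state and output-state steps into a single function are our illustrative assumptions, not details from the models under discussion.

```python
import numpy as np

def neuron_output(input_rates, weights, squash=np.tanh):
    """Toy neuron: total input is the weighted sum of input spike rates;
    the output rate is a squashed function of that total."""
    total_input = np.dot(input_rates, weights)  # sum of signal strength x synaptic weight
    return squash(total_input)                  # new activation state -> output state

rates = np.array([0.9, 0.1, 0.4])       # spike frequencies on three input lines
weights = np.array([0.5, -1.2, 0.8])    # synaptic weights on those lines
print(neuron_output(rates, weights))
```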
Analyzed in this fashion, both biological and artificial neural networks are interpreted naturally as vector-to-vector transformers. The input vector consists of values reflecting activity patterns in axons synapsing on the network’s neurons from outside (e.g., from sensory transducers or other neural networks). The output vector consists of values reflecting the activity patterns generated in the network’s neurons that project beyond the net (e.g., to motor effectors or other neural networks). Given that each neuron’s activity depends partly upon its total input, and its total input depends partly on synaptic weights (e.g., presynaptic neurotransmitter release rate, number and efficacy of postsynaptic receptors, availability of enzymes in the synaptic cleft), the capacity of biological networks to change their synaptic weights makes them plastic vector-to-vector transformers. In principle, a biological network with plastic synapses can come to implement any vector-to-vector transformation that its composition permits (number of input units, output units, processing layers, recurrency, cross-connections, etc.) (discussed in Paul Churchland 1987, with references to the primary scientific literature).
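Continuing the sketch above, a layer of such toy neurons can be written as a weight matrix acting on an input vector; changing the weights changes the transformation the network implements, which is what “plasticity” amounts to at this level of description. Again, the numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 3))      # synaptic weights: 3 input lines onto 2 neurons

def layer(x, W):
    """A layer of toy neurons as a vector-to-vector transformer."""
    return np.tanh(W @ x)

x = np.array([0.9, 0.1, 0.4])    # input activity vector
y_before = layer(x, W)           # output activity vector
W_changed = W + 0.5              # altered synaptic weights...
y_after = layer(x, W_changed)    # ...a different output vector for the same input
```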
The anatomical organization of the cerebellum provides a clear example of a network amenable to this computational interpretation. Consider Figure 1. The cerebellum is the bulbous convoluted structure dorsal to the brainstem. A variety of studies (behavioral, neuropsychological, single-cell electrophysiological) implicate this structure in motor integration and fine motor coordination. Mossy fibers (axons) from neurons outside the cerebellum synapse on cerebellar granule cells, whose axons in turn give rise to the parallel fibers. Activity patterns across the collection of mossy fibers (frequency of action potentials per time unit in each fiber projecting into the cerebellum) provide values for the input vector. Parallel fibers make multiple synapses on the dendritic trees and cell bodies of cerebellar Purkinje neurons. Each Purkinje neuron “sums” its post-synaptic potentials (PSPs) and emits a train of action potentials down its axon based (partly) on its total input and previous activation state. Purkinje axons project outside the cerebellum. The network’s output vector is thus the ordered values representing the pattern of activity generated in each Purkinje axon. Changes to the efficacy of individual synapses between the parallel fibers and the Purkinje neurons alter the resulting PSPs in Purkinje axons, generating different axonal spiking frequencies. Computationally, this amounts to a different output vector to the same input activity pattern—plasticity.[4]
This interpretation puts the useful mathematical resources of dynamical systems into the hands of computational neuroscientists. Vector spaces are an example. Learning can then be characterized fruitfully in terms of changes in synaptic weights in the network and subsequent reduction of error in network output. (This approach to learning goes back to Hebb 1949, although the vector-space interpretation was not part of Hebb’s account.) A useful representation of this account uses a synaptic weight-error space. One dimension represents the global error in the network’s output to a given task, and all other dimensions represent the weight values of individual synapses in the network. Consider Figure 2. Points in this multi-dimensional state space represent the global performance error correlated with each possible collection of synaptic weights in the network. As the weights change with each performance, in accordance with a biologically-inspired learning algorithm, the global error of network performance continually decreases. The changing synaptic weights across the network with each training episode reduce the total error of the network’s output vector, compared to the desired output vector for the input vector. Learning is represented as synaptic weight changes correlated with a descent along the error dimension in the space (Churchland and Sejnowski 1992).

Representations (concepts) can be portrayed as partitions in multi-dimensional vector spaces. One example is a neuron activation vector space. See Figure 3. A graph of such a space contains one dimension for the activation value of each neuron in the network (or some specific subset of the network’s neurons, such as those in a specific layer). A point in this space represents one possible pattern of activity in all neurons in the network. Activity patterns generated by input vectors that the network has learned to group together will cluster around a (hyper-) point or subvolume in the activity vector space. Any input pattern sufficiently similar to this group will produce an activity pattern lying in geometrical proximity to this point or subvolume. Paul Churchland (1989) argued that this interpretation of network activity provided a quantitative, neurally-inspired basis for prototype theories of concepts developed in late-twentieth century cognitive psychology.
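Here is a minimal sketch of the weight-error picture just described, assuming a one-layer tanh network trained by a simple delta rule. The text leaves the “biologically-inspired learning algorithm” unspecified, so the update rule below merely stands in for it: each training episode nudges the network’s point in weight space downhill along the error dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))           # one point in synaptic weight space
x = rng.normal(size=4)                # a training input vector
target = np.array([0.2, -0.5, 0.9])   # desired output vector

learning_rate = 0.1
for episode in range(200):
    y = np.tanh(W @ x)                        # network's output vector
    error = y - target
    global_error = 0.5 * np.sum(error ** 2)   # the error dimension of weight-error space
    # Delta-rule update: move the weight point downhill along the error
    # surface (using tanh'(u) = 1 - tanh(u)**2).
    W -= learning_rate * np.outer(error * (1 - y ** 2), x)
```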
Using this theoretical development, and in the realm of neurophilosophy, Paul Churchland (1987, 1989) offered a novel, neuroscientifically-inspired argument for EM. According to the interpretation of neural networks just sketched, activity vectors are the central kind of representations, and vector-to-vector transformations are the central kind of computations, in the brain. This contrasts sharply with the propositional representations and logical/semantic computations postulated by folk psychology. Vectorial content, an ordered sequence of real numbers, is unfamiliar and alien to common sense. This cross-theoretic conceptual difference is at least as great as that between oxidative and phlogiston concepts, or kinetic-corpuscular and caloric fluid heat concepts. Phlogiston and caloric fluid are two “parade” examples of kinds eliminated from our scientific ontology due to the nature of the intertheoretic relation obtaining between the theories with which they are affiliated and the theories that replaced them. The structural and dynamic differences between the folk psychological and then-emerging cognitive neuroscientific kinds suggested that the theories affiliated with the latter will likewise replace the theory affiliated with the former. But this claim was the key premise of the eliminativist argument based on predicted intertheoretic relations. And with the rise of neural networks and parallel distributed processing, intertheoretic contrasts with folk-psychological explanatory kinds were no longer just an eliminativist’s future hope. Computational and cognitive neuroscience was delivering an alternative kinematics for cognition, one that provided no structural analogue for folk psychology’s propositional attitudes or logic-like computations over propositional contents.
Certainly the vector-space alternatives of this interpretation of neural networks are alien to folk psychology. But do they justify EM? Even if the propositional contents of folk-psychological posits find no analogues in one theoretical development in cognitive and computational neuroscience (one that was hot three decades ago), there might be other aspects of cognition that folk psychology gets right. Within the scientific realism that informed early neurophilosophy, concluding that a cross-theoretic identity claim is true (e.g., folk psychological state F is identical to neural state N) or that an eliminativist claim is true (there is no such thing as folk psychological state F) depended on the nature of the intertheoretic reduction obtaining between the theories affiliated with the posits in question (Hooker 1981a,b,c; Churchland 1986; Bickle 1998). But the underlying account of intertheoretic reduction also recognized a spectrum of possible reductions, ranging from relatively “smooth” through “significantly revisionary” to “extremely bumpy”.[5] Might the reduction of folk psychology to a “vectorial” computational neuroscience occupy some middle ground between the “smooth” and “bumpy” intertheoretic reduction endpoints, and hence suggest a “revisionary” conclusion? The reduction of classical equilibrium thermodynamics to statistical mechanics provided a potential analogy here. John Bickle (1992, 1998, chapter 6) argued on empirical grounds that such an outcome is likely. He specified conditions on “revisionary” reductions from historical examples and suggested that these conditions obtain between folk psychology and cognitive neuroscience as the latter develops. In particular, folk psychology appears to have gotten right the grossly-specified functional profile of many cognitive states, especially those closely related to sensory inputs and behavioral outputs. It also appears to get right the “intentionality” of many cognitive states—the object that the state is of or about—even though cognitive neuroscience eschews its implicit linguistic explanation of this feature. Revisionary physicalism predicts significant conceptual change to folk psychological concepts, but denies total elimination of the caloric fluid-phlogiston variety.
The philosophy of science is another area that vector space interpretations of neural network activity patterns have impacted. In the Introduction to his (1989) book, A Neurocomputational Perspective, Paul Churchland asserted, distinctively neurophilosophically, that it will soon be impossible to do serious work in the philosophy of science without drawing on empirical work in the brain and behavioral sciences. To justify this claim, in Part II of the book he suggested neurocomputational reformulations of key concepts from the philosophy of science. At the heart of his reformulations is a neurocomputational account of the structure of scientific theories (1989: chapter 9). Problems with the orthodox “sets-of-sentences” view of scientific theories have been well-known since the 1960s. Churchland advocated replacing the orthodox view with one inspired by the “vectorial” interpretation of neural network activity. Representations implemented in neural networks (as sketched above) compose a system that corresponds to important distinctions in the external environment (distinctions not explicitly represented as such within the input corpus) and that allows the trained network to respond to inputs in a fashion that continually reduces error. According to Churchland, these are functions of theories. Churchland was bold in his assertion: an individual’s theory-of-the-world is a specific point in that individual’s error-synaptic weight vector space. It is a configuration of synaptic weights that partitions the individual’s activation vector space into subdivisions that reduce future error messages to both familiar and novel inputs. (Consider again Figure 2 and Figure 3.) This reformulation invites an objection, however. Churchland boasts that his theory of theories is preferable to existing alternatives to the orthodox “sets-of-sentences” account—for example, the semantic view (Suppe 1974; van Fraassen 1980)—because his is closer to the “buzzing brains” that use theories. But as Bickle (1993) noted, neurocomputational models based on the mathematical resources described above are a long way into the realm of mathematical abstraction. They are little more than a novel (albeit suggestive) application of the mathematics of quasi-linear dynamical systems to simplified schemata of brain circuitries. Neurophilosophers owe some account of identifications across ontological categories (vector representations and transformations to what?) before the philosophy of science community will treat theories as points in high-dimensional state spaces implemented in biological neural networks. (There is an important methodological assumption lurking in Bickle’s objection, however, which we will discuss toward the end of the next paragraph.)
Churchland’s neurocomputational reformulations of other scientific and epistemological concepts build on this account of theories. He sketches “neuralized” accounts of the theory-ladenness of perception, the nature of concept unification, the virtues of theoretical simplicity, the nature of Kuhnian paradigms, the kinematics of conceptual change, the character of abduction, the nature of explanation, and even moral knowledge and epistemological normativity. Conceptual redeployment, for example, is the activation of an already-existing prototype representation—the centerpoint or region of a partition of a high-dimensional vector space in a trained neural network—by a novel type of input pattern. Obviously, we can’t here do justice to Churchland’s many and varied attempts at reformulation. We urge the intrigued reader to examine his suggestions in their original form. But a word about philosophical methodology is in order. Churchland is not attempting “conceptual analysis” in anything resembling its traditional philosophical sense. Neither, typically, are neurophilosophers in any of their reformulation projects. (This is why a discussion of neurophilosophical reformulations fits with a discussion of EM.) There are philosophers who take the discipline’s ideal analyses to be a relatively simple set of necessary and sufficient conditions, expressed in non-technical natural language, governing the application of important concepts (like justice, knowledge, theory, or explanation). These analyses should square, to the extent possible, with pretheoretical usage. Ideally, they should preserve synonymy. Other philosophers view this ideal as sterile, misguided, and perhaps deeply mistaken about the underlying structure of human knowledge (Ramsey 1992). Neurophilosophers tend to reside in the latter group. Those who dislike philosophical speculation about the promise and potential of developing science to reformulate (“reform-ulate”) traditional philosophical concepts have probably already discovered that neurophilosophy is not for them. But the familiar charge that neurocomputational reformulations of the sort Churchland attempts are “philosophically uninteresting” or “irrelevant” because they fail to provide “analyses” of theory, explanation, and the like will fall on deaf ears among many contemporary “naturalistic” philosophers, who have by and large given up on traditional philosophical “analysis”.
Before we leave the topic of proposed neurophilosophical applications of this theoretical development from “neural networks”-style cognitive/computational neuroscience, one final point of actual scientific detail bears mention. This approach did not remain state-of-the-art computational neuroscience for long. Many neural modelers soon gave it up. Compartmental modeling enabled computational neuroscientists to mimic activity in and interactions between patches of neuronal membrane (Bower and Beeman 1995). This approach permitted modelers to control and manipulate a variety of subcellular factors that determine action potentials per time unit, including the topology of membrane structure in individual neurons, variations in ion channels across membrane patches, and field properties of post-synaptic potentials depending on the location of the synapse on the dendrite or soma. By the mid-1990s modelers had begun to “custom build” the neurons in their target circuitry, while increasingly powerful computer hardware still allowed them to study circuit properties of the modeled networks. For these reasons, many serious computational neuroscientists switched to working at a level of analysis that treats neurons as structured rather than simple computational devices. With compartmental modeling, vector-to-vector transformations came to be far less useful in serious neurobiological models, replaced by differential equations representing ion currents across patches of neural membrane. Far more biological detail came to be captured in the resulting models than “connectionist” models permitted. This methodological change across computational neuroscience meant that a neurophilosophy guided by “connectionist” resources no longer drew from the state of the art of the scientific field.
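For contrast with the vectorial picture, here is a minimal sketch of the kind of equation compartmental modelers solve: a single passive membrane patch with only a leak current, integrated by Euler’s method. Real compartmental models, such as those built in the GENESIS simulator described by Bower and Beeman (1995), couple many such compartments and add active, voltage-gated currents; all parameter values here are illustrative.

```python
# One passive compartment: C_m * dV/dt = -(V - E_leak)/R_m + I_inj
C_m, R_m, E_leak = 1.0, 10.0, -65.0   # nF, MOhm, mV (illustrative values)
dt, t_max = 0.01, 50.0                # time step and duration in ms
V, trace = E_leak, []
for step in range(int(t_max / dt)):
    t = step * dt
    I_inj = 0.5 if 10.0 <= t <= 40.0 else 0.0     # nA current pulse
    dV = (-(V - E_leak) / R_m + I_inj) / C_m      # membrane equation (Euler step)
    V += dt * dV
    trace.append(V)                               # voltage trajectory over time
```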
Philosophy of science and scientific epistemology were not the only areas where neurophilosophers urged the relevance of neuroscientific discoveries for traditionally philosophical topics. A decade after Neurophilosophy’s publication, Kathleen Akins (1996) argued that a “traditional” view of the senses underlies a variety of sophisticated “naturalistic” programs about intentionality. (She cites the Churchlands, Daniel Dennett, Fred Dretske, Jerry Fodor, David Papineau, Dennis Stampe, and Kim Sterelny as examples.) But then-recent neuroscientific work on the mechanisms and coding strategies implemented by sensory receptors shows that this traditional view is mistaken. The traditional view holds that sensory systems are “veridical” in at least three ways. (1) Each signal in the system correlates with a small range of properties in the external (to the body) environment. (2) The structure in the relevant external relations that the receptors are sensitive to is preserved in the structure of the internal relations among the resulting sensory states. And (3) the sensory system reconstructs faithfully, without fictive additions or embellishments, the external events. Using then-recent neurobiological discoveries about response properties of thermal receptors in the skin (i.e., “thermoreceptors”) as an illustration, Akins showed that sensory systems are “narcissistic” rather than “veridical”. All three traditional assumptions are violated. These neurobiological details and their philosophical implications open novel questions for the philosophy of perception and for the appropriate foundations for naturalistic projects about intentionality. Armed with the known neurophysiology of sensory receptors, our “philosophy of perception” or account of “perceptual intentionality” will no longer focus on the search for correlations between states of sensory systems and “veridically detected” external properties. This traditional philosophical (and scientific!) project rests upon a mistaken “veridicality” view of the senses. Neuroscientific knowledge of sensory receptor activity also shows that sensory experience does not serve the naturalist well as a “simple paradigm case” of an intentional relation between representation and world. Once again, available scientific detail showed the naivety of some traditional philosophical projects.
Focusing on the anatomy and physiology of the pain transmission system, Valerie Hardcastle (1997) urged a similar negative implication for a popular methodological assumption. Pain experiences have long been philosophers’ favorite cases for analysis and theorizing about conscious experiences generally. Nevertheless, every position about pain experiences has been defended: eliminativism, a variety of objectivist views, relational views, and subjectivist views. Why so little agreement, despite agreement that pain experiences are the place to start an analysis or theory of consciousness? Hardcastle urged two answers. First, philosophers tend to be uninformed about the neuronal complexity of our pain transmission systems, and build their analyses or theories on the outcome of a single component of a multi-component system. Second, even those who understand some of the underlying neurobiology of pain tend to advocate gate-control theories.[6] But the best existing gate-control theories are vague about the neural mechanisms of the gates. Hardcastle instead proposed a dissociable dual system of pain transmission, consisting of a pain sensory system closely analogous in its neurobiological implementation to other sensory systems, and a descending pain inhibitory system. She argued that this dual system is consistent with neuroscientific discoveries and accounts for all the pain phenomena that have tempted philosophers toward particular (but limited) theories of pain experience. The neurobiological uniqueness of the pain inhibitory system, contrasted with the mechanisms of other sensory modalities, renders pain processing atypical. In particular, the pain inhibitory system dissociates pain sensation from stimulation of nociceptors (pain receptors). Hardcastle concluded from the neurobiological uniqueness of pain transmission that pain experiences are atypical conscious events, and hence not a good place to start theorizing about or analyzing the general type.
3. Neuroscience and Psychosemantics
Developing and defending theories of content is a central topic in contemporary philosophy of mind. A common desideratum in this debate is a theory of cognitive representation consistent with a physical or naturalistic ontology. We’ll here describe a few contributions neurophilosophers have made to this project.
When one perceives or remembers that he is out of coffee, his brain state possesses intentionality or “aboutness”. The percept or memory is about one’s being out of coffee; it represents one as being out of coffee. The representational state has content. A psychosemantics seeks to explain what it is for a representational state to be about something, to provide an account of how states and events can have specific representational content. A physicalist psychosemantics seeks to do this using resources of the physical sciences exclusively. Neurophilosophers have contributed to two types of physicalist psychosemantics: the Functional Role approach and the Informational approach. For a description of these and other theories of mental content, see the entries on causal theories of mental content, mental representation, and teleological theories of mental content.
The core claim of a functional role semantics is that a representation has its specific content in virtue of relations it bears to other representations. Its paradigm application is to concepts of truth-functional logic, like the conjunctive “and” or disjunctive “or”. A physical event instantiates the “and” function just in case it maps two true inputs onto a single true output. Thus it is the relations an expression bears to others that give it the semantic content of “and”. Proponents of functional role semantics propose similar analyses for the content of all representations (Block 1995). A physical event represents birds, for example, if it bears the right relations to events representing feathers and others representing beaks. By contrast, informational semantics ascribes content to a state depending upon the causal relations obtaining between the state and the object it represents. A physical state represents birds, for example, just in case an appropriate causal relation obtains between it and birds. At the heart of informational semantics is a causal account of information (Dretske 1981, 1988). Red spots on a face carry the information that one has measles because the red spots are caused by the measles virus. A common criticism of informational semantics holds that mere causal covariation is insufficient for representation, since information (in the causal sense) is by definition always veridical while representations can misrepresent. A popular solution to this challenge invokes a teleological analysis of “function”. A brain state represents X by virtue of having the function of carrying information about X (Dretske 1988). These two approaches do not exhaust the popular options for a psychosemantics, but they are the ones to which neurophilosophers have most contributed.
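The core claim about “and” can be put in a few lines of Python (our illustrative gloss, not an example from the literature): whether a device instantiates the “and” function depends only on its input-output profile, not on its physical makeup, so two quite different “devices” can share the same functional role.

```python
from itertools import product

def instantiates_and(device):
    """A device instantiates the 'and' function iff it maps two true inputs
    to a true output and everything else to false, whatever its innards."""
    return all(bool(device(p, q)) == (p and q)
               for p, q in product([True, False], repeat=2))

logic_gate = lambda p, q: p and q    # one realization: Boolean circuitry
multiplier = lambda p, q: p * q      # a physically different one: arithmetic
print(instantiates_and(logic_gate), instantiates_and(multiplier))  # True True
```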
Paul Churchland’s allegiance to functional role semantics goes back to his earliest views about the semantics of terms in a language. In his (1979) book, he insisted that the semantic identity (content) of a term derives from its place in the network of sentences of the entire language. The functional economies envisioned by early functional role semanticists were networks with nodes corresponding to the objects and properties denoted by expressions in a language. Thus one node, appropriately connected, might represent birds, another feathers, and another beaks. Activation of one of these would tend to spread activation to the others. As “connectionist” neural network modeling developed (as discussed in the previous section), alternatives arose to this one-representation-per-node “localist” approach. By the time Churchland (1989) provided a neuroscientific elaboration of functional role semantics for cognitive representations generally, he too had abandoned the “localist” interpretation. Instead, he offered a “state-space semantics”.
We saw in the previous section how (vector) state spaces provide an interpretation for activity patterns in neural networks, both biological and artificial. A state-space semantics for cognitive representations is a species of a functional role semantics because the individuation of a particular state depends upon the relations obtaining between it and other states. A representation is a point in an appropriate state space, and points (or subvolumes) in a space are individuated by their relations to other points (locations, geometrical proximity). Paul Churchland (1989, 1995) illustrated a state-space semantics for neural states by appealing to sensory systems. One popular theory in sensory neuroscience of how the brain codes for sensory qualities (like color) is the opponent process account (Hardin 1988). Churchland (1995) describes a three-dimensional activation vector state-space in which every color perceivable by humans is represented as a point (or subvolume). Each dimension corresponds to activity rates in one of three classes of photoreceptors present in the human retina and their efferent paths: the red-green opponent pathway, yellow-blue opponent pathway, and black-white (contrast) opponent pathway. Photons striking the retina are transduced by photoreceptors, producing an activity rate in each of the segregated pathways. A represented color is hence a triplet of neuronal activation frequency rates. As an illustration, consider again Figure 3. Each dimension in that three-dimensional space will represent average frequency of action potentials in the axons of one class of ganglion cells projecting out of the retina. Each color perceivable by humans will be a region of that space. For example, an orange stimulus produces a relatively low level of activity in both the red-green and yellow-blue opponent pathways (x-axis and y-axis, respectively), and middle-range activity in the black-white (contrast) opponent pathway (z-axis). Pink stimuli, on the other hand, produce low activity in the red-green opponent pathway, middle-range activity in the yellow-blue opponent pathway, and high activity in the black-white (contrast) opponent pathway.[7] The location of each color in the space generates a “color solid”. Location on the solid, and geometrical proximity between these locations, reflect structural similarities between the perceived colors. Human gustatory representations are points in a four-dimensional state space, with each dimension coding for activity rates generated by gustatory stimuli in each type of taste receptor (sweet, salty, sour, and bitter) and their segregated efferent pathways. When implemented in a neural network with structural, and hence computational, resources as vast as the human brain’s, the state space approach to psychosemantics generates a theory of content for a huge number of cognitive states.[8]
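As a sketch of the color state space just described: the activation triplets below are invented, with only their low/middle/high ordering following the text, and geometrical proximity in the space stands in for perceived similarity between colors.

```python
import numpy as np

# Axes: (red-green, yellow-blue, black-white) opponent-pathway activity rates.
# The numbers are invented; only their low/middle/high ordering follows the text.
colors = {
    "orange": np.array([0.2, 0.2, 0.5]),   # low, low, middle
    "pink":   np.array([0.2, 0.5, 0.9]),   # low, middle, high
}

def dissimilarity(c1, c2):
    """Geometrical distance in activation space as a stand-in for
    perceived color difference."""
    return float(np.linalg.norm(colors[c1] - colors[c2]))

print(dissimilarity("orange", "pink"))
```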
Jerry Fodor and Ernest LePore (1992) raised an important challenge to Churchland’s psychosemantics. Location in a state space alone seems insufficient to fix a state’s representational content. Churchland never explains why a point in a three-dimensional state space represents a color, as opposed to any other quality, object, or event that varies along three dimensions.[9] So Churchland’s account achieves its explanatory power by the interpretation imposed on the dimensions. Fodor and LePore alleged that Churchland never specified how a dimension comes to represent, e.g., degree of saltiness, as opposed to yellow-blue wavelength opposition. One obvious answer appeals to the stimuli that form the “external” inputs to the neural network in question. Then, for example, the individuating conditions on neural representations of colors are that opponent processing neurons receive input from a specific class of photoreceptors. The latter in turn have electromagnetic radiation (of a specific portion of the visible spectrum) as their activating stimuli. However, this appeal to “external” stimuli as the ultimate individuating conditions for representational content makes the resulting approach a version of informational semantics. Is this approach consonant with other neurobiological details?
The neurobiological paradigm for informational semantics is the feature detector: one or more neurons that are (i) maximally responsive to a particular type of stimulus, and (ii) have the function of indicating the presence of that stimulus type. Examples of such stimulus-types for visual feature detectors include high-contrast edges, motion direction, and colors. A favorite feature detector among philosophers is the alleged fly detector in the frog. Lettvin et al. (1959) identified cells in the frog retina that responded maximally to small shapes moving across the visual field. The idea that these cells’ activity functioned to detect flies rested upon knowledge of the frogs’ diet. (Bechtel 1998 provides a useful discussion.) Using experimental techniques ranging from single-cell recording to sophisticated functional imaging, neuroscientists discovered a host of neurons that are maximally responsive to a variety of complex stimuli. However, establishing condition (ii) on a feature detector is much more difficult. Even some paradigm examples have been called into question. David Hubel and Torsten Wiesel’s (1962) Nobel Prize-winning work establishing the receptive fields of neurons in striate (visual) cortex is often interpreted as revealing cells whose function is edge detection. However, Lehky and Sejnowski (1988) challenged this interpretation. They trained an artificial neural network to distinguish the three-dimensional shape and orientation of an object from its two-dimensional shading pattern. Their network incorporates many features of visual neurophysiology. Nodes in the trained network turned out to be maximally responsive to edge contrasts, but did not appear to have the function of edge detection. (See Churchland and Sejnowski 1992 for a review.)
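Condition (i) is easy to model. Here is a toy tuning curve on which a cell fires maximally at its preferred stimulus; the Gaussian form and all parameters are our assumptions. Note that nothing in the code captures condition (ii), the cell’s having the function of indicating that stimulus, which is precisely why (ii) is the harder condition to establish.

```python
import numpy as np

def firing_rate(orientation, preferred=90.0, width=20.0, max_rate=50.0):
    """Toy orientation tuning curve: response is maximal at the preferred stimulus."""
    return max_rate * np.exp(-0.5 * ((orientation - preferred) / width) ** 2)

print(firing_rate(90.0))   # maximal response: condition (i) satisfied here
print(firing_rate(45.0))   # much weaker response to a non-preferred edge
```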
Kathleen Akins (1996) offered a different neurophilosophical challenge to informational semantics and its affiliated feature-detection view of sensory representation. We saw in the previous section that Akins argued that the physiology of thermoreception violates three necessary conditions on “veridical” representation. From this fact she raised doubts about looking for feature-detecting neurons to ground a psychosemantics generally, including for thought contents. Human thoughts about flies, for example, are sensitive to numerical distinctions between particular flies and the particular locations they can occupy. But the ends of frog nutrition are well served without a representational system sensitive to such ontological niceties. Whether a fly seen now is numerically identical to one seen a moment ago need not, and perhaps cannot, figure into the frog’s feature detection repertoire. Akins’ critique cast doubt on whether details of sensory transduction will scale up to provide an adequate unified psychosemantics for all concepts. It also raised new questions for human intentionality. How do we get from activity patterns in “narcissistic” sensory receptors, keyed not to “objective” environmental features but rather only to effects of the stimuli on the patch of tissue innervated, to human ontologies replete with enduring objects with stable configurations of properties and relations, types and their tokens (as the “fly-thought” example presented above reveals), and the rest? And how did the development of a stable, rich ontology confer survival advantages to human ancestors?
4. Consciousness Explained?
Consciousness re-emerged over the past three decades as a focus of research in philosophy of mind and in the cognitive and brain sciences. Instead of ignoring it, many physicalists sought to explain it (Dennett 1991). Here we focus exclusively on ways that neuroscientific discoveries have impacted philosophical debates about the nature of consciousness and its relation to physical mechanisms. (See links to other entries in this encyclopedia below in Related Entries for broader discussions about consciousness and physicalism.)
Thomas Nagel (1974) argued famously that conscious experience is subjective, and thus permanently recalcitrant to objective scientific understanding. He invited us to ponder “what it is like to be a bat” and urged the intuitive judgment that no amount of physical-scientific knowledge, including neuroscientific, supplies a complete answer. Nagel’s intuition pump has generated extensive philosophical discussion. At least two well-known replies made direct appeal to neurophysiology. John Biro (1991) suggested that part of the intuition pumped by Nagel, that bat experience is substantially different from human experience, presupposes systematic relations between physiology and phenomenology. Kathleen Akins (1993a) delved deeper into existing knowledge of bat physiology and reported much that is pertinent to Nagel’s question. She argued that many of the questions about bat subjective experience that we still consider open hinge on questions that remain unanswered about neuroscientific details. One example of the latter is the function of various cortical activity profiles in the active bat.
David Chalmers (1996) famously argued that any possible brain-process account of consciousness will leave open an “explanatory gap” between the brain process and properties of the conscious experience.[10] This is because no brain-process theory can answer the “hard” question: Why should that particular brain process give rise to that particular conscious experience? We can always imagine (“conceive of”) a universe populated by creatures having those brain processes but completely lacking conscious experience. A theory of consciousness requires an explanation of how and why some brain process causes a conscious experience, replete with all the features we experience. The fact that the hard question remains unanswered shows that we will probably never get a complete explanation of consciousness at the level of neural mechanism. Paul and Patricia Churchland (1997) offered the following diagnosis and reply. Chalmers offers a conceptual argument, based on our ability to imagine creatures possessing active brains like ours but wholly lacking in conscious experiences. But the more one learns about how the brain produces conscious experience—and such a literature has emerged (for some early work, see Gazzaniga 1995)—the harder it becomes to imagine a universe consisting of creatures with brain processes like ours but lacking consciousness. This is not just bare assertion. The Churchlands appeal to some neurobiological detail. For example, Paul Churchland (1995) develops a neuroscientific account of consciousness based on recurrent connections between thalamic nuclei (particularly “diffusely projecting” nuclei like the intralaminar nuclei) and the cortex.[11] Churchland argues that thalamocortical recurrency accounts for the selective features of consciousness, for the effects of short-term memory on conscious experience, for vivid dreaming during REM (rapid eye movement) sleep, and for other “core” features of conscious experience. In other words, the Churchlands are claiming that when one learns about activity patterns in these recurrent circuits, one can no longer “imagine” or “conceive of” this activity occurring without these core features of conscious experience occurring. (Other than just mouthing the expression, “I am now imagining activity in these circuits without selective attention/the effects of short-term memory/vivid dreaming/…”).
A second focus of skeptical arguments about a complete neuroscientific explanation of consciousness is on sensory qualia: the introspectable qualitative aspects of sensory experience, the features by which subjects discern similarities and differences among their experiences. The colors of visual sensations are a philosopher’s favorite example. One famous puzzle about color qualia is the alleged conceivability of spectral inversions. Many philosophers claim that it is conceptually possible (if perhaps physically impossible) for two humans not to differ neurophysiologically, while the color that fire engines and tomatoes appear to have to one subject is the color that grass and frogs appear to have to the other (and vice versa). A large amount of neuroscientifically-informed philosophy has addressed this question. (C.L. Hardin 1988 and Austen Clark 1993 are noteworthy examples.) A related area where neurophilosophical considerations have emerged concerns the metaphysics of colors themselves (rather than color experiences). A longstanding philosophical dispute is whether colors are objective properties existing external to perceivers or rather identifiable as or dependent upon minds or nervous systems. Some neuroscientific work on this problem begins with characteristics of color experiences: for example, that color similarity judgments produce color orderings that align on a circle (Clark 1993). With this resource, one can seek mappings of phenomenology onto environmental or physiological regularities. Identifying colors with particular frequencies of electromagnetic radiation does not preserve the structure of the hue circle, whereas identifying colors with activity in opponent processing neurons does. Such a tidbit is not decisive for the color objectivist-subjectivist debate, but it does convey the type of neurophilosophical work being done on traditional metaphysical issues beyond the philosophy of mind. (For more details on these issues, see the entry on color in this Encyclopedia.)
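One way to see the structural point in miniature: in the opponent-process plane, hue behaves like an angle, so the hues close into a circle, whereas wavelength is a single scalar that orders the hues along a line segment. The coding below is our illustrative gloss on that observation, not a model from the literature cited.

```python
import numpy as np

def hue_angle(red_green, yellow_blue):
    """Hue as an angle in the (red-green, yellow-blue) opponent plane."""
    return np.degrees(np.arctan2(yellow_blue, red_green)) % 360.0

# Stepping the angle through 360 degrees returns to the starting hue,
# reproducing the circular similarity ordering; a single wavelength axis
# orders hues along a line segment, which cannot close into a circle.
print(hue_angle(1.0, 0.0), hue_angle(0.0, 1.0), hue_angle(-1.0, 0.0))
```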
We saw in the discussion of Hardcastle (1997) two sections above that neurophilosophers have entered disputes about the nature and methodological import of pain experiences. Two decades earlier, Dan Dennett (1978) took up the question of whether it is possible to build a computer that feels pain. He compared and noted tensions between neurophysiological discoveries and common sense intuitions about pain experience. He suspected that the incommensurability between scientific and common sense views is due to incoherence in the latter. His attitude was wait-and-see. But foreshadowing Churchland’s reply to Chalmers, Dennett favored scientific investigations over conceivability-based philosophical arguments.
Neurological deficits have attracted philosophers interested in consciousness. For nearly fifty years philosophers have debated the implications of the Nobel Prize-winning experiments on commissurotomy patients, who for clinical reasons had their corpus callosum surgically severed, for the unity of the self (Nagel 1971).[12] The corpus callosum is the huge bundle of axons connecting neurons across the left and right mammalian cerebral hemispheres. In carefully controlled experiments, commissurotomy patients seemingly display two dissociable “seats” of consciousness. Elizabeth Schechter (2018) has recently greatly updated philosophical treatment of the scientific details of these “split-brain” patients, including their own experiential reports, and has traced implications for our understanding of the self.
In chapter 5 of her (1986) book, Patricia Churchland extended both the range and philosophical implications of neurological deficits. One deficit she discusses in detail is blindsight. Some patients with lesions to primary visual cortex report being unable to see items in regions of their visual fields, yet perform far better than chance in forced-choice guessing tasks about stimuli in those regions. A variety of scientific and philosophical interpretations have been offered. Ned Block (1995) worried that many of these interpretations conflate distinct notions of consciousness. He labels these notions “phenomenal consciousness” (“P-consciousness”) and “access consciousness” (“A-consciousness”). The former is the “what it is like”-ness of conscious experiences. The latter is the availability of representational content to self-initiated action and speech. Block argued that P-consciousness is not always representational, whereas A-consciousness is. Dennett (1991, 1995) and Tye (1993) are skeptical of non-representational analyses of consciousness in general. They provide accounts of blindsight that do not depend on Block’s distinction.
We break off our brief overview of neurophilosophical work on consciousness here. Many other topics are worth neurophilosophical pursuit. We mentioned commissurotomy and the unity of consciousness and the self, which continues to generate discussion. Qualia beyond those of color and pain experiences quickly attracted neurophilosophical attention (Akins 1993a,b, 1996; Austen Clark 1993), as did self-consciousness (Bermúdez 1998).
5. Locating Cognitive Functions: From Lesion Studies to Functional Neuroimaging
One of the first issues to arise in neurology, as far back as the nineteenth century, concerned the localization of specific cognitive functions to specific brain regions. Although the “localization” approach had dubious origins in the phrenology of Gall and Spurzheim, and had been challenged strenuously by Flourens throughout the early nineteenth century, it re-emerged late in the nineteenth century in the study of aphasia by Bouillaud, Auburtin, Broca, and Wernicke. These neurologists made careful studies (when possible) of linguistic deficits in their aphasic patients, followed post mortem by autopsies of their brains.[13] Broca’s initial study of twenty-two patients in the mid-nineteenth century confirmed that damage to the left cortical hemisphere was predominant, and that damage to the second and third frontal convolutions was necessary to produce speech production deficits. Although the anatomical coordinates Broca postulated for the “speech production center” do not correlate exactly with damage producing production deficits, both this area of frontal cortex and speech production deficits still bear his name (“Broca’s area” and “Broca’s aphasia”). Less than two decades later, Carl Wernicke published evidence for a second language center. This area is anatomically distinct from Broca’s area, and damage to it produced a very different set of aphasic symptoms. The cortical area that still bears his name (“Wernicke’s area”) is located around the first and second convolutions in temporal cortex, and the aphasia that bears his name (“Wernicke’s aphasia”) involves deficits in language comprehension. Wernicke’s method, like Broca’s, was based on lesion studies produced by natural trauma: a careful evaluation of the behavioral deficits, followed by autopsies to find the sites of tissue damage and atrophy. More recent and more careful lesion studies suggest more precise localization of specific linguistic functions, and remain a cornerstone to this day in aphasia research.
Lesion studies have also produced evidence for the localization of other cognitive functions: for example, sensory processing and certain types of learning and memory. However, localization arguments for these other functions invariably include studies using animal models. With an animal model, one can perform careful behavioral measures in highly controlled settings, then ablate specific areas of neural tissue (or use a variety of other techniques to block or enhance activity in these areas) and re-measure performance on the same behavioral tests. Since we lack widely accepted animal models for human language production and comprehension, this additional evidence isn’t available to neurologists or neurolinguists. This limitation makes the neurological study of language a paradigm case for evaluating the logic of the lesion/deficit method of inferring functional localization. Barbara Von Eckardt (Von Eckardt Klein 1978) attempted to make explicit the steps of reasoning involved in this common and historically important method. Her analysis begins with Robert Cummins’ well-known analysis of functional explanation, but she extends it into a notion of structurally adequate functional analysis. These analyses break down a complex capacity C into its constituent capacities c1, c2,…, cn, where the constituent capacities are consistent with the underlying structural details of the system. For example, human speech production (complex capacity C) results from formulating a speech intention, then selecting appropriate linguistic representations to capture the content of the speech intention, then formulating the motor commands to produce the appropriate sounds, then communicating these motor commands to the appropriate motor pathways (all together, the constituent capacities c1, c2,…, cn). A functional-localization hypothesis has the form: brain structure S in organism (type) O has constituent capacity ci, where ci is a function of some part of O. An example might be: Broca’s area (S) in humans (O) formulates motor commands to produce the appropriate sounds (one of the constituent capacities ci). Such hypotheses specify aspects of the structural realization of a functional-component model. They are part of the theory of the neural realization of the functional model.
Armed with these characterizations, Von Eckardt Klein argues that inference to a functional-localization hypothesis proceeds in two steps. First, a functional deficit in a patient is hypothesized based on the abnormal behavior the patient exhibits. Second, localization of function in normal brains is inferred on the basis of the functional deficit hypothesis plus the evidence about the site of brain damage. The structurally-adequate functional analysis of the capacity connects the pathological behavior to the hypothesized functional deficit. This connection suggests four adequacy conditions on a functional deficit hypothesis. First, the pathological behavior P (e.g., the speech deficits characteristic of Broca’s aphasia) must result from failing to exercise some complex capacity C (human speech production). Second, there must be a structurally-adequate functional analysis of how people exercise capacity C that involves some constituent capacity ci (formulating motor commands to produce the appropriate sounds). Third, the operation of the steps described by the structurally-adequate functional analysis minus the operation of the component performing ci (Broca’s area) must result in pathological behavior P. Fourth, there must not be a better available explanation for why the patient does P. Inferring a functional deficit hypothesis on the basis of pathological behavior is thus an instance of inference to the best available explanation. When postulating a deficit in a normal functional component provides the best available explanation of the pathological data, we are justified in drawing the inference.
Von Eckardt Klein applies this analysis to a neurological case study involving a controversial reinterpretation of agnosia.[14] Her philosophical explication of this important neurological method reveals that most challenges to localization arguments either argue only against the localization of a particular type of functional capacity or against generalizing from localization of function in one individual to all normal individuals. (She presents examples of each from the neurological literature.) Such challenges do not impugn the validity of standard arguments for functional localization from deficits. It does not follow that such arguments are unproblematic. But they face difficult factual and methodological problems, not logical ones. Furthermore, the analysis of these arguments as involving a type of functional analysis and inference to the best available explanation carries an important implication for the biological study of cognitive function. Functional analyses require functional theories, and structurally adequate functional analyses require checks imposed by the lower level sciences investigating the underlying physical mechanisms. Arguments to best available explanation are often hampered by a lack of theoretical imagination: the available alternative explanations are often severely limited. We must seek theoretical inspiration from any level of investigation or explanation. Hence making explicit the “logic” of this common and historically important form of neurological explanation reveals the necessity of joint participation from all scientific levels, from cognitive psychology down to molecular neuroscience. Von Eckardt Klein (1978) thus anticipated what came to be heralded as the “co-evolutionary research methodology”, which remains a centerpiece of neurophilosophy to the present day (see section 6).
Over the last three decades, new evidence for localizations of cognitive functions has come increasingly from a new source, the development and refinement of neuroimaging techniques. However, the logical form of localization-of-function arguments appears not to have changed from those employing lesion studies, as analyzed by Von Eckardt Klein. Instead, these new neuroimaging technologies resolve some of the methodological problems that plagued lesion studies. For example, researchers do not need to wait until the patient dies, and in the meantime probably acquires additional brain damage, to find the lesion sites. Two functional imaging techniques have been prominent in philosophical discussions: positron emission tomography, or PET, and functional magnetic resonance imaging, or fMRI. The two techniques measure different biological markers of functional activity; PET scanners approved for human use now have spatial resolution down to the single-millimeter range, while fMRI resolution is below 1 mm.[15] As these techniques increased spatial and temporal resolution of functional markers, and continued to be used with sophisticated behavioral methodologies, arguments for localizing specific psychological functions to increasingly specific neural regions continued to grow. Stufflebeam and Bechtel provided an early and philosophically useful discussion of PET. Bechtel and Richardson (1993) provided a general framework for “localization and decomposition” arguments, which anticipated in many ways the coming “new mechanistic” perspective in philosophy of science and philosophy of neuroscience (see sections 7 and 8 below). Bechtel and Mundale (1999) further refined philosophical arguments for localization of function specific to neuroscience.
More recent philosophical discussion of these functional imaging techniques has tended to urge more caution in resting localization claims on their results. Roskies (2007), for example, points out the tendency to think of the evidential force of functional neuroimages (especially fMRI) on analogy with that of photographs. Drawing on work in aesthetics and the visual arts, Roskies argues that many of the features that give photographs their evidential force are not present in functional neuroimages. So while neuroimages do serve as evidence for claims about neurofunctions, and even for localization hypotheses, details of their proper interpretation are far more complicated than philosophers sometimes assume. More critically, Klein (2010) argues that images of “brain activity” resulting from functional neuroimaging, especially fMRI, are poor evidence for functional hypotheses. For these images present the results of null hypothesis significance testing on fMRI data, and such testing alone cannot provide evidence about the functional structure of a causally dense system, which the human brain is. Instead, functional neuroimages are properly interpreted as indicating regions where further data and analysis are warranted. But these data will typically require more than simple significance testing, so skepticism about the evidential force of neuroimages does not warrant skepticism about fMRI more generally.
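To see concretely what Klein is targeting, here is a toy version of the mass-univariate significance testing that produces such “activation” images (simulated data; the block design, effect size, and threshold are our illustrative choices, and a real pipeline adds hemodynamic modeling, preprocessing, and multiple-comparison correction):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated time series for 1000 "voxels" over alternating rest/task blocks.
n_voxels, n_scans = 1000, 120
task_on = (np.arange(n_scans) // 10) % 2 == 1      # 10-scan alternating blocks
data = rng.normal(0.0, 1.0, (n_voxels, n_scans))
data[:50, task_on] += 0.8                          # only 50 voxels truly respond

# Mass-univariate testing: one t-test per voxel, task scans versus rest scans.
t_vals, p_vals = stats.ttest_ind(data[:, task_on], data[:, ~task_on], axis=1)

# The familiar "activation map" is just the set of voxels passing a threshold,
# which is Klein's point: it flags where to look next, not how the system works.
print((p_vals < 0.001).sum(), "voxels flagged as active")
```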
Localization of function remains to this day a central topic of discussion in philosophy of neuroscience. We will cover more recent work in later sections.
6. A Result of the Co-evolutionary Research Ideology: Philosophy’s Emphasis on Cognitive and Computational Neuroscience
What neuroscience has now discovered about the cellular and molecular mechanisms of neural conductance and transmission is spectacular. These results constitute one of the crowning achievements of scientific inquiry. (For those in doubt, simply peruse for five minutes a recent volume of Society for Neuroscience Abstracts.) Less comprehensive, yet still spectacular, are discoveries at “higher” levels of neuroscience: circuits, networks, and systems. All this is a natural outcome of increasing scientific specialization. We develop the technology, the experimental techniques, and ultimately the experimental results-driven theories within specific disciplines to push forward our understanding. Still, a crucial aspect of the total picture sometimes gets neglected: the relationship between the levels, the “glue” that binds knowledge of neuron activity to subcellular and molecular mechanisms “below”, and to circuit, network, and systems activity patterns “above”. This problem is especially glaring when we try to relate “cognitivist” psychological theories, postulating information-bearing representations and processes operating over their contents, to neuronal activities. “Co-evolution” between these explanatory levels still seems more a distant dream than an operative methodology guiding day-to-day scientific research.
It is here that some philosophers and neuroscientists turned to computational methods (Churchland and Sejnowski 1992). One hope was that the way computational models function in more developed sciences, such as physics, might provide a useful template. One computational resource that has usefully been applied in more developed sciences to similar “cross-level” concerns is dynamical systems theory. Global phenomena, such as large-scale meteorological patterns, have been usefully addressed as dynamical, nonlinear, and often chaotic interactions between lower-level physical phenomena. Addressing the interlocking levels of theory and explanation in the mind/brain using computational resources that have worked to bridge levels in more mature sciences might yield comparable results. This methodology is necessarily interdisciplinary, drawing on resources and researchers from a variety of levels, including higher ones like experimental psychology, artificial intelligence, and philosophy of science.
The use of computational methods in neuroscience itself is not new. Hodgkin, Huxley, and Katz (1952) incorporated values of voltage-dependent sodium and potassium conductance they had measured experimentally in the squid giant axon into an equation from physics describing the time evolution of a first-order kinetic process. This equation enabled them to calculate best-fit curves for modeled conductance versus time data that reproduced the changing membrane potential over time when action potentials were generated. Also using equations borrowed from physics, Rall (1959) developed the cable model of dendrites. This model provided an account of how the various inputs from across the dendritic tree interact temporally and spatially to determine the input-output properties of single neurons. It remains influential today, and was incorporated into the GENESIS software for programming neurally realistic networks (Bower and Beeman 1995; see discussion in section 2 above). David Sparks and his colleagues showed that a vector-averaging model of activity in neurons of superior colliculi correctly predicts experimental results about the amplitude and direction of saccadic eye movements (Lee, Rohrer, and Sparks 1988). Working with a more sophisticated mathematical model, Apostolos Georgopoulos and his colleagues predicted direction and amplitude of hand and arm movements based on averaged activity of 224 cells in motor cortex. Their predictions were borne out under a variety of experimental tests (Georgopoulos, Schwartz, and Kettner 1986). We mention these particular studies only because these are ones with which we are familiar. No doubt we could multiply examples of the fruitful interaction of computational and experimental methods in neuroscience easily by one-hundred-fold. Many of these extend back before “computational neuroscience” was a recognized research endeavor.
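To convey the flavor of the vector-averaging idea behind the Sparks and Georgopoulos studies, here is a minimal population-vector sketch (the cosine tuning curve, the cell count, and all parameter values are our illustrative assumptions, not the published models’ fitted quantities):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each model cell fires maximally for its own "preferred direction", with a
# broad cosine tuning curve (Georgopoulos-style; values are illustrative).
n_cells = 224
preferred = rng.uniform(0.0, 2.0 * np.pi, n_cells)   # preferred directions (rad)
baseline, gain = 20.0, 15.0                           # firing rates, spikes/s

def firing_rates(movement_dir):
    """Cosine-tuned rates for a movement in direction movement_dir."""
    return baseline + gain * np.cos(movement_dir - preferred)

def population_vector(rates):
    """Weight each cell's preferred-direction unit vector by its rate relative
    to baseline and sum; the resultant's angle is the decoded direction."""
    w = rates - baseline
    x = np.sum(w * np.cos(preferred))
    y = np.sum(w * np.sin(preferred))
    return np.arctan2(y, x)

true_dir = np.deg2rad(72.0)
noisy_rates = firing_rates(true_dir) + rng.normal(0.0, 2.0, n_cells)
print(np.rad2deg(population_vector(noisy_rates)))   # close to 72 degrees
```

Each cell is a noisy, broadly tuned “voter”; averaging over the population recovers the movement direction far more precisely than any single cell signals it.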
We’ve already seen one example, the vector transformation account of neural representation and computation, once under active development in cognitive neuroscience (see section 2 above). Other approaches using “cognitivist” resources were, and continue to be, pursued.[16] Some of these projects draw upon “cognitivist” characterizations of the phenomena to be explained. Some exploit “cognitivist” experimental techniques and methodologies. Some even attempt to derive “cognitivist” explanations from cell-biological processes (e.g., Hawkins and Kandel 1984). As Stephen Kosslyn (1997) put it, cognitive neuroscientists employ the “information processing” view of the mind characteristic of cognitivism without trying to separate it from theories of brain mechanisms. Such an endeavor calls for an interdisciplinary community willing to communicate the relevant portions of the mountain of detail gathered in individual disciplines with interested nonspecialists. This requires not just people willing to confer with others working at related levels, but researchers trained explicitly in the methods and factual details of a variety of disciplines. This is a daunting need, but it offers hope to philosophers wishing to contribute to actual neuroscience. Thinkers trained both in the “synoptic vision” afforded by philosophy and in the scientific and experimental basis of a genuine (graduate-level) science would be ideally equipped for this task. Recognition of this potential niche was slow to dawn on graduate programs in philosophy, but a few programs have taken steps to fill it (see, e.g., Other Internet Resources below).
However, one glaring shortcoming remains. Given philosophers’ training and interests, “higher-level” neurosciences—networks, cognitive, systems, and the fields of computational neuroscience which ally with these—tend to attract the most philosophical attention. As natural as this focus might be, it can lead philosophers to a misleading picture of neuroscience. Neurobiology remains focused on cellular and molecular mechanisms of neuronal activity, and allies with the kind of behavioral neuroscience that works with animal models. This is still how a majority of members of the Society for Neuroscience, now more than 37,000 members strong, classify their own research; this is where the majority of grant money for research goes; and these are the areas whose experimental publications most often appear in the most highly cited scientific journals. (The link to the Society for Neuroscience’s web site in Other Internet Resources below leads to a wealth of data on these numbers; see especially the Publications section.) Yet philosophers have tended not to pay much attention to cellular and molecular neuroscience. Fortunately this seems to be changing, as we will document in sections 7 and 8 below. Still, the preponderant attention philosophers pay to cognitive/systems/computational neuroscience obscures the wet-lab, experiment-driven focus of ongoing neurobiology.
7. Developments in the Philosophy of Neuroscience
The distinction between “philosophy of neuroscience” and “neurophilosophy” came to be better clarified over the first decade of the twenty-first century, due primarily to more questions being pursued in both areas. Philosophy of neuroscience still tends to pose traditional questions from philosophy of science specifically about neuroscience. Such questions include: What is the nature of neuroscientific explanation? And, what is the nature of discovery in neuroscience? Answers to these questions are pursued either descriptively (how does neuroscience proceed?) or normatively (how should neuroscience proceed?). Some normative projects in philosophy of neuroscience are “deconstructive”, criticizing claims about the topic made by neuroscientists. For example, philosophers of neuroscience have criticized the conception of personhood assumed by researchers in cognitive neuroscience (cf. Roskies 2009). Other normative projects are constructive, proposing new theories of neuronal phenomena or methods for interpreting neuroscientific data. Such projects often integrate smoothly with theoretical neuroscience itself. For example, Chris Eliasmith and Charles Anderson developed an approach to constructing neurocomputational models in their book Neural Engineering (2003). In separate publications, Eliasmith has argued that the framework introduced in Neural Engineering provides both a normative account of neural representation and a framework for unifying explanation in neuroscience (e.g., Eliasmith 2009).
Neurophilosophy continued to apply findings from the neurosciences to traditional, philosophical questions. Examples include: What is an emotion? (Prinz 2004) What is the nature of desire? (Schroeder 2004) How is social cognition made possible? (Goldman 2006) What is the neural basis of moral cognition? (Prinz 2007) What is the neural basis of happiness? (Flanagan 2009) Neurophilosophical answers to these questions are constrained by what neuroscience reveals about nervous systems. For example, in his book Three Faces of Desire, Timothy Schroeder (2004) argued that our commonsense conception of desire attributes to it three capacities: (1) the capacity to reinforce behavior when satisfied, (2) the capacity to motivate behavior, and (3) the capacity to determine sources of pleasure. Based on evidence from the literature on dopamine function and reinforcement learning theory, Schroeder argued that reward processing is the basis for all three capacities. Thus, reward is the essence of desire.
During the first decade of the twenty-first century a trend arose in neurophilosophy to look toward neuroscience for guidance in moral philosophy. That should be evident from the themes we’ve just mentioned. Simultaneously, there was renewed interest in moralizing about neuroscience and neurological treatments (see Levy 2007; Roskies 2009). This new field, neuroethics, thus combined interest in the relevance of neuroscience data for understanding moral cognition with interest in the relevance of moral philosophy for acquiring, and regulating the application of, knowledge from neuroscience. The regulatory branch of neuroethics initially focused explicitly on the ethics of treatment for people who suffer from neurological impairments, the ethics of attempts to enhance human cognitive performance (Schneider 2009), the ethics of applying “mind reading” technology to problems in forensic science (Farah and Wolpe 2004), and the ethics of animal experimentation in neuroscience (Farah 2008). More recently both of these branches of neuroethics have seen tremendous growth. The interested reader should consult the neuroethics entry in this Encyclopedia.
Trends during the first decade of the twenty-first century in philosophy of neuroscience included renewed interest in the nature of mechanistic explanations. This was in keeping with a general trend in philosophy of science (e.g., Machamer, Darden, and Craver 2000). The application of this general approach to neuroscience isn’t surprising. “Mechanism” is a widely-used term among neuroscientists. In his book, Explaining the Brain (2007), Carl Craver contended that mechanistic explanations in neuroscience are causal explanations, and typically multi-level. For example, the explanation of the neuronal action potential involves the action potential itself, the cell in which it occurs, electro-chemical gradients, and the proteins through which ions flow across the membrane. Thus we have a composite entity (a cell) causally interacting with neurotransmitters at its receptors. Parts of the cell engage in various activities, e.g., the opening and closing of ligand-gated and voltage-gated ion channels, to produce a pattern of changes, the depolarizing current constituting the action potential. A mechanistic explanation of the action potential thus countenances entities at the cellular, molecular, and atomic levels, all of which are causally relevant to producing the action potential. This causal relevance can be confirmed by altering any one of these variables, e.g., the density of ion channels in the cell membrane, to generate alterations in the action potential; and by verifying the consistency of the purported invariance between the variables. For challenges to Craver’s account of mechanistic explanation in neuroscience, specifically concerning the action potential, see Weber 2008 and Bogen 2005.
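Craver’s interventionist point, that causal relevance is confirmed by altering a variable such as channel density and observing changes in the action potential, can be made vivid with a toy simulation. The sketch below Euler-integrates the standard Hodgkin-Huxley squid-axon equations; scaling the maximal sodium conductance, a crude stand-in for sodium-channel density, alters or abolishes the spike. (The stimulus, time step, and the density parameter are our illustrative choices.)

```python
import numpy as np

# Standard Hodgkin-Huxley squid-axon parameters (mV, ms, mS/cm^2, uF/cm^2).
C = 1.0
gNa, gK, gL = 120.0, 36.0, 0.3
ENa, EK, EL = 50.0, -77.0, -54.4

# Voltage-dependent opening/closing rates for the m, h, n gating variables.
am = lambda V: 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
bm = lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0)
ah = lambda V: 0.07 * np.exp(-(V + 65.0) / 20.0)
bh = lambda V: 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
an = lambda V: 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
bn = lambda V: 0.125 * np.exp(-(V + 65.0) / 80.0)

def peak_voltage(na_density=1.0, T=50.0, dt=0.01):
    """Euler-integrate the HH equations; na_density scales gNa, a crude
    stand-in for intervening on sodium-channel density in the membrane."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # resting state
    peak = V
    for step in range(int(T / dt)):
        t = step * dt
        I = 15.0 if 5.0 <= t <= 7.0 else 0.0          # brief current pulse
        INa = na_density * gNa * m**3 * h * (V - ENa)
        IK = gK * n**4 * (V - EK)
        IL = gL * (V - EL)
        V += dt * (I - INa - IK - IL) / C
        m += dt * (am(V) * (1.0 - m) - bm(V) * m)
        h += dt * (ah(V) * (1.0 - h) - bh(V) * h)
        n += dt * (an(V) * (1.0 - n) - bn(V) * n)
        peak = max(peak, V)
    return peak

print(peak_voltage(na_density=1.0))   # overshoots 0 mV: a full action potential
print(peak_voltage(na_density=0.0))   # sodium blocked: V stays near rest
```

Wiggling na_density while holding everything else fixed, and watching the spike change, is exactly the kind of intervention-based test of causal relevance Craver describes.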
According to epistemic norms shared implicitly by neuroscientists, good explanations in neuroscience are good mechanistic explanations; and good mechanistic explanations are those that pick out invariant relationships between mechanisms and the phenomena they control. (For fuller treatment of invariance in causal explanations throughout science, see James Woodward 2003. Mechanists draw extensively on Woodward’s “interventionist” account of cause and causal explanations.) Craver’s account raised questions about the place of reduction in neuroscience. John Bickle (2003) suggested that the working concept of reduction in the neurosciences consists of the discovery of systematic relationships between interventions at lower levels of biological organization, as these are pursued in cellular and molecular neuroscience, and higher level behavioral effects, as they are described in psychology. Bickle called this perspective “reductionism-in-practice” to contrast it with the concepts of intertheoretic or metaphysical reduction that have been the focus of many debates in the philosophy of science and philosophy of mind. Despite Bickle’s reformulation of reduction, however, mechanists generally resist, or at least relativize, the “reductionist” label. Craver (2007) calls his view the “mosaic unity” of neuroscience. Bechtel (2009) calls his “mechanistic reduction(ism)”. Both Craver and Bechtel advocate multi-leveled “mechanisms-within-mechanisms”, with no level of mechanism epistemically privileged. This is in contrast to reduction(ism), ruthless or otherwise, which privileges lower levels. Still, we can ask: Is mechanism a kind of reductionism-in-practice? Or does mechanism, as a position on neuroscientific explanation, assume some type of autonomy for psychology? If it assumes autonomy, reductionists might challenge mechanists on this assumption. On the other hand, Bickle’s reductionism-in-practice clearly departs from inter-theoretic reduction, as the latter is understood in philosophy of science. As Bickle himself acknowledges, his latest reductionism was inspired heavily by mechanists’ criticisms of his earlier “new wave” account. Mechanists can challenge Bickle that his departure from the traditional accounts has also led to a departure from the interests that motivated those accounts. (See Polger 2004 for a related challenge.) As we will see in section 8 below, these issues surrounding mechanistic philosophy of neuroscience have grown more urgent, as mechanism has grown to dominate the field.
The role of temporal representation in conscious experience and the kinds of neural architectures sufficient to represent objects in time generated interest. In the tradition of Husserl’s phenomenology, Dan Lloyd (2002, 2003) and Rick Grush (2001, 2009) have separately drawn attention to the tripartite temporal structure of phenomenal consciousness as an explanandum for neuroscience. This structure consists of a subjective present, an immediate past, and an expectation of the immediate future. For example, one’s conscious awareness of a tune is not just of a time-slice of tune-impression, but of a note that a moment ago was present, another that is now present, and an expectation of subsequent notes in the immediate future. As this experience continues, what was a moment ago temporally immediate is now retained as a moment in the immediate past; what was expected either occurred or didn’t in what has now become the experienced present; and a new expectation has formed of what will come. One’s experience is not static, even though the experience is of a single object (the tune). These earlier works found increased relevance with the rise of “predictive coding” models of whole brain function, developed by neuroscientists including Karl Friston (2009) less than a decade later, and brought to broader philosophical attention by Jakob Hohwy (2013) and Andy Clark (2016).
According to Lloyd, the tripartite structure of consciousness raises a unique problem for analyzing fMRI data and designing experiments. The problem stems from the tension between the sameness in the object of experience (e.g., the same tune through its progression) and the temporal fluidity of experience itself (e.g., the transitions between heard notes). At the time Lloyd was writing, one standard means of analyzing fMRI data consisted in averaging several data sets and subtracting an estimate of baseline activation from the composites.[17] This is done to filter noise from the task-related hemodynamic response. But as Lloyd points out, this then-common practice ignores much of the data necessary for studying the neural correlates of consciousness. It produces static images that neglect the relationships between data points over the time course. Lloyd instead applies a multivariate approach to studying fMRI data, under the assumption that a recurrent network architecture underlies the temporal processing that gives rise to experienced time. A simple recurrent network has an input layer, an output layer, a hidden layer, and an additional layer that copies the prior activation state of either the hidden layer or the output layer. Allowing the output layer to represent a predicted outcome, the input layer can then represent a current state and the additional layer a prior state. This assignment mimics the tripartite temporal structure of experience in a network architecture. If the neuronal mechanisms underlying conscious experience are approximated by recurrent network architecture, one prediction is that current neuronal states carry information about immediate future and prior states. Applied to fMRI, the model predicts that time points in an image series will carry information about prior and subsequent time points. The results of Lloyd’s (2002) analysis of 21 subjects’ data sets, sampled from the publicly accessible National fMRI Data Center, support this prediction.
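The architecture Lloyd appeals to is an Elman-style simple recurrent network, which can be sketched in a few lines (the layer sizes and untrained random weights are our illustrative choices; this shows the copy-back wiring only, not Lloyd’s fitted model):

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hid, n_out = 4, 8, 4
W_in = rng.normal(0.0, 0.5, (n_hid, n_in))    # input -> hidden
W_ctx = rng.normal(0.0, 0.5, (n_hid, n_hid))  # context (prior hidden) -> hidden
W_out = rng.normal(0.0, 0.5, (n_out, n_hid))  # hidden -> output

def step(x, context):
    """One time step: the hidden layer sees the current input plus a copy of
    its own previous state, so present activity carries information about the
    immediate past; the output can be read as a prediction of what comes next."""
    hidden = np.tanh(W_in @ x + W_ctx @ context)
    output = np.tanh(W_out @ hidden)
    return output, hidden

context = np.zeros(n_hid)                      # no "retained past" at the start
for x in rng.normal(0.0, 1.0, (5, n_in)):      # a short input sequence
    prediction, context = step(x, context)     # context <- prior hidden state
```

The three temporal roles line up as Lloyd suggests: the input is the present, the context layer the retained immediate past, and the output an expectation of the immediate future.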
Grush’s (2001, 2004) interest in temporal representation is part of his broader systematic project of addressing a semantic problem for computational neuroscience, namely: how do we demarcate study of the brain as an information processor from the study of any other complex causal process? This question leads back into the familiar territory of psychosemantics (see section 3 above), but now the starting point is internal to the practices of computational neuroscience. The semantic problem is thereby rendered an issue in philosophy of neuroscience, insofar as it asks: what does (or should) “computation” mean in computational neuroscience?
Grush’s solution drew on concepts from modern control theory. In addition to a controller, a sensor, and a goal state, certain kinds of control systems employ a process model of the actual process being controlled. A process model can facilitate a variety of engineering functions, including overcoming delays in feedback and filtering noise. The accuracy of a process model can be assessed relative to its “plug-compatibility” with the actual process. Plug-compatibility is a measure of the degree to which a controller can causally couple to a process model to produce the same results it would produce by coupling with the actual process. Note that plug-compatibility is not an information relation.
To illustrate a potential neuroscientific implementation, Grush considers a controller as some portion of the brain’s motor systems (e.g., premotor cortex). The sensors are the sense organs (e.g., stretch receptors on the muscles). A process model of the musculoskeletal system might exist in the cerebellum (see Kawato 1999). If the controller portion of the motor system sends spike trains to the cerebellum in the same way that it sends spikes to the musculoskeletal system, and if in return the cerebellum receives spike trains similar to real peripheral feedback, then the cerebellum emulates the musculoskeletal system (to the degree that the mock feedback resembles real peripheral feedback). The proposed unit over which computational operations range is the neuronal realization of a process model and its components, or in Grush’s terms an “emulator” and its “articulants”.
The details of Grush’s framework are too sophisticated to present in short compass. (For example, he introduces a host of conceptual devices to discuss the representation of external objects.) But in a nutshell, he contends that understanding temporal representation begins with understanding the emulation of the timing of sensorimotor contingencies. Successful sequential behavior (e.g., spearing a fish) depends not just on keeping track of where one is in space, but where one is in a temporal order of movements and the temporal distance between the current, prior, and subsequent movements. Executing a subsequent movement can depend on keeping track of whether a prior movement was successful and whether the current movement is matching previous expectations. Grush posits emulators—process models in the central nervous system—that anticipate, retain, and update mock sensorimotor feedback by timing their output proportionally to feedback from an actual process (Grush 2005).
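In skeletal form, the control-theoretic arrangement Grush borrows looks something like the following toy sketch (a one-dimensional, noiseless “plant” with delayed feedback; the gains, the delay, and the correction rule are our illustrative assumptions):

```python
from collections import deque

def plant(position, command):
    """Toy one-dimensional 'musculoskeletal' process."""
    return position + 0.1 * command

target, position = 1.0, 0.0
emulated = 0.0                       # the emulator's running mock feedback
DELAY = 5                            # real sensory feedback arrives 5 steps late
feedback_queue = deque([0.0] * DELAY)

for step in range(100):
    # The controller acts on the emulator's up-to-date estimate, not on the
    # stale peripheral signal: this is how a process model overcomes delay.
    command = 2.0 * (target - emulated)

    # An efference copy drives the emulator in parallel with the real process.
    emulated = plant(emulated, command)
    position = plant(position, command)

    # When delayed real feedback finally arrives, nudge the emulator toward
    # it, keeping the mock feedback "plug-compatible" with the real process.
    feedback_queue.append(position)
    delayed_feedback = feedback_queue.popleft()
    emulated += 0.1 * (delayed_feedback - emulated)

print(round(position, 3))            # settles near the 1.0 target despite delay
```

A controller fed only the raw delayed signal would be steering on five-step-old information; the emulator supplies timely mock feedback, with the delayed real signal serving as a running correction.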
Lloyd’s and Grush’s approaches to studying temporal representation differ in their emphases. But they are unified in their implicit commitment to localizing cognitive functions and decomposing them into subfunctions using both top-down and bottom-up constraints. (See Bechtel and Richardson 1993 for more details on this general explanatory strategy.) As we mentioned a few paragraphs above, both anticipated in important and interesting ways more recent neuroscientific and philosophical work on predictive coding and the brain. Both developed mechanistic explanations that pay little regard to disciplinary boundaries. One of the principal lessons of Bickle’s and Craver’s work is that neuroscientific practice in general is structured in this fashion. The ontological consequences of adopting this approach continue to be debated.
8. Developments over the Second Decade of the Twenty-First Century
Mechanism, first introduced in section 7 above, came to dominate the philosophy of neuroscience throughout the second decade of the twenty-first century. One much-discussed example is Gualtiero Piccinini and Carl Craver (2011). The authors employ two popular mechanistic notions. Their first is the multi-level, nested hierarchies of mechanisms-within-mechanisms perspective, discussed in section 7 above, that traces back to Craver and Darden (2001). Their second is that of “mechanism sketch”, suggested initially in Machamer, Darden, and Craver (2000) and developed in detail in Craver (2007). Piccinini and Craver’s goal is to “seamlessly” situate psychology as part of an “integrated framework” alongside neuroscience. They interpret psychology’s familiar functional analyses of cognitive capacities as relatively incomplete mechanism-sketches, which leave out many components of the mechanisms that ultimately will fully explain the system’s behavior. Neuroscience in turn fills in these missing components, dynamics, and organizations, at least ones found in nervous systems. This filling-in thereby turns psychology’s mechanism-sketches into full-blown mechanistic explanations. So even though psychology proceeds via functional analyses, so interpreted it is nonetheless mechanistic. Piccinini and Craver realize that their “integrated” account clashes with classical “autonomy” claims for psychology vis-à-vis neuroscience. Nevertheless, they insist that their challenge to classical “autonomy” does not commit them to “reductionism”, in either its classical or more recent varieties. Their commitment to a nested hierarchy of mechanisms-within-mechanisms to account for a system’s behavior acknowledges the importance of mechanisms and intralevel causation at all levels constituting the system, not just at lower (i.e., cellular, molecular) levels.
David Kaplan and Craver (2011) focus the mechanist perspective critically on dynamical systems mathematical models popular in recent systems and computational neuroscience. They argue that such models are explanatory only if there exists a “plausible mapping” between elements in the model and elements in the modeled system. At bottom is their Model-to-Mechanism-Mapping (3M) Constraint on explanation. The variables in a genuinely explanatory model correspond to components, activities, or organizational features of the system being explained. And the dependencies posited among variables in the model, typically expressed mathematically in systems and computational neuroscience, correspond to causal relations among the system’s components. Kaplan and Craver justify the 3M Constraint on grounds of explanatory norms, common to both science and common sense. All other things being equal, they insist, an explanation that provides more relevant details about a system’s components, activities, and organization is more likely to answer more questions about how the system will behave in a variety of circumstances than one that provides fewer (mechanistic) details. “Relevant” here pertains to the functioning of the specific mechanism. Models from systems and computational neuroscience that violate the 3M Constraint are thus more reasonably thought of as mathematical descriptions of phenomena, not explanations of some “non-mechanistic” variety.
Kaplan and Craver challenge their own view with one of the more popular dynamical/mathematical models in all of computational neuroscience, the Haken-Kelso-Bunz (1985) model of human bimanual finger-movement coordination. They point to passages in these modelers’ publications that suggest that the modelers only intended for their dynamical systems model to be a mathematically compact description of the temporal evolution of a “purely behavioral dependent variable”. The modelers interpreted none of the model’s variables or parameters as mapping onto components or operations of any hypothetical mechanism generating the behavioral data. Nor did they intend for any of the model’s mathematical relations or dependencies among variables to map onto hypothesized causal interactions among components or activities of any mechanism. As Kaplan and Craver further point out, after publishing their dynamicist model, these modelers themselves then began to investigate how the behavioral regularities their model described might be produced by neural motor system components, activities, and organization. Their own follow-up research suggests that these modelers saw their dynamicist model as a heuristic, to help neuroscientists move toward “how-possibly”, and ultimately to a “how-actually” mechanistic explanation.
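For reference, the model at issue is compact enough to state in full. In its standard deterministic form, HKB describes the relative phase φ between the two fingers by a single differential equation derived from a potential:

```latex
\dot{\phi} \;=\; -a\sin\phi \;-\; 2b\sin 2\phi
\;=\; -\frac{dV}{d\phi},
\qquad
V(\phi) \;=\; -a\cos\phi \;-\; b\cos 2\phi .
```

For large b/a (slow movements), both φ = 0 (in-phase) and φ = π (anti-phase) are stable; as movement frequency rises, b/a falls and the anti-phase fixed point loses stability, reproducing the observed involuntary switch to in-phase coordination. Note that φ, a, and b are all behavioral-level quantities; nothing in the equation is intended to map onto components or activities of any neural mechanism, which is just Kaplan and Craver’s point.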
At bottom, Kaplan and Craver’s 3M constraint on explanation presents a dilemma for dynamicists. To the extent that dynamical systems modelers intend to model hypothesized neural mechanisms for the phenomenon under investigation, their explanations will need to cohere with the 3M Constraint (and other canons of mechanistic explanation). To the extent that this is not a goal of dynamicist modelers, their models do not seem genuinely explanatory, at least not in one sense of “explanation” prominent in the history of science. Furthermore, when dynamicist models are judged to be successful, they often prompt subsequent searches for underlying mechanisms, just as the 3M Constraint and the general mechanist account of the move from “how-possibly” to “how-actually” mechanisms recommend. Either horn gores dynamicists who claim that their models constitute a necessary kind of explanation in neuroscience, additional to mechanistic explanation, beyond any heuristic value such models might offer toward discovering mechanisms.
Kaplan and Craver’s radical conclusion, that dynamicist “explanations” are genuine explanations only to the degree that they respect the (mechanist’s) 3M Constraint, needs more defense. The burden of proof always lies on those whose conclusions strike at popular assumptions. More than the discussion of a couple of landmark dynamicist models in neuroscience is needed (in their 2011, Kaplan and Craver also discuss the difference-of-Gaussians model of receptive field properties of mammalian visual neurons). As expected, dynamicists have taken up this challenge. Michael Silberstein and Anthony Chemero (2013), for example, argue that localization and decomposition strategies characterize mechanistic explanation, and that some explanations in systems neuroscience violate one of these assumptions, or both. Such violations in turn create a dilemma for mechanists. Either they must “stretch” their account of explanation, beyond decomposition and localization, to capture these recalcitrant cases, or they must accept “counterexamples” to the generality of mechanistic explanation, in both systems neuroscience and systems biology more generally.
Lauren Ross (2015) and Mazviita Chirimuuta (2014) independently appeal to Robert Batterman’s account of minimal model explanation as an important kind of non-mechanistic explanation in neuroscience. Minimal models were developed initially to characterize a kind of explanation in the physical sciences (see, e.g., Batterman and Rice 2014). Batterman’s account distinguishes between two different kinds of scientific “why-questions”: why a phenomenon manifests in particular circumstances; and why a phenomenon manifests generally, or in a number of different circumstances. Mechanistic explanations answer the first type of why-question. Here a “more details the better” (MDB) assumption (Chirimuuta 2014), akin to Kaplan and Craver’s “all things being equal” assumption about better explanations (mentioned above), has force. Minimal models, however, which minimize over the presented implementation details and hence violate MDB, are better able to answer the second type of scientific why-question. Ross (2015), quoting from computational neuroscientists Rinzel and Ermentrout, insists that models containing more details than necessary can obscure identification of critical elements by leaving too many open possibilities, especially when one is trying to answer Batterman’s second kind of why-question about a system’s behavior.
Chirimuuta and Ross each appeal to related resources from computational neuroscience to illustrate the applicability of Batterman’s minimal model explanation strategy. Ross appeals to “canonical models”, which represent “shared qualitative features of a number of distinct neural systems” (2015: 39). Her central example is the derivation of the Ermentrout-Kopell model of class I neuron excitability, which uses “mathematical abstraction techniques” to “reduce models of molecularly distinct neural systems to a single … canonical model”. Such a model “explains why molecularly diverse neural systems all exhibit the same qualitative behavior” (2015: 41), clearly a Batterman second-type why-question. Chirimuuta’s resource is “canonical neural computations” (CNCs):
computational modules that apply the same fundamental operations in a variety of contexts … a toolbox of computational operations that the brain applies in a number of different sense modalities and anatomic regions and which can be described at higher levels of abstraction from their biophysical implementation. (Chirimuuta 2014: 138)
Examples include shunting inhibition, linear filtering, recurrent amplification, and thresholding. Rather than being mechanism-sketches, awaiting further mechanistic details to be turned into full-blown how-actually mechanisms, CNCs are invoked in different explanatory contexts, namely ones posing Batterman’s second type of why-question. Ross concurs concerning canonical models:
Understanding the approach dynamical systems neuroscientists take in explaining [system] behavior requires attending to their explanandum of interest and the unique modeling tools [e.g., canonical models] common in their field. (2015: 52)
In short, Chirimuuta’s and Ross’s replies to Kaplan and Craver’s challenge take a form common in philosophy: save a particular kind of explanation from collapsing into another by splitting the explanandum.
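To convey the flavor of Ross’s central example: the Ermentrout-Kopell canonical (“theta”) model compresses any class I excitable neuron near its onset of spiking into a single equation for a phase variable θ on the circle (this is the standard textbook form):

```latex
\frac{d\theta}{dt} \;=\; (1-\cos\theta) \;+\; (1+\cos\theta)\, I ,
\qquad \theta \in [0, 2\pi),
```

where I represents the input and a “spike” is the passage of θ through π. Molecularly distinct neurons, with quite different channel assortments, reduce near the bifurcation to this same form, which is why the model answers Batterman’s second kind of why-question while deliberately abstracting from implementation details.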
Finally, to wrap up this discussion of mechanism ascendant, an analogue of Craver’s (2007) problem of accounting for “constitutive mechanistic relevance”, that is, for determining which active components of a system are actually part of the mechanism for a given system phenomenon, has also re-emerged in recent discussions. Robert Rupert (2009) suggests that “integration” is a key criterion for determining which set of causally contributing mechanisms constitute the system for a task, based on the relative frequency with which sets of mechanisms co-contribute to causing task occurrences. He cashes out frequency of co-contribution as the probability of the set’s causing the cognitive task, conditional on every other co-occurring causal set. Felipe De Brigard (2017) challenges Rupert’s criterion, arguing that it cannot account for cognitive systems displaying two features, “diachronic dynamicity” along with “functional stability”. The frequency with which a given mechanism causally contributes to the same cognitive task (functional stability) can change over time (diachronic dynamicity). Although De Brigard emphasizes the critical importance of these features for Rupert’s integration criterion via a fanciful thought experiment, he also argues that they are a widespread phenomenon in human brains. Both features are found, for example, in evidence pertaining to the “Hemispheric Asymmetry Reduction in Older Adults”, in which tasks that recruit hemispherically localized regions of prefrontal cortex in younger adults show a reduction in hemispheric asymmetry in older adults. And both are found in the “Posterior-Anterior Shift with Aging”, where a task increases activity in anterior brain regions while decreasing activity in posterior regions in older adults, relative to activity invoked by the same task in younger adults.
To replace Rupert’s notion of integration as a criterion for determining which sets of mechanisms constitute a cognitive system, De Brigard points to two promising recent developments in network neuroscience which potentially allow for parametrized time. “Scaled inclusivity” is a method for examining each node in a network and identifying its membership in “community structures” across different iterations of the network. “Temporal-dynamic network analysis” is a way to quantify changes in community structures or modules between networks at different time points. Both methods thereby identify “modular alliances”, which convey both co-activation and dynamic change information in a single model. De Brigard suggests that these are thus the candidates with which cognitive systems could be identified.
Clearly, much remains to be discussed regarding the impact mechanism has come to wield in philosophy of neuroscience over the last decade. But while mechanism has become the dominant general perspective in the field, work in other areas continues. Michael Anderson defends the relevance of cognitive neuroscience for determining psychology’s taxonomy, independent of any commitment to mechanism. The most detailed development of his approach is in his (2014) book, After Phrenology, based on his influential “neural reuse” hypothesis. Each region of the brain, as recognized by the standard techniques of cognitive neuroscience (especially fMRI), engages in a wide variety of cognitive functions, and forms different “neural partnerships” with other regions under different circumstances. Psychological categories are then to be reconceived along lines suggested by the wide-ranging empirical data in support of neural reuse. A genuine “post-phrenological” science of the mind must jettison the assumption that each brain region performs its own fundamental computation. In this fashion Anderson’s work explicitly continues philosophy of neuroscience’s ongoing interest in localizations of cognitive functions.
In shorter compass, Anderson (2015) investigates the relevance of cognitive neuroscience for reconceiving psychology’s basic categories, starting from a consequence of his neural reuse hypothesis. Attempts to map cognitive processes onto specific neural processes and brain regions reveal “many-to-many” relations. Not only do these relations show that combined anatomical-functional labels for brain regions (e.g., “fusiform face area”) are deceptive; they also call into question the possibility of deciding between alternative psychological taxonomies by appealing to cognitive neuroscientific data.
For all but the strongest proponents of psychology’s autonomy from neuroscience, these many-to-many mappings will suggest that the psychological taxonomy we bring to this mapping project needs revision. One need not be committed to any strong sense of psychoneural reduction, or the epistemological superiority of cognitive neuroscience to psychology, to draw this conclusion. The mere relevance of cognitive neuroscience for psychology’s categories is enough. This debate is thus “about the requirements for a unified science of the mind, and the proper role of neurobiological evidence in the construction of such an ontology” (2015: 70), not about the legitimacy of either.
Anderson divides revisionary projects for psychology into three kinds, based on the degree of revision each kind recommends for psychology, and the extent of one-to-one function-to-structure mappings the proposed revisions predict will be available. “Conservatives” foresee little need for extensive revisions of psychology’s basic taxonomy, even as more neuroscientific evidence is taken into account than current standard practices pursue. “Moderates” insist that our knowledge of brain function “can (and should) act as one arbiter of the psychologically real” (2015: 70), principally by “splitting” or “merging” psychological concepts that currently are in use. “Radicals” project even more drastic revisions, even to the most primitive concepts of psychology, and even after such revisions they still do not expect that many one-to-one mappings between brain regions and the new psychological primitives will be found. Although Anderson does not stress this connection (eliminative materialism has not been a prominent concern in philosophy of mind or neuroscience for two decades), readers will notice similar themes discussed in section 2 above, only now with scientific, not folk, psychology as the target of the radical revisionists. A key criterion for any satisfactory reformulation of a cognitive ontology is the degree to which it supports two kinds of inferences: “forward inferences”, from the engagement of a specific cognitive function to the prediction of brain activity; and “reverse inferences”, from the observation that a specific brain region or pattern occurs to the prediction that a specific cognitive operation is engaged. In light of this explicit criterion, Anderson usefully surveys the work of a number of prominent psychologists and cognitive neuroscientists in each of his revisionist groups. Given his broader commitment to neural reuse, and the trek it invites into “evolutionarily-inspired, ecological, and enactive terms”, Anderson’s own sentiments lie with the “radicals”:
language and mathematics, for instance, are best understood as extensions of our basic affordance processing capacities augmented with public symbol systems … The psychological science that results from this reappraisal may well look very different from the one we practice today. (2015: 75)
Landmark neuroscientific hypotheses remain a popular focus in recent philosophy of neuroscience. Berit Brogaard (2012), for example, argues for a reinterpretation of the standard “dissociation” understanding of Melvin Goodale and David Milner’s (1992) celebrated “two visual processing streams”, a landmark, now “textbook” result from late-twentieth century neuroscience. Two components of the standard dissociation are key. The first is that distinct brain regions compute the information relevant for visually guided “on-the-fly” action and for visual object recognition: respectively, the dorsal stream (which runs from primary visual cortex through the middle temporal area into the superior and inferior parietal lobules) and the ventral stream (which runs from primary visual cortex through V4 into inferior temporal cortex). The second is that only information relevant for visual object recognition, processed in the ventral stream, contributes to the character of conscious visual experiences.
Brogaard’s concern is that this standard understanding challenges psychofunctionalism, our currently most plausible “naturalistic” account of mental states. Psychofunctionalism draws its account of mind directly from our best cognitive psychology. If φ is some mental state type that has inherited the content of a visual experience, then according to cognitive psychology a wide range of visually guided beliefs and desires, different kinds of visual memories, and so on, satisfy φ’s description. But by the standard “dissociation” account of Goodale and Milner’s two visual streams, only dorsal-stream states, and not ventral-stream states, represent truly egocentric visual properties, namely “relational properties which objects instantiate from the point of view of believers or perceivers” (Brogaard 2012: 572). But according to cognitive psychology, dorsal-stream states do not play this wide-ranging φ-role. So according to psychofunctionalism “φ-mental states cannot represent egocentric properties” (2012: 572). But it seems “enormously plausible” that some of our perceptual beliefs and visual memories represent egocentric properties. So either we reject psychofunctionalism, and so our most plausible naturalization project for determining whether a given mental state is instantiated, or we reject the standard dissociation interpretation of Goodale and Milner’s two visual streams hypothesis, despite the wealth of empirical evidence supporting it. Neither horn of this dilemma looks comfortably graspable, although the first horn might be thought to be more so, since psychofunctionalism as a general theory of mind lacks the kind of strong empirical backing that the standard interpretation of Goodale and Milner’s hypothesis enjoys.
Nevertheless, Brogaard recommends retaining psychofunctionalism, and instead rejecting “a particular formulation” of Goodale and Milner’s two visual stream hypothesis. The interpretation to reject insists that “dorsal-stream information cannot contribute to the potentially conscious representations computed by the ventral stream” (2012: 586–587). Egocentric representations of visual information computed by the dorsal stream contribute to conscious visual stream representations “via feedback connections” from dorsal- to ventral-stream neurons (2012: 586). This isn’t to deny dissociation:
Information about the egocentric properties of objects is processed by the dorsal stream, and information about allocentric properties of objects is processed by the ventral stream. (2012: 586)
But this dissociation hypothesis “has no bearing on what information is passed on to parts of the brain that process information which correlated with visual awareness” (2012: 586). With this re-interpretation, psychofunctionalism is rendered consistent with Goodale and Milner’s two stream, dorsal and ventral, “what” and “where/how” hypothesis and the wealth of empirical evidence that supports it. According to Brogaard, psychofunctionalism can thereby “correctly treat perceptual and cognitive states that carry information processed in the ventral visual stream as capable of representing egocentric properties” (2012: 586).
Despite philosophy of neuroscience’s continuing focus on cognitive/systems/computational neuroscience (see the discussion in section 7 above), interest in neurobiology’s cellular/molecular mainstream appears to be increasing. One notable paper is Ann-Sophie Barwich and Karim Bschir’s (2017) historical-cum-philosophical study of G-protein coupled receptors (GPCRs). Work on the structure and functional significance of these proteins has dominated molecular neuroscience for the past forty years, and their role in the mechanisms of a variety of cognitive functions is now empirically documented beyond question. And yet one finds little interest in, or even notice of, this development among philosophers. Barwich and Bschir’s yeoman historical research on the discovery and development of these receptors pays off philosophically. The role of manipulability as a criterion for entity realism in the science-in-practice of wet-lab research becomes meaningful “only once scientists have decided how to conceptually coordinate measurable effects distinctly to a scientific object” (2017: 1317). Scientific objects like GPCRs get assigned varying degrees of reality throughout different stages of the discovery process. Such an object’s role in evaluating the reality of “neighboring elements of enquiry” becomes part of the criteria of its reality as well.
The impact of science-in-practice on philosophy of science generally has been felt acutely in the philosophy of neuroscience, most notably in increased philosophical interest in neuroscientific experimentation. In itself this should not surprise. Neuroscience relies heavily on laboratory experimentation, especially within its cellular and molecular, “Society for Neuroscience” mainstream. So the call to understand experiment should beckon any philosopher who ventures into neuroscience’s cellular/molecular foundations. Two papers by Jacqueline Sullivan (2009, 2010) have been important in this new emphasis. In her (2009) Sullivan acknowledges both Bickle’s (2003) and Craver’s (2007) focus on cellular and molecular mechanisms of long-term potentiation, an experience-driven form of synaptic plasticity. But she insists that broader philosophical commitments, which lead Bickle to his ruthlessly reductionist account and Craver to his “mosaic unity” global account, obscure important aspects of real laboratory neuroscience practice. She emphasizes the role of “subprotocols”, which specify how data are to be gathered, in her model of “the experimental process”, and illustrates these notions with a number of examples. Her analysis reveals an important, underappreciated tension between a pair of widely accepted experimental norms. Pursuing “reliability” drives experimenters more deeply into extensive laboratory controls. Pursuing “external validity” drives them toward enriched experimental environments that more closely represent the messy natural environment beyond the laboratory. These two norms commonly conflict: in order to get more of one, scientists introduce conditions that give them less of the other.
In her (2010) Sullivan offers a detailed history of the Morris water maze task, tracing her account back to Morris’s original publications. Philosophers of neuroscience have uncritically assumed that the water maze is a widely accepted neuroscience protocol for rodent spatial learning and memory, but the detailed scientific history does not clearly support this interpretation. Scientific commentary over time on what this task measures, including some from Morris himself, reveals no clear consensus. Sullivan traces the source of this scientific inconsistency back to the impact of 1980s-era cellular-molecular reductionism on experimental behavioral neurobiology protocols like the Morris water maze.
A different motivation drives neurobiologist Alcino Silva, neuroinformaticist Anthony Landreth, and philosopher of neuroscience John Bickle’s (2014) focus on experimentation. All contemporary sciences are growing at a vertiginous pace, but perhaps none more so than neuroscience. It is no longer possible for any single scientist to keep up with all the relevant published literature in even his or her narrow research field, or fully to comprehend its implications. An overall lack of clarity and consensus about what is known, what remains doubtful, and what has been disproven creates special problems for experiment planning. There is a recognized and urgent need to develop strategies and tools to address these problems. Toward this explicit end, Silva, Landreth, and Bickle’s book describes a framework and a set of principles for organizing the published record. They derive their framework and principles directly from landmark case studies in the influential neuroscientific field of molecular and cellular cognition (MCC), and describe how the framework can be used to generate maps of experimental findings. Scientists armed with these research maps can then determine more efficiently what has been accomplished in their fields, and where the knowledge gaps still reside. The technology needed to automate the generation of these maps already exists. Silva, Landreth, and Bickle sketch the transformative, revolutionary impact these maps could have on current science.
Three goals motivate Silva, Landreth, and Bickle’s approach. First, they derive their framework from the cellular and molecular neurobiology of learning and memory. This choice was due strictly to familiarity with the science: Silva was instrumental in bringing gene-targeting techniques applied to mammals into behavioral neuroscience, and Bickle’s focus on ruthlessly reductive neuroscience was built on these and other experimental results. And while each of the framework’s different kinds of experiments and evidence has been recognized by others, theirs purports to be the first account to systematize this information explicitly toward the goal of facilitating experimental planning by practicing scientists. Silva, Landreth, and Bickle insist that important new experiments can be identified and planned by methodically filling in the different forms of evidence recognized by their framework, and applying the different forms of experiments to the gaps in the experimental record revealed by this process.
Second, Silva, Landreth, and Bickle take head-on the problem that the growing amount, complexity, and integration of the published literature poses for experiment planning. They show how graphic, weighted representations of research findings can be used to guide research decisions, and how to construct these maps. The principles for constructing them are the principles for integrating experimental results, derived directly from landmark published MCC research. Using a case study from recent molecular neuroscience, they show how to generate small maps that reflect a series of experiments, and how to combine these small maps to illustrate an entire field of neuroscience research.
Finally, Silva, Landreth, and Bickle begin to develop a science of experiment planning. They envision the causal graphs that compose their research maps playing a role similar to that played by statistics in the already-developed science of data analysis. Such a resource could have profound implications for further developing citation indices and other impact measures for evaluating contributions to a field, from those of individual scientists to those of entire institutions.
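The research-map idea can be made concrete with a small illustration. Below is a minimal sketch, in Python, of one way such a map might be represented: a weighted directed graph whose nodes are phenomena, whose edges are hypothesized causal connections, and whose edge weights accumulate as different kinds of experimental findings accrue. The evidence categories and their weights, the class names (`ResearchMap`, `CausalLink`), and the toy findings are all illustrative assumptions made for this entry, not Silva, Landreth, and Bickle’s published formalism.

```python
from dataclasses import dataclass, field

# Hypothetical evidence categories and weights; the framework's actual
# taxonomy of experiment types is richer, and these numbers are assumptions.
EVIDENCE_WEIGHTS = {
    "positive_manipulation": 3,  # inducing the putative cause increases the effect
    "negative_manipulation": 3,  # blocking the putative cause reduces the effect
    "correlation": 1,            # non-interventionist observation
}

@dataclass
class CausalLink:
    """One hypothesized causal connection between two phenomena."""
    source: str
    target: str
    evidence: dict = field(default_factory=dict)  # category -> count of findings

    def score(self) -> int:
        """Total evidential weight accumulated for this link."""
        return sum(EVIDENCE_WEIGHTS[c] * n for c, n in self.evidence.items())

class ResearchMap:
    """A research map as a weighted directed graph of published findings."""

    def __init__(self):
        self.links = {}  # (source, target) -> CausalLink

    def add_finding(self, source, target, category):
        """Record one published finding bearing on the source -> target link."""
        link = self.links.setdefault((source, target), CausalLink(source, target))
        link.evidence[category] = link.evidence.get(category, 0) + 1

    def merge(self, other):
        """Combine two small maps into one larger, field-level map."""
        merged = ResearchMap()
        for link in list(self.links.values()) + list(other.links.values()):
            for category, n in link.evidence.items():
                for _ in range(n):
                    merged.add_finding(link.source, link.target, category)
        return merged

    def gaps(self, category):
        """Links lacking a given kind of evidence: candidate next experiments."""
        return [edge for edge, link in self.links.items()
                if category not in link.evidence]

# A toy map built from three hypothetical findings in the LTP/memory literature.
m = ResearchMap()
m.add_finding("CREB activation", "long-term memory", "positive_manipulation")
m.add_finding("NMDA receptor activity", "LTP", "negative_manipulation")
m.add_finding("LTP", "long-term memory", "correlation")
print(m.gaps("positive_manipulation"))
# [('NMDA receptor activity', 'LTP'), ('LTP', 'long-term memory')]
```

On this toy representation, a knowledge gap is simply an edge lacking a given kind of evidence, which gives one concrete sense of how such maps could guide decisions about which experiment to run next.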
More recently, Bickle and Kostko (2018) have extended Silva, Landreth, and Bickle’s framework beyond the neurobiology of learning and memory. Their case study comes from developmental and social neuroscience: Michael Meaney and Moshe Szyf’s work on the epigenetic effects of rodent maternal nursing behaviors on offspring stress responses. Using the details of this case study, they elaborate a notion that Silva, Landreth, and Bickle leave underdeveloped: that of experiments designed explicitly so that their results, if successful, can be integrated directly into an already-existing background of established results. And they argue that such experiments, “integratable by design” with others, are aimed not at establishing evidence for individual causal relations among neuroscientific kinds, but rather at formulating entire causal pathways connecting multiple phenomena. Their emphasis on causal paths relates to that of Lauren Ross (forthcoming). Ross’s work is especially interesting in this context because she uses her causal pathway concept to address “causal selection”, which has to do with distinguishing between background conditions and “true” (triggering) causes of some outcome of interest. For Silva, Landreth, and Bickle (2014), accounting for this distinction is likewise crucial, and they rely on a specific kind of connection experiment, “positive manipulations”, to draw it. Bickle and Kostko’s appeal to causal paths in a detailed case study from recent developmental neurobiology might help bridge Silva, Landreth, and Bickle’s broader work on neurobiological experimentation with Ross’s work drawn from biology more generally.
Bibliography
- Akins, Kathleen A., 1993a, “What Is It Like to Be Boring and Myopic?”, in Dennett and His Critics: Demystifying Mind, Bo Dahlbom (ed.), Cambridge, MA: Blackwell, 124–160.
- –––, 1993b, “A Bat without Qualities?”, in Readings in Mind and Language, Martin Davies and Glyn W. Humphreys (eds.), (Consciousness: Psychological and Philosophical Essays 2), Cambridge, MA: Blackwell Publishing, 258–273.
- –––, 1996, “Of Sensory Systems and the ‘Aboutness’ of Mental States”, Journal of Philosophy, 93(7): 337–372. doi:10.2307/2941125
- Anderson, Michael L., 2014, After Phrenology: Neural Reuse and the Interactive Brain, Cambridge, MA: The MIT Press.
- –––, 2015, “Mining the Brain for a New Taxonomy of the Mind”, Philosophy Compass, 10(1): 68–77. doi:10.1111/phc3.12155
- Aston-Jones, Gary, Robert Desimone, J. Driver, Steven J. Luck, and Michael Posner, 1999, “Attention”, in Zigmond et al. 1999: 1385–1410.
- Balzer, Wolfgang, C. Ulises Moulines, and Joseph D. Sneed, 1987, An Architectonic for Science, Dordrecht: Springer Netherlands. doi:10.1007/978-94-009-3765-9
- Barwich, Ann-Sophie and Karim Bschir, 2017, “The Manipulability of What? The History of G-Protein Coupled Receptors”, Biology & Philosophy, 32(6): 1317–1339. doi:10.1007/s10539-017-9608-9
- Batterman, Robert W. and Collin C. Rice, 2014, “Minimal Model Explanations”, Philosophy of Science, 81(3): 349–376. doi:10.1086/676677
- Bechtel, William, 1998, “Representations and Cognitive Explanations: Assessing the Dynamicist’s Challenge in Cognitive Science”, Cognitive Science, 22(3): 295–318. doi:10.1207/s15516709cog2203_2
- Bechtel, William and Jennifer Mundale, 1999, “Multiple Realizability Revisited: Linking Cognitive and Neural States”, Philosophy of Science, 66(2): 175–207. doi:10.1086/392683
- Bechtel, William and Robert C. Richardson, 1993, Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research, Princeton, NJ: Princeton University Press.
- Bechtel, William, Pete Mandik, Jennifer Mundale, and Robert Stufflebeam (eds.), 2001, Philosophy and the Neurosciences: A Reader, Malden, MA: Blackwell.
- Bermúdez, José Luis, 1998, The Paradox of Self-Consciousness, (Representation and Mind), Cambridge, MA: MIT Press.
- Bickle, John, 1992, “Revisionary Physicalism”, Biology & Philosophy, 7(4): 411–430. doi:10.1007/BF00130060
- –––, 1993, “Philosophy Neuralized: A Critical Notice of P.M. Churchland’s A Neurocomputational Perspective”, Behavior and Philosophy, 20(2): 75–88.
- –––, 1995, “Psychoneural Reduction of the Genuinely Cognitive: Some Accomplished Facts”, Philosophical Psychology, 8(3): 265–285. doi:10.1080/09515089508573158
- –––, 1998, Psychoneural Reduction: The New Wave, Cambridge, MA: MIT Press.
- –––, 2003, Philosophy and Neuroscience: A Ruthlessly Reductive Account, Norwell, MA: Springer Academic Publishers.
- ––– (ed.), 2009, The Oxford Handbook of Philosophy and Neuroscience, New York: Oxford University Press. doi:10.1093/oxfordhb/9780195304787.001.0001
- Bickle, John and Aaron Kostko, 2018, “Connection Experiments in Neurobiology”, Synthese, 195(12): 5271–5295. doi:10.1007/s11229-018-1838-0
- Biro, J. I., 1991, “Consciousness and Subjectivity”, in Consciousness, Enrique Villanueva (ed.), (Philosophical Issues 1), 113–133. doi:10.2307/1522926
- Bliss, T. V. P. and T. Lømo, 1973, “Long-Lasting Potentiation of Synaptic Transmission in the Dentate Area of the Anaesthetized Rabbit Following Stimulation of the Perforant Path”, The Journal of Physiology, 232(2): 331–356. doi:10.1113/jphysiol.1973.sp010273
- Block, Ned, 1987, “Advertisement for a Semantics for Psychology”, Midwest Studies in Philosophy, 10: 615–678. doi:10.1111/j.1475-4975.1987.tb00558.x
- –––, 1995, “On a Confusion about a Function of Consciousness”, Behavioral and Brain Sciences, 18(2): 227–247. doi:10.1017/S0140525X00038188
- Bogen, Jim, 2005, “Regularities and Causality; Generalizations and Causal Explanations”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2): 397–420. doi:10.1016/j.shpsc.2005.03.009
- Bower, James M. and David Beeman, 1995, The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System, New York: Springer-Verlag.
- Brogaard, Berit (Brit), 2012, “Vision for Action and the Contents of Perception”, Journal of Philosophy, 109(10): 569–587. doi:10.5840/jphil20121091028
- Caplan, David N., T. Carr, James L. Gould, and R. Martin, 1999, “Language and Communication”, in Zigmond et al. 1999: 1329–1352.
- Chalmers, David John, 1996, The Conscious Mind: In Search of a Fundamental Theory, (Philosophy of Mind Series), New York: Oxford University Press.
- Chirimuuta, M., 2014, “Minimal Models and Canonical Neural Computations: The Distinctness of Computational Explanation in Neuroscience”, Synthese, 191(2): 127–153. doi:10.1007/s11229-013-0369-y
- Churchland, Patricia Smith, 1986, Neurophilosophy: Toward a Unified Science of the Mind-Brain, (Computational Models of Cognition and Perception), Cambridge, MA: MIT Press.
- Churchland, Patricia Smith and Terrence J. Sejnowski, 1992, The Computational Brain, (Computational Neuroscience), Cambridge, MA: MIT Press.
- Churchland, Paul M., 1979, Scientific Realism and the Plasticity of Mind, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511625435
- –––, 1981, “Eliminative Materialism and the Propositional Attitudes”, The Journal of Philosophy, 78(2): 67–90. doi:10.2307/2025900
- –––, 1987, Matter and Consciousness, revised edition, Cambridge, MA: MIT Press.
- –––, 1989, A Neurocomputational Perspective, Cambridge, MA: MIT Press.
- –––, 1995, The Engine of Reason, the Seat of the Soul, Cambridge, MA: MIT Press.
- –––, 1996, “The Rediscovery of Light”, The Journal of Philosophy, 93(5): 211–228. doi:10.2307/2940998
- Churchland, Paul M. and Patricia S. Churchland, 1997, “Recent Work on Consciousness: Philosophical, Theoretical, and Empirical”, Seminars in Neurology, 17(2): 179–186. doi:10.1055/s-2008-1040928
- Clark, Andy, 2016, Surfing Uncertainty: Prediction, Action, and the Embodied Mind, New York: Oxford University Press. doi:10.1093/acprof:oso/9780190217013.001.0001
- Clark, Austen, 1993, Sensory Qualities, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198236801.001.0001
- Craver, Carl F., 2007, Explaining the Brain: What the Science of the Mind-Brain Could Be, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199299317.001.0001
- Craver, Carl F. and Lindley Darden, 2001, “Discovering Mechanisms in Neurobiology”, in Machamer, Grush, and McLaughlin 2001: 112–137.
- De Brigard, Felipe, 2017, “Cognitive Systems and the Changing Brain”, Philosophical Explorations, 20(2): 224–241. doi:10.1080/13869795.2017.1312503
- Dennett, Daniel C., 1978, “Why You Can’t Make a Computer That Feels Pain”, Synthese, 38(3): 415–456. doi:10.1007/BF00486638
- –––, 1991, Consciousness Explained, New York: Little Brown.
- –––, 1995, “The Path Not Taken”, Behavioral and Brain Sciences, 18(2): 252–253. doi:10.1017/S0140525X00038243
- Dretske, Fred, 1981, Knowledge and the Flow of Information, Cambridge, MA: MIT Press.
- –––, 1988, Explaining Behavior, Cambridge, MA: MIT Press.
- Eliasmith, Chris, 2009, “Neurocomputational Models: Theory, Application, Philosophical Consequences”, in Bickle 2009: 346–369. doi:10.1093/oxfordhb/9780195304787.003.0014
- Eliasmith, Chris and Charles H. Anderson, 2003, Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems, (Computational Neuroscience), Cambridge, MA: MIT Press.
- Farah, Martha J., 2008, “Neuroethics and the Problem of Other Minds: Implications of Neuroscience for the Moral Status of Brain-Damaged Patients and Nonhuman Animals”, Neuroethics, 1(1): 9–18. doi:10.1007/s12152-008-9006-8
- Farah, Martha J. and Paul Root Wolpe, 2004, “Monitoring and Manipulating Brain Function: New Neuroscience Technologies and Their Ethical Implications”, The Hastings Center Report, 34(3): 35–45.
- Feyerabend, Paul K., 1963, “Comment: Mental Events and the Brain”, The Journal of Philosophy, 60(11): 295–296. doi:10.2307/2023030
- Flanagan, Owen, 2009, “Neuro‐Eudaimonics or Buddhists Lead Neuroscientists to the Seat of Happiness”, in Bickle 2009: 582–600. doi:10.1093/oxfordhb/9780195304787.003.0024
- Fodor, Jerry A., 1974, “Special Sciences (or: The Disunity of Science as a Working Hypothesis)”, Synthese, 28(2): 97–115. doi:10.1007/BF00485230
- –––, 1981, RePresentations, Cambridge, MA: MIT Press.
- –––, 1987, Psychosemantics, Cambridge, MA: MIT Press.
- Fodor, Jerry and Ernest LePore, 1992, Holism: A Shopper’s Guide, Cambridge, MA: MIT Press.
- Friston, Karl and Stefan Kiebel, 2009, “Predictive Coding under the Free-Energy Principle”, Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1521): 1211–1221. doi:10.1098/rstb.2008.0300
- Gazzaniga, Michael S. (ed.), 1995, The Cognitive Neurosciences, Cambridge, MA: MIT Press.
- Georgopoulos, A. P., A. B. Schwartz, and R. E. Kettner, 1986, “Neuronal Population Coding of Movement Direction”, Science, 233(4771): 1416–1419. doi:10.1126/science.3749885
- Goldman, Alvin I., 2006, Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading, New York: Oxford University Press. doi:10.1093/0195138929.001.0001
- Goodale, Melvyn A. and A. David Milner, 1992, “Separate Visual Pathways for Perception and Action”, Trends in Neurosciences, 15(1): 20–25.
- Grush, Rich, 2001, “The Semantic Challenge to Computational Neuroscience”, in Machamer, Grush, and McLaughlin 2001: 155–172.
- –––, 2004, “The Emulation Theory of Representation: Motor Control, Imagery, and Perception”, Behavioral and Brain Sciences, 27(3): 377–396. doi:10.1017/S0140525X04000093
- –––, 2005, “Brain Time and Phenomenological Time”, in Cognition and the Brain: The Philosophy and Neuroscience Movement, Andrew Brook and Kathleen Akins (eds.), Cambridge: Cambridge University Press, 160–207. doi:10.1017/CBO9780511610608.006
- Haken, H., J. A. S. Kelso, and H. Bunz, 1985, “A Theoretical Model of Phase Transitions in Human Hand Movements”, Biological Cybernetics, 51(5): 347–356. doi:10.1007/BF00336922
- Hardcastle, Valerie Gray, 1997, “When a Pain Is Not”, The Journal of Philosophy, 94(8): 381–409. doi:10.2307/2564606
- Hardin, C.L., 1988, Color for Philosophers: Unweaving the Rainbow, Indianapolis, IN: Hackett.
- Haugeland, John, 1985, Artificial Intelligence: The Very Idea, Cambridge, MA: MIT Press.
- Hawkins, Robert D. and Eric R. Kandel, 1984, “Is There a Cell-Biological Alphabet for Learning?”, Psychological Review, 91(3): 375–391.
- Hebb, D.O., 1949, The Organization of Behavior: A Neuropsychological Theory, New York: Wiley.
- Hirstein, William, 2005, Brain Fiction: Self-Deception and the Riddle of Confabulation, Cambridge, MA: MIT Press.
- Hodgkin, Alan L., Andrew F. Huxley, and Bernard Katz, 1952, “Measurement of Current‐voltage Relations in the Membrane of the Giant Axon of Loligo”, The Journal of Physiology, 116(4): 424–448. doi:10.1113/jphysiol.1952.sp004716
- Hohwy, Jakob, 2013, The Predictive Mind, New York: Oxford University Press. doi:10.1093/acprof:oso/9780199682737.001.0001
- Hooker, C.A., 1981a, “Towards a General Theory of Reduction. Part I: Historical and Scientific Setting”, Dialogue, 20(1): 38–59. doi:10.1017/S0012217300023088
- –––, 1981b, “Towards a General Theory of Reduction. Part II: Identity in Reduction”, Dialogue, 20(2): 201–236. doi:10.1017/S0012217300023301
- –––, 1981c, “Towards a General Theory of Reduction. Part III: Cross-Categorical Reduction”, Dialogue, 20(3): 496–529. doi:10.1017/S0012217300023593
- Horgan, Terence and George Graham, 1991, “In Defense of Southern Fundamentalism”, Philosophical Studies, 62(2): 107–134. doi:10.1007/BF00419048
- Hubel, D. H. and T. N. Wiesel, 1962, “Receptive Fields, Binocular Interaction and Functional Architecture in the Cat’s Visual Cortex”, The Journal of Physiology, 160(1): 106–154. doi:10.1113/jphysiol.1962.sp006837
- Jackson, Frank and Philip Pettit, 1990, “In Defence of Folk Psychology”, Philosophical Studies, 59(1): 31–54. doi:10.1007/BF00368390
- Kandel, Eric R., 1976, Cellular Basis of Behavior: An Introduction to Behavioral Neurobiology, Oxford, England: W. H. Freeman.
- Kaplan, David Michael and Carl F. Craver, 2011, “The Explanatory Force of Dynamical and Mathematical Models in Neuroscience: A Mechanistic Perspective”, Philosophy of Science, 78(4): 601–627. doi:10.1086/661755
- Kawato, M., 1999, “Internal Models for Motor Control and Trajectory Planning”, Current Opinion in Neurobiology, 9(6): 718–727.
- Klein, C., 2010, “Images Are Not the Evidence in Neuroimaging”, The British Journal for the Philosophy of Science, 61(2): 265–278. doi:10.1093/bjps/axp035
- Kolb, Bryan and Ian Q. Whishaw, 1996, Fundamentals of Human Neuropsychology, 4th edition, New York: W.H. Freeman.
- Kosslyn, Stephen M., 1997, “Mental Imagery”, in Conversations in the Cognitive Neurosciences, Michael S. Gazzaniga (ed.), Cambridge, MA: MIT Press, pp. 37–52.
- Lee, Choongkil, William H. Rohrer, and David L. Sparks, 1988, “Population Coding of Saccadic Eye Movements by Neurons in the Superior Colliculus”, Nature, 332(6162): 357–360. doi:10.1038/332357a0
- Lehky, Sidney R. and Terrence J. Sejnowski, 1988, “Network Model of Shape-from-Shading: Neural Function Arises from Both Receptive and Projective Fields”, Nature, 333(6172): 452–454. doi:10.1038/333452a0
- Lettvin, J., H. Maturana, W. McCulloch, and W. Pitts, 1959, “What the Frog’s Eye Tells the Frog’s Brain”, Proceedings of the IRE, 47(11): 1940–1951. doi:10.1109/JRPROC.1959.287207
- Levine, Joseph, 1983, “Materialism and Qualia: The Explanatory Gap”, Pacific Philosophical Quarterly, 64(4): 354–361. doi:10.1111/j.1468-0114.1983.tb00207.x
- Levy, Neil, 2007, Neuroethics: Challenges for the 21st Century, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511811890
- Llinás, Rodolfo R., 1975, “The Cortex of the Cerebellum”, Scientific American, 232(1/January): 56–71. doi:10.1038/scientificamerican0175-56
- Llinás, Rodolfo R. and Patricia Smith Churchland (eds.), 1996, The Mind-Brain Continuum: Sensory Processes, Cambridge, MA: MIT Press.
- Lloyd, Dan, 2002, “Functional MRI and the Study of Human Consciousness”, Journal of Cognitive Neuroscience, 14(6): 818–831. doi:10.1162/089892902760191027
- –––, 2003, Radiant Cool: A Novel Theory of Consciousness, Cambridge, MA: MIT Press.
- Machamer, Peter, Lindley Darden, and Carl F. Craver, 2000, “Thinking about Mechanisms”, Philosophy of Science, 67(1): 1–25. doi:10.1086/392759
- Machamer, Peter K., Rick Grush, and Peter McLaughlin (eds.), 2001, Theory and Method in the Neurosciences (Pittsburgh-Konstanz Series in the Philosophy and History of Science), Pittsburgh, PA: University of Pittsburgh Press.
- Magistretti, Pierre J., 1999, “Brain Energy Metabolism”, in Zigmond et al. 1999: 389–413.
- Nagel, Thomas, 1971, “Brain Bisection and the Unity of Consciousness”, Synthese, 22(3–4): 396–413. doi:10.1007/BF00413435
- –––, 1974, “What Is It Like to Be a Bat?”, The Philosophical Review, 83(4): 435–450. doi:10.2307/2183914
- Piccinini, Gualtiero and Carl Craver, 2011, “Integrating Psychology and Neuroscience: Functional Analyses as Mechanism Sketches”, Synthese, 183(3): 283–311. doi:10.1007/s11229-011-9898-4
- Place, U. T., 1956, “Is Consciousness a Brain Process?”, British Journal of Psychology, 47(1): 44–50. doi:10.1111/j.2044-8295.1956.tb00560.x
- Polger, Thomas W., 2004, Natural Minds, Cambridge, MA: MIT Press.
- Prinz, Jesse J., 2004, Gut Reactions: A Perceptual Theory of the Emotions, New York: Oxford University Press.
- –––, 2007, The Emotional Construction of Morals, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199571543.001.0001
- Putnam, Hilary, 1967, “Psychological Predicates”, in Art, Mind, and Religion: Proceedings of the 1965 Oberlin Colloquium in Philosophy, W. H. Capitan and D. D. Merrill (eds.), Pittsburgh, PA: University of Pittsburgh Press, pp. 49–54.
- Rall, Wilfrid, 1959, “Branching Dendritic Trees and Motoneuron Membrane Resistivity”, Experimental Neurology, 1(5): 491–527. doi:10.1016/0014-4886(59)90046-9
- Ramsey, William, 1992, “Prototypes and Conceptual Analysis”, Topoi, 11(1): 59–70. doi:10.1007/BF00768299
- Roskies, Adina L., 2007, “Are Neuroimages Like Photographs of the Brain?”, Philosophy of Science, 74(5): 860–872. doi:10.1086/525627
- –––, 2009, “What’s ‘Neu’ in Neuroethics?”, in Bickle 2009: 454–472. doi:10.1093/oxfordhb/9780195304787.003.0019
- Ross, Lauren N., 2015, “Dynamical Models and Explanation in Neuroscience”, Philosophy of Science, 82(1): 32–54. doi:10.1086/679038
- –––, forthcoming, “Causal Concepts in Biology: How Pathways Differ from Mechanisms and Why It Matters”, The British Journal for the Philosophy of Science, first online: 12 December 2018. doi:10.1093/bjps/axy078
- Rumelhart, D. E., G. E. Hinton, and J. L. McClelland, 1986, “A Framework for Parallel Distributed Processing”, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1, D. E. Rumelhart and J. L. McClelland (eds.), Cambridge, MA: MIT Press, pp. 45–76.
- Rupert, Robert D., 2009, Cognitive Systems and the Extended Mind, New York: Oxford University Press. doi:10.1093/acprof:oso/9780195379457.001.0001
- Sacks, Oliver, 1985, The Man Who Mistook His Wife for a Hat and Other Clinical Tales, New York: Summit Books.
- Schaffner, Kenneth F., 1992, “Philosophy of Medicine”, in Introduction to the Philosophy of Science: A Text by Members of the Department of the History and Philosophy of Science of the University of Pittsburgh, Merrilee H. Salmon, John Earman, Clark Glymour, James G. Lennox, Peter Machamer, J. E. McGuire, John D. Norton, Wesley C. Salmon, and Kenneth F. Schaffner (eds.), Englewood Cliffs, NJ: Prentice-Hall, pp. 310–345.
- Schechter, Elizabeth, 2018, Self-Consciousness and ‘Split’ Brains: The Minds’ I, New York: Oxford University Press. doi:10.1093/oso/9780198809654.001.0001
- Schneider, Susan, 2009, “Future Minds: Transhumanism, Cognitive Enhancement, and the Nature of Persons”, in Penn Center Guide to Bioethics, Vardit Ravitsky, Autumn Fiester, and Arthur L. Caplan (eds.), New York: Springer, pp. 95–110.
- Schroeder, Timothy, 2004, Three Faces of Desire, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195172379.001.0001
- Silberstein, Michael and Anthony Chemero, 2013, “Constraints on Localization and Decomposition as Explanatory Strategies in the Biological Sciences”, Philosophy of Science, 80(5): 958–970. doi:10.1086/674533
- Silva, Alcino J., Anthony Landreth, and John Bickle, 2014, Engineering the Next Revolution in Neuroscience: The New Science of Experiment Planning, New York: Oxford University Press. doi:10.1093/acprof:oso/9780199731756.001.0001
- Smart, J. J. C., 1959, “Sensations and Brain Processes”, The Philosophical Review, 68(2): 141–156. doi:10.2307/2182164
- Stich, Stephen, 1983, From Folk Psychology to Cognitive Science, Cambridge, MA: MIT Press.
- Stufflebeam, Robert S. and William Bechtel, 1997, “PET: Exploring the Myth and the Method”, Philosophy of Science, 64(supplement/December): S95–S106. doi:10.1086/392590
- Sullivan, Jacqueline A., 2009, “The Multiplicity of Experimental Protocols: A Challenge to Reductionist and Non-Reductionist Models of the Unity of Neuroscience”, Synthese, 167(3): 511–539. doi:10.1007/s11229-008-9389-4
- –––, 2010, “Reconsidering ‘Spatial Memory’ and the Morris Water Maze”, Synthese, 177(2): 261–283. doi:10.1007/s11229-010-9849-5
- Suppe, Frederick, 1974, The Structure of Scientific Theories, Urbana, IL: University of Illinois Press.
- Tye, Michael, 1993, “Blindsight, the Absent Qualia Hypothesis, and the Mystery of Consciousness”, in Philosophy and the Cognitive Sciences, Christopher Hookway and Donald M. Peterson (eds.), (Royal Institute of Philosophy Supplement 34), Cambridge: Cambridge University Press, 19–40. doi:10.1017/S1358246100002447
- Van Fraassen, Bas C., 1980, The Scientific Image, New York: Oxford University Press. doi:10.1093/0198244274.001.0001
- Von Eckardt Klein, Barbara, 1975, “Some Consequences of Knowing Everything (Essential) There Is to Know About One’s Mental States”, The Review of Metaphysics, 29(1): 3–18.
- –––, 1978, “Inferring Functional Localization from Neurological Evidence”, in Explorations in the Biology of Language, Edward Walker (ed.), Cambridge, MA: MIT Press, pp. 27–66.
- Weber, Marcel, 2008, “Causes without Mechanisms: Experimental Regularities, Physical Laws, and Neuroscientific Explanation”, Philosophy of Science, 75(5): 995–1007. doi:10.1086/594541
- Woodward, James, 2003, Making Things Happen: A Theory of Causal Explanation, Oxford: Oxford University Press. doi:10.1093/0195155270.001.0001
- Zigmond, Michael J., Floyd E. Bloom, Story C. Landis, James L. Roberts, and Larry R. Squire (eds.), 1999, Fundamental Neuroscience, San Diego, CA: Academic Press.
Academic Tools
- How to cite this entry.
- Preview the PDF version of this entry at the Friends of the SEP Society.
- Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO).
- Enhanced bibliography for this entry at PhilPapers, with links to its database.
Other Internet Resources
- Philosophy-Neuroscience-Psychology Program at Washington University in St. Louis
- The Graduate Program, Department of History and Philosophy of Science, University of Pittsburgh
- Neurophilosophy and Neuroethics, Georgia State University Neuroscience Institute
- List of Underrepresented Philosophers of Neuroscience
- Society for Neuroscience
Acknowledgments
Jonathan Kanzelmeyer and Mara McGuire assisted with the research for Section 8.