The Neuroscience of Consciousness

First published Tue Oct 9, 2018; substantive revision Wed Apr 3, 2024

Conscious experience in humans depends on brain activity, so neuroscience will contribute to explaining consciousness. What would it be for neuroscience to explain consciousness? How much progress has neuroscience made in doing so? What challenges does it face? How can it meet those challenges? What is the philosophical significance of its findings? This entry addresses these and related questions.

To bridge the gulf between brain and consciousness, we need neural data, computational and psychological models, and philosophical analysis to identify principles to connect brain activity to conscious experience in an illuminating way. This entry will focus on identifying such principles without shying away from the neural details. The theories and data to be considered will be organized around constructing answers to two questions (see section 1.4 for more precise formulations):

  • Generic Consciousness: How might neural properties explain when a state is conscious rather than not?
  • Specific Consciousness: How might neural properties explain what the content of a conscious state is?

A challenge for an objective science of consciousness is to dissect an essentially subjective phenomenon. As investigators cannot experience another subject’s conscious states, they rely on the subject’s observable behavior to track consciousness. Priority is given to a subject’s introspective reports as these express the subject’s take on her experience. Introspection thus provides a fundamental way, perhaps the fundamental way, to track consciousness. That said, consciousness pervasively influences human behavior and affects physiological responses, so other forms of behavior and physiological data beyond introspective reports provide a window on consciousness. How to leverage disparate evidence is a central issue.

The term “neuroscience” covers those scientific fields whose explanations advert to the properties of neurons, populations of neurons, or larger parts of the nervous system.[1] This includes, but is not limited to, psychologists’ and cognitive neuroscientists’ use of various neuroimaging methods to monitor the activity of tens of millions of neurons, computational theorists’ modeling of biological and artificial neural networks, neuroscientists’ use of electrodes inserted into brain tissue to record neural activity from individual or populations of neurons, and clinicians’ study of patients with altered conscious experiences in light of damage to brain areas. Given the breadth of neuroscience so conceived, this review focuses mostly on cortical activity that sustains perceptual consciousness, with emphasis on vision. This is not because visual consciousness is more important than other forms of consciousness. Rather, the level of detail in empirical work on vision often speaks more comprehensively to the issues that we shall confront.

That said, there are many forms of consciousness that we will not discuss. Some are covered in other entries such as split-brain phenomena (see the entry on the unity of consciousness, section 4.1.1), animal consciousness (see the entry on animal consciousness), and neural correlates of the will and agency (see the entry on agency, section 5). In addition, this entry will not discuss the neuroscience of consciousness in audition, olfaction, or gustation; disturbed consciousness in mental disorders such as schizophrenia; conscious aspects of pleasure, pain and the emotions; the phenomenology of thought; the neural basis of dreams; and modulations of consciousness during sleep and anesthesia among other issues. These are important topics, and the principles and approaches highlighted in this discussion will apply to many of these domains.

1. Fundamentals

1.1 A Map of the Brain

The brain can be divided into the cerebral cortex and the subcortex. The cortex is divided into two hemispheres, left and right, each of which can be divided into four lobes: frontal, parietal, temporal and occipital.

[Figure: a diagram of the left hemisphere showing the four lobes and salient areas: PFC in the frontal lobe; S1, SPL, and IPL in the parietal lobe; the dorsal stream areas V3A, V6, and MST/MT; V1 surrounded by V2 and V3 in the occipital lobe; and the ventral stream area V4 leading to IT in the temporal lobe. Arrows indicate projections from V1 to V4 and IT (ventral stream) and from V1 through V3A, V6, and MST/MT to SPL and IPL (dorsal stream).]

Figure 1. The Cerebral Cortex and Salient Areas

Figure Legend: The four lobes of the primate brain, shown for the left hemisphere. Some areas of interest are highlighted. Abbreviations: PFC: prefrontal cortex; IT: inferotemporal cortex; S1: primary somatosensory cortex; IPL and SPL: Inferior and Superior Parietal Lobule; MST: medial superior temporal visual area; MT: middle temporal visual area (also called V5 in humans); V2-V6: additional visual areas.

The discussion that follows will highlight specific areas of cortex including the prefrontal cortex that will figure in discussions of confidence (section 2.3), the global neuronal workspace (section 3.1) and higher order theories (section 3.3); the dorsal visual stream that projects into parietal cortex and the ventral visual stream that projects into temporal cortex including visual areas specialized for processing places, faces, and word forms (see sections 2.5 on places, 4.1 on visual agnosia and 5.3 on seeing words); primary somatosensory cortex S1 (see section 5.3.2 on tactile sensation); and early visual areas in the occipital cortex including the primary visual area, V1 (see sections 4.2 on blindsight and 5.2 on binocular rivalry) and a motion sensitive area V5, also known as the middle temporal area (MT; section 5.3.1 on seeing motion). Beneath the cortex is the subcortex, divided into the forebrain, midbrain, and hindbrain, which comprises many regions, though our discussion will largely touch on only the superior colliculus and the thalamus. The latter is critical for regulating wakefulness and general arousal, and both areas play an important role in visual processing.

1.2 Neurons and Brain

A neuroscientific explanation of consciousness adduces properties of the brain, typically the brain’s electrical properties. A salient phenomenon is neural signaling through action potentials or spikes. A spike is a large change in electrical potential across a neuron’s cellular membrane which can be transmitted between neurons that form a neural circuit. For a sensory neuron, the spikes it generates are tied to its receptive field. For example, a visual neuron’s receptive field is understood in spatial terms: it corresponds to that area of external space where an appropriate stimulus triggers the neuron to spike. Given this correlation between stimulus and spikes, the latter carry information about the former. Information processing in sensory systems involves processing of information regarding stimuli within receptive fields.

Which electrical property provides the most fruitful explanatory basis for understanding consciousness remains an open question. For example, when looking at a single neuron, neuroscientists are typically interested not in individual spikes per se but in the spike rate, the number of spikes a neuron generates per unit time. Yet spike rate is one among many potentially relevant neural properties. Consider the blood oxygen level dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI). The BOLD signal is a measure of changes in blood flow in the brain when neural tissue is active and is postulated to be a function of postsynaptic activity, while spikes are tied to presynaptic activity. Furthermore, neuroscientists are often not interested in the response of a single neuron but rather that of a population of neurons, of whole brain regions, and/or their interactions. Higher order properties of brain regions include the local field potential generated by populations of neurons and correlated activity such as synchrony between activity in different areas of the brain as measured by, for example, electroencephalography (EEG).

The number of neural properties potentially relevant to explaining mental phenomena is dizzying. This review focuses on the facts that neural sensory systems carry information about the subject’s environment and that neural information processing can be tied to a notion of neural representation. How precisely to understand neural representation is itself a vexed question (Cao 2012, 2014; Shea 2014; Baker, Lansdell, & Kording 2022), but we will deploy a simple assumption with respect to spikes which can be reconfigured for other properties: where a sensory neuron generates spikes when a stimulus is placed in its receptive field, the spikes carry information about the stimulus. The sensory neuron’s activity thus represents the relevant aspect of the stimulus that drives its response (e.g., direction of motion or intensity of a sound).[2] We shall return to neural representation in the final section when discussing how neural representations might explain conscious contents.
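To make this assumption concrete, here is a minimal sketch (not drawn from the entry itself) of a toy direction-selective neuron whose spike count depends on the direction of motion of a stimulus placed in its receptive field. All function names and parameter values are invented for illustration; the point is only that, because the expected spike count varies systematically with the stimulus feature, the count carries information about, and in the simple sense used here represents, that feature.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def tuning_curve(direction_deg, preferred_deg=90.0, baseline=5.0, peak=50.0, kappa=2.0):
    """Mean firing rate (spikes/s) of a toy direction-selective neuron.

    The rate peaks when the stimulus moves in the neuron's preferred
    direction and falls off smoothly for other directions.
    """
    delta = np.deg2rad(direction_deg - preferred_deg)
    return baseline + (peak - baseline) * np.exp(kappa * (np.cos(delta) - 1.0))

def spike_count(direction_deg, window_s=0.5):
    """Poisson spike count in a response window for a stimulus in the receptive field."""
    return rng.poisson(tuning_curve(direction_deg) * window_s)

# Because the expected count varies systematically with direction, the count
# carries information about (and, in the simple sense used here, represents)
# the direction of motion of the stimulus.
for d in (0, 45, 90, 135, 180):
    counts = [spike_count(d) for _ in range(200)]
    print(f"direction {d:3d} deg: mean spike count = {np.mean(counts):.1f}")
```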

1.3 Access Consciousness and Phenomenal Consciousness

An important distinction separates access consciousness from phenomenal consciousness (Block 1995). “Phenomenal consciousness” refers to those properties of experience that correspond to what it is like for a subject to have those experiences (Nagel 1974 and the entry on qualia). These features are apparent to the subject from the inside, so tracking them arguably depends on one’s having the relevant experience. For example, one understands what it is like to see red only if one has visual experiences of the relevant type (Jackson 1982).

As noted earlier, introspection is the first source of evidence about consciousness. Introspective reports bridge the subjective and objective. They serve as a behavioral measure that expresses the subject’s own take on what it is like for them in having an experience. While there have been recent concerns about the reliability or empirical usefulness of introspection (Schwitzgebel 2011; Irvine 2012a), there are plausibly many contexts where introspection is reliable (Spener 2015; Wu 2023: chap. 7; see Irvine 2012b for an extended discussion of introspection in consciousness science; for philosophical theories, see Smithies & Stoljar 2012).

Introspective reports demonstrate that the subject can access the targeted conscious state. That is, the state is access-conscious: it is accessible for use in reasoning, report, and the control of action. Talk of access-consciousness must keep track of the distinction between actual access and mere accessibility. When one reports on one’s conscious state, one accesses the state. Thus, access consciousness provides much of the evidence for empirical theories of consciousness. Still, it seems plausible that a state can be conscious even if one does not access it in report, so long as that state is accessible: one could report it. Access-consciousness is usually defined in terms of this dispositional notion of accessibility.

We must also consider the type of access/accessibility. Block’s original characterization of access-consciousness emphasized accessibility in terms of the rational control of behavior, so we can summarize his account as follows:

A representation is access-conscious if it is poised for free use in reasoning and for direct “rational” control of action and speech.

Rational access contrasts with a broader conception of intentional access that takes a mental state to be access-conscious if it can inform goal-directed or intentional behavior, including behavior that is not rational or done for a reason. This broader notion allows for additional measurable behaviors as relevant in assessing phenomenal consciousness, especially in non-linguistic animals, non-verbal infants, and non-communicative patients. So, if access provides us with evidence for phenomenal consciousness, this can be (a) through introspective reports, (b) through rational behavior, or (c) through intentional behavior, including nonrational behavior. Indeed, in certain contexts, reflexive behavior and autonomic physiological measures provide measures of consciousness (section 2.2).

1.4 Generic and Specific Consciousness

There are two ways in which consciousness is understood in this entry. The first focuses on a mental state’s being conscious in general as opposed to not being conscious. Call this property generic consciousness, a property shared by specific conscious states such as seeing a red rose, feeling a touch, or being angry. Thus:

Generic Consciousness: What conditions/states N of nervous systems are necessary and/or sufficient for a mental state, M, to be conscious as opposed to not?

If there is such an N, then the presence of N entails that an associated mental state M is conscious and/or its absence entails that M is unconscious.

A second focus will be on the content of consciousness, say that associated with a perceptual experience’s being of some perceptible X. This yields a question about specific contents of consciousness such as experiencing the motion of an object (see section 5.3.1) or a vibration on one’s finger (see section 5.3.2):

Specific Consciousness: What neural states or properties are necessary and/or sufficient for a conscious perceptual state to have content X rather than Y?

In introspectively accessing their conscious states, subjects report what their experience is like by reporting what they experience. Thus, the subject can report seeing an object moving, changing color, or being of a certain kind (e.g., a mug) and thus specify the content of the perceptual state. Discussion of specific consciousness will focus on perceptual states described as consciously perceiving X where X can be a particular such as a face or a property such as the color of an object.

Posing a clear question involves grasping its possible answers and in science, this is informed by identifying experiments that can provide evidence for such answers. The emphasis on necessary and sufficient conditions in our two questions indicates how to empirically test specific proposals. To test sufficiency, one would aim to produce or modulate a certain neural state and then demonstrate that consciousness of a certain form arises. To test necessity, one would eliminate a certain neural state and demonstrate that consciousness is abolished. Notice that such tests go beyond mere correlation between neural states and conscious states (see section 1.6 on neural correlates and sections 2.2, 4 and 5 for tests of necessity and sufficiency).

In many experimental contexts, the underlying idea is causal necessity and sufficiency. However, if A = B, then A’s presence is also necessary and sufficient for B’s presence since they are identical. Thus, a brain lesion that eliminates N and thereby eliminates conscious state S might do so either because N is causally necessary for S or because N = S. An intermediate relation is that N constitutes or grounds S which does not imply that N = S (see the entry on metaphysical grounding). Whichever option holds for S, the first step is to find N, a neural correlate of consciousness (section 1.6).

In what follows, to explain generic consciousness, various global properties of neural systems will be considered (section 3) as well as specific anatomical regions that are tied to conscious versus unconscious vision as a case study (section 4). For specific consciousness, fine-grained manipulations of neural representations will be examined that plausibly shift and modulate the contents of perceptual experience (section 5).

1.5 The Hard Problem

David Chalmers presents the hard problem as follows:

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does. If any problem qualifies as the problem of consciousness, it is this one. (Chalmers 1995: 212)

The Hard Problem can be specified in terms of generic and specific consciousness (Chalmers 1996). In both cases, Chalmers argues that there is an inherent limitation to empirical explanations of phenomenal consciousness in that empirical explanations will be fundamentally either structural or functional, yet phenomenal consciousness is not reducible to either. This means that there will be something that is left out in empirical explanations of consciousness, a missing ingredient (see also the explanatory gap [Levine 1983]).

There are different responses to the hard problem. One response is to sharpen the explanatory targets of neuroscience by focusing on what Chalmers calls structural features of phenomenal consciousness, such as the spatial structure of visual experience, or on the contents of phenomenal consciousness. When we assess explanations of specific contents of consciousness, these focus on the neural representations that fix conscious contents. These explanations leave open exactly what the secret ingredient is that shifts a state with that content from unconsciousness to consciousness. On ingredients explaining generic consciousness, a variety of options have been proposed (see section 3), but it is unclear whether these answer the Hard Problem, especially if any answer to the Problem must, as a necessary condition, conceptually close off certain possibilities, say the possibility that the ingredient could be added yet consciousness not ignite, as in a zombie, a creature without phenomenal consciousness (see the entry on zombies). Indeed, some philosophers deny the hard problem (see Dennett 2018 for a recent statement). Patricia Churchland urges: “Learn the science, do the science, and see what happens” (Churchland 1996: 408).

Perhaps the most common attitude for neuroscientists is to set the hard problem aside. Instead of explaining the existence of consciousness in the biological world, they set themselves to explaining generic consciousness by identifying neural properties that can turn consciousness on and off and explaining specific consciousness by identifying the neural representational basis of conscious contents.

1.6 Neural Correlates of Consciousness

Modern neuroscience of consciousness has attempted to explain consciousness by focusing on neural correlates of consciousness or NCCs (Crick & Koch 1990; LeDoux, Michel, & Lau 2020; Morales & Lau 2020). Identifying correlates is an important first step in understanding consciousness, but it is an early step. After all, correlates are not necessarily explanatory in the sense of answering specific questions posed by neuroscience. That one does not want a mere correlate was recognized by Chalmers who defined an NCC as follows:

An NCC is a minimal neural system N such that there is a mapping from states of N to states of consciousness, where a given state of N is sufficient, under conditions C, for the corresponding state of consciousness. (Chalmers 2000: 31)

A similar way of putting this is that an NCC is “the minimal set of neuronal events and mechanisms jointly sufficient for a specific conscious percept” (Koch 2004: 16). One wants a minimal neural system since, crudely put, the brain is sufficient for consciousness but to point this out is hardly to explain consciousness even if it provides an answer to questions about sufficiency. There is, of course, much more to be said that is informative even if one does not drill down to a “minimal” neural system which is tricky to define or operationalize (see Chalmers 2000 for discussion; for criticisms of the NCC approach, see Noë & Thompson 2004; for criticisms of Chalmers’ definition, see Fink 2016).

The emphasis on sufficiency goes beyond mere correlation, as neuroscientists aim to answer more than the question: What is a neural correlate for conscious phenomenon C? For example, Chalmers’ and Koch’s emphases on sufficiency indicate that they aim to answer the question: What neural phenomenon is sufficient for consciousness? Perhaps more specifically: What neural phenomenon is causally sufficient for consciousness? Accordingly, talk of “correlate” is unfortunate since sufficiency implies correlation but not vice versa. After all, assume that the NCC is type identical to a conscious state. Then many neural states will correlate with the conscious state: (1) the NCC’s typical effects, (2) its typical causes, and (3) states that are necessary for the NCC’s obtaining (e.g., the presence of sufficient oxygen). Thus, some correlated states will not be explanatory. For example, citing the effects of consciousness will not provide causally sufficient conditions for consciousness.

Establishing necessary conditions for consciousness is also difficult. Neuroplasticity, redundancy, and convergent evolution make necessity claims extremely hard to support experimentally. Under normal conditions, healthy humans may require certain brain areas or processes for supporting consciousness. However, this does not mean those regions or processes are necessary in any strong metaphysical sense. For example, after a lesion, the brain’s functional connections may change, allowing a different structural support for consciousness to emerge. There is also nothing in the cortical arrangement of specialized regions that fixes them to a particular function. The so-called “visual cortex” is recruited in blind individuals to perform auditory, numerical, and linguistic processing (Bedny 2017). Similarly, as with many other complex structures, the brain is highly redundant. Different areas may perform the same function, preventing any single one from being strictly necessary. Finally, the way in which the mammalian brain operates is not the only way to support awareness. Birds—as well as cephalopods and insects (Barron & Klein 2016; Birch, Schnell, & Clayton 2020)—and even other primates have different anatomical and functional neural mechanisms, and yet their nervous systems may support consciousness (Nieder, Wagener, & Rinnert 2020). Thus, there may not be a single necessary structure or process for supporting conscious awareness.

While many theorists are focused on explanatory correlates, it is not clear that the field has always grasped this, something recent theorists have been at pains to emphasize (Aru et al. 2012; Graaf, Hsieh, & Sack 2012). In other contexts, neuroscientists speak of the neural basis of a phenomenon, where the basis does not simply correlate with the phenomenon but also explains and possibly grounds it. However, talk of correlates is entrenched in the neuroscience of consciousness, so one must remember that the goal is to find the subset of neural correlates that are explanatory in answering concrete questions. Reference to neural correlates in this entry will always mean neural explanatory correlates of consciousness (on occasion, we will speak of these as the neural basis of consciousness). That is, our two questions about specific and generic consciousness focus the discussion on neuroscientific theories and data that contribute to explaining them. This project allows that there are limits to neural explanations of consciousness, precisely because of the explanatory gap (Levine 1983).

2. Methods for Tracking Consciousness

Since studying consciousness requires that scientists track its presence, it will be important to examine various methods used in neuroscience to isolate and probe conscious states.

2.1 Introspection and Report

Scientists primarily study phenomenal consciousness through subjective reports; objective measures such as performance in a task are often used too (for a critical assessment, see Irvine 2013). We can treat reports in neuroscience as conceptual in that they express how the subject recognizes things to be, whether regarding what they perceive (perceptual or observational reports, as in psychophysics) or regarding what mental states they are in (introspective reports). A report’s conceptual content can be conveyed in words or other overt behavior whose significance is fixed within an experiment (e.g., pressing a button to indicate that a stimulus is present or that one sees it). Subjective reports of conscious states draw on distinctively first-personal access to that state. The subject introspects.

Introspection raises questions that science has only recently begun to address systematically in large part because of longstanding suspicion regarding introspective methods or, in contrast, because of an unquestioning assumption that introspection is largely and reliably accurate. Early modern psychology relied on introspection to parse mental processes but ultimately abandoned it due to worries about introspection’s reliability (Feest 2012; Spener 2018). Introspection was judged to be an unreliable method for addressing questions about mental processing (and it is still seen with suspicion by some; Schwitzgebel 2008). To address these worries, we must understand how introspection works, but unlike many other psychological capacities, detailed models of introspection of consciousness are hard to develop (Feest 2014; for theories of introspecting propositional attitudes, see Nichols & Stich 2003; Heal 1996; Carruthers 2011). This makes it difficult to address long-standing worries about introspective reliability regarding consciousness.

In science, questions raised about the reliability of a method are answered by calibrating and testing the method. This calibration has not been done with respect to the type of introspection commonly practiced by philosophers. Such introspection has revealed many phenomenal features that are targets of active investigation, such as the phenomenology of mineness (Ehrsson 2009), sense of agency (Bayne 2011; Vignemont & Fourneret 2004; Marcel 2003; Horgan, Tienson, & Graham 2003), transparency (Harman 1990; Tye 1992), self-consciousness (Kriegel 2003: 122), cognitive phenomenology (Bayne & Montague 2011), and phenomenal unity (Bayne & Chalmers 2003), among others. A scientist might worry that philosophical introspection merely recycles rejected methods of a century ago, indeed without the stringent controls or training imposed by earlier psychologists. How can we ascertain and ensure the reliability of introspection in the empirical study of consciousness?

Model migration, applying well-understood concepts in less well-understood domains, provides conceptual resources to reveal patterns across empirical systems and to promote theoretical insights (Lin 2018; Knuuttila & Loettgers 2014; see the entry on models in science). Introspection’s range of operation and its reliability conditions—when and why it succeeds and when and why it fails—can be calibrated by drawing parallels to how Signal Detection Theory (SDT) models perception (see Dołęga 2023 for an evaluation of introspective theories). In SDT (Tanner & Swets 1954; Hautus, Macmillan, & Creelman 2021), perceptually detecting or discriminating a stimulus is the joint outcome of the observer’s perceptual sensitivity (e.g., how well a perceptual system can tell signal from noise) and a decision to classify the available perceptual evidence as signal or not (i.e., observers set a response criterion for what counts as signal). Consider trying to detect something moving in the brush at twilight versus at noon. In the latter case, the signal is well separated from noise (the object is easier to detect because it generates a strong internal perceptual response). In contrast, at twilight the signal is not easy to disentangle from noise (the object is harder to detect because of the weaker internal perceptual response it generates in the observer). Importantly, even at noon when the signal is strong, misses and false alarms, rare as they might be, are to be expected. Moreover, in either case one might operate with a conservative response criterion, say because one is afraid of being wrong. Thus, even if the signal is detectable, one might still opt not to report it given a conservative bias (criterion), say if one is in the twilight scenario and would be ridiculed for “false alarms”, i.e., claiming the object to be present when it is not.
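To illustrate, here is a minimal sketch (not part of the entry) of the standard equal-variance SDT indices: sensitivity (d′) measures how well signal is separated from noise, and the criterion (c) measures response bias. The function name and all numbers are invented for illustration.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def sdt_indices(hit_rate, fa_rate):
    """Return (d_prime, criterion) under the standard equal-variance SDT model.

    d_prime measures how well signal is separated from noise; the criterion c
    measures response bias (c > 0 is conservative: stronger evidence is
    required before responding "signal present").
    """
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Invented numbers for illustration:
print(sdt_indices(hit_rate=0.95, fa_rate=0.05))  # "noon": high sensitivity, no bias
print(sdt_indices(hit_rate=0.60, fa_rate=0.30))  # "twilight": lower sensitivity, little bias
print(sdt_indices(hit_rate=0.46, fa_rate=0.19))  # similar sensitivity, conservative criterion
```

On this toy picture, the first observer separates signal from noise well, the second poorly, and the third no better than the second but responds "present" only on strong evidence; the same machinery, applied to introspection, is what iSDT (below) exploits.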

Introspection can be modeled as a signal detection mechanism that operates over conscious states. Pains can be strong or weak, mental images can be vivid or faint, and perceptions can be more or less striking. In other words, conscious experiences admit degrees of intensity (Lee 2023; Morales 2023). According to introspective Signal Detection Theory or iSDT (Morales forthcoming), the intensity of a conscious experience (its mental strength) modulates the strength of the internal response generated when that experience is introspected, which in turn modulates introspective sensitivity. Everything else being equal, introspectors are more likely to introspect accurately an intense experience (e.g., a strong pain, a vivid mental image, etc.) than a weak experience (e.g., a mild pain, a faint mental image, etc.). As in perception, a biased introspective criterion may affect an introspector’s response without necessarily implying changes in the introspectability of a state. Even when rare, introspection errors (misses and false alarms) should be possible and in fact expected. By modeling introspection as any other detection mechanism, iSDT aims to explain the range of reliability of introspection, preserving the intuition that it delivers highly accurate judgments in familiar cases, such as detecting severe pain, while also accounting for introspection’s potential fallibility.

Thus, in the context of scientific experiments on consciousness, iSDT provides a model that predicts higher introspective reliability when judging strong experiences. When the experiences are weak—as one may expect from stimuli presented quickly and at threshold—introspective judgments are expected to be less reliable (Dijkstra & Fleming 2023).

Another way to link scientific introspection to consciousness is to connect it to models of attention. Philosophical conceptions of introspective attention construe it as capable of directly focusing on phenomenal properties and experiences. As this idea is fleshed out, however, it is clearly not a form of attention studied by cognitive science, for the posited direct introspective attention is neither perceptual attention nor what psychologists call internal attention (e.g., the retrieval of thought contents as in memory recollection; Chun, Golomb, & Turk-Browne 2011). Calibrating introspection as it is used in the science of consciousness would benefit from concrete models of introspection (Chirimuuta 2014; Spener 2015; Wu 2023; Kammerer & Frankish 2023; Morales forthcoming).

We can draw inspiration from a proposal inspired by Gareth Evans (1982): in introspecting perceptual states, say judging that one sees an object, one draws on the same perceptual capacities used to answer the question whether the object is present. In introspection, one then appends a further concept of “seeing” to one’s perceptual report.[3] Thus, instead of simply reporting that a red stimulus is present, one reports that one sees the red stimulus. Paradoxically, introspection relies on externally directed perceptual attention, but as noted earlier, identifying what one perceives is a way of characterizing what one’s perception is like, so this “outward” perspective can provide information about the inner.

Further, the advantage of this proposal is that questions of reliability come down to questions of the reliability of psychological capacities that can be empirically assessed, say perceptual, attentional and conceptual reliability. For example, Peters and Lau (2015) showed that accuracy in judgments about the visibility of a stimulus, the introspective measure, coincided with accuracy in judgments about stimulus properties, the objective measure (see also Rausch & Zehetleitner 2016). Moreover, in many of the examples to be discussed, the perceptual attention-based account provides a plausible cognitive model of introspection (Wu 2023). Subjects report on what they perceptually experience by attending to the object of their experience, and where perception and attention are reliable, a plausible hypothesis is that their introspective judgments will be reliable as well. Accordingly, the reliability of introspection in the empirical studies to be discussed can be assumed. Still, given that no scientist should assert the reliability of a method without calibration, introspection must be subject to the same standards. There is more work to be done.

2.2 Access Consciousness and No-Report Paradigms

Introspection illustrates a type of cognitive access, for a state that is introspected is access conscious. This raises a question that has epistemic implications: is access consciousness necessary for phenomenal consciousness? If it is not, then there can be phenomenal states that are not access conscious, so are in principle not reportable. That is, phenomenal consciousness can overflow access consciousness (Block 2007).

Access is tied to attention, and attention is tied to report. Some views hold that attention is necessary for access, which entails phenomenal consciousness (e.g., the Global Workspace theory [section 3.1]).[4] In contrast, other theories (e.g., recurrent processing theory [section 3.2]) hold that there can be phenomenal states that are not accessible.

Many scientists of consciousness take there to be evidence for no phenomenal consciousness without access and little if any evidence of phenomenal consciousness outside of access. While those antagonistic to overflow have argued that it is not empirically testable (Cohen & Dennett 2011), the claim that attention is necessary for consciousness may be equally untestable. After all, we must eliminate attention to a target while gathering evidence for the absence of consciousness. Yet if gathering evidence for consciousness requires attention, then in fulfilling the conditions for testing the necessity of attention, we undercut the access needed to substantiate the absence of consciousness (Wu 2017; for a monograph length discussion of attention and consciousness, see Montemayor & Haladjian 2015).[5] How then can we gather the required evidence to assess competing theories?

One response is to draw on no-report paradigms which measure reflexive behaviors correlated with conscious states to provide a window on the phenomenal that is independent of access (Lumer & Rees 1999; Tse et al. 2005). In binocular rivalry paradigms (see section 5.2), researchers show subjects different images to each eye. Due to their very different features, these images cannot be fused into a single percept. This results in alternating experiences that transition back and forth between one image and the other. Since the presented images remain constant while experience changes, binocular rivalry has been used to find the neural correlates of consciousness (see Zou, He, & Zhang 2016; and Giles, Lau, & Odegaard 2016 for concerns about using binocular rivalry for studying the neural correlates of consciousness). However, binocular rivalry tasks have typically involved asking subjects to report what their experience is like, requiring the recruitment of attention and explicit report. To overcome this issue, Hesse & Tsao (2020) recently introduced an important variation by adding a small fixation point on different locations of each image. For example, subjects’ right eye was shown a photo of a person with a fixation point drawn on the bottom of the image, and their left eye was presented with a photo of some tacos with a fixation point on the top. People and monkeys were trained to look at the fixation point while ignoring the image. This way, the experimenters could use their eye movement behavior as a proxy for which image they were experiencing without having to collect explicit reports: if subjects look up, they are experiencing the tacos; if they look down, they are experiencing the person. They confirmed with explicit reports from humans that both monkeys and humans in the no-report condition behaved similarly. Importantly, they found from single-cell recordings in the monkeys that neurons in inferotemporal cortex—a downstream region associated with high-level visual processing—represented the experienced image. Because these monkeys were never trained to report their percept (they were just following the fixation point wherever it was), the experimenters could more confidently conclude that the activity linked to the alternating images was due to consciously experiencing them and not due to introspecting or reporting.

No-report paradigms use indirect responses to track the subject’s perceptual experience in the absence of explicit (conceptualized) report. These can include eye movements, pupil changes, electroencephalographic (EEG) activity or galvanic skin conductance changes, among others. No-report paradigms seem to provide a way to track phenomenal consciousness even when access is eliminated. This would broaden the evidential basis for consciousness beyond introspection and indeed beyond intentional behavior (our “broad” conception of access). However, in practice they do not always yield drastically different results (Michel & Morales 2020). Moreover, they do not fully circumvent introspection (Overgaard & Fazekas 2016). For example, the usefulness of any indirect physiological measure depends on validating its correlation with alternating experience given subjective reports. Once it is validated, monitoring these autonomic responses can provide a way to substitute for subjective reports within that paradigm. One cannot, however, simply extend the use of no-report paradigms outside the behavioral contexts within which the method is validated. With each new experimental context, we must revalidate the measure with introspective report. Moreover, no-report paradigms do not match post-perceptual processing between conscious and unconscious conditions (Block 2019). Even if overt report is matched, the cognitive consequences of perceiving a stimulus consciously and failing to do so are not the same. For example, systematic reflections may be triggered by one stimulus (the person) but not the other (the tacos). These post-perceptual differences would generate different neural activity that is not necessarily related to consciousness. To avoid this issue, post-perceptual cognition would also need to be matched to rule out potential confounds (Block 2019). However, this is easier said than done, and no uncontroversial solutions to this issue have been found (Phillips & Morales 2020; Panagiotaropoulos, Dwarakanath, & Kapoor 2020).

Can we use no-report paradigms to address whether access is necessary for phenomenal consciousness? A likely experiment would be one that validates no-report correlates for some conscious phenomenon P in a concrete experimental context C. With this validation in hand, one then eliminates accessibility and attention with respect to P in C. If the no-report correlate remains, would this clearly support overflow? Perhaps, but it could still be argued that the result does not rule out the possibility that phenomenal consciousness disappears with access consciousness despite the no-report correlate remaining. For example, the reflexive response and phenomenal consciousness might have a common cause that remains even if phenomenal consciousness is selectively eliminated by removing access.

2.3 Confidence and Metacognitive Approaches

Given worries about calibrating introspection, researchers have asked subjects to provide a different metacognitive assessment of conscious states via reports about confidence (Fleming 2023a, 2020; Pouget, Drugowitsch, & Kepecs 2016). A standard approach is to have subjects perform a task, say perceptual discrimination of a stimulus, and then report how confident they are that their perceptual judgment was accurate. This metacognitive judgment, a confidence rating, about perception can be assessed for accuracy by comparing it with perceptual performance (for discussion of formal methods such as metacognitive signal detection theory, see Maniscalco & Lau 2012, 2014). Related paradigms include post-decision wagering where subjects place wagers on specific responses as a way of estimating their confidence (Persaud, McLeod, & Cowey 2007; but see Dienes & Seth 2010).

There are some advantages to using confidence judgments for studying consciousness (Michel 2023; Morales & Lau 2022; Peters 2022). While standard introspective judgments about conscious experiences may capture more directly the phenomenon of interest, confidence judgments are easier to explain to subjects and they are also more interpretable from the experimenter’s point of view. Confidence judgments provide an objective measure of metacognitive sensitivity: how well subjects’ confidence judgments track their performance in the task. Subjects can also receive feedback on those ratings and, at least in principle, improve their metacognitive sensitivity (though this has proved hard to achieve in laboratory tasks; Rouy et al. 2022; Haddara & Rahnev 2022). Metacognitive judgments also have the advantage over direct judgments about conscious experiences (e.g., Ramsøy & Overgaard 2004) in that they allow for comparisons between domains with very different phenomenology, such as perception and memory (e.g., Gardelle, Corre, & Mamassian 2016; Faivre et al. 2017; Morales, Lau, & Fleming 2018; Mazancieux et al. 2020).
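As an illustration of the basic idea—scoring confidence ratings against objective performance—here is a minimal sketch, not from the entry, of a simple nonparametric index of metacognitive sensitivity (the area under the type-2 ROC). This is distinct from the model-based meta-d′ measure of metacognitive signal detection theory cited above (Maniscalco & Lau 2012); the function name and toy data are invented.

```python
import numpy as np

def auroc2(correct, confidence):
    """Nonparametric type-2 ROC area: how well confidence tracks accuracy.

    0.5 means confidence carries no information about whether a response was
    correct; 1.0 means perfect metacognitive sensitivity.
    """
    correct = np.asarray(correct, dtype=bool)
    conf = np.asarray(confidence, dtype=float)
    conf_correct, conf_error = conf[correct], conf[~correct]
    # P(confidence on a correct trial > confidence on an error trial), ties count 0.5
    greater = (conf_correct[:, None] > conf_error[None, :]).mean()
    ties = (conf_correct[:, None] == conf_error[None, :]).mean()
    return greater + 0.5 * ties

# Invented toy data: 1 = correct response, confidence on a 1-4 scale.
correct = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
confidence = [4, 3, 2, 4, 1, 2, 3, 2, 4, 1]
print(auroc2(correct, confidence))  # well above 0.5: confidence tracks accuracy
```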

One concern with metacognitive approaches is that they also rely on introspection (Rosenthal 2019; see also Sandberg et al. 2010; Dienes & Seth 2010). If metacognition relies on introspection, does it not accrue all the disadvantages of the latter? Perhaps, but an important gain of metacognitive approaches is that they allow for quantitative psychophysical analysis. While this does not replace introspection, it brings an analytical rigor to addressing certain aspects of conscious awareness.

How might we bridge metacognition with consciousness as probed by traditional introspection? The metacognitive judgment reflects introspective assessment of the quality of perceptual states and can provide information about the presence of consciousness. For example, Peters and Lau (2015) found that increases in metacognitive confidence tracked increases in perceptual sensitivity, which presumably underlie the quality of perceptual experiences. They also did not find significant differences between asking for confidence or visibility judgments. Even studies that have found (small) differences between the two kinds of judgments indicate considerable association between the two ratings, suggesting similar behavioral patterns (Rausch & Zehetleitner 2016; Zehetleitner & Rausch 2013).

Beyond behavior, neuroscientific work shows that similar regions of the brain (e.g., prefrontal cortex) may be involved in supporting both conscious awareness and metacognitive judgments. Studies with non-human primates and rodents have begun to shed light on neural processing for metacognition (for reviews, see Grimaldi, Lau, & Basso 2015; Pouget, Drugowitsch, & Kepecs 2016). From animal studies, one theory is that metacognitive information regarding perception is already present in perceptual areas that guide observational judgments, and these studies implicate parietal cortex (Kiani & Shadlen 2009; Fetsch et al. 2014) and the superior colliculus (Kim & Basso 2008; but see Odegaard et al. 2018). Alternatively, information about confidence might be read out by other structures (see section 3.3 on Higher-Order Theory; also the entry on higher order theories of consciousness). In both human and animal studies, the prefrontal cortex (specifically, subregions in dorsolateral and orbitofrontal prefrontal cortex) has been found to support subjective reports of awareness (Cul et al. 2009; Lau & Passingham 2006), subjective appearances (Liu et al. 2019), visibility ratings (Rounis et al. 2010), confidence ratings (Fleming, Huijgen, & Dolan 2012; Shekhar & Rahnev 2018), and conscious experiences without reports (Mendoza-Halliday & Martinez-Trujillo 2017; Kapoor et al. 2022; Michel & Morales 2020).

2.4 The Intentional Action Inference

Metacognitive and introspective judgments result from intentional action, so why not look at intentional action, broadly construed, for evidence of consciousness? Often, when subjects perform perception-guided actions, we infer that they are relevantly conscious. It would be odd if a person cooked dinner and then denied having seen any of the ingredients. That they did something intentionally provides evidence that they were consciously aware of what they acted on. An emphasis on intentional action embraces a broader evidential basis for consciousness. Consider the Intentional Action Inference to phenomenal consciousness:

If some subject acts intentionally, where her action is guided by a perceptual state, then the perceptual state is phenomenally conscious.

An epistemic version takes the action to provide good evidence that the state is conscious. Notice that introspection is typically an intentional action so it is covered by the inference. In this way, the Inference effectively levels the evidential playing field: introspective reports are simply one form among many types of intentional actions that provide evidence for consciousness. Those reports are not privileged.

The intentional action inference and no-report paradigms highlight that the science of consciousness has largely restricted its behavioral data to one type of intentional action, introspection. What is the basis of privileging one intentional action over others? Consider the calibration issue. For many types of intentional action deployed in experiments, scientists can calibrate performance by objective measures such as accuracy. This has not been formally done for introspection of consciousness, so scientists have privileged an uncalibrated measure over a calibrated one. This seems empirically ill-advised. On the flip side, one worry about the intentional action inference is that it ignores guidance by unconscious perceptual states (see sections 4 and 5.3.1).

2.5 Unresponsive Wakefulness Syndrome and the Intentional Action Inference

The Intentional Action Inference is operative when subjective reports are not available. For example, it is deployed in arguing that some patients diagnosed with unresponsive wakefulness syndrome are conscious (Shea & Bayne 2010; Drayson 2014).

A patient [with unresponsive wakefulness syndrome] appears at times to be wakeful, with cycles of eye closure and eye opening resembling those of sleep and waking. However, close observation reveals no sign of awareness or of a ‘functioning mind’: specifically, there is no evidence that the patient can perceive the environment or his/her own body, communicate with others, or form intentions. As a rule, the patient can breathe spontaneously and has a stable circulation. The state may be a transient stage in the recovery from coma or it may persist until death. (Working Party RCP 2003: 249)

These patients are not clinically comatose but fall short of being in a “minimally conscious state”. Unlike patients with unresponsive wakefulness syndrome, minimally conscious patients seemingly perform intentional actions.

Recent work suggests that some patients diagnosed with unresponsive wakefulness syndrome are conscious. Owen et al. (2006) used fMRI to demonstrate correlated activity in such patients in response to commands to deploy imagination. In an early study, a young female patient was scanned by fMRI while presented with three auditory commands: “imagine playing tennis”, “imagine visiting the rooms in your home”, “now just relax”. The commands were presented at the beginning of a thirty-second period, alternating between imagination and relax commands. The patient demonstrated similar activity when matched to control subjects performing the same task: sustained activation of the supplementary motor area (SMA) was observed during the motor imagery task while sustained activation of the parahippocampal gyrus including the parahippocampal place area (PPA) was observed during the spatial imagery task. Later work reproduced this result in other patients and in one patient, the tasks were used as a proxy for “yes”/“no” responses to questions (Monti et al. 2010; for a review, see Fernández-Espejo & Owen 2013). Note that these tasks probe specific contents of consciousness by monitoring neural correlates of conscious imagery.

Several authors (Greenberg 2007; Nachev & Husain 2007) have countered that the observed activity was an automatic, non-intentional response to the command sentences, specifically to the words “tennis” and “house”. In normal subjects, reading action words is known to activate sensorimotor areas (Pulvermüller 2005). Owen and colleagues (2007) responded that the sustained activity over thirty seconds made an automatic response less likely than an intentional response. One way to rule out automaticity is to provide the patient with different sentences such as “do not imagine playing tennis” or “Sharlene was playing tennis”. Owen et al. (2007) demonstrated that presenting “Sharlene was playing tennis” to a normal subject did not induce the same activity as when the subject obeyed the command “imagine playing tennis”, but the same intervention was not tried on patients. However, subsequent experiments using other measures such as EEG (Goldfine et al. 2011; Curley et al. 2018; Cruse et al. 2012) and functional connectivity (Demertzi et al. 2015) indicate that conscious awareness is indeed present in some patients (wrongly) diagnosed with unresponsive wakefulness syndrome (for a review, see Edlow et al. 2021; for ethical reflections around diagnosing awareness in patients with disorders of consciousness, see Young et al. 2021).

Owen et al. draw on a neural correlate of imagination, a mental action. Arguing that the neural correlate provides evidence of the patient’s executing an intentional action, they invoke a version of the Intentional Action Inference to argue that performance provides evidence for specific consciousness tied to the information carried in the brain areas activated.[6]

3. Neurobiological Theories of Consciousness

Recall that the Generic Consciousness question asks:

What conditions/states N of nervous systems are necessary and/or sufficient for a mental state, M, to be conscious as opposed to not?

Victor Lamme notes:

Deciding whether there is phenomenality in a mental representation implies putting a boundary—drawing a line—between different types of representations…We have to start from the intuition that consciousness (in the phenomenal sense) exists, and is a mental function in its own right. That intuition immediately implies that there is also unconscious information processing. (Lamme 2010: 208)

It is uncontroversial that there is unconscious information processing, say processing occurring in a computer. What Lamme means is that there are conscious and unconscious mental states (representations). For example, there might be visual states of seeing X that are conscious or not (section 4).

In what follows, the theories discussed provide higher level neural properties that are necessary and/or sufficient for generic consciousness of a given state. To provide a gloss on the hypotheses: For the Global Neuronal Workspace, entry into the neural workspace is necessary and sufficient for a state or content to be conscious. For Recurrent Processing Theory, a type of recurrent processing in sensory areas is necessary and sufficient for perceptual consciousness, so entry into the Workspace is not necessary. For Higher-Order Theories, the presence of a higher-order state tied to prefrontal areas is necessary and sufficient for phenomenal experience, so neither recurrent processing in sensory areas nor entry into the workspace is necessary. For Information Integration Theories, a type of integration of information is necessary and sufficient for a state to be conscious.

3.1 The Global Neuronal Workspace

One explanation of generic consciousness invokes the global neuronal workspace. Bernard Baars first proposed the global workspace theory as a cognitive/computational model (Baars 1988), but we will focus on the neural version of Stanislas Dehaene and colleagues: a state is conscious when and only when it (or its content) is present in the global neuronal workspace making the state (content) globally accessible to multiple systems including long-term memory, motor, evaluational, attentional and perceptual systems (Dehaene, Kerszberg, & Changeux 1998; Dehaene & Naccache 2001; Dehaene et al. 2006). Notice that the previous characterization does not commit to whether it is phenomenal or access consciousness that is being defined.

Access should be understood as a relational notion:

A system X accesses content from system Y iff X uses that content in its computations/processing.

The accessibility of information is then defined as its potential access by other systems. Dehaene (Dehaene et al. 2006) introduces a threefold distinction: (1) neural states that carry information that is not accessible (subliminal information); (2) states that carry information that is accessible but not accessed (not in the workspace; preconscious information); and (3) states whose information is accessed by the workspace (conscious information) and is globally accessible to other systems. So, a necessary and sufficient condition for a state’s being conscious rather than not is the access of a state or content by the workspace, making that state or content accessible to other systems. Hence, only states in (3) are conscious.

[Figure: top, a schematic of the global neuronal workspace as a central network connected to five peripheral systems: evaluative systems (VALUE), attentional systems (FOCUSING), motor systems (FUTURE), perceptual systems (PRESENT), and long-term memory (PAST); bottom, the same network mapped onto frontal and sensory cortical areas.]

Figure 2. The Global Neuronal Workspace

Figure Legend: The top figure provides a neural architecture for the workspace, indicating the systems that can be involved. The lower figure sets the architecture within the six layers of the cortex spanning frontal and sensory areas, with emphasis on neurons in layers 2 and 3. Figure reproduced from Dehaene, Kerszberg, and Changeux 1998. Copyright (1998) National Academy of Sciences.

The global neuronal workspace theory ties access to brain architecture. It postulates a cortical structure that involves workspace neurons with long-range connections linking systems: perceptual, mnemonic, attentional, evaluational and motoric.

What is the global workspace in neural terms? Long-range workspace neurons within different systems can constitute the workspace, but they should not necessarily be identified with the workspace. A subset of workspace neurons becomes the workspace when they exemplify certain neural properties. What determines which workspace neurons constitute the workspace at a given time is the activity of those neurons given the subject’s current state. The workspace then is not a rigid neural structure but a rapidly changing neural network, typically only a proper subset of all workspace neurons.

Consider then a neural population that carries content p and is constituted by workspace neurons. In virtue of being workspace neurons, the content p is accessible to other systems, but it does not yet follow that the neurons then constitute the global workspace. A further requirement is that workspace neurons are (1) put into an active state that must be sustained so that (2) the activation generates a recurrent activity between workspace systems. Only when these systems are recurrently activated are they, along with the units that access the information they carry, constituents of the workspace. This activity accounts for the idea of global broadcast in that workspace contents are accessible to further systems. Broadcasting explains the idea of consciousness as being for the subject: globally broadcast content is accessible for the subject’s use in informing behavior.

The global neuronal workspace theory provides an account of access consciousness but what of phenomenal consciousness? The theory predicts widespread activation of a cortical workspace network as correlated with phenomenal conscious experience, and proponents often appeal to imaging results that reveal widespread activation when consciousness is reported (Dehaene & Changeux 2011). There is, however, a potential confound. We track phenomenal consciousness by access in introspective report, so widespread activity during reports of conscious experience correlates with both access and phenomenal consciousness. Correlation cannot tell us whether the observed activity is the basis of phenomenal consciousness or of access consciousness in report (Block 2007). This remains a live question, for, as discussed in section 2.2, we do not have empirical evidence that overflow is false.

To eliminate the confound, experimenters ensure that performance does not differ between conditions where consciousness is present and where it is not. Where this was controlled, widespread activation was not clearly observed (Lau & Passingham 2006). Still, the absence of observed activity by an imaging technique does not imply the absence of actual activity, for the activity might be beyond the limits of detection of that technique. Further, there is a general concern about the significance of null results given that neuroscience studies focused on prefrontal cortex are typically underpowered (for discussion, see Odegaard, Knight, & Lau 2017).

3.2 Recurrent Processing Theory

A different explanation ties perceptual consciousness to processing independent of the workspace, with focus on recurrent activity in sensory areas. This approach emphasizes properties of first-order neural representation as explaining consciousness. Victor Lamme (2006, 2010) argues that recurrent processing is necessary and sufficient for consciousness. Recurrent processing occurs where sensory systems are highly interconnected and involve feedforward and feedback connections. For example, forward connections from primary visual area V1, the first cortical visual area, carry information to higher-level processing areas, and the initial registration of visual information involves a forward sweep of processing. At the same time, there are many feedback connections linking visual areas (Felleman & Van Essen 1991), and later in processing, these connections are activated yielding dynamic activity within the visual system.

Lamme identifies four stages of normal visual processing:

  • Stage 1: Superficial feedforward processing: visual signals are processed locally within the visual system.
  • Stage 2: Deep feedforward processing: visual signals have traveled further forward in the processing hierarchy where they can influence action.
  • Stage 3: Superficial recurrent processing: information has traveled back into earlier visual areas, leading to local, recurrent processing.
  • Stage 4: Widespread recurrent processing: information activates widespread areas (and as such is consistent with global workspace access).

Lamme holds that recurrent processing in Stage 3 is necessary and sufficient for consciousness. Thus, what it is for a visual state to be conscious is for a certain recurrent processing state to hold of the relevant visual circuitry. This identifies the crucial difference between the global neuronal workspace and recurrent processing theory: the former holds that recurrent processing at Stage 4 is necessary for consciousness while the latter holds that recurrent processing at Stage 3 is sufficient. Thus, recurrent processing theory affirms phenomenal consciousness without access by the global neuronal workspace. In that sense, it is an overflow theory (see section 2.2).

Why think that Stage 3 processing is sufficient for consciousness? Given that Stage 3 processing is not accessible to introspective report, we lack introspective evidence for sufficiency. Lamme appeals to experiments with brief presentation of stimuli such as letters, where subjects report seeing more than they can identify (Lamme 2010). For example, in George Sperling’s partial report paradigm (Sperling 1960), subjects are briefly presented with an array of 12 letters (e.g., in 300 ms presentations) but are typically able to report only three to four letters, even as they claim to see more letters (but see Phillips 2011). It is not clear that this is strong motivation for recurrent processing theory, since the very fact that subjects can report seeing more letters shows that they have some access to them, just not access to letter identity.

Lamme also presents what he calls neuroscience arguments. This strategy compares two neural networks, one taken to be sufficient for consciousness, say the processing at Stage 4 as per Global Workspace theories, and one where sufficiency is in dispute, say recurrent activity in Stage 3. Lamme argues that certain features found in Stage 4 are also found in Stage 3 and, given this similarity, it is reasonable to hold that Stage 3 processing suffices for consciousness. For example, both stages exhibit recurrent processing. Global neuronal workspace theorists can allow that recurrent processing in Stage 3 is correlated with consciousness, even necessary for it, but deny that this activity is explanatory in the relevant sense of identifying sufficient conditions for consciousness.

It is worth reemphasizing the empirical challenge in testing whether access is necessary for phenomenal consciousness (sections 2.1–2.2). The two theories return different answers, one requiring access, the other denying it. As we saw, the methodological challenge in testing for the presence of phenomenal consciousness independently of access remains a hurdle for both theories.

3.3 Higher-Order Theory

A long-standing approach to conscious states holds that one is in a conscious state if and only if one relevantly represents oneself as being in such a state. For example, one is in a conscious visual state of seeing a moving object if and only if one suitably represents oneself as being in that visual state. This higher-order state, in representing the first-order state that represents the world, results in the first-order state’s being conscious as opposed to not. The intuitive rationale for such theories is that if one were in a visual state but in no way aware of that state, then the visual state would not be conscious. Thus, to be in a conscious state, one must be aware of it, i.e., represent it (Rosenthal 2002; see the entry on higher order theories of consciousness). On certain higher-order theories (Higher-Order Thought Theory, Rosenthal 2005; and Higher-Order Representation of a Representation (HOROR), Brown 2015), one can be in a conscious visual state even if there is no visual system activity, so long as one represents oneself as being in that state (for a debate, see Block 2011; Rosenthal 2011). Another family of theories postulates that experiences are jointly determined by first- and higher-order states [e.g., Higher-Order State Space (HOSS) (Fleming 2020); Perceptual Reality Monitoring (PRM) (Lau 2019)]. An intermediate perspective proposes that higher-order states track our mental attitudes towards first-order states along different dimensions that include familiarity, vividness, value, and so on (Self-Organizing Meta-Representational Account (SOMA), Cleeremans 2011; Cleeremans et al. 2020). These differences apart, higher-order theories connect with empirical work by tying higher-order representations to brain activity, typically in the prefrontal cortex, which is taken to be the neural substrate of the required higher-order representations.

The focus on the prefrontal cortex allows for empirical tests of the higher-order theory against other accounts (Lau & Rosenthal 2011; LeDoux & Brown 2017; Brown, Lau, & LeDoux 2019; Lau 2022). For example, on the higher-order theory, lesions to prefrontal cortex should affect consciousness (see Kozuch 2013, 2022, 2023), testing the necessity of prefrontal cortex for consciousness. Against higher-order theories, some reports claim that patients with prefrontal cortex surgically removed show preserved perceptual consciousness (Boly et al. 2017) and that intracranial electrical stimulation (iES) of the prefrontal cortex does not alter consciousness (Raccah, Block, & Fox 2021). This would lend support to recurrent processing theories that hold that prefrontal cortical activity is not necessary for consciousness (and would be evidence against both global workspace and higher-order theories). It is not clear, however, that the interventions succeeded in removing all of prefrontal cortex, leaving perhaps sufficient frontal areas needed to sustain consciousness (Odegaard, Knight, & Lau 2017), or that simple, localized stimulation of prefrontal cortex would be the right kind of stimulation for altering awareness (see Naccache et al. 2021). Moreover, bilateral suppression of prefrontal activity using transcranial magnetic stimulation seems to selectively impair visibility as evidenced by metacognitive report (Rounis et al. 2010). Furthermore, certain syndromes and experimental manipulations suggest consciousness in the absence of appropriate sensory processing, as predicted by some higher-order accounts (Lau & Brown 2019), a claim that coheres with the theory’s sufficiency claims.

Subjective reports of conscious versus unconscious trials activate frontal regions as shown with EEG (Del Cul et al. 2009) and fMRI (Lau & Passingham 2006). Liu and colleagues (Liu et al. 2019) leveraged the “double-drift illusion” to show that real and apparent motion shared patterns of neural activity only in lateral and medial frontal cortex, not visual cortex. The double-drift illusion is a dramatic mismatch between physical and apparent motion created by a patch of gratings moving vertically while the gratings cycle horizontally; this creates the illusion that the patch’s path deviates by more than 45º from vertical. The conscious experience of seeing the stimulus veer off diagonally, whether physically or illusorily, was encoded only in prefrontal cortex, not in visual cortex. In a carefully designed no-report paradigm, Hatamimajoumerd and colleagues (2022) found that conscious stimuli were decodable from prefrontal cortex well above chance. Intracranial electrophysiological recording, where electrodes are placed directly on the surface of the brain, reveals prefrontal activity related to visual consciousness even when subjects were not required to respond to the stimulus (Noy et al. 2015). Fazekas and Nemeth (2018) discuss studies using different neuroimaging techniques showing significant increases in activity in the prefrontal cortex during dreams, a natural case of phenomenal awareness without report. Convergent evidence about the role of the prefrontal cortex in sustaining awareness comes from single-cell recordings in macaques. Using binocular rivalry (see section 5.2), Dwarakanath et al. (2023) and Kapoor et al. (2022) show that dynamic subjective changes in monkeys’ conscious experiences are robustly represented in prefrontal cortex.

3.4 Information Integration Theory

Information Integration Theory of Consciousness (IIT) draws on the notion of integrated information, symbolized by Φ, as a way to explain generic consciousness—specifically the quantity of consciousness present in a system (Tononi 2004, 2008; Oizumi, Albantakis, & Tononi 2014; Albantakis et al. 2023). IIT also aims to explain specific consciousness (i.e., the quality or content of conscious experiences) by appealing to the conceptual causal structure of the integrated information complex (i.e., the set of units of the physical substrate that is maximally integrated).

Integrated information theory (IIT) “starts from phenomenology and makes use of thought experiments to claim that consciousness is integrated information” (Tononi 2008: 216). IIT’s first step is to find “phenomenological axioms”, that is, immediately given essential properties of every conceivable experience. Once properly understood, these phenomenological axioms are taken by IIT to be irrefutably true (Albantakis et al. 2023: 3). These axioms, obtained by drawing on introspection and reason, are that consciousness exists, and that it is intrinsic, specific, unitary, definite, and structured. The axioms lead to postulates of physical existence, that is, to claims about the physical implementations that respect the properties first identified through introspection. Finally, IIT develops mathematical formalisms that aim to preserve all these features and that can in principle help calculate the quality and quantity of integrated information in a system, or Φ.[7]

Integrated information is defined in terms of the effective information carried by the parts of the system in light of its causal profile. For example, consider a circuit (neuronal, electrical, or otherwise). We can focus on a part of the whole circuit, say two connected nodes, and compute the effective information that can be carried by this microcircuit. The system carries integrated information if the effective informational content of the whole is greater than the sum of the informational content of the parts. If there is no partitioning where the summed informational content of the parts equals the whole, then the system as a whole carries integrated information and it has a positive value for Φ. Intuitively, the interaction of the parts adds more to the system than the parts do alone.
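
To make the whole-versus-parts comparison concrete, here is a minimal toy sketch in Python (an assumed two-unit system with a crude effective-information measure, not the full Φ calculus of IIT): the whole system predicts its own next state better than its parts do in isolation.

```python
# A toy illustration (an assumed two-unit system with a crude measure, not the
# full IIT formalism) of the claim that a system integrates information when
# the whole carries more effective information than the sum of its parts.
# The system: two binary units that swap states on each time step (A' = B, B' = A).
import itertools
from collections import Counter
from math import log2

def next_state(state):
    a, b = state
    return (b, a)                     # each unit copies the other

states = list(itertools.product([0, 1], repeat=2))

def mutual_information(pairs):
    """I(X;Y) in bits, computed from a list of equiprobable (x, y) pairs."""
    n = len(pairs)
    pxy, px, py = Counter(pairs), Counter(x for x, _ in pairs), Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Effective information of the whole: how much the current state tells us about
# the next state, with current states taken to be uniformly distributed.
whole = mutual_information([(s, next_state(s)) for s in states])

# Effective information of the parts under the partition {A}, {B}: each part is
# scored only on what its own state predicts about its own next state.
part_a = mutual_information([(s[0], next_state(s)[0]) for s in states])
part_b = mutual_information([(s[1], next_state(s)[1]) for s in states])

print(f"whole: {whole:.1f} bits; parts: {part_a + part_b:.1f} bits")
# whole: 2.0 bits; parts: 0.0 bits. No partition captures what the whole does,
# so on this crude measure the system carries integrated information.
```

The full IIT calculus searches over all possible partitions and uses more refined cause-effect measures; the sketch only displays the shape of the whole-versus-parts comparison.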

IIT holds that an above-zero value for Φ implies that a system is conscious, with more consciousness going with greater values of Φ. For example, Tononi argues that the human cerebellum does not contribute to consciousness due to its highly modular anatomical organization. Thus, it is hypothesized that the cerebellum has a Φ of zero despite the cerebellum containing four to five times as many neurons as the human cortex. On IIT, what matters is the presence of appropriate connections and not the number of neurons (the soundness of this argument about the cerebellum has been contested; Aaronson 2014b; Merker, Williford, & Rudrauf 2022). For the same reason, IIT also makes counterintuitive predictions. These range from panpsychist conclusions, such as admitting that even a 2D grid of inactivated logic gates “doing absolutely nothing, may in fact have a large value of PHI” (Tononi 2014 in Other Internet Resources), to neuroscientifically implausible ones: “a brain where no neurons were activated, but were kept ready to respond in a differentiated manner to different perturbations, would be conscious (perhaps that nothing was going on)” (Tononi 2004). (In Other Internet Resources, see Aaronson 2014a and 2014b for striking counterexamples; see Tononi 2014 for a response.)

It is important to note that IIT is not in and of itself a neuroscientific theory of consciousness. Rather, IIT is probably best understood as a metaphysical theory about the essential features of consciousness. Accordingly, these features could be present not just in organisms with neural systems but in any physical system (organic or not) that integrates information with a Φ larger than 0 (Tononi et al. 2016). Evidence of its metaphysical status is provided by the theory’s idealist corollaries (see the entry on idealism). According to Tononi, and in stark contrast to current neuroscientific assumptions, only intrinsic entities (i.e., conscious entities) “truly exist and truly cause, whereas my neurons or my atoms neither truly exist nor truly cause” (Tononi et al. 2022: 2 [Other Internet Resources]). Relatedly, IIT has received criticisms about its soundness and its neuroscientific status based on both empirical and theoretical arguments. One notable concern is the lack of clarity about the relation between IIT’s metaphysical claims and their relevance for a neuroscientific understanding of consciousness.[8]

3.5 Frontal or Posterior?

In recent years, one way to frame the debate between theories of generic consciousness is whether the “front” or the “back” of the brain is crucial. Using this rough distinction allows us to draw the following contrasts: Recurrent processing theories focus on sensory areas (in vision, the “back” of the brain) such that where processing achieves a certain recurrent state, the relevant contents are conscious even if no higher-order thought is formed or no content enters the global workspace. Similarly, proponents of IIT have recently emphasized a “posterior hot zone” covering parietal and occipital areas as a neural correlate for consciousness, as they speculate that this zone may have the highest value for Φ (Boly et al. 2017; but see Lau 2023). For certain higher-order thought theories, having a higher-order state, supported by prefrontal cortex, without corresponding sensory states can suffice for conscious states. In this case, the front of the brain would be sufficient for consciousness. Finally, the global neuronal workspace theory, drawing on workspace neurons that are present across brain areas to form the workspace, might be taken to straddle the difference, depending on the type of conscious state involved. It requires entry into the global workspace such that neither sensory activity nor a higher-order thought on its own is sufficient, i.e., neither just the front nor just the back of the brain.

The point of talking coarsely of brain anatomy in this way is to highlight the neural focus of each theory and thus, of targets of manipulation as we aim for explanatory neural correlates in terms of what is necessary and/or sufficient for generic consciousness. What is clear is that once theories make concrete predictions of brain areas involved in generic consciousness, neuroscience can test them.

4. Neuroscience of Generic Consciousness: Unconscious Vision as Case Study

Since generic consciousness is a matter of a state’s being conscious or not, we can examine work on specific types of mental state that shift between being conscious and not, and isolate the neural substrates of that shift. Work on unconscious vision provides an informative example. In recent decades, scientists have argued for unconscious seeing and investigated its brain basis, especially in neuropsychology, the study of subjects with brain damage. Interestingly, if there is unconscious seeing, then the intentional action inference must be restricted in scope since some intentional behaviors might be guided by unconscious perception (section 2.4). That is, the existence of unconscious perception blocks a direct inference from perceptually guided intentional behavior to perceptual consciousness. The case study of unconscious vision promises to illuminate more specific studies of generic consciousness along with having repercussions for how we attribute conscious states.

4.1 Unconscious Vision and the Two Visual Streams

Since the groundbreaking work of Leslie Ungerleider and Mortimer Mishkin (1982), scientists divide primate cortical vision into two streams: dorsal and ventral (for further dissection, see Kravitz et al. 2011). The dorsal stream projects into the parietal lobe while the ventral stream projects into the temporal lobe (see Figure 1). Controversy surrounds the functions of the streams. Ungerleider and Mishkin originally argued that the streams were functionally divided in terms of what and where: the ventral stream for categorical perception and the dorsal stream for spatial perception. David Milner and Melvyn Goodale (1995) have argued that the dorsal stream is for action and the ventral stream for “perception”, namely for guiding thought, memory and complex action planning (see Goodale & Milner 2004 for an engaging overview). There continues to be debate surrounding the Milner and Goodale account (Schenk & McIntosh 2010), but it has strongly influenced philosophers of mind.

Substantial motivation for Milner and Goodale’s division draws on lesion studies in humans. Lesions to the dorsal stream do not seem to affect conscious vision in that subjects are able to provide accurate reports of what they see (but see Wu 2014a). Rather, dorsal lesions can affect the visual guidance of action, with optic ataxia being a common result. Optic ataxic subjects perform inaccurate motor actions: for example, they grope for objects, yet they can accurately report the objects’ features (for reviews, see Andersen et al. 2014; Pisella et al. 2009; Rossetti, Pisella, & Vighetto 2003). Lesions in the ventral stream disrupt normal conscious vision, yielding visual agnosia, an inability to see visual form or to visually categorize objects (Farah 2004).

Dorsal stream processing is said to be unconscious. If the dorsal stream is critical in the visual guidance of many motor actions such as reaching and grasping, then those actions would be guided by unconscious visual states. The visual agnosic patient DF provides critical support for this claim.[9] Due to carbon monoxide poisoning, DF suffered focal lesions largely in the ventral stream spanning the lateral occipital complex that is associated with processing of visual form (high resolution imaging also reveals small lesions in the parietal lobe; James et al. 2003). Like other visual agnosics with similar lesions, DF is at chance in reporting aspects of form, say the orientation of a line or the shape of objects. Nevertheless, she retains color and texture vision. Strikingly, DF can generate accurate visually guided action, say the manipulation of objects along specific parameters: putting an object through a slot or reaching for and grasping round stones in a way sensitive to their center of mass. Simultaneously, DF denies seeing the relevant features and, if asked to verbally report them, she is at chance. In this dissociation, DF’s verbal reports give evidence that she does not visually experience the features to which her motor actions remain sensitive.

What is uncontroversial is that there is a division in explanatory neural correlates of visually guided behavior with the dorsal stream weighted towards the visual guidance of motor movements and the ventral stream weighted towards the visual guidance of conceptual behavior such as report and reasoning (see section 5.3 on manipulation of seeing words via ventral stream stimulation). A substantial further inference is that consciousness is segregated away from the dorsal stream to the ventral stream. How strong is this inference?

Recall the intentional action inference. In performing the slot task, DF is doing something intentionally and in a visually guided way. For control subjects performing the task, we conclude that this visually guided behavior is guided by conscious vision. Indeed, a folk-psychological assumption might be that consciousness informs mundane action (Clark 2001; for a different perspective see Wallhagen 2007). Since DF shows similar performance on the same task, why not conclude that she is also visually conscious? Presumably, one hesitates because DF’s introspective reports clash with the intentional action inference. DF denies seeing features she is visually sensitive to in action. Should introspection then trump intentional action in attributing consciousness?

Two issues are worth considering. The first is that introspective reports involve a specific type of intentional action guided by the experience at issue. One type of intentional behavior is being prioritized over another in adjudicating whether a subject is conscious. What is the empirical justification for this prioritization? The second issue is that DF is possibly unique among visual agnosics. It is a substantial inference to move from DF to a general claim about the dorsal stream being unconscious in neurotypical individuals (see Mole 2009 for arguments that consciousness does not divide between the streams; see Wu 2013 for an argument for unconscious visually guided action in normal subjects). What this shows is that the methodological decisions that we make regarding how we track consciousness are substantial in theorizing about the neural bases of conscious and unconscious vision.

4.2 Blindsight

Blindsight provides a second case study. Following damage to primary visual area V1, patients report blindness in the corresponding portion of the visual field and deny seeing stimuli presented there. Yet when prompted to guess, some of these patients can discriminate such stimuli at above-chance levels and can act on them in visually guided ways, despite insisting that they see nothing. This dissociation between report and visually guided performance has standardly been taken to reveal unconscious vision.[10]

The neuroanatomical basis of blindsight capacities remains unclear. Certainly, the loss of V1 deprives later cortical visual areas of a normal source of visual information. Still, there are other ways that information from the eye bypasses V1 to provide inputs to later visual areas. Alternative pathways include the superior colliculus (SC), the lateral geniculate nucleus (LGN) in the thalamus, and the pulvinar as likely sources.

[Figure image: schematic of subcortical structures (retina, LGN, pulvinar, SC, amygdala) and their projections to cortical visual areas V1 through V5, with the dorsal and ventral streams indicated.]

Figure 3. Subcortical Pathways and their Connection to Cortical Vision (from Urbanski, Coubard, & Bourlon 2014)

Figure Legend: The front of the head is to the left, the back of the head is to the right. One should imagine that the blue-linked regions are above the orange-linked regions, cortex above subcortex. V4 is assigned to the base of the ventral stream; V5, called area MT in nonhuman primates, is assigned to the base of the dorsal stream.

The latter two have direct extrastriate projections (projections to visual areas in the occipital lobe outside of V1), while the superior colliculus synapses onto neurons in the LGN and pulvinar, which then connect to extrastriate areas (Figure 3). Which of these provides the basis for blindsight remains an open question, though all pathways might play some role (Cowey 2010; Leopold 2012). If blindsight involves nonphenomenal, unconscious vision, then these pathways would be a substrate for it, and a functioning V1 might be necessary for normal conscious vision.

Campion et al. (1983) raised an important alternative explanation: blindsight subjects in fact have severely degraded conscious vision but merely report on it with low confidence (see Phillips 2021 for a recent version of this critique; see Michel & Lau 2021 for a response). In their reports, blindsight subjects feel like they are guessing about stimuli they can objectively discriminate. Campion et al. drew on signal detection theory, which emphasizes two determinants of detection behavior: perceptual sensitivity and response criterion (see section 2.1). Campion et al. hypothesized that blindsight patients are conscious in that they are aware of a visual signal whose discriminability is low. Further, blindsight patients are more conservative in their responses, so they will be apt to report the absence of a signal by saying that they do not see the relevant stimulus even though the signal is there and they can detect it, as verified by their above-chance visually guided behavior.
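
Signal detection theory makes the two determinants arithmetically separable. The following minimal sketch (with hypothetical hit and false-alarm rates, not data from the studies discussed) shows how sensitivity (d') and criterion (c) are computed, and how a conservative criterion can sharply depress “yes, I see it” responses without any loss of sensitivity:

```python
# A minimal signal detection sketch (hypothetical hit and false-alarm rates, not
# data from the studies cited): the same sensitivity (d') is compatible with very
# different rates of "yes, I see it" responses, depending on the response criterion.
from statistics import NormalDist

Z = NormalDist().inv_cdf             # inverse of the standard normal CDF

def sdt_indices(hit_rate, false_alarm_rate):
    """Return (d', criterion c) from hit and false-alarm rates."""
    d_prime = Z(hit_rate) - Z(false_alarm_rate)
    criterion = -0.5 * (Z(hit_rate) + Z(false_alarm_rate))
    return d_prime, criterion

print(sdt_indices(0.84, 0.16))       # ~ (2.0, 0.0): neutral criterion
print(sdt_indices(0.50, 0.02))       # ~ (2.1, 1.0): same sensitivity, conservative criterion
```

On Campion et al.’s hypothesis, a blindsight patient would resemble the second observer: above-chance discrimination paired with a criterion set so high that the stimulus is rarely reported as seen.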

This possibility was explicitly tested by Azzopardi and Cowey (1997) with the well-studied blindsight patient, GY. They compared blindsight performance with normal subjects at threshold vision using signal detection measures and found that, with respect to motion stimuli, the difference between discrimination and detection used to argue for blindsight can be explained by changes in response criterion, as Campion et al. hypothesized. That is, GY’s claim that he does not see the stimulus is due to a conservative criterion and not to a detection incapacity. Interestingly, for static stimuli, his response criterion did not change but his sensitivity did, as if he were tapping into two different visual processing mechanisms in each task (for an alternative explanation based on shifting response criterion, see Ko & Lau 2012).

In introspecting, the concepts available to subjects will shape the sensitivity of their reports. In many studies with blindsight, subjects are given a binary option: do you see the stimulus or do you not see it? The concern is that the “do not see” option would cover cases of degraded consciousness that subjects might be unwilling to classify as seeing due to a conservative response criterion. So, what if subjects are given more options for report? Ramsøy and Overgaard (2004; see also Overgaard et al. 2006) provided subjects with four categories for introspective report: no experience; brief glimpse; almost clear experience; clear experience. Using this perceptual awareness scale, they found that subjects’ objective performance tracked their introspective reports: performance was at chance when subjects reported no visual experience, and as visibility increased, so did performance. When the scale was used with a blindsight patient (Overgaard et al. 2008), no above-chance performance was detected when the subject reported no visual experience (see also Mazzi, Bagattini, & Savazzi 2016 for further evidence). A live alternative hypothesis is that blindsight does not present a case of unconscious vision, but of degraded conscious vision with a conservative response bias that affects introspection. At the very least, the issue depends on how introspection is deployed, a topic that deserves further attention (see Phillips 2016, 2021 for further discussion of blindsight).

4.3 Unconscious Vision and the Intentional Action Inference

Blindsight and DF show that damage to specific regions of the brain disrupts normal visual processing, yet subjects can access visual information in preserved visual circuits to inform behavior despite failing to report on the relevant visual contents. The received view is that these subjects demonstrate unconscious vision. One implication is that the normal processing in the ventral stream, tied to normal V1 activity, plays a necessary role in normal conscious vision. Another is that dorsal stream processing or visual stream processing that bypasses V1 via subcortical processing yields only unconscious visual states. This points to a set of networks that begin to provide an answer to what makes visual states conscious or not. An important further step will be to integrate these results with the general theories noted earlier (section 3).

Still, the complexities of the empirical data bring us back to methodological issues about tracking consciousness and the following question: What behavioral data should form the basis of attributions of phenomenal consciousness? The intentional action inference is used in a variety of cases to attribute conscious states, yet the results of the previous sections counsel us to be wary of applying that inference widely. After all, some intentional behavior might be unconsciously guided.

In the case of DF, we noted that unlike many other visual agnosics, she can direct motor actions towards stimuli that she cannot explicitly report and which she denies seeing. In her case, we prioritize introspective reports over intentional action as evidence for unconscious vision. Yet, one might take a broader view that vision for action is always conscious and that what DF vividly illustrates is that some visual contents (dorsal stream) are tied directly to performance of intentional motor behavior and are not directly available to conceptual capacities deployed in report. In contrast, other aspects of conscious vision, supported by the ventral stream, are directly available to guide reports. This functional divergence is explained by the anatomical division in cortical visual processing.

For some time now, these striking cases have been taken as clear cases of unconscious vision, and if this hypothesis is correct, the work has begun to identify visual areas critical for seeing, sometimes conscious and sometimes not. The neuroanatomy demonstrates that visually guided behavior has a complex neural basis involving cortical and subcortical structures that exhibit a substantial level of specialization. Understanding consciousness and unconsciousness in vision will need to be sensitive to the complexities of the underlying neural substrate.

5. Specific Consciousness

We turn to experimental work on specific consciousness:

Specific Consciousness: What neural states or properties are necessary and/or sufficient for a conscious perceptual state to have content X rather than Y?

In this section, we examine attempts to address claims about necessity and sufficiency by manipulation of the contents of consciousness through direct modulation of neural representational content.

5.1 Neural Representationalism

In thinking about neural explanations of specific consciousness, namely the contents of consciousness, we will provisionally assume a type of first-order representationalism about phenomenal content, namely that such content supervenes on neural content (see the entry on representational theories of consciousness). One strong position would be that phenomenal content is identical to appropriate neural content. A weaker correlation claim affirms only supervenience: no change in phenomenal content without a change in neural content. This neural representationalism allows us to link phenomenal properties to the brain via linking neural contents to perceptual contents.

In invoking neural content, the assumption is that neural content mirrors perceptual content, so that if one is experiencing dots moving in a certain direction, there is a neural representation with the same content. This is a simplification and does not cohere with a common current approach to neural content that takes it to be probabilistic (for accessible discussions, see Pouget, Dayan, & Zemel 2003; Colombo & Seriès 2012; Rescorla 2015). Yet perceptual content does not seem probabilistic. This highlights a prima facie disconnect between current theories of neural content and those of phenomenal content. One option is to find nonprobabilistic content at the neural level. The other is to find probabilistic content at the phenomenal level (for related ideas, see Morrison 2016 and responses by Denison 2017 and Beck 2020; also Munton 2016; Block 2018; Shea 2018; Gross 2020; Vance 2021; Siegel 2022; Lee & Orlandi 2022). For simplicity, in what follows we assume a simple mapping between neural content and perceptual experiences.
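
To illustrate what probabilistic neural content amounts to in practice, here is a minimal sketch (a generic toy model with assumed tuning parameters, not any specific published decoding scheme) in which a population of direction-tuned neurons is read out as a posterior distribution over motion direction rather than as a single determinate value:

```python
# A toy probabilistic population code: spike counts from direction-tuned neurons
# are decoded into a posterior over motion direction (assumed parameters only).
import numpy as np

directions = np.arange(0, 360, 10)                 # candidate motion directions (deg)
preferred = np.arange(0, 360, 30)                  # preferred directions of 12 model neurons

def tuning(direction, pref, peak=50.0, width=40.0):
    """Mean firing rate (spikes/s) of a neuron with the given preferred direction."""
    diff = np.deg2rad(direction - pref)
    return peak * np.exp((np.cos(diff) - 1) / np.deg2rad(width) ** 2)

def posterior(spike_counts, window=0.2):
    """Posterior over direction assuming independent Poisson spiking and a flat prior."""
    log_post = np.zeros_like(directions, dtype=float)
    for pref, k in zip(preferred, spike_counts):
        rates = tuning(directions, pref) * window
        log_post += k * np.log(rates) - rates      # Poisson log-likelihood (up to a constant)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

rng = np.random.default_rng(1)
true_direction = 90
counts = rng.poisson(tuning(true_direction, preferred) * 0.2)   # simulated spike counts
p = posterior(counts)
print(directions[np.argmax(p)], round(float(p.max()), 2))       # peak near 90 deg, graded uncertainty
```

On this picture, what the population carries is a graded distribution over directions, which is precisely what seems hard to square with the apparently determinate character of perceptual experience.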

5.2 The Contrast Strategy: Binocular Rivalry

A common approach, the contrast strategy, enjoins experimentalists to identify relevant correlates for some phenomenon P by contrasting cases where P is present with cases where P is not. Work on binocular rivalry illustrates this strategy (among many reviews, see Tong, Meng, & Blake 2006; Blake, Brascamp, & Heeger 2014; Blake 2022). When each eye simultaneously receives a different image, the subject does not see both, say as one stimulus overlapping the other. Rather, visual experience alternates between them. Call this phenomenal alternation. An initial restatement of our question about specific consciousness in respect of binocular rivalry is:

Specific Rivalry: What neural property is necessary and/or sufficient for phenomenal alternation in binocular rivalry in condition C?

That is, empirical theories aim to explain how visual content alternates in binocular rivalry.[11] Notice that this is a question about specific rather than generic consciousness: the contrast is not between a state’s being conscious versus not, but between two conscious states with different contents.

Neural explanations of binocular rivalry concern competition at some level of visual processing: (a) “interocular” competition between monocular neurons early in the visual system, namely visual neurons that receive input from only one eye, or (b) competition between binocular neurons later in the visual system, namely neurons that receive input from both eyes. The winner of the competition fixes which stimulus the subject experiences at a given time. Some imaging studies in humans suggest that neural activity in V1 correlates with alternation of the experienced images. For example, Polonsky et al. used fMRI to demonstrate that V1 activity to competing stimuli tracked perception (Polonsky et al. 2000; but see Maier et al. 2008). In contrast, some of the earliest electrophysiological studies (Leopold & Logothetis 1996; Logothetis, Leopold, & Sheinberg 1996; see also Hesse & Tsao 2020) on awake behaving monkeys supported later binocular processing as the neural basis of binocular rivalry. Processing in later (inferotemporal cortex, IT; see Figure 1) rather than earlier visual areas (V1 or V2) was observed to be best correlated with the monkey’s reported perception based on the monkey’s stimulus-specific responses.

Recent accounts have taken binocular rivalry as resulting from processes at multiple levels (Wilson 2003; Freeman 2005; Tong, Meng, & Blake 2006). For example, when the two competing stimuli have parts that can be fused into a coherent stimulus, as when half of a picture is presented to each eye, the subject can perceive the fusion, integrating content from each eye (Kovács et al. 1996; Ngo et al. 2000). This suggests that binocular rivalry can be sensitive to global properties of the stimulus (see Baker & Graf 2009). What unifies the mechanisms, perhaps, is the function of resolving a conflict generated by the stimuli.

Assume that some neural process R resolves interocular competition: when R resolves competition between stimuli X and Y in favor of X, then the subject is phenomenally conscious of X rather than Y and vice versa. Notice that R has the same “gating” function for any stimuli X and Y that are subject to binocular rivalry. So, while the presence of R can explain why the subject is having one conscious visual experience rather than another, R is not tied to a specific content. This suggests that in answering the question about rivalry, we will at best be identifying a necessary but not sufficient condition for a conscious visual state having a content X. R is a general gate for consciousness (cf. attention in global workspace theory).

An example of this gating of awareness can be found in recent studies using no-report paradigms, in which analysis of local field potentials in macaque prefrontal cortex has shown that transitions in frequency field activity track transitions in experience and the encoding of the contents of conscious experiences (Dwarakanath et al. 2023). Moreover, feature-selective neurons in the prefrontal cortex are modulated by subjective changes in experience (regardless of whether these were due to physical changes or to rivalry). This modulation allows successful trial-by-trial decoding of the contents of experience from prefrontal cortex (Kapoor et al. 2022). Together, these recent studies offer robust evidence for a causal role of prefrontal cortex in supporting conscious experiences.

A narrower explanation of specific consciousness would identify the specific neural representations that explain a conscious state’s having the specific content X (rather than Y). By the representationalism assumption, this will involve identifying neural representations with the same content, X. Focusing on a gate in explaining alternation in rivalry stops short of identifying those representations. Still, binocular rivalry can provide a useful method for isolating neural populations that carry relevant content. In principle, for any stimulus type of interest, X (e.g., faces, words, etc.), so long as X is subject to binocular rivalry, we can use rivalry paradigms to isolate brain areas that carry the relevant information that correlate with the subject’s perceiving X. That would allow us to identify potential candidates for the neural basis of conscious content. (For a criticism of binocular rivalry paradigms as a means for discovering the neural basis of perceptual awareness—as opposed to mere perceptual processing—see Zou, He, & Zhang 2016; Giles, Lau, & Odegaard 2016).

5.3 Neural Stimulation

There are limited opportunities to manipulate human brain activity in a targeted way. Recent use of transcranial magnetic stimulation to activate or suppress neural activity has provided illumination, but such interventions are coarse-grained. Ultimately, to locate an explanatory correlate for specific conscious contents, we will need more fine-grained interventions in brain tissue. In humans, such opportunities are generally confined to manipulation before surgical interventions, say for brain tumors or epilepsy.

In the middle of the last century, the neurosurgeon Wilder Penfield and colleagues performed direct electrical stimulation of the cortex in awake patients during neurosurgical procedures (Penfield & Perot 1963), and in certain cases induced hallucinations by stimulating primary sensory cortices such as V1 or S1 (see Figure 1). This provided evidence that endogenous activity could be causally sufficient for phenomenal experiences. Penfield’s interventions, however, were not based on fine-grained targeting of specific neural representations. As Cohen and Newsome note:

Penfield’s approach failed to generate substantial new insights into the neural basis of perception and cognition…because the gross electrical activation elicited by surface electrodes could not be related mechanistically to the information being processed within the excited neural tissue. (Cohen & Newsome 2004: 1)

A different approach begins with a more detailed understanding of the underlying neural representations tied to different brain regions (Figure 4). For example, the fusiform face area (FFA) appears to be necessary for normal human face experience in that lesions in FFA lead to prosopagnosia, the inability to see faces even if one can see their parts [whether FFA is necessary for seeing faces specifically, or more generally for visual expertise, is a point of contention (Kanwisher 2000; Tarr & Gauthier 2000)]. FFA is part of a larger network that is important in visual processing of faces (Behrmann & Plaut 2013). Recently, microstimulation of FFA in an awake human epilepsy patient induced visual distortions of actual faces as opposed to other objects (Parvizi et al. 2012). Alterations of visual experience were also reported when microstimulation of the parahippocampal place area (PPA) in an awake preoperative epileptic patient induced visual hallucinations of scenes (Mégevand et al. 2014). PPA is the same area that showed activation in patients with unresponsive wakefulness syndrome when they putatively imagined walking around their home (section 2.4).

[Figure image: ventral view of the cortex, anterior at the top and posterior at the bottom, with regions FFA (faces), LO (objects), and PPA (houses/scenes) marked in both hemispheres.]

Figure 4. Ventral Stream Areas

A view from the bottom of cortex with location of areas FFA, PPA and LO identified. Occipital cortex is on the bottom. LO is lesioned in the visual agnosic patient, DF (see section 4.1). This figure is modified from figure 1 of Behrmann and Plaut 2013, kindly provided by Marlene Behrmann and used with her permission.


Another successful example of targeted microstimulation involves the visual word form area. This region in the left midfusiform gyrus (lmFG) is important for normal processing of visual word forms during reading. Microstimulation in human epileptic patients selectively disrupted word and letter reading, presumably altering the visual experience of letters, without changing general form perception (Hirshorn et al. 2016) (see Movies S1 and S2 in Other Internet Resources).

It is worth noting that many neuroscientists of vision take themselves to be investigating seeing in the ordinary sense, one that implies consciousness, but very few of them would characterize their work as about consciousness. That said, their work is of direct relevance to our understanding of specific consciousness even if it is not always characterized as such.[12]

An important approach in visual neuroscience was articulated by A.J. Parker and William Newsome in “Sense and the Single Neuron” (1998) via “principles” to connect electrophysiological data about information processing to perception (for a recent discussion, see Ruff & Cohen 2014). To probe the neural basis of perception, neuroscientists need to explanatorily link neural data to the subject’s perception that guides behavior. The experimenter must ensure that recorded neural content correlates with perceptual content and not just response. Further, manipulation of the neurons carrying information should affect perception: inducing appropriate neural activity should shift perceptual response while abolishing or reducing that activity should eliminate or reduce perceptual response as measured in behavior. These proposals address concerns about necessity and sufficiency.

The intentional action inference is applicable (or at least its evidential version):

If some subject acts intentionally, where her action is guided by a perceptual state, then that state is phenomenally conscious.

We will consider the strength of this inference in two cases. The first case, visual motion perception, introduces the principles that guide the manipulation of neural content, while the second concerns the tactile experience of vibration. These cases involve experiments with non-human primates, so we lack introspective reports. But direct manipulation of the human brain combined with introspective report has also been performed, as discussed above with respect to face and letter perception.

These experiments involve microstimulation of small populations of neurons that are targeted precisely because of their informational content. Microstimulation involves injecting a small current from the tip of an electrode inserted into brain tissue that directly stimulates nearby neurons or, through synaptic connections to other neurons, indirectly activates more distant neurons (see Histed, Ni, & Maunsell 2013 for a review). It is assumed that neurons tuned in similar ways, that is neurons that respond to similar stimuli, tend to be interconnected, so microstimulation is taken to largely drive similarly tuned neurons.

5.3.1 Visual Motion Perception

We begin with visual motion perception in primates. Since the principles introduced here are central to much perceptual neuroscience and provide the basis for probing the link between neural representations and perceptual content, we examine them carefully. The salient question will be whether conscious experience is changed by the manipulations.

The work we shall discuss was done in awake behaving macaque monkeys. Visual area MT in the monkey brain (called V5 in humans) plays an important role in the visual experience of motion. MT is taken to lie in the dorsal visual stream (Figure 1). Lesions that disrupt MT are known to cause akinetopsia, the inability to see motion. One patient with an MT (V5) lesion reported the following phenomenology: “people were suddenly here or there but I have not seen them moving” (Zihl, von Cramon, & Mai 1983: 315). MT processing looks to be necessary for normal visual motion experience. Furthermore, MT neurons represent (carry information regarding) the direction of motion of visible stimuli: MT neurons are tuned for motion in specific directions with the highest firing rate for a specific direction of motion (for other functions and responses of MT, see Born & Bradley 2005). By placing motion stimuli in a neuron’s receptive field, scientists can map its tuning:

[Figure image: tuning curve plotting responses (spikes/s) against direction of motion (degrees), with attended (dashed) and unattended (solid) curves peaking at 0 degrees.]

Figure 5. MT Neuron Tuning Curve

Figure Legend: Tuning of a neuron in MT showing a peak response in spiking rate at 0 degrees of motion. The dashed curve is generated when the animal is attending to the motion stimulus while it is in the receptive field (we shall not discuss the neural basis of attention, but see Wu 2014b: chap. 2 for a summary of the neuroscience of attention). The solid curve shows MT response when the animal is not attending to the motion stimulus in the receptive field. Figure from Lee & Maunsell 2009.

What is plotted is the activity of an MT neuron, in spikes per second, to a specific type of motion stimulus placed within its receptive field. How to relate a tuning curve to a determinate content is complicated. Since the neural response is not simply to one stimulus value, it is not obvious that the neuron should be taken to represent 0 degrees of motion, namely the value at its peak response. Indeed, theorists have noted that the tuning curve looks like a probability density function, and many now take neurons to have probabilistic content (section 5.1).

Experimenters have trained macaque monkeys to perform discrimination tasks reporting direction of motion. Typically, the monkey maintains fixation while the moving stimulus is placed within the receptive field of the recorded neuron. The monkey reports the direction of the stimulus by moving its eyes to a target that stands for either leftward or rightward motion (other behavioral reports can be generated such as moving a joystick). Provisionally, we apply the intentional action inference, so we assume that such reports are guided by conscious visual experience of the stimuli. Thus, changes in behavior will be evidence for changes in conscious content.

Early work suggested that the activity of a single neuron provides a strong correlate of the animal’s visually guided performance. This can be seen by plotting both the animal and the neuron’s performance across different stimulus values. In these experiments, the value concerns the percent coherence of motion of a set of dots defined as the number of dots moving in the same direction (0% coherence being random motion; 100% being all dots moving in the same direction). In the first case, we construct a psychometric curve that plots the animal’s percent correct reports relative to percent coherence of motion of the stimuli. As one might expect, percent correct reports drop as coherence drops, and the inflection point reflects where the subject is equally likely to indicate left or right motion. We can do the same for the neural activity of the neuron across the same stimulus values, a neurometric curve.

The experimentalist’s window onto conscious experience is through behavior, the assumption being that report about motion correlates with perceptual experience. Correlation is assessed by asking the following question: would an ideal observer, using the activity of the neuron in question, be able to predict the animal’s visually guided performance? Essentially, do the psychometric and neurometric curves overlap? Strikingly, yes. MT neurons were observed to predict the animal’s behavior (Britten et al. 1992).

[Figure image: overlapping psychometric (black) and neurometric (gray) curves plotting proportion correct against motion coherence (%, log scale).]

Figure 6. Psychometric and Neurometric Curves

Figure Legend: Psychometric and neurometric curves for a single MT neuron during performance of a motion direction detection task. Percent correct performance is plotted on the y-axis while percent motion coherence is plotted in a log scale on the x-axis. Figure modified from Ruff & Cohen 2014 and kindly provided by Doug Ruff.

This shows that the activity of a single MT neuron provides a neural correlate of the animal’s visual discrimination of motion. Note that this is just a neural correlate of behavior. No one suggested that this neuron was causally sufficient for the behavior or for perception. Later results have suggested that individual neurons are not quite as sensitive as Britten suggested, but that small groups of MT neurons are sufficient to predict behavior (Cohen & Newsome 2009).
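
As a minimal sketch of how such a neurometric curve can be constructed (with hypothetical firing-rate parameters, not the recorded data), one can simulate an MT-like neuron and ask how well an ideal observer would classify motion direction from its spike counts at each coherence:

```python
# Sketch of a neurometric curve (assumed parameters, not data from the cited studies):
# an ideal observer classifies motion direction from a simulated MT-like neuron's
# spike counts, and its percent correct is compared with behavior across coherences.
import numpy as np

rng = np.random.default_rng(0)

def spike_counts(coherence, preferred, trials, baseline=20.0, gain=0.3):
    """Poisson spike counts over a 1 s window; the mean rate rises (falls)
    with coherence for motion in (against) the neuron's preferred direction."""
    sign = 1.0 if preferred else -1.0
    rate = baseline * (1.0 + sign * gain * coherence / 100.0)
    return rng.poisson(rate, size=trials)

def neurometric(coherence, trials=5000):
    """Ideal-observer (ROC-style) percent correct: the probability that a
    preferred-direction count exceeds a null-direction count, ties split."""
    pref = spike_counts(coherence, True, trials)
    null = spike_counts(coherence, False, trials)
    return float(np.mean(pref > null) + 0.5 * np.mean(pref == null))

for coh in (3.2, 6.4, 12.8, 25.6, 51.2):
    print(f"{coh:5.1f}% coherence -> ideal observer {neurometric(coh):.2f} correct")
# Overlaying these points on the animal's percent-correct responses at the same
# coherences is, in essence, the comparison plotted in Figure 6.
```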

Earlier, we worried about mere correlates. To get causal or explanatory purchase, the content of the MT neurons correlated to the animal’s behavior must be shown to contribute to perceptual guidance. This predicts that if we manipulate the content of the neurons, i.e., manipulate neural representations, then we should manipulate the content of the animal’s visual experience of motion as reflected by predicted changes in behavior. This would be to test sufficiency with respect to specific consciousness.

Newsome and colleagues demonstrated that microstimulation of MT neurons shifted the animal’s performance in predictable ways. Assume that neural population P, by encoding information about stimulus motion, can inform the subject’s report of motion direction. This information is accessible for the control of behavior. Activation of P by microstimulation should shift behavior in a motion-selective way correlated with the direction that P is tuned to (represents). This was first demonstrated by Salzman et al. (1990). They inserted electrodes into MT and identified neurons tuned to a particular direction of motion. During a motion discrimination task, microstimulation of neurons with that tuning led to a shift in the psychometric curve, as if those neurons were given more weight in driving behavior.

In conditions of microstimulation relative to its absence, the monkey was more likely to report that there was motion in the stimulated neurons’ preferred direction. In the original experiment, the psychophysical effect of microstimulation was equivalent to the addition of 7–20% coherence in the stimulus with respect to the neurons’ preferred direction, depending on the experimental conditions. Further, as a test of necessity, a selective lesion of MT disrupted motion discrimination, though the animals were able to recover some function, suggesting that other visual information streams could be tapped to support performance (Newsome & Paré 1988).
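
As a rough illustration of how a microstimulation effect is expressed as an “equivalent coherence” (parameter values here are assumed for illustration, not taken from the original study), one can model the probability of a preferred-direction choice as a logistic function of signed coherence and treat stimulation as a horizontal shift of that curve:

```python
# Hypothetical illustration of "equivalent coherence": microstimulation acts like
# a horizontal shift of the psychometric curve (assumed logistic form and parameters).
import numpy as np

def p_preferred(coherence, bias=0.0, slope=0.08):
    """Probability of reporting the stimulated neurons' preferred direction;
    positive coherence means motion in that preferred direction."""
    return 1.0 / (1.0 + np.exp(-slope * (coherence + bias)))

equivalent_coherence = 12.0          # assumed shift, within the reported 7-20% range

# Without stimulation, 0% coherence yields chance responding; with stimulation,
# the animal behaves as if 12% coherent motion were added in the preferred direction.
print(round(float(p_preferred(0.0)), 2))                                    # 0.5
print(round(float(p_preferred(0.0, bias=equivalent_coherence)), 2))         # ~0.72
print(round(float(p_preferred(-equivalent_coherence, bias=equivalent_coherence)), 2))  # back to 0.5
```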

Adopting the intentional action inference, one can conclude that the microstimulation shifted perceptual content (or again, that we have good evidence for this shift). That said, given our discussion of unconscious vision (section 4), another possibility is that MT microstimulation only changes unconscious visual representations. Newsome himself asked:

What is the conscious experience that accompanies the stimulation and the monkey’s decision? Even if you knew everything about how the neurons encode and transmit information, you may not know what the monkey experiences when we stimulate his MT. (Singer 2006)

Clearly, having the monkey provide an introspective report would add evidential weight, but obtaining such reports from non-linguistic creatures is difficult. How can we get animals to turn their attention inward to their perceptual states in an experimental context?[13]

5.3.2 Tactile Vibration

What of microstimulation in the absence of a stimulus? Might we induce hallucinations as Penfield did in his patients? Rather than modulation of ongoing perceptual processing, the issue here is to create an internal signal that mimics perception. Romo et al. (1998) demonstrated that monkeys can carry out sensory tasks via activation triggered by microstimulation. The monkeys’ task was to discriminate the frequency of two sequential “flutters” on their fingertips, that is, mechanical vibrations on the skin at specific frequencies. In an experimental trial, an initial sample flutter was presented for 500 ms and after a gap of 1–3 seconds, a second test flutter of either higher or lower frequency was presented. The animal reported whether the second test frequency was higher or lower than the sample.

The experimenters examined whether direct microstimulation in the absence of a stimulus could tap into the same neural representations that guided the animal’s report. They isolated neurons in primary somatosensory cortex (S1, the somatosensory homunculus discovered by Penfield & Boldrey [1937]; see Figure 1) responsive to vibration frequency on the fingers. The investigators then stimulated the same neurons in S1 in the absence of the test flutter, using stimulation as a substitute for an actual vibration. Thus, the animal had to compare the frequency of a mechanical sample either to (1) a subsequent real mechanical test vibration (i.e., the good case with an actual stimulus) or (2) a microstimulation test stimulus (i.e., the “hallucinatory” case where direct activation of the S1 neurons occurred in the absence of a stimulus). Romo et al. demonstrated that discrimination performance was equivalent whether it was based on mechanical stimulation or on microstimulation. In other words, the animals could match either mechanical or microstimulation to a remembered mechanical sample.[14] (For a related approach with visual stimuli and optogenetic stimulation, see Azadi et al. 2023.)

In subsequent work (Romo et al. 2000), the investigators inverted the experiment, using the microstimulation as the sample. In this case, the animals had to remember the information conveyed by the microstimulation (effectively, a hallucination) and then compare it to either a subsequent (a) mechanically generated stimulation on the finger (actual test stimulus) or (b) a microstimulation of S1 as test (i.e., no stimulus). In both cases, performance was similar to earlier results. The striking finding is that behavior could be driven entirely by microstimulation. At least for the tactile stimulations at issue, the animal might have been in the Matrix!
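
The logic of this behavioral-equivalence argument can be made explicit with a toy signal-detection model (entirely hypothetical, not Romo et al.’s analysis): if mechanical flutter and S1 microstimulation feed the same noisy internal frequency code, then expected discrimination performance is the same no matter which of them serves as sample or test.

```python
# Toy model (hypothetical, not from Romo et al.): mechanical flutter and S1
# microstimulation are assumed to drive the same noisy internal frequency code,
# so discrimination performance does not depend on how sample and test are delivered.
import numpy as np

rng = np.random.default_rng(1)
SIGMA_HZ = 2.0  # assumed internal noise on the encoded flutter frequency

def encode(freq_hz):
    """Noisy internal estimate of a flutter frequency, however it is evoked."""
    return freq_hz + rng.normal(0.0, SIGMA_HZ)

def percent_correct(n_trials=20_000, step_hz=4.0):
    correct = 0
    for _ in range(n_trials):
        sample = rng.uniform(15.0, 25.0)                 # base flutter frequency (Hz)
        test_is_higher = rng.random() < 0.5
        test = sample + (step_hz if test_is_higher else -step_hz)
        report_higher = encode(test) > encode(sample)    # compare test to remembered sample
        correct += report_higher == test_is_higher
    return 100.0 * correct / n_trials

# Because mechanical and "hallucinatory" (microstimulation) trials both pass through
# encode(), every sample/test pairing yields the same expected accuracy in this model.
print(f"simulated discrimination accuracy: about {percent_correct():.1f}% correct")
```

The philosophical work is done by the assumption built into encode(): that microstimulation taps the very representation a mechanical vibration would produce, which is what the matched performance across the sample/test pairings is taken to support.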

One might think that the intentional action inference is stronger in this paradigm, given the elegant flipping of stimuli in Romo et al. 2000. Still, the authors comment:

This study, therefore, has directly established a strong link between neural activity and perception. However, we do not know yet whether microstimulation of the QA circuit in S1 elicits a subjective flutter sensation in the fingertips. This can only be explored by microstimulating S1 in an attending human observer. (Romo et al. 2000: 277)[15]

Like Newsome, the authors reach for introspection. Yet they might undersell their result, for it seems that the animals are having a tactile hallucination: (a) Penfield showed that stimulation of primary sensory cortices like S1 induces hallucinations in humans; (b) action is engaged not at low levels of S1 stimulation in monkeys but only at higher levels of stimulation; (c) at that point, when the stimulation grabs their attention, the monkeys do what they were trained to do, namely discriminate stimuli, either with (d) just mechanical stimulation (normal experience), or (e) with a mix of mechanical and microstimulation or with just microstimulation; (f) given the behavioral equivalence of these three cases, one might then argue that if performance in the mechanical stimulation cases involves conscious tactile experience, then that same experience is involved in the other cases.

Taken together, these cases provide examples of detailed manipulations in different sensory modalities, animals, and contents that test for causal sufficiency and necessity across different levels of the sensory processing hierarchy, from early levels (e.g., S1) to mid-levels (MT) and, as discussed at the outset of this section, to higher levels (lmFG or FFA). One issue that remains open is whether in tapping into neural processing by microstimulation, one has simply identified an earlier causal node in the neural processes that generate perceptual experience, there being more informative neural correlates later in the causal pathway.

6. The Future

Talk of the neuroscience of consciousness has, thus far, focused on the neural correlates of consciousness. Not all neural correlates are explanatory, so finding correlates is only a first step in the neuroscience of consciousness. The next step involves manipulation of the relevant correlates to test claims about sufficiency and necessity, as isolated in our two questions:

Generic Consciousness: What conditions/states N of nervous systems are necessary and (or) sufficient for a mental state, M, to be conscious as opposed to not?

Specific Consciousness: What neural states or properties are necessary and/or sufficient for a conscious perceptual state to have content X rather than Y?

A productive neuroscience of consciousness requires that we understand the relevant neural properties at the right level of analysis. For generic consciousness, this will involve manipulating the relevant properties in ways that avoid the access/phenomenal confound, and recent work focuses on pitting the many theories we have considered against each other. For specific consciousness, the critical issue will be to understand neural representational content and to find ways to link neural content to phenomenal content, both experimentally and explanatorily. We have tools to manipulate neural contents to affect phenomenal content, and in doing so, we can begin to uncover the neural basis of conscious contents. There is much interesting work yet to be done, philosophically and empirically, and we can look forward to a productive interdisciplinary research program.

Bibliography

  • Aglioti, Salvatore, Joseph F.X. DeSouza, & Melvyn A. Goodale, 1995, “Size-Contrast Illusions Deceive the Eye but Not the Hand,” Current Biology, 5 (6): 679–85. doi:10.1016/s0960-9822(95)00133-3
  • Albantakis, Larissa, Leonardo Barbosa, Graham Findlay, Matteo Grasso, Andrew M. Haun, William Marshall, William G. P. Mayner, et al., 2023, “Integrated Information Theory (IIT) 4.0: Formulating the Properties of Phenomenal Existence in Physical Terms,” PLOS Computational Biology, 19 (10): e1011465. doi:10.1371/journal.pcbi.1011465
  • Andersen, Richard A., Kristen N. Andersen, Eun Jung Hwang, & Markus Hauschild, 2014, “Optic Ataxia: From Balint’s Syndrome to the Parietal Reach Region,” Neuron, 81 (5): 967–83. doi:10.1016/j.neuron.2014.02.025
  • Arnold, Derek Henry, 2011a, “I Agree: Binocular Rivalry Stimuli Are Common but Rivalry Is Not,” Frontiers in Human Neuroscience, 5: 157. doi:10.3389/fnhum.2011.00157
  • –––, 2011b, “Why Is Binocular Rivalry Uncommon? Discrepant Monocular Images in the Real World,” Frontiers in Human Neuroscience, 5: 116. doi:10.3389/fnhum.2011.00116
  • Aru, Jaan, Talis Bachmann, Wolf Singer, & Lucia Melloni, 2012, “Distilling the Neural Correlates of Consciousness,” Neuroscience and Biobehavioral Reviews, 36 (2): 737–46. doi:10.1016/j.neubiorev.2011.12.003
  • Azadi, Reza, Simon Bohn, Emily Lopez, Rosa Lafer-Sousa, Karen Wang, Mark A.G. Eldridge, & Arash Afraz, 2023, “Image-Dependence of the Detectability of Optogenetic Stimulation in Macaque Inferotemporal Cortex,” Current Biology, 33 (3): 581–588.e4. doi:10.1016/j.cub.2022.12.021
  • Azzopardi, P., & A. Cowey, 1997, “Is Blindsight like Normal, Near-Threshold Vision?” Proceedings of the National Academy of Sciences, 94 (25): 14190–94. doi:10.1073/pnas.94.25.14190
  • Baars, Bernard J., 1988, A Cognitive Theory of Consciousness, Cambridge University Press.
  • Baker, Ben, Benjamin Lansdell, & Konrad P. Kording, 2022, “Three Aspects of Representation in Neuroscience,” Trends in Cognitive Sciences, 26 (11): 942–58. doi:10.1016/j.tics.2022.08.014
  • Baker, Daniel H., & Erich W. Graf, 2009, “Natural Images Dominate in Binocular Rivalry,” Proceedings of the National Academy of Sciences, 106 (13): 5436–41. doi:10.1073/pnas.0812860106
  • Barrett, Adam B., & Pedro A. M. Mediano, 2019, “The Phi Measure of Integrated Information Is Not Well-Defined for General Physical Systems,” Journal of Consciousness Studies, 26 (1–2): 11–20. [Preprint of Barrett and Mediano 2019 available online]
  • Barron, Andrew B., & Colin Klein, 2016, “What Insects Can Tell Us about the Origins of Consciousness,” Proceedings of the National Academy of Sciences, 113 (18): 4900–4908. doi:10.1073/pnas.1520084113
  • Bartlett, Gary, 2022, “Does Integrated Information Theory Make Testable Predictions about the Role of Silent Neurons in Consciousness?” Neuroscience of Consciousness, 2022 (1): niac015. doi:10.1093/nc/niac015
  • Bayne, Tim, 2011, “The Sense of Agency,” in The Senses: Classic and Contemporary Philosophical Perspectives, edited by Fiona Macpherson, 355–74. Oxford: Oxford University Press.
  • –––, 2018, “On the Axiomatic Foundations of the Integrated Information Theory of Consciousness,” Neuroscience of Consciousness, 4 (1): niy007. doi:10.1093/nc/niy007
  • Bayne, Tim, & David J. Chalmers, 2003, “What Is the Unity of Consciousness?” In The Unity of Consciousness: Binding, Integration, and Dissociation, Axel Cleeremans (eds.), Oxford: Oxford University Press, 23–58. doi:10.1093/acprof:oso/9780198508571.003.0002
  • Bayne, Tim, Jakob Hohwy, & Adrian M. Owen, 2016, “Are There Levels of Consciousness?” Trends in Cognitive Sciences, 20 (6): 405–13. doi:10.1016/j.tics.2016.03.009
  • Bayne, Tim, & Michelle Montague, eds., 2011, Cognitive Phenomenology, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199579938.001.0001
  • Beck, Jacob, 2020, “On Perceptual Confidence and ‘Completely Trusting Your Experience,’” Analytic Philosophy, 61 (2): 174–88. doi:10.1111/phib.12151
  • Bedny, Marina, 2017, “Evidence from Blindness for a Cognitively Pluripotent Cortex,” Trends in Cognitive Sciences, 21 (9): 637–48. doi:10.1016/j.tics.2017.06.003
  • Behrmann, Marlene, & David C. Plaut, 2013, “Distributed Circuits, Not Circumscribed Centers, Mediate Visual Recognition,” Trends in Cognitive Sciences, 17 (5): 210–19. doi:10.1016/j.tics.2013.03.007
  • Birch, Jonathan, Alexandra K. Schnell, & Nicola S. Clayton, 2020, “Dimensions of Animal Consciousness,” Trends in Cognitive Sciences, 24 (10): 789–801. doi:10.1016/j.tics.2020.07.007
  • Blake, Randolph, 2022, “The Perceptual Magic of Binocular Rivalry,” Current Directions in Psychological Science, 31 (2): 139–46. doi:10.1177/09637214211057564
  • Blake, Randolph, Jan Brascamp, & David J. Heeger, 2014, “Can Binocular Rivalry Reveal Neural Correlates of Consciousness?” Philosophical Transactions of the Royal Society B: Biological Sciences, 369 (1641): 20130211. doi:10.1098/rstb.2013.0211
  • Block, Ned, 1995, “On a Confusion about a Function of Consciousness,” Behavioral and Brain Sciences, 18 (2): 227–47. doi:10.1017/s0140525x00038188
  • –––, 2007, “Consciousness, Accessibility, and the Mesh between Psychology and Neuroscience,” Behavioral and Brain Sciences, 30 (5–6): 481–548. doi:10.1017/s0140525x07002786
  • –––, 2011, “The Higher Order Approach to Consciousness Is Defunct,” Analysis, 71 (3): 419–31. doi:10.1093/analys/anr037
  • –––, 2018, “If Perception Is Probabilistic, Why Does It Not Seem Probabilistic?” Philosophical Transactions of the Royal Society B: Biological Sciences, 373 (1755): 20170341. doi:10.1098/rstb.2017.0341
  • –––, 2019, “What Is Wrong with the No-Report Paradigm and How to Fix It,” Trends in Cognitive Sciences, 23 (12): 1003–13. doi:10.1016/j.tics.2019.10.001
  • Boly, Melanie, Marcello Massimini, Naotsugu Tsuchiya, Bradley R. Postle, Christof Koch, & Giulio Tononi, 2017, “Are the Neural Correlates of Consciousness in the Front or in the Back of the Cerebral Cortex? Clinical and Neuroimaging Evidence,” The Journal of Neuroscience, 37 (40): 9603–13. doi:10.1523/jneurosci.3218-16.2017
  • Born, Richard T., & David C. Bradley, 2005, “Structure and Function of Visual Area MT,” Annual Review of Neuroscience, 28 (1): 157–89. doi:10.1146/annurev.neuro.26.041002.131052
  • Britten, K. H., M. N. Shadlen, W. T. Newsome, & J. A. Movshon, 1992, “The Analysis of Visual Motion: A Comparison of Neuronal and Psychophysical Performance,” The Journal of Neuroscience, 12 (12): 4745–65. doi:10.1523/jneurosci.12-12-04745.1992
  • Brown, Richard, 2015, “The HOROR Theory of Phenomenal Consciousness,” Philosophical Studies, 172 (7): 1–12. doi:10.1007/s11098-014-0388-7
  • Brown, Richard, Hakwan Lau, & Joseph E. LeDoux, 2019, “Understanding the Higher-Order Approach to Consciousness,” Trends in Cognitive Sciences, 23 (9): 754–68. doi:10.1016/j.tics.2019.06.009
  • Campion, John, Richard Latto, & Y. M. Smith, 1983, “Is Blindsight an Effect of Scattered Light, Spared Cortex, and near-Threshold Vision?” Behavioral and Brain Sciences, 6 (3): 423–48. doi:10.1017/s0140525x00016861
  • Cao, Rosa, 2012, “A Teleosemantic Approach to Information in the Brain,” Biology & Philosophy, 27 (1): 49–71. doi:10.1007/s10539-011-9292-0
  • –––, 2014, “Signaling in the Brain: In Search of Functional Units,” Philosophy of Science, 81 (5): 891–901. doi:10.1086/677688
  • Carruthers, Peter, 2011, The Opacity of Mind: An Integrative Theory of Self-Knowledge, New York: Oxford University Press.
  • Chalmers, David J., 1995, “Facing Up to the Problem of Consciousness,” Journal of Consciousness Studies, 2 (3): 200–219
  • –––, 1996, The Conscious Mind, New York: Oxford University Press.
  • –––, 2000, “What Is a Neural Correlate of Consciousness,” in Neural Correlates of Consciousness: Empirical and Conceptual Questions, edited by Thomas Metzinger, 17–39. Cambridge, MA: MIT Press.
  • Chirimuuta, M., 2014, “Psychophysical Methods and the Evasion of Introspection,” Philosophy of Science, 81 (5): 914–26. doi:10.1086/677890
  • Chun, Marvin M., Julie D. Golomb, & Nicholas B. Turk-Browne, 2011, “A Taxonomy of External and Internal Attention,” Annual Review of Psychology, 62 (1): 73–101. doi:10.1146/annurev.psych.093008.100427
  • Churchland, Patricia, 1996, “The Hornswoggle Problem,” Journal of Consciousness Studies, 3 (5–6): 402–408
  • Clark, Andy, 2001, “Visual Experience and Motor Action: Are the Bonds Too Tight?” The Philosophical Review, 110 (4): 495–519. doi:10.2307/3182592
  • Cleeremans, Axel, 2011, “The Radical Plasticity Thesis: How the Brain Learns to Be Conscious,” Frontiers in Psychology, 2: 86. doi:10.3389/fpsyg.2011.00086
  • Cleeremans, Axel, Dalila Achoui, Arnaud Beauny, Lars Keuninckx, Jean-Remy Martin, Santiago Muñoz-Moldes, Laurène Vuillaume, & Adélaïde de Heering, 2020, “Learning to Be Conscious,” Trends in Cognitive Sciences, 24 (2): 112–23. doi:10.1016/j.tics.2019.11.011
  • Cohen, Marlene R., & William T. Newsome, 2004, “What Electrical Microstimulation Has Revealed about the Neural Basis of Cognition,” Current Opinion in Neurobiology, 14 (2): 169–77. doi:10.1016/j.conb.2004.03.016
  • –––, 2009, “Estimates of the Contribution of Single Neurons to Perception Depend on Timescale and Noise Correlation,” The Journal of Neuroscience, 29 (20): 6635–48. doi:10.1523/jneurosci.5179-08.2009
  • Cohen, Michael A., & Daniel C. Dennett, 2011, “Consciousness Cannot Be Separated from Function,” Trends in Cognitive Sciences, 15 (8): 358–64. doi:10.1016/j.tics.2011.06.008
  • Colombo, Matteo, & Peggy Seriès, 2012, “Bayes in the Brain—On Bayesian Modelling in Neuroscience,” The British Journal for the Philosophy of Science, 63 (3): 697–723. doi:10.1093/bjps/axr043
  • Cowey, Alan, 2010, “The Blindsight Saga,” Experimental Brain Research, 200 (1): 3–24. doi:10.1007/s00221-009-1914-2
  • Crick, Francis, & Christof Koch, 1990, “Towards a Neurobiological Theory of Consciousness,” Seminars in The Neurosciences, 2: 263–75
  • Cruse, Damian, Srivas Chennu, Camille Chatelle, Tristan A. Bekinschtein, Davinia Fernández-Espejo, John D Pickard, Steven Laureys, & Adrian M Owen, 2012, “Bedside Detection of Awareness in the Vegetative State: A Cohort Study,” The Lancet, 378 (9809): 2088–94. doi:10.1016/s0140-6736(11)61224-5
  • Cul, A. Del, Stanislas Dehaene, P. Reyes, E. Bravo, & A. Slachevsky, 2009, “Causal Role of Prefrontal Cortex in the Threshold for Access to Consciousness,” Brain, 132 (9): 2531–40. doi:10.1093/brain/awp111
  • Curley, William H., Peter B. Forgacs, Henning U. Voss, Mary M. Conte, & Nicholas D. Schiff, 2018, “Characterization of EEG Signals Revealing Covert Cognition in the Injured Brain,” Brain, 141 (5): 1404–21. doi:10.1093/brain/awy070
  • Dehaene, Stanislas, & Jean-Pierre Changeux, 2011, “Experimental and Theoretical Approaches to Conscious Processing,” Neuron, 70 (2): 200–227. doi:10.1016/j.neuron.2011.03.018
  • Dehaene, Stanislas, Jean-Pierre Changeux, Lionel Naccache, Jérôme Sackur, & Claire Sergent, 2006, “Conscious, Preconscious, and Subliminal Processing: A Testable Taxonomy,” Trends in Cognitive Sciences, 10 (5): 204–11. doi:10.1016/j.tics.2006.03.007
  • Dehaene, Stanislas, M. Kerszberg, & J. P. Changeux, 1998, “A Neuronal Model of a Global Workspace in Effortful Cognitive Tasks,” Proceedings of the National Academy of Sciences of the United States of America, 95 (24): 14529–34. doi:10.1073/pnas.95.24.14529
  • Dehaene, Stanislas, & Lionel Naccache, 2001, “Towards a Cognitive Neuroscience of Consciousness: Basic Evidence and a Workspace Framework,” Cognition, 79 (1–2): 1–37. doi:10.1016/s0010-0277(00)00123-2
  • Demertzi, Athena, Georgios Antonopoulos, Lizette Heine, Henning U. Voss, Julia Sophia Crone, Carlo de Los Angeles, Mohamed Ali Bahri, et al., 2015, “Intrinsic Functional Connectivity Differentiates Minimally Conscious from Unresponsive Patients,” Brain, 138 (9): 2619–31. doi:10.1093/brain/awv169
  • Denison, Rachel N., 2017, “Precision, Not Confidence, Describes the Uncertainty of Perceptual Experience: Comment on John Morrison’s ‘Perceptual Confidence.’” Analytic Philosophy, 58 (1): 58–70. doi:10.1111/phib.12092
  • Dennett, Daniel C., 2018, “Facing up to the Hard Question of Consciousness,” Philosophical Transactions of the Royal Society B: Biological Sciences, 373 (1755): 20170342–47. doi:10.1098/rstb.2017.0342
  • Dienes, Zoltán, & Anil Seth, 2010, “Gambling on the Unconscious: A Comparison of Wagering and Confidence Ratings as Measures of Awareness in an Artificial Grammar Task,” Consciousness and Cognition, 19 (2): 674–81. doi:10.1016/j.concog.2009.09.009
  • Dijkstra, Nadine, & Stephen M. Fleming, 2023, “Subjective Signal Strength Distinguishes Reality from Imagination,” Nature Communications, 14: 1627. doi:10.1038/s41467-023-37322-1
  • Doerig, Adrien, Aaron Schurger, Kathryn Hess, & Michael H. Herzog, 2019, “The Unfolding Argument: Why IIT and Other Causal Structure Theories Cannot Explain Consciousness,” Consciousness and Cognition, 72: 49–59. doi:10.1016/j.concog.2019.04.002
  • Dołęga, Krzysztof, 2023, “Models of Introspection vs. Introspective Devices: Testing the Research Programme for Possible Forms of Introspection,” Journal of Consciousness Studies, 30 (9): 86–101. doi:10.53765/20512201.30.9.086
  • Drayson, Zoe, 2014, “Intentional Action and the Post-Coma Patient,” Topoi, 33 (1): 23–31. doi:10.1007/s11245-013-9185-8
  • Dretske, Fred, 1981, Knowledge and the Flow of Information, Cambridge, MA: MIT Press.
  • Dwarakanath, Abhilash, Vishal Kapoor, Joachim Werner, Shervin Safavi, Leonid A. Fedorov, Nikos K. Logothetis, & Theofanis I. Panagiotaropoulos, 2023, “Bistability of Prefrontal States Gates Access to Consciousness,” Neuron, 111 (10): 1666-1683.e4. doi:10.1016/j.neuron.2023.02.027
  • Edlow, Brian L., Leandro R. D. Sanz, Len Polizzotto, Nader Pouratian, John D. Rolston, Samuel B. Snider, Aurore Thibaut, et al., 2021, “Therapies to Restore Consciousness in Patients with Severe Brain Injuries: A Gap Analysis and Future Directions,” Neurocritical Care, 35 (Supplement 1): 68–85. doi:10.1007/s12028-021-01227-y
  • Ehrsson, Henrik H., 2009, “Rubber Hand Illusion,” in The Oxford Companion to Consciousness, edited by Tim Bayne, Axel Cleeremans, & Patrick Wilken, 531–73. Oxford: Oxford University Press.
  • Evans, Gareth, 1982, The Varieties of Reference, Oxford: Oxford University Press.
  • Faivre, Nathan, Elisa Filevich, Guillermo Solovey, Simone Kuhn, & Olaf Blanke, 2017, “Behavioural, Modeling, and Electrophysiological Evidence for Supramodality in Human Metacognition,” The Journal of Neuroscience, 38 (2): 263–77. doi:10.1523/jneurosci.0322-17.2017
  • Farah, Martha J., 2004, Visual Agnosia: Disorders of Object Recognition and What They Tell Us About Normal Vision, second edition, Cambridge, MA: MIT Press.
  • Fazekas, Peter, & Georgina Nemeth, 2018, “Dream Experiences and the Neural Correlates of Perceptual Consciousness and Cognitive Access,” Philosophical Transactions of the Royal Society B: Biological Sciences, 373 (1755): 20170356. doi:10.1098/rstb.2017.0356
  • Feest, Uljana, 2012, “Introspection as a Method and Introspection as a Feature of Consciousness,” Inquiry, 55 (1): 1–16. doi:10.1080/0020174x.2012.643619
  • –––, 2014, “Phenomenal Experiences, First-Person Methods, and the Artificiality of Experimental Data,” Philosophy of Science, 81 (5): 927–39. doi:10.1086/677689
  • Felleman, Daniel J., & David C. Van Essen, 1991, “Distributed Hierarchical Processing in the Primate Cerebral Cortex,” Cerebral Cortex, 1 (1): 1–47. doi:10.1093/cercor/1.1.1-a
  • Fernández-Espejo, Davinia, & Adrian M. Owen, 2013, “Detecting Awareness after Severe Brain Injury,” Nature Reviews Neuroscience, 14 (11): 801–9. doi:10.1038/nrn3608
  • Fetsch, Christopher R., Roozbeh Kiani, William T. Newsome, & Michael N. Shadlen, 2014, “Effects of Cortical Microstimulation on Confidence in a Perceptual Decision,” Neuron, 83 (4): 797–804. doi:10.1016/j.neuron.2014.07.011
  • Fink, Sascha Benjamin, 2016, “A Deeper Look at the ‘Neural Correlate of Consciousness’,” Frontiers in Psychology, 7 (July): 1044. doi:10.3389/fpsyg.2016.01044
  • Fleming, Stephen M., 2020, “Awareness as Inference in a Higher-Order State Space,” Neuroscience of Consciousness, 2020 (1): niz020. doi:10.1093/nc/niz020
  • –––, 2023a, “Metacognition and Confidence: A Review and Synthesis,” Annual Review of Psychology, 75 (1). doi:10.1146/annurev-psych-022423-032425
  • Fleming, Stephen M., Josefien Huijgen, & Raymond J. Dolan, 2012, “Prefrontal Contributions to Metacognition in Perceptual Decision Making,” The Journal of Neuroscience, 32 (18): 6117–25. doi:10.1523/jneurosci.6489-11.2012
  • Franz, Volker H., 2001, “Action Does Not Resist Visual Illusions,” Trends in Cognitive Sciences, 5 (11): 457–59. doi:10.1007/s00221-002-1364-6
  • Franz, Volker H., & Karl R. Gegenfurtner, 2008, “Grasping Visual Illusions: Consistent Data and No Dissociation,” Cognitive Neuropsychology, 25 (7–8): 920–50. doi:10.1080/02643290701862449
  • Freeman, Alan W., 2005, “Multistage Model for Binocular Rivalry,” Journal of Neurophysiology, 94 (6): 4412–20. doi:10.1152/jn.00557.2005
  • Gardelle, Vincent de, François Le Corre, & Pascal Mamassian, 2016, “Confidence as a Common Currency between Vision and Audition,” edited by Suliann Ben Hamed, PLoS ONE, 11 (1): e0147901–11. doi:10.1371/journal.pone.0147901
  • Gelder, Beatrice de, Marco Tamietto, Geert van Boxtel, Rainer Goebel, Arash Sahraie, Jan van den Stock, Bernard M C Stienen, Lawrence Weiskrantz, & Alan Pegna, 2008, “Intact Navigation Skills after Bilateral Loss of Striate Cortex,” Current Biology, 18 (24): R1128–29. doi:10.1016/j.cub.2008.11.002
  • Giles, Nathan, Hakwan Lau, & Brian Odegaard, 2016, “What Type of Awareness Does Binocular Rivalry Assess?” Trends in Cognitive Sciences, 20 (10): 719–20. doi:10.1016/j.tics.2016.08.010
  • Goldfine, Andrew M., Jonathan D. Victor, Mary M. Conte, Jonathan C. Bardin, & Nicholas D. Schiff, 2011, “Determination of Awareness in Patients with Severe Brain Injury Using EEG Power Spectral Analysis,” Clinical Neurophysiology, 122 (11): 2157–68. doi:10.1016/j.clinph.2011.03.022
  • Goodale, Melvyn A., & A. David Milner, 2004, Sight Unseen: An Exploration of Conscious and Unconscious Vision, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199596966.001.0001
  • Graaf, Tom A. de, Po-Jang Hsieh, & Alexander T. Sack, 2012, “The ‘Correlates’ in Neural Correlates of Consciousness,” Neuroscience and Biobehavioral Reviews, 36 (1): 191–97. doi:10.1016/j.neubiorev.2011.05.012
  • Greenberg, D. L., 2007, “Comment on ‘Detecting Awareness in the Vegetative State.’” Science, 315 (5816): 1221b–1221b. doi:10.1126/science.1135284
  • Grimaldi, Piercesare, Hakwan Lau, & Michele A. Basso, 2015, “There Are Things That We Know That We Know, and There Are Things That We Do Not Know We Do Not Know: Confidence in Decision-Making,” Neuroscience and Biobehavioral Reviews, May, 1–10. doi:10.1016/j.neubiorev.2015.04.006
  • Gross, Steven, 2020, “Probabilistic Representations in Perception: Are There Any, and What Would They Be?” Mind & Language, 35 (3): 377–89. doi:10.1111/mila.12280
  • Haddara, Nadia, & Dobromir Rahnev, 2022, “The Impact of Feedback on Perceptual Decision-Making and Metacognition: Reduction in Bias but No Change in Sensitivity,” Psychological Science, 33 (2): 259–75. doi:10.1177/09567976211032887
  • Haffenden, Angela M., & Melvyn A. Goodale, 1998, “The Effect of Pictorial Illusion on Prehension and Perception,” Journal of Cognitive Neuroscience, 10 (1): 122–36. doi:10.1162/089892998563824
  • Haffenden, Angela M., K. C. Schiff, & Melvyn A. Goodale, 2001, “The Dissociation between Perception and Action in the Ebbinghaus Illusion: Nonillusory Effects of Pictorial Cues on Grasp,” Current Biology, 11 (3): 177–81. doi:10.1016/s0960-9822(01)00023-9
  • Hanson, Jake R., & Sara I. Walker, 2019, “Integrated Information Theory and Isomorphic Feed-Forward Philosophical Zombies,” Entropy, 21 (11): 1073. doi:10.3390/e21111073
  • Harman, Gilbert, 1990, “The Intrinsic Quality of Experience,” Philosophical Perspectives, 4: 31–52. doi:10.2307/2214186
  • Hatamimajoumerd, Elaheh, N. Apurva Ratan Murty, Michael Pitts, & Michael A. Cohen, 2022, “Decoding Perceptual Awareness across the Brain with a No-Report FMRI Masking Paradigm,” Current Biology, 32 (19): 4139–49. doi:10.1016/j.cub.2022.07.068
  • Hautus, Michael J., Neil A. Macmillan, & C. Douglas Creelman, 2021, Detection Theory: A User’s Guide, 3rd edition, London: Routledge. doi:10.4324/9781003203636-17
  • Heal, Jane, 1996, “Simulation, Theory, and Content,” in Theories of Theories of Mind, edited by Peter Carruthers & Peter K. Smith, 75–89. Cambridge: Cambridge University Press. doi:10.1017/cbo9780511597985.006
  • Hesse, Janis Karan, & Doris Y. Tsao, 2020, “A New No-Report Paradigm Reveals That Face Cells Encode Both Consciously Perceived and Suppressed Stimuli,” eLife, 9 (November): e58360. doi:10.7554/elife.58360
  • Hirshorn, Elizabeth A., Yuanning Li, Michael J. Ward, R. Mark Richardson, Julie A. Fiez, & Avniel Singh Ghuman, 2016, “Decoding and Disrupting Left Midfusiform Gyrus Activity during Word Reading,” Proceedings of the National Academy of Sciences, 113 (29): 8162–67. doi:10.1073/pnas.1604126113
  • Histed, Mark H., Amy M. Ni, & John H.R. Maunsell, 2013, “Insights into Cortical Mechanisms of Behavior from Microstimulation Experiments,” Progress in Neurobiology, 103: 115–30. doi:10.1016/j.pneurobio.2012.01.006
  • Horgan, Terence, John Tienson, & George Graham, 2003, “The Phenomenology of First Person Agency,” in Physicalism and Mental Causation: The Metaphysics of Mind and Action, edited by Sven Walter & Heinz-Dieter Heckmann, 323–41. Exeter: Imprint Academic.
  • Irvine, Elizabeth, 2012a, “Old Problems with New Measures in the Science of Consciousness,” The British Journal for the Philosophy of Science, 63 (3): 627–48. doi:10.1093/bjps/axs019
  • –––, 2012b, Consciousness as a Scientific Concept: A Philosophy of Science Perspective, Dordrecht: Springer Netherlands. doi:10.1007/978-94-007-5173-6
  • –––, 2013, “Measures of Consciousness,” Philosophy Compass, 8 (3): 285–97. doi:10.1111/phc3.12016
  • Jackson, Frank, 1982, “Epiphenomenal Qualia,” The Philosophical Quarterly, 32 (127): 127–36. doi:10.2307/2960077
  • James, Thomas W., Jody Culham, G. Keith Humphrey, A. David Milner, & Melvyn A. Goodale, 2003, “Ventral Occipital Lesions Impair Object Recognition but Not Object-directed Grasping: An FMRI Study,” Brain, 126 (11): 2463–75. doi:10.1093/brain/awg248
  • Kammerer, François, & Keith Frankish, 2023, “What Forms Could Introspective Systems Take? A Research Programme,” Journal of Consciousness Studies, 30 (9): 13–48. doi:10.53765/20512201.30.9.013
  • Kanwisher, Nancy, 2000, “Domain Specificity in Face Perception,” Nature Neuroscience, 3 (8): 759–63. doi:10.1038/77664
  • Kapoor, Vishal, Abhilash Dwarakanath, Shervin Safavi, Joachim Werner, Michel Besserve, Theofanis I. Panagiotaropoulos, & Nikos K. Logothetis, 2022, “Decoding Internally Generated Transitions of Conscious Contents in the Prefrontal Cortex without Subjective Reports,” Nature Communications, 13 (1): 1535. doi:10.1038/s41467-022-28897-2
  • Kiani, R., & M. N. Shadlen, 2009, “Representation of Confidence Associated with a Decision by Neurons in the Parietal Cortex,” Science, 324 (5928): 759–64. doi:10.1126/science.1169405
  • Kim, Byounghoon, & Michele A. Basso, 2008, “Saccade Target Selection in the Superior Colliculus: A Signal Detection Theory Approach,” The Journal of Neuroscience, 28 (12): 2991–3007. doi:10.1523/jneurosci.5424-07.2008
  • Kim, Hyoungkyu, Anthony G. Hudetz, Joseph Lee, George A. Mashour, UnCheol Lee, Michael S. Avidan, Tarik Bel-Bahar, et al., 2018, “Estimating the Integrated Information Measure Phi from High-Density Electroencephalography during States of Consciousness in Humans,” Frontiers in Human Neuroscience, 12: 42. doi:10.3389/fnhum.2018.00042
  • King, Sheila M., Paul Azzopardi, Alan Cowey, John Oxbury, & Susan Oxbury, 1996, “The Role of Light Scatter in the Residual Visual Sensitivity of Patients with Complete Cerebral Hemispherectomy,” Visual Neuroscience, 13 (1): 1–13. doi:10.1017/s0952523800007082
  • Klein, Colin, 2017, “Consciousness, Intention, and Command-Following in the Vegetative State,” The British Journal for the Philosophy of Science, 68 (1): 27–54. doi:10.1093/bjps/axv012
  • Knuuttila, Tarja, & Andrea Loettgers, 2016, “Model Templates within and between Disciplines: From Magnets to Gases – and Socio-Economic Systems,” European Journal for Philosophy of Science, 6 (3): 377–400. doi:10.1007/s13194-016-0145-1
  • Ko, Yoshiaki, & Hakwan Lau, 2012, “A Detection Theoretic Explanation of Blindsight Suggests a Link between Conscious Perception and Metacognition,” Philosophical Transactions of the Royal Society B: Biological Sciences, 367 (1594): 1401–11. doi:10.1098/rstb.2011.0380
  • Koch, Christof, 2004, The Quest for Consciousness, Englewood, CO: Roberts and Company Publishers.
  • –––, 2012, Consciousness: Confessions of a Romantic Reductionist, Cambridge, MA: MIT Press.
  • Kovács, Ilona, Thomas V. Papathomas, Ming Yang, & Ákos Fehér, 1996, “When the Brain Changes Its Mind: Interocular Grouping during Binocular Rivalry,” Proceedings of the National Academy of Sciences, 93 (26): 15508–11. doi:10.1073/pnas.93.26.15508
  • Kozuch, Benjamin, 2013, “Prefrontal Lesion Evidence against Higher-Order Theories of Consciousness,” Philosophical Studies, 167 (3): 721–46. doi:10.1007/s11098-013-0123-9
  • –––, 2022, “Underwhelming Force: Evaluating the Neuropsychological Evidence for Higher-Order Theories of Consciousness,” Mind & Language, 37 (5): 790–813. doi:10.1111/mila.12363
  • –––, 2023, “A Legion of Lesions: The Neuroscientific Rout of Higher-Order Thought Theory,” Erkenntnis, first online 23 March 2023. doi:10.1007/s10670-023-00669-4
  • Kravitz, Dwight J., Kadharbatcha S. Saleem, Chris I. Baker, & Mortimer Mishkin, 2011, “A New Neural Framework for Visuospatial Processing,” Nature Reviews Neuroscience, 12 (4): 217–30. doi:10.1038/nrn3008
  • Kriegel, Uriah, 2003, “Consciousness as Intransitive Self-Consciousness: Two Views and an Argument,” Canadian Journal of Philosophy, 33 (1): 103–32. doi:10.1080/00455091.2003.10716537
  • Lamme, Victor A. F., 2006, “Towards a True Neural Stance on Consciousness,” Trends in Cognitive Sciences, 10 (11): 494–501. doi:10.1016/j.tics.2006.09.001
  • –––, 2010, “How Neuroscience Will Change Our View on Consciousness,” Cognitive Neuroscience, 1 (3): 204–20. doi:10.1080/17588921003731586
  • Lau, Hakwan, 2019, “Consciousness, Metacognition, & Perceptual Reality Monitoring,” PsyArXiv, June. doi:10.31234/osf.io/ckbyf
  • –––, 2022, In Consciousness We Trust, Oxford: Oxford University Press.
  • –––, 2023, “What Is a Pseudoscience of Consciousness? Lessons from Recent Adversarial Collaborations,” PsyArXiv. doi:10.31234/osf.io/28z3y
  • Lau, Hakwan, & Richard Brown, 2019, “The Emperor’s New Phenomenology? The Empirical Case for Conscious Experience without First-Order Representations,” in Blockheads! Essays on Ned Block’s Philosophy of Mind and Consciousness, edited by Adam Pautz & Daniel Stoljar, Ch. 11. Cambridge, MA: MIT Press.
  • Lau, Hakwan, & R. E. Passingham, 2006, “Relative Blindsight in Normal Observers and the Neural Correlate of Visual Consciousness,” Proceedings of the National Academy of Sciences, 103 (49): 18763–68. doi:10.1073/pnas.0607716103
  • Lau, Hakwan, & David Rosenthal, 2011, “Empirical Support for Higher-Order Theories of Conscious Awareness,” Trends in Cognitive Sciences, 15 (8): 365–73. doi:10.1016/j.tics.2011.05.009
  • LeDoux, Joseph E., & Richard Brown, 2017, “A Higher-Order Theory of Emotional Consciousness,” Proceedings of the National Academy of Sciences of the United States of America, 114 (10): E2016–25. doi:10.1073/pnas.1619316114
  • LeDoux, Joseph E., Matthias Michel, & Hakwan Lau, 2020, “A Little History Goes a Long Way toward Understanding Why We Study Consciousness the Way We Do Today,” Proceedings of the National Academy of Sciences of the United States of America, 117 (13): 6976–84. doi:10.1073/pnas.1921623117
  • Lee, Andrew Y., 2023, “Degrees of Consciousness,” Noûs, 57 (3): 553–75. doi:10.1111/nous.12421
  • Lee, Geoffrey, & Nico Orlandi, 2022, “Representing Probability in Perception and Experience,” Review of Philosophy and Psychology, 13 (4): 907–45. doi:10.1007/s13164-022-00647-9
  • Lee, Joonyeol, & John H. R. Maunsell, 2009, “A Normalization Model of Attentional Modulation of Single Unit Responses,” PLoS ONE, 4 (2): e4651. doi:10.1371/journal.pone.0004651
  • Leopold, David A., 2012, “Primary Visual Cortex: Awareness and Blindsight,” Annual Review of Neuroscience, 35 (1): 91–109. doi:10.1146/annurev-neuro-062111-150356
  • Leopold, David A., & Nikos K. Logothetis, 1996, “Activity Changes in Early Visual Cortex Reflect Monkeys’ Percepts during Binocular Rivalry,” Nature, 379 (6565): 549–53. doi:10.1038/379549a0
  • Levine, Joseph, 1983, “Materialism and Qualia: The Explanatory Gap,” Pacific Philosophical Quarterly, 64 (4): 354–61. doi:10.1111/j.1468-0114.1983.tb00207.x
  • Li, F. F., R. VanRullen, Christof Koch, & Pietro Perona, 2002, “Rapid Natural Scene Categorization in the near Absence of Attention,” Proceedings of the National Academy of Sciences of the United States of America, 99 (14): 9596–9601. doi:10.1073/pnas.092277599
  • Lin, Chia-Hua, 2018, “Tool Migration: A Framework for Analyzing Cross-Disciplinary Use of Mathematical Constructs,” PhilSci Archive, 1–11
  • Liu, Sirui, Qing Yu, Peter U. Tse, & Patrick Cavanagh, 2019, “Neural Correlates of the Conscious Perception of Visual Location Lie Outside Visual Cortex,” Current Biology, 29 (November): 1–9. doi:10.1016/j.cub.2019.10.033
  • Logothetis, Nikos K., David A. Leopold, & David L. Sheinberg, 1996, “What Is Rivalling during Binocular Rivalry?” Nature, 380 (6575): 621–24. doi:10.1038/380621a0
  • Lumer, Erik D., & Geraint Rees, 1999, “Covariation of Activity in Visual and Prefrontal Cortex Associated with Subjective Visual Perception,” Proceedings of the National Academy of Sciences, 96 (4): 1669–73. doi:10.1073/pnas.96.4.1669
  • Maier, Alexander, Melanie Wilke, Christopher Aura, Charles Zhu, Frank Q. Ye, & David A. Leopold, 2008, “Divergence of FMRI and Neural Signals in V1 during Perceptual Suppression in the Awake Monkey,” Nature Neuroscience, 11 (10): 1193–1200. doi:10.1038/nn.2173
  • Maniscalco, Brian, & Hakwan Lau, 2012, “A Signal Detection Theoretic Approach for Estimating Metacognitive Sensitivity from Confidence Ratings,” Consciousness and Cognition, 21 (1): 422–30. doi:10.1016/j.concog.2011.09.021
  • –––, 2014, “Signal Detection Theory Analysis of Type 1 and Type 2 Data: Meta-D′, Response-Specific Meta-D′, and the Unequal Variance SDT Model,” in The Cognitive Neuroscience of Metacognition, edited by Stephen M. Fleming & Christopher D. Frith, 25–66. Berlin: Springer.
  • Marcel, Anthony J., 2003, “The Sense of Agency: Awareness and Ownership of Action,” in Agency and Self-Awareness: Issues in Philosophy and Psychology, edited by Fiona McPherson, 48–93. Oxford: Oxford University Press.
  • Mashour, George A., Pieter Roelfsema, Jean-Pierre Changeux, & Stanislas Dehaene, 2020, “Conscious Processing and the Global Neuronal Workspace Hypothesis,” Neuron, 105 (5): 776–98. doi:10.1016/j.neuron.2020.01.026
  • Massimini, Marcello, Fabio Ferrarelli, Reto Huber, Steve K. Esser, Harpreet Singh, & Giulio Tononi, 2005, “Breakdown of Cortical Effective Connectivity During Sleep,” Science, 309 (5744): 2228–32. doi:10.1126/science.1117256
  • Mazancieux, Audrey, Stephen M. Fleming, Céline Souchay, & Chris J. A. Moulin, 2020, “Is There a G Factor for Metacognition? Correlations in Retrospective Metacognitive Sensitivity across Tasks,” Journal of Experimental Psychology: General, 149 (9): 1788–99. doi:10.1037/xge0000746
  • Mazancieux, Audrey, Michael Pereira, Nathan Faivre, Pascal Mamassian, Chris J. A. Moulin, & Céline Souchay, 2023, “Towards a Common Conceptual Space for Metacognition in Perception and Memory,” Nature Reviews Psychology, 2 (12): 751–66. doi:10.1038/s44159-023-00245-1
  • Mazzi, Chiara, Chiara Bagattini, & Silvia Savazzi, 2016, “Blind-Sight vs. Degraded-Sight: Different Measures Tell a Different Story,” Frontiers in Psychology, 7: 901. doi:10.3389/fpsyg.2016.00901
  • Mediano, Pedro A. M., Fernando E. Rosas, Daniel Bor, Anil K. Seth, & Adam B. Barrett, 2022, “The Strength of Weak Integrated Information Theory,” Trends in Cognitive Sciences, 26 (8): 646–55. doi:10.1016/j.tics.2022.04.008
  • Mégevand, Pierre, David M. Groppe, Matthew S. Goldfinger, Sean T. Hwang, Peter B. Kingsley, Ido Davidesco, & Ashesh D. Mehta, 2014, “Seeing Scenes: Topographic Visual Hallucinations Evoked by Direct Electrical Stimulation of the Parahippocampal Place Area,” The Journal of Neuroscience, 34 (16): 5399–5405. doi:10.1523/jneurosci.5202-13.2014
  • Mendoza-Halliday, Diego, & Julio C. Martinez-Trujillo, 2017, “Neuronal Population Coding of Perceived and Memorized Visual Features in the Lateral Prefrontal Cortex,” Nature Communications, 8 (May): 15471. doi:10.1038/ncomms15471
  • Merker, Bjorn, Kenneth Williford, & David Rudrauf, 2022, “The Integrated Information Theory of Consciousness: A Case of Mistaken Identity,” Behavioral and Brain Sciences, 45 (e41): 1–63. doi:10.1017/s0140525x21000881
  • Michel, Matthias, 2023, “Confidence in Consciousness Research,” Wiley Interdisciplinary Reviews: Cognitive Science, 14 (2): e1628. doi:10.1002/wcs.1628
  • Michel, Matthias, & Hakwan Lau, 2020, “On the Dangers of Conflating Strong and Weak Versions of a Theory of Consciousness,” Philosophy and the Mind Sciences, 1 (II). doi:10.33735/phimisci.2020.ii.54
  • –––, 2021, “Is Blindsight Possible Under Signal Detection Theory? Comment on Phillips (2021),” Psychological Review, 128 (3): 585–91. doi:10.1037/rev0000266
  • Michel, Matthias, & Jorge Morales, 2020, “Minority Reports: Consciousness and the Prefrontal Cortex,” Mind & Language, 35 (4): 493–513. doi:10.1111/mila.12264
  • Milner, A. David, & Melvyn A. Goodale, 1995, The Visual Brain in Action, New York: Oxford University Press.
  • Mole, C., 2009, “Illusions, Demonstratives, and the Zombie Action Hypothesis,” Mind, 118 (472): 995–1011. doi:10.1093/mind/fzp109
  • Montemayor, Carlos, & Harry Haroutioun Haladjian, 2015, Consciousness, Attention, and Conscious Attention, Cambridge, MA: MIT Press.
  • Monti, Martin M., Audrey Vanhaudenhuyse, Martin R. Coleman, Melanie Boly, John D. Pickard, Luaba Tshibanda, Adrian M. Owen, & Steven Laureys, 2010, “Willful Modulation of Brain Activity in Disorders of Consciousness,” The New England Journal of Medicine, 362 (7): 579–89. doi:10.1056/nejmoa0905370
  • Morales, Jorge, 2023, “Mental Strength: A Theory of Experience Intensity,” Philosophical Perspectives, 37 (1): 248–68. doi:10.1111/phpe.12189
  • –––, forthcoming, “Introspection Is Signal Detection,” The British Journal for the Philosophy of Science.
  • Morales, Jorge, & Hakwan Lau, 2020, “The Neural Correlates of Consciousness,” in Oxford Handbook of the Philosophy of Consciousness, edited by Uriah Kriegel, 233–60. Oxford University Press.
  • –––, 2022, “Confidence Tracks Consciousness,” in Qualitative Consciousness Themes from the Philosophy of David Rosenthal, edited by Josh Weisberg, 91–108. Cambridge: Cambridge University Press.
  • Morales, Jorge, Hakwan Lau, & Stephen M. Fleming, 2018, “Domain-General and Domain-Specific Patterns of Activity Supporting Metacognition in Human Prefrontal Cortex,” The Journal of Neuroscience, 38 (14): 3534–46. doi:10.1523/jneurosci.2360-17.2018
  • Morrison, John, 2016, “Perceptual Confidence,” Analytic Philosophy, 57 (1): 15–48. doi:10.1111/phib.12077
  • Munton, Jessie, 2016, “Visual Confidences and Direct Perceptual Justification,” Philosophical Topics, 44 (2): 301–26. doi:10.5840/philtopics201644225
  • Naccache, Lionel, Jean-Pierre Changeux, Theofanis I Panagiotaropoulos, & Stanislas Dehaene, 2021, “Why Intracranial Electrical Stimulation of the Human Brain Suggests an Essential Role for Prefrontal Cortex in Conscious Processing: A Commentary on Raccah et al.,” PsyArXiv. doi:10.31219/osf.io/zrqp8
  • Nachev, Parashkev, & Masud Husain, 2007, “Comment on ‘Detecting Awareness in the Vegetative State,’” Science, 315 (5816): 1221a. doi:10.1126/science.1135096
  • Nagel, Thomas, 1974, “What Is It Like to Be a Bat?” The Philosophical Review, 83 (4): 435–50. doi:10.2307/2183914
  • Newsome, William T., & Edmond B. Paré, 1988, “A Selective Impairment of Motion Perception Following Lesions of the Middle Temporal Visual Area (MT),” The Journal of Neuroscience, 8 (6): 2201–11. doi:10.1523/jneurosci.08-06-02201.1988
  • Ngo, Trung T., Steven M. Miller, Guang B. Liu, & John D. Pettigrew, 2000, “Binocular Rivalry and Perceptual Coherence,” Current Biology, 10 (4): R134–36. doi:10.1016/s0960-9822(00)00399-7
  • Nichols, Shaun, & Stephen P. Stich, 2003, Mindreading: An Integrated Account of Pretence, Self-Awareness, and Understanding Other Minds, Oxford: Oxford University Press. doi:10.1093/0198236107.001.0001
  • Nieder, Andreas, Lysann Wagener, & Paul Rinnert, 2020, “A Neural Correlate of Sensory Consciousness in a Corvid Bird,” Science, 369 (6511): 1626–29. doi:10.1126/science.abb1447
  • Noble, Stephanie, Joshua Curtiss, Luiz Pessoa, & Dustin Scheinost, 2023, “The Tip of the Iceberg: A Call to Embrace Anti-Localizationism in Human Neuroscience Research,” PsyArXiv, November. doi:10.31234/osf.io/9eqh6
  • Noë, Alva, & E. Thompson, 2004, “Are There Neural Correlates of Consciousness?” Journal of Consciousness Studies, 11 (1): 3–28
  • Noy, N., S. Bickel, E. Zion-Golumbic, M. Harel, T. Golan, I. Davidesco, C. A. Schevon, et al., 2015, “Ignition’s Glow: Ultra-Fast Spread of Global Cortical Activity Accompanying Local ‘Ignitions’ in Visual Cortex during Conscious Visual Perception,” Consciousness and Cognition, 35 (September): 206–24. doi:10.1016/j.concog.2015.03.006
  • Odegaard, Brian, Min Yu Chang, Hakwan Lau, & Sing-Hang Cheung, 2018, “Inflation versus Filling-in: Why We Feel We See More than We Actually Do in Peripheral Vision,” Philosophical Transactions of the Royal Society B: Biological Sciences, 373 (1755): 20170345. doi:10.1098/rstb.2017.0345
  • Odegaard, Brian, Robert T. Knight, & Hakwan Lau, 2017, “Should A Few Null Findings Falsify Prefrontal Theories Of Conscious Perception?” The Journal of Neuroscience, 37 (40): 9593–9602. doi:10.1101/122267
  • Oizumi, Masafumi, Larissa Albantakis, & Giulio Tononi, 2014, “From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0,” PLoS Computational Biology, 10 (5): e1003588. doi:10.1371/journal.pcbi.1003588.s004
  • O’Shea, Robert Paul, 2011, “Binocular Rivalry Stimuli Are Common but Rivalry Is Not,” Frontiers in Human Neuroscience, 5: 148. doi:10.3389/fnhum.2011.00148
  • Overgaard, Morten, & Peter Fazekas, 2016, “Can No-Report Paradigms Extract True Correlates of Consciousness?” Trends in Cognitive Sciences, 20 (4): 241–42. doi:10.1016/j.tics.2016.01.004
  • Overgaard, Morten, Katrin Fehl, Kim Mouridsen, Bo Bergholt, & Axel Cleeremans, 2008, “Seeing without Seeing? Degraded Conscious Vision in a Blindsight Patient,” PLoS ONE, 3 (8): e3028. doi:10.1371/journal.pone.0003028.t001
  • Overgaard, Morten, Julian Rote, Kim Mouridsen, & Thomas Zoëga Ramsøy, 2006, “Is Conscious Perception Gradual or Dichotomous? A Comparison of Report Methodologies during a Visual Task,” Consciousness and Cognition, 15 (4): 700–708. doi:10.1016/j.concog.2006.04.002
  • Owen, Adrian M., Martin R. Coleman, Melanie Boly, Matthew H. Davis, Steven Laureys, Dietsje Jolles, & John D. Pickard, 2007, “Response to Comments on ‘Detecting Awareness in the Vegetative State.’” Science, 315 (5816): 1221–1221. doi:10.1126/science.1135583
  • Owen, Adrian M., Martin R. Coleman, Melanie Boly, Matthew H. Davis, Steven Laureys, & John D. Pickard, 2006, “Detecting Awareness in the Vegetative State,” Science, 313 (5792): 1402–1402. doi:10.1126/science.1130197
  • Panagiotaropoulos, Theofanis I., Abhilash Dwarakanath, & Vishal Kapoor, 2020, “Prefrontal Cortex and Consciousness: Beware of the Signals,” Trends in Cognitive Sciences, 24 (5): 343–44. doi:10.1016/j.tics.2020.02.005
  • Parker, A. J., & W. T. Newsome, 1998, “Sense and the Single Neuron: Probing the Physiology of Perception,” Annual Review of Neuroscience, 21 (1): 227–77. doi:10.1146/annurev.neuro.21.1.227
  • Parvizi, Josef, Corentin Jacques, Brett L. Foster, Nathan Witthoft, Vinitha Rangarajan, Kevin S. Weiner, & Kalanit Grill-Spector, 2012, “Electrical Stimulation of Human Fusiform Face-Selective Regions Distorts Face Perception,” Journal of Neuroscience, 32 (43): 14915–20. doi:10.1523/jneurosci.2609-12.2012
  • Penfield, Wilder, & Edwin Boldrey, 1937, “Somatic Motor and Sensory Representation in the Cerebral Cortex of Man as Studied by Electrical Stimulation,” Brain, 60 (4): 389–443. doi:10.1093/brain/60.4.389
  • Penfield, Wilder, & Phanor Perot, 1963, “The Brain’s Record of Auditory and Visual Experience: A Final Summary and Discussion,” Brain, 86 (4): 595–696. doi:10.1093/brain/86.4.595
  • Peng, Yueqing, Sarah Gillis-Smith, Hao Jin, Dimitri Tränkner, Nicholas J. P. Ryba, & Charles S. Zuker, 2015, “Sweet and Bitter Taste in the Brain of Awake Behaving Animals,” Nature, 527 (7579): 512–15. doi:10.1038/nature15763
  • Persaud, Navindra, Peter McLeod, & Alan Cowey, 2007, “Post-Decision Wagering Objectively Measures Awareness,” Nature Neuroscience, 10 (2): 257–61. doi:10.1038/nn1840
  • Pessoa, Luiz, 2022, The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together, Cambridge, MA: MIT Press.
  • –––, 2023, “The Entangled Brain,” Journal of Cognitive Neuroscience, 35 (3): 349–60. doi:10.1162/jocn_a_01908
  • Peters, Megan A. K., 2022, “Towards Characterizing the Canonical Computations Generating Phenomenal Experience,” Neuroscience & Biobehavioral Reviews, 142: 104903. doi:10.1016/j.neubiorev.2022.104903
  • Peters, Megan A. K., & Hakwan Lau, 2015, “Human Observers Have Optimal Introspective Access to Perceptual Processes Even for Visually Masked Stimuli,” eLife, 4: e09651. doi:10.7554/elife.09651
  • Phillips, Ian, 2011, “Perception and Iconic Memory: What Sperling Doesn’t Show,” Mind & Language, 26 (4): 381–411. doi:10.1111/j.1468-0017.2011.01422.x
  • –––, 2016, “Consciousness and Criterion: On Block’s Case for Unconscious Seeing,” Philosophy and Phenomenological Research, 93 (2): 419–51. doi:10.1111/phpr.12224
  • –––, 2021, “Blindsight Is Qualitatively Degraded Conscious Vision,” Psychological Review, 3 (128): 558–84. doi:10.1037/rev0000254
  • Phillips, Ian, & Jorge Morales, 2020, “The Fundamental Problem with No-Cognition Paradigms,” Trends in Cognitive Sciences, 24 (3): 165–67. doi:10.1016/j.tics.2019.11.010
  • Pisella, Laure, Lauren Sergio, Annabelle Blangero, Héloïse Torchin, Alain Vighetto, & Yves Rossetti, 2009, “Optic Ataxia and the Function of the Dorsal Stream: Contributions to Perception and Action,” Neuropsychologia, 47 (14): 3033–44. doi:10.1016/j.neuropsychologia.2009.06.020
  • Polonsky, Alex, Randolph Blake, Jochen Braun, & David J. Heeger, 2000, “Neuronal Activity in Human Primary Visual Cortex Correlates with Perception during Binocular Rivalry,” Nature Neuroscience, 3 (11): 1153–59. doi:10.1038/80676
  • Pouget, Alexandre, Peter Dayan, & Richard S. Zemel, 2003, “Inference and Computation with Population Codes,” Annual Review of Neuroscience, 26 (1): 381–410. doi:10.1146/annurev.neuro.26.041002.131112
  • Pouget, Alexandre, Jan Drugowitsch, & Adam Kepecs, 2016, “Confidence and Certainty: Distinct Probabilistic Quantities for Different Goals,” Nature Neuroscience, 19 (3): 366–74. doi:10.1038/nn.4240
  • Prinz, Jesse, 2012, The Conscious Brain, New York: Oxford University Press.
  • Pulvermüller, Friedemann, 2005, “Brain Mechanisms Linking Language and Action,” Nature Reviews Neuroscience, 6 (7): 576–82. doi:10.1038/nrn1706
  • Raccah, Omri, Ned Block, & Kieran C. R. Fox, 2021, “Does the Prefrontal Cortex Play an Essential Role in Consciousness? Insights from Intracranial Electrical Stimulation of the Human Brain,” The Journal of Neuroscience, 41 (10): 2076–87. doi:10.1523/jneurosci.1141-20.2020
  • Ramsøy, Thomas Zoëga, & Morten Overgaard, 2004, “Introspection and Subliminal Perception,” Phenomenology and the Cognitive Sciences, 3 (1): 1–23. doi:10.1023/b:phen.0000041900.30172.e8
  • Rausch, Manuel, & Michael Zehetleitner, 2016, “Visibility Is Not Equivalent to Confidence in a Low Contrast Orientation Discrimination Task,” Frontiers in Psychology, 7 (e1004519): 47. doi:10.1093/brain/121.1.25
  • Rescorla, M., 2015, “Bayesian Perceptual Psychology,” in The Oxford Handbook of Philosophy of Perception, edited by Mohan Matthen, 694–716. Oxford: Oxford University Press.
  • Romo, Ranulfo, Adrián Hernández, Antonio Zainos, & Emilio Salinas, 1998, “Somatosensory Discrimination Based on Cortical Microstimulation,” Nature, 392 (6674): 387–90. doi:10.1038/32891
  • Romo, Ranulfo, Adrián Hernández, Antonio Zainos, Carlos D. Brody, & Luis Lemus, 2000, “Sensing without Touching: Psychophysical Performance Based on Cortical Microstimulation,” Neuron, 26 (1): 273–78. doi:10.1016/s0896-6273(00)81156-3
  • Rosenthal, David, 2002, “Explaining Consciousness,” in Philosophy of Mind: Classical and Contemporary Readings, edited by David J. Chalmers, 109–31. Oxford: Oxford University Press.
  • –––, 2005, Consciousness and Mind, New York: Oxford University Press.
  • –––, 2011, “Exaggerated Reports: Reply to Block,” Analysis, 71 (3): 431–37. doi:10.1093/analys/anr039
  • –––, 2019, “Consciousness and Confidence,” Neuropsychologia, 128: 255–65. doi:10.1016/j.neuropsychologia.2018.01.018
  • Rossetti, Yves, Laure Pisella, & Alain Vighetto, 2003, “Optic Ataxia Revisited: Visually Guided Action versus Immediate Visuomotor Control,” Experimental Brain Research, 153 (2): 171–79. doi:10.1007/s00221-003-1590-6
  • Rounis, Elisabeth, Brian Maniscalco, John C. Rothwell, Richard E. Passingham, & Hakwan Lau, 2010, “Theta-Burst Transcranial Magnetic Stimulation to the Prefrontal Cortex Impairs Metacognitive Visual Awareness,” Cognitive Neuroscience, 1 (3): 165–75. doi:10.1080/17588921003632529
  • Rouy, Martin, Vincent de Gardelle, Gabriel Reyes, Jérôme Sackur, Jean Christophe Vergnaud, Elisa Filevich, & Nathan Faivre, 2022, “Metacognitive Improvement: Disentangling Adaptive Training From Experimental Confounds,” Journal of Experimental Psychology: General, 151 (9): 2083–2091. doi:10.1037/xge0001185
  • Ruff, Douglas A., & Marlene R. Cohen, 2014, “Relating the Activity of Sensory Neurons to Perception,” in The Cognitive Neurosciences, edited by Michael S. Gazzaniga & George R Mangun, 5th ed., 349–62. Cambridge, MA: MIT Press.
  • Salzman, C. Daniel, Kenneth H. Britten, & William T. Newsome, 1990, “Cortical Microstimulation Influences Perceptual Judgements of Motion Direction,” Nature, 346 (6280): 174–77. doi:10.1038/346174a0
  • Sandberg, Kristian, Bert Timmermans, Morten Overgaard, & Axel Cleeremans, 2010, “Measuring Consciousness: Is One Measure Better than the Other?” Consciousness and Cognition, 19 (4): 1069–78. doi:10.1016/j.concog.2009.12.013
  • Schalk, Gerwin, Christoph Kapeller, Christoph Guger, Hiroshi Ogawa, Satoru Hiroshima, Rosa Lafer-Sousa, Zeynep M. Saygin, Kyousuke Kamada, & Nancy Kanwisher, 2017, “Facephenes and Rainbows: Causal Evidence for Functional and Anatomical Specificity of Face and Color Processing in the Human Brain,” Proceedings of the National Academy of Sciences, 114 (46): 12285–90. doi:10.1073/pnas.1713447114
  • Schenk, Thomas, & Robert D. McIntosh, 2010, “Do We Have Independent Visual Streams for Perception and Action?” Cognitive Neuroscience, 1 (1): 52–62. doi:10.1080/17588920903388950
  • Schwitzgebel, Eric, 2008, “The Unreliability of Naive Introspection,” The Philosophical Review, 117 (2): 245–73. doi:10.1215/00318108-2007-037
  • –––, 2011, Perplexities of Consciousness, Cambridge, MA: MIT Press.
  • Shea, Nicholas, 2014, “Neural Signaling of Probabilistic Vectors,” Philosophy of Science, 81 (5): 902–13. doi:10.1086/678354
  • –––, 2018, Representation in Cognitive Science, Oxford: Oxford University Press.
  • Shea, Nicholas, & Tim Bayne, 2010, “The Vegetative State and the Science of Consciousness,” The British Journal for the Philosophy of Science, 61 (3): 459–84. doi:10.1093/bjps/axp046
  • Shekhar, Medha, & Dobromir Rahnev, 2018, “Distinguishing the Roles of Dorsolateral and Anterior PFC in Visual Metacognition,” The Journal of Neuroscience, 38 (22): 5078–87. doi:10.1523/jneurosci.3484-17.2018
  • Siegel, Susanna, 2022, “How Can Perceptual Experiences Explain Uncertainty?” Mind & Language, 37 (2): 134–58. doi:10.1111/mila.12348
  • Singer, Emily, 2006, “Big Brain Thinking,” MIT Technology Review, February 10, 2006. [Singer 2006 available online]
  • Sitt, Jacobo D., Jean-Rémi King, Lionel Naccache, & Stanislas Dehaene, 2013, “Ripples of Consciousness,” Trends in Cognitive Sciences, 17 (11): 552–54. doi:10.1016/j.tics.2013.09.003
  • Smeets, Jeroen B. J., & Eli Brenner, 2006, “10 Years of Illusions,” Journal of Experimental Psychology: Human Perception and Performance, 32 (6): 1501–4. doi:10.1037/0096-1523.32.6.1501
  • Smithies, Declan, & Daniel Stoljar (eds.), 2012, Introspection and Consciousness, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199744794.001.0001
  • Spener, Maja, 2015, “Calibrating Introspection,” Philosophical Issues, 25 (1): 300–321. doi:10.1111/phis.12062
  • –––, 2018, “Introspecting in the 20th Century,” in Philosophy of Mind in the Twentieth and Twenty-First Centuries, edited by Amy Kind, 148–74. London: Routledge.
  • Sperling, George, 1960, “The Information Available in Brief Visual Presentations,” Psychological Monographs: General and Applied, 74 (11): 1–29. doi:10.1037/h0093759
  • Stoerig, Petra, Martin Hübner, & Ernst Pöppel, 1985, “Signal Detection Analysis of Residual Vision in a Field Defect Due to a Post-Geniculate Lesion,” Neuropsychologia, 23 (5): 589–99. doi:10.1016/0028-3932(85)90061-2
  • Tanner, Wilson P., & John A. Swets, 1954, “A Decision-Making Theory of Visual Detection,” Psychological Review, 61 (6): 401–9. doi:10.1037/h0058700
  • Tarr, Michael J., & Isabel Gauthier, 2000, “FFA: A Flexible Fusiform Area for Subordinate-Level Visual Processing Automatized by Expertise,” Nature Neuroscience, 3 (8): 764–69. doi:10.1038/77666
  • Tong, Frank, Ming Meng, & Randolph Blake, 2006, “Neural Bases of Binocular Rivalry,” Trends in Cognitive Sciences, 10 (11): 502–11. doi:10.1016/j.tics.2006.09.003
  • Tononi, Giulio, 2004, “An Information Integration Theory of Consciousness,” BMC Neuroscience, 5 (42). doi:10.1186/1471-2202-5-42
  • –––, 2008, “Consciousness as Integrated Information: A Provisional Manifesto,” The Biological Bulletin, 215 (3): 216–42. doi:10.2307/25470707
  • Tononi, Giulio, Melanie Boly, Marcello Massimini, & Christof Koch, 2016, “Integrated Information Theory: From Consciousness to Its Physical Substrate,” Nature Reviews Neuroscience, 17 (7): 450–61. doi:10.1038/nrn.2016.44
  • Tse, P. U., S. Martinez-Conde, A. A. Schlegel, & S. L. Macknik, 2005, “Visibility, Visual Awareness, and Visual Masking of Simple Unattended Targets Are Confined to Areas in the Occipital Cortex beyond Human V1/V2,” Proceedings of the National Academy of Sciences of the United States of America, 102 (47): 17178–83. doi:10.1073/pnas.0508010102
  • Tye, Michael, 1992, “Visual Qualia and Visual Content,” in The Contents of Experience, edited by Tim Crane, 158–76. Cambridge: Cambridge University Press.
  • Ungerleider, Leslie, & Mortimer Mishkin, 1982, “Two Cortical Visual Systems,” in Analysis of Visual Behavior, edited by David J. Ingle, Melvyn A. Goodale, & Richard J.W. Mansfield, 549–586. Cambridge, MA: MIT Press.
  • Urbanski, Marika, Olivier A. Coubard, & Clémence Bourlon, 2014, “Visualizing the Blind Brain: Brain Imaging of Visual Field Defects from Early Recovery to Rehabilitation Techniques,” Frontiers in Integrative Neuroscience, 8: 74. doi:10.3389/fnint.2014.00074
  • Vance, Jonna, 2021, “Precision and Perceptual Clarity,” Australasian Journal of Philosophy, 99 (2): 379–95. doi:10.1080/00048402.2020.1767663
  • Vignal, J.P., P. Chauvel, & E. Halgren, 2000, “Localised Face Processing by the Human Prefrontal Cortex: Stimulation-Evoked Hallucinations of Faces,” Cognitive Neuropsychology, 17 (1–3): 281–91. doi:10.1080/026432900380616
  • Vignemont, F. de, & P. Fourneret, 2004, “The Sense of Agency: A Philosophical and Empirical Review of the ‘Who’ System,” Consciousness and Cognition, 13 (1): 1–19. doi:10.1016/s1053-8100(03)00022-9
  • Wallhagen, Morgan, 2007, “Consciousness and Action: Does Cognitive Science Support (Mild) Epiphenomenalism?” The British Journal for the Philosophy of Science, 58 (3): 539–61. doi:10.1093/bjps/axm023
  • Weiskrantz, Lawrence, 1986, Blindsight: A Case Study and Implications, Oxford: Oxford University Press.
  • Westlin, Christiana, Jordan E. Theriault, Yuta Katsumi, Alfonso Nieto-Castanon, Aaron Kucyi, Sebastian F. Ruf, Sarah M. Brown, et al., 2023, “Improving the Study of Brain-Behavior Relationships by Revisiting Basic Assumptions,” Trends in Cognitive Sciences, 27 (3): 246–257. doi:10.1016/j.tics.2022.12.015
  • Wilson, Hugh R., 2003, “Computational Evidence for a Rivalry Hierarchy in Vision,” Proceedings of the National Academy of Sciences, 100 (24): 14499–503. doi:10.1073/pnas.2333622100
  • Working Party of the Royal College of Physicians, 2003, “The Vegetative State: Guidance on Diagnosis and Management,” Clinical Medicine, 3 (3): 249–54. doi:10.7861/clinmedicine.3-3-249
  • Wu, Wayne, 2013, “The Case for Zombie Agency,” Mind, 122 (485): 217–30. doi:10.1093/mind/fzt030
  • –––, 2014a, “Against Division: Consciousness, Information and the Visual Streams,” Mind & Language, 29 (4): 383–406. doi:10.1111/mila.12056
  • –––, 2014b, Attention, London: Routledge.
  • –––, 2017, “Attention and Perception: A Necessary Connection?” in Current Controversies in Philosophy of Perception, edited by Bence Nanay, 148–62. New York: Routledge.
  • –––, 2023, Movements of the Mind: A Theory of Attention, Intention and Action, Oxford: Oxford University Press.
  • Young, Michael J., Yelena G. Bodien, Joseph T. Giacino, Joseph J. Fins, Robert D. Truog, Leigh R. Hochberg, & Brian L Edlow, 2021, “The Neuroethics of Disorders of Consciousness: A Brief History of Evolving Ideas,” Brain, 144 (11): 3291–3310. doi:10.1093/brain/awab290
  • Zehetleitner, Michael, & Manuel Rausch, 2013, “Being Confident without Seeing: What Subjective Measures of Visual Consciousness Are About,” Attention, Perception, & Psychophysics, 75 (7): 1406–26. doi:10.3758/s13414-013-0505-2
  • Zihl, J., D. von Cramon, & N. Mai, 1983, “Selective Disturbance of Movement Vision after Bilateral Brain Damage,” Brain, 106 (2): 313–40. doi:10.1093/brain/106.2.313
  • Zou, Jinyou, Sheng He, & Peng Zhang, 2016, “Binocular Rivalry from Invisible Patterns,” Proceedings of the National Academy of Sciences of the United States of America, 113 (30): 8408–13. doi:10.1073/pnas.1604816113

Other Internet Resources

Movies

  • Movie S1, from Hirshorn et al. 2016: electrical stimulation session with P2. This movie shows P2’s word-naming ability completely disrupted during high stimulation but intact during low stimulation.
  • Movie S2, from Hirshorn et al. 2016: electrical stimulation session with P1. This movie shows P1 misnaming letters under high stimulation but making no errors during low stimulation.
  • Movie S3, from Parvizi et al. 2013: electrical stimulation session. This movie shows a patient experiencing distorted faces, with no distortion of other objects and none during sham stimulation.

Acknowledgments

Thanks to Hakwan Lau, Susanna Siegel, and especially Dave Chalmers, who refereed the article. Special thanks to Mark Sprevak and David Barak for organizing discussion groups on the entry at the University of Edinburgh and at Columbia University respectively, and for their feedback. Thanks to Doug Ruff for extensive feedback on central sections. Among many others, thanks for comments to: Jake Berger, Ned Block, Richard Brown, Alessandra Buccella, Denis Buehler, Tony Cheng, Mazviita Chirimuuta, Andy Clark, Sam Clarke, Carrie Figdor, Cressida Gaukroger, Michelle Liu, Chris Mole, John Morrison, Will Nalls, David Papineau, David Rosenthal, Ian Phillips, Adina Roskies, Forrest Schreick, Nick Shea, and Cecily Whiteley.

Copyright © 2024 by
Wayne Wu <waynewu@andrew.cmu.edu>
Jorge Morales <j.morales@northeastern.edu>
