Neuroethics

First published Wed Feb 10, 2016; substantive revision Wed Mar 3, 2021

Neuroethics is an interdisciplinary field focusing on ethical issues raised by our increased and constantly improving understanding of the brain and our ability to monitor and influence it.

1. The rise and scope of neuroethics

Neuroethics focuses on ethical issues raised by our continually improving understanding of the brain, and by consequent improvements in our ability to monitor and influence brain function. Significant attention to neuroethics can be traced to 2002, when the Dana Foundation organized a meeting of neuroscientists, ethicists, and other thinkers, entitled Neuroethics: Mapping the Field. A participant at that meeting, columnist and wordsmith William Safire, is often credited with introducing and establishing the meaning of the term “neuroethics”, defining it as ‘the examination of what is right and wrong, good and bad about the treatment of, perfection of, or unwelcome invasion of and worrisome manipulation of the human brain’ (Marcus, 2002, p.5). Others contend that the word “neuroethics” was in use prior to this (Illes, 2003; Racine, 2010), although all agree that these earlier uses did not employ it in a disciplinary sense, or to refer to the entirety of the ethical issues raised by neuroscience.

Another attendee at that initial meeting, Adina Roskies, in response to a perceived lack of recognition of the potential novelty of neuroethics, penned “Neuroethics for the new millennium” (Roskies, 2002), an article in which she proposed a bipartite division of neuroethics into the “ethics of neuroscience”, which encompasses the kinds of ethical issues raised by Safire, and “the neuroscience of ethics”, thus suggesting an extension of the scope of neuroethics to encompass our burgeoning understanding of the biological basis of ethical thought and behavior and the ways in which this could itself influence and inform our ethical thinking. This broadening of the scope of neuroethics highlights the obvious and not-so-obvious ways that understanding our own moral thinking might affect our moral views; it is one aspect of neuroethics that distinguishes it from traditional bioethics. Another way of characterizing the field is as a study of ethical issues arising from what we can do to the brain (e.g. with neurotechnologies) and from what we know about it (including, for example, understanding the basis of ethical behavior).

Although Roskies’ definition remains influential, it has been challenged in various ways. Some have argued that neuroethics should not be limited to the neuroscience of ethics, but rather be broadened to the cognitive science of ethics (Levy, personal communication), since so much work that enables us to understand the brain takes place in disciplines outside of neuroscience, strictly defined. This is in fact in the spirit of the original proposal, since it has been widely recognized that the brain sciences encompass a wide array of disciplines, methods, and questions. However, the most persistent criticisms have been from those who have questioned whether the neuroscience of ethics should be considered a part of neuroethics at all: they argue that understanding our ethical faculties is a scientific and not an ethical issue, and thus should not be part of neuroethics. This argument is usually followed by a denial that neuroethics is sufficiently distinct from traditional bioethics to warrant being called a discipline in its own right.

The response to these critics is different: Whether or not these various branches of inquiry form a natural kind or are themselves a focus of ethical analysis is quite beside the point. Neuroethics is porous. One cannot successfully engage with many of the ethical issues without also understanding the science. In addition, academic or intellectual disciplines are at least in part (if not entirely) social constructs. And in this case the horse is out of the barn: It is clear that interesting and significant work is being pursued regarding the brain bases of ethical thought and behavior, and that this theoretical understanding has influenced, and has the potential to influence, our own thinking about ethics and our ethical practices. That neuroethics exists is undeniable: Neuroethical lines of research have borne interesting fruit over the last 10–15 years; neuroethics is now recognized as an area of study both nationally and internationally; neuroethics courses are taught at many universities; and training programs, professional societies, and research centers for neuroethics have already been established. The NIH BRAIN Initiative has devoted considerable resources to encouraging neuroscientific projects that incorporate neuroethical projects and analyses. Neuroethics is a discipline in its own right in part because we already structure our practices in ways that recognize it as such. What is most significant about neuroethics is not whether both the ethics of neuroscience and the neuroscience of ethics are given the same overarching disciplinary name, but that there are people working on both endeavors and that they are in dialogue (and sometimes, the very same people do both).

Of course, to the extent that neuroethicists ask questions about disease, treatment, and so on, the questions will look familiar, and for answers they can and should look to extant work in traditional bioethics so as not to reinvent the wheel. But, ultimately, Farah is correct in saying that “New ethical issues are arising as neuroscience gives us unprecedented ways to understand the human mind and to predict, influence, and even control it. These issues lead us beyond the boundaries of bioethics into the philosophy of mind, psychology, theology, law and neuroscience itself. It is this larger set of issues that has…earned it a name of its own” (Farah 2010, p. 2).

2. The ethics of neuroscience

Neuroethics is driven by neurotechnologies: it is concerned with the ethical questions that attend the development and effects of novel neurotechnologies, as well as other ethical and philosophical issues that arise from our growing understanding of how brains give rise to the people that we are and the social structures that we inhabit and create. These questions are intimately bound up with scientific questions about what kinds of knowledge can be acquired with particular techniques: what are the scope and limits of what a technique can tell us? With many new techniques, answers to these questions are obscure not only to the lay public, but often to the scientists themselves. The uncertainty about the reach of these technologies adds to the challenge of grappling with the ethical issues raised.

Many new neurotechnologies enable us to monitor brain processes and increasingly, to understand how the brain gives rise to certain behaviors; others enable us to intervene in these processes, to change and perhaps to control behaviors, traits, or abilities. Although it will be impossible to fully canvass the range of questions neuroethics has thus far contemplated, discussion of the issues raised by a few neurotechnologies will allow me to illustrate the range of questions neuroethics entertains. The following is a non-exhaustive list of topics that fall under the general rubric of neuroethics.

2.1 The ethics of enhancement

While medicine’s traditional goal of treating illness is pursued by the development of drugs and other treatments that counteract the detrimental effects of disease or insult, the same kinds of compounds and methods that are being developed to treat disease may also enhance normal cognitive functioning. We already possess the ability to improve some aspects of cognition above baseline, and will certainly develop other ways of doing so. Thus, a prominent topic in neuroethics is the ethics of neuroenhancement: What are the arguments for and against the use of neurotechnologies to enhance one’s brain’s capacities and functioning?

Proponents of enhancement are sometimes called “transhumanists,” and opponents are identified as “bioconservatives”. These value-laden appellations may unnecessarily polarize a debate that need not pit extreme viewpoints against each other, and that offers many nuanced intermediate positions that recognize shared values (Parens, 2005) and make room for embracing the benefits of enhancement while recognizing the need for some type of regulation (e.g. Lin and Allhoff, 2008). The relevance of this debate itself depends to some extent upon a philosophical issue familiar to traditional bioethicists: the notorious difficulty of identifying the line between disease and normal function, and the corresponding difference between treatment and enhancement. However, despite the difficulty attending the principled drawing of this line, there are already clear instances in which a technology such as a drug is used with the aim of improving a capacity or behavior that is by no means clinically dysfunctional, or with the goal of improving a capacity beyond the range of normal functioning. One common example is the use, now widespread on college campuses and beyond, of methylphenidate, a stimulant typically prescribed for the treatment of ADHD. Known by the brand name Ritalin, methylphenidate has been shown to improve performance on working memory, episodic memory and inhibitory control tasks. Many students use it as a study aid, and the ethical standing of such off-label use is a focus of debate among neuroethicists (Sahakian, 2007; Greely et al., 2008).

As in the example above, the enhancements neuroethicists most often discuss are cognitive enhancements: technologies that allow normal people to function cognitively at a higher level than they might without use of the technology (Knafo and Venero, 2015). One standing theoretical issue for neuroethics is a careful and precise articulation of whether, how and why cognitive enhancement has a philosophical status different than any other kind of enhancement, such as enhancement of physical capacities by the use of steroids (Dresler, 2019).

Often overlooked are other interesting potential neuroenhancements. These are less frequently discussed than cognitive enhancements, but just as worthy of consideration. They include social/moral enhancements, such as the use of oxytocin to enhance pro-social behavior, and other noncognitive but biological enhancements, such as potential physical performance enhancers controlled by brain-computer interfaces (BCIs) (see, e.g. Savulescu and Persson, 2012; Douglas, 2008; Dubljević and Racine, 2017; Annals of NYAC, 2004). In many ways, discussions regarding these kinds of enhancement effectively recapitulate the cognitive enhancement debate, but in some respects they raise different concerns and prompt different arguments.

2.1.1 Arguments for Enhancement

Naturalness: Although the aim of cognitive enhancement may at first seem ethically questionable at best, it is plausible that humans naturally engage in many forms of enhancement, including cognitive enhancement. Indeed, we typically applaud and value these efforts. After all, the aim of education is to cognitively enhance students (which, we now understand, occurs by changing their brains), and we look askance at those who devalue this particular enhancement, rather than at those who embrace it. So some kinds of cognitive enhancement are routine and unremarkable. Proponents of neuroenhancement will argue that there is no principled difference between the enhancements we routinely engage in, and enhancement by use of drugs or other neurotechnologies. Many in fact argue that we are a species whose nature it is to develop and use technology for augmenting our capacities, and that continual pursuit of enhancement is a mark of the human.

Cognitive liberty: Those who believe that “cognitive liberty” (see section 2.2 below) is a fundamental right argue that an important element of the autonomy at stake in cognitive liberty is the liberty to determine for ourselves what to do with our minds and to them, including cognitive enhancement, if we so choose. Although many who champion “cognitive liberty” do so in the context of a strident political libertarianism (e.g. Boire, 2001), one can recognize the value of cognitive liberty without swallowing an entire political agenda. So, for example, even if we think that there is a prima facie right to determine our own cognitive states, there may be justifiable limits to that right. More work needs to be done to establish the boundaries of the cognitive liberty we ought to safeguard.

Utilitarian arguments: Many proponents of cognitive enhancement point to the positive effects of enhancement and argue that the benefits outweigh the costs. In these utilitarian arguments it is important to consider the positive and negative effects not only for individuals, but also for society more broadly (see, e.g. Selgelid, 2007).

Deontological arguments: Sometimes enhancements are argued to be an avenue for leveling the playing field, in pursuit of fairness and equity. Such arguments are bolstered by the finding that at least for some interventions, enhancement effects are greater for those who have lower baseline functioning than those starting with a higher baseline (President’s Commission on Bioethics, 2015).

Practical arguments: These often point to the difficulty in enforcing regulations of extant technology, or the detrimental effects of trying to do so. They tend to be not really arguments in favor of enhancement, but rather reasons not to oppose its use.

2.1.2 Arguments against Enhancement

There are a variety of arguments against enhancement. Most fall into the following types:

Harms: The simplest and most powerful argument against enhancement is the claim that brain interventions carry with them the risk of harm, risks that make the use of these interventions unacceptable. The low bar for acceptable risk is an effect of the context of enhancement: risks deemed reasonable to incur when treating a deficiency or disease with the potential benefit of restoring normal function may be deemed unreasonable when the payoff is simply augmenting performance above a normal baseline. Some suggest that no risk is justified for enhancement purposes. In evaluating the strength of a harm-based argument against enhancement, several points should be considered: 1) What are the actual and potential harms and benefits (medical and social) of a given enhancement? 2) Who should make the judgments about appropriate tradeoffs? Different individuals may judge differently at what point the risk/benefit threshold occurs, and their judgments may depend upon the precise natures of the risks and benefits. Notice, too, that the harm argument is toothless against enhancements that pose no risks.

Unnaturalness: A number of thinkers argue, in one form or another, that use of drugs or technologies to enhance our capacities is unnatural, and the implication is that unnatural implies immoral. Of course, to be a good argument, more reason has to be given both for why it is unnatural (see an argument for naturalness, above), and for why naturalness and morality align. Some arguments suggest that manipulating our cognitive machinery amounts to tinkering with “God-given” capacities, and usurping the role of God as creator can be easily understood as transgressive in a religious-moral framework. Despite its appeal to religious conservatives, a neuroethicist may want to offer a more ecumenical or naturalistic argument to support the link between unnatural and immoral, and will have to counter the claim, above, that it is natural for humans to enhance themselves.

Diminishing human agency: Another argument suggests that the effect of enhancement will be to diminish human agency by undermining the need for real effort, and allowing for success with morally meaningless shortcuts. Human life will lose the value achieved by the process of striving for a goal and will be belittled as a result (see, e.g. Schermer, 2008; Kass, 2003). Although this is a promising form of argument, more needs to be done to undergird the claims that effort is intrinsically valuable. Recent work suggests no general argument to this effect is forthcoming (Douglas, 2019). After all, few find compelling the argument that we ought to abandon transportation by car for horses, walking, or bicycling, because these require more effort and thus have more moral value.

The hubris objection: This interesting argument holds that the type of attitude that seems to underlie pursuit of such interventions is morally defective in some way, or is indicative of a morally defective character trait. So, for example, Michael Sandel suggests that the attitude underlying the attempt to enhance ourselves is a “Promethean” attitude of mastery that overlooks or underappreciates the “giftedness of human life.” It is the expression and indulgence of a problematic attitude of dominion toward life to which Sandel primarily objects: “The moral problem with enhancement lies less in the perfection it seeks than in the human disposition it expresses and promotes” (Sandel, 2002). Others have pushed back against this tack, arguing that the hubris objection against enhancement is at base a religious one, or that it fundamentally misunderstands the concepts it relies upon (Kahane, 2011).

Equality and Distributive Justice: One question that routinely arises with new technological advances is “who gets to benefit from them?” As with other technologies, neuroenhancements are not free. However, worries about access are compounded in the case of neuroenhancements (as they may also be with other learning technologies). As enhancements increase capacities of those who use them, they are likely to further widen the already unconscionable gap between the haves and have-nots: We can foresee that those already well-off enough to afford enhancements will use them to increase their competitive advantage against others, leaving further behind those who cannot afford them. Not all arguments in this vein militate against enhancement. For example, the finding mentioned above -- that at least with some cognitive enhancement technologies, those who have lower baseline functioning experience greater improvements than those starting at a higher level -- could ground pro-enhancement fairness and equity arguments for leveling the playing field (President’s Commission on Bioethics, 2015). As public consciousness about racial and economic disparities increases, we should expect more neuroethical work on this topic. Although one can imagine policy solutions to distributive justice concerns, such as having enhancements covered by health insurance, having the state distribute them to those who cannot afford them, etc., widespread availability of neuroenhancements will inevitably raise questions about coercion.

Coercion: The prospect of coercion is raised in several ways. Obviously, if the state decides to mandate an enhancement, treating its beneficial effects as a public health issue, this is effectively coercion. We see this currently in the backlash against vaccinations: they are mandated with the aim of promoting public health, but in some minds the mandate raises concerns about individual liberty. I would submit that the vaccination case demonstrates that at least on some occasions coercion is justified. The question is whether coercion could be justifiable for enhancement, rather than for harm prevention. Although some coercive ideas, such as the suggestion that we put Prozac or other enhancers in the water supply, are unlikely to be taken seriously as a policy issue (however, see Appel 2010 [2011]), less blatant forms of coercion are more realistic. For example, if people immersed in tomorrow’s competitive environment are in the company of others who are reaping the benefits from cognitive enhancement, they may feel compelled to make use of the same techniques just to remain competitive, even though they would rather not use enhancements. The danger is that respecting the autonomy of some may put pressure on the autonomy of others.

There is unlikely to be any categorical resolution of the ethics of enhancement debate. The details of a technology will be relevant to determining whether a technology ought to be made available for enhancement purposes: we ought to treat a highly enhancing technology that causes no harm differently from one that provides some benefit at noticeable cost. Moreover, the magnitude of some of the equality-related issues will depend upon empirical facts about the technologies. Are neurotechnologies equally effective for everyone? As mentioned, there is evidence that some known enhancers such as the psychostimulants are more effective for those with deficiencies than for the unimpaired: studies suggest the beneficial effects of these drugs are proportional to the degree to which a capacity is impaired (Hussain et al., 2011). Other reports claim that normal subjects’ capacities are not actually enhanced by these drugs, and some aspects of functioning may actually be impaired (Mattay, et al., 2000; Ilieva et al., 2013). If this is a widespread pattern, it may alleviate some worries about distributive justice and contributions to social and economic stratification, since people with a deficit will benefit proportionately more than those using the drug for enhancement purposes. Bear in mind, however, that biology is rarely that equitable, and it would be surprising if this pattern turned out to be the norm. Since the technologies that could provide enhancements are extremely diverse, ranging from drugs to implants to genetic manipulations, assessment of the risks and benefits and the way in which these technologies bear upon our conception of humanity will have to be empirically grounded.

2.2 Cognitive liberty

Freedom is a cornerstone value in liberal democracies like our own, and one of the most cherished kinds of freedom is freedom of thought. The main elements of freedom of thought, or “cognitive liberty” as it is sometimes called (Sententia, 2013), include privacy and autonomy. Both of these can be challenged by the new developments in neuroscience. The value of, potential threat to, and ways to protect these aspects of freedom are a concern for neuroethics. Several recent papers have posited novel rights in this realm, such as rights to cognitive liberty, to mental privacy, to mental integrity, and to psychological continuity (Ienca and Andorno, 2017), or to psychological integrity and mental self-determination (Bublitz, 2020).

2.2.1 Privacy

As the framers of our constitution were well aware, freedom is intimately linked with privacy: even being monitored is considered potentially “chilling” to the kinds of freedoms our society aims to protect. One type of freedom that has been championed in American jurisprudence is “the right to be let alone” (Warren and Brandeis, 1890), to be free from government or other intrusion in our private lives.

In the past, mental privacy could be taken for granted: the first-person accessibility of the contents of consciousness ensured that the contents of one’s mind remained hidden to the outside world, until and unless they were voluntarily disclosed. Instead, the battles for freedom of thought were waged at the borders where thought meets the outside world -- in expression -- and were won with the First Amendment’s protections for those freedoms (note, however, that these protections are only against government infringement). Over the last half century, technological advances have eroded or impinged upon many traditional realms of worldly privacy. Most of the avenues for expression can be (and increasingly are) monitored by third parties. It is tempting to think that the inner sanctum of the mind remains the last bastion of real privacy.

This may still be largely true, but even the privacy of the mind can no longer be taken for granted. Our neuroscientific achievements have already made significant headway in allowing others to discern some aspects of our mental content through neurotechnologies. Noninvasive methods of brain imaging have revolutionized the study of human cognition and have dramatically altered the kinds of knowledge we can acquire about people and their minds. Neither is the threat to mental privacy as simple as the naive claim that neuroimaging can read our thoughts, nor are the capabilities of imaging so innocuous and blunt that we needn’t worry about that possibility. A focus of neuroethics is to determine the real nature of the threat to mental privacy, and to evaluate its ethical implications, many of which are relevant to legal, medical, and other social issues (Shen, 2013). For example, in a world in which the bastion of the mind may be lowering its drawbridges, do we need extra protections? Doing so effectively will require both a solid understanding of the neuroscientific technologies and the neural bases of thought, as well as a sensitivity to the ethical problems raised by our growing knowledge and ever-more-powerful neurotechnologies. These dual necessities illustrate why neuroethicists must be trained both in neuroscience and in ethics. In what follows I briefly discuss the most relevant neurotechnology and its limitations and then canvas a few ways in which privacy may be infringed by it.

2.2.1.1 An illustration: Potential threats to privacy with Functional MRI

One of the most prominent neurotechnologies poised to pose a threat to privacy is Magnetic Resonance Imaging, or MRI. MRI can provide both structural and functional information about a person’s brain with minimal risk and inconvenience. In general, MRI is a tool that allows researchers noninvasively to examine or monitor brain structure and activity, and to correlate that structure or function with behavior. Structural or anatomical MRI provides high-resolution structural images of the brain. While structural imaging in the biosciences is not new, MRI provides much higher resolution and better ability to differentiate tissues than prior techniques such as x-rays or CT scans.

However, it is not structural but functional MRI (fMRI) that has revolutionized the study of human cognition. fMRI provides information about correlates of neuronal activity, from which neural activity can be inferred. Recent advances in analysis methods for neuroimaging data such as multi-voxel pattern analysis and related techniques now allow relatively fine-grained “decoding” of brain activity. Decoding involves probabilistic matching, using machine learning, of an observed pattern of brain activation with experimentally established correlations between activity patterns and some kind of functional variable, such as task, behavior, or content. The kind of information provided by functional imaging promises to yield important evidence relevant to three goals: decoding mental content, diagnosis, and prediction. Neuroethical questions arise in all these areas.
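
To make the decoding idea concrete, here is a minimal, purely illustrative sketch of template-matching in the spirit of multi-voxel pattern analysis. Everything in it (the “face”/“house” classes, voxel counts, noise levels) is invented for illustration; real decoding pipelines operate on preprocessed fMRI data and typically use cross-validated machine-learning classifiers rather than this simple correlation rule.

```python
# Illustrative sketch of correlation-based "decoding" on simulated data.
# All quantities here are hypothetical; this is not a real fMRI pipeline.
import random

random.seed(0)
N_VOXELS = 50
N_TRIALS = 40  # per stimulus class

# Hypothetical "true" activation templates for two stimulus classes.
templates = {c: [random.gauss(0, 1) for _ in range(N_VOXELS)]
             for c in ("face", "house")}

def simulate_trial(cls, noise=1.0):
    """One noisy activation pattern evoked by a stimulus of class `cls`."""
    return [m + random.gauss(0, noise) for m in templates[cls]]

def mean_pattern(trials):
    """Voxel-wise average across trials (the 'experimentally established' template)."""
    return [sum(v) / len(v) for v in zip(*trials)]

def pearson(x, y):
    """Pearson correlation between two equal-length patterns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# "Training": estimate each class's characteristic pattern from labelled trials.
train = {c: mean_pattern([simulate_trial(c) for _ in range(N_TRIALS)])
         for c in templates}

def decode(pattern):
    """Assign the class whose learned template best correlates with the pattern."""
    return max(train, key=lambda c: pearson(pattern, train[c]))

# Evaluate on held-out simulated trials.
test = [(c, simulate_trial(c)) for c in templates for _ in range(N_TRIALS)]
accuracy = sum(decode(p) == c for c, p in test) / len(test)
print(f"decoding accuracy: {accuracy:.2f}")  # well above the 0.5 chance level here
```

The probabilistic character of decoding is visible even in this toy version: the classifier never reads content directly, but infers the most likely class given previously observed pattern–content correlations.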

Before discussing these issues, it is important to remember that neuroimaging is a technology that is subject to a number of significant limitations, and these technical issues limit how precise the inferences can be. For example:

  • The correlations between the fMRI signal and neural activity are rough: the signal is delayed in time from the neuronal activity, and spatially smeared, thus limiting the spatial and temporal precision of the information that can be inferred.
  • A number of dynamic factors relate the fMRI signal to activity, and the precise underlying model is not yet well-understood.
  • There is relatively low signal-to-noise, necessitating averaging across trials and often across people.
  • Individual brains differ both in brain structure and in function. This variability makes it difficult to determine when differences are clinically or scientifically relevant, and leads to noisy data. Due to natural individual variability in structure and function, and brain plasticity (especially during development), even large differences in structure or deviation from the norm may not be indicative of any functional deficiency. Cognitive strategies can also affect variability in the data. These sources of variability can complicate the analysis of data and provide even more leeway for differences to exist without implying dysfunction.
  • Activity in a brain area does not entail that the region is necessary for performance of the task.
  • fMRI is so sensitive to motion that it is virtually impossible to get usable data from a noncompliant subject, which makes the prospect of reading content from an unwilling mind remote.
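
The need to average across trials, mentioned in the list above, follows from basic statistics: the noise in an average of N independent trials shrinks roughly as 1/sqrt(N). A toy simulation (all numbers invented for illustration) makes the point:

```python
# Why averaging across trials helps: for N independent trials, the standard
# error of the mean falls off as 1/sqrt(N). Purely illustrative numbers.
import math
import random

random.seed(1)
TRUE_SIGNAL = 2.0   # hypothetical task-evoked response amplitude
NOISE_SD = 5.0      # per-trial noise, here larger than the signal itself

def trial():
    """One noisy measurement of the underlying signal."""
    return TRUE_SIGNAL + random.gauss(0, NOISE_SD)

for n in (1, 25, 400):
    est = sum(trial() for _ in range(n)) / n
    print(f"N={n:4d}: estimate = {est:6.2f} "
          f"(expected error ~ {NOISE_SD / math.sqrt(n):.2f})")
```

With a single trial the estimate is dominated by noise; with hundreds of averaged trials it closes in on the true value, which is why low signal-to-noise forces fMRI analyses to pool data across trials and often across subjects.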

Without appreciating these technical issues and the resulting limits to what can legitimately be inferred from fMRI, one is likely to overestimate or mischaracterize the potential threat that it poses. In fact, much of the fear of mindreading expressed in non-scientific publications stems from a lack of understanding of the science (Racine, 2015). For example, there is no scientific basis to the worry that imaging would enable the reading of mental content without our knowing it. Thus, fears that the government is able to remotely or covertly monitor the thoughts of citizens are unfounded.

2.2.1.2 Decoding of mental content

Noninvasive ways of inferring neural activity have led many to worry that mindreading is possible, not just in theory, but even now. With decoding techniques, fMRI can be used, for example, to reconstruct a visual stimulus from activity in the visual cortex while a subject is looking at a scene, or to determine whether a subject is looking at a familiar face or hearing a particular sound. If mental content supervenes on the physical structure and function of our brains, as most philosophers and neuroscientists think it does, then in principle it should be possible to read minds by reading brains. Because of the potential to identify mental content, decoding raises issues about mental privacy.

Despite the remarkable advances in brain imaging technology, however, when it comes to mental content, our current abilities to “mind-read” are relatively limited, but continually improving (Roskies, 2015, 2020). Although some aspects of content can be decoded from neural data, these tend to be quite general and nonpropositional in character. The ability to infer semantic meaning from ideation or visual stimulation tends to work best when the realm of possible contents is quite constrained. Our current abilities allow us to infer some semantic atoms, such as representations denoting one of a prespecified set of concrete objects, but not unconstrained content, or entire propositions. Of course, future advances might make worries about mindreading more pressing. For example, if we develop robust means for decoding compositional meaning, we may one day come to be able to decode propositional thought.

Still, some worries are warranted. Even if neuroimaging is not at the stage where mindreading is possible, it can nonetheless threaten aspects of privacy in ways that should give us pause. It is possible to identify individuals on the basis of their brain scans (Valizadeh et al., 2018). In addition, neuroimaging can provide some insights into attributes of people that they may not want known or disclosed. In some cases, subjects may not even know that these attributes are being probed, thinking they are being scanned for other purposes. A willing subject may not want certain things to be monitored. In what follows, I consider a few of these more realistic worries.

Implicit bias: Although explicitly acknowledged racial biases are declining, this may be due to a reporting bias attributable to the increased negative social valuation of racial prejudice. Much contemporary research now focuses on examining implicit racial biases, which are automatic or unconscious reflections of racial bias. With fMRI and EEG, it is possible to interrogate implicit biases, sometimes without the subject’s awareness that that is what is being measured (Chekroud, 2014). While there is disagreement about how best to interpret implicit bias results (e.g., as a measure of perceived threat, as in-group/out-group distinctions, etc.), and what relevance they have for behavior, the possibility that implicit biases can be measured, either covertly or overtly, raises scientific and ethical questions (Molenberghs and Louis, 2018). When ought this information to be collected? What procedures must be followed for subjects legitimately to consent to implicit measures? What significance should be attributed to evidence of biases? What kind of responsibility should be attributed to people who hold them? What predictive power might they hold? Should they be used for practical purposes? One can imagine obvious but controversial potential uses for implicit bias measures in legal situations, in employment contexts, in education, and in policing, all areas in which concerns of social justice are significant.

Lie detection: Several neurotechnologies are being used to detect deception or neural correlates of lying or concealing information in experimental situations. For example, both fMRI measures and EEG analysis techniques relying on the P300 signal have been used in the laboratory to detect deception with varying levels of success. These methods are subject to a variety of criticisms (Farah et al., 2014). For example, almost all experimental studies fail to study real lying or deception, but instead investigate some version of instructed misdirection. The context, tasks, and motivations differ greatly between actual instances of lying and these experimental analogs, calling into question the ecological validity of these experimental techniques. Moreover, accuracy, though significantly higher than chance, is far from perfect, and because of the inability to determine base rates of lying, error rates cannot be effectively assessed. Thus, we cannot establish their reliability for real-world uses. Finally, both physical and mental countermeasures decrease the accuracy of these methods (Hsu et al., 2019). Despite these limitations, several companies have marketed neurotechnologies for this purpose.
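The base-rate worry above can be made concrete with a simple calculation. The sketch below is purely illustrative, using hypothetical sensitivity and specificity figures that do not characterize any actual lie-detection method; it shows why a technique whose laboratory accuracy is well above chance can still produce mostly false alarms when the underlying rate of lying is low or unknown:

```python
# Illustrative only: hypothetical accuracy figures, not empirical results
# from any neuroimaging lie-detection study.
def positive_predictive_value(sensitivity, specificity, base_rate):
    """Probability that a 'deception detected' result is a true positive."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Suppose a method flags 90% of lies and clears 90% of truthful statements.
# If only 1 in 100 statements screened is actually a lie, most "detections"
# are false alarms; at a 50% base rate the same method looks far better.
print(round(positive_predictive_value(0.9, 0.9, 0.01), 3))  # → 0.083
print(round(positive_predictive_value(0.9, 0.9, 0.5), 3))   # → 0.9
```

The same arithmetic underlies the point in the text: without a credible estimate of how often lying occurs in the relevant real-world setting, real-world error rates simply cannot be assessed from laboratory accuracy alone.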

Character traits: Neurotechnologies have shown some promise in identifying or predicting aspects of personality or character. In an interesting study aimed at determining how well neuroimaging could detect lies, Greene and colleagues gave subjects in the fMRI scanner a prediction task in a game of chance that they could easily cheat on. By using statistical analysis the researchers could identify a group of subjects who clearly cheated and others who did not (Greene and Paxton, 2009). Although they could not determine with neuroimaging on which trials subjects cheated, there were overall differences in brain activation patterns between cheaters and those who played fair and were at chance in their predictions. Moreover, Greene and colleagues repeated this study at several months’ remove, and found that the character trait of honesty or dishonesty was stable over time: cheaters the first time were likely to cheat again (indeed, cheated even more the second time), and honest players remained honest the second time around. Also interesting was the fact that the brain patterns suggested that cheaters had to activate their executive control systems more than noncheaters, not only when they cheated, but also when deciding not to cheat. While the differential activations cannot be linked specifically to the propensity to cheat rather than to the act of cheating, the work suggests that these task-related activation patterns may reflect correlates of trustworthiness.

The prospect of using methods for detecting these sorts of traits or behaviors in real-world situations raises a host of thorny issues. What level of reliability should be required for their employment? In what circumstances should they be admissible as evidence in the courtroom? For other purposes? Using lie detection or decoding techniques from neuroscience in legal contexts may raise constitutional concerns: Is brain imaging a search or seizure as protected by the 4th Amendment? Would its forcible use be precluded by 5th Amendment rights? These questions, though troubling, might not be immediately pressing: in a landmark case (US v. Semrau, 2012) the court ruled that fMRI lie detection is inadmissible, given its current state of development. However, the opinion left open the possibility that it may be admissible in the future, if methods improve. Finally, to the extent that relevant activation patterns may be found to correlate significantly with activation patterns on other tasks, or with a task-free measure such as default-network activity, this raises the possibility that information about character could be inferred merely by scanning a person doing something innocuous, without his or her knowledge of the kind of information being sought. Thus, there are multiple dimensions to the threat to privacy posed by imaging techniques.

2.2.1.2 Diagnosis

Increasingly, neuroimaging information can bear upon diagnoses for diseases, and in some instances may provide predictive information prior to the onset of symptoms. Work on the default network is promising for improving diagnosis in certain diseases without requiring that subjects perform specific tasks in the scanner (Buckner et al., 2008). For some diseases, such as Alzheimer’s disease, MRI promises to provide diagnostic information that previously could only be established at autopsy (Liu et al., 2018). fMRI signatures have also been linked to a variety of psychiatric diseases, although not yet with the reliability required for clinical diagnosis (Aydin et al., 2019). Neuroethical issues also arise regarding ways to handle incidental findings, that is, evidence of asymptomatic tumors or potentially benign abnormalities that appear in the course of scanning research subjects for non-medical purposes (Illes et al., 2006; Illes and Sahakian, 2011).

The ability to predict future functional deficits raises a host of issues, many of which have been previously addressed by genethics (the ethics of genetics), since both provide information about future disease risk. What may be different is that the diseases for which neurotechnologies are diagnostically useful are those that affect the brain, and thus potentially mental competence, mood, personality, or sense of self. As such they may raise peculiarly neuroethical questions (see below).

2.2.1.3 Prediction

As discussed, decoding methods allow one to associate observed brain activity with previously observed brain/behavior correlations. In addition, such methods can also be used to predict future behaviors, insofar as these are correlated with observations of brain activity patterns. Some studies have already reported predictive power over upcoming decisions (Soon et al., 2008). Increasingly, we will see neuroscience or neuroimaging data that will give us some predictive power over longer-range future behaviors. For example, brain imaging may allow us to predict the onset of psychiatric symptoms such as psychotic or depressive episodes. In cases in which the predicted behavior is indicative of mental dysfunction, such predictions raise questions about stigma, but they may also allow more effective interventions.

One confusion regarding neuroprediction should be clarified immediately: when neuroimages are said to “predict” future activity, it means they provide some statistical information regarding likelihood. Prediction in this sense does not imply that the predicted behavior necessarily will come to pass; it does not mean a person’s future is fated or determined. Although scientists occasionally make this mistake when discussing their results, the fact that brain function or structure may give us some information about future behaviors should not be interpreted as a strong challenge to free will. The prevalence of this mistake among both philosophers and scientists again illustrates the importance for neuroethicists of sophistication in both neuroscience and philosophy.

Perhaps the most consequential and most ethically difficult potential use of predictive information is in the criminal justice system. For example, there is evidence that structural brain differences are predictive of scores on the PCL-R, a tool developed to diagnose psychopathy. It is also well-established that psychopaths have high rates of recidivism for violent offenses. Thus, in principle neuroimaging could be used to provide information about an individual’s likelihood of recidivism. Indeed, brain information appears to offer some predictive value when combined with other factors (Poldrack et al., 2019; Delfin et al., 2019). One cautionary tale comes from a recent exchange in the literature: A report suggested that brain activity on a cognitive task predicts recidivism (Aharoni et al., 2013), but a critical reanalysis of the data suggests that methodological concerns led to an overestimate of the predictive value of the neural data (Poldrack et al., 2019; Aharoni et al., 2014), highlighting the importance of technical expertise both in assessing the findings and in translating the results of scientific experiments for practical purposes and ethical analysis.

Neuroethical analysis here is essential. Should neural data be admissible for determining sentences or parole decisions? Would that be equivalent to punishing someone for crimes they have not committed? Or is it just a neutral extension of current uses of actuarial information, such as age, gender, and income level? At an extreme, one could imagine using predictive information to detain people who have not yet committed a crime, arresting them before they do. This dystopian scenario, portrayed in the film Minority Report (Spielberg, 2002), also illustrates how our abilities to predict can raise difficult ethical and policy questions when they collide with our intuitions about, and the value we place on, free will and autonomy. More generally, work in neuroethics could be of significant practical use for the law, and indeed is often called by another moniker, “neurolaw” (see section 2.7).

In sum, neuroimaging techniques raise a number of neuroethical issues. The ones discussed above pertain to the use of fMRI, currently an expensive and cumbersome technique. But other imaging methods exist that could be far more widespread. Suppose, for example, that car companies install NIRS (near infrared spectroscopy), an imaging method that could be used at a distance and without the subject’s knowledge, or some other form of brain monitoring (https://www.jaguarlandrover.com/news/2015/06/jaguar-land-rover-road-safety-research-includes-brain-wave-monitoring-improve-driver) to monitor levels of attention and alert drivers who begin to doze off. Could that data be used in a court of law in the event of an accident? Even though the kind of information these methods provide is very crude and generally unsuitable for decoding mental content, there are conceivable everyday situations on the horizon in which issues of mental privacy and neurotechnology might arise.

2.2.2 Autonomy and authenticity

A second way in which cognitive liberty could be impacted is by limiting a person’s autonomy. Autonomy is the freedom to be the person one wants to be, to pursue one’s own goals without unjustifiable hindrances or interference, to be self-governing. Although definitions of autonomy differ, it is widely appreciated as a valuable aspect of personhood. Autonomy of the mental can be impacted in a number of ways. Here are several:

Direct interventions: The ability to directly manipulate our brains to control our thoughts or behavior is an obvious threat to our autonomy (Gilbert, 2015; Walker and Mackenzie, 2020). Some neurotechnologies offer that potential, although those that do are invasive and are used only in cases where they are medically justified. Other types of interventions, such as the administration of drugs to calm a psychotic person, may also impact autonomy.

We know that stimulating certain brain areas in animals will lead to repetitive and often stereotyped behaviors. Scientists have implanted rats with electrodes and have been able to control their foraging behaviors by stimulating their cortex. In theory we could control a person’s behavior by implanting electrodes in the relevant regions of cortex. In practice, we have a few methods that can do this, but only in a limited way. For example, Transcranial Magnetic Stimulation (TMS) applied to motor cortex can elicit involuntary movements in the part of the body controlled by the cortical area affected, or, when administered repetitively, it can inhibit activity for a period of time, acting as a temporary lesion. Effects will vary depending on what area of the brain is stimulated; higher cognitive functions can be impacted as well. Relatively invasive methods, such as Deep Brain Stimulation (DBS, discussed below) and electrocorticography (ECOG), both techniques requiring brain surgery, demonstrate that direct interventions can affect cognition, action, and emotion, often in very particular and predictable ways.

However much of a threat to autonomy these methods pose in theory, they are rarely used with the aim of compromising autonomy. On the contrary, direct brain interventions, when used, are largely aimed at augmenting or restoring rather than bypassing or diminishing autonomy (Roskies, 2015; Brown, 2015). For example, one rapidly advancing field in neuroscience is the area of neural prostheses and brain computer interfaces (Jebari, 2013; Klein et al. 2015). Neural prostheses are artificial systems that replace defective neural ones, usually of sensory systems. Some of the more advanced and widely-known are artificial cochleas. Other systems have been developed that allow vision-like information to feed to touch-specific receptors, enabling blind people to navigate the visual world. Brain computer interfaces, on the other hand, are systems that read brain activity and use it to guide robotic prostheses for limbs, or to move a cursor on a video screen. Prosthetic limbs that are guided by neural signals have restored motor agency to paraplegics and quadriplegics, and other BCIs have been used to communicate with people who are “locked in” and cannot move their bodies (Abbott and Peck, 2017). Advisory and predictive implants use neural information to warn patients about the risk of, for example, an upcoming seizure, allowing them to prophylactically self-medicate (Brown, 2015; Lazaro-Munoz et al., 2017). Thus, although in principle brain interventions could be used to control people and diminish their autonomy, in general, direct interventions are being developed to restore and enhance it (Lavazza, 2018).

Neuroeconomics and neuromarketing: There are more subtle ways to impact autonomy than direct brain manipulations, and these are well within our grasp, for our thoughts can be manipulated indirectly: old worries prompted by propaganda and subliminal advertising have taken on renewed currency with the advent of neuroeconomics and neuromarketing (Spence, 2020). By better understanding how we process reward, how we make decisions more generally, and how we can bias or influence that process, we open the door to more effective external indirect manipulations. Indeed, social psychology has shown how subtle alterations to our external environment can affect beliefs, moods, and behaviors. The precise threats posed by understanding the neural mechanisms of decision making have yet to be fully articulated (Stanton et al., 2017). Is neuromarketing being used merely to design products that satisfy our desires more fully, or is it being used to manipulate us? Depending on how one sees it, it could be construed as a good or an evil. Does understanding the neural substrates of choice and reward provide advertisers more effective tools than they had using behavioral data alone, or just more costly ones? Do consumers consequently have less autonomy? How can we compensate for or counteract these measures? These questions have yet to be adequately addressed.

Regulation: Yet another way that autonomy can be impacted is by restricting the things that a person can do with and to her own mind. For instance, banning mind-altering drugs is an externally imposed restraint on people’s ability to choose their states of consciousness (Boire, 2001). The degree to which a person should be prevented from doing what he or she wishes to his or her own self, body, or mind is an ethical issue on which people have differing opinions. Some claim this kind of regulation is a problematic infringement of autonomy, but certain regulations of this type are already largely accepted in our society. Regulation of drugs does impact our autonomy, but it arguably averts potentially great harms. Allowing cognitive enhancing technologies only for treatment uses but not for enhancement purposes is another restriction of mental autonomy. Whether it is one we want to sanction is still up for debate. Regardless, as the coronavirus pandemic has made abundantly clear, complete autonomy is not practically possible in a world in which one person’s actions affect the well-being of others.

Belief in free will: Advances in neuroscience have been frequently claimed to have bearing upon the question of whether we have free will and on whether we can be truly morally responsible for our actions. Although the philosophical problem of free will is generally considered to be a metaphysical problem, demonstrable lack of freedom would have significant ethical consequences. A number of neuroscientists and psychologists have intimated or asserted that neuroscience can show or has shown that free will is an illusion (Brembs, 2010; Libet, 1983; Soon, 2010; Harris, 2012). Others have countered with arguments to the effect that such a demonstration is in principle impossible (Roskies, 2006). Regardless of what science actually shows about the nature of free will, the fact that people believe neuroscience evidence supports or undermines free will has been shown to have practical consequences. For example, evidence merely supporting the premise that our minds are a function of our brains, as most of neuroscience does, is perceived by some people to be a challenge to free will. And in several studies, manipulating belief in free will affects the likelihood of cheating (e.g. Vohs and Schooler, 2008). The debate within neuroscience about the nature and existence of free will will remain relevant to neuroethics in part because of its impact on our moral, legal and interpersonal practices of blaming and punishing people for their harmful actions.

2.3 Personal Identity

One of the aspects of neuroethics that makes it distinctive and importantly different from traditional bioethics is that we recognize that, in some yet-to-be-articulated sense, the brain is the seat of who we are. For example, we now have techniques that alter memories by blunting them, strengthening them, or selectively editing them. We have drugs that affect sexuality, and others that affect mood. Here, neuroethics rubs up against some of the most challenging and contentious questions in philosophy: What is the self? What sorts of changes can we undergo and still remain ourselves? What is it that makes us the same person over time? Of what value is this temporal persistence? What costs would changing personhood incur?

Because neuroscience intervention techniques can affect memory, desires, personality, mood, impulsivity and other things we might think of as constitutive of the person or the self, the changes they can cause (and combat) have a unique potential to affect both the meaning and quality of the most intimate aspects of our lives. Although neuroethics is quite different from traditional bioethics in this regard, it is not so different from genethics. For a long time, it was argued that “you are your genes”, and so the ability to interrogate our genomes, to change them, or to select among them was seen as both a promising and potentially problematic one, enabling us to understand and manipulate human nature to an extent far beyond any we had previously enjoyed. But as we have discovered, we are not (just) our genes. Our ability to sequence the human genome has not laid bare the causes of cancer, the genetic basis of intelligence, or the roots of psychiatric illness, as many had anticipated. One reason is that our genome is a distal cause of the people we come to be: many complex and intervening factors matter along the way. Our brains, on the other hand, are a far more proximal cause of who we are and what we do. Our moment-to-moment behavior and our long-range plans are directly controlled by our brains, in a way they are not directly controlled by our genomes. If “You are your genes” seemed a plausible maxim, “You are your brain” is far more so.

Despite its plausibility, it is notoriously difficult to articulate the way in which we are our brains: What aspects of our brains make us the people that we are? What aspects of brain function shape our memories, our personality, our dispositions? What aspects are irrelevant or inessential to who we are? What makes possible a coherent sense of self? The lack of answers we have to these deep neurophilosophical questions does little to alleviate the pragmatic worries raised by neuroscience, since our ability to intervene in brains outstrips our understanding of what we are doing, and can affect all these aspects of our being.

In philosophy, work focusing on persons may address a variety of distinct issues using different constructs. Philosophers might be interested in the nature of personhood, in the nature of the self, in the kinds of traits and psychological states or processes that give an experienced life coherence or authenticity, or in the ingredients for a flourishing life. Each calls for its own analysis. Outside of philosophy, many of these issues are run together, and confusion often results. Neuroethics, while in a unique position to draw on these distinctions and apply them in a fruitful way, often fails to make the most of the conceptual work philosophers have done in this area. For example, papers in neuroethics often conflate a number of these distinct concepts, subsuming them under the rubric of “personal identity”. This conflation further muddies already difficult waters, and diminishes the potential value of neuroethical work. Below I try to give a brief roadmap of the separate strands that neuroethicists have been concerned with.

The philosopher’s conception of personal identity refers to the issue of what makes a person at one time numerically identical to a person at another time. This metaphysical question has been addressed by a variety of philosophical theories. For example, some theorists argue that what it is to be numerically identical over time is to be the same human organism (Olson, 1999), and that being the same organism is determined by sameness of life. If having the same life is the relevant criterion, one could argue that life-sustaining areas of the brainstem are essential to personal identity (Olson, 1999). For those who believe instead that bodily integrity is what is essential, the ability of neuroscience to alter the brain will arguably have little effect on personal identity. Many other philosophers have taken personal identity to be grounded in psychological continuity of some sort (e.g., Locke). If this criterion is the correct one, then the stringency of that criterion may be crucial: radical brain manipulation may cause an abrupt enough shift in memories and other psychological states that a person after brain intervention is no longer the same person he or she was prior. The more stringent the criterion, the greater the potential threat of neurotherapies to personal identity (Jecker and Ko, 2017; Pascalev et al., 2016). On the other hand, if the standards for psychological continuity or connectedness are high enough, changes in personal identity may in fact be commonplace even without neurotherapies. Recognizing this may prompt us to question the criterion and/or the importance or value of personal identity. Parfit, for example, argues that what makes us one and the same person over time and what we value (psychological continuity and connectedness) come apart (Parfit, 1984).

For some, the question of personhood comes apart from the question of identity. Even if personal (i.e. numerical) identity is unchallenged by neurotechnologies and by brain dysfunction, important neuroethical questions may still be raised. Philosophers less concerned with metaphysical questions about numerical identity have focused more on the self, and on notions of authenticity and self-identification, emphasizing the importance of the psychological perspective of the person in question in creating a coherent self (Mackenzie and Walker, 2015; Erler, 2011; Pugh et al., 2017). In this vein, Schechtman has suggested that what is important is the ability to create a coherent narrative, or “narrative self” (Schechtman, 2014). There is evidence that the ability to create and sustain a coherent narrative in which we are the protagonist and with which we identify, is a measure of psychological health (Waters et al., 2014). On the other hand, some philosophers deny that they have a narrative self and locate selfhood in a synchronic property (Strawson, 2004). To further complicate matters, it has been suggested that there is a distinction between the narrative person and the narrative self, these being differentiable via degrees of appropriation. Concerns about the nature and coherence of the narrative self, and about authenticity and autonomy, tend to be the ones most relevant to neuroethics, since these constructs clearly can be affected by even modest brain changes. For example, how do we evaluate the costs and ethical issues attending a dramatic change in personality, or a modification of key memories? What are the criteria governing whether one is authentic or inauthentic, and what is the value of authenticity? If neurointerventions promise to result in dramatic shifts in a person’s values and commitments, whose interests should take priority if one person must be favored: those of the original person, or of the resulting one?
The relevance of personhood, self, agency, identity and identification needs further elaboration for neuroethics. In what follows we discuss how one neurotechnology can bear upon some of these questions.

2.3.1 Example: Deep Brain Stimulation

Deep Brain Stimulation (DBS) involves the stimulation of chronically implanted electrodes deep in the brain, and it is FDA approved for treating Parkinson’s Disease, a neurodegenerative disease affecting the dopaminergic neurons of the substantia nigra, which project to the striatum. Neuromodulation with DBS often restores motor function in these patients, permitting many to live much improved lives. It is also being explored as a treatment for treatment-resistant depression, OCD, addiction, and other neurological and psychiatric disorders. Although DBS is clearly a boon to many people suffering from neurological diseases, a number of puzzling issues arise from its adoption. First, it is a highly invasive treatment, requiring brain surgery and permanent implantation of a stimulator, thus posing a real possibility of harm and raising questions of cost/benefit tradeoffs. This is coupled with the fact that scientists have little mechanistic understanding of how the treatment works when it does, and treatment regimens and electrode placement tend to be determined symptomatically, by trial and error. Occasionally DBS causes unusual side effects, such as mood changes, hypomania or mania, addictive behaviors, or hypersexual behavior. In one case a patient with wide-ranging musical tastes developed a fixation on Johnny Cash’s music, which persisted until stimulation was ceased (Mantione et al., 2014). Other reported cases involve changes in personality. The ethical questions in this area revolve around the ethics of intervening in ways that alter mood and/or personality, often discussed in terms of personal identity or “changing who the person is”, and around questions of autonomy and alienation (Klaming et al., 2013; Kraemer, 2013a).

One poignant example from the literature tells of a patient who, without intervention, was bedridden and had to be hospitalized due to severe motor dysfunction caused by Parkinson’s Disease (Leentjens et al., 2003). DBS resulted in a marked improvement in his motor symptoms but also caused him to be untreatably manic, which required institutionalization. Thus, this unfortunate man had to choose between being bedridden and catatonic, or manic and institutionalized. He made the choice (in his unstimulated state) to remain on stimulation (the literature does not mention whether his stimulated self concurred, as he was not deemed mentally competent in that state) (Kraemer, 2013b). While it did not happen in this case, one could imagine a situation in which a patient would choose, while unstimulated, to undergo chronic stimulation but, while stimulated, would choose otherwise (or vice versa). Dilemmas or paradoxes may arise when, for example, we try to determine the value of two potential outcomes that are differently valued by the people who might exist. To which person (or to the person in which state) should we give priority? Or, even more perplexing: if the “identity” (narrative or numerical) of the person is indeed shifted by the treatment, should we give one person the authority to consent to a procedure or choose an outcome that in practice affects a different person? DBS cases like this will provide fodder for neuroethicists for years to come (Skorburg and Sinnott-Armstrong, 2020).

Many other neurotechnologies that have been developed for treating brain dysfunction have primary or side effects that affect some aspect of what we may think of as related to human agency (Zuk et al., 2018). The ethical issues that arise with these neurotechnologies involve determining 1) in what way they impact our selves or our agency; 2) what value, positive or negative, we should put on this impact (or ability to so affect agency); and 3) how to weigh the positive gains against the negatives. One issue that has been raised is whether we possess a clear enough conception of the elements of agency in order to effectively perform this sort of analysis (Roskies, 2015). Moreover, given the likelihood that no objective criteria exist for how to evaluate tradeoffs in these elements, and the fact that different people may value different aspects of themselves differently, the weighing process will likely have to be subjectively relativized.

Finally, DBS as well as neural prostheses and BCIs raise another neuroethical issue: our conception of humanity and our relations to machines. Some contend that these technologies effectively turn a person into a cyborg, making him or her something other than human. While some find this an ethically unproblematic natural extension of our species’ characteristic drive to invent and improve our selves with technology (Clark, 2004), others fear that creating a bio-cybernetic organism raises troubling questions about the nature or value of humanity, about the bounds of self, or about Promethean impulses. These questions too fall squarely in the domain of neuroethics.

2.4 Consciousness, life, and death

The Hard Problem of consciousness (Chalmers, 1995) has yielded little to the probings of neuroscience, and it is not clear whether it ever will. However, in the last decade impressive advances have been made in other realms of consciousness research. Most impressive have been the improvements in detecting altered levels of consciousness with brain imaging. Diagnosing behaviorally unresponsive patients has long been a problem for neurology, although as long as 20 years ago, neurologists had recognized systematic differences between, and differing prognoses for, the persistent vegetative state (PVS), the minimally conscious state (MCS), and locked-in syndrome, a syndrome in which the patient has normal levels of awareness but cannot move. Functional brain imaging has fundamentally changed the problems faced by those caring for these patients. Owen and colleagues have shown that it is possible to identify some patients mischaracterized as being in PVS by demonstrating that they are able to understand commands and follow directions (Owen, 2006). In these studies, both normal subjects and brain injured patients were instructed to visualize doing two different activities while in the fMRI scanner. In normal subjects these two tasks activated different parts of cortex. Owen showed that one patient diagnosed as in PVS showed this normal pattern, unlike other PVS patients, who showed no differential activation when given these instructions. These data suggest that some PVS diagnosed subjects can in fact process and understand the instructions, and that they have the capacity for sustained attention and voluntary mental action. These results were later replicated in other such patients, and based on small cohorts, it is estimated that approximately 20% of PVS patients have been misdiagnosed. In a later study the same group used these imagination techniques to elicit from some patients with severe brain injury answers to Yes/No questions (Monti et al., 2010).
More recent work aims to adapt these methods for EEG, a cheaper and more portable neurotechnology (Bai et al., 2020). Neuroimaging thus provides new tools for evaluating and diagnosing patients with disorders of consciousness (Owen, 2013; Campbell et al., 2020).

These studies have the potential to revolutionize the way in which patients with altered states of consciousness are diagnosed and cared for, may have bearing on when life support is terminated, and raise the possibility of allowing patients some control over questions regarding their care and end of life decisions (Peterson et al., 2020; Braddock, 2017). This last possibility, while alleviating some worries about how to treat severely brain-damaged individuals, raises other thorny ethical problems. One of the most pressing is how to deal with questions of competence and informed consent: these are people with severe brain damage, and even when they do appear capable on occasion of understanding and answering questions, it remains unclear whether their abilities are stable, how sophisticated they are, whether they can competently make decisions about such weighty issues, and whether it is really in their interest to remain on life support (Kahane and Savulescu, 2009; Fischer and Truog, 2017). Nonetheless, these methods open up new possibilities for diagnosis and treatment, and for restoring a measure of autonomy and self-determination to people with severe brain damage.

2.5 Practical neuroethics

Medical practice and neuroscientific research raise a number of neuroethical issues, many of which are common to bioethics. For example, issues of consent, of incidental findings, of competence, and of privacy of information arise here. In addition, practicing neurologists, psychologists and psychiatrists may routinely encounter certain brain diseases, disabilities, or psychological dysfunctions that raise neuroethical issues that they must address in their practices. (For a more detailed discussion of these more applied issues approached from a pragmatic point of view, see for example Racine, 2010; Martineau and Racine, 2020).

2.6 Public perception of neuroscience

The advances of neuroscience have become a common topic in the popular media, with colorful brain images becoming a pervasive illustrative trope in news stories about neuroscience. While no one doubts that popularizing neuroscience is a positive good, neuroethicists have been legitimately worried about the possibility of misinformation. These worries include "the seductive allure" of neuroscience and misleading, oversimplified media coverage of complex scientific questions.

2.6.1 The seductive allure

There is a documented tendency for the layperson to think that information that makes reference to the brain, or to neuroscience or neurology, is more privileged, more objective, or more trustworthy than information that makes reference to the mind or psychology. For example, Weisberg and colleagues report that subjects with little or no neuroscience training rated bad explanations as better when they made reference to the brain or incorporated neuroscientific terminology (Weisberg et al., 2008). This "seductive allure of neuroscience" is akin to an unwarranted epistemic deference to authority. The differential appraisal extends into real-world settings, with testimony from a neuroscientist or neurologist judged to be more credible than that of a psychologist. The tendency is to view neuroscience as a hard science, in contrast to "soft" methods of inquiry that focus on function or behavior. In the case of neuroimaging, this view belies a deep misunderstanding of the genesis and significance of the neuroscientific information. What people fail to realize is that neuroimaging data are classified and interpreted by their ties to function, so (barring unusual circumstances) they cannot be more reliable or "harder" than the psychology they rely upon.

Brain images in particular have prompted worries that the colorful images of brains with "hotspots" that accompany media coverage could themselves be misleading. If people intuitively treat brain images as akin to photographs of the brain in action, this could mislead them into thinking of these images as objective representations of reality, prompting them to overlook the many inferential steps and nondemonstrative decisions that underlie the creation of the image they see (Roskies, 2007). The worry is that the powerful pull of the brain image will lend a study more epistemic weight than is justified, and discourage people from asking the many complicated questions that one must ask in order to understand what the image signifies, and what can be inferred from the data. Further work, however, has suggested that once one takes into account the privilege accorded to neuroscience over psychology, the images themselves do not further mislead (Schweitzer et al., 2011).

2.6.2 Media hype

In this era of indubitably exciting progress in brain research, there is a "brain-mania" that is partially warranted but holds its own dangers. The culture of science is such that it is not uncommon for scientists to describe their work in the most dramatic terms possible in order to secure funding and/or fame. Although the hyperbole can be discounted by knowledgeable readers, those less sophisticated about the science may take it at face value. Studies have shown that the media are rarely critical of the scientific findings they report and tend not to present alternative interpretations (Racine et al., 2006; Racine, 2015). The result is that the popular media convey sometimes wildly inaccurate pictures of legitimate scientific discoveries, which can fuel both overly optimistic enthusiasm and fear. One of the clear pragmatic goals of neuroethics, whether it regards basic research or clinical treatments, is to exhort and educate scientists and the media to better convey both the promise and complexities of scientific research. It is the job of both these groups to teach people enough about science in general, and brain science in particular, that they see it as worthy of respect, and also of the same critical assessment to which scientists themselves subject their own work.

It is admittedly difficult to accurately translate complicated scientific findings for the lay public, but it is essential. Overstatement of the significance of results can instill unwarranted hope in some cases, fear in others, and jadedness and suspicion going forward. Providing fodder for scientific naysayers has policy implications that go far beyond the reach of neuroscience. Mistrust of science is its own epidemic that needs to be inoculated against by careful, early, and continuing education of the public. This is essential for the future status and funding of the basic sciences, and, as we have seen, for the health of democracy and our planet more generally.

2.7 Neuroscience and justice

Social justice is a concern of ethics, and of neuroethics. Many of the ethical questions are not new, but some have novel aspects. Bioethics also has traditionally been concerned with issues of respect for patients' autonomy and their right to self-determination. As mentioned above, these questions take on added weight when the organ at issue is the patient's brain, and questions about competence arise.

Ethical issues also attend doing neuroscientific research on nonhumans. Like traditional bioethics, neuroethics must address questions about the ethical use of animals for experimental purposes in neuroscience. In addition, however, it ought to consider questions regarding the use of animals as model systems for understanding the human brain and human cognition (Johnson et al., 2020). Animal studies have given us the bulk of our understanding of neural physiology and anatomy and have provided significant insight into the function of conserved biological capacities. However, the further we push into unknown territory about higher cognitive functions, the more we will have to attend to the specifics of similarities and differences between humans and other species, and evaluating the model system may involve considerable philosophical work. In some cases, the dissimilarities may not warrant animal experiments.

Other issues to which neuroethics must be attentive involve social justice. As neuroscience promises to offer treatments and enhancements, it must attend to issues of distributive justice, and play a role in ensuring that the fruits of neuroscientific research do not go only to those who enjoy the best our society has to offer. Moreover, a growing understanding that poverty and socioeconomic status more generally have long-lasting cognitive effects raises moral questions about social policy, the structure of our society, and the growing gap between rich and poor (Farah, 2017). It seems that the social and neuroscientific realities may reveal the American Dream to be largely hollow, and these findings may undercut some popular political ideologies. There are also global issues to consider (Stein and Singh, 2020). Justice may demand more involvement of neuroethicists in policy decisions.

Finally, neuroethics stretches seamlessly into the law (see, e.g. Vincent, 2013; Morse and Roskies, 2013; Jones et al., 2014). Neuroethical issues arise in criminal law, in particular with the issue of criminal responsibility (see, e.g. Birks & Douglas, 2018). For example, the recognition that a large percentage of prison inmates have some history of head trauma or other abnormality raises the question of where to draw the line between the bad and the mad. Neuroethics has bearing on issues of addiction and juvenile responsibility, as well as on some other areas of law, such as in tort law, employment law, and health care law.

3. The Neuroscience of Ethics

Neuroscience, or more broadly the cognitive and neural sciences, has made significant inroads into understanding the neural basis of ethical thought and social behavior. In the last decades, these fields have begun to flesh out the neural machinery underlying human capacities for moral judgment, altruistic action, and the moral emotions (Liao, 2016). The field of social neuroscience, nonexistent two decades ago, is thriving, and our understanding of the circuitry, the neurochemistry, and the modulatory influences underlying some of our most complex and nuanced interpersonal behaviors is growing rapidly. Neuroethics recognizes that the heightened understanding of the biological bases of social and moral behaviors can itself have effects on how we conceptualize ourselves as social and moral agents, and foresees the importance of the interplay between our scientific conception of ourselves and our ethical views and theories (Roskies, 2002). The interplay and its effects provide reason to view the neuroscience of ethics (or more broadly, of sociality) as part of the domain of neuroethics.

Perhaps the most well-known and controversial example of such an interplay marks the beginning of this kind of exploration. In 2001, Joshua Greene and colleagues scanned people while they made a series of moral and nonmoral decisions in different scenarios, including dilemmas modeled on the philosophical "Trolley Problem" (Thomson, 1985; Greene et al., 2001). Greene noted systematic differences in the engagement of brain regions associated with moral processing in "personal" as opposed to "impersonal" moral dilemmas, and hypothesized that emotional interference was behind the differential reaction times in judgments of permissibility in the footbridge case. In later work, he proposed a dual-process model of moral judgment, in which relatively automatic emotion-based reactions and high-level cognitive control jointly determine responses to moral dilemmas, and he related his findings to philosophical moral theories (Greene et al., 2004, 2008). Most controversially, he suggested that there are reasons to be suspicious of our deontological judgments, and interpreted his work as lending credence to utilitarian theories (Greene, 2013). Greene's work is thus a clear example of how neuroscience might affect our ethical theorizing. Claims regarding the import of neuroscience studies for philosophical questions have sparked a heated debate in philosophy and beyond, and prompted critiques and replies from scholars both within and outside of philosophy (see, e.g. Berker, 2009; Kahane, 2011; Christensen, 2014). One effect of these exchanges is to highlight a problematic tendency for scientists and some philosophers to think they can draw normative conclusions from purely descriptive data; another is to illuminate the ways in which descriptive data might itself masquerade as normative (Roskies, forthcoming).

Greene’s early studies demonstrated that neuroscience can be used in the service of examining extremely high-level behaviors and capacities, and they have inspired numerous other experiments investigating the neural basis of social and moral behavior and competences (May et al., forthcoming). Neuroethics has already turned its attention to phenomena such as altruism, empathy, well-being, and theory of mind, as well as to disorders such as autism and psychopathy. The relevant work ranges from studies using a variety of imaging techniques, to manipulations of hormones and neurochemicals, to purely behavioral studies and the use of virtual reality. In addition, interest in moral and social neuroscience has collided synergistically with the growth of neuroeconomics, which has flourished largely independently. A recent bibliography has collected almost 400 references to works in the neuroscience of ethics since 2002 (Darragh et al., 2015). We can safely assume that many more advances will be made in the years to come, and that neuroethicists will be called upon to advance, evaluate, expound upon, or deflate claims for the purported ethical implications of our new knowledge.

4. Looking forward: New neurotechnologies

The examples discussed above included pharmaceuticals that are already approved for use, existing brain imaging techniques, and invasive neurotherapies. But practical neuroethical concerns, and some theoretical concerns, are highly dependent upon the details of technologies. For example, adaptive DBS or aDBS, which "closes the loop" by concurrently stimulating and recording from neural tissue, and automatically adjusting stimulation based on the state of the brain, raises more pressing and somewhat different concerns about agency and autonomy than does regular DBS (see, e.g. Goering et al., 2017). Several technologies already on the horizon are bound to raise new neuroethical questions, or old questions in new guises. One of the most powerful new tools in the research neuroscientist’s arsenal is "optogenetics", a method of transfecting brain cells with genes encoding engineered proteins that make the cell responsive to light of specific wavelengths (Deisseroth, 2010). The cells can then be activated or silenced by shining light upon them, allowing for cell-specific external control. Optogenetics has been successfully used in many model organisms, including rats, and work is underway to use it in monkeys. One may presume it is only a matter of time before it is developed for use in humans. The method promises to provide precise control of specific neural populations and relatively noninvasive targeted treatments for diseases. It promises to raise the kind of neuroethical issues raised by many mechanisms that intervene on brain function: questions of harm, of authenticity, and of the prospect of brain cells being controlled by someone other than the agent (Gilbert, 2015; Adamczyk & Zawadzki, 2020). A second technique, CRISPR, allows powerful targeted gene editing. Although not strictly a neuroscientific technique, it can be used on neural cells to effect brain changes at the genetic level (Canli, 2015).
Genetic engineering might make possible neural gene therapies and designer babies, making real consequences of the genetic revolution that have thus far only been imagined.

These and other technologies were not even imagined a few decades ago, and it is likely that future technologies will emerge which we cannot currently conceive of. If many neuroethical issues are closely tied to the capabilities of neurotechnologies, as I have argued, then we are unlikely to be able to anticipate future technologies in enough detail to predict the constellation of neuroethical issues to which they may give rise. Neuroethics will have to grow as neuroscience does, adapting to novel ethical and technological challenges.

Bibliography

  • Abbott, M., and S. Peck, 2017, “Emerging Ethical Issues Related to the Use of Brain-Computer Interfaces for Patients with Total Locked-in Syndrome,” Neuroethics, 10(2): 235–242. doi:10.1007/s12152-016-9296-1
  • Adamczyk, A. and P. Zawadzki, 2020, “The Memory-Modifying Potential of Optogenetics and the Need for Neuroethics,” NanoEthics. doi:10.1007/s11569-020-00377-1
  • Aharoni, Eyal, Joshua Mallett, Gina M. Vincent, Carla L. Harenski, Vince D. Calhoun, Walter Sinnott-Armstrong, Michael S. Gazzaniga, and Kent A. Kiehl, 2014, “Predictive Accuracy in the Neuroprediction of Rearrest,” Social Neuroscience, 9(4): 332–36. doi:10.1080/17470919.2014.907201
  • Aharoni, E., G.Vincent, C. Harenski, V. Calhoun, W. Sinnott-Armstrong, M. Gazzaniga, and K. Kiehl, 2013, “Neuroprediction of Future Rearrest,” Proceedings of the National Academy of Sciences, 110(15): 6223–28. doi:10.1073/pnas.1219302110
  • Appel, J., 2010 [2011], “Beyond Fluoride: Pharmaceuticals, Drinking Water and the Public Health,” The Huffington Post, 18 March 2010, updated 25 May 2011; available online.
  • Aydin, O., P. Aydin, and A. Arslan, 2019, “Development of Neuroimaging-Based Biomarkers in Psychiatry,” Advances in Experimental Medicine and Biology, 1192: 159–95. doi:10.1007/978-981-32-9721-0_9
  • Bai, Y., Y. Lin, and U. Ziemann, 2020, “Managing Disorders of Consciousness: The Role of Electroencephalography,” Journal of Neurology, doi:10.1007/s00415-020-10095-z
  • Berker, S., 2009, “The Normative Insignificance of Neuroscience,” Philosophy & Public Affairs, 37(4): 293–329.
  • Birks, D. and T. Douglas (eds.), 2018, Treatment for Crime: Philosophical Essays on Neurointerventions in Criminal Justice, Oxford: Oxford University Press.
  • Boire, Richard G., 2001, “On Cognitive Liberty”, The Journal of Cognitive Liberties, 2(1): 7–22
  • Braddock, Matthew, 2017, “Should We Treat Vegetative and Minimally Conscious Patients as Persons?” Neuroethics, 10(2): 267–80. doi:10.1007/s12152-017-9309-8
  • Brembs, Björn, 2010, “Towards a Scientific Concept of Free Will as a Biological Trait: Spontaneous Actions and Decision-Making in Invertebrates,” Proceedings of the Royal Society of London B: Biological Sciences, doi:10.1098/rspb.2010.2325
  • Brown, Timothy, 2015, “A Relational Take on Advisory Brain Implant Systems,” AJOB Neuroscience, 6(4): 46–47. doi:10.1080/21507740.2015.1094559
  • Bublitz, J., 2020, “The Nascent Right to Psychological Integrity and Mental Self-Determination,” in A. von Arnauld, K. von der Decken, and M. Susi (eds.), The Cambridge Handbook of New Human Rights: Recognition, Novelty, Rhetoric, Cambridge: Cambridge University Press, pp. 387–403. doi:10.1017/9781108676106.031
  • Buckner, Randy L., Jessica R. Andrews-Hanna, and Daniel L. Schacter, 2008, “The Brain’s Default Network,” Annals of the New York Academy of Sciences, 1124(1): 1–38. doi:10.1196/annals.1440.011
  • Campbell, J., Z. Huang, J. Zhang, X. Wu, P. Qin, G. Northoff, G. Mashour, and A. Hudetz, 2020, “Pharmacologically Informed Machine Learning Approach for Identifying Pathological States of Unconsciousness via Resting-State FMRI,” NeuroImage, 206: 116316. doi:10.1016/j.neuroimage.2019.116316
  • Canli, Turhan, 2015, “Neurogenethics: An Emerging Discipline at the Intersection of Ethics, Neuroscience, and Genomics,” Applied & Translational Genomics, Neurogenomics: Coming of Age, 5: 18–22. doi:10.1016/j.atg.2015.05.002
  • Chalmers, D. J., 1995, “Facing up to the Problem of Consciousness,” Journal of Consciousness Studies, 2(3): 200–219.
  • Chekroud, Adam Mourad, Jim AC Everett, Holly Bridge, and Miles Hewstone, 2014, “A Review of Neuroimaging Studies of Race-Related Prejudice: Does Amygdala Response Reflect Threat?” Frontiers in Human Neuroscience, 8: 179. doi:10.3389/fnhum.2014.00179
  • Clark, Andy, 2004, Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence, first edition, New York: Oxford University Press.
  • Darragh, Martina, Liana Buniak, and James Giordano, 2015, “A Four-Part Working Bibliography of Neuroethics: Part 2 – Neuroscientific Studies of Morality and Ethics,” Philosophy, Ethics, and Humanities in Medicine (PEHM), 10. doi:10.1186/s13010-015-0022-0
  • Deisseroth, K., 2010, “Optogenetics,” Nature Methods, 8: 26–29. doi:10.1038/nmeth.f.324
  • Delfin, Carl, Hedvig Krona, Peter Andiné, Erik Ryding, Märta Wallinius, and Björn Hofvander, 2019, “Prediction of Recidivism in a Long-Term Follow-up of Forensic Psychiatric Patients: Incremental Effects of Neuroimaging Data,” PLoS ONE, 14(5). doi:10.1371/journal.pone.0217127
  • Douglas, Thomas, 2019, “Enhancement and Desert,” Politics, Philosophy & Economics, 18(1): 3–22. doi:10.1177/1470594X18810439
  • –––, 2008, “Moral Enhancement,” Journal of Applied Philosophy, 25(3): 228–45. doi:10.1111/j.1468-5930.2008.00412.x
  • Dresler, Martin, Anders Sandberg, Christoph Bublitz, Kathrin Ohla, Carlos Trenado, Aleksandra Mroczko-Wąsowicz, Simone Kühn, and Dimitris Repantis, 2019, “Hacking the Brain: Dimensions of Cognitive Enhancement,” ACS Chemical Neuroscience 10(3): 1137–48. doi:10.1021/acschemneuro.8b00571
  • Dubljević, Veljko, and Eric Racine, 2017, “Moral Enhancement Meets Normative and Empirical Reality: Assessing the Practical Feasibility of Moral Enhancement Neurotechnologies,” Bioethics, 31(5): 338–48. doi:10.1111/bioe.12355
  • Erler, Alexandre, 2011, “Does Memory Modification Threaten Our Authenticity?” Neuroethics, 4(3): 235–49. doi:10.1007/s12152-010-9090-4
  • Farah, Martha J. (ed.), 2010, Neuroethics: An Introduction with Readings, first Edition, Cambridge, MA: The MIT Press.
  • Farah, Martha J., 2017, “The Neuroscience of Socioeconomic Status: Correlates, Causes, and Consequences,” Neuron, 96(1): 56–71. doi:10.1016/j.neuron.2017.08.034
  • Farah, Martha J., J. Benjamin Hutchinson, Elizabeth A. Phelps, and Anthony D. Wagner, 2014, “Functional MRI-Based Lie Detection: Scientific and Societal Challenges,” Nature Reviews Neuroscience, 15(2): 123–31. doi:10.1038/nrn3665
  • Fischer, David, and Robert D. Truog, 2017, “The Problems with Fixating on Consciousness in Disorders of Consciousness,” American Journal of Bioethics: Neuroscience, 8(3): 135–40.
  • Gilbert, Frederic, 2015, “A Threat to Autonomy? The Intrusion of Predictive Brain Implants,” AJOB Neuroscience, 6(4): 4–11. doi:10.1080/21507740.2015.1076087
  • Goering, Sara, Eran Klein, Darin D. Dougherty, and Alik S. Widge, 2017, “Staying in the Loop: Relational Agency and Identity in Next-Generation DBS for Psychiatry,” AJOB Neuroscience, 8(2): 59–70. doi:10.1080/21507740.2017.1320320
  • Greely, Henry, Barbara Sahakian, John Harris, Ronald C. Kessler, Michael Gazzaniga, Philip Campbell, and Martha J. Farah, 2008, “Towards Responsible Use of Cognitive-Enhancing Drugs by the Healthy,” Nature, 456(7223): 702–5. doi:10.1038/456702a
  • Greene, Joshua D., and Joseph M. Paxton, 2009, “Patterns of Neural Activity Associated with Honest and Dishonest Moral Decisions,” Proceedings of the National Academy of Sciences, 106(30): 12506–11. doi:10.1073/pnas.0900152106
  • Greene, Joshua D., Leigh E. Nystrom, Andrew D. Engell, John M. Darley, and Jonathan D. Cohen, 2004, “The Neural Bases of Cognitive Conflict and Control in Moral Judgment,” Neuron, 44(2): 389–400. doi:10.1016/j.neuron.2004.09.027
  • Greene, Joshua D., R. Brian Sommerville, Leigh E. Nystrom, John M. Darley, and Jonathan D. Cohen, 2001, “An fMRI Investigation of Emotional Engagement in Moral Judgment,” Science, 293(5537): 2105–8. doi:10.1126/science.1062872
  • Greene, Joshua D., Sylvia A. Morelli, Kelly Lowenberg, Leigh E. Nystrom, and Jonathan D. Cohen, 2008, “Cognitive Load Selectively Interferes with Utilitarian Moral Judgment,” Cognition, 107(3): 1144–54. doi:10.1016/j.cognition.2007.11.004
  • Greene, Joshua, 2013, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, New York: Penguin Press.
  • Hsu, Chun-Wei, Chiara Begliomini, Tommaso Dall’Acqua, and Giorgio Ganis, 2019, “The Effect of Mental Countermeasures on Neuroimaging-Based Concealed Information Tests,” Human Brain Mapping, 40(10): 2899–2916. doi:10.1002/hbm.24567
  • Husain, Masud, and Mitul A. Mehta, 2011, “Cognitive Enhancement by Drugs in Health and Disease,” Trends in Cognitive Sciences, 15(1): 28–36. doi:10.1016/j.tics.2010.11.002
  • Ienca, Marcello, and Roberto Andorno, 2017, “Towards New Human Rights in the Age of Neuroscience and Neurotechnology,” Life Sciences, Society and Policy, 13(1): 5. doi:10.1186/s40504-017-0050-1
  • Ilieva, Irena, Joseph Boland, and Martha J. Farah, 2013, “Objective and Subjective Cognitive Enhancing Effects of Mixed Amphetamine Salts in Healthy People,” Neuropharmacology, Cognitive Enhancers: molecules, mechanisms and minds 22nd Neuropharmacology Conference: Cognitive Enhancers, 64 (January): 496–505. doi:10.1016/j.neuropharm.2012.07.021
  • Illes, Judy, and Barbara J. Sahakian, 2011, Oxford Handbook of Neuroethics, Oxford: Oxford University Press.
  • Illes, Judy, Matthew P. Kirschen, and John D. E. Gabrieli, 2003, “From Neuroimaging to Neuroethics,” Nature Neuroscience, 6(3): 205. doi:10.1038/nn0303-205
  • Illes, Judy, Matthew P. Kirschen, Emmeline Edwards, L R. Stanford, Peter Bandettini, Mildred K. Cho, Paul J. Ford, et al., 2006, “Incidental Findings in Brain Imaging Research,” Science, 311(5762): 783–84. doi:10.1126/science.1124665
  • Illes, Judy, 2006, Neuroethics: Defining the Issues in Theory, Practice, and Policy, Oxford: Oxford University Press.
  • Jebari, Karim, 2013, “Brain Machine Interface and Human Enhancement – An Ethical Review,” Neuroethics, 6(3): 617–25. doi:10.1007/s12152-012-9176-2
  • Jecker, Nancy S., and Andrew L. Ko, 2017, “Is That the Same Person? Case Studies in Neurosurgery,” AJOB Neuroscience, 8(3): 160–70. doi:10.1080/21507740.2017.1366578
  • Johnson, L. Syd M., Andrew Fenton, and Adam Shriver (eds.), 2020, Neuroethics and Nonhuman Animals: Advances in Neuroethics, Springer International Publishing. doi:10.1007/978-3-030-31011-0
  • Jones, Owen D., Jeffrey D. Schall, and Francis X. Shen, 2014, Law & Neuroscience, 1st edition. New York: Wolters Kluwer Law & Business.
  • Jones, Owen D., Joshua Buckholtz, Jeffrey D. Schall, and Rene Marois, 2009, Brain Imaging for Legal Thinkers: A Guide for the Perplexed, SSRN Scholarly Paper ID 1563612. Rochester, NY: Social Science Research Network.
  • Kahane, Guy and Julian Savulescu, 2009, “Brain damage and the moral significance of consciousness,” The Journal of Medicine and Philosophy: A Forum for Bioethics and Philosophy of Medicine, 34(1):6–26.
  • Kahane, Guy, Katja Wiech, Nicholas Shackel, Miguel Farias, Julian Savulescu, and Irene Tracey, 2011, “The Neural Basis of Intuitive and Counterintuitive Moral Judgment,” Social Cognitive and Affective Neuroscience, March, nsr005. doi:10.1093/scan/nsr005
  • Kahane, Guy, 2011, “Mastery Without Mystery: Why There Is No Promethean Sin in Enhancement,” Journal of Applied Philosophy, 28(4): 355–68. doi:10.1111/j.1468-5930.2011.00543.x
  • Kass, Leon, 2003, “Beyond Therapy: Biotechnology and the Pursuit of Human Improvement,” President’s Council on Bioethics, Washington, DC, 16.
  • Klaming, Larry and Pim Haselager, 2013, “Did My Brain Implant Make Me Do It? Questions Raised by DBS Regarding Psychological Continuity, Responsibility for Action and Mental Competence,” Neuroethics, 6: 527–39.
  • Klein, Eran, Tim Brown, Matthew Sample, Anjali R. Truitt, and Sara Goering, 2015, “Engineering the Brain: Ethical Issues and the Introduction of Neural Devices,” The Hastings Center Report, 45(6): 26–35. doi:10.1002/hast.515
  • Kraemer, Felicitas, 2013, “Me, Myself and My Brain Implant: Deep Brain Stimulation Raises Questions of Personal Authenticity and Alienation,” Neuroethics, 6: 483–97.
  • Kraemer, Felicitas, 2013, “Authenticity or Autonomy? When Deep Brain Stimulation Causes a Dilemma,” Journal of Medical Ethics, 39(12): 757–60. doi:10.1136/medethics-2011-100427
  • Lavazza, Andrea, 2018, “Freedom of Thought and Mental Integrity: The Moral Requirements for Any Neural Prosthesis,” Frontiers in Neuroscience, 12, doi:10.3389/fnins.2018.00082
  • Lázaro-Muñoz, Gabriel, Amy L. McGuire, and Wayne K. Goodman, 2017, “Should We Be Concerned About Preserving Agency and Personal Identity in Patients With Adaptive Deep Brain Stimulation Systems?” AJOB Neuroscience, 8 (2): 73–75. doi:10.1080/21507740.2017.1320337
  • Leentjens, A.F., V. Visser-Vandewalle, Y. Temel, and F.R. Verhey, 2004, “Manipulation of mental competence: An ethical problem in case of electrical stimulation of the subthalamic nucleus for severe Parkinson’s disease,” Nederlands Tijdschrift voor Geneeskunde, 148(28): 1394–98.
  • Liao, S. Matthew (ed.), 2016, Moral Brains: The Neuroscience of Morality, first edition, New York: Oxford University Press.
  • Libet, Benjamin, Curtis A. Gleason, Elwood W. Wright, and Dennis K. Pearl, 1983, “Time of Conscious Intention to Act in Relation to Onset of Cerebral Activity (readiness-Potential),” Brain, 106(3): 623–42. doi:10.1093/brain/106.3.623
  • Lin, Patrick, and Fritz Allhoff, 2008, “Against Unrestricted Human Enhancement,” Journal of Evolution & Technology, 18(1): 35–41.
  • Liu, Xiaonan, Kewei Chen, Teresa Wu, David Weidman, Fleming Lure, and Jing Li, 2018, “Use of Multimodality Imaging and Artificial Intelligence for Diagnosis and Prognosis of Early Stages of Alzheimer’s Disease,” Translational Research: The Journal of Laboratory and Clinical Medicine, 194: 56–67. doi:10.1016/j.trsl.2018.01.001
  • Mackenzie, Catriona, and Mary Walker, 2015, “Neurotechnologies, Personal Identity, and the Ethics of Authenticity,” in Handbook of Neuroethics, edited by Jens Clausen and Neil Levy, Dordrecht: Springer Netherlands, pp. 373–92, doi:10.1007/978-94-007-4707-4_10
  • Mantione, Mariska, Martijn Figee, and Damiaan Denys, 2014, “A case of musical preference for Johnny Cash following deep brain stimulation of the nucleus accumbens,” Frontiers in Behavioral Neuroscience, 8: 152.
  • Marcus, Steven J. (ed.), 2002, Neuroethics: Mapping the Field, first edition, New York: Dana Press.
  • Mattay, Venkata S., Joseph H. Callicott, Alessandro Bertolino, Ian Heaton, Joseph A. Frank, Richard Coppola, Karen F. Berman, Terry E. Goldberg, and Daniel R. Weinberger, 2000, “Effects of Dextroamphetamine on Cognitive Performance and Cortical Activation,” NeuroImage, 12(3): 268–75. doi:10.1006/nimg.2000.0610
  • May, Joshua, Clifford Ian Workman, Hyemin Han, and Julia Haas, (forthcoming), “The Neuroscience of Moral Judgment: Empirical and Philosophical Developments,” Preprint. PsyArXiv, May 1, 2020. doi:10.31234/osf.io/89jcx
  • Molenberghs, Pascal, and Winnifred R. Louis, 2018, “Insights From FMRI Studies Into Ingroup Bias,” Frontiers in Psychology, 9, doi:10.3389/fpsyg.2018.01868
  • Monti, Martin M., Audrey Vanhaudenhuyse, Martin R. Coleman, Melanie Boly, John D. Pickard, Luaba Tshibanda, Adrian M. Owen, and Steven Laureys, 2010, “Willful Modulation of Brain Activity in Disorders of Consciousness,” New England Journal of Medicine, 362(7): 579–89. doi:10.1056/NEJMoa0905370
  • Morse, Stephen J., and Adina L. Roskies (eds.), 2013, A Primer on Criminal Law and Neuroscience, Oxford, New York: Oxford University Press.
  • Olson, Eric T., 1999, The Human Animal: Personal Identity without Psychology, New York: Oxford University Press.
  • Owen, Adrian M., 2013, “Detecting Consciousness: A Unique Role for Neuroimaging,” Annual Review of Psychology, 64(1): 109–33. doi:10.1146/annurev-psych-113011-143729
  • Owen, Adrian M., Martin R. Coleman, Melanie Boly, Matthew H. Davis, Steven Laureys, and John D. Pickard, 2006, “Detecting Awareness in the Vegetative State,” Science, 313(5792): 1402. doi:10.1126/science.1130197
  • Parens, Erik, 2005, “Authenticity and Ambivalence: Toward Understanding the Enhancement Debate,” The Hastings Center Report, 35(3): 34–41. doi:10.2307/3528804
  • Parfit, Derek, 1984, Reasons and Persons, Oxford: Oxford University Press.
  • Pascalev, Assya, Mario Pascalev, and James Giordano, 2016, “Head Transplants, Personal Identity and Neuroethics,” Neuroethics, 9(1): 15–22. doi:10.1007/s12152-015-9245-4
  • Peterson, Andrew, Adrian M. Owen, and Jason Karlawish, 2020, “Alive Inside,” Bioethics, 34(3): 295–305. doi:10.1111/bioe.12678
  • Poldrack, Russell A., John Monahan, Peter B. Imrey, Valerie Reyna, Marcus Raichle, David Faigman, and Joshua W. Buckholtz, 2018, “Predicting Violent Behavior: What Can Neuroscience Add?” Trends in Cognitive Sciences, 22(2): 111–23. doi:10.1016/j.tics.2017.11.003
  • Presidential Commission for the Study of Bioethical Issues, 2015, “Gray Matters, Vol. 2,” available at https://bioethicsarchive.georgetown.edu/pcsbi/node/4716.html.
  • Pugh, Jonathan, Hannah Maslen, and Julian Savulescu, 2017, “Deep Brain Stimulation, Authenticity and Value,” Cambridge Quarterly of Healthcare Ethics, 26(4): 640–57. doi:10.1017/S0963180117000147
  • Racine, Eric, Ofek Bar-Ilan, and Judy Illes, 2006, “Brain Imaging: A decade of coverage in the print media,” Science Communication, 28(1): 122–42. doi:10.1177/1075547006291990
  • Racine, Eric, 2010, Pragmatic Neuroethics: Improving Treatment and Understanding of the Mind-Brain, Cambridge, MA: The MIT Press.
  • Racine, Eric, 2015, “Neuroscience, Neuroethics, and the Media,” in Handbook of Neuroethics, Jens Clausen and Neil Levy (eds.), Dordrecht: Springer Netherlands, pp. 1465–71, doi:10.1007/978-94-007-4707-4_82
  • Roskies, Adina, 2002, “Neuroethics for the New Millennium,” Neuron, 35(1): 21–23. doi:10.1016/S0896-6273(02)00763-8
  • –––, 2007, “Are Neuroimages like Photographs of the Brain?” Philosophy of Science, 74: 860–72.
  • –––, 2006, “Neuroscientific Challenges to Free Will and Responsibility,” Trends in Cognitive Sciences, 10(9): 419–23. doi:10.1016/j.tics.2006.07.011
  • –––, 2015, “Agency and Intervention,” Philosophical Transactions of the Royal Society B, 370(1677): 20140215. doi:10.1098/rstb.2014.0215
  • –––, 2015, “Mind Reading, Lie Detection, and Privacy,” in Handbook of Neuroethics, Jens Clausen and Neil Levy (eds.), Dordrecht: Springer Netherlands, pp. 679–95.
  • –––, 2020, “Mindreading and Privacy,” in The Cognitive Neurosciences, 6th edition, David Poeppel, George R. Mangun, and Michael S. Gazzaniga (eds.), Cambridge, MA: MIT Press, pp. 1049–57.
  • –––, forthcoming, “The Limits of Neuroscience for Ethics,” in The Oxford Handbook of Moral Psychology, Manuel Vargas and John M. Doris (eds.), Oxford: Oxford University Press.
  • Sahakian, Barbara, and Sharon Morein-Zamir, 2007, “Professor’s Little Helper,” Nature, 450(7173): 1157–59. doi:10.1038/4501157a
  • Sandel, Michael, 2002, “What’s Wrong with Enhancement,” President’s Council on Bioethics, Washington, DC, 12.
  • Savulescu, Julian, and Ingmar Persson, 2012, “Moral Enhancement, Freedom and the God Machine,” The Monist, 95(3): 399–421.
  • Schechtman, Marya, 2014, Staying Alive: Personal Identity, Practical Concerns, and the Unity of a Life, Oxford: Oxford University Press.
  • Schermer, Maartje, 2008, “Enhancements, Easy Shortcuts, and the Richness of Human Activities,” Bioethics, 22(7): 355–63. doi:10.1111/j.1467-8519.2008.00657.x
  • Schweitzer, N. J., Michael J. Saks, Emily R. Murphy, Adina L. Roskies, Walter Sinnott-Armstrong, and Lyn M. Gaudet, 2011, “Neuroimages as Evidence in a Mens Rea Defense: No Impact,” Psychology, Public Policy, and Law, 17(3): 357–93. doi:10.1037/a0023581
  • Selgelid, Michael J., 2007, “An Argument Against Arguments for Enhancement,” Studies in Ethics, Law, and Technology, 1(1).
  • Sententia, Wrye, 2013, “Freedom by Design,” in The Transhumanist Reader, Max More and Natasha Vita-More (eds.), John Wiley & Sons, pp. 355–60. doi:10.1002/9781118555927.ch34
  • Shen, Francis, 2013, “Neuroscience, Mental Privacy, and the Law,” Harvard Journal of Law & Public Policy, 36.
  • Knafo, Shira, and César Venero (eds.), 2015, Cognitive Enhancement, San Diego: Academic Press, doi:10.1016/B978-0-12-417042-1.00001-2
  • Skorburg, Joshua August, and Walter Sinnott-Armstrong, 2020, “Some Ethics of Deep Brain Stimulation,” in Global Mental Health and Neuroethics, Dan Stein and Ilina Singh (eds.), Academic Press, pp. 117–32.
  • Soon, Chun Siong, Marcel Brass, Hans-Jochen Heinze, and John-Dylan Haynes, 2008, “Unconscious Determinants of Free Decisions in the Human Brain,” Nature Neuroscience, 11(5): 543–45. doi:10.1038/nn.2112
  • Spence, Charles, 2020, “On the Ethics of Neuromarketing and Sensory Marketing,” In Organizational Neuroethics: Reflections on the Contributions of Neuroscience to Management Theories and Business Practices, Joé T. Martineau and Eric Racine (eds.), Cham: Springer International Publishing, pp. 9–29. doi:10.1007/978-3-030-27177-0_3
  • Spielberg, Steven (director), 2002, Minority Report, film.
  • Stanton, Steven J., Walter Sinnott-Armstrong, and Scott A. Huettel, 2017, “Neuromarketing: Ethical Implications of Its Use and Potential Misuse,” Journal of Business Ethics, 144(4): 799–811. doi:10.1007/s10551-016-3059-0
  • Stein, Dan, and Ilina Singh (eds.), 2020, Global Mental Health and Neuroethics, London: Academic Press.
  • Strawson, Galen, 2004, “Against Narrativity,” Ratio, 17(4): 428–52. doi:10.1111/j.1467-9329.2004.00264.x
  • Thomson, Judith Jarvis, 1985, “The Trolley Problem,” The Yale Law Journal, 94(6): 1395–1415. doi:10.2307/796133
  • United States v. Semrau, No. 11–5396 (6th Cir. 2012).
  • Urban, Kimberly R., and Wen-Jun Gao, 2014, “Performance Enhancement at the Cost of Potential Brain Plasticity: Neural Ramifications of Nootropic Drugs in the Healthy Developing Brain,” Frontiers in Systems Neuroscience, 8: 38. doi:10.3389/fnsys.2014.00038
  • Valizadeh, Seyed Abolfazl, Franziskus Liem, Susan Mérillat, Jürgen Hänggi, and Lutz Jäncke, 2018, “Identification of Individual Subjects on the Basis of Their Brain Anatomical Features,” Scientific Reports, 8(1): 5611. doi:10.1038/s41598-018-23696-6
  • Vincent, Nicole A. (ed.), 2013, Neuroscience and Legal Responsibility, first edition, New York: Oxford University Press.
  • Vohs, Kathleen D., and Jonathan W. Schooler, 2008, “The Value of Believing in Free Will: Encouraging a Belief in Determinism Increases Cheating,” Psychological Science, 19(1): 49–54. doi:10.1111/j.1467-9280.2008.02045.x
  • Walker, Mary Jean, and Catriona Mackenzie, 2020, “Neurotechnologies, Relational Autonomy, and Authenticity,” IJFAB: International Journal of Feminist Approaches to Bioethics, 13(1). doi:10.3138/ijfab.13.1.06
  • Warren, Samuel D., and Louis D. Brandeis, 1890, “The Right to Privacy,” Harvard Law Review, 4: 193–220.
  • Waters, Theodore E. A., and Robyn Fivush, 2014, “Relations Between Narrative Coherence, Identity, and Psychological Well-Being in Emerging Adulthood,” Journal of Personality, 83(4): 441–451. doi:10.1111/jopy.12120
  • Weisberg, Deena Skolnick, Frank C. Keil, Joshua Goodstein, Elizabeth Rawson, and Jeremy R. Gray, 2008, “The Seductive Allure of Neuroscience Explanations,” Journal of Cognitive Neuroscience, 20(3): 470–77. doi:10.1162/jocn.2008.20040
  • Zuk, Peter, Laura Torgerson, Demetrio Sierra-Mercado, and Gabriel Lázaro-Muñoz, 2018, “Neuroethics of Neuromodulation: An Update,” Current Opinion in Biomedical Engineering, 8: 45–50. doi:10.1016/j.cobme.2018.10.003

Acknowledgments

The author would like to acknowledge the research assistance of Yaning Chen for this project.

Copyright © 2021 by
Adina Roskies <adina.roskies@dartmouth.edu>
