The Moral/Conventional Distinction
Contemporary interest in the idea that there is a psychologically real and philosophically important distinction between moral judgments and conventional judgments can be traced to the work of psychologist Elliot Turiel. Starting in the 1970s, Turiel and his collaborators borrowed some ideas from philosophers who had written on the nature of morality and convention, and conducted a series of experiments demonstrating that young children react very differently when asked about prototypical moral transgressions, like one child hitting another, and prototypical conventional transgressions, like wearing pajamas to school. On the basis of this work, Turiel proposed and defended a definition of morality according to which moral transgressions involve harm, injustice, or the violation of rights. Other researchers, notably Richard Shweder and Jonathan Haidt, argued that Turiel’s definition “does not travel well”, because people in non-Western cultures treat a much wider range of transgressions as moral. This led to an ongoing debate about the extent to which moral cognition differs across cultures.
During the last two decades, philosophers have become increasingly interested in the empirical and theoretical work of Turiel and his critics. Some philosophers have argued that Turiel’s findings support moral nativism. Other philosophers have invoked work in the Turiel tradition in debates over moral sentimentalism, moral internalism, and the moral responsibility of psychopaths.
Sections 1 and 2 discuss some of the philosophical precursors of Turiel’s work. Sections 3 through 10 discuss the empirical findings, debates about their empirical implications, and philosophical arguments about how they are best interpreted. Section 11 surveys the philosophical arguments that have invoked work in the Turiel tradition.
- 1. Introduction
- 2. Two Facts About The Literature on the Definition of Morality
- 3. Turiel’s Project
- 4. Does the “Moral” Pattern of Criterion Judgments Actually Identify Moral Judgments?
- 5. The Natural Kind Defense
- 6. The Definition of Morality
- 7. Empirical Challenges to Turiel’s Definition of Morality: Does It Fail in Other Cultures?
- 8. Empirical Challenges to Turiel’s Definition of Morality: Does it Fail with “Non-Prototypical” Transgressions?
- 9. The Origin of Moral Judgments: Reason, Emotion, Social Learning or Innate Cognitive Machinery
- 10. What Does the Moral/Conventional Task Tell Us About Moral Judgments?
- 11. The Uses of The Moral/Conventional Task in Philosophy
- Academic Tools
- Other Internet Resources
- Related Entries
Starting in the early 1950s, with the publication of R.M. Hare’s The Language of Morals (1952), a large philosophical literature began to appear aimed at specifying the distinctive or essential characteristics of moral judgments and moral principles. Participants in these discussions often said their goal was to provide and defend a definition of morality, and The Definition of Morality (Wallace & Walker 1970a) was the title of a valuable collection of papers drawn from that literature that included essays by Elizabeth Anscombe, Philippa Foot, William Frankena, Alasdair MacIntyre, Peter Strawson, and a number of other well-known philosophers. Another large literature, much of it quite technically sophisticated, was inspired by David Lewis’s seminal volume, Convention, published in 1969. But it is only much more recently that philosophers have focused on the distinction between moral judgments and conventional judgments, and sought to use that distinction in arguments about a wide array of philosophical topics, ranging from the plausibility of metaethical theories to the moral responsibility of psychopaths.
The emergence of a philosophical literature analyzing and invoking what has become known as “the moral/conventional distinction” can be traced to a growing awareness, among philosophers, of the enormously influential work of developmental psychologist Elliot Turiel and his many collaborators, most notably Melanie Killen, Larry Nucci, and Judith Smetana. This group of researchers is often called “social domain theorists”. Sections 3 through 10 of this entry will focus on debates, in both psychology and philosophy, about the existence, the nature, and the importance of the moral/conventional distinction. Section 11 will consider some of the ways in which the social domain theorists’ account of the moral/conventional distinction, and related ideas, have been used in philosophical discussions of a variety of issues.
The literature on convention that emerged in the wake of Lewis’s book is rarely discussed in the work of Turiel and his collaborators and it plays almost no role in their thinking. But the philosophical literature on the definition of morality did play an important role in Turiel’s thinking. So before considering social domain theory, it will be useful to recount a pair of relevant facts about that literature.
2. Two Facts About The Literature on the Definition of Morality
The first fact concerns the goal that philosophers who contributed to the definition of morality literature were trying to achieve. That goal is best explained with the help of a crucial distinction. Often, when we ask whether a person’s judgment is moral, what we want to know is whether her judgment on some moral matter is true—or something in that vicinity: correct, or valid, or justified, or wise. What we are asking, to use Frankena’s (1967) useful terminology, is whether the judgment is moral as opposed to immoral. But one point on which these philosophers were crystal clear is that moral truth is not what they were trying to explain. Rather, borrowing again from Frankena, what they were trying to do was to distinguish moral judgments and principles from non-moral judgments and principles. They wanted to know how to determine whether a person’s judgment was a moral judgment rather than some other kind of judgment—prudential, or religious, or aesthetic, or legal. Whether the judgment was true, or valid, or justified was simply not their concern. When considering the work of Turiel and his followers, it is particularly important to keep this distinction in mind, because it sometimes seems that these authors, and some of their critics, don’t. After offering evidence putatively establishing that a particular judgment is moral rather than conventional (and thus that it is moral rather than non-moral), authors will sometimes go on to write as though they had established that the judgment is true (see, for example, Nucci 2001: 104–106).
The second fact that will play an important role in our discussion of social domain theory concerns the outcome of the philosophers’ attempts to find a definition of morality. After decades of effort by dozens of authors, including some of the most eminent philosophers writing in the second half of the twentieth century, the project failed to reach anything close to consensus. Disagreements persisted on just about every issue of methodology and substance. One central methodological issue was whether the goal of the endeavor was descriptive, aimed at characterizing how the concept of morality is actually used, or normative, aimed at specifying how the concept of morality should be used. Frankena took the project to be normative, and he claimed a number of prominent philosophers, including Richard Brandt and G.H. von Wright, as allies. But other eminent figures, including R.M. Hare, H.L.A. Hart, and Alan Gewirth, took the project to be descriptive (Frankena 1967 [1970: 149–151]).
In this philosophical literature, there is a long list of features that were argued to be necessary if a judgment or a rule is to count as moral. Perhaps the most widely discussed of these was Hare’s proposal that moral rules must be “universalizable”. As Hare unpacked the notion, it required that there be no names or definite descriptions in moral rules. Another widely discussed proposal, also due to Hare, was that moral rules are “prescriptive”. What this means is that the “action-guiding force [of moral rules] derives from the fact that they entail imperatives” (Wallace & Walker 1970b: 9). A third proposal was that if an action-guiding principle is a moral principle for a person, then she must regard it as “overriding or supremely important” (Frankena 1967 [1970: 155]). “In cases of conflict between moral and nonmoral principles, the former are necessarily overriding” (Taylor 1978: 44; for a similar proposal, see Cooper 1970: 95). Yet another frequently discussed necessary condition was that moral rules are behavior-guiding rules whose violation is met with social sanctions, “the reproach of one’s neighbors” (Cooper 1966 [1970: 73]) or something more serious, like ostracism (Sprigge 1964 [1970: 129ff]). This was sometimes paired with the idea that moral transgressions are followed by the transgressor sanctioning himself with feelings of guilt or shame or disliking himself (Wallace & Walker 1970b: 14; Sprigge 1964 [1970: 130]). All of these proposals were “formal” in the sense that they did not impose any constraints on the contents of moral rules or moral judgments. And this is far from a complete list of the formal conditions that were proposed; there were many more (Frankena 1963 offers an extensive list, along with many references).
There was no shortage of critics of these formal conditions. Wittgensteinians, who maintained that “moral” was a family resemblance term, denied that there are any necessary conditions for the application of the term. MacIntyre (1957), inspired by Sartre, argued that many moral judgments were neither universalizable nor (in Hare’s sense) prescriptive. Sprigge (1964) offered a quite different argument against universalizability, as did Winch (1965). And so it went. Nothing on the long list of formal conditions that were proposed achieved anything even close to consensus in the philosophical literature on the definition of morality.
Even more controversial was the question of whether more substantive social requirements should be built into the definition of morality. For example, Toulmin (1950) argued that a concern for the harmony of society is part of the meaning of “moral”, and Baier (1958 [1970: 199ff]) proposed that moral rules “must be for the good of everyone alike”. On these substantive principles, too, it is clear that no agreement was reached.
3. Turiel’s Project
Elliot Turiel (1938–) was a student of Lawrence Kohlberg (1927–1987), who was, in turn, influenced by the work of Jean Piaget (1896–1980). All three have done seminal work on moral development in children and adolescents, and all three share the view that reasoning and reflection on one’s own experience play a central role in that process. This view, often called “constructivism”, is commonly contrasted with nativist accounts of moral development that focus on the role of the child’s innate endowment, and with social learning accounts that emphasize the process of acquiring moral principles from parents and other important people in the child’s environment (Killen & Smetana 2015: 705–709; Killen & Dahl 2018: 23–25).
Importantly, Piaget and Kohlberg also shared the view that over the course of development, normative reasoning emerges in stages. According to both Piaget and Kohlberg, in the first stage of this process young children’s view of morality is “heteronomous”—it is a set of rules imposed and enforced by others—and moral behavior is simply a matter of complying with rules set by adults. On Kohlberg’s account, children from about two and a half years of age to about six judge that certain behaviors are wrong because they know those behaviors are likely to be punished, and their understanding of wrongness is, near enough, exhausted by the idea of punishment: wrong behavior is seen as simply behavior that is typically punished. Later, as the child matures, different normative rules come to be viewed in different ways. As Turiel portrays it,
[t]his theoretical perspective can be referred to as a “differentiation” model in that moral reasoning emerges through its differentiation from nonmoral processes: At lower developmental levels convention and morality are presumed to be undifferentiated, while at higher levels the two are differentiated…. (Turiel 1977: 78)
Turiel was skeptical of the differentiation model and the claim that there is only one sort of normative cognition in young children. He was convinced that moral cognition is distinct from cognition about social conventions, that the distinction is present quite early in development, and thus
that social convention and morality should not be treated as part of the same conceptual and developmental package, but as distinct domains. (Turiel 1977: 79)
In order to make the case for this claim, he had to show that children make characteristically moral judgments about some transgressions, and that their judgments about conventional transgressions were systematically different. To do that, Turiel needed an empirical test that would indicate whether a normative judgment made by an experimental participant—child or adult—was a moral judgment or a judgment about a conventional matter. In constructing his test, Turiel drew inspiration from the philosophical literature on the definition of morality. He focused on several of the necessary conditions for moral judgments that philosophers had proposed and used them to guide the construction of the test he needed.
One of these was universalizability. Though Hare had relied on a linguistic account of universalizability, Turiel interpreted universalizability as a claim about how a specific moral judgment might be generalized to situations involving other people, other places, and other times. “Moral prescriptions”, he tells us, “are universally applicable in that they apply to everyone in similar circumstances” (Turiel 1983: 36; italics in the original). So if a young participant in an experiment judges that it is wrong for a child in her own school to push someone off a swing, and if that judgment is a moral judgment, we would expect the participant to judge that it is wrong for a child who attends a different school in another town or in another country to push someone off a swing in similar circumstances. We’d also expect the participant to judge that the same action—pushing a schoolmate off a swing—would be wrong if it happened sometime in the past or if it were to happen sometime in the future in similar circumstances.
A second feature discussed in the philosophical literature that was adopted by Turiel was the “categoricalness” of moral judgments. To motivate this feature of his test he quotes the philosopher Alan Gewirth:
Judgments of moral obligation are categorical in that what persons ought to do sets requirements for them that they cannot rightly evade by consulting their own self-interested desires or variable opinions, ideals, or institutional practices. (Gewirth 1978: 24, quoted in Turiel 1983: 35)
Since institutional practices cannot alter moral obligations, we should expect that if an experimental participant has judged that it is wrong to push someone off a schoolyard swing, and that judgment is a moral judgment, then the participant would judge that it would be wrong in another school where there was no rule against pushing people off a swing, and it would be wrong even if the principal in her own school said that there was no rule against it. In the jargon that has developed in the literature growing out of Turiel’s work, these questions are said to probe for “authority independence”. The test that Turiel proposed to determine whether a judgment is a moral judgment includes one or more questions assessing whether the participant takes her judgment to be universalizable and one or more questions assessing whether she takes her judgment to be authority independent. The examples used and the details of the questions asked vary with the age of the participant and the details of the participant’s culture (Turiel 1989).
Both universalizability and categoricalness are “formal”—they do not impose any constraints on the content of moral rules or moral judgments. But Turiel also held that there are substantive features that moral judgments share. Moral judgments, he maintained, deal with issues linked to harm, justice or rights. Thus if an experimental participant has made a genuinely moral judgment and is asked to explain why the behavior in question is wrong, she will typically appeal to the harm that has been done, or to injustice or the violation of someone’s rights. Though Turiel found inspiration for his focus on harm, justice, and rights in the philosophical literature he favored, his choice of these features was largely motivated by his constructivist account of how children acquire moral beliefs. In simple “prototypical” cases, he maintained, the child can see that the behavior has harmful consequences, and that leads the child to recognize that the behavior is wrong.
Children’s moral judgments are … derived … from features inherent to social relationships—including experiences involving harm to persons, violations of rights, and conflicts of competing claims. (Turiel 1983: 3)
If, for example, one hits another, thereby causing physical harm, an individual’s perception of that event as a transgression stems from features intrinsic to the event (e.g., from a perception of the consequences to the victim). (Turiel 1977: 80)
Since the hypothesis that Turiel hoped to establish is that children distinguish moral judgments from judgments about social convention, he also needed an account of the distinctive features of judgments about social convention. Here too, he sought guidance in the philosophical literature, though his palette was more limited. There are occasional references to Lewis’s Convention, and to other authors, but much of his discussion of convention relies on the work of his Berkeley colleague, John Searle (1969) (see, for example, Turiel 1983: 37–38). Social conventions, on Turiel’s account, are typically local—they differ in different places and at different times. So if a young participant judges that an action is wrong at her school, and if she takes the action to be a transgression of a social convention, then she will not insist that it would be wrong in other schools or at other times. Social conventions are also alterable, since people or their leaders can decide to change them. Thus if a participant views a transgression to be a violation of a conventional rule, she will agree that it would not be wrong if an appropriate authority were to change the rule. The function of social conventions, according to Turiel, is to facilitate social coordination. So if asked why a transgression of a social convention is wrong, a participant will typically focus on the factors that sustain social coordination and the consequences of disrupting it.
With these putative features of moral and conventional judgments in hand, Turiel proceeded to construct an empirical test to determine whether an experimental participant’s judgment about a transgression is a moral judgment or a conventional judgment. The test begins with a brief vignette describing a hypothetical transgression. Since Turiel was interested in determining whether young children could distinguish moral transgressions from conventional transgressions, the transgressions typically involve events that would be familiar to kids. Participants are then asked a series of questions aimed at determining whether they think the action described is wrong, whether they think the wrongness of the action is “authority independent”, and whether they would universalize the judgment, making the same judgment if the transgression occurred at another place or time. These questions can be asked in a variety of ways depending on the age of the participants and the goals of the study. In the jargon favored by Turiel and other social domain theorists, the responses to these questions are called “criterion judgments”, and criterion judgments are crucial in determining whether the participant takes the transgression to be moral or conventional. If the action is judged to be wrong, authority independent, and generalizable, the participant is classified as having viewed the transgression as moral, while if it is judged to be wrong, authority dependent, and not generalizable, the participant is classified as having taken the transgression to be conventional (Turiel 1983: 52; Smetana 1993: 115). 
Participants are also asked to explain why the transgression is wrong, and responses are assessed to determine whether they invoke harm, justice or rights, or whether they invoke factors, like social coordination, tradition, custom, appeal to authority, or the likelihood of punishment, that Turiel maintains are the sorts of justifications to be expected if participants view the transgression as conventional. Turiel calls these responses “justification categories” (Turiel 1983: 52–53, 66–68).
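The classification scheme just described amounts to a small decision procedure. The sketch below is purely illustrative: the field names, keyword codings, and the `Responses` structure are invented for exposition and are not drawn from Turiel’s actual coding manuals, which score free-text responses with far more nuance.

```python
from dataclasses import dataclass

@dataclass
class Responses:
    """One participant's answers about a single transgression vignette."""
    judged_wrong: bool            # "Is the action wrong?"
    authority_independent: bool   # wrong even if an authority permits it?
    generalizable: bool           # wrong at other places and times too?
    justification: str            # coded justification category

# Simplified justification categories (cf. Turiel 1983: 52-53, 66-68).
MORAL_REASONS = {"harm", "justice", "rights"}
CONVENTIONAL_REASONS = {"coordination", "tradition", "custom",
                        "authority", "punishment"}

def classify(r: Responses) -> str:
    """Classify a judgment as 'moral', 'conventional', or 'unclassified'
    from its pattern of criterion judgments and its justification."""
    if not r.judged_wrong:
        return "unclassified"
    if (r.authority_independent and r.generalizable
            and r.justification in MORAL_REASONS):
        return "moral"
    if (not r.authority_independent and not r.generalizable
            and r.justification in CONVENTIONAL_REASONS):
        return "conventional"
    return "unclassified"  # mixed patterns do occur (see Section 5)

# Prototypical cases:
hitting = Responses(True, True, True, "harm")
gum_in_class = Responses(True, False, False, "authority")
```

The “unclassified” branch matters: as Section 5 discusses, some findings (e.g., authority-independent but non-generalizable judgments) fit neither canonical pattern.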
From the mid-1970s through the mid-1980s, this experimental format was used in a large number of studies where the vignettes recounted events that the experimenters took to be clear cases of moral transgressions (like unprovoked hitting, stealing, and destroying someone else’s property) or clear cases of conventional transgressions (like chewing gum in class, talking without raising one’s hand, and calling the teacher by her first name), and the results, overwhelmingly, accorded with Turiel’s prediction. The transgressions that the experimenters took to be moral were typically judged to be wrong, authority independent, and generalizable to other places and times, and judgments that the transgressions are wrong were usually justified by appeal to the harm the transgression had caused. The transgressions that the experimenters took to be conventional were typically judged to be wrong and authority dependent, they were not generalized to other times or places, and they were justified by appeal to social expectations, the need for social coordination, the commands of people in authority, the threat of punishment, and similar factors. Moreover, these distinctions emerged very early in development. By their fourth birthday, and often earlier, most children had systematically different reactions to the sorts of transgressions that were used. These results were an overwhelming victory for Turiel in his disagreement with Kohlberg and Piaget. It is not the case that young children think of all normative rules in the same way; their normative cognition is not “undifferentiated”. Normative cognition in young children is much more subtle, more varied, and more complex than Kohlberg and Piaget had portrayed it. This was a major achievement in the study of child development. But the experimental results and the way they have been interpreted raise a number of important philosophical and psychological questions. We’ll address these questions in the sections that follow.
4. Does the “Moral” Pattern of Criterion Judgments Actually Identify Moral Judgments?
It is clear that studies eliciting criterion judgments and justification categories demonstrate that children respond quite differently to different sorts of normative transgressions. But why should we think that in most or all cases, when a participant judges that an action is wrong and then offers what Turiel and other social domain theorists take to be the moral pattern of criterion judgments, that the participant’s judgment that the action was wrong really was a moral judgment? The question is an important one for assessing Turiel’s achievement. It’s also important, as we will see in Section 10, because facts about Turiel-style criterion judgments and justification categories have been used to support claims about the nature of moral judgments and claims about the moral capacities of various groups of people, including children and psychopaths.
In formulating his account of criterion judgments indicating that a participant’s normative judgment is a moral judgment, Turiel focused on three features—authority independence, universalizability in space, and universalizability in time—that had been discussed by the philosophers who contributed to the definition of morality literature. But, as noted in Section 2, those philosophers reached no agreement about whether these features are necessary conditions for a judgment to count as moral. And Turiel made no effort to advance or resolve the arguments found in the philosophical literature. He contributed nothing to the descriptive project of analyzing the ordinary concept of moral judgment. Nor did he offer any normative arguments aimed at showing how our ordinary concept of moral judgment should be revised. Rather, it seems, he simply picked a cluster of disputed philosophical proposals, and claimed that they could be used to identify genuinely moral judgments. So what justification do we have for the claim that the criterion judgments Turiel uses actually do identify moral judgments? The problem is nicely, though inadvertently, underscored by Judith Smetana in a passage describing work using the social domain theorists’ experimental paradigm. “Children”, she tells us,
have been asked to make judgments along a set of dimensions that are hypothesized to differentiate moral and conventional rules. (Smetana 1993: 114–115; emphasis added)
Turiel and other social domain theorists have indeed hypothesized that the criterion judgments they focus on differentiate moral judgments from conventional judgments. But it is far from clear how they have defended that hypothesis. Can the claim that these criterion judgments enable us to identify moral judgments be defended?
5. The Natural Kind Defense
One possible defense was sketched by Kelly et al. (2007), and developed in more detail by Kumar (2015, 2016) and Stich (2019). The core idea is that the term “moral judgment” can be viewed as a natural kind term. In his widely discussed paper, “The Meaning of ‘Meaning’”, Hilary Putnam (1975) argued that it is the job of empirical science to determine the essential features of natural kinds like water and gold, and that these essential features can be used to specify the meaning of the term denoting the kind. Devitt (1996) and Kornblith (1998) have provided insightful accounts of how this process works. Very roughly, their story goes like this. To begin, the scientist focuses on what seem to be intuitively obvious examples of the kind in question. She seeks to discover properties that are shared by most of these intuitively obvious examples, and that are absent in most things that, intuitively, are not members of the kind. If this process leads to the discovery of a cluster of properties that is present in most intuitive examples and absent in most cases that, intuitively, are not members of the kind, the scientist then hypothesizes that this cluster of properties is nomologically linked and that its members are the essential features of the kind.
Kelly and colleagues suggested that this idea might be useful in interpreting Turiel’s work. On this interpretation, the ordinary term “moral judgment” is hypothesized to be a psychological natural kind term, and it is the job of psychology to determine the essential features of the psychological natural kind that the term denotes. The obvious way to do this would be for psychologists to discover a cluster of nomologically linked properties that are shared by most instances of what they take to be intuitively clear cases of moral judgments, one or more of which is missing in most cases of judgments that they intuitively take not to be moral judgments. This, Kelly and colleagues suggested, looks to be exactly what Turiel and his colleagues have done. Turiel and other social domain theorists take some of the transgressions they use in their experiments to be intuitively obvious cases of moral transgressions, and these are judged, by most participants, young and old, to be authority independent, and generalizable to different locations and to different times. The experimenters take other transgressions to be intuitively obvious cases of conventional transgressions, and these are judged, by most participants, to be authority dependent and not generalizable to different locations or to different times. If all of that is true, then—for the reasons set out by Putnam, Devitt, and Kornblith—Turiel and his colleagues can plausibly claim to have shown that both moral judgments and conventional judgments are indeed psychological natural kinds, and to have discovered the essential features of those kinds. And that would provide an excellent reason to think that the pattern of criterion judgments that is hypothesized to identify moral judgments actually does identify moral judgments, and that the pattern of criterion judgments that is hypothesized to identify conventional judgments actually does identify conventional judgments.
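The discovery procedure that Putnam, Devitt, and Kornblith describe can be given a toy rendering. The function below is a hypothetical sketch invented for exposition, not anything from the literature: it takes intuitive examples and non-examples of a kind, each represented as a set of observed properties, and returns the properties present in most examples and absent from most non-examples. The 0.9 threshold is likewise invented, reflecting Kumar’s point that the nomological clustering “is not supposed to be exceptionless”.

```python
def candidate_essential_properties(examples, non_examples, threshold=0.9):
    """Return properties shared by at least `threshold` of the intuitive
    examples of a kind and absent from at least `threshold` of the
    intuitive non-examples: the candidate nomological cluster."""
    all_props = set().union(*examples, *non_examples)

    def frequency(prop, cases):
        return sum(prop in case for case in cases) / len(cases)

    return {p for p in all_props
            if frequency(p, examples) >= threshold
            and frequency(p, non_examples) <= 1 - threshold}

# Toy data: nine of ten intuitively moral judgments show both criterion
# features; the intuitively conventional judgments show neither.
moral = [{"authority_independent", "generalizable"}] * 9 + [{"generalizable"}]
conventional = [set()] * 10
print(candidate_essential_properties(moral, conventional))
```

On this toy data the cluster {authority independence, generalizability} survives even though one intuitively moral case lacks a feature, which is just the tolerance for exceptions that the natural kind defense requires.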
For this natural kind defense to work it must, of course, be the case that the features that are used to identify moral and conventional judgments actually do form nomological clusters, and that most intuitive cases of moral judgment evoke the moral pattern of criterion judgments while most intuitive cases of conventional judgment evoke the conventional pattern of criterion judgments. Do the empirical findings support the natural kind defense? There is a large literature indicating that they do. The pattern of criterion judgments that the natural kind defense requires has been found in studies with participants ranging in age from toddlers to adults (Nucci & Turiel 1978; Smetana 1981; Nucci & Nucci 1982). It has also been found in participants of a number of different nationalities and religions (for reviews see Tisak 1995; Smetana 1993; Nucci 2001) and in children with a variety of developmental disorders, including autism (Blair 1996; Blair, Monson, & Frederickson 2001; Nucci & Herman 1982; Smetana, Kelly, & Twentyman 1984; Smetana, Toth, et al. 1999). There have been a few studies that did not fit the expected pattern, notably Nucci and Turiel (1993), who found that orthodox Jewish children judged that a variety of intuitively non-moral religious norm transgressions were authority independent though they were not generalizable, and Nichols (2002), who found that American college students judged that an intuitively conventional etiquette transgression was authority independent though not generalizable. But as Kumar notes, the nomological clustering that the natural kind defense requires “is not supposed to be exceptionless” (Kumar 2015: §5; 2016: §3.2).
However, in almost all of the studies cited in the previous paragraph, the intuitively moral transgressions used were the sorts of behavior that would be familiar to children. Kelly et al. (2007) dub them “schoolyard” transgressions. This was also the case in Blair’s (1995) study in which the experimental participants included incarcerated psychopathic murderers! In a pair of more recent studies, Kelly et al. (2007) and Fessler, Barrett, et al. (2015) used a range of more “grown-up” intuitively moral transgressions including whipping, slavery, stealing, wife beating and rape. Both studies found fairly dramatic departures from the pattern of criterion judgments that the natural kind defense requires. Stich (2019) argues that these studies pose a major challenge to the natural kind defense of the claim that Turiel’s “moral” pattern of criterion judgments can be used to identify genuine moral judgments. But both the methods and the analyses used in the Kelly et al. and the Fessler et al. studies have been challenged, and these critiques have led to lively debates. So perhaps the best conclusion to be drawn at this writing is that the success of the natural kind defense is very much an open question.
6. The Definition of Morality
The argument set out in the previous section suggests that “moral judgment” can be viewed as a psychological natural kind term, and the essential features of moral judgments are that the person making them takes them to be authority independent and generalizable to other times and other places. On the view urged by Putnam, Devitt, and Kornblith, these three features enable us to provide a scientifically justified definition of “moral judgment”. But while Turiel and his collaborators employ these features as “criterion judgments” used to determine whether a participant’s normative judgment is a moral judgment, they do not treat them as providing a definition of “moral judgment”. This is not because the social domain theorists are uninterested in definitions. Quite the opposite! Many of their publications include discussions of “the definition of morality” (or of “the moral domain”). Those discussions often acknowledge the influence of philosophical accounts of morality, and some of them discuss philosophical literature in considerable detail. Indeed, according to Turiel and Davidson,
more than any other discipline, philosophy has provided a basis for definitional criteria …, [and] the working definitions guiding our research on concepts of convention and moral judgments … borrowed liberally from some of those philosophical treatments…. (1986: 118)
However, these definitions of morality, which have evolved somewhat over time, can be rather puzzling. It is often less than clear what the definitions are, what their status is, how they are to be evaluated, and what role they play in the research of social domain theorists. What is clear is that the definitions have provoked a great deal of controversy. Some of the most trenchant and empirically sophisticated critiques of Turiel’s work have focused on his definitions of morality. The primary goal of this section is to offer an account of the ways in which claims about the definition of morality interact with other aspects of the work of Turiel and his colleagues. The two sections that follow will consider some of the more influential critiques of Turiel’s definition.
Let’s start with a few examples. In Turiel (1977: 80) we are told: “The distinction between convention and morality implies a narrow definition of morality as justice”. Much the same idea appears in Nucci and Turiel (1978). After maintaining that “social conventions are defined relative to the social context”, they go on to say that “moral issues are … structured by underlying concepts of justice” (1978: 400–401). But later in that paper, they describe their study in which “events classified as moral involved the justice, welfare, or rights of individuals or groups” (1978: 402). This anticipates the formulation used in a frequently quoted passage at the beginning of Turiel’s The Development of Social Knowledge (1983), where we’re told that
the moral domain refers to prescriptive judgments of justice, rights and welfare pertaining to how people ought to relate to each other. (1983: 3)
Social domain theorists often characterize diminishing someone’s “welfare” as causing “harm”, and in many places the term “harm” replaces “welfare” in their definition. This is particularly important because in the vast majority of studies by Turiel and other social domain theorists the transgressions that they take to be moral result in obvious harm to the victim, and as we’ll see in the following sections, the role of harm in characterizing moral transgressions has become a central focus in the literature. In some recent publications by social domain theorists, fairness and equality have been added to the definition.
What is the status of these evolving definitions? One obvious proposal is that they are simply terminological stipulations intended to inform the reader about how these authors have decided to use the term “morality”. But that’s clearly not what’s intended, since Shweder, Turiel, and Much (1981: 289) insist that “[t]he meaning of ‘morality’ is something discovered, not stipulated”. Another possibility is that the definitions are attempts to capture the ordinary meaning of the term as it is used by speakers of contemporary English. But that proposal is also clearly mistaken. “The terminology”, Turiel tells us,
does not necessarily correspond to general, nonsocial scientific usage of the labels “convention” and “morality”. Clear or systematic patterns of general use of the labels would be difficult to discern. As labels, the terms are often used interchangeably or even inconsistently; sometimes they correspond to the definitions provided here and sometimes they are inconsistent with them. (Turiel 1983: 34)
In trying to understand the status of these definitions, one useful hint is that Turiel and his associates often talk about improving or refining them (see, for example, Killen & Smetana 2008: 4). In Turiel, Killen, and Helwig (1987: 169) we’re told that
[t]he general strategy has been a recursive one: using definitions of domain to inform data-gathering procedures and data to inform the descriptions insofar as they apply to individuals’ judgments. As stressed elsewhere (Turiel & Davidson 1986), these procedures must be regarded as efforts based on working hypotheses, since the range and boundaries of domains are still not precise. The working definitions for the domains have also been partly guided by philosophical treatments….
And in Turiel and Davidson (1986), the strategy is further elaborated as follows:
The formation of systems for classifying domains of knowledge is both a definition and a research issue in that definitions and data interpretations feed back on each other. Our strategy has been one of moving from (a) initial domain identification and definitional criteria…, to (b) the gathering of data bearing on how subjects use and distinguish the categories, to (c) reformulations of the categories and criteria—and so on. (1986: 114)
These passages suggest the following account of the role and justification of definitions of morality in the work of Turiel and his collaborators, and of the way philosophy contributed to their project. As noted in Section 3, the philosophical literature played a role in designing the initial experiments that enabled Turiel to show that children distinguish different sorts of normative transgressions and thus that their normative judgments are not “undifferentiated”. The features probed by criterion judgment questions—authority independence and generalization to other places and other times—were suggested by the philosophical literature. Having shown that a modest number of schoolyard transgressions that seem, intuitively, to be moral evoke the (putatively) moral pattern of criterion judgments, a natural question to ask was how to characterize the full set of transgressions that would evoke moral criterion judgments. Characterizing that set of transgressions would constitute a “definition of morality”. Here again philosophy played a role by suggesting what Turiel, Killen, and Helwig (1987) characterize as their “working definitions” of the moral domain. Using such a working definition, Turiel and his colleagues could fine-tune the transgression descriptions and criterion judgment questions used in their experiments. And sometimes the results of the experiments required modification of the working definition of morality, which expanded from matters involving justice, to justice, rights, and welfare or harm, and then expanded again to include matters involving fairness and equality. This back-and-forth between definitions and data-gathering procedures is the “recursive” strategy that Turiel, Killen, and Helwig (1987) were alluding to.
7. Empirical Challenges to Turiel’s Definition of Morality: Does It Fail in Other Cultures?
The definition of morality proposed by Turiel and his associates is an empirical hypothesis about the sorts of transgressions that will evoke moral criterion judgments. And while the hypothesis has evolved over the last four decades, a core feature of work in the Turiel tradition is that harm plays a central role. Most of the experiments designed to explore the boundaries of the moral domain include transgressions that cause obvious harm, and it is predicted, almost always correctly, that these will lead participants to make moral criterion judgments. It is also predicted that transgressions that do not cause obvious harm, and that are not obvious examples of injustice or the violation of rights—or, more recently, of unfairness or of treating people unequally—will not evoke moral criterion judgments, and thus that people do not treat them as moral transgressions. But this second prediction has been the focus of a widely discussed empirical critique of Turiel’s definition of morality. The critics’ central contention, in Jonathan Haidt’s memorable phrase, is that Turiel’s definition of morality “does not travel well” (Haidt 2012: 22).
The first influential challenge along these lines was mounted by Richard Shweder and colleagues who argued that the sort of normative judgments that Turiel would classify as conventional play a much smaller role in many non-Western societies, and that in those societies the moral domain—the class of transgressions that Turiel’s test or something in that vicinity would classify as moral—includes a wide range of actions in which no one is harmed, no injustice is done, and no rights are violated.
Much of the evidence Shweder offered for this claim came from an extensive study of the normative judgments of orthodox Hindus in India and of Christians and Jews in the USA (Shweder, Mahapatra, & Miller 1987). The Indian participants lived in a residential community surrounding an eleventh and twelfth century Hindu temple in the city of Bhubaneswar. The Americans were recruited from a middle class neighborhood near the University of Chicago. The study collected judgments about a wide range of normatively regulated practices including
forms of address…, sleeping arrangements, incest avoidance, dietary practices, forms of dress, … begging, nepotism, monogamy, wife beating, … the protection of persons from physical and psychological harm, funeral rites …. (1987: 4)
A major goal of the study was to use questions similar to Turiel’s criterion judgment questions to test Turiel’s hypothesis that transgressions that did not involve harm or the other elements included in his definition of morality would be judged to be conventional rather than moral. Participants were presented with a variety of brief vignettes, including these:
- While walking, a man saw a dog sleeping on the road. He walked up to it and kicked it.
- The day after his father’s death, the eldest son had a haircut and ate chicken.
- A widow in your community eats fish two or three times a week.
Not surprisingly, both the Indian and the American participants judged the behavior described in (1) to be wrong, and, using the questions that Shweder and colleagues constructed to determine whether participants regarded a judgment to be moral or conventional, both groups regarded (1) as a moral transgression. That, of course, is what Turiel would predict. But the responses to (2) and (3) appeared to be much more problematic for Turiel. Though the American participants did not judge them to be wrong at all, the Indian participants judged them to be quite serious transgressions, and their responses to questions designed to determine whether they viewed the transgressions as moral or conventional indicated that they viewed them as moral. Since the son’s behavior in (2) and the widow’s behavior in (3) do no obvious harm, commit no obvious injustice, and seem to violate no one’s rights, Shweder and colleagues argued that these responses, along with the Indian participants’ responses about many of the other vignettes, posed a major challenge for Turiel. The class of transgressions treated as moral by the Indian participants was much larger than Turiel’s definition of morality predicted, and the class of conventional transgressions was vanishingly small.
Turiel was not convinced. In a lengthy response, Turiel, Killen, and Helwig (1987) argued that Shweder’s putatively troubling examples relied on profound differences between the non-moral beliefs of Shweder’s Indian and American participants. For the Americans, getting a haircut, eating chicken, and eating fish are all entirely harmless, so it is no surprise that they judged the son’s behavior and the widow’s to be unproblematic. But, as Shweder himself had noted in an earlier paper, the Indians believed that if a son were to eat chicken the day after his father’s death, it would result in his father’s soul not receiving salvation (Shweder & Miller 1985: 48). They also believed that fish is a “hot” food that would stimulate the widow’s sexual appetite and might lead her to have sex, which would offend her husband’s spirit and lead to great suffering (Turiel, Killen, & Helwig 1987: 209). The Indian participants classified more transgressions as moral, Turiel and colleagues argued, not because they have a more inclusive definition of morality, but rather because they believed many more actions to be harmful.
This response led Jonathan Haidt to design a study in which he explored people’s reactions to what he characterized as “harmless taboo violations” (Haidt 2012: 19). Several of these, including the two below, were, and were intended to be, rather shocking:
- Dog: A family’s dog was killed by a car in front of their house. They had heard that dog meat was delicious, so they cut up the dog’s body and cooked it and ate it for dinner.
- Kissing: A brother and sister like to kiss each other on the mouth. When nobody is around, they find a secret hiding place and kiss each other on the mouth, passionately.
Another vignette that Haidt used has become quite famous:
- Chicken: A man goes to the supermarket once a week and buys a dead chicken. But before cooking the chicken, he has sexual intercourse with it. Then he cooks it and eats it. (Haidt, Koller, & Dias 1993: 617)
After each vignette, rather than simply assuming that participants believed the actions described to be harmless, Haidt asked, “Is anyone hurt by what [the actor] did? Who? How?” (Haidt, Koller, & Dias 1993: 617).
Since Haidt was interested in Shweder’s claim that the moral domain was much larger for people in non-Western societies, his study was designed to compare the judgments of participants near his own university, in Philadelphia, with those of participants in two Brazilian cities—Porto Alegre, in the south, which was industrialized, westernized, and relatively wealthy, and Recife, in the north, which was much poorer, less industrialized, and less westernized. As Shweder would have predicted, Haidt found that participants in Recife were more likely than participants in Philadelphia to view behaviors like those in Dog, Kissing, and Chicken as moral transgressions, using Haidt’s version of the questions intended to determine whether a participant regarded a transgression as moral or conventional. Participants in Porto Alegre were in between the two. But there was also a surprise in the data. Haidt and colleagues had collected information about the socio-economic status (SES) of their participants, and they found that lower SES groups “moralized” more than higher SES groups in all three cities, and that the effect of SES was much larger than the effect of location.
Haidt’s study posed an important and influential challenge to Turiel’s harm-anchored definition of morality. But it was not without problems. In constructing the questions they would use to determine whether participants regarded a judgment to be moral or conventional, Shweder and colleagues had included a question about punishment:
Do you think a person who does (the practice under consideration) should be stopped from doing that or punished in some way? (Shweder, Mahapatra, & Miller 1987: 42)
And Haidt retained a version of the punishment question in his study: “Should [the actor] be stopped or punished in any way?” (Haidt, Koller, & Dias 1993: 617). However, though Turiel and his colleagues sometimes collect judgments about punishment, they explicitly reject their use in assessing whether participants view a transgression as moral or conventional, since both moral and conventional transgressions are often punished. Smetana, Rote, et al. (2012: 683) insist that judgments about punishment “are not a formal criterion for differentiating the domains”. And Smetana, Jambon, and Ball (2018) tell us that while they
commonly assess children’s judgments of … the degree of censure that the transgressor deserves,…. these evaluations are less informative than criterion judgments because all rule violations are, by definition, unacceptable, wrong, and punishable. (2018: 270)
The other question in Haidt’s pared-down battery of queries intended to determine whether participants regarded a judgment as moral was aimed at assessing whether participants generalized their judgment:
Suppose you learn about two different foreign countries. In country A people [do that act] very often, and in country B, they never [do that act]. Are both of these customs OK, or is one of them bad or wrong? (Haidt, Koller, & Dias 1993: 617)
Since Turiel’s version of the criterion judgment questions also includes one or more questions aimed at determining whether participants view their judgment to be authority dependent or authority independent, it is not clear that Haidt’s two probes (or the questions that Shweder used) constitute a fair test of the hypothesis that the transgressions evoking the moral pattern of criterion judgments all involve harm.
During the last three decades, there have been a number of studies that cleaved more closely to the authority independence and generalization criterion judgments advocated by Turiel and his associates and explored the sorts of transgressions that evoked those judgments in a variety of religious groups and a variety of cultural settings. Most of the studies undertaken by Turiel and researchers associated with him used intuitive moral transgressions that seemed (to contemporary Western observers) to be obviously harmful, and intuitive conventional transgressions that seemed (again, to contemporary Western observers) to obviously not be harmful. These studies confirm the prediction that transgressions that evoke moral criterion judgments are harmful and transgressions that don’t evoke moral criterion judgments aren’t. But none of these studies are problem free. None of them follow Haidt’s lead and actually ask participants whether the transgressive behavior is harmful. Also, most of the transgressions used in these studies are of the tame “schoolyard” variety that can be comfortably used with children. None of these studies used anything like the more flamboyant “harmless taboo violations” that were center stage in Haidt’s study. So we do not know whether people in these cultures and religious groups would offer moral or conventional criterion judgments in response to transgressions like the ones that Haidt used.
By contrast, two recent studies by Berniūnas and his colleagues (2016, 2020) did use Haidt-style transgressions, including eating a pet dog that has been run over, using the national flag to clean the toilet, and defecating in a Buddhist temple. They also followed Haidt in explicitly asking whether anyone was harmed by these actions. But, unlike Haidt, they solicited Turiel-style criterion judgments using questions that cleaved quite closely to those used by Turiel and other social domain theorists. They found that many of the transgressions that the participants themselves characterized as harmless evoked moral rather than conventional criterion judgments.
Overall, the bottom line is that on the question of whether Turiel’s hypothesis about the definition of morality applies cross-culturally, the jury is still out. Many of the earlier studies confront serious methodological issues. And while the two recent studies, by Berniūnas and colleagues, that avoid earlier methodological problems report findings that challenge Turiel’s definition, much more methodologically fastidious research is needed.
8. Empirical Challenges to Turiel’s Definition of Morality: Does it Fail with “Non-Prototypical” Transgressions?
Thus far our focus has been on work that sought to test Shweder’s contention that Turiel’s definition of morality would not travel well because people in other cultures would offer moral-pattern criterion judgments about transgressions that do not involve harm or justice or rights. But Haidt’s analysis of the responses of low SES American participants to transgressions that are edgier than the ones Turiel and his colleagues typically use suggests that we do not have to travel abroad to gather data that might challenge Turiel’s definition of morality. This was an idea that had occurred to Turiel quite independently. Two years before Haidt’s study appeared, Turiel, Hildebrandt, and Wainryb (1991) published a lengthy monograph in which they explored judgments of American high school and college students about a variety of what they labeled “nonprototypical” transgressions. “In the moral domain”, they tell us,
prototypical stimuli essentially present actions entailing harm, fairness, or violations of rights in the absence of strong conflicting concerns. (1991: 7)
The prototypical moral transgressions they used in these studies were killing and rape; the non-prototypical transgressions were abortion, pornography, homosexuality, and incest. Though they did not explicitly ask whether anyone was harmed by these behaviors, they used responses to the justification questions to determine whether participants thought that the behaviors were harmful. Not surprisingly, many of the participants who thought that abortion was “not all right” justified their judgment by appealing to the harm done to the unborn child. Many participants also thought that pornography caused harm, and they used this belief to justify their negative evaluation of pornography. The role of these beliefs is similar to the role of Hindu beliefs about the consequences of eating chicken the day after one’s father has died. Participants who have these beliefs see harm, and offer negative assessments about actions that people who don’t share the beliefs find unproblematic.
But for our purposes, the most interesting nonprototypical transgression used by Turiel and his colleagues was incest. “Of all the nonprototypical issues we investigated”, they tell us, “incest was most closely assimilated to reasoning in the moral domain” (1991: 78). A substantial number of participants judged that incest was “not all right”, they generalized the judgment to other countries, and they indicated that their judgment was authority independent—not contingent on the laws or practices that prevailed in the United States or elsewhere. So their responses fit the “moral” pattern of criterion judgments. But typically these participants did not justify their judgments by appeal to harm, justice or rights. Rather, they appealed to what Turiel and colleagues labeled “deterministic beliefs”, which are “standards dictated by psychological normality, biological order or religious order” or to “custom and tradition” (1991: 24 & 78). So the pattern of criterion judgments about incest provided by these participants looks to pose a direct challenge to Turiel’s definition of morality. They exhibit the “moral” criterion judgment pattern, but they do not judge that the transgression is harmful.
Intriguingly, in summing up these findings, Turiel and his colleagues appear to walk back the claim that “[t]he meaning of ‘morality’ is something discovered, not stipulated”. They recognize that the judgments about incest offered by some participants would be classified as moral on the basis of their criterion judgments. But for several reasons the authors have “not chosen this alternative”. They
believe that it is more precise to restrict “moral” to prototypical judgments and use different terms for judgments that are somewhat different….
most important, our terminology reflects a concern with identifying the components of reasoning, as well as assumptions about reality, that form part of social and moral judgments. (1991: 86)
Though this passage, like much of Turiel’s writing, is less than crystal clear, it seems to be saying that restricting the term “moral” to judgments about “prototypical” transgressions (those that involve “harm, fairness, or violations of rights in the absence of strong conflicting concerns”) and not applying it to judgments about incest is just a terminological choice.
9. The Origin of Moral Judgments: Reason, Emotion, Social Learning, or Innate Cognitive Machinery?
By the time they reach their fourth birthday, many children offer what social domain theorists take to be the “moral” pattern of criterion judgments about transgressions that involve harm. How do they recognize that these behaviors have a distinct status? According to Turiel’s “constructivist” account, both perception and reasoning play a central role. Here is a passage in which Turiel describes the process in some detail. It focuses on the hypothetical example of “Event A” in which a child in a nursery school who wants to use a swing that is being used by another child pushes the other child off the swing and hits him.
For the child to regard the act in Event A as wrong or as a transgression, it is not necessary to be told by others that it is wrong, or that he or she model others, or to perceive the act as a rule violation. Rather the child can generate prescriptions through abstractions from the experience itself (either as an observer or participant). Important elements in the perception of such an event would be the pain experienced by the victim and the reason for the offender’s act. By coordinating these different concepts, the child can generate prescriptions regarding the event. For instance, the child will connect his or her own experience of pain (an undesirable experience) to the observed experience of the victim. In turn, this will be related to the perceived validity of the child’s motive for harming the other person. Moreover, when the child is the victim, the perception of the undesirability of the offender’s act is even more direct and vivid.
One of the ways children form judgments of moral necessity regarding actions of the type depicted in Event A would be through comparison of performance of the act itself with its opposite. In the case of the act described in Event A, its negation (in the child’s mental activities) would result in a comparison of its occurrence with its nonoccurrence. If the constructed consequences of its nonoccurrence (the victim is unharmed) are judged to be more desirable than the consequences of its occurrence (the victim is harmed), then inferences will be made regarding how people should act under those circumstances (e.g., people ought not to hit each other). In attempting to understand their social experiences, children form judgments about what is right or wrong and fair or unfair. Such judgments evolve from the child’s experiences but are not latent in the experiences themselves. (Turiel 1983: 43)
This account makes it clear why Haidt favors the term “rationalist” for views like Turiel’s. The young child is portrayed as doing quite a lot of reasoning. But there is much in this account that is puzzling. The child is said to perceive the validity of the assailant’s motive. How does he do that? How does he know which motives are valid and which are not? The child also judges that it would be more desirable if the victim is unharmed than if the victim is harmed. Why does the child find this outcome more desirable? Having judged that it is more desirable if the victim is unharmed, the child then makes an inference about how people should act—“people ought not to hit each other”. But this conclusion does not follow logically from the premise that it is more desirable if the victim is unharmed. So if the child’s inference is deductive, then presumably the child is invoking one or more additional premises. What are they, and how did the child come to believe them? If the inference is not deductive, what sort of inference is it, and how does the child come to have the capacity to make this sort of inference? Having made the judgment that the act in Event A is wrong, the child will, if asked, make moral-pattern criterion judgments. How does he do that? Obviously, Turiel’s constructivist account of the processes underlying the formation of criterion judgments in response to questions about harmful transgressions leaves a lot unexplained.
One influential researcher who has offered a widely discussed alternative account is Shaun Nichols. Social learning is the process by which normative rules are acquired from parents and other important people in a child’s environment. And while Turiel and other social domain theorists downplay the importance of social learning, it plays a central role in Nichols’ theory. Another widely discussed idea is that emotion or affect is important in moral development and moral cognition. This is a central theme in Shaftesbury, Hutcheson, and Hume and in the work of many contemporary philosophers (Kauppinen 2014). Though emotion is rarely mentioned in Turiel’s work, it is a key element in Nichols’ explanation of how people distinguish conventional transgressions from transgressions involving harm.
On Nichols’ account, as children mature they acquire an increasingly sophisticated cluster of rules—a normative theory—that specifies which actions are prohibited. The normative theory is acquired primarily via social learning and includes both moral rules that proscribe harming others under a wide variety of circumstances, and conventional rules about such matters as appropriate clothing, etiquette, and classroom behavior. However, Nichols maintains, Turiel’s work has shown that both children and adults draw an important distinction in the class of behaviors that are prohibited by the normative theory they have acquired. Questions about transgressions in which a victim has been harmed and questions about harmless transgressions like wearing inappropriate clothing evoke very different responses. What explains the fact that people react differently to different kinds of transgressions? According to Nichols, the answer is provided by the affect system. Thinking about some transgressions—most notably those that cause someone to suffer—arouses strong affect in people, and questions about normative transgressions that induce this strong affective reaction evoke different responses than questions about affectively neutral transgressions. However, transgressions involving harm and suffering are not the only ones that evoke a strong affective response. And this leads Nichols to an intriguing prediction.
On the account of moral judgment that I’ve proposed, the moral/conventional task really taps a distinction between a set of norms that are backed by an affective system and a set of norms that are not backed by an affective system. On this theory, affect-backed normative claims will be treated differently than affect-neutral normative claims. Thus, the account predicts that transgressions of other (non-harm-based) rules that are backed by affective systems should also be treated as non-conventional. As a result, if we find that other affect-backed norms are also distinguished from conventional norms along the dimensions of permissibility, seriousness, authority contingency, and justification type, then this will provide an independent source of evidence for the account of moral judgment that I’ve proposed…. To test the prediction we need to exploit a body of (non-harm based) rules that are backed by an affective system. (2002: 227–228)
The affectively potent non-harm-based rules Nichols focused on are those that proscribe behaviors likely to evoke disgust. In his first experiment designed to test the theory, Nichols administered his version of the moral/conventional task to participants who had read short vignettes describing prototypical moral transgressions, like one child hitting another, prototypical conventional transgressions, like a child wearing pajamas to school, and vignettes describing normatively proscribed disgusting behavior. In one of these, Bill, a man at a dinner party, spits in his water glass before drinking it. To assess whether participants viewed Bill’s transgression as authority independent, Nichols asked:
Now what if, before Bill went to the party, the hosts had said, “At our dinner table, anyone can spit in their food or drink”. Would it be O.K. for Bill to spit in his water if the hosts say he can? (2002: 229)
The results were consistent with Nichols’ theory. Participants’ judgments about the disgusting transgressions differed from their judgments about the prototypical conventional transgressions, and resembled their judgments about the prototypical moral transgressions, though the justifications that participants provided were disgust-based rather than harm-based.
In a second experiment, Nichols sought to confirm that it was the emotion of disgust that was responsible for the fact that participants in the first experiment treated disgusting transgressions like prototypical moral transgressions, rather than like prototypical conventional transgressions. To do this, he employed the same experimental strategy used in the first experiment, and then asked participants to fill out a questionnaire that assesses sensitivity to disgusting stimuli. He found that participants who scored high on the disgust sensitivity scale rated the disgusting transgressions as significantly more serious than participants who scored low on disgust sensitivity. Moreover, the low disgust-sensitivity participants were more likely than the high disgust-sensitivity participants to judge that the disgusting transgressions were authority contingent (2002: 231).
The conclusion Nichols urges is that norms prohibiting harmful behavior and norms prohibiting disgusting behavior are part of an important class of norms that he labels “norms with feeling”. Though Nichols calls the test that he uses to identify violations of these norms “the moral/conventional task”, the class of transgressions that evoke what he calls the “non-conventional” response pattern is a much larger, more heterogeneous, and arguably much less philosophically interesting class of transgressions than the ones Turiel claimed to be in the moral domain.
Nichols’ theory provoked trenchant criticism from Royzman, Leeman, and Baron (2009), who are more sympathetic to Turiel’s view that transgressions involving harm are central to moral cognition. Their most telling critique focuses on methodological problems with Nichols’ work. Though Nichols’ studies were done about a decade after Haidt, Koller, and Dias (1993) had stressed the importance of asking participants in Turiel-style studies whether they thought the action they were considering had caused any harm, Nichols did not ask participants whether anyone was negatively affected by the behavior of the protagonists in his disgust scenarios. When Royzman et al. repeated Nichols’ spitting-in-the-water-glass experiment and explicitly asked “was anyone negatively affected by [the protagonist’s] action?”, 71% of their participants judged that other guests at the dinner table were negatively affected, and those who thought the disgusting behavior was harmful were more likely to judge that the protagonist’s behavior was not permissible even if the party host allowed it (2009: 166–167).
In a second experiment, Royzman and colleagues explored the possibility that participants’ responses to Nichols’ disgust sensitivity questionnaire might have been influenced by their recent judgments about the disgusting transgression. When they administered the disgust sensitivity questionnaire two weeks before participants were asked about the disgusting transgressions, they found that disgust sensitivity was not correlated with responses to questions about whether the action would be wrong under a range of counterfactual circumstances—if there were no social norm against it, for example, or if it occurred in another country where it was thought to be OK. Royzman and colleagues call judgments that don’t vary under these counterfactual circumstances socially transcendent, and they argue that socially transcendent judgments should be taken to be the fundamental component of the moral/conventional task. They found that social transcendence correlated only with ratings of current harm, and not with disgust sensitivity. These experimental findings are, of course, a direct challenge to Nichols’ “norms with feeling” view. What seems to be responsible for “non-conventional” responses on the moral/conventional task used in Royzman et al.’s experiments is not affect, as Nichols’ theory would urge, but a judgment that someone is harmed.
Royzman and colleagues conclude that their findings
are largely consistent with Turiel’s basic insight that the process of selective moralization is effected by a system oriented towards a particular rule content, and that this content is largely defined by acts or dispositions deemed intrinsically harmful to others. (2009: 172–173)
But they do not endorse Turiel’s constructivism. Rather, to explain how people acquire the ability to distinguish moral and conventional transgressions, they propose a sophisticated version of nativism.
[T]he data and reflections presented … appear to constrain the answer to the question about the formation of the moral–conventional distinction to the point where it may be plausible to suggest that humans possess, as part as of their evolved cognitive machinery, something analogous to Azimov’s “First Law of Robotics” (in a nutshell: “harm no human” [even if commanded so via the authority of another human, including one’s human master]), a hard-wired proto-prohibition that serves to automatically imbue the socially transmitted rules of a certain kind (those concerned with one’s impact on the utilities of others) with a special (rule-/authority independent) status that registers as social transcendence on the moral-conventional distinction task and that appears to be an aspect of moral competence world-wide. (2009: 173)
Haidt’s findings comparing high and low SES participants suggest that there is substantial and systematic variation in judgments about which transgressions are socially transcendent. It might be thought that this variation is best explained by differences in socially transmitted rules like those that Haidt and his colleagues have explored in their more recent work on Moral Foundations Theory (Haidt & Joseph 2007; Graham, Haidt, & Nosek 2009; Haidt 2012; Graham, Haidt, Koleva, et al. 2013). But Royzman, Landy, and Goodwin (2014) proposed a different, and more disquieting, explanation for SES-linked variation in judgments about the social transcendence of transgressions. Social domain theorists recognize that some of the transgressions they classify as conventional can lead to harm by upsetting onlookers (as in Nichols’ table manners case) or by disrupting established social routines. But, Royzman and colleagues note, those harms
are contingent upon aspects of an existing normative regime that could themselves be deemed arbitrary and modifiable by consensus. (2014: 178)
By contrast, the harms caused by what social domain theorists take to be moral transgressions “involve acts that are (perceived to be) intrinsically harmful to others (Turiel 1983, e.g., p. 44, p. 221)” (2014: 178). To discern the difference between transgressions that are intrinsically harmful and those that are contingently harmful, one needs to engage in counterfactual thinking. One must ask questions like:
- Would it still be harmful to torture a puppy even if everyone in the community thought it was OK? [Yes, it would.] and
- Would it still be harmful to wear pajamas to school even if everyone in the community thought it was OK? [No, it wouldn’t.]
It is questions like these that are used in some versions of the moral/conventional task to assess whether a transgression is socially transcendent. But, Royzman and colleagues note, not everyone is equally inclined to engage in counterfactual thinking or equally good at it. And in a striking experiment they found that people with lower scores on the Cognitive Reflection Test, which measures people’s tendency to give reflective answers instead of following their immediate intuitions, were more likely to judge that behaviors that are not intrinsically harmful (like wearing pajamas to work) would not be OK in a culture, even if everyone in that culture had come together and decided that it would be OK (Royzman, Landy, & Goodwin 2014).
Royzman and colleagues conclude that we should not assume that the sort of socioeconomic differences that Haidt found in his moral/conventional task studies are due to fundamentally divergent values. Rather, since poverty has been linked to impaired cognitive performance, they speculate that the differences between Haidt et al.’s high- and low-SES participants are “a product of the low-SES participants’ cognitive limitations” (Royzman, Landy, & Goodwin 2014: 187). Needless to say, the suggestion that some of the moral judgments made by low-SES people in the USA and abroad are the result of impaired cognitive performance is more than a bit provocative! Much more work is needed before this conclusion can be accepted, and careful skepticism is warranted: rather than speculating about the causal role of alleged “cognitive limitations” in explaining why low- and high-SES judgments vary, Royzman and colleagues could have collected data from both socioeconomic groups and tested the claim directly; unfortunately, they did not. Still, if Royzman, Landy, and Goodwin are right, then Turiel and other social domain theorists may have been on to something important when they claimed that harm plays a central role in characterizing the moral transgressions picked out by the moral/conventional task, though because of their cognitive style economically challenged people may be less able to assess the moral importance of cases involving harms that are not intrinsic.
10. What Does the Moral/Conventional Task Tell Us About Moral Judgments?
A question that has been center stage in several preceding sections is: What sorts of transgressions do Turiel’s moral criterion judgments, and other related experimental procedures (collectively dubbed the “moral/conventional task”), classify as moral? Social domain theorists claim that the task picks out transgressions that involve harm, justice, and rights (and more recently fairness and equality). Haidt, Shweder, and others claim that their versions of the task pick out a wide range of transgressions, and that different sorts of transgressions are picked out in different cultures. Nichols claims that his version of the task picks out transgressions of socially learned norms that engender high affect. And Royzman and colleagues argue that for experimental participants who are able to think clearly about counterfactual issues, their version of the task picks out harmful transgressions, where harm is construed broadly.
Why is this question important? Turiel and other social domain theorists maintain that the moral/conventional task enables us to identify genuine moral judgments, and Haidt, Shweder, and Royzman agree, though each focuses on a somewhat different version of the task. If that’s right, then determining the sort of transgressions that the task classifies as moral will enable us to characterize the content of moral judgments. So, for example, if Royzman and colleagues are right, then all moral judgments are about behaviors that are taken to be harmful, and normative judgments that are not about behaviors taken to be harmful are not moral judgments. Moreover, if it’s true that the moral/conventional task picks out genuine moral judgments, this fact would be of enormous importance for psychologists and neuroscientists who want to study the processes underlying moral judgments and the factors that can influence them, since it gives these researchers a way of identifying the judgments they want to study. And, as we’ll see in the following section, a test that enables us to identify genuine moral judgments can also be a valuable tool for philosophers, since a number of philosophical arguments turn on the claim that various categories of people—including children and psychopaths—can, or can’t, make genuine moral judgments.
Royzman, Landy, and Goodwin rightly note that the moral/conventional task is “one of the most widely used measures of mature moral judgment” (2014: 178). But why should we believe that any version of the moral/conventional task really does enable us to pick out genuine moral judgments? As we noted in Section 2, there is a great deal of philosophical work aimed at providing a definition of morality, and while some philosophers offered accounts that are compatible with features of the moral/conventional task, many philosophers disagreed. So there is no philosophical consensus about the definition of morality that would support the claim that Turiel’s moral/conventional task, or anything in that vicinity, can be used to identify moral judgments. One way around this impasse, as we saw in Section 5, is the suggestion that “moral judgment” can be regarded as a natural kind term, and the hypothesis that the features used in the moral/conventional task are the essential features of the natural kind the term denotes. For that hypothesis to be plausible, it would have to be the case that the features used in the moral/conventional task form a nomological cluster. But a number of authors, including Kelly and colleagues (2007); Fessler, Barrett, and colleagues (2015); Kumar (2015); and Stich (2019) have argued that findings like those we have reviewed in Section 5 challenge the claim that the features used in the moral/conventional task form a nomological cluster. The study by Fessler, Barrett, and others poses a further challenge to versions of the moral/conventional task that include questions about seriousness since it found that harmful transgressions were judged to be less bad when they occurred long ago or far away. 
Moreover, in cultures around the world there is a wide range of behaviors that are considered serious transgressions that are far removed from the familiar schoolyard transgressions typically invoked by social domain theorists—transgressions like apostasy and blasphemy, necrophilia and speaking ill of the dead, marrying a cousin, failure to circumcise one’s sons, and many more. Most of these have never been used in moral/conventional task studies. Since people’s judgments about transgressions like these have never been studied, we simply do not know whether they would evoke the pattern of judgments that social domain theorists find for schoolyard transgressions. If they don’t, that might provide additional evidence against the claim that moral/conventional task judgments form nomological clusters. On the other hand, Royzman, Landy, and Goodwin (2014) alert us to the possibility that poor reasoning may be responsible for some, perhaps all, cases where the components of the moral/conventional task come unglued. So much more work will be needed before we can draw a confident conclusion about whether Turiel’s “criterion judgments” version of the moral/conventional task, or any other version, actually enables us to determine which of a person’s normative judgments are moral judgments.
11. The Uses of The Moral/Conventional Task in Philosophy
Susan Dwyer was the pioneer in introducing Turiel’s work to philosophers. In her 1999 paper, “Moral Competence”, she proposed an analogy between the psychological mechanisms underlying moral competence and Chomsky’s account of the psychological mechanisms underlying linguistic competence. The universality of the moral/conventional distinction and the fact that it emerges very early in development, Dwyer argues, are both best explained by the hypothesis that “children come equipped with something akin to a Universal Moral Grammar” (Dwyer 1999: 169). Prior to Dwyer’s paper, references to Turiel and other social domain theorists were infrequent in the philosophical literature, and we know of no substantive discussions of the philosophical implications of their findings. But that soon changed.
As we saw in Section 9, Turiel’s work played an important role in Nichols’ (2002) elaboration and defense of his version of moral sentimentalism. In another pair of publications (Nichols 2004: Ch. 4; 2007) Nichols invoked the work of the social domain theorists (Turiel 1983; Smetana 1989, 1993; Tisak 1995) to mount an argument against what he took, plausibly enough, to be the most visible and carefully developed alternative version of sentimentalism, developed by Alan Gibbard (1990). The problem with Gibbard’s theory, Nichols maintains, is that it is “seriously overintellectualized” and “too ingenious to be a plausible account of normal moral judgment” (Nichols 2007: 261 & 259). On Gibbard’s account,
what a person does is morally wrong if and only if it is rational for him to feel guilty for doing it and for others to resent him for doing it. (Gibbard 1990: 42, quoted in Nichols 2007: 258)
But, Nichols argues,
if moral judgments are judgments of the appropriateness of guilt, then an individual cannot have the capacity to make moral judgments unless she also has the capacity to make judgments about the appropriateness of guilt. (Nichols 2007: 259, italics in the original)
and developmental psychologists tell us that “children don’t understand complex emotions like guilt … until around age 7” (Nichols 2007: 261). However, according to Nichols, children make moral judgments much earlier. The crucial step in Nichols’ argument for this latter claim is that “a basic capacity for moral judgment is reflected by passing the moral/conventional task” (2004: 89). And, as Turiel and his colleagues have repeatedly shown, children succeed at the moral/conventional task—they give typical adult responses to schoolyard examples of moral and conventional transgressions—by the age of four or earlier. Since four-year-olds can make moral judgments but don’t understand guilt, Nichols concludes that Gibbard’s version of sentimentalism is incompatible with the empirical facts.
Over the last two decades, discussion of the work of social domain theorists became substantially more common in the philosophical literature. Joseph Heath offers an intriguing explanation for some of the philosophical interest in this work. It is, he maintains, motivated by the widespread resistance, among moral philosophers, “to the idea that morality could simply be a part of culture, because … many elements of culture vary enormously from place to place…” (2017: 277). These philosophers, Heath contends,
are inclined to think that what is morally acceptable or unacceptable is universal and invariant, so that certain actions are morally wrong even if the prevailing set of norms in a culture fails to condemn them as such …. Acknowledging … that moral rules are a species of social norm, and that they have a lot in common with rules of etiquette, appears to set in motion a dynamic that leads to moral relativism.
In recent years, the impulse to distinguish moral rules from other types of social norms has received what many philosophers take to be empirical support from the psychological research of Elliot Turiel and his collaborators. Turiel is well-known for having argued that there are two distinct “domains” of social cognition, the “moral” and the “conventional,” and that even very young children are able to distinguish the two. Many philosophers have been quick to jump on this, as proof that moral rules are fundamentally different from “conventional” social norms. Jesse Prinz, for instance, appeals to Turiel’s authority in claiming that “By the time children are 3, they recognize that some rules are moral (e.g. don’t hit other children) and others are merely conventional (don’t talk without raising your hand)” (Prinz 2006, 29–43; see also Joyce 2006[a], 136–137; Nichols 2004, 5–7). Gerald Gaus, again citing Turiel, writes that “even children see moral rules as non-conventional” (2010, 172). This is often a prelude to the claim that moral judgments are generated by some special moral faculty, or set of emotional reactions, or rational capacity, that puts them outside the space of social norms (and thus, many will argue, exempts them from cultural variability). (Heath 2017: 277–278)
The idea that moral judgments are generated by a “special moral faculty” can be developed in a variety of ways, some of which have led to a resurgence of interest in moral nativism. As noted earlier, Turiel and his fellow social domain theorists are constructivists who reject nativist accounts of morality. But a number of philosophers, including Susan Dwyer (Dwyer 1999, 2006, 2007; Dwyer, Huebner, & Hauser 2010), Richard Joyce (2006a, 2006b), John Mikhail (2014, 2022), and Shaun Nichols (2005) have used the findings of the social domain theorists in arguments aimed at establishing that substantial aspects of morality are innate. Some of these authors go on to argue that morality is the product of natural selection and that it is an adaptation. Other philosophers, notably Jesse Prinz (2007a: ch. 7; 2007b) and Edouard Machery and Ron Mallon (2010), have elaborated detailed responses.
The most extensive appeal to empirical work on the moral/conventional distinction in the philosophical literature has been in the intertwined debates over moral internalism and the moral responsibility of psychopaths. To explain these debates, a bit of background is needed. When someone judges that an action is the morally right or morally required thing to do, it is often the case that she goes on to do it. Why? What is her motivation for doing what she judges to be morally required? From the ancient Greeks onward, philosophers have been interested in moral motivation, and in recent decades debates over the nature of moral motivation have played a prominent role in metaethics. One central component in this multi-faceted discussion is the debate between various versions of internalism, which hold
that motivation is internal to moral judgment, in the sense that moral judgment itself motivates without need of an accompanying desire,
and various versions of externalism which insist that
moral motivation occurs when a moral judgment combines with a desire, and the content of the judgment is related to the content of the desire so as to rationalize the action.
The quotes are from Rosati’s outstanding Stanford Encyclopedia of Philosophy overview of the moral motivation literature (Rosati 2016: §3; entry on moral motivation) in which she goes on to note that
debates between internalists and externalists often center on the figure of the “amoralist”—the person who apparently makes moral judgments, while remaining wholly unmoved to comply with them.
Internalists typically insist that the amoralist is a conceptual impossibility, and that
the person who appears to be making a moral judgment while remaining unmoved, must really … be speaking insincerely…. [S]he judges an act “right” only in an “inverted commas” sense (R.M. Hare, 1963)…. (Rosati 2016: §3)
However, some philosophers think that psychopaths provide a prima facie challenge to this claim.
Psychopaths, who are diagnosed using a standard psychopathy checklist (Robert Hare 1991), are individuals who appear to be cognitively normal. They typically do not have impaired reasoning abilities, and they are aware of and seem able to comprehend prevailing moral and social rules. However, psychopaths often engage in antisocial behavior and they manifest no guilt or remorse for their morally aberrant behavior. Since psychopaths apparently can, and often do, make moral judgments that they are not motivated to comply with, they seem to pose a problem for internalists. Indeed, borrowing an apt locution from Roskies (2003), psychopaths would appear to be “walking counterexamples” to internalism.
It’s at this point that the moral/conventional distinction becomes relevant. In an influential paper, psychologist R. James Blair (1995) reported using a version of the moral/conventional task with incarcerated psychopathic offenders of normal intelligence and with incarcerated non-psychopathic offenders, all of whom were charged with murder or manslaughter. The responses of the non-psychopathic offenders were similar to those that social domain theorists have reported with normal adults, but the responses of the psychopathic offenders were notably different. They responded in the same way to moral and conventional transgressions. Similar results were reported in a study that compared children with psychopathic tendencies to non-psychopathic children who had other emotional and behavioral issues (Blair 1997). These results led a number of authors to conclude that psychopaths do not have “normal moral concepts” (Prinz 2007a: 42–47), or that their moral concepts are “impaired” (Schroeder, Roskies, & Nichols 2010: 96; Nichols 2004: 19 & 113), and thus that they “do not mean what the rest of us mean when using moral terms” (Kennett & Fine 2007: 175). If that’s right, then, these authors maintained, psychopaths do not really make moral judgments, as ordinarily understood, and they pose no challenge to internalism. However, Kumar (2016) argues that philosophers have often overinterpreted Blair’s findings. Though the studies indicate that psychopaths “do not have a full grasp of moral concepts”, Kumar maintains that the studies do not show that psychopaths “completely fail to grasp them”. So while psychopaths don’t make “full-fledged moral judgments”, they can and do make what Kumar calls “proto moral judgments”. Because of this, Kumar maintains, the implications of research on psychopaths for the debate between internalists and externalists are “less straightforward” than philosophers have assumed.
In a similar vein, Roskies (2007: 204) argues that studies of psychopaths do not establish “that they lack moral concepts or an adequate understanding of them” or that they are “incapable of anything but moral judgments in an inverted comma sense”.
In addition to being invoked in debates between internalists and externalists, Blair’s studies have also played a role in philosophical discussions of the conditions under which people are morally responsible for their behavior. Neil Levy has argued that psychopaths “ought to be excused moral responsibility for their wrongdoing” because their “failure to grasp the moral/conventional distinction” indicates that they do not understand “what makes a moral norm moral” (Levy 2007b: 163–164).
Psychopaths know, at least typically, that their actions are widely perceived to be wrong, to be sure, but they are unable to grasp the distinctive nature and significance of their wrongness. Psychopaths apparently take harm to others to be wrong only because such harms are against the rules. For them, stealing from, or hurting, another is no more wrong than, say, double-parking or line-jumping. But the kind and degree of wrongness, and therefore blame, that attaches to infringement of the rules is very different, and usually much less significant, than the kind and degree attaching to moral wrongs. For psychopaths, all offences are merely conventional, and therefore—from their point of view—none of them are all that serious. Hence, their degree of responsibility is smaller, arguably much smaller, than it would be for a comparable harm committed by a normal agent. Indeed, there are grounds for excusing them from moral responsibility altogether. (Levy 2007a: 132)
Fine and Kennett defend a very similar conclusion:
[W]hile psychopathic offenders certainly appear to know what acts are prohibited by society or the law (and therefore know that their transgressions are legally wrong), they do not appear to have the capacity to judge an act to be morally wrong…. We would argue that psychopathic offenders, who fail to understand the distinction between moral wrongs and conventional wrongs, cannot be considered to be moral agents. (2004: 432)
Since Levy, Fine, and Kennett all take moral responsibility to be necessary for criminal responsibility, they maintain that psychopaths are not criminally responsible.
Other authors draw quite different conclusions from Blair’s studies. Both Maibom (2008) and Vargas and Nichols (2007) note that despite their apparent inability to distinguish moral from conventional transgressions, psychopaths are sensitive to the fact that there are norms prohibiting a wide range of behaviors, and that violating these norms can lead to sanctions. For Maibom, this awareness is enough to undergird both moral and criminal responsibility. Endorsing a similar view, Vargas and Nichols note that “we do think it is appropriate to blame and punish transgressors of conventional rules”. Why, they ask, “is this not enough to blame and punish psychopaths?” (2007: 159).
David Shoemaker (2011) is skeptical about drawing any conclusions about the moral or criminal responsibility of psychopaths based on Blair’s studies. In support of his skepticism, he draws attention to critiques of work in the Turiel tradition by Nisan (1987); Haidt, Koller, and Dias (1993); Kelly et al. (2007); and others. More pointedly, he stresses that judgments about the seriousness of a transgression play a central role in Blair’s version of the moral/conventional task, and he challenges the assumption that moral transgressions are judged to be more serious than putatively conventional transgressions. Shoemaker references a study by Elizabeth Fitton and Ann Dowker who controlled for perceived seriousness and found that
when the so-called conventional transgressions matched the so-called moral transgressions in perceived seriousness … [normal] children failed to differentiate the domains “in terms of rule contingency, perceived obligations and the justifications used.” Indeed, these children tended to judge that the conventional behaviors were more obligatory than the moral ones. (Shoemaker 2011: 107, italics in original, quoting a 2002 paper presented at the MOSAIC conference in Liverpool “Qualitative and Quantitative Dimensions of Domain Differentiation”)
Shoemaker’s skepticism about the philosophical importance of Blair’s findings is reinforced by the work of Aharoni and colleagues (2012), who undertook a conceptual replication of Blair’s prisoner study using a modified version of the moral/conventional task that focused on authority independence and used a forced-choice format to minimize strategic responding. Their study included 109 incarcerated criminal offenders. (In Blair’s original study there were only ten psychopathic participants and ten non-psychopaths.) To assess the level of psychopathy for each inmate participant, they used Hare’s Revised Psychopathy Checklist along with two other measures. The crucial finding was that
total psychopathy score did not predict performance on the [moral/conventional] task.
Aharoni and colleagues conclude that
contrary to earlier claims, insufficient data exist to infer that psychopathic individuals cannot know what is morally wrong. (2012: 484)
The moral/conventional task is not the only tool that psychologists have used to explore deficits in moral reasoning. In a survey of the literature, Schaich Borg and Sinnott-Armstrong (2013) review the findings of studies with psychopaths using the moral/conventional task and five quite different tools. Their conclusion is hardly encouraging for philosophers who hope that studies of moral cognition in psychopaths will yield important philosophical payoffs.
The studies reviewed in this chapter support a tentative and qualified conclusion: If psychopaths have any deficits or abnormalities in their moral judgments, their deficits seem subtle—much more subtle than might be expected from their blatantly abnormal behavior. Indeed, the current literature … suggests that psychopaths might not have any specific deficits in moral cognition, despite their differences in moral action, emotion, and empathy. (2013: 123)
The take-home message of this section is that authors who invoke studies using the moral/conventional task in arguing for philosophical conclusions should be very cautious. There are many different versions of the task, and it is far from clear that any of them give us reliable information about the nature of moral cognition or about the moral cognition of children, psychopaths, or other groups of philosophical interest.
- Aharoni, Eyal, Walter Sinnott-Armstrong, and Kent A. Kiehl, 2012, “Can Psychopathic Offenders Discern Moral Wrongs? A New Look at the Moral/Conventional Distinction”, Journal of Abnormal Psychology, 121(2): 484–497. doi:10.1037/a0024796
- Bagnoli, Carla, 2011, “Constructivism in Metaethics”, The Stanford Encyclopedia of Philosophy (Summer 2020 edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2020/entries/constructivism-metaethics/>.
- Baier, Kurt, 1958, The Moral Point of View: A Rational Basis of Ethics, Ithaca, NY: Cornell University Press. Excerpts reprinted in Wallace and Walker 1970a: 188–210. Page references are to the Wallace and Walker reprint.
- Berniūnas, Renatas, Vilius Dranseika, and Paulo Sousa, 2016, “Are There Different Moral Domains? Evidence from Mongolia”, Asian Journal of Social Psychology, 19(3): 275–282. doi:10.1111/ajsp.12133
- Berniūnas, Renatas, Vytis Silius, and Vilius Dranseika, 2020, “Beyond the Moral Domain: The Normative Sense Among the Chinese”, Psichologija, 60: 86–105. doi:10.15388/Psichol.2019.11
- Blair, Robert James Richard, 1995, “A Cognitive Developmental Approach to Morality: Investigating the Psychopath”, Cognition, 57(1): 1–29. doi:10.1016/0010-0277(95)00676-P
- –––, 1996, “Brief Report: Morality in the Autistic Child”, Journal of Autism and Developmental Disorders, 26(5): 571–579. doi:10.1007/BF02172277
- –––, 1997, “Moral Reasoning and the Child with Psychopathic Tendencies”, Personality and Individual Differences, 22(5): 731–739. doi:10.1016/S0191-8869(96)00249-8
- Blair, Robert James Richard, Jey Monson, and Norah Frederickson, 2001, “Moral Reasoning and Conduct Problems in Children with Emotional and Behavioural Difficulties”, Personality and Individual Differences, 31(5): 799–811. doi:10.1016/S0191-8869(00)00181-1
- Carruthers, Peter, Stephen Laurence, and Stephen Stich (eds.), 2005–2007, The Innate Mind, Oxford/New York: Oxford University Press.
- 2005, Volume 1: Structure and Contents, doi:10.1093/acprof:oso/9780195179675.001.0001
- 2006, Volume 2: Culture and Cognition, doi:10.1093/acprof:oso/9780195310139.001.0001
- 2007, Volume 3: Foundations and the Future, doi:10.1093/acprof:oso/9780195332834.001.0001
- Cooper, Neil, 1966, “Two Concepts of Morality”, Philosophy, 41(155): 19–33. Reprinted in Wallace and Walker 1970a: 72–90. Page references are to the Wallace and Walker reprint. doi:10.1017/S0031819100066122
- –––, 1970, “Morality and Importance”, in Wallace and Walker 1970a: 91–97.
- Devitt, Michael, 1996, Coming to Our Senses: A Naturalistic Program for Semantic Localism, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511609190
- Doris, John M. and The Moral Psychology Research Group, 2010, The Moral Psychology Handbook, Oxford/New York: Oxford University Press. doi:10.1093/acprof:oso/9780199582143.001.0001
- Dworkin, Ronald, 1983, “To Each His Own: Review of Spheres of Justice by Michael Walzer”, New York Review of Books, 14 April 1983, pp. 4–5.
- Dwyer, Susan, 1999, “Moral Competence”, in Philosophy and Linguistics, Kumiko Murasugi and Robert Stainton (eds.), Boulder, CO: Westview Press, chap. 5.
- –––, 2006, “How Good Is the Linguistic Analogy?” in Carruthers, Laurence, and Stich 2006: chap. 15.
- –––, 2007, “How Not to Argue that Morality Isn’t Innate: Comments on Prinz”, in Sinnott-Armstrong MP1: 407–418 (chap. 7.1).
- Dwyer, Susan, Bryce Huebner, and Marc D. Hauser, 2010, “The Linguistic Analogy: Motivations, Results, and Speculations”, Topics in Cognitive Science, 2(3): 486–510. doi:10.1111/j.1756-8765.2009.01064.x
- Edwards, Carolyn, 1987, “Culture and the Construction of Moral Values: A Comparative Ethnography of Moral Encounters in Two Cultural Settings”, in Kagan and Lamb 1987: 123–154.
- Fessler, Daniel M. T., H. Clark Barrett, Martin Kanovsky, Stephen Stich, Colin Holbrook, Joseph Henrich, Alexander H. Bolyanatz, Matthew M. Gervais, Michael Gurven, Geoff Kushnick, Anne C. Pisor, Christopher von Rueden, and Stephen Laurence, 2015, “Moral Parochialism and Contextual Contingency across Seven Societies”, Proceedings of the Royal Society B: Biological Sciences, 282(1813): 20150907. doi:10.1098/rspb.2015.0907
- Fessler, Daniel M. T., Colin Holbrook, Martin Kanovsky, H. Clark Barrett, Alexander H. Bolyanatz, Matthew M. Gervais, Michael Gurven, Joseph Henrich, Geoff Kushnick, Anne C. Pisor, Stephen Stich, Christopher von Rueden, and Stephen Laurence, 2016, “Moral Parochialism Misunderstood: A Reply to Piazza and Sousa”, Proceedings of the Royal Society B: Biological Sciences, 283(1823): 20152628. doi:10.1098/rspb.2015.2628
- Fine, Cordelia and Jeanette Kennett, 2004, “Mental Impairment, Moral Understanding and Criminal Responsibility: Psychopathy and the Purposes of Punishment”, International Journal of Law and Psychiatry, 27(5): 425–443. doi:10.1016/j.ijlp.2004.06.005
- Frankena, William, 1963, “Recent Conceptions of Morality”, in Morality and the Language of Conduct, Hector-Neri Castañeda and George Nakhnikian (eds), Detroit, MI: Wayne State University Press, 1–24.
- –––, 1967, “The Concept of Morality”, in University of Colorado Studies, Series in Philosophy, No. 3, Boulder, CO: University of Colorado Press, pp. 1–22. Reprinted in Wallace and Walker 1970a: 146–173. Page references are to the Wallace and Walker reprint.
- Gaus, Gerald, 2010, The Order of Public Reason: A Theory of Freedom and Morality in a Diverse and Bounded World, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511780844
- Gewirth, Alan, 1978, Reason and Morality, Chicago: University of Chicago Press.
- Gibbard, Alan, 1990, Wise Choices, Apt Feelings, Cambridge, MA: Harvard University Press.
- Gill, Michael B. and Shaun Nichols, 2008, “Sentimentalist Pluralism: Moral Psychology and Philosophical Ethics”, Philosophical Issues, 18: 143–163. doi:10.1111/j.1533-6077.2008.00142.x
- Graham, Jesse, Jonathan Haidt, and Brian A. Nosek, 2009, “Liberals and Conservatives Rely on Different Sets of Moral Foundations”, Journal of Personality and Social Psychology, 96(5): 1029–1046. doi:10.1037/a0015141
- Graham, Jesse, Jonathan Haidt, Sena Koleva, Matt Motyl, Ravi Iyer, Sean P. Wojcik, and Peter H. Ditto, 2013, “Moral Foundations Theory: The Pragmatic Validity of Moral Pluralism”, in Advances in Experimental Social Psychology, Volume 47, Patricia Devine and Ashby Plant (eds.), Elsevier, 55–130. doi:10.1016/B978-0-12-407236-7.00002-4
- Gray, Kurt, Chelsea Schein, and Adrian F. Ward, 2014, “The Myth of Harmless Wrongs in Moral Cognition: Automatic Dyadic Completion from Sin to Suffering”, Journal of Experimental Psychology: General, 143(4): 1600–1615. doi:10.1037/a0036149
- Haidt, Jonathan, 2012, The Righteous Mind, New York: Pantheon.
- Haidt, Jonathan and Craig Joseph, 2007, “The Moral Mind: How 5 Sets of Innate Intuitions Guide the Development of Many Culture-specific Virtues, and Perhaps Even Modules”, in Carruthers, Laurence, and Stich 2007: chap. 19.
- Haidt, Jonathan, Silvia Helena Koller, and Maria G. Dias, 1993, “Affect, Culture, and Morality, or Is It Wrong to Eat Your Dog?”, Journal of Personality and Social Psychology, 65(4): 613–628. doi:10.1037/0022-3514.65.4.613
- Hare, Richard M., 1952, The Language of Morals, Oxford: Clarendon Press.
- –––, 1963, Freedom and Reason, Oxford: Clarendon Press.
- Hare, Robert D., 1991, The Hare Psychopathy Checklist-Revised, Toronto: Multi-Health Systems.
- Heath, Joseph, 2017, “Morality, Convention and Conventional Morality”, Philosophical Explorations, 20(3): 276–293. doi:10.1080/13869795.2017.1362030
- Hollos, Marida, Philip E. Leis, and Elliot Turiel, 1986, “Social Reasoning in Ijo Children and Adolescents in Nigerian Communities”, Journal of Cross-Cultural Psychology, 17(3): 352–374. doi:10.1177/0022002186017003007
- Huebner, Bryce, James Lee, and Marc Hauser, 2010, “The Moral-Conventional Distinction in Mature Moral Competence”, Journal of Cognition and Culture, 10(1–2): 1–26. doi:10.1163/156853710X497149
- Joyce, Richard, 2006a, The Evolution of Morality, Cambridge, MA: MIT Press. doi:10.7551/mitpress/2880.001.0001
- –––, 2006b, “Is Human Morality Innate?” in Carruthers, Laurence and Stich 2006: chap. 16.
- Kagan, Jerome and Sharon Lamb (eds.), 1987, The Emergence of Morality in Young Children, Chicago: University of Chicago Press.
- Kant, Immanuel, 1785, Grundlegung zur Metaphysik der Sitten, Riga: Johann Friedrich Hartknoch. Translated as Foundations of the Metaphysics of Morals, Lewis White Beck (trans.), Indianapolis, IN: Bobbs-Merrill, 1959.
- Kauppinen, Antti, 2014, “Moral Sentimentalism”, The Stanford Encyclopedia of Philosophy (Winter 2018 edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2018/entries/moral-sentimentalism/>.
- Kelly, Daniel, Stephen Stich, Kevin J. Haley, Serena J. Eng, and Daniel M. T. Fessler, 2007, “Harm, Affect, and the Moral/Conventional Distinction”, Mind & Language, 22(2): 117–131. doi:10.1111/j.1468-0017.2007.00302.x
- Kennett, Jeanette and Cordelia Fine, 2007, “Internalism and the Evidence from Psychopaths and ‘Acquired Sociopaths’”, in Sinnott-Armstrong MP3: 173–190 (chap. 4).
- Killen, Melanie and Audun Dahl, 2018, “Moral Judgment: Reflective, Interactive, Spontaneous, Challenging, and Always Evolving”, in Atlas of Moral Psychology, Kurt Gray and Jesse Graham (eds.), New York, NY: The Guilford Press, 20–30.
- Killen, Melanie and Judith G. Smetana, 2008, “Moral Judgment and Moral Neuroscience: Intersections, Definitions, and Issues”, Child Development Perspectives, 2(1): 1–6. doi:10.1111/j.1750-8606.2008.00033.x
- –––, 2015, “Origins and Development of Morality”, in Handbook of Child Psychology, Volume 3: Social and Emotional Development, seventh edition, M. Lamb (ed.), New York: Wiley/Blackwell Publishers, 701–749 (chap. 17).
- Kornblith, Hilary, 1998, “The Role of Intuition in Philosophical Inquiry: An Account with No Unnatural Ingredients”, in Rethinking Intuition: The Psychology of Intuition and Its Role in Philosophical Inquiry, Michael DePaul and William Ramsey (eds.), (Studies in Epistemology and Cognitive Theory), Lanham, MD: Rowman & Littlefield, 129–141.
- Kumar, Victor, 2015, “Moral Judgment as a Natural Kind”, Philosophical Studies, 172(11): 2887–2910. doi:10.1007/s11098-015-0448-7
- –––, 2016, “Psychopathy and Internalism”, Canadian Journal of Philosophy, 46(3): 318–345. doi:10.1080/00455091.2016.1165571
- Lapsley, Daniel K., 1996, Moral Psychology, Boulder, CO: Westview Press.
- Levy, Neil, 2007a, “The Responsibility of the Psychopath Revisited”, Philosophy, Psychiatry, & Psychology, 14(2): 129–138. doi:10.1353/ppp.0.0003
- –––, 2007b, “Norms, Conventions, and Psychopaths”, Philosophy, Psychiatry, & Psychology, 14(2): 163–170. doi:10.1353/ppp.0.0002
- Lewis, David K., 1969, Convention: A Philosophical Study, Cambridge, MA: Harvard University Press.
- Machery, Edouard and Ron Mallon, 2010, “Evolution of Morality”, in Doris and the Moral Psychology Research Group 2010: 3–46 (chap. 1).
- MacIntyre, Alasdair, 1957, “What Morality Is Not”, Philosophy, 32(123): 325–335. Reprinted in Wallace and Walker 1970a: 26–39. Page references are to the Wallace and Walker reprint. doi:10.1017/S0031819100051950
- Maibom, Heidi L., 2008, “The Mad, the Bad, and the Psychopath”, Neuroethics, 1(3): 167–184. doi:10.1007/s12152-008-9013-9
- Mikhail, John, 2014, “Any Animal Whatever? Harmful Battery and Its Elements as Building Blocks of Moral Cognition”, Ethics, 124(4): 750–786. doi:10.1086/675906
- –––, 2022, “Moral Intuitions and Moral Nativism”, in The Oxford Handbook of Moral Psychology, Manuel Vargas and John Doris (eds), New York: Oxford University Press, pp. 364–387. doi:10.1093/oxfordhb/9780198871712.013.21
- Nichols, Shaun, 2002, “Norms with Feeling: Towards a Psychological Account of Moral Judgment”, Cognition, 84(2): 221–236. doi:10.1016/S0010-0277(02)00048-3
- –––, 2004, Sentimental Rules: On the Natural Foundations of Moral Judgement, New York: Oxford University Press. doi:10.1093/0195169344.001.0001
- –––, 2005, “Innateness and Moral Psychology”, in Carruthers, Laurence and Stich 2005: 353–369 (chap. 20).
- –––, 2007, “Sentimentalism Naturalized”, in Sinnott-Armstrong MP2: 255–274 (chap. 5).
- –––, 2021, Rational Rules: Towards a Theory of Moral Learning, Oxford: Oxford University Press. doi:10.1093/oso/9780198869153.001.0001
- Nichols, Shaun and Ron Mallon, 2006, “Moral Dilemmas and Moral Rules”, Cognition, 100(3): 530–542. doi:10.1016/j.cognition.2005.07.005
- Nisan, Mordecai, 1987, “Moral Norms and Social Conventions: A Cross-Cultural Comparison”, Developmental Psychology, 23(5): 719–725. doi:10.1037/0012-1649.23.5.719
- Nucci, Larry P., 1982, “Conceptual Development in the Moral and Conventional Domains: Implications for Values Education”, Review of Educational Research, 52(1): 93–122. doi:10.3102/00346543052001093
- –––, 2001, Education in the Moral Domain, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511605987
- Nucci, Larry P., Cleanice Camino, and Clary Milnitsky Sapiro, 1996, “Social Class Effects on Northeastern Brazilian Children’s Conceptions of Areas of Personal Choice and Social Regulation”, Child Development, 67(3): 1223–1242. doi:10.2307/1131889
- Nucci, Larry P. and Susan Herman, 1982, “Behavioral Disordered Children’s Conceptions of Moral, Conventional, and Personal Issues”, Journal of Abnormal Child Psychology, 10(3): 411–425. doi:10.1007/BF00912330
- Nucci, Larry P. and Maria Santiago Nucci, 1982, “Children’s Social Interactions in the Context of Moral and Conventional Transgressions”, Child Development, 53(2): 403–412. doi:10.2307/1128983
- Nucci, Larry P. and Elliot Turiel, 1978, “Social Interactions and the Development of Social Concepts in Preschool Children”, Child Development, 49(2): 400–407. doi:10.2307/1128704
- –––, 1993, “God’s Word, Religious Rules, and Their Relation to Christian and Jewish Children’s Concepts of Morality”, Child Development, 64(5): 1475–1491. doi:10.2307/1131547
- Nucci, Larry P., Elliot Turiel, and Gloria Encarnacion-Gawrych, 1983, “Children’s Social Interactions and Social Concepts: Analyses of Morality and Convention in the Virgin Islands”, Journal of Cross-Cultural Psychology, 14(4): 469–487. doi:10.1177/0022002183014004006
- Piazza, Jared and Paulo Sousa, 2016, “When Injustice Is at Stake, Moral Judgements Are Not Parochial”, Proceedings of the Royal Society B: Biological Sciences, 283(1823): 20152037. doi:10.1098/rspb.2015.2037
- Prinz, Jesse J., 2006, “The Emotional Basis of Moral Judgments”, Philosophical Explorations, 9(1): 29–43. doi:10.1080/13869790500492466
- –––, 2007a, The Emotional Construction of Morals, Oxford/New York: Oxford University Press. doi:10.1093/acprof:oso/9780199571543.001.0001
- –––, 2007b, “Is Morality Innate?” in Sinnott-Armstrong MP1: 367–406 (chap. 7).
- Putnam, Hilary, 1975, “The Meaning of ‘Meaning’”, in Language, Mind and Knowledge, K. Gunderson (ed.), Minnesota Studies in the Philosophy of Science, volume 7, Minneapolis, MN: University of Minnesota Press, 131–193.
- Rawls, John, 1971, A Theory of Justice, Cambridge, MA: Harvard University Press.
- Rescorla, Michael, 2007, “Convention”, The Stanford Encyclopedia of Philosophy (Summer 2019 edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2019/entries/convention/>.
- Rosas, Alejandro, 2012, “Mistakes To Avoid In Attacking The Moral/Conventional Distinction”, in Baltic International Yearbook of Cognition, Logic and Communication, 7(Morality and the Cognitive Sciences): art. 8. doi:10.4148/biyclc.v7i0.1779
- Rosati, Connie S., 2016, “Moral Motivation”, The Stanford Encyclopedia of Philosophy (Winter 2016 edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2016/entries/moral-motivation/>.
- Roskies, Adina, 2003, “Are Ethical Judgments Intrinsically Motivational? Lessons from ‘Acquired Sociopathy’”, Philosophical Psychology, 16(1): 51–66. doi:10.1080/0951508032000067743
- –––, 2007, “Internalism and the Evidence from Pathology”, in Sinnott-Armstrong MP3: 191–206 (ch. 4.1).
- Royzman, Edward B., Geoffrey P. Goodwin, and Robert F. Leeman, 2011, “When Sentimental Rules Collide: ‘Norms with Feelings’ in the Dilemmatic Context”, Cognition, 121(1): 101–114. doi:10.1016/j.cognition.2011.06.006
- Royzman, Edward B., Kwanwoo Kim, and Robert F. Leeman, 2015, “The Curious Tale of Julie and Mark: Unraveling the Moral Dumbfounding Effect”, Judgment and Decision Making, 10(4): 296–313.
- Royzman, Edward B., Justin F. Landy, and Geoffrey P. Goodwin, 2014, “Are Good Reasoners More Incest-Friendly? Trait Cognitive Reflection Predicts Selective Moralization in a Sample of American Adults”, Judgment and Decision Making, 9(3): 176–190.
- Royzman, Edward B., Robert F. Leeman, and Jonathan Baron, 2009, “Unsentimental Ethics: Towards a Content-Specific Account of the Moral–Conventional Distinction”, Cognition, 112(1): 159–174. doi:10.1016/j.cognition.2009.04.004
- Schaich Borg, Jana and Walter P. Sinnott-Armstrong, 2013, “Do Psychopaths Make Moral Judgments?”, in Handbook on Psychopathy and Law, Kent A. Kiehl and Walter P. Sinnott-Armstrong (eds.), (Oxford Series in Neuroscience, Law, and Philosophy), New York: Oxford University Press, 107–128.
- Schroeder, Timothy, Adina L. Roskies, and Shaun Nichols, 2010, “Moral Motivation”, in Doris and the Moral Psychology Research Group 2010: 72–110.
- Searle, John R., 1969, Speech Acts: An Essay in the Philosophy of Language, Cambridge: Cambridge University Press. doi:10.1017/CBO9781139173438
- Shoemaker, David W., 2011, “Psychopathy, Responsibility, and the Moral/Conventional Distinction”, The Southern Journal of Philosophy, 49(s1): 99–124. doi:10.1111/j.2041-6962.2011.00060.x
- Shweder, Richard, Elliot Turiel, and Nancy Much, 1981, “The Moral Intuitions of the Child”, in Social Cognitive Development: Frontiers and Possible Futures, John H. Flavell and Lee Ross (eds.), (Cambridge Studies in Social and Emotional Development), Cambridge/New York: Cambridge University Press, 288–303.
- Shweder, Richard A. and Joan G. Miller, 1985, “The Social Construction of the Person: How Is It Possible?”, in The Social Construction of the Person, Kenneth J. Gergen and Keith E. Davis (eds.), New York, NY: Springer-Verlag, 41–69. doi:10.1007/978-1-4612-5076-0_3
- Shweder, Richard, Manamohan Mahapatra, and Joan Miller, 1987, “Culture and Moral Development”, in Kagan and Lamb 1987: 1–83.
- Sinnott-Armstrong, Walter (ed.), 2007, Moral Psychology, Cambridge, MA: The MIT Press.
- MP1, Volume 1: The Evolution of Morality: Adaptations and Innateness. doi:10.7551/mitpress/7481.001.0001
- MP2, Volume 2: The Cognitive Science of Morality: Intuition and Diversity
- MP3, Volume 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development
- Smetana, Judith G., 1981, “Preschool Children’s Conceptions of Moral and Social Rules”, Child Development, 52(4): 1333–1336. doi:10.2307/1129527
- –––, 1989, “Toddlers’ Social Interactions in the Context of Moral and Conventional Transgressions in the Home”, Developmental Psychology, 25(4): 499–508. doi:10.1037/0012-1649.25.4.499
- –––, 1993, “Understanding of Social Rules”, in The Child as Psychologist: An Introduction to the Development of Social Cognition, Mark Bennett (ed.), New York: Harvester Wheatsheaf, 111–141.
- Smetana, Judith G., Marc Jambon, and Courtney L. Ball, 2018, “Normative Changes and Individual Differences in Early Moral Judgments: A Constructivist Developmental Perspective”, Human Development, 61(4–5): 264–280. doi:10.1159/000492803
- Smetana, Judith G., Mario Kelly, and Craig T. Twentyman, 1984, “Abused, Neglected, and Nonmaltreated Children’s Conceptions of Moral and Social-Conventional Transgressions”, Child Development, 55(1): 277–287. doi:10.2307/1129852
- Smetana, Judith G., Wendy M. Rote, Marc Jambon, Marina Tasopoulos-Chan, Myriam Villalobos, and Jessamy Comer, 2012, “Developmental Changes and Individual Differences in Young Children’s Moral Judgments: Young Children’s Moral Judgments”, Child Development, 83(2): 683–696. doi:10.1111/j.1467-8624.2011.01714.x
- Smetana, Judith G., Sheree L. Toth, Dante Cicchetti, Jacqueline Bruce, Peter Kane, and Christopher Daddis, 1999, “Maltreated and Nonmaltreated Preschoolers’ Conceptions of Hypothetical and Actual Moral Transgressions”, Developmental Psychology, 35(1): 269–281. doi:10.1037/0012-1649.35.1.269
- Song, Myung-ja, Judith G. Smetana, and Sang Yoon Kim, 1987, “Korean Children’s Conceptions of Moral and Conventional Transgressions”, Developmental Psychology, 23(4): 577–582. doi:10.1037/0012-1649.23.4.577
- Sousa, Paulo, Colin Holbrook, and Jared Piazza, 2009, “The Morality of Harm”, Cognition, 113(1): 80–92. doi:10.1016/j.cognition.2009.06.015
- Sousa, Paulo and Jared Piazza, 2014, “Harmful Transgressions qua Moral Transgressions: A Deflationary View”, Thinking & Reasoning, 20(1): 99–128. doi:10.1080/13546783.2013.834845
- Southwood, Nicholas, 2011, “The Moral/Conventional Distinction”, Mind, 120(479): 761–802. doi:10.1093/mind/fzr048
- Southwood, Nicholas and Lina Eriksson, 2011, “Norms and Conventions”, Philosophical Explorations, 14(2): 195–217. doi:10.1080/13869795.2011.569748
- Sprigge, Timothy L. S., 1964, “Definition of a Moral Judgment”, Philosophy, 39(150): 301–322. Reprinted in Wallace and Walker 1970a: 118–145. Page references are to the Wallace and Walker reprint. doi:10.1017/S0031819100055777
- Stich, Stephen, 2019, “The Quest for the Boundaries of Morality”, in The Routledge Handbook of Moral Epistemology, Aaron Zimmerman, Karen Jones, and Mark Timmons (eds), New York: Routledge, 15–37.
- Stich, Stephen, Daniel M. T. Fessler, and Daniel Kelly, 2009, “On the Morality of Harm: A Response to Sousa, Holbrook and Piazza”, Cognition, 113(1): 93–97. doi:10.1016/j.cognition.2009.06.013
- Stoddart, Trish and Elliot Turiel, 1985, “Children’s Concepts of Cross-Gender Activities”, Child Development, 56(5): 1241–1252. doi:10.2307/1130239
- Taylor, Paul W., 1978, “On Taking The Moral Point Of View”, Midwest Studies in Philosophy, 3: 35–61. doi:10.1111/j.1475-4975.1978.tb00347.x
- Tisak, Marie, 1995, “Domains of Social Reasoning and Beyond”, in Annals of Child Development, vol. 11, Ross Vasta (ed.), London: Jessica Kingsley.
- Toulmin, Stephen, 1950, An Examination of the Place of Reason in Ethics, Cambridge: Cambridge University Press.
- Turiel, Elliot, 1977, “Distinct Conceptual and Developmental Domains: Social Convention and Morality”, Nebraska Symposium on Motivation, 25: 77–116.
- –––, 1983, The Development of Social Knowledge: Morality and Convention, (Cambridge Studies in Social and Emotional Development), Cambridge/New York: Cambridge University Press.
- –––, 1989, “Domain-Specific Social Judgments and Domain Ambiguities”, Merrill-Palmer Quarterly, 35(1): 89–114.
- –––, 2002, The Culture of Morality: Social Development, Context, and Conflict, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511613500
- Turiel, Elliot and Philip Davidson, 1986, “Heterogeneity, Inconsistency, and Asynchrony in the Development of Cognitive Structures”, in Stage and Structure: Reopening the Debate, Iris Levin (ed.), Norwood, NJ: Ablex Publishing, 106–143.
- Turiel, Elliot, Melanie Killen, and Charles Helwig, 1987, “Morality: Its Structure, Functions, and Vagaries”, in Kagan and Lamb 1987: 155–243.
- Turiel, Elliot, Larry P. Nucci, and Judith G. Smetana, 1988, “A Cross-Cultural Comparison about What? A Critique of Nisan’s (1987) Study of Morality and Convention”, Developmental Psychology, 24(1): 140–143. doi:10.1037/0012-1649.24.1.140
- Turiel, Elliot, Carolyn Hildebrandt, and Cecilia Wainryb, 1991, “Judging Social Issues: Difficulties, Inconsistencies, and Consistencies”, Monographs of the Society for Research in Child Development, 56(2): 1–103. doi:10.2307/1166056
- Turiel, Elliot and Judith Smetana, 1984, “Social Knowledge and Action: The Coordination of Domains”, in Morality, Moral Behavior, and Moral Development, William M. Kurtines and Jacob L. Gewirtz (eds), New York: John Wiley & Sons, 261–282.
- Vargas, Manuel and Shaun Nichols, 2007, “Psychopaths and Moral Knowledge”, Philosophy, Psychiatry, & Psychology, 14(2): 157–162. doi:10.1353/ppp.0.0000
- Wallace, Gerald and Arthur David McKinnon Walker (eds), 1970a, The Definition of Morality, London: Methuen.
- –––, 1970b, “Introduction”, to Wallace and Walker 1970a: 1–20.
- Winch, Peter, 1965, “The Universalizability of Moral Judgements”, The Monist, 49(2): 196–214. doi:10.5840/monist196549214
- Yau, Jenny and Judith G. Smetana, 2003, “Conceptions of Moral, Social-Conventional, and Personal Events Among Chinese Preschoolers in Hong Kong”, Child Development, 74(3): 647–658. doi:10.1111/1467-8624.00560
Other Internet Resources
- Moral Psychology Research Group
- Social Domain Theory. Website and Resource Hub for the Social Domain Theory International Colloquium (SDTIC).
- PhilPapers.org collections