Virtually every aspect of the current philosophical discussion of self-deception is a matter of controversy, including its definition and paradigmatic cases. We may say generally, however, that self-deception is the acquisition and maintenance of a belief (or, at least, the avowal of that belief) in the face of strong evidence to the contrary, motivated by desires or emotions favoring the acquisition and retention of that belief. Beyond this, philosophers divide over whether this action is intentional or not, whether self-deceivers recognize that the belief being acquired is unwarranted on the available evidence, whether self-deceivers are morally responsible for their self-deception, and whether self-deception is morally problematic (and, if so, in what ways and under what circumstances). The discussion of self-deception and its associated puzzles gives us insight into the ways in which motivation affects belief acquisition and retention. And yet insofar as self-deception represents an obstacle to self-knowledge, which has potentially serious moral implications, self-deception is more than an interesting philosophical puzzle. It is a problem of particular concern for moral development, since self-deception can make us strangers to ourselves and blind to our own moral failings.
- 1. Definitional Issues
- 2. Intentionalist Approaches
- 3. Non-Intentionalist Approaches
- 4. Twisted Self-Deception
- 5. Morality and Self-deception
- 6. Collective Self-Deception
- Academic Tools
- Other Internet Resources
- Related Entries
What is self-deception? Traditionally, self-deception has been modeled on interpersonal deception, where A intentionally gets B to believe some proposition p, all the while knowing or believing truly ~p. Such deception is intentional and requires the deceiver to know or believe ~p and the deceived to believe p. One reason for thinking self-deception is analogous to interpersonal deception of this sort is that it helps us to distinguish self-deception from mere error, since the acquisition and maintenance of the false belief is intentional not accidental. If self-deception is properly modeled on such interpersonal deception, self-deceivers intentionally get themselves to believe p, all the while knowing or believing truly ~p. On this traditional model, then, self-deceivers apparently must (1) hold contradictory beliefs, and (2) intentionally get themselves to hold a belief they know or believe truly to be false.
The traditional model of self-deception, however, has been thought to raise two paradoxes: One concerns the self-deceiver's state of mind—the so-called ‘static’ paradox. How can a person simultaneously hold contradictory beliefs? The other concerns the process or dynamics of self-deception—the so-called ‘dynamic’ or ‘strategic’ paradox. How can a person intend to deceive herself without rendering her intentions ineffective? (Mele 1987a; 2001)
The requirement that the self-deceiver hold contradictory beliefs raises the ‘static’ paradox, since it seems to pose an impossible state of mind, namely, consciously believing p and ~p at the same time. As the deceiver, she must believe ~p, and, as the deceived, she must believe p. Accordingly, the self-deceiver consciously believes p and ~p. But if believing both a proposition and its negation in full awareness is an impossible state of mind to be in, then self-deception as it has traditionally been understood seems to be impossible as well.
The requirement that the self-deceiver intentionally gets herself to hold a belief she knows to be false raises the ‘dynamic’ or ‘strategic’ paradox, since it seems to involve the self-deceiver in an impossible project, namely, both deploying and being duped by some deceitful strategy. As the deceiver, she must be aware she's deploying a deceitful strategy; but, as the deceived, she must be unaware of this strategy for it to be effective. And yet it is difficult to see how the self-deceiver could fail to be aware of her intention to deceive. A strategy known to be deceitful, however, seems bound to fail. How could I be taken in by your efforts to get me to believe something false, if I know what you're up to? But if it's impossible to be taken in by a strategy one knows is deceitful, then, again, self-deception as it has traditionally been understood seems to be impossible as well.
These paradoxes have led a minority of philosophers to be skeptical that self-deception is possible (Paluch 1967; Haight 1980). In view of the empirical evidence that self-deception is not only possible but pervasive (Sahdra & Thagard 2003), most have sought some resolution to these paradoxes. These approaches can be organized into two main groups: those that maintain that the paradigmatic cases of self-deception are intentional, and those that deny this. Call these approaches intentionalist and non-intentionalist respectively. Intentionalists find the model of intentional interpersonal deception apt, since it helps to explain the apparent responsibility of self-deceivers for their self-deception, its selectivity, and its difference from other sorts of motivated belief such as wishful thinking. Non-intentionalists are impressed by the static and dynamic paradoxes allegedly involved in modeling self-deception on intentional interpersonal deception and, in their view, the equally puzzling psychological models used by intentionalists to avoid these paradoxes, such as semi-autonomous subsystems, unconscious beliefs and intentions, and the like.
The chief problem facing intentional models of self-deception is the dynamic paradox, namely, that it seems impossible to form an intention to get oneself to believe what one currently disbelieves or believes is false. For one to carry out an intention to deceive oneself, one must know what one is doing; to succeed, one must be ignorant of this same fact. Intentionalists agree on the proposition that self-deception is intentional, but divide over whether it requires the holding of contradictory beliefs, and thus over the specific content of the alleged intention involved in self-deception. Insofar as even the bare intention to acquire the belief that p for reasons not having to do with one's evidence for p seems unlikely to succeed if directly known, most intentionalists introduce temporal or psychological divisions that serve to insulate self-deceivers from awareness of their deceptive strategy. When self-deceivers are not consciously aware of their beliefs to the contrary or their deceptive intentions, no paradox seems to be involved in deceiving oneself. Many approaches utilize some combination of psychological and temporal division (e.g., Bermúdez 2000).
Some intentionalists argue that self-deception is a complex process that is often extended over time and as such a self-deceiver can consciously set out to deceive herself into believing p, knowing or believing ~p, and along the way lose her belief that ~p, either forgetting her original deceptive intention entirely, or regarding it as having, albeit accidentally, brought about the true belief she would have arrived at anyway (Sorensen 1985; Bermúdez 2000). So, for instance, an official involved in some illegal behavior might destroy any records of this behavior and create evidence that would cover it up (diary entries, emails and the like), knowing that she will likely forget having done these things over the next few months. When her activities are investigated a year later, she has forgotten her tampering efforts and based upon her falsified evidence comes to believe falsely that she was not involved in the illegal activities of which she is accused. Here the self-deceiver need never simultaneously hold contradictory beliefs even though she intends to bring it about that she believe p, which she regards as false at the outset of the process of deceiving herself and true at its completion. The self-deceiver need not even forget her original intention to deceive, so an unbeliever who sets out to get herself to believe in God (since she thinks such a belief is prudent, having read Pascal) might well remember such an intention at the end of the process and deem that by God's grace even this misguided path led her to the truth. It is crucial to see here that what enables the intention to succeed in such cases is the operation of what Johnston (1988) terms ‘autonomous means’ (e.g., the normal degradation of memory, the tendency to believe what one practices, etc.) not the continued awareness of the intention. Some non-intentionalists take this to be a hint that the process by which self-deception is accomplished is subintentional (Johnston 1988). 
In any case, while such temporal partitioning accounts apparently avoid the static and dynamic paradoxes, many find that such cases fail to capture the distinctive opacity, indirection and tension associated with garden-variety cases of self-deception (e.g., Levy 2004).
Another strategy employed by intentionalists is the division of the self into psychological parts that play the role of the deceiver and deceived respectively. These strategies range from positing strong division in the self, where the deceiving part is a relatively autonomous subagency capable of belief, desire and intention (Rorty 1988); to more moderate division, where the deceiving part still constitutes a separate center of agency (Pears 1984, 1986, 1991); to the relatively modest division of Davidson, where there need only be a boundary between conflicting attitudes (1982, 1985). Such divisions are prompted in large part by the acceptance of the contradictory belief requirement. It isn't simply that self-deceivers hold contradictory beliefs, which, though strange, isn't impossible. One can believe p and believe ~p without believing p & ~p, which would be impossible. The problem such theorists face stems from the appearance that the belief that ~p motivates, and thus forms a part of, the intention to bring it about that one acquire and maintain the false belief p (Davidson 1985). So, for example, the Nazi official's recognition that his actions implicate him in serious evil motivates him to implement a strategy to deceive himself into believing he is not so involved; he can't intend to bring it about that he holds such a false belief if he doesn't recognize it is false, and he wouldn't want to bring such a belief about if he didn't recognize the evidence to the contrary. So long as this is the case, the deceptive subsystem, whether it constitutes a separate center of agency or something less robust, must be hidden from the conscious self being deceived if the self-deceptive intention is to succeed. While these psychological partitioning approaches seem to resolve the static and dynamic puzzles, they do so by introducing a picture of the mind that raises many puzzles of its own.
On this point, there appears to be consensus even among intentionalists that self-deception can and should be accounted for without invoking divisions not already used to explain non-self-deceptive behavior, what Talbott (1995) calls ‘innocent’ divisions.
Some intentionalists reject the requirement that self-deceivers hold contradictory beliefs (Talbott 1995; Bermúdez 2000). According to such theorists, the only thing necessary for self-deception is the intention to bring it about that one believe p, where lacking such an intention one would not have acquired that belief. The self-deceiver thus need not believe ~p. She might have no views at all regarding p, possessing no evidence either for or against p; or she might believe p is merely possible, possessing evidence for or against p too weak to warrant belief that p or ~p (Bermúdez 2000). Self-deceivers in this minimal sense intentionally acquire the belief that p, despite their recognition at the outset that they do not possess enough evidence to warrant this belief, by selectively gathering evidence supporting p or otherwise manipulating the belief-formation process to favor belief that p. Even on this minimal account, such intentions will often be unconscious, since a strategy to acquire a belief in violation of one's normal evidential standards seems unlikely to succeed if one is aware of it.
A number of philosophers have moved away from modeling self-deception on intentional interpersonal deception, opting instead to treat it as a species of motivationally biased belief. These non-intentionalists allow that phenomena answering to the various intentionalist models available may be possible, but everyday or ‘garden-variety’ self-deception can be explained without adverting to subagents, or unconscious beliefs and intentions, which, even if they resolve the static and dynamic puzzles of self-deception, raise many puzzles of their own. If such non-exotic explanations are available, intentionalist explanations seem unwarranted.
The main paradoxes of self-deception seem to arise from modeling self-deception too closely on intentional interpersonal deception. Accordingly, non-intentionalists suggest the intentional model be jettisoned in favor of one that takes ‘to be deceived’ to be nothing more than to believe falsely or be mistaken in believing (Johnston 1988; Mele 2001). For instance, Sam mishears that it will be a sunny day and relays this misinformation to Joan with the result that she believes it will be a sunny day. Joan is deceived in believing it will be sunny and Sam has deceived her, albeit unintentionally. Initially, such a model may not appear promising for self-deception, since simply being mistaken about p or accidentally causing oneself to be mistaken about p doesn't seem to be self-deception at all but some sort of innocent error—Sam doesn't seem self-deceived, just deceived. Non-intentionalists, however, argue that in cases of self-deception the false belief is not accidental but motivated by desire (Mele 2001), anxiety (Johnston 1988; Barnes 1997) or some other emotion regarding p or related to p. So, for instance, when Allison believes against the preponderance of evidence available to her that her daughter is not having learning difficulties, the non-intentionalist will explain the various ways she misreads the evidence by pointing to such things as her desire that her daughter not have learning difficulties, her fear that she has such difficulties, or anxiety over this possibility. In such cases, Allison's self-deceptive belief that her daughter is not having learning difficulties fulfills her desire, quells her fear or reduces her anxiety, and it is this function (not an intention) that explains why her belief-formation process is biased. Allison's false belief is not an innocent mistake, but a consequence of her motivational states.
Some non-intentionalists suppose that self-deceivers recognize at some level that their self-deceptive belief p is false, contending that self-deception essentially involves an ongoing effort to resist the thought of this unwelcome truth or is driven by anxiety prompted by this recognition (Bach 1981; Johnston 1988). So, in Allison's case, her belief that her daughter is having learning difficulties, together with her desire that this not be the case, motivates her to employ means to avoid this thought and to believe the opposite. Others, however, argue the needed motivation can as easily be supplied by uncertainty or ignorance whether p, or suspicion that ~p (Mele 2001; Barnes 1997). Thus, Allison need not hold any opinion regarding her daughter's having learning difficulties for her false belief that she is not experiencing difficulties to count as self-deception, since it is her regarding evidence in a motivationally biased way in the face of evidence to the contrary, not her recognition of this evidence, that makes her belief self-deceptive. Accordingly, Allison need not intend to deceive herself nor believe at any point that her daughter is in fact having learning difficulties. If we think someone like Allison is self-deceived, then self-deception requires neither contradictory beliefs nor intentions regarding the acquisition or retention of the self-deceptive belief.
On such deflationary views of self-deception, one need only hold a false belief p, possess evidence that ~p, and have some desire or emotion that explains why p is believed and retained. In general, if one possesses evidence that one normally would take to support ~p and yet believes p instead due to some desire, emotion or other motivation one has related to p, then one is self-deceived.
Intentionalists contend these deflationary accounts do not adequately distinguish self-deception from other sorts of motivated believing, cannot explain the peculiar selectivity associated with self-deception, and lack a compelling explanation for why self-deceivers are typically thought to be responsible and open to censure (see section 5.1 for this last item).
What distinguishes wishful thinking from self-deception, according to intentionalists, is just that the latter is intentional while the former is not (e.g., Bermúdez 2000). Non-intentionalists respond that what distinguishes wishful thinking from self-deception is that self-deceivers recognize evidence against their self-deceptive belief whereas wishful thinkers do not (Bach 1981; Johnston 1988), or merely possess, without recognizing, greater counterevidence than wishful thinkers. Some contend wishful thinking is a species of self-deception, but self-deception includes unwelcome as well as wishful belief, and thus may be motivated by something other than a desire that the target belief be true (see section 4 for more on this variety of self-deception).
Another objection raised by intentionalists is that deflationary accounts cannot explain the selective nature of self-deception, termed the ‘selectivity problem’ by Bermúdez (1997, 2000). Why is it, such intentionalists ask, that we are not rendered biased in favor of the belief that p in many cases where we have a very strong desire that p (or anxiety or some other motivation related to p)? Intentionalists argue that an intention to get oneself to acquire the belief that p offers a relatively straightforward answer to this question. Mele (2001), drawing on empirical research regarding lay hypothesis testing, argues that selectivity may be explained in terms of the agent's assessment of the relative costs of erroneously believing p and ~p. So, for example, Josh would be happier believing falsely that the gourmet chocolate he finds so delicious isn't produced by exploited farmers than falsely believing that it is, since he desires that it not be so produced. Because Josh considers the cost of erroneously believing his favorite chocolate is tainted by exploitation to be very high (no other chocolate gives him the same pleasure), it takes a great deal more evidence to convince him that his chocolate is so tainted than it does to convince him otherwise. It is the low subjective cost of falsely believing the chocolate is not tainted that facilitates Josh's self-deception. But we can imagine Josh having the same strong desire that his chocolate not be tainted by exploitation and yet assessing the cost of falsely believing it is not tainted differently. Say, for example, he works for an organization promoting fair trade and non-exploitive labor practices among chocolate producers and believes he has an obligation to accurately represent the labor practices of the producer of his favorite chocolate and would, furthermore, lose credibility if the chocolate he himself consumes is tainted by exploitation.
In these circumstances, Josh is more sensitive to evidence that his favorite chocolate is tainted, despite his desire that it not be, since the subjective cost of being wrong is higher for him than it was before. It is the relative subjective costs of falsely believing p and ~p that explain why desire or other motivation biases belief in some circumstances and not others. Challenging this solution, Bermúdez (2000) suggests that the selectivity problem may reemerge, since it isn't clear why in cases where there is a relatively low cost for holding a self-deceptive belief favored by our motivations we frequently do not become self-deceived. Mele (2001), however, points out that intentional strategies have their own ‘selectivity problem’, since it isn't clear why some intentions to acquire a self-deceptive belief succeed while others do not.
Self-deception that involves the acquisition of an unwanted belief, termed ‘twisted self-deception’ by Mele (1999, 2001), has generated a small but growing literature of its own (see, e.g., Barnes 1997; Mele 1999, 2001; Scott-Kakures 2000, 2001). A typical example of such self-deception is the jealous husband who believes on weak evidence that his wife is having an affair, something he doesn't want to be the case. In this case, the husband apparently comes to have this false belief in the face of strong evidence to the contrary in ways similar to those in which ordinary self-deceivers come to believe something they want to be true.
One question philosophers have sought to answer is how a single unified account of self-deception can explain both welcome and unwelcome beliefs. If a unified account is sought, then it seems self-deception cannot require that the self-deceptive belief itself be desired. Pears (1984) has argued that unwelcome belief might be driven by fear or jealousy. My fear of my house burning down might motivate my false belief that I have left the stove burner on. This unwelcome belief serves to ensure that I avoid what I fear, since it leads me to confirm that the burner is off. Barnes (1997) argues that the unwelcome belief must serve to reduce some relevant anxiety; in this case, my anxiety that my house is burning. Scott-Kakures (2000, 2001) argues, however, that since the unwelcome belief itself does not in many cases serve to reduce but rather to increase anxiety or fear, their reduction cannot be the purpose of that belief. Instead, he contends that we should think of the belief as serving the agent's relevant goals and interests; in my case, the preservation of my house. My testing and confirming an unwelcome belief may be explained by the costs I associate with being in error, which are determined in view of my relevant aims and interests. If I falsely believe that I have left the burner on, the cost is relatively low—I am inconvenienced by confirming that it is off. If I falsely believe that I have not left the burner on, the cost is extremely high—my house being destroyed by fire. The asymmetry between these relative costs alone may account for my manipulation of evidence confirming the false belief that I have left the burner on. Drawing upon recent empirical research, both Mele (2001) and Scott-Kakures (2000) advocate a model of this sort, since it helps to account for the roles desires and emotions apparently play in cases of twisted self-deception.
Nelkin (2002) argues that the motivation for self-deceptive belief formation should be restricted to a desire to believe p. She points out that the phrase “unwelcome belief” is ambiguous, since a belief itself might be desirable even if its being true is not. I might want to hold the belief that I have left the burner on, but not want it to be the case that I have left it on. The belief is desirable in this instance, because holding it ensures that it will not be true. In Nelkin's view, then, what unifies cases of self-deception—both twisted and straight—is that the self-deceptive belief is motivated by a desire to believe p; what distinguishes them is that twisted self-deceivers do not want p to be the case, while straight self-deceivers do. Restricting the motivating desire to a desire to believe p, according to Nelkin, makes clear what twisted and straight self-deception have in common as well as why other forms of motivated belief formation are not cases of self-deception. Though non-intentional models of twisted self-deception dominate the landscape, whether desire, emotion or some combination of these attitudes plays the dominant role in such self-deception, and whether their influence merely triggers the process or continues to guide it throughout, remain matters of controversy.
Despite the fact that much of the contemporary philosophical discussion of self-deception has focused on epistemology, philosophical psychology and philosophy of mind, the morality of self-deception has been the central focus of discussion historically. As a threat to moral self-knowledge, a cover for immoral activity, and a violation of authenticity, self-deception has been thought to be morally wrong or, at least, morally dangerous. Some thinkers in what Martin (1986) calls ‘the vital lie tradition’, however, have held that self-deception can in some instances be salutary, protecting us from truths that would make life unlivable (e.g., Rorty 1972, 1994). There are two major questions regarding the morality of self-deception: First, can a person be held morally responsible for self-deception, and if so, under what conditions? Second, is there anything morally problematic about self-deception, and if so, what and under what circumstances? The answers to these questions are clearly intertwined. If self-deceivers cannot be held responsible for self-deception, then their responsibility for whatever morally objectionable consequences it might have will be mitigated if not eliminated. Nevertheless, self-deception might be morally significant even if one cannot be taxed for entering into it. To be ignorant of one's moral self, as Socrates saw, may represent a great obstacle to a life well lived whether or not one is at fault for such ignorance.
Whether self-deceivers can be held responsible for their self-deception is largely a question of whether they have the requisite control over the acquisition and maintenance of their self-deceptive belief. In general, intentionalists hold that self-deceivers are responsible, since they intend to acquire the self-deceptive belief, usually recognizing the evidence to the contrary. Even when the intention is indirect, such as when one intentionally seeks evidence in favor of p or avoids collecting or examining evidence to the contrary, self-deceivers seem intentionally to flout their own normal standards for gathering and evaluating evidence. So, minimally, they are responsible for such actions and omissions.
Initially, non-intentionalist approaches may seem to remove the agent from responsibility by rendering the process by which she is self-deceived subintentional. If my anxiety, fear, or desire triggers a process that ineluctably leads me to hold the self-deceptive belief, I cannot be held responsible for holding that belief. How can I be held responsible for processes that operate without my knowledge and which are set in motion without my intention? Most non-intentionalist accounts, however, do hold self-deceivers responsible for individual episodes of self-deception, or for the vices of cowardice and lack of self-control from which they spring, or both. To be morally responsible in the sense of being an appropriate target for praise or blame requires, at least, that agents have control over the actions in question. Mele (2001), for example, argues that many sources of bias are controllable and that self-deceivers can recognize and resist the influence of emotion and desire on their belief acquisition and retention, particularly in matters they deem to be important, morally or otherwise. The extent of this control, however, is an empirical question.
Other non-intentionalists take self-deceivers to be responsible for certain epistemic vices such as cowardice in the face of fear or anxiety and lack of self-control with respect to the biasing influences of desire and emotion. Thus, Barnes (1997) argues that self-deceivers “can, with effort, in some circumstances, resist their biases” (83) and “can be criticized for failing to take steps to prevent themselves from being biased; they can be criticized for lacking courage in situations where having courage is neither superhumanly difficult nor costly” (175). Whether self-deception is due to a character defect or not, ascriptions of responsibility depend upon whether the self-deceiver has control over the biasing effects of her desires and emotions.
Levy (2004) has argued that non-intentional accounts of self-deception that deny the contradictory belief requirement should not suppose that self-deceivers are typically responsible, since it is rarely the case that self-deceivers possess the requisite awareness of the biasing mechanisms operating to produce their self-deceptive belief. Lacking such awareness, self-deceivers do not appear to know when or on which beliefs such mechanisms operate, rendering them unable to curb the effects of these mechanisms, even when they operate to form false beliefs about morally significant matters. Levy also argues that if self-deceivers typically lack the control necessary for moral responsibility in individual episodes of self-deception, they also lack control over being the sort of person disposed to self-deception. Non-intentionalists may respond by claiming that self-deceivers often are aware of the potentially biasing effects their desires and emotions might have and can exercise control over them. They might also challenge the idea that self-deceivers must be aware in the ways Levy suggests. One well-known account of control, employed by Levy, holds that a person is responsible just in case she acts on a mechanism that is moderately responsive to reasons (including moral reasons), such that were she to possess such reasons this same mechanism would act upon those reasons in at least one possible world (Fischer and Ravizza 1999). Guidance control, in this sense, requires that the mechanism in question be capable of recognizing and responding to moral and non-moral reasons sufficient for acting otherwise. In cases of self-deception, deflationary views may suggest that the biasing mechanism, while sensitive and responsive to motivation, is too simple to itself be responsive to reasons.
However, the question isn't whether the biasing mechanism itself is reasons-responsive but whether the mechanism governing its operation is, that is, whether self-deceivers typically could recognize and respond to moral and non-moral reasons to resist the influence of their desires and emotions and instead exercise special scrutiny of the belief in question. At the very least, it isn't obvious that they could not. Moreover, that some overcome their self-deception seems to indicate such a capacity, and thus at least some control over ceasing to be self-deceived.
Insofar as it seems plausible that in some cases self-deceivers are apt targets for censure, what prompts this attitude? Take the case of a mother who deceives herself into believing her husband is not abusing their daughter because she can't bear the thought that he is a moral monster (Barnes 1997). Why do we blame her? Here we confront the nexus between moral responsibility for self-deception and the morality of self-deception. Understanding what obligations may be involved and breached in cases of this sort will help to clarify the circumstances in which ascriptions of responsibility are appropriate.
While some instances of self-deception seem morally innocuous and others may even be thought salutary in various ways (Rorty 1994), the majority of theorists have thought there to be something morally objectionable about self-deception or its consequences in many cases. Self-deception has been considered objectionable because it facilitates harm to others (Linehan 1982) and to oneself, undermines autonomy (Darwall 1988; Baron 1988), corrupts conscience (Butler 1722), violates authenticity (Sartre 1943), and manifests a vicious lack of courage and self-control that undermines the capacity for compassionate action (Jenni 2003). Linehan (1982) argues that we have an obligation to scrutinize the beliefs that guide our actions that is proportionate to the harm to others such actions might involve. When self-deceivers induce ignorance of moral obligations, of the particular circumstances, of likely consequences of actions, or of their own engagements, by means of their self-deceptive beliefs, they are culpable. They are guilty of negligence with respect to their obligation to know the nature, circumstances, likely consequences and so forth of their actions (Jenni 2003). Self-deception, accordingly, undermines or erodes agency by reducing our capacity for self-scrutiny and change (Baron 1988). If I am self-deceived about actions or practices that harm others or myself, my ability to take responsibility and change is also severely restricted. Joseph Butler, in his well-known sermon “On Self-Deceit”, emphasizes the ways in which self-deception about one's moral character and conduct, ‘self-ignorance’ driven by inordinate ‘self-love’, not only facilitates vicious actions but hinders the agent's ability to change by obscuring them from view. Such ignorance, claims Butler, “undermines the whole principle of good … and corrupts conscience, which is the guide of life” (“On Self-Deceit”).
Existentialist philosophers such as Kierkegaard and Sartre, in very different ways, viewed self-deception as a threat to ‘authenticity’ insofar as self-deceivers fail to take responsibility for themselves and their engagements past, present and future. By alienating us from our own principles, self-deception may also threaten moral integrity (Jenni 2003). Furthermore, self-deception manifests certain weaknesses of character that dispose us to react to fear, anxiety, or the desire for pleasure by biasing our belief acquisition and retention so as to serve these emotions and desires rather than accuracy. Such epistemic cowardice and lack of self-control may inhibit the ability of self-deceivers to stand by or apply moral principles they hold, by biasing their beliefs regarding particular circumstances, consequences or engagements, or by obscuring the principles themselves. In these ways and a myriad of others, philosophers have found some self-deception objectionable in itself or for the consequences it has on our ability to shape our lives.
Those finding self-deception morally objectionable generally assume that self-deception, or at least the character that disposes us to it, is under our control to some degree. This assumption need not entail that self-deception is intentional, only that it is avoidable in the sense that self-deceivers could recognize and respond to reasons for resisting bias by exercising special scrutiny (see section 5.1). It should be noted, however, that self-deception still poses a serious worry even if one cannot avoid entering into it, since self-deceivers may nevertheless have an obligation to overcome it. If exiting self-deception is under the guidance control of self-deceivers, then they might reasonably be blamed for persisting in their self-deceptive beliefs when those beliefs concern matters of moral significance.
But even if agents don't bear specific responsibility for being in that state, self-deception may nevertheless be morally objectionable, destructive and dangerous. If radically deflationary models of self-deception do turn out to imply that our own desires and emotions, in collusion with social pressures toward bias, lead us to hold self-deceptive beliefs and cultivate habits of self-deception of which we are unaware and from which we cannot reasonably be expected to escape on our own, self-deception would still undermine autonomy, manifest character defects, obscure our moral engagements from us, and the like. For these reasons, Rorty (1994) emphasizes the importance of the company we keep. Our friends, since they may not share our desires or emotions, are often in a better position to recognize our self-deception than we are. With the help of such friends, self-deceivers may, with luck, recognize and correct morally corrosive self-deception.
Evaluating self-deception and its consequences for ourselves and others is a difficult task. It requires, among other things: determining the degree of control self-deceivers have; what the self-deception is about (Is it important morally or otherwise?); what ends the self-deception serves (Does it serve mental health or as a cover for moral wrongdoing?); how entrenched it is (Is it episodic or habitual?); and whether it is escapable (What means of correction are available to the self-deceiver?). In view of the many potentially devastating moral problems associated with self-deception, these are questions that demand our continued attention.
Collective self-deception has received scant direct philosophical attention as compared with its individual counterpart. Collective self-deception might refer simply to a group of similarly self-deceived individuals or to a group entity, such as a corporation, committee, jury or the like, that is self-deceived. These alternatives reflect two basic perspectives social epistemologists have taken on ascriptions of propositional attitudes to collectives. On the one hand, such attributions might be taken summatively, as simply an indirect way of attributing those states to members of the collective (Quinton 1975/1976). This summative understanding considers attitudes attributed to groups to be nothing more than metaphors expressing the sum of the attitudes held by their members. To say that students think tuition is too high is just a way of saying that most students think so. On the other hand, such attributions might be understood non-summatively, as applying to collective entities that are themselves ontologically distinct from the members upon which they depend. These so-called ‘plural subjects’ (Gilbert 1989, 1994, 2005) or ‘social integrates’ (Pettit 2003), while supervening upon the individuals comprising them, may well express attitudes that diverge from those of individual members. For instance, saying NASA believed the O-rings on the space shuttle's booster rockets to be safe need not imply that most or all the members of this organization personally held this belief, only that the institution itself did. The non-summative understanding, then, considers collectives to be, like persons, apt targets for attributions of propositional attitudes, and potentially of moral and epistemic censure as well. Following this distinction, collective self-deception may be understood in either a summative or non-summative sense.
In the summative sense, collective self-deception refers to self-deceptive belief shared by a group of individuals, who each come to hold the self-deceptive belief for similar reasons and by similar means, varying according to the account of self-deception followed. We might call this self-deception across a collective. In the non-summative sense, the subject of collective self-deception is the collective itself, not simply the individuals comprising it. The following sections offer an overview of these forms of collective self-deception, noting the significant challenges posed by each.
Understood summatively, we might define collective self-deception as the holding of a false belief in the face of evidence to the contrary by a group of people as a result of shared desires, emotions, or intentions (depending upon the account of self-deception) favoring that belief. Collective self-deception is distinct from other forms of collective false belief—such as might result from deception or lack of evidence—insofar as the false belief issues from the agents' own self-deceptive mechanisms (however these are construed), not the absence of evidence to the contrary or presence of misinformation. Accordingly, the individuals constituting the group would not hold the false belief if their vision weren't distorted by their attitudes (desire, anxiety, fear or the like) toward the belief. What distinguishes collective self-deception from solitary self-deception just is its social context, namely, that it occurs within a group that shares both the attitudes bringing about the false belief and the false belief itself. Compared to its solitary counterpart, self-deception within a collective is both easier to foster and more difficult to escape, being abetted by the self-deceptive efforts of others within the group.
Virtually all self-deception has a social component, being wittingly or unwittingly supported by one's associates (see Ruddick 1988). In the case of collective self-deception, however, the social dimension comes to the fore, since each member of the collective unwittingly helps to sustain the self-deceptive belief of the others in the group. For example, my cancer-stricken friend might self-deceptively believe her prognosis to be quite good. Faced with the fearful prospect of death, she does not form accurate beliefs regarding the probability of her full recovery, attending only to evidence supporting full recovery and discounting or ignoring altogether the ample evidence to the contrary. Caring for her as I do, I share many of the anxieties, fears and desires that sustain my friend's self-deceptive belief, and as a consequence I form the same self-deceptive belief via the same mechanisms. In such a case, I unwittingly support my friend's self-deceptive belief and she mine—our self-deceptions are mutually reinforcing. We are collectively or mutually self-deceived, albeit on a very small scale. Ruddick (1988) calls this ‘joint self-deception.’
On a larger scale, sharing common attitudes, large segments of a society might deceive themselves together. For example, we share a number of self-deceptive beliefs regarding our consumption patterns. Many of the goods we consume are produced by people enduring labor conditions we do not find acceptable and in ways that we recognize are environmentally destructive and likely unsustainable. Despite our being at least generally aware of these social and environmental ramifications of our consumption practices, we hold the overly optimistic beliefs that the world will be fine, that its peril is overstated, that the suffering caused by exploitive and ecologically degrading practices is overblown, that our own consumption habits are unconnected to these sufferings anyway, even that our minimal efforts at conscientious consumption are an adequate remedy (see Goleman 1989). When self-deceptive beliefs such as these are held collectively, they become entrenched and their consequences, good or bad, are magnified (Surbey 2004).
The collective entrenches self-deceptive beliefs by providing positive reinforcement from others who share the same false belief, as well as protection from evidence that would destabilize the target belief. There are, however, limits to how entrenched such beliefs can become while remaining self-deceptive. The social support cannot be the sole or primary cause of the self-deceptive belief, for then the belief would simply be the result of unwitting interpersonal deception and not the deviant belief-formation process that characterizes self-deception. If the environment becomes so epistemically contaminated as to make counter-evidence inaccessible to the agent, then we have a case of false belief, not self-deception. Thus, even within a collective, a person is self-deceived just in case she would not hold her false belief if she did not possess the motivations skewing her belief-formation process. This said, relative to solitary self-deception, the collective variety does present greater external obstacles to avoiding or escaping self-deception, and is for this reason more entrenched. If the various proposed psychological mechanisms of self-deception pose an internal challenge to the self-deceiver's power to control her belief formation, these social factors pose an external challenge to that control. Determining how superable this challenge is will affect our assessment of individual responsibility for self-deception as well as the prospects of unassisted escape from it.
Collective self-deception can also be understood from the perspective of the collective itself in a non-summative sense. Though there are varying accounts of group belief, generally speaking, a group can be said to believe, desire, value or the like just in case its members “jointly commit” to these things as a body (Gilbert 2005). A corporate board, for instance, might be jointly committed as a body to believe, value and strive for whatever the CEO recommends. Such commitment need not entail that each individual board member personally endorses such beliefs, values or goals, only that as members of the board they do (Gilbert 2005). While philosophically precise accounts of non-summative self-deception remain largely unarticulated, the possibilities mirror those of individual self-deception. When collectively held attitudes motivate a group to espouse a false belief despite the group's possession of evidence to the contrary, we can say that the group is collectively self-deceived in a non-summative sense.
For example, Robert Trivers (2000) suggests that ‘organizational self-deception’ led to NASA's failure to represent accurately the risks posed by the space shuttle's O-ring design, a failure that eventually led to the Challenger disaster. The organization as a whole, he argues, had strong incentives to represent such risks as small. As a consequence, NASA's Safety Unit mishandled and misrepresented data it possessed suggesting that under certain temperature conditions the shuttle's O-rings were not safe. NASA, as an organization, then, self-deceptively believed the risks posed by O-ring damage were minimal. Within the institution, however, there were a number of individuals who did not share this belief, but both they and the evidence supporting their belief were treated in a biased manner by the decision-makers within the organization. As Trivers (2000) puts it, this information was relegated “to portions of … the organization that [were] inaccessible to consciousness (we can think of the people running NASA as the conscious part of the organization).” In this case, collectively held values created a climate within NASA that clouded its vision of the data and led to its endorsement of a fatally false belief.
Collective self-deceit may also play a significant role in facilitating unethical practices by corporate entities. For example, a collective commitment by members of a corporation to maximizing profits might lead members to form false beliefs about the ethical propriety of the corporation's practices. Gilbert (2005) suggests that such a commitment might lead executives and other members to “simply lose sight of moral constraints and values they previously held”. Similarly, Tenbrunsel and Messick (2004) argue that self-deceptive mechanisms play a pervasive role in what they call ‘ethical fading’, acting as a kind of ‘bleach’ that renders organizations blind to the ethical dimensions of their decisions. They argue that such self-deceptive mechanisms must be recognized and actively resisted at the organizational level if unethical behavior is to be avoided. More specifically, Gilbert (2005) contends that collectively accepting that “certain moral constraints must rein in the pursuit of corporate profits” might shift corporate culture in such a way that efforts to respect these constraints are recognized as part of being a good corporate citizen. In view of the ramifications this sort of collective self-deception has for the way we understand corporate misconduct and responsibility, understanding its specific nature in greater detail remains an important task.
Collective self-deception understood in either the summative or non-summative sense raises a number of significant questions, such as whether individuals within collectives bear responsibility for their own self-deception or for the part they play in the collective's self-deception, and whether collective entities can be held responsible for their epistemic failures. Finally, collective self-deception prompts us to ask what means are available to collectives and their members to resist, avoid and escape self-deception. To answer these and other questions, more precise accounts of these forms of self-deception are needed. Given the capacity of collective self-deception to entrench false beliefs and to magnify their consequences—sometimes with disastrous results—collective self-deception is not just a philosophical puzzle; it is a problem that demands attention.
- Ames, R.T., and W. Dissanayake, (eds.), 1996, Self and Deception, New York: State University of New York Press.
- Audi, R., 1976, “Epistemic Disavowals and Self-Deception,” The Personalist, 57: 378–385.
- –––, 1982, “Self-Deception, Action, and Will,” Erkenntnis, 18: 133–158.
- –––, 1989, “Self-Deception and Practical Reasoning,” Canadian Journal of Philosophy, 19: 247–266.
- Bach, K., 1997, “Thinking and Believing in Self-Deception,” Behavioral and Brain Sciences, 20: 105.
- –––, 1981, “An Analysis of Self-Deception,” Philosophy and Phenomenological Research, 41: 351–370.
- Barnes, A., 1997, Seeing through Self-Deception, New York: Cambridge University Press.
- Baron, M., 1988, “What is Wrong with Self-Deception,” in Perspectives on Self-Deception, B. McLaughlin and A. O. Rorty (eds.), Berkeley: University of California Press.
- Bok, S., 1980, “The Self Deceived,” Social Science Information, 19: 923–935.
- –––, 1989, “Secrecy and Self-Deception,” in Secrets: On the Ethics of Concealment and Revelation, New York: Vintage.
- Bermúdez, J., 2000, “Self-Deception, Intentions, and Contradictory Beliefs,” Analysis 60(4): 309–319.
- –––, 1997, “Defending Intentionalist Accounts of Self-Deception,” Behavioral and Brain Sciences, 20: 107–8.
- Bird, A., 1994, “Rationality and the Structure of Self-Deception,” in S. Gianfranco (ed.), European Review of Philosophy (Volume 1: Philosophy of Mind), Stanford: CSLI Publications.
- Brown, R., 2003, “The Emplotted Self: Self-Deception and Self-Knowledge,” Philosophical Papers, 32: 279–300.
- Butler, J., 1726, “Upon Self-Deceit,” in D.E. White (ed.), 2006, The Works of Bishop Butler, Rochester: Rochester University Press. [Available online]
- Chisholm, R. M., and Feehan, T., 1977, “The Intent to Deceive,” Journal of Philosophy, 74: 143–159.
- Cook, J. T., 1987, “Deciding to Believe without Self-Deception,” Journal of Philosophy, 84: 441–446.
- Dalton, P., 2002, “Three Levels of Self-Deception (Critical Commentary on Alfred Mele's Self-Deception Unmasked),” Florida Philosophical Review, 2(1): 72–76.
- Darwall, S., 1988, “Self-Deception, Autonomy, and Moral Constitution,” in Perspectives on Self-Deception, B. McLaughlin and A. O. Rorty (eds.), Berkeley: University of California Press.
- Davidson, D., 1985, “Deception and Division,” in Actions and Events, E. LePore and B. McLaughlin (eds.), New York: Basil Blackwell.
- –––, 1982, “Paradoxes of Irrationality,” in Philosophical Essays on Freud, R. Wollheim and J. Hopkins (eds.), Cambridge: Cambridge University Press.
- Demos, R., 1960, “Lying to Oneself,” Journal of Philosophy, 57: 588–95.
- Dennett, D., 1992, “The Self as a Center of Narrative Gravity,” in Consciousness and Self: Multiple Perspectives, F. Kessel, P. Cole, and D. Johnson (eds.), Hillsdale, NJ: L. Erlbaum.
- de Sousa, R., 1978, “Self-Deceptive Emotions,” Journal of Philosophy, 75: 684–697.
- –––, 1970, “Self-Deception,” Inquiry, 13: 308–321.
- DeWeese-Boyd, I., 2007, “Taking Care: Self-Deception, Culpability and Control,” teorema, 26(3): 161–176.
- Dunn, R., 1995, “Motivated Irrationality and Divided Attention,” Australasian Journal of Philosophy, 73: 325–336.
- –––, 1995, “Attitudes, Agency and First-Personality,” Philosophia, 24: 295–319.
- –––, 1994, “Two Theories of Mental Division,” Australasian Journal of Philosophy, 72: 302–316.
- Dupuy, J-P., (ed.), 1998, Self-Deception and Paradoxes of Rationality (Lecture Notes 69), Stanford: CSLI Publications.
- Elster, J., (ed.), 1985, The Multiple Self, Cambridge: Cambridge University Press.
- Fairbanks, R., 1995, “Knowing More Than We Can Tell,” The Southern Journal of Philosophy, 33: 431–459.
- Fingarette, H., 1998, “Self-Deception Needs No Explaining,” The Philosophical Quarterly, 48: 289–301.
- –––, 1969, Self-Deception, Berkeley: University of California Press; reprinted, 2000.
- Fischer, J. and Ravizza, M., 1998, Responsibility and Control, Cambridge: Cambridge University Press.
- Funkhouser, E., 2005, “Do the Self-Deceived Get What They Want?,” Pacific Philosophical Quarterly, 86(3): 295–312.
- Gendler, T. S., 2007, “Self-Deception as Pretense,” Philosophical Perspectives, 21: 231–258.
- Gilbert, Margaret, 1989, On Social Facts, London: Routledge.
- –––, 1994, “Remarks on Collective Belief,” in Socializing Epistemology, F. Schmitt (ed.), Lanham, MD: Rowman and Littlefield.
- –––, 2005, “Corporate Misbehavior and Collective Values,” Brooklyn Law Review, 70(4): 1369–80.
- Goleman, Daniel, 1989, “What is negative about positive illusions?: When benefits for the individual harm the collective,” Journal of Social and Clinical Psychology, 8: 190–197.
- Haight, R. M., 1980, A Study of Self-Deception, Sussex: Harvester Wheatsheaf.
- Hales, S. D., 1994, “Self-Deception and Belief Attribution,” Synthese, 101: 273–289.
- Hernes, C., 2007, “Cognitive Peers and Self-Deception,” teorema, 26(3): 123–130.
- Hauerwas, S. and Burrell, D., 1977, “Self-Deception and Autobiography: Reflections on Speer's Inside the Third Reich,” in Truthfulness and Tragedy, S. Hauerwas with R. Bondi and D. Burrell, Notre Dame: University of Notre Dame Press.
- Jenni, K., 2003, “Vices of Inattention,” Journal of Applied Philosophy, 20(3): 279–95.
- Johnston, M., 1988, “Self-Deception and the Nature of Mind,” in Perspectives on Self-Deception, B. McLaughlin and A. O. Rorty (eds.), Berkeley: University of California Press.
- Kirsch, J., 2005, “What's So Great about Reality?,” Canadian Journal of Philosophy, 35(3): 407–428.
- Lazar, A., 1999, “Deceiving Oneself Or Self-Deceived?,” Mind, 108: 263–290.
- –––, 1997, “Self-Deception and the Desire to Believe,” Behavioral and Brain Sciences, 20: 119–120.
- Levy, N., 2004, “Self-Deception and Moral Responsibility,” Ratio (new series), 17: 294–311.
- Linehan, E. A., 1982, “Ignorance, Self-deception, and Moral Accountability,” Journal of Value Inquiry, 16: 101–115.
- Lockard, J. and Paulhus, D. (eds.), 1988, Self-Deception: An Adaptive Mechanism?, Englewood Cliffs: Prentice-Hall.
- Martin, M., 1986, Self-Deception and Morality, Lawrence: University Press of Kansas.
- –––, (ed.), 1985, Self-Deception and Self-Understanding, Lawrence: University Press of Kansas.
- Martínez Manrique, F., 2007, “Attributions of Self-Deception,” teorema, 26(3): 131–143.
- McLaughlin, B. and Rorty, A. O. (eds.), 1988, Perspectives on Self-Deception, Berkeley: University of California Press.
- Mele, A., 2001, Self-Deception Unmasked, Princeton: Princeton University Press.
- –––, 2000, “Self-Deception and Emotion,” Consciousness and Emotion, 1: 115–139.
- –––, 1999, “Twisted Self-Deception,” Philosophical Psychology, 12: 117–137.
- –––, 1997, “Real Self-Deception,” Behavioral and Brain Sciences, 20: 91–102.
- –––, 1987a, Irrationality: An Essay on Akrasia, Self-Deception, Self-Control, Oxford: Oxford University Press.
- –––, 1987b, “Recent Work on Self-deception,” American Philosophical Quarterly, 24: 1–17.
- –––, 1983, “Self-Deception,” Philosophical Quarterly, 33: 365–377.
- Moran, R., 1988, “Making Up Your Mind: Self-Interpretation and Self-constitution,” Ratio (new series), 1: 135–151.
- Nelkin, D., 2002, “Self-Deception, Motivation, and the Desire to Believe,” Pacific Philosophical Quarterly, 83: 384–406.
- Nicholson, A., 2007, “Cognitive Bias, Intentionality and Self-Deception,” teorema, 26(3): 45–58.
- Noordhof, P., 2003, “Self-Deception, Interpretation and Consciousness,” Philosophy and Phenomenological Research, 67: 75–100.
- Paluch, S., 1967, “Self-Deception,” Inquiry, 10: 268–78.
- Patten, D., 2003, “How do we deceive ourselves?,” Philosophical Psychology, 16(2): 229–46.
- Pears, D., 1991, “Self-Deceptive Belief Formation,” Synthese, 89: 393–405.
- –––, 1984, Motivated Irrationality, New York: Oxford University Press.
- Pettit, Philip, 2003, “Groups with Minds of Their Own,” in Socializing Metaphysics, F. Schmitt (ed.), Lanham, MD: Rowman and Littlefield.
- –––, 2006, “When to Defer to Majority Testimony — and When Not,” Analysis, 66(3): 179–187.
- Pihlström, S., 2007, “Transcendental Self-Deception,” teorema, 26(3): 177–189.
- Quinton, Anthony, 1975/1976, “Social Objects,” Proceedings of the Aristotelian Society, 75: 1–27.
- Räikkä, J., 2007, “Self-Deception and Religious Beliefs,” Heythrop Journal, 48: 513–526.
- Rorty, A. O., 1994, “User-Friendly Self-Deception,” Philosophy, 69: 211–228.
- –––, 1983, “Akratic Believers,” American Philosophical Quarterly, 20: 175–183.
- –––, 1980, “Self-Deception, Akrasia and Irrationality,” Social Science Information, 19: 905–922.
- –––, 1972, “Belief and Self-Deception,” Inquiry, 15: 387–410.
- Sartre, J-P., 1943, L'Être et le Néant, Paris: Gallimard; trans. H. E. Barnes, 1956, Being and Nothingness, New York: Washington Square Press.
- Sahdra, B. and Thagard, P., 2003, “Self-Deception and Emotional Coherence,” Minds and Machines, 13: 213–231.
- Scott-Kakures, D., 2002, “At Permanent Risk: Reasoning and Self-Knowledge in Self-Deception,” Philosophy and Phenomenological Research, 65: 576–603.
- –––, 2001, “High anxiety: Barnes on What Moves the Unwelcome Believer,” Philosophical Psychology, 14: 348–375.
- –––, 2000, “Motivated Believing: Wishful and Unwelcome,” Noûs, 34: 348–375.
- Sorensen, R., 1985, “Self-Deception and Scattered Events,” Mind, 94: 64–69.
- Surbey, M., 2004, “Self-deception: Helping and hindering personal and public decision making,” in Evolutionary Psychology, Public Policy and Personal Decisions, C. Crawford and C. Salmon (eds.), Mahwah, NJ: Lawrence Erlbaum Associates.
- Talbott, W. J., 1997, “Does Self-Deception Involve Intentional Biasing?,” Behavioral and Brain Sciences, 20: 127.
- –––, 1995, “Intentional Self-Deception in a Single Coherent Self,” Philosophy and Phenomenological Research, 55: 27–74.
- Tenbrunsel, A. E. and D. M. Messick, 2004, “Ethical Fading: The Role of Self-Deception in Unethical Behavior,” Social Justice Research, 17(2): 223–236.
- Trivers, R., 2000, “The Elements of a Scientific Theory of Self-Deception,” in Evolutionary Perspectives on Human Reproductive Behavior, Dori LeCroy and Peter Moller (eds.), Annals of the New York Academy of Sciences, 907: 114–131.
- Tversky, A., 1985, “Self-Deception and Self-Perception,” in The Multiple Self, Jon Elster (ed.), Cambridge: Cambridge University Press.
- Van Fraassen, B., 1995, “Belief and the Problem of Ulysses and the Sirens,” Philosophical Studies, 77: 7–37.
- –––, 1984, “Belief and Will,” Journal of Philosophy, 81: 235–256.
- Whisner, W., 1993, “Self-Deception and Other-Person Deception,” Philosophia, 22: 223–240.
- –––, 1989, “Self-Deception, Human Emotion, and Moral Responsibility: Toward a Pluralistic Conceptual Scheme,” Journal for the Theory of Social Behaviour, 19: 389–410.
How to cite this entry. Preview the PDF version of this entry at the Friends of the SEP Society. Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO). Enhanced bibliography for this entry at PhilPapers, with links to its database.
- Self-Deception Bibliography, compiled by David Chalmers and David Bourget, Australian National University
The author would like to thank Margaret DeWeese-Boyd and Douglas Young and the editors for their help in constructing and revising this entry.