Egoism
Egoism can be a descriptive or a normative position. Psychological egoism, the most famous descriptive position, claims that each person has but one ultimate aim: her own welfare. Normative forms of egoism make claims about what one ought to do, rather than describe what one does do. Ethical egoism claims that I morally ought to perform some action if and only if, and because, performing that action maximizes my self-interest. Rational egoism claims that I ought to perform some action if and only if, and because, performing that action maximizes my self-interest. (Here the “ought” is not restricted to the moral “ought”.)
- 1. Psychological Egoism
- 2. Ethical Egoism
- 3. Rational Egoism
- 4. Conclusion
- Bibliography
- Academic Tools
- Other Internet Resources
- Related Entries
1. Psychological Egoism
All forms of egoism require explication of “self-interest” (or “welfare” or “well-being”). There are three main theories. Preference or desire accounts identify self-interest with the satisfaction of one’s desires. Often, and most plausibly, these desires are restricted to self-regarding desires. What makes a desire self-regarding is controversial, but there are clear cases and counter-cases: a desire for my own pleasure is self-regarding; a desire for the welfare of others is not. Objective accounts identify self-interest with the possession of states (such as virtue or knowledge) that are valuable independently of whether they are desired. Hybrid accounts give a role to both desires (or pleasure) and states that are valuable independently of whether they are desired. For example, perhaps the increase to my well-being brought about by a satisfied desire (or a pleasure) itself increases insofar as it is a desire for (or pleasure in) knowledge. Or perhaps the increase to my well-being brought about by a piece of knowledge itself increases insofar as I desire (or take pleasure in) it. Hedonism, which identifies self-interest with pleasure, is either a preference or an objective account, according to whether what counts as pleasure is determined by one’s desires.
Psychological egoism claims that each person has but one ultimate aim: her own welfare. This allows for action that fails to maximize perceived self-interest, but rules out the sort of behavior psychological egoists like to target — such as altruistic behavior or motivation by thoughts of duty alone. It allows for weakness of will, since in weakness of will cases I am still aiming at my own welfare; I am weak in that I do not act as I aim. And it allows for aiming at things other than one’s welfare, such as helping others, where these things are a means to one’s welfare.
Psychological egoism is supported by our frequent observation of self-interested behavior. Apparently altruistic action is often revealed to be self-interested. And we typically motivate people by appealing to their self-interest (through, for example, punishments and rewards).
A common objection to psychological egoism, made famously by Joseph Butler, is that I must desire things other than my own welfare in order to get welfare. Say I derive welfare from playing hockey. Unless I desired, for its own sake, to play hockey, I would not derive welfare from playing. Or say I derive welfare from helping others. Unless I desired, for its own sake, that others do well, I would not derive welfare from helping them. Welfare results from my action, but cannot be the only aim of my action.
The psychological egoist can concede that I must have desires for particular things, such as playing hockey. But there is no need to concede that the satisfaction of these desires is not part of my welfare. My welfare might consist simply in the satisfaction of self-regarding desires. In the case of deriving welfare from helping others, the psychological egoist can again concede that I would not derive welfare without desiring some particular thing, but need not agree that what I desire for its own sake is that others do well. That I am the one who helps them may, for example, satisfy my self-regarding desire for power.
A bigger problem for psychological egoism is that some behavior does not seem to be explained by self-regarding desires. Say a soldier throws himself on a grenade to prevent others from being killed. It does not seem that the soldier is pursuing his perceived self-interest. It is plausible that, if asked, the soldier would have said that he threw himself on the grenade because he wanted to save the lives of others or because it was his duty. He would deny as ridiculous the claim that he acted in his self-interest.
The psychological egoist might reply that the soldier is lying or self-deceived. Perhaps he threw himself on the grenade because he believed that he could not bear to live with himself afterwards if he did not do so. He has a better life, in terms of welfare, by avoiding years of guilt. The main problem here is that while this is a possible account of some cases, there is no reason to think it covers all cases. Another problem is that guilt may presuppose that the soldier has a non-self-regarding desire for doing what he takes to be right.
The psychological egoist might reply that some such account must be right. After all, the soldier did what he most wanted to do, and so must have been pursuing his perceived self-interest. In one sense, this is true. If self-interest is identified with the satisfaction of all of one’s preferences, then all intentional action is self-interested (at least if intentional actions are always explained by citing preferences, as most believe). Psychological egoism turns out to be trivially true. This would not content defenders of psychological egoism, however. They intend an empirical theory that, like other such theories, it is at least possible to refute by observation.
There is another way to show that the trivial version of psychological egoism is unsatisfactory. We ordinarily think there is a significant difference in selfishness between the soldier’s action and that of another soldier who, say, pushes someone onto the grenade to avoid being blown up himself. We think the former is acting unselfishly while the latter is acting selfishly. According to the trivial version of psychological egoism, both soldiers are equally selfish, since both are doing what they most desire.
The psychological egoist might handle apparent cases of self-sacrifice, not by adopting the trivial version, but rather by claiming that facts about the self-interest of the agent explain all behavior. Perhaps as infants we have only self-regarding desires; we come to desire other things, such as doing our duty, by learning that these other things satisfy our self-regarding desires; in time, we pursue the other things for their own sakes.
Even if this picture of development is true, however, it does not defend psychological egoism, since it admits that we sometimes ultimately aim at things other than our welfare. An account of the origins of our non-self-regarding desires does not show that they are really self-regarding. The soldier’s desire is to save others, not increase his own welfare, even if he would not have desired to save others unless saving others was, in the past, connected to increasing his welfare.
The psychological egoist must argue that we do not come to pursue things other than our welfare for their own sakes. In principle, it seems possible to show this by showing that non-self-regarding desires do not continue for long once their connection to our welfare is broken. However, evidence for this dependence claim has not been forthcoming.
Indeed, two sorts of approach to the empirical evidence have been used to argue against psychological egoism.
First, Daniel Batson and colleagues found that increased empathy leads to increased helping behaviour. One hypothesis is altruistic: empathy causes a non-instrumental desire to help. There are many competing egoistic hypotheses. Empathy might cause an unpleasant experience that subjects believe they can stop by helping; or subjects might think failing to help in cases of high empathy is more likely to lead to punishment by others, or that helping here is more likely to be rewarded by others; or subjects might think this about self-administered punishment or reward. In an ingenious series of experiments, Batson compared the egoistic hypotheses, one by one, against the altruistic hypothesis. He found that the altruistic hypothesis always made superior predictions. Against the unpleasant experience hypothesis, Batson found that giving high-empathy subjects easy ways of stopping the experience other than by helping did not reduce helping. Against the punishment by others hypothesis, Batson found that letting high-empathy subjects believe that their behaviour would be secret did not reduce helping. Against the self-administered reward hypothesis, Batson found that the mood of high-empathy subjects depended on whether they believed that help was needed, whether or not they could do the helping, rather than on whether they helped (and so could self-reward). Against the self-administered punishment hypothesis, Batson found that making high-empathy subjects believe they would feel less guilt from not helping (by letting them believe that few others had volunteered to help) did not reduce helping.
One might quibble with some of the details. Perhaps subjects did not believe that the easy ways of stopping the painful experience Batson provided, such as leaving the viewing room, would stop it. (For an account of an experiment done in reply, favouring Batson, see Stich, Doris and Roedder 2010, as well as Batson 2011 135–145.) Perhaps a Batson-proof egoistic hypothesis could be offered: say that subjects believe that the only way of stopping the pain (or avoiding self-punishment) is by helping (though whether subjects have this belief might be tested for on its own). But on the whole, Batson’s experiments are very bad news for psychological egoism. (For further discussion of Batson, see May 2011a and Slote 2013.)
Second, Elliott Sober and David Sloan Wilson argue that evolutionary theory supports altruism. Parental care might be explained on egoistic grounds: a belief about the child’s distress causes the parent pain that the parent believes she can alleviate by helping, or the parent believes that she will be caused pain if she does not help. Parental care might also be explained on altruistic grounds: the parent has a non-instrumental desire that the child do well. Lastly, parental care might be explained by a combination of these mechanisms. Sober and Wilson argue that more reliable care would be provided by the altruistic or combination mechanisms. Given the importance of parental care, this is a reason for thinking that natural selection would have favoured one of these mechanisms. The egoistic mechanism is less reliable for several reasons: beliefs about the child’s distress may fail to cause the parent pain (even bodily injury does not always cause pain, so pain is unlikely to be always caused by beliefs about distress); the parent may fail to believe that helping will best reduce her pain; there may not be enough pain produced; the combination view has the advantage of an extra mechanism.
This argument has drawbacks. Natural selection does not always provide back-up mechanisms (I have but one liver). Natural selection sometimes has my desires caused by affect that is produced by a belief rather than directly by the belief (my desire to run away from danger is often caused by my fear, rather than by the mere belief that there is danger). And in these cases, as in the case of the imperfectly correlated pain and bodily injury, there seems usually to be enough affect. The altruistic hypothesis also has some of the same problems: for example, just as there might not be enough pain, the non-instrumental desire that the child do well might not be strong enough to defeat other desires. Indeed, without an estimate of how strong this desire is, there is no reason to think the egoistic hypothesis is less reliable. It may have more points at which it can go wrong, but produce more care than a direct but weak altruistic mechanism. (For many of these worries, and others, see Stich, Doris and Roedder 2010.)
Even if evolutionary arguments can be met, however, psychological egoism faces the problems noted earlier. In response, the psychological egoist might move to what Gregory Kavka (1986, 64–80) calls “predominant egoism:” we act unselfishly only rarely, and then typically where the sacrifice is small and the gain to others is large or where those benefiting are friends, family, or favorite causes. Predominant egoism is not troubled by the soldier counter-example, since it allows exceptions; it is not trivial; and it seems empirically plausible. (For other weakened positions, see LaFollette 1988 and Mercer 2001.)
2. Ethical Egoism
Ethical egoism claims that I morally ought to perform some action if and only if, and because, performing that action maximizes my self-interest. (There are possibilities other than maximization. One might, for example, claim that one ought to achieve a certain level of welfare, but that there is no requirement to achieve more. Ethical egoism might also apply to things other than acts, such as rules or character traits. Since these variants are uncommon, and the arguments for and against them are largely the same as those concerning the standard version, I set them aside.)
One issue concerns how much ethical egoism differs in content from standard moral theories. It might appear that it differs a great deal. After all, moral theories such as Kantianism, utilitarianism, and common-sense morality require that an agent give weight to the interests of others. They sometimes require uncompensated sacrifices, particularly when the loss to the agent is small and the gain to others is large. (Say the cost to me of saving a drowning person is getting my shirtsleeve wet.) Ethical egoists can reply, however, that egoism generates many of the same duties to others. The argument runs as follows. Each person needs the cooperation of others to obtain goods such as defense or friendship. If I act as if I give no weight to others, others will not cooperate with me. If, say, I break my promises whenever it is in my direct self-interest to do so, others will not accept my promises, and may even attack me. I do best, then, by acting as if others have weight (provided they act as if I have weight in return).
It is unlikely that this argument proves that ethical egoism generates all of the standard duties to others. For the argument depends on the ability of others to cooperate with me or attack me should I fail to cooperate. In dealings with others who lack these abilities, the egoist has no reason to cooperate. The duties to others found in standard moral theories are not conditional in this way. I do not, for example, escape a duty to save a drowning person, when I can easily do so, just because the drowning person (or anyone watching) happens never to be able to offer fruitful cooperation or retaliation.
The divergence between ethical egoism and standard moral theories appears in other ways.
First, the ethical egoist will rank as most important duties that bring her the highest payoff. Standard moral theories determine importance at least in part by considering the payoff to those helped. What brings the highest payoff to me is not necessarily what brings the highest payoff to those helped. I might, for example, profit more from helping the local opera society refurbish its hall than I would from giving to famine relief in Africa, but standard moral theories would rank famine relief as more important than opera hall improvements.
Second, the cooperation argument cannot be extended to justify extremely large sacrifices, such as the soldier falling on the grenade, that standard moral theories rank either as most important or supererogatory. The cooperation argument depends on a short-term loss (such as keeping a promise that it is inconvenient to keep) being recompensed by a long-term gain (such as being trusted in future promises). Where the immediate loss is one’s life (or irreplaceable features such as one’s sight), there is no long-term gain, and so no egoist argument for the sacrifice.
An ethical egoist might reply by taking the cooperation argument further. Perhaps I cannot get the benefits of cooperation without converting to some non-egoist moral theory. That is, it is not enough that I act as if others have weight; I must really give them weight. I could still count as an egoist, in the sense that I have adopted the non-egoist theory on egoist grounds.
One problem is that it seems unlikely that I can get the benefits of cooperation only by conversion. Provided I act as if others have weight for long enough, others will take me as giving them weight, and so cooperate, whether I really give them weight or not. In many situations, others will neither have the ability to see my true motivation nor care about it.
Another problem is that conversion can be costly. I might be required by my non-egoist morality to make a sacrifice for which I cannot be compensated (or pass up a gain so large that passing it up will not be compensated for). Since I have converted from egoism, I can no longer reject making the sacrifice or passing up the gain on the ground that it will not pay. It is safer, and seemingly feasible, to remain an egoist while cooperating in most cases. If so, ethical egoism and standard moralities will diverge in some cases. (For discussion of the cooperation argument, see Frank 1988; Gauthier 1986 ch. 6; Kavka 1984 and 1986 Part II; Sidgwick 1907 II.V.)
There is another way to try to show that ethical egoism and standard moral theories do not differ much. One might hold one particular objective theory of self-interest, according to which my welfare lies in possessing the virtues required by standard moral theories. This requires an argument to show that this particular objective theory gives the right account of self-interest. It also faces a worry for any objective theory: objective theories seem implausible as accounts of welfare. If, say, all my preferences favor my ignoring the plight of others, and these preferences do not rest on false beliefs about issues such as the likelihood of receiving help, it seems implausible (and objectionably paternalistic) to claim that “really” my welfare lies in helping others. I may have a duty to help others, and the world might be better if I helped others, but it does not follow that I am better off by helping others. (For a more optimistic verdict on this strategy, noting its roots in Socrates, Plato, Aristotle, the Stoics, and the British Idealists, see Brink 1997 and 2003.)
Of course the divergence between ethical egoism and standard moral theories need not bother an ethical egoist. An ethical egoist sees egoism as superior to other moral theories. Whether it is superior depends on the strength of the arguments for it. Two arguments are popular.
First, one might argue for a moral theory, as one argues for a scientific theory, by showing that it best fits the evidence. In the case of moral theories, the evidence is usually taken to be our most confident common-sense moral judgments. Egoism fits many of these, such as the requirements of cooperation in ordinary cases. It fits some judgments better than utilitarianism does. For example, it allows one to keep some good, such as a job, for oneself, even if giving the good to someone else would help him slightly more, and it captures the intuition that I need not let others exploit me. The problem is that, as the discussion of the cooperation argument shows, it also fails to fit some of the confident moral judgments we make.
Second, one might argue for a moral theory by showing that it is dictated by non-moral considerations, in particular by facts about motivation. It is commonly held that moral judgments must be practical, or capable of motivating those who make them. If psychological egoism were true, this would restrict moral judgments to those made by egoism. Other moral judgments would be excluded since it would be impossible to motivate anyone to follow them.
One problem with this argument is that psychological egoism seems false. Replacing psychological with predominant egoism loses the key claim that it is impossible to motivate anyone to make an uncompensated sacrifice.
The ethical egoist might reply that, if predominant egoism is true, ethical egoism may require less deviation from our ordinary actions than any standard moral theory. But fit with motivation is hardly decisive; any normative theory, including ethical egoism, is intended to guide and criticize our choices, rather than simply endorse whatever we do. When I make an imprudent choice, this does not count against ethical egoism, and in favor of a theory recommending imprudence.
The argument has other problems. One could deny that morality must be practical in the required sense. Perhaps morality need not be practical at all: we do not always withdraw moral judgments when we learn that the agent could not be motivated to follow them. Or perhaps moral judgments must be capable of motivating not just anyone, but only idealized versions of ourselves, free from (say) irrationality. In this case, it is insufficient to describe how we are motivated; what is relevant is a description of how we would be motivated were we rational.
Finally, if I do not believe that some action is ultimately in my self-interest, it follows from psychological egoism that I cannot aim to do it. But say I am wrong: the action is in my self-interest. Ethical egoism then says that it is right for me to do something I cannot aim to do. It violates practicality just as any other moral theory does.
So far a number of arguments for ethical egoism have been considered. There are a number of standard arguments against it.
G. E. Moore argued that ethical egoism is self-contradictory. If I am an egoist, I hold that I ought to maximize my good. I deny that others ought to maximize my good (they should maximize their own goods). But to say that x is “my good” is just to say that my possessing x is good. (I cannot possess the goodness.) If my possession of x is good, then I must hold that others ought to maximize my possession of it. I both deny and am committed to affirming that others ought to maximize my good. (Sometimes Moore suggests instead that “my good” be glossed as “x is good and x is mine.” This does not yield the contradiction above, since it does not claim that my possession of x is good. But it yields a different contradiction: if x is good, everyone ought to maximize it wherever it appears; egoists hold that I ought to maximize x only when it appears in me.)
In reply, C. D. Broad rightly noted that this does not show that egoism is self-contradictory, since it is not part of egoism to hold that what is good ought to be pursued by everyone (Broad 1942). But that reply does not defend egoism from the charge of falsity. To do so, one might understand “my good” not as composed from what Moore calls “good absolutely,” but as being a sui generis concept, good-for-me (Mackie 1976, Smith 2003), or as analyzed in terms of what I, from my point of view, ought to desire. In neither of these cases does it follow from “my possession of x is good-for-me” that others ought to maximize what is good-for-me. One might even argue that claims about “good absolutely” do not justify claims about what one ought to do, without in addition there being a special relation between the agent and the proposed change. If so, it does not follow simply from my possession of x being good that others ought to do anything (Prichard 2002 217).
Moore also suggests that the reason for me to pursue my good is the goodness of the thing I obtain. If what I obtain is good, then there is reason for everyone to pursue it, not just in me, but anywhere. Again, moving to good-for-me avoids this consequence. But something close to this argument is plausible, especially for some bad things. One might argue that it is the way my pain feels — its badness — and not any connection between me and the pain that gives me reason to alleviate it. If so, I have reason to alleviate the pain of others (Nagel 1986, Rachels 2002). (This argument can be directed against rational egoism as well.)
A second argument against ethical egoism was made by H. A. Prichard. He argues that self-interest is the wrong sort of reason. I do not, for example, think the reason I have a duty to help a drowning child is that helping benefits me (Prichard 2002 1, 9, 26, 29, 30, 122, 123, 171, 188). Similarly, Prichard chastises Sidgwick for taking seriously the view that there is “a duty...to do those acts which we think will lead to our happiness” (Prichard 2002 135).
This is convincing when “duty” means “moral duty.” It is less convincing when, as Prichard also thinks, the issue is simply what one ought to do. He takes there to be only one sense of “ought,” which he treats as “morally ought.” Any other “ought” is treated as really making the non-normative claim that a certain means is efficient for attaining a certain end. But ethical egoism can be seen as making categorical ought-claims. And the historical popularity of ethical egoism, which Prichard so often notes, indicates that self-interest is not obviously irrelevant to what one ought to do (in a not specifically moral sense).
One might also object to Prichard-style arguments that (a) they are question-begging, since egoists will hardly agree that my reason for helping is something other than the benefit to me, and (b) given disagreement over this claim about my reason, the appropriate response is to suspend judgment about it. Alison Hills, in parts II and III of Hills 2010 (directed at rational egoism), replies to (a) that moralists can assure themselves by giving arguments that start from premisses like “I have a reason to help regardless of whether doing so contributes to my self-interest,” provided this premiss is not inferred from the falsity of rational egoism — perhaps it is self-evident. In reply to (b), she argues that disagreement over the premiss does not require moralists to suspend judgment about it, although disagreement over an egoistic premiss like “I have reason to help only because doing so benefits me” does require egoists to suspend judgment. The difference is that rational egoists aim at knowledge, and for putative knowledge, in cases of disagreement between epistemic peers, suspension of belief is required. Moralists aim primarily not at knowledge but at the ability to draw, on their own, true moral conclusions from the evidence. Since aiming at this ability requires not giving weight to the conclusions of others, suspension of belief in cases of disagreement is not required of them.
Obviously, much here depends on the claim about the aim of moralists. One might object that moralists care much more about getting true moral conclusions than about arriving at them on their own. If I could guarantee that I do the right act by relying on a Moral Answers Machine (and not otherwise), I ought to do so. In addition, since moralists do want true moral conclusions, and peer disagreement is relevant to pursuing truth, Hills’ moralists both need and cannot (by one means) pursue truth.
A third argument, like Moore’s, claims that ethical egoism is inconsistent in various ways. Say ethical egoism recommends that A and B both go to a certain hockey game, since going to the game is in the self-interest of each. Unfortunately, only one seat remains. Ethical egoism, then, recommends an impossible state of affairs. Or say that I am A and an ethical egoist. I both claim that B ought to go to the game, since that is in her self-interest, and I do not want B to go to the game, since B’s going to the game is against my self-interest.
Against the first inconsistency charge, the ethical egoist can reply that ethical egoism provides no neutral ranking of states of affairs. It recommends to A that A go to the game, and to B that B go to the game, but is silent on the value of A and B both attending the game.
Against the second inconsistency charge, the ethical egoist can claim that she morally recommends that B go to the game, although she desires that B not go. This is no more odd than claiming that my opponent in a game would be wise to adopt a particular strategy, while desiring that he not do so. True, the ethical egoist is unlikely to recommend ethical egoism to others, to blame others for violations of what ethical egoism requires, to justify herself to others on the basis of ethical egoism, or to express moral attitudes such as forgiveness and resentment. These publicity worries may disqualify ethical egoism as a moral theory, but do not show inconsistency.
A fourth argument against ethical egoism is just that: ethical egoism does not count as a moral theory. One might set various constraints on a theory’s being a moral theory. Many of these constraints are met by ethical egoism — the formal constraints, for example, that moral claims must be prescriptive and universalizable. Ethical egoism issues prescriptions — “do what maximizes your self-interest” — and it issues the same prescriptions for people in relevantly similar situations. But other constraints are problematic for ethical egoism: perhaps a moral theory must sometimes require uncompensated sacrifices; or perhaps it must supply a single, neutral ranking of actions that each agent must follow in cases where interests conflict; or perhaps it must respect principles such as “that I ought to do x is a consideration in favor of others not preventing me from doing x;” or perhaps it must be able to be made public in the way, just noted, that ethical egoism cannot. (For sample discussions of these objections, see Baier 1958 189–191; Campbell 1972; Frankena 1973 18–20; Kalin 1970.)
The issue of what makes for a moral theory is contentious. An ethical egoist could challenge whatever constraint is deployed against her. But a neater reply is to move to rational egoism, which makes claims about what one has reason to do, ignoring the topic of what is morally right. This gets at what ethical egoists intend, while skirting the issue of constraints on moral theories. After all, few if any ethical egoists think of egoism as giving the correct content of morality, while also thinking that what they have most reason to do is determined by some non-egoist consideration. One could then, if one wished, argue for ethical egoism from rational egoism and the plausible claim that the best moral theory must tell me what I have most reason to do.
3. Rational Egoism
Rational egoism claims that I ought to perform some action if and only if, and because, performing that action maximizes my self-interest. (As with ethical egoism, there are variants which drop maximization or evaluate rules or character traits rather than actions. There are also variants which make the maximization of self-interest necessary but not sufficient, or sufficient but not necessary, for an action to be the action I ought to perform. Again, I set these issues aside.) Rational egoism makes claims about what I ought, or have reason, to do, without restricting the “ought” or “reason” to a moral “ought” or “reason.”
Like ethical egoism, rational egoism needs arguments to support it. One might cite our most confident judgments about rational action and claim that rational egoism best fits these. The problem is that our most confident judgments about rational action seem to be captured by a different, extremely popular theory — the instrumental theory of rationality. According to the instrumental theory, I ought to perform some action if and only if, and because, performing that action maximizes the satisfaction of my preferences. Since psychological egoism seems false, it may be rational for me to make an uncompensated sacrifice for the sake of others, for this may be what, on balance, best satisfies my (strong, non-self-interested) preferences. This conflict with the instrumental theory is a major problem for rational egoism.
The rational egoist might reply that the instrumental theory is equally a problem for any standard moral theory that claims to give an account of what one ought rationally, or all things considered, to do. If, for example, a utilitarian claims that I have most reason to give to charity, since that maximizes the general happiness, I could object that giving to charity cannot be rational given my particular preferences, which are for things other than the general happiness.
A different problem for rational egoism is that it appears arbitrary. Suppose I claim that I ought to maximize the welfare of blue-eyed people, but not of other people. Unless I can explain why blue-eyed people are to be preferred, my claim looks arbitrary, in the sense that I have given no reason for the different treatments. As a rational egoist, I claim that I ought to maximize the welfare of one person (myself). Unless I can explain why I should be preferred, my claim looks equally arbitrary.
One reply is to argue that non-arbitrary distinctions can be made by one’s preferences. Say I like anchovies and hate broccoli. This makes my decision to buy anchovies rather than broccoli non-arbitrary. Similarly, my preference for my own welfare makes my concentration on my own welfare non-arbitrary.
There are two problems for this reply.
First, we do not always take preferences to establish non-arbitrary distinctions. If I defend favoring blue-eyed people simply by noting that I like blue-eyed people, without any justification for my liking, this seems unsatisfactory. The rational egoist must argue that hers is a case where preferences are decisive.
Second, if psychological egoism is false, I might lack a preference for my own welfare. It would follow that for me, a distinction between my welfare and that of others would be arbitrary, and the rational egoist claim that each ought to maximize his own welfare would be unjustified when applied to me. The proposal that preferences establish non-arbitrary distinctions supports the instrumental theory better than rational egoism.
Another reply to the arbitrariness worry is to claim that certain distinctions just are non-arbitrary. Which distinctions these are is revealed by looking at whether we ask for justifications of the relevance of the distinction. In the case of my maximizing of the welfare of the blue-eyed, we do ask for a justification; we do not take “because they’re blue-eyed” as an adequate defense of a reason to give to the blue-eyed. In the case of my maximizing my own welfare, however, “because it will make me better off” may seem a reasonable justification; we do not quickly ask “why does that matter?”
In a much-quoted passage, Sidgwick claimed that rational egoism is not arbitrary: “It would be contrary to Common Sense to deny that the distinction between any one individual and any other is real and fundamental, and that consequently ‘I’ am concerned with the quality of my existence as an individual in a sense, fundamentally important, in which I am not concerned with the quality of the existence of other individuals: and this being so, I do not see how it can be proved that this distinction is not to be taken as fundamental in determining the ultimate end of rational action for an individual” (Sidgwick 1907, 498). This can be interpreted in various ways (Shaver 1999, 82–98; Phillips 2011, ch. 5).
On the most natural interpretation, Sidgwick is noting various non-normative facts. I have a distinct history, memories, and perhaps special access to my mental contents. But it is not clear how these facts support the normative conclusion Sidgwick draws. Utilitarians, for example, agree about these facts. (Some of the facts may also not give the sharp distinction Sidgwick wants. I may usually know more about my pain than yours, but this difference seems a matter of degree.)
Sidgwick might instead be claiming that attacks on rational egoism from certain views of personal identity (as in Parfit, discussed below) fail because they rest on a false view of personal identity. But this would only defend rational egoism against one attack. Since there are other attacks, it would not follow that the distinction between people matters.
Finally, Sidgwick might be claiming that my point of view, like an impartial point of view, is non-arbitrary. But there are other points of view, such as that of my species, family or country. Sidgwick finds them arbitrary. It is hard to see why my point of view, and an impartial point of view, are non-arbitrary, while anything in between is arbitrary. For example, in favour of my point of view, Sidgwick could note that I am an individual rather than a hive-member. But I am a member of various groups as well. And if my being an individual is important, this cuts against the importance of taking up an impartial point of view just as it cuts against the importance of taking up the point of view of various groups. Similarly, if the impartial point of view is defended as non-arbitrary because it makes no distinctions, both the point of view of various groups and my individual point of view are suspect.
Debate over rational egoism was revitalized by Parfit 1984 pts. II-III. Parfit gives two main arguments against rational egoism. Both focus on the rational egoist’s attitude toward the future: the rational egoist holds that the time at which some good comes is by itself irrelevant, so that, for example, I ought to sacrifice a small present gain for a larger future gain.
First, one could challenge rational egoism, not only with the instrumental theory, but also with the “present-aim” theory of rationality. According to the present-aim theory, I have most reason to do what maximizes the satisfaction of my present desires. Even if all of these desires are self-regarding, the present-aim theory need not coincide with rational egoism. Suppose I know that in the future I will desire a good pension, but I do not now desire a good pension for myself in the future; I have different self-regarding desires. Suppose also that, looking back from the end of my life, I will have maximized my welfare by contributing now to the pension. Rational egoism requires that I contribute now. The present-aim theory does not. It claims that my reasons are relative not only to who has a desire — me rather than someone else — but also to when the desire is held — now rather than in the past or future. The obvious justification an egoist could offer for not caring about time — that one should care only about the amount of good produced — is suicidal, since that should lead one not to care about who receives the good. One reason the present-aim theory is important is that it shows there is a coherent, more minimal alternative to rational egoism. The rational egoist cannot argue that egoism is the most minimal theory, and that standard moral theories, by requiring more of people, require special, additional justification. (For a very different argument to show again that an alternative to morality is less minimal than expected — directed more at the instrumental theory than rational egoism — see Korsgaard 2005.)
Second, rational egoism might be challenged by some views of personal identity. Say half of my brain will be transplanted to another body A. My old body will be destroyed. A will have my memories, traits, and goals. It seems reasonable for me to care specially about A, and indeed to say that A is identical to me. Now say half of my brain will go in B and half in C. Again B and C will have my memories, traits, and goals. It seems reasonable for me to care specially about B and C. But B and C cannot be identical to me, since they are not identical to one another (they go on to live different lives). So the ground of my care is not identity, but rather the psychological connections through memories, etc. Even in the case of A, what grounds my care are these connections, not identity: my relation to A is the same as my relation to B (or C), so what grounds my care about A grounds my care about B (or C) — and that cannot be identity. (To make the point in a different way — I would not take steps to ensure that only one of B and C come about.) If so, I need not care specially about some of my future selves, since they will not have these connections to me. And I do have reason to care specially about other people who bear these connections to me now.
One worry is that psychological continuity might substitute for identity. Say F1 and F2 are psychologically connected because (for example) F2 has a memory of F1’s experiences. Suppose that F3 has a memory of F2’s experiences but no memory of F1. F1 and F3 are psychologically continuous, though not psychologically connected. (Parfit’s view is that psychological connection and continuity both ground special care, if special care is grounded at all.) In the cases above, A, B and C are continuous with me. An egoist might claim that continuity alone matters for special care; that fits the cases. If so, I do have reason to care specially about all of the future selves I am continuous with, and do not have this reason to care specially about others with whom I am not continuous. (For this and other worries about Parfit, see Brink 1992, Johnston 1997, Hills 2010 111–116.)
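Parfit’s distinction between connectedness and continuity has a simple formal structure, and a schematic sketch may help to fix it. The following is not Parfit’s own notation, and it ignores his further requirement that the links in a chain be “strong” connections; it simply treats direct psychological connection as a relation C between person-stages and defines continuity as the holding of a chain of such connections.

```latex
% A rough schematic, not Parfit's own notation. C(x,y): person-stages x and y
% are directly psychologically connected (e.g., y remembers x's experiences).
% Continuity is then the holding of a chain of such connections (setting aside
% Parfit's requirement that the connections be "strong").
\[
\mathrm{Cont}(x,y) \;\leftrightarrow\;
\exists n \, \exists z_0,\dots,z_n \,
\bigl( z_0 = x \;\wedge\; z_n = y \;\wedge\; \forall i < n \; C(z_i, z_{i+1}) \bigr)
\]
% In the example above, C(F1,F2) and C(F2,F3) hold while C(F1,F3) does not,
% yet Cont(F1,F3) holds: connectedness can lapse over time while continuity
% is preserved, which is what the egoist's appeal to continuity alone exploits.
```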
Parfit could reply that continuity might not suffice for special care. It is not clear that F1 has reason to care specially about F3 — F3 might seem a stranger, perhaps even an unlikeable one. When young, some worry about becoming someone they would not now like. They see no reason for special care for this future person. This worry makes sense, but if continuity were sufficient for special care, it would not. If so, perhaps both continuity and connection, or perhaps continuity and admirability, are needed. This would let Parfit keep the conclusion that I need not care specially for some of my future selves, but would not justify the conclusion that I have reason to care specially about other people who are merely connected to me now (or are merely admirable).
A worry is that some do care specially about merely continuous future selves. With opposed intuitions about when special care is due, the tactic of arguing from intuitions about special care to the grounds of this care is indecisive.
There is another recent argument against rational egoism (Rachels and Alter 2005, Tersman 2008, and especially de Lazari-Radek and Singer 2014). (1) Believing that rational egoism is true increases my reproductive fitness, whether or not rational egoism is true. (2) Therefore my belief that rational egoism is true (or, better, that rational egoism appears to me true upon reflection) does not help to justify rational egoism, since I would have that belief whether or not rational egoism is true. (3) For some other normative beliefs (such as belief in utilitarianism), having the belief does not increase reproductive fitness. (4) Therefore my belief that (say) utilitarianism is true can help justify utilitarianism. (Without (3) and (4), there is no argument against rational egoism in particular.)
Here I put aside general objections to evolutionary debunking arguments (see, for example, Shafer-Landau 2012).
One worry is that what best increases reproductive fitness is acting as a kin altruist rather than as a rational egoist (Crisp 2012, Other Internet Resources). Presumably, then, it is believing that I ought to act as a kin altruist, rather than as a rational egoist, that best increases my reproductive fitness. (If there is a tie between what increases reproductive fitness and belief, and believing that rational egoism is true is best for reproductive fitness, one would expect many to believe that rational egoism is true. But very few do, while many endorse Broad’s “self-referential altruism” (Broad 1971b).) De Lazari-Radek and Singer reply that the recommendations of rational egoism are very close to those of kin altruism, and much closer to those of kin altruism than are the recommendations of utilitarianism (2014 194). But rational egoism and kin altruism do make opposed recommendations. For example, kin altruism might recommend that I sacrifice myself for my family, whether I care about them or not, whereas rational egoism would recommend sacrifice only if my welfare were to be higher were I to sacrifice and die rather than not sacrifice and live. It is also hard to think of a plausible argument which has kin altruism as a premiss and rational egoism as the conclusion, so doubts about kin altruism do not seem to undercut arguments for rational egoism. Nor is it clear how noting a difference in the closeness of recommendations justifies concluding that rational egoism is debunked and utilitarianism not debunked.
Another worry is that if my belief that I have reason to care about my own well-being is unjustified, an argument that starts with that reason as a premiss, and then adds that the focus on my own well-being is arbitrary and so should be broadened to include everyone, is undercut. One might reply (with de Lazari-Radek and Singer 2014 191) that there are other ways of arriving at the conclusion that I have reason to care about the well-being of everyone. Perhaps something like utilitarianism is justified as self-evident rather than inferred from some other reasons. The evolutionary argument targets conclusions that can be reached only by appeal to a belief whose support can be undercut by noting that we would have the belief whether or not it is true. It is then open to the rational egoist to say that there is some other way of arriving at rational egoism. Perhaps this is unpromising, since the obvious way to justify rational egoism, by taking it to be self-evident, is undercut by (1) and (2). However, (i) if believing that one ought to act as a kin altruist rather than as a rational egoist is what best increases reproductive fitness, rational egoism is, like utilitarianism, not undercut by (1) and (2). (ii) A component of utilitarianism (and any plausible theory), the belief that pain is bad, seems to be a belief that best increases reproductive fitness whether or not it is true (see Kahane 2011 and 2014). Even if nothing is good or bad, believing that pain is bad might increase my motivation to avoid pain and so lead me to survive longer.
A further worry is that it is not clear that having the belief best increases reproductive fitness. De Lazari-Radek and Singer argue, in reply to the objection that their argument takes away the justification for believing that pain is bad, that there is no advantage to believing that pain is bad; I am sufficiently motivated to avoid pain without any such belief (de Lazari-Radek and Singer 2014 268–269; for the general point, see Parfit 2011 v. 2 527–30). The same seems to go for rational egoism: I am sufficiently motivated to act egoistically without any belief in the truth of rational egoism.
4. Conclusion
Prospects for psychological egoism are dim. Even if some version escapes recent empirical arguments, there seems little reason, once the traditional philosophical confusions have been noted, for thinking it is true. At best it is a logical possibility, like some forms of scepticism.
Ethical egoists do best by defending rational egoism instead.
Rational egoism faces objections from arbitrariness, Nagel, Parfit, and evolutionary debunking. These worries are not decisive. Given this, and given the historical popularity of rational egoism, one might conclude that it must be taken seriously. But there is at least reason to doubt the historical record. Some philosophers stressed the connection between moral action and self-interest because they were concerned with motivation. It does not follow that self-interest is for them a normative standard. And many philosophers may have espoused rational egoism while thinking that God ensured that acting morally maximized one’s self-interest. (Some were keen to stress that virtue must pay in order to give God a role.) Once this belief is dropped, it is not so clear what they would have said (Shaver 1999 ch. 4).
Bibliography
Psychological Egoism
- Batson, C. D., 1991, The Altruism Question, Hillsdale, N. J.: Lawrence Erlbaum Associates, part III.
- Batson, C. D., 2011, Altruism in Humans, Oxford: Oxford University Press, part II.
- Broad, C. D., 1971a, “Egoism as a Theory of Human Motives,” in Broad, Broad’s Critical Essays in Moral Philosophy, London: George Allen and Unwin.
- Brunero, J. S., 2002, “Evolution, Altruism and ‘Internal Reward’ Explanations,” Philosophical Forum, 33: 413–24.
- Butler, J., 1900, Fifteen Sermons Preached at the Rolls Chapel, in The Works of Bishop Butler, ed. J. H. Bernard, London: Macmillan, Sermons I and XI.
- Feinberg, J., 1978, “Psychological Egoism,” in Feinberg, Reason and Responsibility, fourth edition (and other editions), Belmont: Wadsworth.
- Hume, D., 1975, An Enquiry Concerning the Principles of Morals, in Enquiries, ed. L. A. Selby-Bigge and P. H. Nidditch, Oxford: Oxford University Press, Appendix II.
- Kavka, G., 1986, Hobbesian Moral and Political Theory, Princeton: Princeton University Press, 35–44, 51–64.
- LaFollette, H., 1988, “The Truth in Psychological Egoism,” in J. Feinberg, Reason and Responsibility, seventh edition, Belmont: Wadsworth.
- May, J., 2011a, “Relational Desires and Empirical Evidence against Psychological Egoism,” European Journal of Philosophy, 19: 39–58.
- May, J., 2011b, “Egoism, Empathy, and Self-Other Merging,” Southern Journal of Philosophy (Spindel Supplement), 49: 25–39.
- Mercer, M., 2001, “In Defense of Weak Psychological Egoism,” Erkenntnis, 55: 217–37.
- Rosas, A., 2002, “Psychological and Evolutionary Evidence for Altruism,” Biology and Philosophy, 17: 93–107.
- Schulz, A., 2011, “Sober and Wilson’s Evolutionary Arguments for Psychological Altruism: A Reassessment,” Biology and Philosophy, 26: 251–60.
- Sidgwick, H., 1907, The Methods of Ethics, Indianapolis: Hackett, seventh edition, 1981, I.IV.
- Slote, M. A., 1964, “An Empirical Basis for Psychological Egoism,” Journal of Philosophy, 61: 530–537.
- Slote, M. A., 2013, “Egoism and Emotion,” Philosophia, 41: 313–35.
- Sober, E., and D. S. Wilson, 1998, Unto Others, Cambridge, MA: Harvard University Press, ch. 10.
- Stich, S., J. M. Doris, and E. Roedder, 2010, “Altruism,” in The Moral Psychology Handbook, ed. Doris, New York: Oxford University Press, 147–205.
Ethical Egoism
- Baier, K., 1958, The Moral Point of View, Ithaca: Cornell.
- Brink, D., 1997, “Self-love and Altruism,” Social Philosophy and Policy, 14: 122–157.
- Brink, D., 2003, Perfectionism and the Common Good: Themes in the Philosophy of T. H. Green, Oxford: Oxford University Press.
- Broad, C. D., 1942, “Certain Features of Moore’s Ethical Doctrines,” in The Philosophy of G. E. Moore, ed. P. Schilpp, New York: Tudor, 41–67.
- Burgess-Jackson, K., 2013, “Taking Egoism Seriously,” Ethical Theory and Moral Practice, 16: 529–42.
- Campbell, R., 1972, “A Short Refutation of Ethical Egoism,” Canadian Journal of Philosophy, 2: 249–54.
- Frank, R. H., 1988, Passions Within Reason, New York: Norton.
- Frankena, W. K., 1973, Ethics, Englewood Cliffs: Prentice-Hall.
- Gauthier, D., 1986, Morals By Agreement, Oxford: Clarendon.
- Hobbes, T., 1968, Leviathan, ed. C. B. Macpherson, Harmondsworth: Penguin, chs. 14–15.
- Hurka, T., 2010, “Underivative Duty: Prichard on Moral Obligation,” Social Philosophy and Policy, 27 (2): 111–134.
- Kalin, J., 1970, “In Defense of Egoism,” in D. Gauthier, Morality and Rational Self-Interest, Englewood Cliffs: Prentice-Hall.
- Kavka, G., 1984, “The Reconciliation Project,” in Morality, Reason, and Truth, ed. D. Copp and D. Zimmerman, Totowa: Rowman and Allanheld.
- Kavka, G., 1986, Hobbesian Moral and Political Theory, Part II, Princeton: Princeton University Press.
- Mackie, J. L., 1976, “Sidgwick’s Pessimism,” Philosophical Quarterly, 26: 317–27.
- McConnell, T. C. 1978, “The Argument from Psychological Egoism to Ethical Egoism,” Australasian Journal of Philosophy, 56: 41–47.
- Moore, G. E., 1903, Principia Ethica, Cambridge: Cambridge University Press, sec. 59.
- Nagel, T., 1986, The View From Nowhere, New York: Oxford University Press, Ch. 8.
- Prichard, H. A., 2002, Moral Writings, Oxford: Oxford University Press.
- Rachels, S., 2002, “Nagelian Arguments against Egoism,” Australasian Journal of Philosophy, 80: 191–208.
- Sidgwick, H., 1907, The Methods of Ethics, Indianapolis: Hackett, seventh edition, 1981, II.V and concluding chapter.
- Smith, M., 2003, “Neutral and Relative Value after Moore,” Ethics, 113: 576–98.
Rational Egoism
- Brink, D. 1992, “Sidgwick and the Rationale for Rational Egoism,” in Essays on Henry Sidgwick, ed. B. Schultz, Cambridge: Cambridge University Press.
- Broad, C. D., 1971b, “Self and Others,” in Broad, Broad’s Critical Essays in Moral Philosophy, London: George Allen and Unwin.
- Hills, A., 2010, The Beloved Self, Oxford: Oxford University Press.
- Johnston, M., 1997, “Human Concerns Without Superlative Selves,” in Reading Parfit, ed. J. Dancy, Oxford: Blackwell, 149–179.
- Kagan, S., 1986, “The Present-Aim Theory of Rationality,” Ethics, 96: 746–759.
- Kahane, G., 2011, “Evolutionary Debunking Arguments,” Noûs, 45: 103–25.
- Kahane, G., 2014, “Evolution and Impartiality,” Ethics, 124: 327–41.
- Korsgaard, C., 2005, “The Myth of Egoism,” in Practical Conflicts: New Philosophical Essays, ed. P. Baumann and M. Betzler, Cambridge: Cambridge University Press, 59–91.
- Lazari-Radek, K. de, and Singer, P., 2014, The Point of View of the Universe: Sidgwick and Contemporary Ethics, Oxford: Oxford University Press, chapter 7.
- Parfit, D., 1984, Reasons and Persons, Oxford: Oxford University Press.
- Parfit, D., 1986, Reply to Kagan, Ethics, 96: 843–846, 868–869.
- Parfit, D., 2011, On What Matters, Oxford: Oxford University Press.
- Phillips, D., 2011, Sidgwickian Ethics, Oxford: Oxford University Press.
- Rachels, S. and Alter, T., 2005, “Nothing Matters in Survival,” Journal of Ethics, 9: 311–330.
- Shafer-Landau, R., 2012, “Evolutionary Debunking, Moral Realism, and Moral Knowledge,” Journal of Ethics and Social Philosophy, 7.1. doi:10.26556/jesp.v7i1.68
- Shaver, R., 1999, Rational Egoism, Cambridge: Cambridge University Press.
- Shaver, R., 2011, “Review of Hills, The Beloved Self,” Philosophical Quarterly, 61: 658–60.
- Sidgwick, H., 1907, The Methods of Ethics, Indianapolis: Hackett, seventh edition, 1981, II.I, IV.II, and concluding chapter.
- Sterba, J., 2013, From Rationality to Equality, Oxford: Oxford University Press, ch. 3.
- Tersman, F., 2008, “The Reliability of Moral Intuitions: A Challenge from Neuroscience,” Australasian Journal of Philosophy, 86: 389–405.