Notes to Moral Decision-Making Under Uncertainty

1. Though often referred to as “Jackson cases”, cases with the relevant structure are also given by Regan (1980: 265) and Parfit (2011: 159).

2. Though many philosophers have found this idea appealing, the concept of a moral principle being “usable” or “action-guiding” is notoriously difficult to explicate. For an important recent discussion, see Holly M. Smith (2018).

3. More precisely, some philosophers hold that the normative “ought”, practical “ought”, or moral “ought” is univocal. The word “ought” may have entirely different meanings—for instance, the “ought” of natural expectation in a sentence like “The Moon ought to rise in the next ten minutes”—that are irrelevant for our purposes.

4. Although it is natural to frame these debates in terms of senses or meanings of English words like “ought”, the important question for ethical purposes is not about the meanings of these words in ordinary language. Rather, the question is about the existence and importance of fact-relative, belief-relative, and evidence-relative normative standards or properties, for which a philosopher might stipulatively use terms like “ought” or “rightness” (without too much risk of misleading, insofar as the stipulated meanings depart from ordinary meanings).

5. For defenses of the “objectivist” view that privileges fact-relative moral concepts, see Carlson (1995), Graham (2010), and Driver (2012). For the “subjectivist” view that privileges belief-relative moral concepts, see Hudson (1989). For the “prospectivist” view that privileges the evidence-relative moral concepts, see Zimmerman (2008) and Mason (2013). Jackson (1991) and Howard-Snyder (1997) both reject objectivism while remaining non-committal between subjectivism and prospectivism. For “divider” views that distinguish objective and non-objective moral concepts without privileging one over the other, see Oddie & Menzies (1992; though their view is borderline “objectivist”), Holly M. Smith (2010), and Parfit (2011). Finally, see Kolodny & MacFarlane (2010) for an influential argument against the use of Jackson-style cases to motivate the “divider” position. For further citations to the very extensive literature on these questions, see fns. 1–3 of Mason (2013) and fn. 2 of Sepielli (2018b).

6. The use of the word “utility” here should be carefully distinguished from other uses in ethics; in particular, it has nothing directly to do with the “utility” in “utilitarianism”. Further interpretive comments will follow.

7. For an introduction to these theorems, see section 2.2 of the entry on expected utility theory.

8. The forms of contraction and expansion consistency stated here are sometimes called “Property \(\alpha\)” and “Property \(\beta\)”, in reference to Sen (1971). As noted above, expected utility theory is usually presented as a theory about preferences, and thus deals with a binary relation \(R(A,B)\) interpreted as “\(A\) is weakly preferred to \(B\)”. In the context of permissibility, we can reinterpret \(R(A,B)\) as “it is sometimes permissible to choose \(A\) when \(B\) is available”. Given suitable background assumptions, the force of Properties \(\alpha\) and \(\beta\) is that this \(R\) is a complete, transitive ordering, and that an option is permissible if and only if it is maximal with respect to \(R\) (i.e., \(A\) is permissible if and only if \(R(A,B)\) for every other available option \(B\)) (Sen 1971: 8). Of course, other possible interpretations of \(R(A,B)\) are salient in a moral context, such as “\(A\) is at least as good as \(B\)”.
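
For readers who want the formal statements, here is a sketch of one standard way of writing these conditions, roughly following Sen's choice-function framework: let \(C(S)\) be the set of options that are permissible when the set \(S\) of options is available (the choice-function notation is introduced here for illustration).

\[
\begin{aligned}
\text{Property } \alpha{:}\quad & x \in T \subseteq S \text{ and } x \in C(S) \;\Rightarrow\; x \in C(T),\\
\text{Property } \beta{:}\quad & x, y \in C(T),\ T \subseteq S,\ \text{and } y \in C(S) \;\Rightarrow\; x \in C(S),\\
\text{Maximality:}\quad & C(S) = \{x \in S : R(x,y) \text{ for all } y \in S\}.
\end{aligned}
\]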

9. The sure thing principle originates in Savage (1954); our informal version matches his motivating discussion more closely than his formal statement.

10. See, e.g., the risk-weighted expected utility theory of Buchak (2013), inspired by Quiggin (1982).

11. See, e.g., the lexicographic expected utility theory of Hausner (1954) and Fishburn (1971).

12. A large literature starts from Aumann (1962); in the recent philosophical literature, Hare (2010) has been much discussed, particularly with respect to violations of stochasticism—see for instance Schoenfield (2014); Bales, Cohen, & Handfield (2014); Bader (2018).

13. In terms of our example, however, it is worth noting that the best-defended forms of utilitarianism, like those mentioned in the next paragraph, involve risk-neutrality.

14. Sometimes a distinction is drawn between “risk” and “uncertainty”. The distinction is not always completely clear, but, roughly speaking, the former term covers cases where the decision-relevant probabilities are precise and accessible, and the latter term covers harder cases. In those terms, cluelessness about consequences presumably involves “uncertainty” rather than “risk”.

15. To be a bit more careful: to tell which of two options has higher expected value, we do not necessarily need to know the expected value of each option separately. We just need to know whether the difference in expected value is positive or negative. (By analogy, if you see two people at a distance, you might know which one is taller while being very uncertain about their individual heights.) But the answer to this latter question can still depend sensitively on various probabilities and on the differences between them. To return to an earlier example, using my left hand rather than my right makes roughly zero difference to the probability of an extra typhoon; but, since the typhoon would be extremely destructive, even a small non-zero difference in the probability could make a decisive difference in expected value.
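
As a purely illustrative calculation (the numbers are invented for the example): if switching hands changes the probability of the typhoon by \(\Delta p\), and the typhoon outcome differs in value by \(V\) from the no-typhoon outcome on some cardinal scale, then the difference in expected value is

\[
\Delta EV = \Delta p \times V, \qquad \text{e.g.}\quad \Delta p = 10^{-9},\ V = -10^{12} \;\Rightarrow\; \Delta EV = -1000,
\]

so even a probability difference of one in a billion could swamp the ordinary considerations favoring one hand over the other.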

16. Constraints can also take a positive form; e.g., keep your promises even if that will result in fewer total instances of promise-keeping.

17. Relatedly, this view violates the continuity axiom of section 2, which is used in the standard version of expected utility theory to get utilities that are numbers rather than vectors.
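
To see the connection (with an illustrative case, not one drawn from the main text): one standard statement of continuity says that if \(A \succ B \succ C\), then some probability mixture of \(A\) and \(C\) is exactly as good as \(B\), i.e.,

\[
A \succ B \succ C \;\Rightarrow\; \exists\, p \in (0,1) \text{ such that } pA + (1-p)C \sim B.
\]

A view on which \(C\) (say, a constraint violation) is lexically worse than any option free of such a violation denies this: on that view, \(B \succ pA + (1-p)C\) for every \(p \in (0,1)\), so no such \(p\) exists.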

18. This reasoning may be too quick, since, as noted in section 2, the notion of an “outcome” in expected utility theory is quite flexible. For example, Stefánsson & Bradley (2015) apply expected utility theory using outcomes that include facts about objective chances; they can therefore distinguish an outcome in which B is not harmed and there was no chance of B’s being harmed from an outcome in which B is not harmed despite facing a significant objective risk. They argue that it could be rational to prefer the former outcome. Note, however, that the deontological rationales described in the main text also seem to cover subjective or evidential probabilities of harm.

19. This literature also tends to frame things less in terms of risks of violating rights or constraints, and more in terms of risks of harm. However, in rejecting the consequentialist position that a risky activity should be permitted as long as the benefits outweigh the harms, most participants in this literature arguably take there to be something like a (non-absolute) constraint against causing certain kinds of harm.

20. The best-known example of such a prospect is the St. Petersburg game, in which a fair coin is flipped repeatedly until it lands heads, with the player then receiving a payoff of \(\$ 2^n\), where \(n\) is the total number of flips. The expected monetary reward from playing this game is

\[\frac{1}{2} \times 2 + \frac{1}{4} \times 4 + \frac{1}{8} \times 8 + \ldots = 1 + 1 + 1 + \ldots = \infty.\]

So, it seems, a gambler who maximizes their expected monetary reward should be willing to pay any finite amount to play the St. Petersburg game, meaning that they strictly prefer the game to all of its possible outcomes (each of which is a finite amount of money). See the entry on the St. Petersburg paradox for more on this and related puzzles in infinite decision theory.
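
To spell out why an infinite expected value licenses paying any finite price (an illustration added here, using a stipulated truncation of the game): suppose the game is cut off after at most \(N\) flips, with a payoff of \(\$0\) if heads has not yet appeared. The expected payoff of this truncated game is

\[
\sum_{n=1}^{N} \frac{1}{2^n} \times 2^n = N.
\]

So, for any finite price \(c\), the first \(N > c\) terms of the original series already contribute more than \(c\) in expectation, and the expected net gain from paying \(c\) to play the full game remains positive (indeed infinite).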

Copyright © 2024 by
Christian Tarsney <christian.tarsney@gmail.com>
Teruji Thomas <teruji.thomas@philosophy.ox.ac.uk>
William MacAskill <will@effectivealtruism.org>
