Decision Theory

First published Wed Dec 16, 2015; substantive revision Fri Oct 9, 2020

Decision theory is concerned with the reasoning underlying an agent’s choices, whether this is a mundane choice between taking the bus or getting a taxi, or a more far-reaching choice about whether to pursue a demanding political career. (Note that “agent” here stands for an entity, usually an individual person, that is capable of deliberation and action.) Standard thinking is that what an agent chooses to do on any given occasion is completely determined by her beliefs and desires or values, but this is not uncontroversial, as will be noted below. In any case, decision theory is as much a theory of beliefs, desires and other relevant attitudes as it is a theory of choice; what matters is how these various attitudes (call them “preference attitudes”) cohere together.

The focus of this entry is normative decision theory. That is, the main question of interest is what criteria an agent’s preference attitudes should satisfy in any generic circumstances. This amounts to a minimal account of rationality, one that sets aside more substantial questions about appropriate desires and reasonable beliefs, given the situation at hand. The key issue for a minimal account is the treatment of uncertainty. The orthodox normative decision theory, expected utility (EU) theory, essentially says that, in situations of uncertainty, one should prefer the option with greatest expected desirability or value. (Note that in this context, “desirability” and “value” should be understood as desirability/value according to the agent in question.) This simple maxim will be the focus of much of our discussion.

The structure of this entry is as follows: Section 1 discusses the basic notion of “preferences over prospects”, which lies at the heart of decision theory. Section 2 describes the development of normative decision theory in terms of ever more powerful and flexible measures of preferences. Section 3 discusses the two best-known versions of EU theory. Section 4 considers the broader significance of EU theory for practical action, inference, and valuing. Section 5 turns to prominent challenges to EU theory, while Section 6 addresses sequential decisions, and how this richer setting bears on debates about rational preferences.

1. What are preferences over prospects?

The two central concepts in decision theory are preferences and prospects (or equivalently, options). Roughly speaking, when we (in this entry) say that an agent “prefers” the “option” \(A\) over \(B\) we mean that the agent takes \(A\) to be more desirable or choice-worthy than \(B\). This rough definition makes clear that preference is a comparative attitude. Beyond this, there is room for argument about what preferences over options actually amount to, or in other words, what it is about an agent (perhaps oneself) that concerns us when we talk about his/her preferences over options. This section considers some elementary issues of interpretation that set the stage for introducing (in the next section) the decision tables and expected utility rule that for many is the familiar subject matter of decision theory. Further interpretive questions regarding preferences and prospects will be addressed later, as they arise.

Let us nonetheless proceed by first introducing basic candidate properties of (rational) preference over options and only afterwards turning to questions of interpretation. As noted above, preference concerns the comparison of options; it is a relation between options. For a domain of options we speak of an agent’s preference ordering, this being the ordering of options that is generated by the agent’s preference between any two options in that domain.

In what follows, \(\preceq\) represents a weak preference relation. So \(A\preceq B\) means that the agent we are interested in considers option \(B\) to be at least as preferable as option \(A\). From the weak preference relation we can define the strict preference relation, \(\prec\), as follows: \(A\prec B\Leftrightarrow A\preceq B \ \& \ \neg (B\preceq A)\), where \(\neg X\) means “it is not the case that \(X\)”. The indifference relation, \(\sim\), is defined as: \(A\sim B \Leftrightarrow A\preceq B \ \& \ B\preceq A\). This represents that the agent we are interested in considers \(A\) and \(B\) to be equally preferable.

We say that \(\preceq\) weakly orders a set \(S\) of options whenever it satisfies the following two conditions:

Axiom 1 (Completeness)
For any \(A, B\in S\): either \(A\preceq B\) or \(B\preceq A\).

Axiom 2 (Transitivity)
For any \(A, B, C\in S\): if \(A\preceq B\) and \(B\preceq C\) then \(A\preceq C\).
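For a finite option set, both axioms can be checked mechanically. The following is a minimal sketch in Python; the option names and the relation (a set of ordered pairs encoding \(\preceq\)) are purely illustrative:

```python
from itertools import product

def is_complete(options, weak_pref):
    """Axiom 1: every pair of options is comparable."""
    return all((a, b) in weak_pref or (b, a) in weak_pref
               for a, b in product(options, repeat=2))

def is_transitive(options, weak_pref):
    """Axiom 2: if A <= B and B <= C, then A <= C."""
    return all((a, c) in weak_pref
               for a, b, c in product(options, repeat=3)
               if (a, b) in weak_pref and (b, c) in weak_pref)

# Illustrative relation: bus <= taxi <= walk, plus the reflexive pairs.
options = ["bus", "taxi", "walk"]
weak_pref = {("bus", "bus"), ("taxi", "taxi"), ("walk", "walk"),
             ("bus", "taxi"), ("taxi", "walk"), ("bus", "walk")}
print(is_complete(options, weak_pref))    # True
print(is_transitive(options, weak_pref))  # True
```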

The above can be taken as a preliminary characterisation of rational preference over options. Even this limited characterisation is contentious, however, and points to divergent interpretations of “preferences over prospects/options”.

Start with the Completeness axiom, which says that an agent can compare, in terms of the weak preference relation, all pairs of options in \(S\). Whether or not Completeness is a plausible rationality constraint depends both on what sort of options are under consideration, and how we interpret preferences over these options. If the option set includes all kinds of states of affairs, then Completeness is not immediately compelling. For instance, it is questionable whether an agent should be able to compare the option whereby two additional people in the world are made literate with the option whereby two additional people reach the age of sixty. If, on the other hand, all options in the set are quite similar to each other, say, all options are investment portfolios, then Completeness is more compelling. But even if we do not restrict the kinds of options under consideration, the question of whether or not Completeness should be satisfied turns on the meaning of preference. For instance, if preferences merely represent choice behaviour or choice dispositions, as they do according to the “revealed preference theory” popular amongst economists (see Sen 1973), then Completeness is automatically satisfied, on the assumption that a choice must inevitably be made. By contrast, if preferences are understood rather as mental attitudes, typically considered judgments about whether an option is better or more desirable than another, then the doubts about Completeness alluded to above are pertinent (for further discussion, see Mandler 2001).

Most philosophers and decision theorists subscribe to the latter interpretation of preference as a kind of judgment that explains, as opposed to being identical with, choice dispositions and resultant choice behaviour (see, e.g., Hausman 2011a, 2011b; Dietrich and List, 2016a & 2016b; Bradley 2017; although see also Thoma 2020b and Vredenburgh 2020 for recent defences of “revealed preference theory”, at least in the context of empirical economics). Moreover, many hold that Completeness is not rationally required, since they think that rationality makes demands only on the judgments an agent actually holds, but says nothing of whether a judgement must be held in the first place. Nevertheless, following Richard Jeffrey (1983), most decision theorists suggest that rationality requires that preferences be coherently extendible. This means that even if your preferences are not complete, it should be possible to complete them without violating any of the conditions that are rationally required, in particular Transitivity.

This brings us to the Transitivity axiom, which says that if an option \(B\) is weakly preferred to \(A\), and \(C\) weakly preferred to \(B\), then \(C\) is weakly preferred to \(A\). A recent challenge to Transitivity turns on heterogeneous sets of options, as per the discussion of Completeness above. But here a different interpretation of preference is brought to bear on the comparison of options. The idea is that preferences, or judgments of desirability, may be responsive to a salience condition. For example, suppose that the most salient feature when comparing cars \(A\) and \(B\) is how fast they can be driven, and \(B\) is no worse than \(A\) in this regard, yet the most salient feature when comparing cars \(B\) and \(C\) is how safe they are, and that \(C\) is no worse than \(B\) in this regard. Furthermore, when comparing \(A\) and \(C\), the most salient feature is their beauty. In such a case, some argue (e.g., Temkin 2012) that there is no reason why Transitivity should be satisfied with respect to the preferences concerning \(A\), \(B\) and \(C\). Others (e.g., Broome 1991a) argue that Transitivity is part of the very meaning of the betterness relation (or objective comparative desirability); if rational preference is a judgment of betterness or desirability, then Transitivity is non-negotiable. With respect to the car example, Broome would argue that the desirability of a fully specified option should not vary, simply in virtue of what other options it is compared with. Either the choice context affects how the agent perceives the option at hand, in which case the description of the option should reflect this, or else the choice context does not affect the option. Either way, Transitivity should be satisfied.

There is a more straightforward defence of Transitivity in preference; a defence that hinges on the sure losses that may befall anyone who violates the axiom. This is the so-called money pump argument (see Davidson et al. 1955 for an early argument of this sort, but for recent discussion and revision of this argument, see Gustafsson 2010 & 2013). It is based on the assumption that if you find \(X\) at least as desirable as \(Y\), then you should be happy to trade the latter for the former. Suppose you violate Transitivity; for you: \(A\preceq B\), \(B\preceq C\) but \(C\prec A\). Moreover, suppose you presently have \(A\). Then you should be willing to trade \(A\) for \(B\). The same goes for \(B\) and \(C\): you should be willing to trade \(B\) for \(C\). You strictly prefer \(A\) to \(C\), so you should be willing to trade in \(C\) plus some sum \(\$x\) for \(A\). But now you are in the same situation as you started, having \(A\) and neither \(B\) nor \(C\), except that you have lost \(\$x\)! So in a few steps, each of which was consistent with your preferences, you find yourself in a situation that is clearly worse, by your own lights, than your original situation. The picture is made more dramatic if we imagine that the process could be repeated, turning you into a “money pump”. Hence, the argument goes, there is something (instrumentally) irrational about your intransitive preferences. If your preferences were transitive, then you would not be vulnerable to choosing a dominated option and serving as a money pump. Therefore, your preferences should be transitive.
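The money pump dynamic can be made vivid with a small simulation. In this sketch (the option names and the fee are illustrative), the agent trades \(A\) for \(B\) and \(B\) for \(C\) for free, as her weak preferences permit, and then, since she strictly prefers \(A\) to \(C\), pays a fee of \(\$x\) per cycle to get \(A\) back:

```python
def run_money_pump(cycles, fee):
    """Simulate an agent with A <= B, B <= C but C < A.
    Returns the option held at the end and the total money lost."""
    holding, wealth = "A", 0.0
    trades = {"A": "B", "B": "C"}            # free trades: A <= B, B <= C
    for _ in range(cycles):
        holding = trades[holding]            # trade A for B
        holding = trades[holding]            # trade B for C
        holding, wealth = "A", wealth - fee  # C < A: pay the fee to regain A
    return holding, -wealth

holding, lost = run_money_pump(cycles=3, fee=1.0)
print(holding, lost)  # A 3.0
```

After any number of cycles the agent holds exactly what she started with, minus the accumulated fees; each individual trade was sanctioned by her own preferences.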

While the aforementioned controversies have not been settled, the following assumptions will be made in the remainder of this entry: i) the objects of preference may be heterogeneous prospects, incorporating a rich and varied domain of properties, ii) preference between options is a judgment of comparative desirability or choice-worthiness, and iii) preferences satisfy both Completeness and Transitivity (although the former condition will be revisited in Section 5). The question that now arises is whether there are further general constraints on rational preference over options.

2. Utility measures of preference

In our continuing investigation of rational preferences over prospects, the numerical representation (or measurement) of preference orderings will become important. The numerical measures in question are known as utility functions. The two main types of utility function that will play a role are the ordinal utility function and the more information-rich interval-valued (or cardinal) utility function.

2.1 Ordinal utilities

It turns out that as long as the set of prospects/options, \(S\), is finite, any weak order of the options in \(S\) can be represented by an ordinal utility function. To be precise, let us say that \(u\) is a utility function with domain \(S\). We say that the function \(u\) represents the preference \(\preceq\) between the options in \(S\) just in case:

\[\tag{1}\text{For any}\ A, B \in S: u(A)\leq u(B) \Leftrightarrow A\preceq B\]

Another way to put this is that, when the above holds, the preference relation can be represented as maximising utility, since it always favours options with higher utility.

The only information contained in an ordinal utility representation is how the agent whose preferences are being represented orders options, from least to most preferable. This means that if \(u\) is an ordinal utility function that represents the ordering \(\preceq\), then any utility function \(u'\) that is an ordinal transformation of \(u\)—that is, any transformation of \(u\) that also satisfies the biconditional in (1)—represents \(\preceq\) just as well as \(u\) does. Hence, we say that an ordinal utility function is unique only up to ordinal transformations.

The result referred to above can be summarised as follows:

Theorem 1 (Ordinal representation). Let \(S\) be a finite set, and \(\preceq\) a weak preference relation on \(S\). Then there is an ordinal utility function that represents \(\preceq\) just in case \(\preceq\) is complete and transitive.

This theorem should not be too surprising. If \(\preceq\) is complete and transitive over \(S\), then the options in \(S\) can be put in an order, from the most to the least preferred, where some options may fall in the same position (if they are deemed equally desirable) but where there are no cycles, loops, or gaps. Theorem 1 just says that we can assign numbers to the options in \(S\) in a way that represents this order. (For a simple proof of Theorem 1, albeit for a strict rather than a weak preference relation, consult Peterson 2009: 95.)
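One standard way to establish such a result is constructive: assign each option the number of options that are weakly below it. A sketch of this construction in Python (the options and relation are illustrative):

```python
def ordinal_utility(options, weak_pref):
    """Assign each option the number of options weakly below it.
    For a complete and transitive relation, the resulting function
    satisfies the representation condition (1)."""
    return {a: sum(1 for b in options if (b, a) in weak_pref)
            for a in options}

# Illustrative weak order: A <= B <= C, with reflexive pairs included.
options = ["A", "B", "C"]
weak_pref = {("A", "A"), ("B", "B"), ("C", "C"),
             ("A", "B"), ("B", "C"), ("A", "C")}
u = ordinal_utility(options, weak_pref)
print(u)  # {'A': 1, 'B': 2, 'C': 3}

# Condition (1): u(A) <= u(B) if and only if A <= B.
assert all((u[a] <= u[b]) == ((a, b) in weak_pref)
           for a in options for b in options)
```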

Note that ordinal utilities are not very mathematically “powerful”, so to speak. It does not make sense, for instance, to compare the probabilistic expectations of different sets of ordinal utilities. For example, consider the following two pairs of prospects: the elements of the first pair are assigned ordinal utilities of 2 and 4, while those in the second pair are assigned ordinal utilities of 0 and 5. Let us specify a “flat” probability distribution in each case, such that each element in the two pairs corresponds to a probability of 0.5. Relative to this probability assignment, the expectation of the first pair of ordinal utilities is 3, which is larger than 2.5, the expectation of the second pair. Yet when we transform the ordinal utilities in a permissible way—for instance by increasing the highest utility in the second pair from 5 to 10—the ordering of expectations reverses; now the comparison is between 3 and 5. The significance of this point will become clearer in what follows, when we turn to the comparative evaluation of lotteries and risky choices. An interval-valued or cardinal utility function is necessary for evaluating lotteries/risky prospects in a consistent way. By the same token, in order to construct or conceptualise a cardinal utility function, one typically appeals to preferences over lotteries. (Although see Alt 1936 for a “risk-free” construction of cardinal utility, that is, one that does not appeal to lotteries.)
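The arithmetic in this example is easy to check directly; here is a quick Python sketch using the numbers from the text:

```python
def expectation(utilities, probs):
    """Probability-weighted sum of a list of utilities."""
    return sum(u * p for u, p in zip(utilities, probs))

flat = [0.5, 0.5]
pair1, pair2 = [2, 4], [0, 5]
print(expectation(pair1, flat), expectation(pair2, flat))  # 3.0 2.5

# A permissible ordinal transformation of the second pair (raising the
# highest utility from 5 to 10) reverses the comparison of expectations:
pair2_rescaled = [0, 10]
print(expectation(pair1, flat), expectation(pair2_rescaled, flat))  # 3.0 5.0
```

Since ordinal transformations can reverse comparisons of expectations, expectations computed from ordinal utilities carry no stable meaning.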

2.2 Cardinalizing utility

In order to get a cardinal (interval-valued) utility representation of a preference ordering—i.e., a measure that represents not only how an agent orders the options but also says something about the desirabilistic “distance” between options—we need a richer setting; the option set and the corresponding preference ordering will need to have more structure than for an ordinal utility measure. One such account, owing to John von Neumann and Oskar Morgenstern (1944), will be cashed out in detail below. For now, it is useful to focus on the kind of option that is key to understanding and constructing a cardinal utility function: lotteries.[1]

Consider first an ordering over three regular options, e.g., the three holiday destinations Amsterdam, Bangkok and Cardiff, denoted \(A\), \(B\) and \(C\) respectively. Suppose your preference ordering is \(A\prec B \prec C\). This information suffices to ordinally represent your judgement; recall that any assignment of utilities is then acceptable as long as \(C\) gets a higher value than \(B\) which gets a higher value than \(A\). But perhaps we want to know more than can be inferred from such a utility function—we want to know how much \(C\) is preferred over \(B\), compared to how much \(B\) is preferred over \(A\). For instance, it may be that Bangkok is considered almost as desirable as Cardiff, but Amsterdam is a long way behind Bangkok, relatively speaking. Or else perhaps Bangkok is only marginally better than Amsterdam, compared to the extent to which Cardiff is better than Bangkok. This kind of information about the relative distance between options, in terms of strength of preference or desirability, is precisely what is given by an interval-valued utility function. The problem is how to ascertain this information.

To solve this problem, Ramsey (1926) and later von Neumann and Morgenstern (hereafter vNM) made the following suggestion: we construct a new option, a lottery, \(L\), that has \(A\) and \(C\) as its possible “prizes”, and we figure out what chance the lottery must confer on \(C\) for you to be indifferent between this lottery and a holiday in Bangkok. The basic idea is that your judgment about Bangkok, relative to Cardiff on the one hand and Amsterdam on the other, can be measured by the riskiness of the lottery \(L\) involving Cardiff and Amsterdam that you deem equally desirable as Bangkok. For instance, if you are indifferent between Bangkok and a lottery that provides a very low chance of winning a trip to Cardiff, then you evidently do not regard Bangkok to be much better than Amsterdam, vis-à-vis Cardiff; for you, even a small improvement on Amsterdam, i.e., a lottery with a small chance of Cardiff rather than Amsterdam, is enough to match Bangkok.

The above analysis presumes that lotteries are evaluated in terms of their expected choice-worthiness or desirability. That is, the desirability of a lottery is effectively the sum of the chances of each prize multiplied by the desirability of that prize. Consider the following example: Suppose you are indifferent between the lottery and the holiday in Bangkok when the chance of the lottery resulting in a holiday in Cardiff is \(3/4\). Call this particular lottery \(L'\). The idea is that Bangkok is therefore three quarters of the way up a desirability scale that has Amsterdam at the bottom and Cardiff at the top. If we stipulate that \(u(A)=0\) and \(u(C)=1\), then \(u(B)=u(L')=3/4\). This corresponds to the expected desirability—or, as it is usually called, the expected utility—of the lottery, since \(1/4\cdot 0 + 3/4\cdot 1 = 3/4 = u(L')\). That is, the desirability of the lottery is a probability weighted sum of the utilities of its prizes, where the weight on each prize is determined by the probability that the lottery results in that prize.

We thus see that an interval-valued utility measure over options can be constructed by introducing lottery options. As the name suggests, the interval-valued utility measure conveys information about the relative sizes of the intervals between the options according to some desirability scale. That is, the utilities are unique after we have fixed the starting point of our measurement and the unit scale of desirability. In the above example, we could have, for instance, assigned a utility value of 1 to \(A\) and 5 to \(C\), in which case we would have had to assign a utility value of 4 to \(B\), since 4 is 3/4 of the way between 1 and 5. In other words, once we have assigned utility values to \(A\) and \(C\), the utility of \(L'\) and thus \(B\) has been determined. Let us call this second utility function \(u'\). It is related to our original function as follows: \(u'=4\cdot u +1\). This relationship always holds between two such functions: If \(u\) is an interval-valued utility function that represents a preference ordering, \(\preceq\), and \(u'\) is another utility function that also represents this same preference ordering, then there are constants \(a\) and \(b\), where \(a\) must be positive, such that \(u'=a\cdot u + b\). This is to say that interval-valued utility functions are unique only up to positive linear transformation.
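The holiday example can be worked through numerically. The sketch below fixes \(u(A)=0\) and \(u(C)=1\), recovers \(u(B)=3/4\) from the indifference with the lottery \(L'\), and confirms that the positive linear transformation \(u'=4\cdot u+1\) represents the same preferences:

```python
def expected_utility(lottery, u):
    """Expected utility of a lottery given as [(probability, prize), ...]."""
    return sum(p * u[prize] for p, prize in lottery)

# Indifference between Bangkok and L' = {1/4 Amsterdam, 3/4 Cardiff}
# fixes u(Bangkok) once the endpoints of the scale are chosen.
L_prime = [(0.25, "Amsterdam"), (0.75, "Cardiff")]
u = {"Amsterdam": 0.0, "Cardiff": 1.0}
u["Bangkok"] = expected_utility(L_prime, u)
print(u["Bangkok"])  # 0.75

# The positive linear transformation u' = 4*u + 1 from the text.
u2 = {x: 4 * v + 1 for x, v in u.items()}
print(u2["Amsterdam"], u2["Bangkok"], u2["Cardiff"])  # 1.0 4.0 5.0
print(expected_utility(L_prime, u2))  # 4.0, again matching u'(Bangkok)
```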

Before concluding this discussion of measuring utility, two related limitations regarding the information such measures convey should be mentioned. First, since the utilities of options, whether ordinal or interval-valued, can only be determined relative to the utilities of other options, there is no such thing as the absolute utility of an option, at least not without further assumptions.[2] Second, by the same reasoning, neither interval-valued nor ordinal utility measures, as discussed here, are interpersonally commensurable with respect to levels and units of utility. By way of a quick illustration, suppose that both you and I have the preference ordering described above over the holiday options: \(A\prec B \prec C\). Suppose too that, as per the above, we are both indifferent between \(B\) and the lottery \(L'\) that has a \(3/4\) chance of yielding \(C\) and a \(1/4\) chance of yielding \(A\). Can we then say that granting me Cardiff and you Bangkok would amount to the same amount of “total desirability” as granting you Cardiff and me Bangkok? We are not entitled to say this. Our shared preference ordering is, for instance, consistent with me finding a vacation in Cardiff a dream come true while you just find it the best of a bad lot. Moreover, we are not even entitled to say that the difference in desirability between Bangkok and Amsterdam is the same for you as it is for me. According to me, the desirability of the three options might range from living hell to a dream come true, while according to you, from bad to quite bad; both evaluations are consistent with the above preference ordering. In fact, the same might hold for our preferences over all possible options, including lotteries: even if we shared the same total preference ordering, it might be the case that you are just of a negative disposition—finding no option that great—while I am very extreme—finding some options excellent but others a sheer torture. 
Hence, utility functions, whether interval-valued or ordinal, do not allow for meaningful interpersonal comparisons. (Elster and Roemer 1993 contains a number of papers discussing these issues; see also the entry on social choice theory.)

2.3 The von Neumann and Morgenstern (vNM) representation theorem

The last section provided an interval-valued utility representation of a person’s preferences over lotteries, on the assumption that lotteries are evaluated in terms of expected utility. Some might find this a bit quick. Why should we assume that people evaluate lotteries in terms of their expected utilities? The vNM theorem effectively shores up the gaps in reasoning by shifting attention back to the preference relation. In addition to Transitivity and Completeness, vNM introduce further principles governing rational preferences over lotteries, and show that an agent’s preferences can be represented as maximising expected utility whenever her preferences satisfy these principles.

Let us first define, in formal terms, the expected utility of a lottery: Let \(L_i\) be a lottery from the set \(\bL\) of lotteries, and \(O_{ik}\) the outcome, or prize, of lottery \(L_i\) that arises with probability \(p_{ik}\). The expected utility of \(L_i\) is then defined as:

The vNM equation.

\[EU(L_i) \mathbin{\dot{=}} \sum_k u(O_{ik}) \cdot p_{ik}\]

The assumption made earlier can now be formally stated:

\begin{equation}\tag{2} \text{For any}\ L_i, L_j\in \bL: L_i\preceq L_j\Leftrightarrow EU(L_i)\leq EU(L_j) \end{equation}

When the above holds, we say that there is an expected utility function that represents the agent’s preferences; in other words, the agent can be represented as maximising expected utility.

The question that vNM address is: What sort of preferences can be thus represented? To answer this question, we must return to the underlying preference relation \(\preceq\) over the set of options, in this case involving lotteries. The vNM theorem requires the set \(\bL\) of lotteries to be rather extensive: it is closed under “probability mixture”, that is, if \(L_i, L_j\in \bL\), then compound lotteries that have \(L_i\) and \(L_j\) as possible prizes are also in \(\bL\). (Another technical assumption, that will not be discussed in detail, is that compound lotteries can always be reduced, in accordance with the laws of probability, to simple lotteries that only involve basic prizes.)
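The reduction of compound lotteries mentioned parenthetically above amounts to multiplying probabilities along each branch and collecting the results by basic prize. A sketch in Python (the prizes and probabilities are illustrative):

```python
from collections import defaultdict

def reduce_lottery(lottery):
    """Reduce a (possibly compound) lottery to a simple lottery over
    basic prizes. A lottery is a list of (probability, prize) pairs,
    where a prize may itself be a lottery (a list)."""
    simple = defaultdict(float)
    for p, prize in lottery:
        if isinstance(prize, list):  # nested lottery: recurse and weight
            for basic, q in reduce_lottery(prize).items():
                simple[basic] += p * q
        else:
            simple[prize] += p
    return dict(simple)

# An illustrative 50/50 probability mixture of two simple lotteries.
L1 = [(0.4, "$0"), (0.6, "$100")]
L2 = [(1.0, "$50")]
compound = [(0.5, L1), (0.5, L2)]
print(reduce_lottery(compound))  # {'$0': 0.2, '$100': 0.3, '$50': 0.5}
```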

A basic rationality constraint on the preference relation has already been discussed—that it weakly orders the options (i.e., satisfies Transitivity and Completeness). The following notation will be used to introduce the two additional vNM axioms of preference: \(\{pA, (1-p)B\}\) denotes a lottery that results either in \(A\), with probability \(p\), or \(B\), with probability \(1-p\), where \(A\) and \(B\) can be final outcomes but can also be lotteries.

Axiom 3 (Continuity)
Suppose \(A\preceq B\preceq C\). Then there is a \(p\in [0,1]\) such that:

\[\{pA, (1-p)C\}\sim B\]

Axiom 4 (Independence)
Suppose \(A\preceq B\). Then for any \(C\), and any \(p\in [0,1]\):

\[\{pA, (1-p)C\}\preceq \{pB, (1-p)C\}\]

Continuity implies that no outcome \(A\) is so bad that you would not be willing to take some gamble that might result in you ending up with that outcome, but might otherwise result in you ending up with an outcome (\(C\)) that you find to be a marginal improvement on your status quo (\(B\)), provided that the chance of \(A\) is small enough. Intuitively, Continuity guarantees that an agent’s evaluations of lotteries are appropriately sensitive to the probabilities of the lotteries’ prizes.

Independence implies that when two alternatives have the same probability for some particular outcome, our evaluation of the two alternatives should be independent of our opinion of that outcome. Intuitively, this means that preferences between lotteries should be governed only by the features of the lotteries that differ; the commonalities between the lotteries should be effectively ignored.
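It is worth noting that any expected utility maximiser automatically satisfies Independence: mixing each of two lotteries with a common lottery \(C\) at weight \(1-p\) shifts both expected utilities by the same amount, \((1-p)\cdot EU(C)\), leaving their order intact. A quick numerical check of this fact (the sampled utilities are purely illustrative):

```python
import random

def expected_utility(lottery):
    """Lottery as [(probability, utility-of-prize), ...]."""
    return sum(p * u for p, u in lottery)

def mix(p, lottery, other):
    """The probability mixture {p * lottery, (1-p) * other}, reduced."""
    return ([(p * q, u) for q, u in lottery] +
            [((1 - p) * q, u) for q, u in other])

random.seed(0)
for _ in range(1000):
    A, B, C = ([(0.5, random.uniform(-10, 10)), (0.5, random.uniform(-10, 10))]
               for _ in range(3))
    p = random.random()
    # If EU(A) <= EU(B), then EU({pA,(1-p)C}) <= EU({pB,(1-p)C}).
    if expected_utility(A) <= expected_utility(B):
        assert (expected_utility(mix(p, A, C))
                <= expected_utility(mix(p, B, C)) + 1e-9)
print("Independence holds in all sampled cases")
```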

Some people find the Continuity axiom an unreasonable constraint on rational preference. Is there any probability \(p\) such that you would be willing to accept a gamble that has that probability of you losing your life and probability \((1-p)\) of you gaining $10? Many people think there is not. However, the very same people would presumably cross the street to pick up a $10 bill they had dropped. But that is just taking a gamble that has a very small probability of being killed by a car but a much higher probability of gaining $10! More generally, although people rarely think of it this way, they constantly take gambles that have minuscule chances of leading to imminent death, and correspondingly very high chances of some modest reward.

Independence seems a compelling requirement of rationality, when considered in the abstract. Nevertheless, there are famous examples where people often violate Independence without seeming irrational. These examples involve complementarities between the possible lottery outcomes. A particularly well-known such example is the so-called Allais Paradox, which the French economist Maurice Allais (1953) first introduced in the early 1950s. The paradox turns on comparing people’s preferences over two pairs of lotteries similar to those given in Table 1. The lotteries are described in terms of the prizes that are associated with particular numbered tickets, where one ticket will be drawn randomly (for instance, \(L_1\) results in a prize of $2500 if one of the tickets numbered 2–34 is drawn).

           1        2–34     35–100
\(L_1\)    $0       $2500    $2400
\(L_2\)    $2400    $2400    $2400

           1        2–34     35–100
\(L_3\)    $0       $2500    $0
\(L_4\)    $2400    $2400    $0

Table 1. Allais’ paradox

In this situation, many people strictly prefer \(L_2\) over \(L_1\) but also \(L_3\) over \(L_4\) (as evidenced by their choice behaviour, as well as their testimony), a pair of preferences which will be referred to as Allais’ preferences.[3] A common way to rationalise Allais’ preferences is that in the first choice situation, the risk of ending up with nothing when one could have had $2400 for sure does not justify the increased chance of a higher prize. In the second choice situation, however, the minimum one stands to gain is $0 no matter which choice one makes. Therefore, in that case many people do think that the slight extra risk of $0 is worth the chance of a better prize.

While the above reasoning may seem compelling, Allais’ preferences conflict with the Independence axiom. The following is true of both choice situations: whatever choice you make, you will get the same prize if one of the tickets in the last column is drawn. Therefore, Independence implies that both your preference between \(L_1\) and \(L_2\) and your preference between \(L_3\) and \(L_4\) should be independent of the prizes in that column. But when you ignore the last column, \(L_1\) becomes identical to \(L_3\) and \(L_2\) to \(L_4\). Hence, if you prefer \(L_2\) over \(L_1\) but \(L_3\) over \(L_4\), there seems to be an inconsistency in your preference ordering. And there is definitely a violation of Independence (given how the options have been described; an issue to which we return in Section 5.1). As a result, the pair of preferences under discussion cannot be represented as maximising expected utility. (Thus the “paradox”: many people think that Independence is a requirement of rationality, but nevertheless also want to claim that there is nothing irrational about Allais’ preferences.)
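The conflict can also be verified numerically. Normalising \(u(\$0)=0\) and \(u(\$2500)=1\), a utility for $2400 that rationalises the preference for \(L_2\) over \(L_1\) would have to exceed \(33/34\), while one that rationalises the preference for \(L_3\) over \(L_4\) would have to fall below \(33/34\). The following sketch searches a grid of candidate values and finds none:

```python
def eu(prizes_probs, v):
    """Expected utility with u($0)=0, u($2500)=1 and u($2400)=v."""
    u = {"$0": 0.0, "$2500": 1.0, "$2400": v}
    return sum(p * u[prize] for prize, p in prizes_probs)

# The Allais lotteries from Table 1, as (prize, probability) pairs.
L1 = [("$0", 0.01), ("$2500", 0.33), ("$2400", 0.66)]
L2 = [("$2400", 1.0)]
L3 = [("$0", 0.67), ("$2500", 0.33)]
L4 = [("$2400", 0.34), ("$0", 0.66)]

solutions = [v / 1000 for v in range(1001)
             if eu(L2, v / 1000) > eu(L1, v / 1000)    # prefers L2 to L1...
             and eu(L3, v / 1000) > eu(L4, v / 1000)]  # ...and L3 to L4
print(solutions)  # [] -- no utility assignment rationalises both preferences
```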

Decision theorists have reacted in different ways to Allais’ Paradox. This issue will be revisited in Section 5.1, when challenges to EU theory will be discussed. The present goal is simply to show that Continuity and Independence are compelling constraints on rational preference, although not without their detractors. The result vNM proved can be summarised thus:

Theorem 2 (von Neumann-Morgenstern)
Let \(\bO\) be a finite set of outcomes, \(\bL\) a set of corresponding lotteries that is closed under probability mixture and \(\preceq\) a weak preference relation on \(\bL\). Then \(\preceq\) satisfies axioms 1–4 if and only if there exists a function \(u\), from \(\bO\) into the set of real numbers, that is unique up to positive linear transformation, and relative to which \(\preceq\) can be represented as maximising expected utility.

David Kreps (1988) gives an accessible illustration of the proof of this theorem.

3. Making real decisions

The vNM theorem is a very important result for measuring the strength of a rational agent’s preferences over sure options (the lotteries effectively facilitate a cardinal measure over sure options). But this does not get us all the way to making rational decisions in the real world; we do not yet really have a decision theory. The theorem is limited to evaluating options that come with a probability distribution over outcomes—a situation decision theorists and economists often describe as “choice under risk” (Knight 1921).

In most ordinary choice situations, the objects of choice, over which we must have or form preferences, are not like this. Rather, decision-makers must consult their own probabilistic beliefs about whether one outcome or another will result from a specified option. Decisions in such circumstances are often described as “choices under uncertainty” (Knight 1921). For example, consider the predicament of a mountaineer deciding whether or not to attempt a dangerous summit ascent, where the key factor for her is the weather. If she is lucky, she may have access to comprehensive weather statistics for the region. Nevertheless, the weather statistics differ from the lottery set-up in that they do not determine the probabilities of the possible outcomes of attempting versus not attempting the summit on a particular day. Not least, the mountaineer must consider how confident she is in the data-collection procedure, whether the statistics are applicable to the day in question, and so on, when assessing her options in light of the weather.

Some of the most celebrated results in decision theory address, to some extent, these challenges. They consist in showing what conditions on preferences over “real world options” suffice for the existence of a pair of utility and probability functions relative to which the agent can be represented as maximising expected utility. The standard interpretation is that, just as the utility function represents the agent’s desires, so the probability function represents her beliefs. The theories are referred to collectively as subjective expected utility (SEU) theory as they concern an agent’s preferences over prospects that are characterised entirely in terms of her own beliefs and desires (but we will continue to use the simpler label EU theory). In this section, two of these results will be briefly discussed: that of Leonard Savage (1954) and Richard Jeffrey (1965).

Note that these EU decision theories apparently prescribe two things: (a) you should have consistent preference attitudes, and (b) you should prefer the means to your ends, or at least you should prefer the means that you assess will on average lead to your ends (cf. Buchak 2016). The question arises: What is the relationship between these prescriptions? The EU representation theorems that will be outlined shortly seem to show that, despite appearances, the two prescriptions are actually just one: anyone who has consistent attitudes prefers the means to her ends, and vice versa. But the puzzle remains that there are many ways to have consistent preference attitudes, and surely not all of these amount to preferring the means to one’s own true ends. This puzzle is worth bearing in mind when appraising EU theory in its various guises; it will come up again later.

3.1 Savage’s theory

Leonard Savage’s decision theory, as presented in his (1954) The Foundations of Statistics, is without a doubt the best-known normative theory of choice under uncertainty, in particular within economics and the decision sciences. In the book Savage presents a set of axioms constraining preferences over a set of options that guarantee the existence of a pair of probability and utility functions relative to which the preferences can be represented as maximising expected utility. Nearly three decades prior to the publication of the book, Frank P. Ramsey (1926) had in fact proposed a different set of axioms that generates more or less the same result. Nevertheless, Savage’s theory has been much more influential than Ramsey’s, perhaps because Ramsey neither gave a full proof of his result nor provided much detail of how it would go (Bradley 2004). Savage’s result will not be described here in full detail. However, the ingredients and structure of his theorem will be laid out, highlighting its strengths and weaknesses.

The options or prospects in Savage’s theory are similar to lotteries, except that the possible outcomes do not come with probabilities but rather depend on whether a particular state of the world is actual. Indeed, the primitives in Savage’s theory are outcomes[4] and states (of the world). The former are the good or bad states of affairs that ultimately affect and matter to an agent, while the latter are the features of the world that the agent has no control over and which are the locus of her uncertainty about the world. Sets of states are called events. This distinction between outcomes and states serves to neatly separate desire and belief: the former are, according to Savage’s theory, the target of desire, while the latter are the target of belief.

The lottery-like options over which the agent has preferences are a rich set of acts that effectively amount to all the possible assignments of outcomes to states of the world. That is, acts are functions from the state space to the outcome space, and the agent’s preference ordering is taken to be defined over all such possible functions. Some of these acts will look quite sensible: consider the act that assigns to the event “it rains” the outcome “miserable wet stroll” and assigns to the event “it does not rain” the outcome “very comfortable stroll”. This is apparently the act of going for a stroll without one’s umbrella. Other Savage acts will not look quite so sensible, such as the constant act that assigns to both “it rains” and “it does not rain” the same outcome “miserable wet stroll”. (Note that the constant acts provide a way of including sure outcomes within the preference ordering.) The problem with this act (and many others) is that it does not correspond to anything that an agent could even in principle choose to do or perform.[5]

Savage’s act/state(event)/outcome distinction can be naturally represented in tabular form, with rows serving as acts that yield a given outcome for each state/event column. Table 2 depicts the two acts mentioned above plus a third one that the decision maker might care about: the acts i) “go for stroll without umbrella”, ii) “go for stroll with umbrella”, and iii) the bizarre constant act. Of course, the set of acts required for Savage’s theorem involves even more acts that account for all the possible combinations of states and outcomes.

                          no rain                   rain
stroll without umbrella   very comfortable stroll   miserable wet stroll
stroll with umbrella      comfortable stroll        comfortable stroll
constant act              miserable wet stroll      miserable wet stroll

Table 2. Savage-style decision table

Before discussing Savage’s axioms, let us state the result that they give rise to. The following notation will be used: \(f\), \(g\), etc, are various acts, i.e., functions from the set \(\bS\) of states of the world to the set \(\bO\) of outcomes, with \(\bF\) the set of these functions. \(f(s_i)\) denotes the outcome of \(f\) when state \(s_i\in\bS\) is actual. The expected utility of \(f\), according to Savage’s theory, denoted \(U(f)\), is given by:

Savage’s equation
\(U(f)=\sum_i u(f(s_i))\cdot P(s_i)\)
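As a minimal numerical sketch, Savage's equation can be applied to the acts in Table 2. All utility and probability values below are hypothetical assumptions introduced purely for illustration; they are not part of Savage's theory.

```python
# Minimal sketch of Savage's equation U(f) = sum_i u(f(s_i)) * P(s_i).
# All utility and probability values are hypothetical, for illustration only.

P = {"no rain": 0.6, "rain": 0.4}  # subjective probabilities over states

u = {  # utilities of the outcomes in Table 2 (hypothetical values)
    "very comfortable stroll": 10,
    "comfortable stroll": 7,
    "miserable wet stroll": 0,
}

# Savage acts as functions (here, dicts) from states to outcomes
acts = {
    "stroll without umbrella": {"no rain": "very comfortable stroll",
                                "rain": "miserable wet stroll"},
    "stroll with umbrella": {"no rain": "comfortable stroll",
                             "rain": "comfortable stroll"},
    "constant act": {"no rain": "miserable wet stroll",
                     "rain": "miserable wet stroll"},
}

def expected_utility(act):
    """Sum of the utility of the act's outcome in each state, weighted by P."""
    return sum(u[act[state]] * P[state] for state in P)

for name in acts:
    print(name, expected_utility(acts[name]))
```

With these particular numbers, the expected-utility maximiser takes the umbrella (7.0 versus 6.0), and the constant act scores lowest; different probabilities or utilities would of course alter the ranking.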

The result Savage proved can be stated as follows:[6]

Theorem 3 (Savage).
Let \(\preceq\) be a weak preference relation on \(\bF\). If \(\preceq\) satisfies Savage’s axioms, then the following holds:
  • The agent’s confidence in the actuality of the states in \(\bS\) can be represented by a unique (and finitely additive) probability function, \(P\);

  • the strength of her desires for the ultimate outcomes in \(\bO\) can be represented by a utility function, \(u\), that is unique up to positive linear transformation;

  • and the pair \((P, u)\) gives rise to an expected utility function, \(U\), that represents her preferences for the alternatives in \(\bF\); i.e., for any \(f, g\in\bF\):

    \[f\preceq g\Leftrightarrow U(f)\leq U(g)\]

The above result may seem remarkable; in particular, the fact that a person’s preferences can determine a unique probability function that represents her beliefs. On closer inspection, however, it becomes evident that some of our beliefs can be determined by examining our preferences. Suppose you are offered a choice between two lotteries, one that results in you winning a nice prize if a coin comes up heads but getting nothing if the coin comes up tails, and another that results in you winning the same prize if the coin comes up tails but getting nothing if the coin comes up heads. Then assuming that the desirability of the prize (and similarly the desirability of no prize) is independent of how the coin lands, your preference between the two lotteries should be entirely determined by your comparative beliefs for the two ways in which the coin can land. For instance, if you strictly prefer the first lottery to the second, then that suggests you consider heads more likely than tails.
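The observation can be made concrete with hypothetical numbers. If the prize has utility 1 and no prize has utility 0 (illustrative assumptions), the expected utility of each lottery reduces to the subjective probability of the event on which the prize is staked, so a strict preference between the lotteries directly reveals a comparative belief:

```python
# Hypothetical sketch: with u(prize) = 1 and u(nothing) = 0, the expected
# utility of staking the prize on an event equals the probability of the event.

def eu_stake(p_event, u_prize=1.0, u_nothing=0.0):
    """EU of an act that yields the prize if the event occurs, else nothing."""
    return p_event * u_prize + (1 - p_event) * u_nothing

p_heads = 0.6          # an agent who considers heads more likely than tails
p_tails = 1 - p_heads

# Preferring the stake-on-heads lottery reveals that P(heads) > P(tails)
assert eu_stake(p_heads) > eu_stake(p_tails)
```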

The above observation suggests that one can gauge an agent’s comparative beliefs, and perhaps more, from her preferences. Savage went one step further than this, and defined comparative beliefs in terms of preferences. To state Savage’s definition, let \(\wcbrel\) be a weak comparative belief relation, defined on the set \(\bS\) of states of the world. (\(\cbrel\) and \(\wcbsim\) are defined in terms of \(\wcbrel \) in the usual way.)

Definition 1 (Comparative Belief).
Suppose \(E\) and \(F\) are two events (i.e., subsets of \(\bS\)). Suppose \(X\) and \(Y\) are two outcomes and \(f\) and \(g\) two acts, with the following properties:

  • \(f(s_i)=X\) for all \(s_i\in E\), but \(f(s_i)=Y\) for all \(s_i\not\in E\),
  • \(g(s_i)=X\) for all \(s_i\in F\), but \(g(s_i)=Y\) for all \(s_i\not\in F\),
  • \(Y\preceq X\).

Then \(E \wcbrel F\Leftrightarrow f\preceq g\).

Definition 1 is based on the simple observation that one would generally prefer to stake a good outcome on a more rather than less probable event. But the idea that this defines comparative beliefs might seem questionable. We could, for instance, imagine people who are instrumentally irrational, and as a result fail to prefer \(g\) to \(f\), even when the above conditions all hold and they find \(F\) more likely than \(E\). Moreover, this definition raises the question of how to define the comparative beliefs of those who are indifferent between all outcomes (Eriksson and Hájek 2007). Perhaps no such people exist (and Savage’s axiom P5 indeed makes clear that his result does not pertain to such people). Nevertheless, it seems a definition of comparative beliefs should not preclude that such people, if existent, have strict comparative beliefs. Savage suggests that this definition of comparative beliefs is plausible in light of his axiom P4, which will be stated below. In any case, it turns out that when a person’s preferences satisfy Savage’s axioms, we can read off her preferences a comparative belief relation that can be represented by a (unique) probability function.

Without further ado, let us state Savage’s axioms in turn. These are intended as constraints on an agent’s preference relation, \(\preceq\), over a set of acts, \(\bF\), as described above. The first of Savage’s axioms is the basic ordering axiom.

P1. (Ordering)
The relation \(\preceq\) is complete and transitive.

The next axiom is reminiscent of vNM’s Independence axiom. We say that alternative \(f\) “agrees with” \(g\) in event \(E\) if, for any state in event \(E\), \(f\) and \(g\) yield the same outcome.

P2. (Sure Thing Principle)
If \(f\), \(g\), and \(f'\), \(g'\) are such that:

  • \(f\) agrees with \(g\) and \(f'\) agrees with \(g'\) in event \(\neg E\),
  • \(f\) agrees with \(f'\) and \(g\) agrees with \(g'\) in event \(E\),
  • and \(f\preceq g\),

then \(f'\preceq g'\).

The idea behind the Sure Thing Principle (STP) is essentially the same as that behind Independence: since we should be able to evaluate each outcome independently of other possible outcomes, we can safely ignore states of the world where two acts that we are comparing result in the same outcome. Putting the principle in tabular form may make this more apparent. The setup involves four acts with the following form:

  \(E\) \(\neg E\)
\(f\) X Z
\(g\) Y Z
\(f'\) X W
\(g'\) Y W

The intuition behind the STP is that if \(g\) is weakly preferred to \(f\), then that must be because the consequence \(Y\) is considered at least as desirable as \(X\), which by the same reasoning implies that \(g'\) is weakly preferred to \(f'\).

Savage also requires that the desirability of an outcome be independent of the state in which it occurs, as this is necessary for it to be possible to determine a comparative belief relation from an agent’s preferences. To formalise this requirement, Savage introduces the notion of a null event, defined as follows:

Definition 2 (Null)
Event E is null just in case for any alternatives \(f,g\in\bF\), \(f\sim g\) given E.

The intuition is that null events are those events an agent is certain will not occur. An agent is indifferent as to what the acts before her yield under \(E\) just in case she is certain that \(E\) will not occur. The following axiom then stipulates that knowing what state is actual does not affect the preference ordering over outcomes:

P3. (State Neutrality)
If \(f(s_i)=X\) and \(g(s_i)=Y\) whenever \(s_i\in E\) and \(E\) is not null, then \(f\preceq g\) given \(E\) just in case \(X\preceq Y\).

The next axiom is also necessary for it to be possible to determine a comparative belief relation from an agent’s preferences. Above it was suggested that by asking you to stake a prize on whether a coin comes up heads or tails, it can be determined which of these events, heads or tails, you find more likely. But that suggestion is only plausible if the size of the prize does not affect your judgement of the relative likelihood of these two events. That assumption is captured by the next axiom. Since the axiom is rather complicated, it will be stated in tabular form:

P4. Consider the following acts:

  \(E\) \(\neg E\)
\(f\)   \(X\) \(X'\)
\(g\)   \(Y\) \(Y'\)
  \(F\) \(\neg F\)
\(f'\) \(X\) \(X'\)
\(g'\) \(Y\) \(Y'\)

Now suppose:

\[\begin{align} X' &\preceq X, \\ Y' &\preceq Y, \\ f' &\preceq f \end{align}\]

Then

\[g'\preceq g.\]

Less formally (and stated in terms of strict preference), the idea is that if you prefer to stake the prize \(X\) on \(E\) rather than on \(F\) (that is, if you prefer \(f\) to \(f'\)), you must consider \(E\) more probable than \(F\). Therefore, you should also prefer to stake the prize \(Y\) on \(E\) rather than on \(F\) (that is, prefer \(g\) to \(g'\)), since the prize itself does not affect the probability of the events.

The next axiom is arguably not a rationality requirement, but one of Savage’s “structural axioms” (Suppes 2002). An agent needs to have some variation in preference for it to be possible to read off her comparative beliefs from her preferences; and, more generally, for it to be possible to represent her as maximising expected utility. To this end, the next axiom simply requires that there be some alternatives between which the agent is not indifferent:

P5.
There are some \(f,g\in\bF\) such that \(f\prec g\).

When these five axioms are satisfied, the agent’s preferences give rise to a comparative belief relation, \(\wcbrel \), that is a qualitative probability relation; this is necessary for it to be possible to represent \(\wcbrel \) by a probability function. In other words, \(\wcbrel \) satisfies the following three conditions, for any events \(E\), \(F\) and \(G\):

  1. \(\wcbrel \) is transitive and complete,

  2. if \(E\cap G=\emptyset=F\cap G\), then \(E \wcbrel F\Leftrightarrow E\cup G \wcbrel F\cup G\),

  3. \(\emptyset \wcbrel E,\)   \(\emptyset \cbrel \bS\)

Being a qualitative probability relation is, however, not sufficient to ensure the possibility of probabilistic representation. To ensure this possibility, Savage added the following structural axiom:

P6. (Non-atomicity)
Suppose \(f\prec g\). Then for any \(X\in\bO\), there is a finite partition, \(\{E_1, E_2, … E_m\}\), of \(\bS\) such that, for each cell \(E_j\) of the partition:

  • \(f'(s_i)=X\) for any \(s_i\in E_j\), but \(f'(s_i)=f(s_i)\) for any \(s_i\not\in E_j\),
  • \(g'(s_i)=X\) for any \(s_i\in E_j\), but \(g'(s_i)=g(s_i)\) for any \(s_i\not\in E_j\),
  • \(f'\prec g\) and \(f\prec g'\).

Like the Continuity axiom of vNM, Non-Atomicity implies that no matter how bad an outcome \(X\) is, if \(g\) is already preferred to \(f\), then if we add \(X\) as one of the possible outcomes of \(f\)—thereby constructing a new alternative \(f'\)—\(g\) will still be preferred to the modified alternative as long as the probability of \(X\) is sufficiently small. In effect, Non-Atomicity implies that \(\bS\) contains events of arbitrarily small probability. It is not too difficult to imagine how that could be satisfied. For instance, any event \(F\) can be partitioned into two equiprobable sub-events according to whether some coin would come up heads or tails if it were tossed. Each sub-event could be similarly partitioned according to the outcome of the second toss of the same coin, and so on.
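The coin-toss construction just described can be sketched as follows. This is a purely illustrative model, not part of Savage's formalism: \(n\) independent tosses of a fair coin partition any event into \(2^n\) equiprobable cells, yielding events of arbitrarily small probability.

```python
from itertools import product

# Illustrative model of the coin-toss construction behind P6 (Non-Atomicity):
# n tosses of a fair coin partition the state space into 2**n equiprobable
# cells, so events of arbitrarily small probability are available.

def coin_partition(n):
    """All length-n heads/tails sequences, each with probability 2**-n."""
    return {seq: 0.5 ** n for seq in product("HT", repeat=n)}

cells = coin_partition(10)
assert len(cells) == 2 ** 10                      # 1024 cells
assert abs(sum(cells.values()) - 1.0) < 1e-12     # a genuine partition
assert min(cells.values()) < 0.001                # arbitrarily small events
```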

Savage showed that whenever these six axioms are satisfied, the comparative belief relation can be represented by a unique probability function. Having done so, he could rely on the vNM representation theorem to show that an agent who satisfies all six axioms[7] can be represented as maximising expected utility, relative to a unique probability function that plausibly represents the agent’s beliefs over the states and a cardinal utility function that plausibly represents the agent’s desires for ultimate outcomes (recall the statement of Savage’s theorem above).[8] Savage’s own proof is rather complicated, but Kreps (1988) provides a useful illustration of it.

There is no doubt that Savage’s expected utility representation theorem is very powerful. There are, however, two important questions to ask about whether Savage achieves his aims: 1) Does Savage characterise rational preferences, at least in the generic sense? And 2) Does Savage’s theorem tell us how to make rational decisions in the real world? Savage’s theory has problems meeting these two demands, taken together. Arguably the core weakness of the theory is that its various constraints and assumptions pull in different directions when it comes to constructing realistic decision models, and furthermore, at least one constraint (notably, the Sure Thing Principle) is only plausible under decision modelling assumptions that are supposed to be the output, not the input, of the theory.

One well recognised decision-modelling requirement for Savage’s theory is that outcomes be maximally specific in every way that matters for their evaluation. If this were not the case, the axiom of State Neutrality, for instance, would be a very implausible rationality constraint. Suppose we are, for example, wondering whether to buy cocoa or lemonade for the weekend, and assume that how good we find each option depends on what the weather will be like. Then we need to describe the outcomes such that they include the state of the weather. For if we do not, the desirability of the outcomes will depend on what state is actual. Since lemonade is, let us suppose, better on hot days than cold, an outcome like “I drink lemonade this weekend” would be more or less desirable depending on whether it occurs in a state where it is hot or cold. This would be contrary to the axiom of State Neutrality. Therefore, the appropriate outcomes in this case are those of the form “I drink lemonade this weekend in hot weather”. (Of course, this outcome must be split into even more fine-grained outcomes if there are yet further features that would affect the choice at hand, such as sharing the drink with a friend who loves lemonade versus sharing the drink with a friend who loves hot cocoa, and so on.)

The fact that the outcomes in the above case must be specific enough to contain the state of the weather may seem rather innocuous. However, this requirement exacerbates the above-mentioned problem that many of the options/acts that Savage requires for his representation theorem are nonsensical, in that the semantic content of state/outcome pairs is contradictory. Recall that the domain of the preference ordering in Savage’s theory amounts to every function from the set of states to the set of outcomes (what Broome 1991a refers to as the Rectangular Field Assumption). So if “I drink lemonade this weekend in hot weather” is one of the outcomes we are working with, and we have partitioned the set of states according to the weather, then there must, for instance, be an act that has this outcome in the state where it is cold! The more detailed the outcomes (as required for the plausibility of State Neutrality), the less plausible the Rectangular Field Assumption. This is an internal tension in Savage’s framework. Indeed, it is difficult to see how/why a rational agent can/should form preferences over nonsensical acts (although see Dreier 1996 for an argument that this is not such an important issue). Without this assumption, however, the agent’s preference ordering will not be adequately rich for Savage’s rationality constraints to yield the EU representation result.[9]

The axiom in Savage’s theory that has received most attention is the Sure Thing Principle. It is not hard to see that this principle conflicts with Allais’ preferences for the same reason these preferences conflict with Independence (recall Section 2.3). Allais’ challenge will be discussed again later. For now, our concern is rather the Sure Thing Principle vis-à-vis the internal logic of Savage’s theory. To begin with, the Sure Thing Principle, like State Neutrality, exacerbates concerns about the Rectangular Field Assumption. This is because the Sure Thing Principle is only plausible if outcomes are specific enough to account for any sort of dependencies between outcomes in different states of the world. For instance, if the fact that one could have chosen a risk-free alternative—and thereby guaranteed an acceptable outcome—makes a difference to the desirability of receiving nothing after having taken a risk (as in Allais’ problem), then that has to be accounted for in the description of the outcomes. But again, if we account for such dependencies in the description of the outcomes, we run into the problem that there will be acts in the preference ordering that are nonsensical (see, e.g., Broome 1991a: ch. 5).

There is a further internal problem with Savage’s theory associated with the Sure Thing Principle: the principle is only reasonable when the decision model is constructed such that there is probabilistic independence between the acts an agent is considering and the states of the world that determine the outcomes of these acts. Recall that the principle states that if we have four options with the following form:

  \(E\) \(\neg E\)
\(f\) X Z
\(g\) Y Z
\(f'\) X W
\(g'\) Y W

then if \(g\) is weakly preferred to \(f\), \(g'\) must be weakly preferred to \(f'\). Suppose, however, that there is probabilistic dependency between the states of the world and the alternatives we are considering, and that we find \(Z\) to be better than both \(X\) and \(Y\), and we also find \(W\) to be better than both \(X\) and \(Y\). Moreover, suppose that \(g\) makes \(\neg E\) more likely than \(f\) does, and \(f'\) makes \(\neg E\) more likely than \(g'\) does. Then it seems perfectly reasonable to prefer \(g\) over \(f\) but \(f'\) over \(g'\).
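The reversal described above can be checked with hypothetical numbers; the utilities and act-dependent probabilities below are illustrative assumptions, chosen only to exhibit the pattern.

```python
# Hypothetical numbers illustrating the text's point: when the probability of
# not-E depends on the act chosen, the Sure Thing Principle can reasonably fail.

u = {"X": 1, "Y": 0, "Z": 10, "W": 10}  # Z and W better than both X and Y

# P(not-E | act): g makes not-E more likely than f; f' more likely than g'
p_not_E = {"f": 0.2, "g": 0.8, "f_prime": 0.8, "g_prime": 0.2}

outcome_E = {"f": "X", "g": "Y", "f_prime": "X", "g_prime": "Y"}
outcome_not_E = {"f": "Z", "g": "Z", "f_prime": "W", "g_prime": "W"}

def eu(act):
    """Expected utility with act-dependent state probabilities."""
    p = p_not_E[act]
    return (1 - p) * u[outcome_E[act]] + p * u[outcome_not_E[act]]

# g preferred to f, yet f' preferred to g' -- contrary to the STP
assert eu("g") > eu("f")
assert eu("f_prime") > eu("g_prime")
```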

Why is the requirement of probabilistic independence problematic? For one thing, in many real-world decision circumstances, it is hard to frame the decision model in such a way that states are intuitively probabilistically independent of acts. For instance, suppose an agent enjoys smoking, and is trying to decide whether to quit or not. How long she lives is amongst the contingencies that affect the desirability of smoking. It would be natural to partition the set of states according to how long the agent lives. But then it is obvious that the options she is considering could, and arguably should, affect how likely she finds each state of the world, since it is well recognised that life expectancy is reduced by smoking. Savage would thus require an alternative representation of the decision problem—the states do not reference life span directly, but rather the agent’s physiological propensity to react in a certain way to smoking.

Perhaps there is always a way to contrive decision models such that acts are intuitively probabilistically independent of states. But therein lies the more serious problem. Recall that Savage was trying to formulate a way of determining a rational agent’s beliefs from her preferences over acts, such that the beliefs can ultimately be represented by a probability function. If we are interested in real-world decisions, then the acts in question ought to be recognisable options for the agent (which we have seen is questionable). Moreover, now we see that one of Savage’s rationality constraints on preference—the Sure Thing Principle—is plausible only if the modelled acts are probabilistically independent of the states. In other words, this independence must be built into the decision model if it is to facilitate appropriate measures of belief and desire. But this is to assume that we already have important information about the beliefs of the agent whose attitudes we are trying to represent; namely what state-partitions she considers probabilistically independent of her acts.

The above problems suggest there is a need for an alternative theory of choice under uncertainty. Richard Jeffrey’s theory, which will be discussed next, avoids all of the problems that have been discussed so far. But as we will see, Jeffrey’s theory has well-known problems of its own, albeit problems that are not insurmountable.

3.2 Jeffrey’s theory

Richard Jeffrey’s expected utility theory differs from Savage’s in terms of both the prospects (i.e., options) under consideration and the rationality constraints on preferences over these prospects. The distinct advantage of Jeffrey’s theory is that real-world decision problems can be modelled just as the agent perceives them; the plausibility of the rationality constraints on preference does not depend on decision problems being modelled in a particular way. We first describe the prospects or decision set-up and the resultant expected utility rule, before turning to the pertinent rationality constraints on preferences and the corresponding theorem.

Unlike Savage, Jeffrey does not make a distinction between the objects of instrumental and non-instrumental desire (acts and outcomes respectively) and the objects of belief (states of the world). Rather, Jeffrey assumes that propositions describing states of affairs are the objects of both desire and belief. On first sight, this seems unobjectionable: just as we can have views about whether it will in fact rain, we can also have views about how desirable that would be. The uncomfortable part of this setup is that acts, too, are just propositions—they are ordinary states of affairs about which an agent has both beliefs and desires. Just as the agent has a preference ordering over, say, possible weather scenarios for the weekend, she has a preference ordering over the possible acts that she may perform, and in neither case is the most preferred state of affairs necessarily the most likely to be true. In other words, the only thing that picks out acts as special is their substantive content—these are the propositions that the agent has the power to choose/make true in the given situation. It is as if the agent assesses her own options for acting from a third-person rather than a first-person perspective. If one holds that a decision model should convincingly represent the subjective perspective of the agent in question, this is arguably a weakness of Jeffrey’s theory, although it may be one without consequence.[10]

Before proceeding, a word about propositions may be helpful: they are abstract objects that can be either true or false, and are commonly identified with sets of possible worlds. A possible world can be thought of as an abstract representation of how things are or could be (Stalnaker 1987; see also entry on possible worlds). The proposition that it rains at time \(t\), for example, is just the set of all worlds where it rains at time \(t\). And this particular proposition is true just in case the actual world happens to be a member of the set of all worlds where it rains at time \(t\).

The basic upshot of Jeffrey’s theory is that the desirability of a proposition, including one representing acts, depends both on the desirabilities of the different ways in which the proposition can be true, and the relative probability that it is true in these respective ways. To state this more precisely, \(p\), \(q\), etc., will denote propositional variables. Let \(\{p_1, p_2, …, p_n\}\) be one amongst many finite partitions of the proposition \(p\); that is, sets of mutually incompatible but jointly exhaustive ways in which the proposition \(p\) can be realised. For instance, if \(p\) is the proposition that it is raining, then we could partition this proposition very coarsely according to whether we go to the beach or not, but we could also partition \(p\) much more finely, for instance according to the precise millimetres-per-hour amount of rain. The desirability of \(p\) according to Jeffrey, denoted \(Des(p)\), is given by:

Jeffrey’s equation.
\(Des(p)=\sum_i Des(p_i)\cdot P(p_i\mid p)\)

This is effectively a conditional expected utility formula for evaluating \(p\). As noted, a special case is when the content of \(p\) is such that it is recognisably something the agent can choose to make true, i.e., an act.
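As a minimal numerical sketch (the partition, probabilities, and desirabilities below are all hypothetical), Jeffrey's equation for the rain example might look like this:

```python
# Sketch of Jeffrey's equation: Des(p) = sum_i Des(p_i) * P(p_i | p),
# where {p_1, ..., p_n} partitions p. All numbers are hypothetical.

def desirability(partition):
    """partition: list of (Des(p_i), P(p_i | p)) pairs for the cells of p."""
    total = sum(prob for _, prob in partition)
    assert abs(total - 1.0) < 1e-9  # the cells must jointly exhaust p
    return sum(des * prob for des, prob in partition)

# "It rains", partitioned coarsely by whether we go to the beach:
rain = [(-5.0, 0.1),   # rain and we go to the beach anyway
        (2.0, 0.9)]    # rain and we stay home instead
des_rain = desirability(rain)   # (-5.0)(0.1) + (2.0)(0.9), roughly 1.3
```

Each cell could itself be evaluated by the same formula applied to a finer partition; nothing in the equation requires the cells to be ultimate outcomes.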

One important difference between Jeffrey’s desirability formula and Savage’s expected utility formula is that Jeffrey draws no distinction between desirability and “expected” desirability. In Savage’s theory, by contrast, there is a clear distinction between utility, measuring an agent’s fundamental desires for ultimate outcomes, and expected utility, measuring an agent’s preferences over uncertain prospects or acts. This disanalogy is due to the fact that there is no sense in which the \(p_i\)s in terms of which \(p\) is evaluated need to be ultimate outcomes; they can themselves be thought of as uncertain prospects that are evaluated in terms of their different possible realisations.

Another important thing to notice about Jeffrey’s way of calculating desirability is that it does not assume probabilistic independence between the alternative that is being evaluated, \(p\), and the possible ways, the \(p_i\)s, that the alternative may be realised. Indeed, the probability of each \(p_i\) is explicitly conditional on the \(p\) in question. When it comes to evaluating acts, this is to say (in Savage’s terminology) that the probabilities for the possible state-outcome pairs for the act are conditional on the act in question. Thus we see why the agent can describe her decision problem just as she sees it; there is no requirement that she identify a set of states (in Jeffrey’s case, this would be a partition of the proposition space that is orthogonal to the act partition) such that the states are appropriately fine-grained and probabilistically independent of the acts.

It should moreover be evident, given the discussion of the Sure Thing Principle (STP) in Section 3.1, that Jeffrey’s theory does not have this axiom. Since states may be probabilistically dependent on acts, an agent can be represented as maximising the value of Jeffrey’s desirability function while violating the STP. Moreover, unlike Savage’s, Jeffrey’s representation theorem does not depend on anything like the Rectangular Field Assumption. The agent is not required to have preferences over artificially constructed acts or propositions that turn out to be nonsensical, given the interpretation of particular states and outcomes. In fact, only those propositions the agent considers to be possible (in the sense that she assigns them a probability greater than zero) are, according to Jeffrey’s theory, included in her preference ordering.

Of course, we still need certain structural assumptions in order to prove a representation theorem for Jeffrey’s theory. In particular, the set \(\Omega\), on which the preference ordering \(\preceq\) is defined, has to be an atomless Boolean algebra of propositions, from which the impossible proposition, denoted \(\bot\), has been removed. A Boolean algebra is just a set of, e.g., propositions or sentences that is closed under conjunction, disjunction, and negation. An algebra is atomless just in case all of its elements can be partitioned into finer elements. The assumption that \(\Omega\) is atomless is thus similar to Savage’s P6, and can be given a similar justification: any way \(p_i\) in which \(p\) can be true can be partitioned into two further propositions according to how some coin would land if tossed.

So under what conditions can a preference relation \(\preceq\) on the set \(\Omega\) be represented as maximising desirability? Some of the required conditions on preference should be familiar by now and will not be discussed further. In particular, \(\preceq\) has to be transitive, complete and continuous (recall our discussion in Section 2.3 of vNM’s Continuity preference axiom).

The next two conditions are, however, not explicitly part of the two representation theorems that have been considered so far:

Averaging
If \(p,\ q\in \Omega\) are mutually incompatible, then

\[p\preceq q\Leftrightarrow p\preceq p\cup q\preceq q\]

Impartiality
Suppose \(p,\ q\in \Omega\) are mutually incompatible and \(p\sim q\). Then if \(p\cup r\sim q\cup r\) for some \(r\) that is mutually incompatible with both \(p\) and \(q\) and is such that \(\neg(r\sim p)\), then \(p\cup r\sim q\cup r\) for every such \(r\).

Averaging is the distinguishing rationality condition in Jeffrey’s theory. It can actually be seen as a weak version of Independence and the Sure Thing Principle, and it plays a similar role in Jeffrey’s theory. But it is not directly inconsistent with Allais’ preferences, and its plausibility does not depend on the type of probabilistic independence that the STP implies. The postulate requires that no proposition be strictly better or worse than all of its possible realisations, which seems to be a reasonable requirement. When \(p\) and \(q\) are mutually incompatible, \(p\cup q\) implies that either \(p\) or \(q\) is true, but not both. Hence, it seems reasonable that \(p\cup q\) should be neither strictly more nor less desirable than both \(p\) and \(q\). Suppose one of \(p\) or \(q\) is more desirable than the other. Then since \(p\cup q\) is compatible with the truth of either the more or the less desirable of the two, \(p\cup q\)’s desirability should fall strictly between that of \(p\) and that of \(q\). However, if \(p\) and \(q\) are equally desirable, then \(p\cup q\) should be as desirable as each of the two.
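
The probability-weighted averaging behind this axiom can be made concrete. On Jeffrey’s account, the desirability of a disjunction of mutually incompatible propositions is the average of the disjuncts’ desirabilities, weighted by their probabilities; Averaging then holds automatically. A minimal sketch, with invented probabilities and desirabilities:

```python
# Jeffrey desirability of a disjunction of mutually incompatible
# propositions p and q: the probability-weighted average of the
# desirabilities of the disjuncts. All numbers here are invented,
# purely for illustration.

def des_union(prob_p, des_p, prob_q, des_q):
    """Desirability of (p or q), for incompatible p and q."""
    return (prob_p * des_p + prob_q * des_q) / (prob_p + prob_q)

p_prob, p_des = 0.2, 10.0   # p: less probable, more desirable
q_prob, q_des = 0.3, 4.0    # q: more probable, less desirable

d = des_union(p_prob, p_des, q_prob, q_des)   # = 6.4

# Averaging: the disjunction is neither strictly better nor strictly
# worse than both of its possible realisations.
assert min(p_des, q_des) <= d <= max(p_des, q_des)
```

Since a weighted average of two quantities always lies weakly between them, any agent representable by such a desirability measure satisfies Averaging.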

The intuitive appeal of Impartiality, which plays a similar role in Jeffrey’s theory as P4 does in Savage’s, is not as great as that of Averaging. Jeffrey himself admitted as much in his comment:

The axiom is there because we need it, and it is justified by our antecedent belief in the plausibility of the result we mean to deduce from it. (1965: 147)

Nevertheless, it does seem that an argument can be made that any reasonable person will satisfy this axiom. Suppose you are indifferent between two propositions, \(p\) and \(q\), that cannot be simultaneously true. And suppose now we find a proposition \(r\), that is pairwise incompatible with both \(p\) and \(q\), and which you find more desirable than both \(p\) and \(q\). Then if it turns out that you are indifferent between \(p\) joined with \(r\) and \(q\) joined with \(r\), that must be because you find \(p\) and \(q\) equally probable. Otherwise, you would prefer the union that contains the one of \(p\) and \(q\) that you find less probable, since that gives you a higher chance of the more desirable proposition \(r\). It then follows that for any other proposition \(s\) that satisfies the aforementioned conditions that \(r\) satisfies, you should also be indifferent between \(p\cup s\) and \(q\cup s\), since, again, the two unions are equally likely to result in \(s\).

The first person to prove a theorem stating sufficient conditions for a preference relation to be representable as maximising the value of a Jeffrey-desirability function was actually not Jeffrey himself, but the mathematician Ethan Bolker (1966, 1967). He proved the following result (recall the definition of a “desirability measure” given above):[11]

Theorem 4 (Bolker)
Let \(\Omega\) be a complete and atomless Boolean algebra of propositions, and \(\preceq\) a continuous, transitive and complete relation on \(\Omega \setminus \bot \), that satisfies Averaging and Impartiality. Then there is a desirability measure on \(\Omega \setminus \bot \) and a probability measure on \(\Omega\) relative to which \(\preceq\) can be represented as maximising desirability.

Unfortunately, Bolker’s representation theorem does not yield a result anywhere near as unique as Savage’s. Even if a person’s preferences satisfy all the conditions in Bolker’s theorem, it is neither guaranteed that there will be just one probability function representing her beliefs, nor that the desirability function representing her desires will be unique up to a positive linear transformation (unless her preferences are unbounded). Even worse, the same preference ordering satisfying all these axioms could be represented as maximising desirability relative to two probability functions that do not even agree on how to order propositions according to their probability.[12]

For those who think that the only way to determine a person’s comparative beliefs is to look at her preferences, the lack of uniqueness in Jeffrey’s theory is a big problem. Indeed, this may be one of the main reasons why economists have largely ignored Jeffrey’s theory. Economists have traditionally been skeptical of any talk of a person’s desires and beliefs that goes beyond what can be established by examining the person’s preferences, which they take to be the only attitude that is directly revealed by a person’s behaviour. For these economists, it is therefore unwelcome news if we cannot even in principle determine the comparative beliefs of a rational person by looking at her preferences.

Those who are less inclined towards behaviourism might, however, not find this lack of uniqueness in Bolker’s theorem to be a problem. James Joyce (1999), for instance, thinks that Jeffrey’s theory gets things exactly right in this regard, since one should not expect that reasonable conditions imposed on a person’s preferences would suffice to determine a unique probability function representing the person’s beliefs. It is only by imposing overly strong conditions, as Savage does, that we can achieve this. However, if uniqueness is what we are after, then we can, as Joyce points out, supplement the Bolker-Jeffrey axioms with certain conditions on the agent’s comparative belief relation (e.g. those proposed by Villegas 1964) that, together with the Bolker-Jeffrey axioms, ensure that the agent’s preferences can be represented by a unique probability function and a desirability function that is unique up to a positive linear transformation.

Instead of adding specific belief-postulates to Jeffrey’s theory, as Joyce suggests, one can get the same uniqueness result by enriching the set of prospects. Richard Bradley (1998) has, for instance, shown that if one extends the Boolean algebra in Jeffrey’s theory to indicative conditionals, then a preference relation on the extended domain that satisfies the Bolker-Jeffrey axioms (and some related axioms that specifically apply to conditionals) will be representable as maximising desirability, where the probability function is unique and the desirability function is unique up to a positive linear transformation.

4. Broader implications of Expected Utility (EU) theory

It was noted from the outset that EU theory is as much a theory of rational choice, or overall preferences amongst acts, as it is a theory of rational belief and desire. This section expands, in turn, on the epistemological and evaluative commitments of EU theory.

4.1 On rational belief

Some refer to EU theory as Bayesian decision theory. This label brings to the forefront the commitment to probabilism, i.e., that beliefs may come in degrees which, on pain of irrationality, can be represented numerically as probabilities. So there is a strong connection between EU theory and probabilism, or more generally between rational preference and rational belief. (The finer details of rational preference and associated rational belief are not the focus here; challenges to EU theory on this front are addressed in Section 5 below.)

Some take the connection between rational preference and rational belief to run very deep indeed. At the far end of the spectrum is the position that the very meaning of belief involves preference. Indeed, recall this manoeuvre in Savage’s theory, discussed earlier in Section 3.1. Many question the plausibility, however, of equating comparative belief with preferences over specially contrived prospects. A more moderate position is to regard these preferences as entailed by, but not identical with, the relevant comparative beliefs. Whether or not beliefs merely ground or are defined in terms of preference, there is a further question as to whether the only justification for rational belief having a certain structure (say, conforming to the probability calculus) is a pragmatic one, i.e., an argument resting on the agent’s preferences being otherwise inconsistent or self-defeating. A recent defender of this kind of pragmatism (albeit cast in more general terms) is Rinard (e.g., 2017). Others contend that accounts of rational belief can and should be ultimately justified on epistemic grounds; Joyce (1998), for instance, offers a non-pragmatic justification of probabilism that rests on the notion of overall “distance from the truth” of one’s beliefs. (For further developments of this position, see the entry on epistemic utility arguments for probabilism.)

Notwithstanding these finer disputes, Bayesians agree that pragmatic considerations play a significant role in managing beliefs. One important way, at least, in which an agent can interrogate her degrees of belief is to reflect on their pragmatic implications. Furthermore, whether or not to seek more evidence is a pragmatic issue; it depends on the “value of information” one expects to gain with respect to the decision problem at hand. The idea is that seeking more evidence is an action that is choice-worthy just in case the expected utility of seeking further evidence before making one’s decision is greater than the expected utility of making the decision on the basis of existing evidence. This reasoning was made prominent in a paper by Good (1967), where he proves that one should always seek “free evidence” that may have a bearing on the decision at hand. (Precursors of this theorem can be found in Ramsey 1990, published posthumously, and Savage 1954.) Note that the theorem assumes the standard Bayesian learning rule known as “conditionalisation”, which requires that when one’s learning experience has the form of coming to know some proposition (to which one had assigned positive probability) for sure, one’s new degrees of belief should be equal to one’s old degrees of belief conditional on the proposition that now has probability one. Indeed, the fact that conditionalisation plays a crucial role in Good’s result about the non-negative value of free evidence is taken by some as providing some justification for this learning rule.
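
Good’s result can be illustrated with a toy calculation (all numbers below are invented for the example): an agent chooses between two acts, may first observe a free test result, and updates by conditionalisation. The expected utility of deciding after the observation is at least that of deciding immediately:

```python
# Toy illustration of Good's (1967) result, with invented numbers: a
# free test may be observed before choosing between acts A and B, and
# beliefs are updated by conditionalisation. Deciding after seeing the
# free evidence is worth at least as much as deciding now.

prior = {"S1": 0.6, "S2": 0.4}          # prior degrees of belief in the states
utils = {"A": {"S1": 10, "S2": 0},      # utility of each act in each state
         "B": {"S1": 0, "S2": 8}}
p_pos = {"S1": 0.9, "S2": 0.2}          # P(test reads "positive" | state)

def eu(act, belief):
    return sum(belief[s] * utils[act][s] for s in belief)

# Expected utility of deciding now: pick the best act under the prior.
value_now = max(eu(a, prior) for a in utils)

# Expected utility of observing first, conditionalising, then choosing.
value_with_info = 0.0
for reading in ("pos", "neg"):
    likelihood = {s: p_pos[s] if reading == "pos" else 1 - p_pos[s]
                  for s in prior}
    p_reading = sum(likelihood[s] * prior[s] for s in prior)
    posterior = {s: likelihood[s] * prior[s] / p_reading for s in prior}
    value_with_info += p_reading * max(eu(a, posterior) for a in utils)

assert value_with_info >= value_now   # free evidence never hurts (7.96 vs 6.0 here)
```

The inequality is no accident of these numbers: since the agent can always ignore the evidence and choose as she would have anyway, conditionalising and then maximising can never do worse in expectation, which is the core of Good’s theorem.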

So EU theory or Bayesian decision theory underpins a powerful set of epistemic norms. It has been taken as the appropriate account of scientific inference, giving rise to a school of statistical inference and experimental design and inviting formal interpretations of key concepts like “evidence”, “evidential support”, “induction” versus “abduction”, and the bearing of “coherence” and “explanatory power” on truth (see the relevant related entries). The major competitor to Bayesianism, as regards scientific inference, is arguably the collection of approaches known as Classical or Error statistics, which deny the sense of “degrees of support” (probabilistic or otherwise) conferred on a hypothesis by evidence. These approaches focus instead on whether a hypothesis has survived various “severe tests”, and inferences are made with an eye to the long-run properties of tests as opposed to how they perform in any single case, which would require decision-theoretic reasoning (see the entry on philosophy of statistics).

4.2 On rational desire

EU theory takes a stance on the structure of rational desire too. In this regard, the theory has been criticised on opposing fronts. We consider first the criticism that EU theory is too permissive with respect to what may influence an agent’s desires. We then turn to the opposing criticism: that when it comes to desire, EU theory is not permissive enough.

The worry that EU theory is too permissive with respect to desire is related to the worry that the theory is unfalsifiable. The worry is that apparently irrational preferences by the lights of EU theory can always be construed as rational, under a suitable description of the options under consideration. As discussed in Section 1 above, preferences that seem to violate Transitivity can be construed as consistent with this axiom so long as the options being compared vary in their description depending on, amongst other things, the other options under consideration. The same goes for preferences that seem to violate Separability or Independence (of the contribution of each outcome to the overall value of an option), discussed further in Section 5.1 below. One might argue that this is the right way to describe such agents’ preferences. After all, an apt model of preference is supposedly one that captures, in the description of final outcomes and options, everything that matters to an agent. In that case, however, EU theory is effectively vacuous or impotent as a standard of rationality to which agents can aspire. Moreover, it stretches the notion of what are genuine properties of outcomes that can reasonably confer value or be desirable for an agent.

There are two ways one can react to the idea that an agent’s preferences are necessarily consistent with EU theory, with the above-mentioned implications for what the agent may desire:

  • One can resist the claim, asserting that there are additional constraints on the content of an agent’s preferences. On the one hand there may be empirical constraints whereby the content of preferences is determined by some tradeoff between fit and simplicity in representing the agent’s greater “web” of preference attitudes. On the other hand there may be normative constraints with respect to what sorts of outcomes an agent may reasonably discriminate (for relevant discussion, see Tversky 1975; Broome 1991a & 1993; Pettit 1993; Dreier 1996; Guala 2006; Vredenburgh 2020).

  • One can alternatively embrace the claim, interpreting EU theory not as a standard against which an agent may pass or fail, but rather as an organising principle that enables the characterisation of an agent’s desires as well as her beliefs (see esp. Guala 2008).

Either way, it may yet be argued that EU theory does not go far enough in structuring an agent’s preference attitudes so that we may understand the reasons for these preference attitudes. Dietrich and List (2013 & 2016a) have proposed a more general framework that fills this lacuna. In their framework, preferences satisfying some minimal constraints are representable as dependent on the bundle of properties in terms of which each option is perceived by the agent in a given context. Properties can, in turn, be categorised as either option properties (which are intrinsic to the outcome), relational properties (which concern the outcome in a particular context), or context properties (which concern the context of choice itself). Such a representation permits more detailed analysis of the reasons for an agent’s preferences and captures different kinds of context-dependence in an agent’s choices. Furthermore, it permits explicit restrictions on what counts as a legitimate reason for preference, or in other words, what properties legitimately feature in an outcome description; such restrictions may help to clarify the normative commitments of EU theory.

There are also less general models that offer templates for understanding the reasons underlying preferences. For instance, the multiple criteria decision framework (see, for instance, Keeney and Raiffa 1993) takes an agent’s overall preference ordering over options to be an aggregate of the set of preference orderings corresponding to all the pertinent dimensions of value. Under certain assumptions, the overall or aggregate preference ordering is compatible with EU theory. One might otherwise seek to understand the role of time, or the temporal position of goods, on preferences. To this end, outcomes are described in terms of temporally-indexed bundles of goods, or consumption streams (for an early model of this kind see Ramsey 1928; a later influential treatment is Koopmans 1960). There may be systematic structure to an agent's preferences over these consumption streams, over and above the structure imposed by the EU axioms of preference. For instance, the aforementioned authors considered and characterised preferences that exhibit exponential time discounting.
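
The exponential discounting model mentioned above has a simple form: the value of a consumption stream \(c_0, c_1, \ldots\) is \(\sum_t \delta^t u(c_t)\) for some discount factor \(\delta\). A minimal sketch (the stream, the discount factor and the linear utility below are invented for illustration):

```python
# Exponentially discounted utility of a consumption stream:
#   U = sum over periods t of delta**t * u(c_t).
# The stream, discount factor and (linear) utility are illustrative only.

def discounted_utility(stream, delta, u=lambda c: c):
    return sum((delta ** t) * u(c) for t, c in enumerate(stream))

# The same good is worth less the later it arrives:
early = discounted_utility([5, 0, 0], delta=0.9)   # 5 * 0.9**0 = 5.0
late = discounted_utility([0, 0, 5], delta=0.9)    # 5 * 0.9**2 = 4.05
assert early > late
```

A notable feature of the exponential form, as opposed to, say, hyperbolic discounting, is dynamic consistency: the trade-off between any two periods depends only on the delay between them, not on when the comparison is made.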

Let’s turn now to the opposing kind of criticism: that the limited constraints that EU theory imposes on rational preference and desire are nonetheless overly restrictive. Here the focus will be on the compatibility of EU theory with prominent ethical positions regarding the choice-worthiness of acts, as well as meta-ethical positions regarding the nature of value and its relationship to belief.

One may well wonder whether EU theory, indeed decision theory more generally, is neutral with respect to normative ethics, or whether it is compatible only with ethical consequentialism, given that the ranking of an act is fully determined by the utility of its possible outcomes. Such a model seems at odds with nonconsequentialist ethical theories for which the choice-worthiness of acts purportedly depends on more than the moral value of their consequences. The model does not seem able to accommodate basic deontological notions like agent relativity, absolute prohibitions or permissible and yet suboptimal acts.

An initial response, however, is that one should not read too much into the formal concepts of decision theory. The utility measure over acts and outcomes is simply a convenient way to represent an ordering, and leaves much scope for different ways of identifying and evaluating outcomes. Just as an agent’s utility function need not be insensitive to ethical considerations in general (a common misconception due to the prevalence of selfish preferences in economic models; see, for instance, Sen 1977), so too it need not be insensitive to specifically nonconsequentialist or deontological ethical considerations. It all depends on how acts and their outcomes are distinguished and evaluated. For starters, the character of an act may feature as a property of all its possible outcomes. Moreover, whether some event befalls or is perpetrated by the deciding agent or rather someone else may be relevant. That an act involves lying, say, can be referenced in all possible outcomes of the act, and furthermore this lying on the part of the deciding agent can be distinguished from the lying of others. In general, acts and their outcomes can be distinguished according to whatever matters morally, be it a complex relational property to do with how and when the act is chosen, by whom, and/or in what way some state of affairs results from the act. For early discussions on how a wide range of ethical properties can be accommodated in the description of acts and outcomes, see, for instance, Sen (1982), Vallentyne (1988), Broome (1991b) and Dreier (1993). This idea has since been embraced by others associated with the so-called “consequentializing” program, including Louise (2004) and Portmore (2007). The idea is that the normative advice of putatively nonconsequentialist ethical theories can be represented in terms of a ranking of acts/outcomes corresponding to some value function, as per consequentialist ethical theories (see too Colyvan et al. 2010).

A sticking point for reconciling decision theory with all forms of nonconsequentialism is the difficulty in accommodating absolute prohibitions or side constraints (see Oddie and Milne 1999; Jackson and Smith 2006). For instance, suppose there is a moral prohibition against killing an innocent person, whatever else is at stake. Perhaps such a constraint is best modelled in terms of a lexical ranking and corresponding value function, whereby the killing-innocents status of an act/outcome takes priority in determining its relative rank/value. But this has counterintuitive implications in the face of risk since very many acts will have some chance, however small, of killing an innocent. The lesson here may simply be that the theories in question require development; any mature ethical theory owes us an account of how to act under risk or uncertainty. What is arguably a more compelling challenge for the reconciliation of decision theory and nonconsequentialism is the accommodation of “agent-centred options” and associated “supererogation”. Portmore (e.g., 2007) and Lazar (e.g., 2017) offer proposals to this effect, which appeal (in different ways) to the moral ranking of acts/outcomes as distinct from the personal costs to the agent of pursuing these acts/outcomes.

To the extent that decision theory can be reconciled with the full range of ethical theories, should we say that there are no meaningful distinctions between these theories? Brown (2011) and Dietrich and List (2017) demonstrate that in fact the choice-theoretic representation of ethical theories better facilitates distinctions between them; terms like “(non)consequentialism” can be precisely defined, albeit in debatable ways. More generally, we can catalogue theories in terms of the kinds of properties (whether intrinsic or in some sense relational) that distinguish acts/outcomes and also in terms of the nature of the ranking of acts/outcomes that they yield (whether transitive, complete, continuous and so on). This also serves to reveal departures from EU theory.

Indeed, some of the most compelling counterexamples to EU axioms of preference rest on ethical considerations. Recall our earlier discussion of the basic Ordering axioms in Section 1. The Transitivity axiom has been challenged by appeal to ethically-motivated examples of preference cycles (see Temkin 2012). The notion of a non-continuous lexical ordering was mentioned above in relation to ethical side constraints. The dispensability of the Completeness axiom, too, is often motivated by appeal to examples involving competing ethical values that are difficult to trade off against each other, like average versus total welfare. Other suggestive examples against Completeness involve competing notions of personal welfare (see, e.g., Levi 1986; Chang 2002). Must a rational agent have a defined preference between, say, two career options that pull in different directions as regards opportunities for creative self-expression versus community service (perhaps a career as a dancer versus a career as a doctor in remote regions)? Note that some of these challenges to EU theory are discussed in more depth in Section 5 below.

Finally, we turn to the potential meta-ethical commitments of EU theory. David Lewis (1988, 1996) famously employed EU theory to argue against anti-Humeanism, the position that we are sometimes moved entirely by our beliefs about what would be good, rather than by our desires as the Humean claims. He formulated the anti-Humean theory as postulating a necessary connection between, on the one hand, an agent’s desire for any proposition \(A\), and, on the other hand, her belief in a proposition about \(A\)’s goodness; and claimed to prove that when such a connection is formulated in terms of EU theory, the agent in question will be dynamically incoherent. Several people have criticised Lewis’s argument. For instance, Broome (1991c), Byrne and Hájek (1997) and Hájek and Pettit (2004) suggest formulations of anti-Humeanism that are immune to Lewis’ criticism, while Stefánsson (2014) and Bradley and Stefánsson (2016) argue that Lewis’ proof relies on a false assumption. Nevertheless, Lewis’ argument no doubt provoked an interesting debate about the sorts of connections between belief and desire that EU theory permits. There are, moreover, further questions of meta-ethical relevance that one might investigate regarding the role and structure of desire in EU theory. For instance, Jeffrey (1974) and Sen (1977) offer some preliminary investigations as to whether the theory can accommodate higher-order desires/preferences, and if so, how these relate to first-order desires/preferences.

5. Challenges to EU theory

Thus far the focus has been on prominent versions of the standard theory of rational choice: EU theory. This section picks up on some key criticisms of EU theory that have been developed into alternative accounts of rational choice. The proposed innovations to the standard theory are distinct and so are discussed separately, but they are not necessarily mutually exclusive. Note that we do not address all criticisms of EU theory that have inspired alternative accounts of rational choice. Two major omissions of this sort (for want of space and also because they have been thoroughly addressed in alternative entries of this encyclopedia) are i) the problem of causal anomalies and the development of causal decision theory (see the entry on causal decision theory), and ii) the problem of infinite state spaces and the development of alternatives like “relative expectation theory” (see the entries on normative theories of rational choice: expected utility theory and the St. Petersburg paradox).

5.1 On risk and regret attitudes

Expected utility theory has been criticised for not allowing for value interactions between outcomes in different, mutually incompatible states of the world. For instance, recall that when deciding between two risky options you should, according to Savage’s version of the theory, ignore the states of the world where the two options result in the same outcome. That seems very reasonable if we can assume separability between outcomes in different states of the world, i.e., if the contribution that an outcome in one state of the world makes towards the overall value of an option is independent of what other outcomes the option might result in. For then identical outcomes (with equal probabilities) should cancel each other out in a comparison of two options, which would entail that if two options share an outcome in some state of the world, then when comparing the options, it does not matter what that shared outcome is.

The Allais paradox, discussed in Section 2.3 above, is a classic example where the aforementioned separability seems to fail. For ease of reference, the options that generate the paradox are reproduced as Table 3. Recall from Section 2.3 that people tend to prefer \(L_2\) over \(L_1\) and \(L_3\) over \(L_4\)—an attitude that has been called Allais’ preferences—in violation of expected utility theory. The violation occurs precisely because the contributions that some of these outcomes make towards the overall value of an option are not independent of the other outcomes that the option can have. Compare the extra chance of outcome $0 that \(L_1\) has over \(L_2\) with the same extra chance of $0 that \(L_3\) has over \(L_4\). Many people think that this extra chance counts more heavily in the first comparison than in the second, i.e., that an extra 0.01 chance of $0 contributes a greater negative value to \(L_1\) than to \(L_3\). Some explain this by pointing out that the regret one would experience by winning nothing when one could have had $2400 for sure—i.e., when choosing \(L_1\) over \(L_2\) and the first ticket is drawn—is much greater than the regret one would experience by winning nothing when the option one turned down also had a high chance of resulting in $0—such as when choosing \(L_3\) over \(L_4\) (see, e.g., Loomes and Sugden 1982). But whether or not the preference in question should be explained by the potential for regret, it would seem that the desirability of the $0-outcome depends on what could (or would) otherwise have been, in violation of the aforementioned assumption of separability. (See Thoma 2020a for a recent extensive discussion of this assumption.)

         Ticket 1   Tickets 2–34   Tickets 35–100
\(L_1\)  $0         $2500          $2400
\(L_2\)  $2400      $2400          $2400

         Ticket 1   Tickets 2–34   Tickets 35–100
\(L_3\)  $0         $2500          $0
\(L_4\)  $2400      $2400          $0

Table 3. Allais’ paradox
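
The inconsistency with expected utility theory can be checked directly: whatever utilities one assigns to the three monetary outcomes, the expected utilities of \(L_1\) and \(L_3\) exceed those of \(L_2\) and \(L_4\) by exactly the same amount, so the two comparisons must be resolved the same way. A quick numerical check (the randomly generated utilities are purely illustrative):

```python
# Why no expected utility maximiser can have Allais' preferences
# (L2 over L1 together with L3 over L4): for ANY utilities assigned
# to $0, $2400 and $2500, both comparisons differ by the same amount,
#   0.01*u(0) + 0.33*u(2500) - 0.34*u(2400),
# so they must go the same way. Random utilities here are purely
# for illustration.

import random

L1 = {0: 0.01, 2500: 0.33, 2400: 0.66}
L2 = {2400: 1.0}
L3 = {0: 0.67, 2500: 0.33}
L4 = {0: 0.66, 2400: 0.34}

def eu(lottery, u):
    return sum(p * u[x] for x, p in lottery.items())

for _ in range(1000):
    u = {x: random.uniform(0.0, 100.0) for x in (0, 2400, 2500)}
    # The two expected-utility differences are identical for every u.
    assert abs((eu(L1, u) - eu(L2, u)) - (eu(L3, u) - eu(L4, u))) < 1e-9
```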

Various attempts have been made to make Allais’ preferences compatible with some version of expected utility theory. A common response is to suggest that the choice problem has been incorrectly described. If it really is rational to evaluate $0 differently depending on which lottery it is part of, then perhaps this should be accounted for in the description of the outcomes (Broome 1991a). For instance, we could add a variable to the $0 outcome that \(L_1\) might result in to represent the extra regret or risk associated with that outcome compared to the $0 outcomes from the other lotteries (as done in Table 4). If we do that, Allais’ preferences are no longer inconsistent with EU theory. The simplest way to see this is to note that when we ignore the state of the world where the options that are being compared have the same outcome (i.e., when we ignore the last column in Table 4), \(L_1\) is no longer identical to \(L_3\), which means that the Independence axiom of von Neumann and Morgenstern (and Savage’s Sure Thing Principle) no longer requires that one prefer \(L_2\) over \(L_1\) only if one prefers \(L_4\) over \(L_3\).

         Ticket 1          Tickets 2–34   Tickets 35–100
\(L_1\)  $0 + \(\delta\)   $2500          $2400
\(L_2\)  $2400             $2400          $2400

         Ticket 1          Tickets 2–34   Tickets 35–100
\(L_3\)  $0                $2500          $0
\(L_4\)  $2400             $2400          $0

Table 4. Allais’ paradox re-described

The above “re-description strategy” could be employed whenever the value and/or contribution of an outcome depends on other possible outcomes: just describe the outcomes in a way that accounts for this dependency. But more worryingly, the strategy could be employed whenever one comes across any violation of expected utility theory or other theories of rationality (as discussed in Section 4.2).

Lara Buchak (2013) has recently developed a decision theory that can accommodate Allais’ preferences without re-describing the outcomes. On Buchak’s interpretation, the explanation for Allais’ preferences is not the different value that the outcome $0 has depending on what lottery it is part of. The outcome itself has the same value. However, the contribution that $0 makes towards the overall value of an option partly depends on what other outcomes are possible, she suggests, which reflects the fact that the option-risk that the possibility of $0 generates depends on what other outcomes the option might result in. To accommodate Allais’ preferences (and other intuitively rational attitudes to risk that violate EU theory), Buchak introduces a risk function that represents people’s willingness to trade chances of something good for risks of something bad. And she shows that if an agent satisfies a particular set of axioms, which is essentially Savage’s except that the Sure Thing Principle is replaced with a strictly weaker one, then the agent’s preferences can be represented as maximising risk-weighted expected utility, which is essentially Savage-style expected utility weighted by a risk function.
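
To illustrate how a risk function can deliver Allais’ preferences, here is a sketch of a risk-weighted expected utility calculation; the convex risk function \(r(p) = p^2\) and the utilities assigned to the monetary outcomes are invented for the example (nothing here is Buchak’s own choice of numbers). With an option’s utilities ordered from worst to best, each successive improvement is weighted by the risk-transformed probability of attaining it:

```python
# Sketch of risk-weighted expected utility (REU): with outcomes
# ordered from worst to best (u_1 <= ... <= u_n),
#   REU = u_1 + sum_{i>1} r(P(utility >= u_i)) * (u_i - u_{i-1}).
# The risk function r(p) = p**2 and the utilities below are invented
# purely for illustration.

def reu(lottery, u, r=lambda p: p ** 2):
    levels = sorted(lottery, key=lambda x: u[x])   # outcomes, worst first
    total = u[levels[0]]
    tail = 1.0
    for prev, nxt in zip(levels, levels[1:]):
        tail -= lottery[prev]          # P(doing strictly better than prev)
        total += r(tail) * (u[nxt] - u[prev])
    return total

u = {0: 0.0, 2400: 1.0, 2500: 1.1}    # invented utilities for $0, $2400, $2500
L1 = {0: 0.01, 2500: 0.33, 2400: 0.66}
L2 = {2400: 1.0}
L3 = {0: 0.67, 2500: 0.33}
L4 = {0: 0.66, 2400: 0.34}

# With r(p) = p**2 this agent exhibits Allais' preferences...
assert reu(L2, u) > reu(L1, u)   # prefers the sure $2400
assert reu(L3, u) > reu(L4, u)   # yet prefers the longer shot at $2500

# ...although no expected utility maximiser with these same utilities
# could: EU(L1) = 0.33*1.1 + 0.66*1 = 1.023 > 1 = EU(L2).
```

The point of the sketch is that the pattern comes from the risk function, not the utilities: with \(r(p) = p\) the formula reduces to ordinary expected utility, and the Allais pattern becomes impossible.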

Bradley and Stefánsson (2017) also develop a new decision theory partly in response to the Allais paradox. But unlike Buchak, they suggest that what explains Allais’ preferences is that the value of winning nothing from a chosen lottery partly depends on what would have happened had one chosen differently. To accommodate this, they extend the Boolean algebra in Jeffrey’s decision theory to counterfactual propositions, and show that Jeffrey’s extended theory can represent the value-dependencies one often finds between counterfactual and actual outcomes. In particular, their theory can capture the intuition that the (un)desirability of winning nothing partly depends on whether or not one was guaranteed to win something had one chosen differently. Therefore, their theory can represent Allais’ preferences as maximising the value of an extended Jeffrey-desirability function.

Stefánsson and Bradley (2019) suggest yet another way of accounting for Allais’ preferences in an extension of Jeffrey’s decision theory; this time extended to chance propositions, that is, propositions describing objective probability distributions. The general idea is that the desirability of a particular increase or decrease in the chance of some outcome—for instance, in the Allais case, a 0.01 increase in the chance of the $0-outcome—might depend on what the chances were before the increase or decrease. Stefánsson and Bradley’s extension of Jeffrey’s theory to chance propositions is also motivated by the fact that standard decision theories do not distinguish between risk aversion with respect to some good and attitudes to quantities of that good (a conflation found problematic by, for instance, Hansson 1988, Rabin 2000, and Buchak 2013).

5.2 On completeness: Vague beliefs and desires

As noted in Section 4, criticisms of the EU requirement of a complete preference ordering are motivated by both epistemic and desire/value considerations. On the value side, many contend that a rational agent may simply find two options incomparable due to their incommensurable qualities. (Here a prominent usage of these terms will be followed, whereby particular options may be described as incomparable in value, while general properties or dimensions of value may be described as incommensurable.) That is, the agent’s evaluations of the desirability of sure options may not be representable by any precise utility function. Likewise, on the belief side, some contend (notably, Joyce 2010 and Bradley 2017) that the evidence may be such that it does not commit a rational agent to precise degrees of belief measurable by a unique probability function.

There are various alternative, “fuzzier” representations of desire and belief that might be deemed more suitable. Halpern (2003), for instance, investigates different ways of conceptualising and representing epistemic uncertainty, once we depart from probabilities. Presumably there are also various ways to represent uncertain desire. Here the focus will be on just one proposal that is popular amongst philosophers: the use of sets of probability and utility functions to represent uncertainty in belief and desire respectively. This is a minimal generalisation of the standard EU model, in the sense that probability and utility measures still feature. Roughly, the more severe the epistemic uncertainty, the more probability measures over the space of possibilities are needed to conjointly represent the agent’s beliefs. This notion of rational belief is referred to as imprecise probabilism (see the entry on imprecise probabilities). Likewise, the more severe the evaluative uncertainty, the more utility measures over the space of sure options are needed to conjointly represent the agent’s desires. Strictly speaking, we should not treat belief and desire separately, but rather talk of the agent’s incomplete preferences being represented by a set of probability and utility pairs. Recall the requirement that incomplete preferences be coherently extendible (refer back to Section 1); on this representation, all the probability-utility pairs amount to candidate extensions of the incomplete preferences.

The question then arises: Is there a conservative generalisation of the EU decision rule that can handle sets of probability and utility pairs? Contender decision rules are standardly framed in terms of choice functions that take as input some set of feasible options and return as output a non-empty set of admissible choices that is a subset of the feasible options. A basic constraint on these choice functions is that they respect the agent’s preferences in cases where options are in fact comparable. That is, if all pairs of probability and utility functions characterising the agent’s attitudes agree on the ranking of two options, then these particular options should be ranked accordingly. The relevant constraint on choice functions is that “EU-dominated options” are not admissible choices, i.e., if an option has lower expected utility than another option according to all pairs of probability and utility functions, then the former dominated option is not an admissible choice. Note that Levi (1986) has a slightly more restrictive condition on admissibility: if an option does not have maximum EU for at least one pair of probability and utility functions, then it is not admissible. In ordinary cases where sets of probability and utility functions are closed convex sets, however, Levi’s condition is equivalent to the aforementioned one that rules out EU-dominated options (Schervish et al. 2003).
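
The admissibility constraint just described can be sketched in a few lines. This is an illustrative formulation (not any particular author’s), with hypothetical states, outcomes and numbers:

```python
# An option is inadmissible if some rival has higher expected utility
# according to *every* probability-utility pair representing the agent.

def expected_utility(option, p, u):
    """option: {state: outcome}; p: {state: probability}; u: {outcome: utility}."""
    return sum(p[s] * u[option[s]] for s in option)

def admissible(options, pairs):
    """Filter out the EU-dominated options."""
    def dominated(a):
        return any(
            all(expected_utility(b, p, u) > expected_utility(a, p, u)
                for (p, u) in pairs)
            for b in options if b is not a)
    return [a for a in options if not dominated(a)]

# Hypothetical example: imprecise beliefs represented by two probability
# functions, each paired with one and the same utility function.
u = {"good": 1.0, "soso": 0.4, "bad": 0.0}
pairs = [({"s1": 0.7, "s2": 0.3}, u), ({"s1": 0.3, "s2": 0.7}, u)]
A = {"s1": "good", "s2": "bad"}   # EU 0.7 or 0.3, depending on the pair
B = {"s1": "bad", "s2": "good"}   # EU 0.3 or 0.7: incomparable with A
C = {"s1": "soso", "s2": "bad"}   # EU 0.28 or 0.12: dominated by A
print(len(admissible([A, B, C], pairs)))  # 2 -- only A and B survive
```

Here A and B survive because the two probability functions disagree about their ranking, which is exactly the sense in which they are incomparable for this agent.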

The treatment of genuinely incomparable options (those that survive the above admissibility test and yet between which the agent is not indifferent) is where the real controversies begin. See Bradley (2017) for extensive discussion of the various ways to proceed. A consideration that is often appealed to in order to discriminate between incomparable options is caution. The Maxmin-EU rule, for instance, recommends picking the action with greatest minimum expected utility (see Gilboa and Schmeidler 1989; Walley 1991). The rule is simple to use, but arguably much too cautious, paying no attention at all to the full spread of expected utilities. The \(\alpha\)-Maxmin rule, by contrast, recommends taking the action with the greatest \(\alpha\)-weighted sum of the minimum and maximum expected utilities associated with it. The relative weights for the minimum and maximum expected utilities can be thought of as reflecting either the decision maker’s pessimism in the face of uncertainty or else her degree of caution (see Binmore 2009).
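
Both cautious rules admit a short sketch. For simplicity a single utility function is held fixed and only the probabilities are imprecise; all states, outcomes and numbers are illustrative assumptions:

```python
# Maxmin-EU picks the option with the greatest minimum expected utility;
# alpha-Maxmin scores each option by an alpha-weighted sum of its minimum
# and maximum expected utilities across the set of probability functions.

def eu_spread(option, prob_set, u):
    """Minimum and maximum expected utility of option ({state: outcome})."""
    eus = [sum(p[s] * u[option[s]] for s in option) for p in prob_set]
    return min(eus), max(eus)

def maxmin_eu(options, prob_set, u):
    return max(options, key=lambda a: eu_spread(a, prob_set, u)[0])

def alpha_maxmin(options, prob_set, u, alpha):
    """alpha weights the minimum EU (caution); 1 - alpha weights the maximum."""
    def score(a):
        lo, hi = eu_spread(a, prob_set, u)
        return alpha * lo + (1 - alpha) * hi
    return max(options, key=score)

u = {"win": 1.0, "lose": 0.0, "mid": 0.45}
prob_set = [{"s1": 0.1, "s2": 0.9}, {"s1": 0.9, "s2": 0.1}]
risky = {"s1": "win", "s2": "lose"}  # expected utility between 0.1 and 0.9
safe = {"s1": "mid", "s2": "mid"}    # expected utility 0.45 on every measure

print(maxmin_eu([risky, safe], prob_set, u))          # the safe option
print(alpha_maxmin([risky, safe], prob_set, u, 0.2))  # the risky option
```

With \(\alpha = 0.2\) the agent gives most weight to the best case and so chooses the risky option; as \(\alpha\) approaches 1 the rule collapses into Maxmin-EU.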

There are more complicated choice rules that depend on a richer representation of uncertainty involving a notion of confidence. For instance, Klibanoff et al. (2005) propose a rule whereby choices are made between otherwise incomparable options on the basis of confidence-weighted expected utility. It presupposes that weights can be assigned to the various expected utilities associated with an act, reflecting the agent’s confidence in the corresponding probability and utility pairs. There are alternative rules that appeal to confidence even in the absence of precise cardinal weights. Gärdenfors and Sahlin (1982), for instance, suggest simply excluding from consideration any probability (and utility) functions that fall below a confidence threshold, and then applying the Maxmin-EU rule based on the remainder. Hill’s (2013) choice theory is somewhat similar, although confidence thresholds for probability and utility pairs are allowed to vary depending on the choice problem (and the term “confidence” is itself used differently). There are further proposals whereby acts are compared in terms of how much uncertainty they can tolerate (which again depends on levels of confidence) and yet still be a satisfactory option (see, e.g., Ben-Haim 2001). These rules are compelling, but they do raise a host of difficult questions regarding how to interpret and measure the extra subjective attitudes that play a role, like “amount of confidence in a belief/desire” and “satisfactory level of desirability”.

5.3 Unawareness

There has been recent interest in yet a further challenge to expected utility theory, namely, the challenge from unawareness. In fact, unawareness presents a challenge for all extant normative theories of choice. To keep things simple, we shall however focus on Savage’s expected utility theory to illustrate the challenge posed by unawareness.

As the reader will recall, Savage takes for granted a set of possible outcomes \(\bO\), and another set of possible states of the world \(\bS\), and defines the set of acts, \(\bF\), as the set of all functions from \(\bS\) to \(\bO\). Moreover, his representation theorem has been interpreted as justifying the claim that a rational person always performs the act in \(\bF\) that maximises expected utility, relative to a probability measure over \(\bS\) and a utility measure over \(\bO\).

Now, Savage’s theory is neutral about how to interpret the states in \(\bS\) and the outcomes in \(\bO\). For instance, the theory is consistent with interpreting \(\bS\) and \(\bO\) as respectively the sets of all logically possible states and outcomes, but it is also consistent with interpreting \(\bS\) and \(\bO\) as respectively the sets of states and outcomes that some modeller recognises, or the sets of states and outcomes that the decision-maker herself recognises.

If the theory is meant to describe the reasoning of a decision-maker, the first two interpretations would seem inferior to the third. The problem with the first two interpretations is that the decision-maker might be unaware of some of the logically possible states and outcomes, as well as some of the states and outcomes that the modeller is aware of. (Having said that, one may identify the states and outcomes that the agent is unaware of by reference to those of which the modeller is aware.)

When it comes to (partially) unaware decision-makers, an important distinction can be made between on the one hand what we might call “unawareness of unawareness”—that is, a situation where a decision-maker does not realise that there might be some outcome or state that they are unaware of—and on the other hand “awareness of unawareness”—that is, a situation where a decision-maker at least suspects that there is some outcome or state of which they are unaware.

From the perspective of decision-making, unawareness of unawareness is not of much interest. After all, if one is not even aware of the possibility that one is unaware of some state or outcome, then that unawareness cannot play any role in one’s reasoning about what to do. However, decision-theoretic models have been proposed for how a rational person responds to growth in awareness (that is meant to apply even to people who previously were unaware of their unawareness). In particular, economists Karni and Vierø (2013, 2015) have recently extended standard Bayesian conditionalisation to such learning events. Their theory, Reverse Bayesianism, informally says that awareness growth should not affect the ratios of probabilities of the states/outcomes that the agent was aware of before the growth. Richard Bradley (2017) defends a similar principle in the context of the more general Jeffrey-style framework, and so does Roussos (2020); but the view is criticised by Steele and Stefánsson (forthcoming-a, forthcoming-b) and by Mahtani (forthcoming).
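
The Reverse Bayesian constraint can be illustrated with a minimal sketch: on becoming aware of a new state, the agent’s new probabilities must preserve the ratios among the states she was already aware of. The probability assigned to the newly entertained state (here 0.2) is an unconstrained assumption, since the constraint itself does not fix it:

```python
def awareness_growth(old_probs, new_state, new_state_prob):
    """Rescale the old probabilities so they keep their ratios and sum to
    1 - new_state_prob, then add the newly entertained state."""
    scale = 1.0 - new_state_prob
    updated = {s: p * scale for s, p in old_probs.items()}
    updated[new_state] = new_state_prob
    return updated

old = {"rain": 0.6, "sun": 0.4}
new = awareness_growth(old, "snow", 0.2)  # a state the agent just became aware of
print(new)  # rain and sun rescaled to 0.48 and 0.32, snow gets 0.2
# The old ratio of rain to sun (3:2) is preserved after awareness growth.
```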

In contrast, awareness of unawareness would seem to be of great interest from the perspective of decision-making. If you suspect that there is some possible state, say, that you have not yet entertained, and some corresponding outcome, the content of which you are unaware, then you might want to at least come to some view about how likely you expect this state to be, and how good or bad you expect the corresponding outcome to be, before you make a decision.

A number of people have suggested models to represent agents who are aware of their unawareness (e.g., Walker & Dietz 2013, Piermont 2017, Karni & Vierø 2017). Steele and Stefánsson (forthcoming-b) argue that there may not be anything especially distinctive about how a decision-maker reasons about states/outcomes of which she is aware she is unaware, in terms of the confidence she has in her judgments and how she manages risk. That said, the way she arrives at such judgments of probability and desirability is worth exploring further. Grant and Quiggin (2013a, 2013b), for instance, suggest that these judgments are made based on induction from past situations where one experienced awareness growth.

In general, the literature on unawareness has been rapidly growing. Bradley (2017) and Steele and Stefánsson (forthcoming-b) are new in-depth treatments of this topic within philosophy. Schipper maintains a bibliography on unawareness, mostly with papers in economics and computer science, at \url{http://faculty.econ.ucdavis.edu/faculty/schipper/unaw.htm}.

6. Sequential decisions

The decision theories of Savage and Jeffrey, as well as those of their critics, apparently concern a single or “one shot only” decision; at issue is an agent’s preference ordering, and ultimately her choice of act, at a particular point in time. One may refer to this as a static decision problem. The question arises as to whether this framework is adequate for handling more complex scenarios, in particular those involving a series or sequence of decisions; these are referred to as sequential decision problems.

On paper, at least, static and sequential decision models look very different. The static model has familiar tabular or normal form, with each row representing an available act/option, and columns representing the different possible states of the world that yield a given outcome for each act. The sequential decision model, on the other hand, has tree or extensive form (such as in Figure 1). It depicts a series of anticipated choice points, where the branches extending from a choice point represent the options at that choice point. Some of these branches lead to further choice points, often after the resolution of some uncertainty due to new evidence.

These basic differences between static and sequential decision models raise questions about how, in fact, they relate to each other:

  • Do static and sequential decision models depict the same kind of decision problem? If so, what is the static counterpart of a sequential decision model?

  • Does the sequential decision setting reveal any further (dis)advantages of EU theory? More generally does this setting shed light on normative theories of choice?

These questions turn out to be rather controversial. They will be addressed in turn, after the scene has been set with an old story about Ulysses.

6.1 Was Ulysses rational?

A well-known sequential decision problem is the one facing Ulysses on his journey home to Ithaca in Homer’s great tale from antiquity. Ulysses must make a choice about the manner in which he will sail past an island inhabited by sweet-singing sirens. He can choose to sail unrestrained or else tied to the mast. In the former case, Ulysses will later have the choice, upon hearing the sirens, to either continue sailing home to Ithaca or to stay on the island indefinitely. In the latter case, he will not be free to make further choices and the ship will sail onwards to Ithaca past the sweet-singing sirens. The final outcome depends on what sequence of choices Ulysses makes. Ulysses’ decision problem is represented in tree (or extensive) form in Figure 1 (where the two boxes represent choice points for Ulysses).

Figure 1. Ulysses’ decision problem

We are told that, before embarking, Ulysses would most prefer to freely hear the sirens and return home to Ithaca. The problem is that Ulysses predicts his future self will not comply: if he sails unrestrained, he will later be seduced by the sirens and will not in fact continue home to Ithaca but will rather remain on the island indefinitely. Ulysses therefore reasons that it would be better to be tied to the mast, because he would prefer the shame and discomfort of being tied to the mast and making it home to remaining on the sirens’ island forever.

It is hard to deny that Ulysses makes a wise choice in being tied to the mast. Some hold, however, that Ulysses is nevertheless not an exemplary agent, since his present self must play against his future self who will be unwittingly seduced by the sirens. While Ulysses is rational at the first choice node by static decision standards, we might regard him as irrational overall by sequential decision standards, understood in terms of the relative value of sequences of choices. The sequence of choices that Ulysses inevitably pursues is, after all, suboptimal. It would have been better were he able to sail unconstrained and continue on home to Ithaca. This sequence could have been achieved if Ulysses were continuously rational over the extended time period; say, if at all times he were to act as an EU maximiser, and change his beliefs and desires only in accordance with Bayesian norms (variants of standard conditionalisation). On this reading, sequential decision models introduce considerations of rationality-over-time.

While rationality-over-time may have import in assessing an agent’s preferences and norms for changing these preferences (one can read the discussion in Section 6.2 below in this way), there remains the important question of how an agent should act in light of her preferences at any given point in time. To this end, the sequential decision model can be fruitfully viewed as a tool for helping determine rational choice at a particular time, just like the static decision model. The sequential decision tree is effectively a way of visualising the temporal series of choices and learning events that an agent believes she will confront in the future, depending on which part of the decision tree she finds herself in. The key question, then, is: How should an agent choose amongst her initial options in light of her projected decision tree? This question has generated a surprising amount of controversy. Three major approaches to negotiating sequential decision trees have appeared in the literature. These are the naïve or myopic approach, the sophisticated approach and the resolute approach. These will be discussed in turn; it will be suggested that the disputes may not be substantial but rather indicate subtle differences in the interpretation of sequential decision models.

The so-called naïve approach to negotiating sequential decisions serves as a useful contrast to the other two approaches. The naïve agent assumes that any path through the decision tree is possible, and so sets off on whichever path is optimal, given his/her present attitudes. For instance, a naïve Ulysses would simply presume that he has three overall strategies to choose from: either ordering the crew to tie him to the mast, or issuing no such order and later stopping at the sirens’ island, or issuing no such order and later sticking to his course. Ulysses prefers the outcome associated with the last of these strategies, and so he initiates this strategy by not ordering the crew to restrain him. Table 5 presents the static counterpart of naïve Ulysses’ decision problem. In effect, this decision model does not take into account Ulysses’ present knowledge of his future preferences, and hence advises that he pursue an option that is predicted to be impossible.

Act | Outcome
order tying to mast | reach home, some humiliation
sail unconstrained then stay with sirens | life with sirens
sail unconstrained then home to Ithaca | reach home, no humiliation

Table 5. Naïve Ulysses’ decision problem

There is no need to labour the point that the naïve approach to sequential choice is aptly named. The hallmark of the sophisticated approach, by contrast, is its emphasis on backwards planning: the sophisticated chooser does not assume that all paths through the decision tree, or in other words, all possible combinations of choices at the various choice nodes, will be possible. The agent considers, rather, what he/she will be inclined to choose at later choice nodes when he/she gets to the temporal position in question. Sophisticated Ulysses would take note of the fact that, if he reaches the island of the sirens unrestrained, he will want to stop there indefinitely, due to the transformative effect of the sirens’ song on his preferences. This is then reflected in the static representation of the decision problem, as per Table 6. The states here concern Ulysses’ future preferences, once he reaches the island. Since the second state has (by assumption) probability zero, the acts are decided on the basis of the first state, so Ulysses wisely chooses to be tied to the mast.

Act | later choose sirens \((p = 1)\) | later choose Ithaca \((p = 0)\)
order tying to mast | home, some humiliation | home, some humiliation
sail unconstrained | life with sirens | home, no humiliation

Table 6. Sophisticated Ulysses’ decision problem
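
The contrast between the naïve and the sophisticated approach can be sketched as a tiny backward-induction computation on Ulysses’ tree. The numerical utilities, both his present ones and his predicted post-siren ones, are assumptions for illustration:

```python
# Ulysses' present preferences over final outcomes:
u_now = {"home, no humiliation": 3, "home, some humiliation": 2,
         "life with sirens": 0}
# His predicted preferences at the island, transformed by the sirens' song:
u_later = {"life with sirens": 3, "home, no humiliation": 1}

def sophisticated_choice():
    # Step 1 (backwards planning): resolve the future choice node using
    # the *future* utilities -- this is what he will in fact pick there.
    future_pick = max(["life with sirens", "home, no humiliation"],
                      key=lambda o: u_later[o])
    # Step 2: evaluate the initial options by present utilities, with
    # "sail unconstrained" leading to the future self's actual pick.
    initial = {"order tying to mast": "home, some humiliation",
               "sail unconstrained": future_pick}
    return max(initial, key=lambda a: u_now[initial[a]])

def naive_choice():
    # The naive agent assumes all three paths of Table 5 are available
    # and starts the path whose outcome is best by current lights.
    paths = {"order tying to mast": "home, some humiliation",
             "sail unconstrained then home": "home, no humiliation",
             "sail unconstrained then sirens": "life with sirens"}
    best = max(paths, key=lambda a: u_now[paths[a]])
    return "sail unconstrained" if best.startswith("sail") else best

print(naive_choice())          # sail unconstrained
print(sophisticated_choice())  # order tying to mast
```

The naïve agent sets off on the doomed unconstrained route, whereas backward induction over the predicted future preferences yields the wise choice of being tied to the mast.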

Resolute choice deviates from sophisticated choice only under certain conditions that are not fulfilled by Ulysses, given his inexplicable change in attitudes. Defenders of resolute choice typically defend decision theories and associated preferences that violate the Independence axiom/Sure-Thing Principle (notably McClennen 1990 and Machina 1989; see also Rabinowicz 1995 and Buchak 2013 for discussion), and appeal to resolute choice to make these preferences more palatable in the sequential-decision context (to be discussed further in Section 6.2 below). According to resolute choice, in appropriate contexts, the agent should at all choice points stick to the strategy that was initially deemed best. The question is whether this advice makes sense, given the standard interpretation of a sequential decision model. What would it mean for an agent to choose against her preferences in order to fulfill a previously-selected plan? That would seem to defy the very notion of preference. Of course, an agent may place considerable importance on honouring previous commitments. Any such integrity concerns, however, should arguably be reflected in the specification of outcomes and thus in the agent’s preferences at the time in question. This is quite different from choosing out of step with one’s all-things-considered preferences at a time.

Defenders of resolute choice may have in mind a different interpretation of sequential decision models, whereby future “choice points” are not really points at which an agent is free to choose according to her preferences at the time. If so, this would amount to a subtle shift in the question or problem of interest. In what follows, the standard interpretation of sequential decision models will be assumed, and accordingly, it will be assumed that rational agents pursue the sophisticated approach to choice (as per Levi 1991, Maher 1992, Seidenfeld 1994, amongst others).

6.2 The EU axioms revisited

We have seen that sequential decision trees can help an agent like Ulysses take stock of the consequences of his current choice, so that he can better reflect on what to do now. The literature on sequential choice is primarily concerned, however, with more ambitious questions. The sequential-decision setting effectively offers new ways to “test” theories of rational preference and norms for preference (or belief and desire) change. The question is whether an agent’s decision theory in this broad sense is shown to be dynamically inconsistent or self-defeating.

Skyrms’ (1993) “diachronic Dutch book” argument for conditionalisation can be read in this way. The agent is assumed to have EU preferences and to take a sophisticated (backwards reasoning) approach to sequential decision problems. Skyrms shows that any such agent who plans to learn in a manner at odds with conditionalisation will make self-defeating choices in some specially contrived sequential decision situations. A conditionalising agent, by contrast, will never make choices that are self-defeating in this way. The kind of “self-defeating choices” at issue here are ones that yield a sure loss. That is, the agent chooses a strategy that is surely worse, by her own lights, than another strategy that she might otherwise have chosen, if only her learning rule were such that she would choose differently at one or more future decision nodes.

A similar “dynamic consistency” argument can be used to defend EU preferences in addition to learning in accordance with conditionalisation (see Hammond 1976, 1977, 1988b,c). It is assumed, as before, that the agent takes a sophisticated approach to sequential decision problems. Hammond shows that only a fully Bayesian agent can plan to pursue any path in a sequential decision tree that is deemed optimal at the initial choice node. This makes the Bayesian agent unique in that she will never make “self-defeating choices” on account of her preferences and norms for preference change. She will never choose a strategy that is worse by her own lights than another strategy that she might otherwise have chosen, if only her preferences were such that she would choose differently at one or more future decision nodes.

Hammond’s argument for EU theory, and the notion of dynamic consistency that it invokes, has been criticised from different quarters, both by those who defend theories that violate the Independence axiom but retain the Completeness and Transitivity (i.e., Ordering) axioms of EU theory, and those who defend theories that violate the latter (for discussion, see Steele 2010). The approach taken by some defenders of Independence-violating theories (notably, Machina 1989 and McClennen 1990) has already been alluded to: They reject the assumption of sophisticated choice underpinning the dynamic consistency arguments. Seidenfeld (1988a,b, 1994, 2000a,b) rather rejects Hammond’s notion of dynamic consistency in favour of a more subtle notion that discriminates between theories that violate Ordering and those that violate Independence alone; the former, unlike the latter, pass Seidenfeld’s test that turns on future decision nodes where the agent is indifferent between the best options. This argument too is not without its critics (see McClennen 1988, Hammond 1988a, Rabinowicz 2000).

Note that the costs of any departure from EU theory are well highlighted by Al-Najjar and Weinstein (2009), in particular the possibility of aversion to free information and aversion to opportunities for greater choice in the future. Kadane et al. (2008) and Bradley and Steele (2016) focus on the sure loss that is associated with paying to avoid free evidence. But see Buchak (2010, 2013) for nuanced discussion of this issue in relation to epistemic versus instrumental rationality.

7. Concluding remarks

Let us conclude by summarising the main reasons why decision theory, as described above, is of philosophical interest. First, normative decision theory is clearly a (minimal) theory of practical rationality. The aim is to characterise the attitudes of agents who are practically rational, and various (static and sequential) arguments are typically made to show that certain practical catastrophes befall agents who do not satisfy standard decision-theoretic constraints. Second, many of these constraints concern the agents’ beliefs. In particular, normative decision theory requires that agents’ degrees of belief satisfy the probability axioms and that they respond to new information by conditionalisation. Therefore, decision theory has important implications for debates in epistemology and philosophy of science; that is, for theories of epistemic rationality.

Finally, decision theory should be of great interest to philosophers of mind and psychology, and others who are interested in how people can understand the behaviour and intentions of others; and, more generally, how we can interpret what goes on in other people’s minds. Decision theorists typically assume that a person’s behaviour can be fully explained in terms of her beliefs and desires. But perhaps more interestingly, some of the most important results of decision theory—the various representation theorems, some of which have been discussed here—suggest that if a person satisfies certain rationality requirements, then we can read her beliefs and desires, and how strong these beliefs and desires are, from her choice dispositions (or preferences). How much these theorems really tell us is a matter of debate, as discussed above. But on an optimistic reading of these results, they assure us that we can meaningfully talk about what goes on in other people’s minds without much evidence beyond information about their dispositions to choose.

Bibliography

  • Al-Najjar, Nabil I. and Jonathan Weinstein, 2009, “The Ambiguity Aversion Literature: A Critical Assessment”, Economics and Philosophy, 25: 249–284.
  • Allais, Maurice, 1953, “Le Comportement de l’Homme Rationnel devant le Risque: Critique des Postulats et Axiomes de l’Ecole Américaine”, Econometrica, 21: 503–546.
  • Alt, Franz, 1936, “Über die Meßbarkeit des Nutzens”, Zeitschrift für Nationalökonomie, 7: 161–169.
  • Ben-Haim, Yakov, 2001, Information-Gap Theory: Decisions Under Severe Uncertainty, London: Academic Press.
  • Bermúdez, José Luis, 2009, Challenges to Decision Theory, Oxford: Oxford University Press.
  • Binmore, Ken, 2009, Rational Decisions, Princeton: Princeton University Press.
  • Bolker, Ethan D., 1966, “Functions Resembling Quotients of Measures”, Transactions of the American Mathematical Society, 124: 292–312.
  • –––, 1967, “A Simultaneous Axiomatisation of Utility and Subjective Probability”, Philosophy of Science, 34: 333–340.
  • Bradley, Richard, 1998, “A Representation Theorem for a Decision Theory with Conditionals”, Synthese, 116: 187–222.
  • –––, 2004, “Ramsey’s Representation Theorem”, Dialectica, 4: 484–497.
  • –––, 2007, “A Unified Bayesian Decision Theory”, Theory and Decision, 63: 233–263.
  • –––, 2017, Decision Theory with a Human Face, Cambridge: Cambridge University Press.
  • Bradley, Richard and H. Orri Stefánsson, 2017, “Counterfactual Desirability”, British Journal for the Philosophy of Science, 68: 485–533.
  • –––, 2016, “Desire, Expectation and Invariance”, Mind, 125: 691–725.
  • Bradley, Seamus and Katie Steele, 2016, “Can Free Evidence be Bad? Value of Information for the Imprecise Probabilist”, Philosophy of Science, 83: 1–28.
  • Broome, John, 1991a, Weighing Goods: Equality, Uncertainty and Time, Oxford: Blackwell.
  • –––, 1991b, “The Structure of Good: Decision Theory and Ethics”, in Foundations of Decision Theory, Michael Bacharach and Susan Hurley (eds.), Oxford: Blackwell, pp. 123–146.
  • –––, 1991c, “Desire, Belief and Expectation”, Mind, 100: 265–267.
  • –––, 1993, “Can a Humean be moderate?”, in Value, Welfare and Morality, R.G. Frey and Christopher W. Morris (eds.), Cambridge: Cambridge University Press, pp. 51–73.
  • Brown, Campbell, 2011, “Consequentialize This”, Ethics, 121: 749–771.
  • Buchak, Lara, 2010, “Instrumental rationality, epistemic rationality, and evidence-gathering”, Philosophical Perspectives, 24: 85–120.
  • –––, 2013, Risk and Rationality, Oxford: Oxford University Press.
  • –––, 2016, “Decision Theory”, in Oxford Handbook of Probability and Philosophy, Christopher Hitchcock and Alan Hájek (eds.), Oxford: Oxford University Press, pp. 789–814.
  • Byrne, Alex and Alan Hájek, 1997, “David Hume, David Lewis, and Decision Theory”, Mind, 106: 411–428.
  • Chang, Ruth, 2002, “The Possibility of Parity”, Ethics, 112: 659–688.
  • Colyvan, Mark, Damian Cox, and Katie Steele, 2010, “Modelling the Moral Dimension of Decisions”, Noûs, 44: 503–529.
  • Davidson, Donald, J. C. C. McKinsey and Patrick Suppes, 1955, “Outlines of a Formal Theory of Value, I”, Philosophy of Science, 22: 140–160.
  • Dietrich, Franz and Christian List, 2013, “A Reason-Based Theory of Rational Choice”, Noûs, 47: 104–134.
  • –––, 2016a, “Reason-Based Choice and Context Dependence: An Explanatory Framework”, Economics and Philosophy, 32: 175–229.
  • –––, 2016b, “Mentalism Versus Behaviourism in Economics: a Philosophy-of-Science Perspective”, Economics and Philosophy, 32: 249–281.
  • –––, 2017, “What Matters and How it Matters: A Choice-Theoretic Representation of Moral Theories”, The Philosophical Review, 126: 421–479.
  • Dreier, James, 1996, “Rational Preference: Decision Theory as a Theory of Practical Rationality”, Theory and Decision, 40: 249–276.
  • –––, 1993, “Structures of Normative Theories”, The Monist, 76: 22–40.
  • Elster, Jon and John E. Roemer (eds.), 1993, Interpersonal Comparisons of Well-Being, Cambridge: Cambridge University Press.
  • Eriksson, Lina and Alan Hájek, 2007, “What are Degrees of Belief”, Studia Logica, 86: 183–213.
  • Gärdenfors, Peter and Nils-Eric Sahlin, 1982, “Unreliable Probabilities, Risk Taking, and Decision Making”, reprinted in P. Gärdenfors and N.-E. Sahlin (eds.), 1988, Decision, Probability and Utility, Cambridge: Cambridge University Press, pp. 313–334.
  • Gaifman, Haim and Yang Liu, 2018, “A Simpler and more Realistic Subjective Decision Theory”, Synthese, 195: 4205–4241.
  • Gilboa, Itzhak and David Schmeidler, 1989, “Maxmin Expected Utility With Non-Unique Prior”, Journal of Mathematical Economics, 18: 141–153.
  • Good, I.J., 1967, “On the Principle of Total Evidence”, British Journal for the Philosophy of Science, 17: 319–321.
  • Grant, Simon and John Quiggin, 2013a, “Bounded Awareness, Heuristics and the Precautionary Principle”, Journal of Economic Behavior & Organization, 93: 17–31.
  • –––, 2013b, “Inductive Reasoning about Unawareness”, Economic Theory, 54: 717–755.
  • Guala, Francesco, 2006, “Has Game Theory Been Refuted?”, Journal of Philosophy, 103: 239–263.
  • Gustafsson, Johan E., 2010, “A Money-Pump for Acyclic Intransitive Preferences”, Dialectica, 64: 251–257.
  • –––, 2013, “The Irrelevance of the Diachronic Money-Pump Argument for Acyclicity”, The Journal of Philosophy, 110: 460–464.
  • Hájek, Alan and Philip Pettit, 2004, “Desire Beyond Belief”, Australasian Journal of Philosophy, 82: 77–92.
  • Halpern, Joseph Y., 2003, Reasoning About Uncertainty, Cambridge, MA: MIT Press.
  • Hammond, Peter J., 1976, “Changing Tastes and Coherent Dynamic Choice”, The Review of Economic Studies, 43: 159–173.
  • –––, 1977, “Dynamic Restrictions on Metastatic Choice”, Economica, 44: 337–350.
  • –––, 1988a, “Orderly Decision Theory: A Comment on Professor Seidenfeld”, Economics and Philosophy, 4: 292–297.
  • –––, 1988b, “Consequentialism and the Independence Axiom”, in Risk, Decision and Rationality, B. R. Munier (ed.), Dordrecht: D. Reidel, pp. 503–516.
  • –––, 1988c, “Consequentialist Foundations for Expected Utility Theory”, Theory and Decision, 25: 25–78.
  • Hansson, Bengt, 1988, “Risk Aversion as a Problem of Conjoint Measurement”, in Decision, Probability and Utility, P. Gärdenfors and N.-E. Sahlin (ed.), Cambridge: Cambridge University Press, pp. 136–158.
  • Hausman, Daniel M., 2011a, “Mistakes about Preferences in the Social Sciences”, Philosophy of the Social Sciences, 41: 3–25.
  • –––, 2011b, Preference, Value, Choice, and Welfare, Cambridge: Cambridge University Press.
  • Heap, Shaun Hargreaves, Martin Hollis, Bruce Lyons, Robert Sugden, and Albert Weale, 1992, The Theory of Choice: A Critical Introduction, Oxford: Blackwell Publishers.
  • Hill, Brian, 2013, “Confidence and Decision”, Games and Economic Behaviour, 82: 675–692.
  • Jackson, Frank and Michael Smith, 2006, “Absolutist Moral Theories and Uncertainty”, The Journal of Philosophy, 103: 267–283.
  • Jeffrey, Richard C., 1965, The Logic of Decision, New York: McGraw-Hill.
  • –––, 1974, “Preferences Among Preferences”, The Journal of Philosophy, 71: 377–391.
  • –––, 1983, “Bayesianism With a Human Face”, in Testing Scientific Theories, John Earman (ed.), Minneapolis: University of Minnesota Press, pp. 133–156.
  • Joyce, James M., 1998, “A Non-Pragmatic Vindication of Probabilism”, Philosophy of Science, 65: 575–603.
  • –––, 1999, The Foundations of Causal Decision Theory, New York: Cambridge University Press.
  • –––, 2002, “Levi on Causal Decision Theory and the Possibility of Predicting one’s Own Actions”, Philosophical Studies, 110: 69–102.
  • –––, 2010, “A Defense of Imprecise Credences in Inference and Decision Making”, Philosophical Perspectives, 24: 281–323.
  • Kadane, Joseph B., Mark J. Schervish, and Teddy Seidenfeld, 2008, “Is Ignorance Bliss?”, The Journal of Philosophy, 105: 5–36.
  • Karni, Edi and Marie-Louise Vierø, 2013, “‘Reverse Bayesianism’: A Choice-Based Theory of Growing Awareness”, American Economic Review, 103: 2790–2810.
  • –––, 2015, “Probabilistic Sophistication and Reverse Bayesianism”, Journal of Risk and Uncertainty, 50: 189–208.
  • –––, 2017, “Awareness of Unawareness: A Theory of Decision Making in the Face of Ignorance”, Journal of Economic Theory, 168: 301–325.
  • Keeney, Ralph L. and Howard Raiffa, 1993, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, Cambridge: Cambridge University Press.
  • Klibanoff, Peter, Massimo Marinacci, and Sujoy Mukerji, 2005, “A Smooth Model of Decision Making Under Ambiguity”, Econometrica, 73: 1849–1892.
  • Knight, Frank, 1921, Risk, Uncertainty, and Profit, Boston: Houghton Mifflin Company.
  • Koopmans, Tjalling C., 1960, “Stationary Ordinal Utility and Impatience”, Econometrica, 28: 287–309.
  • Kreps, David M., 1988, Notes on the Theory of Choice, Boulder: Westview Press.
  • Lazar, Seth, 2017, “Deontological Decision Theory and Agent-Centred Options”, Ethics, 127: 579–609.
  • Levi, Isaac, 1986, Hard Choices: Decision Making Under Unresolved Conflict, Cambridge: Cambridge University Press.
  • –––, 1991, “Consequentialism and Sequential Choice”, in Foundations of Decision Theory, M. Bacharach and S. Hurley (eds.), Oxford: Basil Blackwell, pp. 70–101.
  • Lewis, David, 1988, “Desire as Belief”, Mind, 97: 323–332.
  • –––, 1996, “Desire as Belief II”, Mind, 105: 303–313.
  • Loomes, Graham and Robert Sugden, 1982, “Regret Theory: An Alternative Theory of Rational Choice Under Uncertainty”, The Economic Journal, 92: 805–824.
  • Louise, Jennie, 2004, “Relativity of Value and the Consequentialist Umbrella”, Philosophical Quarterly, 4: 518–536.
  • Machina, Mark J., 1989, “Dynamic Consistency and Non-Expected Utility Models of Choice Under Uncertainty”, Journal of Economic Literature, 27: 1622–1668.
  • Maher, Patrick, 1992, “Diachronic Rationality”, Philosophy of Science, 59: 120–141.
  • Mahtani, Anna, forthcoming, “Awareness Growth and Dispositional Attitudes”, Synthese.
  • Mandler, Michael, 2001, “A Difficult Choice in Preference Theory: Rationality Implies Completeness or Transitivity but Not Both”, in Varieties of Practical Reasoning, Elijah Millgram (ed.), Cambridge, MA: MIT Press, pp. 373–402.
  • McClennen, Edward F., 1988, “Ordering and Independence: A Comment on Professor Seidenfeld”, Economics and Philosophy, 4: 298–308.
  • –––, 1990, Rationality and Dynamic Choice: Foundational Explorations. Cambridge: Cambridge University Press.
  • Meacham, Christopher J. G. and Jonathan Weisberg, 2011, “Representation Theorems and the Foundations of Decision Theory”, Australasian Journal of Philosophy, 89: 641–663.
  • Peterson, Martin, 2009, An Introduction to Decision Theory, Cambridge: Cambridge University Press.
  • Pettit, Philip, 1993, “Decision Theory and Folk Psychology”, in Foundations of Decision Theory: Issues and Advances, Michael Bacharach and Susan Hurley (eds.), Oxford: Blackwell, pp. 147–175.
  • Piermont, Evan, 2017, “Introspective Unawareness and Observable Choice”, Games and Economic Behavior, 106: 134–152.
  • Portmore, Douglas W., 2007, “Consequentializing Moral Theories”, Pacific Philosophical Quarterly, 88: 39–73.
  • Rabin, Matthew, 2000, “Risk Aversion and Expected-Utility Theory: A Calibration Theorem”, Econometrica, 68: 1281–1292.
  • Rabinowicz, Wlodek, 1995, “To Have one’s Cake and Eat it, Too: Sequential Choice and Expected-Utility Violations”, Journal of Philosophy, 92: 586–620.
  • –––, 2000, “Preference Stability and Substitution of Indifferents: A rejoinder to Seidenfeld”, Theory and Decision, 48: 311–318.
  • –––, 2002, “Does Practical Deliberation Crowd out Self-Prediction?”, Erkenntnis, 57: 91–122.
  • Ramsey, Frank P., 1926 [1931], “Truth and Probability”, in The Foundations of Mathematics and other Logical Essays, R.B. Braithwaite (ed.), London: Kegan Paul, Trench, Trubner & Co., pp. 156–198.
  • –––, 1928, “A Mathematical Theory of Saving”, The Economic Journal, 38: 543–559.
  • –––, 1990, “Weight or the Value of Knowledge”, British Journal for the Philosophy of Science, 41: 1–4.
  • Resnik, Michael D., 1987, Choices: An Introduction to Decision Theory, Minneapolis: University of Minnesota Press.
  • Rinard, Susanna, 2017, “No Exception for Belief”, Philosophy and Phenomenological Research, 94: 121–143.
  • Roussos, Joe, 2020, Policymaking Under Scientific Uncertainty, Ph.D. thesis, London School of Economics and Political Science.
  • Savage, Leonard J., 1954, The Foundations of Statistics, New York: John Wiley and Sons.
  • Schervish, Mark J., Teddy Seidenfeld, Joseph B. Kadane, and Isaac Levi, 2003, “Extensions of Expected Utility Theory and Some Limitations of Pairwise Comparisons”, Proceedings of the Third ISIPTA (JM): 496–510.
  • Seidenfeld, Teddy, 1988a, “Decision Theory Without ‘Independence’ or Without ‘Ordering’”, Economics and Philosophy, 4: 267–290.
  • –––, 1988b, “Rejoinder [to Hammond and McClennen]”, Economics and Philosophy, 4: 309–315.
  • –––, 1994, “When Normal and Extensive Form Decisions Differ”, Logic, Methodology and Philosophy of Science, IX: 451–463.
  • –––, 2000a, “Substitution of Indifferent Options at Choice Nodes and Admissibility: A Reply to Rabinowicz”, Theory and Decision, 48: 305–310.
  • –––, 2000b, “The Independence Postulate, Hypothetical and Called-off Acts: A Further Reply to Rabinowicz”, Theory and Decision, 48: 319–322.
  • Sen, Amartya, 1973, “Behaviour and the Concept of Preference”, Economica, 40: 241–259.
  • –––, 1977, “Rational Fools: A Critique of the Behavioural Foundations of Economic Theory”, Philosophy and Public Affairs, 6: 317–344.
  • –––, 1982, “Rights and Agency”, Philosophy and Public Affairs, 11: 3–39.
  • Skyrms, Brian, 1993, “A Mistake in Dynamic Coherence Arguments?”, Philosophy of Science, 60: 320–328.
  • Stalnaker, Robert C., 1987, Inquiry, Cambridge, MA: MIT Press.
  • Steele, Katie S., 2010, “What are the Minimal Requirements of Rational Choice?: Arguments from the Sequential-Decision Setting”, Theory and Decision, 68: 463–487.
  • Steele, Katie S. and H. Orri Stefánsson, forthcoming-a, “Belief Revision for Growing Awareness”, Mind.
  • –––, forthcoming-b, Beyond Uncertainty: Reasoning with Unknown Possibilities, Martin Peterson (ed.), Cambridge University Press.
  • Stefánsson, H. Orri, 2014, “Desires, Beliefs and Conditional Desirability”, Synthese, 191: 4019–4035.
  • Stefánsson, H. Orri and Richard Bradley, 2019, “What is Risk Aversion?”, British Journal for the Philosophy of Science, 70: 77–102.
  • Suppes, Patrick, 2002, Representation and Invariance of Scientific Structures, Stanford: CSLI Publications.
  • Temkin, Larry, 2012, Rethinking the Good: Moral Ideals and the Nature of Practical Reasoning, Oxford: Oxford University Press.
  • Thoma, Johanna, 2020a, “Instrumental Rationality Without Separability”, Erkenntnis, 85: 1219–1240. doi:10.1007/s10670-018-0074-9
  • –––, 2020b, “In Defense of Revealed Preference Theory”, Economics and Philosophy, 1–25. doi:10.1017/S0266267120000073
  • Tversky, Amos, 1975, “A Critique of Expected Utility Theory: Descriptive and Normative Considerations”, Erkenntnis, 9: 163–173.
  • Villegas, C., 1964, “On Qualitative Probability \(\sigma\)-Algebras”, Annals of Mathematical Statistics, 35: 1787–1796.
  • Vallentyne, Peter, 1988, “Gimmicky Representations of Moral Theories”, Metaphilosophy, 19: 253–263.
  • Vredenburgh, Kate, 2020, “A Unificationist Defence of Revealed Preferences”, Economics and Philosophy, 36(1): 149–169. doi:10.1017/S0266267118000524
  • von Neumann, John and Oskar Morgenstern, 1944, Theory of Games and Economic Behavior, Princeton: Princeton University Press.
  • Walley, Peter, 1991, Statistical Reasoning with Imprecise Probabilities, New York: Chapman and Hall.
  • Walker, Oliver and Simon Dietz, 2011, “A Representation Result for Choice Under Conscious Unawareness”, Grantham Research Institute on Climate Change and the Environment, Working Paper No. 59.
  • Zynda, Lyle, 2000, “Representation Theorems and Realism about Degrees of Belief”, Philosophy of Science, 67: 45–69.

Copyright © 2020 by
Katie Steele <katie.steele@anu.edu.au>
H. Orri Stefánsson <orri.stefansson@philosophy.su.se>

This is a file in the archives of the Stanford Encyclopedia of Philosophy.
Please note that some links may no longer be functional.