Notes to Interpretations of Probability

1. Compare: apart from the assignment of ‘true’ to tautologies and ‘false’ to contradictions, deductive logic is silent regarding the assignment of truth values.

2. It turns out that the axiomatization that Salmon gives (p. 59) is inconsistent, and thus that by his lights no interpretation could be admissible. His axiom A2 states:

“If A is a subclass of B, P(A, B) = 1” (read this as “the probability of B, given A, equals 1”).

Let I be the empty class; then for all B, P(I, B) = 1. But his A3 states:

If B and C are mutually exclusive, P(A, B ∪ C) = P(A, B) + P(A, C).

Then for any X, P(I, X ∪ −X) = P(I, X) + P(I, −X) = 1 + 1 = 2, which contradicts his normalization axiom A1. Carnap (1950, 341) notes a similar inconsistency in Jeffreys' (1939) axiomatization. This problem is easily remedied — simply add the qualification in A2 that A is non-empty — but it is instructive. It suggests that we ought not take the admissibility criterion too seriously. After all, Salmon's subsequent discussion of the merits and demerits of the various interpretations, as judged by the ascertainability and applicability criteria, still stands, and that is where the real interest lies.
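To make the arithmetic explicit, here is a minimal sketch in Python of how naively applying A2 and A3 yields a “probability” of 2; the three-element universe and all names are my own illustrative choices, not Salmon's.

```python
# A toy three-element universe; the set-up is illustrative only.
U = frozenset({1, 2, 3})
I = frozenset()        # the empty class
X = frozenset({1})
notX = U - X           # the complement of X in U

def prob_by_A2(A, B):
    """Apply A2 naively: if A is a subclass of B, then P(A, B) = 1
    (with P(A, B) read as the probability of B, given A)."""
    if A <= B:
        return 1.0
    raise ValueError("A2 does not settle this value")

# The empty class is a subclass of every class, so A2 gives:
p_X = prob_by_A2(I, X)          # 1.0
p_notX = prob_by_A2(I, notX)    # 1.0

# A3: for mutually exclusive B and C, P(A, B ∪ C) = P(A, B) + P(A, C).
p_union = p_X + p_notX

print(p_union)   # 2.0, which exceeds 1 and so contradicts the normalization axiom A1
```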

3. For example, we might specify that our family consists of distributions over the non-negative integers with a given mean, m. Then it turns out that the maximum entropy distribution exists, and is geometric:

P(k) = (1/(1 + m)) (m/(1 + m))^k,  k = 0, 1, 2, …

However, not just any further constraint will solve the problem. If instead our family consists of distributions over the positive integers with finite mean, then once more there is no distribution that achieves maximum entropy. (Intuitively, the larger the mean, the more diffuse we can make the distribution, and there is no bound on the mean.)
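As a numerical check on the first claim, the sketch below compares the entropy of the geometric distribution above with that of another distribution on the non-negative integers having the same mean. The particular mean m = 3, the truncation point, and the choice of a Poisson rival are my own illustrative assumptions.

```python
import math

def entropy(dist):
    """Shannon entropy (in nats) of a discrete distribution given as a list of probabilities."""
    return -sum(p * math.log(p) for p in dist if p > 0)

m = 3.0    # the given mean (illustrative choice)
N = 400    # truncation point; the tail mass beyond N is negligible for these examples

# The maximum entropy distribution over k = 0, 1, 2, ... with mean m:
# P(k) = (1/(1 + m)) * (m/(1 + m))**k
geometric = [(1.0 / (1.0 + m)) * (m / (1.0 + m)) ** k for k in range(N)]

# A rival distribution with the same mean: Poisson(m), computed via logs for numerical stability.
poisson = [math.exp(k * math.log(m) - m - math.lgamma(k + 1)) for k in range(N)]

print(sum(k * p for k, p in enumerate(geometric)))   # ≈ 3.0 (the required mean)
print(sum(k * p for k, p in enumerate(poisson)))     # ≈ 3.0
print(entropy(geometric))                            # ≈ 2.25 nats
print(entropy(poisson))                              # ≈ 1.97 nats -- strictly smaller
```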

4. Indeed, according to the requirement of regularity (to be discussed further in §3.3), one should not be certain of anything stronger than T, on pain of irrationality!

5. Some authors simply define ‘coherence’ as conformity to the probability calculus.

6. Still, according to some, the fair price of a bet on E measures the wrong quantity: not your probability that E will be the case, but rather your probability that E will be the case and that the prize will be paid, which may be rather less — for example, if E is unverifiable. Perhaps we should say that betting behavior can be used only to measure probabilities of propositions of the form ‘E and it is verified that E’. For typical bets, the distinction between ‘E’ and ‘E and it is verified that E’ will not matter. But if E is unverifiable, then a bet on it cannot be used to elicit the agent's probability for it. In that case we should think of this objection as showing that the betting interpretation is incomplete.

7. Note, however, that some authors find calibration a poor measure for evaluating degrees of belief. One probability function can be better calibrated than another even though the latter uniformly assigns higher probabilities to truths and lower probabilities to falsehoods — see Joyce (1998).
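A toy numerical illustration of Joyce's point may help; the four propositions, the particular credence values, and the use of the Brier score as the accuracy measure are my own choices. The uniform function below is perfectly calibrated, yet the second function uniformly assigns higher probabilities to the truths and lower probabilities to the falsehoods, and is more accurate.

```python
from collections import defaultdict

# Four propositions: the first two are true, the last two false.
truths = [1, 1, 0, 0]

# A assigns 1/2 across the board; B uniformly assigns more to truths, less to falsehoods.
cred_A = [0.5, 0.5, 0.5, 0.5]
cred_B = [0.9, 0.9, 0.1, 0.1]

def calibration_error(creds, truths):
    """Average gap between each assigned value and the relative frequency of truths
    among the propositions assigned that value (0 = perfectly calibrated)."""
    groups = defaultdict(list)
    for c, t in zip(creds, truths):
        groups[c].append(t)
    return sum(abs(c - sum(ts) / len(ts)) * len(ts) for c, ts in groups.items()) / len(creds)

def brier_score(creds, truths):
    """Mean squared distance from the truth values (lower = more accurate)."""
    return sum((c - t) ** 2 for c, t in zip(creds, truths)) / len(creds)

print(calibration_error(cred_A, truths), brier_score(cred_A, truths))   # 0.0   0.25
print(calibration_error(cred_B, truths), brier_score(cred_B, truths))   # ≈ 0.1  ≈ 0.01
```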

8. There are subtleties that I cannot go into here, including the notion of admissibility, the relativity of chances to times, and Lewis' (1994b) revised version of the Principle.

9. The reference class problem is analogous to the “total evidence” problem for Carnap, discussed above. Intuitively, the right reference class is determined by all the evidence relevant to my longevity, and it is unclear what is and is not relevant evidence without appeal to probabilities.

10. It should be noted that Gillies argues that Humphreys' paradox does not force non-Kolmogorovian propensities on us.

11. I am grateful to Aidan Lyon for this point.

Copyright © 2011 by
Alan Hájek <alan.hajek@anu.edu.au>
