Notes to Common Knowledge
1. Thanks to Scott Boorman, Johan van Benthem, and Brian Skyrms, who called our attention to Friedell's work.
2. Thanks to Alan Hajek for this example, the only example in this section which does not appear elsewhere in the literature.
3. The version of the story Littlewood analyzes involves a group of cannibals, some of whom are married to unfaithful wives, and a missionary who visits the group and makes a public announcement of the fact.
4. Robert Vanderschraaf reminded me in conversation that a crucial assumption in this problem is that the cook is telling the diners the truth, that is, the cook's announcement generates common knowledge and not merely common belief that there is at least one messy individual. For if the agents believe the cook's announcements even if the cook does not reliably tell the truth, then should the cook mischievously announce that there is at least one messy individual when in fact all are clean, all will wipe their faces at once.
5. The mutual knowledge characterized by (i), (ii), and (iii) is sufficient both to account for the agents' following the D1,D2-outcome, and for their being able to predict each other's moves. However, weaker knowledge assumptions imply that the agents will play D1,D2, even if they might not both be able to predict this outcome before the start of play. As Fiona's quoted argument implies, if both are rational, both know the game, and Fiona knows that Alan is rational and knows the game, then the D1,D2-outcome is the result, even if Alan does not know that Fiona is rational or knows the game.
6. Hume's analysis of the Farmer's Dilemma is perhaps the earliest example of a backwards induction argument applied to a sequential decision problem. See Skyrms (1996) and Vanderschraaf (1996) for more extended discussions of this argument.
7. See §3 for a formal definition of the Nash equilibrium concept.
8. Aumann (1976) himself gives a set-theoretic account of common knowledge, which has been generalized in several articles in the literature, including Monderer and Samet (1988) and Binmore and Brandenburger (1989). Vanderschraaf (1997) gives the set-theoretic formulation of Lewis' account of common knowledge reviewed in this paper.
9. This result appears in several articles in the literature, including Monderer and Samet's and Binmore and Brandenburger's articles on common knowledge.
10. I abuse notation slightly, writing ‘KiKj(A)’ for ‘Ki(Kj(A))’.
11. A partition of a set Ω is a collection of sets H = {H1, H2, … } such that Hi∩Hj = ∅ for i≠j, and ∪i Hi = Ω.
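A minimal sketch in Python (the helper name and example sets are illustrative, not from the text) of checking the two defining conditions: the cells are pairwise disjoint and their union is Ω.

```python
# Worlds are represented as plain integers.

def is_partition(cells, omega):
    cells = [set(c) for c in cells]
    pairwise_disjoint = all(not (a & b)
                            for i, a in enumerate(cells)
                            for b in cells[i + 1:])
    covers = set().union(*cells) == set(omega)
    return pairwise_disjoint and covers

Omega = {1, 2, 3, 4}
print(is_partition([{1, 2}, {3, 4}], Omega))      # True
print(is_partition([{1, 2}, {2, 3, 4}], Omega))   # False: the cells overlap
```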
12. As a consequence of Proposition 2.2, the agents' private information systems determine an a priori structure of propositions over the space of possible worlds regarding what they can know, including what mutual and common knowledge they potentially have. The world ω ∈ Ω which obtains determines a posteriori what individual, mutual and common knowledge agents in fact have. Hence, one can read ω ∈ Ki(A) as ‘i knows A at (possible world) ω’, ω ∈ KmN(A) as ‘A is mth level mutual knowledge for the agents of N at ω’, and so on. If ω obtains, then one can conclude that i does know A, that A is mth level mutual knowledge, and so on.
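Since the reading of ω ∈ Ki(A) does much of the work in what follows, here is a small Python sketch of the standard set-theoretic knowledge operator, Ki(A) = {ω : Hi(ω) ⊆ A}; the function name is mine, and the example partitions are those of note 18 below.

```python
# Agent i knows A at ω just in case i's information cell at ω is contained in A.
# Worlds are plain integers standing in for possible worlds.

def K(partition, A):
    """The set of worlds at which the agent with this partition knows A."""
    return {w for cell in partition for w in cell if cell <= A}

H1 = [{1, 2}, {3, 4}]
H2 = [{1, 2, 3}, {4}]
A = {1, 2, 3}
print(K(H1, A))         # {1, 2}: agent 1 knows A exactly at omega_1 and omega_2
print(K(H2, K(H1, A)))  # set(): at no world does agent 2 know that agent 1 knows A
```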
13. Thanks to Chris Miller and Jarah Evslin for suggesting the term ‘symmetric reasoner’ to describe the parity of reasoning powers that Lewis relies upon in his treatment of common knowledge. Lewis does not explicitly include the notion of A′-symmetric reasoning in his definition of common knowledge, but he makes use of the notion implicitly in his argument for how his definition of common knowledge generates the mutual knowledge hierarchy.
14. The meet M of a collection Hi, i ∈ N of partitions is the finest common coarsening of the partitions. More specifically, for any ω ∈ Ω, if M(ω) is the element of M containing ω, then
(i) Hi(ω) ⊆ M(ω) for all i ∈ N, and
(ii) for any other partition M′ satisfying (i), M(ω) ⊆ M′(ω).
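A rough computational sketch of this definition, assuming Ω is finite (the merging procedure, function name, and example are mine): the meet can be computed by repeatedly merging any cells of the current coarsening that overlap a common cell of one of the given partitions.

```python
def meet(*partitions):
    """Finest common coarsening of the given partitions of the same finite set."""
    cells = [set(c) for c in partitions[0]]
    changed = True
    while changed:
        changed = False
        for part in partitions[1:]:
            for block in part:
                overlapping = [c for c in cells if c & set(block)]
                if len(overlapping) > 1:
                    merged = set().union(*overlapping)
                    cells = [c for c in cells if c not in overlapping] + [merged]
                    changed = True
    return cells

# With the partitions of note 18 below, the meet is the trivial partition {Omega}:
H1 = [{1, 2}, {3, 4}]
H2 = [{1, 2, 3}, {4}]
print(meet(H1, H2))   # [{1, 2, 3, 4}]
```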
15. Bc denotes the complement of B, that is, Bc = Ω − B = {ω ∈ Ω : ω ∉ B}. Bc can be read “not-B”.
16. Gilbert does not elaborate further on what counts as epistemic normality.
17. Gilbert (1989, p. 193) also maintains that her account of common knowledge has the advantage of not requiring that the agents reason through an infinite hierarchy of propositions. On her account, the agents' smooth-reasoner counterparts do all the necessary reasoning for them. However, Gilbert fails to note that Aumann's and Lewis' accounts of common knowledge also have this advantage.
18. Suppose the following all hold:
Ω = {ω1, ω2, ω3, ω4},
H1 = {{ω1,ω2}, {ω3,ω4}}
H2 = {{ω1,ω2,ω3}, {ω4}}
μ(ωi) = 1/4
If E = {ω1,ω4}, then at ω1 we have:
q1(E) = μ(E | {ω1,ω2}) = 1/2, and q2(E) = μ(E | {ω1,ω2,ω3}) = 1/3
Moreover, at ω = ω1, Agent 1 knows that H2(ω) = {ω1,ω2,ω3}, so she knows that q2(E) = 1/3. Agent 2 knows at ω1 that either H1(ω) = {ω1,ω2} or H1(ω) = {ω3,ω4}, so either way he knows that q1(E) = 1/2. Hence the agents' posteriors are mutually known, and yet they are unequal. The reason for this is that the posteriors are not common knowledge. For Agent 2 does not know what Agent 1 thinks q2(E) is, since if ω = ω3, which is consistent with what Agent 2 knows, then Agent 1 will believe that q2(E) = 1/3 with probability 1/2 (if ω = ω3) and q2(E) = 1 with probability 1/2 (if ω = ω4).
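The arithmetic in this example can be checked directly; the short sketch below uses worlds 1–4 for ω1–ω4, and the function name is mine.

```python
from fractions import Fraction

mu = {w: Fraction(1, 4) for w in range(1, 5)}
H1 = [{1, 2}, {3, 4}]
H2 = [{1, 2, 3}, {4}]
E = {1, 4}

def posterior(partition, w):
    """q_i(E) at world w, i.e., mu(E | H_i(w))."""
    cell = next(c for c in partition if w in c)
    return sum(mu[x] for x in E & cell) / sum(mu[x] for x in cell)

print(posterior(H1, 1))   # 1/2: q1(E) at omega_1
print(posterior(H2, 1))   # 1/3: q2(E) at omega_1
print(posterior(H2, 3))   # 1/3: what q2(E) would be if omega_3 obtained
print(posterior(H2, 4))   # 1:   what q2(E) would be if omega_4 obtained
```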
19. Harsanyi (1968) is the most famous defender of the CPA. Indeed, Aumann (1974, 1987) calls the CPA the Harsanyi Doctrine in Harsanyi's honor.
20. Alan Hajek first pointed this out to the first author in conversation.
21. An agent's pure strategies in a noncooperative game are simply the alternative acts the agent might choose as defined by the game. A mixed strategy σk(·) is a probability distribution defined over k's pure strategies by some random experiment such as the toss of a coin or the spin of a roulette wheel. k plays each pure strategy skj with probability σk(skj) according to the outcome of the experiment, which is assumed to be probabilistically independent of the others' experiments. A strategy is completely mixed when each pure strategy has a positive probability of being the one selected by the mixing device.
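Purely as an illustration (the strategy labels and probabilities are invented), a mixed strategy can be represented as a probability distribution over pure strategies and realized by an independent random draw for each player.

```python
import random

def play_mixed(sigma):
    """Sample one pure strategy according to the mixed strategy sigma."""
    strategies = list(sigma)
    weights = [sigma[s] for s in strategies]
    return random.choices(strategies, weights=weights, k=1)[0]

# A completely mixed strategy: every pure strategy has positive probability.
sigma_k = {"s1": 0.5, "s2": 0.25, "s3": 0.25}
print(play_mixed(sigma_k))
```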
22. Lewis (1969), p. 76. Lewis gives a further definition of agents following a convention to a certain degree if only a certain percentage of the agents actually conform to the coordination equilibrium corresponding to the convention. See Lewis (1969, pp. 78-89).
23. To show that the containment can be proper and hence that rationalizability is a nontrivial notion, consider the 2-agent game with payoff structure defined by Figure 3.2a:
Figure 3.2a
In this game, s1 and s3 strictly dominate s2 for Lizzi, so Lizzi cannot play s2 on pain of violating Bayesian rationality. Joanna knows this, so Joanna knows that the only pure strategy profiles which are possible outcomes of the game will be among the six profiles in which Lizzi does not choose s2. In effect, the 3 × 3 game is reduced to the 2 × 3 game defined by Figure 3.2b:
Figure 3.2b
In this reduced game, s2 is strictly dominated for Joanna by s1, and so Joanna will rule out playing s2. Lizzi knows this, and so she rules out strategy combinations in which Joanna plays s2. The rationalizable strategy profiles are the four profiles that remain after deleting all of the strategy combinations in which either Lizzi or Joanna plays s2. In effect, common knowledge of Bayesian rationality reduces the 3 × 3 game of Figure 3.2a to the 2 × 2 game defined by Figure 3.2c:
Figure 3.2c
since Lizzi and Joanna both know that the only possible outcomes of the game are (s1, s1), (s1, s3), (s3, s1), and (s3, s3).
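Since Figure 3.2a is not reproduced in these notes, the following Python sketch uses hypothetical payoffs chosen only to reproduce the pattern described above: s1 and s3 strictly dominate s2 for Lizzi, and once Lizzi's s2 is deleted, Joanna's s2 is strictly dominated by her s1, so iterated elimination leaves exactly the four profiles listed.

```python
from itertools import product

U1 = [[3, 2, 3],   # Lizzi's payoffs: row = Lizzi's strategy, column = Joanna's
      [1, 1, 1],
      [3, 2, 3]]
U2 = [[3, 1, 2],   # Joanna's payoffs, same indexing (hypothetical numbers)
      [0, 4, 0],
      [2, 1, 3]]

def strictly_dominated(payoff, own, rivals):
    """Members of `own` strictly dominated by some pure strategy in `own`."""
    dominated = set()
    for s in own:
        if any(t != s and all(payoff(t, r) > payoff(s, r) for r in rivals)
               for t in own):
            dominated.add(s)
    return dominated

def iterated_elimination(U1, U2):
    rows = set(range(len(U1)))
    cols = set(range(len(U1[0])))
    while True:
        bad_rows = strictly_dominated(lambda i, j: U1[i][j], rows, cols)
        bad_cols = strictly_dominated(lambda j, i: U2[i][j], cols, rows)
        if not bad_rows and not bad_cols:
            return rows, cols
        rows -= bad_rows
        cols -= bad_cols

rows, cols = iterated_elimination(U1, U2)
# Indices 0 and 2 correspond to s1 and s3; the survivors are the four
# rationalizable profiles (s1, s1), (s1, s3), (s3, s1), (s3, s3).
print(sorted(product(rows, cols)))
```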
24. In their original papers, Bernheim (1984) and Pearce (1984) included in their definitions of rationalizability the requirement that the agents' probability distributions over their opponents satisfy probabilistic independence, that is, for each agent k and for each
s−k = (s1j1, … , sk−1jk−1, sk+1jk+1, … , snjn) ∈ S−k
k's joint probability must equal the product of k's marginal probabilities, that is,
μk(s−k) = μk(s1j1) · … · μk(sk−1jk−1) · μk(sk+1jk+1) · … · μk(snjn)
Brandenburger and Dekel (1987), Skyrms (1990), and Vanderschraaf (1995) all argue that the probabilistic independence requirement is not well-motivated, and do not include this requirement in their presentations of rationalizability. Bernheim (1984) calls a Bayes concordant system of beliefs a “consistent” system of beliefs. Since the term “consistent beliefs” is used in this paper to describe probability distributions that agree with respect to a mutual opponent's strategies, I use the term “Bayes concordant system of beliefs” rather than Bernheim's “consistent system of beliefs”.
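A minimal numerical sketch of the independence requirement (the agents, strategy labels, and probabilities are invented for illustration): the joint belief over an opponent profile is the product of the marginals, and the resulting numbers still form a probability distribution over S−k.

```python
from fractions import Fraction
from itertools import product

marginals = {            # k's marginal beliefs over each opponent's pure strategies
    "agent 1": {"s1": Fraction(7, 10), "s2": Fraction(3, 10)},
    "agent 3": {"s1": Fraction(1, 5), "s2": Fraction(4, 5)},
}

def joint(profile):
    """mu_k(s_-k) as the product of k's marginal probabilities."""
    p = Fraction(1)
    for agent, strategy in profile.items():
        p *= marginals[agent][strategy]
    return p

print(joint({"agent 1": "s1", "agent 3": "s2"}))   # 14/25
total = sum(joint({"agent 1": a, "agent 3": b})
            for a, b in product(marginals["agent 1"], marginals["agent 3"]))
print(total)   # 1: the product beliefs form a probability distribution over S_-k
```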
25. A mixed strategy is a probability distribution σk(·) defined over k's pure strategies by some random experiment such as the toss of a coin or the spin of a roulette wheel. k plays each pure strategy skj with probability σk(skj) according to the outcome of the experiment, which is assumed to be probabilistically independent of the others' experiments. A strategy is completely mixed when each pure strategy has a positive probability of being the one selected by the mixing device.
Nash (1950, 1951) originally developed the Nash equilibrium concept in terms of mixed strategies. In subsequent years, game theorists have realized that the Nash and more general correlated equilibrium concepts can be defined entirely in terms of agents' beliefs, without recourse to mixed strategies. See Aumann (1987), Brandenburger and Dekel (1988), and Skyrms (1991) for an extended discussion of equilibrium-in-beliefs.
26. Ron's private recommendations in effect partition Ω as follows:
- H1 = { {ω1,ω2}, {ω3} }, and
- H2 = { {ω1,ω3}, {ω2} }.
Given their private information, at each possible world ω to which an agent i assigns positive probability, following f maximizes i's expected utility. For instance, at ω = ω2,
E(u1(A1) | H1)(ω2) = ½·3 + ½·2 = 5/2 > 2 = ½·4 + ½·0 = E(u1(A2) | H1)(ω2) and
E(u2(A2) | H2)(ω2) = 4 > 3 = E(u2(A1) | H2)(ω2)
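The expected-utility comparisons above can be checked numerically. Since the payoff matrix appears only in a figure in the main text, the sketch below reverse-engineers a payoff assignment consistent with the numbers in this note; the u2 entries and the recommendation values not pinned down by the note are assumptions.

```python
from fractions import Fraction

half = Fraction(1, 2)

# u[(a1, a2)] = (u1, u2) when agent 1 plays a1 and agent 2 plays a2.
u = {
    ("A1", "A1"): (3, 3),
    ("A1", "A2"): (2, 4),
    ("A2", "A1"): (4, 2),
    ("A2", "A2"): (0, 0),
}

# Ron's recommendation f, read off the partitions H1 and H2 above;
# the assignments at omega_3 are assumptions consistent with the note.
f1 = {1: "A1", 2: "A1", 3: "A2"}
f2 = {1: "A1", 2: "A2", 3: "A1"}

# Agent 1 at omega_2: her cell is {omega_1, omega_2}, each world with probability 1/2.
exp_A1 = half * u[("A1", f2[1])][0] + half * u[("A1", f2[2])][0]
exp_A2 = half * u[("A2", f2[1])][0] + half * u[("A2", f2[2])][0]
print(exp_A1, exp_A2)   # 5/2 and 2: following f (playing A1) is optimal for agent 1

# Agent 2 at omega_2: the cell is {omega_2}, where agent 1 plays f1(omega_2) = A1.
print(u[("A1", "A2")][1], u[("A1", "A1")][1])   # 4 and 3: A2 is optimal for agent 2
```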
27. An outcome s1 of a game Pareto-dominates an outcome s2 if, and only if,
(i) E(uk(s1)) ≥ E(uk(s2)) for all k ∈ N.
s1 strictly Pareto-dominates s2 if the inequalities of (i) are all strict.
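A tiny sketch of the definition just given; the payoff vectors are arbitrary illustrations, each listing one agent's expected utility from an outcome.

```python
def pareto_dominates(u_s1, u_s2):
    """Condition (i): no agent is worse off under s1 than under s2."""
    return all(a >= b for a, b in zip(u_s1, u_s2))

def strictly_pareto_dominates(u_s1, u_s2):
    """Every inequality in (i) holds strictly."""
    return all(a > b for a, b in zip(u_s1, u_s2))

print(pareto_dominates([3, 3], [2, 3]))            # True
print(strictly_pareto_dominates([3, 3], [2, 3]))   # False: agent 2 is no better off
```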
28. While both the endogenous and the Aumann correlated equilibrium concepts generalize the Nash equilibrium, neither correlated equilibrium concept contains the other. See Chapter 2 of Vanderschraaf (1995) for examples which show this.
29. Aumann (1987) notes that it is possible to extend the definitions of Aumann correlated equilibrium and Hi-measurability to allow for cases in which Ω is infinite and the Hi's are not necessarily partitions. However, he argues that there is nothing to be gained conceptually by doing so.
30. In general, the method of backwards induction is undefined for games of imperfect information, although backwards induction reasoning can be applied to a limited extent in such games.
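For concreteness, here is a minimal sketch of backward induction on a finite game of perfect information (the tree encoding and the example are mine). The procedure evaluates each decision node separately, which is exactly what is unavailable when a player cannot tell which node of an information set she is at.

```python
def backward_induction(node):
    """Return the payoff vector reached under backward induction.

    A leaf is a tuple of payoffs (one per player); an internal node is
    (mover, {action: subtree}), and the mover picks the action whose
    subgame value is best for her.
    """
    if isinstance(node, tuple):          # leaf
        return node
    mover, branches = node
    return max((backward_induction(subtree) for subtree in branches.values()),
               key=lambda value: value[mover])

# Player 0 moves first, then player 1 observes that move and replies.
game = (0, {
    "L": (1, {"l": (2, 1), "r": (0, 0)}),
    "R": (1, {"l": (1, 2), "r": (3, 0)}),
})
print(backward_induction(game))   # (2, 1): player 0 plays L, anticipating the reply l
```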
31. By the elementary properties of the knowledge operator, we have that K2K1K2(Γ) ⊆ K2K1(Γ) and K1K2K1K2(Γ) ⊆ K1K2K1(Γ), so we needn't explicitly state that at I22, K2K1(Γ) and at I11, K1K2K1(Γ). By the same elementary properties, the knowledge assumptions at the latter two information sets imply that Fiona and Alan have third-order mutual knowledge of the game and second-order mutual knowledge of rationality. For instance, since K2K1(Γ) is given at I22, we have that K2K1K1(Γ) because K1(Γ) ⊆ K1K1(Γ) and so K2K1(Γ) ⊆ K2K1K1(Γ). The other statements which characterize third-order mutual knowledge of the game and second-order mutual knowledge of rationality are similarly derived.
32. The version of the example Rubinstein presents is more general than the version presented here. Rubinstein notes that this game is closely related to the coordinated attack problem analyzed in Halpern (1986).
33. In the terminology of decision theory, A is each agent's maximin strategy.
34. This could be achieved if the e-mail systems were constructed so that each nth confirmation is sent 2^−n seconds after receipt of the nth message.
Notes to Rubinstein's Proof
1. If this does not look immediately obvious, consider that either
E = [T2 = t − 1] = my (Lizzi's) tth confirmation was lost,
or
F = [T2 = t] = my tth confirmation was received and Joanna's tth confirmation was lost
must occur, and that μ1(T1 = t | E) = μ1(T1 = t | F) = 1 because Lizzi can see her own computer screen, so we can apply Bayes' Theorem as follows:
μ1(E | T1 = t) = μ1(T1 = t | E)μ1(E) / [μ1(T1 = t | E)μ1(E) + μ1(T1 = t | F)μ1(F)]
= μ1(E) / [μ1(E) + μ1(F)]
= ε / [ε + (1 − ε)ε]
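As a quick numerical sanity check of this conditional probability (assuming, as in Rubinstein's setup, that ε is the probability that any single message or confirmation is lost in transit):

```python
def prob_own_confirmation_lost(epsilon):
    """mu_1(E | T1 = t) = eps / (eps + (1 - eps) * eps), which exceeds 1/2 for 0 < eps < 1."""
    return epsilon / (epsilon + (1 - epsilon) * epsilon)

for eps in (0.01, 0.1, 0.5):
    print(eps, prob_own_confirmation_lost(eps))   # about 0.5025, 0.5263, 0.6667
```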