Stanford Encyclopedia of Philosophy

Notes to Common Knowledge

1. Thanks to Scott Boorman, Johan van Benthem, and Brian Skyrms, who called our attention to Friedell’s work.

2. Thanks to Alan Hájek for this example, the only example in this section which does not appear elsewhere in the literature.

3. The version of the story Littlewood analyzes involves a group of cannibals, some of whom are married to unfaithful wives, and a missionary who visits the group and makes a public announcement of the fact.

4. Robert Vanderschraaf reminded me in conversation that a crucial assumption in this problem is that the cook is telling the diners the truth, that is, the cook’s announcement generates common knowledge and not merely common belief that there is at least one messy individual. For if the agents believe the cook’s announcements even if the cook does not reliably tell the truth, then should the cook mischievously announce that there is at least one messy individual when in fact all are clean, all will wipe their faces at once.

5. The mutual knowledge characterized by (i), (ii), and (iii) is sufficient both to account for the agents’ following the \(D^1,D^2\)-outcome, and for their being able to predict each other’s moves. However, weaker knowledge assumptions imply that the agents will play \(D^1,D^2\), even if they might not both be able to predict this outcome before the start of play. As Fiona’s quoted argument implies, if both are rational, both know the game, and Fiona knows that Alan is rational and knows the game, then the \(D^1,D^2\)-outcome is the result, even if Alan does not know that Fiona is rational or knows the game.

6. Hume’s analysis of the Farmer’s Dilemma is perhaps the earliest example of a backwards induction argument applied to a sequential decision problem. See Skyrms (1998) and Vanderschraaf (1996) for more extended discussions of this argument.

7. See §3 for a formal definition of the Nash equilibrium concept.

8. Aumann (1976) himself gives a set-theoretic account of common knowledge, which has been generalized in several articles in the literature, including Monderer and Samet (1988) and Binmore and Brandenburger (1989). Vanderschraaf (1997) gives the set-theoretic formulation of Lewis’ account of common knowledge reviewed in this paper.

9. This result appears in several articles in the literature, including Monderer and Samet’s and Binmore and Brandenburger’s articles on common knowledge.

10. K3 abuses notation slightly, with ‘\(\mathbf{K}_i\mathbf{K}_j(A)\)’ for ‘\(\mathbf{K}_i (\mathbf{K}_j(A))\)’.

11. A partition of a set \(\Omega\) is a collection of sets \(\mathcal{H} = \{H_1, H_2 , \ldots \}\) such that \(H_i\cap H_j = \varnothing\) for \(i\ne j,\) and \(\bigcup_i H_i = \Omega\).

12. As a consequence of Proposition 2.2, the agents’ private information systems determine an a priori structure of propositions over the space of possible worlds regarding what they can know, including what mutual and common knowledge they potentially have. The world \(\omega \in \Omega\) which obtains determines a posteriori what individual, mutual and common knowledge agents in fact have. Hence, one can read \(\omega \in \mathbf{K}_i (A)\) as ‘\(i\) knows \(A\) at (possible world) \(\omega\)’, \(\omega \in \mathbf{K}^m_N(A)\) as ‘\(A\) is \(m\)th level mutual knowledge for the agents of \(N\) at \(\omega\)’, and so on. If \(\omega\) obtains, then one can conclude that \(i\) does know \(A\), that \(A\) is \(m\)th level mutual knowledge, and so on.

13. Thanks to Chris Miller and Jarah Evslin for suggesting the term ‘symmetric reasoner’ to describe the parity of reasoning powers that Lewis relies upon in his treatment of common knowledge. Lewis does not explicitly include the notion of \(A'\)-symmetric reasoning in his definition of common knowledge, but he makes use of the notion implicitly in his argument for how his definition of common knowledge generates the mutual knowledge hierarchy.

14. The meet \(\mathcal{M}\) of a collection \(\mathcal{H}_i, i \in N\) of partitions is the finest common coarsening of the partitions. More specifically, for any \(\omega \in \Omega\), if \(\mathcal{M}(\omega)\) is the element of \(\mathcal{M}\) containing \(\omega\), then

  1. \(\mathcal{H}_i (\omega) \subseteq \mathcal{M}(\omega)\) for all \(i \in N\), and
  2. For any other \(\mathcal{M}'\) satisfying (i), \(\mathcal{M}(\omega) \subseteq \mathcal{M}'(\omega)\).
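As an illustrative sketch (not part of the entry), the meet can be computed by linking any two worlds that share a cell in some agent’s partition and taking the resulting connected components; the example partitions below are the ones used in note 18, with the integers 1–4 standing in for \(\omega_1,\ldots,\omega_4\).

```python
# Sketch: compute the meet (finest common coarsening) of partitions.
# Two worlds lie in the same cell of the meet iff they are linked by a
# chain of worlds sharing a cell in at least one agent's partition.

def meet(partitions, omega):
    # union-find over worlds
    parent = {w: w for w in omega}

    def find(w):
        while parent[w] != w:
            parent[w] = parent[parent[w]]
            w = parent[w]
        return w

    def union(a, b):
        parent[find(a)] = find(b)

    for part in partitions:
        for c in part:
            c = list(c)
            for w in c[1:]:
                union(c[0], w)

    cells = {}
    for w in omega:
        cells.setdefault(find(w), set()).add(w)
    return [frozenset(c) for c in cells.values()]

# Example: the partitions of note 18.
H1 = [{1, 2}, {3, 4}]
H2 = [{1, 2, 3}, {4}]
print(meet([H1, H2], {1, 2, 3, 4}))  # -> one cell containing all four worlds
```

Here the meet is the trivial partition, since \(\omega_2\) and \(\omega_3\) share a cell in \(\mathcal{H}_2\) while \(\omega_3\) and \(\omega_4\) share a cell in \(\mathcal{H}_1\).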

15. \(B^c\) denotes the complement of \(B\), that is, \(B^c = \Omega - B = \{\omega \in \Omega :\omega \not\in B\}\). \(B^c\) can be read “not-\(B\)”.

16. Gilbert does not elaborate further on what counts as epistemic normality.

17. Gilbert (1989, p. 193) also maintains that her account of common knowledge has the advantage of not requiring that the agents reason through an infinite hierarchy of propositions. On her account, the agents’ smooth-reasoner counterparts do all the necessary reasoning for them. However, Gilbert fails to note that Aumann’s and Lewis’ accounts of common knowledge also have this advantage.

18. Suppose the following all hold:

\[\begin{align}\Omega &= \{\omega_1, \omega_2, \omega_3, \omega_4\}, \\\mathcal{H}_1 &= \{\{\omega_1,\omega_2\}, \{\omega_3,\omega_4\}\} \\\mathcal{H}_2 &= \{\{\omega_1,\omega_2,\omega_3\}, \{\omega_4\}\} \\\mu(\omega_i) &= 1/4 \end{align}\]

If \(E = \{\omega_1,\omega_4\}\), then at \(\omega_1\) we have:

\[\begin{align}q_1 (E) &= \mu(E \mid \{\omega_1,\omega_2\}) = 1/2, \text{ and} \\q_2 (E) &= \mu(E \mid \{\omega_1,\omega_2,\omega_3\}) = 1/3\end{align}\]

Moreover, at \(\omega = \omega_1\), Agent 1 knows that \(\mathcal{H}_2 (\omega) = \{\omega_1,\omega_2,\omega_3\}\), so she knows that \(q_2 (E) = 1/3\). Agent 2 knows at \(\omega_1\) that either \(\mathcal{H}_1 (\omega) = \{\omega_1,\omega_2\}\) or \(\mathcal{H}_1 (\omega) = \{\omega_3,\omega_4\}\), so either way he knows that \(q_1 (E) = 1/2\). Hence the agents’ posteriors are mutually known, and yet they are unequal. The reason for this is that the posteriors are not common knowledge. For Agent 2 does not know what Agent 1 thinks \(q_2 (E)\) is, since if \(\omega = \omega_3\), which is consistent with what Agent 2 knows, then Agent 1 will believe that \(q_2 (E) = 1/3\) with probability 1/2 (if \(\omega = \omega_3)\) and \(q_2 (E) = 1\) with probability 1/2 (if \(\omega = \omega_4)\).
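The numbers in this note can be checked mechanically. The sketch below (illustrative, not from the entry) uses the integers 1–4 for \(\omega_1,\ldots,\omega_4\) and assumes the uniform prior \(\mu(\omega_i) = 1/4\) stated above.

```python
# Mechanical check of the posteriors in this example.
from fractions import Fraction

mu = {w: Fraction(1, 4) for w in (1, 2, 3, 4)}
H1 = [{1, 2}, {3, 4}]        # Agent 1's information partition
H2 = [{1, 2, 3}, {4}]        # Agent 2's information partition
E = {1, 4}

def cell(partition, w):
    return next(c for c in partition if w in c)

def posterior(event, partition, w):
    c = cell(partition, w)
    return sum(mu[x] for x in event & c) / sum(mu[x] for x in c)

print(posterior(E, H1, 1))   # 1/2: q_1(E) at omega_1
print(posterior(E, H2, 1))   # 1/3: q_2(E) at omega_1
# q_1(E) = 1/2 at every world, so Agent 2 knows it without knowing omega:
print({posterior(E, H1, w) for w in (1, 2, 3, 4)})
# But q_2(E) differs between omega_3 and omega_4, which Agent 1 cannot tell
# apart if omega = omega_3, so Agent 2 cannot know what Agent 1 thinks it is:
print(posterior(E, H2, 3), posterior(E, H2, 4))      # 1/3 1
```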

19. Harsanyi (1968) is the most famous defender of the CPA. Indeed, Aumann (1974, 1987) calls the CPA the Harsanyi Doctrine in Harsanyi’s honor.

20. Alan Hájek first pointed this out to the first author inconversation.

21. An agent’s pure strategies in a noncooperative game are simply the alternative acts the agent might choose as defined by the game. A mixed strategy \(\sigma_k (\cdot)\) is a probability distribution defined over \(k\)’s pure strategies by some random experiment such as the toss of a coin or the spin of a roulette wheel. \(k\) plays each pure strategy \(s_{k j}\) with probability \(\sigma_k (s_{k j})\) according to the outcome of the experiment, which is assumed to be probabilistically independent of the others’ experiments. A strategy is completely mixed when each pure strategy has a positive probability of being the one selected by the mixing device.

22. Lewis (1969), p. 76. Lewis gives a further definition of agents following a convention to a certain degree if only a certain percentage of the agents actually conform to the coordination equilibrium corresponding to the convention. See Lewis (1969, pp. 78–89).

23. To show that the containment can be proper and hence that rationalizability is a nontrivial notion, consider the 2-agent game with payoff structure defined by Figure 3.2a:

                 Joanna
              \(s_1\)   \(s_2\)   \(s_3\)
Lizzi \(s_1\)  (4,3)    (1,2)    (3,4)
      \(s_2\)  (1,1)    (0,5)    (1,1)
      \(s_3\)  (3,4)    (1,3)    (4,3)

Figure 3.2a

In this game, \(s_1\) and \(s_3\) strictly dominate \(s_2\) for Lizzi, so Lizzi cannot play \(s_2\) on pain of violating Bayesian rationality. Joanna knows this, so Joanna knows that the only pure strategy profiles which are possible outcomes of the game will be among the six profiles in which Lizzi does not choose \(s_2\). In effect, the \(3 \times 3\) game is reduced to the \(2 \times 3\) game defined by Figure 3.2b:

                 Joanna
              \(s_1\)   \(s_2\)   \(s_3\)
Lizzi \(s_1\)  (4,3)    (1,2)    (3,4)
      \(s_3\)  (3,4)    (1,3)    (4,3)

Figure 3.2b

In this reduced game, \(s_2\) is strictly dominated for Joanna by \(s_1\), and so Joanna will rule out playing \(s_2\). Lizzi knows this, and so she rules out strategy combinations in which Joanna plays \(s_2\). The rationalizable strategy profiles are the four profiles that remain after deleting all of the strategy combinations in which either Lizzi or Joanna play \(s_2\). In effect, common knowledge of Bayesian rationality reduces the \(3 \times 3\) game of Figure 3.2a to the \(2 \times 2\) game defined by Figure 3.2c:

                 Joanna
              \(s_1\)   \(s_3\)
Lizzi \(s_1\)  (4,3)    (3,4)
      \(s_3\)  (3,4)    (4,3)

Figure 3.2c

since Lizzi and Joanna both know that the only possible outcomes of the game are \((s_1, s_1), (s_1, s_3), (s_3, s_1)\), and \((s_3, s_3)\).
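The stepwise reduction from Figure 3.2a to Figure 3.2c can be reproduced by iterated elimination of strictly dominated pure strategies. The following sketch (the encoding of the payoff table is illustrative, not from the entry) performs the elimination:

```python
# Iterated elimination of strictly dominated pure strategies for the game
# of Figure 3.2a; payoffs are given as (Lizzi's utility, Joanna's utility).
payoffs = {
    (1, 1): (4, 3), (1, 2): (1, 2), (1, 3): (3, 4),
    (2, 1): (1, 1), (2, 2): (0, 5), (2, 3): (1, 1),
    (3, 1): (3, 4), (3, 2): (1, 3), (3, 3): (4, 3),
}

def u(player, s_lizzi, s_joanna):
    return payoffs[(s_lizzi, s_joanna)][player]

def dominated(player, own, others):
    """Pure strategies of `player` strictly dominated by some other pure strategy."""
    out = set()
    for s in own:
        for t in own - {s}:
            if player == 0:   # Lizzi: compare rows against Joanna's strategies
                better = all(u(0, t, o) > u(0, s, o) for o in others)
            else:             # Joanna: compare columns against Lizzi's strategies
                better = all(u(1, o, t) > u(1, o, s) for o in others)
            if better:
                out.add(s)
                break
    return out

L, J = {1, 2, 3}, {1, 2, 3}
while True:
    dL, dJ = dominated(0, L, J), dominated(1, J, L)
    if not dL and not dJ:
        break
    L, J = L - dL, J - dJ

print(L, J)   # the rationalizable strategies: s_1 and s_3 for each player
```

The first pass deletes Lizzi’s \(s_2\); only then does Joanna’s \(s_2\) become strictly dominated, exactly as the note describes.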

24. In their original papers, Bernheim (1984) and Pearce (1984) included in their definitions of rationalizability the requirement that the agents’ probability distributions over their opponents satisfy probabilistic independence, that is, for each agent \(k\) and for each

\[\mathbf{s}_{-k} = (s_{1j_1}, \ldots ,s_{k-1j_{k-1}}, s_{k+1j_{k+1}}, \ldots ,s_{nj_{ n} }) \in S_{-k}\]

\(k\)’s joint probability must equal the product of \(k\)’s marginal probabilities, that is,

\[\mu_k (\mathbf{s}_{-k}) = \mu_k (s_{1j_1})\cdots \mu_k (s_{k-1j_{k-1}})\cdot \mu_k (s_{k+1j_{k+1}})\cdots \mu_k (s_{nj_{ n} })\]
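As a small illustration (not from the entry, with hypothetical numbers), a joint belief over two opponents’ strategy profiles satisfies probabilistic independence exactly when each joint probability factors into the product of its marginals:

```python
# Hypothetical example: agent k's independent belief over two opponents.
from itertools import product

marg_1 = {'s11': 0.6, 's12': 0.4}   # k's marginal over opponent 1's strategies
marg_2 = {'s21': 0.3, 's22': 0.7}   # k's marginal over opponent 2's strategies

# Joint belief mu_k over S_{-k}, built as a product of the marginals:
mu_k = {(a, b): marg_1[a] * marg_2[b] for a, b in product(marg_1, marg_2)}

# Recover each marginal from the joint and check the product identity:
for (a, b), p in mu_k.items():
    m1 = sum(q for (x, _), q in mu_k.items() if x == a)
    m2 = sum(q for (_, y), q in mu_k.items() if y == b)
    assert abs(p - m1 * m2) < 1e-12
print('joint equals product of marginals for every profile')
```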

Brandenburger and Dekel (1987), Skyrms (1990), and Vanderschraaf (1995) all argue that the probabilistic independence requirement is not well-motivated, and do not include this requirement in their presentations of rationalizability. Bernheim (1984) calls a Bayes concordant system of beliefs a “consistent” system of beliefs. Since the term “consistent beliefs” is used in this paper to describe probability distributions that agree with respect to a mutual opponent’s strategies, I use the term “Bayes concordant system of beliefs” rather than Bernheim’s “consistent system of beliefs”.

25. A mixed strategy is a probability distribution \(\sigma_k (\cdot)\) defined over \(k\)’s pure strategies by some random experiment such as the toss of a coin or the spin of a roulette wheel. \(k\) plays each pure strategy \(s_{kj}\) with probability \(\sigma_k (s_{kj})\) according to the outcome of the experiment, which is assumed to be probabilistically independent of the others’ experiments. A strategy is completely mixed when each pure strategy has a positive probability of being the one selected by the mixing device.

Nash (1950, 1951) originally developed the Nash equilibrium concept in terms of mixed strategies. In subsequent years, game theorists have realized that the Nash and more general correlated equilibrium concepts can be defined entirely in terms of agents’ beliefs, without recourse to mixed strategies. See Aumann (1987), Brandenburger and Dekel (1988), and Skyrms (1991) for an extended discussion of equilibrium-in-beliefs.

26. Ron’s private recommendations in effect partition \(\Omega\) as follows:

\[\begin{align}\mathcal{H}_1 &= \{ \{\omega_1,\omega_2\}, \{\omega_3\} \}, \text{ and} \\\mathcal{H}_2 &= \{ \{\omega_1,\omega_3\}, \{\omega_2\} \}.\end{align}\]

These partitions are diagrammed below:

[Figure: two boxes. One, labeled \(\mathcal{H}_1\), contains a box with \(\omega_1\) and \(\omega_2\) and a box with \(\omega_3\); the other, labeled \(\mathcal{H}_2\), contains a box with \(\omega_1\) and \(\omega_3\) and a box with \(\omega_2\).]

Partition diagram for \(\mathcal{H}_1\) and \(\mathcal{H}_2\)

Given their private information, at each possible world \(\omega\) to which an agent \(i\) assigns positive probability, following \(f\) maximizes \(i\)’s expected utility. For instance, at \(\omega = \omega_2\),

\[\begin{align}E(u_1 (A_1) \mid \mathcal{H}_1)(\omega_2) &= \tfrac{1}{2} \cdot 3 + \tfrac{1}{2} \cdot 2 = \tfrac{5}{2} \\ &\gt \tfrac{1}{2}\cdot 4 + \tfrac{1}{2}\cdot 0 = 2 = E(u_1 (A_2) \mid \mathcal{H}_1)(\omega_2)\end{align}\]

and

\[E(u_2 (A_2) \mid \mathcal{H}_2)(\omega_2) = 4 \gt 3 = E(u_2 (A_1) \mid \mathcal{H}_2)(\omega_2)\]

27. An outcome \(\mathbf{s}_1\) of a game Pareto-dominates an outcome \(\mathbf{s}_2\) if, and only if, \(E(u_k (\mathbf{s}_1)) \ge E(u_k (\mathbf{s}_2))\) for all \(k \in N\). \(\mathbf{s}_1\) strictly Pareto-dominates \(\mathbf{s}_2\) if these inequalities are all strict.

28. While both the endogenous and the Aumann correlated equilibrium concepts generalize the Nash equilibrium, neither correlated equilibrium concept contains the other. See Chapter 2 of Vanderschraaf (1995) for examples which show this.

29. Aumann (1987) notes that it is possible to extend the definitions of Aumann correlated equilibrium and \(\mathcal{H}_i\)-measurability to allow for cases in which \(\Omega\) is infinite and the \(\mathcal{H}_i\)’s are not necessarily partitions. However, he argues that there is nothing to be gained conceptually by doing so.

30. In general, the method of backwards induction is undefined for games of imperfect information, although backwards induction reasoning can be applied to a limited extent in such games.

31. By the elementary properties of the knowledge operator, we have that \(\mathbf{K}_2\mathbf{K}_1\mathbf{K}_2 (\Gamma) \subseteq \mathbf{K}_2\mathbf{K}_1 (\Gamma)\) and \(\mathbf{K}_1\mathbf{K}_2\mathbf{K}_1\mathbf{K}_2 (\Gamma) \subseteq \mathbf{K}_1\mathbf{K}_2\mathbf{K}_1 (\Gamma)\), so we needn’t explicitly state that at \(I^{22},\) \(\mathbf{K}_2\mathbf{K}_1 (\Gamma)\) and at \(I^{11},\) \(\mathbf{K}_1\mathbf{K}_2\mathbf{K}_1 (\Gamma)\). By the same elementary properties, the knowledge assumptions at the latter two information sets imply that Fiona and Alan have third-order mutual knowledge of the game and second-order mutual knowledge of rationality. For instance, since \(\mathbf{K}_2\mathbf{K}_1 (\Gamma)\) is given at \(I^{22}\), we have that \(\mathbf{K}_2\mathbf{K}_1\mathbf{K}_1 (\Gamma)\) because \(\mathbf{K}_1 (\Gamma) \subseteq \mathbf{K}_1\mathbf{K}_1 (\Gamma)\) and so \(\mathbf{K}_2\mathbf{K}_1 (\Gamma) \subseteq \mathbf{K}_2\mathbf{K}_1\mathbf{K}_1 (\Gamma)\). The other statements which characterize third-order mutual knowledge of the game and second-order mutual knowledge of rationality are similarly derived.

32. The version of the example Rubinstein presents is more general than the version presented here. Rubinstein notes that this game is closely related to the coordinated attack problem analyzed in Halpern (1986).

33. In the terminology of decision theory, \(A\) is each agent’s maximin strategy.

34. This could be achieved if the e-mail systems were constructed so that each \(n\)th confirmation is sent \(2^{-n}\) seconds after receipt of the \(n\)th message.

35. A blockchain is a distributed ledger constituted by a sequence of blocks of data. The way proof-of-work blockchains are constructed is compatible with different agents holding different versions of the ledger. Moreover, the set of agents (i.e., nodes of the blockchain) changes over time. Some agents are possibly dishonest, because of faultiness or because of maliciousness. Finally, the system is asynchronous (delivery time is bounded but uncertain). Yet a proof-of-work blockchain allows agents (nodes in the network) to reach a consensus about the distributed ledger. Halpern and Pass (2017) offer a knowledge-based analysis of the blockchain’s desirable properties that make it possible to reach a distributed consensus. The first property is T-consistency: a blockchain protocol is said to be T-consistent when, except for the last \(T\) transactions, all honest agents agree on the initial part of the ledger up to \(T\). T-consistency is not strong enough to make the blockchain a useful ledger for transactions, as nothing prevents a transaction \(x\) from being in \(T\) for agent \(i\) and yet never appearing on agent \(j\)’s ledger, because \(j\)’s ledger never grows to include the block in which \(x\) is to be recorded.

To avoid this, we need a second property, called \(\Delta\)-growth, stating that if \(i\) is honest and her ledger at time \(t\) has length \(N\), then all honest agents at time \(t + \Delta\) will have ledgers of length \(N\). A blockchain protocol that satisfies \(T\)-consistency and \(\Delta\)-growth guarantees that, within time \(\Delta\) and from that point in time onwards, all honest agents will know that within time \(\Delta\) and from that point in time onwards, all honest agents will know that… etc. This is called \(\Delta\text{-}\Box\)-common knowledge. There are two more factors to take into consideration. One is that the set of agents (nodes in the blockchain) is not constant. The result is shown to hold in that the “all honest agents” above should read as “all honest agents present in the blockchain at that particular point in time.” The way the result is preserved is by introducing indexical formulas and specifying semantical clauses indexed by specific agents. The second element is that, in actual blockchains, neither \(T\)-consistency nor \(\Delta\)-growth is guaranteed to hold except with high probability; thus the group attitude emerging from the blockchain is a probabilistic variant of \(\Delta\text{-}\Box\)-common knowledge, stating that within time \(\Delta\) and from that point in time onwards, all honest agents will know with probability \(p\) that within time \(\Delta\) and from that point in time onwards, all honest agents will know with probability \(p\) that… etc.

Notes to Rubinstein’s Proof

1. If this does not look immediately obvious, consider that either

\(E = [T_2 = t-1] =\) my (Lizzi’s) \(t\)th confirmation was lost,

or

\(F = [T_2 = t] =\) my \(t\)th confirmation was received and Joanna’s \(t\)th confirmation was lost

must occur, and that \(\mu_1 (T_1 = t \mid E) = \mu_1 (T_1 = t \mid F) = 1\) because Lizzi can see her own computer screen, so we can apply Bayes’ Theorem as follows:

\[\begin{align}\mu_1 (E \mid T_1 = t) &= \frac{\mu_1 (T_1 = t \mid E) \mu_1 (E)}{\mu_1 (T_1 = t \mid E) \mu_1 (E) + \mu_1 (T_1 = t \mid F) \mu_1 (F)} \\ &= \frac{\mu_1 (E)}{\mu_1 (E) + \mu_1 (F)} \\ &= \frac{\varepsilon}{\varepsilon + (1-\varepsilon)\varepsilon}\end{align}\]
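As a numerical sanity check (not part of the proof), the posterior can be evaluated for a sample loss probability, here the hypothetical value \(\varepsilon = 1/10\):

```python
# Numerical check of the Bayes computation above, for a sample loss rate.
from fractions import Fraction

eps = Fraction(1, 10)            # epsilon: probability a message is lost
p_E = eps                        # E: my t-th confirmation was lost
p_F = (1 - eps) * eps            # F: mine arrived, Joanna's was lost
posterior_E = p_E / (p_E + p_F)  # mu_1(E | T_1 = t)
print(posterior_E)               # epsilon/(epsilon + (1-epsilon)epsilon) = 10/19

# The expression simplifies to 1/(2 - epsilon), which always exceeds 1/2:
assert posterior_E == 1 / (2 - eps)
print(posterior_E > Fraction(1, 2))   # True
```

Since \(\varepsilon/(\varepsilon + (1-\varepsilon)\varepsilon) = 1/(2-\varepsilon) > 1/2\) for any \(\varepsilon \in (0,1)\), Lizzi always regards the loss of her own confirmation as the more probable event.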

Copyright © 2022 by
Peter Vanderschraaf
Giacomo Sillari <gsillari@luiss.it>


The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

