Stanford Encyclopedia of Philosophy

Common Knowledge

First published Tue Aug 28, 2001; substantive revision Fri Aug 5, 2022

A proposition \(A\) is mutual knowledge among a set of agents if each agent knows that \(A\). Mutual knowledge by itself implies nothing about what, if any, knowledge anyone attributes to anyone else. Suppose each student arrives for a class meeting knowing that the instructor will be late. That the instructor will be late is mutual knowledge, but each student might think only she knows the instructor will be late. However, if one of the students says openly “Peter told me he will be late again,” then each student knows that each student knows that the instructor will be late, each student knows that each student knows that each student knows that the instructor will be late, and so on, ad infinitum. The announcement made the mutually known fact common knowledge among the students.

Common knowledge is a phenomenon which underwrites much of social life. In order to communicate or otherwise coordinate their behavior successfully, individuals typically require mutual or common understandings or background knowledge. Indeed, if a particular interaction results in “failure”, the usual explanation for this is that the agents involved did not have the common knowledge that would have resulted in success. If a married couple are separated in a department store, they stand a good chance of finding one another because their common knowledge of each other’s tastes and experiences leads them each to look for the other in a part of the store both know that both would tend to frequent. Since the spouses both love cappuccino, each expects the other to go to the coffee bar, and they find one another. But in a less happy case, if a pedestrian causes a minor traffic jam by crossing against a red light, she explains her mistake as the result of her not noticing, and therefore not knowing, the status of the traffic signal that all the motorists knew. The spouses coordinate successfully given their common knowledge, while the pedestrian and the motorists miscoordinate as the result of a breakdown in common knowledge.

Given the importance of common knowledge in social interactions, it is remarkable that only quite recently have philosophers and social scientists attempted to analyze the concept. David Hume (1740) was perhaps the first to make explicit reference to the role of mutual knowledge in coordination. In his account of convention in A Treatise of Human Nature, Hume argued that a necessary condition for coordinated activity was that agents all know what behavior to expect from one another. Without the requisite mutual knowledge, Hume maintained, mutually beneficial social conventions would disappear. Much later, J. E. Littlewood (1953) presented some examples of common-knowledge-type reasoning, and Thomas Schelling (1960) and John Harsanyi (1967–1968) argued that something like common knowledge is needed to explain certain inferences people make about each other. The philosopher Robert Nozick describes, but does not develop, the notion in his doctoral dissertation (Nozick 1963), while the first mathematical analysis and application of the notion of common knowledge is found in the technical report by Friedell (1967), then published as (Friedell 1969).[1] The first full-fledged philosophical analysis of common knowledge was offered by David Lewis (1969) in the monograph Convention. Stephen Schiffer (1972), Robert Aumann (1976), and Gilbert Harman (1977) independently gave alternate definitions of common knowledge. Jon Barwise (1988, 1989) gave a precise formulation of Harman’s intuitive account. Throughout the 1980s a number of epistemic logicians, both from philosophy and from computer science, studied the logical structure of common knowledge, and the interested reader should consult the relevant portions of the two important monographs (Fagin et al. 1995) and (Meyer and Van der Hoek 1995).
Margaret Gilbert (1989) proposed a somewhat different account of common knowledge which she argues is preferable to the standard account. Others have developed accounts of mutual knowledge, approximate common knowledge, and common belief which require less stringent assumptions than the standard account, and which serve as more plausible models of what agents know in cases where strict common knowledge seems impossible (Brandenburger and Dekel 1987, Monderer and Samet 1989, Rubinstein 1992). The analysis and applications of common knowledge and related multi-agent knowledge concepts have become a lively field of research.

The purpose of this essay is to give an overview of some of the most important results stemming from this contemporary research. The topics reviewed in each section of this essay are as follows: Section 1 gives motivating examples which illustrate a variety of ways in which the actions of agents depend crucially upon their having, or lacking, certain common knowledge. Section 2 discusses alternative analyses of common knowledge. Section 3 reviews applications of multi-agent knowledge concepts, particularly to game theory (von Neumann and Morgenstern 1944), in which common knowledge assumptions have been found to have great importance in justifying solution concepts for mathematical games. Section 4 discusses skeptical doubts about the attainability of common knowledge. Finally, Section 5 discusses the common belief concept which results from weakening the assumptions of Lewis’ account of common knowledge.

1. Motivating Examples

Most of the examples in this section are familiar in the common knowledge literature, although some of the details and interpretations presented here are new. Readers may want to ask themselves what, if any, distinctive aspects of mutual and common knowledge reasoning each example illustrates.

1.1 The Clumsy Waiter

A waiter serving dinner slips, and spills gravy on a guest’s white silk evening gown. The guest glares at the waiter, and the waiter declares “I’m sorry. It was my fault.” Why did the waiter say that he was at fault? He knew that he was at fault, and he knew from the guest’s angry expression that she knew he was at fault. However, the sorry waiter wanted assurance that the guest knew that he knew he was at fault. By saying openly that he was at fault, the waiter knew that the guest knew what he wanted her to know, namely, that he knew he was at fault. Note that the waiter’s declaration established at least three levels of nested knowledge.[2]

Certain assumptions are implicit in the preceding story. In particular, the waiter must know that the guest knows he has spoken the truth, and that she can draw the desired conclusion from what he says in this context. More fundamentally, the waiter must know that if he announces “It was my fault” to the guest, she will interpret his intended meaning correctly and will infer what his making this announcement ordinarily implies in this context. This in turn implies that the guest must know that if the waiter announces “It was my fault” in this context, then the waiter indeed knows he is at fault. Then on account of his announcement, the waiter knows that the guest knows that he knows he was at fault. The waiter’s announcement was meant to generate higher-order levels of knowledge of a fact each already knew.

Just a slight strengthening of the stated assumptions results in even higher levels of nested knowledge. Suppose the waiter and the guest each know that the other can infer what he infers from the waiter’s announcement. Can the guest now believe that the waiter does not know that she knows that he knows he is at fault? If the guest considers this question, she reasons that if the waiter falsely believes it is possible that she does not know that he knows he is at fault, then the waiter must believe it to be possible that she cannot infer that he knows he is at fault from his own declaration. Since she knows she can infer that the waiter knows he is at fault from his declaration, she knows that the waiter knows she can infer this, as well. Hence the waiter’s announcement establishes the fourth-order knowledge claim: The guest knows that the waiter knows that she knows that he knows he is at fault. By similar, albeit lengthier, arguments, the agents can verify that corresponding knowledge claims of even higher order must also obtain under these assumptions.

1.2 The Barbecue Problem

This is a variation of an example first published by Littlewood (1953), although he notes that his version of the example was already well-known at the time.[3] \(N\) individuals enjoy a picnic supper together which includes barbecued spareribs. At the end of the meal, \(k \ge 1\) of these diners have barbecue sauce on their faces. Since no one can see her own face, none of the messy diners knows whether he or she is messy. Then the cook who served the spareribs returns with a carton of ice cream. Amused by what he sees, the cook rings the dinner bell and makes the following announcement: “At least one of you has barbecue sauce on her face. I will ring the dinner bell over and over, until anyone who is messy has wiped her face. Then I will serve dessert.” For the first \(k - 1\) rings, no one does anything. Then, at the \(k\)th ring, each of the messy individuals suddenly reaches for a napkin, and soon afterwards, the diners are all enjoying their ice cream.

How did the messy diners finally realize that their faces needed cleaning? The \(k = 1\) case is easy, since in this case, the lone messy individual will realize he is messy immediately, since he sees that everyone else is clean. Consider the \(k = 2\) case next. At the first ring, messy individual \(i_1\) knows that one other person, \(i_2\), is messy, but does not yet know about himself. At the second ring, \(i_1\) realizes that he must be messy, since had \(i_2\) been the only messy one, \(i_2\) would have known this after the first ring when the cook made his announcement, and would have cleaned her face then. By a symmetric argument, messy diner \(i_2\) also concludes that she is messy at the second ring, and both pick up a napkin at that time.

The general case follows by induction. Suppose that if \(k = j\), then each of the \(j\) messy diners can determine that he is messy after \(j\) rings. Then if \(k = j + 1\), then at the \(j + 1\)st ring, each of the \(j + 1\) messy individuals will realize that he is messy. For if he were not messy, then the other \(j\) messy ones would have all realized their messiness at the \(j\)th ring and cleaned themselves then. Since no one cleaned herself after the \(j\)th ring, at the \(j + 1\)st ring each messy person will conclude that someone besides the other \(j\) messy people must also be messy, namely, himself.

The “paradox” of this argument is that for \(k \gt 1\), like the case of the clumsy waiter of Example 1.1, the cook’s announcement told the diners something that each already knew. Yet apparently the cook’s announcement also gave the diners useful information. How could this be? By announcing a fact already known to every diner, the cook made this fact common knowledge among them, enabling each of them to eventually deduce the condition of his own face after sufficiently many rings of the bell.[4]
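The induction can be checked mechanically with a small possible-worlds model (a sketch, not drawn from the original text): worlds are assignments of clean/messy faces, the cook's announcement eliminates the all-clean world, and each silent ring eliminates every world in which some diner would already have known her own condition.

```python
from itertools import product

def barbecue(actual):
    """Possible-worlds model of the Barbecue Problem. `actual` is a tuple of
    booleans (True = messy). Returns (ring at which the messy diners wipe,
    list of who wipes)."""
    n = len(actual)
    assert any(actual), "the cook's announcement presupposes a messy diner"
    # The announcement makes it common knowledge that someone is messy,
    # eliminating the all-clean world.
    worlds = {w for w in product([False, True], repeat=n) if any(w)}
    ring = 0
    while True:
        ring += 1

        def knows_messy(i, w):
            # Diner i cannot see her own face: she considers possible every
            # surviving world that agrees with w on everyone else's face.
            candidates = [v for v in worlds
                          if all(v[j] == w[j] for j in range(n) if j != i)]
            return all(v[i] for v in candidates)

        wipers = [i for i in range(n) if knows_messy(i, actual)]
        if wipers:
            return ring, wipers
        # Public silence at this ring: eliminate every world in which some
        # diner would already have known she was messy.
        worlds = {w for w in worlds
                  if not any(knows_messy(i, w) for i in range(n))}

print(barbecue((True, False, True)))  # → (2, [0, 2])
```

With \(k\) messy diners the wipe comes at exactly the \(k\)th ring, matching the induction.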

1.3 The Farmer’s Dilemma

Does meeting one’s obligations to others serve one’s self-interest? Plato and his successors recognized that in certain cases, the answer seems to be “No.” Hobbes (1651, pp. 101–102) considers the challenge of a “Foole”, who claims that it is irrational to honor an agreement made with another who has already fulfilled his part of the agreement. Noting that in this situation one has gained all the benefit of the other’s compliance, the Foole contends that it would now be best for him to break the agreement, thereby saving himself the costs of compliance. Of course, if the Foole’s analysis of the situation is correct, then would the other party to the agreement not anticipate the Foole’s response to agreements honored, and act accordingly?

Hume (1740, pp. 520–521) takes up this question, using an example: Two neighboring farmers each expect a bumper crop of corn. Each will require his neighbor’s help in harvesting his corn when it ripens, or else a substantial portion will rot in the field. Since their corn will ripen at different times, the two farmers can ensure full harvests for themselves by helping each other when their crops ripen, and both know this. Yet the farmers do not help each other. For the farmer whose corn ripens later reasons that if she were to help the other farmer, then when her corn ripens he would be in the position of Hobbes’ Foole, having already benefited from her help. He would no longer have anything to gain from her, so he would not help her, sparing himself the hard labor of a second harvest. Since she cannot expect the other farmer to return her aid when the time comes, she will not help when his corn ripens first, and of course the other farmer does not help her when her corn ripens later.

The structure of Hume’s Farmers’ Dilemma problem can be summarized using the following tree diagram:

[extensive form game tree omitted]

Figure 1.1a

This tree is an example of a game in extensive form. At each stage \(i\), the agent who moves can either choose \(C^i\), which corresponds to helping or cooperating, or \(D^i\), which corresponds to not helping or defecting. The relative preferences of the two agents over the various outcomes are reflected by the ordered pairs of payoffs each receives at any particular outcome. If, for instance, Fiona chooses \(C^1\) and Alan chooses \(D^2\), then Fiona’s payoff is 0, her worst payoff, and Alan’s is 4, his best payoff. In a game such as the Figure 1.1a game, agents are (Bayesian) rational if each chooses an act that maximizes her expected payoff, given what she knows.

In the Farmers’ Dilemma game, following the \(C^1,C^2\)-path is strictly better for both farmers than following the \(D^1,D^2\)-path. However, Fiona chooses \(D^1\), as the result of the following simple argument: “If I were to choose \(C^1\), then Alan, who is rational and who knows the payoff structure of the game, would choose \(D^2\). I am also rational and know the payoff structure of the game. So I should choose \(D^1\).” Since Fiona knows that Alan is rational and knows the game’s payoffs, she concludes that she need only analyze the reduced game in the following figure:

[reduced game tree omitted]

Figure 1.1b

In this reduced game, Fiona is certain to gain a strictly higher payoff by choosing \(D^1\) than if she chooses \(C^1\), so \(D^1\) is her unique best choice. Of course, when Fiona chooses \(D^1\), Alan, being rational, responds by choosing \(D^2\). If Fiona and Alan know: (i) that they are both rational, (ii) that they both know the payoff structure of the game, and (iii) that they both know (i) and (ii), then they both can predict what the other will do at every node of the Figure 1.1a game, and conclude that they can rule out the \(D^1,C^2\)-branch of the Figure 1.1b game and analyze just the reduced game of the following figure:

[reduced game tree omitted]

Figure 1.1c

On account of this mutual knowledge, both know that Fiona will choose \(D^1\), and that Alan will respond with \(D^2\). Hence, the \(D^1,D^2\)-outcome results if the Farmers’ Dilemma game is played by agents having this mutual knowledge, though it is suboptimal since both agents would fare better at the \(C^1,C^2\)-branch.[5] This argument, which in its essentials is Hume’s argument, is an example of a standard technique for solving sequential games known as backwards induction.[6] The basic idea behind backwards induction is that the agents engaged in a sequential game deduce how each will act throughout the entire game by ruling out the acts that are not payoff-maximizing for the agents who would move last, then ruling out the acts that are not payoff-maximizing for the agents who would move next-to-last, and so on. Clearly, backwards induction arguments rely crucially upon what, if any, mutual knowledge the agents have regarding their situation, and they typically require the agents to evaluate the truth values of certain subjunctive conditionals, such as “If I (Fiona) were to choose \(C^1\), then Alan would choose \(D^2\)”.
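The backwards induction reasoning can be sketched as a recursion over the game tree. Only the \(C^1,D^2\) payoffs (0 for Fiona, 4 for Alan) and the stated orderings appear in the text; the other payoff pairs below are illustrative assumptions consistent with them.

```python
def backwards_induction(node):
    """node is either a terminal payoff pair or (player, {move: subtree}).
    Returns (sequence of moves played, resulting payoff pair)."""
    if not isinstance(node[1], dict):   # terminal payoff pair
        return [], node
    player, moves = node
    best = None
    for move, subtree in moves.items():
        path, payoffs = backwards_induction(subtree)
        # The mover keeps whichever continuation maximizes her own payoff.
        if best is None or payoffs[player] > best[1][player]:
            best = ([move] + path, payoffs)
    return best

# Fiona moves first (player 0), Alan second (player 1). The (3,3) and (2,2)
# pairs are assumed; the text fixes only (C1,D2) = (0,4) and the orderings.
farmers_dilemma = (0, {
    "C1": (1, {"C2": (3, 3), "D2": (0, 4)}),
    "D1": (1, {"C2": (4, 0), "D2": (2, 2)}),
})
print(backwards_induction(farmers_dilemma))  # → (['D1', 'D2'], (2, 2))
```

Alan defects at either of his nodes, so Fiona defects first, reproducing Hume's conclusion.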

1.4 The Centipede

The mutual knowledge assumptions required to construct a backwards induction solution to a game become more complex as the number of stages in the game increases. To see this, consider the sequential Centipede game depicted in the following figure:

[Centipede game tree omitted]

Figure 1.2

At each stage \(i\), the agent who moves can either choose \(R^i\), which in the first three stages gives the other agent an opportunity to move, or \(L^i\), which ends the game.

Like the Farmers’ Dilemma, this game is a commitment problem for the agents. If each agent could trust the other to choose \(R^i\) at each stage, then they would each expect to receive a payoff of 3. However, Alan chooses \(L^1\), leaving each with a payoff of only 1, as the result of the following backwards induction argument: “If node \(n_4\) were to be reached, then Fiona (being rational) would choose \(L^4\). I, knowing this, would (being rational) choose \(L^3\) if node \(n_3\) were to be reached. Fiona, knowing this, would (being rational) choose \(L^2\) if node \(n_2\) were to be reached. Hence, I (being rational) should choose \(L^1\).” To carry out this backwards induction argument, Alan implicitly assumes that: (i) he knows that Fiona knows he is rational, and (ii) he knows that Fiona knows that he knows she is rational. Put another way, for Alan to carry out the backwards induction argument, at node \(n_1\) he must know what Fiona must know at node \(n_2\) to make \(L^2\) her best response should \(n_2\) be reached. While in the Farmers’ Dilemma Fiona needed only first-order knowledge of Alan’s rationality and second-order knowledge of Alan’s knowledge of the game to derive the backwards induction solution, in the Figure 1.2 game, for Alan to be able to derive the backwards induction solution, the agents must have third-order mutual knowledge of the game and second-order mutual knowledge of rationality, and Alan must have fourth-order knowledge of this mutual knowledge of the game and third-order knowledge of their mutual knowledge of rationality. This argument also involves several counterfactuals, since to construct it the agents must be able to evaluate conditionals of the form, “If node \(n_i\) were to be reached, Alan (Fiona) would choose \(L^i (R^i)\)”, which for \(i \gt 1\) are counterfactual, since third-order mutual knowledge of rationality implies that nodes \(n_2\), \(n_3\), and \(n_4\) are never reached.
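The same reasoning can be run iteratively from the last node. The text fixes only the cooperative payoff of 3 for each agent and the payoff of 1 each at \(L^1\); the intermediate payoff pairs below are assumptions chosen so that each \(L^i\) is the mover's strict best reply, as the argument requires.

```python
# Alan = player 0, Fiona = player 1. Each entry is (mover, payoffs if the
# mover stops with L^i at that node); the terminal payoff is reached if
# everyone plays R^i. Pairs other than (1, 1) and (3, 3) are assumed.
nodes = [(0, (1, 1)), (1, (0, 2)), (0, (3, 1)), (1, (2, 4))]
pass_payoff = (3, 3)

def solve_centipede(nodes, pass_payoff):
    """Backwards induction: at each node, working from the last, the mover
    compares stopping (L) with the payoff of the continuation (R)."""
    cont = pass_payoff
    plan = []
    for mover, stop in reversed(nodes):
        choice = "L" if stop[mover] > cont[mover] else "R"
        plan.append(choice)
        if choice == "L":
            cont = stop
    return list(reversed(plan)), cont

print(solve_centipede(nodes, pass_payoff))  # → (['L', 'L', 'L', 'L'], (1, 1))
```

Every node resolves to "stop", so Alan ends the game at once with a payoff of 1 each, as in the argument above.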

The method of backwards induction can be applied to any sequential game of perfect information, in which the agents can observe each other’s moves in turn and can recall the entire history of play. However, as the number of potential stages of play increases, the backwards induction argument evidently becomes harder to construct. This raises certain questions: (1) What precisely are the mutual or common knowledge assumptions that are required to justify the backwards induction argument for a particular sequential game? (2) As a sequential game increases in complexity, would we expect the mutual knowledge that is required for backwards induction to start to fail?

1.5 The Department Store

When a man loses his wife in a department store without any prior understanding on where to meet if they get separated, the chances are good that they will find each other. It is likely that each will think of some obvious place to meet, so obvious that each will be sure that it is “obvious” to both of them. One does not simply predict where the other will go, which is wherever the first predicts the second to predict the first to go, and so ad infinitum. Not “What would I do if I were she?” but “What would I do if I were she wondering what she would do if she were wondering what I would do if I were she … ?”—Thomas Schelling, The Strategy of Conflict

Schelling’s department store problem is an example of a pure coordination problem, that is, an interaction problem in which the interests of the agents coincide perfectly. Schelling (1960) and Lewis (1969), who were the first to make explicit the role common knowledge plays in social coordination, were also among the first to argue that coordination problems can be modeled using the analytic vocabulary of game theory. A very simple example of such a coordination problem is given in the next figure:

                 Robert
           \(s_1\)   \(s_2\)   \(s_3\)   \(s_4\)
Liz \(s_1\)  (1,1)   (0,0)   (0,0)   (0,0)
    \(s_2\)  (0,0)   (1,1)   (0,0)   (0,0)
    \(s_3\)  (0,0)   (0,0)   (1,1)   (0,0)
    \(s_4\)  (0,0)   (0,0)   (0,0)   (1,1)

\(s_i =\) search on floor \(i\), \(1 \leq i \leq 4\)

Figure 1.3

The matrix of Figure 1.3 is an example of a game in strategic form. At each outcome of the game, which corresponds to a cell in the matrix, the row (column) agent receives as payoff the first (second) element of the ordered pair in the corresponding cell. However, in strategic form games, each agent chooses without first being able to observe the choices of any other agent, so that all must choose as if they were choosing simultaneously. The Figure 1.3 game is a game of pure coordination (Lewis 1969), that is, a game in which at each outcome, each agent receives exactly the same payoff. One interpretation of this game is that Schelling’s spouses, Liz and Robert, are searching for each other in the department store with four floors, and they find each other if they go to the same floor. Four outcomes at which the spouses coordinate correspond to the strategy profiles \((s_j, s_j), 1 \le j \le 4\), of the Figure 1.3 game. These four profiles are strict Nash equilibria (Nash 1950, 1951) of the game, that is, each agent has a decisive reason to follow her end of one of these strategy profiles provided that the other also follows this profile.[7]

The difficulty the agents face is trying to select an equilibrium to follow. For suppose that Robert hopes to coordinate with Liz on a particular equilibrium of the game, say \((s_2, s_2)\). Robert reasons as follows: “Since there are several strict equilibria we might follow, I should follow my end of \((s_2, s_2)\) if, and only if, I have sufficiently high expectations that Liz will follow her end of \((s_2, s_2)\). But I can only have sufficiently high expectations that Liz will follow \((s_2, s_2)\) if she has sufficiently high expectations that I will follow \((s_2, s_2)\). For her to have such expectations, Liz must have sufficiently high (second-order) expectations that I have sufficiently high expectations that she will follow \((s_2, s_2)\), for if Liz doesn’t have these (second-order) expectations, then she will believe I don’t have sufficient reason to follow \((s_2, s_2)\) and may therefore deviate from \((s_2, s_2)\) herself. So I need to have sufficiently high (third-order) expectations that Liz has sufficiently high (second-order) expectations that I have sufficiently high expectations that she will follow \((s_2, s_2)\), which involves her in fourth-order expectations regarding me, which involves me in fifth-order expectations regarding Liz, and so on.” What would suffice for Robert, and Liz, to have decisive reason to follow \((s_2, s_2)\) is that they each know that the other knows that … that the other will follow \((s_2, s_2)\) for any number of levels of knowledge, which is to say that between Liz and Robert it is common knowledge that they will follow \((s_2, s_2)\). If agents follow a strict equilibrium in a pure coordination game as a consequence of their having common knowledge of the game, their rationality and their intentions to follow this equilibrium, and no other, then the agents are said to be following a Lewis-convention (Lewis 1969).
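A strict Nash equilibrium check is simple to state in code. The game below is a minimal four-floor matching game in the spirit of the department store problem (the payoff values are illustrative, with both spouses rewarded only for meeting):

```python
def strict_nash(payoffs):
    """payoffs[(i, j)] = (row payoff, column payoff). A profile (i, j) is a
    strict Nash equilibrium if each player does strictly worse by deviating
    unilaterally."""
    rows = {i for i, _ in payoffs}
    cols = {j for _, j in payoffs}
    equilibria = []
    for i in rows:
        for j in cols:
            r, c = payoffs[(i, j)]
            if (all(payoffs[(k, j)][0] < r for k in rows if k != i) and
                    all(payoffs[(i, k)][1] < c for k in cols if k != j)):
                equilibria.append((i, j))
    return sorted(equilibria)

# Liz and Robert meet only if they search the same floor.
matching = {(i, j): (1, 1) if i == j else (0, 0)
            for i in range(1, 5) for j in range(1, 5)}
print(strict_nash(matching))  # → [(1, 1), (2, 2), (3, 3), (4, 4)]
```

The check confirms the diagonal profiles \((s_j, s_j)\) are the strict equilibria, but it cannot say which one the spouses should select; that is exactly the problem Robert's regress of expectations raises.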

Lewis’ theory of convention applies to a more general class of games than pure coordination games, but pure coordination games already model a variety of important social interactions. In particular, Lewis models conventions of language as equilibrium points of a pure coordination game. The role common knowledge plays in games of pure coordination sketched above of course raises further questions: (1) Can people ever attain the common knowledge which characterizes a Lewis-convention? (2) Would less stringent epistemic assumptions suffice to justify Nash equilibrium behavior in a coordination problem?

2. Alternative Accounts of Common Knowledge

Informally, a proposition \(A\) is mutually known among a set of agents if each agent knows that \(A\). Mutual knowledge by itself implies nothing about what, if any, knowledge anyone attributes to anyone else. Suppose each student arrives for a class meeting knowing that the instructor will be late. That the instructor will be late is mutual knowledge, but each student might think only she knows the instructor will be late. However, if one of the students says openly “Peter told me he will be late again,” then the mutually known fact is now commonly known. Each student now knows that each student knows that the instructor will be late, and so on, ad infinitum. The agents have common knowledge in the sense articulated informally by Schelling (1960), and more precisely by Lewis (1969) and Schiffer (1972). Schiffer uses the formal vocabulary of epistemic logic (Hintikka 1962) to state his definition of common knowledge. Schiffer’s general approach was to augment a system of sentential logic with a set of knowledge operators corresponding to a set of agents, and then to define common knowledge as a hierarchy of propositions in the augmented system. Bacharach (1992) and Bicchieri (1993) adopt this approach, and develop logical theories of common knowledge which include soundness and completeness theorems in the style of (Fagin et al. 1995). One can also develop formal accounts of common knowledge in set-theoretic terms, as it was done in the early Friedell (1969) and in the economic literature after Aumann (1976). Such an approach, easily proven to be equivalent to the ones cast in epistemic logic, is taken also in this article.[8]

2.1 The Hierarchical Account

Monderer and Samet (1988) and Binmore and Brandenburger (1989) give a particularly elegant set-theoretic definition of common knowledge. I will review this definition here, and then show that it is logically equivalent to the ‘\(i\) knows that \(j\) knows that \(\ldots\) \(k\) knows that \(A\)’ hierarchy that Lewis (1969) and Schiffer (1972) argue characterizes common knowledge.[9]

Some preliminary notions must be stated first. Following C. I. Lewis (1943–1944) and Carnap (1947), propositions are formally subsets of a set \(\Omega\) of state descriptions or possible worlds. One can think of the elements of \(\Omega\) as representing Leibniz’s possible worlds or Wittgenstein’s possible states of affairs. Some results in the common knowledge literature presuppose that \(\Omega\) is of finite cardinality. If this admittedly unrealistic assumption is needed in any context, this will be explicitly stated in this essay, and otherwise one may assume that \(\Omega\) may be either a finite or an infinite set. A distinguished actual world \(\omega_{\alpha}\) is an element of \(\Omega\). A proposition \(A \subseteq \Omega\) obtains (or is true) if the actual world \(\omega_{\alpha} \in A\). In general, we say that \(A\) obtains at a world \(\omega \in \Omega\) if \(\omega \in A\). What an agent \(i\) knows about the possible worlds is stated formally in terms of a knowledge operator \(\mathbf{K}_i\). Given a proposition \(A \subseteq \Omega\), \(\mathbf{K}_i (A)\) denotes a new proposition, corresponding to the set of possible worlds at which agent \(i\) knows that \(A\) obtains. \(\mathbf{K}_i (A)\) is read as ‘\(i\) knows (that) \(A\) (is the case)’. The knowledge operator \(\mathbf{K}_i\) satisfies certain axioms, including:

\[\begin{align}\tag{K1} \mathbf{K}_i (A) &\subseteq A \\\tag{K2} \Omega &\subseteq \mathbf{K}_i (\Omega) \\\tag{K3} \mathbf{K}_i(\bigcap_k A_k) &= \bigcap_k \mathbf{K}_i (A_k) \\\tag{K4} \mathbf{K}_i (A) &\subseteq \mathbf{K}_i\mathbf{K}_i(A) \\\tag{K5} -\mathbf{K}_i (A) &\subseteq \mathbf{K}_i -\mathbf{K}_{i}(A)\end{align}\]

In words, K1 says that if \(i\) knows \(A\), then \(A\) must be the case. K2 says that \(i\) knows that some possible world in \(\Omega\) occurs no matter which possible world \(\omega\) occurs. K3[10] says that \(i\) knows a conjunction if, and only if, \(i\) knows each conjunct. K4 is a reflection axiom, sometimes also presented as the axiom of transparency (or of positive introspection), which says that if \(i\) knows \(A\), then \(i\) knows that she knows \(A\). Finally, K5 says that if the agent does not know an event, then she knows that she does not know. This axiom is presented as the axiom of negative introspection, or as the axiom of wisdom (since the agents possess Socratic wisdom, knowing that they do not know). Note that by K3, if \(A \subseteq B\) then \(\mathbf{K}_i (A) \subseteq \mathbf{K}_i (B)\); by K1 and K2, \(\mathbf{K}_i (\Omega) = \Omega\); and by K1 and K4, \(\mathbf{K}_i (A) = \mathbf{K}_i \mathbf{K}_i (A)\). Any system of knowledge satisfying K1–K5 corresponds to the modal system S5, while any system satisfying K1–K4 corresponds to S4 (Kripke 1963). If one drops the K1 axiom and retains the others, the resulting system would give a formal account of what an agent believes, but does not necessarily know.

A useful notion in the formal analysis of knowledge is that of a possibility set. An agent \(i\)’s possibility set at a state of the world \(\omega\) is the smallest set of possible worlds that \(i\) thinks could be the case if \(\omega\) is the actual world. More precisely,

Definition 2.1
Agent \(i\)’s possibility set \(\mathcal{H}_i(\omega)\) at \(\omega \in \Omega\) is defined as

\[\mathcal{H}_i (\omega) \equiv \bigcap \{ E \mid \omega \in \mathbf{K}_i (E) \}\]

The collection of sets

\[\mathcal{H}_i = \bigcup_{\omega \in \Omega} \mathcal{H}_{i}(\omega)\]

is \(i\)’s private information system.

Since in words, \(\mathcal{H}_i (\omega)\) is the intersection of all propositions which \(i\) knows at \(\omega\), \(\mathcal{H}_i (\omega)\) is the smallest proposition in \(\Omega\) that \(i\) knows at \(\omega\). Put another way, \(\mathcal{H}_i (\omega)\) is the most specific information that \(i\) has about the possible world \(\omega\). The intuition behind assigning agents private information systems is that while an agent \(i\) may not be able to perceive or comprehend every last detail of the world in which \(i\) lives, \(i\) does know certain facts about that world. The elements of \(i\)’s information system represent what \(i\) knows immediately at a possible world. We also have the following:

Proposition 2.2
\(\mathbf{K}_i (A) = \{ \omega \mid \mathcal{H}_i (\omega) \subseteq A\}\)

In many formal analyses of knowledge in the literature, possibility sets are taken as primitive and Proposition 2.2 is given as the definition of knowledge. If one adopts this viewpoint, then the axioms K1–K5 follow as consequences of the definition of knowledge. In many applications, the agents’ possibility sets are assumed to partition[11] the set \(\Omega\), in which case \(\mathcal{H}_i\) is called \(i\)’s private information partition. Notice that if axioms K1–K5 hold, then the possibility sets of each agent always partition the state set, and vice versa.
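Reading Proposition 2.2 as a definition, one can verify by brute force that a partition-based knowledge operator satisfies K1–K5 on a small state space (the four-world partition here is an arbitrary illustration):

```python
from itertools import chain, combinations

omega = frozenset({1, 2, 3, 4})
partition = [frozenset({1, 2}), frozenset({3}), frozenset({4})]

def cell(w):
    """H_i(w): the cell of i's information partition containing world w."""
    return next(c for c in partition if w in c)

def K(A):
    """Proposition 2.2 as a definition: K_i(A) is the set of worlds w whose
    cell H_i(w) is contained in A."""
    return frozenset(w for w in omega if cell(w) <= A)

def subsets(s):
    return map(frozenset,
               chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

assert K(omega) == omega                         # K2
for A in subsets(omega):
    assert K(A) <= A                             # K1: knowledge implies truth
    assert K(A) <= K(K(A))                       # K4: positive introspection
    assert (omega - K(A)) <= K(omega - K(A))     # K5: negative introspection
    for B in subsets(omega):
        assert K(A & B) == K(A) & K(B)           # K3 (binary case)
print("K1-K5 hold for the partition-based operator")
```

The check succeeds for any partition, reflecting the claim that partitional possibility sets and the S5 axioms go together.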

To illustrate the idea of possibility sets, let us return to the Barbecue Problem described in Example 1.2. Suppose there are three diners: Cathy, Jennifer and Mark. Then there are 8 relevant states of the world, summarized by Table 2.1:

Table 2.1
          \(\omega_1\)  \(\omega_2\)  \(\omega_3\)  \(\omega_4\)  \(\omega_5\)  \(\omega_6\)  \(\omega_7\)  \(\omega_8\)
Cathy     clean   messy   clean   clean   messy   messy   clean   messy
Jennifer  clean   clean   messy   clean   messy   clean   messy   messy
Mark      clean   clean   clean   messy   clean   messy   messy   messy

Each diner knows the condition of the other diners’ faces, but not her own. Suppose the cook makes no announcement, after all. Then none of the diners knows the true state of the world whatever \(\omega \in \Omega\) the actual world turns out to be, but they do know a priori that certain propositions are true at various states of the world. For instance, Cathy’s information system before any announcement is made is depicted in Figure 2.1a:


Figure 2.1a

In this case, Cathy’s information system is a partition \(\mathcal{H}_1\) of \(\Omega\) defined by

\[\mathcal{H}_1 = \{H_{CC}, H_{CM}, H_{MC}, H_{MM}\}\]

where

\[\begin{align}H_{CC} &= \{\omega_1, \omega_2\} \text{ (i.e., Jennifer and Mark are both clean)} \\H_{CM} &= \{\omega_4, \omega_6\} \text{ (i.e., Jennifer is clean and Mark is messy)} \\H_{MC} &= \{\omega_3, \omega_5\} \text{ (i.e., Jennifer is messy and Mark is clean)} \\H_{MM} &= \{\omega_7, \omega_8\} \text{ (i.e., Jennifer and Mark are both messy)}\end{align}\]

Cathy knows immediately which cell \(\mathcal{H}_1(\omega)\) of her partition obtains at any state of the world, but she does not know which \(\omega \in \Omega\) is the true state.
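The relationship between Table 2.1, Cathy’s partition, and Proposition 2.2 can be checked mechanically. The following sketch is not part of the entry; the numeric state encoding and the names `cell` and `knows` are our own.

```python
# A minimal sketch of Proposition 2.2. States 1..8 stand for w1..w8 of
# Table 2.1; a partition is a list of disjoint cells covering the states.
OMEGA = set(range(1, 9))

# Cathy's pre-announcement partition H_1 = {H_CC, H_MC, H_CM, H_MM}.
H_CATHY = [{1, 2}, {3, 5}, {4, 6}, {7, 8}]

def cell(partition, w):
    """H_i(w): the cell of the partition that contains state w."""
    return next(c for c in partition if w in c)

def knows(partition, A):
    """K_i(A) = {w : H_i(w) is a subset of A} (Proposition 2.2)."""
    return {w for w in OMEGA if cell(partition, w) <= A}

# "Mark is messy" is true at w4, w6, w7, w8 (Table 2.1). Cathy sees
# Mark's face, so she knows this proposition exactly where it is true.
mark_messy = {4, 6, 7, 8}
assert knows(H_CATHY, mark_messy) == {4, 6, 7, 8}

# "Someone is messy" is true everywhere except w1, but Cathy fails to
# know it at w1 AND w2, since her cell {1, 2} is not contained in it.
assert knows(H_CATHY, OMEGA - {1}) == OMEGA - {1, 2}
```

Because the cells partition \(\Omega\), this operator satisfies K1–K5, in line with the equivalence noted above.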

If we add in the assumption stated in Example 1.2 that if there is at least one messy diner, then the cook announces the fact, then Cathy’s information partition is depicted by Figure 2.1b:


Figure 2.1b

In this case, Cathy’s information system is a partition \(\mathcal{H}_1\) of \(\Omega\) defined by

\[\mathcal{H}_1 = \{H_{CCC}, H_{MCC}, H_{CM}, H_{MC}, H_{MM}\}\]

where

\[\begin{align}H_{CCC} &= \{\omega_1\} &&\text{ (i.e., Jennifer, Mark, and I are all clean)} \\H_{MCC} &= \{\omega_2\} &&\text{ (i.e., Jennifer and Mark are clean and I am messy)} \\H_{CM} &= \{\omega_4, \omega_6\} && \text{ (i.e., Jennifer is clean and Mark is messy)} \\H_{MC} &= \{\omega_3, \omega_5\} &&\text{ (i.e., Jennifer is messy and Mark is clean)} \\H_{MM} &= \{\omega_7, \omega_8\} &&\text{ (i.e., Jennifer and Mark are both messy)} \end{align}\]

In this case, Cathy’s information partition is a refinement of the partition she has when there is no announcement. For Cathy knows a priori that if \(\omega_1\) is the case there will be no announcement, so she will know immediately that she is clean; and Cathy knows a priori that if \(\omega_2\) is the case, then she will know immediately from the cook’s announcement that she is messy.

Similarly, if the cook makes an announcement only if he sees at least two messy diners, Cathy’s possibility set is the one represented in Figure 2.1c:


Figure 2.1c

Cathy’s information partition is now defined by

\[\mathcal{H}_1 = \{H_{CC}, H_{CMC}, H_{CCM}, H_{MMC}, H_{MCM}, H_{MM}\}\]

where

\[\begin{align}H_{CC} &= \{\omega_1, \omega_2\} &&\text{ (i.e., Jennifer and Mark are both clean)} \\H_{CMC} &= \{\omega_3\} &&\text{ (i.e., Mark and I are clean, Jennifer is messy)} \\H_{CCM} &= \{\omega_4\} &&\text{ (i.e., Jennifer and I are clean, Mark is messy)} \\H_{MMC} &= \{\omega_5\} &&\text{ (i.e., Jennifer and I are messy, Mark is clean)} \\H_{MCM} &= \{\omega_6\} &&\text{ (i.e., Mark and I are messy, Jennifer is clean)} \\H_{MM} &= \{\omega_7, \omega_8\} &&\text{ (i.e., Jennifer and Mark are both messy)}\end{align}\]

In this case, Cathy knows a priori that if \(\omega_3\) obtains there will be no announcement, and similarly for \(\omega_4\). Thus, she will be able to distinguish these states from \(\omega_5\) and \(\omega_6\), respectively.

As mentioned earlier in this subsection, the assumption that agents’ possibility sets partition the state space depends on the modeler’s choice of specific axioms for the knowledge operators. For example, if we drop axiom K5 (preserving the validity of K1–K4), the agents’ possibility sets need not partition the state space (follow the link for an example; for more details and applications, cf. Samet 1990). It was conjectured (cf. Geanakoplos 1989) that lack of negative introspection (i.e., systems without K5) would make it possible to incorporate unforeseen contingencies in the epistemic model, by representing the agents’ unawareness of certain events (i.e., the case in which the agent does not know that an event occurs and also does not know that she does not know that). It was later shown by Dekel et al. (1998) that standard models are not suitable to represent agents’ unawareness. An original non-standard model to represent unawareness is provided in Heifetz et al. (2006). For a comprehensive bibliography on modeling unawareness and applications of the notion, cf. the external links at the end of this entry.

We can now define mutual and common knowledge as follows:

Definition 2.3
Let a set \(\Omega\) of possible worlds together with a set of agents \(N\) be given.

1. The proposition that \(A\) is (first level or first order) mutual knowledge for the agents of \(N\), \(\mathbf{K}^{1}_N (A)\), is the set defined by

\[\mathbf{K}^1_N (A) \equiv \bigcap_{i\in N} \mathbf{K}_i(A).\]

2. The proposition that \(A\) is \(m\)th level (or \(m\)th order) mutual knowledge among the agents of \(N\), \(\mathbf{K}^m_N(A)\), is defined recursively as the set

\[\mathbf{K}^m_N(A) \equiv \bigcap_{i\in N} \mathbf{K}_i (\mathbf{K}^{m-1}_N(A)).\]

3. The proposition that \(A\) is common knowledge among the agents of \(N\), \(\mathbf{K}^{*}_N (A)\), is defined as the set[12]

\[\mathbf{K}^*_N (A) \equiv \bigcap_{m=1}^{\infty} \mathbf{K}^m_N(A).\]

Common knowledge of a proposition \(E\) implies common knowledge of all that \(E\) implies, as is shown in the following:

Proposition 2.4
If \(\omega \in \mathbf{K}^{*}_N (E)\) and \(E \subseteq F\), then \(\omega \in \mathbf{K}^{*}_N (F)\).
Proof.

Note that \((\mathbf{K}^m_N(E))_{m\ge 1}\) is a decreasing sequence of events, in the sense that \(\mathbf{K}^{m+1}_N (E) \subseteq \mathbf{K}^m_N(E)\) for all \(m \ge 1\). It is also easy to check that if everyone knows \(E\), then \(E\) must be true, that is, \(\mathbf{K}^1_N (E) \subseteq E\). If \(\Omega\) is assumed to be finite and \(E\) is common knowledge at \(\omega\), then there must be a finite \(m\) such that

\[\mathbf{K}^m_N(E) = \bigcap_{n=1}^{\infty} \mathbf{K}^n_N(E).\]

The following result relates the set-theoretic definition of common knowledge to the hierarchy of ‘\(i\) knows that \(j\) knows that … knows \(A\)’ statements.

Proposition 2.5
\(\omega \in \mathbf{K}^m_N(A)\) iff

(1) For all agents \(i_1, i_2, \ldots, i_m \in N\), \(\omega \in \mathbf{K}_{i_1}\mathbf{K}_{i_2} \ldots \mathbf{K}_{i_m}(A)\)

Hence, \(\omega \in \mathbf{K}^*_N (A)\) iff (1) is the case for each \(m \ge 1\).
Proof.

The condition that \(\omega \in \mathbf{K}_{i_1}\mathbf{K}_{i_2} \ldots \mathbf{K}_{i_m}(A)\) for all \(m \ge 1\) and all \(i_1, i_2, \ldots, i_m \in N\) is Schiffer’s definition of common knowledge, and is often used as the definition of common knowledge in the literature.

2.2 Lewis’ Account

Lewis is credited with the idea of characterizing common knowledge as a hierarchy of ‘\(i\) knows that \(j\) knows that … knows that \(A\)’ propositions. However, Lewis is aware of the difficulties that such an infinitary definition raises. A first problem is whether it is possible to reduce the infinity inherent in the hierarchical account to a workable finite definition. A second problem is that finite agents cannot entertain the infinite number of epistemic states which is necessary for common knowledge to obtain. Lewis tackles both problems, but his presentation is informal. Aumann is often credited with presenting the first finitary method of generating the common knowledge hierarchy (Aumann 1976), even though Friedell (1969) in fact predates both Aumann’s and Lewis’ work. Recently, Cubitt and Sugden (2003) have argued that Aumann’s and Lewis’ accounts of common knowledge are radically different and irreconcilable.

Although Lewis introduced the technical term ‘common knowledge,’ his analysis is about belief, rather than knowledge. Indeed, Lewis offers his solution to the second problem mentioned above by introducing a distinction between actual belief and reason to believe. Reasons to believe are interpreted as potential beliefs of agents, so that the infinite hierarchy of epistemic states becomes harmless, consisting in an infinite number of states of potential belief. The solution to the first problem is given by providing a finite set of conditions that, if met, generate the infinite series of reasons to believe. Such conditions taken together represent Lewis’ official definition of common knowledge. Notice that it would be more appropriate to speak of ‘common reason to believe,’ or, at least, of ‘common belief.’ Lewis himself later acknowledges that “[t]hat term [common knowledge] was unfortunate, since there is no assurance that it will be knowledge, or even that it will be true” (Lewis 1978, p. 44, n. 13). Disregarding the distinction between reasons to believe and actual belief, we follow Vanderschraaf (1998) to give the details of a formal account of Lewis’ definition here, and show that Lewis’ analysis does result in the common knowledge hierarchy following from a finite set of axioms. It is however debatable whether a possible worlds approach can properly render the subtleties of Lewis’ characterization. Cubitt and Sugden (2003), for example, abandon the possible worlds framework altogether and propose a different formal interpretation of Lewis in which, among other elements, the distinction between reasons to believe and actual belief is taken into account. An attempt to reconcile the two positions can be found in Sillari (2005), where Lewis’ characterization is formalized in a richer possible worlds semantic framework in which the distinction between reasons to believe and actual belief is represented.

Lewis presents his account of common knowledge on pp. 52–57 of Convention. Lewis does not specify what account of knowledge is needed for common knowledge. As it turns out, Lewis’ account is satisfactory for any formal account of knowledge in which the knowledge operators \(\mathbf{K}_i\), \(i \in N\), satisfy K1, K2, and K3. A crucial assumption in Lewis’ analysis of common knowledge is that agents know they share the same “rationality, inductive standards and background information” (Lewis 1969, p. 53) with respect to a state of affairs \(A'\); that is, if an agent can draw any conclusion from \(A'\), she knows that all can do likewise. This idea is made precise in the following:

Definition 2.6
Given a set of agents \(N\) and a proposition \(A' \subseteq \Omega\), the agents of \(N\) are symmetric reasoners with respect to \(A'\) (or \(A'\)-symmetric reasoners) iff, for each \(i, j \in N\) and for any proposition \(E \subseteq \Omega\), if \(\mathbf{K}_i(A') \subseteq \mathbf{K}_i(E)\) and \(\mathbf{K}_i(A') \subseteq \mathbf{K}_i\mathbf{K}_j(A')\), then \(\mathbf{K}_i(A') \subseteq \mathbf{K}_i\mathbf{K}_j(E)\).[13]

The definiens says that for each agent \(i\), if \(i\) can infer from\(A'\) that \(E\) is the case and that everyone knows that \(A'\) isthe case, then \(i\) can also infer that everyone knows that \(E\) isthe case.

Definition 2.7
A proposition \(E\) is Lewis-common knowledge at \(\omega \in \Omega\) among the agents of a set \(N = \{1, \ldots, n\}\) iff there is a proposition \(A^*\) such that \(\omega \in A^*\), the agents of \(N\) are \(A^*\)-symmetric reasoners, and for every \(i \in N\),\[\begin{align}\tag{L1} &\omega \in \mathbf{K}_i (A^*) \\\tag{L2} &\mathbf{K}_i(A^*) \subseteq \mathbf{K}_i(\bigcap_{j\in N} \mathbf{K}_j(A^*)) \\\tag{L3} &\mathbf{K}_i (A^*) \subseteq \mathbf{K}_i (E)\end{align}\]

\(A^*\) is a basis for the agents’ common knowledge. \(\mathbf{L}^*_N(E)\) denotes the proposition defined by L1–L3 for a set \(N\) of \(A^*\)-symmetric reasoners, so we can say that \(E\) is Lewis-common knowledge for the agents of \(N\) iff \(\omega \in \mathbf{L}^*_N(E)\).

In words, L1 says that \(i\) knows \(A^*\) at \(\omega\). L2 says that if \(i\) knows that \(A^*\) obtains, then \(i\) knows that everyone knows that \(A^*\) obtains. This axiom is meant to capture the idea that common knowledge is based upon a proposition \(A^*\) that is publicly known, as is the case when agents hear a public announcement. If the agents’ knowledge is represented by partitions, then a typical basis for the agents’ common knowledge would be an element \(\mathcal{M}(\omega)\) in the meet[14] of their partitions. L3 says that \(i\) can infer \(E\) from \(A^*\). Lewis’ definition implies the entire common knowledge hierarchy, as is shown in the following result.

Proposition 2.8
\(\mathbf{L}^*_N(E) \subseteq \mathbf{K}^*_N(E)\), that is, Lewis-common knowledge of \(E\) implies common knowledge of \(E\).
Proof.

As mentioned above, it has recently come into question whether a formal rendition of Lewis’ definition such as the one given above adequately represents all facets of Lewis’ approach. Cubitt and Sugden (2003) argue that it does not, their critique hinging on a feature of Lewis’ analysis that is lost in the possible worlds framework, namely the 3-place relation of indication used by Lewis. The definition of indication can be found at pp. 52–53 of Convention:

Definition 2.9
A state of affairs \(A\) indicates \(E\) to agent \(i\) \((A \indi E)\) if and only if, if \(i\) had reason to believe that \(A\) held, \(i\) would thereby have reason to believe that \(E\).

The wording of Lewis’ definition, and the use he makes of the indication relation in the defining clauses for common knowledge, suggest that Lewis is careful to distinguish indication from material implication. Cubitt and Sugden (2003) incorporate this distinction in their formal reconstruction. Paired with their interpretation of “\(i\) has reason to believe \(x\)” as “\(x\) is yielded by some logic of reasoning that \(i\) endorses,” we have that, if \(A \indi x\), then \(i\)’s reason to believe \(A\) provides \(i\) with reason to believe \(x\) as well. Given that Lewis does want to endow agents with deductive reasoning, Cubitt and Sugden (2003) list the following axioms, claiming that they capture the desired properties of indication. For all agents \(i, j\), with \(\mathbf{R}_i A\) standing for “agent \(i\) has reason to believe \(A\)”, we have

\[\begin{align}\tag{CS1} (\mathbf{R}_i A \wedge A \indi x) &\to \mathbf{R}_i x \\\tag{CS2} (A \text{ entails } B) &\to A \indi B \\\tag{CS3} (A \indi x \wedge A \indi y) &\to A \indi (x \wedge y) \\\tag{CS4} (A \indi B \wedge B \indi x) &\to A \indi x \\\tag{CS5} ((A \indi \mathbf{R}_j B) \wedge \mathbf{R}_i(B \indj x)) &\to A \indi \mathbf{R}_j x\end{align}\]

The first axiom captures the intuition behind indication. It says that if an agent has reason to believe that \(A\) holds, then, if \(A\) indicates \(x\) to her, she has reason to believe \(x\) as well. CS2 says that indication extends material implication. CS3 says that if two propositions \(x\) and \(y\) are indicated to an agent by a proposition \(A\), then \(A\) also indicates to her the conjunction of \(x\) and \(y\). The next axiom states that indication is transitive. CS5 says that if a proposition \(A\) indicates to \(i\) that agent \(j\) has reason to believe \(B\), and \(i\) has reason to believe that \(B\) indicates \(x\) to \(j\), then \(A\) also indicates to \(i\) that \(j\) has reason to believe \(x\).

Armed with these axioms, it is possible to give the following definition.

Definition 2.10
In any given population \(P\), a proposition \(A\) is a reflexive common indicator that \(x\) if and only if, for all \(i, j \in P\) and all propositions \(x, y\), the following four conditions hold:

\[\begin{align}\tag{RCI1} &A \to \mathbf{R}_i A \\\tag{RCI2} &A \indi \mathbf{R}_j A \\\tag{RCI3} &A \indi x \\\tag{RCI4} &A \indj y \to \mathbf{R}_i(A \indj y)\end{align}\]

Clauses RCI1–RCI3 above render L1–L3 of Definition 2.7 above in the formal language that underlies axioms CS1–CS5, while RCI4 affirms (cf. Definition 2.6 above) that agents are symmetric reasoners, i.e., that if a proposition indicates another proposition to a certain agent, then it does so to all agents in the population.

The following proposition shows that RCI1–RCI4 are sufficient conditions for ‘common reason to believe’ to arise:

Proposition 2.11
If \(A\) holds, and if \(A\) is a reflexive common indicator in the population \(P\) that \(x\), then there is common reason to believe in \(P\) that \(x\).
Proof.

A group of (ideal) faultless reasoners who have common reason to believe that \(p\) will achieve common belief in \(p\).

Is it possible to take formally into account the insights of Lewis’ definition of common knowledge without abandoning the possible worlds framework? Sillari (2005) puts forth an attempt to give a positive answer to that question by articulating in a possible worlds semantics the distinction between actual belief and reason to believe. As in Cubitt and Sugden (2003), the basic epistemic operator represents reasons to believe. The idea is then to impose an awareness structure over possible worlds, adopting the framework first introduced by Fagin and Halpern (1988). Simply put, an awareness structure associates with each agent, for every possible world, a set of events of which the agent is said to be aware. An agent entertains an actual belief that a certain event occurs if and only if she has reason to believe that the event occurs and such event is in her awareness set at the world under consideration. A different avenue to the formalization of Lewis’ account of common knowledge is offered by Paternotte (2011), where the central notion is probabilistic common belief (see section 5.2 below).

2.3 Aumann’s Account

Aumann (1976) gives a different characterization of common knowledge which yields another simple algorithm for determining what information is commonly known. Aumann’s original account assumes that each agent’s possibility set forms a private information partition of the space \(\Omega\) of possible worlds. Aumann shows that a proposition \(C\) is common knowledge if, and only if, \(C\) contains a cell of the meet of the agents’ partitions. One way to compute the meet \(\mathcal{M}\) of the partitions \(\mathcal{H}_i\), \(i \in N\), is to use the idea of “reachability”.

Definition 2.13
A state \(\omega' \in \Omega\) is reachable from \(\omega \in \Omega\) iff there exists a sequence

\[\omega =\omega_0, \omega_1, \omega_2 , \ldots ,\omega_m =\omega'\]

such that for each \(k \in \{0, 1, \ldots, m-1\}\), there exists an agent \(i_k \in N\) such that \(\mathcal{H}_{i_k}(\omega_k) = \mathcal{H}_{i_k}(\omega_{k+1})\).

In words, \(\omega'\) is reachable from \(\omega\) if there exists a sequence or “chain” of states from \(\omega\) to \(\omega'\) such that any two consecutive states are in the same cell of some agent’s information partition. To illustrate the idea of reachability, let us return to the modified Barbecue Problem in which Cathy, Jennifer and Mark receive no announcement. Their information partitions are all depicted in Figure 2.1d:


Figure 2.1d

One can understand the importance of the notion of reachability in the following way: if \(\omega'\) is reachable from \(\omega\), then if \(\omega\) obtains, some agent can reason that some other agent thinks that \(\omega'\) is possible. Looking at Figure 2.1d, if \(\omega = \omega_1\) occurs, then Cathy (who knows only that \(\{\omega_1, \omega_2\}\) has occurred) cannot rule out the possibility that Jennifer thinks that \(\omega_5\) might have occurred (even though Cathy knows that \(\omega_5\) did not occur). So Cathy cannot rule out the possibility that Jennifer thinks that Mark thinks that \(\omega_8\) might have occurred. And Cathy cannot rule out the possibility that Jennifer thinks that Mark thinks that Cathy believes that \(\omega_7\) is possible. In this sense, \(\omega_7\) is reachable from \(\omega_1\). The chain of states which establishes this is \(\omega_1, \omega_2, \omega_5, \omega_8, \omega_7\), since \(\mathcal{H}_1(\omega_1) = \mathcal{H}_1(\omega_2)\), \(\mathcal{H}_2(\omega_2) = \mathcal{H}_2(\omega_5)\), \(\mathcal{H}_3(\omega_5) = \mathcal{H}_3(\omega_8)\), and \(\mathcal{H}_1(\omega_8) = \mathcal{H}_1(\omega_7)\). Note that one can show similarly that in this example any state is reachable from any other state. This example also illustrates the following immediate result:
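Reachability lends itself to a simple graph search. In the sketch below (our own encoding of the three no-announcement partitions of Figure 2.1d; `reachable_from` is a name we introduce), shared partition cells act as edges, and the set of states reachable from a state is computed by breadth-first exploration. As the text notes, every state turns out to be reachable from every other:

```python
# Sketch of Definition 2.13: w' is reachable from w iff they are joined
# by a chain of states, consecutive ones lying in the same cell of some
# agent's partition (here the no-announcement Barbecue partitions).
PARTITIONS = [
    [{1, 2}, {3, 5}, {4, 6}, {7, 8}],  # Cathy
    [{1, 3}, {2, 5}, {4, 7}, {6, 8}],  # Jennifer
    [{1, 4}, {2, 6}, {3, 7}, {5, 8}],  # Mark
]

def reachable_from(w):
    """All states reachable from w; by Lemma 2.15 this is the meet cell M(w)."""
    seen, frontier = {w}, [w]
    while frontier:
        v = frontier.pop()
        for partition in PARTITIONS:
            cell = next(c for c in partition if v in c)
            for u in cell - seen:
                seen.add(u)
                frontier.append(u)
    return seen

# Any state is reachable from any other, so the meet is the trivial
# partition {Omega}: no non-trivial proposition is common knowledge.
assert all(reachable_from(w) == set(range(1, 9)) for w in range(1, 9))
```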

Proposition 2.14
\(\omega'\) is reachable from \(\omega\) iff there is a sequence \(i_1, i_2, \ldots, i_m \in N\) such that

\[ \omega' \in \mathcal{H}_{i_m}(\cdots(\mathcal{H}_{i_2}(\mathcal{H}_{i_1}(\omega))))\]

One can read this condition as: ‘At \(\omega\), \(i_1\) thinks that \(i_2\) thinks that \(\ldots\) \(i_m\) thinks that \(\omega'\) is possible.’

We now have:

Lemma 2.15
\(\omega' \in \mathcal{M}(\omega)\) iff \(\omega'\) is reachable from \(\omega\).
Proof.

and

Lemma 2.16
\(\mathcal{M}(\omega)\) is common knowledge for the agents of \(N\) at \(\omega\).
Proof.

and

Proposition 2.17 (Aumann 1976)
Let \(\mathcal{M}\) be the meet of the agents’ partitions \(\mathcal{H}_i\), \(i \in N\). A proposition \(E \subseteq \Omega\) is common knowledge for the agents of \(N\) at \(\omega\) iff \(\mathcal{M}(\omega) \subseteq E\). (In Aumann (1976), \(E\) is defined to be common knowledge at \(\omega\) iff \(\mathcal{M}(\omega) \subseteq E\).)
Proof.

If \(E = \mathbf{K}^1_N(E)\), then \(E\) is a public event (Milgrom 1981) or a common truism (Binmore and Brandenburger 1989). Clearly, a common truism is common knowledge whenever it occurs, since in this case \(E = \mathbf{K}^1_N(E) = \mathbf{K}^2_N(E) = \ldots\), so \(E = \mathbf{K}^*_N(E)\). The proof of Proposition 2.17 shows that the common truisms are precisely the elements of \(\mathcal{M}\) and unions of elements of \(\mathcal{M}\), so any commonly known event is the consequence of a common truism.

2.4 Barwise’s Account

Barwise (1988) proposes another definition of common knowledge that avoids explicit reference to the hierarchy of ‘\(i\) knows that \(j\) knows that … knows that \(A\)’ propositions. Barwise’s analysis builds upon an informal proposal by Harman (1977). Consider the situation of the guest and clumsy waiter in Example 1 when the waiter announces that he was at fault. They are now in a setting where they have heard the waiter’s announcement and know that they are in the setting. Harman adopts the circularity in this characterization of the setting as fundamental, and proposes a definition of common knowledge in terms of this circularity. Barwise’s formal analysis gives a precise formulation of Harman’s intuitive analysis of common knowledge as a fixed point. Given a function \(f\), \(A\) is a fixed point of \(f\) if \(f(A) = A\). Now note that

\[\begin{align}\mathbf{K}^1_N (E \cap \bigcap_{m=1}^{\infty} \mathbf{K}^m_N(E)) &= \mathbf{K}^1_N (E) \cap \mathbf{K}^1_N( \bigcap_{m=1}^{\infty} \mathbf{K}^m_N(E)) \\ &= \mathbf{K}^1_N (E) \cap (\bigcap_{m=1}^{\infty} \mathbf{K}^1_N (\mathbf{K}^m_N (E))) \\ &= \mathbf{K}^1_N (E) \cap (\bigcap_{m=1}^{\infty} \mathbf{K}^m_N (E)) \\ &= \bigcap_{m=1}^{\infty} \mathbf{K}^m_N (E)\end{align}\]

So we have established that \(\mathbf{K}^{*}_N (E)\) is a fixed point of the function \(f_E\) defined by \(f_E(X) = \mathbf{K}^1_N(E \cap X)\). \(f_E\) has other fixed points. For instance, any contradiction \(B \cap B^c = \varnothing\) is a fixed point of \(f_E\).[15] Note also that if \(A \subseteq B\), then \(E \cap A \subseteq E \cap B\) and so

\[f_E (A) = \mathbf{K}^1_N (E \cap A) \subseteq \mathbf{K}^1_N (E \cap B) = f_E(B)\]

that is, \(f_E\) is monotone. (We saw that \(\mathbf{K}^1_N\) is also monotone in the proof of Proposition 2.4.) Barwise’s analysis of common knowledge can be developed using the following result from set theory:

Proposition
A monotone function \(f\) on the subsets of \(\Omega\) has a unique fixed point \(C\) such that if \(B\) is a fixed point of \(f\), then \(B \subseteq C\). \(C\) is the greatest fixed point of \(f\).

This proposition establishes that \(f_E\) has a greatest fixed point, which characterizes common knowledge in Barwise’s account. As Barwise himself observes, the fixed point analysis of common knowledge is closely related to Aumann’s partition account. This is easy to see when one compares the fixed point analysis to the notion of common truisms that Aumann’s account generates. Some authors regard the fixed point analysis as an alternate formulation of Aumann’s analysis. Barwise’s fixed point analysis of common knowledge is favored by those who are especially interested in the applications of common knowledge to problems in logic, while the hierarchical and the partition accounts are favored by those who wish to apply common knowledge in social philosophy and social science. When the knowledge operators satisfy the axioms K1–K5, the Barwise account of common knowledge is equivalent to the hierarchical account.
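On a finite state space the greatest fixed point can be found constructively: iterate \(f_E\) downward from \(\Omega\). By monotonicity the sequence \(\Omega \supseteq f_E(\Omega) \supseteq f_E(f_E(\Omega)) \supseteq \ldots\) is decreasing and every fixed point is contained in each term, so the sequence stabilizes at the greatest fixed point. A sketch (our own encoding, reusing the Barbecue partitions; the function names are ours):

```python
# Sketch: Barwise's greatest fixed point of f_E(X) = K^1_N(E n X),
# computed by downward iteration from Omega (finite state space).
OMEGA = set(range(1, 9))
PARTITIONS = [
    [{1, 2}, {3, 5}, {4, 6}, {7, 8}],  # Cathy
    [{1, 3}, {2, 5}, {4, 7}, {6, 8}],  # Jennifer
    [{1, 4}, {2, 6}, {3, 7}, {5, 8}],  # Mark
]

def mutual(A):
    """K^1_N(A) for partition-based knowledge operators."""
    out = set(OMEGA)
    for p in PARTITIONS:
        out &= {w for w in OMEGA if next(c for c in p if w in c) <= A}
    return out

def greatest_fixed_point(E):
    """Iterate X -> f_E(X) = K^1_N(E n X) from X = Omega until stable."""
    X = set(OMEGA)
    while True:
        nxt = mutual(E & X)
        if nxt == X:
            return X
        X = nxt

# Agrees with the hierarchical account (Proposition 2.18):
assert greatest_fixed_point(OMEGA) == OMEGA        # tautologies are common knowledge
assert greatest_fixed_point(OMEGA - {1}) == set()  # "someone is messy": never
```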

Proposition 2.18
Let \(C^*_N(E)\) be the greatest fixed point of \(f_E\). Then \(C^*_N(E) = \mathbf{K}^*_N(E)\). (In Barwise (1988, 1989), \(E\) is defined to be common knowledge at \(\omega\) iff \(\omega \in C^*_N(E)\).)
Proof.

Barwise argues that in fact the fixed point analysis is more flexible, and consequently more general, than the hierarchical account. This may surprise readers in light of Proposition 2.18, which shows that Barwise’s fixed point definition is equivalent to the hierarchical account. Indeed, while Barwise (1988, 1989) proves a result showing that the fixed point account implies the hierarchical account and gives examples that satisfy the common knowledge hierarchy but fail to be fixed points, a number of authors writing after Barwise have given various proofs of the equivalence of the two definitions, as in Proposition 2.18. In fact, as Heifetz (1999) shows, the hierarchical and fixed-point accounts are equivalent for all finite levels of iteration, while fixed-point common knowledge implies the conjunction of mutual knowledge up to any transfinite order, but is never implied by any such conjunction.

2.5 Gilbert’s Account

Gilbert (1989, Chapter 3) presents an alternative account of common knowledge, which is meant to be more intuitively plausible than Lewis’ and Aumann’s accounts. Gilbert gives a highly detailed description of the circumstances under which agents have common knowledge.

Definition 2.19
A set of agents \(N\) are in a common knowledge situation \(\mathcal{S}_N(A)\) with respect to a proposition \(A\) if, and only if, \(\omega \in A\) and for each \(i \in N\),

(\(G_1\))
\(i\) is epistemically normal, in the sense that \(i\) has normal perceptual organs which are functioning normally and has normal reasoning capacity.[16]
(\(G_2\))
\(i\) has the concepts needed to fulfill the other conditions.
(\(G_3\))
\(i\) perceives the other agents of \(N\).
(\(G_4\))
\(i\) perceives that \(G_1\) and \(G_2\) are the case.
(\(G_5\))
\(i\) perceives that the state of affairs described by \(A\) is the case.
(\(G_6\))
\(i\) perceives that all the agents of \(N\) perceive that \(A\) is the case.

Gilbert’s definition appears to contain some redundancy, since presumably an agent would not perceive \(A\) unless \(A\) is the case. Gilbert is evidently trying to give a more explicit account of single agent knowledge than Lewis and Aumann give. For Gilbert, agent \(i\) knows that a proposition \(E\) is the case if, and only if, \(\omega \in E\), that is, \(E\) is true, and either \(i\) perceives that the state of affairs \(E\) describes obtains or \(i\) can infer \(E\) as a consequence of other propositions \(i\) knows, given sufficient inferential capacity.

Like Lewis, Gilbert recognizes that human agents do not in fact have unlimited inferential capacity. To generate the infinite hierarchy of mutual knowledge, Gilbert introduces the device of an agent’s smooth-reasoner counterpart. The smooth-reasoner counterpart \(i'\) of an agent \(i\) is an agent that draws every logical conclusion from every fact that \(i\) knows. Gilbert stipulates that \(i'\) does not have any of the constraints on time, memory, or reasoning ability that \(i\) might have, so \(i'\) can literally think through the infinitely many levels of a common knowledge hierarchy.

Definition 2.20
If a set of agents \(N\) are in a common knowledge situation \(\mathcal{S}_N(A)\) with respect to \(A\), then the corresponding set \(N'\) of their smooth-reasoner counterparts is in a parallel situation \(\mathcal{S}'_{N'}(A)\) if, and only if, for each \(i' \in N'\),

(\(G_1'\)) \(i'\) can perceive anything that the counterpart \(i\) can perceive.
(\(G_2'\)) \(G_2\)–\(G_6\) obtain for \(i'\) with respect to \(A\) and \(N'\), same as for the counterpart \(i\) with respect to \(A\) and \(N\).
(\(G_3'\)) \(i'\) perceives that all the agents of \(N'\) are smooth reasoners.

From this definition we get the following immediate consequence:

Proposition 2.21
If a set of smooth-reasoner counterparts to a set \(N\) of agents are in a situation \(\mathcal{S}'_{N'}(A)\) parallel to a common knowledge situation \(\mathcal{S}_N(A)\) of \(N\), then

for all \(m \in \mathbb{N}\) and for any \(i_1', \ldots, i_m' \in N'\), \(\mathbf{K}_{i_1'}\mathbf{K}_{i_2'} \ldots \mathbf{K}_{i_m'}(A)\) obtains.

Consequently, \(\mathbf{K}^{m}_{N'}(A)\) obtains for any \(m \in \mathbb{N}\).

Gilbert argues that, given \(\mathcal{S}'_{N'}(A)\), the smooth-reasoner counterparts of the agents of \(N\) actually satisfy a much stronger condition, namely mutual knowledge \(\mathbf{K}^{\alpha}_{N'}(A)\) to the level of any ordinal number \(\alpha\), finite or infinite. When this stronger condition is satisfied, the proposition \(A\) is said to be open* to the agents of \(N\). With the concept of open*-ness, Gilbert gives her definition of common knowledge.

Definition 2.22
A proposition \(E \subseteq \Omega\) is Gilbert-common knowledge among the agents of a set \(N = \{1, \ldots, n\}\) if and only if,

(\(G_1^*\)) \(E\) is open* to the agents of \(N\).
(\(G_2^*\)) For every \(i \in N\), \(\mathbf{K}_i(G_1^*)\).

\(\mathbf{G}_N^*(E)\) denotes the proposition defined by \(G_1^*\) and \(G_2^*\), so we can say that \(E\) is Gilbert-common knowledge for the agents of \(N\) iff \(\omega \in \mathbf{G}_N^*(E)\).

One might think that an immediate corollary of Gilbert’s definition is that Gilbert-common knowledge implies the hierarchical common knowledge of Proposition 2.5. However, this claim follows only on the assumption that an agent knows all of the propositions that her smooth-reasoner counterpart reasons through. Gilbert does not explicitly endorse this position, although she correctly observes that Lewis and Aumann are committed to something like it.[17] Gilbert maintains that her account of common knowledge expresses our intuitions with respect to common knowledge better than Lewis’ and Aumann’s accounts, since the notion of open*-ness presumably makes explicit that when a proposition is common knowledge, it is “out in the open”, so to speak.

3. Applications of Mutual and Common Knowledge

Readers primarily interested in philosophical applications of common knowledge may want to focus on the No Disagreement Theorem and Convention subsections. Readers interested in applications of common knowledge in game theory may continue with the Strategic Form Games and Games of Perfect Information subsections.

3.1 The “No Disagreement” Theorem

Aumann (1976) originally used his definition of common knowledge to prove a celebrated result which says that, in a certain sense, agents cannot “agree to disagree” about their beliefs, formalized as probability distributions, if they start with common prior beliefs. Since agents in a community often hold different opinions and know they do so, one might attribute such differences to the agents’ having different private information. Aumann’s surprising result is that even if agents condition their beliefs on private information, mere common knowledge of their conditioned beliefs and a common prior probability distribution implies that their beliefs cannot be different, after all!

Proposition 3.1
Let \(\Omega\) be a finite set of states of the world. Suppose that

  1. Agents \(i\) and \(j\) have a common prior probability distribution \(\mu(\cdot)\) over the events of \(\Omega\) such that \(\mu(\omega) \gt 0\) for each \(\omega \in \Omega\), and
  2. It is common knowledge at \(\omega\) that \(i\)’s posterior probability of event \(E\) is \(q_i(E)\) and that \(j\)’s posterior probability of \(E\) is \(q_j(E)\).

Then \(q_i(E) = q_j(E)\).

Proof.
[Note that in the proof of this proposition, and in the sequel, \(\mu(\cdot \mid B)\) denotes conditional probability; that is, given \(\mu(B) \gt 0\), \(\mu(A \mid B) = \mu(A \cap B)/\mu(B)\).]

In a later article, Aumann (1987) argues that the assumptions that \(\Omega\) is finite and that \(\mu(\omega) \gt 0\) for each \(\omega \in \Omega\) reflect the idea that agents only regard as “really” possible a finite collection of salient worlds to which they assign positive probability, so that one can drop the states with probability 0 from the description of the state space. Aumann also notes that this result implicitly assumes that the agents have common knowledge of their partitions, since a description of each possible world includes a description of the agents’ possibility sets. And of course, this result depends crucially upon (i), which is known as the common prior assumption (CPA).
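Aumann’s result can be checked mechanically in small finite models. The following sketch (with a state space, prior, partitions, and event entirely of my own invention, not drawn from the text) computes each agent’s posterior for an event and the meet of the two partitions. Since each posterior is constant on the cell of the meet containing the actual world, the posteriors are common knowledge, and, as Proposition 3.1 requires, they coincide:

```python
from fractions import Fraction

# A hypothetical four-state model with a common, everywhere-positive prior.
states = [1, 2, 3, 4]
prior = {w: Fraction(1, 4) for w in states}

H_i = [{1, 2}, {3, 4}]   # agent i's information partition
H_j = [{1, 3}, {2, 4}]   # agent j's information partition
E = {1, 4}               # the event of interest

def cell(partition, w):
    """The cell of the partition containing world w."""
    return next(c for c in partition if w in c)

def posterior(partition, w):
    """mu(E | the agent's information cell at w)."""
    c = cell(partition, w)
    return sum(prior[x] for x in c & E) / sum(prior[x] for x in c)

def meet(p1, p2):
    """Finest common coarsening of two partitions: merge any cells
    that overlap until no overlapping cells remain."""
    cells = [set(c) for c in p1 + p2]
    merged = True
    while merged:
        merged = False
        for a in cells:
            for b in cells:
                if a is not b and a & b:
                    a |= b
                    cells.remove(b)
                    merged = True
                    break
            if merged:
                break
    return cells

# The meet here is the trivial partition, so a posterior is common
# knowledge iff it is constant across all states -- and both are 1/2.
assert meet(H_i, H_j) == [set(states)]
assert all(posterior(H_i, w) == posterior(H_j, w) == Fraction(1, 2)
           for w in states)
```

The union-find-style merging in `meet` is just one convenient way to compute the finest common coarsening; any method of chaining overlapping cells gives the same partition.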

Aumann’s “no disagreement” theorem has been generalized in a number of ways in the literature. Cave 1983 generalizes the argument to 3 agents. Bacharach 1985 extends it to cases in which agents observe each other’s decisions rather than posteriors. Milgrom and Stokey 1982 use it crucially for their no-trade theorem, applying no disagreement to show that speculative trade is impossible. Geanakoplos and Polemarchakis 1982 generalize the argument to a dynamic setting in which two agents communicate their posterior probabilities back and forth until they reach an agreement – this particular take on the agreement theorem has been characterized in terms of dynamic epistemic logic by Dégremont and Roy 2009 and applied to cases of epistemic peer disagreement by Sillari 2019. McKelvey and Page 1986 further extend the results of Geanakoplos and Polemarchakis to the case of \(n\) individuals. (See also Monderer and Samet 1989 and, for a survey, Geanakoplos 1994.)

However, all of these “no disagreement” results raise the same philosophical puzzle raised by Aumann’s original result: How are we to explain differences in belief? Aumann’s result leaves us with two options: (1) admit that at some level, common knowledge of the agents’ beliefs or how they form their beliefs fails, or (2) deny the CPA. Thus, even if agents do assign precise posterior probabilities to an event, Aumann shows that if they have merely first-order mutual knowledge of the posteriors, they can “agree to disagree”.[18] Another way Aumann’s result might fail is if agents do not have common knowledge that they update their beliefs by Bayesian conditionalization. Then clearly, agents can explain divergent opinions as the result of others having modified their beliefs in the “wrong” way. However, there are cases in which neither explanation will seem convincing and denying the requisite common knowledge seems a rather ad hoc move. Why should one think that such failures of common knowledge provide a general explanation for divergent beliefs?

What of the second option, that is, denying the CPA?[19] The main argument put forward in favor of the CPA is that any differences in agents’ probabilities should be the result of their having different information only, that is, there is no reason to think that the different beliefs that agents have regarding the same event are the result of anything other than their having different information. However, one can reply that this argument amounts simply to a restatement of the Harsanyi Doctrine.[20]

3.2 Convention

Schelling’s Department Store problem of Example 1.5 is a very simple example in which the agents “solve” their coordination problem appropriately by establishing a convention. (See also the entry on convention in this encyclopedia.) Using the vocabulary of game theory, Lewis (1969) defines a convention as a strict coordination equilibrium of a game which agents follow on account of their common knowledge that they all prefer to follow this coordination equilibrium in a recurrent coordination problem. A coordination equilibrium of a game is a strategy combination such that no agent is better off if any agent unilaterally deviates from this combination. As with equilibria in general, a coordination equilibrium is strict if any agent who deviates unilaterally from the equilibrium is strictly worse off. The strategic form game of Figure 1.3 summarizes Liz’s and Robert’s situation. The Department Store game has four Nash equilibrium outcomes in pure strategies: \((s_1, s_1),\) \((s_2, s_2),\) \((s_3, s_3)\), and \((s_4, s_4)\).[21] These four equilibria are all strict coordination equilibria. If the agents follow any of these equilibria, then they coordinate successfully. For agents to be following a Lewis-convention in this situation, they must follow one of the game’s coordination equilibria. However, for Lewis, following a coordination equilibrium is not a sufficient condition for agents to be following a convention. For suppose that Liz and Robert fail to analyze their predicament properly at all, but Liz chooses \(s_2\) and Robert chooses \(s_2\), so that they coordinate at \((s_2, s_2)\) by sheer luck. Lewis does not count accidental coordination of this sort as a convention.

Suppose next that both agents are Bayesian rational, and that part of what each agent knows is the payoff structure of the Department Store game. If the agents expect each other to follow \((s_2, s_2)\) and they consequently coordinate successfully, are they then following a convention? Not necessarily, contends Lewis in a subtle argument on p. 59 of Convention. For while each agent knows the game and that she is rational, still she might not attribute the same knowledge to the other agent. If each agent believes that the other agent will follow her end of the \((s_2, s_2)\) equilibrium mindlessly, then her best response is to follow her end of \((s_2, s_2)\). But in this case the agents coordinated as the result of their each falsely believing that the other acts like an automaton, and Lewis thinks that any proper account of convention must require that agents have correct beliefs about one another. In particular, Lewis requires that each agent involved in a convention must have mutual expectations that each is acting with the aim of coordinating with the other. The argument can be carried further. What if both agents believe that they will follow \((s_2, s_2)\), and believe that each will do so thinking that the other will choose \(s_2\) rationally and not mindlessly? Then, say, Liz would coordinate as the result of her false second-order belief that Robert believes that Liz acts mindlessly. Similarly for third-order beliefs, and so on for any higher order of knowledge.

Lewis concludes that a necessary condition for agents to be following a convention is that their preferences to follow the corresponding coordination equilibrium be common knowledge. (The issue whether conventions need to be common knowledge has been debated recently, cf. Cubitt and Sugden 2003, Binmore 2008, Sillari 2008; for an experimental approach, see Devetag et al. 2013; for a connection to the topic of rule-following, see Sillari 2013.) So on Lewis’ account, a convention for a set of agents is a coordination equilibrium which the agents follow on account of their common knowledge of their rationality, of the payoff structure of the relevant game, and that each agent follows her part of the equilibrium.

A regularity \(R\) in the behavior of members of a population \(P\) when they are agents in a recurrent situation \(S\) is a convention if and only if it is true that, and it is common knowledge in \(P\) that, in any instance of \(S\) among the members of \(P\),
  1. everyone conforms to \(R\);
  2. everyone expects everyone else to conform to \(R\);
  3. everyone has approximately the same preferences regarding all possible combinations of actions;
  4. everyone prefers that everyone conform to \(R\), on condition that at least all but one conform to \(R\);
  5. everyone would prefer that everyone conform to \(R'\), on condition that at least all but one conform to \(R'\),

where \(R'\) is some possible regularity in the behavior of members of \(P\) in \(S\), such that no one in any instance of \(S\) among members of \(P\) could conform both to \(R'\) and to \(R\).
(Lewis 1969, p. 76)[22]

Lewis includes the requirement that there be an alternate coordination equilibrium \(R'\) besides the equilibrium \(R\) that all follow in order to capture the fundamental intuition that how the agents who follow a convention behave depends crucially upon how they expect the others to behave.

Sugden (1986) and Vanderschraaf (1998) argue that it is not crucial to the notion of convention that the corresponding equilibrium be a coordination equilibrium. Lewis’ key insight is that a convention is a pattern of mutually beneficial behavior which depends on the agents’ common knowledge that all follow this pattern, and no other. Vanderschraaf gives a more general definition of convention as a strict equilibrium together with common knowledge that all follow this equilibrium and that all would have followed a different equilibrium had their beliefs about each other been different. An example of this more general kind of convention is given below in the discussion of the Figure 3.1 example.

3.3 Strategic Form Games

Lewis formulated the notion of common knowledge as part of his general account of conventions. In the years following the publication of Convention, game theorists have recognized that any explanation of a particular pattern of play in a game depends crucially on mutual and common knowledge assumptions. More specifically, solution concepts in game theory are both motivated and justified in large part by the mutual or common knowledge the agents in the game have regarding their situation.

To establish the notation that will be used in the discussion that follows, the usual definitions of a game in strategic form, expected utility and agents’ distributions over their opponents’ strategies are given here:

Definition 3.2
A game \(\Gamma\) is an ordered triple \((N, S, \boldsymbol{u})\) consisting of the following elements:

  1. A finite set \(N = \{1,2, \ldots ,n\}\), called the set of agents or players.
  2. For each agent \(k \in N\), there is a finite set \(S_k = \{s_{k1},s_{k2}, \ldots ,s_{kn_k}\}\), called the alternative pure strategies for agent \(k\). The Cartesian product \(S = S_1 \times \ldots \times S_n\) is called the pure strategy set for the game \(\Gamma\).
  3. A map \(\boldsymbol{u} : S \rightarrow \Re^n,\) called the utility or payoff function on the pure strategy set. At each strategy combination \(\boldsymbol{s} = (s_{1j_1}, \ldots ,s_{nj_n}) \in S\), agent \(k\)’s particular payoff or utility is given by the \(k\)th component of the value of \(\boldsymbol{u}\), that is, agent \(k\)’s utility \(u_k\) at \(\boldsymbol{s}\) is determined by \[u_k (\boldsymbol{s}) = I_k (\boldsymbol{u}(s_{1j_1}, \ldots ,s_{nj_n}))\]

    where \(I_k (\boldsymbol{x})\) projects \(\boldsymbol{x} \in \Re^n\) onto its \(k\)th component.

The subscript ‘\(-k\)’ indicates the result of removing the \(k\)th component of an \(n\)-tuple or an \(n\)-fold Cartesian product. For instance,

\[S_{-k} = S_1 \times \ldots \times S_{k-1} \times S_{k+1} \times \ldots \times S_n\]

denotes the pure strategy combinations that agent \(k\)’s opponents may play.

Now let us formally introduce a system of the agents’ beliefs into this framework. \(\Delta_k (S_{-k})\) denotes the set of probability distributions over the measurable space \((S_{-k}, \mathfrak{F}_k)\), where \(\mathfrak{F}_k\) denotes the Boolean algebra generated by the strategy combinations \(S_{-k}\). Each agent \(k\) has a probability distribution \(\mu_k \in \Delta_k(S_{-k})\), and this distribution determines the (Savage) expected utilities for each of \(k\)’s possible acts:

\[E(u_k (s_{k j})) = \sum_{A_{-k} \in S_{-k}} u_k (s_{kj}, \boldsymbol{s}_{-k}) \mu_k (\boldsymbol{s}_{-k}),\ j = 1, 2, \ldots ,n_k\]

If \(i\) is an opponent of \(k\), then \(i\)’s individual strategy \(s_{ij}\) may be characterized as a union of strategy combinations \(\bigcup\{\boldsymbol{s}_{-k}\mid s_{ij}\in \boldsymbol{s}_{-k}\} \in \mathfrak{F}_k\), and so \(k\)’s marginal probability for \(i\)’s strategy \(s_{ij}\) may be calculated as follows:

\[\mu_k (s_{ij}) = \sum_{\{s_{-k}\mid s_{ij}\in s_{-k}\}} \mu_{k}(s_{-k})\]

\(\mu_k(\cdot \mid A)\) denotes \(k\)’s conditional probability distribution given a set \(A\), and \(E(\cdot \mid A)\) denotes \(k\)’s conditional expectation given \(\mu_k(\cdot \mid A).\)
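As a concrete illustration of these definitions (a hypothetical three-agent example of my own, not from the text), the sketch below represents an agent \(k\)’s joint distribution \(\mu_k\) over her two opponents’ pure strategy combinations, and computes both a marginal probability \(\mu_k(s_{ij})\) by summing over combinations containing \(s_{ij}\), and the expected utility of one of \(k\)’s acts:

```python
from fractions import Fraction

# Agent k faces opponents 1 and 2, each with pure strategies "a" and "b".
# mu_k: a joint distribution over the opponents' strategy combinations.
mu_k = {
    ("a", "a"): Fraction(1, 2),
    ("a", "b"): Fraction(1, 4),
    ("b", "a"): Fraction(1, 4),
    ("b", "b"): Fraction(0),
}

def marginal(mu, opponent_index, strategy):
    """mu_k(s_{ij}): sum of mu over all combinations in which the given
    opponent plays the given strategy."""
    return sum(p for combo, p in mu.items()
               if combo[opponent_index] == strategy)

# Hypothetical payoffs to one fixed act of k, as a function of the
# opponents' play.
u_k = {("a", "a"): 3, ("a", "b"): 2, ("b", "a"): 4, ("b", "b"): 0}

def expected_utility(u, mu):
    """Savage expected utility of the act with payoff table u, under mu."""
    return sum(u[combo] * p for combo, p in mu.items())

assert marginal(mu_k, 0, "a") == Fraction(3, 4)   # 1/2 + 1/4
assert expected_utility(u_k, mu_k) == 3           # 3/2 + 1/2 + 1 + 0
```

Note that nothing here forces \(\mu_k\) to be a product of independent marginals; that extra assumption is exactly what distinguishes the Nash equilibrium concept discussed later in this section.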

Suppose first that the agents have common knowledge of the full payoff structure of the game they are engaged in and that they are all rational, and that no other information is common knowledge. In other words, each agent knows that her opponents are expected utility maximizers, but does not in general know exactly which strategies they will choose or what their probabilities for her acts are. These common knowledge assumptions are the motivational basis for the solution concept for noncooperative games known as rationalizability, introduced independently by Bernheim (1984) and Pearce (1984). Roughly speaking, a rationalizable strategy is any strategy an agent may choose without violating common knowledge of Bayesian rationality. Bernheim and Pearce argue that when only the structure of the game and the agents’ Bayesian rationality are common knowledge, the game should be considered “solved” if every agent plays a rationalizable strategy. For instance, in the “Chicken” game with payoff structure defined by Figure 3.1,

  Joanna
  \(s_1\) \(s_2\)
Lizzi \(s_1\) (3,3) (2,4)
  \(s_2\) (4,2) (0,0)

Figure 3.1

if Joanna and Lizzi have common knowledge of all of the payoffs at every strategy combination, and they have common knowledge that both are Bayesian rational, then any of the four pure strategy profiles is rationalizable. For if their beliefs about each other are defined by the probabilities

\[\begin{align}\alpha_1 &= \mu_1 \text{ (Joanna plays } s_1), \text{ and} \\\alpha_2 &= \mu_2 \text{ (Lizzi plays } s_1)\end{align}\]

then

\[ E(u_i (s_1)) = 3\alpha_i + 2(1 - \alpha_i) = \alpha_i + 2\]

and

\[ E(u_i (s_2)) = 4\alpha_i + 0(1 - \alpha_i) = 4\alpha_i, \ i = 1, 2\]

so each agent maximizes her expected utility by playing \(s_1\) if \(\alpha_i + 2 \ge 4\alpha_i\), that is, if \(\alpha_i \le 2/3\), and maximizes her expected utility by playing \(s_2\) if \(\alpha_i \ge 2/3\). If it so happens that \(\alpha_i \gt 2/3\) for both agents, then both conform with Bayesian rationality by playing their respective ends of the strategy combination \((s_2,s_2)\) given their beliefs, even though each would want to defect from this strategy combination were she to discover that the other is in fact going to play \(s_2\). Note that the game’s pure strategy Nash equilibria, \((s_1, s_2)\) and \((s_2, s_1)\), are rationalizable, since it is rational for Lizzi and Joanna to conform with either equilibrium given appropriate distributions. In general, the set of a game’s rationalizable strategy combinations contains the set of the game’s pure strategy Nash equilibria.[23]
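The best-reply arithmetic above can be sketched directly (a small Python check of my own, using the Figure 3.1 payoffs; `alpha` is the probability an agent assigns to her opponent playing \(s_1\)):

```python
from fractions import Fraction

def expected_utilities(alpha):
    """Expected utilities of s1 and s2 in the Figure 3.1 Chicken game."""
    eu_s1 = 3 * alpha + 2 * (1 - alpha)   # = alpha + 2
    eu_s2 = 4 * alpha + 0 * (1 - alpha)   # = 4 * alpha
    return eu_s1, eu_s2

def best_replies(alpha):
    eu1, eu2 = expected_utilities(alpha)
    if eu1 > eu2:
        return ["s1"]
    if eu2 > eu1:
        return ["s2"]
    return ["s1", "s2"]                   # indifferent exactly at 2/3

assert best_replies(Fraction(1, 2)) == ["s1"]        # alpha < 2/3
assert best_replies(Fraction(3, 4)) == ["s2"]        # alpha > 2/3
assert best_replies(Fraction(2, 3)) == ["s1", "s2"]  # the threshold
```

Exact rationals (`Fraction`) are used so that the indifference point 2/3 is detected exactly rather than up to floating-point error.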

Rationalizability can be defined formally in several ways. A variation of Bernheim’s original (1984) definition is given here.

Definition 3.3
Given that each agent \(k \in N\) has a probability distribution \(\mu_k \in \Delta_k(S_{-k})\), the system of beliefs \[ \boldsymbol{\mu} = (\mu_1 , \ldots ,\mu_n) \in \Delta_1 (S_{-1}) \times \cdots \times \Delta_n (S_{-n})\]

is Bayes concordant if and only if,

(3.i)
For \(i \ne k\), \(\mu_i (s_{kj}) \gt 0 \Rightarrow s_{kj}\) maximizes \(k\)’s expected utility for some \(\sigma_k \in \Delta_k(S_{-k}),\)

and (3.i) is common knowledge. A pure strategy combination \(\boldsymbol{s} = (s_{1j_1}, \ldots ,s_{nj_n}) \in S\) is rationalizable if and only if the agents have a Bayes concordant system \(\boldsymbol{\mu}\) of beliefs and, for each agent \(k \in N\),

(3.ii)
\(E(u_k (s_{kj_k})) \ge E(u_k (s_{ki_k})),\) for \(i_k \ne j_k\).[24]
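In small games, an over-approximation of the rationalizable set can be computed by iterated elimination of strictly dominated strategies. The sketch below (my own illustration, not part of Bernheim’s or Pearce’s definitions) checks only domination by pure strategies, which is a conservative simplification: strategies dominated by mixed strategies would require a further linear-programming step, so the survivors here may properly contain the rationalizable strategies:

```python
def dominated(strats, opp_strats, payoff):
    """Strategies strictly dominated by some other pure strategy."""
    return {s for s in strats
            if any(all(payoff(t, o) > payoff(s, o) for o in opp_strats)
                   for t in strats if t != s)}

def iterated_elimination(S1, S2, u1, u2):
    """Iteratively delete pure strategies strictly dominated by other
    pure strategies, for both players, until nothing more is deleted."""
    S1, S2 = set(S1), set(S2)
    while True:
        d1 = dominated(S1, S2, lambda s, o: u1[(s, o)])
        d2 = dominated(S2, S1, lambda s, o: u2[(o, s)])
        if not (d1 or d2):
            return S1, S2
        S1 -= d1
        S2 -= d2

# Chicken (Figure 3.1): neither strategy dominates the other for either
# player, so both strategies survive for both players -- consistent with
# the claim that all four pure strategy profiles are rationalizable.
u1 = {("s1", "s1"): 3, ("s1", "s2"): 2, ("s2", "s1"): 4, ("s2", "s2"): 0}
u2 = {("s1", "s1"): 3, ("s1", "s2"): 4, ("s2", "s1"): 2, ("s2", "s2"): 0}
assert iterated_elimination({"s1", "s2"}, {"s1", "s2"}, u1, u2) == \
    ({"s1", "s2"}, {"s1", "s2"})
```

In two-agent games the strategies that survive elimination of strategies dominated by *mixed* strategies are exactly the rationalizable ones; the pure-strategy check above coincides with that in this example only because no mixed domination occurs in Chicken.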

The following result shows that the common knowledge restriction on the distributions in Definition 3.3 formalizes the assumption that the agents have common knowledge of Bayesian rationality.

Proposition 3.4
In a game \(\Gamma\), common knowledge of Bayesian rationality is satisfied if, and only if, (3.i) is common knowledge.
Proof.

When agents have common knowledge of the game and their Bayesian rationality only, one can predict that they will follow a rationalizable strategy profile. However, rationalizability becomes an unstable solution concept if the agents come to know more about one another. For instance, in the Chicken example above with \(\alpha_i \gt 2/3\), \(i = 1, 2\), if either agent were to discover the other agent’s beliefs about her, she would have good reason not to follow the \((s_2,s_2)\) profile and to revise her own beliefs regarding the other agent. If, on the other hand, it so happens that \(\alpha_1 = 1\) and \(\alpha_2 = 0\), so that the agents maximize expected payoff by following the \((s_2, s_1)\) profile, then should the agents discover their beliefs about each other, they will still follow \((s_2, s_1)\). Indeed, if their beliefs are common knowledge, then one can predict with certainty that they will follow \((s_2,s_1)\). The Nash equilibrium \((s_2,s_1)\) is characterized by the belief distributions defined by \(\alpha_1 = 1\) and \(\alpha_2 = 0\).

The Nash equilibrium is a special case of correlated equilibrium concepts, which are defined in terms of the belief distributions of the agents in a game. In general, a correlated equilibrium-in-beliefs is a system of agents’ probability distributions which remains stable given common knowledge of the game, rationality and the beliefs themselves. We will review two alternative correlated equilibrium concepts (Aumann 1974, 1987; Vanderschraaf 1995, 2001), and show how each generalizes the Nash equilibrium concept.

Definition 3.5
Given that each agent \(k \in N\) has a probability distribution \(\mu_k \in \Delta_k (S_{-k})\), the system of beliefs

\[ \boldsymbol{\mu}^* = (\mu_1^*, \ldots ,\mu_n^* ) \in \Delta_1 (S_{-1}) \times \ldots \times \Delta_n (S_{-n})\]

is an endogenous correlated equilibrium if, and only if,

(3.iii)
For \(i \ne k\), \(\mu_i^*(s_{kj}) \gt 0 \Rightarrow s_{kj}\) maximizes \(k\)’s expected utility given \(\mu_k^*.\)

If \(\boldsymbol{\mu}^*\) is an endogenous correlated equilibrium, a pure strategy combination \(\boldsymbol{s}^* = (s_1^*, \ldots ,s_n^*) \in S\) is an endogenous correlated equilibrium strategy combination given \(\boldsymbol{\mu}^*\) if, and only if, for each agent \(k \in N,\)

(3.iv)
\(E(u_k (s_k^*)) \ge E(u_k (s_{ki}))\) for \(s_{ki} \ne s_k^*.\)

Hence, the endogenous correlated equilibrium \(\boldsymbol{\mu}^*\) restricts the set of strategies that the agents might follow, as do the Bayes concordant beliefs of rationalizability. However, the endogenous correlated equilibrium concept is a proper refinement of rationalizability, because the latter does not presuppose that condition (3.iii) holds with respect to the beliefs one’s opponents actually have. If exactly one pure strategy combination \(\boldsymbol{s}^*\) satisfies (3.iv) given \(\boldsymbol{\mu}^*\), then \(\boldsymbol{\mu}^*\) is a strict equilibrium, and in this case one can predict with certainty what the agents will do given common knowledge of the game, rationality and their beliefs. Note that Definition 3.5 says nothing about whether or not the agents regard their opponents’ strategy combinations as probabilistically independent. Also, this definition does not require that the agents’ probabilities are consistent, in the sense that agents’ probabilities for a mutual opponent’s acts agree. A simple refinement of the endogenous correlated equilibrium concept characterizes the Nash equilibrium concept.
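Condition (3.iii) can be checked mechanically. The sketch below (my own illustration, using the Chicken payoffs of Figure 3.1 and the beliefs \(\alpha_1 = 1\), \(\alpha_2 = 0\) discussed above) verifies that every strategy the opponent’s belief assigns positive probability maximizes the relevant agent’s expected utility:

```python
# Chicken payoffs (Figure 3.1): payoff[(lizzi, joanna)] = (u_Lizzi, u_Joanna).
payoff = {("s1", "s1"): (3, 3), ("s1", "s2"): (2, 4),
          ("s2", "s1"): (4, 2), ("s2", "s2"): (0, 0)}
acts = ("s1", "s2")

def eu_lizzi(act, alpha1):   # alpha1 = Lizzi's probability Joanna plays s1
    return alpha1 * payoff[(act, "s1")][0] + (1 - alpha1) * payoff[(act, "s2")][0]

def eu_joanna(act, alpha2):  # alpha2 = Joanna's probability Lizzi plays s1
    return alpha2 * payoff[("s1", act)][1] + (1 - alpha2) * payoff[("s2", act)][1]

def satisfies_3iii(alpha1, alpha2):
    """(3.iii): any strategy given positive probability by the opponent's
    belief must maximize that agent's expected utility, computed from the
    agent's own belief."""
    mu1 = {"s1": alpha1, "s2": 1 - alpha1}   # Lizzi's belief about Joanna
    mu2 = {"s1": alpha2, "s2": 1 - alpha2}   # Joanna's belief about Lizzi
    max_j = max(eu_joanna(a, alpha2) for a in acts)
    max_l = max(eu_lizzi(a, alpha1) for a in acts)
    return (all(eu_joanna(a, alpha2) == max_j for a in acts if mu1[a] > 0)
            and all(eu_lizzi(a, alpha1) == max_l for a in acts if mu2[a] > 0))

assert satisfies_3iii(1, 0)       # the Nash equilibrium (s2, s1) in beliefs
assert not satisfies_3iii(1, 1)   # s1 is not Joanna's best reply here
```

In the failing case each agent is certain the other plays \(s_1\), yet \(s_1\) is not a best reply to itself in Chicken, so the beliefs are not an endogenous correlated equilibrium.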

Definition 3.6
A system of agents’ beliefs \(\boldsymbol{\mu}^*\) is a Nash equilibrium if, and only if,

  (a) condition (3.iii) is satisfied,
  (b) For each \(k \in N\), \(\mu_k^*\) satisfies probabilistic independence, and
  (c) For each \(s_{kj} \in S_k\), if \(i, l \ne k\) then \(\mu_i^*(s_{kj}) = \mu_l^*(s_{kj})\).

In other words, an endogenous correlated equilibrium is a Nash equilibrium-in-beliefs when each agent regards the moves of his opponents as probabilistically independent and the agents’ probabilities are consistent. Note that in the 2-agent case, conditions (b) and (c) of Definition 3.6 are always satisfied, so for 2-agent games the endogenous correlated equilibrium concept reduces to the Nash equilibrium concept. Conditions (b) and (c) are traditionally assumed in game theory, but Skyrms (1991) and Vanderschraaf (1995, 2001) argue that there may be good reasons to relax these assumptions in games with 3 or more agents.

Brandenburger and Dekel (1988) show that in 2-agent games, if the beliefs of the agents are common knowledge, condition (3.iii) characterizes a Nash equilibrium-in-beliefs. As they note, condition (3.iii) characterizes a Nash equilibrium-in-beliefs for the \(n\)-agent case if the probability distributions are consistent and satisfy probabilistic independence. Proposition 3.7 extends Brandenburger and Dekel’s result to the endogenous correlated equilibrium concept by relaxing the consistency and probabilistic independence assumptions.

Proposition 3.7
Assume that the probabilities

\[ \boldsymbol{\mu} = (\mu_1 ,\ldots ,\mu_n) \in \Delta_1 (S_{-1}) \times \ldots \times \Delta_n (S_{-n})\]

are common knowledge. Then common knowledge of Bayesian rationality is satisfied if, and only if, \(\boldsymbol{\mu}\) is an endogenous correlated equilibrium.
Proof.

In addition, we have:

Corollary 3.8 (Brandenburger and Dekel, 1988)
Assume in a 2-agent game that the probabilities

\[ \boldsymbol{\mu} = (\mu_1,\mu_2) \in \Delta_1 (S_{-1}) \times \Delta_2 (S_{-2})\]

are common knowledge. Then common knowledge of Bayesian rationality is satisfied if, and only if, \(\boldsymbol{\mu}\) is a Nash equilibrium.

Proof.
The endogenous correlated equilibrium concept reduces to the Nash equilibrium concept in the 2-agent case, so the corollary follows by Proposition 3.7.

If \(\boldsymbol{\mu}^*\) is a strict equilibrium, then one can predict which pure strategy profile the agents in a game will follow given common knowledge of the game, rationality and \(\boldsymbol{\mu}^*.\) But if \(\boldsymbol{\mu}^*\) is such that several distinct pure strategy profiles satisfy (3.iv) with respect to \(\boldsymbol{\mu}^*\), then one can no longer predict with certainty what the agents will do. For instance, in the Chicken game of Figure 3.1, the belief distributions defined by \(\alpha_1 = \alpha_2 = 2/3\) together are a Nash equilibrium-in-beliefs. Given common knowledge of this equilibrium, either pure strategy is a best reply for each agent, in the sense that either pure strategy maximizes expected utility. Indeed, if agents can also adopt randomized or mixed strategies, at which they follow one of several pure strategies according to the outcome of a chance experiment, then any of the infinitely many mixed strategies an agent might adopt in Chicken is a best reply given \(\boldsymbol{\mu}^*\).[25] So the endogenous correlated equilibrium concept does not determine the exact outcome of a game in all cases, even if one assumes probabilistic consistency and independence so that the equilibrium is a Nash equilibrium.

Another correlated equilibrium concept formalized by Aumann (1974, 1987) does give a determinate prediction of what agents will do in a game given appropriate common knowledge. To illustrate Aumann’s correlated equilibrium concept, let us consider the Figure 3.1 game once more. If Joanna and Lizzi can tie their strategies to their knowledge of the possible worlds in a certain way, they can follow a system of correlated strategies which will yield a payoff vector they both prefer to that of the mixed Nash equilibrium and which is itself an equilibrium. One way they can achieve this is to have their friend Ron play a variation of the familiar shell game by hiding a pea under one of three walnut shells, numbered 1, 2 and 3. Joanna and Lizzi both think that each of the three relevant possible worlds corresponding to \(\omega_k = \{\)the pea lies under shell \(k\}\) is equally likely. Ron then gives Lizzi and Joanna each a private recommendation, based upon the outcome of the shell game, which defines a system of strategy combinations \(f\) as follows:

\[\tag{\(\star\)}f(\omega) =\begin{cases} (s_1, s_1) \text{ if } \omega = \omega_1 \\ (s_1, s_2) \text{ if } \omega = \omega_2 \\ (s_2, s_1) \text{ if } \omega = \omega_3\end{cases}\]

\(f\) is a correlated strategy system because the agents tie their strategies, by following their recommendations, to the same set of states of the world \(\Omega\). \(f\) is also a strict Aumann correlated equilibrium, for if each agent knows how Ron makes his recommendations, but knows only the recommendation he gives her, either would do strictly worse were she to deviate from her recommendation.[26] Since there are several strict equilibria of Chicken, \(f\) corresponds to a convention as defined in Vanderschraaf (1998). The overall expected payoff vector of \(f\) is (3,3), which lies outside the convex hull of the payoffs for the game’s Nash equilibria and which Pareto-dominates the expected payoff vector (8/3, 8/3) of the mixed Nash equilibrium defined by \(\alpha_i = 2/3\), \(i = 1, 2\).[27] The correlated equilibrium \(f\) is characterized by the probability distribution of the agents’ play over the strategy profiles, given in Figure 3.3:

  Joanna
  \(s_1\) \(s_2\)
Lizzi \(s_1\) 1/3 1/3
  \(s_2\) 1/3 0

Figure 3.3
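The equilibrium property of \(f\) can be verified by direct computation. The sketch below (my own check, not from the text) uses the Figure 3.1 payoffs and the play distribution induced by \(f\) (probability 1/3 on each of \((s_1,s_1)\), \((s_1,s_2)\), \((s_2,s_1)\), and 0 on \((s_2,s_2)\), since the shells are equiprobable) to confirm that, conditional on each recommendation, obeying strictly beats deviating:

```python
from fractions import Fraction

# Distribution over play induced by f (the pea is equally likely to be
# under any of the three shells).
dist = {("s1", "s1"): Fraction(1, 3), ("s1", "s2"): Fraction(1, 3),
        ("s2", "s1"): Fraction(1, 3), ("s2", "s2"): Fraction(0)}
# Chicken payoffs (Figure 3.1): (u_Lizzi, u_Joanna).
payoff = {("s1", "s1"): (3, 3), ("s1", "s2"): (2, 4),
          ("s2", "s1"): (4, 2), ("s2", "s2"): (0, 0)}

def strictly_obeys(player):
    """player 0 = Lizzi (row), 1 = Joanna (column). Conditional on each
    recommendation, following it strictly beats every deviation."""
    for rec in ("s1", "s2"):
        cells = {c: p for c, p in dist.items() if c[player] == rec and p > 0}
        total = sum(cells.values())
        if total == 0:
            continue   # recommendation never issued
        for dev in ("s1", "s2"):
            if dev == rec:
                continue
            swap = lambda c: (dev, c[1]) if player == 0 else (c[0], dev)
            eu_obey = sum(p * payoff[c][player] for c, p in cells.items()) / total
            eu_dev = sum(p * payoff[swap(c)][player] for c, p in cells.items()) / total
            if not eu_obey > eu_dev:
                return False
    return True

assert strictly_obeys(0) and strictly_obeys(1)   # f is a strict equilibrium

# Overall expected payoff vector of f is (3, 3).
assert sum(p * payoff[c][0] for c, p in dist.items()) == 3
assert sum(p * payoff[c][1] for c, p in dist.items()) == 3
```

For instance, conditional on being told \(s_1\), Lizzi assigns probability 1/2 to each of Joanna’s strategies, so obeying yields 5/2 against 2 for deviating; conditional on \(s_2\), she knows Joanna was told \(s_1\), so obeying yields 4 against 3.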

Aumann (1987) proves a result relating his correlated equilibrium concept to common knowledge. To review this result, we must give the formal definition of Aumann correlated equilibrium.

Definition 3.9
Given a game \(\Gamma = (N, S, \boldsymbol{u})\) together with a finite set of possible worlds \(\Omega\), the vector valued function \(f: \Omega \rightarrow S\) is a correlated \(n\)-tuple. If \(f(\omega) = (f_1 (\omega), \ldots ,f_n (\omega))\) denotes the components of \(f\) for the agents of \(N\), then agent \(k\)’s recommended strategy at \(\omega\) is \(f_k (\omega).\) \(f\) is an Aumann correlated equilibrium iff \[E(u_k \circ f) \ge E(u_k (f_{-k}, g_k)),\]

for each \(k \in N\) and for any function \(g_k\) that is a function of \(f_k\).

The agents are at Aumann correlated equilibrium if at each possible world \(\omega \in \Omega\), no agent will want to deviate from his recommended strategy, given that the others follow their recommended strategies. Hence, Aumann correlated equilibrium uniquely specifies the strategy of each agent, by explicitly introducing a space of possible worlds to which agents can correlate their acts. The deviations \(g_i\) are required to be functions of \(f_i\), that is, compositions of some other function with \(f_i\), because \(i\) is informed of \(f_i (\omega)\) only, and so can only distinguish between the possible worlds of \(\Omega\) that are distinguished by \(f_i\). As noted already, the primary difference between Aumann’s notion of correlated equilibrium and the endogenous correlated equilibrium is that in Aumann’s correlated equilibrium, the agents correlate their strategies to some event \(\omega \in \Omega\) that is external to the game. One way to view this difference is that agents who correlate their strategies exogenously can calculate their expected utilities conditional on their own strategies.

In Aumann’s model, a description of each possible world \(\omega\) includes descriptions of the following: the game \(\Gamma\), the agents’ private information partitions, the actions chosen by each agent at \(\omega\), and each agent’s prior probability distribution \(\mu_k(\cdot)\) over \(\Omega\). The basic idea is that conditional on \(\omega\), everyone knows everything that can be the object of uncertainty on the part of any agent, but in general, no agent necessarily knows which world \(\omega\) is the actual world. The agents can use their priors to calculate the probabilities that the various act combinations \(\boldsymbol{s} \in S\) are played. If the agents’ priors are such that for all \(i, j \in N,\) \(\mu_i(\omega) = 0\) iff \(\mu_j (\omega) = 0,\) then the agents’ priors are mutually absolutely continuous. If the agents’ priors all agree, that is, \(\mu_1 (\omega) = \ldots = \mu_n (\omega) = \mu(\omega)\) for each \(\omega \in \Omega\), then it is said that the common prior assumption, or CPA, is satisfied. If agents are following an Aumann correlated equilibrium \(f\) and the CPA is satisfied, then \(f\) is an objective Aumann correlated equilibrium. An Aumann correlated equilibrium is a Nash equilibrium if the CPA is satisfied and the agents’ distributions satisfy probabilistic independence.[28]

Let \(s_i (\omega)\) denote the strategy chosen by agent \(i\) at possible world \(\omega\). Then \(s: \Omega \rightarrow S\) defined by \(s(\omega) = ( s_1 (\omega),\ldots ,s_n (\omega) )\) is a correlated \(n\)-tuple. Given that \(\mathcal{H}_i\) is a partition of \(\Omega\),[29] the function \(s_i : \Omega \rightarrow S_i\) defined by \(s\) is \(\mathcal{H}_i\)-measurable if for each \(\mathcal{H}_{ij} \in \mathcal{H}_{i}\), \(s_i (\omega')\) is constant for each \(\omega' \in \mathcal{H}_{ij}.\) \(\mathcal{H}_i\)-measurability is a formal way of saying that \(i\) knows what she will do at each possible world, given her information.

Definition 3.10
Agent \(i\) is Bayes rational with respect to \(\omega \in \Omega\) (alternatively, \(\omega\)-Bayes rational) iff \(s_i\) is \(\mathcal{H}_i\)-measurable and

\[E(u_i \circ s \mid \mathcal{H}_i)(\omega) \ge E(u_i (v_i, s_{-i}) \mid \mathcal{H}_i)(\omega)\]

for any \(\mathcal{H}_i\)-measurable function \(v_i : \Omega \rightarrow S_i\).

Note that Aumann’s definition of \(\omega\)-Bayesian rationality implies that \(\mu_i (\mathcal{H}_i (\omega)) \gt 0\), so that the conditional expectations are defined. Aumann’s main result, given next, implicitly assumes that \(\mu_i (\mathcal{H}_i (\omega)) \gt 0\) for every agent \(i \in N\) and every possible world \(\omega \in \Omega\). This poses no technical difficulties if the CPA is satisfied, or even if the priors are only mutually absolutely continuous, since if this is the case then one can simply drop any \(\omega\) with zero prior from consideration.

Proposition 3.11 (Aumann 1987)
If each agent \(i \in N\) is \(\omega\)-Bayes rational at each possible world \(\omega \in \Omega\), then the agents are following an Aumann correlated equilibrium. If the CPA is satisfied, then the correlated equilibrium is objective.
Proof.

Part of the uncertainty the agents might have about their situation is whether or not all agents are rational. But if it is assumed that all agents are \(\omega\)-Bayesian rational at each \(\omega \in \Omega\), then a description of this fact forms part of the description of each possible \(\omega\) and thus lies in the meet of the agents’ partitions. As noted already, descriptions of the agents’ priors, their partitions and the game also form part of the description of each possible world, so propositions corresponding to these facts also lie in the meet of the agents’ partitions. So another way of stating Aumann’s main result is as follows: Common knowledge of \(\omega\)-Bayesian rationality at each possible world implies that the agents follow an Aumann correlated equilibrium.

Propositions 3.7 and 3.11 are powerful results. They say that common knowledge of rationality and of agents’ beliefs about each other, quantified as their probability distributions over the strategy profiles they might follow, implies that the agents’ beliefs characterize an equilibrium of the game. Then if the agents’ beliefs are unconditional, Proposition 3.7 says that the agents are rational to follow a strategy profile consistent with the corresponding endogenous correlated equilibrium. If their beliefs are conditional on their private information partitions, then Proposition 3.11 says they are rational to follow the strategies the corresponding Aumann correlated equilibrium recommends. However, we must not overestimate the importance of these results, for they say nothing about the origins of the common knowledge of rationality and beliefs. For instance, in the Chicken game of Figure 3.1, we considered an example of a correlated equilibrium in which it was assumed that Lizzi and Joanna had common knowledge of the system of recommended strategies defined by \((\star).\) Given this common knowledge, Joanna and Lizzi indeed have decisive reason to follow the Aumann correlated equilibrium \(f\). But where did this common knowledge come from? How, in general, do agents come to have the common knowledge which justifies their conforming to an equilibrium? Philosophers and social scientists have made only limited progress in addressing this question.

3.4 Games of Perfect Information

In extensive form games, the agents move in sequence. At each stage, the agent who is to move must base her decisions upon what she knows about the preceding moves. This part of the agent’s knowledge is characterized by an information set, which is the set of alternative moves that an agent knows her predecessor might have chosen. For instance, consider the extensive form game of Figure 3.4:


Figure 3.4

When Joanna moves she is at her information set \(I^{22} = \{C^1, D^1\}\), that is, she moves knowing that Lizzi might have chosen either \(C^1\) or \(D^1\), so this game is an extensive form representation of the Chicken game of Figure 3.1.

In a game of perfect information, each information set consists of a single node in the game tree, since by definition at each stage the agent who is to move knows exactly how her predecessors have moved. In Example 1.4 it was noted that the method of backwards induction can be applied to any game of perfect information.[30] The backwards induction solution is a Nash equilibrium of a game of perfect information, and it is unique if no agent is indifferent between any two outcomes. The following result gives sufficient conditions to justify backwards induction play in a game of perfect information:

Proposition 3.12 (Bicchieri 1993)
In an extensive form game of perfect information, the agents follow the backwards induction solution if the following conditions are satisfied for each agent \(i\) at each information set \(I^{ik}\):

  1. \(i\) is rational, \(i\) knows this and \(i\) knows the game, and
  2. At any information set \(I^{jk+1}\) that immediately follows \(I^{ik}\), \(i\) knows at \(I^{ik}\) what \(j\) knows at \(I^{jk+1}\).
Proof.

Proposition 3.12 says that far less than common knowledge of the game and of rationality suffices for the backwards induction solution to obtain in a game of perfect information. All that is needed is for each agent at each of her information sets to be rational, to know the game and to know what the next agent to move knows! For instance, in the Figure 1.2 game, if \(R_1\) \((R_2)\) stands for “Alan (Fiona) is rational” and \(\mathbf{K}_i(\Gamma)\) stands for “\(i\) knows the game \(\Gamma\)”, then the backwards induction solution is implied by the following:

  1. At \(I^{24}\), \(R_2\) and \(\mathbf{K}_2(\Gamma)\).
  2. At \(I^{13}\), \(R_1\), \(\mathbf{K}_1(\Gamma)\), \(\mathbf{K}_1(R_2)\), and \(\mathbf{K}_1\mathbf{K}_2(\Gamma)\).
  3. At \(I^{22}\), \(\mathbf{K}_2(R_1)\), \(\mathbf{K}_2\mathbf{K}_1(R_2)\), and \(\mathbf{K}_2\mathbf{K}_1\mathbf{K}_2(\Gamma)\).
  4. At \(I^{11}\), \(\mathbf{K}_1\mathbf{K}_2(R_1)\), \(\mathbf{K}_1\mathbf{K}_2\mathbf{K}_1(R_2)\), and \(\mathbf{K}_1\mathbf{K}_2\mathbf{K}_1\mathbf{K}_2(\Gamma)\).[31]
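As an aside, the backwards induction procedure itself is mechanical, and a short recursive sketch may help fix ideas. The Python below is our own illustration: the game encoded is a made-up two-stage example (it is not the Figure 1.2 game, whose payoffs are not reproduced in this section), and payoff ties are assumed away so that the solution is unique.

```python
def backward_induction(node):
    """Backwards induction on a finite game tree of perfect information.

    A node is either ('leaf', payoffs) or ('move', player, [(label, child), ...]).
    Returns (payoffs, path): the payoff vector of the backwards induction
    solution and the labels of the moves actually played along its path.
    Assumes no player is indifferent between any two outcomes.
    """
    if node[0] == 'leaf':
        return node[1], []
    _, player, moves = node
    best = None
    for label, child in moves:
        payoffs, path = backward_induction(child)
        # the agent who moves here picks the continuation best for herself
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [label] + path)
    return best

# A hypothetical two-stage game: player 0 can end the game at once for (1, 0),
# or pass the move to player 1, who then chooses between (0, 2) and (2, 1).
game = ('move', 0, [
    ('down', ('leaf', (1, 0))),
    ('across', ('move', 1, [
        ('down', ('leaf', (0, 2))),
        ('across', ('leaf', (2, 1))),
    ])),
])
```

Working from the last mover backwards, player 1 would choose ‘down’ for a payoff of 2; player 0, anticipating this, ends the game immediately, so the procedure returns the payoffs (1, 0) and the path ['down'].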

One might think that a corollary to Proposition 3.12 is that in a game of perfect information, common knowledge of the game and of rationality implies the backwards induction solution. This is the classical argument for the backwards induction solution. Many game theorists continue to accept the classical argument, but in recent years, the argument has come under strong challenge, led by the work of Reny (1988, 1992), Binmore (1987) and Bicchieri (1989, 1993). The basic idea underlying their criticisms of backwards induction can be illustrated with the Figure 1.2 game. According to the classical argument, if Alan and Fiona have common knowledge of rationality and the game, then each will predict that the other will follow her end of the backwards induction solution, to which his end of the backwards induction solution is his unique best response. However, what if Fiona reconsiders what to do if she finds herself at the information set \(I^{22}\)? If the information set \(I^{22}\) is reached, then Alan has of course not followed the backwards induction solution. If we assume that at \(I^{22}\), Fiona knows only what is stated in (iii), then she can explain her being at \(I^{22}\) as a failure of either \(\mathbf{K}_1\mathbf{K}_2\mathbf{K}_1(R_2)\) or \(\mathbf{K}_1\mathbf{K}_2\mathbf{K}_1\mathbf{K}_2(\Gamma)\) at \(I^{11}\). In this case, Fiona’s thinking that either \(\neg\mathbf{K}_1\mathbf{K}_2\mathbf{K}_1(R_2)\) or \(\neg\mathbf{K}_1\mathbf{K}_2\mathbf{K}_1\mathbf{K}_2(\Gamma)\) at \(I^{11}\) is compatible with what Alan in fact does know at \(I^{11}\), so Fiona should not necessarily be surprised to find herself at \(I^{22}\), and given that what she knows there is characterized by (iii), following the backwards induction solution is her best strategy.
But if rationality and the game are common knowledge, or even if Fiona and Alan both just have mutual knowledge of the statements characterized by (iii) and (iv), then at \(I^{22}\), Fiona knows that \(\mathbf{K}_1\mathbf{K}_2\mathbf{K}_1(R_2)\) and \(\mathbf{K}_1\mathbf{K}_2\mathbf{K}_1\mathbf{K}_2(\Gamma)\) at \(I^{11}\). Hence given this much mutual knowledge, Fiona can no longer explain why Alan has deviated from the backwards induction solution, since this deviation contradicts part of what is their mutual knowledge. So if she finds herself at \(I^{22}\), Fiona does not necessarily have good reason to think that Alan will follow the backwards induction solution of the subgame beginning at \(I^{22}\), and hence she might not have good reason to follow the backwards induction solution, either. Bicchieri (1993), who along with Binmore (1987) and Reny (1988, 1992) extends this argument to games of perfect information with arbitrary length, draws a startling conclusion: If agents have strictly too few or strictly too many levels of mutual knowledge of rationality and the game relative to the number of potential moves, one cannot predict that they will follow the backwards induction solution. This would undermine the central role backwards induction has played in the analysis of extensive form games. For why should the number of levels of mutual knowledge the agents have depend upon the length of the game?

The classical argument for backwards induction implicitly assumes that at each stage of the game, the agents discount the preceding moves as strategically irrelevant. Defenders of the classical argument can argue that this assumption makes sense, since by definition at any agent’s decision node, the previous moves that led to this node are now fixed. Critics of the classical argument question this assumption, contending that when reasoning about how to move at any of his information sets, including those not on the backwards induction equilibrium path, part of what an agent must consider is what conditions might have led to his being at that information set. In other words, agents should incorporate reasoning about the reasoning of the previous movers, or forward induction reasoning, into their deliberations over how to move at a given information set. Binmore (1987) and Bicchieri (1993) contend that a backwards induction solution to a game should be consistent with the solution a corresponding forward induction argument recommends. As we have seen, given common knowledge of the game and of rationality, forward induction reasoning can lead the agents to an apparent contradiction: The classical argument for backwards induction is predicated on what agents predict they would do at nodes in the tree that are never reached. They make these predictions based upon their common knowledge of the game and of rationality. But forward induction reasoning seems to imply that if any off-equilibrium node had been reached, common knowledge of rationality and the game must have failed, so how could the agents have predicted what would happen at these nodes?

3.5 Communication Networks

Situations in which a member of a population \(P\) is willing to engage in a certain course of action provided that a large enough portion of \(P\) engages in some appropriate behavior are typical problems of collective action. Consider the case of an agent who is debating whether to join a revolt. Her decision to join or not to join will depend on the number of other agents whom she expects to join the revolt. If that number is too low, she will prefer not to revolt, while if the number is sufficiently large, she will prefer to revolt. Michael Chwe proposes a game-theoretic model of such situations. Players’ knowledge about other players’ intentions depends on a social network in which the players are located. The individual ‘threshold’ of each player (the number of revolting agents needed for that specific player to revolt) is known only by her immediate neighbors in the network. Besides the intrinsic value of the results obtained by Chwe’s analysis regarding the subject of collective action, his model also provides insights both about the relation between social networks and common knowledge and about the role of common knowledge in collective action. For example, in some situations, first-order knowledge of other agents’ personal thresholds is not sufficient to motivate an agent to take action, whereas higher-order knowledge or, in the limit, common knowledge is.

We present Chwe’s model following (Chwe 1999) and (Chwe 2000). Suppose there is a group \(P\) of \(n\) people, and each agent has two strategies: \(r\) (revolt, that is, participate in the collective action) and \(s\) (stay home and do not participate). Each agent has her own individual threshold \(\theta \in \{1, 2, \ldots, n+1\}\) and she prefers \(r\) over \(s\) if and only if the total number of players who revolt is greater than or equal to her threshold. An agent with threshold 1 always revolts; an agent with threshold 2 revolts only if another agent does; an agent with threshold \(n\) revolts only if all agents do; an agent with threshold \(n+1\) never revolts; and so on. The agents are located in a social network, represented by a binary relation \(\rightarrow\) over \(P\). The intended meaning of \(i \rightarrow j\) is that agent \(i\) ‘talks’ to agent \(j\), that is to say, agent \(j\) knows the threshold of agent \(i\). If we define \(B(i)\) to be the set \(\{j \in P : j \rightarrow i\}\), we can interpret \(B(i)\) as \(i\)’s ‘neighborhood’ and say that, in general, \(i\) knows the thresholds of all agents in her neighborhood. A further assumption is that, for all \(j, k \in B(i)\), \(i\) knows whether \(j \rightarrow k\) or not, that is, every agent knows whether her neighbors are communicating with each other. The relation \(\rightarrow\) is taken to be reflexive (each agent knows her own threshold).

Players’ knowledge is represented as usual in a possible worlds framework. Consider for example the case in which there are two agents, each with one of the thresholds 1, 2 or 3. There are nine possible worlds, represented by ordered pairs of numbers giving the first and second player’s individual thresholds respectively: 11, 12, 13, …, 32, 33. If the players do not communicate, each knows her own threshold only. Player 1’s information partition reflects her ignorance about player 2’s threshold and it consists of the sets \(\{11, 12, 13\}\), \(\{21, 22, 23\}\), \(\{31, 32, 33\}\); whereas, similarly, player 2’s partition consists of the sets \(\{11, 21, 31\}\), \(\{12, 22, 32\}\), \(\{13, 23, 33\}\). If player 1’s threshold is 1, she revolts no matter what player 2’s threshold is. Hence, player 1 revolts in \(\{11, 12, 13\}\). If player 1’s threshold is 3, she never revolts. Hence, she plays \(s\) in \(\{31, 32, 33\}\). If her threshold is 2, she revolts only if the other player revolts as well. Since in this example we are assuming that there is no communication between the agents, player 1 cannot be sure of player 2’s action, and chooses the non-risky \(s\) in \(\{21, 22, 23\}\) as well. Similarly, player 2 plays \(r\) in \(\{11, 21, 31\}\) and \(s\) otherwise. Consider now the case in which \(1 \rightarrow 2\) and \(2 \rightarrow 1\). Both players now have the finest information partitions. Thresholds of 1 and 3 again yield \(r\) and \(s\), respectively, for both players. However, in player 1’s cells \(\{21\}\) and \(\{22\}\), she knows that player 2 will revolt, and, having threshold 2, she revolts as well. Similarly for player 2 in his cells \(\{12\}\) and \(\{22\}\). Note that the case in which both players have threshold 2 yields both the equilibrium in which both players revolt and the equilibrium in which each player stays home. It is assumed that in the case of multiple equilibria, the one which results in the most revolt will obtain.
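The partitions in this two-agent example can be generated mechanically. The following Python sketch is our own encoding (worlds as pairs of thresholds); a player’s information partition simply groups the worlds that agree on all the coordinates she knows.

```python
from itertools import product

# the nine possible worlds: (player 1's threshold, player 2's threshold)
states = list(product([1, 2, 3], repeat=2))

def partition(known_coords):
    """Partition of the state space for a player who knows the given coordinates."""
    cells = {}
    for w in states:
        key = tuple(w[c] for c in known_coords)
        cells.setdefault(key, []).append(w)
    return sorted(cells.values())

no_comm_1 = partition([0])      # player 1 knows only her own threshold
no_comm_2 = partition([1])      # player 2 knows only his own threshold
full_comm = partition([0, 1])   # with 1 <-> 2 communication: the finest partition
```

Here `no_comm_1` is `[[(1,1),(1,2),(1,3)], [(2,1),(2,2),(2,3)], [(3,1),(3,2),(3,3)]]`, matching the partition \(\{11, 12, 13\}\), \(\{21, 22, 23\}\), \(\{31, 32, 33\}\) above, and `full_comm` consists of the nine singletons.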


Figure 3.5

The analysis of the example above applies to general networks with \(n\) agents. Consider for example the three person network \(1 \rightarrow 2\), \(2 \rightarrow 1\), \(2 \rightarrow 3\), represented in figure 3.5a (notice that symmetric links are represented by a line without arrowheads), and assume that each player has threshold 2. The network between players 1 and 2 is the same as the one above, hence if they have threshold 2, they both revolt regardless of the threshold of player 3. Player 3, on the other hand, knows her own threshold and player 2’s. Hence, if they all have threshold 2, she cannot distinguish between the possibilities in the set \(\{122, 222, 322, 422\}\). At 422, in particular, neither player 1 nor player 2 revolts, hence player 3 cannot take the risk and does not revolt, even if, in fact, she has a neighbor who revolts. Adding the link \(1 \rightarrow 3\) to the network (cf. figure 3.5b) we provide player 3 with knowledge about player 1’s action, hence in this case, if they all have threshold 2, they all revolt. Notice that if we break the link between players 1 and 2 (so that the network is \(1 \rightarrow 3\) and \(2 \rightarrow 3\)), player 3 knows that 1 and 2 cannot communicate and hence do not revolt at 222, therefore she chooses \(s\) as well. Knowledge of what other players know about other players is crucial.


Figure 3.6

The next example reveals that in some cases not even first-order knowledge is sufficient to trigger action, and higher levels of knowledge are necessary. Consider four players, each with threshold 3, in the two different networks represented in figure 3.6 (‘square’, in figure 3.6a, and ‘kite’, in figure 3.6b). In the square network, player 1 knows that both 2 and 4 have threshold 3. However, she does not know player 3’s threshold. If player 3 has threshold 5, then player 2 will never revolt, since he does not know player 4’s threshold and it is then possible for him that player 4 has threshold 5 as well. Player 1’s uncertainty about player 3, together with player 1’s knowledge of player 2’s uncertainty about player 4, forces her not to revolt, although she has threshold 3 and two neighbors with threshold 3 as well. Similar reasoning applies to all the other players, hence in the square no one revolts. Consider now the kite network. Player 4 does not know player 1’s and player 2’s thresholds, hence he does not revolt. However, player 1 knows that players 2 and 3 have threshold 3, that they know that they do, and that they know that player 1 knows that they do. This is enough to trigger action \(r\) for the three of them, and indeed if players 1, 2 and 3 all revolt in all states in \(\{3331, 3332, 3333, 3334, 3335\}\), this is an equilibrium, since in all of those states at least three people, each with threshold 3, revolt.
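The square/kite contrast can be checked by brute force. The Python sketch below is our own encoding of Chwe’s model: a state is a profile of thresholds, an agent’s information cell collects the states that agree on the thresholds of everyone in her neighborhood \(B(i)\), and the largest equilibrium is computed by starting from ‘everyone revolts at every state’ and repeatedly deleting revolt wherever some state an agent cannot rule out would leave her short of her threshold. The link sets encode the figure 3.6 networks on the assumption that all depicted links are symmetric, with the kite read as the triangle plus a tail (players are indexed 0 to 3).

```python
from itertools import product

def maximal_revolt_equilibrium(links, n):
    """Largest equilibrium of Chwe's revolt game (our encoding).

    links: directed pairs (i, j) meaning i talks to j, so j knows i's threshold.
    States are all threshold profiles in {1, ..., n+1}^n.
    """
    B = {i: {i} | {j for (j, k) in links if k == i} for i in range(n)}
    states = list(product(range(1, n + 2), repeat=n))
    # information cells: states agreeing on the thresholds of B(i)
    cells = {}
    for i in range(n):
        known = sorted(B[i])
        groups = {}
        for w in states:
            groups.setdefault(tuple(w[j] for j in known), []).append(w)
        for w in states:
            cells[(i, w)] = groups[tuple(w[j] for j in known)]
    revolt = {(i, w): True for i in range(n) for w in states}
    while True:
        # keep revolting only if, at every state i cannot rule out, enough
        # others revolt for i's threshold to be met (counting i herself)
        new = {(i, w): revolt[(i, w)] and
                       all(sum(revolt[(j, v)] for j in range(n) if j != i)
                           >= w[i] - 1 for v in cells[(i, w)])
               for i in range(n) for w in states}
        if new == revolt:
            return revolt
        revolt = new

square = {(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2), (3, 0), (0, 3)}  # 4-cycle
kite = {(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1), (2, 3), (3, 2)}    # triangle + tail
```

At the state where all four thresholds are 3, the computation confirms the text: in the square nobody revolts, while in the kite players 0, 1 and 2 revolt and player 3 does not.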

The difference between the square and the kite networks is that, although in the square enough agents are willing to revolt for a revolt to actually take place, and they all individually know this, no agent knows that the others know it. In the kite, on the other hand, the agents in the triangle not only know that there are three agents with threshold 3, but they also know that they all know it, know that they all know that they all know it, and so on. There is common knowledge of this fact among them. It is interesting to notice that in Chwe’s model, common knowledge obtains without there being a publicly known fact (cf. section 2.2). The proposition “players 1, 2 and 3 all have threshold 3” (semantically: the event \(\{3331, 3332, 3333, 3334, 3335\}\)) is known by players 1, 2 and 3 because of the network structure, and becomes common knowledge because the network structure is known by the players. To be sure, the network structure is not just simply known, but is actually commonly known by the players. Player 1, for example, not only knows that players 2 and 3 communicate with each other. She also knows that players 2 and 3 know that she knows that they communicate with each other, and so on.

In complete networks (networks in which all players communicate with everyone else, as within the triangle in the kite network) the information partitions of the players coincide, and they are the finest partitions of the set of possible worlds. Hence, if the players have sufficiently low thresholds, this fact is commonly known and there is an equilibrium in which all players revolt.

Definition 3.13
We say that \(\rightarrow\) is a sufficient network if there is an equilibrium such that all players choose to revolt.

For a game in which all players have sufficiently low thresholds, the complete network is clearly sufficient. Is the complete network necessary to obtain an equilibrium in which all players revolt? It turns out that it is not. A crucial role is played by structures of the same kind as the ‘triangle’ group in the kite network, called cliques. In such structures, ‘local’ common knowledge (that is, common knowledge limited to the players who are part of the structure) arises naturally. In a minimal sufficient network (that is, a network in which there is sufficient but not superfluous communication for it to fully revolt) in which cliques cover the entire population, if one clique speaks to another then every member of that clique speaks to every member of the other clique. Moreover, for every two cliques such that one is talking to the other, there exists a ‘chain’ of cliques with a starting element. In other words, every pair of cliques in the relation are part of a chain (of length at least 2) with a starting element (a leading clique). Revolt propagates in the network moving from ‘leading adopters’ to ‘followers’, according to the social role hierarchy defined by the cliques and their relation. Consider the following example, in which cliques are represented by circles and numbers represent the thresholds of individual players:


Figure 3.7

Here the threshold 3 clique is the leading clique, igniting revolt in the threshold 5 follower clique. In turn, the clique consisting of a single threshold 3 element follows. Notice that although she does not need to know that the leading clique actually revolts in order to be willing to revolt, that information is needed to ensure that the threshold 5 clique does revolt, and hence that it is safe for her to join the revolt. While in each clique information about thresholds, and hence willingness to revolt, is common knowledge, in a chain of cliques information is ‘linear’; each clique knows about the clique of which it is a follower, but does not know about earlier cliques.

Analyzing Chwe’s models for collective action with respect to weak versus strong links (cf. both Chwe 1999 and Chwe 2000) provides further insights about the interaction between communication networks and common knowledge. A strong link, roughly speaking, joins close friends, whereas a weak link joins acquaintances. Strong links tend to increase more slowly than weak ones, since people have common close friends more often than they share acquaintances. In terms of spreading information and connecting society, then, weak links do a better job than strong links, since they traverse society more quickly and therefore have a larger reach. What role do strong and weak links play in collective action? In Chwe’s dynamic analysis, strong links fare better when thresholds are low, whereas weak links are better when players’ thresholds are higher. Intuitively, one sees that strong links tend to form small cliques right away (because of the symmetry intrinsic in them: my friends’ friends tend to be my friends as well); common knowledge arises quickly at the local level and, if thresholds are low, there is a better chance that a group tied by strong links becomes a leading clique initiating revolt. If, on the other hand, thresholds are high, local common knowledge in small cliques is fruitless, and weak links, reaching further distances more quickly, speed up communication and the building of the large cliques needed to spark collective action. Such considerations shed some light on the relation between social networks and common knowledge. While it is true that knowledge spreads faster in networks in which weak links predominate, higher-order knowledge (and, hence, common knowledge) tends to arise more slowly in networks of this kind. Networks with a larger number of strong links, on the other hand, facilitate the formation of common knowledge at the local level.

4. Is Common Knowledge Attainable?

Lewis formulated an account of common knowledge which generates the hierarchy of ‘\(i\) knows that \(j\) knows that … \(k\) knows that \(A\)’ propositions in order to ensure that in his account of convention, agents have correct beliefs about each other. But since human agents obviously cannot reason their way through such an infinite hierarchy, it is natural to wonder whether any group of people can have full common knowledge of any proposition. More broadly, the analyses of common knowledge reviewed in §3 would be of little worth to social scientists and philosophers if this common knowledge lies beyond the reach of human agents.

Fortunately for Lewis’ program, there are strong arguments that common knowledge is indeed attainable. Lewis (1969) argues that the common knowledge hierarchy should be viewed as a chain of implications, and not as steps in anyone’s actual reasoning. He gives informal arguments that the common knowledge hierarchy is generated from a finite set of axioms. We saw in §2 that it is possible to formulate Lewis’ axioms precisely and to derive the common knowledge hierarchy from these axioms and a public event functioning as a basis for common knowledge. Again, the basic idea behind Lewis’ argument is that for a set of agents, if a proposition \(A\) is publicly known among them and each agent knows that everyone can draw the same conclusion \(p\) from \(A\) that she can, then \(p\) is common knowledge. These conditions are obviously context dependent, just as an individual’s knowing or not knowing a proposition is context dependent. Yet there are many cases where it is natural to assume that a public event generates common knowledge, because it is properly broadcast, agents in the group are in ideal conditions to perceive it, the inference from the public event to the object of common knowledge is immediate, etc. However, common knowledge could fail if some of the people failed to perceive the public event, or if some of them believed that some of the others could not understand the announcement, or hear it, or could not draw the necessary inferences, and so on.

In fact, skeptical doubt about the attainability of common knowledge is certainly possible. A strong skeptical argument has recently been put forth by Lederman (2018b). Lederman builds an argument meant to undermine the possibility of deriving the common knowledge hierarchy, as done in §2, on the basis of a public event or, as Lederman calls it, public information. The principle that Lederman targets is what he calls ideal common knowledge (or belief), that is: If \(p\) is public information in a group \(G\) then \(p\) is common knowledge in \(G\), provided the agents in \(G\) are ideal reasoners. The argument rests on the privacy and interpersonal incomparability of mental states among agents, and although it is offered in terms of perceptual knowledge, its scope goes beyond perception to question the possibility of common knowledge tout court.

Lederman (2018b) uses the following scenario: Two contestants, Alice and Bob, observe the height of the mast of a toy sailboat (100 cm) that is subsequently replaced with a randomly selected sailboat whose mast may be taller or shorter than 100 cm. As a matter of fact, the mast of the selected boat is 300 cm tall. It is therefore public information that the mast is taller than 100 cm. The ideal common knowledge principle above, along with assumptions about Alice and Bob’s visual systems and their publicity, would entail that Alice and Bob have common knowledge that the mast is taller than 100 cm, and yet Lederman’s argument shows that they do not. The main idea is that there is some degree of approximation in how humans perceive, among other things, heights. Thus, for Alice it is epistemically compatible with the mast looking 300 cm tall to her that the mast looks somewhat shorter than 300 cm to Bob, say 299 cm. Also, Alice knows that if the mast looks 299 cm tall to Bob, then it is epistemically compatible for him that the mast looks 298 cm tall to Alice. Also, Bob knows that Alice knows that if the mast looks 298 cm tall to Alice, then it is epistemically compatible for her that the mast looks 297 cm tall to Bob. The reasoning can be repeated until it is epistemically compatible for Alice and Bob that the mast is not taller than 100 cm, against the intuition that they have common knowledge that the mast is over 100 cm tall!
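The regress can be counted out explicitly. In the toy Python rendering below (our own illustration, with the 1 cm step taken from the example), each alternation of ‘it is epistemically compatible that it looks shorter to the other’ shaves one margin off the compatible apparent height, so 200 steps take the agents from 300 cm down to the 100 cm mark.

```python
def regress_steps(actual_cm=300, margin_cm=1, target_cm=100):
    """Count the alternations needed before `target_cm` becomes compatible."""
    steps = 0
    height = actual_cm
    while height > target_cm:
        height -= margin_cm    # 300 -> 299 -> 298 -> ...
        steps += 1
    return steps
```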

Lederman (2018b) generalizes the argument to arbitrary cases and sources of public information, to conclude that people never achieve common knowledge or belief. In his view, the unattainability of common knowledge is not a concern in terms of a possible loss of explanatory power for social behavior. While common knowledge and the public information from which it proceeds have long been considered crucial for coordinating behavior, Lederman claims that in fact coordination requires neither (see the discussion of Lederman 2018a in the next section). Against Lederman, Immerman (2021) argues that the skeptical argument sketched here fails in a large set of circumstances, and hence fails to prove the unattainability of common knowledge. The key idea in Immerman’s attempt to refute Lederman’s argument is that there are many perceptual values that agents will not entertain to begin with, as if, in the original sailboat example, they knew that all masts between 100 and 300 cm tall had been stolen. According to Immerman, cases of such “knowledge of gaps” are not at all uncommon, and their availability prevents Lederman’s argument from going through.

Even if one were to reject Lederman’s skeptical argument (be it by agreeing with Immerman’s argument above, or with the argument by Thomason (2021) addressed in the next section, or otherwise), care must be taken in ascribing common knowledge to a group of human agents. Common knowledge is a phenomenon highly sensitive to the agents’ circumstances. The following section gives an example that shows that in order for \(A\) to be a common truism for a set of agents, they ordinarily must perceive an event which implies \(A\) simultaneously and publicly.

5. Coordination and Common \(p\)-Belief

In certain contexts, agents might not be able to achieve common knowledge. The skeptical argument put forth by Lederman (2018b), indeed, rests on and generalizes related arguments about the attainability of common knowledge that were made in theoretical computer science in relation to the coordinated attack problem (see Lederman 2018a, Halpern and Moses 1990, and Fagin et al. 1995, esp. chapters 6 and 11). In the context of distributed systems, using the formal systems of epistemic logic that, as mentioned above, are equivalent to the semantic approach privileged by economists, it can be proven formally that (i) common knowledge is necessary for coordination and that (ii) the attainability of common knowledge depends on assumptions made about the system. In particular, asynchronous systems do not allow for common knowledge of a communicated message to arise, making coordination impossible. Might the agents achieve something “close” to common knowledge? There are various weakenings of the notion of common knowledge that can be of use: \(\varepsilon\)-common knowledge (agents will achieve common knowledge within time \(\varepsilon\), hence they will coordinate within time \(\varepsilon\)), eventual common knowledge (agents will achieve common knowledge and therefore coordinate eventually), probabilistic common knowledge (agents will achieve probability \(p\) common belief, and hence with probability \(p\) successfully coordinate), etc. Such weakenings of the notion of common knowledge might prove useful depending on the intended application.

Another weakening of common knowledge to consider is of course \(m\)th level mutual knowledge. For a high value of \(m\), \(\mathbf{K}^m_N(A)\) might seem a good approximation of \(\mathbf{K}^{*}_N(A)\). However, point (i) above maintains that no arbitrarily high value of \(m\) will help, for instance, with the practical task of achieving coordination, so that the full force of common knowledge is needed. We illustrate the point through the following example, due to Rubinstein (1989, 1992), showing that simply truncating the common knowledge hierarchy at any finite level can lead agents to behave as if they had no mutual knowledge at all.[32]

5.1 The E-mail Coordination Example

Lizzi and Joanna are faced with the coordination problem summarized in the following figure:

              Joanna
            \(A\)      \(B\)
Lizzi \(A\)   (2,2)    (0,−4)
      \(B\)   (−4,0)   (0,0)

Figure 5.1a \(\quad\omega_1,\mu(\omega_1)=0.51\)

              Joanna
            \(A\)      \(B\)
Lizzi \(A\)   (0,0)    (0,−4)
      \(B\)   (−4,0)   (2,2)

Figure 5.1b \(\quad\omega_2,\mu(\omega_2)=0.49\)

In Figure 5.1, the payoffs are dependent upon a pair of possible worlds. World \(\omega_1\) occurs with probability \(\mu(\omega_1) = .51\), while \(\omega_2\) occurs with probability \(\mu(\omega_2) = .49\). Hence, the agents coordinate with complete success only if both choose \(A\) \((B)\) when the state of the world is \(\omega_1\) \((\omega_2)\).

Suppose that Lizzi can observe the state of the world, but Joanna cannot. We can interpret this game as follows: Joanna and Lizzi would like to have dinner together prepared by Aldo, their favorite chef. Aldo alternates between \(A\) and \(B\), the two branches of Sorriso, their favorite restaurant. State \(\omega_i\) determines Aldo’s location that day. At state \(\omega_1\) \((\omega_2)\), Aldo is at \(A\) \((B)\). Lizzi, who is on Sorriso’s special mailing list, receives notice of \(\omega_i\). Lizzi’s and Joanna’s best outcome occurs when they meet where Aldo is working, so they can have their planned dinner. If they meet but miss Aldo, they are disappointed and do not have dinner after all. If either goes to \(A\) and finds herself alone, then she is again disappointed and does not have dinner. But what each really wants to avoid is going to \(B\) if the other goes to \(A\). If either of them arrives at \(B\) alone, she not only misses dinner but must pay the exorbitant parking fee of the hotel which houses \(B\), since the headwaiter of \(B\) refuses to validate the parking ticket of anyone who asks for a table for two and then sits alone. This is what Harsanyi (1967) terms a game of incomplete information, since the game’s payoffs depend upon states which not all the agents know.

\(A\) is a “play-it-safe” strategy for both Joanna and Lizzi.[33] By choosing \(A\) whatever the state of the world happens to be, the agents run the risk that they will fail to get the positive payoff of meeting where Aldo is, but each is also sure to avoid the really bad consequence of choosing \(B\) if the other chooses \(A\). And since only Lizzi knows the state of the world, neither can use information regarding the state of the world to improve their prospects for coordination. For Joanna has no such information, and since Lizzi knows this, she knows that Joanna has to choose accordingly, so Lizzi must choose her best response to the move she anticipates Joanna to make regardless of the state of the world Lizzi observes. Apparently Lizzi and Joanna cannot achieve expected payoffs greater than 1.02 each, their expected payoff if they choose \((A, A)\) at either state of the world.
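The figure of 1.02 is just the expected payoff of playing \((A, A)\) unconditionally, which is easy to verify:

```python
mu = {"w1": 0.51, "w2": 0.49}     # prior over the two states
payoff_AA = {"w1": 2, "w2": 0}    # (A, A) pays 2 only where Aldo is at branch A

# expected payoff of choosing (A, A) at either state: 0.51 * 2 + 0.49 * 0
expected = sum(mu[w] * payoff_AA[w] for w in mu)
```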

If the state \(\omega\) were common knowledge, then the conditional strategy profile \((A, A)\) if \(\omega = \omega_1\) and \((B, B)\) if \(\omega = \omega_2\) would be a strict Nash equilibrium at which each would achieve a payoff of 2. So the obvious remedy to their predicament would be for Lizzi to tell Joanna Aldo’s location in a face-to-face or telephone conversation and for them to agree to go where Aldo is, which would make the state \(\omega\) and their intentions to coordinate on the best outcome given \(\omega\) common knowledge between them. Suppose for some reason they cannot talk to each other, but they prearrange that Lizzi will send Joanna an e-mail message if, and only if, \(\omega_2\) occurs. Suppose further that Joanna’s and Lizzi’s e-mail systems are set up to send a reply message automatically to the sender of any message received and viewed, and that due to technical problems there is a small probability, \(\varepsilon \gt 0\), that any message can fail to arrive at its destination. Then if Lizzi sends Joanna a message, and receives an automatic confirmation, then Lizzi knows that Joanna knows that \(\omega_2\) has occurred. If Joanna receives an automatic confirmation of Lizzi’s automatic confirmation, then Joanna knows that Lizzi knows that Joanna knows that \(\omega_2\) occurred, and so on. That \(\omega_2\) has occurred would become common knowledge if each agent received infinitely many automatic confirmations, assuming that all the confirmations could be sent and received in a finite amount of time.[34] However, because of the probability \(\varepsilon\) of transmission failure at every stage of communication, the sequence of confirmations stops after finitely many stages with probability one. With probability one, therefore, the agents fail to achieve full common knowledge. But they do at least achieve something “close” to common knowledge. Does this imply that they have good prospects of settling upon \((B, B)\)?
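The confirmation protocol is straightforward to simulate. The Python sketch below is our own illustration (names and the stopping rule are ours): each transmission independently fails with probability \(\varepsilon\), and the exchange stops at the first loss. The two message counts never differ by more than one, and the run ends after finitely many messages with probability one, since the number of successful transmissions is geometrically distributed.

```python
import random

def email_exchange(eps, rng):
    """One run of the automatic-confirmation exchange after Lizzi observes w2.

    Returns (t_lizzi, t_joanna): how many messages each agent has received
    by the time the first message is lost.
    """
    received = [0, 0]                  # index 0: Lizzi, index 1: Joanna
    sender = 0                         # Lizzi sends the original message
    while rng.random() > eps:          # the message arrives with prob 1 - eps
        received[1 - sender] += 1
        sender = 1 - sender            # an automatic confirmation goes back
    return tuple(received)
```

For \(\varepsilon = 0.5\), for instance, the expected total number of received messages is \((1-\varepsilon)/\varepsilon = 1\); common knowledge would require both counts to be infinite, which occurs with probability zero.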

Rubinstein shows by induction that if the number of automatically exchanged confirmation messages is finite, then \(A\) is the only choice that maximizes expected utility for each agent, given what she knows about what they both know.

Rubinstein’s Proof

So even if agents have “almost” common knowledge, in the sense that the number of levels of knowledge in “Joanna knows that Lizzi knows that … that Joanna knows that \(\omega_2\) occurred” is very large, their behavior is quite different from their behavior given common knowledge that \(\omega_2\) has occurred. Indeed, as Rubinstein points out, given merely “almost” common knowledge, the agents choose as if no communication had occurred at all! Rubinstein also notes that this result violates our intuitions about what we would expect the agents to do in this case. (See Rubinstein 1992, p. 324.) If \(T_i = 17\), wouldn’t we expect agent \(i\) to choose \(B\)? Indeed, in many actual situations we might think it plausible that the agents would each expect the other to choose \(B\) even if \(T_1 = T_2 = 2\), which is all that is needed for Lizzi to know that Joanna has received her original message and for Joanna to know that Lizzi knows this! Binmore and Samuelson (2001) in fact show that if Joanna and Lizzi incur a cost when paying attention to the messages they exchange, or if sending a message is costly, then longer streams of messages are not paid attention to or do not occur, respectively.

Lederman (2018a) proposes a radical solution to the paradoxes. In the case of the coordinated attack, he argues that rational generals who commonly know that they are rational will attack if (and only if) they have common knowledge that they will attack; since common knowledge is not attainable by exchanging messages, they will not attack. However, admitting that the generals do not commonly believe that they are rational, a simple model can be built showing that such generals do attack without common knowledge that they will. Similarly, in the case of the e-mail game, he shows that if players can be of an irrational type (so that such a player chooses game \(B\) even if her expected payoff is lower than for choosing game \(A\)), and one player believes with sufficiently high probability that the other player is of the irrational type, then players can coordinate on game \(B\) after a finite number of messages have been exchanged. Thus, Lederman (2018a) argues that we should take common knowledge of rationality to be a simplifying assumption, useful to produce tractable mathematical models and yet generally false “in the wild,” where a commonsense notion of rationality does let generals and laymen easily coordinate after a small number of message exchanges. Thomason (2021) takes issue with Lederman’s use of the notion of commonsense rationality, and argues for the importance of considering instead the cognitive and deliberative processes that lead to the emergence of both individual and commonly held attitudes. Despite their disagreement, both Lederman (2018a, 2018b) and Thomason (2021) emphasize the importance of the relation between (commonly) held beliefs or knowledge and practical reasoning. An interesting application of practical issues pertaining to the attainability of common knowledge is offered in Halpern and Pass (2017), where a blockchain protocol (and consensus and hence coordination therein) is analyzed in terms of suitable weakenings of the notion of common knowledge.[35]

5.2 Common \(p\)-Belief

The example in Section 5.1 hints that mutual knowledge is not the only weakening of common knowledge that is relevant to coordination. Brandenburger and Dekel (1987) and Monderer and Samet (1989) explore another option, which is to weaken the properties of the \(\mathbf{K}^{*}_N\) operator. Monderer and Samet motivate this approach by noting that even if a mutual knowledge hierarchy stops at a certain level, agents might still have higher level mutual beliefs about the proposition in question. So they replace the knowledge operator \(\mathbf{K}_i\) with a belief operator \(\mathbf{B}^p_i\):

Definition 5.1
If \(\mu_i(\cdot)\) is agent \(i\)’s probability distribution over \(\Omega\), then

\[\mathbf{B}^p_i(A) = \{\omega \mid \mu_i (A \mid \mathcal{H}_i (\omega)) \ge p \}\]

\(\mathbf{B}^p_i(A)\) is to be read ‘\(i\) believes \(A\) (given \(i\)’s private information) with probability at least \(p\) at \(\omega\)’, or ‘\(i\) \(p\)-believes \(A\)’. The belief operator \(\mathbf{B}^p_i\) satisfies axioms K2, K3, and K4 of the knowledge operator. \(\mathbf{B}^p_i\) does not satisfy K1, but does satisfy the weaker property

\[\mu_i (A \mid \mathbf{B}^p_i(A)) \ge p\]

that is, if one believes \(A\) with probability at least \(p\), then the probability of \(A\), conditional on one’s so believing, is indeed at least \(p\).
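As a concrete illustration, the operator of Definition 5.1 can be computed directly on a small finite model. The four-state space, uniform prior, and information partition below are hypothetical choices made for this sketch, not part of the text:

```python
from fractions import Fraction

# Hypothetical finite model (illustrative assumptions, not from the text).
states = ["w1", "w2", "w3", "w4"]
prior = {w: Fraction(1, 4) for w in states}      # agent i's prior mu_i
partition = [{"w1", "w2"}, {"w3", "w4"}]         # i's information partition H_i

def p_belief(event, p):
    """B^p_i(event): the states whose partition cell gives `event`
    conditional probability at least p under i's prior."""
    out = set()
    for cell in partition:
        cell_prob = sum(prior[w] for w in cell)
        hit_prob = sum(prior[w] for w in cell if w in event)
        if hit_prob >= p * cell_prob:            # mu_i(event | cell) >= p
            out |= cell
    return out

A = {"w1", "w2", "w3"}
B = p_belief(A, Fraction(3, 5))
print(sorted(B))                                 # ['w1', 'w2']

# Check the weakened truth axiom: mu_i(A | B^p_i(A)) >= p.
cond = sum(prior[w] for w in A & B) / sum(prior[w] for w in B)
print(cond >= Fraction(3, 5))                    # True
```

Only the cell \(\{w_1, w_2\}\) gives \(A\) conditional probability at least \(3/5\) (the other cell gives it \(1/2\)), and conditional on \(\mathbf{B}^{3/5}_i(A)\) the probability of \(A\) is 1, so the weakened analogue of K1 holds here.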

One can define mutual and common \(p\)-beliefs recursively in a manner similar to the definition of mutual and common knowledge:

Definition 5.2
Let a set \(\Omega\) of possible worlds together with a set of agents \(N\) be given.

(1) The proposition that \(A\) is (first level or first order) mutual \(p\)-belief for the agents of \(N, \mathbf{B}^p_{N^1}(A),\) is the set defined by

\[\mathbf{B}^p_{N^1}(A) \equiv \bigcap_{i\in N} \mathbf{B}^p_i(A).\]

(2) The proposition that \(A\) is \(m\)th level (or \(m\)th order) mutual \(p\)-belief among the agents of \(N, \mathbf{B}^p_{N^m}(A),\) is defined recursively as the set

\[\mathbf{B}^p_{N^m}(A) \equiv \bigcap_{i\in N} \mathbf{B}^p_i (\mathbf{B}^p_{N^{m-1}}(A))\]

(3) The proposition that \(A\) is common \(p\)-belief among the agents of \(N, \mathbf{B}^p_{N^*}(A),\) is defined as the set

\[\mathbf{B}^p_{N^*}(A) \equiv \bigcap_{m=1}^{\infty} \mathbf{B}^p_{N^m}(A).\]

If \(A\) is common (or \(m\)th level mutual) knowledge at world \(\omega\), then \(A\) is common \((m\)th level) \(p\)-belief at \(\omega\) for every value of \(p\). So mutual and common \(p\)-beliefs formally generalize the mutual and common knowledge concepts. However, note that \(\mathbf{B}^1_{N^*}(A)\) is not necessarily the same proposition as \(\mathbf{K}^{*}_N (A)\); that is, even if \(A\) is common 1-belief, \(A\) can fail to be common knowledge.

Common \(p\)-belief forms a hierarchy similar to a common knowledge hierarchy:

Proposition 5.3
\(\omega \in \mathbf{B}^{p}_{N^m}(A)\) iff

(∗) For all agents \(i_1, i_2, \ldots, i_m \in N,\) \(\omega \in \mathbf{B}^{p}_{i_1}\mathbf{B}^p_{i_2} \ldots \mathbf{B}^p_{i_m}(A)\)

Hence, \(\omega \in \mathbf{B}^{p}_{N^*}(A)\) iff (∗) is the case for each \(m \ge 1.\)

Proof. Similar to the Proof of Proposition 2.5.
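Definition 5.2 suggests a direct way to compute common \(p\)-belief on a finite model: iterate the first-level mutual \(p\)-belief operator and intersect the levels. The sketch below (with a hypothetical four-state space, uniform prior, and two agents’ partitions, none of which come from the text) does exactly that; since the state space is finite, the sequence of levels eventually repeats, so intersecting the levels seen before the first repeat yields \(\mathbf{B}^p_{N^*}(A)\).

```python
from fractions import Fraction

# Hypothetical finite model (illustrative assumptions, not from the text).
states = ["w1", "w2", "w3", "w4"]
prior = {w: Fraction(1, 4) for w in states}
partitions = {
    "i": [{"w1", "w2"}, {"w3", "w4"}],
    "j": [{"w1"}, {"w2", "w3"}, {"w4"}],
}

def p_belief(agent, event, p):
    """B^p_agent(event), as in Definition 5.1."""
    out = set()
    for cell in partitions[agent]:
        cell_prob = sum(prior[w] for w in cell)
        if sum(prior[w] for w in cell if w in event) >= p * cell_prob:
            out |= cell
    return out

def mutual(event, p):
    """First-level mutual p-belief: intersection over all agents."""
    out = set(states)
    for agent in partitions:
        out &= p_belief(agent, event, p)
    return out

def common(event, p):
    """Common p-belief: intersect all levels B^p_{N^m}(event).
    On a finite state space the sequence of levels eventually cycles,
    so intersecting every level seen before the first repeat suffices."""
    seen = []
    level = mutual(event, p)
    while level not in seen:
        seen.append(level)
        level = mutual(level, p)
    result = set(states)
    for lv in seen:
        result &= lv
    return result

A = {"w1", "w2", "w3"}
print(sorted(common(A, Fraction(1, 2))))   # ['w1', 'w2', 'w3']
print(common(A, Fraction(1, 1)))           # set()
```

On this toy model, \(A\) is common \(\tfrac{1}{2}\)-belief exactly at the states in \(A\), while common 1-belief of \(A\) holds nowhere: raising \(p\) shrinks the levels of the hierarchy until their intersection is empty.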

One can draw several morals from the e-mail game of Example 5.1. Rubinstein (1987) argues that his conclusion seems paradoxical for the same reason the backwards induction solution of Alan’s and Fiona’s perfect information game might seem paradoxical: mathematical induction does not appear to be part of our “everyday” reasoning. This game also shows that in order for \(A\) to be a common truism for a set of agents, they ordinarily must perceive an event which implies \(A\) simultaneously in each others’ presence. A third moral is that in some cases, it may make sense for the agents to employ some solution concept weaker than Nash or correlated equilibrium. In their analysis of the e-mail game, Monderer and Samet (1989) introduce the notions of ex ante and ex post \(\varepsilon\)-equilibrium. An ex ante equilibrium \(h\) is a system of strategy profiles such that no agent \(i\) expects to gain more than \(\varepsilon\)-utiles if \(i\) deviates from \(h\). An ex post equilibrium \(h'\) is a system of strategy profiles such that no agent \(i\) expects to gain more than \(\varepsilon\)-utiles by deviating from \(h'\) given \(i\)’s private information. When \(\varepsilon = 0\), these concepts coincide, and \(h\) is a Nash equilibrium. Monderer and Samet show that, while the agents in the e-mail game can never achieve common knowledge of the world \(\omega\), if they have common \(p\)-belief of \(\omega\) for sufficiently high \(p\), then there is an ex ante equilibrium at which they follow \((A,A)\) if \(\omega = \omega_1\) and \((B,B)\) if \(\omega = \omega_2\). This equilibrium turns out not to be ex post. However, if the situation is changed so that there are no replies, then Lizzi and Joanna could have at most first order mutual knowledge that \(\omega = \omega_2\). Monderer and Samet show that in this situation, given sufficiently high common \(p\)-belief that \(\omega = \omega_2\), there is an ex post equilibrium at which Joanna and Lizzi choose \((B,B)\) if \(\omega = \omega_2\)!
So another way one might view this third moral of the e-mail game is that agents’ prospects for coordination can sometimes improve dramatically if they rely on their common beliefs as well as their mutual knowledge. More recently, the notions of \(p\)-belief and common \(p\)-belief proved useful (Paternotte, 2011) for analyzing and formalizing Lewis’s account of common knowledge, while Paternotte (2017), establishing a link between “ordinary” common knowledge and common \(p\)-belief, uses the latter to show that only a limited number of exchanges in the e-mail game or coordinated attack paradox would be sufficient to determine coordination. The result, building on foundations provided by Leitgeb (2014), is used to show that our “ordinary” understanding of common knowledge is captured by probabilistic common belief, although at the price of decreased robustness relative to the number of individuals sharing common belief and their awareness.

Bibliography

Annotations

Lewis (1969) is the classic pioneering study of common knowledge and its potential applications to conventions and game theory. As Lewis acknowledges, parts of his work are foreshadowed in Hume (1740) and Schelling (1960).

Aumann (1976) gives the first mathematically rigorous formulation of common knowledge using set theory. Schiffer (1972) uses the formal vocabulary of epistemic logic (Hintikka 1962) to state his definition of common knowledge. Schiffer’s general approach is to augment a system of sentential logic with a set of knowledge operators corresponding to a set of agents, and then to define common knowledge as a hierarchy of propositions in the augmented system. Bacharach (1992), Bicchieri (1993) and Fagin, et al. (1995) adopt this approach, and develop logical theories of common knowledge which include soundness and completeness theorems. Fagin, et al. show that the syntactic and set-theoretic approaches to developing common knowledge are logically equivalent.

Aumann (1995) gives a recent defense of the classical view of backwards induction in games of perfect information. For criticisms of the classical view, see Binmore (1987), Reny (1992), Bicchieri (1989) and especially Bicchieri (1993). Brandenburger (1992) surveys the known results connecting mutual and common knowledge to solution concepts in game theory. For more in-depth survey articles on common knowledge and its applications to game theory, see Binmore and Brandenburger (1989), Geanakoplos (1994) and Dekel and Gul (1997). For her alternate account of common knowledge along with an account of conventions which opposes Lewis’ account, see Gilbert (1989).

Monderer and Samet (1989) remains one of the best resources for the study of common \(p\)-belief.

References

  • Alberucci, Luca and Jaeger, Gerhard, 2005, “About cut elimination for logics of common knowledge”, Annals of Pure and Applied Logic, 133(1–3): 73–99.
  • Aumann, Robert, 1974, “Subjectivity and Correlation in Randomized Strategies”, Journal of Mathematical Economics, 1: 67–96.
  • –––, 1976, “Agreeing to Disagree”, Annals of Statistics, 4: 1236–9.
  • –––, 1987, “Correlated Equilibrium as an Expression of Bayesian Rationality”, Econometrica, 55: 1–18.
  • –––, 1995, “Backward Induction and Common Knowledge of Rationality”, Games and Economic Behavior, 8: 6–19.
  • Bacharach, Michael, 1985, “Some Extensions of a Claim of Aumann in an Axiomatic Model of Knowledge”, Journal of Economic Theory, 37(1): 167–190.
  • –––, 1992, “Backward Induction and Beliefs About Oneself”, Synthese, 91: 247–284.
  • Barwise, Jon, 1988, “Three Views of Common Knowledge”, in Proceedings of the Second Conference on Theoretical Aspects of Reasoning About Knowledge, M.Y. Vardi (ed.), San Francisco: Morgan Kaufman, pp. 365–379.
  • –––, 1989, The Situation in Logic, Stanford: Center for the Study of Language and Information.
  • Bernheim, B. Douglas, 1984, “Rationalizable Strategic Behavior”, Econometrica, 52: 1007–1028.
  • Bicchieri, Cristina, 1989, “Self-Refuting Theories of Strategic Interaction: A Paradox of Common Knowledge”, Erkenntnis, 30: 69–85.
  • –––, 1993, Rationality and Coordination, Cambridge: Cambridge University Press.
  • –––, 2006, The Grammar of Society, Cambridge: Cambridge University Press.
  • Binmore, Ken, 1987, “Modelling Rational Players I”, Economics and Philosophy, 3: 179–241.
  • –––, 1992, Fun and Games, Lexington, MA: D. C. Heath.
  • –––, 2008, “Do Conventions Need to be Common Knowledge?”, Topoi, 27: 17–27.
  • Binmore, Ken and Brandenburger, Adam, 1988, “Common Knowledge and Game Theory”, ST/ICERD Discussion Paper 88/167, London School of Economics.
  • Binmore, Ken and Samuelson, Larry, 2001, “Coordinated Action in the Electronic Mail Game”, Games and Economic Behavior, 35(1): 6–30.
  • Bonanno, Giacomo and Battigalli, Pierpaolo, 1999, “Recent Results on Belief, Knowledge and the Epistemic Foundations of Game Theory”, Research in Economics, 53(2): 149–225.
  • Bonnay, D. and Egré, Paul, 2009, “Inexact Knowledge with Introspection”, Journal of Philosophical Logic, 38: 179–227.
  • Brandenburger, Adam, 1992, “Knowledge and Equilibrium in Games”, Journal of Economic Perspectives, 6: 83–101.
  • Brandenburger, Adam, and Dekel, Eddie, 1987, “Common Knowledge with Probability 1”, Journal of Mathematical Economics, 16: 237–245.
  • –––, 1988, “The Role of Common Knowledge Assumptions in Game Theory”, in The Economics of Missing Markets, Information and Games, Frank Hahn (ed.), Oxford: Clarendon Press, 46–61.
  • Bruni, Riccardo and Giacomo Sillari, 2018, “A Rational Way of Playing: Revision Theory for Strategic Interaction”, Journal of Philosophical Logic, 47(3): 419–448.
  • Carnap, Rudolf, 1947, Meaning and Necessity: A Study in Semantics and Modal Logic, Chicago: University of Chicago Press.
  • Cave, Jonathan A. K., 1983, “Learning to Agree”, Economics Letters, 12(2): 147–152.
  • Chwe, Michael, 1999, “Structure and Strategy in Collective Action”, American Journal of Sociology, 105: 128–56.
  • –––, 2000, “Communication and Coordination in Social Networks”, Review of Economic Studies, 67: 1–16.
  • –––, 2001, Rational Ritual, Princeton, NJ: Princeton University Press.
  • Cubitt, Robin and Sugden, Robert, 2003, “Common Knowledge, Salience and Convention: A Reconstruction of David Lewis’ Game Theory”, Economics and Philosophy, 19: 175–210.
  • Dégremont, Cédric, and Oliver Roy, 2012, “Agreement Theorems in Dynamic-Epistemic Logic”, Journal of Philosophical Logic, 41(4): 735–764.
  • Dekel, Eddie and Gul, Faruk, 1997, “Rationality and Knowledge in Game Theory”, in Advances in Economic Theory: Seventh World Congress of the Econometric Society, D. Kreps and K. Wallace (eds.), Cambridge: Cambridge University Press.
  • Dekel, Eddie, Lipman, Bart and Rustichini, Aldo, 1998, “Standard State-Space Models Preclude Unawareness”, Econometrica, 66: 159–173.
  • Devetag, Giovanna, Hosni, Hykel and Sillari, Giacomo, 2013, “Play 7: Mutual Versus Common Knowledge of Advice in a Weak-Link Game”, Synthese, 190(8): 1351–1381.
  • Fagin, Ronald and Halpern, Joseph Y., 1988, “Awareness and Limited Reasoning”, Artificial Intelligence, 34: 39–76.
  • Fagin, Ronald, Halpern, Joseph Y., Moses, Yoram and Vardi, Moshe Y., 1995, Reasoning About Knowledge, Cambridge, MA: MIT Press.
  • Friedell, Morris, 1967, “On the Structure of Shared Awareness”, Working papers of the Center for Research on Social Organizations (Paper #27), Ann Arbor: University of Michigan.
  • –––, 1969, “On the Structure of Shared Awareness”, Behavioral Science, 14(1): 28–39.
  • Geanakoplos, John, 1989, “Game Theory without Partitions, and Applications to Speculation and Consensus”, Cowles Foundation Discussion Paper, No. 914.
  • –––, 1994, “Common Knowledge”, in Handbook of Game Theory (Volume 2), Robert Aumann and Sergiu Hart (eds.), Amsterdam: Elsevier Science B.V., 1438–1496.
  • Geanakoplos, John and Heraklis M. Polemarchakis, 1982, “We Can’t Disagree Forever”, Journal of Economic Theory, 28(1): 192–200.
  • Gilbert, Margaret, 1989, On Social Facts, Princeton: Princeton University Press.
  • Halpern, Joseph, 2001, “Alternative Semantics for Unawareness”, Games and Economic Behavior, 37(2): 321–339.
  • Halpern, J. Y., & Moses, Y., 1990, “Knowledge and Common Knowledge in a Distributed Environment”, Journal of the Association for Computing Machinery, 37(3): 549–587.
  • Halpern, J. Y., & Pass, R., 2017, “A Knowledge-Based Analysis of the Blockchain Protocol”, arXiv preprint arXiv:1707.08751.
  • Harman, Gilbert, 1977, “Review of Linguistic Behavior by Jonathan Bennett”, Language, 53: 417–424.
  • Harsanyi, J., 1967, “Games with Incomplete Information Played by ‘Bayesian’ Players, I: The basic model”, Management Science, 14: 159–82.
  • –––, 1968a, “Games with Incomplete Information Played by ‘Bayesian’ Players, II: Bayesian Equilibrium Points”, Management Science, 14: 320–324.
  • –––, 1968b, “Games with Incomplete Information Played by ‘Bayesian’ Players, III: The basic probability distribution of the game”, Management Science, 14: 486–502.
  • Heifetz, Aviad, 1999, “Iterative and Fixed Point Common Belief”, Journal of Philosophical Logic, 28(1): 61–79.
  • Heifetz, Aviad, Meier, Martin and Schipper, Burkhard, 2006, “Interactive Unawareness”, Journal of Economic Theory, 130: 78–94.
  • Hintikka, Jaakko, 1962, Knowledge and Belief, Ithaca, NY: Cornell University Press.
  • Hume, David, 1740 [1888, 1976], A Treatise of Human Nature, L. A. Selby-Bigge (ed.), rev. 2nd. edition P. H. Nidditch (ed.), Oxford: Clarendon Press.
  • Immerman, D., 2021, “How Common Knowledge Is Possible”, Mind, first online 17 January 2021. doi:10.1093/mind/fzaa090
  • Jäger, Gerhard and Michel Marti, 2016, “Intuitionistic Common Knowledge or Belief”, Journal of Applied Logic, 18: 150–163.
  • Lederman, Harvey, 2018a, “Two Paradoxes of Common Knowledge: Coordinated Attack and Electronic Mail”, Noûs, 52: 921–945.
  • –––, 2018b, “Uncommon Knowledge”, Mind, 127: 1069–1105.
  • Leitgeb, Hannes, 2014, “The Stability Theory of Belief”, The Philosophical Review, 123(2): 131–171.
  • Lewis, C. I., 1943, “The Modes of Meaning”, Philosophy and Phenomenological Research, 4: 236–250.
  • Lewis, David, 1969, Convention: A Philosophical Study, Cambridge, MA: Harvard University Press.
  • –––, 1978, “Truth in Fiction”, American Philosophical Quarterly, 15: 37–46.
  • Littlewood, J. E., 1953, A Mathematician’s Miscellany, London: Methuen; reprinted as Littlewood’s Miscellany, B. Bollobas (ed.), Cambridge: Cambridge University Press, 1986.
  • McKelvey, Richard and Page, Talbot, 1986, “Common Knowledge, Consensus and Aggregate Information”, Econometrica, 54: 109–127.
  • Meyer, J.-J.Ch. and van der Hoek, Wiebe, 1995, Epistemic Logic for Computer Science and Artificial Intelligence (Cambridge Tracts in Theoretical Computer Science 41), Cambridge: Cambridge University Press.
  • Milgrom, Paul, 1981, “An Axiomatic Characterization of Common Knowledge”, Econometrica, 49: 219–222.
  • Milgrom, Paul, and Nancy Stokey, 1982, “Information, Trade and Common Knowledge”, Journal of Economic Theory, 26(1): 17–27.
  • Monderer, Dov and Samet, Dov, 1989, “Approximating Common Knowledge with Common Beliefs”, Games and Economic Behavior, 1: 170–190.
  • Nash, John, 1950, “Equilibrium Points in N-person Games”, Proceedings of the National Academy of Sciences of the United States, 36: 48–49.
  • –––, 1951, “Non-Cooperative Games”, Annals of Mathematics, 54: 286–295.
  • Nozick, Robert, 1963, The Normative Theory of Individual Choice, Ph.D. dissertation, Princeton University.
  • Paternotte, Cédric, 2011, “Being Realistic about Common Knowledge: a Lewisian Approach”, Synthese, 183(2): 249–276.
  • –––, 2017, “The Fragility of Common Knowledge”, Erkenntnis, 82(3): 451–472.
  • Pearce, David, 1984, “Rationalizable Strategic Behavior and the Problem of Perfection”, Econometrica, 52: 1029–1050.
  • Reny, Philip J., 1988, “Common Knowledge and Games with Perfect Information”, in PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, vol. 1988, no. 2, pp. 363–369. East Lansing: Philosophy of Science Association.
  • –––, 1992, “Rationality in Extensive Form Games”, Journal of Economic Perspectives, 6: 103–118.
  • Rubinstein, Ariel, 1987, “A Game with ‘Almost Common Knowledge’: An Example”, in Theoretical Economics, D. P. 87/165, London School of Economics.
  • Samet, Dov, 1990, “Ignoring Ignorance and Agreeing to Disagree”, Journal of Economic Theory, 52: 190–207.
  • Schelling, Thomas, 1960, The Strategy of Conflict, Cambridge, MA: Harvard University Press.
  • Schiffer, Stephen, 1972, Meaning, Oxford: Oxford University Press.
  • Sillari, Giacomo, 2005, “A Logical Framework for Convention”, Synthese, 147(2): 379–400.
  • –––, 2008, “Common Knowledge and Convention”, Topoi, 27(1): 29–39.
  • –––, 2013, “Rule-Following as Coordination: a Game-Theoretic Approach”, Synthese, 190(5): 871–890.
  • –––, 2019, “Logics of Belief”, Rivista di Filosofia, 110(2): 243–262.
  • Skyrms, Brian, 1984, Pragmatics and Empiricism, New Haven: Yale University Press.
  • –––, 1990, The Dynamics of Rational Deliberation, Cambridge, MA: Harvard University Press.
  • –––, 1991, “Inductive Deliberation, Admissible Acts, and Perfect Equilibrium”, in Foundations of Decision Theory, Michael Bacharach and Susan Hurley (eds.), Cambridge, MA: Blackwell, pp. 220–241.
  • –––, 1998, “The Shadow of the Future”, in Rational Commitment and Social Justice: Essays for Gregory Kavka, Jules Coleman and Christopher Morris (eds.), Cambridge: Cambridge University Press, pp. 12–22.
  • Sugden, Robert, 1986, The Economics of Rights, Cooperation and Welfare, New York: Basil Blackwell.
  • Thomason, R. H., 2021, “Common Knowledge, Common Attitudes and Social Reasoning”, Bulletin of the Section of Logic, 50(2): 229–247.
  • Vanderschraaf, Peter, 1995, “Endogenous Correlated Equilibria in Noncooperative Games”, Theory and Decision, 38: 61–84.
  • –––, 1998, “Knowledge, Equilibrium and Convention”, Erkenntnis, 49: 337–369.
  • –––, 2001, A Study in Inductive Deliberation, New York: Routledge.
  • von Neumann, John and Morgenstern, Oskar, 1944, Theory of Games and Economic Behavior, Princeton: Princeton University Press.

Copyright © 2022 by
Peter Vanderschraaf
Giacomo Sillari <gsillari@luiss.it>



The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

