Social procedures that have algorithmic aspects can often be improved by redesign. This holds for voting and other peaceful decision making procedures, for match-making, for auctioning, for fair division of estates, and for many procedures of distributive justice. The algorithmic aspects can be analyzed with formal methods. The term “social software” was coined by Rohit Parikh (2002) for the emerging interdisciplinary enterprise that is concerned with the design and analysis of algorithms that regulate social processes. Such analysis and (re-)design uses methods from logic, game theory and theoretical computer science.[1] The goals of research in formal approaches to social procedures are modeling social situations, developing theories of correctness, and (re-)designing social procedures, ideally leading to new social behavior.
Logic, game theory and computer science are not the only disciplines that have something to say about social mechanisms. Such mechanisms are also an object of study in voting theory, in auction theory, in social choice theory, in social epistemology, in mechanism design theory, and in algorithmic game theory. Multi-agent interaction at a more abstract level is studied in artificial intelligence and distributed computing, so all these disciplines have something to say about the formal analysis of social interaction.
Social software cannot be seen as a clearly defined research field on its own, but rather as an umbrella for certain types of research in computer science, logic, and game theory. Nevertheless, the social software perspective on social procedures and intelligent interaction, emphasizing algorithms and information, has already produced a wide variety of important insights. In this article, a number of examples will be discussed and pointers will be given to related discussions in philosophy.
The prototypical example of an algorithm in mathematics (see also the entry on computability and complexity) is Euclid’s recipe for finding the greatest common divisor (GCD) of two positive whole numbers \(A\) and \(B\). The GCD of two numbers is the greatest number that divides both numbers without residue.
If \(A\) is larger than \(B\), then replace \(A\) by \(A - B\); otherwise, if \(B\) is larger than \(A\), replace \(B\) by \(B - A\). Continue like this until \(A\) equals \(B\).
The final \(A = B\) yields the greatest common divisor of the numbers \(A\) and \(B\) that you started with. For instance, suppose you start the algorithm with \(A = 20\), \(B = 12\). Then in the first step, \(A\) gets replaced by \(20 - 12 = 8\), so \(8\) becomes the new \(A\). In the second step, \(B\) gets replaced by \(12 - 8 = 4\), so \(4\) becomes the new \(B\). In the third step, \(A\) gets replaced by \(8 - 4 = 4\), so \(4\) becomes the new \(A\) and the two numbers \(A\) and \(B\) have become equal. The algorithm yields \(4\) as the GCD of \(20\) and \(12\).
Euclid’s recipe is formal, and we can analyze it with formal means. The correctness of Euclid’s algorithm follows from the insight that if you have two numbers \(A\) and \(B\) with \(A\) larger than \(B\), and you replace \(A\) by \(A - B\), then the set of common divisors of the number pair does not change.
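The recipe and the invariant behind its correctness can be rendered directly in executable form; here is a minimal Python sketch (the function name is ours):

```python
def gcd_subtraction(a, b):
    """Euclid's subtraction recipe for the GCD of two positive integers.
    Invariant: replacing the larger number by the difference never changes
    the set of common divisors of the pair."""
    while a != b:
        if a > b:
            a = a - b
        else:
            b = b - a
    return a

print(gcd_subtraction(20, 12))  # -> 4, as in the worked example
```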
Algorithms come in many forms and flavors, for example, sequential and parallel ones. For interesting introductions to algorithms in computer science, see Harel & Feldman (2004) and Miller & Boxer (2012). In a similar manner to these algorithms, social procedures can be analyzed with the formal tools of logic and theoretical computer science.[2]
It appears that most amenable to a formal approach are those social procedures for which one wants to guarantee that, given certain starting conditions, the procedure preserves or creates specific desired properties. Examples are social procedures for fair division (Section 2), matching (Section 3), and communication (Section 4). Finally, the formal perspective is useful for situations requiring participants’ strategic reasoning (Section 5). A common element is that in all these situations, agents’ knowledge and lack of knowledge of other agents’ mental states play an important role. As a counterpart to the examples in the current article, Van Benthem (2018) provides an intriguing overview of the current roles of social perspectives in computer science itself, highlighting social agency and information.
Formal methods by themselves do not solve philosophical issues, as the following tale from Padma (2007) illustrates.
Two farmers, Ram and Shyam, were eating chapatis. Ram had 3 pieces of the flat, round bread and Shyam had 5. A traveller who looked hungry and tired rode up to the two men. Ram and Shyam decided to share their chapatis with him. The 3 men stacked the 8 chapatis (like pancakes) and cut the stack into 3 equal parts. They shared the pieces equally and ate until nothing was left. The traveller, who was a nobleman, was so grateful that he gave the two farmers 8 gold coins for his share of the food.
After the traveller left, Ram and Shyam wondered how they should share the 8 gold coins. Ram said that there were 8 coins and only 2 people, so each person should get an equal share of 4 coins. “But that’s not fair,” said Shyam, “since I had 5 chapatis to begin with.” Ram could see his point, but he didn’t really want to give 5 of the coins to Shyam. So he suggested they go see Maulvi, who was very wise. Shyam agreed.
Ram and Shyam told the whole story to Maulvi. After thinking for a long time, he said that the fairest way to share the coins was to give Shyam 7 coins and Ram only 1 coin. Both men were surprised. But when they asked Maulvi to explain his reasoning, they were satisfied that it was a fair division of the 8 coins.
Here are reasons the participants could have given to explain each mentioned division as being fair:
Ram: “If the traveller hadn’t arrived, we would have shared the chapatis equally. So it is only fair if we now share the eight coins equally as well.”
Shyam: “If the traveller hadn’t arrived, you would have bought one chapati from me at the going rate for chapatis. Now that the traveller was so generous, the going rate suddenly went up to one gold coin for a chapati. So your chapatis turned out to be worth three gold coins and mine five gold coins.”
Maulvi: “The traveller has eaten one third of eight chapatis. Ram had only three chapatis to start with, and therefore he has eaten \(1/3\) chapati from Ram and \(7/3\) chapatis from Shyam. So it is only fair if Ram gets one coin and Shyam gets seven.”
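Maulvi’s reasoning relies on the observation that each of the three men ate \(8/3\) chapatis; the arithmetic can be checked mechanically with exact fractions (variable names are ours):

```python
from fractions import Fraction

ram_had, shyam_had = 3, 5
each_ate = Fraction(ram_had + shyam_had, 3)  # every man ate 8/3 chapatis

ram_gave = ram_had - each_ate      # 1/3 chapati went from Ram to the traveller
shyam_gave = shyam_had - each_ate  # 7/3 chapatis went from Shyam

# Split the 8 coins in proportion to what each farmer gave up (1/3 : 7/3).
total_gave = ram_gave + shyam_gave
ram_coins = 8 * ram_gave / total_gave
shyam_coins = 8 * shyam_gave / total_gave
print(ram_coins, shyam_coins)  # -> 1 7
```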
A moral of this could be that there is no obviously correct notion of fairness, in this case, and in many cases. Formal analysis always starts from an intuition and can help to turn that intuition into a more precise definition. Then one can check if a given procedure fits the definition; however, if it fits, that does not show that the definition is right.
Social procedures are as old as the world. Divide and choose (also known as “I cut, you choose”) is a procedure for two-person fair division of some desirable or undesirable heterogeneous good. One person divides the good into what she believes to be equal shares, and the other person chooses. If the two participants have different value judgements on parts of the goods, it is possible that both participants feel that they received more than 50 percent of the goods. Indeed, let \(X\) be a set representing the good to be divided. A valuation function \(V\) for \(X\) is a function from \({{\cal P}}(X)\) to \([0,1]\) with the properties that \(V(\emptyset) = 0\), \(V(X) = 1\), and for all subsets \(A\), \(B\), it holds that \(A \subseteq B \subseteq X\) implies \(V(A) \leq V(B)\) (for explanation of notation, see the supplement on basic set theory). Suppose \(V_m\) and \(V_y\) are functions for my and your valuation of the contents of \(X\). If \(V_m\) and \(V_y\) are different, this means that you and I value some items in \(X\) differently. It follows, as was already observed by Steinhaus in 1948, that there exists a division that gives both parties more than their due part; “this fact disproves the common opinion that differences in estimation make fair division difficult” (Steinhaus 1948).
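Steinhaus’s observation is easy to illustrate with a small example: take two additive valuation functions over three parts of a good (the part names and numbers below are hypothetical) and exhibit a division that both parties value at more than \(1/2\):

```python
from fractions import Fraction as F

# Two additive valuation functions over three parts of a good.
# Each valuation assigns total mass 1 to the whole good, as required.
V_mine  = {"cherry": F(5, 10), "chocolate": F(1, 10), "plain": F(4, 10)}
V_yours = {"cherry": F(1, 10), "chocolate": F(6, 10), "plain": F(3, 10)}

def value(V, pieces):
    """Value of a set of parts under an additive valuation."""
    return sum(V[p] for p in pieces)

my_share, your_share = {"cherry", "plain"}, {"chocolate"}
print(value(V_mine, my_share))    # -> 9/10: more than my "due part" of 1/2
print(value(V_yours, your_share)) # -> 3/5: more than yours as well
```

Because the valuations differ, both parties can end up with strictly more than half by their own lights.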
It matters whether the valuations are known to the other party. Such knowledge can be used to advantage by the one who cuts. First consider the case that your valuation is unknown to me, and vice versa. Then if I cut, the best I can do is pick sets \(A, B \subseteq X\) with \(A \cap B = \emptyset\), \(A \cup B = X\), and \(V_m(A) = V_m(B)\). If you choose, you will use \(V_y\) to pick the maximum of \(\{ V_y(A), V_y(B) \}\). It follows immediately that cutting guarantees a fair share, but no more than that, while choosing holds a promise for a better deal. So if you ever get the choice between cutting and choosing in a situation where both parties only know their own valuation, then it is to your advantage to leave the cutting to the other person.
However, if the valuations are common knowledge (see the entry on common knowledge), the situation is reversed, for then it is more advantageous to take the role of cutter. As cutter, you can attempt to make a division into sets \(A\) and \(B\) with \(A\) slightly more valuable than \(B\) according to the valuation of the other party, while \(B\) is much more valuable than \(A\) according to your own valuation.
The example shows that issues of knowledge and ignorance are crucial for the analysis of fair division protocols. Epistemic logic (see the entry on epistemic logic) can shed much light on many subtle aspects of knowledge and ignorance in social interactions, and in particular in fair division problems; for an interesting cake-cutting experiment showing the importance of knowledge and ignorance, see Kyropoulou et al. 2019. Still, in traditional studies of fair division the role of knowledge is not taken into account, as is witnessed by the comprehensive study of “cake cutting algorithms” in Robertson & Webb (1998).
In the social choice literature (Brams 2005; Brams & Taylor 1996) it is common practice to use cake cutting as a metaphor for a division of a single heterogeneous good. Dividing a piece of land at inheritance would be an example. The cake has different toppings that cannot all be cut into pieces with the same composition: it may have candied cherries on top that someone likes but another person abhors, and so on. A cake division is simply fair if each of \(n\) players feels she received at least \(1/n\) of the cake, according to her individual valuation of its parts. A procedure may be simply fair without ruling out the possibility of hard feelings. A cake division is called envy-free if each person feels that nobody else received a larger piece. A sure sign of a division being envy-free is that nobody wishes to trade pieces with anyone else. It turns out to be very hard to design cake cutting procedures that are both fair and envy-free. The I cut, you choose procedure is fair, and it is envy-free simply because the rest of the cake is a single piece, so there is no possibility for envy. See the entry on economics [normative] and economic justice for philosophical discussions of envy-freeness.
R. Parikh (2002) analyzes the so-called Banach-Knaster algorithm for cake cutting when the cake has to be divided fairly among at least three persons, which goes like this:
I cut a piece intended for myself. All others consider it. If nobody objects, I get my piece. If someone raises an objection, she has the right to cut off a slice and put that back with the rest of the cake. She then asks if she can have the reduced piece. If nobody objects, she gets it, otherwise someone else takes the knife and reduces the piece a bit further, and so on, until someone gets the trimmed piece. Then on to the next round, with \(n-1\) players. When two players are left, they use the Divide and choose algorithm.
Parikh’s discussion shows how the methods of theoretical computer science can be used to argue that the procedure is fair. The key ingredient of the procedure is a loop operation:
Continue to trim the piece until there are no further objections about the size.
Suppose \(r\) stands for the action of trimming a piece of cake and putting it back with the main part of the cake, according to the Banach-Knaster algorithm, and suppose \(F(m,k)\) is the proposition that the main part of the cake is large enough for \(k\) people. Then surely \(F(m,n)\) holds at the beginning: the whole cake is large enough for the whole group to begin with. Moreover, Parikh (1983, 2002) uses his game logic to prove that \(F(m,k)\) is invariant under the action \(r\): if \(F(m,k)\) is true before \(r\), then it will still be true after \(r\) has occurred. Clearly, if one can show that \(F(m,k)\) continues to hold through the algorithm, for \(k\) running through \(n, \ldots, 1\), then this establishes that the division is fair.[3]
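To make the procedure concrete, here is a simplified Python simulation of the Banach-Knaster (“last diminisher”) scheme, with the cake modeled as the interval \([0,1)\) and each player’s taste as a piecewise-constant density. All names and the example valuations are our own illustration, and for brevity the final divide-and-choose round is replaced by one more ordinary trimming round, which preserves proportionality:

```python
# Cake = the interval [0, 1); a player's taste is a piecewise-constant
# density, given as a list of (segment_end, density) pairs with total mass 1.

def value(density, a, b):
    """The player's value of the piece [a, b)."""
    total, start = 0.0, 0.0
    for end, d in density:
        lo, hi = max(a, start), min(b, end)
        if hi > lo:
            total += d * (hi - lo)
        start = end
    return total

def cut_point(density, a, target, eps=1e-12):
    """Smallest b >= a such that value(density, a, b) equals target."""
    lo, hi = a, 1.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if value(density, a, mid) < target:
            lo = mid
        else:
            hi = mid
    return hi

def last_diminisher(densities):
    """Each round: the first active player cuts a piece worth 1/k of the
    remainder to her; anyone who thinks it is worth more trims it down;
    the last trimmer takes the piece and leaves the game."""
    players = list(range(len(densities)))
    pieces, a = {}, 0.0
    while len(players) > 1:
        k = len(players)
        owner = players[0]
        b = cut_point(densities[owner], a, value(densities[owner], a, 1.0) / k)
        for p in players[1:]:
            fair = value(densities[p], a, 1.0) / k
            if value(densities[p], a, b) > fair:   # objection: trim the piece
                b = cut_point(densities[p], a, fair)
                owner = p
        pieces[owner] = (a, b)
        players.remove(owner)
        a = b
    pieces[players[0]] = (a, 1.0)                  # last player takes the rest
    return pieces

# Three players: uniform taste, a left-lover, and a right-lover.
densities = [[(1.0, 1.0)], [(0.5, 1.6), (1.0, 0.4)], [(0.5, 0.4), (1.0, 1.6)]]
pieces = last_diminisher(densities)
# Each player values her own piece at >= 1/3 of the whole cake.
```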
A variation on Divide and choose was played by King Solomon in the famous Judgement of Solomon, in a dispute between two women who both claimed to be the mother of a child. The full story is in 1 Kings 3:16–28. Two women who lived in the same house both had an infant son. One of the women claimed that the other woman had stolen her child, after accidentally smothering her own son during sleep. The other woman denied this and reversed the charge. After hearing their stories, King Solomon called for a sword and declared that there was only one fair solution: cut the living child in two and give both women half of it. Upon hearing this, the true mother cried out that she was willing to give up the child if it could be spared, while the fake mother agreed with the judgement. This behaviour revealed to Solomon who was the real mother, and her child was given back to her.
This procedure is not repeatable. As the Bible story puts it:
And all Israel heard of the judgment which the king had judged; and they feared the king; for they saw that the wisdom of God was in him, to do justice.
Obviously, in a second similar dispute, both women would exclaim “Give it to her, but let it live!”
Solomon’s handling of the situation can be turned into a social procedure that is repeatable, as follows. Solomon does not call for a sword, but instead explains to the two women the following procedure. First he is going to ask the first woman if she is willing to give up the child. If the answer is “yes”, the dispute is resolved, with no further questions asked. Otherwise he will ask the other woman if she is willing to give up the child. Again, if the answer is “yes” the dispute is resolved. If they both refuse, then the child is his, and then he will allow one of the women to buy it back, at a price that is to be determined as follows. They will both write an amount of money on a sheet of paper, without their names. If the two bids are \(A\) and \(B\), then the price of the child is set at \(\frac{A + B}{2}\), and fate will determine which woman is going to get the child at that price, where the other woman has to pay a small fine. If the two women are rational, one of them will give up the child when being first asked (see Moore 1992 and Pauly 2005; for philosophical discussions of rationality, see the entries on practical reason and on game theory and ethics).
Both Moore (1992) and Pauly (2005) discuss the importance of reasoning about common knowledge and ignorance in the King Solomon cases. For example, King Solomon is ignorant of who is the real mother, but both women commonly know from the start who is the true mother, and that the true mother will therefore bid much higher than the other one. This makes the procedure safe. Again, epistemic logic and in particular common knowledge help to shed light on a tricky social procedure. For a more traditionally philosophical introduction to the fair division problem, including more extensive explanations of fairness, manipulability and envy-freeness, see the entry on economics [normative] and economic justice.
The next section shows that the perspective of social software can also shed light on social matching problems. These range from marriages to the assignment of resident doctors to hospitals, college admission procedures, and the assignment of students to housing.
Suppose equal sets of men and women are given, all of them seeking to marry someone of the opposite gender, and each man has listed his preferences for the women by means of a strict ordering, and similarly for each woman. A stable marriage match is a one-to-one mapping between the men and women with the property that if a man prefers another woman over his own wife, then that woman does not prefer him to her own husband, and if a woman prefers another man over her own husband, then that man does not prefer her to his own wife.
The mathematicians Gale and Shapley proved that stable matchings always exist, and gave an algorithm for finding such matchings, the so-called Gale-Shapley algorithm (Gale & Shapley 1962):
Initially, all men and all women are free (unengaged).
Next, in a number of rounds, each free man proposes to the most-preferred woman to whom he has not yet proposed and ticks her off from his list. If the woman is free, she accepts, and they become engaged. If the woman is not free, she compares the proponent to her current fiancé. If she likes him better, she dumps the fiancé, who becomes free again, and the proponent and his woman of choice become engaged.
This goes on until all men and women are engaged.
As an example, suppose that there are three men \(a, b, c\) and three women \(d, e, f\), and the lists of preferences are as follows (with the most preferred one first in the list): \(a: {\it edf}\), \(b: {\it fed}\), \(c: {\it dfe}\), \(d: {\it abc}\), \(e: {\it cab}\), \(f: {\it acb}\). So \(a: {\it edf}\) means that \(a\) prefers \(e\) to \(d\) and \(d\) to \(f\). It is assumed that preferences are transitive, so \(a\) also prefers \(e\) to \(f\).
An example of a stable match for this situation is represented as three pairs \((a,e)\), \((b,f)\), \((c,d)\). Note that woman \(d\) ends up with the man that is at the bottom of her list. But this match is still stable, for although \(d\) is willing to swap her husband for any of the other two men, these two candidates will not agree, for they both happen to be married to the woman who is at the top of their own list.
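The proposal rounds described above can be sketched in Python. This is a minimal men-proposing implementation run on the example (function and variable names are ours; woman \(e\)'s preference list is read as \(c, a, b\), though it plays no role in this particular outcome):

```python
def gale_shapley(men_prefs, women_prefs):
    """Men-proposing Gale-Shapley (deferred acceptance); returns {man: woman}."""
    # rank[w][m] = position of man m on woman w's list (lower is better)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}   # index of next woman to try
    fiance = {}                               # woman -> man she is engaged to
    free_men = list(men_prefs)
    while free_men:
        m = free_men.pop(0)
        w = men_prefs[m][next_choice[m]]      # highest woman not yet proposed to
        next_choice[m] += 1
        if w not in fiance:                   # free woman: she accepts
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]: # she prefers m: dump the fiance
            free_men.append(fiance[w])
            fiance[w] = m
        else:                                 # she rejects m
            free_men.append(m)
    return {m: w for w, m in fiance.items()}

men = {"a": "edf", "b": "fed", "c": "dfe"}
women = {"d": "abc", "e": "cab", "f": "acb"}
print(gale_shapley(men, women))  # -> {'a': 'e', 'b': 'f', 'c': 'd'}
```

Here every man gets his first choice in the very first round, which is why the match \((a,e)\), \((b,f)\), \((c,d)\) is stable regardless of the women’s preferences.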
To check that the Gale-Shapley algorithm always produces stable matchings, we can proceed as follows. Obviously, the situation where no-one is engaged is stable.
What does it mean for \(E\), an “engagement” mapping, to be stable on the set of women \(W\) and the set of men \(M\)? Let us use \(m >_w m'\) for “\(w\) prefers \(m\) over \(m'\)” (so bigger is better). A natural answer: the set of engaged pairs is stable if there is no engaged man \(m\) and woman \(w'\) such that \(m\) prefers \(w'\) to his own fiancée, while \(w'\) is free or prefers \(m\) to her own fiancé.
What does it mean for a man to be free? Simply that he does not occur in any of the engaged pairs.
Next, inspect what happens in a single step in the algorithm. The precondition for the step is that there is at least one free man \(m\) left. Such a free man \(m\) proposes to the highest woman \(w\) on his list to whom he has not yet proposed.
There are two cases. If \(w\) is free, \(w\) accepts the proposal, and they become engaged. Is the new set of engaged pairs stable? We only have to check for the new pair \((w,m)\).
Suppose that there is a free \(w'\) with \(w' >_m w\). This cannot be: \(w\) is the highest woman on \(m\)'s list to whom he has not yet proposed, so any \(w'\) that \(m\) prefers to \(w\) has already rejected him, and in this procedure an engaged woman never becomes free again.
Suppose there is \(m'\) with \(m' >_w m\). Then if \(m'\) is engaged, let us say to \(w'\), this must mean that not \(w >_{m'} w'\). For otherwise \(m'\) would have proposed to \(w\) instead of to \(w'\).
The new list of free men equals the old list, minus \(m\). This is correct, for \(m\) just got engaged.
Now the other case: suppose that \(w\) is already engaged. There are two subcases. In case \(w\) prefers her own current fiancé, nothing happens. The resulting list of engaged pairs is still stable. The list of free men remains the same, for \(m\) proposed and got rejected.
In case \(w\) prefers \(m\) to her own fiancé \(m'\), she swaps: \((m,w)\) replaces \((m',w)\) in the set of engaged pairs. Again, it is easy to see that the resulting list of engaged pairs is stable. Man \(m\) gets replaced by \(m'\) in the set of free men. This is also correct.
Note that the Gale-Shapley matching algorithm is hugely favourable to the party that is doing the proposing. The proposing party gets a chance to make proposals to any candidate, in order of preference. But at the start of the procedure the receiving party has to say “yes” to any proposal! The result of swapping the roles of the men and the women in the algorithm will also compute a stable match, but one that is more advantageous to the women.
The Gale-Shapley procedure runs in time quadratic in the number of men and women (see, e.g., Cechlárová et al. 2005). Pini et al. (2011) show how participants can easily manipulate the outcome of the procedure by misrepresenting their true preferences. Fortunately, Pini et al. also present an alternative procedure for which manipulation is hard, in that coming up with an individually profitable misrepresentation of one’s preferences is a computationally complex task.
The Gale-Shapley algorithm has many important applications, also outside of the area of marriage brokering; Gale and Shapley themselves discuss college admission procedures (1962). The next subsection presents another application.
Using the perspective of social software, Parikh and Pauly (2012) investigate a variant of the Gale-Shapley algorithm that is used in the Stanford Housing Draw to assign students to rooms. The situation is more complex than in the marriage setting, because students do not give a complete order on all houses, but only on 8 of them; moreover, they may choose to enter the Draw in groups. In the housing context, the students turn out to have an incentive to honestly submit their true preferences: the draw is non-manipulable for them. However, in theory they could still strategize about choosing the subset of 8 houses on which they submit their preferences.
The issue of knowledge is interesting in this case. Even though the algorithm can be found on the Stanford webpages, most students and administrators do not fully understand how it works. Therefore, the Stanford Housing Draw cannot be assumed to be common knowledge among the students. An interesting phenomenon seems to occur: even while admitting not to understand the algorithm, most students would say that they believe it to be fair (Parikh & Pauly 2012).
Communication protocols are important in distributed computing: computing with distributed systems, where a distributed system is a set of computers that are connected through a communication network. Communication protocols are also interesting from a philosophical perspective, especially in the context of discussions of the value of privacy (see the entries on privacy and computing and moral responsibility). The formal approach can help in answering philosophical questions such as “Does more security automatically lead to less privacy?”.
In the following example, the inspiration does not only flow from social problems to formal solutions, but also the other way, from successful social practices to formal procedures. Many algorithms for distributed computing are related to social protocols for communication in everyday life. An example is the use of a “talking stick” to regulate discussion and decision making in a group of peers, with the rules that the talking stick gets passed around and only the person who is holding the stick is allowed to talk (Nerburn 1999).
A computer communication protocol based on this social procedure is the token ring protocol. A token ring in distributed computing is a network where each computer is connected to exactly two other computers in such a way that each computer is reachable in the network, and where a single “token” circulates around the ring-shaped network. Communication can only be initiated by the current owner of the token.
Sometimes the token gets lost through computer or network failure. In such cases the token has to be regenerated, with a guarantee that only one computer has the token. This problem of regenerating the token in a token ring is called the leader election problem. Here is an algorithm for it:
Assume that communication takes place clockwise, and each computer can distinguish its clockwise neighbour from its counterclockwise neighbour. Assume that all computers have different identifiers (positive whole numbers) and each computer knows its identifier.
Each computer sends its identifier around the ring. When a computer \(c\) receives an identifier, \(c\) compares it to its own. If the identifier is greater than its own, \(c\) passes it on. If it is less than its own, \(c\) discards it. If it is equal to its own, \(c\) declares itself to be the leader.
It is not hard to see that this guarantees that the computer with the highest identifier \(i_{\text{max}}\) becomes the leader (see Lynch 1996). No assumptions need to be made about the number of computers in the ring, nor about any computer knowing anything about the size of the ring or the identifiers of the other computers. A next stage of the protocol could be for the leader to send around a request to register as non-leader and halt.
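The ring algorithm above (essentially the Le Lann-Chang-Roberts scheme) can be simulated in a few lines of Python; the message-queue model and the example identifiers below are our own illustration:

```python
from collections import deque

def elect_leader(ids):
    """Simulate ring-based leader election: every node sends its identifier
    clockwise; nodes forward larger identifiers, discard smaller ones, and
    a node that sees its own identifier come back declares itself leader."""
    n = len(ids)
    # channel[i] holds messages in transit from node i to its clockwise neighbour
    channel = [deque([ids[i]]) for i in range(n)]  # each node sends its own id
    leader = None
    while leader is None:
        for i in range(n):
            sender = (i - 1) % n                   # counterclockwise neighbour
            if channel[sender]:
                msg = channel[sender].popleft()
                if msg > ids[i]:
                    channel[i].append(msg)         # pass it on
                elif msg == ids[i]:
                    leader = ids[i]                # own id came back: leader
                # msg < ids[i]: discard
    return leader

print(elect_leader([3, 17, 5, 9, 2]))  # -> 17, the highest identifier
```

Only the highest identifier survives a full trip around the ring, so exactly one node declares itself leader, whatever the ring size.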
One further level of abstraction is from distributed computers or processes to interacting intelligent agents, or multi-agent systems. These agents can be computers, robots, humans, teams of humans, or some mix of these. It is commonly assumed that the agents have a degree of autonomy, that the agents have a restricted local view of the system as a whole, and that there is no designated controller of the whole system (see Wooldridge 2002 [2009]).
Many social procedures are designed to create common knowledge (Lewis 1969; van Ditmarsch et al. 2009; and the entry on common knowledge). The old-fashioned ritual that takes place when you withdraw a large amount of money from your bank account and have it paid out to you in cash by the cashier is an example.
How and whether common knowledge can be achieved depends on available communication facilities. Public announcement or a publicly observable ritual (the cashier ritual mentioned above) can create common knowledge. But, as Halpern and Moses (1984) proved, message exchange in a distributed environment, where there is no guarantee that messages get delivered, cannot. Halpern and Moses use the example of two generals who are planning a coordinated attack on a city. The generals are on two hills on opposite sides of the city, each with their own army, and they know that they can only succeed in capturing the city if their two armies attack at the same time. But the valley that separates the two hills is in enemy hands, and any messengers that are sent from one army base to the other run a severe risk of getting captured. The generals have agreed on a joint attack, but still have to settle the time. So the generals start sending messages, for example, “Let’s attack at 9:00 AM”. But they cannot be sure that the messengers succeed in delivering their message. And if they get through, there is no guarantee that the message of acknowledgement will get delivered. And so on.
Even if common knowledge is sometimes hard to achieve in practice, it serves as a necessary presumption in regulating society. Roman lawgivers found out long ago that if citizens within their jurisdiction could plead innocence because of being unaware of the law, no offender could ever get convicted. So they invented principles like Ignorantia legis neminem excusat, “ignorance of the law excuses no one”. Societies that abide by the rule of law have to be organized in such a way that citizens in principle are in a position to know the law. The laws have to be properly published and distributed, for example, by being printed in a government gazette to which every citizen has access.
In his book “Rational Ritual” (2001), Michael Suk-Young Chwe points out the importance of the size of groups for which common knowledge gets established. A brand name that is common knowledge in a large group is worth a lot of money. Chwe analyzes the example of advertisements broadcast during the American football Super Bowl. He compares the enormous cost of making something common knowledge by means of such advertisements to the obvious benefits. Part of the benefit lies in the fact that the advertisements create common knowledge. An important consideration when deciding to buy a smartphone of a particular brand, for example, is the knowledge that others are going to buy the same model too.
Of course, in many social situations you may want to prevent common knowledge from arising, for example, if you want to keep a secret from certain others. There are also more interesting cases where everybody knows some fact, for example that a particular country possesses nuclear weapons, but where it would lead to political problems to make this fact common knowledge by making a public announcement. For a number of such social situations where maintaining privacy and ignorance are crucial, see van Eijck and Verbrugge (2009). An interesting recent development is the study of dynamic-epistemic-logic-based epistemic planning, which enables us to synthesize communication protocols in order to create certain exact configurations of levels of higher-order knowledge within a group (Bolander and Andersen 2011; Löwe, Pacuit, and Witzel 2011).
The large field of game theory is extensively explained in other entries in this encyclopedia (see among others the entry on game theory). This field of research has been very active since the appearance of the seminal book by Von Neumann and Morgenstern (1944). Similarly, social choice theory and in particular voting theory (see the entries on social choice theory and voting methods) were already thriving fields of research long before the term social software came along.
It is useful to investigate how formal methods and an algorithmic perspective can help solve societal problems. For example, in the case of the famous prisoner’s dilemma (see the entry on prisoner’s dilemma) it is interesting to try to design policies that make cheating the other agent less profitable by penalizing it. Notice that this “social software engineering” takes place at the meta-level, on top of the level of the prisoners choosing their strategies (van Eijck 2015).
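As a toy illustration of such meta-level design, one can attach a fine to defection in a standard prisoner’s dilemma payoff matrix and recompute each player’s best responses; the payoff numbers below are the usual textbook values, not taken from the article:

```python
# "Social software engineering" on the prisoner's dilemma: impose a fine on
# defection and observe how the dominant strategy changes.
def payoff(my_move, other_move, fine=0):
    """Row player's payoff; C = cooperate, D = defect (textbook values)."""
    table = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    p = table[(my_move, other_move)]
    return p - fine if my_move == "D" else p

def best_response(other_move, fine):
    return max(["C", "D"], key=lambda mv: payoff(mv, other_move, fine))

# Without a fine, defection is the dominant strategy...
print(best_response("C", fine=0), best_response("D", fine=0))  # -> D D
# ...but a sufficiently high fine makes cooperation dominant.
print(best_response("C", fine=3), best_response("D", fine=3))  # -> C C
```

The design decision lives entirely at the meta-level: the fine changes the game the prisoners play, not the way they play it.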
A related recent trend in game theory that is relevant for social software is to step away from solution concepts such as the Nash equilibrium and instead to focus on the process of rational deliberation: the “theory of play” (see van Benthem, Pacuit and Roy 2011, as well as the entry on logics for analyzing games). This type of research delineates both the normative principles guiding the players’ strategic reasoning and the psychological phenomena explaining the differences between predicted and observed behavior in games (Camerer 2003; Ghosh and Verbrugge 2018; Pacuit 2015; Meijering et al. 2012, 2014; Top et al. 2018).
In Section 4.2 we briefly discussed the role of the study of knowledge and belief when analyzing social procedures. In this vein, the field of epistemic game theory focuses on agents’ beliefs about other agents’ strategies and those agents’ beliefs about other agents’ strategies, and so on, up to the idealized case of common knowledge among a group of agents that they are all rational (see the entry on epistemic foundations of game theory; Perea 2012; Brandenburger 2014).
It turns out that in voting theory in particular, it is useful to design a logic to explicitly model the knowledge that agents bring to bear when they are voting. It is especially interesting to model what happens in a group when agents vote strategically by misrepresenting their own preferences in order to manipulate the outcome (van Eijck 2015; van Ditmarsch et al. 2012).
In recent years, in the research area of multi-agent systems, formal approaches to social procedures have also been used to help design actual software, for example for cooperative problem solving in teams, coalition formation, knowledge fusion, auctions, and negotiations among software agents (Bulling et al. 2015; Chalkiadakis et al. 2011; Dunin-Kęplicz and Verbrugge 2010; Pauly 2002; Shoham and Leyton-Brown 2009; Vazirani et al. 2007). This literature is mostly normative in nature.
In contrast, another fascinating area of research, evolutionary game theory (see the entry on evolutionary game theory), investigates how features like altruism, social norms, moral behavior and cooperation could actually have evolved. This area combines both normative and descriptive work (Axelrod 1984; Bowles and Gintis 2011; Sigmund 2010). As a particular social software contribution to this area, Gärdenfors (2012) characterized how much cognition and communication are required for several kinds of cooperation, from simple flocking behavior, through reciprocal altruism (“I’ll scratch your back if you scratch mine”), up to fully fledged teamwork.
In conclusion, the formal perspective on social procedures and intelligent interaction, which emphasizes algorithms and information, has produced a wide variety of important insights. It has also led to interesting philosophical discussions. The main challenge for the future appears to be to unify this currently relatively scattered field, in which many contributors do not seem to be aware of relevant work in other subfields.
altruism: biological | Arrow’s theorem | belief merging and judgment aggregation | common knowledge | computability and complexity | computing: and moral responsibility | economics [normative] and economic justice | epistemology: social | game theory | game theory: and ethics | game theory: epistemic foundations of | game theory: evolutionary | justice: transitional | logic: action | logic: epistemic | logic: for analyzing games | logic: paraconsistent | practical reason | prisoner’s dilemma | privacy | redistribution | social choice theory | social institutions | social norms | technology, philosophy of | voting: methods
I would like to express my gratitude to Jan van Eijck for the enjoyable time of writing the first 2014 version of this article together. Thanks are also due to two anonymous referees for suggesting several improvements and additions for this updated version. Because of Jan’s retirement a few years ago, the 2019 update of the article has been made by me (Rineke Verbrugge) and I therefore bear sole responsibility for any remaining infelicities; please let me know if you spot any.
The Stanford Encyclopedia of Philosophy is copyright © 2024 by The Metaphysics Research Lab, Department of Philosophy, Stanford University.
Library of Congress Catalog Data: ISSN 1095-5054