The revelation principle shows that, while Gibbard's theorem proves it is impossible to design a system that will always be fully invulnerable to strategy (if we do not know how players will behave), it is possible to design a system that encourages honesty given a solution concept (if the corresponding equilibrium is unique).[3][4]
The idea behind the revelation principle is that, if we know which strategy the players in a game will use, we can simply ask all the players to submit their true payoffs or utility functions; then, we take those preferences and calculate each player's optimal strategy before executing it for them. This procedure makes an honest report of preferences the best possible strategy, because it guarantees the mechanism will play the optimal strategy for the player.
Consider the following example. There is a certain item that Alice values at a and Bob values at b. The government needs to decide who will receive that item and on what terms.
A social-choice function is a function that maps a set of individual preferences to an optimal social outcome. An example function is the utilitarian rule, which says "give the item to the person who values it the most". We denote a social choice function by Soc and its recommended outcome given a set of preferences by Soc(Prefs).
A mechanism is a rule that maps a set of individual actions to a social outcome. A mechanism Mech induces a game which we denote by Game(Mech).
A mechanism Mech is said to implement a social-choice function Soc if, for every combination of individual preferences, there is a Nash equilibrium in Game(Mech) in which the outcome is Soc(Prefs). Two example mechanisms are:
"Each individual says a number between 1 and 10. The item is given to the individual who says the lowest number; if both say the same number, then the item is given to Alice". This mechanism does NOT implement the utilitarian function, since for every individual who wants the item, it is a dominant strategy to say "1" regardless of his/her true value. This means that in equilibrium the item is always given to Alice, even if Bob values it more.
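The dominance claim above can be verified by brute force. The sketch below (the function names are illustrative, not from the article) enumerates every pair of reports between 1 and 10 and confirms that saying "1" is always a weakly best reply for a player who positively values the item:

```python
# Brute-force check of the "lowest number wins, ties go to Alice" mechanism.
# Names (outcome, payoff) are illustrative.

def outcome(alice_says, bob_says):
    """Winner under the mechanism: lowest number wins, Alice wins ties."""
    if bob_says < alice_says:
        return "Bob"
    return "Alice"  # Alice wins outright or by the tie-breaking rule

def payoff(player, value, alice_says, bob_says):
    """A player gets their value if they receive the item, else 0."""
    return value if outcome(alice_says, bob_says) == player else 0

# For every positive valuation and every opposing report, saying 1 is
# a best reply for each player.
for value in (1, 5, 10):
    for other in range(1, 11):
        best_alice = max(payoff("Alice", value, mine, other) for mine in range(1, 11))
        assert payoff("Alice", value, 1, other) == best_alice
        best_bob = max(payoff("Bob", value, other, mine) for mine in range(1, 11))
        assert payoff("Bob", value, other, 1) == best_bob
```

Since Alice wins ties, saying "1" guarantees her the item, so in equilibrium the item always goes to Alice regardless of Bob's value.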
First-price sealed-bid auction is a mechanism which implements the utilitarian function. For example, if Alice's value a is less than Bob's value b, then any action profile in which Bob bids more than Alice and both bids are in the range [a, b] is a Nash equilibrium in which the item goes to Bob. Additionally, if the valuations of Alice and Bob are random variables drawn independently from the same distribution, then there is a Bayesian Nash equilibrium in which the item goes to the bidder with the highest value.
A direct mechanism is a mechanism in which the set of actions available to each player is just the set of possible preferences of the player.
A direct mechanism Mech is said to be Bayesian-Nash incentive-compatible (BNIC) if there is a Bayesian Nash equilibrium of Game(Mech) in which all players reveal their true preferences. Some example direct mechanisms are:
"Each individual says how much he values the item. The item is given to the individual that said the highest value. In case of a tie, the item is given to Alice". This mechanism is not BNIC, since a player who wants the item is better-off by saying the highest possible value, regardless of his true value.
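The incentive to exaggerate can be checked exhaustively. In the sketch below (names are illustrative, integer values from 1 to 10 assumed), reporting the maximum possible value weakly dominates truth-telling for Bob and is sometimes strictly better:

```python
# The "highest reported value wins, no payment" direct mechanism,
# with ties broken in Alice's favour. Names are illustrative.

def winner(alice_report, bob_report):
    return "Alice" if alice_report >= bob_report else "Bob"

def bob_payoff(bob_value, bob_report, alice_report):
    """Bob gets his true value if his report wins, else 0 (no payment)."""
    return bob_value if winner(alice_report, bob_report) == "Bob" else 0

MAX = 10
for bob_value in range(1, MAX + 1):
    for alice_report in range(1, MAX + 1):
        truthful = bob_payoff(bob_value, bob_value, alice_report)
        # Reporting MAX is never worse than reporting the truth...
        assert bob_payoff(bob_value, MAX, alice_report) >= truthful
# ...and strictly better when Alice's report exceeds Bob's true value.
assert bob_payoff(3, MAX, 5) > bob_payoff(3, 3, 5)
```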
First-price sealed-bid auction is also not BNIC, since the winner is always better-off by bidding the lowest value that is slightly above the loser's bid.
However, if the distribution of the players' valuations is known, then there is a variant which is BNIC and implements the utilitarian function.
Moreover, it is known that the second-price auction is BNIC (it is even incentive-compatible in the stronger, dominant-strategy sense). Additionally, it implements the utilitarian function.
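Truthfulness of the second-price auction can likewise be checked by brute force. The sketch below (two bidders, integer values; names are illustrative) confirms that no bid ever beats bidding one's true value, whatever the opponent bids:

```python
# Second-price (Vickrey) auction from one bidder's point of view.
# Names are illustrative; ties are broken in this bidder's favour.

def second_price_payoff(value, own_bid, other_bid):
    """If we win, we pay the opponent's bid; otherwise payoff is 0."""
    if own_bid >= other_bid:
        return value - other_bid
    return 0

# Truthful bidding is a (weakly) dominant strategy: for every valuation
# and every opposing bid, no deviation gives a higher payoff.
for value in range(0, 11):
    for other in range(0, 11):
        truthful = second_price_payoff(value, value, other)
        assert all(second_price_payoff(value, dev, other) <= truthful
                   for dev in range(0, 11))
```

Intuitively, the bid only determines whether you win, not what you pay, so shading or exaggerating can only cause you to lose a profitable sale or win at a loss.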
Suppose we have an arbitrary mechanism Mech that implements Soc.
We construct a direct mechanism Mech' that is truthful and implements Soc.
Mech' simply simulates the equilibrium strategies of the players in Game(Mech). That is:
Mech' asks the players to report their valuations.
Based on the reported valuations, Mech' calculates, for each player, his equilibrium strategy in Mech.
Mech' returns the outcome returned by Mech.
Reporting the true valuations in Mech' is equivalent to playing the equilibrium strategies in Mech. Hence, reporting the true valuations is a Nash equilibrium in Mech', as desired. Moreover, the equilibrium payoffs are the same, as desired.
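The construction in the proof can be sketched in a few lines of code. All names here are illustrative, and we assume the equilibrium strategy of Game(Mech) is known as a function of a player's reported valuation:

```python
# Revelation-principle construction: wrap an indirect mechanism with the
# players' equilibrium strategies to obtain a direct, truthful mechanism.

def make_direct_mechanism(mech, equilibrium_strategy):
    """mech maps a profile of actions to an outcome;
    equilibrium_strategy maps a reported valuation to the action that a
    player with that valuation plays in equilibrium of Game(mech)."""
    def direct_mechanism(reported_valuations):
        actions = [equilibrium_strategy(v) for v in reported_valuations]
        return mech(actions)          # Mech' returns Mech's outcome
    return direct_mechanism

# Illustration: a second-price auction, whose equilibrium strategy is
# truthful bidding, so the wrapping strategy is the identity map.
def second_price(bids):
    winner = max(range(len(bids)), key=lambda i: bids[i])
    price = max(b for i, b in enumerate(bids) if i != winner)
    return winner, price

direct = make_direct_mechanism(second_price, lambda v: v)
assert direct([3, 7]) == (1, 3)   # bidder 1 wins and pays 3
```

In general the equilibrium strategy may depend on the player's identity and on the prior over valuations; the wrapper is the same, with one strategy function per player.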
In mechanism design, the revelation principle is important in finding solutions. The researcher need only look at the set of equilibria characterized by incentive compatibility. That is, if the mechanism designer wants to implement some outcome or property, they can restrict their search to mechanisms in which agents are willing to reveal their private information to the mechanism designer and which have that outcome or property. If no such direct and truthful mechanism exists, then by contraposition no mechanism can implement this outcome. By narrowing the space that needs to be searched, the problem of finding a mechanism becomes much easier.
The Bayesian-Nash revelation principle says that every social-choice function that can be implemented in Bayesian-Nash equilibrium (i.e., in a Bayesian game, a game of incomplete information) can be implemented by a Bayesian-Nash incentive-compatible (BNIC) mechanism. This broader solution concept was introduced by Dasgupta, Hammond and Maskin;[3] Holmström;[5] and Myerson.[4]
The revelation principle also works for correlated equilibria:[citation needed] for every arbitrary coordinating device (correlation device), there exists another, direct device for which the state space equals the action space of each player. The coordination is then done by directly informing each player of his recommended action.
Dasgupta, P.; Hammond, P.; Maskin, E. (1979). "The implementation of social choice rules: some results on incentive compatibility". Review of Economic Studies 46, 185–216.
Myerson, R. (1979). "Incentive-compatibility and the bargaining problem". Econometrica 47, 61–73.
Holmström, B. (1977). On incentives and control in organizations. Ph.D. thesis, Stanford University.