arXiv:2205.15863 (cs)
[Submitted on 24 May 2022]
Title: Justifying Social-Choice Mechanism Outcome for Improving Participant Satisfaction
Abstract: In many social-choice mechanisms the resulting choice is not the most preferred one for some of the participants, hence the need for methods that justify the choice made in a way that improves the acceptance of, and satisfaction with, that choice among those participants. One natural method for providing such explanations is to ask people to supply them, e.g., through crowdsourcing, and to choose the most convincing arguments among those received. In this paper we propose an alternative approach that automatically generates explanations based on desirable mechanism features found in the theoretical mechanism-design literature. We test the effectiveness of both methods through a series of extensive experiments conducted with over 600 participants in ranked voting, a classic social-choice mechanism. The analysis of the results reveals that explanations indeed affect both average satisfaction with, and acceptance of, the outcome in such settings. In particular, explanations have a positive effect on satisfaction and acceptance when the outcome (the winning candidate, in our case) is the participant's least desirable choice. A comparative analysis reveals that the automatically generated explanations yield levels of satisfaction and acceptance similar to those of the more costly crowdsourced explanations, hence eliminating the need to keep humans in the loop. Furthermore, the automatically generated explanations significantly reduce participants' belief that a different winner should have been elected, compared to crowdsourced explanations.
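To make the idea of property-based justifications concrete, the following minimal sketch (not the authors' implementation; the voting rule, property checks, and message wording are illustrative assumptions) elects a winner from ranked ballots with a Borda count and then builds a short explanation from mechanism-design style criteria, such as a majority of first-place votes or pairwise (Condorcet-style) dominance.

```python
# Minimal sketch, assuming a Borda-count election and two example
# mechanism-design properties; all names and messages are illustrative.
from collections import defaultdict

def borda_winner(ballots):
    """Return the Borda winner and scores for a list of ranked ballots.

    Each ballot is a list of candidates ordered from most to least preferred.
    """
    scores = defaultdict(int)
    num_candidates = len(ballots[0])
    for ballot in ballots:
        for rank, candidate in enumerate(ballot):
            scores[candidate] += num_candidates - 1 - rank
    return max(scores, key=scores.get), dict(scores)

def pairwise_wins(ballots, a, b):
    """Count ballots that rank candidate a above candidate b."""
    return sum(1 for ballot in ballots if ballot.index(a) < ballot.index(b))

def justify(ballots, winner):
    """Build a short explanation from mechanism-design style properties."""
    candidates = set(ballots[0])
    reasons = []
    # Condorcet-style argument: winner beats every rival head-to-head.
    if all(pairwise_wins(ballots, winner, c) > len(ballots) / 2
           for c in candidates - {winner}):
        reasons.append(f"{winner} beats every other candidate in head-to-head comparisons.")
    # Majority-style argument: winner is ranked first by more than half the voters.
    if sum(1 for ballot in ballots if ballot[0] == winner) > len(ballots) / 2:
        reasons.append(f"A majority of voters ranked {winner} first.")
    return reasons or [f"{winner} has the highest total Borda score."]

ballots = [["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]
winner, scores = borda_winner(ballots)
print(winner, scores)  # A {'A': 5, 'B': 3, 'C': 1}
for reason in justify(ballots, winner):
    print("-", reason)
```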
Subjects: Computer Science and Game Theory (cs.GT); Artificial Intelligence (cs.AI); Multiagent Systems (cs.MA)
ACM classes: I.2.11
Cite as: arXiv:2205.15863 [cs.GT] (or arXiv:2205.15863v1 [cs.GT] for this version)
DOI: https://doi.org/10.48550/arXiv.2205.15863 (arXiv-issued DOI via DataCite)
Journal reference: Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems (2022) 1246-1255
Submission history
From: Sharadhi Alape Suryanarayana
[v1] Tue, 24 May 2022 19:15:26 UTC (575 KB)