Self-play

From Wikipedia, the free encyclopedia
Reinforcement learning technique

Self-play is a technique for improving the performance of reinforcement learning agents. Intuitively, agents learn to improve their performance by playing "against themselves".

Definition and motivation


In multi-agent reinforcement learning experiments, researchers try to optimize the performance of a learning agent on a given task, in cooperation or competition with one or more agents. These agents learn by trial and error, and researchers may choose to have the learning algorithm play the role of two or more of the different agents. When successfully executed, this technique has a double advantage:

  1. It provides a straightforward way to determine the actions of the other agents, resulting in a meaningful challenge.
  2. It increases the amount of experience that can be used to improve the policy, by a factor of two or more, since the viewpoints of each of the different agents can be used for learning.
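Both points can be sketched with a toy example: a single tabular Q-learning policy trained by self-play on the game of Nim (one pile, take 1–3 stones, the player taking the last stone wins). This is a hypothetical minimal illustration, not the method of any particular system. One Q-table acts for both players, and every move, from either player's viewpoint, produces a learning update, which is the "double experience" advantage described above.

```python
import random

ACTIONS = (1, 2, 3)

def legal(pile):
    return [a for a in ACTIONS if a <= pile]

def train(episodes=20000, alpha=0.3, eps=0.2, start=10, seed=0):
    rng = random.Random(seed)
    Q = {}  # Q[(pile, action)] = value from the current mover's perspective
    for _ in range(episodes):
        pile = start
        while pile > 0:
            acts = legal(pile)
            # Epsilon-greedy action selection for the player to move.
            if rng.random() < eps:
                a = rng.choice(acts)
            else:
                a = max(acts, key=lambda x: Q.get((pile, x), 0.0))
            nxt = pile - a
            if nxt == 0:
                target = 1.0  # the mover took the last stone and wins
            else:
                # Zero-sum game: the opponent moves next, so the mover's value
                # is the negation of the opponent's best value (negamax update).
                target = -max(Q.get((nxt, b), 0.0) for b in legal(nxt))
            q = Q.get((pile, a), 0.0)
            Q[(pile, a)] = q + alpha * (target - q)
            # The SAME policy now plays the other side: both players' moves
            # feed the one shared Q-table, doubling the usable experience.
            pile = nxt
    return Q

def best_move(Q, pile):
    return max(legal(pile), key=lambda a: Q.get((pile, a), 0.0))

Q = train()
# Optimal Nim play leaves the opponent a multiple of 4 stones:
print(best_move(Q, 10), best_move(Q, 7))  # from 10 take 2, from 7 take 3
```

Nim is small enough that the self-trained policy recovers the known optimal strategy (always leave a multiple of 4), which makes it easy to check that the self-play opponent posed a "meaningful challenge".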

Czarnecki et al.[1] argue that most of the games people play for fun are "Games of Skill": games whose space of possible strategies is shaped like a spinning top. More precisely, the strategy space can be partitioned into sets L_1, L_2, ..., L_n such that for any i < j, any strategy π_j ∈ L_j beats any strategy π_i ∈ L_i. In population-based self-play, if the population is larger than max_i |L_i|, then the algorithm converges to the best possible strategy.
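The layered geometry can be illustrated with a toy construction. The layer sizes and the cyclic within-layer "beats" relation below are invented for illustration and are not Czarnecki et al.'s setup; the point is only that iterated best responses against the whole population climb the layers to the top, while no single strategy inside a layer dominates its own layer.

```python
# Layers shrink toward the top of the "spinning top"; a strategy is a
# pair (layer, k). Higher layers beat lower layers; within a layer the
# strategies beat each other cyclically, so none dominates its layer.
SIZES = [5, 3, 2, 1]

def beats(p, q):
    (lp, kp), (lq, kq) = p, q
    if lp != lq:
        return lp > lq          # higher layer always wins
    return (kp - kq) % SIZES[lp] == 1  # cyclic dominance within a layer

def best_response(population):
    """Return a new strategy that beats every member of the population."""
    for layer, n in enumerate(SIZES):
        for k in range(n):
            cand = (layer, k)
            if cand in population:
                continue
            if all(beats(cand, q) for q in population):
                return cand
    return None

# Population-based self-play: repeatedly add a best response to the
# current population. Once the population exceeds the size of a layer,
# no strategy in that layer can beat everyone, forcing a climb upward.
pop = [(0, 0)]
while True:
    br = best_response(pop)
    if br is None:
        break
    pop.append(br)
print(max(pop))  # the process reaches the top layer: (3, 0)
```

Because the retained population here is everything seen so far, it eventually exceeds max_i |L_i| = 5, and the iteration terminates at the unique top-layer strategy.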

Usage


Self-play is used by the AlphaZero program to improve its performance in the games of chess, shogi and Go.[2]

Self-play is also used to train the Cicero AI system to outperform humans at the game of Diplomacy. The technique is also used in training the DeepNash system to play the game Stratego.[3][4]

Connections to other disciplines


Self-play has been compared to the epistemological concept of tabula rasa, which describes the way humans acquire knowledge from a "blank slate".[5]

Further reading

  • DiGiovanni, Anthony; Zell, Ethan; et al. (2021). "Survey of Self-Play in Reinforcement Learning". arXiv:2107.02850 [cs.GT].

References

  1. ^ Czarnecki, Wojciech M.; Gidel, Gauthier; Tracey, Brendan; Tuyls, Karl; Omidshafiei, Shayegan; Balduzzi, David; Jaderberg, Max (2020). "Real World Games Look Like Spinning Tops". Advances in Neural Information Processing Systems. 33. Curran Associates, Inc.: 17443–17454. arXiv:2004.09468.
  2. ^ Silver, David; Hubert, Thomas; Schrittwieser, Julian; Antonoglou, Ioannis; Lai, Matthew; Guez, Arthur; Lanctot, Marc; Sifre, Laurent; Kumaran, Dharshan; Graepel, Thore; Lillicrap, Timothy; Simonyan, Karen; Hassabis, Demis (5 December 2017). "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm". arXiv:1712.01815 [cs.AI].
  3. ^ Snyder, Alison (2022-12-01). "Two new AI systems beat humans at complex games". Axios. Retrieved 2022-12-29.
  4. ^ Erich_Grunewald (22 December 2022), "Notes on Meta's Diplomacy-Playing AI", LessWrong.
  5. ^ Laterre, Alexandre (2018). "Ranked Reward: Enabling Self-Play Reinforcement Learning for Combinatorial Optimization". arXiv:1807.01672 [cs.AI].
Retrieved from "https://en.wikipedia.org/w/index.php?title=Self-play&oldid=1297357226"
