
♟️ Vectorized RL game environments in JAX

sotetsuk/pgx


A collection of GPU-accelerated parallel game simulators for reinforcement learning (RL)

Note

⭐ If you find this project helpful, we would be grateful for your support through a GitHub star to help us grow the community and motivate further development!

Why Pgx?

Brax, a JAX-native physics engine, provides extremely high-speed parallel simulation for RL in continuous state spaces. Then, what about RL in discrete state spaces like Chess, Shogi, and Go? Pgx provides a wide variety of JAX-native game simulators! Highlighted features include:

  • Super-fast parallel execution on accelerators
  • 🎲 Various game support including Backgammon, Chess, Shogi, and Go
  • 🖼️ Beautiful visualization in SVG format

Quick start

Read the Full Documentation for more details.

Training examples

Usage

Pgx is available on PyPI. Make sure your Python environment has jax and jaxlib installed, matching your hardware specification.

$ pip install pgx

The following code snippet shows a simple example of using Pgx. You can try it out in this Colab. Note that all step functions in Pgx environments are JAX-native, i.e., they are all JIT-able. Please refer to the documentation for more details.

import jax
import pgx

env = pgx.make("go_19x19")
init = jax.jit(jax.vmap(env.init))
step = jax.jit(jax.vmap(env.step))

batch_size = 1024
keys = jax.random.split(jax.random.PRNGKey(42), batch_size)
state = init(keys)  # vectorized states
while not (state.terminated | state.truncated).all():
    action = model(state.current_player, state.observation, state.legal_action_mask)
    # step(state, action, keys) for stochastic envs
    state = step(state, action)
# state.rewards with shape (1024, 2)

Pgx is a library that focuses on faster implementations rather than just the API itself. However, the API itself is also sufficiently general. For example, all environments in Pgx can be converted to the AEC API of PettingZoo, and you can run Pgx environments through the PettingZoo API. You can see the demonstration in this Colab.

📣 API v2 (v2.0.0)

Pgx has been updated from API v1 to v2 as of November 8, 2023 (release v2.0.0). As a result, the signature of Env.step has changed as follows:

  • v1: step(state: State, action: Array)
  • v2: step(state: State, action: Array, key: Optional[PRNGKey] = None)

Also, pgx.experimental.auto_reset has been changed to take key as the third argument.

Purpose of the update: In API v1, even in environments with stochastic state transitions, the state transitions were deterministic, determined by the _rng_key inside the state. This was intentional, with the aim of increasing reproducibility. However, when using planning algorithms in such environments, there is a risk that information about the underlying true randomness could "leak." To make it easier for users to conduct correct experiments, Env.step has been changed to explicitly take a key.

Impact of the update: Since the key is optional, it is still possible to call env.step(state, action) as in API v1 in deterministic environments like Go and chess, so there is no impact on these games. As of v2.0.0, only 2048, backgammon, and the MinAtar suite are affected by this change.

Supported games

[Visualizations of Backgammon, Chess, Shogi, and Go]

Use pgx.available_envs() -> Tuple[EnvId] to see the list of currently available games. Given an <EnvId>, you can create the environment via

>>> env = pgx.make(<EnvId>)
| Game (EnvId) | Version | Five-word description by ChatGPT |
|---|---|---|
| 2048 ("2048") | v2 | Merge tiles to create 2048. |
| Animal Shogi ("animal_shogi") | v2 | Animal-themed child-friendly shogi. |
| Backgammon ("backgammon") | v2 | Luck aids bearing off checkers. |
| Bridge bidding ("bridge_bidding") | v1 | Partners exchange information via bids. |
| Chess ("chess") | v2 | Checkmate opponent's king to win. |
| Connect Four ("connect_four") | v0 | Connect discs, win with four. |
| Gardner Chess ("gardner_chess") | v0 | 5x5 chess variant, excluding castling. |
| Go ("go_9x9", "go_19x19") | v1 | Strategically place stones, claim territory. |
| Hex ("hex") | v0 | Connect opposite sides, block opponent. |
| Kuhn Poker ("kuhn_poker") | v1 | Three-card betting and bluffing game. |
| Leduc hold'em ("leduc_holdem") | v0 | Two-suit, limited deck poker. |
| MinAtar/Asterix ("minatar-asterix") | v1 | Avoid enemies, collect treasure, survive. |
| MinAtar/Breakout ("minatar-breakout") | v1 | Paddle, ball, bricks, bounce, clear. |
| MinAtar/Freeway ("minatar-freeway") | v1 | Dodging cars, climbing up freeway. |
| MinAtar/Seaquest ("minatar-seaquest") | v1 | Underwater submarine rescue and combat. |
| MinAtar/SpaceInvaders ("minatar-space_invaders") | v1 | Alien shooter game, dodge bullets. |
| Othello ("othello") | v0 | Flip and conquer opponent's pieces. |
| Shogi ("shogi") | v1 | Japanese chess with captured pieces. |
| Sparrow Mahjong ("sparrow_mahjong") | v1 | A simplified, children-friendly Mahjong. |
| Tic-tac-toe ("tic_tac_toe") | v0 | Three in a row wins. |
Versioning policy

Each environment is versioned, and the version is incremented when there are changes that affect the performance of agents or that are not backward compatible with the API. If you want to pursue complete reproducibility, we recommend that you check the version of Pgx and each environment as follows:

>>> pgx.__version__
'1.0.0'
>>> env.version
'v0'

See also

Pgx is intended to complement these JAX-native environments with (classic) board game suites:

Combining Pgx with these JAX-native algorithms/implementations might be an interesting direction:

Limitation

Currently, some environments, including Go and chess, do not perform well on TPUs. Please use GPUs instead.

Citation

If you use Pgx in your work, please cite our paper:

@inproceedings{koyamada2023pgx,
  title={Pgx: Hardware-Accelerated Parallel Game Simulators for Reinforcement Learning},
  author={Koyamada, Sotetsu and Okano, Shinri and Nishimori, Soichiro and Murata, Yu and Habara, Keigo and Kita, Haruka and Ishii, Shin},
  booktitle={Advances in Neural Information Processing Systems},
  pages={45716--45743},
  volume={36},
  year={2023}
}

LICENSE

Apache-2.0

