
C++-based high-performance parallel environment execution engine (vectorized env) for general RL environments.


sail-sg/envpool


EnvPool is a C++-based batched environment pool built on pybind11 and a thread pool. It offers high performance (~1M raw FPS with Atari games, ~3M raw FPS with the Mujoco simulator on DGX-A100) and compatible APIs: it supports both gym and dm_env interfaces, both sync and async execution, and both single- and multi-player environments.

Check out our arXiv paper for more details!

Installation

PyPI

EnvPool is currently hosted on PyPI. It requires Python >= 3.7.

You can simply install EnvPool with the following command:

```bash
$ pip install envpool
```

After installation, open a Python console and type

```python
import envpool
print(envpool.__version__)
```

If no error occurs, you have successfully installed EnvPool.

From Source

Please refer to the guideline.

Documentation

The tutorials and API documentation are hosted on envpool.readthedocs.io.

The example scripts are under the examples/ folder; benchmark scripts are under the benchmark/ folder.

Benchmark Results

We perform our benchmarks with the ALE Atari environment PongNoFrameskip-v4 (with environment wrappers from OpenAI Baselines) and the Mujoco environment Ant-v3 on different hardware setups, including a TPUv3-8 virtual machine (VM) with 96 CPU cores and 2 NUMA nodes, and an NVIDIA DGX-A100 with 256 CPU cores and 8 NUMA nodes. Baselines include 1) a naive Python for-loop; 2) the most popular RL environment parallelization, Python subprocess, e.g., gym.vector_env; 3) Sample Factory, to our knowledge the fastest RL environment executor before EnvPool.

We report EnvPool performance in sync mode, async mode, and NUMA + async mode, compared with the baselines across different numbers of workers (i.e., numbers of CPU cores). As the results show, EnvPool achieves significant improvements over the baselines in all settings. On the high-end setup, EnvPool achieves 1 million frames per second with Atari and 3 million frames per second with Mujoco on 256 CPU cores, which is 14.9x / 19.6x the gym.vector_env baseline. On a typical PC setup with 12 CPU cores, EnvPool's throughput is 3.1x / 2.9x that of gym.vector_env.

| Atari Highest FPS | Laptop (12) | Workstation (32) | TPU-VM (96) | DGX-A100 (256) |
| --- | --- | --- | --- | --- |
| For-loop | 4,893 | 7,914 | 3,993 | 4,640 |
| Subprocess | 15,863 | 47,699 | 46,910 | 71,943 |
| Sample-Factory | 28,216 | 138,847 | 222,327 | 707,494 |
| EnvPool (sync) | 37,396 | 133,824 | 170,380 | 427,851 |
| EnvPool (async) | 49,439 | 200,428 | 359,559 | 891,286 |
| EnvPool (numa+async) | / | / | 373,169 | 1,069,922 |

| Mujoco Highest FPS | Laptop (12) | Workstation (32) | TPU-VM (96) | DGX-A100 (256) |
| --- | --- | --- | --- | --- |
| For-loop | 12,861 | 20,298 | 10,474 | 11,569 |
| Subprocess | 36,586 | 105,432 | 87,403 | 163,656 |
| Sample-Factory | 62,510 | 309,264 | 461,515 | 1,573,262 |
| EnvPool (sync) | 66,622 | 380,950 | 296,681 | 949,787 |
| EnvPool (async) | 105,126 | 582,446 | 887,540 | 2,363,864 |
| EnvPool (numa+async) | / | / | 896,830 | 3,134,287 |

Please refer to the benchmark page for more details.
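As a rough illustration of how raw-FPS figures like these are computed (total frames = num_envs × steps, divided by wall-clock time), here is a minimal timing sketch. `DummyVecEnv` is a hypothetical stand-in, not part of EnvPool; swap in `envpool.make(...)` to benchmark a real environment pool.

```python
import time
import numpy as np

class DummyVecEnv:
    """Hypothetical stand-in for a vectorized env; replace with envpool.make(...)."""
    def __init__(self, num_envs: int):
        self.num_envs = num_envs

    def step(self, act):
        # A real env would simulate here; we just return zero observations.
        return np.zeros(self.num_envs), None, None, None

num_envs, steps = 100, 1000
env = DummyVecEnv(num_envs)
act = np.zeros(num_envs, dtype=int)

start = time.perf_counter()
for _ in range(steps):
    env.step(act)
elapsed = time.perf_counter() - start

# "Raw FPS" counts every frame produced across all parallel envs.
fps = num_envs * steps / elapsed
print(f"{fps:,.0f} raw FPS")
```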

API Usage

The following content shows both synchronous and asynchronous API usage of EnvPool. You can also run the full script at examples/env_step.py.

Synchronous API

```python
import envpool
import numpy as np

# make gym env
env = envpool.make("Pong-v5", env_type="gym", num_envs=100)
# or use envpool.make_gym(...)
obs = env.reset()  # should be (100, 4, 84, 84)
act = np.zeros(100, dtype=int)
obs, rew, term, trunc, info = env.step(act)
```

Under the synchronous mode, envpool closely resembles openai-gym/dm-env: it has reset and step functions with the same meaning. However, there is one exception in envpool: batched interaction is the default. Therefore, when creating the envpool, there is a num_envs argument that denotes how many envs you would like to run in parallel.

```python
env = envpool.make("Pong-v5", env_type="gym", num_envs=100)
```

The first dimension of the action passed to the step function should equal num_envs.

```python
act = np.zeros(100, dtype=int)
```

You don't need to manually reset an environment when its done flag is true; all envs in envpool have auto-reset enabled by default.
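To make the auto-reset semantics concrete, here is a pure-Python toy analogue (not EnvPool's implementation): `ToyAutoResetVecEnv` is a hypothetical vectorized env where each sub-env that reports done is immediately reset by the pool itself, so the caller never calls reset() on individual envs.

```python
import numpy as np

class ToyAutoResetVecEnv:
    """Hypothetical toy env: each sub-env counts up to `horizon`, then auto-resets."""
    def __init__(self, num_envs: int, horizon: int = 3):
        self.num_envs = num_envs
        self.horizon = horizon
        self.t = np.zeros(num_envs, dtype=int)

    def reset(self):
        self.t[:] = 0
        return self.t.copy()

    def step(self, act):
        self.t += 1
        done = self.t >= self.horizon
        self.t[done] = 0  # auto-reset: done envs get a fresh state, no manual reset()
        return self.t.copy(), np.ones(self.num_envs), done, {}

env = ToyAutoResetVecEnv(num_envs=4, horizon=3)
obs = env.reset()
for _ in range(3):
    obs, rew, done, info = env.step(np.zeros(4, dtype=int))
print(obs, done)  # obs is all zeros: every env hit done and was auto-reset
```

The caller just keeps stepping; episode boundaries are visible through done, not through explicit resets.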

Asynchronous API

```python
import envpool
import numpy as np

# make asynchronous
num_envs = 64
batch_size = 16
env = envpool.make("Pong-v5", env_type="gym", num_envs=num_envs, batch_size=batch_size)
action_num = env.action_space.n
env.async_reset()  # send the initial reset signal to all envs
while True:
    obs, rew, term, trunc, info = env.recv()
    env_id = info["env_id"]
    action = np.random.randint(action_num, size=batch_size)
    env.send(action, env_id)
```

In the asynchronous mode, the step function is split into two parts: the send and recv functions. send takes two arguments: a batch of actions and the corresponding env_id that each action should be sent to. Unlike step, send does not wait for the envs to execute and return the next state; it returns immediately after the actions are fed to the envs (hence the name "async mode").

```python
env.send(action, env_id)
```

To get the "next states", we need to call the recv function. However, recv does not guarantee that you will get back the "next states" of the envs you just called send on. Instead, whichever envs finish execution first get recv'd first.

```python
state = env.recv()
```

Besides num_envs, there is one more argument: batch_size. While num_envs defines how many envs in total are managed by the envpool, batch_size specifies the number of envs involved in each interaction with envpool. For example, with 64 envs executing in the envpool, each send and recv interacts with a batch of 16 envs.
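The interplay between num_envs and batch_size can be sketched in pure Python (a simplified analogue, not EnvPool code): num_envs envs run "in the background", each recv() hands back a batch of batch_size results identified by env_id, and send() dispatches new actions only to those envs. The bookkeeping below tracks how many steps each env receives.

```python
import numpy as np

rng = np.random.default_rng(0)
num_envs, batch_size, total_iters = 8, 4, 100
steps_per_env = np.zeros(num_envs, dtype=int)
ready = list(range(num_envs))  # env ids whose results are currently available

for _ in range(total_iters):
    # recv(): a batch of batch_size finished envs, identified by env_id
    env_id = np.array(ready[:batch_size])
    steps_per_env[env_id] += 1
    # send(): only these envs get new actions; they re-finish in arbitrary order
    ready = ready[batch_size:] + list(rng.permutation(env_id))

print(steps_per_env.sum())  # 400 == total_iters * batch_size
```

Each iteration touches only batch_size of the num_envs envs, which is exactly why async mode keeps the CPU busy: slow envs never block a full-width batch.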

```python
envpool.make("Pong-v5", env_type="gym", num_envs=64, batch_size=16)
```

There are other configurable arguments for envpool.make; please check out the EnvPool Python interface introduction.

Contributing

EnvPool is still under development. More environments will be added, and we always welcome contributions that help make EnvPool better. If you would like to contribute, please check out our contribution guideline.

License

EnvPool is under the Apache-2.0 license.

Other third-party source code and data are under their corresponding licenses; we do not include their source code or data in this repo.

Citing EnvPool

If you find EnvPool useful, please cite it in your publications.

```bibtex
@inproceedings{weng2022envpool,
  author = {Weng, Jiayi and Lin, Min and Huang, Shengyi and Liu, Bo and Makoviichuk, Denys and Makoviychuk, Viktor and Liu, Zichen and Song, Yufan and Luo, Ting and Jiang, Yukun and Xu, Zhongwen and Yan, Shuicheng},
  booktitle = {Advances in Neural Information Processing Systems},
  editor = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. Oh},
  pages = {22409--22421},
  publisher = {Curran Associates, Inc.},
  title = {Env{P}ool: A Highly Parallel Reinforcement Learning Environment Execution Engine},
  url = {https://proceedings.neurips.cc/paper_files/paper/2022/file/8caaf08e49ddbad6694fae067442ee21-Paper-Datasets_and_Benchmarks.pdf},
  volume = {35},
  year = {2022}
}
```

Disclaimer

This is not an official Sea Limited or Garena Online Private Limited product.
