openai-baselines
Here are 13 public repositories matching this topic...
Chinese translation of the official Stable Baselines documentation
- Updated Feb 21, 2021 - Python
Reproducing MuJoCo benchmarks in a modern, commercial game/physics engine (Unity + PhysX).
- Updated Dec 11, 2024 - C#
Docker image with OpenAI Gym, Baselines, MuJoCo and Roboschool, utilizing TensorFlow and JupyterLab.
- Updated May 1, 2019 - Dockerfile
Control traffic lights intelligently with Reinforcement Learning!
- Updated Jun 11, 2020 - Jupyter Notebook
OpenAI's PPO baseline applied to the classic game of Snake
- Updated Mar 1, 2020 - Python
Halide with Reinforcement Learning
- Updated Jan 9, 2024 - Python
- Updated May 20, 2020 - Python
Snake using Deep Reinforcement Learning
- Updated May 5, 2020 - JavaScript
A merge between OpenAI Baselines and Stable Baselines with increased focus on HER+DDPG and ease of use. Simply run the bash script to get started!
- Updated Feb 27, 2020 - Python
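For reference, a goal-conditioned HER + DDPG setup looks roughly like the sketch below. It uses the stable-baselines3 API rather than this repository's merged OpenAI Baselines / Stable Baselines codebase, and the FetchReach environment is only an illustrative choice.

```python
# Hedged sketch: HER + DDPG via stable-baselines3 (not this repo's merged API).
# FetchReach-v1 is an example goal-conditioned env and needs the robotics extras.
import gym
from stable_baselines3 import DDPG, HerReplayBuffer

env = gym.make("FetchReach-v1")
model = DDPG(
    "MultiInputPolicy",  # dict observations: observation / achieved_goal / desired_goal
    env,
    replay_buffer_class=HerReplayBuffer,
    replay_buffer_kwargs=dict(n_sampled_goal=4, goal_selection_strategy="future"),
    verbose=1,
)
model.learn(total_timesteps=10_000)
```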
Gym is a toolkit for developing and comparing reinforcement learning algorithms.
- Updated Dec 27, 2021 - Python
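The interaction loop that Gym standardizes is only a few lines. Below is a minimal sketch using the classic (pre-0.26) reset/step API that matches this listing's era; CartPole and the random policy are arbitrary examples.

```python
# Minimal Gym control loop, classic API (reset returns obs, step returns a 4-tuple).
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()          # random policy as a placeholder
    obs, reward, done, info = env.step(action)  # advance the environment one step
    total_reward += reward
env.close()
print(f"episode return: {total_reward}")
```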
Playing "Neyboy Challenge" with Reinforcement Learning
- Updated May 18, 2022 - Python
A facile guide for Deep Reinforcement Learning.
- Updated Mar 16, 2020
Clean and flexible implementation of PPO (built on top of stable-baselines3)
- Updated Jul 9, 2021 - Python
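For context, training PPO through stable-baselines3 itself takes only a few lines. The sketch below uses the upstream stable-baselines3 API with an illustrative environment and timestep budget, not this repository's wrapper.

```python
# Hedged sketch: PPO training with stable-baselines3; env and budget are examples.
from stable_baselines3 import PPO

model = PPO("MlpPolicy", "CartPole-v1", verbose=1)  # env id strings are wrapped automatically
model.learn(total_timesteps=10_000)
model.save("ppo_cartpole")
```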