# RLDemo

Reinforcement Learning Demo: DQN with D3QN and DDPG with TD3 PyTorch re-implementation on Atari and MuJoCo


Two kinds of model-free reinforcement learning methods, value-based RL and policy-based RL, are implemented to solve two kinds of environments, with discrete and continuous action spaces respectively. Each kind of method is implemented with a basic, popular algorithm and a representative improvement of it, and each kind of environment is tested on four instances. Specifically, Deep Q-Network (DQN) with Dueling Double DQN (D3QN), and Deep Deterministic Policy Gradient (DDPG) with Twin Delayed DDPG (TD3), are re-implemented in PyTorch on OpenAI Gym's Atari (PongNoFrameskip-v4, BoxingNoFrameskip-v4, BreakoutNoFrameskip-v4, VideoPinball-ramNoFrameskip-v4) and MuJoCo (Hopper-v2, HalfCheetah-v2, Ant-v2, Humanoid-v2) environments, and loosely (not strictly) compared with OpenAI Baselines, Dopamine, Spinning Up and Tianshou.

*(Figure: overview)*
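As a quick orientation, the two "improvement" algorithms each add a small set of tricks to their base method. Below is a minimal PyTorch sketch of those tricks (not the repository's code; the network modules, tensor shapes, and default hyperparameters here are assumptions):

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""
    def __init__(self, feat_dim: int, n_actions: int):
        super().__init__()
        self.value = nn.Linear(feat_dim, 1)
        self.advantage = nn.Linear(feat_dim, n_actions)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        adv = self.advantage(feat)
        return self.value(feat) + adv - adv.mean(dim=1, keepdim=True)

def double_dqn_target(q_net, q_target, next_obs, reward, done, gamma=0.99):
    """Double DQN: the online net selects the next action, the target net evaluates it."""
    with torch.no_grad():
        next_action = q_net(next_obs).argmax(dim=1, keepdim=True)      # selection
        next_q = q_target(next_obs).gather(1, next_action).squeeze(1)  # evaluation
        return reward + gamma * (1.0 - done) * next_q

def td3_target(actor_target, critic1_target, critic2_target,
               next_obs, reward, done, gamma=0.99,
               noise_std=0.2, noise_clip=0.5, act_limit=1.0):
    """TD3: target policy smoothing + clipped double-Q (min over twin target critics)."""
    with torch.no_grad():
        act = actor_target(next_obs)
        noise = (torch.randn_like(act) * noise_std).clamp(-noise_clip, noise_clip)
        next_act = (act + noise).clamp(-act_limit, act_limit)
        next_q = torch.min(critic1_target(next_obs, next_act),
                           critic2_target(next_obs, next_act)).squeeze(-1)
        return reward + gamma * (1.0 - done) * next_q
```

D3QN is the dueling head combined with the double-DQN target; TD3 additionally delays actor and target-network updates, an update-schedule detail not shown here.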

## Demos

| | Pong | Boxing | Breakout | Pinball |
| --- | --- | --- | --- | --- |
| DQN | Pong_DQN | Boxing_DQN | Breakout_DQN | Pinball_DQN |
| D3QN | Pong_D3QN | Boxing_D3QN | Breakout_D3QN | Pinball_D3QN |

| | Hopper | HalfCheetah | Ant | Humanoid |
| --- | --- | --- | --- | --- |
| DDPG | Hopper_DDPG | HalfCheetah_DDPG | Ant_DDPG | Humanoid_DDPG |
| TD3 | Hopper_TD3 | HalfCheetah_TD3 | Ant_TD3 | Humanoid_TD3 |

## Dependencies

Main dependencies: Python 3.8, gym 0.26.2, MuJoCo 2.1.0.

```bash
# create conda environment
conda create -n rl python=3.8
conda activate rl

# install gym
pip install gym==0.26.2

# install gym[atari]
pip install gym[atari]
pip install gym[accept-rom-license]

# install gym[mujoco]
pip install gym[mujoco]
wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz
tar -zxvf mujoco210-linux-x86_64.tar.gz
mkdir ~/.mujoco
mv mujoco210 ~/.mujoco/mujoco210
rm mujoco210-linux-x86_64.tar.gz
pip install -U 'mujoco-py<2.2,>=2.1'
sudo apt install libosmesa6-dev libgl1-mesa-glx libglfw3
pip install Cython==3.0.0a10

# install other dependencies
pip install tqdm
pip install numpy
pip install torch==1.13.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
pip install tensorboard
pip install opencv-python
pip install einops
```
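After installation, a quick smoke test (a hypothetical snippet, not part of the repository) can confirm that both environment families load under the gym 0.26 API:

```python
import gym

# one Atari and one MuJoCo environment, as used by the repo
for env_id in ["PongNoFrameskip-v4", "Hopper-v2"]:
    env = gym.make(env_id)
    obs, info = env.reset(seed=0)  # gym 0.26 reset returns (obs, info)
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    env.close()
    print(env_id, "OK")
```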

## Get Started

```bash
python run.py --env_name [env_name] [--improve] [--test] --seed [seed] --device [device] [--debug] [--gui] [--video]
```
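For example (hypothetical invocations assembled from the flags above; the exact `--env_name` values accepted by `run.py` are assumed here to be the Gym IDs listed earlier):

```bash
# train DQN on Pong
python run.py --env_name PongNoFrameskip-v4 --seed 0 --device cuda

# train the improved variant instead (D3QN on Atari, TD3 on MuJoCo)
python run.py --env_name Hopper-v2 --improve --seed 0 --device cuda

# evaluate a trained checkpoint with rendering and video recording
python run.py --env_name PongNoFrameskip-v4 --test --seed 0 --device cuda --gui --video
```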

## Results

For full results, refer to outputs. Note that the hyperparameters of the algorithms vary across implementations, and the metrics are not strictly comparable (e.g. RLDemo reports the average testing score over 10 trials, while Tianshou reports the maximum average validation score over the last 1M/10M training timesteps). In addition, the other popular implementations have no standalone D3QN implementation, so Rainbow is used for the D3QN comparison. They also report no results for the RAM version of the Pinball environment, so its normal image version is used instead.
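For concreteness, RLDemo's metric is simply a mean over independent test episodes, along these lines (a sketch; `run_episode` is a hypothetical helper that plays one full episode and returns its total score):

```python
import numpy as np

def average_test_score(run_episode, n_trials: int = 10) -> float:
    """Average testing score over n_trials test episodes (the RLDemo-style metric)."""
    return float(np.mean([run_episode(seed=i) for i in range(n_trials)]))
```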

| DQN | Pong | Boxing | Breakout | Pinball |
| --- | --- | --- | --- | --- |
| OpenAI Baselines | 16.5 | - | 131.5 | - |
| Dopamine | 9.8 | ~77 | 92.2 | ~65000 |
| Tianshou | 20.2 | - | 133.5 | - |
| RLDemo | 21.0 | 95.4 | 9.6 | 6250.0 |

| D3QN/Rainbow | Pong | Boxing | Breakout | Pinball |
| --- | --- | --- | --- | --- |
| OpenAI Baselines | - | - | - | - |
| Dopamine | 19.1 | ~99 | 47.9 | ~465000 |
| Tianshou | 20.2 | - | 440.4 | - |
| RLDemo | 21.0 | 84.6 | 6.2 | 4376.6 |

| DDPG | Hopper | HalfCheetah | Ant | Humanoid |
| --- | --- | --- | --- | --- |
| Spinning Up | ~1800 | ~11000 | ~840 | - |
| Tianshou | 2197.0 | 11718.7 | 990.4 | 177.3 |
| RLDemo | 3289.2 | 8720.5 | 2685.3 | 2401.4 |

| TD3 | Hopper | HalfCheetah | Ant | Humanoid |
| --- | --- | --- | --- | --- |
| Spinning Up | ~2860 | ~9750 | ~3800 | - |
| Tianshou | 3472.2 | 10201.2 | 5116.4 | 5189.5 |
| RLDemo | 1205.3 | 12254.4 | 5058.1 | 5206.4 |
| | Pong | Boxing | Breakout | Pinball |
| --- | --- | --- | --- | --- |
| Loss | Pong_loss | Boxing_loss | Breakout_loss | Pinball_loss |
| Validation Score | Pong_score | Boxing_score | Breakout_score | Pinball_score |
| Validation Return | Pong_return | Boxing_return | Breakout_return | Pinball_return |
| Validation Iterations | Pong_iterations | Boxing_iterations | Breakout_iterations | Pinball_iterations |

| | Hopper | HalfCheetah | Ant | Humanoid |
| --- | --- | --- | --- | --- |
| Actor Loss | Hopper_actor_loss | HalfCheetah_actor_loss | Ant_actor_loss | Humanoid_actor_loss |
| Critic Loss | Hopper_critic_loss | HalfCheetah_critic_loss | Ant_critic_loss | Humanoid_critic_loss |
| Validation Score | Hopper_score | HalfCheetah_score | Ant_score | Humanoid_score |
| Validation Return | Hopper_return | HalfCheetah_return | Ant_return | Humanoid_return |
| Validation Iterations | Hopper_iterations | HalfCheetah_iterations | Ant_iterations | Humanoid_iterations |

## Note

All hyperparameters used are stored in configs. All trained checkpoints are stored in outputs/[env_name]/[algo_name]/weights, and the training procedure curves are stored in outputs/[env_name]/tb. The testing results are stored in outputs/[env_name]/[algo_name]/test.txt, and the evaluation demos are stored in outputs/[env_name]/[algo_name]/video.
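Put together, the layout described above looks roughly like this ([env_name] and [algo_name] are placeholders):

```text
configs/                      # hyperparameters
outputs/
└── [env_name]/
    ├── tb/                   # training procedure curves (TensorBoard)
    └── [algo_name]/
        ├── weights/          # trained checkpoints
        ├── test.txt          # testing results
        └── video/            # evaluation demos
```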
