d3rlpy: An offline deep reinforcement learning library


d3rlpy is an offline deep reinforcement learning library for practitioners and researchers.

import d3rlpy

dataset, env = d3rlpy.datasets.get_dataset("hopper-medium-v0")

# prepare algorithm
sac = d3rlpy.algos.SACConfig(compile_graph=True).create(device="cuda:0")

# train offline
sac.fit(dataset, n_steps=1000000)

# train online
sac.fit_online(env, n_steps=1000000)

# ready to control
actions = sac.predict(x)

Important

v2.x.x introduces breaking changes. If you want to stay on v1.x.x, explicitly pin a previous version (e.g. pip install d3rlpy==1.1.1).

Key features

⚡ Most Practical RL Library Ever

  • offline RL: d3rlpy supports state-of-the-art offline RL algorithms. Offline RL is extremely powerful when online interaction is not feasible during training (e.g. robotics, medical applications).
  • online RL: d3rlpy also supports conventional state-of-the-art online training algorithms without any compromise, which means that you can solve any kind of RL problem with d3rlpy alone.

🔰 User-friendly API

  • zero knowledge of DL libraries required: d3rlpy provides many state-of-the-art algorithms through intuitive APIs. You can become an RL engineer even without knowing how to use deep learning libraries (see the sketch below).
  • extensive documentation: d3rlpy is fully documented and accompanied by tutorials and reproduction scripts for the original papers.
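
Every algorithm follows the same three-step pattern: build a config, call create(), then call fit(). The minimal sketch below illustrates that pattern; d3rlpy.datasets.get_cartpole() and d3rlpy.algos.DQNConfig are assumptions not shown elsewhere on this page, so treat it as an illustration rather than a copy-paste recipe.

import d3rlpy

# a small CartPole demonstration dataset and its matching environment
# (get_cartpole() is assumed here; any dataset/replay buffer works)
dataset, env = d3rlpy.datasets.get_cartpole()

# 1. configure, 2. create, 3. fit -- the same pattern as SACConfig/CQLConfig below
dqn = d3rlpy.algos.DQNConfig().create(device="cpu")
dqn.fit(
    dataset,
    n_steps=10000,
    evaluators={"environment": d3rlpy.metrics.EnvironmentEvaluator(env)},
)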

🚀 Beyond State-of-the-art

  • distributional Q function: d3rlpy is the first library that supports distributional Q functions in all algorithms. The distributional Q function is known to be a very powerful method for achieving state-of-the-art performance (see the sketch below).
  • data-parallel distributed training: d3rlpy is the first library that supports data-parallel distributed offline RL training, which allows you to scale up offline RL with multiple GPUs or nodes. See the example.
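
As a rough illustration of the distributional Q function feature, the sketch below swaps the default Q function for a quantile-regression one through a q_func_factory option. The QRQFunctionFactory name and the q_func_factory parameter are assumptions about d3rlpy's model factories; check the documentation for the exact names in your version.

import d3rlpy

dataset, env = d3rlpy.datasets.get_dataset("hopper-medium-v0")

# replace the standard mean Q function with a quantile-regression
# (distributional) Q function; QRQFunctionFactory is assumed here
sac = d3rlpy.algos.SACConfig(
    q_func_factory=d3rlpy.models.QRQFunctionFactory(n_quantiles=32),
).create(device="cuda:0")

sac.fit(dataset, n_steps=100000)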

Installation

d3rlpy supports Linux, macOS and Windows.

Dependencies

Installing the d3rlpy package will install or upgrade the following packages to satisfy its requirements:

  • torch>=2.5.0
  • tqdm>=4.66.3
  • gym>=0.26.0
  • gymnasium==1.0.0
  • click
  • colorama
  • dataclasses-json
  • h5py
  • structlog
  • typing-extensions
  • scikit-learn

PyPI (recommended)


$ pip install d3rlpy

Anaconda


$ conda install conda-forge/noarch::d3rlpy

Docker


$ docker run -it --gpus all --name d3rlpy takuseno/d3rlpy:latest bash
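
Whichever installation route you choose, a quick import check confirms that the package is usable; the sketch below assumes the standard __version__ attribute is the place to read the installed version.

import d3rlpy

# print the installed version to confirm the installation succeeded
print(d3rlpy.__version__)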

Supported algorithms

d3rlpy implements the following algorithms, each with discrete-control and/or continuous-control support:

  • Behavior Cloning (supervised learning)
  • Neural Fitted Q Iteration (NFQ)
  • Deep Q-Network (DQN)
  • Double DQN
  • Deep Deterministic Policy Gradients (DDPG)
  • Twin Delayed Deep Deterministic Policy Gradients (TD3)
  • Soft Actor-Critic (SAC)
  • Batch Constrained Q-learning (BCQ)
  • Bootstrapping Error Accumulation Reduction (BEAR)
  • Conservative Q-Learning (CQL)
  • Advantage Weighted Actor-Critic (AWAC)
  • Critic Regularized Regression (CRR)
  • Policy in Latent Action Space (PLAS)
  • TD3+BC
  • Policy Regularization with Dataset Constraint (PRDC)
  • Implicit Q-Learning (IQL)
  • Calibrated Q-Learning (Cal-QL)
  • ReBRAC
  • Decision Transformer
  • Q-learning Decision Transformer (QDT)
  • Transformer Actor-Critic with Regularization (TACR)
  • Gato (🚧 work in progress)
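
Algorithms that support both action types follow a consistent naming convention in the API: the continuous-control variant uses the plain config class, while the discrete-control variant is prefixed with Discrete, as the CQL configs in the examples below illustrate. A minimal sketch:

import d3rlpy

# continuous-control variant (e.g. MuJoCo-style action spaces)
cql = d3rlpy.algos.CQLConfig().create(device="cpu")

# discrete-control variant of the same algorithm (e.g. Atari)
discrete_cql = d3rlpy.algos.DiscreteCQLConfig().create(device="cpu")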

Supported Q functions

Benchmark results

d3rlpy is benchmarked to ensure implementation quality. The benchmark scripts are available in the reproductions directory. The benchmark results are available in the d3rlpy-benchmarks repository.

Examples

MuJoCo

import d3rlpy

# prepare dataset
dataset, env = d3rlpy.datasets.get_d4rl('hopper-medium-v0')

# prepare algorithm
cql = d3rlpy.algos.CQLConfig(compile_graph=True).create(device='cuda:0')

# train
cql.fit(
    dataset,
    n_steps=100000,
    evaluators={"environment": d3rlpy.metrics.EnvironmentEvaluator(env)},
)

See more datasets at d4rl.
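
If you want to reuse a policy trained this way outside the training script, d3rlpy models can be saved and reloaded. The save()/d3rlpy.load_learnable() calls and the file name in the sketch below are assumptions about the v2 persistence API, so verify them against the documentation before relying on them.

import numpy as np
import d3rlpy

# train as in the MuJoCo example above
dataset, env = d3rlpy.datasets.get_d4rl('hopper-medium-v0')
cql = d3rlpy.algos.CQLConfig(compile_graph=True).create(device='cuda:0')
cql.fit(dataset, n_steps=100000)

# persist the trained algorithm (save()/load_learnable() are assumed here)
cql.save('cql_hopper.d3')

# later, e.g. in a deployment script
policy = d3rlpy.load_learnable('cql_hopper.d3', device='cpu')
observations = np.zeros((1,) + env.observation_space.shape, dtype=np.float32)
actions = policy.predict(observations)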

Atari 2600

import d3rlpy

# prepare dataset (1% dataset)
dataset, env = d3rlpy.datasets.get_atari_transitions(
    'breakout',
    fraction=0.01,
    num_stack=4,
)

# prepare algorithm
cql = d3rlpy.algos.DiscreteCQLConfig(
    observation_scaler=d3rlpy.preprocessing.PixelObservationScaler(),
    reward_scaler=d3rlpy.preprocessing.ClipRewardScaler(-1.0, 1.0),
    compile_graph=True,
).create(device='cuda:0')

# start training
cql.fit(
    dataset,
    n_steps=1000000,
    evaluators={"environment": d3rlpy.metrics.EnvironmentEvaluator(env, epsilon=0.001)},
)

See more Atari datasets at d4rl-atari.

Online Training

import d3rlpy
import gym

# prepare environment
env = gym.make('Hopper-v3')
eval_env = gym.make('Hopper-v3')

# prepare algorithm
sac = d3rlpy.algos.SACConfig(compile_graph=True).create(device='cuda:0')

# prepare replay buffer
buffer = d3rlpy.dataset.create_fifo_replay_buffer(limit=1000000, env=env)

# start training
sac.fit_online(env, buffer, n_steps=1000000, eval_env=eval_env)
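
Once training finishes, you can roll the learned policy out by hand with predict(), as in the sketch below. It assumes the Gym >= 0.26 step API (terminated/truncated) and that build_with_env() initializes the networks of a freshly created model; in practice you would reuse the sac object trained above.

import numpy as np
import gym
import d3rlpy

env = gym.make('Hopper-v3')

# build_with_env() is assumed here so the snippet runs standalone;
# normally you would use the trained model instead of a fresh one
sac = d3rlpy.algos.SACConfig().create(device='cpu')
sac.build_with_env(env)

observation, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    # predict() expects a batch, so add and strip a leading batch dimension
    action = sac.predict(np.expand_dims(observation, axis=0))[0]
    observation, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)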

Tutorials

Try cartpole examples on Google Colaboratory!

  • offline RL tutorial: Open In Colab
  • online RL tutorial: Open In Colab

More tutorial documentation is available here.

Contributions

Any kind of contribution to d3rlpy would be highly appreciated! Please check the contribution guide.

Community

  • Issues: GitHub Issues

Important

Please do NOT email any contributors, including the owner of this project, to ask for technical support. Such emails will be ignored without a reply. Use GitHub Issues to report your problems.

Projects using d3rlpy

  • MINERVA: an out-of-the-box GUI tool for offline RL
  • SCOPE-RL: an off-policy evaluation and selection library

Roadmap

The roadmap for future releases is available in ROADMAP.md.

Citation

The paper is available here.

@article{d3rlpy,
  author  = {Takuma Seno and Michita Imai},
  title   = {d3rlpy: An Offline Deep Reinforcement Learning Library},
  journal = {Journal of Machine Learning Research},
  year    = {2022},
  volume  = {23},
  number  = {315},
  pages   = {1--20},
  url     = {http://jmlr.org/papers/v23/22-0017.html}
}

Acknowledgement

This work started as a part of Takuma Seno's Ph.D. project at Keio University in 2020.

This work is supported by the Information-technology Promotion Agency, Japan (IPA), Exploratory IT Human Resources Project (MITOU Program) in the fiscal year 2020.

