TF-Agents: A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.
TF-Agents makes implementing, deploying, and testing new Bandits and RL algorithms easier. It provides well-tested and modular components that can be modified and extended. It enables fast code iteration, with good test integration and benchmarking.
To get started, we recommend checking out one of our Colab tutorials. If you need an intro to RL (or a quick recap), start here. Otherwise, check out our DQN tutorial to get an agent up and running in the Cartpole environment. API documentation for the current stable release is on tensorflow.org.
TF-Agents is under active development and interfaces may change at any time. Feedback and comments are welcome.
Table of contents:
- Agents
- Tutorials
- Multi-Armed Bandits
- Examples
- Installation
- Contributing
- Releases
- Principles
- Contributors
- Citation
- Disclaimer
In TF-Agents, the core elements of RL algorithms are implemented as Agents. An agent encompasses two main responsibilities: defining a Policy to interact with the Environment, and learning/training that Policy from collected experience.
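As a rough illustration of these two responsibilities (this sketch is not part of the original README; module paths and constructor arguments may differ between TF-Agents versions), a DQN agent on the CartPole Gym environment can be assembled roughly like this:

```python
# Minimal sketch: build a DQN agent on CartPole (assumes tf-agents[reverb] and gym are installed).
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.environments import suite_gym, tf_py_environment
from tf_agents.networks import q_network

# Wrap a Gym environment so it yields TensorFlow tensors.
env = tf_py_environment.TFPyEnvironment(suite_gym.load('CartPole-v1'))

# Q-network: maps observations to one Q-value per action.
q_net = q_network.QNetwork(
    env.observation_spec(),
    env.action_spec(),
    fc_layer_params=(100,))

# The agent owns both the policy (how to act) and the training logic (how to learn).
agent = dqn_agent.DqnAgent(
    env.time_step_spec(),
    env.action_spec(),
    q_network=q_net,
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3))
agent.initialize()

# agent.policy is the greedy policy for evaluation;
# agent.collect_policy adds exploration for data collection.
action_step = agent.policy.action(env.reset())
print(action_step.action)
```

In practice, experience gathered with agent.collect_policy is stored in a replay buffer (e.g. Reverb) and fed to agent.train; the DQN tutorial linked above walks through the full loop.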
Currently the following algorithms are available under TF-Agents:
- DQN: Human level control through deep reinforcement learning, Mnih et al., 2015
- DDQN: Deep Reinforcement Learning with Double Q-learning, Hasselt et al., 2015
- DDPG: Continuous control with deep reinforcement learning, Lillicrap et al., 2015
- TD3: Addressing Function Approximation Error in Actor-Critic Methods, Fujimoto et al., 2018
- REINFORCE: Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, Williams, 1992
- PPO: Proximal Policy Optimization Algorithms, Schulman et al., 2017
- SAC: Soft Actor Critic, Haarnoja et al., 2018
See docs/tutorials/ for tutorials on the major components provided.
The TF-Agents library contains a comprehensive Multi-Armed Bandits suite, including Bandits environments and agents. RL agents can also be used on Bandit environments. There is a tutorial in bandits_tutorial.ipynb and ready-to-run examples in tf_agents/bandits/agents/examples/v2.
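As a hypothetical sketch (not taken from the bandits tutorial; the specs are illustrative and argument names may differ between releases), a LinUCB bandit agent can be constructed directly from hand-written specs:

```python
# Minimal sketch: a LinUCB agent for a 3-armed bandit with a 2-dimensional context.
import tensorflow as tf
from tf_agents.bandits.agents import lin_ucb_agent
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts

# Hypothetical specs: the "observation" is the context vector seen before each decision.
context_spec = tf.TensorSpec(shape=[2], dtype=tf.float32, name='observation')
action_spec = tensor_spec.BoundedTensorSpec(
    shape=(), dtype=tf.int32, minimum=0, maximum=2, name='action')

# LinUCB fits a linear reward model per arm and adds an upper-confidence exploration bonus.
agent = lin_ucb_agent.LinearUCBAgent(
    time_step_spec=ts.time_step_spec(context_spec),
    action_spec=action_spec,
    alpha=1.0)  # alpha scales the confidence interval, i.e. the amount of exploration.
```

The bandits tutorial pairs an agent like this with one of the bandit environments (e.g. a stationary stochastic environment) and a driver that alternates acting and training.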
End-to-end examples that train agents can be found under each agent's directory.
TF-Agents publishes nightly and stable builds. For a list of releases read the Releases section. The commands below cover installing TF-Agents stable and nightly from pypi.org as well as from a GitHub clone.
⚠️ If using Reverb (replay buffer), which is very common, TF-Agents will only work with Linux.
Note: Python 3.11 requires pygame 2.1.3+.
Run the commands below to install the most recent stable release. API documentation for the release is on tensorflow.org.
```shell
$ pip install --user tf-agents[reverb]

# Use keras-2
$ export TF_USE_LEGACY_KERAS=1

# Use this tag to get the matching examples and colabs.
$ git clone https://github.com/tensorflow/agents.git
$ cd agents
$ git checkout v0.18.0
```
If you want to install TF-Agents with versions of TensorFlow or Reverb that are flagged as not compatible by the pip dependency check, use the pattern below at your own risk.
```shell
$ pip install --user tensorflow
$ pip install --user tf-keras
$ pip install --user dm-reverb
$ pip install --user tf-agents
```
If you want to use TF-Agents with TensorFlow 1.15 or 2.0, install version 0.3.0:
```shell
# Newer versions of tensorflow-probability require newer versions of TensorFlow.
$ pip install tensorflow-probability==0.8.0
$ pip install tf-agents==0.3.0
```
Nightly builds include newer features, but may be less stable than the versioned releases. The nightly build is pushed as tf-agents-nightly. We suggest installing nightly versions of TensorFlow (tf-nightly) and TensorFlow Probability (tfp-nightly), as those are the versions TF-Agents nightly is tested against.
To install the nightly build version, run the following:
```shell
# Use keras-2
$ export TF_USE_LEGACY_KERAS=1

# `--force-reinstall` helps guarantee the right versions.
$ pip install --user --force-reinstall tf-nightly
$ pip install --user --force-reinstall tf-keras-nightly
$ pip install --user --force-reinstall tfp-nightly
$ pip install --user --force-reinstall dm-reverb-nightly

# Installing with the `--upgrade` flag ensures you'll get the latest version.
$ pip install --user --upgrade tf-agents-nightly
```
After cloning the repository, the dependencies can be installed by running pip install -e .[tests]. TensorFlow needs to be installed independently: pip install --user tf-nightly.
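A trivial smoke test (not from the README; it assumes the package exposes __version__, which recent releases do) is to import the library and print its version:

```python
# Quick check that the install works and which version was picked up.
import tf_agents
print(tf_agents.__version__)
```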
We're eager to collaborate with you! See CONTRIBUTING.md for a guide on how to contribute. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.
TF-Agents has stable and nightly releases. The nightly releases are often fine but can have issues due to upstream libraries being in flux. The table below lists the version(s) of TensorFlow that align with each TF-Agents release. Release versions of interest:
- 0.19.0 supports tensorflow-2.15.0.
- 0.18.0 dropped Python 3.8 support.
- 0.16.0 is the first version to support Python 3.11.
- 0.15.0 is the last release compatible with Python 3.7.
- If using numpy < 1.19, then use TF-Agents 0.15.0 or earlier.
- 0.9.0 is the last release compatible with Python 3.6.
- 0.3.0 is the last release compatible with Python 2.x.
Release | Branch / Tag | TensorFlow Version | dm-reverb Version |
---|---|---|---|
Nightly | master | tf-nightly | dm-reverb-nightly |
0.19.0 | v0.19.0 | 2.15.0 | 0.14.0 |
0.18.0 | v0.18.0 | 2.14.0 | 0.13.0 |
0.17.0 | v0.17.0 | 2.13.0 | 0.12.0 |
0.16.0 | v0.16.0 | 2.12.0 | 0.11.0 |
0.15.0 | v0.15.0 | 2.11.0 | 0.10.0 |
0.14.0 | v0.14.0 | 2.10.0 | 0.9.0 |
0.13.0 | v0.13.0 | 2.9.0 | 0.8.0 |
0.12.0 | v0.12.0 | 2.8.0 | 0.7.0 |
0.11.0 | v0.11.0 | 2.7.0 | 0.6.0 |
0.10.0 | v0.10.0 | 2.6.0 | |
0.9.0 | v0.9.0 | 2.6.0 | |
0.8.0 | v0.8.0 | 2.5.0 | |
0.7.1 | v0.7.1 | 2.4.0 | |
0.6.0 | v0.6.0 | 2.3.0 | |
0.5.0 | v0.5.0 | 2.2.0 | |
0.4.0 | v0.4.0 | 2.1.0 | |
0.3.0 | v0.3.0 | 1.15.0 and 2.0.0 | |
This project adheres to Google's AI principles. By participating in, using, or contributing to this project you are expected to adhere to these principles.
We would like to recognize the following individuals for their code contributions, discussions, and other work that has made the TF-Agents library better.
- James Davidson
- Ethan Holly
- Toby Boyd
- Summer Yue
- Robert Ormandi
- Kuang-Huei Lee
- Alexa Greenberg
- Amir Yazdanbakhsh
- Yao Lu
- Gaurav Jain
- Christof Angermueller
- Mark Daoust
- Adam Wood
If you use this code, please cite it as:
```
@misc{TFAgents,
  title = {{TF-Agents}: A library for Reinforcement Learning in TensorFlow},
  author = {Sergio Guadarrama and Anoop Korattikara and Oscar Ramirez and Pablo Castro and Ethan Holly and Sam Fishman and Ke Wang and Ekaterina Gonina and Neal Wu and Efi Kokiopoulou and Luciano Sbaiz and Jamie Smith and Gábor Bartók and Jesse Berent and Chris Harris and Vincent Vanhoucke and Eugene Brevdo},
  howpublished = {\url{https://github.com/tensorflow/agents}},
  url = "https://github.com/tensorflow/agents",
  year = 2018,
  note = "[Online; accessed 25-June-2019]"
}
```
This is not an official Google product.