A PyTorch library for all things Reinforcement Learning (RL) for Combinatorial Optimization (CO)
An extensive Reinforcement Learning (RL) for Combinatorial Optimization (CO) benchmark. Our goal is to provide a unified framework for RL-based CO algorithms, and to facilitate reproducible research in this field, decoupling the science from the engineering.
RL4CO is built upon:
- TorchRL: official PyTorch framework for RL algorithms and vectorized environments on GPUs
- TensorDict: a library to easily handle heterogeneous data such as states, actions and rewards (see the sketch after this list)
- PyTorch Lightning: a lightweight PyTorch wrapper for high-performance AI research
- Hydra: a framework for elegantly configuring complex applications
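To make the role of TensorDict concrete, here is a minimal sketch; the keys and shapes are illustrative, not tied to any particular RL4CO environment:

```python
import torch
from tensordict import TensorDict

# A batch of 4 transitions stored in a single keyed, batched structure
td = TensorDict(
    {
        "observation": torch.rand(4, 2),      # e.g. 2D node coordinates
        "action": torch.randint(0, 10, (4,)),
        "reward": torch.rand(4),
    },
    batch_size=[4],
)
print(td["reward"].shape)  # torch.Size([4])
```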
We offer flexible and efficient implementations of the following policies:
- Constructive: learn to construct a solution from scratch
- Autoregressive (AR): construct solutions one step at a time via a decoder
- NonAutoregressive (NAR): learn to predict a heuristic, such as a heatmap, to then construct a solution
- Improvement: learn to improve a pre-existing solution
We provide several utilities and modular components. For example, we modularize reusable components such as environment embeddings that can easily be swapped to solve new problems.
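For instance, a custom initial embedding can be swapped into a policy. A minimal sketch, assuming the policy constructor accepts an `init_embedding` module and that the environment's TensorDict carries node coordinates under `"locs"`:

```python
import torch.nn as nn
from rl4co.models import AttentionModelPolicy

class MyInitEmbedding(nn.Module):
    """Hypothetical embedding: projects raw 2D node coordinates to the hidden space."""

    def __init__(self, embed_dim: int, node_dim: int = 2):
        super().__init__()
        self.project = nn.Linear(node_dim, embed_dim)

    def forward(self, td):
        # td is a TensorDict; "locs" is assumed to hold [batch, num_loc, 2] coordinates
        return self.project(td["locs"])

# Swap the embedding in without touching the rest of the architecture
policy = AttentionModelPolicy(env_name="tsp", init_embedding=MyInitEmbedding(embed_dim=128))
```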
RL4CO is now available for installation on `pip`!

```bash
pip install rl4co
```
To get started, we recommend checking out our quickstart notebook or the minimalistic example below.
This command installs the bleeding-edge `main` version, useful for staying up-to-date with the latest developments (for instance, if a bug has been fixed since the last official release but a new release hasn't been rolled out yet):

```bash
pip install -U git+https://github.com/ai4co/rl4co.git
```
If you want to develop RL4CO, we recommend installing it locally with `pip` in editable mode:

```bash
git clone https://github.com/ai4co/rl4co && cd rl4co
pip install -e .
```
We recommend using a virtual environment such as `conda` to install `rl4co` locally.
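For example, a typical setup with `conda` might look like this; the environment name and Python version are illustrative:

```bash
conda create -n rl4co python=3.10
conda activate rl4co
pip install rl4co
```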
Train model with default configuration (AM on TSP environment):

```bash
python run.py
```
Tip: You may check out this notebook to get started with Hydra!
Change experiment settings
Train model with chosen experiment configuration from `configs/experiment/`:

```bash
python run.py experiment=routing/am env=tsp env.num_loc=50 model.optimizer_kwargs.lr=2e-4
```
Here you may change the environment, e.g. with `env=cvrp` by command line, or by modifying the corresponding experiment, e.g. `configs/experiment/routing/am.yaml`.
Disable logging
```bash
python run.py experiment=routing/am logger=none '~callbacks.learning_rate_monitor'
```
Note that `~` is used to disable a callback that would need a logger.
Create a sweep over hyperparameters (`-m` for multirun):

```bash
python run.py -m experiment=routing/am model.optimizer_kwargs.lr=1e-3,1e-4,1e-5
```
Here is a minimalistic example training an Attention Model policy with POMO on TSP in less than 30 lines of code:
```python
from rl4co.envs.routing import TSPEnv, TSPGenerator
from rl4co.models import AttentionModelPolicy, POMO
from rl4co.utils import RL4COTrainer

# Instantiate generator and environment
generator = TSPGenerator(num_loc=50, loc_distribution="uniform")
env = TSPEnv(generator)

# Create policy and RL model
policy = AttentionModelPolicy(env_name=env.name, num_encoder_layers=6)
model = POMO(env, policy, batch_size=64, optimizer_kwargs={"lr": 1e-4})

# Instantiate Trainer and fit
trainer = RL4COTrainer(max_epochs=10, accelerator="gpu", precision="16-mixed")
trainer.fit(model)
```
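After training, the policy can be rolled out greedily on fresh instances. A minimal sketch, assuming the standard TensorDict-based policy interface shown in the quickstart notebook:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Sample a small batch of new TSP instances
td_init = env.reset(batch_size=[3]).to(device)

# Greedy rollout with the trained policy
policy = model.policy.to(device)
out = policy(td_init.clone(), env, phase="test", decode_type="greedy", return_actions=True)
print(out["reward"])  # negative tour lengths; higher is better
```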
Other examples can be found in the documentation!
Run tests with `pytest` from the root directory:

```bash
pytest tests
```
Installing `PyG` via `conda` seems to update Torch itself. We have found that this update introduces some bugs with `torchrl`. At this moment, we recommend installing `PyG` with `pip`:

```bash
pip install torch_geometric
```
Have a suggestion, request, or found a bug? Feel free to open an issue or submit a pull request. If you would like to contribute, please check out our contribution guidelines here. We welcome and look forward to all contributions to RL4CO!
We are also on Slack if you have any questions or would like to discuss RL4CO with us. We are open to collaborations and would love to hear from you 🚀
If you find RL4CO valuable for your research or applied projects, please cite:
```bibtex
@article{berto2024rl4co,
    title={{RL4CO: an Extensive Reinforcement Learning for Combinatorial Optimization Benchmark}},
    author={Federico Berto and Chuanbo Hua and Junyoung Park and Laurin Luttmann and Yining Ma and Fanchen Bu and Jiarui Wang and Haoran Ye and Minsu Kim and Sanghyeok Choi and Nayeli Gast Zepeda and Andr\'e Hottung and Jianan Zhou and Jieyi Bi and Yu Hu and Fei Liu and Hyeonah Kim and Jiwoo Son and Haeyeon Kim and Davide Angioni and Wouter Kool and Zhiguang Cao and Jie Zhang and Kijung Shin and Cathy Wu and Sungsoo Ahn and Guojie Song and Changhyun Kwon and Lin Xie and Jinkyoo Park},
    year={2024},
    journal={arXiv preprint arXiv:2306.17100},
    note={\url{https://github.com/ai4co/rl4co}}
}
```
Note that a previous version of RL4CO has been accepted as an oral presentation at the NeurIPS 2023 GLFrontiers Workshop. Since then, the library has greatly evolved and improved!
We invite you to join our AI4CO community, an open research group in Artificial Intelligence (AI) for Combinatorial Optimization (CO)!