aziabatz/rl4co

Fork of RL4CO for submitting PR (forked from ai4co/rl4co)


RL4CO has been accepted as an oral presentation at the NeurIPS 2023 GLFrontiers Workshop! 🎉


An extensive Reinforcement Learning (RL) for Combinatorial Optimization (CO) benchmark. Our goal is to provide a unified framework for RL-based CO algorithms, and to facilitate reproducible research in this field, decoupling the science from the engineering.

RL4CO is built upon:

  • TorchRL: official PyTorch framework for RL algorithms and vectorized environments on GPUs
  • TensorDict: a library to easily handle heterogeneous data such as states, actions and rewards
  • PyTorch Lightning: a lightweight PyTorch wrapper for high-performance AI research
  • Hydra: a framework for elegantly configuring complex applications
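As a quick illustration of the TensorDict layer, here is a minimal sketch (not RL4CO code; the keys and shapes are illustrative assumptions) showing how heterogeneous rollout data can live in one batched structure:

import torch
from tensordict import TensorDict

batch_size = 4
td = TensorDict(
    {
        "locs": torch.rand(batch_size, 20, 2),                # node coordinates
        "action": torch.zeros(batch_size, dtype=torch.long),  # last action taken
        "reward": torch.zeros(batch_size),                    # per-instance reward
    },
    batch_size=[batch_size],
)
print(td["locs"].shape)  # torch.Size([4, 20, 2])

Indexing and device moves apply to all entries at once, which is what makes batched RL state handling convenient.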

RL4CO Overview

We provide several utilities and modularization. For autoregressive policies, we modularize reusable components such as environment embeddings that can easily be swapped to solve new problems; a sketch of the idea follows below.

RL4CO Policy
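As a hedged illustration of the embedding idea above, here is a hypothetical init embedding; the class name, the "locs" feature key, and the dimensions are assumptions for illustration, not a specific RL4CO interface:

import torch.nn as nn

class MyInitEmbedding(nn.Module):
    # Hypothetical: projects raw per-node features into the model's hidden space
    def __init__(self, embed_dim: int = 128, node_dim: int = 2):
        super().__init__()
        self.project = nn.Linear(node_dim, embed_dim)

    def forward(self, td):
        # td["locs"]: [batch, num_nodes, node_dim] -> [batch, num_nodes, embed_dim]
        return self.project(td["locs"])

A new problem can then reuse the rest of the policy unchanged, swapping only components like this one.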

Getting started


RL4CO is now available for installation on pip!

pip install rl4co

To get started, we recommend checking out our quickstart notebook or the minimalistic example below.

Install from source

This command installs the bleeding-edge main version, useful for staying up-to-date with the latest developments - for instance, if a bug has been fixed since the last official release but a new release hasn't been rolled out yet:

pip install -U git+https://github.com/ai4co/rl4co.git

Local install and development

If you want to develop RL4CO, we recommend installing it locally with pip in editable mode:

git clone https://github.com/ai4co/rl4co && cd rl4co
pip install -e .

We recommend using a virtual environment such as conda to install rl4co locally.
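For example, a minimal setup along these lines (the environment name and Python version are illustrative):

conda create -n rl4co python=3.10
conda activate rl4co
pip install -e .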

Usage

Train model with default configuration (AM on TSP environment):

python run.py

Tip

You may check out this notebook to get started with Hydra!

Change experiment

Train model with chosen experiment configuration from configs/experiment/ (e.g. routing/am, and environment with 42 cities):

python run.py experiment=routing/am env.num_loc=42
Disable logging:

python run.py experiment=routing/am logger=none '~callbacks.learning_rate_monitor'

Note that ~ is used to disable a callback that would need a logger.

Create a sweep over hyperparameters (-m for multirun):

python run.py -m experiment=routing/am train.optimizer.lr=1e-3,1e-4,1e-5
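With Hydra's default basic sweeper, the multirun above launches one job per value in the comma-separated list, i.e. it is equivalent to running:

python run.py experiment=routing/am train.optimizer.lr=1e-3
python run.py experiment=routing/am train.optimizer.lr=1e-4
python run.py experiment=routing/am train.optimizer.lr=1e-5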

Minimalistic Example

Here is a minimalistic example training the Attention Model with greedy rollout baseline on TSP in less than 30 lines of code:

from rl4co.envs import TSPEnv
from rl4co.models import AttentionModel
from rl4co.utils import RL4COTrainer

# Environment, Model, and Lightning Module
env = TSPEnv(num_loc=20)
model = AttentionModel(
    env,
    baseline="rollout",
    train_data_size=100_000,
    test_data_size=10_000,
    optimizer_kwargs={"lr": 1e-4},
)

# Trainer
trainer = RL4COTrainer(max_epochs=3)

# Fit the model
trainer.fit(model)

# Test the model
trainer.test(model)
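The same training loop carries over to other environments. As a hedged sketch, swapping in the Capacitated Vehicle Routing Problem should only change the environment line (assuming CVRPEnv takes an analogous num_loc argument; check the docs for the exact signature):

from rl4co.envs import CVRPEnv

env = CVRPEnv(num_loc=20)  # assumed analogous to TSPEnv's constructor
model = AttentionModel(env, baseline="rollout")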

Other examples can be found in the documentation!

Testing

Run tests with pytest from the root directory:

pytest tests
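Standard pytest selection flags work as usual; for example, to run only tests whose names match a keyword (the keyword below is illustrative):

pytest tests -k "tsp"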

Known Bugs

Bugs installing PyTorch Geometric (PyG)

Installing PyG via Conda seems to update Torch itself. We have found that this update introduces some bugs with torchrl. At this moment, we recommend installing PyG with pip:

pip install torch_geometric
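After installing, a quick sanity check that PyG did not replace your PyTorch build:

python -c "import torch, torch_geometric; print(torch.__version__, torch_geometric.__version__)"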

Contributing

Have a suggestion, request, or found a bug? Feel free to open an issue or submit a pull request. If you would like to contribute, please check out our contribution guidelines here. We welcome and look forward to all contributions to RL4CO!

We are also on Slack if you have any questions or would like to discuss RL4CO with us. We are open to collaborations and would love to hear from you 🚀

Contributors

Citation

If you find RL4CO valuable for your research or applied projects, please cite it:

@inproceedings{berto2023rl4co,
    title={{RL}4{CO}: a Unified Reinforcement Learning for Combinatorial Optimization Library},
    author={Federico Berto and Chuanbo Hua and Junyoung Park and Minsu Kim and Hyeonah Kim and Jiwoo Son and Haeyeon Kim and Joungho Kim and Jinkyoo Park},
    booktitle={NeurIPS 2023 Workshop: New Frontiers in Graph Learning},
    year={2023},
    url={https://openreview.net/forum?id=YXSJxi8dOV},
    note={\url{https://github.com/ai4co/rl4co}}
}

Join us


We invite you to join our AI4CO community, an open research group in Artificial Intelligence (AI) for Combinatorial Optimization (CO)!
