Codebase of Q-attention, coarse-to-fine Q-attention, and other variants. Code from the following papers:
- Q-attention: Enabling Efficient Learning for Vision-based Robotic Manipulation (ARM system)
- Coarse-to-Fine Q-attention: Efficient Learning for Visual Robotic Manipulation via Discretisation (C2F-ARM system)
- Coarse-to-Fine Q-attention with Learned Path Ranking (C2F-ARM+LPR system)
- Coarse-to-Fine Q-attention with Tree Expansion
ARM is trained using the [YARR](https://github.com/stepjam/YARR) framework and evaluated on [RLBench](https://github.com/stepjam/RLBench) 1.1.0.
Install all of the project requirements:
```bash
# Create conda environment
conda create -n arm python=3.8

# Install PyTorch 2.0. Go to PyTorch website to install other versions.
conda install pytorch=2.0 torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

# Install YARR
pip install git+https://github.com/stepjam/YARR.git

# Install CoppeliaSim 4.1.0 for Ubuntu 20.04
# Refer to PyRep README for other versions
export COPPELIASIM_ROOT=${HOME}/.local/bin/CoppeliaSim
curl -O https://www.coppeliarobotics.com/files/CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz
mkdir -p $COPPELIASIM_ROOT && tar -xf CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz -C $COPPELIASIM_ROOT --strip-components 1

# Add environment variables into bashrc (or zshrc)
echo "export COPPELIASIM_ROOT=$COPPELIASIM_ROOT
export LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:\$COPPELIASIM_ROOT
export QT_QPA_PLATFORM_PLUGIN_PATH=\$COPPELIASIM_ROOT" >> ~/.bashrc

# Install PyRep
git clone https://github.com/stepjam/PyRep.git .local/PyRep
cd .local/PyRep
pip install -r requirements.txt
pip install .
cd ../..

# Install RLBench
git clone https://github.com/stepjam/RLBench.git .local/RLBench
cd .local/RLBench
pip install -r requirements.txt
pip install .
cd ../..

# Install ARM dependencies
pip install -r requirements.txt
```
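As an optional sanity check (not part of the repo's instructions), you can confirm that the simulator bindings resolve in the new environment before generating any data:

```bash
# Hypothetical verification step: re-source the environment variables and
# confirm that PyRep and RLBench import without errors.
source ~/.bashrc
conda activate arm
python -c "import pyrep, rlbench; print('PyRep and RLBench imported OK')"
```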
Be sure to have RLBench demos saved on your machine before proceeding. To generate demos for a task, go to the tools directory in RLBench (`rlbench/tools`), and run:

```bash
python dataset_generator.py --save_path=/mnt/my/save/dir \
                            --tasks=take_lid_off_saucepan \
                            --image_size=128,128 \
                            --renderer=opengl \
                            --episodes_per_task=100 \
                            --variations=1 \
                            --processes=1
```
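If you need demos for more than one task, a small shell loop over the same command works. This is a convenience sketch rather than something RLBench provides; any task name beyond `take_lid_off_saucepan` (e.g. `pick_up_cup`) is an assumption about your RLBench version:

```bash
# Hypothetical loop (not part of the repo): generate demos for several
# tasks with identical settings; extra task names are assumptions.
for task in take_lid_off_saucepan pick_up_cup; do
    python dataset_generator.py --save_path=/mnt/my/save/dir \
                                --tasks=$task \
                                --image_size=128,128 \
                                --renderer=opengl \
                                --episodes_per_task=100 \
                                --variations=1 \
                                --processes=1
done
```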
Experiments are launched via [Hydra](https://github.com/facebookresearch/hydra). To start training C2F-ARM on the `take_lid_off_saucepan` task with the default parameters on GPU 0:

```bash
python launch.py method=C2FARM rlbench.task=take_lid_off_saucepan \
    rlbench.demo_path=/mnt/my/save/dir framework.gpu=0
```
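Because configuration is handled by Hydra, any value shown above can be overridden on the command line, and stock Hydra's multirun mode (`-m`) can sweep over comma-separated values. The sketch below assumes the default Hydra launcher and sweep behaviour (the repo's own config may customise this), and `pick_up_cup` is an assumed task name:

```bash
# Hydra multirun sketch (assumes stock Hydra sweep behaviour): launches
# one C2F-ARM training run per listed task, sequentially.
python launch.py -m method=C2FARM \
    rlbench.task=take_lid_off_saucepan,pick_up_cup \
    rlbench.demo_path=/mnt/my/save/dir framework.gpu=0
```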
To launch C2F-ARM+LPR:

```bash
python launch.py method=LPR rlbench.task=take_lid_off_saucepan \
    rlbench.demo_path=/mnt/my/save/dir framework.gpu=0
```

To launch C2F-ARM+QTE:

```bash
python launch.py method=QTE rlbench.task=take_lid_off_saucepan \
    rlbench.demo_path=/mnt/my/save/dir framework.gpu=0
```