Pytorch code for NeurIPS-20 Paper "Object Goal Navigation using Goal-Oriented Semantic Exploration"
This is a PyTorch implementation of the NeurIPS-20 paper:
Object Goal Navigation using Goal-Oriented Semantic Exploration
Devendra Singh Chaplot, Dhiraj Gandhi, Abhinav Gupta, Ruslan Salakhutdinov
Carnegie Mellon University, Facebook AI Research
Winner of the CVPR 2020 Habitat ObjectNav Challenge.
Project Website: https://devendrachaplot.github.io/projects/semantic-exploration
The Goal-Oriented Semantic Exploration (SemExp) model consists of three modules: a Semantic Mapping Module, a Goal-Oriented Semantic Policy, and a deterministic Local Policy. As shown below, the Semantic Mapping module builds a semantic map over time. The Goal-Oriented Semantic Policy selects a long-term goal based on the semantic map to reach the given object goal efficiently. A deterministic Local Policy based on analytical planners is used to take low-level navigation actions to reach the long-term goal.

This repository contains:
- Episode train and test datasets for the Object Goal Navigation task for the Gibson dataset in the Habitat Simulator.
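The three-module decision loop described above can be sketched as follows. This is an illustrative toy, not the repo's actual API: the class names, interfaces, and the trivial frontier/greedy-step logic are assumptions standing in for the real learned policy and analytical planner.

```python
# Hypothetical sketch of the SemExp loop: map -> long-term goal -> local step.
# All names and interfaces here are illustrative, not the repository's code.
import numpy as np

class SemanticMapping:
    """Accumulates a top-down per-category map from (stubbed) observations."""
    def __init__(self, map_size=240, num_categories=16):
        self.map = np.zeros((num_categories, map_size, map_size))

    def update(self, obs):
        # The real module projects first-person semantic predictions into a
        # top-down map; here we simply mark one cell for one category.
        cat, x, y = obs
        self.map[cat, x, y] = 1.0
        return self.map

class GoalOrientedPolicy:
    """Long-term goal: the target's mapped cell if seen, else a placeholder frontier."""
    def select_goal(self, sem_map, target_cat):
        ys, xs = np.nonzero(sem_map[target_cat])
        if len(ys):
            return (int(ys[0]), int(xs[0]))
        # Placeholder "exploration" goal; the real policy is learned with RL.
        return (sem_map.shape[1] // 2, sem_map.shape[2] // 2)

def local_policy(agent_pos, goal):
    """Deterministic local step: move one grid cell toward the long-term goal."""
    dx = int(np.sign(goal[0] - agent_pos[0]))
    dy = int(np.sign(goal[1] - agent_pos[1]))
    return (agent_pos[0] + dx, agent_pos[1] + dy)

# One illustrative step: the target (category 3) is observed at cell (10, 20).
mapper, policy = SemanticMapping(), GoalOrientedPolicy()
sem_map = mapper.update((3, 10, 20))
goal = policy.select_goal(sem_map, target_cat=3)
pos = local_policy((0, 0), goal)
print(goal, pos)  # → (10, 20) (1, 1)
```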
- The code to train and evaluate the Semantic Exploration (SemExp) model on the Object Goal Navigation task.
- Pretrained SemExp model.
- We use earlier versions of habitat-sim and habitat-lab as specified below:
Installing habitat-sim:
```
git clone https://github.com/facebookresearch/habitat-sim.git
cd habitat-sim; git checkout tags/v0.1.5; pip install -r requirements.txt
python setup.py install --headless
python setup.py install # (for Mac OS)
```
Installing habitat-lab:
```
git clone https://github.com/facebookresearch/habitat-lab.git
cd habitat-lab; git checkout tags/v0.1.5; pip install -e .
```
Check habitat installation by running `python examples/benchmark.py` in the habitat-lab folder.
- Install pytorch according to your system configuration. The code is tested on pytorch v1.6.0 and cudatoolkit v10.2. If you are using conda:
```
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 # (Linux with GPU)
conda install pytorch==1.6.0 torchvision==0.7.0 -c pytorch # (Mac OS)
```
- Install detectron2 according to your system configuration. If you are using conda:
```
python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.6/index.html # (Linux with GPU)
CC=clang CXX=clang++ ARCHFLAGS="-arch x86_64" python -m pip install 'git+https://github.com/facebookresearch/detectron2.git' # (Mac OS)
```
We provide experimental docker and singularity images with all the dependencies installed, see Docker Instructions.
Clone the repository and install other requirements:
```
git clone https://github.com/devendrachaplot/Object-Goal-Navigation/
cd Object-Goal-Navigation/
pip install -r requirements.txt
```
- Download the Gibson dataset using the instructions here: https://github.com/facebookresearch/habitat-lab#scenes-datasets (download the 11GB file `gibson_habitat_trainval.zip`)
- Move the Gibson scene dataset or create a symlink at `data/scene_datasets/gibson_semantic`.
- Download the episode dataset:
```
wget --no-check-certificate 'https://drive.google.com/uc?export=download&id=1tslnZAkH8m3V5nP8pbtBmaR2XEfr8Rau' -O objectnav_gibson_v1.1.zip
```
- Unzip the dataset into `data/datasets/objectnav/gibson/v1.1/`
The code requires the datasets in a `data` folder in the following format (same as habitat-lab):
```
Object-Goal-Navigation/
  data/
    scene_datasets/
      gibson_semantic/
        Adrian.glb
        Adrian.navmesh
        ...
    datasets/
      objectnav/
        gibson/
          v1.1/
            train/
            val/
```
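Before running anything, it can help to confirm the layout above is in place. The helper below is an assumption of this write-up, not part of the repo; it just checks that the expected directories exist.

```python
# Hypothetical sanity check (not part of the repository) for the data
# layout described above: returns the expected directories missing
# under `root`; an empty list means the layout looks correct.
from pathlib import Path

def check_data_layout(root="."):
    required = [
        "data/scene_datasets/gibson_semantic",
        "data/datasets/objectnav/gibson/v1.1/train",
        "data/datasets/objectnav/gibson/v1.1/val",
    ]
    return [p for p in required if not (Path(root) / p).is_dir()]

if __name__ == "__main__":
    print("missing:", check_data_layout())
```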
To verify that the data is setup correctly, run:
```
python test.py --agent random -n1 --num_eval_episodes 1 --auto_gpu_config 0
```
For training the SemExp model on the Object Goal Navigation task:
```
python main.py
```
For downloading the pre-trained model:
```
mkdir pretrained_models
wget --no-check-certificate 'https://drive.google.com/uc?export=download&id=171ZA7XNu5vi3XLpuKs8DuGGZrYyuSjL0' -O pretrained_models/sem_exp.pth
```
For evaluating the pre-trained model:
```
python main.py --split val --eval 1 --load pretrained_models/sem_exp.pth
```
For visualizing the agent observations and predicted semantic map, add `-v 1` as an argument to the above command.
The pre-trained model should get 0.657 Success, 0.339 SPL and 1.474 DTG.
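For reference, the three numbers above follow the standard ObjectNav metric definitions: Success is the fraction of successful episodes, SPL is success weighted by (inverse) path length, and DTG is the mean distance to goal at episode end. The snippet below computes them on made-up episode records; the tuple layout is an assumption for illustration, not the repo's evaluation code.

```python
# Illustrative metric computation on fabricated episode records.
# Each record: (success, shortest_path_len, agent_path_len, final_dist_to_goal)
episodes = [
    (1, 5.0, 6.0, 0.2),
    (1, 8.0, 8.0, 0.1),
    (0, 4.0, 9.0, 3.5),
]

n = len(episodes)
success = sum(s for s, *_ in episodes) / n
# SPL_i = S_i * l_i / max(p_i, l_i): full credit only for taking the shortest path.
spl = sum(s * l / max(p, l) for s, l, p, _ in episodes) / n
dtg = sum(d for *_, d in episodes) / n

print(round(success, 3), round(spl, 3), round(dtg, 3))  # → 0.667 0.611 1.267
```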
For more detailed instructions, see INSTRUCTIONS.
Chaplot, D.S., Gandhi, D., Gupta, A. and Salakhutdinov, R., 2020. Object Goal Navigation using Goal-Oriented Semantic Exploration. In Neural Information Processing Systems (NeurIPS-20). (PDF)
```
@inproceedings{chaplot2020object,
  title={Object Goal Navigation using Goal-Oriented Semantic Exploration},
  author={Chaplot, Devendra Singh and Gandhi, Dhiraj and Gupta, Abhinav and Salakhutdinov, Ruslan},
  booktitle={In Neural Information Processing Systems (NeurIPS)},
  year={2020}
}
```
- This project builds on the Active Neural SLAM paper. The code and pretrained models for the Active Neural SLAM system are available at: https://github.com/devendrachaplot/Neural-SLAM.
- The Semantic Mapping module is similar to the one used in Semantic Curiosity.

This repository uses the Habitat Lab implementation for running the RL environment. The implementation of PPO is borrowed from ikostrikov/pytorch-a2c-ppo-acktr-gail. The Mask-RCNN implementation is based on the detectron2 repository. We would also like to thank Shubham Tulsiani and Saurabh Gupta for their help in implementing some parts of the code.