GrabNet: A generative model for realistic 3D hands grasping unseen objects (ECCV 2020)
[Paper Page] [Paper]
GrabNet is a generative model for 3D hand grasps. Given a 3D object mesh, GrabNet can predict several hand grasps for it. GrabNet has two successive models, CoarseNet (a cVAE) and RefineNet. It is trained on a subset (right hand and objects only) of the GRAB dataset. For more details, please refer to the Paper or the project website.
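Conceptually, generation proceeds in two stages: CoarseNet samples a coarse grasp conditioned on a BPS encoding of the object, and RefineNet refines that grasp. The sketch below illustrates this flow with hypothetical stand-in modules; the class names, layer sizes, and output dimensions are illustrative assumptions and not the actual GrabNet code, which predicts MANO hand parameters.

```python
# Hypothetical, self-contained sketch of the two-stage CoarseNet -> RefineNet flow.
# These stubs are NOT the GrabNet API; they only mirror its structure.
import torch
import torch.nn as nn

class CoarseNetStub(nn.Module):
    """cVAE decoder stand-in: latent code + object BPS feature -> coarse hand parameters."""
    def __init__(self, latent_dim=16, bps_dim=4096, hand_dim=61):
        super().__init__()
        self.latent_dim = latent_dim
        self.decoder = nn.Sequential(nn.Linear(latent_dim + bps_dim, 512),
                                     nn.ReLU(),
                                     nn.Linear(512, hand_dim))

    def sample(self, object_bps, n_samples=5):
        z = torch.randn(n_samples, self.latent_dim)            # sample cVAE latent codes
        cond = object_bps.expand(n_samples, -1)                 # condition on the object encoding
        return self.decoder(torch.cat([z, cond], dim=-1))       # several coarse grasp hypotheses

class RefineNetStub(nn.Module):
    """Refinement stand-in: coarse grasp + object feature -> refined grasp."""
    def __init__(self, bps_dim=4096, hand_dim=61):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hand_dim + bps_dim, 512),
                                 nn.ReLU(),
                                 nn.Linear(512, hand_dim))

    def forward(self, coarse_params, object_bps):
        cond = object_bps.expand(coarse_params.shape[0], -1)
        return coarse_params + self.net(torch.cat([coarse_params, cond], dim=-1))

object_bps = torch.rand(1, 4096)                # stand-in BPS encoding of one object
coarse = CoarseNetStub().sample(object_bps)     # coarse grasps from the cVAE
refined = RefineNetStub()(coarse, object_bps)   # refined grasps for the same object
print(coarse.shape, refined.shape)
```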
Below you can see some generated results from GrabNet:
(Example generated grasps for: Binoculars, Mug, Camera, Toothpaste.)
Check out the long and short YouTube videos for more details.
This implementation:
- Can run GrabNet on arbitrary objects provided by users, including computing their BPS representation on the fly (see the sketch after this list).
- Provides a quick and easy demo on Google Colab to generate grasps for any given object.
- Can run GrabNet on the test objects of our dataset (with pre-computed object centering and BPS representation).
- Can retrain GrabNet, allowing users to change details in the training configuration.
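For reference, encoding an object point cloud with bps_torch looks roughly like the following; this is a minimal sketch, and the basis-point settings (bps_type, n_bps_points, radius) are illustrative values, not necessarily those used to train GrabNet.

```python
# Minimal sketch of computing a BPS encoding of a point cloud with bps_torch.
import torch
from bps_torch.bps import bps_torch

bps = bps_torch(bps_type='random_uniform',   # random basis points inside a ball (illustrative)
                n_bps_points=4096,
                radius=0.15,
                n_dims=3)

points = torch.rand(10000, 3) * 0.1          # stand-in for points sampled from an object mesh

enc = bps.encode(points, feature_type=['dists'])   # distances from each basis point to the cloud
print(enc['dists'].shape)
```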
This package has the following requirements:
- PyTorch >= 1.1.0
- Python >= 3.6.0
- PyTorch3D >= 0.2.0
- MANO
- bps_torch
- psbody-mesh (for visualization)
To install the dependencies, please follow these steps:
- Clone this repository:
git clone https://github.com/otaheri/GrabNet
- Install the dependencies with the following command:
pip install -r requirements.txt
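After installation, you can quickly confirm that the core dependencies are importable. This is a minimal sanity check, not part of the repository; it only prints the installed versions against the requirements listed above.

```python
# Minimal sanity check that the main dependencies import correctly.
import torch
import pytorch3d

print('torch:', torch.__version__)           # requirements above expect >= 1.1.0
print('pytorch3d:', pytorch3d.__version__)   # expected >= 0.2.0
print('CUDA available:', torch.cuda.is_available())
```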
For a quick demo of GrabNet, you can give it a try on Google Colab here.
In order to use the GrabNet model, please follow the steps below:
- Download the GrabNet models from the GRAB website, and move the model files to the models folder as described below (a quick way to check the downloaded checkpoints is sketched after this list).
GrabNet
├── grabnet
│   ├── models
│   │   ├── coarsenet.pt
│   │   └── refinenet.pt
- Download MANO models following the steps on the MANO repo (skip this part if you already followed this for the GRAB dataset).
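The demo, test, and training scripts load the downloaded checkpoints for you. If you only want to verify that the files placed in grabnet/models are readable PyTorch checkpoints, a minimal check (assuming the layout shown in the tree above) could look like this:

```python
# Verify that the downloaded model files load as PyTorch checkpoints.
import torch

for ckpt_path in ['grabnet/models/coarsenet.pt', 'grabnet/models/refinenet.pt']:
    ckpt = torch.load(ckpt_path, map_location='cpu')
    print(ckpt_path, '->', type(ckpt))
```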
Download the GrabNet dataset (ZIP files) from this website. Please do NOT unzip the files yet.
Put all the downloaded ZIP files for GrabNet in a folder.
Clone this repository and install the requirements:
git clone https://github.com/otaheri/GrabNet
Run the following command to extract the ZIP files.
python grabnet/data/unzip_data.py --data-path $PATH_TO_FOLDER_WITH_ZIP_FILES \
                                  --ectract-path $PATH_TO_EXTRACT_DATASET_TO
The extracted data should be in the following structure.
GRAB
├── data
│   ├── bps.npz
│   ├── obj_info.npy
│   ├── sbj_info.npy
│   └── [split_name] from (test, train, val)
│       ├── frame_names.npz
│       ├── grabnet_[split_name].npz
│       └── data
│           ├── s1
│           ├── ...
│           └── s10
└── tools
    ├── object_meshes
    └── subject_meshes
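Once extracted, you can inspect the data files with NumPy. The exact keys inside each archive are not documented here, so this sketch only lists what it finds; the paths assume the layout above, rooted at your extract directory.

```python
# Inspect the extracted GrabNet data; adjust the root path to your extract directory.
import numpy as np

frames = np.load('GRAB/data/test/frame_names.npz', allow_pickle=True)
print('frame_names.npz keys:', list(frames.keys()))

grasps = np.load('GRAB/data/test/grabnet_test.npz', allow_pickle=True)
print('grabnet_test.npz keys:', list(grasps.keys()))

# obj_info.npy is a pickled object; just report its type without assuming its structure.
obj_info = np.load('GRAB/data/obj_info.npy', allow_pickle=True)
print('obj_info:', type(obj_info.item()) if obj_info.shape == () else type(obj_info))
```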
After installing the GrabNet package and its dependencies, and downloading the data and the models as described above, you should be able to run the following examples:
Generate grasps for new, user-provided objects:
python grabnet/tests/grab_new_objects.py --obj-path $NEW_OBJECT_PATH \
                                         --rhm-path $MANO_MODEL_FOLDER
Run GrabNet on the test objects of the dataset:
python grabnet/tests/test.py --rhm-path $MANO_MODEL_FOLDER \
                             --data-path $PATH_TO_GRABNET_DATA
To retrain GrabNet with a new configuration, please use the following command:
python train.py --work-dir $SAVING_PATH \
                --rhm-path $MANO_MODEL_FOLDER \
                --data-path $PATH_TO_GRABNET_DATA
To evaluate a trained model, run:
python eval.py --rhm-path $MANO_MODEL_FOLDER \
               --data-path $PATH_TO_GRABNET_DATA
If you find this work useful in your research, please consider citing:
@inproceedings{GRAB:2020,
  title = {{GRAB}: A Dataset of Whole-Body Human Grasping of Objects},
  author = {Taheri, Omid and Ghorbani, Nima and Black, Michael J. and Tzionas, Dimitrios},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = {2020},
  url = {https://grab.is.tue.mpg.de}
}
Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions in the LICENSE file and any accompanying documentation before you download and/or use the GRAB data, model and software (the "Data & Software"), including 3D meshes (body and objects), images, videos, textures, software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.
Special thanks to Mason Landry for his invaluable help with this project.
We thank:
- Senya Polikovsky, Markus Hoschle (MH) and Mason Landry (ML) for the MoCap facility.
- ML, Felipe Mattioni, David Hieber, and Alex Valis for MoCap cleaning.
- ML and Tsvetelina Alexiadis for trial coordination, and MH and Felix Grimminger for 3D printing.
- ML and Valerie Callaghan for voice recordings, Joachim Tesch for renderings.
- Jonathan Williams for the website design, and Benjamin Pellkofer for the IT and web support.
- Sergey Prokudin for early access to BPS code.
- Sai Kumar Dwivedi and Nikos Athanasiou for proofreading.
The code of this repository was implemented by Omid Taheri and Nima Ghorbani.
For questions, please contact grab@tue.mpg.de.
For commercial licensing (and all related questions for business applications), please contact ps-licensing@tue.mpg.de.