SNARF: Differentiable Forward Skinning for Animating Non-rigid Neural Implicit Shapes

Official code release for the ICCV 2021 paper SNARF: Differentiable Forward Skinning for Animating Non-rigid Neural Implicit Shapes. We propose a novel forward skinning module to animate neural implicit shapes with good generalization to unseen poses.

Update: we have released an improved version, FastSNARF, which is 150x faster than SNARF. Check it out here.

If you find our code or paper useful, please cite as

@inproceedings{chen2021snarf,
  title={SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes},
  author={Chen, Xu and Zheng, Yufeng and Black, Michael J and Hilliges, Otmar and Geiger, Andreas},
  booktitle={International Conference on Computer Vision (ICCV)},
  year={2021}
}

Quick Start

Clone this repo:

git clone https://github.com/xuchen-ethz/snarf.git
cd snarf

Install environment:

conda env create -f environment.yml
conda activate snarf
python setup.py install

Download the SMPL models (1.0.0 for Python 2.7 (10 shape PCs)) and move them to the corresponding places:

mkdir lib/smpl/smpl_model/
mv /path/to/smpl/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl lib/smpl/smpl_model/SMPL_FEMALE.pkl
mv /path/to/smpl/models/basicmodel_m_lbs_10_207_0_v1.0.0.pkl lib/smpl/smpl_model/SMPL_MALE.pkl

Download our pretrained models and test motion sequences:

sh ./download_data.sh

Run a quick demo for a clothed human:

python demo.py expname=cape subject=3375 demo.motion_path=data/aist_demo/seqs +experiments=cape

You can find the video in outputs/cape/3375/demo.mp4 and the images in outputs/cape/3375/images/. To save the meshes, add demo.save_mesh=true to the command.
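
For example, building on the demo command above, adding demo.save_mesh=true exports the meshes alongside the video:

python demo.py expname=cape subject=3375 demo.motion_path=data/aist_demo/seqs +experiments=cape demo.save_mesh=true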

You can also try other subjects (see outputs/data/cape for available options) by setting subject=xx, and other motion sequences from AMASS by setting demo.motion_path=/path/to/amass_motion.npz.
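
For instance, keeping the CAPE subject from the demo above but pointing to an AMASS sequence (the .npz path is a placeholder to substitute with your own file):

python demo.py expname=cape subject=3375 demo.motion_path=/path/to/amass_motion.npz +experiments=cape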

Some motion sequences have a high fps and one might want to skip some frames. To do this, add demo.every_n_frames=x to consider only every x-th frame in the motion sequence (e.g. demo.every_n_frames=10 for PosePrior sequences).
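
For a high-fps sequence such as PosePrior, the same command with frame skipping might look like this (again with a placeholder motion path):

python demo.py expname=cape subject=3375 demo.motion_path=/path/to/poseprior_motion.npz +experiments=cape demo.every_n_frames=10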

By default, we use demo.fast_mode=true for fast mesh extraction. In this mode, we first extract the mesh in canonical space and then forward skin the mesh to posed space. This bypasses the root finding during inference and is thus faster, but it does not truly deform a continuous field. To first deform the continuous field and then extract the mesh in the deformed space, use demo.fast_mode=false instead.
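
To extract meshes in the deformed space instead, append the flag to the demo command, for example:

python demo.py expname=cape subject=3375 demo.motion_path=data/aist_demo/seqs +experiments=cape demo.fast_mode=false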

Training and Evaluation

Install Additional Dependencies

Install kaolin for fast occupancy queries from meshes.

git clone https://github.com/NVIDIAGameWorks/kaolin
cd kaolin
git checkout v0.9.0
python setup.py develop

Minimally Clothed Human

Prepare Datasets

Download the AMASS dataset. We use the "DFaust Synthetic" and "PosePrior" subsets in SMPL-H format. Unzip the dataset into the data folder.

tar -xf DFaust67.tar.bz2 -C data
tar -xf MPILimits.tar.bz2 -C data

Preprocess the dataset:

python preprocess/sample_points.py --output_folder data/DFaust_processed
python preprocess/sample_points.py --output_folder data/MPI_processed --skip 10 --poseprior

Training

Run the following command to train for a specified subject:

python train.py subject=50002

Training logs are available on wandb (registration needed, free of charge). It should take ~12h on a single 2080Ti.

Evaluation

Run the following command to evaluate the method for a specified subject on within-distribution data (the DFaust test split):

python test.py subject=50002

and on out-of-distribution data (PosePrior):

python test.py subject=50002 datamodule=jointlim

Generate Animation

You can use the trained model to generate animation (same as in Quick Start):

python demo.py expname='dfaust' subject=50002 demo.motion_path='data/aist_demo/seqs'

Clothed Human

Training

Download the CAPE dataset and unzip it into the data folder.
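
A minimal sketch of the unpacking step, assuming the CAPE release comes as a zip archive; the archive filename below is hypothetical, so use whatever the download actually provides:

mkdir -p data
unzip /path/to/cape_release.zip -d data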

Run the following command to train for a specified subject and clothing type:

python train.py datamodule=cape subject=3375 datamodule.clothing='blazerlong' +experiments=cape

Training logs are available on wandb. It should take ~24h on a single 2080Ti.

Generate Animation

You can use the trained model to generate animation (same as in Quick Start):

python demo.py expname=cape subject=3375 demo.motion_path=data/aist_demo/seqs +experiments=cape

Acknowledgement

We use the pre-processing code from PTF and LEAP with some adaptations (./preprocess). The network and sampling parts of the code (lib/model/network.py and lib/model/sample.py) are implemented based on IGR and IDR. The code for extracting meshes (lib/utils/meshing.py) is adapted from NASA. Our implementation of Broyden's method (lib/model/broyden.py) is based on DEQ. We sincerely thank these authors for their awesome work.
