Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling (CVPR 2024)
News
- 09/15/2024: Released the templates of ActorsHQ (Actor01 & Actor04) to facilitate training.
- 05/22/2024: 📢 An extension of Animatable Gaussians for human avatar relighting is available here. Welcome to check it out!
- 03/11/2024: The code has been released. Welcome to have a try!
- 03/11/2024: The AvatarReX dataset, a high-resolution multi-view video dataset for avatar modeling, has been released.
- 02/27/2024: Animatable Gaussians was accepted by CVPR 2024!
Zhe Li¹, Zerong Zheng², Lizhen Wang¹, Yebin Liu¹
¹Tsinghua University, ²NNKosmos Technology
teaser.mp4
Abstract: Modeling animatable human avatars from RGB videos is a long-standing and challenging problem. Recent works usually adopt MLP-based neural radiance fields (NeRF) to represent 3D humans, but it remains difficult for pure MLPs to regress pose-dependent garment details. To this end, we introduce Animatable Gaussians, a new avatar representation that leverages powerful 2D CNNs and 3D Gaussian splatting to create high-fidelity avatars. To associate 3D Gaussians with the animatable avatar, we learn a parametric template from the input videos, and then parameterize the template on two front & back canonical Gaussian maps where each pixel represents a 3D Gaussian. The learned template is adaptive to the wearing garments for modeling looser clothes like dresses. Such template-guided 2D parameterization enables us to employ a powerful StyleGAN-based CNN to learn the pose-dependent Gaussian maps for modeling detailed dynamic appearances. Furthermore, we introduce a pose projection strategy for better generalization given novel poses. Overall, our method can create lifelike avatars with dynamic, realistic and generalized appearances. Experiments show that our method outperforms other state-of-the-art approaches.
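To make the representation above concrete, the sketch below illustrates the idea of a pose-dependent Gaussian map in PyTorch: a 2D network takes a pose-conditioned position map and predicts, at every pixel, the parameters of one 3D Gaussian. This is an illustrative toy only; the class name, channel layout, and the plain convolutional stack (standing in for the paper's StyleGAN-based CNN) are our assumptions, not the repository's actual code.

```python
# gaussian_map_sketch.py -- conceptual illustration of front/back Gaussian maps
# (names, channel layout, and the network are illustrative assumptions, not
# the repository's actual implementation)
import torch
import torch.nn as nn


class GaussianMapNet(nn.Module):
    """Maps a pose-conditioned position map to per-pixel Gaussian parameters."""

    def __init__(self, in_ch=3, hidden=64):
        super().__init__()
        # Per pixel: 3 position offset + 4 rotation quaternion + 3 scale
        # + 1 opacity + 3 color = 14 output channels.
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 14, 3, padding=1),
        )

    def forward(self, posed_position_map):
        # posed_position_map: (B, 3, H, W), canonical template positions
        # driven by the current body pose.
        out = self.net(posed_position_map)
        offset, quat, scale, opacity, color = torch.split(out, [3, 4, 3, 1, 3], dim=1)
        return {
            "xyz": posed_position_map + 0.01 * torch.tanh(offset),
            "rotation": nn.functional.normalize(quat, dim=1),
            "scale": torch.exp(scale.clamp(max=4.0)),
            "opacity": torch.sigmoid(opacity),
            "color": torch.sigmoid(color),
        }


# One map each for the front and back of the template; every valid pixel
# becomes one 3D Gaussian, so the splats can be predicted by a 2D CNN.
net = GaussianMapNet()
front = net(torch.randn(1, 3, 256, 256))
back = net(torch.randn(1, 3, 256, 256))
print(front["xyz"].shape)  # torch.Size([1, 3, 256, 256])
```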
We show avatars animated by challenging motions from the AMASS dataset.
basketball.mp4
More results:
football.mp4
dancing.mp4
irish_dancing.mp4
Installation

- Clone this repo.

```
git clone https://github.com/lizhe00/AnimatableGaussians.git
# or
git clone git@github.com:lizhe00/AnimatableGaussians.git
```
- Install the environment.

```
# install requirements
pip install -r requirements.txt

# install diff-gaussian-rasterization-depth-alpha
cd gaussians/diff_gaussian_rasterization_depth_alpha
python setup.py install
cd ../..

# install styleunet
cd network/styleunet
python setup.py install
cd ../..
```
- Download the SMPL-X model and place the pkl files in ./smpl_files/smplx.
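As a quick check after the steps above, the sketch below verifies that the compiled extensions import and that the SMPL-X files are found via the smplx PyPI package. The extension module names are assumptions taken from the setup directories and may differ from what each setup.py actually registers.

```python
# check_setup.py -- verify the compiled extensions and the SMPL-X files
# (extension module names below are assumptions, not confirmed by the repo)
import importlib

import smplx  # pip install smplx
import torch

print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")

for name in ["diff_gaussian_rasterization_depth_alpha", "styleunet"]:
    try:
        importlib.import_module(name)
        print(f"[ok] imported {name}")
    except ImportError as err:
        print(f"[!!] {name}: {err}")

# smplx.create() looks for smpl_files/smplx/SMPLX_NEUTRAL.pkl etc.
body_model = smplx.create("./smpl_files", model_type="smplx",
                          gender="neutral", ext="pkl")
output = body_model()  # zero pose / zero shape forward pass
print("SMPL-X vertices:", output.vertices.shape)  # (1, 10475, 3)
```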
Data Preparation

- Download the AvatarReX, ActorsHQ, or THuman4.0 dataset.
- Data preprocessing. We provide two options below; the first is recommended if you plan to use our pretrained models, because the renderer used in preprocessing may cause slight differences.
  - (Recommended) Download our preprocessed files from PREPROCESSED_DATASET.md and unzip them to the root path of each character.
  - Follow the instructions in gen_data/GEN_DATA.md to preprocess the dataset.
Note for the ActorsHQ dataset: 1) Data path. A subject from the ActorsHQ dataset may include more than one sequence, but we only use the first one, i.e., Sequence1, so the root path is ActorsHQ/Actor0*/Sequence1. 2) SMPL-X registration. We provide SMPL-X fittings for the ActorsHQ dataset; you can download them from here and place smpl_params.npz at the corresponding root path of each subject.
Please refer to gen_data/GEN_DATA.md to run on your own data.
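If you use the provided ActorsHQ SMPL-X registrations, a one-off way to confirm smpl_params.npz is in place is to list its contents. This is a minimal sketch; the path below is an example, and the key names printed are simply whatever the file stores.

```python
# inspect_smpl_params.py -- list the arrays stored in the downloaded
# SMPL-X registration file (key names are whatever the file provides)
import numpy as np

params = np.load("ActorsHQ/Actor01/Sequence1/smpl_params.npz")
for key in params.files:
    print(key, params[key].shape, params[key].dtype)
```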
Avatar Training

Take avatarrex_zzr from the AvatarReX dataset as an example, and run:

```
python main_avatar.py -c configs/avatarrex_zzr/avatar.yaml --mode=train
```

After training, the checkpoint will be saved in ./results/avatarrex_zzr/avatar.
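To train another subject, point -c at the corresponding config under configs/. The sketch below just flattens and prints a config so data and output paths are easy to locate before editing; the key names are whatever the yaml contains, not something this example defines.

```python
# inspect_config.py -- flatten and print a training config before editing
# (a sketch; the yaml key names are specific to this repo)
import yaml

with open("configs/avatarrex_zzr/avatar.yaml") as f:
    cfg = yaml.safe_load(f)


def walk(node, prefix=""):
    """Recursively print dotted key paths and their leaf values."""
    if isinstance(node, dict):
        for k, v in node.items():
            walk(v, f"{prefix}{k}.")
    else:
        print(f"{prefix[:-1]}: {node}")


walk(cfg)
```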
Avatar Animation

- Download the pretrained checkpoint from PRETRAINED_MODEL.md and unzip it to ./results/avatarrex_zzr/avatar, or train the network from scratch.
- Download the THuman4.0_POSE or AMASS dataset to acquire driving pose sequences. We list some awesome pose sequences from the AMASS dataset in configs/awesome_amass_poses.yaml. Specify the testing pose path in configs/avatarrex_zzr/avatar.yaml#L57.
- Run:

```
python main_avatar.py -c configs/avatarrex_zzr/avatar.yaml --mode=test
```

You will see animation results like the one below in ./test_results/avatarrex_zzr/avatar.
example_animation.mp4
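If you want a single video out of the rendered results, a small sketch like the one below can pack frames into an mp4. It assumes the test script wrote per-frame png images directly under the output directory, which may not match the actual layout of ./test_results.

```python
# frames_to_video.py -- pack rendered frames into an mp4 for quick viewing
# (a sketch: the frame path pattern is an assumption about the output layout)
# requires: pip install imageio imageio-ffmpeg
import glob

import imageio.v2 as imageio

frames = sorted(glob.glob("test_results/avatarrex_zzr/avatar/*.png"))
with imageio.get_writer("animation_preview.mp4", fps=30) as writer:
    for path in frames:
        writer.append_data(imageio.imread(path))
```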
Evaluation

We provide evaluation metrics and example code for comparison with body-only avatars in eval/comparison_body_only_avatars.py.
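For reference, the snippet below computes PSNR and SSIM, two metrics commonly reported for avatar rendering quality. It is a standalone sketch using scikit-image, not the repository's eval/comparison_body_only_avatars.py.

```python
# metrics_sketch.py -- generic per-image metrics of the kind used for avatar
# evaluation (a standalone sketch, not the repo's eval script)
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate(pred, gt):
    """pred, gt: uint8 RGB images of identical shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=2, data_range=255)
    return psnr, ssim


pred = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
gt = pred.copy()
print(evaluate(pred, gt))  # (inf, 1.0) for identical images
```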
Todo List

- [x] Release the code.
- [x] Release the AvatarReX dataset.
- [ ] Release all the checkpoints and preprocessed datasets. (Cancelled due to graduation. Please run other cases yourself with the provided configs.)
Acknowledgements

Our code is based on these wonderful repos:
Citation

If you find our code or data helpful to your research, please consider citing our paper.

```
@inproceedings{li2024animatablegaussians,
  title     = {Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling},
  author    = {Li, Zhe and Zheng, Zerong and Wang, Lizhen and Liu, Yebin},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024}
}
```