[NeurIPS 2023] PrimDiffusion: Volumetric Primitives Diffusion for 3D Human Generation


1 S-Lab, Nanyang Technological University  2 SenseTime Research
arXiv Paper

TL;DR

PrimDiffusion generates 3D humans by denoising a set of volumetric primitives.
Our method enables explicit control over pose, view, and shape, with real-time rendering at high resolution.
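As a rough illustration of the representation involved (the names, shapes, and resolutions below are our own assumptions for exposition, not the repo's API), a set of K volumetric primitives can be described by a per-primitive rigid transform plus a small voxel payload, which the diffusion model denoises as a flat parameter vector per primitive:

```python
import numpy as np

K, S = 512, 8  # number of primitives, per-primitive voxel resolution (assumed values)

# Each primitive carries a rigid transform, a scale, and an RGB + density voxel payload.
primitives = {
    "centers":   np.zeros((K, 3), dtype=np.float32),               # world-space positions
    "rotations": np.tile(np.eye(3, dtype=np.float32), (K, 1, 1)),  # per-primitive rotations
    "scales":    np.ones((K, 3), dtype=np.float32),                # axis-aligned extents
    "payload":   np.zeros((K, 4, S, S, S), dtype=np.float32),      # RGB + density per voxel
}

# Flatten to one parameter row per primitive, the kind of tensor a denoiser operates on.
flat = np.concatenate([v.reshape(K, -1) for v in primitives.values()], axis=1)
print(flat.shape)  # (K, 3 + 9 + 3 + 4 * S**3)
```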

Updates

[12/2023] Source code released! 🤩

[09/2023] PrimDiffusion has been accepted to NeurIPS 2023! 🥳

Citation

If you find our work useful for your research, please consider citing this paper:

```bibtex
@inproceedings{chen2023primdiffusion,
    title={PrimDiffusion: Volumetric Primitives Diffusion for 3D Human Generation},
    author={Zhaoxi Chen and Fangzhou Hong and Haiyi Mei and Guangcong Wang and Lei Yang and Ziwei Liu},
    booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
    year={2023}
}
```

Installation

We highly recommend using Anaconda to manage your Python environment. You can set up the required environment with the following commands:

```shell
# clone this repo
git clone https://github.com/FrozenBurning/PrimDiffusion
cd PrimDiffusion

# install python dependencies
conda env create -f environment.yaml
conda activate primdiffusion
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install pytorch3d -c pytorch3d
```

Build raymarching extensions:

```shell
cd dva
git clone https://github.com/facebookresearch/mvp
cd mvp/extensions/mvpraymarch
make -j4
```

Install EasyMocap:

```shell
git clone https://github.com/zju3dv/EasyMocap
cd EasyMocap
pip install --user .
```

Install xformers for speedup (optional): please refer to the official repo for installation instructions.
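After installation, a quick sanity check (our own sketch, not part of the repo) can confirm that the key dependencies resolve inside the `primdiffusion` environment without importing them:

```python
import importlib.util

def check_env(packages):
    # Map each package name to whether it can be resolved in this environment
    return {name: importlib.util.find_spec(name) is not None for name in packages}

# torch and pytorch3d are required by the steps above; xformers is optional,
# so a MISSING there only disables the speedup.
for name, ok in check_env(["torch", "pytorch3d", "xformers"]).items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")
```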

Inference

Download Pretrained Models

Download the sample data, necessary assets, and pretrained model from Google Drive.

Register and download the SMPL models here. Please store the SMPL model together with the downloaded files as follows:

```
├── ...
└── PrimDiffusion
    ├── visualize.py
    ├── README.md
    └── data
        ├── checkpoints
        │   └── primdiffusion.pt
        ├── smpl
        │   ├── basicModel_ft.npy
        │   ├── basicModel_vt.npy
        │   └── SMPL_NEUTRAL.pkl
        └── render_people
    ...
```

Visualize Denoising Process and Novel Views

You can run the following script to generate a 3D human with PrimDiffusion:

```shell
python visualize.py configs/primdiffusion_inference.yml ddim=False
```

Please specify the path to the pretrained model as `checkpoint_path` in the config file. Moreover, please specify `ddim=True` if you intend to use the 100-step DDIM sampler. The script will render and save videos under `output_dir`, which is specified in the config file.
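The inference settings mentioned above might sit in the config roughly like this (the layout and the example paths are our assumptions; consult `configs/primdiffusion_inference.yml` for the authoritative keys):

```yaml
# Sketch of the relevant inference settings (assumed layout, not the shipped config)
checkpoint_path: data/checkpoints/primdiffusion.pt  # pretrained model from Google Drive
output_dir: outputs/inference                       # rendered videos are written here
ddim: true                                          # use the 100-step DDIM sampler
```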

Training

Data Preparation

You can refer to the downloaded sample data at `./data/render_people` to prepare your own multiview dataset, and modify the corresponding paths in the config file.

Stage I Training

```shell
torchrun --nnodes=1 --nproc_per_node=8 --master_port=6666 train_stage1.py configs/renderpeople_stage1_fitting.yml
```

This will create a folder with checkpoints, the config, and a monitoring image at the `output_dir` specified in the config file.

Stage II Training

Please run the following command to launch training of the diffusion model. Set `pretrained_encoder` to the path of the latest checkpoint from Stage I. Training with mixed precision is supported by default; modify `train.amp` in the config file according to your needs.

```shell
torchrun --nnodes=1 --nproc_per_node=8 --master_port=6666 train_stage2.py configs/renderpeople_stage2_primdiffusion.yml
```

Note that we use 8 GPUs for training by default. Please adjust `--nproc_per_node` to the number of GPUs you have.
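For reference, the Stage II keys mentioned above might look like the following fragment (the layout and the checkpoint path are our assumptions; consult `configs/renderpeople_stage2_primdiffusion.yml` for the authoritative names):

```yaml
# Sketch of the Stage II settings (assumed layout, not the shipped config)
pretrained_encoder: outputs/stage1/checkpoint_latest.pt  # hypothetical path to the latest Stage I checkpoint
train:
  amp: true  # mixed-precision training, enabled by default
```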

License

Distributed under the S-Lab License. See LICENSE for more information. Part of the code is also subject to the LICENSE of DVA.

Acknowledgements

PrimDiffusion is implemented on top of DVA and Latent-Diffusion.
