# [NeurIPS 2023] PrimDiffusion: Volumetric Primitives Diffusion for 3D Human Generation
PrimDiffusion generates 3D humans by denoising a set of volumetric primitives. Our method enables explicit pose, view, and shape control with real-time rendering at high resolution.

## Updates

- [12/2023] Source code released! 🤩
- [09/2023] PrimDiffusion has been accepted to NeurIPS 2023! 🥳
## Citation

If you find our work useful for your research, please consider citing this paper:

```bibtex
@inproceedings{chen2023primdiffusion,
    title={PrimDiffusion: Volumetric Primitives Diffusion for 3D Human Generation},
    author={Zhaoxi Chen and Fangzhou Hong and Haiyi Mei and Guangcong Wang and Lei Yang and Ziwei Liu},
    booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
    year={2023}
}
```

## Installation

We highly recommend using Anaconda to manage your Python environment. You can set up the required environment with the following commands:
```bash
# clone this repo
git clone https://github.com/FrozenBurning/PrimDiffusion
cd PrimDiffusion

# install python dependencies
conda env create -f environment.yaml
conda activate primdiffusion
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install pytorch3d -c pytorch3d
```
Build raymarching extensions:
```bash
cd dva
git clone https://github.com/facebookresearch/mvp
cd mvp/extensions/mvpraymarch
make -j4
```
Install EasyMocap:
```bash
git clone https://github.com/zju3dv/EasyMocap
cd EasyMocap
pip install --user .
```
Install xformers for speedup (optional): please refer to the official repo for installation instructions.
## Inference

Download sample data, necessary assets, and the pretrained model from Google Drive.

Register and download SMPL models here. Please store the SMPL model together with the downloaded files as follows:
```
├── ...
└── PrimDiffusion
    ├── visualize.py
    ├── README.md
    └── data
        ├── checkpoints
        │   └── primdiffusion.pt
        ├── smpl
        │   ├── basicModel_ft.npy
        │   ├── basicModel_vt.npy
        │   └── SMPL_NEUTRAL.pkl
        └── render_people
            ...
```

You can run the following script to generate 3D humans with PrimDiffusion:
```bash
python visualize.py configs/primdiffusion_inference.yml ddim=False
```
Please specify the path to the pretrained model as `checkpoint_path` in the config file. Moreover, please specify `ddim=True` if you intend to use the 100-step DDIM sampler. The script will render and save videos under `output_dir`, which is specified by the config file.
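As a rough sketch, the relevant entries in `configs/primdiffusion_inference.yml` would look like the following (the exact nesting and any keys beyond `checkpoint_path` and `output_dir` are assumptions, so check the shipped config):

```yaml
# Hypothetical excerpt for illustration; verify against configs/primdiffusion_inference.yml.
checkpoint_path: ./data/checkpoints/primdiffusion.pt  # pretrained model downloaded from Google Drive
output_dir: ./outputs/inference                       # rendered videos are saved here (illustrative path)
```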
## Training

You can refer to the downloaded sample data at `./data/render_people` to prepare your own multi-view dataset and modify the corresponding path in the config file, as sketched below.
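For example, the dataset root in the training config might be repointed like this (the key names here are assumptions for illustration; consult `configs/renderpeople_stage1_fitting.yml` for the actual layout):

```yaml
# Hypothetical excerpt; the real key for the dataset root may be named differently.
dataset:
  root_dir: ./data/render_people  # replace with the path to your own multi-view dataset
```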
### Stage I

Run the following command to launch Stage I training, which fits volumetric primitives to the multi-view data:

```bash
torchrun --nnodes=1 --nproc_per_node=8 --master_port=6666 train_stage1.py configs/renderpeople_stage1_fitting.yml
```
This will create a folder with checkpoints, the config, and a monitoring image at the `output_dir` specified in the config file.
### Stage II

Please run the following command to launch the training of the diffusion model. Please set `pretrained_encoder` to the path of the latest checkpoint from Stage I. We also support training with mixed precision by default; please modify `train.amp` in the config file according to your usage (see the sketch after the command below).
```bash
torchrun --nnodes=1 --nproc_per_node=8 --master_port=6666 train_stage2.py configs/renderpeople_stage2_primdiffusion.yml
```
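A minimal sketch of the two fields mentioned above (the nesting and the checkpoint path are assumptions; verify against `configs/renderpeople_stage2_primdiffusion.yml`):

```yaml
# Hypothetical excerpt for illustration only.
pretrained_encoder: ./outputs/stage1/checkpoints/latest.pt  # latest Stage I checkpoint (illustrative path)
train:
  amp: true  # enable or disable mixed-precision training
```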
Note that we use 8 GPUs for training by default. Please adjust `--nproc_per_node` to the number of GPUs you want to use.
## License

Distributed under the S-Lab License. See `LICENSE` for more information. Parts of the code are also subject to the LICENSE of DVA.
## Acknowledgements

PrimDiffusion is implemented on top of the DVA and Latent-Diffusion codebases.