# INSTA - Instant Volumetric Head Avatars [CVPR 2023]
This repository is based on instant-ngp; some features of the original code are not available in this work. It builds on a specific instant-ngp commit, and the installation requirements are the same, so please follow the instant-ngp installation guide. Remember to use the `--recursive` option during cloning.
```bash
git clone --recursive https://github.com/Zielon/INSTA.git
cd INSTA
cmake . -B build
cmake --build build --config RelWithDebInfo -j
```
After building the project, you can either train an avatar from scratch or load a snapshot. For training, we recommend a graphics card at least as powerful as an RTX 3090 24 GB and 32 GB of RAM. Training on different hardware will probably require adjusting the options in the config:
```json
"max_cached_bvh": 4000,            # How many BVH data structures are cached
"max_images_gpu": 1700,            # How many frames are loaded to GPU. Adjust for a given GPU memory size.
"use_dataset_cache": true,         # Load images to RAM memory
"max_steps": 33000,                # Max training steps, after which the test sequence will be recorded
"render_novel_trajectory": false,  # Dumps additional camera trajectories after max steps
"render_from_snapshot": false      # For the --no-gui option, to directly render sequences
```
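As a starting point for smaller GPUs, one can scale `max_images_gpu` roughly in proportion to available memory. The sketch below is not part of the INSTA codebase, and the linear-scaling heuristic is an assumption; treat the result as a first guess to refine experimentally.

```python
# Hedged sketch (not part of INSTA): scale "max_images_gpu" for a GPU with
# less memory than the recommended RTX 3090 24 GB. Linear scaling is an
# assumption, not a documented rule.
def adjust_max_images_gpu(config: dict, gpu_mem_gb: float, baseline_gb: float = 24.0) -> dict:
    adjusted = dict(config)  # leave the original config untouched
    scaled = int(config["max_images_gpu"] * gpu_mem_gb / baseline_gb)
    adjusted["max_images_gpu"] = max(1, scaled)
    return adjusted

config = {"max_images_gpu": 1700, "use_dataset_cache": True, "max_steps": 33000}
laptop = adjust_max_images_gpu(config, gpu_mem_gb=8.0)  # e.g. an 8 GB laptop GPU
```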
Rendering from a snapshot does not require a high-end GPU and can be performed even on a laptop; we have tested it on an RTX 3080 8 GB laptop GPU. With the `--no-gui` option you can train and load a snapshot for rendering using the same config as with the GUI. The viewer options are the same as in instant-ngp, with an additional key `F` to raycast the FLAME mesh.
## Usage example
```bash
# Run the examples script without GUI
./run.sh
# Run cross-reenactment based on deformation gradient transfer
./run_transfer.sh
# Training
./build/rta --config insta.json --scene data/obama --height 512 --width 512
# Loading from a checkpoint
./build/rta --config insta.json --scene data/obama/transforms_test.json --snapshot data/obama/snapshot.msgpack
```
We are releasing part of our dataset together with publicly available preprocessed avatars from NHA, NeRFace, and IMAvatar. The output of the training (Record Video in the menu), including rendered frames, the checkpoint, etc., is saved in `./data/{actor}/experiments/{config}/debug`. After the specified number of max steps, the program automatically either renders frames using novel cameras (the `All` option in the GUI and `render_novel_trajectory` in the config) or only the currently selected option in `Mode`, by default `Overlay\Test`.
Available avatars: click the selected avatar to download the training dataset and the checkpoint. The avatars have to be placed in the `data` folder.
For the input generation, a conda environment and a few other repositories are needed. Simply run `install.sh` from the `scripts` folder to prepare the workbench. Next, you can use the Metrical Photometric Tracker to track a sequence. After the processing is done, run the `generate.sh` script to prepare the sequence; as input, specify the absolute path of the tracker output. For training we recommend at least 1000 frames.
```bash
# 1) Run the Metrical Photometric Tracker for a selected actor
python tracker.py --cfg ./configs/actors/duda.yml

# 2) Generate a dataset using the script. Importantly, use absolute paths
#    for the tracker output and the desired destination.
./generate.sh /metrical-tracker/output/duda INSTA/data/duda 100   # {input} {output} {# of test frames from the end}
```
If you use this project in your research, please cite INSTA:
```bibtex
@proceedings{INSTA:CVPR2023,
  author  = {Zielonka, Wojciech and Bolkart, Timo and Thies, Justus},
  title   = {Instant Volumetric Head Avatars},
  journal = {Conference on Computer Vision and Pattern Recognition},
  year    = {2023}
}
```