INSTA - Instant Volumetric Head Avatars [CVPR2023]


Max Planck Institute for Intelligent Systems, Tübingen, Germany

Official Repository for CVPR 2023 paper Instant Volumetric Head Avatars

This repository is based on instant-ngp; some features of the original code are not available in this work.

⚠ We also prepared a PyTorch demo version of the project: INSTA Pytorch

Installation

This repository is based on a specific instant-ngp commit. The installation requirements are the same, so please follow the instant-ngp guide. Remember to use the --recursive option when cloning:

```bash
git clone --recursive https://github.com/Zielon/INSTA.git
cd INSTA
cmake . -B build
cmake --build build --config RelWithDebInfo -j
```

Usage and Requirements

After building the project, you can either train an avatar from scratch or load a snapshot. For training, we recommend a graphics card at least as powerful as an RTX 3090 (24 GB) and 32 GB of RAM. Training on different hardware will probably require adjusting the options in the config:

"max_cached_bvh": 4000,# How many BVH data structures are cached"max_images_gpu": 1700,# How many frames are loaded to GPU. Adjust for a given GPU memory size."use_dataset_cache": true,# Load images to RAM memory"max_steps": 33000,# Max training steps after which test sequence will be recorded"render_novel_trajectory": false,# Dumps additional camera trajectories after max steps"render_from_snapshot":false# For --no-gui option to directly render sequences

Rendering from a snapshot does not require a high-end GPU and can be performed even on a laptop; we have tested it on an RTX 3080 (8 GB, laptop version). With the --no-gui option, you can train and load a snapshot for rendering using the same config as with the GUI. The viewer options are the same as in instant-ngp, plus an additional key, F, to raycast the FLAME mesh.
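For example, a headless render from a pretrained snapshot could be launched as follows (a sketch that combines the --no-gui option with the flags from the usage example below; it assumes "render_from_snapshot": true is set in the config):

```bash
# Hedged sketch: render the test sequence headlessly from a snapshot.
# Assumes "render_from_snapshot": true in insta.json.
./build/rta --config insta.json \
            --scene data/obama/transforms_test.json \
            --snapshot data/obama/snapshot.msgpack \
            --no-gui
```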

Usage example

```bash
# Run the examples script without a GUI
./run.sh

# Run cross-reenactment based on deformation gradient transfer
./run_transfer.sh

# Training
./build/rta --config insta.json --scene data/obama --height 512 --width 512

# Loading from a checkpoint
./build/rta --config insta.json --scene data/obama/transforms_test.json --snapshot data/obama/snapshot.msgpack
```

Dataset and Training

We are releasing part of our dataset together with publicly available preprocessed avatars from NHA, NeRFace, and IMAvatar. The output of the training (Record Video in the menu), including rendered frames, checkpoints, etc., will be saved in ./data/{actor}/experiments/{config}/debug. After the specified number of max steps, the program will automatically render frames either from novel cameras (the All option in the GUI, or render_novel_trajectory in the config) or only from the camera currently selected in Mode, by default Overlay\Test.

Available avatars: click the selected avatar to download the training dataset and the checkpoint. The avatars have to be placed in the data folder.
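After downloading, an avatar folder should look roughly like the sketch below (a hypothetical layout inferred from the paths used elsewhere in this README; the exact contents may differ):

```
data/
└── obama/
    ├── transforms_test.json   # test-split transforms, used when loading from a checkpoint
    ├── snapshot.msgpack       # pretrained checkpoint
    └── experiments/           # training outputs (created during training)
```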

Dataset Generation

Input generation requires a conda environment and a few other repositories. Simply run install.sh from the scripts folder to prepare the workbench.
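For instance (a minimal sketch, assuming the commands are run from the repository root):

```bash
cd scripts
./install.sh
```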

Next, you can use the Metrical Photometric Tracker to track a sequence. After the processing is done, run the generate.sh script to prepare the sequence. As input, please specify the absolute path of the tracker output.

For training, we recommend at least 1000 frames.

```bash
# 1) Run the Metrical Photometric Tracker for a selected actor
python tracker.py --cfg ./configs/actors/duda.yml

# 2) Generate a dataset using the script. Importantly, use absolute paths for the tracker input and the desired output.
./generate.sh /metrical-tracker/output/duda INSTA/data/duda 100
#                      {input}               {output}     {# of test frames from the end}
```
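Putting the pipeline together, an end-to-end run for a new actor might look like the following (a hedged sketch that chains the commands above; the actor name, its tracker config, and the paths are placeholders you must adapt):

```bash
# Hypothetical actor "alice"; all paths below are assumptions.
ACTOR=alice
python tracker.py --cfg ./configs/actors/${ACTOR}.yml              # requires a tracker config you provide
./generate.sh /metrical-tracker/output/${ACTOR} $(pwd)/data/${ACTOR} 100
./build/rta --config insta.json --scene data/${ACTOR} --height 512 --width 512
```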

Citation

If you use this project in your research, please cite INSTA:

```bibtex
@proceedings{INSTA:CVPR2023,
  author  = {Zielonka, Wojciech and Bolkart, Timo and Thies, Justus},
  title   = {Instant Volumetric Head Avatars},
  journal = {Conference on Computer Vision and Pattern Recognition},
  year    = {2023}
}
```
