
Official implementation of the CVPR 2020 paper "VIBE: Video Inference for Human Body Pose and Shape Estimation".


Check our YouTube videos below for more details.

Paper Video | Qualitative Results

VIBE: Video Inference for Human Body Pose and Shape Estimation,
Muhammed Kocabas, Nikos Athanasiou, Michael J. Black,
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020

Features

Video Inference for Body Pose and Shape Estimation (VIBE) is a video pose and shape estimation method. It predicts the parameters of the SMPL body model for each frame of an input video. Please refer to our arXiv report for further details.

This implementation:

  • has the demo and training code for VIBE implemented purely in PyTorch,
  • can work on arbitrary videos with multiple people,
  • supports both CPU and GPU inference (though GPU is way faster),
  • is fast, up to 30 FPS on an RTX 2080Ti (see this table),
  • achieves SOTA results on the 3DPW and MPI-INF-3DHP datasets,
  • includes a Temporal SMPLify implementation,
  • includes the training code and detailed instructions on how to train it from scratch,
  • can create FBX/glTF output to be used with major graphics software.
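
Since the predictions are per-frame SMPL pose (72-D) and shape (10-D) parameters, they can be pushed through an SMPL layer to recover the body mesh. The snippet below is a minimal sketch using the smplx package (listed under References); the model path, the neutral-gender choice, and the placeholder tensors are assumptions for illustration only.

# Minimal sketch: turn predicted SMPL parameters into a mesh with the smplx package.
# Assumptions: smplx is installed and an SMPL model file lives under ./models/smpl;
# the 72-D pose (3 global + 69 body, axis-angle) and 10-D betas follow the SMPL convention.
import torch
import smplx

smpl = smplx.create(model_path="models", model_type="smpl", gender="neutral")

pose = torch.zeros(1, 72)    # per-frame VIBE pose prediction, placeholder here
betas = torch.zeros(1, 10)   # per-frame VIBE shape prediction, placeholder here

output = smpl(
    global_orient=pose[:, :3],   # first 3 values: root orientation
    body_pose=pose[:, 3:],       # remaining 69 values: 23 body joints x 3
    betas=betas,
)
vertices = output.vertices       # (1, 6890, 3) SMPL mesh vertices
joints = output.joints           # 3D joint locations
print(vertices.shape, joints.shape)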

Updates

  • 05/01/2021: Windows installation tutorial is added thanks to the amazing @carlosedubarreto.
  • 06/10/2020: Support for OneEuroFilter smoothing is added (see the sketch below).
  • 14/09/2020: FBX/glTF conversion script is released.
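
As background on the smoothing option above: the one-euro filter is an adaptive low-pass filter whose cutoff frequency grows with the speed of the signal, so slow motion is smoothed strongly while fast motion stays responsive. The sketch below is a generic illustration of that idea, not the smoothing code shipped in this repository; the parameter defaults are illustrative.

# Generic one-euro filter for a scalar signal sampled at a fixed rate (Hz).
import math

class OneEuroFilter:
    def __init__(self, freq, min_cutoff=1.0, beta=0.0, d_cutoff=1.0):
        self.freq = freq              # sampling frequency in Hz
        self.min_cutoff = min_cutoff  # baseline cutoff (lower = smoother)
        self.beta = beta              # how much the cutoff grows with speed
        self.d_cutoff = d_cutoff      # cutoff used for the derivative estimate
        self.x_prev = None
        self.dx_prev = 0.0

    def _alpha(self, cutoff):
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau * self.freq)

    def __call__(self, x):
        if self.x_prev is None:
            self.x_prev = x
            return x
        # Smoothed derivative of the signal.
        dx = (x - self.x_prev) * self.freq
        a_d = self._alpha(self.d_cutoff)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
        # Cutoff adapts to how fast the signal is changing.
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff)
        x_hat = a * x + (1.0 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat

# Usage: smooth one joint coordinate over a 30 FPS sequence.
f = OneEuroFilter(freq=30.0, min_cutoff=1.0, beta=0.5)
smoothed = [f(v) for v in [0.0, 0.1, 0.05, 0.2, 0.15]]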

Getting Started

VIBE has been implemented and tested on Ubuntu 18.04 with Python >= 3.7. It supports both GPU and CPU inference. If you don't have a suitable device, try running our Colab demo.

Clone the repo:

git clone https://github.com/mkocabas/VIBE.git

Install the requirements using virtualenv or conda:

# pip
source scripts/install_pip.sh

# conda
source scripts/install_conda.sh

Running the Demo

We have prepared a nice demo code to run VIBE on arbitrary videos. First, you need to download the required data (i.e. our trained model and SMPL model parameters). To do this, you can just run:

source scripts/prepare_data.sh

Then, running the demo is as simple as:

# Run on a local video
python demo.py --vid_file sample_video.mp4 --output_folder output/ --display

# Run on a YouTube video
python demo.py --vid_file https://www.youtube.com/watch?v=wPZP8Bwxplo --output_folder output/ --display

Refer to doc/demo.md for more details about the demo code.
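
The demo writes its predictions to a pickle file (the FBX conversion step below reads output/sample_video/vibe_output.pkl). A quick way to inspect that file is sketched here; loading with joblib and the per-tracklet key names (e.g. 'pose', 'betas') are assumptions, so check doc/demo.md for the authoritative output format.

# Sketch: inspect VIBE demo output. Assumes the file is a joblib pickle keyed by
# person/tracklet id; the inner key names are assumptions, not guaranteed.
import joblib

results = joblib.load("output/sample_video/vibe_output.pkl")

for person_id, track in results.items():
    print(f"tracklet {person_id}:")
    for key, value in track.items():
        # e.g. 'pose' (N, 72), 'betas' (N, 10), 'frame_ids' (N,) if present
        shape = getattr(value, "shape", None)
        print(f"  {key}: {shape if shape is not None else type(value)}")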

Sample demo output with the --sideview flag:

FBX and glTF output (New Feature!)

We provide a script to convert VIBE output to standalone FBX/glTF files to be used in 3D graphics tools like Blender, Unity, etc. You need to follow the steps below to be able to run the conversion script.

  • You need to download FBX files for the SMPL body model:
    • Go to the SMPL website and create an account.
    • Download the Unity-compatible FBX file through the link.
    • Unzip the contents and locate them in data/SMPL_unity_v.1.0.0.
  • Install the Blender Python API.
    • Note that we tested our script with Blender v2.8.0 and v2.8.3.
  • Run the command below to convert VIBE output to FBX:
python lib/utils/fbx_output.py \
    --input output/sample_video/vibe_output.pkl \
    --output output/sample_video/fbx_output.fbx \ # specify the file extension as *.glb for glTF
    --fps_source 30 \
    --fps_target 30 \
    --gender <male or female> \
    --person_id <tracklet id from VIBE output>

Windows Installation Tutorial

You can follow the instructions provided by @carlosedubarreto to install and run VIBE on a Windows machine.

Google Colab

If you do not have a suitable environment to run this project, you can give Google Colab a try. It allows you to run the project in the cloud, free of charge. You may try our Colab demo using the notebook we have prepared: Open In Colab

Training

Run the commands below to start training:

source scripts/prepare_training_data.sh

python train.py --cfg configs/config.yaml

Note that the training datasets should be downloaded and prepared before running the data processing script. Please see doc/train.md for details on how to prepare them.

Evaluation

Here we compare VIBE with recent state-of-the-art methods on 3D pose estimation datasets. The evaluation metric is Procrustes-Aligned Mean Per Joint Position Error (PA-MPJPE) in mm.

Models       | 3DPW ↓ | MPI-INF-3DHP ↓ | H36M ↓
SPIN         | 59.2   | 67.5           | 41.1
Temporal HMR | 76.7   | 89.8           | 56.8
VIBE         | 56.5   | 63.4           | 41.5

See doc/eval.md to reproduce the results in this table or evaluate a pretrained model.
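
For reference, PA-MPJPE rigidly aligns the predicted joints to the ground truth with a similarity (Procrustes) transform, i.e. optimal scale, rotation, and translation, and then averages the per-joint Euclidean distances. The NumPy sketch below illustrates the metric; it is a generic implementation, not the evaluation code behind doc/eval.md.

# Generic PA-MPJPE sketch (not the repo's evaluation code). pred and gt are (J, 3)
# joint arrays in the same units (e.g. mm); the alignment is a similarity transform
# (scale, rotation, translation) estimated via SVD, as in standard Procrustes analysis.
import numpy as np

def pa_mpjpe(pred, gt):
    # Center both joint sets.
    mu_pred, mu_gt = pred.mean(axis=0), gt.mean(axis=0)
    x, y = pred - mu_pred, gt - mu_gt

    # Optimal rotation via SVD of the cross-covariance matrix.
    U, S, Vt = np.linalg.svd(x.T @ y)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T

    # Optimal scale and translation.
    scale = S.sum() / (x ** 2).sum()
    t = mu_gt - scale * (R @ mu_pred)

    aligned = scale * (pred @ R.T) + t
    return np.linalg.norm(aligned - gt, axis=1).mean()

# Example: identical joint sets give an error of (close to) zero.
joints = np.random.rand(14, 3) * 1000.0
print(pa_mpjpe(joints, joints))   # ~0.0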

Correction: Due to a mistake in dataset preprocessing, the results for VIBE trained with 3DPW in Table 1 of the original paper are not correct. Besides, even though training with 3DPW guarantees better quantitative performance, it does not give good qualitative results. The arXiv version will be updated with the corrected results.

Citation

@inproceedings{kocabas2019vibe,
  title = {VIBE: Video Inference for Human Body Pose and Shape Estimation},
  author = {Kocabas, Muhammed and Athanasiou, Nikos and Black, Michael J.},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2020}
}

License

This code is available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using this code you agree to the terms in the LICENSE. Third-party datasets and software are subject to their respective licenses.

References

We indicate if a function or script is borrowed externally inside each file. Here are some great resources we benefit from:

  • Pretrained HMR and some functions are borrowed from SPIN.
  • SMPL models and layer are from the SMPL-X model.
  • Some functions are borrowed from Temporal HMR.
  • Some functions are borrowed from HMR-pytorch.
  • Some functions are borrowed from Kornia.
  • Pose tracker is from STAF.
