classner/up

Official code repository for the paper "Unite the People – Closing the Loop Between 3D and 2D Human Representations".

Requirements:

  • OpenCV (on Ubuntu, e.g., install libopencv-dev and python-opencv).
  • SMPL (download at http://smpl.is.tue.mpg.de/downloads) and unzip to a place of your choice.
  • OpenDR (just run pip install opendr; unfortunately, this can't be done automatically with the setuptools requirements).
  • If you want to train a segmentation model, Deeplab V2 (https://bitbucket.org/aquariusjay/deeplab-public-ver2) with a minimal patch applied (found in the subdirectory patches) that enables on-the-fly mirroring of the segmented images. Since I didn't use the MATLAB interface and did not care about fixing related errors, I just deleted src/caffe/layers/mat_{read,write}_layer.cpp as well as src/caffe/util/matio_io.cpp and built with -DWITH_matlab=Off.
  • If you want to train a pose model, the Deepercut caffe (https://github.com/eldar/deepcut-cnn).
  • If you want to get deepercut-cnn predictions, download the deepercut.caffemodel file and place it in models/pose/deepercut.caffemodel.
  • Edit the file config.py to set up the paths.
  • Register on https://smpl.is.tue.mpg.de/ to obtain a SMPL license and place the model file at models/3D/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl.

The rest of the requirements are then automatically installed when running:

python setup.py develop
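
For orientation, the manual steps from the requirements list above might look like the following sketch on Ubuntu (the source paths for the downloaded SMPL model and Deepercut weights are placeholders; adjust them to your local setup):

# Install OpenCV and OpenDR (not covered by the setuptools requirements).
sudo apt-get install libopencv-dev python-opencv
pip install opendr

# Place the SMPL model and the Deepercut weights where the code expects them
# (source paths are placeholders for your own downloads).
cp /path/to/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl models/3D/
cp /path/to/deepercut.caffemodel models/pose/

# Adjust the paths in config.py, then install the remaining requirements.
python setup.py develop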

Folder structure

For each of the tasks we described, there is one subfolder with the related executables. All files that are used for training or testing models are executable and provide a full synopsis when run with the --help option. In the respective tools subfolder for each task, there is a create_dataset.py script to summarize the data in the proper formats. This must usually be run before the training script. The models folder contains pretrained models and infos, patches a patch for deeplab caffe, tests some Python tests, and up_tools some Python tools that are shared between modalities.
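
Summarized, the top-level layout looks roughly like this:

3dfit/          adjusted SMPLify code for fitting bodies to keypoints
direct3d/       direct 3D fitting using regression forests
pose/           91 keypoint pose prediction
segmentation/   31 part segmentation
models/         pretrained models and infos
patches/        patch for deeplab caffe
tests/          Python tests
up_tools/       Python tools shared between modalities
config.py       path configuration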

There is a Docker image available that has been created by TheWebMonks (not affiliated with the authors): https://github.com/TheWebMonks/demo-2d3d .

Bodyfit

The adjusted SMPLify code to fit bodies to 91 keypoints is located in the folder 3dfit. It can be used for 14 or 91 keypoints. Use the script 3dfit/render.py to render a fitted body.
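
As with all executables in this repository, the --help synopsis documents the exact arguments; a hypothetical invocation could look like this (the file names are placeholders, not the script's actual interface):

python 3dfit/render.py --help
# Hypothetical example; consult the synopsis for the real arguments:
python 3dfit/render.py fit_result.pkl rendered_body.png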

Direct 3D fitting using regression forests

The relevant files are in the folder direct3d. Run run_partforest_training.sh to train all regressors. After that, you can use bodyfit.py to get predictions from estimated keypoints of the 91 keypoint pose predictor.
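
A minimal workflow sketch, run from the repository root (argument details come from each script's --help synopsis):

# Train all part regression forests.
./direct3d/run_partforest_training.sh
# Then check bodyfit.py's synopsis for how to feed it estimated keypoints.
python direct3d/bodyfit.py --help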

91 keypoint pose prediction

The pose folder contains the infrastructure for 91 keypoint pose prediction. Use the script pose/tools/create_dataset.py with a dataset name of your choice and a target person size of 500 pixels to create the pose data from UP-3D; alternatively, download it from our website. A sketch of that call follows after the next paragraph's training commands.
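
A sketch of the dataset creation call (the argument order is an assumption and the dataset name is a placeholder; check the --help synopsis for the actual interface):

# Hypothetical: dataset name of your choice, target person size of 500 pixels.
python pose/tools/create_dataset.py my_up3d_pose 500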

Configure a model by creating the model configuration folder pose/training/config/modelname by cloning the pose model. Then you can run run.sh {train,test,evaluate,trfull,tefull,evfull} modelname to run training, testing, or evaluation on either the reduced training set with the held-out validation set as test data, or the full training set and real test data. We initialized our training from the original ResNet models (https://github.com/KaimingHe/deep-residual-networks). You can do so by downloading the model and saving it as pose/training/config/modelname/init.caffemodel.
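
Assuming run.sh lives next to the configuration folders in pose/training (an assumption; adjust to where the script actually resides), the described modes map to calls like these:

cd pose/training
./run.sh train modelname     # reduced training set, held-out validation as test data
./run.sh test modelname
./run.sh evaluate modelname
./run.sh trfull modelname    # full training set, real test data
./run.sh tefull modelname
./run.sh evfull modelname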

The pose.py script will produce a pose prediction for an image. It assumes that a model with the name pose has been trained (or downloaded). We normalize the training images w.r.t. person size, which is why the model works best for images with a rough person height of 500 pixels. Multiple people are not taken into account; for every joint, the arg max position over the full image is used.
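
A hypothetical prediction call (the script's location inside the pose folder and the image argument are assumptions; the --help synopsis has the actual interface):

python pose/pose.py --help
# Hypothetical: works best when the person is roughly 500 px tall.
python pose/pose.py my_image.jpg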

31 part segmentation

The folder setup is just as for the keypoint estimation: use segmentation/tools/create_dataset.py to create a segmentation dataset from the UP-3D data, or download it (again, we used a target person size of 500). Then use run.sh {train,test,evaluate,trfull,tefull,evfull} modelname as described above to create your models. The segmentation.py script can be used to get segmentation results from an image for the model named segmentation. We initialized our models from the trained Deeplab models available here. Move the model file to segmentation/training/modelname/init.caffemodel.
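
The analogous sketch for segmentation (same caveats as for the pose scripts: run.sh location, argument order, and file names are assumptions to be checked against the --help synopses):

python segmentation/tools/create_dataset.py my_up3d_seg 500
cd segmentation/training && ./run.sh train modelname
# Hypothetical invocation for a single image:
python segmentation/segmentation.py my_image.jpg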

Website, citation, license

You can find more information on the website. If you use this code for your research, please consider citing us:

@inproceedings{Lassner:UP:2017,
  title = {Unite the People: Closing the Loop Between 3D and 2D Human Representations},
  author = {Lassner, Christoph and Romero, Javier and Kiefel, Martin and Bogo, Federica and Black, Michael J. and Gehler, Peter V.},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  month = jul,
  year = {2017},
  url = {http://up.is.tuebingen.mpg.de},
  month_numeric = {7}
}

License: Creative Commons Non-Commercial 4.0.

The code for 3D fitting is based on the SMPLify code. Parts of the files in the folder up_tools (capsule_ch.py, capsule_man.py, max_mixture_prior.py, robustifiers.py, sphere_collisions.py) as well as the model models/3D/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl fall under the SMPLify license conditions.
