Paper | Open In Colab

This repository contains the official PyTorch implementation of the CVPR 2021 paper:

SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements
Qianli Ma, Shunsuke Saito, Jinlong Yang, Siyu Tang, and Michael J. Black
Full paper | Video | Project website | Poster

Installation

  • The code has been tested with python 3.6 on both (Ubuntu 18.04 + CUDA 10.0) and (Ubuntu 20.04 + CUDA 11.1).

  • First, in the folder of this SCALE repository, run the following commands to create a new virtual environment and install dependencies:

    python3 -m venv $HOME/.virtualenvs/SCALE
    source $HOME/.virtualenvs/SCALE/bin/activate
    pip install -U pip setuptools
    pip install -r requirements.txt
    mkdir checkpoints
  • Install the Chamfer Distance package (MIT license, taken from this implementation); an illustrative sketch of what it computes is given after this list. Note: the compilation is verified to work under CUDA 10.0, but may not be compatible with later CUDA versions.

    cd chamferdist
    python setup.py install
    cd ..
  • You are now good to go with the next steps! All the commands below are assumed to be run from the SCALE repository folder, within the virtual environment created above.
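For orientation, the compiled package provides a (bidirectional) Chamfer distance between point clouds, which serves as a point-set reconstruction loss. The snippet below is a minimal pure-PyTorch sketch of the quantity being computed; it is only an illustration under our own naming, not the compiled extension's API, and whether squared or unsquared distances are used is a detail of the actual implementation.

```python
import torch

def chamfer_distance(pred, target):
    """Symmetric Chamfer distance between two batched point clouds.

    pred:   (B, N, 3) predicted points
    target: (B, M, 3) reference points
    """
    # Pairwise Euclidean distances between every predicted and reference point.
    d = torch.cdist(pred, target)                             # (B, N, M)
    pred_to_target = d.min(dim=2).values.pow(2).mean(dim=1)   # nearest reference per predicted point
    target_to_pred = d.min(dim=1).values.pow(2).mean(dim=1)   # nearest prediction per reference point
    return (pred_to_target + target_to_pred).mean()

# Toy usage with random clouds of different sizes.
print(chamfer_distance(torch.rand(2, 1024, 3), torch.rand(2, 2048, 3)).item())
```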

Run SCALE

  • Download our pre-trained model weights and unzip them under the checkpoints folder, such that the checkpoints' path is <SCALE repo folder>/checkpoints/SCALE_demo_00000_simuskirt/<checkpoint files>.

  • Download the packed data for the demo and unzip it under the data/ folder, such that the data file paths are <SCALE repo folder>/data/packed/00000_simuskirt/<train,test,val split>/<data npz files>.

  • With the data and pre-trained model ready, the following command will generate a sequence of .ply files of the teaser dancing animation in results/saved_samples/SCALE_demo_00000_simuskirt:

    python main.py --config configs/config_demo.yaml
  • To render images of the generated point sets, run the following command:

    python render/o3d_render_pcl.py --model_name SCALE_demo_00000_simuskirt

    The images (with both the point normal coloring and patch coloring) will be saved under results/rendered_imgs/SCALE_demo_00000_simuskirt.
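If you only want to inspect a single generated frame interactively instead of rendering the whole sequence, a minimal Open3D sketch like the one below works. The glob pattern is an assumption about where the demo writes its output; adjust it to the actual file names produced by your run.

```python
import glob
import open3d as o3d

# Pick the first generated point cloud from the demo output folder.
ply_files = sorted(glob.glob(
    "results/saved_samples/SCALE_demo_00000_simuskirt/**/*.ply", recursive=True))
pcd = o3d.io.read_point_cloud(ply_files[0])
print(pcd)  # basic stats: number of points, whether normals/colors are present

# Open an interactive viewer window.
o3d.visualization.draw_geometries([pcd])
```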

Train SCALE

Training demo with our data examples

  • Assume the demo training data is downloaded from the previous step under data/packed/. Now run:

    python main.py --config configs/config_train_demo.yaml

    The training will start!

  • The code will also save the loss curves in the TensorBoard logs under tb_logs/<date>/SCALE_train_demo_00000_simuskirt (see the sketch after this list for one way to read them).

  • Examples from the validation set will be saved every 10 epochs (configurable) at results/saved_samples/SCALE_train_demo_00000_simuskirt/val.

  • Note: the training data provided above are only for demonstration purposes. Due to their very limited number of frames, they are unlikely to yield a satisfying model. Please refer to the README files in the data/ and lib_data/ folders for more information on how to process your customized data.
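To monitor training, you can point the standard TensorBoard CLI at the log folder (`tensorboard --logdir tb_logs`). If you prefer to pull the logged scalar curves into Python, a sketch along these lines works; the tag names depend on the run and should be checked against `ea.Tags()` rather than assumed.

```python
import glob
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Locate the most recent training run (the <date> subfolder is created at training time).
run_dirs = sorted(glob.glob("tb_logs/*/SCALE_train_demo_00000_simuskirt"))
ea = EventAccumulator(run_dirs[-1])
ea.Reload()

# List the scalar tags that were actually logged, then read one of them.
scalar_tags = ea.Tags()["scalars"]
print(scalar_tags)
for event in ea.Scalars(scalar_tags[0]):
    print(event.step, event.value)
```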

Training with your own data

We provide example code in lib_data/ to assist you in adapting your own data to the format required by SCALE. Please refer to lib_data/README for more details.
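When preparing your own data, it can help to first look at how the provided packed demo frames are structured. The snippet below simply lists the arrays stored in one packed .npz file; it makes no assumption about the specific keys, which are documented in the lib_data/ and data/ READMEs.

```python
import glob
import numpy as np

# Inspect one packed demo frame to see which arrays a SCALE data sample contains.
npz_files = sorted(glob.glob("data/packed/00000_simuskirt/*/*.npz"))
with np.load(npz_files[0]) as data:
    for key in data.files:
        print(f"{key}: shape={data[key].shape}, dtype={data[key].dtype}")
```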

News

  • [2021/10/29] We now provide the packed, SCALE-compatible CAPE data on the CAPE dataset website. Simply register as a user there to access the download links (at the bottom of the Download page).
  • [2021/06/24] Code online!

License

Software Copyright License for non-commercial scientific research purposes. Please read the terms and conditions and any accompanying documentation carefully before you download and/or use the SCALE code, including the scripts, animation demos and pre-trained models. By downloading and/or using the Model & Software (including downloading, cloning, installing, and any other use of this GitHub repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

The SMPL body related files (including assets/{smpl_faces.npy, template_mesh_uv.obj} and the UV masks under assets/uv_masks/) are subject to the license of the SMPL model. The provided demo data (including the body pose and the meshes of clothed human bodies) are subject to the license of the CAPE Dataset. The Chamfer Distance implementation is subject to its original license.

Related Research

SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks (CVPR 2021)
Shunsuke Saito, Jinlong Yang, Qianli Ma, Michael J. Black

Our implicit solution to pose-dependent shape modeling: cycle-consistent implicit skinning fields + locally pose-aware implicit function = a fully animatable avatar with an implicit surface from raw scans, without surface registration!

Learning to Dress 3D People in Generative Clothing (CVPR 2020)
Qianli Ma, Jinlong Yang, Anurag Ranjan, Sergi Pujades, Gerard Pons-Moll, Siyu Tang, Michael J. Black

CAPE --- a generative model and a large-scale dataset for 3D clothed human meshes in varied poses and garment types. We trained SCALE using the CAPE dataset, check it out!

Citations

    @inproceedings{Ma:CVPR:2021,
      title = {{SCALE}: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements},
      author = {Ma, Qianli and Saito, Shunsuke and Yang, Jinlong and Tang, Siyu and Black, Michael J.},
      booktitle = {Proceedings IEEE/CVF Conf.~on Computer Vision and Pattern Recognition (CVPR)},
      month = jun,
      year = {2021},
      month_numeric = {6}
    }
