
Code and dataset for the CVPR 2018 paper "VITON: An Image-based Virtual Try-on Network"

Person representation extraction

The person representations used in this paper are extracted by a 2D pose estimator and a human parser.

Thanks @MosbehBarhoumi for creating a Colab Notebook for quickly preprocessing the data.
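For reference, the paper's clothing-agnostic person representation concatenates 18 pose-keypoint heatmaps, a coarse body-shape mask, and the face/hair region into a 22-channel map. Below is a minimal NumPy sketch of that assembly; the function name, array shapes, and parser label ids are illustrative assumptions, not the repository's code.

```python
# Minimal NumPy sketch of the 22-channel clothing-agnostic person
# representation (18 pose heatmaps + 1 coarse body-shape mask + face/hair
# RGB). Shapes and label ids are assumptions for illustration only.
import numpy as np

H, W = 256, 192  # working resolution used in the paper

def person_representation(keypoints, parsing, image):
    """keypoints: (18, 2) array of (x, y) joints, NaN if undetected;
    parsing: (H, W) integer human-parsing map (0 = background);
    image: (H, W, 3) float RGB person image."""
    # 1) Pose heatmaps: an 11x11 patch of ones around each detected keypoint.
    pose = np.zeros((H, W, 18), dtype=np.float32)
    for k, (x, y) in enumerate(keypoints):
        if np.isnan(x) or np.isnan(y):
            continue
        x, y = int(round(x)), int(round(y))
        pose[max(y - 5, 0):y + 6, max(x - 5, 0):x + 6, k] = 1.0
    # 2) Body shape: foreground mask downsampled to a coarse grid so the
    #    original clothing silhouette cannot simply be copied.
    body = (parsing > 0).astype(np.float32)
    coarse = body[::16, ::16]                        # ~16x12 blocks
    shape = np.kron(coarse, np.ones((16, 16)))[:H, :W, None]
    # 3) Face and hair pixels are kept to preserve identity
    #    (the label ids below depend on the parser and are assumptions).
    face_hair = np.isin(parsing, [1, 2, 13])[..., None] * image
    return np.concatenate([pose, shape, face_hair], axis=-1)  # (H, W, 22)
```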

Dataset

The dataset is no longer publicly available due to copyright issues. For those who have already downloaded the dataset, please note that using or distributing it is illegal!

Test

First stage

Download the pretrained models on Google Drive and put them under the `model/` folder.

Run `test_stage1.sh` to do the inference. The results are in `results/stage1/images/`; `results/stage1/index.html` visualizes the results.
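If you just want to confirm that the run produced output without opening the HTML page, a quick check like the following works; only the `results/stage1/images/` path comes from this README, nothing else about file names or formats is assumed.

```python
# Quick sanity check on stage-1 outputs; the path is taken from this README.
import glob

images = sorted(glob.glob("results/stage1/images/*"))
print(f"found {len(images)} stage-1 result files, e.g. {images[:3]}")
```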

Second stage

Run the MATLAB script `shape_context_warp.m` to extract the TPS transformation control points.
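These control points parameterize a thin-plate-spline warp that maps the in-shop clothing onto the person. As a rough illustration of what such a warp does, given paired control points one can warp an image as below; this is a SciPy-based sketch, not the repository's implementation.

```python
# Illustrative TPS warp driven by paired control points; a SciPy sketch,
# not the code used by this repository.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def tps_warp(cloth, src_pts, dst_pts):
    """cloth: (H, W) or (H, W, C) image; src_pts / dst_pts: (N, 2) arrays of
    (row, col) control points on the clothing image and on the person."""
    H, W = cloth.shape[:2]
    # Backward mapping: for every output pixel, find where to sample the input.
    tps = RBFInterpolator(dst_pts, src_pts, kernel="thin_plate_spline")
    rows, cols = np.mgrid[0:H, 0:W]
    grid = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
    sample = tps(grid).T.reshape(2, H, W)  # source (row, col) per output pixel
    if cloth.ndim == 2:
        return map_coordinates(cloth, sample, order=1)
    return np.stack([map_coordinates(cloth[..., c], sample, order=1)
                     for c in range(cloth.shape[2])], axis=-1)
```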

Then `test_stage2.sh` will do the refinement and generate the final results, which are located in `results/stage2/images/`; `results/stage2/index.html` visualizes the results.

Train

Prepare data

Go inside `prepare_data/`.

First run `extract_tps.m`. This will take some time; you can try running it in parallel, or directly download the pre-computed TPS control points via Google Drive and put them in `data/tps/`.

Then run `./preprocess_viton.sh`; the generated TF records will be in `prepare_data/tfrecord`.
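To sanity-check the generated records, you can list the feature keys stored in the first example. This is a TensorFlow 2.x-style snippet, and the glob pattern is an assumption about the file names under `prepare_data/tfrecord`.

```python
# List the feature keys in the first generated TF record. The glob pattern
# is an assumption; use whatever preprocess_viton.sh actually wrote.
import glob
import tensorflow as tf

path = sorted(glob.glob("prepare_data/tfrecord/*"))[0]
for raw in tf.data.TFRecordDataset(path).take(1):
    example = tf.train.Example.FromString(raw.numpy())
    print(sorted(example.features.feature.keys()))
```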

First stage

Run `train_stage1.sh`.

Second stage

Run `train_stage2.sh`.

Citation

If this code or dataset helps your research, please cite our paper:

```
@inproceedings{han2017viton,
  title     = {VITON: An Image-based Virtual Try-on Network},
  author    = {Han, Xintong and Wu, Zuxuan and Wu, Zhe and Yu, Ruichi and Davis, Larry S},
  booktitle = {CVPR},
  year      = {2018},
}
```
