Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11366)
Abstract
Virtual try-on – synthesizing a near-realistic image of a person wearing a target fashion item, given a source photo – is in growing demand owing to the prevalence of e-commerce and advances in deep learning. However, existing deep-learning virtual try-on methods focus on clothing replacement, owing to the lack of suitable datasets, and handle only flat body segments in frontal poses, given the front view of the target fashion item. In this paper, we present the pose-invariant virtual try-on shoe (PIVTONS) framework for the task of virtual shoe try-on. We collect a paired feet-and-shoe virtual try-on dataset, Zalando-shoes, containing 14,062 shoes across 11 shoe categories. Each shoe image contains only a single view of the shoe, yet the try-on result must show other views of the shoe depending on the original feet pose. We formulate this task as an automatic, labor-free image completion problem and design an end-to-end neural network that incorporates a feature point detector. Through extensive experiments and ablation studies, we demonstrate the performance of the proposed framework and investigate the factors that matter for optimizing this challenging problem.
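The image-completion formulation described above can be sketched as follows: the shoe region of the feet photo is blanked out, and the single available view of the target shoe is supplied as a conditioning signal for a generator to complete. The function name and the channel-concatenation style of conditioning are illustrative assumptions for this sketch, not the paper's exact architecture.

```python
import numpy as np

def make_completion_input(feet_img, shoe_mask, shoe_img):
    """Build a conditional image-completion input (illustrative sketch).

    feet_img:  (H, W, 3) source photo of the feet.
    shoe_mask: (H, W) binary mask, 1 where the original shoes are.
    shoe_img:  (H, W, 3) single available view of the target shoe.
    Returns a (H, W, 7) tensor a generator could take as input.
    """
    # Blank out the region to be completed (the original shoes).
    masked = feet_img * (1.0 - shoe_mask[..., None])
    # Concatenate masked photo, mask, and target-shoe view along channels.
    return np.concatenate([masked, shoe_mask[..., None], shoe_img], axis=-1)

# Toy example: 4x4 "photo", a mask covering the bottom half, and a shoe view.
feet = np.ones((4, 4, 3))
mask = np.zeros((4, 4))
mask[2:, :] = 1.0
shoe = np.full((4, 4, 3), 0.5)
x = make_completion_input(feet, mask, shoe)  # shape (4, 4, 7)
```

A generator trained on such inputs must hallucinate the unseen views of the shoe consistent with the feet pose, which is why the task is harder than copying the provided shoe image into place.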
Notes
- 1. The images shown in this paper are collected from https://www.zalando.co.uk.
Acknowledgement
This work was supported in part by the Ministry of Science and Technology, Taiwan, under Grants MOST 107-2634-F-002-007 and 105-2221-E-002-182-MY2. We also benefited from grants from NVIDIA and the NVIDIA DGX-1 AI Supercomputer, and we appreciate the research grants from Microsoft Research Asia.
Author information
Authors and Affiliations
National Taiwan University, Taipei, Taiwan
Chao-Te Chou, Cheng-Han Lee, Kaipeng Zhang, Hu-Cheng Lee & Winston H. Hsu
Corresponding author
Correspondence to Chao-Te Chou.
Editor information
Editors and Affiliations
IIIT Hyderabad, Hyderabad, India
C.V. Jawahar
ANU, Canberra, ACT, Australia
Hongdong Li
Simon Fraser University, Burnaby, BC, Canada
Greg Mori
ETH Zurich, Zurich, Switzerland
Konrad Schindler
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Chou, C.T., Lee, C.H., Zhang, K., Lee, H.C., Hsu, W.H. (2019). PIVTONS: Pose Invariant Virtual Try-On Shoe with Conditional Image Completion. In: Jawahar, C.V., Li, H., Mori, G., Schindler, K. (eds.) Computer Vision – ACCV 2018. Lecture Notes in Computer Science, vol. 11366. Springer, Cham. https://doi.org/10.1007/978-3-030-20876-9_41
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-20875-2
Online ISBN: 978-3-030-20876-9
eBook Packages: Computer Science, Computer Science (R0)