Abstract
This study proposes a fusion method for universal super-resolution (SR) that performs well on both face and non-face regions. We observed that most general-purpose SR networks fail to reconstruct facial features sufficiently, whereas face-specific networks degrade performance on non-face regions. To reconstruct face regions well, face-specific SR networks are trained with the aid of a facial feature network. Then, to preserve performance on non-face regions, a region-adaptive fusion of the face-specific and general-purpose networks is proposed: a face detection algorithm produces face masks, which are smoothed to avoid boundary artefacts in the fusion stage. Experimental results indicate that the proposed method significantly improves performance on face regions while delivering comparable performance on non-face regions, and that the additional computation can be considerably reduced by sharing the front layers of the two networks.
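The fusion step described above can be sketched as a mask-guided alpha blend: a hard face mask from a detector is feathered and then used to mix the face-specific and general-purpose SR outputs. This is a minimal NumPy illustration, not the paper's implementation; the function names (`smooth_mask`, `fuse`), the box-blur smoothing, and the kernel size are assumptions chosen for a self-contained example.

```python
import numpy as np


def smooth_mask(mask, k=15):
    """Feather a hard 0/1 face mask to avoid boundary artefacts.

    Two passes of a separable box blur (size k) approximate Gaussian
    smoothing; a real pipeline might use cv2.GaussianBlur instead.
    """
    kernel = np.ones(k) / k
    mask = mask.astype(np.float64)
    for _ in range(2):
        # Blur rows, then columns (separable filtering).
        mask = np.apply_along_axis(
            lambda r: np.convolve(r, kernel, mode="same"), 1, mask)
        mask = np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode="same"), 0, mask)
    return np.clip(mask, 0.0, 1.0)


def fuse(sr_general, sr_face, face_mask):
    """Region-adaptive fusion of two SR outputs.

    Uses the face-specific result inside the smoothed mask and the
    general-purpose result elsewhere, with a soft transition between.
    """
    alpha = smooth_mask(face_mask)[..., None]  # (H, W, 1) for broadcasting
    return alpha * sr_face + (1.0 - alpha) * sr_general
```

Far from the mask boundary the blend weight is exactly 0 or 1, so each region keeps its dedicated network's output unchanged; only a narrow band around the detected face is interpolated.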
Acknowledgements
This material is based upon work supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under the Industrial Technology Innovation Program (10080619), and by the Graduate School of Yonsei University Research Scholarship Grants in 2019.
Author information
Authors and Affiliations
Department of Electrical and Electronic Engineering, Yonsei University, Seoul, 120-749, Korea
J. Mun & J. Kim
Corresponding author
Correspondence to J. Kim.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Mun, J., Kim, J.: Universal super-resolution for face and non-face regions via a facial feature network. SIViP 14, 1601–1608 (2020). https://doi.org/10.1007/s11760-020-01706-3