- Jian Liang (ORCID: orcid.org/0000-0003-3890-1894)
- Yunbo Wang (ORCID: orcid.org/0000-0002-6215-8888)
- Dapeng Hu
- Ran He (ORCID: orcid.org/0000-0002-3807-991X)
- Jiashi Feng (ORCID: orcid.org/0000-0001-6843-0064)
Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12356)
Included in the following conference series: European Conference on Computer Vision (ECCV)
Abstract
This work addresses the unsupervised domain adaptation problem, especially in the case where the class labels in the target domain are only a subset of those in the source domain. Such a partial transfer setting is realistic but challenging, and existing methods often suffer from two key problems: negative transfer and uncertainty propagation. In this paper, we build on domain adversarial learning and propose a novel domain adaptation method, BA\(^3\)US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS), respectively. On one hand, negative transfer leads to the misclassification of target samples into classes only present in the source domain. To address this issue, BAA pursues balance between the label distributions across domains in a fairly simple manner: it randomly leverages a few source samples to augment the smaller target domain during domain alignment, so that the classes in the two domains are symmetric. On the other hand, a source sample is regarded as uncertain if an incorrect class receives a relatively high prediction score, and such uncertainty easily propagates to the unlabeled target data around it during alignment, which severely deteriorates adaptation performance. We therefore present AUS, which emphasizes uncertain samples and exploits an adaptively weighted complement entropy objective to encourage incorrect classes to have uniform and low prediction scores. Experimental results on multiple benchmarks demonstrate that BA\(^3\)US surpasses state-of-the-art methods for partial domain adaptation tasks. Code is available at https://github.com/tim-learn/BA3US.
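As a rough illustration of the two components summarized above, the following minimal PyTorch-style sketch shows (a) how a target mini-batch could be augmented with randomly drawn source samples before domain alignment, in the spirit of BAA, and (b) a sample-weighted complement-entropy term that flattens the prediction scores of incorrect classes, with larger weights on uncertain source samples, in the spirit of AUS. The function names, the weighting scheme (one minus the correct-class probability), and all constants here are assumptions for illustration only; the authors' actual objective and schedules are in the repository linked in the abstract.

```python
# Illustrative sketch only; not the released BA3US implementation.
import torch
import torch.nn.functional as F


def balanced_adversarial_batch(source_feats, target_feats, num_augment):
    """BAA-style balancing (sketch): randomly borrow a few source samples to
    enlarge the target side of the mini-batch before it is fed to the domain
    discriminator, so the two domains presented to it are more balanced."""
    idx = torch.randperm(source_feats.size(0))[:num_augment]
    augmented_target = torch.cat([target_feats, source_feats[idx]], dim=0)
    return source_feats, augmented_target


def weighted_complement_entropy(logits, labels, eps=1e-7):
    """AUS-style term (sketch): a complement-entropy objective on labeled
    source samples. Minimizing it maximizes the entropy of the renormalized
    distribution over *incorrect* classes (pushing their scores toward
    uniform), with larger weights on uncertain samples, i.e. those whose
    correct-class probability is small."""
    probs = F.softmax(logits, dim=1)
    num_classes = logits.size(1)
    onehot = F.one_hot(labels, num_classes).float()
    p_correct = (probs * onehot).sum(dim=1, keepdim=True)    # P(y | x)
    p_wrong = probs * (1.0 - onehot)                         # scores of incorrect classes
    p_wrong_norm = p_wrong / (1.0 - p_correct + eps)         # renormalize over incorrect classes
    # Negative entropy of the incorrect-class distribution (correct-class entry is zero).
    neg_entropy = (p_wrong_norm * torch.log(p_wrong_norm + eps)).sum(dim=1)
    sample_weight = 1.0 - p_correct.squeeze(1)               # assumed weighting: emphasize uncertain samples
    norm = torch.log(torch.tensor(float(num_classes - 1)))   # scale entropy roughly into [0, 1]
    return (sample_weight * neg_entropy).mean() / norm
```

In a full training loop, terms like these would be combined with the standard source cross-entropy and domain-adversarial losses, which is what keeps the incorrect-class scores not only uniform but also low; consult the linked repository for the exact formulation.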
Acknowledgment
J. Feng was partially supported by NUS ECRA FY17 P08, AISG-100E2019-035, and MOE Tier 2 MOE2017-T2-2-151. The authors also thank Quanhong Fu for her help in improving the technical writing of this paper.
Author information
Authors and Affiliations
Department of ECE, National University of Singapore (NUS), Singapore, Singapore
Jian Liang, Dapeng Hu & Jiashi Feng
Peking University, Beijing, China
Yunbo Wang
Institute of Automation, Chinese Academy of Sciences, Beijing, China
Ran He
Corresponding author
Correspondence to Jian Liang.
Editor information
Editors and Affiliations
University of Oxford, Oxford, UK
Andrea Vedaldi
Graz University of Technology, Graz, Austria
Horst Bischof
University of Freiburg, Freiburg im Breisgau, Germany
Thomas Brox
University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
Jan-Michael Frahm
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Liang, J., Wang, Y., Hu, D., He, R., Feng, J. (2020). A Balanced and Uncertainty-Aware Approach for Partial Domain Adaptation. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol. 12356. Springer, Cham. https://doi.org/10.1007/978-3-030-58621-8_8
Published:
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-58620-1
Online ISBN: 978-3-030-58621-8
eBook Packages: Computer Science, Computer Science (R0)