Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13842)
Included in the following conference series: Asian Conference on Computer Vision (ACCV)
Abstract
Cross-modality person re-identification between visible and infrared images has become a research hotspot in the image retrieval field due to its potential application scenarios. Existing work usually designs loss functions around samples or sample centers, focusing mainly on reducing the cross-modality discrepancy and intra-modality variations. However, sample-based losses are susceptible to outliers, while center-based losses do not make the learned features compact enough. To address these issues, we propose a novel loss function, the Heterocentric Sample Loss, which jointly optimizes the sample features and the centers of those features within each batch. In addition, we propose a network structure that combines spatial and channel features, together with a random channel enhancement method, which improves feature discrimination and robustness to color changes. Finally, extensive experiments on the SYSU-MM01 and RegDB datasets demonstrate the superiority of the proposed method.
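To make the idea concrete, the following is a minimal sketch of what a heterocentric-style loss could look like: for each identity in a batch, the per-modality feature centers are pulled together (reducing cross-modality discrepancy), and every sample is pulled toward the joint center (enforcing compactness). This is a hypothetical simplification written for illustration, not the paper's exact formulation; the function name, the equal weighting of the two terms, and the use of the midpoint of the two centers are all assumptions.

```python
import math


def _mean(vecs):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vecs)
    return [sum(v[d] for v in vecs) / n for d in range(len(vecs[0]))]


def _dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def heterocentric_sample_loss(feats_v, feats_i, labels):
    """Illustrative heterocentric-style loss over one batch.

    feats_v, feats_i: visible / infrared feature vectors (lists of lists),
    labels: identity label for each position (shared across modalities).
    """
    ids = sorted(set(labels))
    total = 0.0
    for pid in ids:
        v = [f for f, l in zip(feats_v, labels) if l == pid]
        i = [f for f, l in zip(feats_i, labels) if l == pid]
        cv, ci = _mean(v), _mean(i)               # per-modality centers
        total += _dist(cv, ci)                    # center-to-center pull
        joint = [(a + b) / 2.0 for a, b in zip(cv, ci)]
        total += sum(_dist(f, joint) for f in v) / len(v)  # compactness
        total += sum(_dist(f, joint) for f in i) / len(i)
    return total / len(ids)
```

When the two modalities produce identical features for an identity, both terms vanish; the loss grows as the modality centers drift apart or the samples scatter, which is the behavior the abstract attributes to optimizing samples and centers jointly.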
Acknowledgements
This work was supported by the Project of NSFC (Grant Nos. U1908214, 61906032), the Special Project of Central Government Guiding Local Science and Technology Development (Grant No. 2021JH6/10500140), the Program for Innovative Research Team in University of Liaoning Province (LT2020015), the Support Plan for Key Field Innovation Team of Dalian (2021RT06), the Science and Technology Innovation Fund of Dalian (Grant No. 2020JJ25CY001), the Support Plan for Leading Innovation Team of Dalian University (Grant No. XLJ202010), the Fundamental Research Funds for the Central Universities (Grant No. DUT21TD107), and the Dalian University Scientific Research Platform Project (No. 202101YB03).
Author information
Authors and Affiliations
Key Laboratory of Advanced Design and Intelligent Computing Ministry of Education, School of Software Engineering, Dalian University, Dalian, China
Peng Su, Rui Liu, Jing Dong, Pengfei Yi & Dongsheng Zhou
Corresponding author
Correspondence to Rui Liu.
Editor information
Editors and Affiliations
University of Wollongong, Wollongong, NSW, Australia
Lei Wang
University of Bonn, Bonn, Germany
Juergen Gall
University of Adelaide, Adelaide, SA, Australia
Tat-Jun Chin
National Institute of Informatics, Tokyo, Japan
Imari Sato
Johns Hopkins University, Baltimore, MD, USA
Rama Chellappa
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Su, P., Liu, R., Dong, J., Yi, P., Zhou, D. (2023). SCFNet: A Spatial-Channel Features Network Based on Heterocentric Sample Loss for Visible-Infrared Person Re-identification. In: Wang, L., Gall, J., Chin, TJ., Sato, I., Chellappa, R. (eds) Computer Vision – ACCV 2022. ACCV 2022. Lecture Notes in Computer Science, vol 13842. Springer, Cham. https://doi.org/10.1007/978-3-031-26284-5_33
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-26283-8
Online ISBN: 978-3-031-26284-5