SCFNet: A Spatial-Channel Features Network Based on Heterocentric Sample Loss for Visible-Infrared Person Re-identification

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13842)

Included in the conference series: Computer Vision – ACCV 2022

Abstract

Cross-modality person re-identification between visible and infrared images has become a research hotspot in image retrieval due to its potential application scenarios. Existing work usually designs loss functions around samples or sample centers, focusing mainly on reducing the cross-modality discrepancy and intra-modality variations. However, sample-based loss functions are susceptible to outliers, while center-based loss functions do not make the learned features sufficiently compact. To address these issues, we propose a novel loss function called the Heterocentric Sample Loss, which jointly optimizes the sample features and the centers of the sample features within a batch. In addition, we propose a network structure that combines spatial and channel features, together with a random channel enhancement method, which improves feature discrimination and robustness to color changes. Finally, extensive experiments on the SYSU-MM01 and RegDB datasets demonstrate the superiority of the proposed method.
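The exact formulation of the Heterocentric Sample Loss is not given in this preview. As an illustrative sketch only (not the authors' implementation), a loss that combines the two ingredients the abstract names — aligning the per-identity centers of the two modalities while keeping each sample close to its own modality's center — might look like the following, where `vis_feats` and `ir_feats` are hypothetical feature batches for one identity:

```python
import numpy as np

def heterocentric_sample_loss(vis_feats, ir_feats):
    """Illustrative center-plus-sample loss for one identity.

    vis_feats, ir_feats: arrays of shape (n_samples, dim) holding the
    visible-modality and infrared-modality features of one person.
    """
    # Per-modality feature centers for this identity.
    c_vis = vis_feats.mean(axis=0)
    c_ir = ir_feats.mean(axis=0)
    # Cross-modality term: pull the two modality centers together.
    center_term = np.linalg.norm(c_vis - c_ir)
    # Intra-modality term: keep samples compact around their own center.
    compact_term = (np.linalg.norm(vis_feats - c_vis, axis=1).mean()
                    + np.linalg.norm(ir_feats - c_ir, axis=1).mean())
    return center_term + compact_term
```

The sample-to-center term is what distinguishes this family of losses from purely center-based ones: it keeps individual features compact around their center, while the center term alone would still reduce the cross-modality gap.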



Acknowledgements

This work was supported by the Project of NSFC (Grant Nos. U1908214 and 61906032), the Special Project of Central Government Guiding Local Science and Technology Development (Grant No. 2021JH6/10500140), the Program for Innovative Research Team in University of Liaoning Province (LT2020015), the Support Plan for Key Field Innovation Team of Dalian (2021RT06), the Science and Technology Innovation Fund of Dalian (Grant No. 2020JJ25CY001), the Support Plan for Leading Innovation Team of Dalian University (Grant No. XLJ202010), the Fundamental Research Funds for the Central Universities (Grant No. DUT21TD107), and the Dalian University Scientific Research Platform Project (No. 202101YB03).

Author information

Authors and Affiliations

  1. Key Laboratory of Advanced Design and Intelligent Computing Ministry of Education, School of Software Engineering, Dalian University, Dalian, China

    Peng Su, Rui Liu, Jing Dong, Pengfei Yi & Dongsheng Zhou


Corresponding author

Correspondence to Rui Liu.

Editor information

Editors and Affiliations

  1. University of Wollongong, Wollongong, NSW, Australia

    Lei Wang

  2. University of Bonn, Bonn, Germany

    Juergen Gall

  3. University of Adelaide, Adelaide, SA, Australia

    Tat-Jun Chin

  4. National Institute of Informatics, Tokyo, Japan

    Imari Sato

  5. Johns Hopkins University, Baltimore, MD, USA

    Rama Chellappa


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Su, P., Liu, R., Dong, J., Yi, P., Zhou, D. (2023). SCFNet: A Spatial-Channel Features Network Based on Heterocentric Sample Loss for Visible-Infrared Person Re-identification. In: Wang, L., Gall, J., Chin, TJ., Sato, I., Chellappa, R. (eds) Computer Vision – ACCV 2022. ACCV 2022. Lecture Notes in Computer Science, vol 13842. Springer, Cham. https://doi.org/10.1007/978-3-031-26284-5_33


