Singular Value Manipulating: An Effective DRL-Based Adversarial Attack on Deep Convolutional Neural Network


Abstract

In recent years, deep convolutional neural networks (DCNNs) have become increasingly prevalent in image processing applications. However, DCNNs are vulnerable to adversarial attacks: imperceptible perturbations added to the input that cause the network to misclassify the image. In this study, we propose a black-box, transferable adversarial attack method. The goal is to deepen the understanding of the vulnerability of these networks and, in turn, to help develop more robust defenses against such attacks. The attack efficiently generates adversarial examples by manipulating the singular value matrix of the input rather than directly perturbing pixels with complex noise, and it uses soft actor-critic to explore an optimal perturbation strategy. We perform extensive evaluations of the proposed singular value manipulating attack (SVMA): on object detection models with the VOC 2012 and MS COCO 2017 datasets, on image classification models with the MNIST dataset, and on a real-world case study with the TT-100K dataset. Comparison results demonstrate that SVMA achieves consistent query efficiency and attack capability on both the one-stage detector YOLO and the two-stage detector Faster R-CNN. Our case study further shows that SVMA's adversarial examples remain effective in real-world scenarios. Finally, we propose a defense against such attacks.
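
The full method (the DRL state and reward design and the soft actor-critic training loop) is described in the paper itself, which is not included in this preview. As a rough illustration of the core idea only, the sketch below applies SVD to each colour channel of an image and rescales the leading singular values by factors that an agent could output as its action. The function name, the top-k parameterization, and the L-infinity clipping budget are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def perturb_singular_values(image, scale, eps=8.0):
    """Illustrative sketch (not the paper's implementation): perturb an image
    by rescaling the top-k singular values of each colour channel, then clip
    so the pixel-space change stays within an L_inf budget.

    image : (H, W, C) float array in [0, 255]
    scale : (C, k) multiplicative factors for the k largest singular values
            of each channel (the kind of "action" a DRL agent such as
            soft actor-critic could output)
    eps   : L_inf budget for the final perturbation (assumed value)
    """
    adv = image.copy()
    k = scale.shape[1]
    for c in range(image.shape[2]):
        U, S, Vt = np.linalg.svd(image[:, :, c], full_matrices=False)
        S_mod = S.copy()
        S_mod[:k] *= scale[c]                  # manipulate the top-k singular values
        adv[:, :, c] = U @ np.diag(S_mod) @ Vt  # reconstruct the perturbed channel
    adv = np.clip(adv, image - eps, image + eps)  # keep the perturbation small
    return np.clip(adv, 0, 255)                   # keep pixels valid

# Usage: scale factors near 1.0 yield a small, low-rank-structured perturbation.
rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(224, 224, 3))
action = 1.0 + 0.05 * rng.standard_normal((3, 10))
adv_img = perturb_singular_values(img, action)
```

Because the perturbation acts on the singular-value spectrum rather than on individual pixels, the search space an agent must explore is far smaller than the raw pixel space, which is what makes a query-limited black-box search tractable.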

Acknowledgements

The research work of Cai Fu is supported by the National Natural Science Foundation of China (62072200).

Author information

Authors and Affiliations

  1. Hubei Key Laboratory of Distributed System Security, Hubei Engineering Research Center on Big Data Security, School of Cyber Science and Engineering, Huazhong University of Science and Technology, Wuhan, 430000, Hubei, China

    Shuai He, Cai Fu, Guanyun Feng, Jianqiang Lv & Fengyang Deng


Contributions

SH: Conceptualization, Methodology, Writing – original draft, Software. CF: Conceptualization, Writing – review & editing, Supervision, Funding acquisition. GF: Image classification model reproduction. JL: Validation, Review & editing. FD: Validation, Review & editing.

Corresponding author

Correspondence to Cai Fu.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

He, S., Fu, C., Feng, G. et al. Singular Value Manipulating: An Effective DRL-Based Adversarial Attack on Deep Convolutional Neural Network. Neural Process Lett 55, 12459–12480 (2023). https://doi.org/10.1007/s11063-023-11428-5
