An Effective Way to Boost Black-Box Adversarial Attack

  • Conference paper
  • Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11961)
  • Included in the conference series: MultiMedia Modeling (MMM 2020)

Abstract

Deep neural networks (DNNs) are vulnerable to adversarial examples. Generally speaking, adversarial examples are crafted by adding a small-magnitude perturbation to input samples; such perturbations hardly mislead human observers, yet they cause well-trained models to misclassify. Most existing iterative adversarial attack methods suffer from low success rates when fooling models in a black-box manner, and we find that this is because perturbations neutralize each other during the iterative process. To address this issue, we propose a novel boosted iterative method that effectively improves success rates. We conduct experiments on the ImageNet dataset with five normally trained classification models. The experimental results show that our proposed strategy significantly improves the success rate of fooling models in a black-box manner. Furthermore, it also outperforms the momentum iterative method (MI-FGSM), which won first place in the NeurIPS Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
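For readers unfamiliar with the baseline the abstract compares against, the following is a minimal NumPy sketch of the momentum iterative method (MI-FGSM, Dong et al., reference 3): an iterative sign-gradient attack that accumulates an L1-normalized gradient into a momentum term before each step. The linear surrogate model, its analytic loss gradient, and all parameter values here are illustrative assumptions for a self-contained demo, not the paper's networks or the proposed boosted method.

```python
import numpy as np

def loss_grad(x, w, y):
    # Gradient w.r.t. the input x of the toy loss -y * (w . x),
    # i.e. a linear surrogate model with label y in {-1, +1}.
    return -y * w

def mi_fgsm(x, w, y, eps=0.1, steps=10, mu=1.0):
    """Momentum iterative sign-gradient attack (MI-FGSM sketch)."""
    alpha = eps / steps           # per-step budget so total stays within eps
    g = np.zeros_like(x)          # accumulated momentum
    x_adv = x.copy()
    for _ in range(steps):
        grad = loss_grad(x_adv, w, y)
        # Normalize by the L1 norm before accumulating, as in MI-FGSM.
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)
        # Project back into the L-infinity eps-ball around the clean input.
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

With mu = 0 this degenerates to the basic iterative method (I-FGSM, reference 9); the momentum term is what stabilizes the update direction across iterations, which is also the kind of step-to-step interaction the abstract's "perturbations neutralize each other" observation concerns.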


References

  1. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE (2017)

  2. Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 40(4), 834–848 (2018)

  3. Dong, Y., et al.: Boosting adversarial attacks with momentum. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9185–9193 (2018)

  4. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples (2014). arXiv preprint arXiv:1412.6572

  5. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969 (2017)

  6. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)

  7. Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017)

  8. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)

  9. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world (2016). arXiv preprint arXiv:1607.02533

  10. Liu, Y., Chen, X., Liu, C., Song, D.: Delving into transferable adversarial examples and black-box attacks (2016). arXiv preprint arXiv:1611.02770

  11. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440 (2015)

  12. Metzen, J.H., Genewein, T., Fischer, V., Bischoff, B.: On detecting adversarial perturbations (2017). arXiv preprint arXiv:1702.04267

  13. Redmon, J., Farhadi, A.: YOLOv3: an incremental improvement (2018). arXiv preprint arXiv:1804.02767

  14. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems, pp. 91–99 (2015)

  15. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28

  16. Russakovsky, O., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)

  17. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition (2014). arXiv preprint arXiv:1409.1556

  18. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826 (2016)

  19. Szegedy, C., et al.: Intriguing properties of neural networks (2013). arXiv preprint arXiv:1312.6199

  20. Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., McDaniel, P.: Ensemble adversarial training: attacks and defenses (2017). arXiv preprint arXiv:1705.07204


Author information

Authors and Affiliations

  1. Harbin Institute of Technology, Harbin, China

     Xinjie Feng, Hongxun Yao & Wenbin Che

  2. Harbin Institute of Technology, Weihai, China

     Shengping Zhang

Corresponding authors

Correspondence to Xinjie Feng or Hongxun Yao.

Editor information

Editors and Affiliations

  1. Korea Advanced Institute of Science and Technology, Daejeon, Korea (Republic of)

    Yong Man Ro

  2. National Chiao Tung University, Hsinchu, Taiwan

    Wen-Huang Cheng

  3. Korea Advanced Institute of Science and Technology, Daejeon, Korea (Republic of)

    Junmo Kim

  4. National Cheng Kung University, Tainan City, Taiwan

    Wei-Ta Chu

  5. Tsinghua University, Beijing, China

    Peng Cui

  6. Korea Advanced Institute of Science and Technology, Daejeon, Korea (Republic of)

    Jung-Woo Choi

  7. National Tsing Hua University, Hsinchu, Taiwan

    Min-Chun Hu

  8. Ghent University, Ghent, Belgium

    Wesley De Neve

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Feng, X., Yao, H., Che, W., Zhang, S. (2020). An Effective Way to Boost Black-Box Adversarial Attack. In: Ro, Y., et al. (eds.) MultiMedia Modeling. MMM 2020. Lecture Notes in Computer Science, vol. 11961. Springer, Cham. https://doi.org/10.1007/978-3-030-37731-1_32

