Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11961)
Included in the following conference series: MMM: International Conference on Multimedia Modeling
Abstract
Deep neural networks (DNNs) are vulnerable to adversarial examples. Generally speaking, adversarial examples are crafted by adding a small-magnitude perturbation to input samples; such perturbations are hardly noticeable to human observers, yet they cause well-trained models to misclassify. Most existing iterative adversarial attack methods suffer from low success rates when fooling models in a black-box manner, and we find that this is because the perturbations generated in successive iterations tend to neutralize each other. To address this issue, we propose a novel boosted iterative method that effectively improves success rates. We conduct experiments on the ImageNet dataset with five normally trained classification models. The experimental results show that our proposed strategy significantly improves the success rates of fooling models in a black-box manner. Furthermore, it also outperforms the momentum iterative method (MI-FGSM), which won first place in the NeurIPS Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
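For context, the sketch below illustrates the momentum iterative baseline (MI-FGSM, Dong et al. 2018) that the abstract compares against: accumulating L1-normalized gradients with a momentum term stabilizes the update direction across iterations, which is the kind of gradient "neutralization" problem the paper addresses. This is a minimal PyTorch sketch, not the authors' proposed boosted method; the function name, hyperparameters, and model interface are illustrative assumptions.

```python
# Minimal sketch of the MI-FGSM baseline (Dong et al. 2018), under the
# assumption that `model` maps normalized images in [0, 1] to class logits.
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0):
    """L_inf momentum iterative attack: accumulate L1-normalized gradients
    with decay factor mu, then take sign steps of size eps / steps."""
    alpha = eps / steps
    g = torch.zeros_like(x)            # accumulated (momentum) gradient
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize the current gradient by its per-image L1 norm, then
        # fold it into the momentum accumulator.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```

In a black-box transfer setting, the adversarial examples would be generated with this procedure on a surrogate model and then evaluated against the unseen target models.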
References
Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE (2017)
Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 40(4), 834–848 (2018)
Dong, Y., et al.: Boosting adversarial attacks with momentum. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9185–9193 (2018)
Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples (2014). arXiv preprint arXiv:1412.6572
He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969 (2017)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017)
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)
Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world (2016). arXiv preprint arXiv:1607.02533
Liu, Y., Chen, X., Liu, C., Song, D.: Delving into transferable adversarial examples and black-box attacks (2016). arXiv preprint arXiv:1611.02770
Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440 (2015)
Metzen, J.H., Genewein, T., Fischer, V., Bischoff, B.: On detecting adversarial perturbations (2017). arXiv preprint arXiv:1702.04267
Redmon, J., Farhadi, A.: YOLOv3: an incremental improvement (2018). arXiv preprint arXiv:1804.02767
Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems, pp. 91–99 (2015)
Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28
Russakovsky, O., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition (2014). arXiv preprint arXiv:1409.1556
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826 (2016)
Szegedy, C., et al.: Intriguing properties of neural networks (2013). arXiv preprint arXiv:1312.6199
Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., McDaniel, P.: Ensemble adversarial training: attacks and defenses (2017). arXiv preprint arXiv:1705.07204
Author information
Authors and Affiliations
Harbin Institute of Technology, Harbin, China
Xinjie Feng, Hongxun Yao & Wenbin Che
Harbin Institute of Technology, Weihai, China
Shengping Zhang
Corresponding authors
Correspondence to Xinjie Feng or Hongxun Yao.
Editor information
Editors and Affiliations
Korea Advanced Institute of Science and Technology, Daejeon, Korea (Republic of)
Yong Man Ro
National Chiao Tung University, Hsinchu, Taiwan
Wen-Huang Cheng
Korea Advanced Institute of Science and Technology, Daejeon, Korea (Republic of)
Junmo Kim
National Cheng Kung University, Tainan City, Taiwan
Wei-Ta Chu
Tsinghua University, Beijing, China
Peng Cui
Korea Advanced Institute of Science and Technology, Daejeon, Korea (Republic of)
Jung-Woo Choi
National Tsing Hua University, Hsinchu, Taiwan
Min-Chun Hu
Ghent University, Ghent, Belgium
Wesley De Neve
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Feng, X., Yao, H., Che, W., Zhang, S. (2020). An Effective Way to Boost Black-Box Adversarial Attack. In: Ro, Y., et al. (eds.) MultiMedia Modeling. MMM 2020. Lecture Notes in Computer Science, vol. 11961. Springer, Cham. https://doi.org/10.1007/978-3-030-37731-1_32
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-37730-4
Online ISBN: 978-3-030-37731-1
eBook Packages: Computer Science, Computer Science (R0)