Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12358)
Included in the following conference series: European Conference on Computer Vision (ECCV)
Abstract
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs). Recently, adversarial patches, with noise confined to a small and localized region, have emerged because they are easy to deploy in real-world scenarios. However, existing strategies fail to generate adversarial patches with strong generalization ability; the patches are input-specific and fail to attack images from all classes, especially ones unseen during training. To address this problem, this paper proposes a bias-based framework that generates class-agnostic universal adversarial patches with strong generalization ability by exploiting both the perceptual and semantic biases of models. Regarding the perceptual bias, since DNNs are strongly biased towards textures, we exploit hard examples, which convey strong model uncertainty, and extract a textural patch prior from them using style similarities. This patch prior lies closer to decision boundaries and therefore promotes attacks. To further alleviate the heavy dependence on large amounts of data when training universal attacks, we also exploit the semantic bias: prototypes, capturing class-wise preferences, are introduced and pursued by maximizing the multi-class margin to aid universal training. Taking Automatic Check-Out (ACO) as the typical scenario, extensive experiments are conducted in both white-box and black-box settings, in the digital world (RPC, the largest ACO-related dataset) and the physical world (Taobao and JD, the world's largest online shopping platforms). Experimental results demonstrate that the proposed framework outperforms state-of-the-art adversarial patch attack methods. (Our code can be found at https://github.com/liuaishan/ModelBiasedAttack.)
A. Liu and J. Wang—These authors contributed equally to this work.
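The abstract's "textural patch prior" rests on measuring style similarity between a candidate patch and a set of hard examples. The snippet below is a minimal illustrative sketch of that idea (not the authors' implementation): it uses Gram matrices of CNN features, the standard style-transfer notion of style similarity, and nudges a random patch toward the texture statistics of a batch of example crops. The layer choice, patch size, optimizer settings, and the random placeholder "hard examples" are all assumptions made for illustration.

```python
# Illustrative sketch of a Gram-matrix style-similarity objective for a
# textural patch prior. Not the paper's code; layer cut-off, sizes, and
# hyperparameters are arbitrary choices for demonstration.
import torch
import torch.nn.functional as F
import torchvision

# Truncated VGG-16 as a texture feature extractor (ImageNet weights).
vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(feats: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a (B, C, H, W) feature map, one (C, C) matrix per image."""
    b, c, h, w = feats.shape
    f = feats.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_distance(patch: torch.Tensor, examples: torch.Tensor) -> torch.Tensor:
    """Mean squared distance between the patch's Gram matrix and those of the examples."""
    g_patch = gram(vgg(patch))        # (1, C, C)
    g_examples = gram(vgg(examples))  # (N, C, C)
    return F.mse_loss(g_patch.expand_as(g_examples), g_examples)

# Toy usage: refine a random 64x64 patch so that its texture statistics move
# toward those of a batch of example crops (random placeholders here; in the
# paper's setting these would be crops of hard examples).
patch = torch.rand(1, 3, 64, 64, requires_grad=True)
hard_examples = torch.rand(8, 3, 64, 64)
opt = torch.optim.Adam([patch], lr=0.01)
for _ in range(10):
    opt.zero_grad()
    loss = style_distance(patch, hard_examples)
    loss.backward()
    opt.step()
    patch.data.clamp_(0.0, 1.0)  # keep the patch a valid image
```

The resulting patch would typically serve only as an initialization (prior); the actual adversarial objective against the target model is optimized on top of it.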
Acknowledgement
This work was supported by the National Natural Science Foundation of China (61872021, 61690202), the Beijing Nova Program of Science and Technology (Z191100001119050), and the Fundamental Research Funds for the Central Universities (YWF-20-BJ-J-646).
Author information
Authors and Affiliations
State Key Lab of Software Development Environment, Beihang University, Beijing, China
Aishan Liu, Jiakai Wang, Xianglong Liu, Bowen Cao, Chongzhi Zhang & Hang Yu
Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, China
Xianglong Liu
Corresponding author
Correspondence to Xianglong Liu.
Editor information
Editors and Affiliations
University of Oxford, Oxford, UK
Andrea Vedaldi
Graz University of Technology, Graz, Austria
Horst Bischof
University of Freiburg, Freiburg im Breisgau, Germany
Thomas Brox
University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
Jan-Michael Frahm
Electronic supplementary material
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Liu, A., Wang, J., Liu, X., Cao, B., Zhang, C., Yu, H. (2020). Bias-Based Universal Adversarial Patch Attack for Automatic Check-Out. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds) Computer Vision – ECCV 2020. Lecture Notes in Computer Science, vol 12358. Springer, Cham. https://doi.org/10.1007/978-3-030-58601-0_24
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-58600-3
Online ISBN: 978-3-030-58601-0
eBook Packages: Computer Science, Computer Science (R0)