Authors: Hiroki Adachi 1; Tsubasa Hirakawa 1; Takayoshi Yamashita 1; Hironobu Fujiyoshi 1; Yasunori Ishii 2 and Kazuki Kozuka 2
Affiliations: 1 Chubu University, 1200 Matsumoto-cho, Kasugai, Aichi, Japan; 2 Panasonic Corporation, Japan
Keyword(s): Deep Learning, Convolutional Neural Networks, Adversarial Defense, Adversarial Training, Mixup.
Abstract: While convolutional neural networks (CNNs) have achieved excellent performance in various computer vision tasks, they often misclassify maliciously crafted samples, known as adversarial examples. Adversarial training is a popular and straightforward technique to defend against the threat of adversarial examples. Unfortunately, when adversarial training is used, CNNs must sacrifice accuracy on standard samples to improve robustness against adversarial examples. In this work, we propose Masking and Mixing Adversarial Training (M2AT) to mitigate this trade-off between accuracy and robustness. We focus on creating diverse adversarial examples during training. Specifically, our approach consists of two processes: 1) masking a perturbation with a binary mask and 2) mixing two partially perturbed images. Experimental results on the CIFAR-10 dataset demonstrate that our method achieves better robustness against several adversarial attacks than previous methods.
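The two processes described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the L∞ budget (8/255), the Bernoulli mask, and the Beta-distributed mixing coefficient are assumptions chosen to match common adversarial-training and mixup conventions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two CIFAR-10-sized images in [0, 1] (random stand-ins for real samples).
x1 = rng.random((32, 32, 3))
x2 = rng.random((32, 32, 3))

# Adversarial perturbations; a uniform draw within an assumed
# L-infinity budget of 8/255 stands in for a real attack like PGD.
eps = 8.0 / 255.0
delta1 = rng.uniform(-eps, eps, size=x1.shape)
delta2 = rng.uniform(-eps, eps, size=x2.shape)

# 1) Masking: apply each perturbation only where a binary mask is 1,
#    yielding partially perturbed images (mask shape broadcasts over channels).
mask1 = rng.integers(0, 2, size=(32, 32, 1))
mask2 = rng.integers(0, 2, size=(32, 32, 1))
x1_adv = np.clip(x1 + mask1 * delta1, 0.0, 1.0)
x2_adv = np.clip(x2 + mask2 * delta2, 0.0, 1.0)

# 2) Mixing: mixup-style convex combination of the two partially
#    perturbed images; Beta(1, 1) is an assumed mixing distribution.
lam = rng.beta(1.0, 1.0)
x_mixed = lam * x1_adv + (1.0 - lam) * x2_adv
```

The mixed sample `x_mixed` would then be fed to the network during training (with labels mixed by the same coefficient, following standard mixup).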