Proceedings of Machine Learning Research


First-Order Adversarial Vulnerability of Neural Networks and Input Dimension

Carl-Johann Simon-Gabriel, Yann Ollivier, Leon Bottou, Bernhard Schölkopf, David Lopez-Paz
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5809-5817, 2019.

Abstract

Over the past few years, neural networks were proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. Surprisingly, vulnerability does not depend on network topology: for many standard network architectures, we prove that at initialization, the L1-norm of these gradients grows as the square root of the input dimension, leaving the networks increasingly vulnerable with growing image size. We empirically show that this dimension-dependence persists after either usual or robust training, but gets attenuated with higher regularization.
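A minimal sketch (hypothetical, not the authors' code) of the scaling claim above: for the simplest case of a linear model f(x) = w·x with standard 1/sqrt(d) weight initialization, the gradient of f with respect to the input x is w itself, and its L1-norm concentrates around sqrt(2d/pi), i.e. it grows like the square root of the input dimension d.

import numpy as np

rng = np.random.default_rng(0)

for d in [64, 256, 1024, 4096, 16384]:
    # Standard 1/sqrt(d) initialization: entries of w ~ N(0, 1/d), so the
    # output f(x) = w.x has O(1) variance for typical inputs.
    w = rng.normal(0.0, 1.0 / np.sqrt(d), size=d)
    # For this linear toy model, the input gradient is w itself; its L1-norm
    # should concentrate around sqrt(2*d/pi), i.e. grow like sqrt(d).
    grad_l1 = np.abs(w).sum()
    print(f"d={d:6d}  ||grad||_1 = {grad_l1:7.2f}   ratio to sqrt(d) = {grad_l1 / np.sqrt(d):.3f}")

The printed ratio stays roughly constant (about 0.80, i.e. sqrt(2/pi)) across dimensions, which is one toy illustration of the dimension-dependence the abstract describes for full network architectures.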

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-simon-gabriel19a,
  title     = {First-Order Adversarial Vulnerability of Neural Networks and Input Dimension},
  author    = {Simon-Gabriel, Carl-Johann and Ollivier, Yann and Bottou, Leon and Sch{\"o}lkopf, Bernhard and Lopez-Paz, David},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {5809--5817},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/simon-gabriel19a/simon-gabriel19a.pdf},
  url       = {https://proceedings.mlr.press/v97/simon-gabriel19a.html},
  abstract  = {Over the past few years, neural networks were proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. Surprisingly, vulnerability does not depend on network topology: for many standard network architectures, we prove that at initialization, the L1-norm of these gradients grows as the square root of the input dimension, leaving the networks increasingly vulnerable with growing image size. We empirically show that this dimension-dependence persists after either usual or robust training, but gets attenuated with higher regularization.}
}
Endnote
%0 Conference Paper
%T First-Order Adversarial Vulnerability of Neural Networks and Input Dimension
%A Carl-Johann Simon-Gabriel
%A Yann Ollivier
%A Leon Bottou
%A Bernhard Schölkopf
%A David Lopez-Paz
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-simon-gabriel19a
%I PMLR
%P 5809--5817
%U https://proceedings.mlr.press/v97/simon-gabriel19a.html
%V 97
%X Over the past few years, neural networks were proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. Surprisingly, vulnerability does not depend on network topology: for many standard network architectures, we prove that at initialization, the L1-norm of these gradients grows as the square root of the input dimension, leaving the networks increasingly vulnerable with growing image size. We empirically show that this dimension-dependence persists after either usual or robust training, but gets attenuated with higher regularization.
APA
Simon-Gabriel, C., Ollivier, Y., Bottou, L., Schölkopf, B. & Lopez-Paz, D. (2019). First-Order Adversarial Vulnerability of Neural Networks and Input Dimension. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:5809-5817. Available from https://proceedings.mlr.press/v97/simon-gabriel19a.html.

Related Material

