Uncertainty Estimation Using a Single Deep Deterministic Neural Network
Joost Van Amersfoort, Lewis Smith, Yee Whye Teh, Yarin Gal. Proceedings of the 37th International Conference on Machine Learning, PMLR 119:9690-9700, 2020.
Abstract
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass. Our approach, deterministic uncertainty quantification (DUQ), builds upon ideas of RBF networks. We scale training of these networks with a novel loss function and centroid updating scheme, and match the accuracy of softmax models. By enforcing detectability of changes in the input using a gradient penalty, we are able to reliably detect out-of-distribution data. Our uncertainty quantification scales well to large datasets, and using a single model, we improve upon or match Deep Ensembles in out-of-distribution detection on notably difficult dataset pairs such as FashionMNIST vs. MNIST, and CIFAR-10 vs. SVHN.
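To make the mechanism described in the abstract concrete, the following is a minimal, illustrative PyTorch sketch of a DUQ-style model: an RBF layer that measures kernel distance between per-class embeddings and learnable class centroids, an exponential-moving-average centroid update, and a two-sided gradient penalty on the input that keeps input changes detectable. All names and hyperparameter values here (feature_dim, sigma, gamma, lambda_gp, the BCE loss on kernel outputs) are assumptions for illustration and not the authors' reference implementation.

    # Illustrative DUQ-style sketch (assumed details, not the official code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RBFClassifier(nn.Module):
        def __init__(self, feature_extractor, num_classes, feature_dim,
                     sigma=0.1, gamma=0.999):
            super().__init__()
            self.feature_extractor = feature_extractor
            self.sigma = sigma    # RBF length scale (assumed value)
            self.gamma = gamma    # EMA factor for centroid updates (assumed value)
            # Per-class weight matrices mapping features to the centroid space.
            self.W = nn.Parameter(torch.randn(num_classes, feature_dim, feature_dim) * 0.05)
            # Running class counts and embedding sums; updated outside backprop.
            self.register_buffer("N", torch.ones(num_classes))
            self.register_buffer("m", torch.randn(num_classes, feature_dim))

        def kernels(self, x):
            z = self.feature_extractor(x)                      # (B, feature_dim)
            e = torch.einsum("bd,cde->bce", z, self.W)         # per-class embeddings
            centroids = (self.m / self.N.unsqueeze(-1)).unsqueeze(0)
            dist = ((e - centroids) ** 2).mean(-1)             # (B, num_classes)
            # RBF kernel: close to 1 near a centroid, close to 0 far away.
            return torch.exp(-dist / (2 * self.sigma ** 2)), e

        @torch.no_grad()
        def update_centroids(self, e, y_onehot):
            # Exponential moving average of class counts and embedding sums.
            self.N = self.gamma * self.N + (1 - self.gamma) * y_onehot.sum(0)
            emb_sum = torch.einsum("bce,bc->ce", e, y_onehot)
            self.m = self.gamma * self.m + (1 - self.gamma) * emb_sum

    def gradient_penalty(model, x):
        # Two-sided penalty pushing the gradient norm of the summed kernel
        # outputs w.r.t. the input towards 1, so input changes stay detectable.
        x = x.requires_grad_(True)
        K, _ = model.kernels(x)
        grads = torch.autograd.grad(K.sum(), x, create_graph=True)[0]
        grad_norm = grads.flatten(1).norm(2, dim=1)
        return ((grad_norm - 1) ** 2).mean()

    def training_step(model, optimizer, x, y, num_classes, lambda_gp=0.5):
        y_onehot = F.one_hot(y, num_classes).float()
        K, e = model.kernels(x)
        # Binary cross-entropy between kernel values and one-hot labels,
        # standing in for the paper's loss on the RBF outputs.
        loss = F.binary_cross_entropy(K, y_onehot) + lambda_gp * gradient_penalty(model, x)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        model.update_centroids(e.detach(), y_onehot)
        return loss.item()

At test time, the uncertainty for an input is the distance to the closest centroid, e.g. 1 minus the maximum kernel value; out-of-distribution points fall far from all centroids and can be rejected with a single forward pass.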
Cite this Paper
BibTeX
@InProceedings{pmlr-v119-van-amersfoort20a,
  title     = {Uncertainty Estimation Using a Single Deep Deterministic Neural Network},
  author    = {Van Amersfoort, Joost and Smith, Lewis and Teh, Yee Whye and Gal, Yarin},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {9690--9700},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/van-amersfoort20a/van-amersfoort20a.pdf},
  url       = {https://proceedings.mlr.press/v119/van-amersfoort20a.html},
  abstract  = {We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass. Our approach, deterministic uncertainty quantification (DUQ), builds upon ideas of RBF networks. We scale training in these with a novel loss function and centroid updating scheme and match the accuracy of softmax models. By enforcing detectability of changes in the input using a gradient penalty, we are able to reliably detect out of distribution data. Our uncertainty quantification scales well to large datasets, and using a single model, we improve upon or match Deep Ensembles in out of distribution detection on notable difficult dataset pairs such as FashionMNIST vs. MNIST, and CIFAR-10 vs. SVHN.}
}
Endnote
%0 Conference Paper
%T Uncertainty Estimation Using a Single Deep Deterministic Neural Network
%A Joost Van Amersfoort
%A Lewis Smith
%A Yee Whye Teh
%A Yarin Gal
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-van-amersfoort20a
%I PMLR
%P 9690--9700
%U https://proceedings.mlr.press/v119/van-amersfoort20a.html
%V 119
%X We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass. Our approach, deterministic uncertainty quantification (DUQ), builds upon ideas of RBF networks. We scale training in these with a novel loss function and centroid updating scheme and match the accuracy of softmax models. By enforcing detectability of changes in the input using a gradient penalty, we are able to reliably detect out of distribution data. Our uncertainty quantification scales well to large datasets, and using a single model, we improve upon or match Deep Ensembles in out of distribution detection on notable difficult dataset pairs such as FashionMNIST vs. MNIST, and CIFAR-10 vs. SVHN.
APA
Van Amersfoort, J., Smith, L., Teh, Y.W. & Gal, Y. (2020). Uncertainty Estimation Using a Single Deep Deterministic Neural Network. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:9690-9700. Available from https://proceedings.mlr.press/v119/van-amersfoort20a.html.