Publications by J. Rauber

Preprints


J. Rauber, M. Bethge, and W. Brendel
EagerPy: Writing Code That Works Natively with PyTorch, TensorFlow, JAX, and NumPy
2020
URL, BibTeX

Journal Articles


J. Rauber, R. Zimmermann, M. Bethge, and W. Brendel
Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX
Journal of Open Source Software, 5(53), 2607, 2020
URL, DOI, BibTeX
F. Croce, J. Rauber, and M. Hein
Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
International Journal of Computer Vision, 2019
Code, URL, DOI, PDF, BibTeX

Conference Papers


W. Brendel, J. Rauber, M. Kümmerer, I. Ustyuzhaninov, and M. Bethge
Accurate, reliable and fast robustness evaluation
Advances in Neural Information Processing Systems 32, 2019
URL, BibTeX
J. Rauber, E. Fox, and L. Gatys
Modeling patterns of smartphone usage and their relationship to cognitive health
Machine Learning for Health Workshop, NeurIPS 2019, 2019
BibTeX
L. Schott, J. Rauber, W. Brendel, and M. Bethge
Towards the first adversarially robust neural network model on MNIST
International Conference on Learning Representations (ICLR), 2019
URL, BibTeX
W. Brendel, J. Rauber, A. Kurakin, N. Papernot, B. Veliqi, M. Salathé, S. P. Mohanty, and M. Bethge
Adversarial Vision Challenge (Proposal)
32nd Conference on Neural Information Processing Systems (NIPS 2018) Competition Track, 2018
Code, URL, BibTeX
R. Geirhos, C. R. M. Temme, J. Rauber, H. H. Schütt, M. Bethge, and F. A. Wichmann
Generalisation in humans and deep neural networks
Advances in Neural Information Processing Systems 31, 2018
Code, URL, BibTeX
W. Brendel, J. Rauber, and M. Bethge
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
International Conference on Learning Representations, 2018
#adversarial attacks, #adversarial examples, #adversarials
Code, URL, OpenReview, BibTeX
J. Rauber, W. Brendel, and M. Bethge
Foolbox: A Python toolbox to benchmark the robustness of machine learning models
Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning, 2017
#adversarial attacks, #adversarial examples, #adversarials
Code, URL, BibTeX

Technical Reports


J. Rauber and M. Bethge
Fast Differentiable Clipping-Aware Normalization and Rescaling
2020
URL, BibTeX
N. Carlini, A. Athalye, N. Papernot, W. Brendel, J. Rauber, D. Tsipras, I. Goodfellow, and A. Madry
On Evaluating Adversarial Robustness
2019
Code, URL, BibTeX

Book Chapters


W. Brendel, J. Rauber, A. Kurakin, N. Papernot, B. Veliqi, S. P. Mohanty, F. Laurent, M. Salathé, et al.
Adversarial Vision Challenge (Results)
The NeurIPS'18 Competition, Springer, Cham, 2020, ISBN 978-3-030-29135-8
URL, DOI, ISBN, BibTeX

Preprint versions of published papers


R. Geirhos, D. H. J. Janssen, H. H. Schütt, J. Rauber, M. Bethge, and F. A. Wichmann
Comparing deep neural networks against humans: object recognition when the signal gets weaker
arXiv:1706.06969 (superseded by "Generalisation in humans and deep neural networks"), 2017
Code, URL, BibTeX

Abstracts


R. Geirhos, P. Rubisch, J. Rauber, C. R. M. Temme, C. Michaelis, W. Brendel, M. Bethge, and F. A. Wichmann
Inducing a human-like shape bias leads to emergent human-level distortion robustness in CNNs
Journal of Vision, 19(10), 2019
DOI, BibTeX