2019


M. Rolínek, V. Musil, A. Paulus, M. Vlastelica, C. Michaelis, and G. Martius
Optimizing Rank-based Metrics with Blackbox Differentiation
arXiv, 2019
URL, BibTex
W. Brendel, J. Rauber, M. Kümmerer, I. Ustyuzhaninov, and M. Bethge
Accurate, reliable and fast robustness evaluation
Advances in Neural Information Processing Systems 32, 2019
URL, BibTex
J. Rauber, E. Fox, and L. Gatys
Modeling patterns of smartphone usage and their relationship to cognitive health
Machine Learning for Health Workshop, NeurIPS 2019, 2019
BibTex
F. Croce, J. Rauber, and M. Hein
Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
International Journal of Computer Vision, 2019
Code, URL, DOI, PDF, BibTex
S. Haghiri, P. Rubisch, R. Geirhos, F. Wichmann, and U. von Luxburg
Comparison-Based Framework for Psychophysics: Lab versus Crowdsourcing
arXiv, 2019
URL, BibTex
C. Michaelis, B. Mitzkus, R. Geirhos, E. Rusak, O. Bringmann, A. S. Ecker, M. Bethge, and W. Brendel
Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming
Machine Learning for Autonomous Driving Workshop, NeurIPS 2019, 2019
Code, URL, BibTex
E. Creager, D. Madras, J.-H. Jacobsen, M. A. Weis, K. Swersky, T. Pitassi, and R. Zemel
Flexibly Fair Representation Learning by Disentanglement
International Conference on Machine Learning (ICML), 2019
URL, BibTex
W. Brendel and M. Bethge
Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet
International Conference on Learning Representations (ICLR), 2019
BibTex
R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wichmann, and W. Brendel
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
International Conference on Learning Representations (ICLR), 2019
Code, URL, BibTex
A. S. Ecker, F. H. Sinz, E. Froudarakis, P. G. Fahey, S. A. Cadena, E. Y. Walker, E. Cobos, J. Reimer, et al.
A rotation-equivariant convolutional neural network model of primary visual cortex
International Conference on Learning Representations (ICLR), 2019
#v1, #system identification, #microns, #convolutional neural network, #rotation equivariance
Code, URL, PDF, Data, BibTex
L. Schott, J. Rauber, W. Brendel, and M. Bethge
Towards the first adversarially robust neural network model on MNIST
International Conference on Learning Representations (ICLR), 2019
URL, BibTex
N. Carlini, A. Athalye, N. Papernot, W. Brendel, J. Rauber, D. Tsipras, I. Goodfellow, and A. Madry
On Evaluating Adversarial Robustness
arXiv, 2019
Code, URL, BibTex
S. A. Cadena, F. H. Sinz, T. Muhammad, E. Froudarakis, E. Cobos, E. Y. Walker, J. Reimer, M. Bethge, et al.
How well do deep neural networks trained on object recognition characterize the mouse visual system?
NeurIPS Neuro AI Workshop, 2019
#mouse visual cortex, #goal-driven modeling, #object recognition, #deep neural networks, #hierarchical organization
URL, PDF, BibTex
M. F. Günthner, S. A. Cadena, G. H. Denfield, E. Y. Walker, A. S. Tolias, M. Bethge, and A. S. Ecker
Learning Divisive Normalization in Primary Visual Cortex
bioRxiv, 2019
#system identification, #v1, #convolutional neural networks, #divisive normalization
URL, PDF, BibTex
J.-H. Jacobsen, J. Behrmann, R. Zemel, and M. Bethge
Excessive Invariance Causes Adversarial Vulnerability
International Conference on Learning Representations (ICLR), 2019
BibTex
E. Y. Walker, F. H. Sinz, E. Froudarakis, P. G. Fahey, T. Muhammad, A. S. Ecker, E. Cobos, J. Reimer, et al.
Inception loops discover what excites neurons most using deep predictive models
Nature Neuroscience, 2019
#primary visual cortex, #deep neural network, #system identification, #inception
URL, DOI, BibTex
S. A. Cadena, G. H. Denfield, E. Y. Walker, L. A. Gatys, A. S. Tolias, M. Bethge, and A. S. Ecker
Deep convolutional models improve predictions of macaque V1 responses to natural images
PLoS Computational Biology, 2019
URL, DOI, BibTex