Representation learning for compression, disentanglement, and out-of-distribution (o.o.d.) robustness
- ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness (R Geirhos, P Rubisch, C Michaelis, M Bethge, FA Wichmann, W Brendel)
- Towards the first adversarially robust neural network model on MNIST (L Schott, J Rauber, M Bethge, W Brendel)
- Generative image modeling using spatial LSTMs (L Theis, M Bethge)
- Improving robustness against common corruptions by covariate shift adaptation (S Schneider, E Rusak, L Eck, O Bringmann, W Brendel, M Bethge)
- Excessive invariance causes adversarial vulnerability (JH Jacobsen, J Behrmann, R Zemel, M Bethge)
- Unsupervised object learning via common fate (M Tangemann, S Schneider, J von Kügelgen, F Locatello, P Gehler, T Brox, M Kümmerer, M Bethge, B Schölkopf)
- Disentanglement and generalization under correlation shifts (CM Funke, P Vicol, K-C Wang, M Kümmerer, R Zemel, M Bethge)
- One-shot segmentation in clutter (C Michaelis, M Bethge, A Ecker)
- Contrastive learning inverts the data generating process (RS Zimmermann, Y Sharma, S Schneider, M Bethge, W Brendel)
- Unsupervised learning of a steerable basis for invariant image representations (M Bethge, S Gerwinn, JH Macke)
- Towards nonlinear disentanglement in natural data with temporal sparse coding (D Klindt, L Schott, Y Sharma, I Ustyuzhaninov, W Brendel, M Bethge, …)