Bethge Lab
Publications
CiteME: Can Language Models Accurately Cite Scientific Claims?
Thousands of new scientific papers are published each month. Such information overload complicates researchers' efforts to stay current …
Ori Press, Andreas Hochlehnert, Ameya Prabhu, Vishaal Udandarao, Ofir Press, Matthias Bethge
PDF · Code · Project
Object segmentation from common fate: Motion energy processing enables human-like zero-shot generalization to random dot stimuli
Motion is a crucial cue for human object perception. Previous studies have demonstrated that humans can recognize objects even in …
Matthias Tangemann, Matthias Kümmerer, Matthias Bethge
Reflecting on the State of Rehearsal-free Continual Learning with Pretrained Models
With the advent and recent ubiquity of foundation models, continual learning (CL) has recently shifted from continual training from …
Lukas Thede, Karsten Roth, Olivier J. Henaff, Matthias Bethge, Zeynep Akata
PDF
Infinite dSprites for Disentangled Continual Learning: Separating Memory Edits from Generalization
The ability of machine learning systems to learn continually is hindered by catastrophic forgetting, the tendency of neural networks to …
Sebastian Dziadzio, Çağatay Yıldız, Gido M. van de Ven, Tomasz Trzciński, Tinne Tuytelaars, Matthias Bethge
PDF · Code · Dataset · Project · Video
Adaptation Odyssey in LLMs: Why Does Additional Pretraining Sometimes Fail to Improve?
In the last decade, the generalization and adaptation abilities of deep learning models were typically evaluated on fixed training and …
Fırat Öncel, Matthias Bethge, Beyza Ermis, Mirco Ravanelli, Cem Subakan, Çağatay Yıldız
PDF
Modulated Neural ODEs
Neural ordinary differential equations (NODEs) have been proven useful for learning non-linear dynamics of arbitrary trajectories. …
Ilze Amanda Auzina, Çağatay Yıldız, Sara Magliacane, Matthias Bethge, Efstratios Gavves
PDF · Code
The Entropy Enigma: Success and Failure of Entropy Minimization
Entropy minimization (EM) is frequently used to increase the accuracy of classification models when they’re faced with new data …
Ori Press, Ravid Shwartz-Ziv, Yann LeCun, Matthias Bethge
PDF · Code
RDumb: A simple approach that questions our progress in continual test-time adaptation
Test-Time Adaptation (TTA) allows pre-trained models to be updated to changing data distributions at deployment time. While early work …
Ori Press, Steffen Schneider, Matthias Kümmerer, Matthias Bethge
PDF · Code
Unsupervised object learning via common fate
Learning generative object models from unlabelled videos is a long-standing problem and is required for causal scene modeling. We …
Matthias Tangemann, Steffen Schneider, Julius von Kügelgen, Francesco Locatello, Peter Gehler, Thomas Brox, Matthias Kümmerer, Matthias Bethge, Bernhard Schölkopf
PDF · Code