Bethge Lab
Posts
Two Day Lab Hackathon 2023
Innovative research, collaboration, and fun are at the core of our lab’s mission. At our recent two-day hackathon we explored cutting-edge ideas on group actions, object-centric learning, and stable diffusion.
Matthias Bethge
Feb 13, 2023
0 min read
News
Bethgelab ❤️ ELLIS
Bethgelab is part of
ELLIS
- the European Laboratory for Learning and Intelligent Systems
Matthias Bethge
Jan 13, 2023
0 min read
News
AI sciencepreneurship and startups
Machine learning is rapidly expanding the range of skills that can be applied to relevant problems in the world, either by being more scalable or more precise than human labor. We seek to develop a better understanding of how to build economically viable solutions that best address long-term human needs. We spin off and collaborate with startups such as Maddox AI, Vara, or Aleph Alpha.
Matthias Bethge
Last updated on Feb 15, 2023
1 min read
Research
Neural data analysis, modeling, and tools
We develop machine learning models for neural data analysis to identify the function of biological neurons for inference and learning in the brain (mostly mammalian retina and visual cortex). We are particularly interested in understanding distributed processing in populations of neurons and building tools for automatic model extraction such as functional cell type identification. We collaborate with Thomas Euler, Andreas Tolias and Mackenzie Mathis.
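To make the idea behind functional cell type identification concrete, here is a minimal toy sketch (not the lab's actual pipeline; the data, response-profile shapes, and the tiny k-means routine are all hypothetical): simulated temporal response profiles from two synthetic "cell types" are clustered by their response shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two "cell types" with distinct temporal response
# profiles (transient vs. sustained responses to a step stimulus) plus noise.
t = np.linspace(0.0, 1.0, 50)
transient = np.exp(-t / 0.1)
sustained = 1.0 - np.exp(-t / 0.3)
profiles = np.vstack([
    transient + 0.05 * rng.standard_normal((30, t.size)),
    sustained + 0.05 * rng.standard_normal((30, t.size)),
])

def kmeans(X, k=2, iters=50):
    """Minimal k-means; initialized with the first and last profile so this
    toy example (k=2, ordered data) converges deterministically."""
    centers = X[[0, -1]].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

labels = kmeans(profiles)
```

Real pipelines cluster richer functional descriptors (e.g., responses to many stimuli) and select the number of types from the data, but the principle is the same: cells are grouped by what they do, not where they are.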
Matthias Bethge
Last updated on Feb 15, 2023
2 min read
Research
Behavioral data analysis, modeling, and tools
We collect and use behavioral data to predict where people look and what features they use for visual decision making and memorization. We also build tools for tracking lifelong natural behavior, such as keypoint extraction. We collaborate with Felix Wichmann, Alexander Mathis, Ralf Engbert, and Christoph Teufel.
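A minimal sketch of the keypoint-extraction idea (all numbers are made up, and this is not the actual tracking toolchain): pose-estimation networks typically output one score map per body part, and the keypoint location is read off at the map's peak.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical score map for one body part: a Gaussian bump at the true
# keypoint location, plus a little detector noise.
H, W = 64, 64
true_rc = (40, 22)
yy, xx = np.mgrid[0:H, 0:W]
heatmap = np.exp(-((yy - true_rc[0]) ** 2 + (xx - true_rc[1]) ** 2) / (2 * 3.0 ** 2))
heatmap += 0.05 * rng.random((H, W))

def extract_keypoint(score_map):
    """Return the (row, col) position of the peak of a single score map."""
    return np.unravel_index(np.argmax(score_map), score_map.shape)

keypoint = extract_keypoint(heatmap)
```

Production tools refine this with sub-pixel interpolation and temporal filtering, but the argmax-of-heatmap readout is the core step.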
Matthias Bethge
Last updated on Feb 15, 2023
2 min read
Research
Generative and explainable modeling methods
Discriminative methods learn to map data to labels, but different models with identical i.i.d. test performance may use completely different features for decision making. This can be demonstrated, for example, with carefully designed architectures that exclude the use of certain features. In addition, we use generative methods such as adversarial, controversial, or style-transfer stimuli, which can help reveal the features a neural network relies on, or the features used during inference (analysis-by-synthesis). Sometimes these methods also enable aesthetically compelling image manipulations reminiscent of artistic styles.
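As a minimal illustration of adversarial stimuli, the sketch below applies the fast gradient sign method (FGSM) to a toy linear model (the model, dimensions, and step size are all hypothetical; this is not any specific method of ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: a linear "classifier" whose prediction is the
# sign of the score w @ x.
w = rng.standard_normal(100)
x = rng.standard_normal(100)
y = np.sign(w @ x)  # use the model's own prediction as the label

# FGSM: the gradient of the negative margin -y * (w @ x) w.r.t. x is
# -y * w, so a small step along its sign pushes the score toward (and
# possibly across) the decision boundary while changing each input
# coordinate by at most eps.
eps = 0.2
x_adv = x - eps * y * np.sign(w)

margin_before = y * (w @ x)
margin_after = y * (w @ x_adv)
```

The same principle, applied to deep networks via backpropagated gradients, produces perturbations that are tiny in input space yet reveal which features actually drive the decision.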
Matthias Bethge
Last updated on Feb 15, 2023
2 min read
Research
Probabilistic inference and o.o.d. or few-shot generalization benchmarking
Benchmarking is a fundamental tool for evaluating the ability of an ML algorithm to generalize from previous experience to new situations. The standard academic practice of training and testing on samples from the same distribution does not capture the robustness of biological learning systems acting in an open world. We frequently work on benchmarks to improve the comparability of models and to avoid shortcut learning.
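A tiny illustration of shortcut learning and the resulting i.i.d./o.o.d. gap (synthetic data and a plain least-squares classifier; purely illustrative, not one of our benchmarks): a model trained where a near-noiseless "shortcut" feature correlates with the label exploits it, then fails when that correlation disappears at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shortcut_corr):
    """Binary labels with a noisy, genuinely predictive 'core' feature and a
    near-noiseless 'shortcut' feature that agrees with the label with
    probability shortcut_corr."""
    y = rng.choice([-1.0, 1.0], n)
    core = y + rng.standard_normal(n)
    agree = np.where(rng.random(n) < shortcut_corr, 1.0, -1.0)
    shortcut = y * agree + 0.1 * rng.standard_normal(n)
    return np.column_stack([core, shortcut]), y

X_train, y_train = make_data(1000, 0.95)  # shortcut works during training
X_test, y_test = make_data(1000, 0.5)     # shortcut is uninformative o.o.d.

# A least-squares linear classifier latches onto the low-noise shortcut.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

iid_acc = np.mean(np.sign(X_train @ w) == y_train)
ood_acc = np.mean(np.sign(X_test @ w) == y_test)
```

The gap between `iid_acc` and `ood_acc` is exactly what i.i.d. test sets hide and o.o.d. benchmarks are designed to expose.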
Matthias Bethge
Last updated on Feb 15, 2023
2 min read
Research
Representation learning for compression, disentangling, and o.o.d. robustness
Lifelong learning requires making past experiences reusable in the future. Representations of these experiences need to be memory-efficient (compressed) and compositional (disentangled) to facilitate reliable one-shot generalization to new situations that cannot be regarded as samples from a known distribution (out-of-distribution robustness).
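As a minimal illustration of compressed representations (plain PCA on synthetic data; disentangling methods proper go well beyond this, and all numbers here are hypothetical): observations that secretly live on a low-dimensional subspace can be stored as short codes with almost no reconstruction loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 500 observations that actually live on a
# 3-dimensional subspace of a 50-dimensional space, plus small noise.
latents = rng.standard_normal((500, 3))
mixing = rng.standard_normal((3, 50))
X = latents @ mixing + 0.01 * rng.standard_normal((500, 50))

# PCA via SVD: keep the top-3 principal directions as the representation.
Xc = X - X.mean(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
codes = Xc @ Vt[:3].T   # encode: 50-dim observation -> 3-dim code
X_hat = codes @ Vt[:3]  # decode back to 50 dimensions

rel_err = np.linalg.norm(Xc - X_hat) / np.linalg.norm(Xc)
```

Compression alone does not make the code compositional; that is where disentangling objectives come in, aiming for codes whose individual dimensions can be recombined to describe genuinely new situations.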
Matthias Bethge
Last updated on Feb 15, 2023
1 min read
Research