BETHGE LAB

AI Research Group at the University of Tübingen

Research

Neuro AI – Autonomous Lifelong Learning in Machines and Brains

Human learning is rich in feats we have yet to fully mimic in machine learning—such as open-ended knowledge acquisition or cognitive mapping of the environment and actions for self-logging, navigation, reflection and planning. Our mission is to develop agentic systems that can learn, adapt, and generalize over time, mirroring the open-ended nature of learning both by individual humans and collectively in science. Our approach is rooted in data-centric machine learning, focusing on open-ended evaluation and scalable compositional learning. To this end, we explore multi-modal foundation models that support rapid retrieval, reuse, and compositional integration of selected knowledge, enabling scalable and flexible learning. Our previous research on these topics can be summarized under the following headings:

Open-ended model evaluation & benchmarking

Machine learning has entered the post-dataset era: models are applied to ever-increasing data and tasks, with dynamically evolving evaluation criteria that include safety, data contamination, and computing costs in addition to performance. Developing new concepts and tools for lifelong/infinite benchmarking, together with the ability to efficiently democratize evaluation, is therefore increasingly vital for transparent model assessment. At the same time, this opens many opportunities for using machine learning beyond prediction, towards understanding in continual scientific model building.

Language Model Agents

Language model agents — AI systems capable of autonomous thinking, communication, and reasoning — enable rich, natural human-machine interactions and collaboration on complex tasks. We aim to develop assistants for theorem proving, automating scientific discovery, and aggregating information from the web to make reliable, near-future predictions in uncertain scenarios.

Lifelong compositional, scalable and object-centric learning

Lifelong learning requires making past experiences reusable in the future. Common regularization methods for preventing catastrophic forgetting in lifelong learning do not scale. We hypothesize that compositional learning is key to scalable lifelong learning in humans. The object-centric nature of human perception is a strong indication of an inherently compositional representation of the world. We combine conceptual research on compositionality and object-centric perception with scalable, practically relevant lifelong learning methods and benchmarks, with the long-term goal of unifying these two lines of work.
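To make the scaling concern concrete, consider the quadratic penalty used by Elastic Weight Consolidation (Kirkpatrick et al., 2017), a standard regularization method against catastrophic forgetting. This is a minimal NumPy sketch, not the lab's own method: each consolidated task contributes a term λ/2 · Σᵢ Fᵢ (θᵢ − θᵢ*)², so the stored Fisher estimates and anchor parameters grow linearly with the number of tasks.

```python
import numpy as np

def ewc_penalty(theta, anchors, lam=1.0):
    """EWC-style regularizer over all previously consolidated tasks.

    theta   -- current parameter vector
    anchors -- list of (theta_star, fisher) pairs, one per past task:
               the parameters after that task and a diagonal Fisher
               information estimate of their importance.
    """
    penalty = 0.0
    for theta_star, fisher in anchors:
        # Quadratic pull towards each task's solution, weighted by
        # how important each parameter was for that task.
        penalty += 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)
    return penalty

# Memory and compute grow with len(anchors) -- one (theta_star, fisher)
# pair per task -- which is the scaling bottleneck noted above.
theta = np.array([1.0, 2.0])
anchors = [(np.array([1.0, 1.0]), np.array([1.0, 2.0]))]
print(ewc_penalty(theta, anchors, lam=2.0))
```

Compositional approaches aim to sidestep this accumulation by reusing shared components across tasks instead of anchoring every parameter per task.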

Modeling brain representations & mechanistic interpretability

We develop machine learning models for neural data analysis to understand how populations of biological neurons perform inference and learning in the brain. We are particularly interested in understanding the principles that govern distributed processing in populations of neurons. To this end, we build and benchmark digital twins and detail-on-demand models of certain brain areas (primarily mammalian retina and visual cortex), and develop machine learning tools for interpreting, comparing and ultimately understanding the representations and computations in these neural networks.

Attention in Humans and Machines

Human attention facilitates active perception and inference and can serve as a signature of lifelong cognitive navigation. We want to understand how humans benefit from this mechanism and how it may help us improve attention mechanisms in machine learning. We build and benchmark models of human attention in various modalities (image and video saliency, scanpath prediction, eye movements in VR) and aim to include them as building blocks in downstream models of computer vision tasks and human behavior. We collaborate with Felix Wichmann, Alexander Mathis, Ralf Engbert, and Christoph Teufel.

AI sciencepreneurship and startups

Machine learning is rapidly expanding the range of skills that can be applied to relevant real-world problems, either by being more scalable or more precise than human labor. We seek a better understanding of how to build economically viable solutions that best address long-term human needs. We spin off and collaborate with startups such as Maddox AI and Black Forest Labs.

News

Two Day Lab Hackathon 2023
Innovative research, collaboration, and fun are at the core of our lab’s mission. At our recent two-day hackathon we explored cutting-edge ideas on group actions, object-centric learning, and stable diffusion.
Bethgelab ❤️ ELLIS
Bethgelab is part of ELLIS - the European Laboratory for Learning and Intelligent Systems

Broader Impact

Impact Beyond Science

BWKI

The Bundeswettbewerb für Künstliche Intelligenz (BWKI) is a federal competition for artificial intelligence (AI) in Germany, Austria and recently Switzerland. It is organized by the Tübingen AI Center funded by the Carl-Zeiss-Stiftung and is aimed at promoting interest and talent in AI among young people. The competition is open to students from different age groups, and participants work in teams to develop innovative solutions for real-world problems using AI technologies. The BWKI also aims to encourage the development of AI skills and knowledge, which is becoming increasingly important in today’s digital age.

IT4Kids

IT4Kids is a non-profit organization in Germany that provides programming education for primary and lower secondary school students. Their mission is to support the digital transformation of education in schools by teaching coding and programming skills to children in a fun and engaging way.