Deep Neural Networks

Driven by recent advances in machine learning, increasingly complex artificial neural networks are being developed that are ever more proficient at mimicking the perceptual inference abilities of humans and animals. As a side effect of their popularity in technology, the growing availability and diversity of high-performing neural network models opens a new door for studying the neural mechanisms of perceptual skills such as robust object recognition, transfer learning, or one-shot learning.

Neuroscientists have always studied the brains of a large variety of species because different brains offer different opportunities for understanding neural systems. Our group studies not only neural networks that have been shaped by biology but also artificial ones developed in technology. A large effort in neuroscience goes into connectomics and the building of atlases, with the goal of obtaining descriptions of the brain's neural networks that are as complete as possible. Complete wiring diagrams are important, but they are not enough: we need to understand the algorithms, which are invariant to many variations in their implementation. By analogy, one cannot tell how a radio works just from the wiring diagram of one particular radio. Rather, it is the other way round: a conceptual understanding of the algorithms and computations is crucial for making sense of the wiring diagrams.

We are using the increasing wealth of high-performing artificial neural networks from technology to develop such a conceptual understanding of the algorithms implemented in the brain. In contrast to the brain, for these artificial neural networks we already have complete wiring diagrams, and it is much easier to perform experiments and to efficiently probe different hypotheses about how computations are implemented. Yet we still need to develop theoretical tools to identify and describe the critical features of these networks in a concise way. Building good theories of the complex neural networks engineered in machine learning is thus a proof of principle: if computational neuroscientists can explain these fully observable systems, there is hope that they will eventually be able to understand the natural neural networks of the brain.
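How much easier experiments are on artificial networks can be made concrete with a toy sketch (all names, sizes, and weights below are our own illustrative choices, not taken from any particular study): in a fully observable network, every unit can be recorded and perturbed at will, the analogue of recording from, or lesioning, every cell in a biological circuit.

```python
# Minimal illustrative sketch: a tiny two-layer network with fixed random
# weights in which every internal activation can be read out and perturbed.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: 8 inputs -> 16 hidden units -> 4 outputs.
W1 = rng.standard_normal((16, 8))
W2 = rng.standard_normal((4, 16))

def forward_with_probes(x):
    """Run the network and return the output plus every internal state."""
    pre1 = W1 @ x                 # pre-activations of the hidden layer
    h = np.maximum(pre1, 0.0)     # ReLU hidden activations
    out = W2 @ h                  # linear readout
    return out, {"pre1": pre1, "hidden": h}

x = rng.standard_normal(8)
out, probes = forward_with_probes(x)

# Unlike in the brain, a "lesion experiment" is one line: silence a single
# unit and rerun the identical stimulus to see how the output changes.
h_lesioned = probes["hidden"].copy()
h_lesioned[3] = 0.0
out_lesioned = W2 @ h_lesioned
```

Even this trivial example shows what full access buys: the same stimulus can be replayed exactly, and any hypothesis about a unit's causal role can be tested directly on the recorded activations.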

In a recent workshop we discussed important similarities and differences between artificial and biological neural networks. Below are selected references from our own work that aim to utilize artificial deep neural networks for neuroscience.

Selected References

L. A. Gatys, A. S. Ecker, and M. Bethge
A Neural Algorithm of Artistic Style
arXiv, 2015
#artistic style, #convolutional neural networks, #separating content from style

L. A. Gatys, A. S. Ecker, and M. Bethge
Texture Synthesis Using Convolutional Neural Networks
Advances in Neural Information Processing Systems 28, 2015
#texture synthesis, #ventral stream, #convolutional neural networks, #deep learning

L. Theis and M. Bethge
Generative Image Modeling Using Spatial LSTMs
Advances in Neural Information Processing Systems 28, 2015
#deep learning, #generative modeling, #natural image statistics, #lstm, #mcgsm

M. Kümmerer, L. Theis, and M. Bethge
Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet
ICLR Workshop, 2015
#saliency, #deep learning

L. Theis, S. Gerwinn, F. Sinz, and M. Bethge
In All Likelihood, Deep Belief Is Not Enough
Journal of Machine Learning Research, 12, 3071-3096, 2011
#natural image statistics, #deep belief networks, #boltzmann machines, #deep learning
