Neural Adaptation as Bayesian Inference

Abstract

Capturing stimulus-response relationships is one of the key problems in sensory neuroscience. Due to the stochasticity inherent in neural responses, probabilistic models provide a natural framework for approaching this problem. Generalized linear models (GLMs) are a family of probabilistic models frequently used for characterizing neural spike responses. Popular special cases include the linear-nonlinear-Poisson (LNP) model and history-dependent LNP models. We applied both types of models to data recorded from whisker-sensitive neurons in the right trigeminal ganglion of adult Sprague-Dawley rats stimulated with white noise. We found that the LNP model falls short of explaining the experimental data. Since most of these cells are highly adaptive, a likely explanation for this shortcoming is the inability of LNP models to represent adaptation effects. Here, we explore the idea that adaptation can be understood as a form of Bayesian inference. We use a dynamical latent variable model to infer parameters of the stimulus and, using the inferred parameters, adjust the history-dependent LNP models. This not only improves the spike-prediction performance of these models, but also allows us to study the assumptions about the stimulus encoded in the cells, as well as their rate of adaptation.
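
For reference, a minimal sketch of the standard LNP and history-dependent LNP formulations that this kind of analysis typically assumes (the notation below is introduced here for illustration and is not taken from the paper): the instantaneous firing rate is a static nonlinearity applied to a linear projection of the recent stimulus, optionally augmented with a spike-history term, and spikes are generated by a Poisson process with that rate,

\lambda(t) = f\big(\mathbf{k} \cdot \mathbf{x}(t)\big), \qquad y(t) \sim \mathrm{Poisson}\big(\lambda(t)\,\Delta t\big),

with the history-dependent variant

\lambda(t) = f\big(\mathbf{k} \cdot \mathbf{x}(t) + \mathbf{h} \cdot \mathbf{y}_{\mathrm{hist}}(t)\big),

where \mathbf{x}(t) is the recent stimulus segment, \mathbf{k} the stimulus filter, \mathbf{y}_{\mathrm{hist}}(t) the recent spiking history, \mathbf{h} the spike-history filter, and f a pointwise nonlinearity (e.g., exponential). In the adaptation scheme summarized above, parameters inferred from the latent variable model would modulate terms of this rate equation; the specific coupling is left to the main text.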