Open Access Research

Multiscale analysis of slow-fast neuronal learning models with noise

Mathieu Galtier 1,2,* and Gilles Wainrib 3

Author Affiliations

1 NeuroMathComp Project Team, INRIA/ENS Paris, 23 avenue d’Italie, Paris, 75013, France

2 School of Engineering and Science, Jacobs University Bremen gGmbH, College Ring 1, P.O. Box 750 561, Bremen, 28725, Germany

3 Laboratoire Analyse Géométrie et Applications, Université Paris 13, 99 avenue Jean-Baptiste Clément, Villetaneuse, France


The Journal of Mathematical Neuroscience 2012, 2:13  doi:10.1186/2190-8567-2-13

Published: 22 November 2012

Abstract

This paper deals with the application of temporal averaging methods to recurrent networks of noisy neurons undergoing a slow, unsupervised modification of their connectivity matrix, called learning. Three time scales arise for these models: (i) the fast neuronal dynamics, (ii) the intermediate external input to the system, and (iii) the slow learning mechanisms. Based on this time-scale separation, we apply an extension of the mathematical theory of stochastic averaging with periodic forcing in order to derive a reduced deterministic model for the connectivity dynamics. We focus on a class of models in which the neuronal activity is linear, in order to understand the specificity of several learning rules (Hebbian, trace, or anti-symmetric learning). In a weakly connected regime, we study the equilibrium connectivity, which gathers the entire ‘knowledge’ of the network about the inputs. We develop an asymptotic method to approximate this equilibrium. We show that the symmetric part of the connectivity post-learning encodes the correlation structure of the inputs, whereas the anti-symmetric part corresponds to the cross-correlation between the inputs and their time derivative. Moreover, the ratio of the time scales appears as an important parameter revealing temporal correlations.
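To make the three-time-scale structure concrete, the following minimal Euler–Maruyama sketch simulates a noisy linear rate network with a periodic input and a slow Hebbian learning rule with linear decay. It is an illustration under stated assumptions, not the paper's exact equations: the parameter names (eps, tau_w, sigma, kappa), the specific input, and the decay term are all illustrative choices.

    import numpy as np

    # Three time scales (illustrative model, not the paper's exact equations):
    #   fast:         eps * dv = (-v + W @ v + u(t)) dt + sqrt(eps) * sigma dB
    #   intermediate: periodic external input u(t) with O(1) period
    #   slow:         dW/dt = (v v^T - kappa * W) / tau_w   (Hebbian rule with decay)

    rng = np.random.default_rng(0)
    n = 5                      # number of neurons
    eps, tau_w = 1e-2, 1e2     # time-scale separation: eps << 1 << tau_w
    sigma, kappa = 0.1, 1.0    # noise level, connectivity decay rate
    dt, T_total = 1e-3, 50.0

    def u(t):
        # Periodic external input evolving on the intermediate time scale.
        return np.sin(2 * np.pi * t + np.arange(n))

    v = np.zeros(n)
    W = np.zeros((n, n))
    for k in range(int(T_total / dt)):
        t = k * dt
        # Fast noisy linear neuronal dynamics.
        drift = (-v + W @ v + u(t)) / eps
        v = v + drift * dt + sigma * np.sqrt(dt / eps) * rng.standard_normal(n)
        # Slow Hebbian modification of the connectivity matrix.
        W = W + (np.outer(v, v) - kappa * W) * dt / tau_w

    # Post-learning, split W into symmetric and anti-symmetric parts
    # (cf. the abstract: correlation vs. cross-correlation structure).
    W_sym = 0.5 * (W + W.T)
    W_anti = 0.5 * (W - W.T)
    print("||W_sym|| =", np.linalg.norm(W_sym),
          " ||W_anti|| =", np.linalg.norm(W_anti))

After learning, comparing W_sym and W_anti gives a rough numerical counterpart of the abstract's claim: the symmetric part tracks the input correlations, while the anti-symmetric part reflects correlations between the inputs and their time derivative.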

Keywords:
slow-fast systems; stochastic differential equations; inhomogeneous Markov process; averaging; model reduction; recurrent networks; unsupervised learning; Hebbian learning; STDP