Open Access Research

Multiscale analysis of slow-fast neuronal learning models with noise

Mathieu Galtier12* and Gilles Wainrib3

Author Affiliations

1 NeuroMathComp Project Team, INRIA/ENS Paris, 23 avenue d’Italie, Paris, 75013, France

2 School of Engineering and Science, Jacobs University Bremen gGmbH, College Ring 1, P.O. Box 750 561, Bremen, 28725, Germany

3 Laboratoire Analyse Géométrie et Applications, Université Paris 13, 99 avenue Jean-Baptiste Clément, Villetaneuse, Paris, France


The Journal of Mathematical Neuroscience 2012, 2:13  doi:10.1186/2190-8567-2-13

The electronic version of this article is the complete one and can be found online at: http://www.mathematical-neuroscience.com/content/2/1/13


Received: 19 April 2012
Accepted: 26 October 2012
Published: 22 November 2012

© 2012 M. Galtier, G. Wainrib; licensee Springer

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper deals with the application of temporal averaging methods to recurrent networks of noisy neurons undergoing a slow and unsupervised modification of their connectivity matrix called learning. Three time-scales arise for these models: (i) the fast neuronal dynamics, (ii) the intermediate external input to the system, and (iii) the slow learning mechanisms. Based on this time-scale separation, we apply an extension of the mathematical theory of stochastic averaging with periodic forcing in order to derive a reduced deterministic model for the connectivity dynamics. We focus on a class of models where the activity is linear to understand the specificity of several learning rules (Hebbian, trace, or anti-symmetric learning). In a weakly connected regime, we study the equilibrium connectivity which gathers the entire ‘knowledge’ of the network about the inputs. We develop an asymptotic method to approximate this equilibrium. We show that the symmetric part of the connectivity post-learning encodes the correlation structure of the inputs, whereas the anti-symmetric part corresponds to the cross-correlation between the inputs and their time derivative. Moreover, the time-scale ratio appears as an important parameter revealing temporal correlations.

Keywords:
slow-fast systems; stochastic differential equations; inhomogeneous Markov process; averaging; model reduction; recurrent networks; unsupervised learning; Hebbian learning; STDP

1 Introduction

Complex systems are made of a large number of interacting elements leading to non-trivial behaviors. They arise in various areas of research such as biology, social sciences, physics or communication networks. In particular in neuroscience, the nervous system is composed of billions of interconnected neurons interacting with their environment. Two specific features of this class of complex systems are that (i) external inputs and (ii) internal sources of random fluctuations influence their dynamics. Their theoretical understanding is a great challenge and involves high-dimensional non-linear mathematical models integrating non-autonomous and stochastic perturbations.

Modeling these systems gives rise to many different scales both in space and in time. In particular, learning processes in the brain involve three time-scales: from neuronal activity (fast), external stimulation (intermediate) to synaptic plasticity (slow). Here, the fast time-scale corresponds to a few milliseconds and the slow time-scale to minutes/hours, and the intermediate time-scale generally ranges between the fast and slow scales, although some stimuli may be faster than the neuronal activity time-scale (e.g., submillisecond auditory signals [1]). The separation of these time-scales is an important and useful property in their study. Indeed, multiscale methods appear particularly relevant to handle and simplify such complex systems.

First, the stochastic averaging principle [2,3] is a powerful tool to analyze the impact of noise on slow-fast dynamical systems. This method relies on approximating the fast dynamics by its quasi-stationary measure and averaging the slow evolution with respect to this measure. In the asymptotic regime of perfect time-scale separation, this leads to a slow reduced system whose analysis enables a better understanding of the original stochastic model.

Second, periodic averaging theory [4], which was originally developed for celestial mechanics, is particularly relevant to study the effect of fast deterministic and periodic perturbations (external input) on dynamical systems. This method also leads to a reduced model where the external perturbation is time-averaged.

It seems appropriate to gather these two methods to address our case of a noisy and input-driven slow-fast dynamical system. This combined approach provides a novel way to understand the interactions between the three time-scales relevant in our models. More precisely, we will consider the following class of multiscale stochastic differential equations (SDEs), with $\epsilon_1, \epsilon_2 > 0$ two small parameters:

$$\begin{cases} dv^{\epsilon} = \dfrac{1}{\epsilon_1}\, F\!\big(v^{\epsilon}, w^{\epsilon}, u(t/\epsilon_2)\big)\,dt + \dfrac{1}{\sqrt{\epsilon_1}}\,\Sigma\,dB(t),\\[4pt] dw^{\epsilon} = G(v^{\epsilon}, w^{\epsilon})\,dt, \end{cases} \qquad (1)$$

where $v^{\epsilon} \in \mathbb{R}^p$ represents the fast activity of the individual elements, $w^{\epsilon} \in \mathbb{R}^q$ represents the connectivity weights that vary slowly due to plasticity, and $u(t) \in \mathbb{R}^p$ represents the value of the external input at time t. Random perturbations are included in the form of a diffusion term, and $(B(t))$ is a standard Brownian motion.
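Although the rest of the paper treats system (1) analytically, it is useful to keep in mind how such a slow-fast SDE is simulated directly. The following is a minimal Euler-Maruyama sketch of (1); the drift F, learning rule G, input u, and all numerical parameters are illustrative placeholders rather than quantities from the paper, and the time step must be kept small compared to $\epsilon_1$.

```python
import numpy as np

def simulate_slow_fast(F, G, u, Sigma, eps1, eps2, v0, w0, T, dt, seed=0):
    """Euler-Maruyama scheme for system (1):
    dv = (1/eps1) F(v, w, u(t/eps2)) dt + (1/sqrt(eps1)) Sigma dB(t),
    dw = G(v, w) dt.
    """
    rng = np.random.default_rng(seed)
    v, w = np.asarray(v0, dtype=float), np.asarray(w0, dtype=float)
    for k in range(int(T / dt)):
        t = k * dt
        dB = rng.normal(0.0, np.sqrt(dt), size=v.shape)
        v = v + (dt / eps1) * F(v, w, u(t / eps2)) + (Sigma @ dB) / np.sqrt(eps1)
        w = w + dt * G(v, w)
    return v, w
```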

We are interested in the double limit $\epsilon_1 \to 0$ and $\epsilon_2 \to 0$ to describe the evolution of the slow variable w in the asymptotic regime where both the variable v and the external input are much faster than w. This asymptotic regime corresponds to the study of a neuronal network in which both the external input u and the neuronal activity v operate on a faster time-scale than the slow plasticity-driven evolution of the synaptic weights w. To account for the possible difference of time-scales between v and the input, we introduce the time-scale ratio $\mu = \epsilon_1/\epsilon_2 \in [0, \infty]$. In the interesting case where $\mu \in (0, \infty)$, one needs to understand the long-time behavior of the rescaled periodically forced SDE, for any fixed $w_0$,

$$dv = F(v, w_0, \mu t)\,dt + \Sigma(v, w_0)\,dB(t).$$

Recently, in an important contribution [5], a precise understanding of the long-time behavior of such processes has been obtained using methods from partial differential equations. In particular, conditions ensuring the existence of a periodic family of probability measures to which the law of v converges as time grows have been identified, together with a sharp estimation of the speed of mixing. These results are at the heart of the extension of the classical stochastic averaging principle [2] to the case of periodically forced slow-fast SDEs [6]. As a result, we obtain a reduced equation describing the slow evolution of variable w in the form of an ordinary differential equation,

$$\frac{dw}{dt} = \bar{G}(w),$$

where $\bar{G}$ is constructed as an average of G with respect to a specific probability measure, as explained in Section 2.

This paper first introduces the appropriate mathematical framework and then focuses on applying these multiscale methods to learning neural networks.

The individual elements of these networks are neurons or populations of neurons. A common assumption at the basis of mathematical neuroscience [7] is to model their behavior by a stochastic differential equation which is made of four different contributions: (i) an intrinsic dynamics term, (ii) a communication term, (iii) a term for the external input, and (iv) a stochastic term for the intrinsic variability. Assuming that their activity is represented by the fast variable $v \in \mathbb{R}^n$, the first equation of system (1) is a generic representation of a neural network (the function F corresponds to the first three terms contributing to the dynamics). In the literature, the level of non-linearity of the function F ranges from a linear (or almost-linear) system to spiking neuron dynamics [8], yet the structure of the system is universal.

These neurons are interconnected through a connectivity matrix which represents the strength of the synapses connecting the real neurons together. The slow modification of the connectivity between the neurons is commonly thought to be the essence of learning. Unsupervised learning rules update the connectivity exclusively based on the value of the activity variable. Therefore, this mechanism is represented by the slow equation above, where $W \in \mathbb{R}^{n \times n}$ is the connectivity matrix and G is the learning rule. Probably the most famous of these rules is the Hebbian learning rule introduced in [9]. It says that if both neurons A and B are active at the same time, then the synapses from A to B and from B to A should be strengthened proportionally to the product of the activities of A and B. There are many different variations of this correlation-based principle, which can be found in [10,11]. Another recent, unsupervised, biologically motivated learning rule is the spike-timing-dependent plasticity (STDP) reviewed in [12]. It is similar to Hebbian learning except that it focuses on causation instead of correlation and that it occurs on a faster time-scale. Both of these types of rules correspond to G being quadratic in v.

The previous literature about dynamic learning networks is abundant, yet we take a significantly different approach to the problem. A historical focus was the understanding of feedforward deterministic networks [13-15]. Another approach consisted in precomputing the connectivity of a recurrent network according to the principles underlying the Hebbian rule [16]. Actually, most current research in the field is focused on STDP and is based on the precise times of the spikes, making them explicit in computations [17-20]. Our approach is different from the others regarding at least one of the following points: (i) we consider recurrent networks, (ii) we study the evolution of the coupled system activity/connectivity, and (iii) we consider bounded dynamical systems for the activity without requiring them to be spiking. Besides, our approach is a rigorous mathematical analysis in a field where most results rely heavily on heuristic arguments and numerical simulations. To our knowledge, this is the first time such models expressed in a slow-fast SDE formalism are analyzed using temporal averaging principles.

The purpose of this application is to understand what the network learns from its exposure to time-dependent inputs. In other words, we are interested in the evolution of the connectivity variable, which evolves on a slow time-scale, under the influence of the external input and some noise added on the fast variable. More precisely, we intend to explicitly compute the equilibrium connectivities of such systems. This final matrix corresponds to the knowledge the network has extracted from the inputs. Although the derivation of the results is mathematically tough for untrained readers, we have tried to extract widely understandable conclusions from our mathematical results, and we believe this paper brings novel elements to the debate about the role and mechanisms of learning in large-scale networks.

Although the averaging method is a generic principle, we have made significant assumptions to keep the analysis of the averaged system mathematically tractable. In particular, we will assume that the activity evolves according to a linear stochastic differential equation. This is not very realistic when modeling individual neurons, but it seems more reasonable for modeling populations of neurons; see Chapter 11 of [7].

The paper is organized as follows. Section 2 is devoted to introducing the temporal averaging theory. Theorem 2.2 is the main result of this section. It provides the technical tool to tackle learning neural networks. Section 3 corresponds to the application of the mathematical tools developed in the previous section to the models of learning neural networks. A generic model is described and three particular models of increasing complexity are analyzed: first Hebbian learning, then trace learning, and finally STDP learning, all with linear activities. Finally, Section 4 is a discussion of the consequences of the previous results from the viewpoint of their biological interpretation.

2 Averaging principles: theory

In this section, we present multiscale theoretical results concerning stochastic averaging of periodically forced SDEs (Section 2.3). These results combine ideas from singular perturbations, classical periodic averaging and stochastic averaging principles. Therefore, we recall briefly, in Sections 2.1 and 2.2, several basic features of these principles, providing several examples that are closely related to the application developed in Section 3.

2.1 Periodic averaging principle

We present here an example of a slow-fast ordinary differential equation perturbed by a fast external periodic input. We have chosen this example since it readily illustrates many ideas that will be developed in the following sections. In particular, this example shows how the ratio between the time-scale separation of the system and the time-scale of the input appears as a new crucial parameter.

Example 2.1 Consider the following linear time-inhomogeneous dynamical system with $\epsilon_1, \epsilon_2 > 0$ two parameters:

$$\frac{dv^{\epsilon}}{dt} = \frac{1}{\epsilon_1}\left(-v^{\epsilon} + \sin\!\left(\frac{t}{\epsilon_2}\right)\right), \qquad \frac{dw^{\epsilon}}{dt} = -w^{\epsilon} + (v^{\epsilon})^2.$$

This system is particularly handy since one can solve the first ordinary differential equation analytically, that is,

$$v(t) = \frac{1}{1+\mu^2}\left(\sin\!\left(\frac{t}{\epsilon_2}\right) - \mu\cos\!\left(\frac{t}{\epsilon_2}\right)\right) + v_0\, e^{-t/\epsilon_1},$$

where we have introduced the time-scale ratio

$$\mu := \frac{\epsilon_1}{\epsilon_2}.$$

In this system, one can distinguish various asymptotic regimes when ϵ 1 and ϵ 2 are small according to the asymptotic value of μ:

• Regime 1: Slow input, $\mu = 0$:

First, if $\epsilon_1 \to 0$ and $\epsilon_2$ is fixed, then $v(t)$ is close to $\sin(t/\epsilon_2)$, and from geometric singular perturbation theory [21,22] one can approximate the slow variable $w^{\epsilon}$ by the solution of

$$\frac{dw}{dt} = -w + \sin^2\!\left(\frac{t}{\epsilon_2}\right).$$

Now taking the limit $\epsilon_2 \to 0$ and applying the classical averaging principle [4] for periodically driven differential equations, one can approximate $w^{\epsilon}$ by the solution of

$$\frac{dw}{dt} = -w + \frac{1}{2},$$

since $\frac{1}{2\pi}\int_0^{2\pi}\sin^2(s)\,ds = \frac{1}{2}$.

• Regime 2: Fast input, $\mu = \infty$:

If $\epsilon_2 \to 0$ and $\epsilon_1$ is fixed, then the classical averaging principle implies that $v^{\epsilon}$ is close to the solution of

$$\frac{dv}{dt} = -\frac{v}{\epsilon_1},$$

so that $w^{\epsilon}$ can be approximated by

$$\frac{dw}{dt} = -w + \left(v_0\, e^{-t/\epsilon_1}\right)^2,$$

and when $\epsilon_1 \to 0$, one does not recover the same asymptotic behavior as in Regime 1.

• Regime 3: Time-scales matching, $0 < \mu < \infty$:

Now consider the intermediate case where $\epsilon_1$ is asymptotically proportional to $\epsilon_2$. In this case, $v^{\epsilon}$ can be approximated on the fast time-scale $t/\epsilon_1$ by the periodic solution $\bar{v}_{\mu}(t) = \frac{1}{1+\mu^2}\big(\sin(\mu t) - \mu\cos(\mu t)\big)$ of $\frac{dv}{dt} = -v + \sin(\mu t)$. As a consequence, $w^{\epsilon}$ will be close to the solution of

$$\frac{dw}{dt} = -w + \frac{1}{2(1+\mu^2)},$$

since $\frac{1}{2\pi}\int_0^{2\pi} \bar{v}_{\mu}(t/\mu)^2\,dt = \frac{1}{2(1+\mu^2)}$.

Thus, we have seen in this example that

1. the two limits $\epsilon_1 \to 0$ and $\epsilon_2 \to 0$ do not commute,

2. the ratio μ between the internal time-scale separation $\epsilon_1$ and the input time-scale $\epsilon_2$ is a key parameter in the study of slow-fast systems subject to a time-dependent perturbation.
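These conclusions are easy to check numerically. The sketch below integrates Example 2.1 with a forward Euler scheme and compares the long-time value of $w^{\epsilon}$ with the Regime 3 prediction $\frac{1}{2(1+\mu^2)}$; the step size and horizon are illustrative choices, with dt kept well below $\epsilon_1$.

```python
import numpy as np

def long_time_w(eps1, eps2, T=20.0, dt=1e-5):
    """Forward Euler for Example 2.1; returns w(T)."""
    v, w = 0.0, 0.0
    for k in range(int(T / dt)):
        t = k * dt
        v += (dt / eps1) * (-v + np.sin(t / eps2))
        w += dt * (-w + v * v)
    return w

eps2 = 1e-3
for mu in (0.1, 1.0, 10.0):
    print(mu, long_time_w(eps1=mu * eps2, eps2=eps2), 1 / (2 * (1 + mu**2)))
```

For small μ the value approaches the Regime 1 limit 1/2, while for large μ it vanishes as in Regime 2, consistently with the non-commutation of the two limits.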

2.2 Stochastic averaging principle

Time-scale separation is a key property to investigate the dynamical behavior of non-linear multiscale systems, with techniques ranging from averaging principles to geometric singular perturbation theory. This property appears to be also crucial to understanding the impact of noise. Instead of carrying out a small-noise analysis, a multiscale approach based on the stochastic averaging principle [2] can be a powerful tool to unravel subtle interplays between noise properties and non-linearities. More precisely, consider a system of SDEs in $\mathbb{R}^{p+q}$:

$$\begin{cases} dv_t^{\epsilon} = \dfrac{1}{\epsilon}\, F(v_t^{\epsilon}, w_t^{\epsilon})\,dt + \dfrac{1}{\sqrt{\epsilon}}\,\Sigma(v_t^{\epsilon}, w_t^{\epsilon})\,dB(t),\\[4pt] dw_t^{\epsilon} = G(v_t^{\epsilon}, w_t^{\epsilon})\,dt, \end{cases}$$

with initial conditions $v^{\epsilon}(0) = v_0$, $w^{\epsilon}(0) = w_0$, and where $w^{\epsilon} \in \mathbb{R}^q$ is called the slow variable and $v^{\epsilon} \in \mathbb{R}^p$ the fast variable, with F, G, Σ smooth functions ensuring existence and uniqueness for the solution $(v^{\epsilon}, w^{\epsilon})$, and $B(t)$ a p-dimensional standard Brownian motion defined on a filtered probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Time-scale separation is encoded in the small parameter ϵ, which denotes in this section a single positive real number.

In order to approximate the behavior of $(v^{\epsilon}, w^{\epsilon})$ for small ϵ, the idea is to average out the equation for the slow variable with respect to the stationary distribution of the fast one. More precisely, one first assumes that for each $w \in \mathbb{R}^q$ fixed, the frozen fast SDE,

$$dv_t = F(v_t, w)\,dt + \Sigma(v_t, w)\,dB(t),$$

admits a unique invariant measure, denoted $\rho_w(dv)$. Then, one defines the averaged drift vector field $\bar{G}$ by

$$\bar{G}(w) := \int_{\mathbb{R}^p} G(v, w)\,\rho_w(dv), \qquad (2)$$

and w the solution of $\frac{dw}{dt} = \bar{G}(w)$ with the initial condition $w(0) = w_0$. Under some dissipativity assumptions, the stochastic averaging principle [2] states:

Theorem 2.1 For any $\delta > 0$ and $T > 0$,

$$\lim_{\epsilon \to 0} \mathbb{P}\left[\sup_{t \in [0,T]} \|w_t^{\epsilon} - w_t\|^2 > \delta\right] = 0. \qquad (3)$$

As a consequence, analyzing the behavior of the deterministic solution w can help to understand useful features of the stochastic process ( v ϵ , w ϵ ) .

Example 2.2 In this example, we consider a system similar to that of Example 2.1, but with a noise term instead of the periodic perturbation. Namely, we consider $(v^{\epsilon}, w^{\epsilon})$ the solution of the system of SDEs,

$$dv^{\epsilon} = -\frac{1}{\epsilon}\, v^{\epsilon}\,dt + \frac{\sigma}{\sqrt{\epsilon}}\,dB(t), \qquad dw^{\epsilon} = \left(-w^{\epsilon} + (v^{\epsilon})^2\right)dt,$$

with ϵ > 0 a small parameter and σ > 0 a constant. From Theorem 2.1, the stochastic slow variable $w^{\epsilon}$ can be approximated in the sense of (3) by the deterministic solution w of

$$\frac{dw}{dt} = \int_{v \in \mathbb{R}} \left(-w + v^2\right)\rho(dv),$$

where ρ ( d v ) is the stationary measure of the linear diffusion process,

$$dv = -v\,dt + \sigma\,dB(t),$$

that is,

$$\rho(dv) = \frac{1}{\sigma\sqrt{\pi}}\, e^{-v^2/\sigma^2}\,dv.$$

Consequently, $w^{\epsilon}$ can be approximated in the limit $\epsilon \to 0$ by the solution of

$$\frac{dw}{dt} = -w + \frac{\sigma^2}{2}.$$

Applying (3) leads to the following result: for any $T > 0$ and $\delta > 0$,

$$\lim_{\epsilon \to 0} \mathbb{P}\left[\sup_{t \in [0,T]} \left| w_t^{\epsilon} - \left(w_0 - \frac{\sigma^2}{2}\right)e^{-t} - \frac{\sigma^2}{2} \right|^2 > \delta\right] = 0.$$

Interestingly, the asymptotic behavior of $w^{\epsilon}$ for small ϵ is characterized by a deterministic trajectory that depends on the strength σ of the noise applied to the system. Thus, the stochastic averaging principle appears particularly interesting when unraveling the impact of noise strength on slow-fast systems.
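This convergence can be illustrated by a direct simulation. The sketch below integrates Example 2.2 with an Euler-Maruyama scheme and compares $w_T^{\epsilon}$ with the averaged trajectory $(w_0 - \frac{\sigma^2}{2})e^{-T} + \frac{\sigma^2}{2}$; all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
eps, sigma = 1e-4, 0.5
dt, T = 1e-6, 2.0
v, w = 0.0, 1.0                                  # initial condition w0 = 1
for _ in range(int(T / dt)):
    v += -(dt / eps) * v + sigma * np.sqrt(dt / eps) * rng.normal()
    w += dt * (-w + v * v)
w_avg = (1.0 - sigma**2 / 2) * np.exp(-T) + sigma**2 / 2
print(w, w_avg)                                  # close for small eps
```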

Many other results have been developed since, extending the set-up to the case where the slow variable has a diffusion component or to infinite-dimensional settings for instance, and also refining the convergence study, providing homogenization results concerning the limit of $\epsilon^{-1/2}(w^{\epsilon} - w)$ or establishing large deviation principles (see [23] for a recent monograph). However, fewer results are available in the case of non-homogeneous SDEs, that is, when the system is perturbed by an external time-dependent signal. This setting is of particular interest in the framework of stochastic learning models, and we present the main relevant mathematical results in the following section.

2.3 Double averaging principle

Combining the ideas of periodic and stochastic averaging introduced previously, we present here theoretical results concerning multiscale SDEs driven by an external time-periodic input. Consider $(v^{\epsilon}, w^{\epsilon})$ the solution of

$$\begin{cases} dv^{\epsilon} = \dfrac{1}{\epsilon_1}\, F\!\left(v^{\epsilon}, w^{\epsilon}, \dfrac{t}{\epsilon_2}\right)dt + \dfrac{1}{\sqrt{\epsilon_1}}\,\Sigma(v^{\epsilon}, w^{\epsilon})\,dB(t),\\[4pt] dw^{\epsilon} = G(v^{\epsilon}, w^{\epsilon})\,dt, \end{cases} \qquad (4)$$

with $t \mapsto F(v, w, t) \in \mathbb{R}^p$ a τ-periodic function and $\epsilon = (\epsilon_1, \epsilon_2) \in (\mathbb{R}_+^*)^2$. The parameter $\epsilon_1$ represents the internal time-scale separation and $\epsilon_2$ the input time-scale. We consider the case where both $\epsilon_1$ and $\epsilon_2$ are small, that is, a strong time-scale separation between the fast variable $v^{\epsilon} \in \mathbb{R}^p$ and the slow one $w^{\epsilon} \in \mathbb{R}^q$, together with a fast periodic modulation of the fast drift $F(v, w, \cdot)$.

We further denote $z = (v, w)$.

Definition 2.1 We define the asymptotic time-scale ratio

$$\mu := \lim_{|\epsilon| \to 0} \frac{\epsilon_1}{\epsilon_2}. \qquad (5)$$

Accordingly, we denote by $\lim^{\mu}_{|\epsilon| \to 0}$ the distinguished limit $\epsilon_1 \to 0$, $\epsilon_2 \to 0$ with $\epsilon_1/\epsilon_2 \to \mu$.

The following assumption is made to ensure the existence and uniqueness of a strong solution to system (4). In the following, $\langle z_1, z_2 \rangle$ denotes the usual scalar product for vectors.

Assumption 2.1 Existence and uniqueness of a strong solution

(i) The functions F, G, and Σ are locally Lipschitz continuous in the space variable z. More precisely, for any $R > 0$, there exists a constant $\alpha_R$ such that

$$\|F(z) - F(z')\| \le \alpha_R\, \|z - z'\| \quad \text{for any } z, z' \in \mathbb{R}^{p+q} \text{ with } \|z\| \le R \text{ and } \|z'\| \le R.$$

(ii) There exists a constant $R > 0$ such that

$$\sup_{\|z\| > R,\ t > 0}\ \left\langle \big(F(z, t), G(z)\big),\ \frac{z}{\|z\|^2} \right\rangle < 0.$$

To control the asymptotic behavior of the fast variable, one further assumes the following.

Assumption 2.2 Asymptotic behavior of the fast process:

(i) The diffusion matrix Σ is bounded,

$$\exists\, M_{\Sigma} > 0 \ \text{s.t.}\ \forall z,\ \|\Sigma(z)\| < M_{\Sigma},$$

and uniformly non-degenerate,

$$\exists\, \eta_0 > 0 \ \text{s.t.}\ \forall v, z,\ \left\langle \Sigma(z)\Sigma(z)^{\top} v, v \right\rangle \ge \eta_0 \|v\|^2.$$

(ii) There exists $r_0 < 0$ such that for all $t \ge 0$ and for all $z, x \in \mathbb{R}^{p+q}$,

$$\left\langle \nabla_z F(z, t)\, x,\ x \right\rangle \le r_0 \|x\|^2.$$

According to the value of $\mu \in \{0\} \cup \mathbb{R}_+^* \cup \{\infty\}$, the stochastic averaging principle is based on a description of the asymptotic behavior of various rescaled fast frozen processes. More precisely, under Assumptions 2.1 and 2.2, one can deduce the following:

• For any fixed $w_0 \in \mathbb{R}^q$ and $t_0 > 0$ fixed, the law of the rescaled time-homogeneous frozen process,

$$dv = F(v, w_0, t_0)\,dt + \Sigma(v, w_0)\,dB(t),$$

converges exponentially fast to a unique invariant probability measure denoted by $\rho_{w_0, t_0}(dv)$.

• For any fixed $w_0 \in \mathbb{R}^q$, there exists a $\tau/\mu$-periodic evolution system of measures $\nu_{\mu}^{w_0}(t, dv)$, different from $\rho_{w_0, t}(dv)$ above, such that the law of the rescaled time-inhomogeneous frozen process,

$$dv = F(v, w_0, \mu t)\,dt + \Sigma(v, w_0)\,dB(t), \qquad (6)$$

converges exponentially fast towards $\nu_{\mu}^{w_0}(t, \cdot)$, uniformly with respect to $w_0$ (cf. the Appendix, Theorem A.1).

• For any fixed $w_0 \in \mathbb{R}^q$, the law of the rescaled time-homogeneous frozen process,

$$dv = \bar{F}(v, w_0)\,dt + \Sigma(v, w_0)\,dB(t),$$

where $\bar{F}(v, w_0) := \tau^{-1}\int_0^{\tau} F(v, w_0, t)\,dt$, converges exponentially fast towards a unique invariant probability measure denoted by $\bar{\rho}_{w_0}(dv)$.

According to the value of μ, we introduce a vector field $\bar{G}_{\mu}$ which will play a role similar to that of $\bar{G}$ introduced in equation (2).

Definition 2.2 We define $\bar{G}_{\mu} : \mathbb{R}^q \to \mathbb{R}^q$ as follows. In the time-scale matching case, that is, when $0 < \mu < \infty$,

$$\bar{G}_{\mu}(w) := \left(\frac{\tau}{\mu}\right)^{-1} \int_0^{\tau/\mu} \int_{v \in \mathbb{R}^p} G(v, w)\, \nu_{\mu}^{w}(t, dv)\,dt. \qquad (7)$$

Notation We may denote the periodic system of measures $\nu_{\mu}^{w}(t, dv)$ associated with (6) by $\nu_{\mu}^{w}[F, \Sigma](t, dv)$ to emphasize its relationship with F and Σ. Accordingly, we may denote $\bar{G}_{\mu}(w)$ by $\bar{G}_{\mu}[F, \Sigma](w)$.

We are now able to present our main mathematical result. Extending Theorem 2.1, the following theorem describes the asymptotic behavior of the slow variable $w^{\epsilon}$ when $\epsilon \to 0$ with $\epsilon_1/\epsilon_2 \to \mu$. We refer to [6] for more details about the full mathematical proof of this result.

Theorem 2.2 Let $\mu \in (0, \infty)$. If w is the solution of

$$\frac{dw}{dt} = \bar{G}_{\mu}(w) \quad \text{with } w(0) = w^{\epsilon}(0), \qquad (8)$$

then the following convergence result holds, for all $T > 0$ and $\delta > 0$:

$$\lim^{\mu}_{|\epsilon| \to 0} \mathbb{P}\left[\sup_{t \in [0,T]} |w_t^{\epsilon} - w_t|^2 > \delta\right] = 0.$$

Remark 2.1

1. The extremal cases μ = 0 and μ = ∞ are not covered in full rigor by Theorem 2.2. However, the study of the sequential limits ($\epsilon_1 \to 0$ followed by $\epsilon_2 \to 0$, or $\epsilon_2 \to 0$ followed by $\epsilon_1 \to 0$) can be deduced from an appropriate combination of classical periodic and stochastic averaging theorems:

• Slow input: Consider the case where the limit $\epsilon_1 \to 0$ is taken first. From Theorem 2.1, applied with fast variable $v^{\epsilon}$ and slow variables $w^{\epsilon}$ and t (with the trivial equation $\dot{t} = 1$), $w^{\epsilon}$ is close in probability, on finite time-intervals, to the solution of the following inhomogeneous ordinary differential equation:

$$\frac{d\tilde{w}}{dt} = \int_{v \in \mathbb{R}^p} G(v, \tilde{w})\, \rho_{\tilde{w},\, t/\epsilon_2}(dv) =: \tilde{G}(\tilde{w}, t/\epsilon_2).$$

Then taking the limit $\epsilon_2 \to 0$, one can apply the deterministic averaging principle to the fast periodic vector field $\tilde{G}(w, t/\epsilon_2)$, so that $\tilde{w}$ converges when $\epsilon_2 \to 0$ to the solution of

$$\frac{dw}{dt} = \tau^{-1}\int_0^{\tau} \tilde{G}(w, t)\,dt = \bar{G}_0(w),$$

where

$$\bar{G}_0(w) := \tau^{-1}\int_0^{\tau} \int_{v \in \mathbb{R}^p} G(v, w)\, \rho_{w, t}(dv)\,dt.$$

• Fast input: If the limit $\epsilon_2 \to 0$ is taken first, one first has to perform a classical averaging of the periodic drift $F(v, w, t/\epsilon_2)$, leading to the homogeneous system of SDEs (4) but with $\bar{F}(v, w)$ instead of $F(v, w, t/\epsilon_2)$. Then, an application of Theorem 2.1 to this system gives the averaged vector field

$$\bar{G}_{\infty}(w) := \int_{v \in \mathbb{R}^p} G(v, w)\, \bar{\rho}_{w}(dv).$$

2. To study the extremal cases μ = 0 and μ = ∞ in full generality, one would need to consider all the possible relationships between $\epsilon_1$ and $\epsilon_2$, not only the linear one as in the present article, but also of the type $\epsilon_1 = \epsilon_2^{\alpha}$ for example. In this case, we believe that the regime α < 1 converges to the same limit as taking the limit $\epsilon_2 \to 0$ first, and the regime α > 1 corresponds to taking the limit $\epsilon_1 \to 0$ first. The intermediate regime α = 1 seems to be the only one for which the limit cannot be obtained by combining classical averaging principles. Therefore, the present article is focused on this case, in which the averaged system depends explicitly on the scaling parameter μ. Moreover, in terms of applications, this parameter can have a relatively easy interpretation in terms of the ratio of time-scales between intrinsic neuronal activity and typical stimulus time-scales in a given situation. Although the zeroth order limit (i.e., the averaged system) seems to depend only on the position of α with respect to 1, it seems reasonable to expect that the fluctuations around the limit would depend on the precise value of α. This is a difficult question which may deserve further analysis.

The case 0 < μ < ∞ is already very rich in the sense that it combines simultaneously both the periodic and stochastic averaging principles in a new way that cannot be recovered by sequential applications of those principles. A particular role is played by the frozen periodically forced SDE (6). The equivalent of the quasi-stationary measure $\rho_w$ of Theorem 2.1 is given by the asymptotically periodic behavior of equation (6), represented by the periodic family of measures $\nu_{\mu}^{w}(t, dv)$.

3. By a rescaling of the frozen process (6), one deduces the following scaling relationships:

$$\nu_{\mu}^{w}[F, \Sigma](t, dv) = \nu_{1}^{w}\!\left[\frac{F}{\mu}, \frac{\Sigma}{\sqrt{\mu}}\right](\mu t, dv)$$

and

$$\bar{G}_{\mu}[F, \Sigma](w) = \bar{G}_{1}\!\left[\frac{F}{\mu}, \frac{\Sigma}{\sqrt{\mu}}\right](w).$$

Therefore, if one knows, in the case μ = 1, the averaged vector field associated with the fast process generated by a drift F and a diffusion coefficient Σ, denoted $\bar{G}_1[F, \Sigma]$, it is possible to deduce $\bar{G}_{\mu}$ in the general case $\mu \in (0, \infty)$ through the changes $F \to F/\mu$ and $\Sigma \to \Sigma/\sqrt{\mu}$.

4. It seems reasonable to expect that the above result is still valid when considering ergodic, but not necessarily periodic, time dependency of the function $F(v, w, \cdot)$. In equation (7), instead of integrating $\nu_{\mu}^{w}(t, dv)$ over one period, one should integrate it with respect to an ergodic stationary measure. However, this extension requires non-trivial technical improvements of [5] which are beyond the scope of this paper.

2.3.1 Case of a fast linear SDE with periodic input

We present here an elementary case where one can compute explicitly the quasi-stationary time-periodic family of measures $\nu_{\mu}^{w}(t, dv)$, namely when the equation for the fast variable is linear. We consider $v \in \mathbb{R}^p$ the solution of

$$dv(t) = \big(-A\, v(t) + u(\mu t)\big)\,dt + \Sigma\,dB(t),$$

with initial condition $v(0) = v_0 \in \mathbb{R}^p$, and where $A \in \mathbb{R}^{p \times p}$ is a matrix whose eigenvalues have positive real parts and $u(\cdot)$ is a τ-periodic function.

We are interested in the large time behavior of the law of $v(t)$, which is a time-inhomogeneous Ornstein-Uhlenbeck process. From [5] we know that its law converges to a $\tau/\mu$-periodic family of probability measures $\nu(t, dv)$. Due to the linearity of the previous equation, $\nu(t, dv)$ is Gaussian with a time-dependent mean and a constant covariance matrix,

$$\nu(t, dv) = \mathcal{N}_{\bar{v}(t),\, Q}(dv),$$

where $\bar{v}$ is the $\tau/\mu$-periodic attractor of $\frac{d\bar{v}}{dt} = -A\bar{v}(t) + u(\mu t)$, i.e.,

$$\bar{v}(t) = \int_{-\infty}^{t} e^{-A(t-s)}\, u(\mu s)\,ds,$$

and Q is the unique solution of the Lyapunov equation

$$AQ + QA^{\top} = \Sigma\,\Sigma^{\top}. \qquad (9)$$

Indeed, if one denotes $c(t) = v(t) - \bar{v}(t)$, then $c(t)$ is a solution of the classical homogeneous Ornstein-Uhlenbeck equation

$$dc(t) = -A\,c(t)\,dt + \Sigma\,dB(t),$$

whose stationary distribution is known to be a centered Gaussian measure with covariance matrix Q the solution of (9); see Chapter 3.2 of [24]. Notice that if A is self-adjoint with respect to $(\Sigma\Sigma^{\top})^{-1}$ (i.e., $A(\Sigma\Sigma^{\top}) = (\Sigma\Sigma^{\top})A^{\top}$), then the solution is $Q = \frac{1}{2}A^{-1}(\Sigma\Sigma^{\top}) = \frac{1}{2}(\Sigma\Sigma^{\top})(A^{\top})^{-1}$, which will be used in Appendix B.2.
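Numerically, Q can be obtained with a standard Lyapunov solver and cross-checked against the empirical covariance of a simulated trajectory. A minimal sketch, assuming equation (9) in the form $AQ + QA^{\top} = \Sigma\Sigma^{\top}$ and using an arbitrary example matrix:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[2.0, 1.0], [0.0, 3.0]])      # eigenvalues 2 and 3: positive real parts
Sig = np.array([[0.4, 0.0], [0.1, 0.3]])
Q = solve_continuous_lyapunov(A, Sig @ Sig.T)    # solves A Q + Q A^T = Sig Sig^T

# Monte Carlo check on dc = -A c dt + Sig dB
rng = np.random.default_rng(2)
dt, n_steps = 1e-3, 500_000
c, acc, n_acc = np.zeros(2), np.zeros((2, 2)), 0
for k in range(n_steps):
    c = c + dt * (-A @ c) + np.sqrt(dt) * (Sig @ rng.normal(size=2))
    if k > n_steps // 10:                    # discard the transient
        acc += np.outer(c, c)
        n_acc += 1
print(Q)
print(acc / n_acc)                           # empirical covariance, close to Q
```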

Hence, in the linear case, the averaged vector field of equation (7) becomes

$$\bar{G}_{\mu}(w) := \left(\frac{\tau}{\mu}\right)^{-1} \int_0^{\tau/\mu} \int_{v \in \mathbb{R}^p} G\big(\bar{v}(t) + v,\, w\big)\, \mathcal{N}_{0, Q}(dv)\,dt, \qquad (10)$$

where $\mathcal{N}_{x, Q}$ is the Gaussian law with mean $x \in \mathbb{R}^p$ and covariance matrix $Q \in \mathbb{R}^{p \times p}$.

Therefore, due to the linearity of the fast SDE, the periodic system of measures ν is just a constant Gaussian distribution shifted by a periodic function of time $\bar{v}(t)$. In case G is quadratic in v, this remark implies that one can perform independently the integral over time and over $\mathbb{R}^p$ in formula (10) (noting that the cross term has a zero average). In this case, contributions from the periodic input and from the noise appear in the averaged vector field in an additive way.

Example 2.3 In this last example, we consider a combination of Example 2.1 and Example 2.2, namely the following system of periodically forced SDEs:

$$dv^{\epsilon} = \frac{1}{\epsilon_1}\left[-v^{\epsilon} + \sin\!\left(\frac{t}{\epsilon_2}\right)\right]dt + \frac{\sigma}{\sqrt{\epsilon_1}}\,dB(t), \qquad dw^{\epsilon} = \left(-w^{\epsilon} + (v^{\epsilon})^2\right)dt.$$

As in Example 2.1 and as shown above, the behavior of this system when both $\epsilon_1$ and $\epsilon_2$ are small depends on the parameter μ defined in (5). More precisely, we have the following three regimes:

• Regime 1: slow input:

$$\bar{G}_0(w) = -w + \frac{\sigma^2}{2} + \frac{1}{2}.$$

• Regime 2: fast input:

$$\bar{G}_{\infty}(w) = -w + \frac{\sigma^2}{2}.$$

• Regime 3: time-scale matching:

$$\bar{G}_{\mu}(w) = -w + \frac{\sigma^2}{2} + \frac{1}{2(1+\mu^2)}.$$
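The time-scale matching regime can again be checked by brute force: for μ of order one, the slow variable should settle near the zero of $\bar{G}_{\mu}$, that is, near $\frac{\sigma^2}{2} + \frac{1}{2(1+\mu^2)}$. A sketch with illustrative step sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, eps2, sigma = 1.0, 1e-3, 0.3
eps1 = mu * eps2
dt, T = 2e-6, 8.0
v, w = 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    v += (dt / eps1) * (-v + np.sin(t / eps2)) \
         + sigma * np.sqrt(dt / eps1) * rng.normal()
    w += dt * (-w + v * v)
print(w, sigma**2 / 2 + 1 / (2 * (1 + mu**2)))   # both near 0.295
```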

2.4 Truncation and asymptotic well-posedness

In some cases, Assumptions 2.1-2.2 may not be satisfied on the entire phase space $\mathbb{R}^p \times \mathbb{R}^q$, but only on a subset. Such situations will appear in Section 3 when considering learning models. We introduce here a more refined set of assumptions ensuring that Theorem 2.2 still applies.

Let us start with an example, namely the following bi-dimensional system with white noise input:

$$\begin{cases} dv^{\epsilon} = \dfrac{1}{\epsilon}\left(-l\, v^{\epsilon} + w^{\epsilon} v^{\epsilon}\right)dt + \dfrac{\sigma}{\sqrt{\epsilon}}\,dB(t),\\[4pt] dw^{\epsilon} = \left(-\kappa\, w^{\epsilon} + (v^{\epsilon})^2\right)dt, \end{cases} \qquad (11)$$

with $\epsilon > 0$, $\sigma > 0$, $l > 0$, $\kappa > 0$.

For the fast drift $(w - l)v$ to be non-explosive, it is necessary to have $w < l - \alpha$ with $\alpha > 0$ for all time. The concern about this system comes from the fact that the slow variable w may reach l due to the fluctuations captured in the term $v^2$, for instance if κ is not large enough. Such a system may have exponentially growing trajectories. However, we claim that for small enough ϵ, $w^{\epsilon}$ will remain close to its averaged limit w for a very long time, and if this limit remains below $l - \alpha$, then $w^{\epsilon}$ can be considered as well-posed in the asymptotic limit $\epsilon \to 0$. To make this argument more rigorous, we suggest the following definition.

Definition 2.3 A stochastic differential equation with a given initial condition is asymptotically well posed in probability if, for the given initial condition,

1. a unique solution exists until a random time $\tau_{\epsilon}$;

2. for all $T > 0$,

$$\lim_{\epsilon \to 0} \mathbb{P}[\tau_{\epsilon} \ge T] = 1.$$

We give in the following proposition sufficient conditions for system (4) to be asymptotically well posed in probability and to satisfy conclusions of Theorem 2.2.

Let us introduce the following set of additional assumptions.

Assumption 2.3 Moment conditions:

(i) There exists $p > 2$ such that

$$\text{for any } T > 0, \quad \sup_{\epsilon}\ \mathbb{E}\left[\sup_{0 \le t \le T} \|v_t^{\epsilon}\|^p + \|w_t^{\epsilon}\|^p\right] < \infty.$$

(ii) For any $T > 0$ and any bounded subset K of $\mathbb{R}^q$,

$$\sup_{\epsilon_1 > 0,\ \epsilon_2 > 0,\ w \in K}\ \mathbb{E}\left[\sup_{0 \le t \le T} \|G(v_t^{\epsilon}, w)\|^2\right] < \infty.$$

Remark 2.2 This last set of assumptions will be satisfied in all the applications of Section 3 since we consider linear models with additive noise for the equation of v, implying this variable to be Gaussian and the function G only involves quadratic moments of v; therefore, the moment conditions (i) and (ii) will be satisfied without any difficulty. Moreover, if one considers non-linear models for the variable v, then the Gaussian property may be lost; however, adding sigmoidal non-linearity has, in general, the effect of bounding the dynamics, thus making these moment assumptions reasonable to check in most models of interest.

Property 2.3 Suppose there exists a subset ℰ of $\mathbb{R}^q$ such that:

1. the functions F, G, Σ satisfy Assumptions 2.1-2.3 restricted to $\mathbb{R}^p \times \mathcal{E}$;

2. ℰ is invariant under the flow of $\bar{G}_{\mu}$, as defined in (7).

Then for any initial condition $w_0 \in \mathcal{E}$, system (4) is asymptotically well posed in probability and $w^{\epsilon}$ satisfies the conclusion of Theorem 2.2.

Proof See Appendix A.2. □

Here, we show that this applies to system (11). First, with $\mathcal{E}_{\alpha} = \{w \in \mathbb{R} : w < l - \alpha\}$ for some $\alpha \in\, ]0, l[$, it is possible to show that Assumptions 2.1-2.2 are satisfied on $\mathbb{R}^p \times \mathcal{E}_{\alpha}$. Then, as a special case of (10), we obtain the following averaged system:

$$\frac{dw}{dt} = -\kappa w + \frac{\sigma^2}{2(l - w)} =: \bar{G}(w).$$

It remains to check that the solution of this system satisfies

$$\exists\, \alpha > 0 \ \text{such that}\ w(0) < l - \alpha \ \Longrightarrow\ \forall t > 0,\ w(t) < l - \alpha,$$

that is, the subset $\mathcal{E}_{\alpha}$ is invariant under the flow of $\bar{G}$.

This property is satisfied as soon as

$$\eta := \frac{2\sigma^2}{\kappa l^2} < 1.$$

Indeed, one can show that $\bar{G}(w) = 0$ admits two solutions iff η < 1,

$$w_{\pm} = \frac{l}{2}\left(1 \pm \sqrt{1 - \eta}\right) \in (0, l),$$

and that $w_-$ is stable whereas $w_+$ is unstable. Thus, if $w(0) < l - \alpha$ with $\alpha = l - w_+ > 0$, then $w(t) < l - \alpha$ for all $t > 0$. In fact, the invariance property is true for all $\alpha \in\, ]l - w_+,\ l - w_-[$.
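The two equilibria and their stability are straightforward to verify numerically; the parameter values below are arbitrary but chosen so that η < 1.

```python
import numpy as np

l, kappa, sigma = 1.0, 1.0, 0.5
eta = 2 * sigma**2 / (kappa * l**2)
assert eta < 1                                   # two equilibria exist

w_minus = l / 2 * (1 - np.sqrt(1 - eta))
w_plus = l / 2 * (1 + np.sqrt(1 - eta))

def G_bar(w):
    return -kappa * w + sigma**2 / (2 * (l - w))

def G_bar_prime(w, h=1e-7):                      # numerical derivative
    return (G_bar(w + h) - G_bar(w - h)) / (2 * h)

print(G_bar(w_minus), G_bar(w_plus))             # both ~ 0
print(G_bar_prime(w_minus) < 0, G_bar_prime(w_plus) > 0)  # stable / unstable
```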

3 Averaging learning neural networks

In this section, we apply the temporal averaging methods derived in Section 2 to models of unsupervised learning neural networks. First, we design a generic learning model and show that one can formally define an averaged system with equation (7). However, going beyond the mere definition of the averaged system seems very difficult, and we only manage to get explicit results for simple systems where the fast activity dynamics is linear. In the last three subsections, we push the analysis for three examples of increasing complexity.

In the following, we always consider that the initial connectivity is 0. This is an arbitrary choice but without consequences, because we focus on the regime where there is a single globally stable equilibrium point (see Section 3.2.3).

3.1 A generic learning neural network

We now introduce a large class of stochastic neuronal networks with learning models. They are defined as coupled systems describing the simultaneous evolution of the activity of $n \in \mathbb{N}$ neurons and the connectivity between them. We define $v \in \mathbb{R}^n$, the activity field of the network, and $W \in \mathbb{R}^{n \times n}$, the connectivity matrix.

Each neuron variable $v_i$ is assumed to follow the SDE

$$dv_i = \big(f_i(v_i) + u_i\big)\,dt + \Sigma\, dB_i(t),$$

where the function $f_i$ characterizes the intrinsic non-linear dynamical behavior of neuron i and $u_i$ is the input received by neuron i. The stochastic term $\Sigma\,dB_i(t)$ is added to account for internal sources of noise. In terms of notations, $(B(t))_{t \ge 0}$ is a standard n-dimensional Brownian motion, Σ is an $n \times n$ matrix, possibly a function of v or other variables, and $\Sigma\,dB_i(t)$ denotes the ith component of the vector $\Sigma\,dB(t)$.

The input $u_i$ to neuron i has mainly two components: the external input $u_i^{\text{ext}}$ and the input coming from the other neurons in the network, $u_i^{\text{syn}}$. The latter is a priori a complex combination of post-synaptic potentials coming from many other neurons. The coefficient $W_{ij}$ of the connectivity matrix accounts for the strength of the synapse $j \to i$. Note that neurons can be connected to themselves, i.e., $W_{ii}$ is not necessarily null. Thus, we can write

$$u_i^{\text{syn}} := S\!\left(\sum_{j=1}^{n} W_{ij}\, \mathcal{H}(v_i, v_j)\right),$$

where $S : \mathbb{R} \to \mathbb{R}$ and ℋ is a function taking the history of $v_i$ and $v_j$ and returning a real number for each time t (to take convolutions into account). In practical cases, they are often taken to be sigmoidal functions. We abusively redefine S and ℋ as vector-valued operators corresponding to the element-wise application of their real counterparts. We also define the function $\mathcal{F} : \mathbb{R}^n \to \mathbb{R}^n$ such that $\mathcal{F}(v)_i = f_i(v_i)$. Together with a slow generic learning rule, this leads to defining a stochastic learning model as the following system of SDEs.

Definition 3.1

$$\begin{cases} dv^{\epsilon} = \dfrac{1}{\epsilon}\left[\mathcal{F}(v^{\epsilon}) + S\big(W^{\epsilon}\, \mathcal{H}(v^{\epsilon})\big) + u^{\text{ext}}(t)\right]dt + \dfrac{1}{\sqrt{\epsilon}}\,\Sigma(v^{\epsilon}, W^{\epsilon})\,dB(t),\\[4pt] dW^{\epsilon} = G(W^{\epsilon}, v^{\epsilon})\,dt. \end{cases}$$

Before applying the general theory of Section 2, let us make several comments about this generic model of neural network with learning. This model is a non-autonomous, stochastic, non-linear slow-fast system.

In order to apply Theorem 2.2, one needs Assumptions 2.1, 2.2, and 2.3 to be satisfied, restricting the space of possible functions S , ℋ, ℱ, Σ, and G. In particular, Assumption 2.2(ii) seems rather restrictive since it excludes systems with multiple equilibria and suggests that the general theory is more suited to deal with rate-based networks. However, one should keep in mind that these assumptions are only sufficient, and that the double averaging principle may work as well in systems which do not satisfy readily those assumptions.

As we will show in Section 3.3, a particular form of history-dependence can be taken into account, to a certain extent. Indeed, for instance, if the function ℱ is actually a functional of the past trajectory of variable v ϵ which can be expressed as the solution of an additional SDE, then it may be possible to include a certain form of history-dependence. However, purely time-delayed systems do not enter the scope of this theory, although it might be possible to derive an analogous averaging method in this framework.

The noise term can be purely additive or set by a particular function Σ ( v , W ) as long as it satisfies Assumption 2.2(i), meaning that it must be uniformly non-degenerate.

In the following subsection, we apply the averaging theory to various combinations of neuronal network models, embodied by choices of functions S , ℋ, ℱ, Σ, and various learning rules, embodied by a choice of the function G. We will also analyze the obtained averaged system, describing the slow dynamics of the connectivity matrix in the limit of perfect time-scale separation and, in particular, study the convergence of this averaged system to an equilibrium point.

3.2 Symmetric Hebbian learning

One of the simplest, yet non-trivial, stochastic learning models is obtained when considering

• A linear model for neuronal activity, namely $f_i(v_i) = -l\, v_i$ with l a positive constant.

• A linear model for the synaptic transmission, namely $S(v_i) = v_i$ and $\mathcal{H}(v_i, v_j) = v_j$.

• A constant diffusion matrix Σ (additive noise) proportional to the identity, $\Sigma = \sigma\,\mathrm{Id}$ (spatially uncorrelated noise).

• A Hebbian learning rule with linear decay, namely $G_{ij}(W, v) = -\kappa W_{ij} + v_i v_j$. Actually, it corresponds to the tensor product: $\{v \otimes v\}_{ij} = v_i v_j$.

This model can be written as follows:

$$\begin{cases} dv^{\epsilon} = \dfrac{1}{\epsilon_1}\left(-L v^{\epsilon} + W^{\epsilon} v^{\epsilon} + u\!\left(\dfrac{t}{\epsilon_2}\right)\right)dt + \dfrac{\sigma}{\sqrt{\epsilon_1}}\,dB(t),\\[4pt] \dfrac{dW^{\epsilon}}{dt} = G(v^{\epsilon}, W^{\epsilon}) = -\kappa W^{\epsilon} + v^{\epsilon} \otimes v^{\epsilon}, \end{cases} \qquad (12)$$

where neurons are assumed to have the same decay constant, $L = l\,\mathrm{Id}$; u is a periodic continuous input (it replaces $u^{\text{ext}}$ in the previous section); $\sigma, \epsilon_1, \epsilon_2, \kappa \in \mathbb{R}_+^*$ with $\epsilon_1, \epsilon_2 \ll 1$; and $B(t)$ is an n-dimensional Brownian motion.

The first question that arises is about the well-posedness of the system: What is the definition interval of the solutions of system (12)? Do they explode in finite time? At first sight, it seems there may be a runaway of the solution if the largest real part among the eigenvalues of W grows bigger than l. In fact, it turns out this scenario can be avoided if the following assumption linking the parameters of the system is satisfied.

Assumption 3.1 There exists $p \in\, ]0, 1[$ such that

$$\frac{\sigma^2 l}{2\, p(1-p)} + \frac{u_m^2}{p(1-p)^2} < \kappa\, l^3,$$

where $u_m = \sup_{t \in \mathbb{R}^+} \|u(t)\|_2$.

It corresponds to making sure the external (i.e., $u_m$) or internal (i.e., σ) excitations are not too large compared to the decay mechanisms (represented by κ and l). Note that if $p \in\, ]0, 1[$, $u_m$ and σ are fixed, it is sufficient to increase κ or l for this assumption to be satisfied.
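In practice, one can simply scan p over ]0, 1[ to decide whether the assumption holds for a given parameter set. A small sketch, based on the inequality as reconstructed above (the helper name is ours, and the parameter values are those used later in Figure 2):

```python
import numpy as np

def assumption_31_p(sigma, u_m, kappa, l, n_grid=1000):
    """Return a p in ]0,1[ satisfying Assumption 3.1, or None if none exists."""
    p = np.linspace(1e-3, 1 - 1e-3, n_grid)
    lhs = sigma**2 * l / (2 * p * (1 - p)) + u_m**2 / (p * (1 - p)**2)
    i = np.argmin(lhs)
    return p[i] if lhs[i] < kappa * l**3 else None

print(assumption_31_p(sigma=0.05, u_m=1.0, kappa=100.0, l=12.0))
```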

Under this assumption, the space

$$\mathcal{E}_p = \left\{W \in \mathbb{R}^{n \times n} : W \text{ is symmetric},\ W \ge 0 \text{ and } W < pL\right\}$$

is invariant under the flow of the averaged system $\bar{G}$, where $W \ge 0$ means W is positive semi-definite and $W < pL$ means $pL - W$ is positive definite. Therefore, the averaged system is defined and bounded on $\mathbb{R}^+$. The slow-fast system being asymptotically close to the averaged system, it is therefore asymptotically well defined in probability. This is summarized in the following theorem.

Theorem 3.1 If Assumption 3.1 is verified for $p \in\, ]0, 1[$, then system (12) is asymptotically well posed in probability and the connectivity matrix $W^{\epsilon}$, the solution of system (12), converges to W in the sense that for all $\delta, T > 0$,

$$\lim^{\mu}_{|\epsilon| \to 0} \mathbb{P}\left[\sup_{t \in [0,T]} |W_t^{\epsilon} - W_t|^2 > \delta\right] = 0,$$

where W is the deterministic solution of

$$\frac{dW_{ij}}{dt} = \bar{G}(W)_{ij} = \underbrace{-\kappa\, W_{ij}}_{\text{decay}} + \underbrace{\frac{\mu}{\tau}\int_0^{\tau/\mu} \bar{v}_i(s)\, \bar{v}_j(s)\,ds}_{\text{correlation}} + \underbrace{\frac{\sigma^2}{2}\big[(L - W)^{-1}\big]_{ij}}_{\text{noise}}, \qquad (13)$$

where $\bar{v}(t)$ is the $\tau/\mu$-periodic attractor of $\frac{d\bar{v}}{dt} = (W - L)\bar{v} + u(\mu t)$, in which $W \in \mathbb{R}^{n \times n}$ is supposed to be fixed.

Proof See Theorem B.1 in Appendix B.2. □

In the following, we focus on the averaged system described by (13). Its right-hand side is made of three terms: a linear and homogeneous decay, a correlation term, and a noise term. The last two terms are made explicit in the following.

3.2.1 Noise term

As seen in Section 2, in the linear case, the noise term Q is the unique solution of the Lyapunov equation (9) with $A = L - W$ and $\Sigma = \sigma\,\mathrm{Id}$. Because the noise is spatially uncorrelated and identical for each neuron, and also because the connectivity is symmetric, observe that $Q = \frac{\sigma^2}{2}(L - W)^{-1}$ is the unique solution of the system.

In more complicated cases, the computation of this term appears to be much more difficult as we will see in Section 3.4.

3.2.2 Correlation term

This term corresponds to the auto-correlation of neuronal activity. It is only implicitly defined; thus, this section is devoted to finding an explicit form depending only on the parameters l, μ, τ, the connectivity W, and the inputs u. Actually, one can perform an expansion of this term with respect to a small parameter corresponding to a weakly connected expansion. Most terms vanish if the connectivity W is small compared to the strength l of the intrinsic decaying dynamics of the neurons.

The auto-correlation term of a $\tau/\mu$-periodic function can be rewritten as

$$\{\bar{v} \cdot \bar{v}^{\top}\}_{ij} = \int_0^{\tau/\mu} \bar{v}_i(s)\, \bar{v}_j(s)\,ds.$$

With this notation, it is simple to think of $\bar{v}$ as a ‘semi-continuous matrix’ in $\mathbb{R}^{n \times [0, \tau/\mu[}$. Hence, the operator ‘⋅’ can be thought of as a matrix multiplication. Similarly, the transpose operator turns a matrix $\bar{v} \in \mathbb{R}^{n \times [0, \tau/\mu[}$ into a matrix $\bar{v}^{\top} \in \mathbb{R}^{[0, \tau/\mu[ \times n}$. See Appendix B.1 for details about the notations.

It is common knowledge, see [17] for instance, that this term gathers information about the correlation of the inputs. Indeed, if we assume that the input is sufficiently slow, then $\bar{v}$ has enough time to converge, for each t, to its quasi-static equilibrium. Therefore, at first order, $\bar{v}(t) \simeq (L - W)^{-1} u(t)$. This leads to $\bar{v} \cdot \bar{v}^{\top} \simeq (L - W)^{-1}\, u \cdot u^{\top}\, (L - W)^{-1}$. In the weakly connected regime, one can assume that $W - L \simeq -L$, leading to $\bar{v} \cdot \bar{v}^{\top} \simeq \frac{1}{l^2}\, u \cdot u^{\top}$, which is the auto-correlation of the inputs.

Actually, without the assumption of a slow input, lagged correlations of the input appear in the averaged system. Before giving the expression of these temporal correlations, we need to introduce some notations. First, define the convolution filter $g_{l/\mu} : t \mapsto \frac{l}{\mu}\, e^{-\frac{l}{\mu}t}\, H(t)$, where H is the Heaviside function. This family of functions is displayed for different values of $l/\mu$ in Figure 4(a). Note that $g_{l/\mu} \to \delta_0$ when $\frac{l}{\mu} \to +\infty$, where $\delta_0$ is the Dirac distribution centered at the origin. In this asymptotic regime, the convolution filter and its iterates $g_{l/\mu} * g_{l/\mu} * \cdots$ are equal to the identity.

We also define the filtered correlations of the inputs $C_{k,q} \in \mathbb{R}^{n \times n}$ by

$$C_{k,q} \overset{\mathrm{def}}{=} \frac{1}{u_m^2\, \tau}\, \big(u * g_{l/\mu}^{(k+1)}\big) \cdot \big(u * g_{l/\mu}^{(q+1)}\big)^{\top},$$

where $g_{l/\mu}^{(k+1)} = g_{l/\mu} * \cdots * g_{l/\mu}$ is the kth convolution of $g_{l/\mu}$ with itself and $u_m = \sup_{t \in \mathbb{R}^+} \|u(t)\|_2$. This is the correlation matrix of the inputs filtered by two different functions. It is easy to show that this is similar to computing the cross-correlation of the inputs with the inputs filtered by another function,

$$C_{k,q} = \frac{1}{u_m^2 \tau}\left(u * \big(g_{l/\mu}^{(k+1)} * g_{l/\mu}^{(q+1)\top}\big)\right) \cdot u^{\top} = \frac{1}{u_m^2 \tau}\, u \cdot \left(u * \big(g_{l/\mu}^{(k+1)\top} * g_{l/\mu}^{(q+1)}\big)\right)^{\top}, \qquad (14)$$

which motivates the definition of the (k, q)-temporal profile $g_{l/\mu}^{(k+1)} * g_{l/\mu}^{(q+1)\top}$, where $g_{l/\mu}^{(k)\top}(t) = g_{l/\mu}^{(k)}(-t)$ denotes the time-reversed filter. This notation is deliberately similar to that of the transpose operator we use in the proofs. These functions are shown in Figure 1. We have not found a way to make them explicit; therefore, the following remarks are simply based on numerical illustrations. When k = q, the temporal profiles are centered. The larger the difference k − q, the larger the center of the bell. The larger the sum k + q, the larger the standard deviation. This motivates the idea that $C_{k,q}$ can be thought of as the (k − q)-lagged correlation of the inputs. One can also say that $C_{10,10}$ is more blurred than $C_{0,0}$ in the sense that the inputs are temporally integrated over a ‘wider’ window in the first case.

Fig. 1. This shows the (k, q)-temporal profiles with $\frac{l}{\mu} = 1$, i.e., the functions $g_1^{(k+1)} * g_1^{(q+1)\top}$ for q = 0 and k ranging from 0 to 6. For k = q = 0, the temporal profile is even, and this also occurs to be true for any k = q. When k > q, the function reaches its maximum for strictly positive values that grow with the difference k − q. Besides, the temporal profiles are flattened when k + q increases.

Observe that $g_{l/\mu}^{(k+1)}(t) = \frac{l^{k+1}}{\mu^{k+1}\, k!}\, t^k\, e^{-\frac{l}{\mu}t}\, H(t)$. Therefore, $\|g_{l/\mu}^{(k+1)}\|_1 = \frac{\Gamma(k+1)}{k!} = 1$. Thanks to Young’s inequality for convolutions, which says that $\|u * g_{l/\mu}^{(k)}\|_2 \le \|u\|_2\, \|g_{l/\mu}^{(k)}\|_1$, it can be proved that $\|C_{k,q}\|_2 \le 1$.

We intend to express the correlation term as an infinite converging sum involving these filtered correlations. In this perspective, we use a result we have proved in [25] to write the solution of a general class of non-autonomous linear systems (e.g., $\frac{d\bar{v}}{dt} = (W - L)\bar{v} + u(t)$) as an infinite sum, in the case μ = 1.

Lemma 3.2 If $\bar{v}$ is the solution, with zero as initial condition, of $\frac{d\bar{v}}{dt} = (W - L)\bar{v} + u(t)$, it can be written as the sum below, which converges if W is in $\mathcal{E}_p$ for $p \in\, ]0, 1[$:

$$\bar{v} = \sum_{k=0}^{+\infty} \frac{W^k}{l^{k+1}}\; u * g_l^{(k+1)},$$

where $g_l : t \mapsto l\, e^{-lt} H(t)$.

Proof See Lemma B.2 in Appendix B.2. □

This is a decomposition of the solution of a linear differential system on a basis of operators where the spatial and temporal parts are decoupled. This important step in a detailed study of the averaged equation cannot be achieved easily in models with non-linear activity. Everything is now set up to introduce the explicit expansion of the correlation term we use in what follows. Indeed, we use the previous result to rewrite the correlation term as follows.

Property 3.3 The correlation term can be written as

$$\frac{\mu}{\tau}\; \bar{v} \cdot \bar{v}^{\top} = \frac{u_m^2}{l^2} \sum_{k,q=0}^{+\infty} \frac{W^k}{l^k}\; C_{k,q}\; \frac{W^q}{l^q}.$$

Proof See Theorem B.3 in Appendix B.2. □

This infinite sum of convolved filters is reminiscent of a property of Hawkes processes, which have a linear input-output gain [26].

The speed of the inputs, characterized by μ, only appears in the temporal profiles $g_{l/\mu}^{(k)} * g_{l/\mu}^{(q)\top}$. In particular, if the inputs are much slower than the neuronal activity time-scale, i.e., μ = 0, then $g_{+\infty} = \delta_0$ and $u * g_{+\infty} = u$. Therefore, $C_{k,q} = C_{0,0}$ and the sums in the formula of Property 3.3 are separable, leading to $\bar{v} \cdot \bar{v}^{\top} = (L - W)^{-1}\, u \cdot u^{\top}\, (L - W)^{-1}$, which corresponds to the heuristic result previously explained.

Therefore, the averaged equation can be explicitly rewritten as

$$\frac{dW}{dt} = \bar{G}(W) = -\kappa W + \frac{u_m^2}{l^2} \sum_{k,q=0}^{+\infty} \frac{W^k}{l^k}\; C_{k,q}\; \frac{W^q}{l^q} + \frac{\sigma^2}{2}(L - W)^{-1}. \qquad (15)$$

In Figure 2, we illustrate this result by comparing, for different $\epsilon = \epsilon_1 = \epsilon_2$ (i.e., we choose μ = 1 in this example), the stochastic system and its averaged version. The above decomposition has been used as the basis for the numerical computation of trajectories of the averaged system.
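The decomposition indeed lends itself to a simple implementation: for a τ-periodic input sampled on a uniform grid, each filtered input $u * g_l^{(k+1)}$ can be computed in the Fourier domain, where $g_l$ has transfer function $l/(l + i\omega)$, and the double sum is truncated at a finite order. The following sketch (for μ = 1, with an illustrative truncation order, test input, and a helper name of our own) evaluates the right-hand side of (15) and integrates it towards its equilibrium:

```python
import numpy as np

def averaged_field(W, u, tau, l, kappa, sigma, K=6):
    """Truncated expansion (15) of the averaged vector field, for mu = 1.
    u: (n_t, n) array holding one period of the tau-periodic input."""
    n_t, n = u.shape
    u_m2 = np.max(np.sum(u**2, axis=1))          # u_m^2 = sup_t ||u(t)||^2
    omega = 2j * np.pi * np.fft.fftfreq(n_t, d=tau / n_t)
    g_hat = l / (l + omega)                      # transfer function of g_l
    u_hat = np.fft.fft(u, axis=0)
    # filtered inputs u * g_l^{(k+1)}, k = 0..K (periodic steady state)
    filt = [np.fft.ifft(u_hat * g_hat[:, None]**(k + 1), axis=0).real
            for k in range(K + 1)]
    dt = tau / n_t
    C = [[(filt[k].T @ filt[q]) * dt / (u_m2 * tau) for q in range(K + 1)]
         for k in range(K + 1)]                  # C_{k,q}
    Wl = [np.linalg.matrix_power(W / l, k) for k in range(K + 1)]
    corr = (u_m2 / l**2) * sum(Wl[k] @ C[k][q] @ Wl[q]
                               for k in range(K + 1) for q in range(K + 1))
    return -kappa * W + corr + (sigma**2 / 2) * np.linalg.inv(l * np.eye(n) - W)

# forward-Euler integration of dW/dt = G_bar(W) towards its equilibrium
rng = np.random.default_rng(4)
n, n_t, tau = 3, 512, 1.0
t = np.linspace(0.0, tau, n_t, endpoint=False)
u = np.sin(2 * np.pi * t / tau)[:, None] * rng.uniform(0.5, 1.5, n)[None, :]
W = np.zeros((n, n))
for _ in range(2000):
    W += 1e-3 * averaged_field(W, u, tau, l=12.0, kappa=100.0, sigma=0.05)
```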

Fig. 2. The first two figures, (a) and (b), represent the evolution of the connectivity for the original stochastic system (12), superimposed with the averaged system (13), for two different values of ϵ: respectively $\epsilon = 0.01$ and $\epsilon = 0.001$, where we have chosen $\epsilon = \epsilon_1 = \epsilon_2$. Each color corresponds to the weight of an edge in a network made of n = 3 neurons. As expected, it seems that the smaller ϵ, the better the approximation. This can be seen in picture (c), where we have plotted the precision on the y-axis and ϵ on the x-axis. The parameters used here are l = 12, μ = 1, κ = 100, σ = 0.05. The inputs have a random (but frozen) spatial structure and evolve according to a sinusoidal function.

3.2.3 Global stability of the equilibrium point

Now that we have found an explicit formulation for the averaged system, it is natural to study its dynamics. Actually, we prove in the following that if the connectivity W is kept smaller than $\frac{l}{3}$, i.e., Assumption 3.1 is verified with $p \le \frac{1}{3}$, then the dynamics is trivial: the system converges to a single equilibrium point. Indeed, under the previous assumption, the system can be written as $\bar{G}(W) = -\kappa W + F(W)$, where F is a contraction operator on $\mathcal{E}_{1/3}$. Therefore, one can prove the uniqueness of the fixed point with the Banach fixed point argument and exhibit an energy function for the system.

Theorem 3.4 If Assumption 3.1 is verified for $p \le \frac{1}{3}$, then there is a unique equilibrium point in the invariant subset $\mathcal{E}_p$, which is globally asymptotically stable.

Proof See Theorem B.4 in Appendix B.2. □

The fact that the equilibrium point is unique means that the ‘knowledge’ of the network about its environment (corresponding by hypothesis to the connectivity) is eventually unique. For a given input and any initial condition, the network can only converge to the same ‘knowledge’ or ‘understanding’ of this input.

3.2.4 Explicit expansion of the equilibrium point

When the network is weakly connected, the high-order terms in expansion (15) may be neglected. In this section, we follow this idea and find an explicit expansion for the equilibrium connectivity where the strength of the connectivity is the small parameter enabling the expansion. The weaker the connectivity, the more terms can be neglected in the expansion.

In fact, it is not natural to speak about a weakly connected learning network since the connectivity is a variable. However, we are able to identify a weak connectivity index which controls the strength of the connectivity. We say the connectivity is weak when it is negligible compared to the intrinsic leak term, i.e., when $\frac{\|W\|}{l}$ is small. We show in the Appendix that this weak connectivity index depends only on the parameters of the network and can be written

$$\tilde{p} = \frac{u_m^2}{\kappa l^3} + \frac{\sigma^2}{2 \kappa l^2}.$$

In the asymptotic regime $\tilde{p} \to 0$, we have $\frac{\|W\|}{\tilde{p}\, l} = O(1)$. This index is the ‘small’ parameter needed to perform the expansion. We also define $\lambda = \frac{\sigma^2 l}{2 u_m^2}$, which carries information about the way $\tilde{p}$ converges to zero. In fact, it is the ratio of the two terms of $\tilde{p}$.

With these notations, we can prove that the equilibrium connectivity W has the following asymptotic expansion in $\tilde{p}$.

Theorem 3.5

$$W = \frac{\tilde{p}\, l}{1 + \lambda}\left(\lambda + C_{0,0}\right) + \frac{\tilde{p}^2\, l}{(1 + \lambda)^2}\left(\lambda^2 + \lambda\big(C_{0,0} + C_{1,0} + C_{0,1}\big) + C_{0,0} C_{1,0} + C_{0,1} C_{0,0}\right) + O(\tilde{p}^3).$$

Proof See Theorem B.5 in Appendix B.2. □

At the first order, the final connectivity is essentially $C_{0,0}$, the filtered correlation of the inputs, i.e., the correlation of the inputs convolved with a bell-shaped centered temporal profile. In the case of Figure 3, this is quite a good approximation of the final connectivity.
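For a sampled periodic input, this first-order term is cheap to compute. The sketch below (again for μ = 1) builds $C_{0,0}$ in the Fourier domain and assembles the first-order equilibrium of Theorem 3.5; treating the scalar λ as $\lambda\,\mathrm{Id}$ in the matrix expression is an assumption of this sketch.

```python
import numpy as np

def equilibrium_first_order(u, tau, l, kappa, sigma):
    """First-order term of Theorem 3.5: W ~ p_tilde*l/(1+lam) * (lam*Id + C_00)."""
    n_t, n = u.shape
    u_m2 = np.max(np.sum(u**2, axis=1))
    omega = 2j * np.pi * np.fft.fftfreq(n_t, d=tau / n_t)
    u0 = np.fft.ifft(np.fft.fft(u, axis=0) * (l / (l + omega))[:, None], axis=0).real
    C00 = (u0.T @ u0) * (tau / n_t) / (u_m2 * tau)        # C_{0,0}
    p_tilde = u_m2 / (kappa * l**3) + sigma**2 / (2 * kappa * l**2)
    lam = sigma**2 * l / (2 * u_m2)
    return p_tilde * l / (1 + lam) * (lam * np.eye(n) + C00)
```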

Fig. 3. (a) shows the temporal evolution of the input to an n = 8 neurons network. It is made of two spatially random patterns that are shown alternately. (b) shows the correlation matrix of the inputs. The off-diagonal terms are null because the two patterns are spatially orthogonal. (c), (d), and (e) represent the first order of the expansion of Theorem 3.5 for different μ. Actually, this approximation is quite good since the percentage of error between the averaged system and the first order, computed as $\text{error} = \frac{\|W_{\text{order 1}} - W\|_1}{\|W\|_1}$, has an order of magnitude of 10^{-4}% for the three figures. These figures make it possible to observe the role of μ. If μ is small, i.e., the inputs are slow, then the transient can be neglected and the learned connectivity is roughly the correlation of the inputs; see (a). If μ increases, i.e., the inputs are faster, then the connectivity starts to encode a link between the two patterns that were flashed circularly and elicited responses that did not fade away when the other pattern appeared. The temporal structure of the inputs is also learned when μ is large. The parameters used in this figure are ϵ = 0.001, l = 12, κ = 100, σ = 0.02.

Not only is the spatial correlation encoded in the weights, but there is also some information about the temporal correlation, i.e., two successive but orthogonal events occurring in the inputs will be wired together in the connectivity although they do not appear in the spatial correlations; see Figure 3 for an example.

3.3 Trace learning: band-pass filter effect

In this section, we study an improvement of the learning model by adding a certain form of history dependence in the system and explain the way it changes the results of the previous section. Given that Theorem 2.2 only applies to an instantaneous process, we will only be able to treat the history-dependent systems which can be reformulated as instantaneous processes. Actually, this class of systems contains models which are biologically more relevant than the previous model and which will exhibit interesting additional functional behaviors. In particular, this covers the following features:

• Trace learning.

It is likely that a biological learning rule will integrate the activity over a short time. As Földiàk suggested in [27], it makes sense to consider the learning equation as being

$$\frac{dW^{\epsilon}}{dt} = -\kappa W^{\epsilon} + \big(v^{\epsilon} * g_1\big) \otimes \big(v^{\epsilon} * g_1\big),$$

where ∗ is the convolution and $g_1 : t \mapsto \beta_1 e^{-\beta_1 t} H(t)$. Rolls and Deco show numerically in [15] that the temporal convolution, leading to a spatio-temporal learning, makes it possible to perform invariant object recognition. Besides, trace learning appears to be the symmetric part of the biological STDP rule that we detail in Section 3.4.

• Damped oscillatory neurons.

Many neurons have an oscillatory behavior. Although we cannot take this into account in a linear model, we can model a neuron by a damped oscillator, which also introduces a new important time-scale in the system. Adding adaptation to neuronal dynamics is an elementary way to implement this idea. This corresponds to modeling a single neuron without inputs by the equivalent formulations

{ d v ϵ d t = l z ϵ , d z ϵ d t = β 2 ( v ϵ z ϵ ) { d v ϵ d t = l v ϵ g 2 , where  g 2 ( t ) = β 2 e β 2 t H ( t ) .

• Dynamic synapses.

The electro-chemical process of synaptic communication is very complicated and non-linear. Yet, one of the features of synaptic communication we can take into account in a linear model is the shape of the post-synaptic potentials. In this section, we consider that each synapse is a linear filter whose finite impulse response (i.e., the post-synaptic potential) has the shape g 3 ( t ) = β 3 e β 3 t H ( t ) . This is a common assumption which, for instance, is at the basis of traditional rate based models; see Chapter 11 of [7].
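The equivalence invoked in the second bullet above is easy to verify: the auxiliary variable z obeying $\dot{z} = \beta(v - z)$ with $z(0) = 0$ is exactly the causal filter $v * g_2$. A minimal numerical check, with arbitrary parameters in the oscillatory regime $4l > \beta$:

```python
import numpy as np

l, beta, dt, T = 12.0, 20.0, 1e-4, 1.0       # 4l > beta: damped oscillations
steps = int(T / dt)
v, z = np.zeros(steps), np.zeros(steps)
v[0] = 1.0                                   # initial kick, no input
for k in range(steps - 1):
    v[k + 1] = v[k] + dt * (-l * z[k])
    z[k + 1] = z[k] + dt * beta * (v[k] - z[k])

t = np.arange(steps) * dt
g2 = beta * np.exp(-beta * t)                # g2(t) = beta exp(-beta t) H(t)
z_conv = np.convolve(v, g2)[:steps] * dt     # v * g2 restricted to [0, T]
print(np.max(np.abs(z - z_conv)))            # small discretization error
```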

For mathematical tractability, we assume in the following that $\beta = \beta_1 = \beta_2 = \beta_3 \in \mathbb{R}^+$, so that $g_\beta = g_1 = g_2 = g_3$, i.e., the time-scales of the neurons, of the synapses and of the learning windows are the same. Although there is a large variety of temporal scales for neurons, synapses, and learning windows, this assumption is not unreasonable: in many brain areas, typical values of these time constants are in the same range (≃10 ms). Yet, investigating the impact of breaking this assumption would be necessary to model biological networks more accurately. This leads to the following system:

$$\begin{cases} dv^\epsilon = \dfrac{1}{\epsilon_1}\Big((W^\epsilon - L)\,(v^\epsilon * g_\beta) + u\big(\tfrac{t}{\epsilon_2}\big)\Big)\,dt + \dfrac{\sigma}{\sqrt{\epsilon_1}}\,dB(t),\\[6pt] \dfrac{dW^\epsilon}{dt} = -\kappa W^\epsilon + (v^\epsilon * g_\beta)\,(v^\epsilon * g_\beta)^{\!\top}, \end{cases}\tag{16}$$

where the notations are the same as in Section 3.2. The behavior of a single neuron is oscillatory damped if $\Delta = \sqrt{1 - \frac{4l}{\beta}}$ is a pure imaginary number, i.e., $4l > \beta$. This is the regime on which we focus. Actually, the Hebbian linear case of Section 3.2 corresponds to $\beta = +\infty$ in this delayed system.

To comply with the hypotheses of Theorem 2.2 (i.e., no dependence on the history of the process), we can add a variable z to the system which takes care of integrating the variable v over an exponential window. This leads to the equivalent system (in the limit $\sigma_z \to 0$)

$$\begin{cases} d\begin{pmatrix} v^\epsilon \\ z^\epsilon \end{pmatrix} = \dfrac{1}{\epsilon_1}\left[\begin{pmatrix} 0 & W - L \\ \beta & -\beta \end{pmatrix}\begin{pmatrix} v^\epsilon \\ z^\epsilon \end{pmatrix} + \begin{pmatrix} u(\tfrac{t}{\epsilon_2}) \\ 0 \end{pmatrix}\right] dt + \begin{pmatrix} \frac{\sigma}{\sqrt{\epsilon_1}}\,dB(t) \\[2pt] \frac{\sigma_z}{\sqrt{\epsilon_1}}\,dB(t) \end{pmatrix},\\[8pt] \dfrac{dW^\epsilon}{dt} = -\kappa W^\epsilon + z^\epsilon\,(z^\epsilon)^{\!\top}. \end{cases}$$

This trick makes it possible to deal with some history-based processes where the dependence on the past is exponential.
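A minimal Euler–Maruyama sketch of this augmented formulation (our own illustration; the input, the dimensions and all parameter values are arbitrary, and we set $\epsilon_1 = \epsilon_2 = 1$ for readability):

```python
import numpy as np

# Euler-Maruyama for the augmented (v, z) system: z integrates v over an
# exponential window (z = v * g_beta), so the pair (v, z) is Markovian even
# though v alone, driven by the convolution, is history-dependent.
rng = np.random.default_rng(1)
n, dt, beta, l, kappa, sigma = 3, 1e-3, 5.0, 2.0, 1.0, 0.05
L = l * np.eye(n)
W = np.zeros((n, n))
v, z = np.zeros(n), np.zeros(n)
u = lambda t: np.sin(2 * np.pi * t + 2 * np.pi * np.arange(n) / n)  # toy input
for step in range(100_000):
    t = step * dt
    dB = np.sqrt(dt) * rng.standard_normal(n)
    v_new = v + dt * ((W - L) @ z + u(t)) + sigma * dB
    z_new = z + dt * beta * (v - z)
    W += dt * (-kappa * W + np.outer(z, z))       # slow learning equation
    v, z = v_new, z_new
print(np.round(W, 3))
```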

It turns out that most of the results of Section 3.2 remain true for system (16), as detailed in the following. The existence of the solution on $\mathbb{R}^+$ is proved in Theorem B.6. The computations show that in the averaged system, the noise term remains identical, whereas the correlation term is to be replaced by $\frac{\mu}{\tau}\,(\bar v * g_\beta)\cdot(\bar v * g_\beta)^{\!\top}$. Besides, Lemma 3.2 can be extended to our delayed system by changing only the temporal filters; see Lemma 3.4. Together with Lemma C.3, this proves the result of Theorem B.8:

$$\frac{\mu}{\tau}\,(\bar v * g_\beta)\cdot(\bar v * g_\beta)^{\!\top} = \frac{u_m^2\,\|v\|_1^2}{l^2}\sum_{k,q=0}^{+\infty}\frac{W^k}{(l/\|v\|_1)^{k}}\;\tilde C^{k,q}\;\frac{(W^{\!\top})^q}{(l/\|v\|_1)^{q}},$$

where

$$\tilde C^{k,q} = \frac{1}{u_m^2\,\tau\,\|v\|_1^{k+q+2}}\,\big(u * v^{(k+1)}\big)\cdot\big(u * v^{(q+1)}\big)^{\!\top},$$

where $v : t \mapsto \frac{l}{\mu\Delta}\big(e^{-\frac{\beta}{2\mu}(1-\Delta)t} - e^{-\frac{\beta}{2\mu}(1+\Delta)t}\big)H(t)$, and $v^{(k)}$ denotes the k-fold convolution of v with itself. Observe that applying Young's inequality for convolutions leads to $\|\tilde C^{k,q}\|_2 \le 1$. Actually, Lemma C.3 shows that $v^{(k)} = v_k : t \mapsto \frac{\sqrt{\pi}\,\beta}{k!}\,e^{-\frac{\beta}{2}t}\,\big(\frac{t}{|\Delta|}\big)^{k+\frac{1}{2}}\,J_{k+\frac{1}{2}}\big(\frac{\beta|\Delta|}{2}\,t\big)\,H(t)$, where $J_n(z)$ is the Bessel function of the first kind. The value of the L1 norm of v is computed in Appendix C.3: it leads to $\|v\|_1 = \coth\big(\frac{\pi}{2|\Delta|}\big)$ if Δ is a pure imaginary number, and $\|v\|_1 = 1$ otherwise.
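These closed-form expressions are easy to check numerically; here is a quick sketch (our own, with arbitrary parameters chosen so that $4l > \beta$, i.e., Δ purely imaginary):

```python
import numpy as np

# Numerical check of ||v||_1 = coth(pi / (2|Delta|)) in the oscillatory regime,
# for v(t) = l/(mu*Delta) * (exp(-b*(1-Delta)*t) - exp(-b*(1+Delta)*t)),
# with b = beta/(2*mu) and Delta = sqrt(1 - 4*l/beta).
l, beta, mu, dt = 2.0, 1.0, 1.0, 1e-4
Delta = np.sqrt(complex(1.0 - 4.0 * l / beta))     # purely imaginary here
b = beta / (2.0 * mu)
t = np.arange(0.0, 200.0, dt)
v = ((l / (mu * Delta)) * (np.exp(-b * (1 - Delta) * t)
                           - np.exp(-b * (1 + Delta) * t))).real
print(np.sum(np.abs(v)) * dt)                      # numerical L1 norm, ~1.88
print(1.0 / np.tanh(np.pi / (2.0 * abs(Delta))))   # coth(pi/(2|Delta|)), ~1.88
```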

Therefore, the averaged system can be rewritten

$$\frac{dW}{dt} = \bar G(W) = -\kappa W + \frac{u_m^2\,\|v\|_1^2}{l^2}\sum_{k,q=0}^{+\infty}\frac{W^k}{(l/\|v\|_1)^{k}}\;\tilde C^{k,q}\;\frac{(W^{\!\top})^q}{(l/\|v\|_1)^{q}} + \frac{\sigma^2}{2}\,(L - W)^{-1}.$$

As before, the existence and uniqueness of a globally attractive equilibrium point is guaranteed if Assumption 3.1 is verified for $p\,\big(\tfrac{1}{2}\|v\|_1^{3} + 1\big)$ in place of p; see Theorem B.9.

Besides, the weakly connected expansion of the equilibrium point that we performed in Section 3.2.4 can also be derived in this case (see Theorem B.10). At the first order, this leads to the equilibrium connectivity

$$W = \frac{\tilde p\, l}{1+\lambda}\Big(\lambda + \|v\|_1^2\,\tilde C^{0,0}\Big) + O\big(\tilde p^2\,\|v\|_1\big).$$

The second order is given in Theorem B.10. The main difference from the Hebbian linear case is the shape of the temporal filters: they exhibit an oscillatory damped behavior if Δ is purely imaginary. The two cases are compared in Figure 4.

Fig. 4. The temporal filter $v : t \mapsto v(t)$ for different parameters. (a) When $\beta = +\infty$, we are in the Hebbian linear case of Appendix B.2. The temporal filters are just decaying exponentials which average the signal over a past window. (b) When the dynamics of the neurons and synapses is oscillatory damped, oscillations appear in the temporal filters. The number of oscillations depends on Δ. If Δ is real, then there are no oscillations, as in the previous case. However, when Δ becomes a pure imaginary number, it creates a few oscillations, which become more numerous as |Δ| increases.

These oscillatory damped filters have the effect of amplifying a particular frequency band of the input signal. As shown in Figure 5, if Δ is a pure imaginary number, then $D^{0,0}$ is the cross-correlation of the band-pass filtered inputs with themselves. This band-pass filter effect can also be observed in the higher-order terms of the weakly connected expansion. This suggests that the biophysical oscillatory behavior of neurons and synapses leads to selecting the corresponding frequency band of the inputs and then performing the same computation as in the Hebbian linear case of the previous section: computing the correlation of the (filtered) inputs.

Fig. 5. The spectral profile $|\widehat{v * v}|(\xi)$ for $\beta = 1$ and $l \in\, ]0, 2]$, where $\widehat{v * v}$ denotes the Fourier transform of $v * v$. When $4l < \beta$, the filter reaches its maximum at the null frequency, but when l increases beyond $\frac{\beta}{4}$, the filter becomes a band-pass filter with long tails decaying as $\frac{1}{\xi^2}$.
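The profile of Figure 5 can be reproduced with a discrete Fourier transform; by the convolution theorem, the transform of $v * v$ is the square of the transform of v. A sketch (our own; the grid and the values of l are arbitrary):

```python
import numpy as np

# Spectral profile |FT(v*v)|(xi) for beta = 1: beyond l = beta/4 the maximum
# of the filter moves away from the null frequency (band-pass behavior).
beta, mu, dt = 1.0, 1.0, 1e-3
t = np.arange(0.0, 400.0, dt)
b = beta / (2.0 * mu)
for l in (0.1, 0.5, 2.0):                          # l > beta/4 = 0.25: band-pass
    Delta = np.sqrt(complex(1.0 - 4.0 * l / beta))
    v = ((l / (mu * Delta)) * (np.exp(-b * (1 - Delta) * t)
                               - np.exp(-b * (1 + Delta) * t))).real
    spec = np.abs(np.fft.rfft(v) * dt) ** 2        # |FT(v)|^2 = |FT(v*v)|
    freqs = np.fft.rfftfreq(t.size, dt)
    print(l, freqs[np.argmax(spec)])               # peak frequency of the filter
```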

3.4 Asymmetric ‘STDP’ learning with correlated noise

Here, we extend the results to temporally asymmetric learning rules and spatially correlated noise. We consider a learning rule similar to spike-timing-dependent plasticity (STDP), which is closer to biological experiments than the previous Hebbian rules. Indeed, it has been observed that the change in the strength of the connection between two neurons depends mainly on the difference between the times of the spikes emitted by each neuron, as shown in Figure 6; see [12].

Fig. 6. This figure represents the modification of the synaptic strength occurring when the post-synaptic neuron emits a spike. The y-axis corresponds to an additive or multiplicative update of the connectivity; for instance, in [28], the update is proportional to $W_{ij}$ for the negative part of the curve. However, we assume an additive update in this paper. The x-axis is the time at which a pre-synaptic spike reaches the synapse, relative to the time of the post-synaptic spike, chosen to be 0.

Assuming that the decay time constants of the positive and negative parts of Figure 6 are equal, we approximate this function by $t \mapsto a_+\, g_\gamma(t) - a_-\, g_\gamma^-(t)$, where $g_\gamma(t) = \gamma e^{-\gamma t} H(t)$ and $g_\gamma^- : t \mapsto g_\gamma(-t)$ denotes its mirror image. Actually, this corresponds to $\dot W^\epsilon_{ij} = -\kappa W^\epsilon_{ij} + a_+\, v_i^\epsilon\,(v_j^\epsilon * g_\gamma) - a_-\,(v_i^\epsilon * g_\gamma)\, v_j^\epsilon$. If the neuron has a spiking behavior, then the term $a_+\, v_i^\epsilon(t)\,(v_j^\epsilon * g_\gamma)(t)$ is significant when the post-synaptic neuron i is spiking at time t; it then counts the number of previous spikes from the pre-synaptic neuron j that might have caused the post-synaptic spike, weighted by an exponentially decaying function. This accounts for the left part of Figure 6. The last term $a_-\,(v_i^\epsilon * g_\gamma)\, v_j^\epsilon$ takes the opposite perspective: it is significant when the pre-synaptic neuron j is spiking, and it counts the number of previous spikes from the post-synaptic neuron i that are not likely to have been caused by the pre-synaptic neuron, weighted by the mirror image of an exponentially decaying function. This accounts for the right part of Figure 6. This leads to the coupled system

$$\begin{cases} dv^\epsilon = \dfrac{1}{\epsilon_1}\Big(f(v^\epsilon) + W^\epsilon v^\epsilon + u\big(\tfrac{t}{\epsilon_2}\big)\Big)\,dt + \dfrac{1}{\sqrt{\epsilon_1}}\,\Sigma\, dB(t),\\[6pt] \dfrac{dW^\epsilon}{dt} = G(v^\epsilon, W^\epsilon) = -\kappa W^\epsilon + a_+\, v^\epsilon\,(v^\epsilon * g_\gamma)^{\!\top} - a_-\,(v^\epsilon * g_\gamma)\,(v^\epsilon)^{\!\top}, \end{cases}\tag{17}$$

where the non-linear intrinsic dynamics of the neurons is represented by f. Indeed, the term $\{a_+\, v^\epsilon(t)\,(v^\epsilon * g_\gamma)^{\!\top}(t)\}_{ij} = a_+\, v_i^\epsilon(t)\,(v^\epsilon * g_\gamma)_j(t)$ is negligible when the neuron is quiet and maximal at the peak of the spikes emitted by neuron i. Therefore, it records the value of the pre-synaptic membrane potential weighted by the function $g_\gamma$ when the post-synaptic neuron spikes. This accounts for the positive part of Figure 6. Similarly, the negative part corresponds to $a_-\,(v^\epsilon * g_\gamma)\,(v^\epsilon)^{\!\top}$.
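A discrete-time sketch of this STDP-like rule (again our own illustration with arbitrary parameters, not the paper's code): the convolution $v * g_\gamma$ is computed with a leaky integrator, and with $a_+ = a_-$ the learned matrix is antisymmetric, as in Figure 7 below.

```python
import numpy as np

# STDP-like rule dW/dt = -kappa*W + a_plus * v (v*g)^T - a_minus * (v*g) v^T,
# with g(t) = gamma * exp(-gamma*t) * H(t). Pre-before-post pairs feed the
# a_plus term; post-before-pre pairs feed the a_minus term.
n, dt, gamma, kappa, a_plus, a_minus = 3, 1e-3, 3.0, 1.0, 1.0, 1.0
W = np.zeros((n, n))
v_filt = np.zeros(n)                               # (v * g_gamma)(t)
for step in range(100_000):
    t = step * dt
    v = np.cos(2 * np.pi * (t - np.arange(n) / n)) # neurons excited in turn
    v_filt += dt * gamma * (v - v_filt)
    W += dt * (-kappa * W + a_plus * np.outer(v, v_filt)
               - a_minus * np.outer(v_filt, v))
print(np.round(W, 3))   # antisymmetric when a_plus == a_minus
```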

Actually, this formulation is valid for any non-linear activity with correlated noise. However, studying the role of STDP in spiking networks is beyond the scope of this paper, since we are only able to obtain explicit results for models with linear activity. Therefore, we assume that the activity is linear while keeping the learning rule as it was derived in the spiking case, i.e., we set $f(v) = -l\,v = -L\,v$ in the system above.

We also use the trick of adding auxiliary variables to get rid of the history dependence. This reads

$$\begin{cases} d\begin{pmatrix} v^\epsilon \\ z^\epsilon \end{pmatrix} = \dfrac{1}{\epsilon_1}\left[\begin{pmatrix} W - L & 0 \\ \gamma & -\gamma \end{pmatrix}\begin{pmatrix} v^\epsilon \\ z^\epsilon \end{pmatrix} + \begin{pmatrix} u(\tfrac{t}{\epsilon_2}) \\ 0 \end{pmatrix}\right] dt + \begin{pmatrix} \frac{1}{\sqrt{\epsilon_1}}\,\Sigma\,dB(t) \\[2pt] \frac{\sigma_z}{\sqrt{\epsilon_1}}\,dB(t) \end{pmatrix},\\[8pt] \dfrac{dW^\epsilon}{dt} = -\kappa W^\epsilon + a_+\, v^\epsilon\,(z^\epsilon)^{\!\top} - a_-\, z^\epsilon\,(v^\epsilon)^{\!\top}. \end{cases}$$

In this framework, the method exposed in Section 3.2 holds with small changes. First, the well-posedness assumption becomes

Assumption 3.2 There exists $p \in\, ]0, 1[$ such that

$$\frac{|a_+| + |a_-|}{p\,(1-p)}\left(\frac{s^2\,\gamma}{2\,(1 + \gamma/l - p)} + \frac{u_m^2}{1-p}\right) < \kappa\, l^3,$$

where $s^2$ is the maximal eigenvalue of $\Sigma\Sigma^{\!\top}$.

Under this assumption, the system is asymptotically well posed in probability (Theorem B.11), and we show that the averaged system is

$$\frac{dW}{dt} = \bar G(W) = -\kappa W + \frac{u_m^2\,(|a_+| + |a_-|)}{l^2}\sum_{k,q=0}^{+\infty}\frac{W^k}{l^k}\; D^{k,q}\;\frac{(W^{\!\top})^q}{l^q} + Q,\tag{18}$$

where we have used Theorem B.12 to expand the correlation term. The noise term Q is equal to $\gamma\, Q_{11}\,(L + \gamma - W)^{-1}$, where $Q_{11}$ is the unique solution of the Lyapunov equation $(W - L)\,Q_{11} + Q_{11}\,(W - L)^{\!\top} + \Sigma\Sigma^{\!\top} = 0$. Lemma D.1 gives a solution for this equation, which leads to $Q = \gamma \sum_{k=0}^{+\infty} W^k\, \Sigma\Sigma^{\!\top}\, (2L - W)^{-(k+1)}\,(L + \gamma - W)^{-1}$. In equation (18), the correlation matrices $D^{k,q}$ are given by

$$D^{k,q} = \frac{1}{u_m^2\,\tau\,(|a_+| + |a_-|)}\;\Big(u * g_{l/\mu}^{(k+1)} * \big(a_+\, g_\gamma - a_-\, g_\gamma^-\big)\Big)\cdot\Big(u * g_{l/\mu}^{(q+1)}\Big)^{\!\top},$$

where $g_{l/\mu}^{(k+1)}$ denotes the (k+1)-fold convolution of $g_{l/\mu}$ with itself.
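The Lyapunov equation above can be solved with standard linear-algebra routines, and the series given by Lemma D.1 can be checked against it. Below is a sketch (our own; W, L and Σ are arbitrary stand-ins, with W symmetric and small enough for the series to converge, and we check only the $Q_{11}$ part):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Solve (W - L) Q11 + Q11 (W - L)^T + Sigma Sigma^T = 0, then compare with
# the series Q11 = sum_k W^k Sigma Sigma^T (2L - W)^{-(k+1)}.
rng = np.random.default_rng(5)
n, l = 4, 2.0
M = 0.1 * rng.standard_normal((n, n))
W = (M + M.T) / 2.0                        # small symmetric connectivity
L = l * np.eye(n)
Sigma = 0.3 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
C = Sigma @ Sigma.T
Q11 = solve_continuous_lyapunov(W - L, -C)        # solves A X + X A^T = -C
series = sum(np.linalg.matrix_power(W, k) @ C
             @ np.linalg.matrix_power(np.linalg.inv(2 * L - W), k + 1)
             for k in range(60))
print(np.max(np.abs(Q11 - series)))               # agreement up to round-off
```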

According to Theorem B.13, the system is also globally asymptotically convergent to a single equilibrium, which we study in the following.

We perform a weakly connected expansion of the equilibrium connectivity of system (18). As shown in Theorem B.14, the first order of the expansion is

$$W = \frac{\tilde p\, l}{1+\lambda}\left(\lambda\,(\alpha_+ - \alpha_-)\,\frac{\Sigma\Sigma^{\!\top}}{d} + D^{0,0}\right) + O(\tilde p^2).$$

Writing $D^{0,0} = \frac{1}{u_m^2\,\tau\,(|a_+| + |a_-|)}\,(S + A)$, where S is symmetric and A is skew-symmetric, leads to

$$S = \frac{a_+ - a_-}{2}\,\big(u * g_{l/\mu} * (g_\gamma + g_\gamma^-)\big)\cdot\big(u * g_{l/\mu}\big)^{\!\top},\qquad A = \frac{a_+ + a_-}{2}\,\big(u * g_{l/\mu} * (g_\gamma - g_\gamma^-)\big)\cdot\big(u * g_{l/\mu}\big)^{\!\top}.$$

According to Lemma C.1, the symmetric part is very similar to the trace learning case in Section 3.3. Applying Lemma C.2 leads to

$$S = (a_+ - a_-)\,\big(u * g_{l/\mu} * g_\gamma\big)\cdot\big(u * g_{l/\mu} * g_\gamma\big)^{\!\top},\qquad A = \frac{a_+ + a_-}{\gamma}\,\Big(\frac{du}{dt} * g_{l/\mu} * g_\gamma\Big)\cdot\big(u * g_{l/\mu} * g_\gamma\big)^{\!\top}.\tag{19}$$

Therefore, the STDP learning rule simply adds an antisymmetric part to the final connectivity while keeping the same symmetric part as in the Hebbian case. Besides, the antisymmetric part corresponds to computing the cross-correlation of the inputs with their time derivative. For higher-order terms, this remains true, although the temporal profiles differ from the first order. These results are in line with previous works underlining the similarity between STDP learning and differential Hebbian learning, where $G(v) \propto \dot v\, v^{\!\top}$; see [29].
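The statement that the skew-symmetric part is a cross-correlation between the inputs and their derivative can be illustrated directly (a sketch of our own; the temporal filters appearing in (19) are omitted for clarity):

```python
import numpy as np

# For periodic inputs u, the matrix (1/T) * int (du/dt) u^T dt is exactly
# skew-symmetric: integrating d/dt(u u^T) = (du/dt) u^T + u (du/dt)^T over
# full periods gives zero, so the matrix equals minus its transpose.
n, dt = 3, 1e-3
t = np.arange(0.0, 30.0, dt)                       # 30 periods of period 1
u = np.stack([np.cos(2 * np.pi * (t - k / n)) for k in range(n)])
du = np.gradient(u, dt, axis=1)
A_num = (du @ u.T) * dt / t[-1]
print(np.round(A_num, 3))                          # skew-symmetric matrix
print(np.max(np.abs(A_num + A_num.T)))             # ~0 up to O(dt) error
```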

Figure 7 shows an example of purely antisymmetric STDP learning, i.e., $a_+ = a_-$. The final connectivity matrix is therefore antisymmetric, as shown in Figure 7(b), and the noise has no impact on learning. The figure shows that the network finally approximates the connectivity given in (19).

Fig. 7. Antisymmetric STDP learning for a network of n = 3 neurons. (a) Temporal evolution of the inputs to the network. The three neurons are successively and periodically excited. The red color corresponds to an excitation of 1 and the blue to no excitation. (b) Equilibrium connectivity. The matrix is antisymmetric and shows that each neuron excites one of its neighbors and is inhibited by the other. (c) Temporal evolution of the connectivity strength. The colors correspond to those of (b). The connectivity of system (17) corresponds to the thin oscillatory curves. The connectivity of the averaged system (18) (with $k, q < 4$) corresponds to the thick lines. Note that each curve corresponds to the superposition of three connections which remain equal throughout learning. The dashed curves correspond to the antisymmetric part in (19). The parameters chosen for this simulation were $l = 10$, $\kappa = 100$, $\gamma = 3$, $a_+ = a_- = 1$, $\tau = 3$, $\sigma = 0.001$, $\mu = 1$, $\epsilon = 0.001$. The system was simulated on the fast time-scale during $T = 10{,}000$ time steps of size $dt = 0.01$.

4 Discussion

We have applied temporal averaging methods to slow-fast systems modeling the learning mechanisms of linear stochastic neural networks. Provided the connectivity remains small, the dynamics of the averaged system turns out to be simple: the connectivity always converges to a unique equilibrium point. We then performed a weakly connected expansion of this final connectivity, whose terms are combinations of the noise covariance and the lagged correlations of the inputs: the first-order term is simply the sum of the noise covariance and the correlation of the inputs.

• As opposed to the former input/output view of neurons, we have considered the membrane potential v to be the solution of a dynamical system. A consequence of this modeling choice is that not only the spatial correlations but also the temporal correlations of the inputs are learned. Because we take the transients into account, the activity never converges; it lives between the representations of the inputs and therefore links concepts together.

The parameter μ is the ratio of the time-scales of the inputs and the activity variable. If μ = 0, the inputs are infinitely slow and the activity variable has enough time to converge towards its equilibrium point. When μ grows, the dynamics becomes more and more transient; it has no time to converge. Therefore, if the inputs are extremely slow, the network only learns the spatial correlations of the inputs; if the inputs are fast, it also learns the temporal correlations. This is illustrated in Figure 3.

This suggests that learning associations between concepts, for instance, learning words in two different languages, may be optimized by presenting the two words to be associated cyclically with a certain frequency. Indeed, increasing the frequency (with a fixed duration of exposure to each word) amounts to increasing μ. The network then learns the temporal correlations of the inputs better and thus strengthens the link between the two concepts.

• According to the resonator-neuron model [30], Section 3.3 suggests that neurons and synapses with a preferred oscillation frequency will preferentially extract the correlation of the inputs filtered by a band-pass filter centered on the intrinsic frequency of the neurons.

Actually, it has been observed that the auditory cortex is tonotopically organized, i.e., the neurons are arranged by frequency [31]. It is traditionally thought that this is achieved thanks to a particular connectivity between the neurons. We exhibit here another mechanism for selecting frequencies, based solely on the parameters of the neurons: a network made of many different neurons whose intrinsic frequencies are uniformly spread is likely to perform a Fourier-like operation, decomposing the signal by frequency.

In particular, this emphasizes the fact that the network does not treat space and time similarly. Roughly speaking, associating several pictures and associating several sounds are therefore two different tasks which involve different mechanisms.

• In this paper, the original hierarchy of the network has been neglected: the network is made of neurons which all receive external inputs. A natural way to include a hierarchical structure (with layers, for instance), without changing the setup of the paper, is therefore to remove the external input to some neurons. However, according to Theorem 3.5 (and its extensions, Theorems B.10 and B.14), these neurons will then be disconnected from the others at the first order (if the noise is spatially uncorrelated). Linear activity implies that the high-level neurons disconnect from the others, which is a problem. However, one can observe that the second-order term in Theorem 3.5 is not null if the noise matrix Σ is not diagonal: it is the noise shared between neurons which recruits the high-level neurons by building connections from and to them.

It is likely that a significant part of the noise in the brain is locally induced, e.g., local perturbations due to blood vessels or local chemical signals. In a way, neurons close to each other share their noise, and it seems reasonable to choose the matrix Σ so that it reflects the biological proximity between neurons. In this sense, Σ specifies the original structure of the network and makes it possible for close-by neurons to recruit each other.

Another idea to address hierarchy in networks would be to replace the synaptic decay term $-\kappa W$ by another homeostatic term [32] which would enforce the emergence of a strong hierarchical structure.

• It is also interesting to observe that most of the noise contribution to the equilibrium connectivity for STDP learning (see Theorem B.14) vanishes if the learning is purely skew-symmetric, i.e., $a_+ = a_-$. In fact, it is only the symmetric part of learning, i.e., the Hebbian mechanism, that writes the noise into the connectivity.

• We have shown that there is a natural analogue, for our linear neurons, of the STDP learning rule for spiking neurons. This asymmetric rule converges to a final connectivity which can be decomposed into symmetric and skew-symmetric parts. The former is similar to the symmetric Hebbian learning case, emphasizing that STDP is nothing more than an asymmetric Hebbian-like learning rule. The skew-symmetric part of the final connectivity is the cross-correlation between the inputs and their derivatives.

This has an interesting interpretation when looking at the spontaneous activity of the network post-learning. Indeed, if we assume that the inputs are generated by an autonomous system $\frac{du}{dt} = \zeta(u)$, then according to the bottom equation in formula (19), the spontaneous activity is governed by

$$dv = \big(\zeta(u)\,\langle u, v\rangle - l\,v\big)\,dt + \Sigma\,dB(t).$$

In a way, the noise term generates random patterns which tend to be forgotten by the network due to the leak term $-lv$. The only drift is due to $\zeta(u)\,\langle u, v\rangle \simeq \mathbb{E}_{\langle v, u\rangle}\big(\zeta(u)\big)$, which is the expectation of the vector field defining the dynamics of the inputs under a measure given by the scalar product between the activity variable and the inputs. In other words, if the activity is close to the inputs at a given time $t \in \mathbb{R}^+$, i.e., $\langle v, u(t)\rangle$ is large, then the activity will evolve in the same direction as this input would have done. The network has modeled the temporal structure of the inputs: the spontaneous activity predicts and replays the inputs the network has learned.
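This replay property can be illustrated with a two-dimensional toy example (entirely our own construction, not from the paper): inputs rotating on a circle satisfy $\frac{du}{dt} = \omega J u$ with $J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, so the learned skew-symmetric connectivity is proportional to ωJ, and the spontaneous activity then rotates the same way.

```python
import numpy as np

# Toy replay: after learning inputs that rotate at angular velocity omega,
# take the skew-symmetric connectivity W = omega * J. The spontaneous activity
#   dv = (W v - l v) dt + sigma dB
# then rotates in the same direction, i.e., replays the learned trajectory.
rng = np.random.default_rng(3)
omega, l, sigma, dt = 2 * np.pi, 0.05, 0.02, 1e-3
W = omega * np.array([[0.0, -1.0], [1.0, 0.0]])    # learned connectivity (toy)
v = np.array([1.0, 0.0])
for step in range(2_000):
    dB = np.sqrt(dt) * rng.standard_normal(2)
    v = v + dt * (W @ v - l * v) + sigma * dB
    if step % 500 == 0:
        print(round(step * dt, 2), np.arctan2(v[1], v[0]))  # phase advances
```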

There are still numerous challenges to pursue in this direction.

First, it seems natural to look for an application of these mathematical methods to more realistic models. The two main limitations of the class of models we studied in Section 3 are that (i) the activity variable is governed by a linear equation and (ii) all the neurons are assumed to be identical. The mathematical analysis in this paper was made possible by the assumption that the neural network has linear dynamics, which does not reflect the intrinsic non-linear behavior of neurons. However, the cornerstone of the application of temporal averaging methods to a learning neural network, namely Property 3.3, is similar to the behavior of Poisson processes [26], which have useful applications for learning neural networks [19,20]. This suggests that the dynamics studied in this paper might be quite similar to that of some non-linear network models. Studying more rigorously the extension of the present theory to non-linear and heterogeneous models is the next step toward a better modeling of biologically plausible neural networks.

Second, we have shown that the equilibrium connectivity is made of a symmetric and an antisymmetric term. In terms of statistical analysis of data sets, the symmetric part corresponds to classical correlation matrices. The antisymmetric part, however, suggests a way to improve the purely correlation-based approach used in many statistical analyses (e.g., PCA) toward a causality-oriented framework which might be better suited to dealing with dynamical data.

Appendix A: Stochastic and periodic averaging

A.1 Long-time behavior of inhomogeneous Markov processes

In order to construct the averaged vector field $\bar G_\mu(w)$ in the time-scale matching case ($0 < \mu < \infty$), one needs to understand properly the long-time behavior of the rescaled inhomogeneous frozen process

$$dv = F(v, w_0, \mu t)\,dt + \Sigma(v, w_0)\,dB(t).\tag{20}$$

Under regularity and dissipativity conditions, [5] proves the following general result about the asymptotic behavior of the solution of

$$dX_t = b(X_t, t)\,dt + \sigma(X_t, t)\,dB(t),\quad t > s,\ X_s = x,$$

where $t \mapsto b(x, t)$ and $t \mapsto \sigma(x, t)$ are τ-periodic.

The first point of the following theorem gives the definition of evolution systems of measures, which generalize the notion of invariant measure to inhomogeneous Markov processes. The exponential estimate in point 2 of the theorem is a key step in proving the averaging principle of Theorem 2.2.

Theorem A.1 ([5])

1. There exists a unique τ-periodic family of probability measures $\{\mu(s, \cdot),\ s \in \mathbb{R}\}$ such that for all continuous and bounded functions ϕ,

$$\int_{x \in \mathbb{R}^p} \mathbb{E}\big[\phi\big(X_t^{s,x}\big)\big]\,\mu(s, dx) = \int_{x \in \mathbb{R}^p} \phi(x)\,\mu(t, dx).$$

Such a family is called an evolution system of measures.

2. Furthermore, under a stronger dissipativity condition, the convergence of the law of X to μ is exponentially fast. More precisely, for any $r \in (1, +\infty)$, there exist $M > 0$ and $\omega < 0$ such that for all ϕ in $L^r(\mathbb{R}^p, \mu(t,\cdot))$, the space of r-integrable functions with respect to $\mu(t,\cdot)$,

$$\int_{x \in \mathbb{R}^p} \left|\,\mathbb{E}\big[\phi\big(X_t^{s,x}\big)\big] - \int_{x' \in \mathbb{R}^p} \phi(x')\,\mu(t, dx')\,\right|^r \mu(s, dx) \;\le\; M\, e^{\omega(t-s)} \int_{x \in \mathbb{R}^p} \big|\phi(x)\big|^r\,\mu(t, dx).$$
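For intuition, a scalar Ornstein–Uhlenbeck process with 1-periodic drift provides a concrete evolution system of measures; the sketch below (our own) checks empirically that, after the exponential transient, the statistics of $X_t$ taken one period apart coincide:

```python
import numpy as np

# dX = (sin(2*pi*t) - X) dt + dB: the law of X_t converges exponentially fast
# to a 1-periodic family of Gaussian measures mu(t, .).
rng = np.random.default_rng(4)
dt, n_paths = 1e-3, 20_000
X = np.ones(n_paths)                       # arbitrary initial law
for step in range(int(6.0 / dt)):
    t = step * dt
    X += dt * (np.sin(2 * np.pi * t) - X) \
         + np.sqrt(dt) * rng.standard_normal(n_paths)
    if step in (4250, 5250):               # t = 4.25 and 5.25, one period apart
        print(round(t, 3), X.mean(), X.std())   # approximately equal statistics
```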

A.2 Proof of Property 2.3

Property A.2 If there exists a smooth subset ℰ of $\mathbb{R}^q$ such that

1. the functions F, G, Σ satisfy Assumptions 2.1-2.3 restricted to $\mathbb{R}^p \times \mathcal{E}$;

2. ℰ is invariant under the flow of $\bar G_\mu$, as defined in (7);

then for any initial condition $w_0 \in \mathcal{E}$, system (4) is asymptotically well posed in probability and $w^\epsilon$ satisfies the conclusion of Theorem 2.2.

Proof The idea of the proof is to truncate the original system, replacing G by a smooth truncation which coincides with G on ℰ and is close to 0 outside ℰ. More precisely, for $\beta > 0$, we introduce a regular (locally Lipschitz) function $\psi_\beta : \mathbb{R}^q \to [0, 1]$ such that $\psi_\beta(w) = 0$ if $w \notin \mathcal{E}$ and $\lim_{\beta \to 0} \psi_\beta(w) = 1$ if $w \in \mathcal{E} \setminus \partial\mathcal{E}$. We define

$$\tilde G_\beta(v, w) = G(v, w)\,\psi_\beta(w).$$

Then, we introduce $(\tilde v^{\epsilon,\beta}, \tilde w^{\epsilon,\beta})$, the solution of the auxiliary system

$$\begin{aligned} d\tilde v^{\epsilon,\beta} &= \frac{1}{\epsilon_1}\, F\Big(\tilde v^{\epsilon,\beta},\, \tilde w^{\epsilon,\beta},\, \frac{t}{\epsilon_2}\Big)\, dt + \frac{1}{\sqrt{\epsilon_1}}\,\Sigma\big(\tilde v^{\epsilon,\beta},\, \tilde w^{\epsilon,\beta}\big)\, dB(t),\\ d\tilde w^{\epsilon,\beta} &= \tilde G_\beta\big(\tilde v^{\epsilon,\beta},\, \tilde w^{\epsilon,\beta}\big)\, dt \end{aligned}$$

with the same initial condition as $(v^\epsilon, w^\epsilon)$.

Let $T, \delta, \eta > 0$ be three positive reals. Let us introduce a few more notations. We will need to consider a subset of ℰ defined by

$$\mathcal{E}_\beta := \big\{ w \in \mathcal{E};\ \psi_\beta(w) \ge 1 - \sqrt{\eta}\,\delta \big\}.$$

We also introduce the following stopping times:

$$\tau_\epsilon := \inf\{t \ge 0;\ w_t^\epsilon \notin \mathcal{E}\},\qquad \tau_\epsilon^\beta := \inf\{t \ge 0;\ w_t^\epsilon \notin \mathcal{E}_\beta\},\qquad \tilde\tau_\epsilon := \inf\{t \ge 0;\ \tilde w_t^{\epsilon,\beta} \notin \mathcal{E}\},\qquad \tilde\tau_\epsilon^\beta := \inf\{t \ge 0;\ \tilde w_t^{\epsilon,\beta} \notin \mathcal{E}_\beta\}.$$

Finally, we define $T_\epsilon := \min(T, \tau_\epsilon, \tilde\tau_\epsilon)$ and $T_\epsilon^\beta := \min(T, \tau_\epsilon^\beta, \tilde\tau_\epsilon^\beta)$.

Let us remark at this point that in order to prove that $\mathbb{P}[\tau_\epsilon \ge T] \to 1$ (which is our aim), it is sufficient to work with the bounded stopping time $\min(T, \tau_\epsilon)$, since $\mathbb{P}[\tau_\epsilon \ge T] = \mathbb{P}[\min(T, \tau_\epsilon) \ge T]$. In other words, the realizations of $w^\epsilon$ which stay longer than T inside ℰ are not problematic. Therefore, we introduce $\hat\tau_\epsilon := \min(T, \tau_\epsilon)$.

Our first claim is that on finite time intervals $[0, T]$, $\tilde w^{\epsilon,\beta}$ is a good approximation of $w^\epsilon$ inside ℰ, as long as one chooses β sufficiently small. To prove this claim, we proceed in two steps, first working inside $\mathcal{E}_\beta$ and then in $\mathcal{E} \setminus \mathcal{E}_\beta$:

1. For any $\beta > 0$, one controls the difference between $w^\epsilon$ and $\tilde w^{\epsilon,\beta}$ on $\mathcal{E}_\beta$, since one controls the difference between the drifts. By an application of Lemma A.3 below (we need here the moment Assumption 2.3(i)), there exists a constant C (which may depend on T and β) such that

$$\mathbb{E}\Big[\sup_{0 \le t \le T_\epsilon^\beta}\big\| w_t^\epsilon - \tilde w_t^{\epsilon,\beta}\big\|^2\Big] \le C\,\eta\,\delta^2.\tag{21}$$

We conclude by an application of the Markov inequality, implying

$$\mathbb{P}\Big[\sup_{0 \le t \le T_\epsilon^\beta}\big\| w_t^\epsilon - \tilde w_t^{\epsilon,\beta}\big\| > \delta\Big] \le \frac{1}{\delta^2}\,\mathbb{E}\Big[\sup_{0 \le t \le T_\epsilon^\beta}\big\| w_t^\epsilon - \tilde w_t^{\epsilon,\beta}\big\|^2\Big] \le C\,\eta.$$